Engineering, Technology & Applied Science Research, Vol. 9, No. 2, 2019, 3955-3958
www.etasr.com

Mechanical Performance of Honeycomb Sandwich Structures Using Three-Point Bend Test

Tayyab Subhani
Department of Mechanical Engineering, College of Engineering, University of Hail, Hail, Saudi Arabia
ta.subhani@uoh.edu.sa

Abstract—In this study, honeycomb sandwich structures were prepared and tested. The facesheets of the sandwich structures were manufactured from carbon fiber epoxy matrix composites, while Nomex® honeycomb was used as the core material. An epoxy-based adhesive film was used to bond the composite facesheets to the honeycomb core. Four curing temperatures ranging from 100°C to 130°C were applied with curing times of 2 h and 3 h. A three-point bend test was performed to investigate the mechanical performance of the honeycomb sandwich structures and thus optimize the curing parameters. It was revealed that a curing temperature of 110°C combined with a curing time of 2 h offered the optimum mechanical performance together with low damage in the honeycomb core and facesheets.

Keywords—honeycomb sandwich; mechanical; three-point bend test; epoxy; carbon fiber

I. INTRODUCTION

Honeycomb sandwich structures are widely used in aerospace structural applications. A honeycomb sandwich structure comprises two stiff, strong skins or facesheets joined by a honeycomb core. Such a structural configuration offers light weight with high stiffness; good fatigue strength and thermal insulation are additional attributes of honeycomb sandwich structures [1]. In sandwich structures, the facesheets carry the bending stresses applied to the structure, while the honeycomb core bears the shear loads and increases the stiffness of the sandwich structure while holding the two facesheets apart.
Moreover, increasing the thickness of the honeycomb core increases the stiffness and flexural strength of the sandwich structure. In addition to honeycomb, foam and wood are also used as core materials in sandwich structures. However, owing to its stiffness, crushing strength, and fatigue properties, the honeycomb core enjoys an edge over other core structures [2]. Honeycomb cores are made of polymeric, metallic, and ceramic materials such as aluminum, carbon, fiberglass, alumina, and Kevlar. Facesheets are made of steel, aluminum, and composite materials. For bonding the facesheets to the honeycomb core, adhesives, fasteners, and adhesive films are used. The properties of the adhesive material play a vital role in the overall mechanical performance of honeycomb structures [3]. The adhesive material firmly attaches the facesheets to the honeycomb core in order to transfer load effectively from one facesheet to the other through the core [4]. Therefore, good bonding between the facesheets and the honeycomb core defines the load-bearing capacity of a sandwich structure. Upon application of a bending load, shear stresses develop at the interface, i.e. at the adhesive joint between the core and the facesheet. These shear stresses may debond the two components of the sandwich structure, paving the way to structural failure. Therefore, the adhesive joint formed at the interface significantly influences the mechanical performance of a honeycomb sandwich structure [3]. Evaluation of mechanical properties is a prime requisite before a sandwich structure enters actual service [3]. The three-point bend test is usually carried out to investigate the shear and flexural rigidities of sandwich structures. In particular, properties such as facing bending strength, core shear strength, core shear modulus, and transverse shear rigidity can be obtained from the three-point bend test.
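The core-thickness dependence noted above follows from classical sandwich beam theory: with thin, stiff facesheets, the flexural rigidity grows roughly with the square of the distance between the facesheet mid-planes. A minimal sketch of the standard thin-face approximation (the facesheet modulus and dimensions below are illustrative values, not measurements from this study):

```python
def sandwich_flexural_rigidity(E_f, t_f, c, b):
    """Thin-face approximation of sandwich flexural rigidity:
    D ~ E_f * t_f * d^2 * b / 2, where d = c + t_f is the distance
    between facesheet mid-planes (units: N*mm^2 if inputs are MPa/mm)."""
    d = c + t_f
    return E_f * t_f * d ** 2 * b / 2.0

# Roughly doubling the core thickness quadruples the rigidity:
D_thin = sandwich_flexural_rigidity(E_f=60e3, t_f=1.0, c=20.0, b=76.2)
D_thick = sandwich_flexural_rigidity(E_f=60e3, t_f=1.0, c=41.0, b=76.2)
```

The quadratic dependence on core thickness is why a lightweight core is such an efficient way to stiffen a panel.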
In [4], a numerical investigation of the three-point bend test was performed on honeycomb sandwich structures with aluminum facesheets. Numerical and experimental studies of the bending behavior of honeycomb sandwich panels with ceramic facesheets are presented in [5, 6]. Air-blast loading of sandwich structures is another means of mechanical characterization [7]. In addition to the mechanical performance evaluation, the failure mechanism of sandwich structures is critical for assessing how damage will develop [8]. The crushing energy absorption of sandwich structures during damage was evaluated in [9]. Understanding interfacial fracture in sandwich structures is yet another prime requirement [10]; as a result, a variety of tests have been devised to investigate core-facesheet adhesion [1, 2]. In the current study, honeycomb sandwich structures were tested under three-point bending to explore their mechanical properties after bonding the facesheets and the honeycomb core with an epoxy-based adhesive. The facesheets were made of carbon fiber epoxy matrix composite, while Nomex® honeycomb was used as the core. A compression bonding method was used to cure the adhesive film at four different temperatures and two curing times. Based on the acquired data, the curing parameters of the adhesive film were optimized.

Corresponding author: Tayyab Subhani

II. EXPERIMENT DESCRIPTION

The honeycomb sandwich structures were prepared using carbon fiber epoxy matrix composite facesheets of 1 mm thickness, a Nomex® honeycomb core of 20 mm thickness and 5.5 mm cell size, and an epoxy-based adhesive film. The adhesive film was procured from CNME International, China, under the trade name CNMEHP-272D, with a thickness of 0.34-0.38 mm.
The hexagonal phenolic-impregnated Nomex® honeycomb core was purchased from Armicore Composite Company, China. The composite facesheets were prepared in-house; for details see [11]. For the manufacture of the sandwich structures, the adhesive film was applied to the rough surfaces of the two composite facesheets, which were then bonded to the honeycomb core. The sandwich panel was gripped between metal plates and loaded in compression. In total, 8 sandwich panels were prepared, cured at 4 temperatures (100°C, 110°C, 120°C, and 130°C) and 2 curing times (2 h and 3 h). Specimens cut from the 8 sandwich panels were tested under three-point bending according to ASTM standard C393/C393M. Rectangular specimens of 203.2 mm length, 76.2 mm width, and ~22 mm thickness were used. Four mechanical properties, namely (a) core ultimate shear stress, (b) facing bending stress, (c) core shear rigidity, and (d) core shear modulus, were evaluated by the three-point bend test in order to assess the mechanical performance of the honeycomb sandwich structures.

III. RESULTS AND DISCUSSION

The load-displacement curves are shown in Figures 1 and 3, and the values of the mechanical properties, including facing bending strength, core shear strength, core shear modulus, and transverse shear rigidity, are shown in Figures 2 and 4.

A. Two-Hour Curing Time

Figure 1 displays the load-displacement curves of the honeycomb structures cured from 100°C to 130°C for 2 h under three-point bending. An increase in peak load was observed when the curing temperature increased from 100°C to 110°C. This increase in peak load enhanced the facing bending strength from 48.6±2.4 MPa to 57.2±3.1 MPa, the core shear strength from 0.62±0.03 MPa to 0.73±0.05 MPa, the core shear modulus from 37.0±5.3 MPa to 39.1±4.1 MPa, and the transverse shear rigidity from 60.1±3.5 kN to 63.1±2.4 kN, as shown in Figure 2.
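The core shear stress and facing bending stress reported here follow from the measured peak load via the standard ASTM C393 relations for 3-point loading. A minimal sketch (the support span S = 150 mm is an assumed value, not reported in the paper; P is an illustrative peak load):

```python
def core_shear_stress(P, d, c, b):
    """ASTM C393 core shear stress: tau = P / ((d + c) * b),
    with P the load (N), d the sandwich thickness, c the core
    thickness, and b the width (mm); result in MPa."""
    return P / ((d + c) * b)

def facing_bending_stress(P, S, t, d, c, b):
    """ASTM C393 facing stress for 3-point loading:
    sigma = P * S / (2 * t * (d + c) * b), with S the support
    span and t the facing thickness (mm); result in MPa."""
    return P * S / (2 * t * (d + c) * b)

# Dimensions from the paper: d ~ 22 mm, c = 20 mm, b = 76.2 mm,
# t = 1 mm facesheets; P = 2200 N and S = 150 mm are assumptions.
tau = core_shear_stress(P=2200.0, d=22.0, c=20.0, b=76.2)
sigma = facing_bending_stress(P=2200.0, S=150.0, t=1.0, d=22.0, c=20.0, b=76.2)
```

With these inputs the sketch yields a core shear stress of roughly 0.7 MPa and a facing stress of roughly 50 MPa, of the same order as the values reported in Figures 2 and 4.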
However, further increase of the temperature to 120°C and 130°C lowered the peak load to 1.89 kN and 1.80 kN respectively, and simultaneously lowered the mechanical properties: facing bending strength from 46.2±1.8 MPa to 43.9±0.8 MPa, core shear strength from 0.59±0.01 MPa to 0.56±0.02 MPa, core shear modulus from 23.3±2.1 MPa to 19.4±2.7 MPa, and transverse shear rigidity from 38.5±1.4 kN to 32.9±1.7 kN. The decrease in mechanical properties may be caused by distortion of the fillet, leading to an unsymmetrical shape, and by porosity created by the evaporation of volatile contents in the adhesive film [12, 13].

Fig. 1. Load-displacement curves of honeycomb sandwich structures cured at 100°C, 110°C, 120°C, and 130°C for 2 h.

Fig. 2. (a) Facing bending strength, (b) core shear strength, (c) core shear modulus, and (d) transverse shear rigidity of honeycomb sandwich structures cured at 100°C, 110°C, 120°C, and 130°C for 2 h.

B. Three-Hour Curing Time

The three-point bend test results of the honeycomb sandwich structures cured at 100°C, 110°C, 120°C, and 130°C for 3 h, shown in Figures 3 and 4, followed the same trend as those cured for 2 h. Peak loads were observed at 100°C and 110°C, producing facing bending strengths of 48.8±2.3 MPa and 54.7±3.5 MPa, core shear strengths of 0.62±0.03 MPa and 0.69±0.07 MPa, core shear moduli of 33.2±4.3 MPa and 38.1±5.6 MPa, and transverse shear rigidities of 56.2±2.6 kN and 64.0±4.2 kN, respectively. Further increase of the curing temperature to 120°C and 130°C reduced the peak loads and thus the mechanical properties of the composite honeycomb sandwich panels.
The facing bending strength was reduced from 51.0±2.6 MPa to 33.2±1.1 MPa, the core shear strength from 0.65±0.04 MPa to 0.42±0.01 MPa, the core shear modulus from 29.4±2.7 MPa to 17.3±0.8 MPa, and the transverse shear rigidity from 49.4±2.0 kN to 29.8±1.1 kN, as shown in Figure 4.

Fig. 3. Load-displacement curves of honeycomb sandwich structures cured at 100°C, 110°C, 120°C, and 130°C for 3 h.

C. Cross-Sectional Photographs

The cross-sectional views of the sandwich structures that showed the maximum and minimum mechanical properties after the three-point bend test are shown in Figures 5 and 6. The sandwich structure cured at 130°C for 3 h (Figure 5(b)) showed the minimum mechanical properties, which may be caused by the presence of unsymmetrical fillets or poor interfacial adhesion [11]. The magnified cross-sectional view of this specimen after the three-point bend test (Figure 6(b)) shows local indentation as well as shear deformation failure of the honeycomb core spread over a wide area. In contrast, the sandwich structure processed at 110°C for 2 h offered the maximum mechanical properties (Figure 5(a)), with the magnified view of this specimen (Figure 6(a)) indicating a localized damage zone. These two sandwich structures were intentionally selected because one showed the maximum values while the other displayed the minimum. The cross-sectional photographs are presented to reveal the effect of variations in the curing parameters of the adhesive film on the failure modes of the honeycomb cores. The upper facesheet failures are clearly visible in the middle of the sandwich structures, which is due to the loading span; the upper facesheets were under compressive load. Upon reaching the peak load, the load drops continuously along with the crushing of the honeycomb core (Figures 1 and 3). This process continues until the failure of the upper facesheet.
The load decreases because it exceeds the compression strength of the structure, which is a combination of the honeycomb core strength, the adhesive film strength, and the stiffness of the facesheets [12].

Fig. 4. (a) Facing bending strength, (b) core shear strength, (c) core shear modulus, and (d) transverse shear rigidity of honeycomb sandwich structures cured at 100°C, 110°C, 120°C, and 130°C for 3 h.

Local indentations on the upper facesheets are also clearly visible, which may be due to localized core compression (Figure 5). As discussed above, the magnified images of the same specimens (Figure 6) show that the structure with maximum mechanical properties exhibited localized damage in the honeycomb core, while the structure with minimum mechanical properties exhibited large-scale core damage. This indicates that good bonding between the facesheet and the honeycomb core promotes the mechanical properties even though the compressive strength of the honeycomb core itself is unchanged.

Fig. 5. Cross-sectional views of sandwich structures cured at (a) 110°C for 2 h and (b) 130°C for 3 h.

Fig. 6. Magnified cross-sectional views of sandwich structures cured at (a) 110°C for 2 h and (b) 130°C for 3 h.

D. Effect of Curing Temperature and Time

The experimental data and photographic observations indicate that both the curing temperature and the curing time of the adhesive film affect the mechanical performance of honeycomb sandwich structures. The optimized curing parameters for the composite honeycomb structures were found to be 110°C for 2 h, at which the sandwich panels showed maximum mechanical properties, which may be attributed to the formation of symmetrical fillets [13].
At the optimized curing parameters, the honeycomb structures showed the maximum peak loads prior to core deformation and upper-facesheet failure, and the related mechanical properties were also higher than those of the other structures. At these parameters, the adhesive film forms a suitable fillet between the honeycomb core and the facesheets, which in turn transfers load effectively from one facesheet to the other through the core. It should be noted that increasing the temperature makes the adhesive film less viscous and thus helps it flow toward the cell walls of the core; at the same time, however, the increased temperature accelerates the curing of the adhesive film. As a result, the mechanical properties of the honeycomb sandwich structures increased as the temperature rose from 100°C to 110°C, with a symmetrical fillet forming, while a further rise in temperature resulted in premature curing of the film without the formation of a uniform fillet.

IV. CONCLUSION

Honeycomb sandwich structures were prepared using carbon fiber epoxy matrix composite facesheets, Nomex® honeycomb core, and an epoxy-based adhesive film. A compression technique was used to prepare the sandwich structures, and the curing parameters of the adhesive film were optimized. A temperature of 110°C and a curing time of 2 h exhibited the optimum mechanical performance, i.e. maximum load-bearing capability and associated mechanical properties. The flowability of the adhesive film and the formation of adequate adhesive fillets are the likely reasons behind the increased mechanical properties at the optimized parameters. The localized damage area in the optimized honeycomb structure was also smaller than that observed in the structures showing poor mechanical performance.

REFERENCES

[1] J. Avery, B. V. Sankar, "Compressive failure of sandwich beams with debonded face-sheets", Journal of Composite Materials, Vol. 34, No. 14, pp.
1176-1199, 2000
[2] W. J. Cantwell, P. Davies, "A test technique for assessing core-skin adhesion in composite sandwich structures", Journal of Materials Science Letters, Vol. 13, No. 3, pp. 203-205, 1994
[3] A. Johnson, G. D. Sims, "Mechanical properties and design of sandwich materials", Composites, Vol. 17, No. 4, pp. 321-328, 1986
[4] M. Giglio, A. Gilioli, A. Manes, "Numerical investigation of a three point bending test on sandwich panels with aluminum skins and Nomex™ honeycomb core", Computational Materials Science, Vol. 56, pp. 69-78, 2012
[5] Z. Wang, Z. Li, W. Xiong, "Numerical study on three-point bending behavior of honeycomb sandwich with ceramic tile", Composites Part B: Engineering, Vol. 167, pp. 63-70, 2019
[6] Z. Wang, Z. Li, W. Xiong, "Experimental investigation on bending behavior of honeycomb sandwich panel with ceramic tile face-sheet", Composites Part B: Engineering, Vol. 164, pp. 280-286, 2019
[7] G. S. Langdon, C. J. von Klemperer, B. K. Rowland, G. N. Nurick, "The response of sandwich structures with composite face sheets and polymer foam cores to air-blast loading: preliminary experiments", Engineering Structures, Vol. 36, pp. 104-112, 2012
[8] H. Fan, Q. Zhou, W. Yang, Z. Jingjing, "An experimental study on the failure mechanisms of woven textile sandwich panels under quasi-static loading", Composites Part B: Engineering, Vol. 41, No. 8, pp. 686-692, 2010
[9] O. Velecela, M. S. Found, C. Soutis, "Crushing energy absorption of GFRP sandwich panels and corresponding monolithic laminates", Composites Part A: Applied Science and Manufacturing, Vol. 38, No. 4, pp. 1149-1158, 2007
[10] W. J. Cantwell, R. Scudamore, J. Ratcliffe, P. Davies, "Interfacial fracture in sandwich laminates", Composites Science and Technology, Vol. 59, No. 14, pp. 2079-2085, 1999
[11] U. Farooq, M. S. Ahmad, S. A. Rakha, N. Ali, A. A. Khurram, T.
Subhani, "Interfacial mechanical performance of composite honeycomb sandwich panels for aerospace applications", Arabian Journal for Science and Engineering, Vol. 42, No. 5, pp. 1775-1782, 2017
[12] R. Okada, M. T. Kortschot, "The role of the resin fillet in the delamination of honeycomb sandwich structures", Composites Science and Technology, Vol. 62, No. 14, pp. 1811-1819, 2002
[13] J. Rion, Y. Leterrier, J. A. E. Manson, "Prediction of the adhesive fillet size for skin to honeycomb core bonding in ultra-light sandwich structures", Composites Part A: Applied Science and Manufacturing, Vol. 39, No. 9, pp. 1547-1555, 2008

Engineering, Technology & Applied Science Research, Vol. 9, No. 5, 2019, 4689-4694
www.etasr.com

Intelligent Control of a Photovoltaic Pumping System

Abdessamia Elgharbi
Physics Department, University of Tunis El Manar, Tunis, Tunisia
abdogharbi@yahoo.fr

Dhafer Mezghani
Physics Department, University of Tunis El Manar, Tunis, Tunisia
dhafer.mezghanni@gmail.com

Abdelkader Mami
Physics Department, University of Tunis El Manar, Tunis, Tunisia
abdelkader.mami@fst.utm.tn

Abstract—This paper presents the application of the adaptive neuro-fuzzy inference system (ANFIS) to track the maximum power of a photovoltaic generator that feeds a motor-pump unit through a pulse width modulation (PWM) inverter powered by a single-ended primary inductance converter (SEPIC) installed in the laboratory. The ANFIS control is trained at different temperatures and irradiances, and the maximum power point tracking system automatically varies the duty cycle of the SEPIC converter. The performance of the MPPT controller is tested in simulations in MATLAB/Simulink.

Keywords—PV pumping system; MPPT; SEPIC; ANFIS

I. INTRODUCTION

In photovoltaic (PV) systems, sunlight is converted into DC electricity.
The PV module delivers a low voltage, so this voltage must be stepped up using a SEPIC converter. The MPPT of the PV system using the SEPIC converter is controlled by intelligent controllers, in our case the developed adaptive neuro-fuzzy inference system (ANFIS) model. This paper addresses the above problem and presents the ANFIS method used as the MPPT algorithm, among the many presented in previous papers [1]. Many papers have compared ANFIS with other artificial intelligence (AI) methods such as neural networks (NNs) and fuzzy logic (FL), and concluded that ANFIS is the most suitable for use in uncertain systems [2] and for MPP tracking.

II. PHOTOVOLTAIC PUMPING SYSTEM

Photovoltaic conversion is produced by exposing the solar cell to sunlight. The received energy causes disordered movement of the electrons within the material. The current is collected by metal contacts (electrodes); if these electrodes are connected to an external circuit, a direct current flows. In a PV generator, a number of solar cells are assembled to form a PV module. In our case, we combined four Kaneka GSA 60 (60 W) PV panels connected in series, delivering enough power for the system, a single-ended primary inductance converter (SEPIC), and an Ebara motor-pump, as shown in Figure 1. The induction motor-pump is supplied from the PV generator, whose volt-ampere characteristics depend nonlinearly on the solar insolation, temperature variations, and the current drawn by the motor-pump [3]. The PV generator behaves as a current source shunted by a junction diode, if we neglect physical phenomena of the PV cell such as contact resistance, current lost at the photocell edges, and the age of the cells [4-6]. The PV panel operation can be described using a complete physical mathematical model, as shown in Figure 2 and described by (1)-(3) [7-10].

Fig. 1. Synoptic diagram of the pumping structure.

Fig. 2.
Electrical scheme of the PV module.

\begin{align}
I_{pv} &= I_{ph} - I_s\left[\exp\!\left(\frac{q\,(V_{pv} + R_s I_{pv})}{kT}\right) - 1\right] - \frac{V_{pv} + R_s I_{pv}}{R_{sh}} \tag{1}\\
I_{ph} &= \frac{E_c}{E_{cref}}\left[I_{cc} + K_{isc}\,(T - T_{ref})\right] \tag{2}\\
T &= T_a + \frac{E_c\,(NOCT - 20)}{800} \tag{3}
\end{align}

where Ec is the solar illumination (W/m²), Ecref is the reference illumination (1000 W/m²), Ta is the ambient temperature (°C), Tref is the reference ambient temperature (25°C), T is the surface temperature of the PV generator (°C or K), Icc is the total short-circuit current at the reference state (A), Kisc is the short-circuit temperature current coefficient (0.0017 A/°C), Is is the reverse saturation current of the PV generator (A), k is the Boltzmann constant (1.38×10⁻²³ J/K), q is the electron charge (1.6×10⁻¹⁹ C), and NOCT is the nominal operating cell temperature (45°C).

III. SEPIC

The SEPIC exchanges energy between the capacitors and inductors in order to convert the voltage from input to output. The amount of energy exchanged is controlled by a switch, typically a transistor such as a MOSFET. The output voltage depends on the duty cycle applied to the switch [3]; it can be higher or lower than the input voltage.

Corresponding author: Abdessamia Elgharbi

Fig. 3. SEPIC converter topology.

Applying Kirchhoff's voltage law in continuous conduction mode, the duty cycle is given by [15]:

\begin{equation}
D = \frac{V_{out} + V_d}{V_{in} + V_{out} + V_d} \tag{4}
\end{equation}

where Vin is the input voltage, Vout is the output voltage, and Vd is the threshold voltage of the diode. For the desired output, the variation of the duty cycle depends on the input voltage. The parameters of the SEPIC converter used in this work were developed in a previous study [12].

TABLE I.
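The irradiance and temperature relations (2) and (3) can be checked numerically; a minimal sketch using the constants listed above (the short-circuit current Icc = 3.5 A is an illustrative value, not a datasheet figure for the Kaneka GSA 60):

```python
def cell_temperature(Ta, Ec, NOCT=45.0):
    """Surface temperature per (3): T = Ta + Ec * (NOCT - 20) / 800."""
    return Ta + Ec * (NOCT - 20.0) / 800.0

def photo_current(Ec, T, Icc, Kisc=0.0017, Ecref=1000.0, Tref=25.0):
    """Photo-current per (2): Iph = (Ec/Ecref) * (Icc + Kisc*(T - Tref))."""
    return (Ec / Ecref) * (Icc + Kisc * (T - Tref))

T = cell_temperature(Ta=25.0, Ec=1000.0)      # 25 + 1000*25/800 = 56.25 °C
Iph = photo_current(Ec=1000.0, T=T, Icc=3.5)  # slightly above Icc at 56.25 °C
```

The sketch shows the two couplings the ANFIS inputs must capture: irradiance raises both the cell temperature and the photo-current, and the temperature rise in turn feeds back into the current.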
SEPIC PARAMETERS

  Parameter                       Value
  Input voltage, Vin              300 V
  Output voltage, Vout            310 V
  Output current, Io              1.19 A
  Switching frequency, fsw        5 kHz
  Duty cycle, D                   0.508
  Inductor L1                     60 mH
  Inductor L2                     60 mH
  Capacitor Cs                    100 µF
  Capacitor Cin                   2000 µF
  Capacitor Cout                  2000 µF
  Inductor current ripple, ΔIL    0.491 A

IV. MPPT CONTROL USING THE ANFIS ALGORITHM

In this part, a robust MPPT control using ANFIS, developed in MATLAB/Simulink, is presented. It consists of the PV generator with MPPT control, a SEPIC converter, a voltage source inverter controlled by a pulse width modulation strategy [13], and a motor-pump unit, as shown in the Simulink model in Figure 4.

Fig. 4. Simulink model of the proposed PV pumping system.

The construction and the layers of the ANFIS cognitive method are described in [12, 13]. In the proposed method, the photovoltaic system uses an MPPT control based on the ANFIS algorithm, which automatically varies the duty cycle of the SEPIC in order to generate the voltage required to extract maximum power. The inputs to the ANFIS are irradiance and temperature, and the output is the optimal voltage of the PV generator, Vpopt. The two voltages are compared, and the error is given to a proportional-integral (PI) controller to generate control signals. The control signal generated by the PI controller is fed to the PWM generator. The generated PWM signal, which controls the duty cycle of the SEPIC converter in order to adjust the operating point of the PV module, is shown in Figure 5.

Fig. 5. The generated PWM signal of the SEPIC.

By training the ANFIS with a sufficient number of epochs and adjusting the membership function values, a set of fuzzy rules is generated in order to produce the appropriate output for different input values.
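The duty cycle listed in Table I is consistent with the CCM relation (4); a quick numerical check (the diode drop Vd = 0.7 V is an assumed value, not given in the paper):

```python
def sepic_duty_cycle(Vin, Vout, Vd=0.7):
    """CCM SEPIC duty cycle per (4): D = (Vout + Vd) / (Vin + Vout + Vd)."""
    return (Vout + Vd) / (Vin + Vout + Vd)

# Table I values: Vin = 300 V, Vout = 310 V -> D close to the listed 0.508.
D = sepic_duty_cycle(Vin=300.0, Vout=310.0)
```

Because D > 0.5 here, the converter operates slightly in step-up mode, matching the 300 V to 310 V conversion in Table I.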
The Simulink model of the PV module generates the training data set for the ANFIS by varying the operating temperature from 25°C to 45°C in steps of 10°C and the solar irradiance from 200 W/m² to 1000 W/m² in steps of 100 W/m². For each pair of operating temperature and solar irradiance, the optimal voltage of the PV module is recorded. In total, 45 training data sets and 100 epochs were used to train the ANFIS model. ANFIS constructs a fuzzy inference system (FIS) from the input/output data sets, and the membership function parameters of the FIS are tuned using a hybrid optimization method, a combination of the least-squares and back-propagation algorithms. ANFIS is a Takagi-Sugeno network within the class of adaptive systems facilitating learning and training. Figure 7 presents the ANFIS structure developed by the MATLAB code. The ANFIS controller works according to the input values of temperature (°C) and sun illumination (W/m²) based on the trained .fis file. As can be seen in Figure 7, this is a five-layer network with two inputs and one output. Each input parameter has five membership functions, which are learned by the ANFIS method. According to the input-output mapping of the data sets, twenty-five fuzzy rules are derived. A 3-dimensional plot of temperature, irradiance, and optimal voltage, presenting the surface generated by the ANFIS, is shown in Figure 8.

Fig. 6. Neuro-fuzzy designer.

Fig. 7. ANFIS controller structure.

The ANFIS surface shows that the maximum available power of the solar PV module increases with increasing irradiance and moderate temperature, which verifies the nonlinear behavior of the PV module. Having obtained the right value Vopt at the output of the ANFIS, the error between Vopt and Vpv is fed to the PI block to generate the control signal of the PWM block, and thus the PWM duty cycle of the SEPIC converter is created.

Fig. 8. Surface between the inputs and the output.

V.
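The training-set construction described above can be sketched as follows. The `pv_optimal_voltage` callable stands in for the Simulink PV model and is a hypothetical placeholder; note also that the stated steps give 27 (temperature, irradiance) pairs, so the 45 training sets reported in the paper suggest a finer temperature step than the one quoted:

```python
def build_training_grid(pv_optimal_voltage):
    """Sample (temperature, irradiance) pairs over the ranges used for
    ANFIS training and record the optimal PV voltage for each pair."""
    data = []
    for T in range(25, 46, 10):          # 25, 35, 45 °C
        for G in range(200, 1001, 100):  # 200 ... 1000 W/m^2
            data.append((T, G, pv_optimal_voltage(T, G)))
    return data

# Placeholder model: any callable mapping (T, G) -> Vopt works here;
# the linear form below is illustrative only.
grid = build_training_grid(lambda T, G: 300.0 - 0.5 * (T - 25) + 0.01 * (G - 1000))
```

Each (temperature, irradiance, Vopt) triple then becomes one input/output training example for the FIS.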
SIMULATION RESULTS

The results were obtained using MATLAB/Simulink. Figures 9 and 10 present the P-V and I-V characteristics for different values of irradiance and temperature.

Fig. 9. P-V characteristics at different temperatures.

Fig. 10. I-V characteristics at different temperatures.

Fig. 11. Panel voltage (Vp) characteristic for different values of irradiance.

Fig. 12. Panel voltage (Vp) characteristic for different values of temperature.

Note that the voltage Vp of the PV generator shown in Figures 11-12 follows a linear curve over time, and any increase in illumination is accompanied by an increase in the optimal voltage Vpopt. Figures 13-14 present the output optimal voltage (Vpopt) of the ANFIS maximum power point tracking at 25°C for different illumination values.

Fig. 13. Vpopt of the ANFIS maximum power point tracking at 600 W/m².

Figures 15-16 show the Vpopt of the ANFIS maximum power point tracking at 1000 W/m² for different temperature values. Fast tracking of the evolution of the PV voltage is observed, as well as of the current and electrical power, with respect to their optimal values under varying climatic conditions.

Fig. 14. Vpopt of the ANFIS maximum power point tracking at 800 W/m².

Fig. 15. Vpopt of the ANFIS maximum power point tracking at 25°C.

Fig. 16. Vpopt of the ANFIS maximum power point tracking at 45°C.

The application of the SEPIC converter allows the generator to follow its MPP and to provide the operating voltage required by the voltage source inverter for maximum efficiency. The output voltage of the ANFIS controller
produces very high efficiency and stability for different irradiance and temperature scenarios. The results in Figures 17-19 illustrate the output voltage of the SEPIC at 1000 W/m² of illumination for different temperature values.

Fig. 17. Output voltage (Vout) of the SEPIC converter at 45°C.

Fig. 18. Vout of the SEPIC at 30°C.

Fig. 19. Vout of the SEPIC at 20°C.

The time evolution of the line-to-line voltages at the inverter output is presented in Figure 20; the average model of the wave generated by the PWM voltage inverter subsequently allows obtaining a constant motor torque. The simulated speed of the motor-pump and the water flow at 45°C for different illumination levels (600, 800, 1000 W/m²) are shown in Figures 21 and 22, respectively. We note that, using the ANFIS command, the rotation speed of the motor-pump set increased with the illumination to reach 2834 rpm, which affects the water flow, which in turn evolved from 27.45 L/min to 34 L/min. Examining the temporal evolution of the photovoltaic panel voltage Vp and the rotation speed of the motor-pump shaft, we observe that in steady state these variables are constant over time for varying input parameters.

Fig. 20. Output phase voltage (Vs1) of the DC-AC inverter.

Fig. 21. Speed of the motor-pump for different illuminations.

Fig. 22. Time variation of the water flow (L/min).

Figures 23-24 present the electromagnetic torque (Cem) at a temperature of 45°C and different illumination values varying from 600 W/m² to 1000 W/m².

Fig. 23. The electromagnetic torque (Cem) at 800 W/m².

Fig. 24. The Cem at 1000 W/m².

Fig. 25. The stator currents of the motor-pump.

VI.
CONCLUSION AND OUTLOOK

Based on the simulation results, we conclude that the predefined control objectives were achieved. The maximum power point tracking control of the PV pumping system using an ANFIS-driven SEPIC was presented in this paper. For each considered illumination and temperature, the optimal voltage was reached using the ANFIS maximum power point tracking method. The robustness of this intelligent control system was tested under load changes, and the influence of fluctuating solar radiation on the system dynamics was investigated. As future research, it is possible to implement an artificial intelligence technique to control hybrid sources (photovoltaic and wind).

REFERENCES

[1] S. Selvan, P. Nair, U. Umayal, "A review on photo voltaic MPPT algorithms", International Journal of Electrical and Computer Engineering, Vol. 6, No. 2, pp. 567-582, 2016
[2] F. D. Murdianto, O. Penangsang, A. Priyadi, "Modeling and simulation of MPPT-bidirectional using adaptive neuro fuzzy inference system (ANFIS) in distributed energy generation system", 2015 International Seminar on Intelligent Technology and Its Applications, Surabaya, Indonesia, May 20-21, 2015
[3] A. J. Sabzali, E. H. Ismail, H. M. Behbehani, "High voltage step-up integrated double boost-SEPIC DC-DC converter for fuel-cell and photovoltaic applications", 4th International Congress on Renewable Energy: Generation and Applications, Milwaukee, USA, October 19-22, 2014
[4] A. Elgharbi, Ameliorated Control of a Motor-Pump Coupled to a Photovoltaic Generator, MSc Thesis, Sciences University of Tunis, 2010 (in French)
[5] D. Mezghani, Study of a Photovoltaic Pumping by a Bond Graph Approach, Sciences University of Tunis, 2009 (in French)
[6] Y. Oueslati, Study of Performance of a Photovoltaic Generator Coupled to the Network Draft, MSc Thesis, High School of Sciences and Techniques of Tunis, 2007 (in French)
[7] A. Nouaiti, A. Saad, A. Mesbahi, M. Khalfallah, M.
reddak, “design and test of a new three-phase multilevel inverter for pv system applications”, engineering technology & applied science research, vol. 9, no. 1, pp. 3846-3851, 2019 [8] a. s. saidi, m. ben slimene, m. a. khlifi, “transient stability analysis of photovoltaic system with experimental shading effects”, engineering technology & applied science research, vol. 8, no. 6, pp. 3592-3597, 2018 [9] s. javadpoor, d. nazarpour, “modeling of pv-fc-hydrogen hybrid power generation system”, engineering technology & applied science research, vol. 7, no. 2, pp. 1455-1457, 2017 [10] z. r. labidi, h. schulte, a. mami, “a systematic controller design for a photovoltaic generator with boost converter using integral state feedback control”, engineering technology & applied science research, vol. 9, no. 2, pp. 4030-4036, 2019 [11] d. zhang, designing a sepic converter, application report 1484, texas instruments, 2013 [12] h. othmani, h. chaouali, d. mezghani, a. mami, “design and building of sepie dc-dc converter devoted to kaneka gsa-60 pv panels”, 7th international conference on modelling, identification and control, monastir, tunisia, may 8-10, 2015 [13] d. mezghani, h. othmani, a. mami, “bond graph modeling and robust control of a photovoltaic generator that powered an induction motor pump via sepic converter”, electrical energy systems, vol. 29, no. 3, article id e2746, 2019 [14] a. arora, p. gaur, “comparison of ann and anfis based mppt controller for grid connected pv systems”, annual ieee india conference, new delhi, india, december 17-20, 2015 [15] f. bendary, e. m. elsaied, w. a. mohamed, z. e. afifi, “geneticanfis hybrid algorithm for optimal maximum power point tracking of pv systems”, 17th international middle east power systems conference, mansoura, egypt, december 15-17, 2015 microsoft word 33-3036_s_etasr_v9_n5_pp4755-4758 engineering, technology & applied science research vol. 9, no. 
5, 2019, 4755-4758 4755 www.etasr.com adil et al.: performance analysis of duplicate record detection techniques
performance analysis of duplicate record detection techniques
syed hasan adil, department of computer science, iqra university, karachi, pakistan, hasan.adil@iqra.edu.pk
syed saad azhar ali, department of electrical and electronic engineering, universiti teknologi petronas, seri iskandar, malaysia, saad.azhar@utp.edu.my
mansoor ebrahim, department of computer science, iqra university, karachi, pakistan, mebrahim@iqra.edu.pk
kamran raza, department of computer science, iqra university, karachi, pakistan, kraza@iqra.edu.pk
abstract—in this paper, a comprehensive performance analysis of duplicate data detection techniques for relational databases has been performed. the research focuses on traditional sql-based and modern bloom filter techniques to find and eliminate records which already exist in the database while performing bulk insertion operations (i.e. the bulk insertion involved in the loading phase of the extract, transform, and load (etl) process and the data synchronization in multi-site database synchronization). the comprehensive performance analysis was performed on several data sizes using sql, bloom filter, and parallel bloom filter. the results show that the parallel bloom filter is highly suitable for duplicate detection in the database.
keywords-duplicate detection; bloom filter; sql; database
i. introduction
duplicate record detection [1, 2] is the process of identifying pairs of records that belong to the same entity in one or more databases. despite the development of many indexing techniques like isam, b-tree, bitmap, and hash indexing, the process of matching two records that belong to the same entity still requires time proportional to the number of existing records. therefore, an alternative technique is required to perform duplicate record detection.
duplicate data detection has very important applications in many critical areas including databases, distributed databases, and data warehouses. data synchronization is a task demanded in a centralized database in case of standby after a database failure, in a distributed database when we have to synchronize multiple remotely distributed database instances, or in the load part of the extract, transform, and load (etl) process where new data have to be loaded into the database in a continuous process. data streams like video, audio, etc. are some of the sources of big data which we want to process in real-time. in stream processing, duplicate data detection is one of the most important tasks, but at the same time it is very challenging due to the amount of data that continuously arrives at high speed. we can deal with these challenging requirements through a more robust technique like bloom filters, which have the potential to perform better than the traditional duplicate detection techniques used in relational databases. therefore, in this paper, we will deeply investigate the application of bloom filters in order to identify duplicate records in databases, distributed databases, and data warehouses. the main objective of the paper is to implement the sql, bloom filter, and parallel bloom filter duplicate detection techniques and to decide which one is the most appropriate for duplicate detection.
ii. related work
bloom filter [3] is a probabilistic data structure developed in 1970. a bloom filter is a space-efficient data structure based on the computation of several hash functions. a bloom filter has zero probability of false negatives, but it can have a nonzero probability of false positives (though it is possible to reduce the false-positive probability to near zero depending on parameter selection).
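the insert/lookup behavior described above can be sketched with a minimal bloom filter. this is an illustrative implementation, not the one used in the paper; the bit-array size and the double-hashing scheme are assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Double hashing: derive k bit positions from two base hashes.
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("student 1,address 5,country 9")   # concatenated columns, as in the paper
print("student 1,address 5,country 9" in bf)   # True (no false negatives)
print("student 2,address 7,country 3" in bf)   # expected False; True would be a false positive
```

the lookup never touches the source table, which is what makes the bloom filter approaches in the following sections so much faster than a table scan.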
a false positive means that the filter may identify a new entry as already existing, even though this is not true. in addition to being highly space-efficient, operations like insert and search are very fast in bloom filters. deletion is generally not allowed in bloom filters due to the additional amount of work required. when deciding about the bloom filter, one must consider a tradeoff between space and the false-positive rate. so, if space is more important, then the bloom filter is an ideal choice (with a very small chance of false positives). however, if even a small chance of false positives cannot be tolerated, one cannot use the bloom filter. many different variants of the original bloom filter have been proposed, which include but are not limited to the counting bloom filter [4], d-left counting bloom filter [5], compressed bloom filter [6], bloomier filter [7], space-code bloom filter [8], dynamic bloom filter [9], etc. the applications of bloom filters [10, 11] include but are not limited to spell checking, collaboration in p2p networks, resource and packet routing, cache optimization, url shortening, video recommendation, string matching, spam filtering, dos and ddos detection, anomaly detection, etc. in this research, we applied sql, bloom filter, and parallel bloom filter approaches to perform duplicate detection during bulk insertion operations in databases, distributed databases, and data warehouses using different numbers of tuples (i.e. from one thousand to one million tuples in the table as well as in the bulk operation).
iii. proposed methodology
the discussion in the previous section acknowledged the importance of duplication detection in databases, distributed databases, and data warehouses while importing bulk data.
duplication detection in large databases is a very computationally intensive task because each incoming record needs to be compared with all the existing records in the database. it is important to note that we cannot perform a comparison based on primary keys because data are coming from various sources. in this research work, we have implemented three different approaches (i.e. sql, bloom filter, and parallel bloom filter) to compare their performance on duplicate detection using different numbers of records (i.e. existing records in the table / new records to insert ratios equal to 1000/1000, 10000/10000, 100000/100000, and 1000000/1000000). the overall process flow of each approach is described in figure 3 for the sql based approach, figure 4 for the bloom filter, and figure 5 for the parallel bloom filter. the table used to perform duplicate detection is shown in figure 1, while the script used to generate data is shown in figure 2. the different steps of each approach are described below:

create table students (
    stud_id int identity primary key,
    stud_name nvarchar(25),
    stud_address nvarchar(100),
    stud_country nvarchar(25)
)
fig. 1. schema of the table used for analysis

declare @id int
declare @totalrecords int
set @id = 1
set @totalrecords = 1000
while @id <= @totalrecords
begin
    insert into students values
        ('student ' + cast(floor(rand() * 100) as nvarchar(25)),
         'address ' + cast(floor(rand() * 100) as nvarchar(100)),
         'country ' + cast(floor(rand() * 100) as nvarchar(25)))
    set @id = @id + 1
end
fig. 2. script used to generate random data

fig. 3. the workflow of duplication detection using the sql based approach
fig. 4. the workflow of duplication detection using the bloom filter-based approach
fig. 5. the workflow of duplication detection using the parallel bloom filter-based approach
a.
sql based approach
the different steps involved in finding duplicate records using the sql based approach are described below:
• step 1: in this step, data from the import file are loaded by the application.
• step 2: in this step, the next record is fetched from the file. if a record exists, then move to step 3, otherwise end the process.
• step 3: in this step, all columns of the record are concatenated, excluding the key column.
• step 4: in this step, the record (i.e. the concatenated columns) is matched against all existing records (i.e. each record with concatenated columns) in the table for a duplicate check using the where clause of the select statement. figure 6 shows the concatenated column query.
• step 5: in this step, if the record does not exist, it is inserted into the actual table. otherwise, it is inserted into the duplicate database. go back to step 2.

select * from students where concat(stud_name, stud_address, stud_country) = 'name,address,country'
fig. 6. the select statement

b. bloom filter-based approach
the steps involved in finding duplicate records using the bloom filter-based approach are described below:
• steps 1-3, 5 are the same as in the sql based approach.
• step 4: in this step, the records (i.e. concatenated columns) are matched against all existing records (i.e. each record with concatenated columns) in the bloom filter without involving the source table in the search process. the bloom filter must be updated for each record inserted into the source table, so the bloom filter always reflects the current state of the table in the database.

c. parallel bloom filter-based approach
the steps involved in finding duplicate records using the parallel bloom filter-based approach are described below:
• steps 1-3, 5 are the same as in the sql and bloom filter-based approaches.
• step 4: this step is like step 4 of the bloom filter approach, but the only difference is that the records (i.e. concatenated columns) are matched in parallel with all existing records (i.e.
each record with concatenated columns) in the bloom filter. this helps in utilizing multiple cores of the host machine and reduces the time required to match all the records.

iv. results and discussion
the workflow of the three approaches used in this paper is presented in figures 3-5. all three approaches were used to detect duplicates in four different cases. in case i, the table contains 1000 records and the bulk insert file also contains 1000 records (950 unique and 50 duplicate records). in case ii, the table contains 10000 records and the bulk insert file also contains 10000 records (9800 unique and 200 duplicate records). in case iii, the table contains 100000 records and the bulk insert file also contains 100000 records (95000 unique and 5000 duplicate records). in case iv, the table contains 1000000 records and the bulk insert file also contains 1000000 records (850000 unique and 150000 duplicate records). the obtained results (i.e. processing time) after the execution of all combinations of the analysis are presented in table i.

table i. experimental results for duplication detection
number of records | technique | time (h:min:s.ms) | time (ms)
existing: 1000, new: 1000 (unique new: 950, duplicate new: 50) | bf | 00:00:00.010 | 10
 | parallel bf | 00:00:00.010 | 10
 | query | 00:00:09.320 | 9320
existing: 10000, new: 10000 (unique new: 9800, duplicate new: 200) | bf | 00:00:00.050 | 50
 | parallel bf | 00:00:00.030 | 30
 | query | 00:02:26.880 | 146880
existing: 100000, new: 100000 (unique new: 95000, duplicate new: 5000) | bf | 00:00:00.450 | 450
 | parallel bf | 00:00:00.210 | 210
 | query | 01:14:43.170 | 4483170
existing: 1000000, new: 1000000 (unique new: 850000, duplicate new: 150000) | bf | 00:00:04.850 | 4850
 | parallel bf | 00:00:01.930 | 1930
 | query | 14:01:23.190 | 50483190
bf: bloom filter
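the scaling behavior in table i can be summarized numerically. the sketch below is an illustrative post-processing script (not part of the original study); the times in ms are taken directly from table i:

```python
# Times in ms from table i: (sql query, bloom filter, parallel bloom filter)
cases = {
    1_000:     (9_320,      10,   10),
    10_000:    (146_880,    50,   30),
    100_000:   (4_483_170,  450,  210),
    1_000_000: (50_483_190, 4_850, 1_930),
}

for n, (query_ms, bf_ms, pbf_ms) in cases.items():
    # Speedup of the parallel bloom filter over the plain SQL query.
    print(f"{n:>9} records: query/parallel-bf speedup = {query_ms / pbf_ms:,.0f}x")
```

the speedup grows from roughly 932x at one thousand records to over 26,000x at one million records, which is why figure 7 uses a log10 time axis.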
figure 7 compares visually the performance of the sql, bloom filter, and parallel bloom filter approaches. the graph in figure 7 plots log10 of the time in ms, instead of the time in ms itself, against the number of records to process, the reason being the rapidly growing difference between the execution time of sql and that of the bloom filter/parallel bloom filter approaches as the number of rows to compare increases. figure 8 compares the performance of the bloom filter and the parallel bloom filter. the tabular and visual analyses clearly show the high suitability of the parallel bloom filter for duplicate detection. it becomes the only viable solution when the number of rows in the table or the number of rows that need to be inserted becomes very large.
fig. 7. execution time for duplicate detection comparison
fig. 8. execution time for duplicate detection comparison

v. conclusion
this study presented a comprehensive performance analysis of three database duplicate detection techniques. the performance analysis was conducted using different numbers of existing records in the database with bulk data insertions of different sizes. the relative time difference between the sql and bloom filter-based approaches for duplicate detection and insertion increases rapidly with the number of records. the relative time difference between the bloom filter and the parallel bloom filter also increases substantially with the number of records, although not as rapidly. the research concludes that the parallel bloom filter is the most scalable and optimal solution for duplicate detection in databases, distributed databases, data warehouses, and in general for any application which requires duplicate detection.
due to the advent of modern highly parallel computing architectures, it is highly advisable to implement a parallel version of the algorithm which can scale on multicore and multiprocessor machines to efficiently utilize the aggregate computing power.
references
[1] a. k. elmagarmid, p. g. ipeirotis, v. s. verykios, “duplicate record detection: a survey”, ieee transactions on knowledge and data engineering, vol. 19, no. 1, pp. 1-16, 2007
[2] o. h. akel, a comparative study of duplicate record detection techniques, msc thesis, middle east university, 2012
[3] b. h. bloom, “space/time trade-offs in hash coding with allowable errors”, communications of the acm, vol. 13, no. 7, pp. 422-426, 1970
[4] l. fan, p. cao, j. almeida, a. z. broder, “summary cache: a scalable wide-area web cache sharing protocol”, ieee/acm transactions on networking, vol. 8, no. 3, pp. 281-293, 2000
[5] f. bonomi, m. mitzenmacher, r. panigrahy, s. singh, g. varghese, “an improved construction for counting bloom filters”, european symposium on algorithms, springer, pp. 684-695, 2006
[6] m. mitzenmacher, “compressed bloom filters”, ieee/acm transactions on networking, vol. 10, no. 5, pp. 604-612, 2002
[7] b. chazelle, j. kilian, r. rubinfeld, a. tal, “the bloomier filter: an efficient data structure for static support lookup tables”, fifteenth annual acm-siam symposium on discrete algorithms, new orleans, usa, january 11-14, 2004
[8] a. kumar, j. xu, j. wang, “space-code bloom filter for efficient per-flow traffic measurement”, ieee journal on selected areas in communications, vol. 24, no. 12, pp. 2327-2339, 2006
[9] d. guo, j. wu, h. chen, x. luo, “theory and network applications of dynamic bloom filters”, 25th ieee international conference on computer communications, barcelona, spain, april 23-29, 2006
[10] s. geravand, m. ahmadi, “bloom filter applications in network security: a state-of-the-art survey”, computer networks, vol. 57, no. 18, pp. 4047-4064, 2013
[11] y. emami, r.
javidan, “an energy-efficient data transmission scheme in underwater wireless sensor networks”, engineering, technology & applied science research, vol. 6, no. 2, pp. 931-936, 2016

engineering, technology & applied science research vol. 10, no. 3, 2020, 5643-5647 5643 www.etasr.com masmoum & alama: use of tying devices to mitigate pounding of adjacent building blocks
tying devices to mitigate pounding of adjacent building blocks
mohammed s. masmoum, civil engineering department, king abdulaziz university, jeddah, saudi arabia, ms.masmoum@gmail.com
mohammed-sohaib a. alama, civil engineering department, king abdulaziz university, jeddah, saudi arabia, sohaib.alama@hotmail.com
abstract—adjacent building blocks separated by thermal expansion joints are vulnerable to pounding during earthquakes. the minimum separation specified by the saudi building code may be very large and does not necessarily eliminate pounding forces. this research discusses the feasibility of tying adjacent building blocks with simple devices to mitigate structural pounding when they are separated by thermal joints. six- and twelve-story moment-resisting frames of intermediate ductility were designed for seismic loads of moderate risk. the seismic response was studied for frames with variable separation distances in three cases, related to the thermal joint, the code minimum separation, and the separation required to eliminate the pounding force, and in a fourth case in which the tying device was used along with the thermal separation. a linear elastic model was used to model the assigned gap links between the adjacent building blocks. the tying device was modeled with a tension-only hook element. four normalized earthquake records were used with inelastic time-history analysis to assess the seismic response of the adjacent building blocks.
the proposed tying devices successfully reduced the pounding forces by 40% to 60% for adjacent building blocks with installed thermal separations. building damage, as observed from the damage index and the hysteretic response, was not influenced by the pounding force, indicating that tying may be used on existing buildings with thermal separation as a partial mitigation technique to reduce the pounding hazard in such cases. further improvement on the tying device will increase the mitigation of the pounding hazard.
keywords-pounding; expansion joint; minimum separation; tying devices
i. introduction
saudi arabian cities exhibited major development in recent years and the demand for residential housing and high-rise buildings is high. high-rise buildings require sophisticated designs since those flexible structures might include expansion joints that separate them from adjacent rigid structures. this scenario can be idealized by a high-rise tower surrounded by a podium or adjacent to a parking structure. the difference in mass and stiffness between flexible and rigid structures might make them move out-of-phase during strong ground motion events. this movement makes adjacent building blocks prone to pounding hazards. it is thought that tying adjacent buildings together with a simple tying device, allowing thermal movement but preventing out-of-phase movement of the blocks during earthquakes, ought to reduce pounding forces and mitigate seismic hazards. the saudi building code [1] requires a minimum separation distance to reduce or eliminate pounding. the minimum separation distance calculation is based on the square root of the sum of squares (srss) of the maximum inelastic drifts of the adjacent building blocks. such a separation distance might be wide and requires special architectural treatment to cover the gaps between the adjacent building blocks.
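the srss rule just described can be sketched as a short computation. this is illustrative only; the drift values below are hypothetical and are not results from the paper:

```python
import math

def srss_separation(drift_a_mm, drift_b_mm):
    """Minimum separation distance as the square root of the sum of
    squares of the maximum inelastic drifts of the two adjacent blocks."""
    return math.sqrt(drift_a_mm**2 + drift_b_mm**2)

# Hypothetical maximum inelastic drifts of two adjacent blocks (mm).
print(srss_separation(180.0, 135.0))  # 225.0 mm
```

note that the srss value is always smaller than the absolute sum of the two drifts, which is why srss gives a narrower (less conservative) gap than the abs method.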
so, if the tying device proves feasible in mitigating seismic hazards, it will allow buildings to be placed with narrower separation gaps. the feasibility of tying building blocks with a simple tying device as a means of mitigating pounding and reducing the required separation was validated through the study of the seismic response of adjacent building blocks designed according to sbc requirements. during the 1985 mexico city earthquake, over 40% of the buildings were severely damaged or collapsed and 15% of them collapsed due to pounding [2]. in the 1989 loma prieta earthquake, more than 200 buildings were damaged due to pounding within a radius of 90km from the epicenter, which indicates that pounding could be catastrophic for cities near or far from active faults [3]. the proposed methods to account for the minimum required separation include 1) the absolute sum of displacements (abs), 2) srss, and 3) the spectral difference method using the double difference combination (ddc) rule [4-8]. sbc 301-2007 adopted the concept of srss rather than ddc or abs due to its simplicity, high accuracy, and the small differences in the minimum required separation [9, 10]. a comparison between these methods concluded that srss can be practical and provide the required separation distance [11, 12]. the need of providing mitigation methods between buildings that do not have enough gap was discussed in [13] through a numerical study with different ground motion records to simulate the pounding between light-mass and heavy-mass three-story buildings. the research objective was to measure the efficiency of the available mitigation methods in reducing the required seismic gaps based on time history analysis with a nonlinear viscoelastic model. it was observed that linking the buildings with springs with stiffness of more than 2×10⁴kn/m or dampers with a damping coefficient of more than 1×10⁶kg/s reduced the required seismic gap by 85%.
this reduction happened because the adjacent buildings were fully connected and vibrated in-phase due to the link installation, knowing that the adjacent buildings differ in dynamic properties and are equal in height. structural tying of the adjacent building blocks in one complex can make them respond as a single structure [14]. the additional relative stiffness due to tying should not create a deficiency in the interacting portions. the originality of this study is that it proposes tying the adjacent building blocks with simply manufactured steel plates that can be anchored to the adjacent building blocks and accommodated within the floor finishes using thermal gaps only.
ii. methodology
the necessary details for building design and ground motion scaling are explained in [15]. inelastic time history analysis was used to compute the response of the adjacent building blocks, for which the beams were modeled using a single-component model with takeda hysteretic behavior selected for the nonlinear rotational hinges at the beam ends. on the other hand, bi-axial interacting hinges were assumed at the ends of the columns. the hinges were defined using the automatic hinge generator of sap2000 [16]. four cases were considered:
• case 1: buildings are separated with thermal expansion joints and without tying devices.
• case 2: buildings are separated with code required separation distances and without tying devices.
• case 3: buildings are separated with enough distance to totally avoid pounding forces and without tying devices.
• case 4: buildings are separated with thermal expansion joints and tied with the proposed tying devices.
two main response parameters were used to compare the cases in order to evaluate the feasibility of tying devices in mitigating pounding hazards. the first was the maximum pounding force and the second was the damage state of the buildings. the damage state will be assessed based on inelastic hinge rotations and will be compared with the damage limits prescribed in [14]. in order to get an overall damage state, a damage index is proposed as per table i and figures 1 and 2.

table i. damage weight criteria
hinge plastic rotation θ | index
a to b | 0
b to io | 1
io to ls | 2
ls to cp | 3
more than cp | 4

damage index (building) = Σ(no. of plastic hinges × weighted index) / (no. of plastic hinges in the building members × 4)

fig. 1. beam positive and negative backbone curves for m3 hinges
fig. 2. column backbone curves for interacting p-m2-m3 hinges

iii. modeling contact and tying of frames
nonlinear contact elements were used at the joint interface between adjacent frames to account for the contact force utilizing the gap element in sap2000. details are given in [15]. the tying device was modeled using a hook link element in sap2000. the link connects two joints located around the expansion joint as shown in figures 3-4. the hook link will simulate tying of the adjacent building blocks if the relative displacement exceeds the specified thermal gap opening, in this case 10mm for the studied frames.
fig. 3. hook link assignment in sap2000
fig. 4. representation of the hook link in sap2000
if the total value of the relative displacement d of the connected joints exceeds the 10mm gap, the link will hook the adjacent building blocks. if the total value of the relative displacement lies between 0 and 10mm, the hook link will not connect the joints, as shown in (1). the stiffness of the hook link k_hook will be equal to the axial stiffness of the tying device shown in figure 5.

f = 0 if (d − gap) < 0; f = k_hook (d − gap) if (d − gap) ≥ 0    (1)
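as a sketch, the tension-only hook law in (1) and the weighted damage index defined with table i can be written as short functions. this is illustrative only; the stiffness value and the hinge weights in the example are placeholders, not values from the paper:

```python
def hook_force(rel_disp_mm, k_hook_n_per_mm, gap_mm=10.0):
    """Tension-only hook link: engages only once the relative
    displacement exceeds the thermal gap opening (equation (1))."""
    opening = rel_disp_mm - gap_mm
    return k_hook_n_per_mm * opening if opening >= 0.0 else 0.0

def damage_index(weighted_indices):
    """Weighted damage index: sum of the per-hinge weights (0..4,
    from table i) divided by (number of hinges x 4)."""
    return sum(weighted_indices) / (len(weighted_indices) * 4)

print(hook_force(5.0, 1000.0))            # 0.0 N: gap not yet closed
print(hook_force(15.0, 1000.0))           # 5000.0 N at 5 mm of engagement
print(damage_index([0, 1, 1, 2, 0, 0]))   # light damage, index 1/6
```

an index of 0 means all hinges remained elastic, while 1 would mean every hinge exceeded the collapse prevention (cp) limit.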
figure 5 shows a concept drawing of the proposed tie. the axial stiffness is computed for the rod plates that are expected to be flexible in resisting the tying force, as determined in (2):

k_hook = e·a / l    (2)

where e is the steel modulus of elasticity (200,000n/mm²), a and l are the effective cross-sectional area and the length of the rod plates respectively, and k_hook is the axial stiffness of the tying device, assumed to remain elastic throughout the response.
fig. 5. conceptual drawing of the proposed tying device

iv. results and discussion
for record rsn0020, the building blocks' damage states showed acceptable performance, as they indicate light damage with plastic hinge rotations less than the life safety limit [14], as shown in figure 6. the pounding force of case 1 was 1330kn while for case 4 it was 614kn, showing a 54% reduction when tying devices were used. also, the hysteretic relations for the 6th floor hinges with the separations of cases 2-4 are shown in figure 7. the maximum plastic hinge rotation indicates a satisfactory performance, less than the life safety limit, for all cases. pounding forces did not cause a significant effect on the maximum plastic hinge rotation.
fig. 6. rsn0020 weighted damage indices for adjacent building blocks with installation of tying devices
for record rsn0169, the building blocks' damage state showed acceptable performance, as the building blocks exhibited light damage with plastic hinge rotations less than the life safety limit [14], as shown in figure 8. the pounding force of case 1 was 753kn, but the pounding force of case 4 was 584kn, showing a 23% reduction when tying devices were used. the hysteretic relations for the 6th floor hinges with the separations of cases 2-4 are shown in figure 9.
fig. 7. rsn0020 hysteretic plot for hinges at the 6th floor from a 12-story and a 6-story building block with tying devices installed
fig.
8. rsn0020 weighted damage indices for adjacent building blocks with installation of tying devices
the maximum plastic hinge rotation indicates a satisfactory performance, less than the life safety limit, for all cases. pounding forces did not cause a significant effect on the maximum plastic hinge rotation. figure 10 compares the maximum pounding forces at the 6th floor of the adjacent building blocks for the four studied cases. the bar chart demonstrates the computed forces in the gap and hook links with a 10mm separation distance (case 4). the minimum required separation distance is highlighted by a straight line. there are two records requiring 300mm to avoid pounding, and this value is more than the minimum separation of 225mm required by the code (case 2). for full details of the analysis results refer to [17].
fig. 9. rsn0169 hysteretic plot for hinges at the 6th floor from a 12-story and a 6-story building block with tying devices installed
fig. 10. comparison between the maximum pounding forces with and without tying devices for all the records

v. conclusion
based on the results obtained during the course of the work reported in this paper and earlier works [15, 17], the following points can be concluded:
• the proposed tying devices successfully reduced pounding forces by 40% to 60% for adjacent building blocks with 10mm separation compared to adjacent building blocks without tying devices. building damage as observed from the damage index and hysteretic response was not influenced by the pounding force in either case. this indicates that tying can be used on existing buildings with thermal separation as a partial mitigation technique to reduce pounding hazards. further improvement on the tying device will increase the mitigation of the pounding hazard.
• tying devices can be used on buildings with normal expansion joints and mitigate the pounding effect at a similar or better level than seismic joints with the code minimum required separations. this was clearly shown as the maximum pounding force was reduced from 1330kn to 614kn using the tying device with thermal separation only. the observed building damage from the damage index and hysteretic response was not influenced by the pounding force in either case.
• the effect of pounding on the hysteretic damage of the building blocks can be better assessed by comparing the enclosed hysteretic area of the plastic hinges of the structure, in addition to the maximum plastic rotation, before and after the installation of tying devices.
• tying devices could be designed based on nonlinear analysis using the methodology used in this work.
• it was observed that the pounding force will not be more than 20% of the summation of the adjacent building blocks' base shears. this approximation, together with the results from further parametric studies, can be used to estimate the maximum tying force for design purposes.
• the equivalent spring stiffness based on the floor lateral displacement can be assumed as the gap link stiffness to give a converged solution. if the integration does not converge, the stiffness value might need a further multiplier to reach a converged solution.
• it was observed that pounding forces could not be eliminated by applying the code minimum separation distance of 225mm. this was especially observed when using nonlinear response history analysis for sites that are characterized by liquefiable soils.
references
[1] sbc committee 301, structural loading and forces, saudi building code national committee, 2007
[2] e. rosenblueth, r. meli, “the 1985 earthquake: causes and effects in mexico city”, concrete international, vol. 8, pp. 23–34, 1986
[3] k. kasai, b. f. maison, “building pounding damage during the 1989 loma prieta earthquake”, engineering structures, vol.
19, No. 3, pp. 195-207, 1997
[4] V. Jeng, K. Kasai, B. F. Maison, "A spectral difference method to estimate building separations to avoid pounding", Earthquake Spectra, Vol. 8, No. 2, pp. 201-223, 1992
[5] J. Penzien, "Evaluation of building separation distance required to prevent pounding during strong earthquakes", Earthquake Engineering & Structural Dynamics, Vol. 26, No. 8, pp. 849-858, 1997
[6] J. H. Lin, "Separation distance to avoid seismic pounding of adjacent buildings", Earthquake Engineering and Structural Dynamics, Vol. 26, pp. 395-403, 1997
[7] R. E. Valles, A. M. Reinhorn, Evaluation, Prevention and Mitigation of Pounding Effects in Building Structures, National Center for Earthquake Engineering Research, 1997
[8] H. P. Hong, S. S. Wang, P. Hong, "Critical building separation distance in reducing pounding risk under earthquake excitation", Structural Safety, Vol. 25, No. 3, pp. 287-303, 2003
[9] D. Lopez-Garcia, T. T. Soong, "Evaluation of current criteria in predicting the separation necessary to prevent seismic pounding between nonlinear hysteretic structural systems", Engineering Structures, Vol. 31, No. 5, pp. 1217-1229, 2009
[10] R. Jankowski, S. Mahmoud, Earthquake-Induced Structural Pounding, Springer International Publishing, 2015
[11] M. J. Favvata, "Minimum required separation gap for adjacent RC frames with potential inter-story seismic pounding", Engineering Structures, Vol. 152, pp. 643-659, 2017
[12] M. Isteita, K. Porter, "Safe distance between adjacent buildings to avoid pounding in earthquakes", 16th World Conference on Earthquake Engineering, Santiago, Chile, January 9-13, 2017
[13] R. Jankowski, S. Mahmoud, "Mitigation of pounding effects", in: Earthquake-Induced Structural Pounding, Springer International Publishing, pp. 103-132, 2015
[14] ASCE/SEI Committee 7, Minimum Design Loads for Buildings and Other Structures, American Society of Civil Engineers, 2010
[15] M. Masmoum, S. Alama, "Required separation to mitigate pounding of adjacent building blocks", Engineering, Technology & Applied Science Research, Vol. 8, No. 6, pp. 3565-3569, 2018
[16] Computers and Structures Inc., CSI Analysis Reference Manual for SAP2000, ETABS, SAFE, and CSiBridge, Computers and Structures Inc., 2016
[17] M. Masmoum, Buildings Pounding Mitigation Using Tying Device, MSc Thesis, King Abdulaziz University, 2019

Engineering, Technology & Applied Science Research Vol. 8, No. 6, 2018, 3561-3564
Khahro & Memon: Non Excusable Delays in Construction Industry: A Causal Study

Non Excusable Delays in Construction Industry: A Causal Study

Shabir Hussain Khahro
Department of Engineering Management, College of Engineering, Prince Sultan University, Riyadh, Saudi Arabia
shkhahro@psu.edu.sa

Zubair Ahmed Memon
Department of Engineering Management, College of Engineering, Prince Sultan University, Riyadh, Saudi Arabia
zamemon@psu.edu.sa

Abstract—Delays are one of the major problems the construction industry faces. Delays can lead to many negative effects, such as arbitration between owners and contractors, increased cost, loss of productivity and revenue, and contract termination. Various studies have been carried out to highlight the general causes of delays and to suggest possible remedial measures to minimize their effect on a project. This study aims to highlight the critical factors with specific reference to non-excusable delays (NEDs) only. It also suggests possible remedial measures to minimize the effects of contractor-oriented NEDs, a significant type of delay in the construction industry. A qualitative study was conducted for this research. Data were collected through a set of questionnaires distributed to numerous construction project stakeholders.
The Relative Importance Index (RII) was used to prioritize the factors. Results show that slow material mobilization, subcontractor unreliability and shortages of labor and materials are the most critical NED causes. This paper aims to provide prerequisite knowledge that helps practitioners make more informed decisions when managing NEDs.

Keywords—delays; non-excusable delay; construction projects; project failure; mitigation

I. Introduction

The construction industry plays an essential role in a country's socio-economic development [1, 2]. However, it faces a wide range of challenges, one of which is the frequent occurrence of construction delays. Delay is one of the most common problems in the construction industry [3]. With low profit margins and the involvement of many parties, construction projects carry an inherent risk of schedule slippages and subsequent monetary losses; achieving even slim profit margins requires a massive all-around effort to develop a schedule and control it efficiently [4]. It is not uncommon for a construction project to experience a delay. While contractors never want delays to happen, they do occur, and they may occur at any time on a project. A delay means loss of profit and/or the risk of facing hefty liquidated damages [5]. In 2016, the global average value of a construction delay dispute was reported to be a staggering US$46 million, a figure that has been climbing since 2010. Claims and disputes are the inevitable results of delays to a project [6]. The authors in [7] evaluated the records of more than 4000 projects and concluded that delays and cost overruns are common in construction projects and that the success rate of completing a project on time is poor. The authors in [8] concluded that 50% of construction delays are NEDs, i.e. delays for which the contractor is responsible. The authors in [9-11] observed the detrimental effect of such delays on a contractor's performance, particularly on the contractor's schedule.
A number of studies in the literature classify delays according to their nature and define various types of delays. The authors in [12-16] classify delays into three categories, as shown in Figure 1.

Fig. 1. Delay classification

In compensable excusable delays, the owner generally remains responsible, and the contractor may be granted an extension of time and the extra cost. Non-compensable excusable delays are caused by events such as earthquakes, snowfalls, heavy rains, tsunamis and wars; such delays can be controlled by neither client nor contractor, and the contractor will normally be granted the extension of time and money needed to complete the task. The third type, NEDs, is purely the contractor's fault: material-related delays, labor-related delays, equipment-related delays, financial issues, etc. In this case, the contractor normally has to face a financial penalty. The construction industry is one of the leading development sectors of any country. In Pakistan in 2017, it contributed 2.74% to development, 2.65% to GDP and 7.31% of the labor force [17]. In Pakistan, it has been observed that delay is a key reason for project failure, yet very little evidence is available from previous studies of the Pakistani construction industry on the causes of NEDs that influence contractor performance. Hence, this research attempts to investigate and evaluate the causes of NEDs during the construction stage of projects, with an emphasis on critical factors.

II. Research Methodology

A qualitative research methodology was employed, undertaken in three phases. In the first phase, the candidate factors were selected from the existing literature.
In the second phase, field data about these factors causing NEDs in Pakistani construction were collected and analyzed using the RII method:

RII = Σ_{i=1..5} (w_i × x_i) / (A × N)    (1)

where w_i is the weight given to the i-th response (i = 1, 2, 3, 4, 5), x_i is the frequency of the i-th response, A is the highest weight (5 in this study), and N is the number of respondents. The authors in [5, 11, 18] successfully used this method for the analysis of construction delays. In the third phase, a list of corrective actions for the critical factors was generated.

III. Results and Discussion

The questionnaire was divided into two segments: the first collected general information from the respondents, and the second concerned the NED matrix, which contains information about NEDs and the causes leading to them. The respondents had to rank the different NED factors that influence a contractor's performance in construction projects. In the NED matrix, a total of 42 different factors causing NEDs in the construction industry of Pakistan were listed, grouped into 14 different NED causal areas (Table I). A second questionnaire set was categorized into four major segments. Segment 1 contained general information and the company profile. Segments 2 and 3 contained technical questions, related to the respondent, regarding contractor performance and the evaluation of critical factors. In the last segment, possible remedial measures to deal with NEDs were evaluated. Table I ranks the causal areas with the causes of NEDs based on the respondents' opinions. It is observed that material-related delays, improper construction methods and inadequate supervision are the most significant NED causal areas, whereas shortage of equipment/labor/material, improper equipment, poor planning and shortage of labor are the significant NED causes with the highest RII scores.
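As an illustration, the RII computation in (1) can be sketched in Python. The response counts below are hypothetical, not the study's data; note that scores such as those in Table I appear to be reported on the 0-5 scale, i.e. RII multiplied by the highest weight A:

```python
# Sketch of the RII formula in (1): RII = sum(w_i * x_i) / (A * N).
# The response frequencies below are hypothetical, not the study's data.

def relative_importance_index(frequencies, highest_weight=5):
    """frequencies[i] is the number of respondents who gave weight i+1."""
    n = sum(frequencies)                                            # N: number of respondents
    weighted = sum((i + 1) * x for i, x in enumerate(frequencies))  # sum of w_i * x_i
    return weighted / (highest_weight * n)                          # divide by A * N

# 31 hypothetical respondents choosing weights 1..5 with counts 0, 1, 3, 7, 20:
rii = relative_importance_index([0, 1, 3, 7, 20])
print(round(rii, 3))      # 0.897 on the 0-1 scale
print(round(rii * 5, 3))  # 4.484 on the 0-5 scale
```

Factors are then ranked by descending RII, which is how the ranks in Tables I and II are obtained.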
Critical delays are those that delay the entire project completion date, while non-critical delays do not necessarily affect the completion date but do affect progress. In every project, delays are assessed by their effect on the project completion date, and the overall delay can be a combination of the small and big delays that occurred throughout the project. Therefore, critical delays are given more consideration than non-critical delays. Table II lists the core critical factors for NEDs. The last objective of this study is to point out appropriate corrective actions for the critical factors that affect the schedule performance of the contractor. Table III shows the suggested remedial measures.

Table I. Critical factors for each group of NED causes

Causal area | Cause of delays | RII | Rank | Avg. score
1. Material related delays | Poor planning | 4.467 | 1 | 4.4
 | Unavailability of resources | 4.322 | 2 |
 | Shortage of materials | 4.29 | 3 |
2. Labor related delays | Shortage of labor | 4.419 | 1 | 3.9
 | Strikes | 3.935 | 2 |
 | Low productivity | 3.37 | 3 |
3. Equipment related delays | Improper equipment | 4.338 | 1 | 4.2
 | Unskilled equipment operator | 4.225 | 2 |
 | Poor planning | 4.064 | 3 |
4. Financial related delays | Improper equipment | 4.5 | 1 | 4.3
 | Unskilled equipment operator | 4.209 | 2 |
 | Poor planning | 4.177 | 3 |
5. Improper planning related delays | Poor planning | 4.29 | 1 | 4.3
 | Slow mobilization/late delivery | 4.258 | 2 |
 | Defective work/rework | 4.225 | 3 |
6. Lack of control related delays | Defective work/rework | 4.354 | 1 | 4.3
 | Poor quality | 4.354 | 2 |
 | Poor planning | 4.209 | 3 |
7. Subcontractor related delays | Poor planning | 4.274 | 1 | 4.1
 | Subcontractor bankruptcy | 4.08 | 2 |
 | Shortage of labor | 4 | 3 |
8. Technical personnel shortages | Poor planning | 4.387 | 1 | 3.9
 | Poor qualification | 4.161 | 2 |
 | Shortage of personnel | 3.032 | 3 |
9. Poor coordination | Shortage of equipment/labor/material | 4.516 | 1 | 4.3
 | Poor planning | 4.145 | 2 |
 | Slow mobilization/late delivery | 4.129 | 3 |
10. Inadequate supervision | Defective work/rework | 4.354 | 1 | 4.3
 | Poor quality | 4.354 | 2 |
 | Poor monitoring and control | 4.322 | 3 |
11. Improper construction method | Wrong method statement | 4.387 | 1 | 4.3
 | Inappropriate practices/procedures | 4.338 | 2 |
 | Defective work/rework | 4.322 | 3 |
12. Poor communication | Poor planning | 4.37 | 1 | 4.3
 | Shortage of materials | 3.967 | 2 |
 | Damaged materials | 3.322 | 3 |
13. Improper scheduling | Poor planning | 4.161 | 1 | 4.0
 | Inappropriate practices/procedures | 4.096 | 2 |
 | Shortage of equipment | 3.79 | 3 |
14. Slow decision making | Lack of experience | 4.096 | 1 | 4.0
 | Poor planning | 4.032 | 2 |
 | Shortage of equipment | 3.79 | 3 |

IV. Conclusions and Suggestions

The objectives of this study were to identify the most important causes of NEDs and to suggest corrective measures for the NED critical factors that affect the performance of a construction contractor in Pakistan. Intensive government involvement is needed to prevent and mitigate issues that may delay projects. Results showed that financial problems, followed by equipment problems, lack of equipment, manpower shortages and insufficient communication between the main actors, are the main causes of NEDs.

Table II. NED critical factors

Rank | Critical factor
1 | Slow mobilization/late delivery
2 | Unreliable supplier/subcontractor
3 | Shortage of equipment/labor/materials
4 | Delay in manufacturing
5 | Delay in material selection
6 | Delay in importing materials/equipment
7 | Poor planning
8 | Low productivity
9 | Lack of experience
10 | Inappropriate practices/procedures
11 | Poor monitoring and control
12 | Low morale/motivation
13 | Shortage of personnel
14 | Too many responsibilities
15 | Defective work/rework
16 | Working in remote areas

Table III. Selected remedial measures

NED critical factor | Selected corrective action
Slow mobilization/late delivery | A penalty clause for delay in material selection and delivery would minimize the occurrence of late delivery.
Unreliable supplier/subcontractor | A fine clause would govern the reliability and performance of the subcontractor.
Shortage of equipment/labor/materials | A penalty clause stipulated by the contractor for shortage of materials/labor/equipment.
Delay in manufacturing | Engaging additional personnel will influence cost.
Delay in material selection | Engaging an appropriate resource will influence time and cost.
Delay in imported materials/equipment | Ideally, a contract clause for delivery may influence the delivery program.
Poor planning | Engaging additional experienced personnel would minimize the impact but may influence the cost.
Low productivity | Using work sampling data, managers will be able to make accurate decisions to control the factors that positively and adversely affect job productivity.
Lack of experience | Engaging an experienced planning engineer would influence the cost.
Inappropriate practices/procedures | Benchmarking and constantly improving the practices/procedures will minimize the impact.
Poor monitoring and control | Systematic monitoring and control, with attention to accuracy, short regular intervals, effective feedback and standard procedures, will minimize poor monitoring and control.
Low morale/motivation | Ideally, improving job satisfaction would influence morale/motivation.
Shortage of personnel | Proper personnel planning and provision will reduce the shortage.
Too many responsibilities | Sharing with different companies and checking manufacturing details from all industries.
Defective work/rework | Where works are defective, the contractor is entitled to provide corrective actions and improvements.
Working in remote areas | Barring late workers and morning inspection would minimize late arrival.

All respondents agreed that contractor performance can be readily judged and verified against the schedule. Contractors usually plan the works before they start, and the actual site performance is updated daily or weekly to check the work progress.
It is suggested that these remedial measures can save cost for the contractor, help bring projects to successful completion and positively impact the construction industry. The success or failure of a commercial construction project depends largely on whether the construction schedule is met. Delays in the construction schedule impact both owners and contractors negatively. The construction industry is flooded with fast-track projects nowadays, and there is constant pressure on the contractor to bid as low as possible, resulting in low profit margins; achieving even these slim margins requires a massive all-round effort to develop a schedule and control it efficiently. In addition to "you may delay, but time will not", Benjamin Franklin also said that "time is money". While it is doubtful that he was thinking about construction delays when he coined these phrases, his words are spot-on. Delays incurred during a construction project can have severe negative impacts on owners and contractors alike. Care must therefore be taken during the drafting and negotiation of construction contracts to ensure that the parties' financial interests are adequately protected in the event that delays result in late project completion.

References
[1] Y. C. Kog, "Project management and delay factors of public housing construction", Practice Periodical on Structural Design and Construction, Vol. 23, No. 1, 04017028, 2018
[2] T. H. Ali, S. H. Khahro, F. A. Memon, "Occupational accidents: a perspective of Pakistan construction industry", Mehran University Research Journal of Engineering and Technology, Vol. 33, No. 3, pp. 341-345, 2014
[3] D. Arditi, S. Nayak, A. Damci, "Effect of organizational culture on delay in construction", International Journal of Project Management, Vol. 35, No. 2, pp.
136-147, 2017
[4] PCO, Importance of Schedule Delay Analysis on Construction Projects – A Contractor's Perspective, available at: https://projectcontrolsonline.com/blogs/13-category1/715-importance-of-schedule-delay-analysison-construction-projects--a-contractors-perspective, 2017
[5] Z. A. Memon, "Remedial measure for delays at construction stage", Mehran University Research Journal of Engineering and Technology, Vol. 23, No. 1, pp. 9-20, 2004
[6] M. Lepage, "Types of schedule delays in construction projects", available at: https://www.planacademy.com/types-of-schedule-delays-inconstruction, 2017
[7] P. W. G. Morris, G. H. Hough, The Anatomy of Major Projects, Wiley, 1987
[8] M. Z. A. Majid, R. McCaffer, "Factors of non-excusable delays that influence contractors' performance", Journal of Management in Engineering, Vol. 14, No. 3, pp. 42-49, 1998
[9] S. A. H. Tumi, "Causes of delays in construction industry in Libya", International Conference on Economics and Administration, Bucharest, Romania, November 14-15, 2009
[10] M. E. A. El-Razek, H. A. Bassioni, M. A. Mobarak, "Causes of delay in building construction projects in Egypt", Journal of Construction Engineering and Management, Vol. 134, No. 11, pp. 831-840, 2008
[11] S. A. Assaf, M. Al-Khalil, M. Al-Hazmi, "Causes of delay in large building construction projects", International Journal of Project Management, Vol. 24, No. 4, pp. 349-357, 2006
[12] M. Sambasivan, Y. K. Soon, "Causes and effects of delays in Malaysian construction industry", International Journal of Project Management, Vol. 25, No. 5, pp. 517-526, 2007
[13] N. Hamzah, M. A. Khoiry, I. Arshad, N. M. Tawil, A. I. Che Ani, "Cause of construction delay theoretical framework", Procedia Engineering, Vol. 20, pp. 490-495, 2011
[14] G. Agyekum-Mensah, A. D.
Knight, "The professionals' perspective on the causes of project delay in the construction industry", Engineering, Construction and Architectural Management, Vol. 24, No. 5, pp. 828-841, 2017
[15] R. H. Ansah, S. Sorooshian, S. Bin Mustafa, "The 4Ps: a framework for evaluating construction projects delays", Journal of Engineering and Applied Sciences, Vol. 13, No. 5, pp. 1222-1227, 2018
[16] A. Adam, P. E. B. Josephson, G. Lindahl, "Aggregation of factors causing cost overruns and time delays in large public construction projects: trends and implications", Engineering, Construction and Architectural Management, Vol. 24, No. 3, pp. 393-406, 2017
[17] A. Yusufzai, "Pakistan's GDP growth highest in decade: Economic Survey", ProPakistani, available at: https://propakistani.pk/2017/05/25/pakistans-gdp-growth-highest-decade-economic-survey, 2017
[18] O. T. Ibironke, T. O. Oladinrin, O. Adeniyi, I. V. Eboreime, "Analysis of non-excusable delay factors influencing contractors' performance in Lagos State, Nigeria", Journal of Construction in Developing Countries, Vol. 18, No. 1, pp. 53-72, 2013

Engineering, Technology & Applied Science Research Vol. 6, No. 6, 2016, 1241-1244
Faridi Masouleh et al.: Optimization of ETL Process in Data Warehouse through a Combination of Parallelization and Shared Cache Memory

Optimization of ETL Process in Data Warehouse through a Combination of Parallelization and Shared Cache Memory

M. Faridi Masouleh, M. A. Afshar Kazemi, M. Alborzi, A.
Toloie Eshlaghy
Information Technology Management Department, Science and Research Branch, Islamic Azad University, Tehran, Iran
m.faridi@srbiau.ac.ir, m.afsharkazemi@yahoo.com, mahmood_alborzi@yahoo.com, toloie@gmail.com

Abstract—Extraction, Transformation and Loading (ETL) is one of the notable subjects in the optimization, management, improvement and acceleration of processes and operations in databases and data warehouses. The creation of ETL processes is potentially one of the greatest tasks of data warehousing, and their production is a time-consuming and complicated procedure. Without optimization of these processes, the implementation of data warehouse projects is costly, complicated and time-consuming. The present paper uses a combination of parallelization methods and shared cache memory in distributed systems based on a data warehouse. According to the conducted assessment, the proposed method exhibited a 7.1% speed improvement over the Kettle optimization instrument and 7.9% over the Talend instrument in terms of ETL process implementation time. Therefore, parallelization can notably improve the ETL process, ultimately allowing the management and integration of big data to be implemented simply and with acceptable speed.

Keywords—shared cache memory; ETL process; parallelization; ETL optimization

I. Introduction

Data warehouse applications utilize Extraction, Transformation and Loading (ETL) processes through tools that extract data from data sources, transform them to an acceptable format and load them into a data provider [1].
Such processes include a collection of instruments used for extracting, cleaning, customizing, remolding, merging and loading data from different remote databases into a data warehouse [2]. During ETL, the data of the required data providers (databases, text files, legacy systems and web pages) are extracted, transformed into compatible data within a definite framework, and placed into a data reservoir. Different specialties, such as commercial analysis, database design and programming, are essential for the implementation of an ETL process. Prior to ETL implementation, the data providers, their destination and the transformations needed should be recognized and determined. This requires an initial data gathering and modeling stage, followed by a more detailed one in the ETL design and implementation stage [3]. A variety of approaches have been discussed to optimize ETL. In [7], the authors determined an ETL process path for the optimization of implementation time; they improved the operations and tasks related to a process without using parallelization. In [1, 4-5], the authors proposed a theoretical framework which formally defined the scenario of ETL processes in the form of an undirected acyclic graph. In [6], the authors used a rule-based optimization method, which was complicated and demanded abundant coding. In [7], the authors presented a new solution to discover a standard conceptual model for implementing the extraction, transformation and loading operations, categorized into three phases: the first mapped terms and instructions, the second was based on conceptual structure, and the last modeling phase was based on UML concepts. In [8], the authors presented a new method based on stream control in ETL to optimize process speed; they succeeded in commercializing their method and provided a new idea for other researchers.
In [4], the authors used an intelligent grid-based method in physical and cyber environments to manage and improve the ETL process. They developed their study mainly on big data, motivated by the emergence of text-based cyber and physical systems, and finally achieved the integration of spatial and non-spatial data in a cyber environment. With regard to the weak and strong points of former research, the present paper presents a new combined method that uses parallelization techniques, with the simultaneous use of multiple cores, to process and manage different databases in scattered locations, together with a cache memory shared between the cores that conduct the extraction, transformation and loading of data from databases distributed in different locations to the main data warehouse located in a definite place.

II. Concepts of ETL

ETL is a process which should be performed continually in a system, in response to the operative data that an organization generates over time. What matters in establishing an intelligent business organization is the creation of a proper architecture and structure, so that the ETL process is conducted compatibly with the different operations in which it occurs; the structure chosen for ETL is therefore of great importance. The ETL process should be conducted in stages, since it is applied to large volumes of data and is usually accompanied by data integration. The noteworthy issue is that when the ETL process runs through these stages, the high volume of network traffic and database server processing may disrupt other intelligent business processes.
An ETL system has four main sections:
• Extraction
• Transformation
• Loading
• Metadata

A. Extraction Phase

The data should initially be extracted from the respective data providers. In this phase, the data may be deleted from the initial data providers or copied into the data warehouse without being removed. Old data that are no longer applicable in the organization's daily affairs, and whose maintenance serves merely to keep the system history, are deleted from the preliminary data providers and transferred to the data reservoir, so that the efficiency and performance of those data sources are kept at a desirable level. Data extracted from the initial data providers are usually placed in the staging space of the data warehouse and processed in the other ETL phases. This space is a relational database that serves as temporary memory space for data processing. The data extraction phase is usually conducted at the level of the data sources, especially when the respective data source is a database. The prevalent method in old systems for data extraction is the production of text files based on the data; newer systems apply ODBC, OLE DB and APIs for this purpose.

B. Transformation Phase

After the extraction of data, certain processes should be applied so that the data reach a proper and integrated format. This phase is performed as follows:
• Data validation: the compatibility and absence of contradiction between the new data extracted from the data providers and the information present in the data warehouse are examined.
• Data verification: do the fields have correct values? For example, in a field with on and off values, do all the data possess one of the two values?
• Data transformation: data originate from diverse data providers, so similar fields may have diverse values. For example, a two-valued field may be on and off in one data provider and 0 and 1 in others. All data entering the warehouse should be modified in this respect.
• Applying business regulations: in this phase, we consider whether the present data are compatible with organizational needs. For example, does the customer information include first and family names?
• Data integration: one system may keep the customer information while another keeps the sales information; the data present in both systems should be integrated. This is actually the most complicated phase of the ETL process. A part of this process can be implemented in the data extraction phase, as in old information systems where information is gathered from all data files and a text file is created based on them [7].

C. Loading Phase

In this phase, the data transformed to the respective standard form are placed into the data warehouse. The data are loaded periodically, not continually, due to their high volume. In other words, when data are transformed in a data provider or new information is added, the changes are not instantaneously transferred to the data warehouse; they are updated periodically, at regular intervals [7].

D. Metadata

Metadata includes information on the transmission and conversion of data, data warehouse performance, contradictions in data providers, the defined database diagrams, and the places in the data warehouse to which the initial data sources are mapped. The information in the metadata can be applied in cases such as automatic supervision, prediction of organizational trends and reuse of information [7].

III. Architecture and Analysis of the Recommended Scheme

Figure 1 demonstrates the architecture of the recommended scheme. All data belonging to the databases distributed in diverse locations enter the target operative space, and each takes a responsibility in the present processor system. The cores simultaneously receive and process the data and transfer them to the data warehouse in parallel.
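To make the four-phase flow described in Section II concrete, here is a minimal, self-contained Python sketch of extract, transform (verification and transformation rules) and load, with a small metadata record. The in-memory lists and the 'active' field are illustrative stand-ins, not the paper's actual implementation:

```python
# Minimal sketch of the ETL flow: extract into a staging area, verify and
# harmonize in the transformation phase, bulk-load into the warehouse, and
# keep metadata about the run. All structures here are illustrative.

staging = []      # staging area: temporary space for extracted rows
warehouse = []    # target data warehouse table
metadata = {"rows_loaded": 0, "rejected": 0}  # run statistics (metadata section)

def extract(sources):
    """Extraction phase: pull raw rows from every source into staging."""
    for source in sources:
        staging.extend(source)

def transform():
    """Transformation phase: verify field values, then harmonize encodings."""
    cleaned = []
    for row in staging:
        # Data verification: the 'active' field must hold a known value.
        if row.get("active") not in ("on", "off", 0, 1):
            metadata["rejected"] += 1
            continue
        # Data transformation: map heterogeneous encodings (on/off vs 1/0) to one format.
        row["active"] = 1 if row["active"] in ("on", 1) else 0
        cleaned.append(row)
    return cleaned

def load(rows):
    """Loading phase: periodic bulk load of transformed rows into the warehouse."""
    warehouse.extend(rows)
    metadata["rows_loaded"] += len(rows)

# Two sources with different encodings, one row with an invalid value:
extract([[{"id": 1, "active": "on"}],
         [{"id": 2, "active": 0}, {"id": 3, "active": "maybe"}]])
load(transform())
print(metadata)  # {'rows_loaded': 2, 'rejected': 1}
```

A real pipeline would run the load step on a schedule rather than immediately, matching the periodic loading described above.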
The present paper generally used the two combined strategies of parallelization and shared cache memory in order to optimize the ETL operations and manage the data in the databases.

Fig. 1. The recommended scheme

A. Shared Cache Memory

One of the remarkable issues in the ETL process is the challenge of using separate cache memories. If, in a distributed operative system, separate cache memories are used for input and output, the processor and main memory are obliged to transfer the caches on each operation, which matters in large and sensitive tasks that demand high-speed operation. Figure 2a demonstrates the use of cache memory in different instruments; this type of system requires transfer and duplication on each implementation. Figure 2b demonstrates another configuration of dependent cache memory. The present paper avoids the challenge of cache transfers between providers by applying a cache memory shared between the diverse providers, so it is not essential to transfer the cache memory on each run by different providers in each system, which finally leads to improvement of the speed, the ETL process and system operation. Figure 3 shows an aspect of the common cache memory. The parallelization of processes in this study is based on the processing cores existing in the operative environment: when a job enters the ETL process, if a processor and its cores are idle, the operation is selected and processed by an idle core. After the completion of processing, the core is placed back in the queue and is ready to receive the next process.
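The two combined strategies above can be sketched in a few lines of Python: partitions standing in for the distributed source databases are processed in parallel, while all workers consult one shared cache instead of per-worker caches. A thread pool stands in for the processor cores; since the paper's actual system was written in C#, the names and structure here are illustrative assumptions:

```python
# Sketch of parallelization plus a shared cache: idle workers pick up the
# next data partition (like free cores taking a job), and every worker
# memoizes transformation results in ONE cache, avoiding the per-worker
# cache duplication discussed above. Illustrative, not the paper's code.

from concurrent.futures import ThreadPoolExecutor
import threading

shared_cache = {}              # one cache visible to every worker
cache_lock = threading.Lock()  # guards concurrent cache access

def transform_value(v):
    """Stand-in for an expensive transformation, memoized in the shared cache."""
    with cache_lock:
        if v in shared_cache:          # cache hit: no duplicated work or transfer
            return shared_cache[v]
    result = v * 2                     # placeholder transformation work
    with cache_lock:
        shared_cache[v] = result
    return result

def run_parallel_etl(partitions, workers=4):
    """Each idle worker takes the next partition, like a free core taking a job."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda part: [transform_value(v) for v in part],
                             partitions))

# Three partitions standing in for three distributed source databases:
print(run_parallel_etl([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))
# [[2, 4, 6], [4, 6, 8], [6, 8, 10]]
```

Because the partitions overlap (values 2, 3 and 4 appear in more than one source), the shared cache lets later workers reuse results computed by earlier ones, which is the effect the paper attributes to removing per-instrument caches.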
It is to be mentioned that the processors and their active cores are managed in queues, which control and process the operations.

Fig. 2. Utilization of dependent cache memory (a) in diverse instruments (b) in different instruments
Fig. 3. Utilization of cache memory shared among different instruments

IV. Results Evaluation
The C# language was used for the optimization of the ETL operation in distributed databases in this paper (Table I). The comparisons conducted in terms of speed and optimization level of the ETL process between this instrument and others, including Kettle [9] and Talend [10], are described in the following section. It is to be mentioned that the following query was used to assess the results and extract records:

select d_year, c_nation, sum(lo_revenue - lo_supplycost) as profit
from date, customer, supplier, part, lineorder
where lo_custkey = c_custkey
  and lo_suppkey = s_suppkey
  and lo_partkey = p_partkey
  and lo_orderdate = d_datekey
  and c_region = 'america'
  and s_region = 'america'
  and (p_mfgr = 'mfgr#1' or p_mfgr = 'mfgr#2')
group by d_year, c_nation
order by d_year, c_nation

Five different databases and different numbers of samples were employed. Further, shared cache memory, different numbers of cores, and serial and parallel implementation were also investigated. Results are depicted in Figures 4-5. The difference between the serial implementation and the parallel implementation of the ETL process with shared cache memory is very significant: the latter functions 263 times better than the average condition (Figure 6). Table II gives a brief account of different implementation times in ms compared to the recommended method. Figure 7 demonstrates the comparison between the recommended scheme and other ETL optimization instruments. As shown, the proposed method exhibits about 7.1% speed improvement compared to the Kettle optimization instrument and 7.9% compared to the Talend instrument. Table I.
Output and results

Volume (MByte) | Samples | Serial running time | Parallel running time
0.143051 | 100 | 5,133 | 14
0.715255 | 500 | 28,111 | 27
1.430511 | 1,000 | 54,834 | 39
2.861022 | 2,000 | 106,960 | 52
7.152557 | 5,000 | 208,638 | 65
14.30511 | 10,000 | 406,974 | 77
28.61022 | 20,000 | 793,853 | 90
42.91534 | 30,000 | 1,548,508 | 103

Fig. 4. Implementation time
Fig. 5. Serial implementation time
Fig. 6. Comparison between parallel implementation time with shared memory and serial implementation (ms)
Fig. 7. Comparison between the recommended method and other ETL optimization instruments

Table II. Comparison between implementation times of the recommended method and ETL optimization instruments

Volume (GB) | Recommended method | Kettle tool | Talend tool
1 | 927 | 2,500 | 2,500
2 | 1,900 | 5,100 | 5,000
3 | 3,150 | 7,000 | 6,700
4 | 6,300 | 12,500 | 12,000
5 | 12,200 | 15,000 | 13,200
6 | 13,000 | 16,000 | 14,000
7 | 15,500 | 20,000 | 17,000
8 | 20,200 | 24,000 | 22,000

V. Conclusion
A variety of methods have been proposed for ETL optimization in distributed and big data banks, integrating various instruments. With regard to the importance of the issue and the challenges in this area, including reliability and speed, the present paper introduced a new method that combines parallelization and shared cache memory. The proposed scheme shows almost 7.1% speed improvement compared to the Kettle optimization instrument and 7.9% compared to the Talend instrument. Future work may focus on the utilization of real parallelization hardware instead of virtual hardware, and on the optimization of the ETL process in a cloud environment.

References
[1] A. Simitsis, P. Vassiliadis, T. Sellis, "Optimizing ETL processes in data warehouses", IEEE 21st International Conference on Data Engineering (ICDE'05), pp. 2-4, 2005
[2] J. A. Sharp, Data Flow Computing: Theory and Practice, Intellect Books, 1992.
[3] M. Bala, O. Boussaid, Z. Alimazighi, "Big-ETL: extracting-transforming-loading approach for big data", International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA), pp. 1-4, 2015
[4] A. Simitsis, P. Vassiliadis, T. Sellis, "Optimizing ETL processes in data warehouses", 21st International Conference on Data Engineering (ICDE 2005), pp. 564-575, 2005
[5] A. Simitsis, K. Wilkinson, U. Dayal, M. Castellanos, "Optimizing ETL workflows for fault-tolerance", 26th International Conference on Data Engineering, pp. 385-396, 2010
[6] A. Behrend, "Optimized incremental ETL jobs for maintaining data warehouses", 14th International Database Engineering & Applications Symposium, Montreal, Quebec, Canada, August 16-18, pp. 216-224, 2010
[7] S. H. A. El-Sappagh, A. M. A. Hendawi, A. H. El Bastawissy, "A proposed model for data warehouse ETL processes", Journal of King Saud University - Computer and Information Sciences, Vol. 23, No. 2, pp. 91-104, 2011
[8] A. Longo, S. Giacovelli, M. Bochicchio, "Fact-centered ETL: a proposal for speeding business analytics up", Procedia Technology, Vol. 16, pp. 471-480, 2014
[9] P. Kettle, "Pentaho Kettle Project", Kettle Project, 2014
[10] X. Liu, Optimizing ETL Dataflow Using Shared Caching and Parallelization Methods, arXiv, CoRR abs/1409.1639, 2014

Engineering, Technology & Applied Science Research Vol. 9, No. 5, 2019, 4623-4626 | www.etasr.com
Buller et al.: Influence of Coarse Aggregate Gradation on the Mechanical Properties of Concrete, Part II: No-Fines vs. Ordinary Concrete

Influence of Coarse Aggregate Gradation on the Mechanical Properties of Concrete, Part II: No-Fines vs.
Ordinary Concrete

Abdul Salam Buller, Department of Civil Engineering, Quaid-e-Awam University of Engineering, Science & Technology, Larkana Campus, Sindh, Pakistan, buller.salam@quest.edu.pk
Zaheer Ahmed Tunio, Department of Civil Engineering, Quaid-e-Awam University of Engineering, Science & Technology, Nawabshah, Sindh, Pakistan, zaheerahmedtunio@gmail.com
Fahad-ul-Rehman Abro, Department of Civil Engineering, Mehran University of Engineering and Technology, Jamshoro, Sindh, Pakistan, fahad.abro@gmail.com
Tariq Ali, Department of Civil Engineering, Quaid-e-Awam University of Engineering, Science & Technology, Nawabshah, Sindh, Pakistan, tariqdehraj@gmail.com
Karam Ali Jamali, Department of Civil Engineering, Quaid-e-Awam University of Engineering, Science & Technology, Nawabshah, Sindh, Pakistan, tespublic@yahoo.com

Abstract—This study aims to investigate the effect of different gradations of coarse aggregates on the mechanical properties of no-fines concrete (NFC). NFC reduces a structure's self-weight, thus minimizing cost. The effects of coarse aggregate gradation on mechanical properties such as compressive strength, split tensile strength, and flexural strength were studied and compared at the end of 28-day water curing. A fixed cement-to-aggregate proportion of 1:6 with a 0.5 water/cement (w/c) ratio was adopted. Four gradations of coarse aggregates ranging between specific maximum and minimum sizes were used, namely 5mm-4mm, 10mm-4mm, 20mm-4mm, and 20mm-15mm. The results of this study reveal the substantial effect of the gradation of coarse aggregates on the strength properties (compressive and tensile strength) of NFC.

Keywords—no-fines; aggregate gradation; cement-to-aggregate proportion; compressive strength; texture

I. Introduction
Concrete without fines is a type of lightweight porous concrete obtained by removing the sand from the ordinary concrete mix. It is a two-phase material: rough aggregates surrounded by a thin layer of cement paste, without fine aggregates.
NFC is a type of lightweight concrete produced from only cement, water, and coarse aggregates. The coarse aggregates are covered with cement paste and linked point-to-point, with the thin cement paste holding the aggregates in a matrix and augmenting concrete strength. It is recognized that self-weight constitutes a very big percentage of the complete structural load in concrete buildings, so there are significant benefits in decreasing the concrete unit weight. Structural lightweight aggregate concrete (LWAC) of appropriate resistance is now prevalent in use. In frame structures, the partition walls are non-load-bearing, and the construction of these non-structural elements with low-strength lightweight concrete results in a subsequent reduction in the overall weight of the structure. NFC has many applications [1-13], described in Part I [14]. Civil engineers have been challenged to transform waste into helpful building materials [13, 15], and large quantities of raw materials and waste, in particular demolition waste, are used as recycled aggregates for the production of no-fines concrete, making it more economical compared to standard concrete [16, 17]. NFC is an environmentally friendly paving material because it has many more voids in its body than normal concrete, allowing rainwater to run off through it [11]. NFC's cement/aggregate ratio usually varies from 1:6 to 1:10, the aggregate used is usually from 20mm to 10mm [14, 16], and the proportion of water to cement ranges from 0.28 to 0.40 [18]. The coarse aggregates normally used in no-fines concrete are single-sized. This led to the concept of carrying out an experimental study to explore the impact of various gradations of coarse aggregates used in no-fines concrete in the first part [14]. In this part, the impact of coarse aggregate size has been researched regarding mechanical characteristics.
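The typical NFC proportions quoted above (cement/aggregate 1:6 to 1:10, w/c 0.28-0.40 [14, 16, 18]) lend themselves to a trivial range check. A minimal sketch, with a function name of our own choosing:

```python
# Quick range check based on the typical NFC proportions quoted above
# (cement/aggregate 1:6 to 1:10, w/c 0.28-0.40). Purely illustrative;
# the function name is ours, not from the paper.
def within_typical_nfc_ranges(agg_per_cement, wc):
    return 6 <= agg_per_cement <= 10 and 0.28 <= wc <= 0.40

ok = within_typical_nfc_ranges(8, 0.35)    # inside both quoted ranges
edge = within_typical_nfc_ranges(6, 0.50)  # the 0.5 w/c used in this study
                                           # falls outside the quoted range
```

Note that the 0.5 w/c ratio adopted in this study sits above the 0.28-0.40 range quoted from the literature, which is worth keeping in mind when comparing results.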
Compressive and tensile strength tests were performed on samples cast from four distinct lots of coarse aggregates. The results of NFC were compared with those of conventional concrete.

Corresponding author: Abdul Salam Buller

II. Experimental Procedure
The main aim of this study is to investigate the unit weight, compressive strength, splitting tensile strength, and flexural strength of no-fines and ordinary concrete. A cement-aggregate (C-A) proportion of 1:6 for NFC and a 1:2:4 mix for ordinary concrete were adopted. Four different coarse aggregate gradations, namely 5mm-4mm, 10mm-4mm, 20mm-4mm, and 20mm-15mm, were used. NFC and ordinary concrete were cast with a 0.5 w/c ratio. Ordinary Portland cement (OPC) per the ASTM C150 standard was used to manufacture the specimens of both concretes. Crushed stones obtained from the local market were used as coarse aggregates. They were washed, air dried to SSD, and sieved accordingly to achieve each specified aggregate gradation. Potable water was used for casting and curing of all specimens. All the ingredients of each respective mix were batched following the proper mixing procedure in an electrically operated mixer and were cast accordingly. A total of 20 cube specimens for NFC and 20 for ordinary concrete (NC) of standard size 150mm×150mm×150mm, 20 cylinders for NFC and NC of standard size 150mm×300mm, and 20 prisms for NFC and NC of standard size 100mm×100mm×500mm were cast. The specimens were demoulded 24 hours after casting and were kept in a curing tank for 28 days. Before testing the specimens for compressive, splitting tensile, and flexural strength, all the specimens were weighed to determine their unit weight.
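The by-mass proportions stated above (1:2:4 cement:sand:coarse for NC, 1:6 cement:aggregate for NFC, both at w/c 0.5) translate directly into batch quantities. An illustrative sketch; the 50 kg cement basis is a hypothetical example, not the paper's batch size:

```python
# Illustrative by-mass batching sketch based on the proportions stated above.
# The 50 kg cement basis is hypothetical, not the paper's batch size.

def batch_nc(cement_kg, wc=0.5):
    """1:2:4 ordinary-concrete quantities for a given cement mass."""
    return {"cement": cement_kg, "sand": 2 * cement_kg,
            "coarse": 4 * cement_kg, "water": wc * cement_kg}

def batch_nfc(cement_kg, wc=0.5):
    """1:6 no-fines-concrete quantities (no sand) for a given cement mass."""
    return {"cement": cement_kg, "coarse": 6 * cement_kg,
            "water": wc * cement_kg}

nc = batch_nc(50)    # 50 kg cement, 100 kg sand, 200 kg coarse, 25 kg water
nfc = batch_nfc(50)  # 50 kg cement, 300 kg coarse, 25 kg water
```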
To determine compressive, splitting tensile, and flexural strength, the cubes, cylinders, and prisms were tested in a universal testing machine (UTM) (see Part I [14] for more UTM testing pictures). The ultimate loads at the failure of the specimens were recorded. Five cubes, cylinders, and prisms for NFC and NC were cast from each batch. The ultimate compressive, splitting tensile, and flexural strength and the unit weight of each of the five specimens were measured, and the average was used as the final value.

Fig. 1. Specimens under curing

III. Results and Discussion
A. Compressive Strength of NFC
The results of average compressive strength are presented in Table I.

Table I. Average compressive strength and unit weight of NFC
S.No. | Aggregate gradation (mm) | C-A proportion | w/c ratio | Compressive strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:6 | 0.5 | 4.9 | 1687
2 | 10-4 | 1:6 | 0.5 | 8.2 | 1843
3 | 20-4 | 1:6 | 0.5 | 9.8 | 1891
4 | 20-15 | 1:6 | 0.5 | 6.4 | 1735

B. Splitting Tensile Strength of NFC
The results of average splitting tensile strength are presented in Table II.

Table II. Average splitting tensile strength and unit weight of NFC
S.No. | Aggregate gradation (mm) | C-A proportion | w/c ratio | Splitting tensile strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:6 | 0.5 | 0.6 | 1687
2 | 10-4 | 1:6 | 0.5 | 1.3 | 1843
3 | 20-4 | 1:6 | 0.5 | 1.7 | 1891
4 | 20-15 | 1:6 | 0.5 | 1.1 | 1735

C. Flexural Strength of NFC
The results of average flexural strength are presented in Table III.

Table III. Average flexural strength and unit weight of NFC
S.No. | Aggregate gradation (mm) | C-A proportion | w/c ratio | Flexural strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:6 | 0.5 | 1.2 | 1687
2 | 10-4 | 1:6 | 0.5 | 2.4 | 1843
3 | 20-4 | 1:6 | 0.5 | 3.8 | 1891
4 | 20-15 | 1:6 | 0.5 | 2.1 | 1735

Fig. 2. View of a prism sample before and after testing in the UTM

D. Compressive Strength of NC
The results of average compressive strength are presented in Table IV.

Table IV. Average compressive strength and unit weight of ordinary concrete
S.No. | Aggregate gradation (mm) | Mix proportion | w/c ratio | Compressive strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:2:4 | 0.5 | 21.2 | 2339
2 | 10-4 | 1:2:4 | 0.5 | 29 | 2366
3 | 20-4 | 1:2:4 | 0.5 | 30.4 | 2445

E. Splitting Tensile Strength of NC
The results of average splitting tensile strength are presented in Table V.

Table V. Average splitting tensile strength and unit weight of ordinary concrete
S.No. | Aggregate gradation (mm) | Mix proportion | w/c ratio | Splitting tensile strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:2:4 | 0.5 | 2.2 | 2339
2 | 10-4 | 1:2:4 | 0.5 | 2.7 | 2366
3 | 20-4 | 1:2:4 | 0.5 | 3.4 | 2445

F. Flexural Strength of NC
The results of average flexural strength are presented in Table VI.

Table VI. Average flexural strength and unit weight of ordinary concrete
S.No. | Aggregate gradation (mm) | Mix proportion | w/c ratio | Flexural strength (MPa) | Unit weight (kg/m3)
1 | 5-4 | 1:2:4 | 0.5 | 3.4 | 2339
2 | 10-4 | 1:2:4 | 0.5 | 3.5 | 2366
3 | 20-4 | 1:2:4 | 0.5 | 3.8 | 2445

Fig. 3. Compressive, splitting tensile, and flexural strength of NFC vs. aggregate gradation and C-A proportion at 0.5 w/c ratio
Fig. 4. Compressive, splitting tensile, and flexural strength of NC vs. aggregate gradation and C-A proportion at 0.5 w/c ratio

The results reveal the pronounced effect of aggregate gradation and C-A proportion on the compressive strength. Figures 3-4 depict the effect of various coarse aggregate gradations on the compressive, splitting tensile, and flexural strength of NFC and NC. The significant effect of aggregate gradation is self-evident from the tables' values and figures. The NFC manufactured with 20mm-4mm gradation exhibited the highest, and the NFC with 5mm-4mm gradation yielded the lowest, compressive, splitting tensile, and flexural strength of the respective group of NFC having the same 1:6 C-A proportion and 0.5 w/c ratio.
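The spread between these extremes can be checked directly from the tabulated strengths. An illustrative check (the helper name `pct_diff` is ours; the paper appears to truncate percentages rather than round, so `int()` is used):

```python
# Illustrative check of the spread between maximum and minimum strengths,
# computed from the strength tables above (values in MPa).
nfc = {"compressive": (9.8, 4.9), "splitting": (1.7, 0.6), "flexural": (3.8, 1.2)}
nc = {"compressive": (30.4, 21.2), "splitting": (3.4, 2.2), "flexural": (3.8, 3.4)}

def pct_diff(max_v, min_v):
    """Difference between maximum and minimum as a percentage of the maximum."""
    return int((max_v - min_v) / max_v * 100)

nfc_diffs = {k: pct_diff(*v) for k, v in nfc.items()}  # 50, 64, 68
nc_diffs = {k: pct_diff(*v) for k, v in nc.items()}    # 30, 35, 10
```

These recomputed values match the percentage differences quoted in the paper's conclusions.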
On the other hand, the 20mm-4mm gradation gave the highest compressive, splitting tensile, and flexural strength of NC at 1:2:4 mix proportion and 0.5 w/c ratio. This shows the significance of aggregate gradation and C-A proportion for the compressive strength, splitting tensile strength, and flexural strength of both NFC and NC.

Fig. 5. Comparison between unit weight of NFC and NC

G. Unit Weight
Table I also shows the values of average unit weight of NFC produced with different aggregate gradations, 1:6 C-A proportion, and 0.5 w/c ratio. The 20mm-4mm and 5mm-4mm coarse aggregate gradations had unit weights of 1891kg/m3 and 1687kg/m3, respectively. The difference between the maximum and minimum values of unit weight is calculated to be only 12.1%, while the difference percentage between NFC and NC is 33%. The unit weight of NFC is slightly affected by variation in aggregate gradation, C-A proportion, and w/c ratio, but without any significant trend regarding those parameters. This may be observed in Figure 5, where the unit weight values are compared graphically.

IV. Conclusion
• Aggregate gradation significantly affects the compressive, splitting tensile, and flexural strength of NFC.
• A difference of 50%, 64%, and 68% was observed between the maximum and minimum compressive, splitting tensile, and flexural strength, respectively, of NFC due to variation in aggregate gradation and C-A proportion.
• NFC produced with 20mm-4mm gradation, 1:6 C-A proportion, and 0.5 w/c ratio exhibited the highest compressive strength of 9.8MPa.
• The minimum compressive, splitting tensile, and flexural strength was found to be 4.9MPa, 0.6MPa, and 1.2MPa in the case of NFC with 5mm-4mm aggregate gradation and 1:6 C-A proportion at 0.5 w/c ratio.
• A difference of 30%, 35%, and 10% was observed between the maximum and minimum compressive, splitting tensile, and flexural strength, respectively, of NC due to variation in aggregate gradations.
• NC produced with 20mm-4mm gradation and 1:2:4 mix proportion at 0.5 w/c ratio exhibited the highest compressive, splitting tensile, and flexural strength of 30.4MPa, 3.4MPa, and 3.8MPa, respectively.
• The minimum compressive, splitting tensile, and flexural strength of NC was found to be 21.2MPa, 2.2MPa, and 3.4MPa when using 5mm-4mm aggregate gradation and 1:2:4 mix proportion at 0.5 w/c ratio.
• The unit weight of NFC was found to be only marginally affected by the variation in aggregate gradation, C-A proportion, and w/c ratio.
• The maximum difference between the minimum and maximum unit weight of NFC (1687kg/m3 and 1891kg/m3) was found to be only about 10%.
• The unit weight of NC was likewise found to be only marginally affected by the variation in aggregate gradation, C-A proportion, and w/c ratio.
• In NC, the maximum difference between the minimum and maximum unit weight (2339kg/m3 and 2445kg/m3) was found to be only about 4%.

Based on the results of the experimental study conducted and the discussions and conclusions made above, it may be concluded that while producing NFC, the aggregate gradation, C-A proportion, and w/c ratio should be chosen appropriately, particularly when compressive strength is the major parameter of consideration. However, to a limited extent, unit weight and apparent texture also depend upon these factors.

Acknowledgment
The authors are grateful to the Quaid-e-Awam University of Engineering, Science and Technology, Nawabshah for providing the research facilities.

References
[1] M. R. Lomte, "A review on study and analysis of strength, permeability and void ratio of pervious concrete", International Journal for Research in Applied Science & Engineering Technology, Vol. 6, No. 1, pp. 1717-1720, 2018
[2] S. Ali, S.
Kacha, "Correlation among properties of no fines concrete - a review", National Conference on Applications of Nano Technology in Civil Engineering, Vadodara, India, February 2017
[3] G. Yuvaraj, K. Sundaravadivelu, P. Vembuli, R. Shankaranarayanan, E. Ramya, "A study on compressive strength of pervious concrete by varying the size of aggregate", International Journal of Engineering Science and Computing, Vol. 7, No. 4, pp. 10149-10152, 2017
[4] G. Divya, L. Reena, "An experimental study on behaviour of pervious concrete by using addition of admixtures", International Research Journal of Engineering and Technology, Vol. 4, No. 4, pp. 2366-2370, 2017
[5] U. M. Muthaiyan, "Studies on the properties of pervious fly ash-cement concrete as a pavement material", Cogent Engineering, Vol. 4, No. 1, Article ID 1318802, pp. 1-17, 2017
[6] K. B. Thombre, A. B. More, S. R. Bhagat, "Investigation of strength and workability in no-fines concrete", International Journal of Engineering Research & Technology, Vol. 5, No. 9, pp. 390-393, 2016
[7] K. R. Balsaraf, D. R. Kurhade, K. A. Varpe, N. S. Lohote, D. S. Mehetre, "A review paper on no fines concrete", International Journal of Engineering Sciences & Management, Vol. 7, No. 1, pp. 293-303, 2017
[8] P. P. Pragnya, K. B. Parikh, A. R. Darji, "A review on experimental investigation of pervious concrete using alternate materials", Journal of Emerging Technologies and Innovative Research, Vol. 4, No. 3, pp. 68-70, 2017
[9] C. H. S. Priyanka, "Experimental analysis on high strength pervious concrete", International Journal of Advances in Mechanical and Civil Engineering, Vol. 4, No. 2, pp. 9-13, 2017
[10] W. T. Kuo, C. C. Liu, D. S. Su, "Use of washed municipal solid waste incinerator bottom ash in pervious concrete", Cement and Concrete Composites, Vol. 37, pp. 328-335, 2013
[11] B. Alam, M. Javed, Q. Ali, N. Ahmad, M.
Ibrahim, "Mechanical properties of no-fines bloated slate aggregate concrete for construction application, experimental study", International Journal of Civil and Structural Engineering, Vol. 3, No. 2, 2012
[12] A. Cheng, H. M. Hsu, S. J. Chao, K. L. Lin, "Experimental study on properties of pervious concrete made with recycled aggregate", International Journal of Pavement Research and Technology, Vol. 4, No. 2, pp. 104-110, 2011
[13] A. K. Jain, J. S. Chouhan, "Effect of shape of aggregate on compressive strength and permeability properties of pervious concrete", International Journal of Advanced Engineering Research and Studies, Vol. 1, No. 1, pp. 120-126, 2011
[14] Z. A. Tunio, T. Ali, A. S. Buller, F. U. R. Abro, M. A. Abbasi, "Influence of coarse aggregate gradation on the mechanical properties of concrete, Part I: no-fines concrete", Engineering, Technology & Applied Science Research, Vol. 9, No. 5, pp. 4612-4615, 2019
[15] M. Kovac, A. Sicakova, "Pervious concrete as a sustainable solution for pavements in urban areas", Environmental Engineering 10th International Conference, Vilnius, Lithuania, April 27-28, 2017
[16] M. A. Memon, M. A. Bhutto, N. A. Lakho, I. A. Halepoto, A. N. Memon, "Effects of uncrushed aggregate on the mechanical properties of no-fines concrete", Engineering, Technology & Applied Science Research, Vol. 8, No. 3, pp. 2882-2886, 2018
[17] I. Barisic, M. Galic, I. N. Grubesa, "Pervious concrete mix optimization for sustainable pavement solution", IOP Conference Series: Earth and Environmental Science, Vol. 90, Article ID 012091, 2017
[18] A. Alam, S. Naz, "Experimental study on properties of no-fine concrete", International Journal of Informative & Futuristic Research, Vol. 2, No. 10, pp. 3687-3694, 2015

Engineering, Technology & Applied Science Research Vol. 10, No. 2, 2020, 5448-5451 | www.etasr.com
Bheel et al.: Use of Marble Powder and Tile Powder as Cementitious Materials in Concrete
Use of Marble Powder and Tile Powder as Cementitious Materials in Concrete

Naraindas Bheel, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, naraindas04@gmail.com
Karam Ali Kalhoro, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, kalhorokaramali@gmail.com
Tarique Aziz Memon, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, memon1972@gmail.com
Zain-ul-Zaheer Lashari, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, zain.lashari@yahoo.com
Mushtaq Ahmed Soomro, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, soomromushtaque0@gmail.com
Uzair Ahmed Memon, Department of Civil Technology, H.C.S.T. Hyderabad, Sindh, Pakistan, uzairsahib60@gmail.com

Abstract—The use of agricultural and industrial waste products as raw materials in the construction industry is investigated extensively. These products are inexpensive and help environmental sustainability, as environmental pollution is thus reduced. This study focused on investigating the fresh, physical, and hardened properties of concrete blended with marble powder (MP) and tile powder (TP) in several proportions: 0%, 5% (2.5%MP + 2.5%TP), 10% (5%MP + 5%TP), 15% (7.5%MP + 7.5%TP), and 20% (10%MP + 10%TP) by weight. A total of 60 concrete cylinders were cast with a 0.45 water/cement ratio and a 1:1.96:2.14 mix ratio, and were cured for 7 and 28 days. These cylinders were used for checking the compressive and splitting tensile strength of the concrete. The experimental results showed that compressive and splitting tensile strengths were increased by 8.90% and 8.30%, respectively, for the 2.5%MP + 2.5%TP sample after 28 days.

Keywords—marble powder; tile powder; utilizing waste products; reducing environmental pollution; increasing strength of concrete

I. Introduction
Cement concrete is a widely used material in constructions [1].
Concrete consists of paste and aggregates. The paste consists of water and cement, while the aggregates consist of sand and coarse aggregates. Cement is the most important component: in contact with water it forms a paste, binding the aggregates together into a solid mass [2, 3]. Cement production emits large amounts of CO2, which has adverse effects on the environment [4]. The production of 1 ton of cement emits 1-1.25 tons of CO2, contributing significantly to global warming. Thus, for environmental protection, immediate action is required to minimize the production and use of cement [5-9]. Many methods have been proposed for the use of industrial or agricultural waste as partial cement substitutes [10]. Industrial waste includes marble powder, blast furnace slag, tile powder [11], fly ash, and silica fume, while agricultural waste includes rice husk ash [12, 13], corn cob ash [14], wheat straw ash, ground coal bottom ash [15, 16], coconut waste, and bagasse ash, which are used to partially replace the cement in concrete [17, 18]. The use of these wastes as substitutes for cement not only reduces the cost of concrete but also minimizes the negative environmental impacts associated with their disposal and the release of CO2 during cement production [19, 20]. Marble powder is a by-product of the marble industry. Sludge or wet powder is formed during the polishing, finishing, and cutting of marble stone. Fine marble powder, left after processing, polishing, and cutting of marble, is poured into landfills, catchment areas, rivers, dead wells, and seasonal rivers, negatively affecting the soil, reducing soil fertility, and subsequently annual crop yield [21-24]. Marble waste mainly consists of boulders, which are used as aggregates, and fine powder, which is dumped. Waste marble powder utilization could reduce environmental degradation and CO2 emissions from cement production [25, 26]. About 100 million tons of tiles are produced annually.
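The 1-1.25 tons of CO2 per ton of cement figure quoted above lets us roughly estimate the emissions avoided by partial cement replacement. An illustrative sketch (the 1000 t cement demand and the function name are hypothetical, not from the paper):

```python
# Illustrative estimate of CO2 avoided when a fraction of cement is replaced
# by MP + TP waste, using the 1-1.25 t CO2 per t cement figure stated above.
def co2_avoided(cement_t, replacement=0.20, factor=(1.0, 1.25)):
    """Return (low, high) tonnes of CO2 avoided for a replacement fraction."""
    replaced = cement_t * replacement
    return replaced * factor[0], replaced * factor[1]

low, high = co2_avoided(1000)  # 20% replacement of 1000 t of cement demand
# 200 t of cement replaced -> roughly 200 to 250 t of CO2 avoided
```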
About 15-30% of the total tile production is converted into waste without processing. Tile powder utilization has many advantages, such as energy saving, and cost and environmental risk reduction [29]. Tile waste could be utilized in concrete to enhance some of its properties, such as strength. The construction industry could consume waste tile powder, helping to solve this environmental problem [27, 28]. Many studies have been conducted on the use of tile by-products in concrete to increase its effectiveness [30, 31]. The current study investigated the properties of fresh, physical, and hardened concrete blended with several percentages of marble powder (MP) and tile powder (TP) as partial substitutes of cement in concrete.

II. Research Methodology
This study's purpose was to investigate and evaluate the properties of fresh, physical, and hardened concrete, using 0%MP + 0%TP, 2.5%MP + 2.5%TP, 5%MP + 5%TP, 7.5%MP + 7.5%TP, and 10%MP + 10%TP as partial substitutes of cement in concrete. Sixty concrete cylinders were cast with a mix ratio of 1:1.96:2.14 at a 0.45 water/cement ratio (w/c). After casting, all specimens were kept in a curing tank, and they were tested after 7 and 28 days in a universal testing machine (UTM). Standard cylinders, 4in in diameter and 8in in height, were used for casting the specimens under the ASTM C192 code procedure. These concrete cylinders were used to obtain compressive and indirect tensile strengths. Moreover, concrete samples were tested for density and water absorption after 28 days. Three concrete samples were cast for each ratio, and the final result was taken as their mean. This study was conducted in the laboratory of concrete technology in H.C.S.T. Hyderabad, Sindh, Pakistan.

Corresponding author: Naraindas Bheel

Table I.
Concrete Mixes

III. Materials Used
A. Cement
The cement used was ordinary Portland cement, locally available in the market of Hyderabad, Sindh, Pakistan. The experimental test results of the cement are shown in Table II.

Table II. Physical properties of cement

B. Fine and Coarse Aggregates
The aggregates used are locally available in the region of Hyderabad. Hill sand passed through #4 sieves was used as fine aggregate (FA), and 20mm crushed stone was used as coarse aggregate (CA). The laboratory test results for the properties of the aggregates are shown in Table III.

Table III. Properties of aggregates
S.No | Property | FA | CA
01 | Fineness modulus | 2.80 | --
02 | Water absorption | 1.10% | 0.75%
03 | Specific gravity | 2.61 | 2.65
04 | Bulk density | 128lb/ft3 | 106lb/ft3

C. Marble Powder (MP)
MP was collected from the region of Hyderabad. After its collection, it was sieved through #300 sieves so that it could be utilized as a partial substitute of cement in the concrete mix.

D. Tile Powder (TP)
Tile powder was collected from the region of Hyderabad and sieved through #300 sieves in order to be used as a partial cement replacement in concrete mixes [11].

IV. Results and Discussions
A. Workability of Fresh Concrete
Fresh concrete was measured for workability in terms of slump loss. As shown in Figure 1, the slump value improves as marble powder and tile powder increase, as in [11]. It was observed that the water demand of the concrete mix declined as the amount of MP and TP increased. The slump value increased to 3in for the 10%MP + 10%TP mix, while the minimum recorded value was 1.6in for the control sample.

Fig. 1. Slump test

B. Density of Concrete
The concrete specimens were used to analyze the density of the hardened concrete. Figure 2 indicates that the density of the conventional concrete is greater than that of the mixes produced with various proportions of MP and TP.
The density value of the control mix was 144.90lb/ft3, while a minimum density value of 140lb/ft3 was noticed for the 10%MP + 10%TP sample. Density reduced as MP and TP increased.

Fig. 2. Density of concrete

Table I. Concrete mixes
No. | Mix ID | FA & CA (%) | Cement (%) | Mix ratio | w/c ratio
01 | 0%MP+0%TP | 100 | 100 | 1:1.96:2.14 | 0.45
02 | 2.5%MP+2.5%TP | 100 | 95 | 1:1.96:2.14 | 0.45
03 | 5%MP+5%TP | 100 | 90 | 1:1.96:2.14 | 0.45
04 | 7.5%MP+7.5%TP | 100 | 85 | 1:1.96:2.14 | 0.45
05 | 10%MP+10%TP | 100 | 80 | 1:1.96:2.14 | 0.45

Table II. Physical properties of cement
S.N. | Test | Result
01 | Normal consistency | 30%
02 | Initial setting time | 48min
03 | Final setting time | 240min
04 | Specific gravity | 3.15

C. Water Absorption of Concrete
Water absorption of the hardened concrete specimens was measured. Figure 3 shows that the water absorption is greater for the conventional concrete than for the mixes prepared with various proportions of MP and TP. The water absorption of the control mix was 3.41%, while the lowest (2.87%) was measured on the 10%MP + 10%TP sample. The water absorption value reduced as the content of MP and TP increased.

Fig. 3. Water absorption of concrete

D. Compressive Strength of Concrete
The cylindrical samples were used to investigate the compressive strength of the concrete blended with several ratios of MP and TP. Figure 4 shows that maximum compressive strength improved by 0.6% and 8.9% for the 2.5%MP + 2.5%TP sample, while it reduced by about 17.9% and 8.95% for the 10%MP + 10%TP sample, after 7 and 28 days respectively. This trend was also noted in [32].

Fig. 4. Compressive strength of concrete

E. Indirect Tensile Strength of Concrete
The cylinder specimens were tested on a UTM to determine their tensile strength, following the ASTM code.
figure 5 shows that the maximum splitting tensile strength improved by 6.80% and 8.30% on the 2.5%mp + 2.5%tp sample, while it reduced by 23.40% and 19.60% on the 10%mp + 10%tp sample, after 7 and 28 days respectively. this trend was also noted in [11, 33].

fig. 5. split tensile strength of concrete

v. conclusions
on the basis of the experimental results obtained, it is concluded that:
• the maximum slump value was recorded for the 10%mp + 10%tp mix, while the minimum slump value was recorded on the control mix.
• the water absorption of the control mix was 3.41%, while the lowest water absorption was 2.87% on the 10%mp + 10%tp mix.
• the density of the control mix was 144.90lb/ft3, while the minimum density was 140lb/ft3 on the 10%mp + 10%tp mix.
• the maximum compressive strength increased by 0.6% and 8.9% on the 2.5%mp + 2.5%tp mix, while it reduced by 17.9% and 8.95% on the 10%mp + 10%tp mix, after 7 and 28 days respectively.
• the maximum splitting tensile strength increased by 6.80% and 8.30% on the 2.5%mp + 2.5%tp mix, while it reduced by 23.40% and 19.60% on the 10%mp + 10%tp mix, after 7 and 28 days respectively.

vi. future work
the use of chemical admixtures along with marble and tile powder may give better results and reduce construction costs, so this prospect should be investigated in the future.

references
[1] n. gautam, v. krishna, a. srivastava, "sustainability in the concrete construction", international journal of environmental research and development, vol. 4, no. 1, pp. 81-90, 2014
[2] a. manimaran, m. somasundaram, p. t. ravichandran, "experimental study on partial replacement of coarse aggregate by bamboo and fine aggregate by quarry dust in concrete", international journal of civil engineering and technology, vol. 8, no. 8, pp. 1019-1027, 2017
[3] k. krizova, p. novosad, t. jarolím, "production of self compacting concrete scc with portland and blended cement cem i, cem ii with fly ash and limestone admixtures", advanced materials research, vol.
1124, pp. 45-50, 2015
[4] s. a. mangi, m. h. w. ibrahim, n. jamaluddin, m. f. arshad, f. a. memon, r. p. jaya, s. shahidan, "a review on potential use of coal bottom ash as a supplementary cementing material in sustainable concrete construction", international journal of integrated engineering, vol. 10, no. 9, pp. 28-36, 2018
[5] s. a. abdullah anwar, s. mohd, a. husain, s. a. ahmad, "replacement of cement by marble dust and ceramic waste in concrete for sustainable development", international journal of innovative science, engineering and technology, vol. 2, no. 6, pp. 496-503, 2015
[6] a. d. sakalkale, g. d. dhawale, r. s. kedar, "experimental study on use of waste marble dust in concrete", international journal of engineering research and applications, vol. 4, no. 10, pp. 44-50, 2014
[7] r. siddique, "performance characteristics of high-volume class f fly ash concrete", cement and concrete research, vol. 34, no. 3, pp. 487-493, 2004
[8] v. m. shelke, p. pawde, r. shrivastava, "effect of marble powder with and without silica fume on mechanical properties of concrete", iosr journal of mechanical and civil engineering, vol. 1, no. 1, pp. 40-45, 2012
[9] h. y. aruntas, m. guru, m. dayi, i. tekin, "utilization of waste marble dust as an additive in cement production", materials & design, vol. 31, no. 8, pp. 4039-4042, 2010
[10] a. talah, f. kharchi, r. chaid, "influence of marble powder on high performance concrete behavior", procedia engineering, vol. 114, pp. 685-690, 2015
[11] n. bheel, r. a. abbasi, s. sohu, s. a. abbasi, a. w. abro, z. h. shaikh, "effect of tile powder used as a cementitious material on the mechanical properties of concrete", engineering, technology & applied science research, vol. 9, no. 5, pp. 4596-4599, 2019
[12] n. bheel, s. l. meghwar, s. sohu, a. r.
khoso, a. kumar, z. h. shaikh, "experimental study on recycled concrete aggregates with rice husk ash as partial cement replacement", civil engineering journal, vol. 4, no. 10, pp. 2305-2314, 2018
[13] n. bheel, s. l. meghwar, s. a. abbasi, l. c. marwari, j. a. mugeri, r. a. abbasi, "effect of rice husk ash and water-cement ratio on strength of concrete", civil engineering journal, vol. 4, no. 10, pp. 2373-2382, 2018
[14] z. h. shaikh, a. kumar, m. a. kerio, n. bheel, a. a. dayo, a. w. abro, "investigation on selected properties of concrete blended with maize cob ash", icec 10th international civil engineering conference, karachi, pakistan, march 13-14, 2019
[15] s. a. mangi, m. h. w. ibrahim, n. jamaluddin, m. f. arshad, s. a. memon, s. shahidan, "effects of grinding process on the properties of the coal bottom ash and cement paste", journal of engineering and technological sciences, vol. 51, no. 1, pp. 1-13, 2019
[16] s. a. mangi, m. h. w. ibrahim, n. jamaluddin, m. f. arshad, p. j. ramadhansyah, "effects of ground coal bottom ash on the properties of concrete", journal of engineering science and technology, vol. 14, no. 1, pp. 338-350, 2019
[17] v. r. rao, d. s. r. murty, m. a. k. reddy, "study on strength and behavior of conventionally reinforced short concrete columns with cement from industrial wastes under uniaxial bending", international journal of civil engineering and technology, vol. 7, no. 6, pp. 408-417, 2016
[18] n. bheel, a. w. abro, i. a. shar, a. a. dayo, s. shaikh, z. h. shaikh, "use of rice husk ash as cementitious material in concrete", engineering, technology & applied science research, vol. 9, no. 3, pp. 4209-4212, 2019
[19] a. a. dayo, a. kumar, a. raja, n. bheel, z. h. shaikh, "use of sugarcane bagasse ash as a fine aggregate in cement concrete", engineering science and technology international research journal, vol. 3, no. 3, pp. 8-11, 2019
[20] m. barbuta, a. a. serbanoiu, c. cadere, c. m.
helepciuc, “effects of marble waste on properties of polymer concrete”, advanced engineering forum, vol. 21, pp. 213-218, 2017 [21] o. m. omar, g. d. abd elhameed, m. a. sherif, h. a. mohamadien, “influence of limestone waste as partial replacement material for sand and marble powder in concrete properties”, hbrc journal, vol. 8, no. 3, pp. 193-203, 2012 [22] k. dharani, n. dhanaseker, “experimental study on partial replacement of cement by marble powder & quarry dust”, international journal for research in applied science and engineering technology, vol. 5, no. 10, pp. 1766-1770, 2017 [23] a. a. aliabdo, a. e. m. a. elmoaty, e. m. auda, “re-use of waste marble dust in the production of cement and concrete”, construction and building materials, vol. 50, pp. 28-41, 2014 [24] m. m. ali, s. m. hashmi, “an experimental investigation on strengths characteristics of concrete with the partial replacement of cement by marble powder dust and sand by stone dust”, international journal for scientific research & development, vol. 2, no. 7, pp. 360-368, 2014 [25] n. sharma, r. kumar, “use of waste marble powder as partial replacement in cement sand mix”, international journal of engineering research & technology, vol. 4, no. 5, pp. 501-504, 2015 [26] z. prosek, k. seps, j. topic, “the effect of micronized waste marble powder as partial replacement for cement on resulting mechanical properties of cement pastes”, advanced materials research, vol. 1144, pp. 54-58, 2017 [27] f. pacheco-torgal, s. jalali, “compressive strength and durability properties of ceramic wastes based concrete”, materials and structures, vol. 44, no. 1, pp. 155-167, 2011 [28] e. fatima, a. jhamb, r. kumar, “ceramic dust as construction material in rigid pavement”, american journal of civil engineering and architecture, vol. 1, no. 5, pp. 112-116, 2013 [29] v. s. n. v. l. ganesh, n. c. rao, e. v. r. 
rao, "partial replacement of cement with tile powder in m40 grade concrete", international journal of innovations in engineering research and technology, vol. 5, no. 7, pp. 34-39, 2018
[30] m. sekar, "partial replacement of coarse aggregate by waste ceramic tile in concrete", international journal for research in applied science and engineering technology, vol. 5, no. 3, pp. 473-479, 2017
[31] s. aswin, v. mohanalakshmi, a. a. rajesh, "effects of ceramic tile powder on properties of concrete and paver block", global research and development journal for engineering, vol. 3, no. 4, pp. 84-87, 2018
[32] h. s. arel, "re-use of waste marble in producing green concrete", international journal of civil and environmental engineering, vol. 10, no. 11, pp. 1377-1386, 2016
[33] b. p. r. v. s. priyatham, d. v. s. k. chaitanya, b. dash, "experimental study on partial replacement of cement with marble powder and fine aggregate with quarry dust", international journal of civil engineering and technology, vol. 8, no. 6, pp. 774-781, 2017

engineering, technology & applied science research vol. 10, no. 4, 2020, 6047-6051 www.etasr.com samo et al.: determination of potential tidal power sites at east malaysia

determination of potential tidal power sites at east malaysia

kamran ahmed samo, department of electrical engineering, quaid-e-awam engineering, science and technology, larkana, pakistan, kamransamo2@gmail.com
zafar ali siyal, department of energy and environment engineering, quaid-e-awam engineering, science and technology, nawabshah, pakistan, zafarsiyal@quest.edu.pk
imran ahmed samo, chemical resource engineering department, beijing university of chemical technology, beijing, china, imran.samo@yahoo.com
andrew ragai henry rigit, mechanical engineering department, universiti malaysia sarawak, kota samarahan, malaysia, arigit@unimas.my

abstract—tidal range energy is one of the most predictable and reliable sources of renewable energy.
this study's main aim is to determine potential sites for tidal range power in east malaysia by analyzing tidal range distributions and resources and the feasibility of constructing barrages. an investigation was conducted at 34 sites, estimating their potential energy outputs and studying their areas for constructing barrages. only 18 sites were marked as appropriate for constructing a tidal range energy extraction barrage. the highest potential power was found at tanjung manis, and its maximum capacity was calculated as 50.7kw. the second highest potential of tidal power extraction was found at the kuching barrage at pending, where an energy harvester could produce electric power up to 33.1kw.

keywords-tidal range; renewable energy; potential site; power; east malaysia

i. introduction
oceans possess a huge potential to generate electric power [1]. generating electricity from ocean power can offer many advantages compared to other renewable energy sources [2]. ocean power is a vast and comparatively reliable source. thermal power can be harvested from oceans through the temperature difference between warm shallow and deeper cold waters, and kinetic power can be harvested from tides, waves, and streams. salinity gradient power is the energy extracted from the difference in salt concentration between sea and river water. although malaysia is located in the equatorial zone and surrounded by sea, ocean power has not attracted much attention from the local government [3]. the country's total coastline is 4,675km; west and east malaysia have 2,068km and 2,607km of coastline respectively [4]. the long length of malaysia's coastline is a huge advantage in utilizing tidal range energy as a reliable alternative energy source [5].

ii. literature review
a potential site can be determined by the maximum available tidal energy. the east coast of malaysia was studied in [6], where four areas with high exploitable tidal water energy were determined.
data for potential energy production at the east coast of peninsular malaysia, covering the kelantan and terengganu regions, were obtained from the malaysia meteorology department (mmd), the department of mapping and survey, and the national hydrographic centre, while potential regions were determined through gis. tidal power derives from the tidal range, as the water is confined in a basin during a high tide and runs out through a turbine at low tide [7-9]. the energy extracted from a tidal barrage can be calculated by considering the tidal range of the water, as:

e = h × ρ × g (1)

where e is the potential energy (j), h is the tidal range (m), ρ is the water density (1025kg/m3), and g is the gravitational acceleration (9.81m/s2). the generated power can be calculated, considering the area of the barrage and the tidal range, as:

p = (e × (a × h)) / (2 × t) (2)

where e is the potential energy (j), a is the barrage area (m2), h is the tidal range (m), and t is the duration of one day (s). the width of the river site was assumed to be about 200m and the area of the tidal barrage about 200×200m [6]. the barrage area was considered using data from the department of irrigation and the department of drainage in malaysia [6]. a graph of the optimum upper limit of power generated per month, created at tanjung berhala, terengganu, is shown in figure 1. data were analyzed by month, and the maximum daily power was considered.

corresponding author: kamran ahmed samo

according to figure 1, the produced power's upper limit was between 90kw and 203kw.

fig. 1. optimum upper limit of power generated per month of 2006 at tanjung berhala, terengganu [6]

the method presented in [6] was utilized with minor differences in this study, in order to calculate the power of the tidal range sites.
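equations (1) and (2) are easy to check numerically. a minimal python sketch, assuming the one-way factor of 2 used in this study and a pending barrage area of about 14800m2 — this area is back-calculated from the table ii figures and is not stated explicitly in the text:

```python
RHO = 1025.0      # water density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
T = 24 * 3600.0   # duration of one day, s

def tidal_energy(h):
    """eq. (1): potential energy e (j) for tidal range h (m)."""
    return h * RHO * G

def tidal_power(h, area):
    """eq. (2): one-way generated power p (w) for barrage area (m^2)."""
    return tidal_energy(h) * area * h / (2.0 * T)

# first row of table ii for the pending site: h = 3.6 m, a ≈ 14800 m^2
p = tidal_power(3.6, 14800.0)
print(round(p, 4))  # ≈ 11161.3275 w, i.e. 11.2 kw
```

with these values the sketch reproduces the first row of table ii (e = 36198.9j, p ≈ 11.2kw), which supports the inferred area.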
water mass was replaced by water density in (1), and, as this research deals with one-way instead of two-way generation, 2 is used instead of 4 in (2). the potential energy based on the tidal range can be calculated by (1), while the power output can be calculated by (2). similarly to the four areas considered in [6] for the east coast of malaysia, this study examined the tidal range power resources at 34 sites along the sabah and sarawak coastline, as suggested in [6, 10]. the chosen 34 areas were measured using google maps in order to determine the feasibility of a barrage construction.

iii. methodology
figure 2 shows a data flow diagram of this research method, including the determination of sites, data analysis, power output calculation, and map production using google maps. in [6], the power output was calculated for four sites using data from 2006 and 2007. this study calculated the power output of 34 sites using data from 2015. previous researchers [6, 11] assumed the barrage area of their studied sites to be 200×200m, while this research uses the actual areas, except for the already constructed kuching barrage at pending.

fig. 2. flow chart of the proposed methodology.

a. available resources
tidal range data for 2015 were acquired from the sarawak marine department (smd). these data included tables, namely the sarawak hourly high and low tide tables, and nautical charts. google maps was used to measure the areas and to produce maps showing the positions of the tidal range sites. sigma software was used for data analysis. matlab was used to produce graphs of the tidal range potential sites. equations (1) and (2) were used for calculating the maximum power. some preferable sites are in the sea, while others are located inland.

b. navigational charts
navigational charts detail the physical features of the sites, the depth of the sea water in meters, and nearby land information.
coordinates acquired from satellite navigation systems, such as the global positioning system (gps) using the world geodetic system (wgs) 1984 datum, can be plotted directly on these charts [12].

c. calculating available energy resources
the determination of possible tidal range sites was performed after analyzing the tidal range energy resources. low and high tide data were acquired from [13]. the barrage areas were estimated by measuring each area's width on google maps. equations (1) and (2) were used to calculate the power output of each site.

d. calculation of power output for tidal range sites
calculations were performed after examining each site's hourly tide tables. the water's tidal range influences the potential power output from the barrage, as noted in (1) and (2). figure 3 shows the calculation procedure flow chart.

fig. 3. flow chart of calculated power output for 34 tidal range sites.

power can be generated through a barrage. a barrage already exists at pending, named the kuching barrage. therefore, the actual area was used for calculating the power output of the kuching barrage, and the estimated area from google maps was used for calculating the potential power output of the other 33 sites. in [6, 14], two-way power generation was utilized. however, this research deals only with one-way generation, like the kuching barrage, as some sites are also on a river mouth without any prominent basin. table i depicts the 34 tidal range sites with their coordinates.

table i.
research sites
no  site                      latitude (n)  longitude (e)
1   sematan                   01 47         109 47
2   pasarlundo                01 40         109 51
3   kuala santubong           01 43         110 19
4   pending                   01 33         110 23
5   muaratebas                01 38         110 28
6   pulaulakei                01 45         110 30
7   sri aman                  01 14         111 27
8   kuala rajang              02 09         111 15
9   tanjung manis             02 09         111 22
10  sarikei                   02 08         111 37
11  bintangor                 02 10         112 38
12  lebaan (tanjungensurai)   02 19         111 40
13  sibu                      02 17         111 49
14  kanowit                   02 06         112 09
15  kuala paloh               02 25         111 15
16  kuala igan                02 48         111 43
17  kuala mukah               02 54         112 05
18  kuala balingian           03 00         112 35
19  kuala tatau               03 04         112 48
20  kuala kemena              03 10         113 02
21  bintulu port              03 16         113 04
22  miri                      02 24         113 59
23  kuala baram               04 35         113 59
24  miri port                 04 34         114 02
25  kuala limbang             04 51         115 01
26  bandar limbang            04 44         115 00
27  kuala lawas               04 57         115 25
28  bandar lawas              04 15         115 23
29  labuan federal territory  05 17         115 15
30  kota kinabalu             05 59         116 04
31  kudat                     06 52         116 50
32  sandakan                  05 48         118 04
33  lahaddatu                 05 01         118 20
34  tawau                     04 14         117 53

e. calculation of areas
the areas of the 33 tidal range sites and the river were measured using google maps. the widths of the barrage gates, piers and service structure were assumed by taking a fixed width of 25m for each gate, 4m for each pier, and 37m for the barrage length, similarly to the ones already constructed on the kuching barrage. the measurement techniques are shown in figure 4, which shows the typical cross-section of the proposed barrage. given the preliminary width, the number of gates was decided. the width of the ship lock was also defined as 25m. after deciding the effective width, which is the sum of all gates, piers and ship lock, the remaining space was used for service structures or abutments.

f. potential sites
the selection of appropriate potential sites should take into account the maximum available energy and an environmental impact assessment (eia) study [15]. a thorough eia study is required at these potential sites, something that is beyond the scope of this study.
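the width budgeting of section e can be sketched as a small routine. a minimal sketch, assuming gates and piers alternate with a pier on each side of every gate and a single 25m ship lock — the alternation pattern and the example river width are assumptions; the paper only fixes the component widths:

```python
GATE_W = 25.0   # width of each barrage gate (m)
PIER_W = 4.0    # width of each pier (m)
LOCK_W = 25.0   # width of the ship lock (m)

def barrage_layout(river_width):
    """pick the largest gate count that fits, leaving the remainder
    for abutments/service structures. assumes n gates need n + 1 piers."""
    n = 0
    while LOCK_W + (n + 1) * GATE_W + (n + 2) * PIER_W <= river_width:
        n += 1
    effective = LOCK_W + n * GATE_W + (n + 1) * PIER_W
    return n, river_width - effective  # (gates, abutment width)

# e.g. a 200 m wide river site (illustrative width, as in [6])
gates, abutment = barrage_layout(200.0)
print(gates, abutment)
```

under these assumptions a 200m site fits 5 gates with 26m left for abutments; an actual design would of course follow the site survey.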
the potential sites should also be free from security restrictions (navigational police) and should not obstruct the commercial shipping lines.

fig. 4. measurement techniques for calculating the proposed barrage's width.

iv. results and discussion

a. calculated areas of tidal range sites
the width of a river or stream varies across different locations, so after selecting the best possible width of gates and piers, the remaining width was considered for constructing the abutment and service buildings. it was also observed that some locations are not appropriate for constructing a barrage for power generation. the proposed barrages were based on the constructed kuching barrage at pending. table ii shows the calculations for the january power output of the kuching barrage. the first three columns show the data variables of (1), column 4 shows the potential energy, column 5 shows the power output obtained by (2), and column 6 shows the power in kilowatts. the power output was calculated on a daily basis for all tidal range sites. in table ii, a represents the barrage area, and h is the tidal range calculated from the high and low tide data [13].

b. extractable energy at tidal range sites
the potential extractable energy of the 34 tidal range sites on the sabah and sarawak coastline was calculated. it was concluded that only 18 of them are appropriate for power generation. figure 5 shows the potential power output of the 18 main potential sites. the highest potential power was noted at tanjung manis in the sarawak region, measured between 39.2kw and 50.7kw. the results showed that the maximum power was observed in january and october, while the minimum was observed in june. the second highest power was calculated at pending, between 25.1kw and 33.1kw; its maximum was observed in october and its minimum in june. kuala kemena in the sarawak region was identified as the site with the lowest potential, as its power was calculated between 0.9kw and 1.9kw.
the maximum potential energy of kuala kemena was calculated during july and december, while the minimum was found in january and september. the greater power values in [6] are explained by the two-way power generation, the areas defined as 200×200m, and the variations in yearly tidal ranges.

table ii. calculation of power output of pending site
h (m)  ρ (kg/m3)  g (m/s2)  e (j)      p (w)        p (kw)
3.6    1025       9.81      36198.9    11161.3275   11.2
3.9    1025       9.81      39215.475  13099.05797  13.1
4.3    1025       9.81      43237.575  15923.83839  15.9
4.7    1025       9.81      47259.675  19024.20714  19.0
4.9    1025       9.81      49270.725  20677.73714  20.7
4.9    1025       9.81      49270.725  20677.73714  20.7
4.9    1025       9.81      49270.725  20677.73714  20.7
4.7    1025       9.81      47259.675  19024.20714  19.0
4.4    1025       9.81      44243.1    16673.09417  16.7
4.1    1025       9.81      41226.525  14476.99964  14.5
3.6    1025       9.81      36198.9    11161.3275   11.2
3.2    1025       9.81      32176.8    8818.826667  8.8
2.7    1025       9.81      27149.175  6278.246719  6.3
2.6    1025       9.81      26143.65   5821.803542  5.8
2.7    1025       9.81      27149.175  6278.246719  6.3
2.6    1025       9.81      26143.65   5821.803542  5.8
3.2    1025       9.81      32176.8    8818.826667  8.8
3.9    1025       9.81      39215.475  13099.05797  13.1
4.7    1025       9.81      47259.675  19024.20714  19.0
5.3    1025       9.81      53292.825  24191.48839  24.2
5.8    1025       9.81      58320.45   28971.22354  29.0
6.0    1025       9.81      60331.5    31003.6875   31.0
6.0    1025       9.81      60331.5    31003.6875   31.0
5.9    1025       9.81      56309.4    27007.65667  27.0
5.0    1025       9.81      50276.25   21530.33854  21.5
4.2    1025       9.81      42232.05   15191.80688  15.2
3.8    1025       9.81      38209.95   12435.92354  12.4
3.4    1025       9.81      34187.85   9955.628542  10.0
3.0    1025       9.81      30165.75   7750.921875  7.8
3.2    1025       9.81      32176.8    8818.826667  8.8
3.5    1025       9.81      35193.375  10549.86589  10.5

fig. 5. potential power of 18 main sites of sarawak coastline malaysia.

figure 6 shows the location of the 18 tidal range potential sites on a map generated by google maps. the locations of the maximum potential power sites (i.e.
tanjung manis and pending) are shown in figure 6 as numbers 4 and 7. figure 7 shows the maximum and minimum potential power for the 18 sites, while table iii shows their mean tidal range and maximum and minimum potential power.

fig. 6. position of 18 potential sites, © google maps, terrametrics.

fig. 7. max/min potential power in convenient tidal range sites.

table iii. tidal range mean and max/min power per site
no  site                     tidal range mean (m)  pmax (kw)  pmin (kw)
1   sematan                  3.0                   12.5       9.8
2   pasarlundo               2.9                   10.3       8.4
3   kuala santubong          3.3                   21.5       15.9
4   pending                  4.2                   33.1       25.1
5   sri aman                 2.9                   15.9       10.0
6   kuala rajang             3.8                   27.5       20.7
7   tanjung manis            4.0                   50.7       39.2
8   sarikei                  3.9                   19.5       15.5
9   bintangor                3.8                   17.4       14.5
10  lebaan (tanjungensurai)  3.0                   27.9       6.6
11  sibu                     2.0                   5.4        4.2
12  kuala paloh              3.0                   21.8       3.9
13  kuala igan               1.6                   6.8        4.2
14  kuala mukah              1.4                   2.8        1.7
15  kuala kemena             1.0                   1.9        0.9
16  kuala limbang            1.4                   2.3        1.1
17  bandar limbang           1.3                   1.9        0.8
18  kuala lawas              1.3                   1.9        1.0

c. selection of a suitable site
a total of 18 tidal range sites seem to be suitable for power generation. as the tidal ranges differ across these sites, sites with larger tides generate more power than sites with smaller ones. these sites were assumed to be preliminary; the final sites should be selected after a thorough feasibility study. however, as the kuching barrage constructed at pending has a strong potential for power generation, an energy harvester could be installed there for extracting energy [16].

v. conclusion
this research studied the potential energy generation at 34 sites in east malaysia, pinpointing 18 locations as suitable for the construction of an energy generation barrage. however, these sites were assumed to be preliminary, as the final sites should be selected after a thorough feasibility study.
two sites were considered as having the highest potential: tanjung manis and pending. the highest energy potential was calculated for tanjung manis, measured between 39.2kw and 50.7kw, while the second highest power was calculated for pending, between 25.1kw and 33.1kw. however, a barrage already exists at the pending site, and this is the only site where power could be generated by just installing turbines.

references
[1] a. s. bahaj, "generating electricity from the oceans," renewable and sustainable energy reviews, vol. 15, no. 7, pp. 3399–3416, sep. 2011, doi: 10.1016/j.rser.2011.04.032.
[2] l. myers and a. s. bahaj, "simulated electrical power potential harnessed by marine current turbine arrays in the alderney race," renewable energy, vol. 30, no. 11, pp. 1713–1731, sep. 2005, doi: 10.1016/j.renene.2005.02.008.
[3] o. b. yaakob, y. m. ahmed, m. n. bin mazlan, k. e. jaafar, r. m. raja muda, "model testing of an ocean wave energy system for malaysian sea," world applied sciences journal, vol. 22, no. 5, pp. 667–671, 2013, doi: 10.5829/idosi.wasj.2013.22.05.2848.
[4] "east asia/southeast asia: malaysia – the world factbook", central intelligence agency, https://www.cia.gov/library/publications/resources/the-world-factbook/geos/my.html (accessed jul. 1, 2020).
[5] s. m. shafie, t. m. i. mahlia, h. h. masjuki, and a. andriyana, "current energy usage and sustainable energy in malaysia: a review," renewable and sustainable energy reviews, vol. 15, no. 9, pp. 4370–4377, dec. 2011, doi: 10.1016/j.rser.2011.07.113.
[6] k. n. a. maulud, o. a. karim, k. sopian, s. n. f. a. aziz, "determination of tidal energy resource location in east coast of peninsular malaysia using geographical information system," in proceedings of the 3rd wseas international conference on energy planning, energy saving, environmental education (epese ’09), jul. 2009, pp. 25–31.
[7] g. n. tiwari and m. k.
ghosal, renewable energy resources: basic principles and applications. harrow, uk: alpha science international, 2005. [8] w. k. lee, “reliability of combined regional tidal power generation in malaysia,” international sustainability and civil engineering journal, vol. 1, no. 2, pp. 48–58, 2012. [9] m. álvarez, v. ramos, r. carballo, n. arean, m. torres, and g. iglesias, “the influence of dredging for locating a tidal stream energy farm,” renewable energy, vol. 146, pp. 242–253, feb. 2020, doi: 10.1016/j.renene.2019.06.125. [10] m. mestres, m. griñó, j. p. sierra, and c. mösso, “analysis of the optimal deployment location for tidal energy converters in the mesotidal ria de vigo (nw spain),” energy, vol. 115, pp. 1179– 1187, nov. 2016, doi: 10.1016/j.energy.2016.06.055. [11] m. wosnik, i. gagnon, k. baldwin, e. bell, “the ‘living bridge’ project: tidal energy conversion at an estuarine bridge powering sustainable smart infrastructure,” presented at the 5th marine energy technology symposium (mets), washington, dc, usa, may 2017. [12] m. r. hassan, royal malaysia navy, 2nd ed. 2009. [13] sarawak hourly and high & low tide tables: including standard ports of sabah. director of marine sarawak, malaysia, 2012. [14] i. penesis et al., “tidal energy in australia – assessing resource and feasibility to australia’s future energy mix,” presented at the 4th asian wave and tidal energy conference (awtec 2018), taipei, taiwan, sep. 2018. [15] a. a. mahessar, a. n. laghari, s. qureshi, i. a. siming, a. l. qureshi, and f. a. shaikh, “environmental impact assessment of the tidal link failure and sea intrusion on ramsar site no. 1069,” engineering, technology & applied science research, vol. 9, no. 3, pp. 4148–4153, jun. 2019. [16] m. l. tuballa and m. l. s. abundo, “operational impact of res penetration on a remote diesel-powered system in west papua, indonesia,” engineering, technology & applied science research, vol. 8, no. 3, pp. 2963–2968, jun. 2018. 
engineering, technology & applied science research vol. 9, no. 3, 2019, 4230-4234 www.etasr.com winanri et al.: comparison analysis between traditional contracts and long segment contracts ...

comparison analysis between traditional and long segment contracts on national road preservation activities in indonesia

riliane prima winanri, department of civil engineering, sriwijaya university, palembang, indonesia, riliane.pw@gmail.com
betty susanti, department of civil engineering, sriwijaya university, palembang, indonesia, b_susanti@yahoo.com
ika juliantina, department of civil engineering, sriwijaya university, palembang, indonesia, ikawig@yahoo.com

abstract—national road preservation activities in indonesia are usually carried out using a traditional approach, namely an in-house system or a contract system with a design-bid-build (dbb) approach. an alternative contract method to improve the quality of roads is the long segment contract, defined as carrying out road preservation activities in one continuous segment with the aim of obtaining good road conditions for all segments. this study aims to compare road performance under traditional approaches and long segment contracts. road performance is expressed in functional performance terminology and uses the international roughness index (iri) indicator. the research was conducted on the outer urban road of palembang indralaya intersection meranjat, which is part of the national road section in the province of south sumatra, indonesia. the results showed that the performance of the road contracted with traditional approaches was better than that of the long segment one. this was not expected, and was probably due to a lack of understanding, by the parties involved in the long segment contract, of the principles of fulfilling road service performance.
the contractors are not interested in carrying out routine road maintenance projects because the value of the work is small, and they lack experience with routine maintenance.

keywords-traditional contracts; long segment contracts; iri values

i. introduction
road preservation is generally carried out under traditional approach systems, namely an in-house system or a contract system with the dbb delivery approach [1]. the contract is an integral part of project delivery. in order to improve road quality, an alternative contract method is required which includes considerations of performance aspects, such as guaranteed contracts, performance-based contracts (pbc), and long road maintenance contracts (long segment maintenance contracts), or long segment contracts. there are difficulties in implementing pbc pilot projects in indonesia, with challenges spanning from the procurement stage to implementation [2, 3]. the supreme audit agency (bpk) recommended that pbc implementation should be reviewed, because bpk cannot measure the effectiveness of using public funds based on performance measures as required in pbc; bpk always applies a volume-based measurement system for all uses of these public funds. consequently, applying long segment contracts is an alternative way to overcome these difficulties. traditional approaches, with both in-house and contract-based systems, in the implementation of road construction have not been oriented to better cost, time and project quality [4]. the authors in [5] stated that contracts with traditional approaches have difficulties in effectively managing quality, time and costs. meanwhile, poor work quality shortens the road service life and results in high road maintenance costs [6]. the weakness of traditional contracts is that they are not oriented towards the aspects of road performance and cost, lacking efficiency with regard to road maintenance costs.
This condition favors the implementation of long segment contracts, since they are oriented toward those two aspects. In 2016, long segment contracts began to be implemented for road preservation on several national roads in Indonesia, with the aim of obtaining good road conditions for all segments. Against this background, the problem addressed here is comparing the performance of roads under preservation activities between traditional approaches and long segment contracts. The scope of the analysis is the assessment of the functional condition of the road using the IRI. The results of preservation activities under traditional approaches and long segment contracts were compared. As a final outcome, the challenges and opportunities of implementing long segment contracts are presented as a consideration for road managers in road preservation programs.

II. Literature Review

A. Road Maintenance Contract

A road maintenance contract is one of the contract forms in the implementation of construction work. Based on Government Regulation of the Republic of Indonesia Number 29 of 2000 on the implementation of construction services, construction work contracts are classified in three ways: by compensation type, by duration of construction work, and by method of payment. The contract is inseparable from its project delivery. A project delivery method is a project implementation approach designed to define the relationships, roles, and responsibilities of the parties involved in each stage of the project to achieve its objectives. (Corresponding author: B. Susanti)

1) Traditional Road Maintenance Contracts

In road maintenance there are two traditional approaches, namely the in-house system and the contract system with a DBB delivery approach [1].
In-house work is applied to routine maintenance, while the input-based contract system with a DBB delivery approach, the traditional contract system, is applied to periodic maintenance and road improvement. Authors in [7] state that the traditional contract is a form of construction work contract based on the division of tasks: the contracting agency assigns the contractor to carry out one of planning, supervision, or implementation. In-house work is likewise classified by job division; however, it is not a form of contract, since the work is planned, carried out, and self-monitored by the agency without outsourcing it to a contractor [7]. The weakness of traditional approach systems is their difficulty in effectively managing quality, time, and cost. Poor work quality shortens the road service life and results in high road maintenance costs.

2) Long Segment Contract

One way to overcome the problem of low road quality is the application of alternative contracting methods that consider the performance aspects of the work results. These include guaranteed contracts, PBC, and long segment contracts. Based on the Circular of the Directorate General of Bina Marga Number 09/SE/Db/2015 on the implementation of the procurement process and work on road preservation in long segments, a long segment is defined as road preservation activities in one continuous segment (possibly covering more than one road) carried out with the aim of obtaining the same road condition throughout, namely a good road condition meeting the standards along all segments (the standard is in accordance with [8]). The scope of activities (output) of long segment work includes widening, reconstruction, rehabilitation, and maintenance of roads.
Based on the delivery approach, long segment contracts are also DBB contracts like traditional contracts, yet they have performance as the main aspect and maintenance within their scope of activities.

B. Road Pavement Performance

Pavement performance is a function of the relative ability of the pavement to serve traffic over a certain period. It is determined by the requirements on the road's functional and structural conditions. The structural condition concerns the road's structural layers, from the surface layer down through the upper foundation layer, the lower foundation layer, and the subgrade. Structural pavement performance expresses the pavement's carrying capacity and is measured with Benkelman beam or falling weight deflectometer (FWD) test equipment. The functional condition of the road is the service condition of the pavement surface for road users, in the form of roughness: the surface unevenness that determines comfort and safety for traffic. Functional pavement performance is expressed in the Surface Index (SI) or Present Serviceability Index (PSI), the Surface Distress Index (SDI), the Road Condition Index (RCI), and the IRI. This study compared pavement performance based on the functional condition of the road in the form of IRI values. The IRI is an international index describing surface unevenness, defined as the cumulative vertical surface fluctuation per unit length, expressed in meters per kilometer of road (m/km). Road surface roughness was measured with the National Association of Australian State Road Authorities (NAASRA) method. The relationship between IRI values and the road condition criteria on asphalt, Penmac, and land/gravel roads is given in Table I.

Table I. Criteria for road conditions based on pavement type

Condition criteria | Asphalt road       | Penmac road         | Land/gravel road
Good               | IRI ≤ 4            | IRI ≤ 8             | IRI ≤ 10
Medium             | 4 < IRI ≤ 8        | 8 < IRI ≤ 10        | 10 < IRI ≤ 12
Minor damaged      | 8 < IRI ≤ 12       | 10 < IRI ≤ 12       | 12 < IRI ≤ 16
Heavy damaged      | IRI > 12           | IRI > 12            | IRI > 16

III. Research Methodology

A. Research Location

This research was conducted in South Sumatra Province, one of the largest provinces in Indonesia and the largest in Sumatra. In Ogan Ilir District, it focused on national road preservation activities conducted by the Balai Besar Pelaksanaan Jalan Nasional V Palembang, covering the outer urban road of Palembang-Indralaya intersection-Meranjat with segment numbers 005 and 007. The total length of the sections is 29.05 km, consisting of the outer urban road of Palembang-Indralaya intersection (16.45 km) and Indralaya intersection-Meranjat (12.60 km).

B. Data Collection

Secondary data collection was used in this study. The secondary data were the road functional performance data, namely the IRI values on the outer urban road of Palembang-Indralaya intersection-Meranjat from 2011 to 2017, obtained from the National Road Planning and Monitoring Unit (P2JN) of South Sumatra Province. The data were the IRI results at the end of each year of road preservation. The IRI values for the traditional approaches were the results of preservation activities under those approaches from 2011 to 2015, while the IRI values expressing the functional performance of the road under the long segment contract were the results of preservation activities under long segment contracts from 2016 to 2017. The recapitulation of the IRI values is provided in Table II.
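The thresholds of Table I can be applied mechanically. The following is a minimal sketch (function and key names are illustrative, not from the paper) that maps an IRI value, for an asphalt road by default, to the condition criteria above:

```python
def classify_road(iri, pavement="asphalt"):
    """Classify road condition from an IRI value (m/km) using the
    thresholds of Table I. For each pavement type the tuple gives the
    upper bounds of the good, medium, and minor-damaged classes."""
    bounds = {
        "asphalt": (4, 8, 12),
        "penmac": (8, 10, 12),
        "gravel": (10, 12, 16),
    }
    good, medium, minor = bounds[pavement]
    if iri <= good:
        return "good"
    if iri <= medium:
        return "medium"
    if iri <= minor:
        return "minor damaged"
    return "heavy damaged"
```

For instance, the 2015 value of 3.35 m/km on the Palembang-Indralaya segment classifies as good, while the 2016 value of 5.05 m/km classifies as medium.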
For data processing and analysis, the performance of the road pavements on the outer urban road of Palembang-Indralaya intersection-Meranjat was assessed as follows:

• Collecting the IRI data from 2011 to 2017, namely the IRI values per 100 m segment on the functional outer urban road of Palembang-Indralaya intersection (16.45 km) and Indralaya intersection-Meranjat (12.60 km).

• Determining the IRI value representing the road by averaging the IRI values over the segments:

Average IRI = (Σ IRI per 100 m segment) / (number of segments)    (1)

• Comparing the average IRI values per year: the IRI values resulting from road preservation under traditional contracts (2011 to 2015) against those resulting from road preservation under long segment contracts (2016 to 2017).

• Conducting semi-structured interviews with the relevant stakeholders (the owner and the contractor) to strengthen the comparison of road pavement performance between traditional and long segment contracts, so that the potential benefits of implementing long segment contracts could be identified.

Table II. IRI values from 2011 until 2017

Road segment                                         | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017
Outer urban road of Palembang-Indralaya intersection | 6.39 | 4.51 | 5.01 | 4.84 | 3.35 | 5.05 | 4.74
Indralaya intersection-Meranjat                      | 5.27 | 4.66 | 4.23 | 4.87 | 2.98 | 4.81 | 4.58

C. Data Analysis

IRI value data processing was carried out for the outer urban road of Palembang-Indralaya intersection and the road segments of Indralaya intersection-Meranjat from 2011 to 2017, per road segment.
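The averaging step in (1) can be sketched as a small helper. This is illustrative only; the function name is ours, and the sample data in the test is constructed, not the study's measurements:

```python
def average_iri(segment_iris):
    """Average IRI (m/km) over the per-100 m segment IRI values of a
    road, as in the averaging step of the methodology."""
    if not segment_iris:
        raise ValueError("no segments")
    return sum(segment_iris) / len(segment_iris)
```

A road of 16.45 km divided into 100 m segments yields 165 usable segments; the representative IRI is simply the mean of their values.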
The results were then grouped based on the road conditions along the road, as given in Figure 1. The calculation of the average IRI value on a road of 16.45 km with 165 segments is as follows:

Average IRI value = 833.25 / 165 = 5.05

IV. Discussion

Based on the results of the data analysis, the comparison of road pavement performance between traditional contracts and long segment contracts on the outer urban road of Palembang-Indralaya intersection and the road segments of Indralaya intersection-Meranjat is provided in Figures 2 and 3. The IRI values under traditional contracts tended to decrease. In 2016, the initial year of applying the long segment contract, the IRI values on both segments increased again; in 2017 both values decreased again. 2016 was the first year of implementing the long segment scheme, and in that year the IRI values tended to increase. This is because of the sub-optimal implementation of preservation activities within the scope of the long segment contract, especially the routine road maintenance activities.

Fig. 1. Strip map of road condition based on the IRI values on the outer urban road of Palembang-Indralaya intersection in 2016 (good: 8.750 km, 53.2%; medium: 6.400 km, 38.91%; minor damaged: 1.200 km, 7.29%; heavy damaged: 0.100 km, 0.61%; total length: 16.450 km, 100.00%)

Fig. 2. IRI value comparison on the outer urban road of Palembang-Indralaya intersection road segments

Fig. 3. IRI value comparison on the Indralaya-Meranjat road segments
There was a lack of understanding of routine maintenance activities by the contractors. They prioritized works such as widening, reconstruction, rehabilitation, and reactive road activities, even though a fixed penalty applies for delay in repairing damage within the scope of routine maintenance. Routine road maintenance means that the road components (road pavement, road shoulders, drainage systems, road additions, and road equipment) are maintained at all times and kept in good service condition according to the required performance. Routine maintenance activities include sealing, patching, spot leveling, pavement edge repair, asphalt surfacing, crack repair, corrugated surface repair, and repair of deep rutting to maintain a standard transverse slope. According to the Palembang P2JN, 2016 was the initial year of implementing the long segment contract on the outer urban road of Palembang-Indralaya-Meranjat, alongside the other long segment scheme applications in Indonesia. The Ministry of Public Works' Directorate General of Bina Marga modified the long segment contract in 2017, namely by changing the payment terms for the scope of routine road maintenance activities. In 2016 the payment for the scope of routine road maintenance activities was lump sum, but in 2017 it changed to volume-based for all scopes of activities in long segment contracts, the expectation being that contractors would prioritize road maintenance activities. This change of payment was intended to make contractors' bid prices for routine road maintenance more measurable and not too low. During contract implementation, the volume-based payment made it easier to optimize the program.
If there was an addendum to the contract (work added or reduced) within the scope of routine maintenance activities, shifting funds became more flexible. From the previous discussion, the potential benefits of implementing long segment contracts are:

1) Road conditions are better maintained and the cost of road maintenance is more efficient.

The application of KBK (kontrak berbasis kinerja, the Indonesian term for performance-based contracting) has the opportunity to improve the quality and service of national roads through sustainable road management and maintenance, as well as to save on road maintenance costs, with the NPV of KBK at 70% of the NPV of traditional contracts [9]. The implementation of KBK in the procurement of goods and services in developing countries can reduce cost and time constraints and improve procurement quality [10]. KBK generates cost reduction in a contract and improves the quality of a product or service [11]. PBC is an alternative and cost-effective solution, reducing both direct and indirect costs compared to traditional contracting approaches [5]. This is in line with the long segment contract, in which road damage can be handled quickly and road components can be maintained at all times in good service condition to prevent continuous damage. By preventing greater road damage, the cost of road preservation activities becomes more efficient. In addition, with long segment contracts, road maintenance costs become smaller because of the optimization of road management costs, including the cost of procurement of goods/construction services, road supervision costs, and overhead (general) road construction costs.

2) National road work outcomes are in accordance with the applied performance indicators.

Meeting the road service levels in long segment contracts is regulated in Special Specification SKh.1.10.a on the maintenance of road performance and Special Specification SKh.1.10.b on the maintenance of bridge performance.
If the providers cannot meet the level of road and bridge service within the specified response time, they are sanctioned financially in the form of payment deductions, in accordance with the special specifications for maintenance of road performance, Section SKh-1.10.a.4.3, and for maintenance of bridge performance, Section SKh-1.10.b.4.5. Therefore, road organizers have an extra motive to achieve steady road conditions throughout the segment.

3) Program optimization in implementing long segment contracts.

The long segment contract has the advantage that one contract covers four road preservation activities, namely widening, reconstruction, rehabilitation, and maintenance. Long segment contracts are therefore flexible, and the programs can be optimized with the available funds. Furthermore, after the payment for the scope of routine maintenance activities was modified to volume-based, shifting funds became easier. Program optimization in long segment contracts can be carried out as follows:

• Adjustment of the location of activities according to field conditions or field engineering results.
• Delay (holding) of damaged segments which cannot yet be handled.
• Reduction of the planned time for rehabilitation and/or reconstruction activities.
• Transfer of funds between scopes of activities.

Given this program optimization, the road preservation activities meet the needs and the road along the whole segment can be maintained. The challenges in applying the long segment contract are:

1) Lack of experience with long segment contracts.

The Directorate General of Bina Marga has deployed long segment contracts since 2016, with 256 packages of national road preservation, 25 of which are managed by Balai Besar Pelaksanaan Jalan Nasional V Palembang (BBPJN V Palembang). According to Satker P2JN Palembang, long segment contracts are more accountable in terms of financial administration.
They can also improve maintenance standards and overcome poor in-house (direct labor-based) performance in national road maintenance. The application of long segment contracts is better than the traditional contract approach, but it has not been widely disseminated. To achieve a steady and uniform road condition along an integrated segment, long segment contracts should be applied not only to national roads, but also to provincial and district/city road maintenance.

2) Lack of contractors' understanding in implementing long segment contracts.

The number of contractors with KBK experience is limited [9, 12, 13], which is a challenge in implementing KBK, and the lack of knowledge of KBK still needs to be addressed in preparation for its application. Similarly, for the implementation of long segment contracts, the contractors' lack of understanding in carrying out road maintenance activities needs to be addressed, because these activities are the main scope of long segment contracts. It is necessary to improve the contractors' ability to meet the level of road services and to maximize performance inspections.

V. Conclusion

The results of the data analysis show that the functional performance of the road, namely the IRI value resulting from road preservation activities under traditional contracts, is better than the IRI value under long segment contracts, which does not meet the expected target. Due to the lack of understanding of the scope of routine maintenance activities, the Directorate General of Bina Marga modified its payment from lump sum to volume-based in 2017; as a result, the IRI value decreased in 2017.
There are potential benefits to implementing long segment contracts: road conditions are better maintained and the cost of road maintenance becomes more efficient, the work results are held to the applied performance indicators, and program optimization can be achieved. However, there are still challenges to be faced, namely the contractors' lack of experience, knowledge, and understanding in implementing long segment contracts.

References
[1] B. Susanti, Model penilaian kelayakan penerapan kontrak berbasis kinerja untuk proyek pemeliharaan jalan nasional, Institut Teknologi Bandung, 2017 (in Indonesian)
[2] E. James, "Meningkatkan hasil pemeliharaan aset jalan nasional Indonesia", Jurnal Prakarsa Infrastruktur Indonesia, Vol. 24, pp. 19-23, 2016 (in Indonesian)
[3] R. Wirahadikusumah, M. Abduh, "Metoda kontrak inovatif untuk peningkatan kualitas jalan: peluang dan tantangan", Pola Manajemen Proyek untuk Kondisi Berjalan dan Masa Depan, Jakarta, Indonesia, 2003 (in Indonesian)
[4] P. Pakkala, "Performance-based contracts—international experiences", TRB Executive Workshop, Finnish Road Administration, Washington, USA, 2005
[5] A. Straub, "Cost savings from performance-based maintenance contracting", International Journal of Strategic Property Management, Vol. 13, No. 3, pp. 205-217, 2009
[6] Puslitbang Jalan dan Jembatan, Kajian penerapan kontrak berbasis kinerja untuk konstruksi jalan di atas tanah lunak, Laporan Akhir, Departemen Pekerjaan Umum, 2006 (in Indonesian)
[7] N. Yasin, Kontrak Konstruksi di Indonesia, Penerbit Gramedia, 2013 (in Indonesian)
[8] Persyaratan Teknis Jalan dan Kriteria Perencanaan Teknis Jalan, Peraturan Menteri Pekerjaan Umum, 2011 (in Indonesian)
[9] R. Z. Tamin, A. Z. Tamin, P. F. Marzuki, "Performance based contract application opportunity and challenges in Indonesian national road management", Procedia Engineering, Vol. 14, pp. 851-858, 2011
[10] B. A. Ambaw, J. Telgen, "PBC as a solution for public procurement problems: some Ethiopian evidence", European Journal of Business and Management, Vol. 9, No. 34, pp. 97-108, 2017
[11] M. Sultana, A. Rahman, S. Chowdhury, "A review of performance-based maintenance of road infrastructure by contracting", International Journal of Productivity and Performance Management, Vol. 62, No. 3, pp. 276-292, 2013
[12] M. Sultana, A. Rahman, S. Chowdhury, "An overview of issues to consider before introducing performance-based road maintenance contracting", World Academy of Science, Engineering and Technology, Vol. 62, pp. 350-355, 2012
[13] N. Malahayati, S. Husin, A. Mursalin, "Kajian sistem kontrak konvensional dan sistem performance based contract (PBC) pada proyek pemeliharaan jalan", Prosiding Seminar Reguler Seri 1 JTS Unsyiah-Eltees-MTS, 2010 (in Indonesian)

Engineering, Technology & Applied Science Research Vol. 9, No. 5, 2019, 4586-4590
www.etasr.com

Dynamic Economic/Environmental Dispatch Problem Considering Prohibited Operating Zones

Ahmed Torchani, College of Engineering, University of Hail, Hail, Saudi Arabia, and University of Tunis, ENSIT, LISIER Laboratory, Tunisia, tochahm@yahoo.fr
Attia Boudjemline, College of Engineering, University of Hail, Hail, Saudi Arabia, a_boudjemline@hotmail.com
Hatem Gasmi, College of Engineering, University of Hail, Hail, Saudi Arabia, and University of Tunis El Manar, ENIT, Tunisia, gasmi_hatem@yahoo.fr
Yassine Bouazzi, College of Engineering, University of Hail, Hail, Saudi Arabia, and University of Tunis El Manar, ENIT, Photovoltaic and Semiconductor Materials Laboratory, Tunis, Tunisia, yassine.bouazzi@gmail.com
Tawfik Guesmi, College of Engineering, University of Hail, Hail, Saudi Arabia, and University of Sfax, ENIS, Tunisia, tawfiq.guesmi@gmail.com

Abstract—Along with economic dispatch, emission dispatch has become a
key problem under market conditions. Thus, combining the two into a single problem, the economic emission dispatch (EED) problem, became inevitable. However, due to the dynamic nature of today's network loads, the thermal unit outputs must be scheduled in real time according to the variation of power demand over a given time period. Within this context, this paper presents an elitist technique, the second version of the non-dominated sorting genetic algorithm (NSGA-II), for solving the dynamic economic emission dispatch (DEED) problem. Several equality and inequality constraints, such as valve point loading effects, ramp rate limits, and prohibited operating zones (POZ), are taken into account. The DEED problem is therefore a non-convex optimization problem with multiple local minima, higher-order non-linearities, and discontinuities. A fuzzy-based membership function value assignment method is suggested to provide the best compromise solution from the Pareto front. The effectiveness of the proposed approach is verified on a standard power system with ten thermal units.

Keywords—dynamic environmental/economic dispatch; prohibited operating zones; multi-objective optimization; non-dominated sorting genetic algorithm

I. Introduction

In electric power systems, improving operation and planning has become more important under current market conditions, and several tools have been developed in this context [1, 2]. Economic load dispatch (ED) is one of them: it schedules the outputs of the committed generating units so as to minimize the total fuel cost under specific system equality and inequality constraints. This objective can no longer be considered alone, due to the severe environmental standards imposed by legislation.
In this respect, the Clean Air Act Amendments have been applied in the USA to reduce pollution and atmospheric emissions such as sulfur oxides (SOx) and nitrogen oxides (NOx) caused by fossil-fueled thermal units [3, 4]. Hence, improvements in dispatching electric power must consider both monetary profit and reduced emission of gaseous pollutants. We thus face a bi-objective minimization problem, frequently known as the static environmental/economic dispatch (SEED) problem. SEED can only handle a single loading condition at a particular time instant [3-8]. Due to the large variation of load demand and the dynamic nature of power systems in recent years, it is mandatory to schedule the generator outputs in real time according to the variation of power demand over a certain time period. There are several formulations of this problem, known as the dynamic environmental/economic dispatch (DEED) problem [9-12]. Generally, DEED is a dynamic optimization problem with the same objectives as SEED over a time period subdivided into smaller time intervals, respecting the constraints imposed on system operation by the generator ramp rate limits. The time period and time intervals can be one day and one hour, respectively; therefore, the operational decision at one hour may affect the operational decision at a later hour. Authors in [11-16] summarize several techniques for solving dispatch problems. Conventional methods, such as dynamic programming, nonlinear programming, the network flow method, and the interior point method [16], have been criticized because they are iterative, sensitive to the initial solution, and converge to local optima.
To overcome these difficulties, more recent works center around artificial intelligence (AI) techniques, such as genetic algorithms [17], tabu search [18], particle swarm optimization [19, 20], simulated annealing [12], differential evolution [13], and bacterial foraging [14]. These techniques have proven to have a clear edge over traditional methods in solving DEED problems, imposing few or no restrictions on the shape of the objective function curves and obtaining multiple Pareto-optimal solutions in a single run. Most past studies have focused only on the SEED problem, except for a few that consider the multi-objective DEED problem [14]. In [14], prohibited operating zones, ramp rate limit constraints, and valve point loading effects (VPLE) were considered, making the DEED highly nonlinear with discontinuous and non-convex cost functions. Within this context, this paper presents an elitist multi-objective approach for solving the DEED problem including POZ, valve point loading effects, and ramp rate limit constraints. The proposed method, the second version of the non-dominated sorting genetic algorithm (NSGA-II), incorporates a crowding distance comparison at the end of each iteration in order to facilitate the convergence of the optimization algorithm to the true Pareto-optimal front. In general terms, the contribution of this study is to show that the NSGA approach, frequently used for solving continuous problems, can be efficient for non-smooth and non-convex DEED problems if a non-domination sorting technique is incorporated into the optimization algorithm. In addition, the ramp rate limit constraints are considered during the transition from the last hour of a day to the first hour of the next. (Corresponding author: Ahmed Torchani)
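The crowding distance comparison mentioned above is the standard NSGA-II density estimator. As a sketch (not the authors' MATLAB code), it can be computed for a front of objective vectors like this:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors
    (e.g. (cost, emission) pairs). Boundary solutions of each
    objective get infinite distance; interior solutions accumulate
    the normalized span of their neighbors."""
    n = len(front)
    dist = [0.0] * n
    m = len(front[0])
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

In the crowded-comparison operator, solutions of equal non-domination rank are then preferred in order of decreasing crowding distance, which preserves the diversity of the Pareto front.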
A fuzzy set theory [5] is used to extract the best compromise solution from the Pareto-optimal front for the decision makers. The proposed approach was tested on a ten-unit test system incorporating all the above constraints and showed very competitive performance compared with the original NSGA algorithm.

II. Problem Formulation

The DEED problem is considered as a multi-objective problem (MOP). It aims to minimize simultaneously the total fuel cost and the total emission of the thermal units over a certain period of time subdivided into smaller time intervals, subject to several equality and inequality constraints. For a power system with N generators, the total fuel cost TC in $/h, including VPLE, and the emission E in ton/h are described by (1) and (2), respectively [20]:

TC = Σ_{t=1}^{T} Σ_{i=1}^{N} [ a_i + b_i P_i^t + c_i (P_i^t)^2 + | d_i sin( e_i (P_i^min − P_i^t) ) | ]    (1)

E = Σ_{t=1}^{T} Σ_{i=1}^{N} [ 10^{-2} ( α_i + β_i P_i^t + γ_i (P_i^t)^2 ) + η_i exp( λ_i P_i^t ) ]    (2)

where P_i^t is the real power output of the i-th unit at time t, T is the number of hours, a_i, b_i, c_i, d_i, e_i are the cost coefficients of the i-th unit, and α_i, β_i, γ_i, η_i, λ_i are the emission coefficients of the i-th unit. The objective functions TC and E are optimized subject to the constraints described below.

A. Generation limits

P_i^min ≤ P_i^t ≤ P_i^max,  i = 1, ..., N    (3)

B. Power balance constraints

The total power demand P_D^t and the total losses P_L^t must be covered at each time interval t:

Σ_{i=1}^{N} P_i^t − P_D^t − P_L^t = 0,  t = 1, ..., T    (4)

In this study, the total losses are expressed as follows [20]:

P_L^t = Σ_{i=1}^{N} Σ_{j=1}^{N} P_i^t B_ij P_j^t + Σ_{i=1}^{N} B_0i P_i^t + B_00    (5)

where B_ij, B_0i, and B_00 are called the B coefficients.

C. Ramp rate limits

P_i^{t−1} − P_i^t ≤ R_i^down    (6)

P_i^t − P_i^{t−1} ≤ R_i^up    (7)

where R_i^down and R_i^up are the down and up ramp rate limits of the i-th unit, respectively.

D. Constraints due to prohibited operating zones

P_i^min ≤ P_i^t ≤ P_i,1^L
P_i,k−1^U ≤ P_i^t ≤ P_i,k^L,  k = 2, ..., Z_i    (8)
P_i,Z_i^U ≤ P_i^t ≤ P_i^max

where Z_i is the number of prohibited operating zones of the i-th unit, and P_i,k^U and P_i,k^L are the upper and lower bounds of prohibited zone k.

III. Implementation of the Proposed Method

Multi-objective evolutionary algorithms using non-dominated sorting and sharing, such as NSGA and NPGA, have been criticized for their lack of elitism. Therefore, the second version of NSGA, called NSGA-II [21], is used in this study for solving the DEED problem. In this approach, the sharing function approach is replaced with a crowded comparison. Initially, an offspring population Q_t is created from the parent population P_t at the t-th generation. Then a combined population R_t is formed:

R_t = P_t ∪ Q_t    (9)

R_t is sorted into different non-domination levels F_j, so we can write:

R_t = ∪_{j=1}^{r} F_j    (10)

where r is the number of fronts. To offer higher precision with reduced CPU time, this algorithm has been implemented using a real-coded genetic algorithm, as in [5, 19].

IV. Results and Simulation

The effectiveness of the proposed optimization algorithm for solving the DEED problem is assessed on the ten-unit system. All system data are taken from [14, 20]. The B-loss coefficients are given in (11).
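For concreteness, the per-schedule evaluation of the objectives (1) and (2) can be sketched as follows. This is a minimal illustration in Python rather than the authors' MATLAB implementation, and the coefficient tuple layout is our own convention:

```python
import math

def fuel_cost(p, cost_coef, p_min):
    """Fuel cost (1) for one hour: quadratic cost plus the rectified-sine
    valve point loading term. cost_coef[i] = (a, b, c, d, e); p and p_min
    are per-unit outputs and minimum outputs in MW."""
    total = 0.0
    for i, pi in enumerate(p):
        a, b, c, d, e = cost_coef[i]
        total += a + b * pi + c * pi ** 2 + abs(d * math.sin(e * (p_min[i] - pi)))
    return total

def emission(p, em_coef):
    """Emission (2) for one hour. em_coef[i] = (alpha, beta, gamma, eta, lam);
    the quadratic part is scaled by 10^-2 and the exponential term is added."""
    total = 0.0
    for coef, pi in zip(em_coef, p):
        alpha, beta, gamma, eta, lam = coef
        total += 1e-2 * (alpha + beta * pi + gamma * pi ** 2) + eta * math.exp(lam * pi)
    return total
```

Summing these two functions over the 24 hourly dispatch vectors gives the TC and E objectives that NSGA-II minimizes simultaneously.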
0.49 0.14 0.15 0.15 0.16 0.17 0.17 0.18 0.19 0.20 0.14 0.45 0.16 0.16 0.17 0.15 0.15 0.16 0.18 0.18 0.15 0.16 0.39 0.10 0.12 0.12 0.14 0.14 0.16 0.16 0.15 0.16 0.10 0.40 0.14 0.10 0.11 0.12 0.14 0.15 0.16 0.17 0.12 0.14 0.35 0.11 0.13 0.13 0.4 10b −= 15 0.16 0.17 0.15 0.12 0.10 0.11 0.36 0.12 0.12 0.14 0.15 0.17 0.15 0.14 0.11 0.13 0.12 0.38 0.16 0.16 0.18 0.18 0.16 0.14 0.12 0.13 0.12 0.16 0.40 0.15 0.16 0.19 0.18 0.16 0.14 0.15 0.14 0.16 0.15 0.42 0.19 0.20 0.18 0.16 0.15 0.16 0.15 0.18 0.16 0.19 0.44                                  (11) the nsgaii algorithm is implemented in matlab r2009a on a 64-bit operating system on a pc with an intel i32370m cpu at 2.40ghz. the best compromise solution is generated from the pareto front using a fuzzy based membership function value assignment method [5]. the nsgaii parameters to find the best pareto set for the seed problem have been chosen by trial and error and they were used for the deed problem. in this study, the maximum number of generations and the population size were both chosen to be 200. • test case 1: the seed problem for the ten-unit system with pd=1036mw was considered in this case. optimal outputs of thermal units for best cost, best emission and best compromise solution have been computed using the proposed optimization algorithm. results have been compared with those obtained using nsga. • test case 2: the deed for the test system over a 24-hour time horizon was solved under all previous constraints. poz limits in mw shown in table i are taken from [22]. therefore the problem will be more complicated with discontinuities. the hourly variation of the load is depicted in figure 1. table i. 
Unit operating limits in MW

Unit  P_min  P_max  R_down  R_up  Prohibited zones
1     150    470    80      80    [150 165], [448 453]
2     135    470    80      80    [90 110], [240 250]
3     73     340    80      80    —
4     60     300    50      50    —
5     73     243    50      50    —
6     57     160    50      50    —
7     20     130    30      30    —
8     47     120    30      30    [20 30], [40 45]
9     20     80     30      30    —
10    10     55     30      30    [12 17], [35 45]

A. SEED Problem: Test Case 1

For the validation of the proposed algorithm, a comparison with the first version of NSGA is presented in this case. From Figure 2, it is clear that NSGA-II provides the best results and has better diversity characteristics of non-dominated solutions. From Table II, the minimum fuel cost and emission provided by NSGA-II are $61,775.44 and 3,785.47 lb respectively. Moreover, the highest value of the fuel cost is found at minimum emission, and the highest value of the emission corresponds to the minimum fuel cost, since the two objective functions are conflicting.

Fig. 1. Hourly load variation

Fig. 2. Pareto solutions with NSGA-II and NSGA (Case 1)

B. DEED Problem Considering All Constraints: Test Case 2

In this case, the generation output of each unit at each hour has been adjusted considering the POZ. Consequently, discontinuities are introduced in the cost and emission curves corresponding to the POZ. The hourly evolution of the optimum generations using the proposed algorithm for minimum cost is shown in Figure 3. It is clear that the outputs of all units are maximum at hour 12, which corresponds to the maximum load (2150 MW). In this sub-section, the optimum solution for minimum emission is not displayed due to space limitations. Table III shows the compromise solution extracted from the Pareto solutions. It is clear that the proposed scheduling of generations satisfies all previous constraints.

Fig. 3. Hourly evolution of the optimum solution for minimum cost

V.
Conclusion

The DEED problem is one of the most crucial issues to be solved in the power system field. It has great importance in reducing the emission of harmful gases and saving energy. In this study, the DEED problem has been formulated as a bi-objective optimization problem with nonlinear constraints including VPLE, ramp rate limits and prohibited operating zones. The second version of the non-dominated sorting genetic algorithm (NSGA-II) has been suggested for solving the DEED problem for 24-hour dispatch intervals. To demonstrate the effectiveness of the proposed approach, the standard power system with ten thermal units was used. Various cases with different levels of complexity and discontinuity have been considered. The results of the proposed approach are significantly improved when compared with NSGA. In addition, this approach has the capacity to optimize any number of objective functions simultaneously and generate the Pareto front in a single run. Therefore, other objectives can be included in the main problem, such as voltage drop and real power losses.

Table II.
Optimum solutions for Case 1

                 Minimum fuel cost      Minimum emission       Best compromise solution
Method           NSGA-II    NSGA       NSGA-II    NSGA        NSGA-II    NSGA
P1               165.657    165.304    165.465    165.277     165.092    165.400
P2               135.000    135.000    136.471    138.754     135.000    135.073
P3               73.0000    73.0000    85.943     89.8068     78.3817    73.0000
P4               60.0000    60.0000    87.8645    88.9776     85.9374    77.8433
P5               221.551    224.103    135.325    124.263     123.237    131.320
P6               120.835    118.647    126.733    126.764     128.576    124.444
P7               130.000    130.000    91.4739    97.8403     129.019    130.000
P8               120.000    120.000    91.9147    89.3054     84.4827    120.000
P9               20.0000    20.0000    79.6906    79.8941     79.5035    51.7425
P10              10.0000    10.0000    55.0000    55.0000     46.6442    47.0783
Cost ($/h)       61775.4    61802.6    63914.4    63905.5     62974.5    62486.2
Emission (lb/h)  4781.79    4800.55    3785.47    3785.51     3880.30    3998.43

Acknowledgment

The present work was undertaken within the research project (No: 160803), funded by the Deanship of Scientific Research at the University of Hail, which is gratefully acknowledged.

Table III. Best compromise solution of DEED for Test Case 2

Hour  P1        P2        P3        P4        P5        P6        P7        P8        P9       P10
1     165.0185  136.1190  73.0000   88.1843   125.1063  121.1142  130.0000  119.4657  52.5923  45.2700
2     165.1026  135.1579  81.5542   108.405   173.2150  122.5238  100.0000  120.0000  80.0000  46.7179
3     165.0594  135.7696  130.6567  127.3521  183.7788  159.8147  129.6961  120.0000  80.0000  54.5710
4     165.5635  182.8389  164.0515  160.2915  225.3338  159.3824  130.0000  119.8330  79.8469  54.9029
5     165.4107  202.1567  185.8223  182.477   240.2090  159.8194  129.8451  119.7818  79.8544  54.6686
6     205.1453  218.9835  245.006   228.7587  243.0000  160.0000  121.126   120.0000  80.0000  55.0000
7     200.0936  220.9863  294.5041  278.7587  243.0000  160.0000  106.1606  120.0000  77.1945  55.0000
8     232.4056  297.4300  259.7886  263.0712  243.0000  157.1222  127.8133  120.0000  80.0000  55.0000
9     295.7609  309.5763  339.7886  263.2257  241.7996  160.0000  130.0000  120.0000  80.0000  55.0000
10    317.6374  355.8988  340.0000  300.0000  243.0000  160.0000  130.0000  120.0000  80.0000  55.0000
11    358.3159  412.7551  340.0000  300.0000  243.0000  160.0000  130.0000  120.0000  74.8769  55.0000
12    379.7618  434.7244  339.9887  299.9990  242.9959  160.0000  129.9935  119.9976  80.0000  54.9757
13    346.8023  381.7131  340.0000  299.9520  242.9962  159.9909  130.0000  119.977   79.9899  55.0000
14    302.4077  308.1888  300.5633  295.9688  243.0000  160.0000  130.0000  119.9821  80.0000  55.0000
15    228.2163  264.2944  285.2806  270.8877  243.0000  159.4424  129.6214  119.887   79.9751  54.5439
16    165.3236  222.6713  209.3282  239.4035  243.0000  159.4886  129.903   120.0000  54.6538  54.5022
17    165.1747  216.9418  186.1643  189.4035  242.8615  159.9508  129.8344  119.8230  55.0000  55.0000
18    226.6097  223.5607  229.9497  234.5908  242.8186  160.0000  130.0000  119.9758  55.0000  54.7545
19    239.3959  299.4056  276.9271  258.0446  242.9016  159.8063  129.7669  119.8321  54.7896  54.8522
20    276.0364  342.8300  340.0000  300.0000  243.0000  160.0000  130.0000  120.0000  80.0000  55.0000
21    302.8172  308.8301  298.5865  298.2182  242.3453  159.9713  129.7167  119.7893  80.0000  54.8529
22    224.9485  228.8301  218.5865  258.7530  209.6452  160.0000  130.0000  120.0000  80.0000  46.5327
23    165.4124  149.0684  141.616   209.7076  166.2512  157.8307  128.9253  119.6998  79.8154  45.7563
24    165.5672  135.0000  73.0000   138.1843  175.1063  145.5599  123.5542  120.0000  80.0000  53.6633

Total cost ($): 2526555.7207    Total emission (lb): 302900.8703    Total losses (MW): 1301.8534

References

[1] F. Milano, "An open source power system analysis toolbox", IEEE Transactions on Power Systems, Vol. 20, No. 3, pp. 1199-1206, 2005
[2] R. D. Zimmerman, C. E. M. Sanchez, R. J. Thomas, "MATPOWER: Steady-state operations, planning and analysis tools for power systems research and education", IEEE Transactions on Power Systems, Vol. 26, No. 1, pp. 12-19, 2011
[3] S. Boudab, N.
Golea, "Combined economic-emission dispatch problem: Dynamic neural networks solution approach", Journal of Renewable and Sustainable Energy, Vol. 9, No. 3, Article ID 035503, 2017
[4] M. Basu, "Economic environmental dispatch using multi-objective differential evolution", Applied Soft Computing, Vol. 11, No. 2, pp. 2845-2853, 2011
[5] M. A. Abido, "Multiobjective evolutionary algorithms for electric power dispatch problem", IEEE Transactions on Evolutionary Computation, Vol. 10, No. 3, pp. 315-329, 2006
[6] S. Sivasubramani, K. S. Swarup, "Environmental/economic dispatch using multi-objective harmony search algorithm", Electric Power Systems Research, Vol. 81, No. 9, pp. 1778-1785, 2011
[7] G. C. Liao, "Solve environmental economic dispatch of smart microgrid containing distributed generation system - using chaotic quantum genetic algorithm", International Journal of Electrical Power & Energy Systems, Vol. 43, No. 1, pp. 779-787, 2012
[8] B. Hadji, B. Mahdad, K. Srairi, N. Mancer, "Multi-objective economic emission dispatch solution using dance bee colony with dynamic step size", Energy Procedia, Vol. 74, pp. 65-76, 2015
[9] K. Tlijani, T. Guesmi, H. H. Abdallah, "Extended dynamic economic environmental dispatch using multi-objective particle swarm optimization", International Journal on Electrical Engineering and Informatics, Vol. 8, No. 1, pp. 117-131, 2016
[10] H. Ma, Z. Yang, P. You, M. Fei, "Multi-objective biogeography-based optimization for dynamic economic emission load dispatch considering plug-in electric vehicles charging", Energy, Vol. 135, pp. 101-111, 2017
[11] S. Hemamalini, S. P. Simon, "Dynamic economic dispatch using artificial bee colony algorithm for units with valve-point effect", European Transactions on Electrical Power, Vol. 21, No. 1, pp. 70-81, 2011
[12] C. K. Panigrahi, P. K. Chattopadhyay, R. N. Chakrabarti, M. Basu, "Simulated annealing technique for dynamic economic dispatch", Electric Power Components and Systems, Vol. 34, No. 5, pp.
577-586, 2006
[13] R. Balamurugan, S. Subramanian, "An improved differential evolution based dynamic economic dispatch with nonsmooth fuel cost function", Journal of Electrical Systems, Vol. 3, No. 3, pp. 151-161, 2007
[14] N. Pandit, A. Tripathi, S. Tapaswi, M. Pandit, "An improved bacterial foraging algorithm for combined static/dynamic environmental economic dispatch", Applied Soft Computing, Vol. 12, No. 11, pp. 3500-3513, 2012
[15] H. Rezaie, M. H. K. Rahbar, B. Vahidi, H. Rastegar, "Solution of combined economic and emission dispatch problem using a novel chaotic improved harmony search algorithm", Journal of Computational Design and Engineering, Vol. 6, No. 3, pp. 447-467, 2019
[16] G. Irisarri, L. M. Kimball, K. A. Clements, A. Bagchi, P. W. Davis, "Economic dispatch with network and ramping constraints via interior point methods", IEEE Transactions on Power Systems, Vol. 13, No. 1, pp. 236-242, 1998
[17] S. Ganjefar, M. Tofighi, "Dynamic economic dispatch solution using an improved genetic algorithm with non-stationary penalty functions", European Transactions on Electrical Power, Vol. 21, No. 3, pp. 1480-1492, 2011
[18] W. M. Lin, F. S. Cheng, M. T. Tsay, "An improved tabu search for economic dispatch with multiple minima", IEEE Transactions on Power Systems, Vol. 17, No. 1, pp. 108-112, 2002
[19] K. Mason, J. Duggan, E. Howley, "Multi-objective dynamic economic emission dispatch using particle swarm optimisation variants", Neurocomputing, Vol. 270, pp. 188-197, 2017
[20] M. Basu, "Particle swarm optimization based goal-attainment method for dynamic economic emission dispatch", Electric Power Components and Systems, Vol. 34, No. 9, pp. 1015-1025, 2006
[21] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II", IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, pp. 182-197, 2002
[22] M. Z. Jahromi, M. M. H. Bioki, M. Rashidinejad, R.
Fadaeinedjad, "Solution to the unit commitment problem using an artificial neural network", Turkish Journal of Electrical Engineering and Computer Sciences, Vol. 21, pp. 198-212, 2013

Engineering, Technology & Applied Science Research, Vol. 10, No. 2, 2020, 5477-5482 | www.etasr.com — Zebar & Madani: Power System Transient Stability Enhancement Using SFCL and SMES

SFCL-SMES Control for Power System Transient Stability Enhancement Including SCIG-Based Wind Generators

Abdelkrim Zebar, Electrical Engineering Department, University Ferhat Abbas Setif 1, Setif, Algeria, zebarkarim@yahoo.fr
Lakhdar Madani, Electrical Engineering Department, University Ferhat Abbas Setif 1, Setif, Algeria, zebarkarim@yahoo.fr

Abstract — Addressing environmental pollution depends on renewable energy sources, such as wind energy systems. These systems face transient and voltage stability issues when the wind energy conversion employs fixed-speed induction generators, which can be augmented with resistive-type superconducting fault current limiter (SFCL) and superconducting magnetic energy storage (SMES) devices. The use of a combined model based on SFCL and SMES for promoting the transient and voltage stability of a multi-machine power system containing fixed-speed induction generators is the primary focus of this study. Our contribution is the development of a new model that combines the advantages of SFCL and SMES. The proposed model assures flexible control of reactive power using the SMES controller while reducing the fault current using the superconducting-technology-based SFCL. The effectiveness of the proposed combined model is tested on the IEEE 11-bus test system for the case of a three-phase short circuit fault in one transmission line.

Keywords — distributed wind generation (DWG); superconducting fault current limiter (SFCL); superconducting magnetic energy storage (SMES); transient stability

I.
Introduction

With the increased penetration of distributed generation (DG), induction machines are widely used as wind generators. Induction machines face stability problems similar to the transient stability problems of synchronous machines [1, 2]. It is therefore important to analyze the transient stability of power systems that include wind power stations. In power system stability studies, the term transient stability usually refers to the ability of the synchronous machines to remain in synchronism during the brief periods that follow large disturbances, such as severe lightning strikes, loss of heavily loaded transmission lines, loss of generation stations, or short circuits on buses [3, 4]. On the other hand, the braking resistor (BR) has long been recognized and used as a cost-effective measure for the transient stability control of synchronous generators. According to some recent reports, BRs can be used for wind generator stabilization as well [5, 6]. The selection of a suitable device for the stabilization of fixed-speed wind generators is a matter of interest. The static synchronous compensator (STATCOM) and the static VAR compensator (SVC) are reported to be able to stabilize fixed-speed wind generators [7, 8]. Research on the application of superconducting devices in power systems, such as the SMES as a tool for the stabilization of grid-connected wind generator systems [9-11], has developed recently. An SMES is a large superconducting coil capable of storing electric energy in the magnetic field generated by the direct current (DC) flowing through it; real and reactive power can be absorbed by (charging) or released from (discharging) the coil according to the system power requirements. SFCLs can suppress short-circuit currents using the unique quench characteristics of superconductors: in the event of a fault, the superconductor undergoes a transition into its normal state (quenching).
After quenching, the current is commutated to a shunt resistance and is then limited rapidly [12-14]. In this paper, the potential influence of the combined application of SFCLs and the shunt-connected SMES controller is proposed and investigated for improving both the transient stability and the voltage regulation of a power system containing distributed wind generation based on conventional fixed-speed induction generators. Moreover, the optimal location of the proposed coordinated controller (SFCL-SMES) is also analyzed. The effectiveness of the proposed combined model is tested on the IEEE 11-bus test system for the case of a three-phase short circuit fault in one transmission line. Simulation results for the system under study are presented and discussed. They show that the optimal location selected by the proposed method improves the transient stability of the power system when a fault occurs.

Corresponding author: Abdelkrim Zebar

II. Mathematical Model

This section gives a mathematical model for the power system network, which includes modelling of the synchronous generator, DWG, SFCL, and SMES.

A. Synchronous Generator

For transient stability analysis, a synchronous machine is considered as a classic fourth-order model (Figure 1) and is simulated in MATLAB/Simulink [15]. The system's basic elements are: δ is the power angle of the generator, Δω is the rotor speed deviation with respect to the synchronous reference, H is the inertia constant of the generator, T_m is the mechanical input torque to the generator, which is assumed to be constant, T_e is
the electromagnetic torque of the generator, D is the damping constant of the generator, E'_q is the quadrature-axis transient voltage, V_ref is the reference voltage, T'_d0 is the direct-axis open-circuit transient time constant of the generator, T'_q0 is the quadrature-axis open-circuit transient time constant of the generator, x_d is the direct-axis synchronous reactance, x'_d is the direct-axis transient reactance, x'_q is the quadrature-axis transient reactance, V_t is the terminal voltage of the generator, and i_d and i_q are the direct- and quadrature-axis currents of the generator, respectively.

Fig. 1. The synchronous generator model

B. DWG

DWGs contain many wind turbines, and their detailed modeling may be unaffordable due to the computational burden. In order to reduce dimensionality, aggregation techniques are used to obtain equivalent models. A proper equivalent model can be easily obtained for fixed-speed wind turbines, where a one-to-one correspondence between wind speed and active power output exists. In this case, aggregation is performed by adding the mechanical power of each wind turbine and by using an equivalent squirrel cage induction generator (SCIG) which receives the total mechanical power [16-18]. A simplified transient model of a SCIG is given in [19]. The DWG penetration level in the system is defined as [20]:

$$ \%DWG = \frac{P_{DWG}}{P_{DWG} + P_{CG}} \times 100 \quad (1) $$

where $P_{DWG}$ and $P_{CG}$ are the amounts of total active power generated by the DWG and by centralized generation, respectively.

C. SMES

Figure 2 shows the basic configuration of a thyristor-based SMES unit, which consists of a Y-delta transformer, an AC/DC thyristor-controlled bridge converter, and a superconducting coil or inductor [21]. The converter impresses positive or negative voltage on the superconducting coil.
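The penetration level defined in (1) reduces to a one-line helper. A minimal sketch; the 200 MW / 800 MW example values are hypothetical, not data from the test system:

```python
def dwg_penetration(p_dwg: float, p_cg: float) -> float:
    """Percentage of total active power supplied by the DWG, per (1)."""
    return 100.0 * p_dwg / (p_dwg + p_cg)

# Hypothetical example: 200 MW of wind against 800 MW of centralized generation
level = dwg_penetration(200.0, 800.0)  # 20% penetration
```

A 20% penetration level is the one considered in Case 1 below.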
Charge and discharge are easily controlled by simply changing the delay angle α that controls the sequential firing of the thyristors:

• If α is less than 90°, the converter operates in the rectifier mode (charging).
• If α is greater than 90°, the converter operates in the inverter mode (discharging).

As a result, power can be absorbed from or released to the power system according to the requirements. At steady state, the SMES should not consume any real or reactive power. The voltage $V_{sm}$ of the DC side of the converter is expressed by:

$$ V_{sm} = V_{sm0} \cos\alpha \quad (2) $$

where $V_{sm0}$ is the ideal no-load maximum DC voltage of the bridge.

Fig. 2. Typical schematic diagram of an SMES unit

The current and voltage of the superconducting inductor are related as:

$$ I_{sm} = \frac{1}{L_{sm}} \int_{t_0}^{t} V_{sm}\, d\tau + I_{sm0} \quad (3) $$

where $I_{sm0}$ is the initial current of the inductor. The real power $P_{sm}$ absorbed or delivered by the SMES is given by:

$$ P_{sm} = V_{sm} I_{sm} \quad (4) $$

The energy stored in the superconducting inductor is:

$$ E_{sm} = E_{sm0} + \int_{t_0}^{t} P_{sm}\, d\tau \quad (5) $$

where $E_{sm0} = \frac{1}{2} L_{sm} I_{sm0}^2$ is the initial energy in the inductor. This is applicable for the twelve-pulse converter as well [21]. Since the bridge current $I_{sm}$ is not reversible, the bridge output power $P_{sm}$ can be positive or negative depending on $V_{sm}$. If $V_{sm}$ is positive, power is transferred from the power system to the SMES, while if it is negative, power is released from the SMES unit. The thyristor-based SMES using a six-pulse converter is simulated in MATLAB/Simulink, as shown in Figure 3.

Fig. 3. Circuit of SMES

D. SFCL

Depending on the superconducting materials and operation principles, superconducting fault current limiters can be classified into different types [22].
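The SMES relations (2)-(5) above can be checked with a simple forward-Euler integration. This is only a sketch: the values of L_sm, V_sm0, I_sm0, the firing-angle sequence, and the time step are illustrative assumptions, not parameters from this paper:

```python
import math

def simulate_smes(alphas_deg, dt=1e-3, l_sm=0.5, v_sm0=2000.0, i_sm0=1000.0):
    """Forward-Euler integration of (2)-(5) for a sequence of firing angles.
    alpha < 90 deg gives V_sm > 0 (charging); alpha > 90 deg discharges."""
    i_sm = i_sm0
    e_sm = 0.5 * l_sm * i_sm0 ** 2                    # initial stored energy E_sm0
    for alpha in alphas_deg:
        v_sm = v_sm0 * math.cos(math.radians(alpha))  # (2)
        i_sm += (v_sm / l_sm) * dt                    # (3)
        p_sm = v_sm * i_sm                            # (4)
        e_sm += p_sm * dt                             # (5)
    return i_sm, e_sm

# Charging at alpha = 30 deg for 0.1 s: coil current and stored energy both rise
i_end, e_end = simulate_smes([30.0] * 100)
```

Holding α below 90° makes V_sm positive, so both the coil current and the stored energy grow, which is the charging mode described above; the discrete energy tally in (5) stays consistent with ½ L_sm I_sm² up to the Euler step error.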
In the resistive type, the superconductor is directly connected in series with the line to be protected, while in the inductive concept the superconductor is magnetically coupled into the line [23].

Fig. 4. Modified transmission line with SFCL

The SFCL is a device that limits the fault current by generating an impedance when a fault occurs. In addition, the limiting impedance generated to limit fault currents is helpful in increasing the generator output degraded by a fault, thus providing stabilization. SFCLs installed in series with transmission lines can be operated during the period from the fault occurrence to fault clearing [24]. The equivalent circuit of the transmission line with SFCL is illustrated in Figure 4. The associated equation for $r_{sfcl}$ can be described by:

$$ r_{sfcl}(t) = r_m \left(1 - e^{-t/t_{sc}}\right) \quad (6) $$

where $r_m$ is the expected maximum value of the SFCL resistance in the normal state ($r_m \approx 20\,\Omega$) and $t_{sc}$ is the time constant of the transition from the superconducting state to the normal state, which is assumed to be 1 ms. The three-phase resistive SFCL model is simulated in MATLAB/Simulink, as shown in Figure 5.

Fig. 5. The three-phase resistive SFCL model

III. Simulation Results

To investigate the efficiency and robustness of the proposed SFCL-SMES based controller regarding power system transient stability in the presence of distributed wind generation, the model is integrated in the IEEE benchmark four-machine two-area test system for the case of a three-phase short circuit fault in the transmission line. The test system consists of eleven buses, four synchronous generators connected to buses 1, 2, 3, and 4 respectively through transformers, which contribute to the supply of two loads through transmission lines, and two fully symmetrical areas linked together by two 230 kV lines of 220 km length [25, 26]. A DWG is connected to each of the load buses. The configuration is shown in Figure 6.

Fig. 6.
On-line diagram of the electrical test system considering the combined SFCL and SMES controller

Technical data such as the voltage regulators, governor turbines, and bus and branch information are given in [23]. The transient stability is assessed by the criterion of relative rotor angles, using the time-domain simulation method. The SimPowerSystems toolbox of MATLAB/Simulink was used to carry out the simulations.

IV. Optimal Location of SFCL-SMES

For the secure operation of power systems, it is required to maintain an adequate voltage stability margin, not only under normal conditions but also in contingency cases. In this study, the voltage stability index obtained using continuation power flow is proposed for the optimal location of the SMES and the SFCL. From the continuation power flow (CPF) results shown in Figure 7 [18], buses 5, 6, 7, 8, 9, 10, and 11 are the critical buses. Among these buses, bus 8 has the weakest voltage profile. Figure 8 shows the PV curves for the IEEE four-machine two-area test system without considering SFCL and SMES.

Fig. 7. Curves for the IEEE four-machine two-area test system

At first, the buses are classified according to three procedures:

Procedure 1: All buses are classified according to the voltage stability index. In this study, bus 8 is considered as a candidate bus; the main role of the shunt controller (SMES) is to control the voltage at this bus by exchanging reactive (capacitive or inductive) power with the network.

Procedure 2: Buses are classified according to the value of the fault currents (three-phase fault).

Procedure 3: Buses are classified according to the reactive power compensation consumed by the DWG. The DWG will generate an active power equal to the amount of power consumed by the load.
However, in order to generate this necessary active power, the DWG needs to consume reactive power from the network. Bus 9 is considered as the point of common coupling (PCC) where the WG is connected, and the main role of the SMES is to compensate for this reactive power.

Fig. 8. Critical buses based on continuation power flow

V. Impact of the SFCL-SMES Controller on Power System Transient Stability Enhancement

Three logic cases are considered. In the base case, which indicates the original system, there is no SFCL and no SMES in the system. In the second case, the SMES is at the weak bus (low voltage stability index) and the SFCL at a bus which has a high fault current. In the third case, the SMES is at the PCC and the SFCL at another bus with a high fault current.

A. Case 1

A 3-phase fault occurs at t = 1 s on line 7-8 near bus 8, and it is cleared by opening the line at both ends. A WG at bus 9 is considered, with a penetration level of 20%. Generator 2 is the nearest generator to the fault location and therefore has the largest rotor speed deviation for this fault. The fault clearing time is FCT = 0.266 s at first and then FCT = 0.300 s. Simulation results for the rotor angle differences and the rotor speed deviations of the four generators without the SFCL-SMES controller are shown in Figures 9-10, respectively. It can be seen that the relative rotor angles are damped and consequently the system maintains its stability, but when the FCT is increased to 0.300 s, the relative angles (δ14, δ24 and δ34) increase indefinitely, so in this critical situation the system loses its stability.

Fig. 9. Relative rotor angles without SFCL-SMES

Fig. 10. Rotor speed deviation without SFCL-SMES

B. Case 2

In order to maximize the voltage stability index and to improve the power system transient stability, the SMES is located at the weak bus (low voltage stability index) and the SFCL is placed in line 7-8, which has a high fault current.
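With the values stated for (6), r_m ≈ 20 Ω and t_sc = 1 ms, the limiting resistance of the SFCL placed in line 7-8 is essentially fully developed a few milliseconds after fault onset. A minimal sketch of (6):

```python
import math

R_M = 20.0    # expected maximum SFCL resistance in the normal state (ohm)
T_SC = 1e-3   # time constant of the superconducting-to-normal transition (s)

def r_sfcl(t: float) -> float:
    """Quench resistance of the resistive SFCL, t seconds after fault onset, per (6)."""
    return R_M * (1.0 - math.exp(-t / T_SC))

# After five time constants (5 ms) the limiter is above 99% of R_M
r_5tc = r_sfcl(5e-3)
```

Since the resistance reaches over 99% of r_m within 5 ms, the limiter is fully active well inside the fault-clearing times of a few hundred milliseconds considered in these cases.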
The SMES will try to support the voltage by injecting reactive power into the line when the voltage is lower than the reference voltage. The fault mentioned in the previous sub-section is applied again. Time-domain simulation was performed with a clearing time of 0.333 s.

Fig. 11. Relative rotor angles considering SFCL-SMES

We can see from Figure 11 that the maximum relative rotor angles are δ14 = 13°, δ24 = 28°, and δ34 = 14°; the relative rotor angles δ14, δ24, and δ34 are damped, and therefore the system becomes more stable in comparison with the first case. The critical clearing time is enhanced to a new value (0.483 s).

C. Case 3

In Case 2, the SFCL was placed in line 7-8, which had a high fault current, and the SMES was located at the weak bus. In this case, the SMES is placed at the PCC. The purpose becomes to reduce the current in line 7-8 (high fault current) and to dynamically maximize the voltage stability index. In this case, the SMES compensates the reactive power consumed by the DWG, and the fault current is reduced by the SFCL in order to dynamically enhance the performance of the SMES during the fault; in addition, the required size of the SMES is reduced (economic aspect). As a result, the reactive powers delivered by the generating units are reduced. Compared to the two other cases, the critical clearing time is improved. The SFCL is placed in line 7-8. The fault mentioned in Case 1 is applied again. The fault is cleared after 0.427 s. In Figure 12, we can see that the maximum relative rotor angles are δ14 = 15.61°, δ24 = 14.54°, and δ34 = 1.68°; the relative rotor angles are damped, and therefore the system becomes more stable than in the first two cases.
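Critical clearing times such as those reported in these cases are found by repeated time-domain runs: a clearing time is stable if the rotor angles settle, unstable if they grow indefinitely, and the CCT is the boundary between the two. A single-machine-infinite-bus sketch illustrates the bisection idea; all parameters (H, D, the per-unit power limits, the 50 Hz base) are hypothetical, not the IEEE 11-bus test-system data:

```python
import math

# Hypothetical SMIB parameters (per unit on a 50 Hz base); not the test-system data.
H, D, PM = 3.5, 2.0, 0.9
P_MAX_PRE, P_MAX_FAULT, P_MAX_POST = 2.0, 0.3, 1.5
OMEGA_S = 2.0 * math.pi * 50.0

def stable_after_clearing(t_clear, t_end=5.0, dt=1e-4):
    """Integrate the swing equation through the fault-on and post-fault stages;
    declare instability if the rotor angle passes pi rad."""
    delta = math.asin(PM / P_MAX_PRE)  # pre-fault equilibrium angle
    omega = 0.0                        # rotor speed deviation (rad/s)
    t = 0.0
    while t < t_end:
        p_max = P_MAX_FAULT if t < t_clear else P_MAX_POST
        p_e = p_max * math.sin(delta)
        accel = (OMEGA_S / (2.0 * H)) * (PM - p_e - D * omega / OMEGA_S)
        delta += omega * dt
        omega += accel * dt
        if delta > math.pi:
            return False
        t += dt
    return True

def critical_clearing_time(lo=0.0, hi=1.0, tol=1e-3):
    """Bisect on the clearing time: the CCT separates stable from unstable runs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stable_after_clearing(mid):
            lo = mid
        else:
            hi = mid
    return lo

cct = critical_clearing_time()
```

Bisection halves the uncertainty interval at each run, so about ten simulations locate the CCT to millisecond resolution, which is the kind of procedure behind CCT figures like 0.271 s, 0.483 s, and 0.511 s.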
It can also be seen that the system response with the SMES at the PCC is better than that with the SMES at the weak bus, in the sense of reduced settling time. The critical clearing time is enhanced to a new value, 0.511 s.

Fig. 12. Relative rotor angles considering SFCL-SMES

Table I. Stability margin (CCT)

Case    Controller                                                                                  CCT (s)
1       Without SFCL and SMES                                                                       0.271
2       With SMES and SFCL: SMES at the weak bus (low voltage stability index),
        SFCL at a bus with high fault current                                                       0.483
3       With SMES and SFCL: SMES at the PCC, SFCL at a bus with high fault current                  0.511

It is important to note that the integration of the SMES in coordination with the SFCL at suitable locations may help the system improve transient stability. Table I shows the values of the stability margin (CCT) obtained for the different cases.

VI. Conclusion

Power systems are facing new challenges such as the utilization of renewable energy sources and distributed generation, increased demand, limited resources, environmental regulations, and competitive electricity markets. This poses potential problems to power systems from the perspective of management. In addition, decentralized production, in particular wind generation, does not provide system services. That is why the recent penetration of this resource is limited and conditioned on participation in system services such as voltage regulation, control of power flow, damping of power oscillations, reactive power compensation, load balancing and transient stability. Superconducting fault current limiters (SFCL) and superconducting magnetic energy storages (SMES) can be a solution to these problems. In this study, the transient stability improvement of a multi-machine power system containing a large DWG via the coordinated application of SFCL and SMES was studied.
The results of the simulations performed on the IEEE benchmark four-machine two-area test system in the presence of distributed wind generation and considering a three-phase short circuit clearly indicate that the proposed combined controller, when placed at suitable locations, can be an effective means of enhancing the stability margin and extending the critical clearing time in a multi-machine power system.

References

[1] M. H. Ali, B. Wu, "Comparison of stabilization methods for fixed-speed wind generator systems", IEEE Transactions on Power Delivery, Vol. 25, No. 1, pp. 323-331, 2010
[2] N. E. Akpeke, C. M. Muriithi, C. Mwaniki, "Contribution of FACTS devices to the transient stability improvement of a power system integrated with a PMSG-based wind turbine", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 4893-4900, 2019
[3] A. Karami, S. Z. Esmaili, "Transient stability assessment of power systems described with detailed models using neural networks", International Journal of Electrical Power and Energy Systems, Vol. 45, No. 1, pp. 279-292, 2013
[4] A. S. Saidi, M. B. Slimene, M. A. Khlifi, "Transient stability analysis of photovoltaic system with experimental shading effects", Engineering, Technology & Applied Science Research, Vol. 8, No. 6, pp. 3592-3597, 2018
[5] M. Aten, J. Martinez, P. J. Cartwright, "Fault recovery of a wind farm with fixed speed induction generators using a STATCOM", Wind Engineering, Vol. 29, No. 4, pp. 365-375, 2005
[6] H. Gaztanaga, I. E. Otadui, D. Ocnasu, S. Bacha, "Real-time analysis of the transient response improvement of fixed-speed wind farms by using a reduced-scale STATCOM prototype", IEEE Transactions on Power Systems, Vol. 22, No. 2, pp. 658-666, 2007
[7] M. H. Ali, T. Murata, J. Tamura, "Effect of coordination of optimal reclosing and fuzzy controlled braking resistor on transient stability during unsuccessful reclosing", IEEE Transactions on Power Systems, Vol. 21, No. 3, pp.
1321-1330, 2006
[8] A. Causebrook, D. J. Atkinson, A. G. Jack, "Fault ride-through of large wind farms using series dynamic braking resistors (March 2007)", IEEE Transactions on Power Systems, Vol. 22, No. 3, pp. 966-975, 2007
[9] S. Nomura, Y. Ohata, T. Hagita, H. Tsutsui, S. T. Iio, R. Shimada, "Wind farms linked by SMES systems", IEEE Transactions on Applied Superconductivity, Vol. 15, No. 2, pp. 1951-1954, 2005
[10] M. H. Ali, T. Murata, J. Tamura, "Minimization of fluctuations of line power and terminal voltage of wind generator by fuzzy logic-controlled SMES", International Review of Electrical Engineering, Vol. 1, No. 4, pp. 559-566, 2006
[11] M. H. Ali, T. Murata, J. Tamura, "Wind generator stabilization by PWM voltage source converter and chopper controlled SMES", Record of ICEM (International Conference on Electrical Machines) 2006, 2006
[12] B. W. Lee, J. Sim, K. B. Park, I. S. Oh, "Practical application issues of superconducting fault current limiters for electric power systems", IEEE Transactions on Applied Superconductivity, Vol. 18, No. 2, pp. 620-623, 2008
[13] B. C. Sung, D. K. Park, J. W. Park, T. K. Ko, "Study on optimal location of a resistive SFCL applied to an electric power grid", IEEE Transactions on Applied Superconductivity, Vol. 19, No. 3, pp. 2048-2052, 2009
[14] B. C. Sung, D. K. Park, J. W. Park, T. K. Ko, "Study on a series resistive SFCL to improve power system transient stability: Modeling, simulation, and experimental verification", IEEE Transactions on Industrial Electronics, Vol. 56, No. 7, pp. 2412-2419, 2009
[15] N. A. Tabak, Stabilite Dynamique des Systemes Electriques Multimachines: Modelisation, Commande, Observation et Simulation, PhD thesis, University of Lyon, 2008 (in French)
[16] H. A. P.
painemal, wind farm model for power system stability analysis, phd thesis, university of illinois at urbana-champaign, 2010 [17] a. zebar, a. hamouda, k. zehar, “impact of the location of fuzzy controlled static var compensator on the power system transient stability improvement in presence of distributed wind generation”, revue roumaine des sciences techniques-serie electrotechnique et energetique, vol. 60, no. 4, pp. 426–436, 2015 [18] s. h. e. osman, g. k. irungu, d. k. murage, “application of fvsi, lmn and cpf techniques for proper positioning of facts devices and scig wind turbine integrated to a distributed network for voltage stability enhancement”, engineering, technology & applied science research, vol. 9, no. 5, pp. 4824-4829, 2019 [19] n. k. roy, m. j. hossain, h. r. pota, “voltage profile improvement for distributed wind generation using d-statcom”, ieee power and energy society general meeting, detroit, usa, july 24-28, 2011 [20] m. reza, p. h. schavemaker, j. g. slootweg, w. l. kling, l. v. d. sluis, “impacts of distributed generation penetration levels on power systems transient stability”, ieee power engineering society general meeting, denver, usa, june 6-10, 2004 [21] m. h. ali, b. wu, r. a. dougal, “an overview of smes applications in power and energy systems”, ieee transactions on sustainable energy, vol. 1, no. 1, pp. 38-47, 2010 [22] m. noe, m. steurer, “high-temperature superconductor fault current limiters: concepts, applications, and development status”, superconductor science and technology, vol. 20, no. 3, pp. r15-r29, 2007 [23] s. nemdili, s. belkhiat, “electrothermal modeling of coated conductor for a resistive superconducting fault-current limiter”, journal of superconductivity and novel magnetism, vol. 26, pp. 2713-2720, 2013 [24] m. sjostrom, r. cherkaoui, b. dutoit, “enhancement of power system transient stability using superconducting fault current limiters”, ieee transactions on applied superconductivity, vol. 9, no. 2, pp. 
13281330, 1999 [25] m. klein, g. j. rogers, s. moorty, p. kundur, “analytical investigation of factors influencing power system stabilizers performance”, ieee transactions on energy conversion, vol. 7, no. 3, pp. 382-390, 1992 [26] p. kundur, power system stability and control, mcgraw-hill, 1994 microsoft word 4-mevaa.doc etasr engineering, technology & applied science research vol. 1, �o. 2, 2011, 43-48 43 www.etasr.com meva’a et al: model and reduction of inactive times in a maintenance workshop… model and reduction of inactive times in a maintenance workshop following a diagnostic error l. meva’a r. danwé t. beda department of mechanical engineering department of mechanical engineering department of mechanical engineering national advanced school of engineering national advanced school of engineering national advanced school of engineering yaoundé, cameroon yaoundé, cameroon yaoundé, cameroon lucien_mevaa@hotmail.com rdanwe@yahoo.fr abstract — the majority of maintenance workshops in manufacturing factories are hierarchical. this arrangement permits quick response in advent of a breakdown. reaction of the maintenance workshop is done by evaluating the characteristics of the breakdown. in effect, a diagnostic error at a given level of the process of decision making delays the restoration of normal operating state. the consequences are not just financial loses, but loss in customers’ satisfaction as well. the goal of this paper is to model the inactive time of a maintenance workshop in case that an unpredicted catalectic breakdown has occurred and a diagnostic error has also occurred at a certain level of decisionmaking, during the treatment process of the breakdown. we show that the expression for the inactive times obtained, is depended only on the characteristics of the workshop. %ext, we propose a method to reduce the inactive times. keyword: hierachical system; catalectic breakdown; diagnostic error; model, inactive time. i. 
I. Introduction

A competitive environment puts companies under a lot of pressure: they have to meet production goals and also gain market share. In this context, margins for error are reduced, and unforeseen breakdowns [1, 2] of production tools can prove disruptive. It is the responsibility of the maintenance workshop to resolve such events in the shortest possible time, and restoration to the normal state can be considered an indicator of the workshop's performance. Various works have been dedicated to system performance (e.g. [3, 4]), all with the same objective: its improvement. Regnier first approached the topic of the reactivity of systems facing a disruptive event [5], followed by Humez [6]. Both proposed a model of systems based on a multi-leveled structure, the GRAI decision-making model [7, 8]. Recently, the authors developed a model to express the reaction of a medical unit as a function of different parameters, notably the reference periods of the different levels at which decisions are made [9]. The same model is employed in this paper. We consider a multi-leveled structure for the organization of maintenance workshops. Regarding the return to the normal state, the general objective is divided into sub-objectives of acceptable size and complexity; the difficulties of aggregating heterogeneous information and the loss of communication between decision levels can thus be removed. In the case of a diagnostic error, the error can have repercussions right up to the top of the structure. In the first part of the paper we present the hypotheses of our work; in the second part we propose a model of the inactive time following a diagnostic error; next, we propose a method to reduce the inactive times; and we end with a numerical application of the approach.

II. Hypotheses of the Study

We consider that the maintenance workshop is hierarchical and multi-leveled.
Therefore, several levels of decision-making exist, some of which are shown in Figure 1. The treatment of a catalectic breakdown, which makes production tools unavailable, follows a precise process based on the following hypotheses:
• We consider the arrival of an unexpected breakdown at a post to be a disruptive event for the maintenance workshop.
• We consider the most unfavorable case of the disruption: it appears at level 0, is not treated there, and has repercussions right up to the nth level, where it is finally treated.
• Regarding the propagation of the event, we consider that the disturbance appears at a level where it is not treated and has repercussions at higher levels. This repercussion moves from one level to the next until it reaches the level where it is treated.
• We consider the functioning to be periodic. The treatment has two phases: an upstream (ascending) phase, from lower levels to higher levels, and a downstream phase, which corresponds to the transmission of the response from the level that elaborates it to a lower level, which applies it. In both phases, the passage from one level to the next is done at the end of the period. This conduct is said to be periodic.
• Transmission of the event or of the response from one level to the next is not instantaneous: there is a non-zero transmission delay, upstream and downstream, between two consecutive levels.
• At each level, there is a shift (which could be zero) between the reference date (time origin t_0) and the start date of the reference period of level k, denoted x^k(0). This shift is not necessarily the same for all levels.
• A diagnostic error occurs only at level 0 of the upstream phase, and it is only noticed at higher levels, up to level n (the last level).
• Once a diagnostic error is discovered at a given level, the management is no longer periodic from the level where it is discovered down to level 0 and back up to this same level.

III. Model of the Inactive Time

A. Model of the delay in reaction

The objective, presented in Figure 2, is to express the reaction delay of the system as a function of the occurrence date of an unwanted event and of the system parameters, notably the start dates of the reference periods of the different levels involved in the treatment:

t = f[u^0, {x^k(0)}, k = 0, 1, ..., n]

Fig. 1. Example of the process on two levels.
Fig. 2. Objective of the model, where x^k(0) is the initialization date of the reference period, u^0 the occurrence date of the event, and t the reaction time of the maintenance workshop.

We associate a sub-process with every passage of the event through a level. Every level k, except the highest level (k = n), therefore has two sub-processes, sp_k and sp_{2n-k}, which treat the event upstream and downstream respectively, as shown in Figure 3. Level n, which treats the event, has only a single sub-process, sp_n. The process therefore has in total 2n+1 sub-processes (0, 1, ..., 2n). In every sub-process sp_i except the last one, the event in the upstream phase (or the reaction in the downstream phase) passes through four successive states, as presented in Table I. The last sub-process, sp_2n, has only the first three states.

Table I. The different states of treatment

State | Upstream phase                    | Downstream phase                      | Duration
E1    | evaluation of the gravity         | verification of coherence             | t_{i,1}
E2    | preliminary treatment             | elaboration of the decision framework | t_{i,2}
E3    | waiting for the end of the period | waiting for the end of the period     | t_{i,3}
E4    | transfer to a higher level        | transfer to a lower level             | t_{i,4}

The appreciation of the gravity of the event in state E1 of the upstream phase determines the mode of treatment, periodic or factual.
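The correspondence between sub-process indices and levels (sp_k upstream, sp_{2n-k} downstream, a single sp_n at the top) can be sketched in Python. This is an illustrative sketch of ours, not code from the paper, and the function names are hypothetical:

```python
def level_of(i, n):
    """Level at which sub-process sp_i runs: level i on the way up
    (i <= n), level 2n - i on the way down."""
    if not 0 <= i <= 2 * n:
        raise ValueError("sub-process index out of range")
    return i if i <= n else 2 * n - i

def subprocess_count(n):
    """An event treated at level n passes through 2n + 1 sub-processes."""
    return 2 * n + 1
```

For instance, with n = 2 (three levels), the five sub-processes sp_0 ... sp_4 run at levels 0, 1, 2, 1, 0.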
We define below the parameters of the model:
t_0: reference date
k: level considered
i: index of the sub-process considered
l: index of the state of the event
j: order number of the period
n: level at which the event is treated
sp_i: sub-process i of the system
E_l: state l of the treatment of the event
p_k: duration of a period of level k
j_i: synchronization period at which the event is treated in sp_i
x^k(0): start date of the reference period of level k
x^i_0: arrival date of the event in sub-process sp_i
t_{i,l}: duration of state l of sp_i
s: execution date of the reaction
t: reaction delay of the system to the event
u^i: entrance date of the event into sp_i
x^k(j): finish date of period j of level k
x^i_l: finish date of state E_l for the event in sp_i
s^i: exit date of the event (end of the last state) from sp_i

For any sub-process the treatment sequence is the same. Figure 3 presents the dates at which the perturbation changes state within a sub-process.

Fig. 3. Durations and changes of state in a sub-process sp_i.

There exist two distinct dynamics in the treatment process. One is the dynamic of the event (its changes of state), which occurs at irregular instants and is a function of the durations of the different states, which are intrinsic characteristics of the system in relation to a given event. The other is the dynamic of decision-making, which is regular, since it is periodic at each level.
However, the two dynamics have to be synchronized so that the event can pass from state E3 to state E4, as shown in Figure 3, before a decision relative to its treatment is finally taken: one of the two dynamics has to adapt itself to the other. This is what distinguishes periodic conduct from factual conduct. In factual conduct, the dynamic of decision-making adapts itself to that of the event and, since the latter is irregular, factual conduct is forced to be irregular. On the contrary, in periodic conduct the dynamic of the event adapts to that of decision-making, which introduces wait times before the treatment of the event. In reality, the two modes coexist in what is called mixed conduct: the system operates periodically, but for critical events the decision is taken without waiting for the end of the period. The passage from a period j to the next, j+1, on a given level k occurs at the finish date x^k(j) of period j, which is given by:

x^k(j) = p_k + x^k(j-1), i.e. x^k(j) = j·p_k + x^k(0)

In periodic conduct, the event is treated in a sub-process sp_i at a period j_i of the level k at which the sub-process appears, determined as follows:

j_i = λ, if there exists λ ∈ ℕ such that u^i + t_{i,1} + t_{i,2} = x^k(0) + λ·p_k
j_i = E[(u^i + t_{i,1} + t_{i,2} − x^k(0)) / p_k] + 1 otherwise

where E(x) denotes the integer part of x. The dates of change of state of the event (passage from state E_l to state E_{l+1}), for each of the four states of the sub-process sp_i, are given by:

x^i_l = x^i_{l-1} + t_{i,l}, for l ∈ {1, 2, 4}
x^i_3 = x^k(j_i), for l = 3

For l = 3, this equation clearly shows the synchronization between the two dynamics: it determines the date at which the transfer decision for the event is taken. This date coincides with the end of the synchronization period j_i of the sub-process sp_i.
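The synchronization period j_i can be computed directly; the following Python sketch (our own illustration, with hypothetical names, not the authors' code) implements the two cases of the rule above:

```python
def sync_period(u_i, t_i1, t_i2, x_k0, p_k):
    """Synchronization period j_i of sub-process sp_i (periodic conduct).

    The event is ready for transfer at u_i + t_i1 + t_i2.  If that
    instant falls exactly on a period boundary x_k(0) + lam * p_k,
    then j_i = lam; otherwise j_i indexes the next boundary (integer
    part of the elapsed periods, plus one).
    """
    ready = u_i + t_i1 + t_i2
    lam, rem = divmod(ready - x_k0, p_k)
    return lam if rem == 0 else lam + 1
```

With the data of the paper's numerical application, sync_period(3, 2, 2, 0, 6) gives j_0 = 2 for sp_0, and sync_period(29, 2, 3, 0, 2) gives the exact-boundary case j_2 = 17 for sp_2.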
The entrance date u^i and the exit date s^i of sp_i in the upstream phase of the process are such that:

u^i = x^i_0, s^i = x^i_4

We thus obtain:

x^i_1 = x^i_0 + t_{i,1}
x^i_2 = x^i_1 + t_{i,2}
x^i_3 = x^k(0) + j_i·p_k
x^i_4 = x^i_3 + t_{i,4}

It follows that the exit date is:

s^i = x^k(0) + j_i·p_k + t_{i,4}

This result holds for all sub-processes i except the last one, i = 2n, for which the state E4 does not exist (consequently t_{2n,4} = 0). For i = 2n we have to take into account a diagnostic error at level 0 in the upstream phase, which is noticed only at higher levels, up to the highest level n. If the error is noticed at level 0, correction is done immediately and does not affect the maintenance process. On the contrary, if the error is only noticed at higher levels, it causes a delay Δt which increases the treatment time of the breakdown, as shown in Figure 4. We then have:

s^{2n} = x^0(0) + j_{2n}·p_0 + Δt

The entrance date of the event into a sub-process is equal to its exit date from the preceding sub-process.
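Chaining these relations through u^{i+1} = s^i gives the trajectory of the event through the error-free sub-processes. The following is a minimal Python sketch of ours (hypothetical names; the error-affected last sub-process sp_2n is deliberately left out):

```python
def sync_period(u_i, t_i1, t_i2, x_k0, p_k):
    """Index of the first period boundary at or after u_i + t_i1 + t_i2."""
    lam, rem = divmod(u_i + t_i1 + t_i2 - x_k0, p_k)
    return lam if rem == 0 else lam + 1

def run_subprocesses(u0, x0, periods, durations, n):
    """Entrance dates, waits, sync periods and exits for sp_0 .. sp_{2n-1}.

    periods[k]  : p_k, duration of one period of level k
    durations[i]: (t_i1, t_i2, t_i4) for sub-process sp_i
    x0[k]       : start date x_k(0) of the reference period of level k
    """
    u, rows = u0, []
    for i in range(2 * n):                 # sp_2n (error case) excluded
        k = i if i <= n else 2 * n - i     # level of sub-process sp_i
        t1, t2, t4 = durations[i]
        j = sync_period(u, t1, t2, x0[k], periods[k])
        x3 = x0[k] + j * periods[k]        # end of synchronization period
        wait = x3 - (u + t1 + t2)          # duration of state E3
        s = x3 + t4                        # exit date of sp_i
        rows.append((u, wait, j, s))
        u = s                              # entrance date of sp_{i+1}
    return rows
```

With the data of the numerical application in Section IV (u^0 = 3, all x^k(0) = 0, periods 6, 4, 2 and the Table II durations), this reproduces the first four rows of Table III: exit dates 17, 29, 38, 46 and wait times 5, 1, 0, 2.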
we apply a realistic hypothesis that: i,4i,41-i1,4-ii ttt == →→ consequently: ( ) ∑∑ == ++++=∆ n 1i i,4 n 2i 1,2-ii1,-i0,20,1 t2tt2ttt reaction delay represents the time that elapses between the occurrence and execution of the response. in reference to our model, difference has to be made between the exit date of the process event (exit date of the last sub process sp2n ) and the occurrence date of the event at the first level 0. this is written as: 02n ust −= or: ( ) 002n0 utpj(0)xt −∆++= we therefore have an expression for the reaction delay as a function of the system parameters. b. calculating inactive time at each level k of decision making, the state e3 in the upstream phase and downstream phase represents the wait for the end of the period. for this reason we are going to establish another expression for the delay in the previous reaction. it’s gotten by uniquely expressing as a sum, on the entire process, the duration of the events in all the different states of every sub process. equally at this stage, effects of diagnostic errors detected at a level other than level 0 in the upstream phase should be integrated: ttttt 2n 0i 1-2n 0i i,4 2 1l li, 2n 0i i,3 ∆+        ++         = ∑ ∑∑∑ = === which is of the form: (1)+(2) with:       ∑ = 2n 0i i,3 t (1) and ttt 1-2n 0l i,4 2n 0i 2 1l li, ∆+      +∑∑∑ == = (2) this expression illustrates that reaction delay is made of part (1) which constitutes the inactive time, and part (2) which constitutes the actual time for the process, therefore has an incompressible priority. 
Equating this expression of the reaction delay with the one obtained previously, the inactive time (1) is written:

Σ_{i=0}^{2n} t_{i,3} = t − [Σ_{i=0}^{2n} Σ_{l=1}^{2} t_{i,l} + Σ_{i=0}^{2n-1} t_{i,4} + Δt]

or:

Σ_{i=0}^{2n} t_{i,3} = j_{2n}·p_0 − u^0 + x^0(0) − [Σ_{i=0}^{2n} Σ_{l=1}^{2} t_{i,l} + Σ_{i=0}^{2n-1} t_{i,4}]

We see that the delay due to the diagnostic error does not influence the calculation of the inactive times; this is explained by the factual treatment of the error before returning to periodic treatment. In this equation, for a given system and event, only j_{2n} varies, as a function of the start dates of the reference periods of the levels; all the other terms are constants. To reduce the reaction delay, it is therefore necessary to reduce the inactive times t_{k,3} and t_{2n-k,3} (durations of state E3) of the two sub-processes, upstream and downstream, appearing at level k, by adjusting the start date x^k(0) of the reference period of the level so as to cancel one of the two inactive times. The adjustment at a level is carried out in the following manner:

if min(t_{k,3}, t_{2n-k,3}) ≤ x^k(0), then x^k(0) = x^k(0) − min(t_{k,3}, t_{2n-k,3})
otherwise x^k(0) = p_k + [x^k(0) − min(t_{k,3}, t_{2n-k,3})]

The result is the elimination of the shorter of the two wait times: we obtain a new start date for the reference period and a smaller wait time. For the entire treatment process, we successively apply the same principle to all levels, starting with the lowest level.
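The per-level adjustment rule can be written as a small function. This is our illustrative Python sketch of the rule, with hypothetical names, not the authors' code:

```python
def adjust_start_date(x_k0, wait_up, wait_down, p_k):
    """Shift the start date x_k(0) of level k so that the shorter of
    the two level-k wait times (upstream wait_up = t_{k,3}, downstream
    wait_down = t_{2n-k,3}) is cancelled."""
    m = min(wait_up, wait_down)
    if m == 0:
        return x_k0                      # nothing to cancel
    if m <= x_k0:
        return x_k0 - m
    return p_k + (x_k0 - m)              # wrap into the previous period
```

In the paper's numerical application, adjust_start_date(0, 5, 11, 6) returns 1 (the level-0 adjustment) and adjust_start_date(0, 2, 2, 4) returns 2 (the level-1 adjustment).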
The algorithm below performs this calculation:

x^k(0) = 0 for all k = 0, 1, ..., n
for k ranging from 0 to n do:
    if min(t_{k,3}, t_{2n-k,3}) = 0 then
        k = k + 1
    else
        if min(t_{k,3}, t_{2n-k,3}) ≤ x^k(0) then
            x^k(0) = x^k(0) − min(t_{k,3}, t_{2n-k,3})
        else
            x^k(0) = p_k + [x^k(0) − min(t_{k,3}, t_{2n-k,3})]
        end if
        k = k + 1
    end if
end for

IV. Application

The data for the example are as follows:
• The time unit is the minute.
• The reference date is any minute considered to be the time origin.
• The occurrence date of the event after the reference minute is u^0 = 3 min.
• The periods of the levels are p_0 = 6 min, p_1 = 4 min and p_2 = 2 min.
• We initialize the reference periods of all levels to the reference date t_0 = 0, i.e. x^0(0) = x^1(0) = x^2(0) = 0.

The durations of the different states of each sub-process are given in Table II:

Table II. Duration of the states

Sub-process i | t_{i,1} | t_{i,2} | t_{i,4}
0             | 2       | 2       | 5
1             | 3       | 3       | 5
2             | 2       | 3       | 4
3             | 2       | 2       | 2
4             | 3       | 2       | —

We also consider a diagnostic error made at level 0 and noticed at level 1. We obtained the following results, grouped in Table III:

Table III. Simulation results

i | u^i | t_{i,3} | j_i | s^i
0 | 3   | 5       | 2   | 17
1 | 17  | 1       | 6   | 29
2 | 29  | 0       | 17  | 38
3 | 38  | 2       | 11  | 46
4 | 46  | 11      | 9   | 70

The exit date of the event is s^4 = 70 min, the reaction delay is t = 67 min, and the total wait time is 19 min. Next we apply the algorithm to reduce the wait times at the different levels, obtaining the following results per level.

A. For level 0 (sub-processes sp_0 and sp_4)

Neither wait time is zero, so we proceed to the adjustment. The smallest wait time is t_{0,3} = 5 min, in sp_0; it is greater than x^0(0) = 0. The new value of x^0(0) is:

x^0(0) = p_0 + [x^0(0) − min(t_{0,3}, t_{4,3})] = 6 + (0 − 5) = 1

We obtain the following results: x^0(0) = 1, x^1(0) = 0, x^2(0) = 0; t_{0,3} = 0, t_{1,3} = 2, t_{2,3} = 0, t_{3,3} = 2, t_{4,3} = 10; s = 65, t = 62. The new total wait time is 14 min.

B. For level 1 (sub-processes sp_1 and sp_3)

The two inactive times have the same value, t_{1,3} = t_{3,3} = 2 min.
The new value of x^1(0) is:

x^1(0) = p_1 + [x^1(0) − min(t_{1,3}, t_{3,3})] = 4 + (0 − 2) = 2

The total wait time remains 14 min and we obtain the following results: x^0(0) = 1, x^1(0) = 2, x^2(0) = 0; t_{0,3} = 0, t_{1,3} = 0, t_{2,3} = 0, t_{3,3} = 2, t_{4,3} = 12; s = 65, t = 62.

C. For level 2 (sub-process sp_2)

The wait time t_{2,3} is zero, so we do not adjust the start date of the reference period of this level and keep x^2(0) = 0. The results are the same as those previously obtained.

At the exit of level 2, we obtain a total inactive time of 14 min instead of the initial 19 min. With the minute as time unit, the reduction of 5 min in the reaction delay, which brings it down to 62 min, is significant for the maintenance workshop. We believe, moreover, that the 14 min of inactive time remaining at the end of the process are incompressible since, by the principle of the method, one of the two inactive times at each level is zero.

V. Conclusion and Perspectives

In this article we modeled the inactive times of a maintenance workshop following an unforeseen breakdown, and we established that they depend only on the system characteristics. After characterizing a diagnostic error made at level 0 and noticed only at higher levels of the upstream phase, we showed that this error does not influence the inactive times of a maintenance workshop faced with a breakdown. We realized an application which models and reduces the inactive times. The input data come from a maintenance workshop, and a study is currently being performed to compare the results with field observations; the first results are globally satisfactory.
A study is also being conducted to analyze the error estimation compared with the inactive-time accuracy, as well as the limitations of the model. As an added perspective, we will extend our model of inactive times to mixed conduct, which gives a better representation of the functioning of systems.

References
[1] G. Illya, Reliability Theory with Application and Risk, Berlin: Springer, 2000.
[2] D. Smith, Reliability, Maintainability and Risk, Kidlington: Elsevier, 2005.
[3] F. Chan, H. Qi, "Feasibility of performance measurement system for process based approach and measures", Integrated Manufacturing Systems, Vol. 14, pp. 179-190, 2007.
[4] F. A. Gruat La Forme, Référentiel d'évaluation de la performance d'une chaîne logistique. Application à une entreprise d'ameublement, PhD Thesis, Lyon, France, 2007 (in French).
[5] P. Regnier, Conduite réactive des systèmes de production: intégration des régimes périodique et évènementiel, PhD Thesis, University of Bordeaux I, 1998 (in French).
[6] J. B. Menye, Etude de la promptitude d'un système de pilotage de production travaillant en régime périodique, DEA de Productique, University of Bordeaux I, 2003 (in French).
[7] V. Humez, Proposition d'un outil d'aide à la décision pour la gestion des commandes en cas de pénuries: une approche par la performance, PhD Thesis, INP Toulouse, France, 2008 (in French).
[8] G. Doumeingts, S. Kleinhans, N. Malhene, "A proposal for an evolution management methodology", APMS'96, Kyoto, 1996.
[9] L. Meva'a, R. Danwe, J. Nganhou, "Modeling the wait time of a hierarchical system: case study of an internal medicine unit", International Journal of Engineering and Technology, Vol. 3, No. 1, pp. 6-12, 2011.
Engineering, Technology & Applied Science Research, Vol. 10, No. 4, 2020, pp. 6087-6091, www.etasr.com

Impact of Endogenous Risk Factors on Risk Cost in PPP Projects in Saudi Arabia

Yahya Alfraidi, Architecture Engineering Department, University of Hail, Hail, Saudi Arabia, y.alfraidi@uoh.edu.sa
Saleh Mohammed Alzahrani, Civil Engineering Department, University of Business & Technology, Jeddah, Saudi Arabia, s.alzahrani@ubt.edu.sa
Mohamed Hssan Hassan Abdelhafez, Architecture Engineering Department, University of Hail, Hail, Saudi Arabia, and Architecture Engineering Department, University of Aswan, Aswan, Egypt, mo.abdelhafez@uoh.edu.sa
Halim Boussabaine, Faculty of Business and Law, The British University in Dubai, Dubai, United Arab Emirates, halim@buid.ac.ae

Abstract — Public-private partnership (PPP) contracts are formed on the grounds that the construction, development, operation, and investment of a project are allocated to a private organization under a contract. The risks associated with PPP projects usually concern resource improvement and development as well as the long-term operation of the project. Cost and time overruns are among the most obvious risks faced by a project during the development phase, and they are major sources of monetary risk. A risk and its impact may vary across the phases of the life cycle of a PPP project. In traditional procurement, all the monetary risks are borne by the public sector, and most projects delivered under traditional procurement involve a price confirmation to cover standard cost risks. This paper aims to investigate the impact of endogenous factors on budget overrun in PPP projects in Saudi Arabia. It briefly reviews PPP risk evaluation systems and examines the association between risk occurrence and cost overrun in the Kingdom of Saudi Arabia (KSA).
The paper concludes with recommendations for future research.

Keywords: public-private partnership (PPP); risk; risk pricing; system dynamics (SD)

I. Introduction

Developing countries usually prioritize investment in construction development. Mega construction projects need huge financial and human resources: building complex engineering projects requires a high level of expertise and well-managed teams with sufficient monetary resources, which are often beyond the capability of a single contractor [1]. Both the public and the private sector are concerned about the pricing of risks associated with PPP projects. Many projects have exhibited cost and time overruns and project failure, leading to unexpected results. The cost of risk management may affect risk allocation. The main challenges faced by organization managers are dealing with the uncertainty and intricacy of PPP projects and the inaccurate incorporation of cost during the decision-making phase; only a few organizations incorporate risk probability in their strategy [2]. The shortcomings of management, in terms of processes and functionality, are evident from the complexity associated with projects, which indicates the failure of managers to handle the vital characteristics of major projects. Managers' understanding of emergent characteristics is determined by their perceptions, leading to loss of insight [3]. Cost overrun usually results from inappropriate management of risks and may be prevented if each party to the contract displays a proper understanding of risk responsibilities, risk conditions, risk preferences, and risk management expertise. The party that displays the best risk management potential and expertise must be allocated the management responsibility [4, 5].
As per [6], this allocation of risk to the competent party is imperative for the execution of construction projects within the specified cost, where the owner is responsible for the quality, progress, and costs of the project [7]. In traditional procurement, not all risks are allocated to the contractor. Conversely, PPP projects are known to exhibit more certainty in terms of cost and time. However, it is not necessary to allocate all risks to the private sector in order to achieve value for money; the risks are rather allocated to the party that is most efficient in managing them [8]. The involvement of the private sector in public sector projects is not new; however, the scope and extent of the concept have witnessed drastic development and progress in the past three decades. Private-public cooperation has existed in various forms in various parts of the world, depending on the regions' legal systems and economic conditions. The main motive behind the cooperation of the private and public sectors has been the allocation of long-term public project risks to the private sector, which is usually more capable of handling them. The interconnection between value for money and risk allocation is one of the most significant features of PPP projects [9].

(Corresponding author: Yahya Alfraidi)

Risk allocation determines the real worth of PPPs, whereas the prices used to compensate the contract costs are too high in project life cycle techniques [10]. The precise measurement and assessment of risk cost require a reasonable level of expertise and competency on the part of the risk analyst.
In the absence of comprehension and information regarding the consequences of unexpected events and the magnitude of these consequences, analysts require theoretical models to predict outcome prices when unexpected events occur. The risk analyst must therefore master the appropriate methods for obtaining precise information, in order to allocate risk appropriately and make the project successful [11]. The concept of project risk analysis has existed for years; however, in the construction sector, the theory of risk pricing under uncertainty is not yet fully developed. The significance of this concept is evident from the fact that data obtained from qualitative and quantitative studies serve as the basis of the majority of risk pricing decisions in PPP projects [12]. Little evidence was found in support of the government's claim that the allocation of risk is the cause of additional PPP cost [13]. Practically, there is no research investigating the association between risk allocation and construction cost. This study fills this gap and investigates the association between risk allocation and the risk cost involved in a PPP project executed in KSA.

II. Research Methodology

Various phases are involved in the research method, as shown in Figure 1. The first phase reviews the literature in order to identify the risk factors that are likely to have an impact on the construction cost unit of a PPP project; this step aligns the study with the existing research on the concept. Consequently, a list of risk factors was generated and additional risk factors were incorporated into it. This was followed by the selection of suitable risk descriptions based on project management viewpoints regarding PPP projects.
A survey questionnaire was distributed among PPP experts to obtain their views on risk detection, risk-impact assessment, and risk pricing in PPP projects. The participating experts were asked to suggest the most appropriate risk allocation for each risk factor based on the extent of risk cost. The data obtained from the answered questionnaires were entered into Microsoft Excel and the Statistical Package for the Social Sciences (SPSS) to obtain numerical data for the research. Statistical analysis was conducted to grade the extent of the impact of PPP project risks on risk cost, as rated by the survey respondents; the outcomes were stated in terms of risk occurrence. The impact of a risk is estimated from the probability of its occurrence and the intensity of its impact [2]. In other words, if the probability of risk occurrence is P and the intensity of the impact of this risk on the project is I [4], then the expected risk effect value EV is given as:

EV = P × I    (1)

Fig. 1. Research methodology.

The anticipated impacts of risks were expressed on a 1-to-25 scale, and the collected data were normalized to the range between 0 and 1. Multiple regression equations were then developed from these values.

III. Endogenous Risk Factors

Even though the field of risk management aims at mitigating risk impact in construction projects, a systematic approach is still needed for modeling the impact of risk factors on construction cost. Moreover, such modeling needs to be extended to the interactions and interdependency of the endogenous risk factors with the construction cost of particular project types, such as the PPP projects tackled in this paper.
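As a minimal sketch (not from the paper's dataset), the EV = P × I scoring and the 0-to-1 normalization described above can be expressed as follows; the (P, I) ratings below are made-up illustrations on assumed 1-to-5 scales:

```python
def expected_value(p, i):
    """Expected risk effect EV = P * I (probability times impact intensity)."""
    return p * i

def rescale(ev, lo=1, hi=25):
    """Map an EV from the 1-25 product scale into the [0, 1] range."""
    return (ev - lo) / (hi - lo)

# Hypothetical (P, I) survey ratings on 1-5 scales:
ratings = [(2, 3), (5, 5), (1, 1), (4, 2)]
evs = [expected_value(p, i) for p, i in ratings]     # [6, 25, 1, 8]
scaled = [round(rescale(ev), 3) for ev in evs]       # [0.208, 1.0, 0.0, 0.292]
print(scaled)
```

The normalized values are what feed the regression step that follows.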
The risk factors of PPP projects have been investigated in [14-16], especially the factors influencing cost overrun, project termination, and time delay; these three are the drivers of increased cost in construction projects. As described above, a questionnaire survey was conducted to provide a general review of current risk-pricing practice in KSA. The main aim of this survey was to investigate the impact of risk on cost unit and the best risk allocation in PPP projects, with the impact of risk factors assessed from the respondents' perspectives and experience. The descriptive analysis maps the endogenous risk category into four sub-categories: project selection risks, project finance risks, construction risks, and related risks. Each sub-category consists of several factors; the top-ranked factors were chosen for study in the present research: public resistance towards projects, uncompetitive tender, financial resources, elevated financial cost, elevated bidding cost, impediment in allowance payment, impediment in financial closure, construction time impediment, intricacy of design and construction, flaws in design, construction technology risks, quality risks, inadequate dedication from the public/private sector, inadequate distribution of accountability and risk, inconsistency between project parties, and strikes.
On the other hand, low-ranked factors were neglected: level of demand for the project, land acquisition, competition risk, inaccurate estimates, financial attraction of the project to investors, lack of creditworthiness, inability to service debt, lack of government guarantees, material availability, labor availability, poor quality of workmanship, default of sub-contractors or suppliers, contractual risk, contractor failure, different working methods between partners, inadequate experience in PPP, organization and coordination risks, inadequate negotiation period before initiation, and cultural differences between main stakeholders.

IV. Modeling Endogenous Risk Impact

The efficiency of construction, management, and engineering projects can be assessed with the help of regression modeling [17, 18]. The basic principle behind regression modeling is that the dependent variable y changes as a result of changes in the independent variables x. Multiple regression models were used to relate the dependent variable (construction cost overrun) to the independent variables (the endogenous risk factors associated with it):

y = β0 + Σi βi·RPi + εi    (2)

In (2), y is the value of the dependent variable, β0 is the regression constant (intercept), the βi are the regression coefficients, the RPi are the values of the independent variables, and εi is a constant term or noise. Various combinations of independent variables (the endogenous risk factors associated with construction cost overrun) were analyzed. The multi-linear models aim to allocate the extent of risk cost by mapping each risk onto its variable, and the model describes the association between the various risk factors.
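The fit behind a model of the form (2) can be sketched with ordinary least squares on synthetic data; this is an illustration only (NumPy assumed, made-up coefficients), not the paper's SPSS run:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 3                                 # 60 synthetic observations, 3 risk factors
X = rng.uniform(0, 1, size=(n, k))           # RP_i values already scaled to [0, 1]
true_beta = np.array([0.2, 0.7, -0.3])       # assumed "true" coefficients
y = 0.1 + X @ true_beta + rng.normal(0, 0.01, n)   # cost-overrun proxy with noise

# Ordinary least squares: prepend a column of ones so beta_0 is estimated too.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))                     # ~ [0.1, 0.2, 0.7, -0.3]
```

With enough observations and little noise, the estimated coefficients recover the assumed ones, which is the mechanism the multi-linear models above rely on.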
The regression models help experts conduct risk-price analysis using theoretical methods: every independent variable is treated as a theoretical variable with a probability distribution obtained in practice from the study data. To express the risk impact of the independent variables, the multiple regression equation was devised as:

y_endo = A + β28·RP28 + β29·RP29 + β33·RP33 + β35·RP35 + β36·RP36 + β37·RP37 + β40·RP40 + β45·RP45 + β50·RP50 + β51·RP51 + β53·RP53 + β56·RP56 + β59·RP59 + β61·RP61 + β63·RP63 + β64·RP64 + α    (3)

In (3), A is the regression constant, α is the constant term or noise, y_endo is the endogenous risk impact, and the RP terms are the factors shown in Table I.

Table I. Endogenous risk factors model

  Factor   Endogenous risk factor
  RP28     Public resistance towards projects
  RP29     Uncompetitive tender
  RP33     Financial resources
  RP35     Elevated financial cost
  RP36     Elevated bidding costs
  RP37     Impediment in allowance payment
  RP40     Impediment in financial closure
  RP45     Construction time impediment
  RP50     Intricacy of design and construction
  RP51     Flaws in design
  RP53     Construction technology risks
  RP56     Quality risks
  RP59     Inadequate dedication from public/private sector
  RP61     Inadequate distribution of accountability and risk
  RP63     Inconsistency between project parties
  RP64     Strikes

Project risks were measured through construction cost overruns: the endogenous risk factors are the independent variables and construction cost overrun is the dependent variable. If the y value increases, it is assumed that the risk price has increased along with the construction risks; hence there is a price that a stakeholder would charge to bear all risks. Likewise, an increasing y value indicates that the endogenous risks are significant. The risk bearer can bear all risks if a suitable price has been identified.

V.
Results

With the help of multiple regression, the value of the proxy (dependent) variable can be predicted from the values of the endogenous risk factors (the independent variables). Tables II and III show the multiple regression outputs extracted from SPSS. These results indicate an effect of almost all endogenous risk factors, since their p-values are below 0.05. The common alpha level is 0.05, and the p-values for RP28 (0.054) and RP29 (0.052) are slightly higher, so these two factors cannot be declared statistically significant; their p-values fall outside the range relevant to the assumptions of the current research. The coefficient magnitude for each risk-impact event can be observed in Table II. The strongest positive effect on cost overrun is observed for RP45. It can also be observed that most endogenous risk-event impact coefficients are negative; hence low cost overruns would occur where those risk events have a high influence. The generated model statistics are presented in Table III. The endogenous multiple regression model includes 16 risk-impact events with R² = 0.902 and p < 0.05. The results therefore indicate that significant variations are present within the data set of the risk-impact events.

Table II. Endogenous model regression results

  Factor       B        Std. Error   Beta      t         Sig.
  (Constant)   0.161    0.036                  4.463     0.000
  RP28        -0.145    0.074       -0.123    -1.969     0.054
  RP29         0.218    0.110        0.135     1.993     0.052
  RP33        -0.169    0.063       -0.170    -2.679     0.010
  RP35        -0.433    0.069       -0.385    -6.293     0.000
  RP36         0.378    0.078        0.290     4.863     0.000
  RP37         0.192    0.063        0.147     3.028     0.004
  RP40        -0.311    0.104       -0.183    -2.994     0.004
  RP45         0.746    0.067        0.693    11.151     0.000
  RP50         0.414    0.083        0.381     4.968     0.000
  RP51        -0.173    0.074       -0.175    -2.347     0.023
  RP53        -0.302    0.088       -0.220    -3.440     0.001
  RP56         0.640    0.125        0.440     5.140     0.000
  RP59        -0.317    0.098       -0.218    -3.242     0.002
  RP61        -0.224    0.077       -0.193    -2.907     0.005
  RP63         0.339    0.086        0.242     3.933     0.000
  RP64        -0.337    0.125       -0.206    -2.698     0.009

  Dependent variable y: RP44 = construction cost overrun

Table III indicates that the p-value is lower than 5%; since p ≤ 1%, the model supports quite a strong case against the null hypothesis.

Table III. Integrated model regression

  R       R²      Std. Error of the Estimate   R² Change   F Change   df1   df2   Sig. F Change
  0.962   0.902   0.0621433                    2.653       39.741     16    51    0.000

  Predictors: (constant), RP28, RP29, RP33, RP35, RP36, RP37, RP40, RP45, RP50, RP51, RP53, RP56, RP59, RP61, RP63, RP64

The regression coefficient of each independent variable represents the central change in the dependent variable per unit change in that variable, with the remaining model predictors held constant. In a flawless environment, the predictors can be measured at the same reliability levels.
Hence, using the unstandardized coefficients, the predictor-variable weights from Table II were applied to (3):

y_endo = 0.161 − 0.145·RP28 + 0.218·RP29 − 0.169·RP33 − 0.433·RP35 + 0.378·RP36 + 0.192·RP37 − 0.311·RP40 + 0.746·RP45 + 0.414·RP50 − 0.173·RP51 − 0.302·RP53 + 0.640·RP56 − 0.317·RP59 − 0.224·RP61 + 0.339·RP63 − 0.337·RP64 + 0.036,  0 ≤ y ≤ 1    (4)

The approximations for the theoretical and observed data, and the reference line, can both be seen in the endogenous risk-impact event P-P plot. The observed data lie quite close to the reference line, which indicates that the observed data and the derived equations come from the same population and similar distributions (Figure 2).

Fig. 2. Endogenous risks P-P plot.
Fig. 3. Endogenous model risk cost output probability density/cumulative overlay.
Fig. 4. Endogenous model risk impact output probability density/cumulative overlay.

VI. Discussion

With the help of the literature review, the risk influence on PPP projects was extracted in a systematic manner. Using a theoretical base, classification was carried out in terms of risk type, risk source, and project environment (internal or external). Several variables influence project-specific risk events, including project-company effectiveness and project-stakeholder relationships, while construction risk parameters such as inaccurate estimates are responsible for the development of internal risk events. From a thorough analysis of the literature, a risk-impact event list was created, even though the literature did not shed light on the relationship between cost overrun and risk events; this relationship was therefore investigated through models. Risk-event relationship models can be efficiently expressed through multiple regression procedures.
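As an illustration, the fitted model in (4) can be evaluated directly from the unstandardized coefficients in Table II. The sketch below omits the trailing noise term, clips to the stated 0 ≤ y ≤ 1 range, and uses made-up RP input values rather than survey data:

```python
# Unstandardized coefficients from Table II (intercept separate).
COEF = {
    "RP28": -0.145, "RP29": 0.218, "RP33": -0.169, "RP35": -0.433,
    "RP36": 0.378,  "RP37": 0.192, "RP40": -0.311, "RP45": 0.746,
    "RP50": 0.414,  "RP51": -0.173, "RP53": -0.302, "RP56": 0.640,
    "RP59": -0.317, "RP61": -0.224, "RP63": 0.339,  "RP64": -0.337,
}
INTERCEPT = 0.161

def y_endo(rp):
    """Predicted cost-overrun proxy for [0, 1] risk-impact inputs, clipped to [0, 1]."""
    y = INTERCEPT + sum(COEF[name] * rp.get(name, 0.0) for name in COEF)
    return min(max(y, 0.0), 1.0)

# e.g. a hypothetical scenario where only construction time impediment (RP45) is maximal:
print(round(y_endo({"RP45": 1.0}), 3))   # 0.907
```

Note how RP45, the largest positive coefficient, dominates the prediction in this scenario.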
This regression-analysis technique is able to state the relationship between the independent and dependent variables using observed behavior, expressed through statistically measured relationships. Hence, regression models were developed using multiple regression analysis. These models helped identify the influence of the classified risk events, with construction cost overrun as the dependent variable. The F-statistic significance value (Sig. < 0.01) from the ANOVA tests indicates that the developed regression models are significant at the 99% confidence level; hence the results indicate that the developed models are acceptable. Although the ANOVA tests were useful for testing the developed model and its ability to analyze the variation present in the risk-event data, they cannot indicate the strength of the relationship between the proxy variable and the risk events. Therefore, the coefficient of determination (R²) is used to measure the relationship strength. It should be noted that the current research does not include several risk events that might influence the final results in other circumstances.

VII. Conclusion

Cost overrun is mostly caused by uncertainties within the environment of a construction project. In the current research, the interdependency and association among risk constructs, exogenous risk constructs, and risk outcomes were analyzed. The acquired multiple-regression equations state the relationship among the risk constructs, and Monte Carlo simulation helps model each risk construct while accounting for its stochastic nature. According to the results (Figures 3-4), 17 SR is the cost at the minimum risk impact of 0.025 and 892 SR is the cost at the maximum risk impact of 0.776.
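The Monte Carlo treatment mentioned above can be sketched as follows: draw each RP factor from an assumed distribution, push the draws through a regression of the form (4), and summarize the resulting risk-impact distribution. The uniform input distributions, coefficients, and interval choice below are illustrative assumptions, not the paper's simulation setup:

```python
import random

# Unstandardized coefficients from Table II, in the order RP28...RP64.
COEF = [-0.145, 0.218, -0.169, -0.433, 0.378, 0.192, -0.311, 0.746,
        0.414, -0.173, -0.302, 0.640, -0.317, -0.224, 0.339, -0.337]
INTERCEPT = 0.161

def simulate(n=10_000, seed=42):
    """Monte Carlo draws of the risk-impact proxy; returns a 95% interval."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        rps = [rng.uniform(0, 1) for _ in COEF]        # assumed U(0, 1) inputs
        y = INTERCEPT + sum(c * r for c, r in zip(COEF, rps))
        draws.append(min(max(y, 0.0), 1.0))            # keep 0 <= y <= 1
    draws.sort()
    return draws[int(0.025 * n)], draws[int(0.975 * n)]

lo, hi = simulate()
print(round(lo, 3), round(hi, 3))
```

Sorting the draws and reading off quantiles is the simplest way to turn the stochastic risk construct into the density/cumulative overlays shown in Figures 3-4.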
Many construction projects are completed over budget and behind schedule; the uncertainties inherent in the construction project environment play an essential role in construction cost overrun. This research addressed the vital issue of interdependency between risk constructs and risk consequences: the interaction between the risk constructs was captured using multiple regression equations, and the stochastic nature of the risk constructs was modeled. Further analysis is needed of the relationship between exogenous and endogenous risk variables, in order to improve the detection of any amplification of risk impacts caused by the dynamic interdependency within system variables and between the system input variables and the output outcomes.

References

[1] M. A. Akhund, A. R. Khoso, A. A. Pathan, H. U. Imad, and F. Siddiqui, "Risk attributes, influencing the time and cost overrun in joint venture construction projects of Pakistan," Engineering, Technology & Applied Science Research, vol. 8, no. 4, pp. 3260-3264, Aug. 2018.
[2] N. J. Smith, T. Merna, and P. Jobling, Managing Risk in Construction Projects, 3rd ed. Hoboken, NJ, USA: Wiley-Blackwell, 2014.
[3] D. Cooper, S. Grey, G. Raymond, and P. Walker, Project Risk Management Guidelines: Managing Risk in Large Projects and Complex Procurements. Hoboken, NJ, USA: Wiley, 2005.
[4] D. A. Wehrung, K.-H. Lee, D. K. Tse, and I. B. Vertinsky, "Adjusting risky situations: A theoretical framework and empirical test," Journal of Risk and Uncertainty, vol. 2, no. 2, pp. 189-212, Jun. 1989, doi: 10.1007/BF00056137.
[5] M.-T. Wang and H.-Y. Chou, "Risk allocation and risk handling of highway projects in Taiwan," Journal of Management in Engineering, vol. 19, no. 2, pp.
60-68, Apr. 2003, doi: 10.1061/(ASCE)0742-597X(2003)19:2(60).
[6] E. Witt, "Procurement arrangements and risk transfer in construction projects: Initial evidence from Estonia," presented at Modern Building Materials, Structures and Techniques, Vilnius, Lithuania, May 2010.
[7] P. T. Nguyen and P. C. Nguyen, "Risk management in engineering and construction: A case study in design-build projects in Vietnam," Engineering, Technology & Applied Science Research, vol. 10, no. 1, pp. 5237-5241, Feb. 2020.
[8] I. A. Ansari, "Evaluating the financial robustness of special purpose vehicles involved in the delivery of defence private finance initiatives," Ph.D. dissertation, Cranfield University, 2014.
[9] S. S. Gao and M. Handley-Schachler, "Public bodies' perceptions on risk transfer in the UK's private finance initiative," Journal of Finance and Management in Public Services, vol. 3, no. 1, pp. 25-39, 2003.
[10] T. Dixon, G. Pottinger, and A. Jordan, "Lessons from the private finance initiative in the UK: Benefits, problems and critical success factors," Journal of Property Investment and Finance, vol. 23, no. 5, pp. 412-423, 2005.
[11] M. P. Abednego and S. O. Ogunlana, "Good project governance for proper risk allocation in public-private partnerships in Indonesia," International Journal of Project Management, vol. 24, no. 7, pp. 622-634, Oct. 2006, doi: 10.1016/j.ijproman.2006.07.010.
[12] A. Boussabaine, Risk Pricing Strategies for Public-Private Partnership Projects. Hoboken, NJ, USA: Wiley-Blackwell, 2013.
[13] D. J. Price, A. M. Pollock, and S. Player, "Public risk for private gain? The public audit implications of risk transfer and private finance," Public Health Policy Unit, School of Public Policy, UCL, London, 2004.
[14] A. D. Ibrahim, A. D. F. Price, and A. R. J. Dainty, "The analysis and allocation of risks in public private partnerships in infrastructure projects in Nigeria," Journal of Financial Management of Property and Construction, vol. 11, no.
3, pp. 149-164, Jan. 2006, doi: 10.1108/13664380680001086.
[15] A. Dziadosz, A. Tomczyk, and O. Kapliński, "Financial risk estimation in construction contracts," Procedia Engineering, vol. 122, pp. 120-128, Jan. 2015, doi: 10.1016/j.proeng.2015.10.015.
[16] A. Alfraidi, S. M. Alzahrani, F. Binsarra, M. H. H. Abdelhafez, E. M. Noaime, and M. A. S. Mohamed, "Impact of political risk on construction cost in PPP project in KSA," International Journal of Advanced and Applied Sciences, vol. 7, no. 5, pp. 6-11, May 2020, doi: 10.21833/ijaas.2020.05.002.
[17] F.-M. Liou and C.-P. Huang, "Automated approach to negotiations of BOT contracts with the consideration of project risk," Journal of Construction Engineering and Management, vol. 134, no. 1, pp. 18-24, Jan. 2008, doi: 10.1061/(ASCE)0733-9364(2008)134:1(18).
[18] J. S. Russel, "Decision models for analysis and evaluation of construction contractors," Construction Management and Economics, vol. 10, no. 3, pp. 185-202, Jul. 2006, doi: 10.1080/01446199200000018.

Engineering, Technology & Applied Science Research, Vol. 9, No. 3, 2019, 4203-4208 | www.etasr.com
Al-Omari: Lightweight Dynamic Crypto Algorithm for Next Internet Generation

Lightweight Dynamic Crypto Algorithm for Next Internet Generation

Ahmad H. Al-Omari
Computer Science Department, Faculty of Science, Northern Border University, Arar, Saudi Arabia
ahmed.alomari@nbu.edu.sa, kefia@yahoo.com

Abstract—Modern applications, especially real-time applications, are hungry for high-speed end-to-end transmission, which usually conflicts with the requirements of confidential and secure transmission. In this work, a relatively fast, lightweight, and attack-resistant crypto algorithm is proposed. The algorithm is a symmetric block cipher that uses a secure pre-shared secret as its first step. Then, a dynamic-length key is generated and inserted inside the cipher text.
Upon receiving the cipher text, the receiver extracts the key from it to decrypt the message. In this algorithm, ciphering and deciphering are mainly based on simple XOR operations, followed by substitutions and transpositions that add confusion and diffusion. Experimental results show faster encryption/decryption times when compared to known encryption standards.

Keywords—dynamic crypto algorithm; lightweight crypto algorithm; dynamic cryptography; shared secrets; next internet generation security

I. Introduction

Several emerging areas of information and communication technology (ICT), such as the Internet of Things (IoT) and sensor networks, require interconnected devices. IoT and smart applications are growing rapidly and are commonly accessed through smartphones. More and more smart devices are connected to the Internet daily: smartphones, smart TVs, video game consoles, and even home appliances such as refrigerators and air conditioners [1]. All these devices are resource-constrained, with low processing power, limited battery life, small displays, small memory, and limited storage capacity. As IoT and other smart applications grow, they face many risks and challenges, such as dealing with huge amounts of data, processing power, energy consumption, and security and privacy threats [2]. Security and privacy are fundamental requirements for any application, especially smart applications. The current standard cryptographic algorithms were originally designed for traditional desktop/server implementations; many of them consume an unacceptable amount of system resources (computational power, RAM, storage, etc.) and are not suitable for resource-constrained devices [2]. There is therefore a need for lightweight cryptography (LWC) algorithms that suit such resource-constrained devices [3, 4].
LWC is one of the most promising research areas in cryptography, since it offers fast encryption processing, resistance to attacks, and low resource requirements. There are no strict properties required to classify an encryption algorithm as LWC [5]. According to the National Institute of Standards and Technology (NIST), the main reasons for adopting LWC on smart power-constrained devices are the need for efficient end-to-end communication and adoptability in resource-constrained smart devices [3, 6]. Generally, any cryptographic design should consider the trade-off between security, cost, and performance. The performance measurements include power, energy consumption, latency, and throughput, while the security requirements aim to maintain an acceptable level of secrecy and privacy. Cryptography is divided into symmetric and asymmetric cryptography. Symmetric algorithms use a single private key for both encryption and decryption and were originally designed for a wide range of applications running on hardware with high processing power and large resources. Asymmetric algorithms use a pair of keys, one public and one private: one key encrypts and the other decrypts. Traditional symmetric and asymmetric algorithms are not suitable for constrained devices, for which lightweight cryptographic algorithms are the best choice [7]. Candidate applications for LWC algorithms include wireless sensor networks (WSNs), radio-frequency identification (RFID), wireless body area networks (WBANs), IoT, smart cards, embedded systems, smart systems, etc. [8, 9]. These applications support dissimilar devices in heterogeneous environments with minimal human intervention.
For example, IoT devices communicate with minimal or no human intervention, a fact that poses a new challenge to IoT systems by exposing them to many security attacks, including unauthorized device access by an attacker's device, which may result in severe system damage. Moreover, some IoT implementations are cloud-based applications, which raise their own security issues and challenges [3, 10]. This work introduces a new model of symmetric block cipher encryption. It is classified as LWC since it requires only a small amount of resources such as memory, computation, storage, time, and space.

Corresponding author: Ahmad H. Al-Omari

II. Related Work

Lightweight encryption is a recent scientific field, and many lightweight block ciphers (LWBCs) have been proposed. Some are modifications and simplifications of traditional block ciphers, while others are new, such as the Data Encryption Standard Lightweight (DESL), which is based on the original design principles of DES with the variation of using a single S-box instead of eight. DESL is claimed to be resistant against the most common known attacks, such as differential, linear, and Davies-Murphy attacks, and it is used in low-resource devices such as RFID, WSN, WBAN, and IoT [11]. Over the last decade, variations of LWC with different properties have been proposed [12]. A word-oriented stream cipher [13] takes a 128-bit initial vector and an initial key as inputs, and generates a 32-bit key-stream as output; the key-stream is then used to encrypt the plain text. The word-oriented stream cipher algorithm was developed to deal with 8-bit characters in the encryption/decryption process.
In each step, the algorithm outputs an 8-bit key character, which is added bitwise to a plain-text character to produce a cipher-text character; the same operation is performed for decryption. Theoretically, the proposed algorithm shows high performance through high nonlinear complexity. An extensive literature survey of more than 100 algorithms was performed in [12] to systematize the concept of LWC. The survey identified two categories of LWC algorithms: ultra-LWC, covering highly specialized algorithms that provide one function with high performance on one platform, and ubiquitous cryptography, covering multilateral algorithms in terms of functionality and implementation. A dynamic symmetric crypto algorithm [14-16] was proposed that uses a pre-shared secret to regenerate a predefined table; the regenerated table is rearranged and shifted many times before the shared-key insertion. The encryption/decryption operations in this algorithm are simple bitwise XOR operations between the plaintext and the scrambled text. Results show a fast algorithm that achieves better performance than traditional AES and DES. Many more encryption algorithms designed specifically for restricted hardware resources can easily be found. For example, mCrypton and Crypton are two block ciphers that offer key sizes of 64, 96, or 128 bits; the architecture and the function of each component are simplified so as to run on power-constrained devices [17, 18]. Hummingbird-2 is an authenticated encryption primitive designed for resource-constrained devices such as RFID tags, WSNs, and very small hardware or software platforms. It uses a 128-bit key and a 64-bit initialization vector [19].
The authors in [21] improved the original work of [20] by enhancing the transformation-table composition: it was proved that swapping rows with columns gives better results. The authors in [20] recommended performing the key insertion inside the plain text from both sides simultaneously, and added an enhancement to adopt a key size of 128 bytes and a plaintext size of 190 bytes. In [21], further enhancements and contributions were added as extra features, the final one being the use of cryptographically secure pseudorandom number generators (CSPRNGs) to generate and share the shared value.

III. The Proposed Solution

A. Background

This study presents a symmetric encryption algorithm called Lightweight Dynamic Crypto (LWDC) for the next Internet generation. The original work first appeared in 2008 [15], and different researchers have since contributed enhancements to the original algorithm [14-16, 20-21]. The main architecture has changed only slightly since the first release; the enhancements target detailed processes in order to achieve a more stable and attack-resistant algorithm. The original algorithm consists of three main processes: the index generation process (IGP), the encryption process (EP), and the decryption process (DP) [16]. The general architecture of the original algorithm is briefly described as:

• The IGP is common to the EP and the DP. First, an initial table and a shared secret are generated and shared. The shared secret is used to generate the transformation table (TT) and the table of indexes (TI).
• The EP XORs the plaintext with the cipher-key to generate the scrambled text. The cipher-key is then inserted inside the scrambled text at positions taken from the TI, and the result is the ciphertext (C).
• The DP performs exactly the same steps as the EP but in reverse order; during the DP, the cipher-key is extracted back from C.

B.
The Solution Architecture

In this work, additional cryptographic enhancements were added to the original algorithm:

• Using a CSPRNG to generate a random shared secret, so that it is difficult, though not impossible, for an adversary to predict.
• Using a CSPRNG to generate a random shared secret key, which should likewise be difficult, yet not impossible, to predict.
• Using IPsec based on the Internet Key Exchange protocol (IKEv2) to establish a secure connection for data exchange. IPsec is a standard protocol aiming to provide end-to-end security for the Internet Protocol (IP); the exchanged messages are protected by IPsec and the IPsec session is authenticated using IKEv2 [22].
• Adding the confusion and diffusion (CD) property to the algorithm by implementing substitution and permutation boxes (S-P-boxes). The CD concept was first proposed in [22] as a basic building block for any cryptographic system and aims to thwart cryptographic attacks of the statistical-cryptanalysis type. Confusion strives to make the relationship between the statistics of the ciphertext and the value of the encryption key as complex as possible, while diffusion strives to make the statistical relationship between the plaintext and the ciphertext as complex as possible, in order to thwart attempts to deduce the key [2, 23].

The three main processes of the algorithm are described briefly below [16]:

1) Index Generation Process
• The shared value (ShrdV) is randomly generated using the Blum Blum Shub cryptographically secure pseudorandom number generator (BBS-CSPRNG) and is shared over the IPsec-IKEv2 tunneling protocol.
• The initial table, InitT, is a fixed 16×16 (hex) table shared between the sender and the receiver.
• The transformation table, TranT, is generated by permuting InitT based on the value of ShrdV.
• The indexing table, IndxT, is the result of another permutation on TranT, based on a value deduced from TranT.

2) Encryption Process
• The plaintext (P) is the original text to be encrypted.
• The key K is the system key and the heart of the encryption process. After the key K is generated, it is XORed bitwise with the plaintext, P⊕K, and then inserted inside the resulting scrambled table ScrT.
• ScrT is the result of the XOR operation between P and the key.
• The key insertion, KeyI, is the result of inserting the key K inside ScrT.
• The S-box is added to enhance the confusion and diffusion properties of the algorithm.
• The cipher text C is the encrypted text.

3) Decryption Process
• The S-box is used to recover the original form of C before the key recovery (KR) process is performed.
• KR is the first step in decrypting C: K is extracted back from C and ScrT is regenerated.
• P is recovered by the XOR operation between ScrT and K, ScrT⊕K.

The LWDC architecture shown in Figure 1 and the pseudocode in Figure 2 represent the general design of the algorithm. The most important component of any encryption algorithm is the encryption key, so the key-selection process and the key value must be chosen carefully. The encryption key properties include key secrecy, key length, and an initial value (seed) [24]. The basic principle in choosing any encryption key (shared value) is that it be obtained from one of the known cryptographically secure pseudorandom number generators (CSPRNGs), such as LavaRnd by Simon Cooper and Landon Curt Noll. The strength of a CSPRNG depends on its properties.
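The EP/DP data flow described above can be sketched as a toy round-trip: XOR the plaintext with the key, then splice the key bytes into the scrambled text at positions taken from a shared index table, and undo both steps on the receiving side. This is an illustration of the data flow only, with made-up index values, not the published algorithm's exact tables or S-box:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Bitwise XOR of data with a repeating key (the P XOR K step)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(p: bytes, key: bytes, idx: list[int]) -> bytes:
    scr = bytearray(xor_bytes(p, key))      # ScrT = P XOR K
    for pos, kb in zip(idx, key):           # KeyI: insert key bytes per index table
        scr.insert(pos, kb)
    return bytes(scr)                       # C (S-box stage omitted in this sketch)

def decrypt(c: bytes, idx: list[int], keylen: int) -> bytes:
    scr = bytearray(c)
    key = bytearray()
    for pos in reversed(idx[:keylen]):      # KR: undo insertions in reverse order
        key.insert(0, scr.pop(pos))
    return xor_bytes(bytes(scr), bytes(key))  # P = ScrT XOR K

idx = [2, 5, 7, 11]                         # hypothetical index-table values
key = b"k3y!"
msg = b"hello, constrained world"
assert decrypt(encrypt(msg, key, idx), idx, len(key)) == msg
```

Because the key travels inside the ciphertext, the receiver needs only the shared index table (derived from the shared secret) to recover it, which is the core idea of the dynamic-key design.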
These properties are represented in the difficulty of finding the next bit to be generated from a previously given sequence of bits, in polynomial time and without any clue about the seed. In addition to these properties, the generator should satisfy forward and backward unpredictability. All of these properties are found in the Blum Blum Shub (BBS) pseudorandom number generator. The BBS is considered the most preferable algorithm for cryptographic purposes such as key generation, since it is based on the hardness of the quadratic residuosity problem [25]. Based on that, the BBS-CSPRNG technique was added to generate the shared value, ShrdV = f(BBS-CSPRNG), assuming that the IKEv2 protocol is used to share the secret value between the communicating parties.

Fig. 1. The lightweight encryption architecture

Fig. 2. The pseudocode:

  The index generation process (IGP):
    IPsec establishment
    ShrdV = f(BBS-CSPRNG)
    TransT = f(Init, ShrdV)
    IndxT = f(TransT)
  The encryption process (EP):
    ScrT = (PlainTxt ⊕ Key)
    KeyI = f(ScrT, Key, IndxT)
    C = S-box(f(KeyI))
  The decryption process (DP):
    KR = f(S-box(C), IndxT)
    ScrT = f(KR)
    Key = f(KR)
    P = (ScrT ⊕ Key)

The other enhancement to the algorithm is adding the S-box before generating C; the S-box adds more CD, which is an important property of any block cipher algorithm. CD is performed by applying a constant number of CD rounds to extend the domain of a public random permutation [22]. In the decryption process, the same procedures are performed as in the encryption process, but in reverse order. The algorithm can be implemented either in hardware or in software. In the case of implementation on resource-limited hardware, it is recommended to burn the algorithm onto the hardware chipset, while the software implementation is easy to use as a portable and fast encryption-decryption algorithm.
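The BBS generation of a shared value and the XOR-based scramble step described above can be sketched as follows. This is an illustrative toy (tiny demonstration primes, simplified key handling), not the paper's Java implementation:

```python
# Toy sketch of BBS-CSPRNG shared-value generation plus the P XOR K scramble
# step. Primes, seed, and sizes are hypothetical demo values only; real use
# requires large Blum primes and a secret seed.

def bbs_bits(p, q, seed, n_bits):
    """Yield n_bits pseudorandom bits via x_{i+1} = x_i^2 mod M, M = p*q."""
    m = p * q
    x = seed * seed % m          # initial state; seed must be co-prime to M
    out = []
    for _ in range(n_bits):
        x = x * x % m
        out.append(x & 1)        # take the least-significant bit of each state
    return out

def bits_to_bytes(bits):
    return bytes(
        int("".join(str(b) for b in bits[i:i + 8]), 2)
        for i in range(0, len(bits), 8)
    )

def xor_bytes(data, key):
    """Bitwise XOR of data with a repeating key."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# p and q are Blum primes (congruent to 3 mod 4); toy sizes for illustration.
shared_key = bits_to_bytes(bbs_bits(p=499, q=547, seed=159201, n_bits=128))
plaintext = b"lightweight demo"
ciphertext = xor_bytes(plaintext, shared_key)   # ScrT = P XOR K
recovered = xor_bytes(ciphertext, shared_key)   # P = ScrT XOR K
assert recovered == plaintext
```

The roundtrip works because XOR is its own inverse; the table permutations and key insertion of the full algorithm are omitted here.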
C. Implementation and analysis

The algorithm was tested using Java JDK 1.7.0_171 and the Java Cryptography Extension (JCE) on a Fujitsu laptop with an i7-4702MQ CPU (8 GB RAM, Windows 7). The same files that were used to test the SVSCS algorithm in [21] were used in the experiments; the testing was performed on 10 different file sizes and the results were compared with the LWDC results. The EP and DP processes were performed on different plaintext sizes. It is worth mentioning that the comparison was performed on the encryption-decryption time, which includes the sub-processes of both algorithms (Table I). The key generation process, (S-P)-box, table scrambling, key insertion, and the encryption time for the different plaintext sizes are listed in Table I.

Table I. Encryption time comparison

                        Plaintext size in MB
  Process      Alg.    0.35    0.99    1.65    3.30    6.60   11.80
  Key gen.     SVSCS  0.0117  0.0118  0.0112  0.0116  0.0116  0.0118
               LWDC   0.0116  0.0117  0.0110  0.0113  0.0116  0.0118
  (S-P)-box    SVSCS  0.0058  0.0589  0.1011  0.0502  0.1178  0.3583
               LWDC   0.0059  0.0590  0.1022  0.0502  0.1189  0.3594
  Scrambling   SVSCS  0.0286  0.0967  0.1346  0.1987  0.5871  0.7220
               LWDC   0.0272  0.0990  0.1364  0.2004  0.6201  0.7579
  Key insert   SVSCS  0.0132  0.0523  0.0686  0.2590  0.2770  0.6021
               LWDC   0.0145  0.0564  0.0788  0.2675  0.2872  0.6245
  Encryption   SVSCS  0.0593  0.2197  0.3165  0.5194  0.9936  1.6942
               LWDC   0.0594  0.2214  0.3180  0.5207  0.9969  1.6994

The encryption time comparison between the SVSCS and our algorithm is shown in Figure 3, in which the encryption times look equal for both algorithms. In fact, the SVSCS is slightly faster than our algorithm, because the SVSCS performs the (S-P)-box operation before the key insertion process, whereas in our algorithm the (S-P)-box is performed after the key insertion, which expands the table size. The rest of the figures (Figures 4-7) show the time variation between the encryption sub-operations of both algorithms.
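As an illustration of how per-step timings like those in Table I can be collected, the following sketch times a stand-in scrambling step with a high-resolution counter. The step function and data here are hypothetical placeholders, not the LWDC or SVSCS code:

```python
# Hedged sketch of a sub-process timing harness; the scramble() stand-in is
# hypothetical and only represents "one measurable step" of an algorithm.
import time

def timed(fn, *args):
    """Run fn(*args) and return (elapsed_seconds, result)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return time.perf_counter() - t0, result

def scramble(data, key):
    """Stand-in for the table-scrambling step: XOR with a repeating key."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

plaintext = bytes(range(256)) * 4096      # ~1 MB of demo data
key = bytes(range(16))

elapsed, ciphertext = timed(scramble, plaintext, key)
print(f"scrambling: {elapsed:.4f} s for {len(plaintext) / 1e6:.2f} MB")
```

Repeating such measurements per step and per file size yields a table of the same shape as Table I.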
The key generation time is shown in Figure 4, where it is clear that our algorithm is slightly faster than the SVSCS regardless of data size, due to the BBS-CSPRNG being used. The (S-P)-box time, which represents the CD properties, is shown in Figure 5, where it is clear that our algorithm consumes more time than the SVSCS, because its (S-P)-box operations are performed on a larger table than that of the SVSCS. The table scrambling time is shown in Figure 6, in which no significant time difference between the algorithms is noticed. The key insertion process, shown in Figure 7, likewise indicates no significant time difference between the algorithms. The decryption time comparison between the SVSCS and our algorithm is listed in Table II; the key generation process is not included there, since the key is generated in the encryption phase.

Fig. 3. Encryption time
Fig. 4. Key generation time
Fig. 5. Confusion and diffusion time
Fig. 6. Table scrambling time
Fig. 7. Key insertion time

The decryption time comparison between the SVSCS and our algorithm is shown in Figure 8. The decryption times look equal for both algorithms: in some cases our algorithm performs faster than the SVSCS and in other cases the SVSCS performs faster, but the time difference is not significant. The algorithm was improved by adding the IPsec-IKEv2 to exchange the secret shared value, by adding the BBS-CSPRNG to generate the secure shared value, and by changing the location of the (S-P)-box operations in the algorithm. These enhancements give extra randomness (confusion and diffusion) to the cipher text and make it more attack-resistant. Moreover, the added enhancements did not affect the encryption speed negatively; the detailed analysis in [16, 20] is still valid in this enhanced version of the algorithm.
In this work, the cipher text becomes more resistant to brute force attacks, since the algorithm uses a plaintext block size of 190 bytes (1520 bits), a key size of 128 bytes (1024 bits), and the (S-P)-box. Moreover, using the CD properties in addition to the key insertion process produces a well-mixed and shuffled ciphertext. Thus, an exhaustive search would face a space of 2^1520 × 2^1024 possibilities. In this case, the only possibility to attack the algorithm is to use cryptanalysis attacks. From previous studies, it is proven that the algorithm outperforms the speed of the Advanced Encryption Standard (AES): it is 15 times faster in encryption and 9 times faster in decryption [14-16, 20-21].

Table II. Decryption time comparison

                        Ciphertext size in MB
  Process      Alg.    0.35    0.99    1.65    3.30    6.60   11.80
  S-box        SVSCS  0.0230  0.0212  0.1455  0.0878  0.1321  0.6987
               LWDC   0.0228  0.0251  0.1634  0.0943  0.1567  0.7012
  Scrambling   SVSCS  0.0306  0.1001  0.1532  0.3175  0.6078  0.9874
               LWDC   0.0304  0.1098  0.1612  0.3220  0.6231  0.9913
  Recovery     SVSCS  0.0147  0.1182  0.0608  0.1836  0.4827  0.4050
               LWDC   0.0132  0.1163  0.0610  0.1926  0.4931  0.4069
  Decryption   SVSCS  0.1215  0.3745  0.6841  1.0050  2.0025  3.7836
               LWDC   0.1117  0.4695  0.5996  1.1099  2.3634  3.4918

Fig. 8. Decryption time

IV. Conclusion

The proposed algorithm shows a faster encryption-decryption time than the conventional standard algorithm (AES). The algorithm can be implemented in both hardware and software, and uploading the code onto a hardware chipset for faster processing is recommended. The algorithm is simple in nature but very hard to break. Adding the IPsec-IKEv2, the BBS-CSPRNG, and the (S-P)-box puts the algorithm at the level of the modern lightweight symmetric encryption schemes in the market.

Acknowledgment

The author gratefully acknowledges the approval and the support of this research from the Deanship of Scientific Research by grant no. 7189-sci-2017-1-8-f7, Northern Border University, Arar, Saudi Arabia.

References

[1] M. Talbi, F.
Maddouri, A. Jemai, M. S. Bouhlel, "Application of a lightweight encryption algorithm to a quantized speech image for secure IoT", Sixth International Conference on Advances in Computing, Electronics and Communication, Rome, Italy, 2017
[2] S. Singh, P. K. Sharma, S. Y. Moon, J. H. Park, "Advanced lightweight encryption algorithms for IoT devices: survey, challenges and solutions", Journal of Ambient Intelligence and Humanized Computing, 2017
[3] K. A. McKay, L. Bassham, M. S. Turan, N. Mouha, Report on Lightweight Cryptography, Technical Report NISTIR 8114, National Institute of Standards and Technology, 2017
[4] G. Bansod, N. Raval, N. Pisharoty, "Implementation of a new lightweight encryption design for embedded security", IEEE Transactions on Information Forensics and Security, Vol. 10, No. 1, pp. 142-151, 2015
[5] B. Chaitra, V. G. K. Kumar, R. C. Shatharama, "A survey on various lightweight cryptographic algorithms on FPGA", IOSR Journal of Electronics and Communication Engineering, Vol. 12, No. 1, pp. 45-59, 2017
[6] K. P. Mahaffey, J. G. Hering, J. D. Burgess, J. P. Grubb, D. Golombek, D. L. Richardson, A. McKay Lineberry, T. M. Wyatt, System and Method for Mobile Communication Device Application Advisement, U.S. Patent No. 9,367,680, 2016
[7] Isha, A. K. Luhach, "Analysis of lightweight cryptographic solutions for Internet of Things", Indian Journal of Science and Technology, Vol. 9, No. 28, 2016
[8] W. Joseph, B. Braem, E. Reusens, B. Latre, L. Martens, I. Moerman, C. Blondia, "Design of energy efficient topologies for wireless on-body channel", 17th European Wireless 2011 - Sustainable Wireless Technologies, Vienna, Austria, April 27-29, 2011
[9] J. Yick, B. Mukherjee, D. Ghosal, "Wireless sensor network survey", Computer Networks, Vol. 52, No. 12, pp. 2292-2330, 2008
[10] A. Sajid, H. Abbas, K. Saleem, "Cloud-assisted IoT-based SCADA systems security: a review of the state of the art and future challenges", IEEE Access, Vol. 4, pp. 1375-1384, 2016
[11] A.
Poschmann, G. Leander, K. Schramm, C. Paar, "New light-weight crypto algorithms for RFID", IEEE International Symposium on Circuits and Systems, New Orleans, USA, May 27-30, 2007
[12] A. Biryukov, L. P. Perrin, "State of the art in lightweight symmetric cryptography", available at: http://orbilu.uni.lu/handle/10993/31319, 2017
[13] H. M. S. El Hennawy, A. E. A. Omar, S. M. A. Kholaif, "LEA: link encryption algorithm proposed stream cipher algorithm", Ain Shams Engineering Journal, Vol. 6, No. 1, pp. 57-65, 2015
[14] A. H. Omari, B. M. Al-Kasasbeh, A. A. Omari, "Dynamic cryptography algorithm for real-time applications DCA-RTA", 3rd International Conference on Applied Mathematics, Simulation, Modelling, Circuits, Systems and Signals, Athens, Greece, December 29-31, 2009
[15] A. H. Omari, B. M. Al-Kasasbeh, R. E. Al-Qutaish, M. I. Muhairat, "New cryptographic algorithm for the real time applications", 7th WSEAS International Conference on Information Security and Privacy, Cairo, Egypt, December 29-31, 2008
[16] A. H. Al-Omari, "Dynamic crypto algorithm for real-time applications DCA-RTA, key shifting", International Journal of Advanced Computer Science and Applications, Vol. 7, No. 1, pp. 72-77, 2016
[17] C. H. Lim, T. Korkishko, "mCrypton - a lightweight block cipher for security of low-cost RFID tags and sensors", in: Information Security Applications, Lecture Notes in Computer Science, Vol. 3786, pp. 243-258, Springer, 2005
[18] C. H. Lim, Crypton: A New 128-bit Block Cipher, NIST AES Proposal, 1998
[19] D. Engels, M. J. O. Saarinen, P. Schweitzer, E. M. Smith, "The Hummingbird-2 lightweight authenticated encryption algorithm", 7th International Workshop on Security and Privacy, Amherst, USA, June 26-28, 2011
[20] A. A.
Al-Omari, Investigating a Dynamic Crypto Algorithm for Real Time Applications (DCA-RTA), MSc Thesis, The University of Jordan, 2012
[21] M. A. Al-Qaysi, A Shared Value Based Symmetric Crypto System (SVSCS), MSc Thesis, Princess Sumaya University for Technology, 2014
[22] C. Cremers, "Key exchange in IPsec revisited: formal analysis of IKEv1 and IKEv2", in: European Symposium on Research in Computer Security, Lecture Notes in Computer Science, Vol. 6879, pp. 315-334, Springer, 2011
[23] Y. Dodis, M. Stam, J. Steinberger, T. Liu, "Indifferentiability of confusion-diffusion networks", in: Advances in Cryptology - EUROCRYPT 2016, Lecture Notes in Computer Science, Vol. 9666, pp. 679-704, Springer, 2016
[24] H. Feistel, Cryptographic Coding for Data-Bank Privacy, IBM Thomas J. Watson Research Center, 1970
[25] G. Singh, Supriya, "A study of encryption algorithms (RSA, DES, 3DES and AES) for information security", International Journal of Computer Applications, Vol. 67, No. 19, pp. 33-38, 2013
[26] Divyanjali, Ankur, V. Pareek, "An overview of cryptographically secure pseudorandom number generators and BBS", International Conference on Advances in Computer Engineering & Applications, Ghaziabad, India, February 15, 2015

Engineering, Technology & Applied Science Research, Vol. 10, No. 1, 2020, 5307-5313. www.etasr.com

Evaluation of Power System Reliability and Quality Levels for (N-2) Outage Contingency

Badr M. Alshammari
Department of Electrical Engineering, College of Engineering, University of Hail, Hail, Saudi Arabia
bms.alshammari@uoh.edu.sa

Abstract—One of the main objectives of electric power utilities is keeping up a continuous and adequate power supply to the customers at a sensible cost.
This paper contributes to the solution of the reliability and quality assessment problems in power systems, using the (N-2) outage contingency scenario to evaluate a power system's reliability and quality levels. The methodology presented in this paper is therefore based on the integration of reliability measures, quality indices, and contingency analysis. While reliability formulas have traditionally been applied to small and illustrative power systems, large-scale reliability and quality assessment goes far beyond the direct implementation of formulas. Systems with hundreds of buses and tens of complex stations can only be analyzed using advanced and numerically effective large-scale algorithms for reliability and quality assessment, as demonstrated in this paper.

Keywords—reliability; evaluation; contingency; large-scale power systems

I. Introduction

Power system reliability is defined as the probability of an electric power system to perform a required function under given conditions for a given time interval. A generalized form of the reliability definition takes into consideration the effect of repair or replacement after a failure [1-3]. In general, a component in a power system may exist in one of two states, namely "operation" or "failure". In some cases, extra states may be considered to indicate partial operation, derated functioning, or repair and maintenance. In other words, an electric power network containing generation and transmission facilities can be divided into several states in terms of the degree to which adequacy and security constraints are satisfied in a reliability evaluation of the composite system [4-5]. Power system reliability evaluations have concentrated on the analysis of system adequacy, i.e. the ability to supply all loads within performance requirements [6-8]. Power system components, in this regard, are divided into two main parts, namely the generating equipment and the transmission equipment.
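The two-state ("operation"/"failure") component model mentioned above can be illustrated with a short sketch; the MTTF/MTTR figures below are hypothetical, not data from the studied grid:

```python
# Illustrative two-state component model: steady-state probabilities of the
# "operation" and "failure" states from mean time to failure and to repair.

def availability(mttf_hours, mttr_hours):
    """Steady-state probability of the 'operation' state."""
    return mttf_hours / (mttf_hours + mttr_hours)

def forced_outage_rate(mttf_hours, mttr_hours):
    """Steady-state probability of the 'failure' state (the FOR)."""
    return mttr_hours / (mttf_hours + mttr_hours)

# Hypothetical transformer: MTTF of 4380 h, MTTR of 120 h.
a = availability(4380, 120)
u = forced_outage_rate(4380, 120)
assert abs(a + u - 1.0) < 1e-12   # the two states are exhaustive
print(f"availability = {a:.4f}, forced outage rate = {u:.4f}")
```

Extra states (derating, maintenance) would extend this to a multi-state probability vector in the same spirit.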
In general, a component is a piece of equipment or a group of items which is viewed as an entity and is not subdivided during reliability analysis. The main generating components are the boiler installation (single or multiple), common header system, turbine, generator, and boiler. Transmission lines and transformers are considered the main transmission components [8-11]. A secure system is able to tolerate the outage of components without interrupting the demand supply. Given an electric power system of n components, the n-k criterion is used to evaluate the outage of k components [12-14]. Reliability indices for a power system are calculable either from its performance history or from component data, utilizing mathematical models which express the system reliability indices in terms of the component indices included in the IEEE committee reports [15-18]. Most traditional contingency assessment methods exclude the probability of contingency occurrence from the analysis. They rather define a so-called set of credible contingencies, which are considered equally in the evaluation. However, it is known that some contingencies which have critical effects on the system performance may have a much lower probability of occurrence than those having less impact. Therefore, an accurate assessment of the impact of contingencies on the system performance should not overlook the probability of contingency occurrence. The nature of large-scale power systems causes a major problem in computational resources when numerous contingency and system operating scenarios have to be examined and analyzed [19]. The investigated reliability indices are not only useful for the design of flexible power supply reliability for customers but are also beneficial to the long-term capacity expansion planning of electric power systems [20-21]. This study contributes to the solution of the reliability indices and system quality performance problem in real power systems.
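The n-k criterion discussed above amounts to enumerating all k-element outage sets over the component list; a minimal sketch, with hypothetical component names:

```python
# Sketch of enumerating (n-k) outage contingencies; the five component names
# are placeholders, not elements of the studied Saudi grid model.
from itertools import combinations

components = ["G1", "G2", "L1", "L2", "T1"]

def n_minus_k_contingencies(elements, k):
    """All distinct outage sets of exactly k components (the n-k criterion)."""
    return list(combinations(elements, k))

double_outages = n_minus_k_contingencies(components, 2)
print(len(double_outages))   # C(5, 2) = 10 outage pairs
```

The combinatorial growth of this set with k is precisely the computational burden noted for large-scale systems.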
The computational scheme presented in this paper can effectively assess a composite system's reliability and power quality, analyze the network structure and the generation-load balance, and evaluate various composite-system reliability indices for the system subject to (N-2) contingencies with certain or random occurrences. A practical application to a portion of the Saudi power grid is also presented in this paper for demonstration purposes.

II. Problem Formulation

The methodology applied in this paper is based on the original work of [20]. The reliability of a power system depends on the reliability of its individual components as well as on the size and structure of the system. Various factors should be taken into account when evaluating the reliability of the system, for example the operation and failure time distributions, failure modes, operation practices, and load priorities.

Corresponding author: Badr M. Alshammari

A. Reliability evaluation processes

The reliability evaluation of a power system can be described by a six-step procedure, as shown in Figure 1 [22]. Step I represents the component constants and capabilities. Steps II and III represent the possible component outages and the definition of the possible system failure modes resulting from single or multiple component outages. Step IV represents the possible realizations of the component performance, which may be actual or simulated. Step V describes the system model, from which the system performance is obtained. The techniques used for such analysis are selected based on their accuracy and speed to suit either planning or operation studies. At Step VI the system model results are analyzed to evaluate the system reliability.

Fig. 1. Reliability evaluation processes

B.
Conditional probabilities of system failure

In almost all probability applications in reliability evaluation, component failures within a fixed environment are assumed to be independent events. It is entirely possible that a component failure results in system failure only in a conditional sense. This can occur in parallel facilities that are not completely redundant. If the load can be considered a random variable described by a probability distribution, then failure at any point due to a component outage is conditional upon the load exceeding some value at which a satisfactory voltage level at the load point can be maintained. If two events A and B are independent, then:

P(A ∩ B) = P(A) ⋅ P(B)    (1)

If the occurrence of A is dependent upon n events Bi, which are mutually exclusive, then:

P(A) = Σ P(A|Bi) ⋅ P(Bi),  i = 1, …, n    (2)

If the occurrence of A is dependent upon only two mutually exclusive events for component B, success and failure, designated Bx and By respectively, then:

P(A) = P(A|Bx) ⋅ P(Bx) + P(A|By) ⋅ P(By)    (3)

With respect to reliability, this can be expressed in a simpler form:

P(system failure) = P(system failure | B is good) ⋅ P(Bx) + P(system failure | B is bad) ⋅ P(By)

The complementary form is similar:

P(system success) = P(system success | B is good) ⋅ P(Bx) + P(system success | B is bad) ⋅ P(By)

III. Large-Scale Reliability Modeling

A practical power system is large-scale in nature. It consists of numerous elements, which are characterized by forced outage rates representing their tendency to be off-service due to malfunctions. A suitable technique would implement an efficient sectioning scheme in order to retain only the parts of the system affected by a contingency, while the rest of the system is modeled by network equivalents. The use of the partitioning scheme permits a faster contingency analysis for large systems.
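The conditional form in (3) can be illustrated numerically; all probabilities below are hypothetical illustration values, not measurements from any real system:

```python
# Numeric sketch of total probability conditioned on component B's state,
# as in (3). The four probabilities are hypothetical demo values.

p_b_good = 0.98                   # P(Bx): component B in service
p_b_bad = 1.0 - p_b_good          # P(By): component B failed
p_fail_given_good = 0.001         # P(system failure | B good)
p_fail_given_bad = 0.25           # P(system failure | B bad)

p_system_failure = (p_fail_given_good * p_b_good
                    + p_fail_given_bad * p_b_bad)
p_system_success = 1.0 - p_system_failure
print(f"P(system failure) = {p_system_failure:.5f}")
```

Even with B failed only 2% of the time, the conditional failure term dominates the result, which is why non-redundant parallel facilities matter in the evaluation.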
In order to accurately simulate the practical operator response to power network outages, a maximum load-supply optimization scheme should be employed prior to the evaluation of the various system reliability measures. The optimization algorithm evaluates the post-outage generation-load pattern based on real-time emergency dispatch procedures, which try to maximize the amount of system load supplied during the system outage. The generation and transmission reserve capacities of the retained network represent the optimization variables, which are manipulated to maximize the load supplied during the outage situation. In this work, the system reliability indices and power quality performance are determined based on the optimized post-outage generation-load pattern. These reliability and quality indices can then be evaluated and displayed for real-life networks and loads of interest, associated with various system outages and according to their probability of occurrence.

IV. Power System Reliability Indices

In general, a set of system-wide outage-based reliability indices can be defined. These reliability indices, which can easily be coded into computer programs, are sufficient to describe a range of practical reliability measures in large-scale power systems. This section summarizes the most widely used indices for measuring the levels of power system reliability under outage conditions. For a contingency m, the values of the network variables are the solution of the maximum load-supply optimization problem. Also, let fm be the probability of contingency scenario m (the sum of fm over all m, including the base-case contingency-free scenario, is 1). Then the following three system-wide contingency-based reliability indices may be defined.

A. System-wide loss of load probability

Loss of load probability (LOLP) indicates the probability (chance) that a system load would be fully or partially lost due to randomly occurring single or multiple contingencies (outages) in the system.
The random nature of the outages is simulated using the actual historical outage data of the various system elements. The loss of load probability can be expressed as in (4):

LOLP = Σ LOLP(m),  m = 1, …, Mc    (4)

where:

LOLP(m) = max_l { y_l ⋅ LOLP_l(m) }    (5)

represents the system loss of load probability for any assumed contingency m (loss of generation and/or transmission) in the power grid,

LOLP_l(m) = f_m ⋅ U_l(m)    (6)

represents the loss of load probability at bus l for contingency m, and:

U_l(m) = 0 if P_l(m) ≥ P_l°;  U_l(m) = 1 if P_l(m) < P_l°    (7)

where P_l(m) is the load supplied at bus l under contingency m and P_l° denotes the scheduled demand at load bus l. Mc denotes the number of contingencies considered and y_l is a 0-or-1 factor to indicate subsystems (if desired).

B. System-wide expected value of demand not served

The expected value of demand not served (EDNS) reliability index can be expressed by the following equations:

EDNS = Σ y_l ⋅ ε_l(DNS),  l = 1, …, Nl    (8)

where Nl is the number of load buses in the system,

ε_l(DNS) = Σ ε_l(m)(DNS),  m = 1, …, Mc    (9)

represents the expected value of demand not served at bus l,

ε_l(m)(DNS) = f_m ⋅ DNS_l(m)    (10)

represents the expected value of demand not served at bus l for contingency m, and:

DNS_l(m) = demand not served at bus l for contingency m    (11)

C. System-wide expected value of energy not served

Expected energy not served (EENS) indicates the amount of energy (in TWh per year) that is likely not to be supplied to a system load center due to randomly occurring single or multiple contingencies (outages) in the system.
The EENS can therefore be expressed as in (12)-(15):

EENS = Σ y_l ⋅ ε_l(ENS),  l = 1, …, Nl    (12)

where:

ε_l(ENS) = Σ ε_l(m)(ENS),  m = 1, …, Mc    (13)

represents the expected value of energy not served at bus l,

ε_l(m)(ENS) = f_m ⋅ ENS_l(m)    (14)

represents the expected value of energy not served at bus l for contingency m, and

ENS_l(m) = DNS_l(m) ⋅ T(m)    (15)

represents the energy not served at bus l for contingency m, where T(m) denotes the time duration of contingency m.

V. Quality Assessment in Power Systems

A. General

Both the reliability and the quality issues represent considerable challenges. The first can be resolved with the use of advanced large-scale network analysis with efficient sparse-matrix algorithms, as simulated in this paper. The second has to be dealt with more carefully. The main difficulty, in this regard, is the formulation of the overall composite quality problem in terms of the trio-interactions between generation, transmission, and demand in a global manner. A fact also demonstrated in this paper is the harmony relationship between available generation capacities, transmission capabilities, and required demand levels. More importantly, the methodology used and the choice of technical system quality expressions had to be in full harmony with what is being used inside the utilities by operators, technicians, engineers, and managers. The term integrated (or composite) system quality has quietly evolved over the years, although less formally, to address the ever-challenging dilemma of economy versus security/reliability. A power system with a low reliability standing is no more desirable than a costly system with generous reserves and stand-by facilities. A "quality" system is one in which electric energy flows, as uninterrupted as possible, from generation through transmission to load with neither bottling nor redundancy in any portion of the system. In any real system, the composite quality index is undermined, e.g.
by generation bottling, where available generation cannot be delivered through a deficient transmission portion. Indeed, from the cost-effectiveness point of view, the integrated system quality index would also suffer if transmission redundancy occurs (i.e. more transmission capacity than actually needed). It is clear that the problem under consideration is of a global nature and deals mainly with the generation-transmission-load connectivity and capacity aspects. Therefore, at least in a first phase, an integrated system quality study should address important issues like the "need for" and "level of utilization" of the various generation and transmission facilities in the power grid, and assess whether such facilities are indeed in the "right place" and of the "right amount".

B. Station and system quality indices

Figure 2 demonstrates the basic model structure for evaluating the various quality indices, in which D denotes the station demand, G the available station generation capacity, and Fx and Fn the maximum and minimum station flow capabilities, respectively. The following reliability and quality indices are defined:

Minimum load lost: MLD_LOST = max{0, D − Fx}    (16)
Maximum load lost: XLD_LOST = max{0, D − Fn}    (17)
Minimum generation bottled: MGN_BTLD = max{0, G − Fx}    (18)
Maximum generation bottled: XGN_BTLD = max{0, G − Fn}    (19)
Minimum capacity un-utilized: MCP_NUTZ = max{0, Fn − G}    (20)
Maximum capacity un-utilized: XCP_NUTZ = max{0, Fx − G}    (21)
Minimum capacity surplus: MCP_SPLS = max{0, Fn − D}    (22)
Maximum capacity surplus: XCP_SPLS = max{0, Fx − D}    (23)

Fig. 2. Basic model for quality evaluation

System-wide quality indices are evaluated using formulas similar to (16)-(23), in this case applied to system areas and zones of interest.
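A minimal sketch of how contingency-probability-weighted reliability indices and max{0, ·}-style station quality measures of this kind can be computed. The contingency data and station figures below are hypothetical illustration values, and the code is a simplified sketch rather than the paper's program:

```python
# Hedged sketch: LOLP/EDNS/EENS-style sums over a hypothetical contingency
# list, plus generic max{0, requirement - capability} quality measures.

# (probability f_m, demand not served in MW, duration in hours) per contingency
contingencies = [
    (0.010, 40.0, 2.0),
    (0.004, 120.0, 5.0),
    (0.001, 300.0, 8.0),
]

lolp = sum(f for f, dns, _ in contingencies if dns > 0.0)  # loss-of-load prob.
edns = sum(f * dns for f, dns, _ in contingencies)         # expected DNS (MW)
eens = sum(f * dns * t for f, dns, t in contingencies)     # expected ENS (MWh)

def quality_index(capability, requirement):
    """Generic max{0, requirement - capability} shortfall measure."""
    return max(0.0, requirement - capability)

# Hypothetical station: 200 MVA demand, 250 MVA generation, 180 MVA transfer.
load_lost = quality_index(capability=180.0, requirement=200.0)
gen_bottled = quality_index(capability=180.0, requirement=250.0)
print(lolp, edns, eens, load_lost, gen_bottled)
```

In the full program these quantities are accumulated per bus and per station over the whole (N-2) contingency set, rather than over a three-entry list.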
The system connectivity structure is used in rather complex algorithms to interconnect the various stations within a given area (or zone) and between different areas (or zones) in the system.

VI. Large-System Reliability and Quality Indices

The overall program structure used in this paper revolves around three major tasks during normal program execution. The first major task is the preparation of several database blocks, which contain the system node and element data, area and zone definitions, outage history data, station element data, station configuration data, and flow pattern data. The second includes the validation of all database entries using a comprehensive 3-level data checking routine. In the third major task, the various station and system reliability and quality indices are evaluated (including loss-of-load probability, bottled generation, surplus capacity, and unutilized transmission). A block diagram of the overall program organization is shown in Figure 3.

Fig. 3. Overall program organization

VII. Application of Reliability Performance and Quality Evaluation

The system reliability performance assessment has been applied to a practical power system comprising a portion of the interconnected Saudi power grid, where the overall system reliability indices are evaluated and assessed. The power system consists of two main regions, namely the central and the eastern region. The two regions are interconnected through two 380 kV and one 230 kV double-circuit lines. The system model used in the current application comprises 119 buses (19 generators, 100 loads), 334 lines, and 122 transformers, as shown in Figure 4.

Fig. 4. Single-line diagram of the power system model used

The power system is studied in depth with regard to its reliability and quality measures.
The reliability and quality study criteria include (N-2) outage scenarios for the 380 kV transmission grid as well as for the individual 132 kV substations. The detailed station results show the impact of individual station component outages on the various station capability and reliability measures. If exactly one prior outage of another station element had occurred before a particular outage, the result is said to be associated with an (N-2) contingency scenario. In this regard, the (N-2) results include the same outage set, except for breakers (major station non-protection equipment).

A. Loss of load in stations for (N-2) contingencies

Figure 5 shows 3-dimensional graphs depicting the variation of the loss of load in stations for the worst double contingency (excluding breaker outages), for some examples of the analyzed stations.

Fig. 5. 3-dimensional graph showing the variation of loss of load in stations for (N-2) contingencies

For (N-2) contingencies in station #8001, the combined outage of transformer #GRID-T3 and any of the other elements (one at a time) would cause about 64.2 MVA, or 32%, of load loss, although this element has no reported historical outages. A maximum load loss of 80.3 MVA in station #8008 is caused by a transformer outage. For (N-2) contingencies in station #8009, the combined outage of a transformer and any of the other elements (one at a time) would cause about 55.8 MVA of load loss. In station #8014, on the other hand, the combined outage of a transformer would result in a maximum load loss of 145.7 MVA. For (N-2) contingencies in station #8076, the outage of the transmission line or transformer, combined with any of the other elements (one at a time), would decrease the maximum station flow from 14.2 MVA to 4.8 MVA and would cause about 9.4 MVA of load loss.

B.
maximum station flow for (n-2) contingency table i summarizes the impact of the worst double contingency (excluding breaker outages) on maximum station flow for some examples of the analyzed sec-c stations. for easy reference and comparison, the stations are ordered in accordance with the percentage drop in maximum flow. the different outages in station #8004 would not influence the station flow capability, which stays constant at 20.4mva. on the other hand, in station #8077, a heavy drop of 88% in the maximum station flow (from 21.7mva down to 2.5mva) would occur due to outages in the breaker or the transformer. c. quality results for (n-2) contingency scenarios figure 6 shows the values of some quality indices. the expected demand not served (edns_indx) for the entire system is 387.9mw, and almost 69% of it occurs in riyadh city (c1) alone. the maximum expected load not served (elns_indx) of 434.6mva and the expected energy not served (eens_indx) of 8.4gwh occur in the same area. the worst values of the expected generator power bottled (e_gp_btld) of 76%, expected generator energy bottled (e_ge_btld) of 77%, and expected non-utilized capacity (e_cp_nutz) of 72% also occur in c1. on the other hand, the dawadmi area (c5) and riyadh rural (c4) would not experience any e_gp_btld or e_ge_btld. the maximum of the priority-based excess at no outage element (a_b_excs0) of 9874mva and the maximum of the priority-based excess at one outage element (a_b_excs1) of 9596.2mva occur in the qassim area. the overall system would not experience any priority-based deficit at no outage element (a_b_dfct0) or at one outage element (a_b_dfct1). table i. impact of the worst double contingency (excluding breaker outages) on maximum station flow for (n-2) contingencies.
station no.   nominal maximum flow (mva)   minimum maximum flow (mva)   percentage change (%)
8004          20.4                         20.4                         0
8813          102.9                        88.7                         14
8079          428.4                        276.7                        35
9006          700                          320.5                        54
8007          147.8                        58.2                         60
8077          21.7                         2.5                          88
fig. 6. output system quality chart. viii. conclusion lower service reliability levels jeopardize energy supply continuity and increase the likelihood of additional maintenance and restoration costs due to the resulting higher rate of system outages. on the other hand, system performance quality indicates the desired balance between generation facilities, transmission capabilities, and consumer demand levels in the various zones of the electric power system. poor system quality levels often imply either a deficiency or an excess in the designed overall system capabilities. symptoms of poor system quality include generation bottling (available generation that cannot be used because of transmission limitations), unutilized transmission, capacity deficiency, and energy surplus. the costs associated with low service reliability or poor system quality are enormous, and can be largely avoided if enhanced system planning simulation models and appropriate computer-aided solution tools are developed and used to detect and correct potential problems. in this regard, this paper contributes to the solution of these problems by using (n-2) outage contingency scenarios to evaluate power system reliability and quality levels. while reliability formulas have traditionally been applied to small and illustrative power systems, large-scale reliability and quality assessment goes far beyond direct formula implementation. systems with hundreds of buses and tens of complex stations can only be analyzed using advanced and numerically efficient large-scale algorithms for reliability and quality assessment, as has been demonstrated in this paper.
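the percentage change column of table i is simply the relative drop from nominal to minimum maximum flow; a quick check of several tabulated rows (rounded to the nearest percent, as in the table):

```python
def pct_drop(nominal_mva, minimum_mva):
    """percentage drop in maximum station flow, rounded to the nearest percent."""
    return round(100.0 * (nominal_mva - minimum_mva) / nominal_mva)

# (nominal, minimum) pairs taken from table i
rows = {8004: (20.4, 20.4), 8813: (102.9, 88.7), 8079: (428.4, 276.7),
        9006: (700.0, 320.5), 8077: (21.7, 2.5)}
drops = {station: pct_drop(n, m) for station, (n, m) in rows.items()}
# e.g. station 8079: (428.4 - 276.7) / 428.4 ≈ 35%
```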
the reliability and performance quality indices, when evaluated at a given load level and a certain scenario ((n-2) outage contingency) of available generation and transmission capacities, provide practical indications of the adequacy and quality of system performance. acknowledgment this work was supported by the university of hail. references [1] r. billinton, power system reliability evaluation, gordon and breach, 1970 [2] j. endrenyi, reliability modeling in electric power systems, john wiley and sons, 1978 [3] h. f. lester, “power system: functional reliability more than component reliability is key in serving customers”, ieee spectrum, vol. 18, pp. 58-59, 1981 [4] r. billinton, e. khan, “a security based approach to composite power system reliability evaluation”, ieee transactions on power systems, vol. 7, no. 1, pp. 65-72, 1992 [5] o. p. bharti, r. k. saket, s. k. nagar, “controller design for dfig driven by variable speed wind turbine using static output feedback technique”, engineering, technology & applied science research, vol. 6, no. 4, pp. 1056-1061, 2016 [6] a. m. l. da silva, j. endrenyi, l. wang, “integrated treatment of adequacy and security in bulk power system reliability evaluations”, ieee transactions on power systems, vol. 8, no. 1, pp. 275-285, 1993 [7] m. de jong, g. papaefthymiou, p. palensky, “a framework for incorporation of infeed uncertainty in power system risk-based security assessment”, ieee transactions on power systems, vol. 33, no. 1, pp. 613-621, 2018 [8] p. henneaux, f. f. faghihi, p. e. labeau, j. c. maun, “towards a 3-level blackout probabilistic risk assessment: achievements and challenges”, 2013 ieee power & energy society general meeting, vancouver, canada, july 21-25, 2013 [9] r. billinton, d. huang, “effects of load forecast uncertainty on bulk electric system reliability evaluation”, ieee transactions on power systems, vol. 23, no. 2, pp. 418-425, 2008 [10] m. a. el-kady, b. m.
alshammari, “a practical framework for reliability and quality assessment of power systems”, journal of energy and power engineering, vol. 3, no. 4, pp. 499-507, 2011 [11] b. m. alshammari, m. a. el-kady, y. a. al-turki, “power system performance quality indices”, european transactions on electrical power, vol. 21, no. 5, pp. 1704-1710, 2011 [12] b. m. alshammari, m. a. el-kady, “probabilistic assessment of power system performance quality”, journal of energy and power engineering, vol. 4, no. 5, pp. 372-379, 2012 [13] s. gope, a. k. goswami, p. k. tiwari, “transmission congestion management using a wind integrated compressed air energy storage system”, engineering, technology & applied science research, vol. 7, no. 4, pp. 1746-1752, 2017 [14] o. kahouli, b. ashammari, k. sebaa, m. jebali, h. h. abdallah, “type-2 fuzzy logic controller based pss for large scale power systems stability”, engineering, technology & applied science research, vol. 8, no. 5, pp. 3380-3386, 2018 [15] x. li, p. balasubramanian, m. sahraei-ardakani, m. abdi-khorsand, k. w. hedman, r. podmore, “real-time contingency analysis with corrective transmission switching”, ieee transactions on power systems, vol. 32, no. 4, pp. 2604-2617, 2017 [16] ieee committee report, “proposed definitions of terms for reporting and analyzing outages of generating equipment”, ieee transactions on power apparatus and systems, vol. pas-85, no. 4, pp. 390-393, 1966 [17] ieee committee report, “definitions of customers and load reliability indices for evaluating electric power system performance, paper a75 588-4”, ieee pes summer meeting, san francisco, usa, july 20-25, 1975 [18] m. a. el-kady, b. a. alaskar, a. m. shaalan, b. m.
al-shammri, “composite reliability and quality assessment of interconnected power systems”, international journal for computation and mathematics in electrical and electronic engineering, vol. 26, no. 1, pp. 7-21, 2007 [19] s. t. lee, “estimating the probability of cascading outages in a power grid”, 16th pscc, glasgow, scotland, july 14-18, 2008 [20] b. m. alshammari, “evaluation of power system reliability levels for (n-1) outage contingency”, international journal of advanced and applied sciences, vol. 6, no. 11, pp. 68-74, 2019 [21] b. m. alshammari, “assessment of reliability and quality performance using impact of shortfall generation capacity index on power systems”, engineering, technology & applied science research, vol. 9, no. 6, pp. 4937-4941, 2019 [22] j. van casteren, m. bollen, m. schmieg, “reliability assessment of electrical power systems: the weibull-markov stochastic model”, ieee transactions on industry applications, vol. 36, no. 3, pp. 911-915, 2000 engineering, technology & applied science research vol. 10, no. 4, 2020, 6080-6086 www.etasr.com lemita et al.: gradient descent optimization control of an activated sludge process based on radial basis function neural network gradient descent optimization control of an activated sludge process based on radial basis function neural network abdallah lemita department of electronics, faculty of engineering, ferhat abbas university setif 1, setif, algeria abdallahlemita@yahoo.fr sebti boulahbel department of electronics, faculty of engineering, ferhat abbas university setif 1, setif, algeria boulahbel_s@yahoo.fr sami kahla research center in industrial technologies, cheraga, algiers, algeria samikahla40@yahoo.com abstract-most systems in science and engineering can be described in the form of ordinary differential equations, but only a limited number of these equations can be solved analytically.
for that reason, numerical methods have been used to obtain approximate solutions of differential equations. among these methods, the most famous is the euler method. in this paper, a new control strategy utilizing the euler and gradient methods, based on a radial basis function neural network (rbfnn) model, is proposed to control the activated sludge process of wastewater treatment. the aim is to maintain the dissolved oxygen (do) level in the aerated tank and to keep the substrate concentration, measured as chemical oxygen demand (cod5), within the standard limits. the simulation results for do show the robustness of the proposed control method compared to the classical method. the proposed method can be applied in wastewater treatment systems. keywords-activated sludge process; euler method; gradient method; nonlinear system; rbf neural network; wastewater treatment i. introduction various industrial processes often generate large quantities of wastewater that must be treated in the safest and least expensive way, according to the discharge regulations. this water, prior to its discharge, is treated through a primary and a secondary process, which increases production cost. therefore, modern industries seek ways to reduce the use of water during the production process and/or means for a more efficient and low-cost secondary treatment. the primary treatment consists of an operation that separates solid particulate materials and coarse contaminants by preliminary decanting. the secondary treatment follows the decanting and consists of the biological removal of dissolved contaminant material by the use of activated sludge, i.e. microorganisms that metabolize the dissolved organic matter in aerobic conditions [1, 2]. the dissolved oxygen (do) level has a direct influence on the activity of the microorganisms.
an insufficient supply of do worsens the quality of the treated wastewater, and for that reason the control of the do concentration has become the most studied control problem in the activated sludge process [3]. many control strategies have been proposed for the activated sludge process of wastewater treatment, ranging from classical controllers such as the proportional-integral-derivative (pid) controller used to keep the process at a set-point [4, 5] to fuzzy logic control used to improve the operational performance of the system [6, 7]. some modern controllers based on the process model have also been used for the activated sludge process. model predictive control (mpc) methods have been applied to distinct activated sludge processes [8-10]. an adaptive fuzzy control strategy for do concentration was used to control the activated sludge process in [11], where the controller manipulates the flow control valves supplying air to the bioreactor. in [12], takagi-sugeno fuzzy pi control was applied for managing the do concentration; the authors considered the dilution rate, influent do, and influent substrate concentration as disturbances. two control strategies, a gain-scheduling pi control and a model predictive control (mpc), were used in [13] to maintain the substrate concentration in the effluent within the standard limits by controlling the do concentration. authors in [14] employed a fuzzy model-based predictive controller for the activated sludge process, with the objective of maintaining the do concentration. authors in [15] used a takagi-sugeno (ts) fuzzy inference system (fis) to approximate the feedback linearization law for controlling the do concentration in the bioreactor, with the purpose of keeping the chemical oxygen demand (cod5) limited in the effluent. piotrowski proposed a nonlinear fuzzy control for tracking the do reference trajectory in the activated sludge process via the aeration system [16].
a sequencing batch reactor and its aeration system are modeled as a plant controlled by a cascade nonlinear adaptive control system extended by an anti-windup filter in [17]. authors in [18] developed an adaptive neural technique using a disturbance observer to solve the do concentration control problem. in this paper, a nonlinear control strategy based on the euler and gradient methods is proposed to control the do in the wastewater treatment process via the aeration rate. the performance of the proposed control laws is illustrated with numerical simulations and the results are compared with those of a conventional pi controller. (corresponding author: sami kahla)

ii. euler method

let us consider the following differential equation:

\frac{\partial y_u(t)}{\partial t} = f(t, y_u(t), u(t))    (1)

where, \forall u \in \mathbb{R}, y_u(0) = y_0 is the initial condition, t is the time, u is the control input, and y_u is the system output. considering the two control inputs u(t) = u_0 and u(t) = u_1, (1) yields:

\frac{\partial y_{u_0}(t)}{\partial t} = f(t, y_{u_0}(t), u_0)    (2)

\frac{\partial y_{u_1}(t)}{\partial t} = f(t, y_{u_1}(t), u_1)    (3)

figure 1 shows the solution curves of (2) and (3). fig. 1. curves of equations (2) and (3). the numerical solution of the differential equation (2) is defined to be a set of points (t_k, y_k), where each point is an approximation to the corresponding point (t_k, y(t_k)). we begin by discretizing the variable t into n equal subintervals such that t_1 - t_0 = t_2 - t_1 = \dots = t_n - t_{n-1} = h, where the parameter h is the step size. the principle of euler's method is to approximate the solutions of (2); we begin by integrating both sides of (2) between t_0 and t_1 (choosing a step size h = t_1 - t_0).
the system equation can be written as follows:

y_{u_0}(t_1) = y_{u_0}(t_0) + \int_{t_0}^{t_1} f(t, y_{u_0}(t), u_0)\,dt    (4)

by using the euler method, (4) can be approximated as:

y_{u_0}(t_1) \approx y_{u_0}(t_0) + h\,f(t_0, y_{u_0}(t_0), u_0)    (5)

and, for k \geq 0:

y_{u_0}(t_{k+1}) = y_{u_0}(t_k) + h\,f(t_k, y_{u_0}(t_k), u_0)    (6)

the objective of the proposed algorithm is to control the system output y_{u_0}(t_{k+1}) so that it tracks a desired reference r(t_{k+1}) via the control input u. for that reason, we have to find, at every instant t_k, the value of u_k that makes the system output y_{u_0} track the reference r.

iii. gradient descent algorithm for the control of a nonlinear system

gradient descent is an iterative minimization method. in this paper, the gradient descent method is employed to control a nonlinear system. from (6), we have:

y_{u_0}(t_1) = y_{u_0}(t_0) + h\,f(t_0, y_{u_0}(t_0), u_0)    (7)

firstly, at time t_1, we have to find u_1 such that y_{u_1}(t_1) = r(t_1):

y_{u_1}(t_1) = y_{u_0}(t_0) + h\,f(t_0, y_{u_0}(t_0), u_1)    (8)

the control input u_k is adjusted by using the gradient descent algorithm, minimizing the objective function with respect to u_0. the objective function in this case is the squared error e(t_1) between y_{u_1}(t_1) and y_{u_0}(t_1):

e(t_1) = \frac{1}{2}\left(y_{u_1}(t_1) - y_{u_0}(t_1)\right)^2 = \frac{1}{2}\left(r(t_1) - y_{u_0}(t_1)\right)^2    (9)

the control input u_1 is updated by using the gradient descent rule:

u_1 = u_0 - \lambda \frac{\partial e(t_1)}{\partial u_0}    (10)

where \lambda is the learning rate parameter. expanding by the chain rule:

u_1 = u_0 - \lambda \frac{\partial e(t_1)}{\partial y_{u_0}(t_1)} \cdot \frac{\partial y_{u_0}(t_1)}{\partial u_0}    (11)

u_1 = u_0 + \lambda\,e(t_1)\,\frac{\partial y_{u_0}(t_1)}{\partial u_0}    (12)

the rbf neural network will be used to determine \partial y_{u_0}(t_1) / \partial u_0. iv. rbfnn algorithm the radial basis function neural network (rbfnn) was introduced in [19].
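the forward euler recursion (6) can be sketched as follows; the decaying first-order plant below is an illustrative stand-in, not the paper's activated sludge model:

```python
import math

def euler_step(f, t, y, u, h):
    """one forward euler step, as in (6): y(t+h) ≈ y(t) + h*f(t, y(t), u)."""
    return y + h * f(t, y, u)

# illustrative plant dy/dt = -y + u (not the activated sludge model)
f = lambda t, y, u: -y + u
h, y, t = 0.001, 1.0, 0.0
for _ in range(1000):            # integrate from t = 0 to t = 1 with u = 0
    y = euler_step(f, t, y, 0.0, h)
    t += h
# y now approximates the exact solution exp(-1) ≈ 0.3679
```

the same recursion, with the activated sludge model as f and the step size h = 0.5 used later in the paper, produces the simulated trajectories.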
the rbfnn has three layers: an input layer, a nonlinear hidden layer that uses the gaussian function as activation function, and a linear output layer [20-22]. rbfnns have many uses, including function approximation, classification, and system control. they have the advantage of fast learning speed and are able to avoid the problem of local minima. the structure of the rbf neural network is illustrated in figure 2. fig. 2. rbf neural network structure. the output of the j-th hidden neuron with center c_{i,j} and width parameter b_j is:

h_j = \exp\left(-\frac{\|x - c_j\|^2}{2 b_j^2}\right)    (13)

where x = [x_1, x_2, \dots, x_n]^T is the input vector of the rbf network. the rbfnn output can be described by the following equation:

y_{nn} = \sum_{j=1}^{m} w_{l,j} h_j = \sum_{j=1}^{m} w_{l,j} \exp\left(-\frac{\|x - c_j\|^2}{2 b_j^2}\right)    (14)

where w_{l,j} is the weight between the hidden layer and the output layer. the centers c_{i,j}, the basis width parameters b_j, and the weights w_{l,j} of the rbfnn are adjusted by using the gradient descent algorithm to minimize the sum of squared errors e_{rbf} (the error between the system output y_{u_0} and the rbfnn output y_{nn}, figure 3), using the following update equations:

c_{i,j}(k) = c_{i,j}(k-1) + \Delta c_{i,j} + \alpha\left(c_{i,j}(k-1) - c_{i,j}(k-2)\right)    (15)

b_j(k) = b_j(k-1) + \Delta b_j + \alpha\left(b_j(k-1) - b_j(k-2)\right)    (16)

w_{l,j}(k) = w_{l,j}(k-1) + \Delta w_{l,j} + \alpha\left(w_{l,j}(k-1) - w_{l,j}(k-2)\right)    (17)

the expression of e_{rbf} is given as:

e_{rbf} = \frac{1}{2}\sum_{k=1}^{r} e_{rbf}(k)^2 = \frac{1}{2}\sum_{k=1}^{r} \left(y_{u_0}(k) - y_{nn}(k)\right)^2    (18)

the corresponding modifier terms are:

\Delta c_{i,j} = \eta\,e_{rbf}\,w_{l,j}\,h_j\,\frac{x_i - c_{i,j}}{b_j^2}    (19)

\Delta b_j = \eta\,e_{rbf}\,w_{l,j}\,h_j\,\frac{\|x - c_{i,j}\|^2}{b_j^3}    (20)

\Delta w_{l,j} = \eta\,e_{rbf}\,h_j    (21)

where \alpha is the momentum factor and \eta is the learning rate. generally, it is difficult or impossible to find \partial y_{u_0}(t_1) / \partial u_0 analytically, therefore the rbfnn is used to approximate it. if the rbfnn output y_{nn} is equal to the system output y_{u_0}, we can use the rbfnn output to find \partial y_{u_0}(t_1) / \partial u_0.
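equations (13)-(14) amount to a weighted sum of gaussian bumps; a small self-contained forward pass (the centers, widths, and weights below are arbitrary illustrative values, not trained ones):

```python
import math

def rbf_forward(x, centers, widths, weights):
    """rbfnn output y_nn = sum_j w_j * exp(-||x - c_j||^2 / (2 b_j^2)),
    as in (13)-(14). centers[j] has the same length as x."""
    y, hs = 0.0, []
    for c, b, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        h = math.exp(-d2 / (2.0 * b * b))   # hidden-neuron output (13)
        hs.append(h)
        y += w * h                          # linear output layer (14)
    return y, hs

# two inputs (x1 = u, x2 = y) and three hidden neurons -- illustrative values
centers = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
widths  = [1.0, 1.0, 1.0]
weights = [0.5, -0.2, 0.8]
y_nn, h = rbf_forward((1.0, 1.0), centers, widths, weights)
```

a neuron whose center coincides with the input fires at its maximum h_j = 1, and the response decays smoothly with distance, which is what gives the network its local-approximation character.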
the rbfnn output y_{nn} will approach the system output [23], so \partial y_{u_0} / \partial u_0 can be written as:

\frac{\partial y_{u_0}(t_1)}{\partial u_0} \approx \frac{\partial y_{nn}}{\partial x_1} = \frac{\partial}{\partial x_1}\sum_{j=1}^{m} w_{1,j} \exp\left(-\frac{\|x - c_{i,j}\|^2}{2 b_j^2}\right)    (22)

so:

\frac{\partial y_{u_0}(t_1)}{\partial u_0} = \sum_{j=1}^{m} w_{1,j}\,\frac{c_{1,j} - x_1}{b_j^2}\,h_j    (23)

with x = [x_1\ x_2]^T = [u_0\ y_{u_0}]^T and c_{i,j} = \begin{bmatrix} c_{1,1} & c_{1,2} & \dots & c_{1,m} \\ c_{2,1} & c_{2,2} & \dots & c_{2,m} \end{bmatrix}. fig. 3. schema of rbfnn. substituting (23) into (12), we get the control law:

u_1 = u_0 + \lambda\,e(t_1)\sum_{j=1}^{m} w_{1,j}\,\frac{c_{1,j} - u_0}{b_j^2}\,h_j    (24)

and, for k \geq 0:

u_{k+1} = u_k + \lambda\,e(t_{k+1})\sum_{j=1}^{m} w_{1,j}\,\frac{c_{1,j} - u_k}{b_j^2}\,h_j    (25)

replacing the found value of u_{k+1} in (6):

y_{u_0}(t_{k+1}) = y_{u_0}(t_k) + h\,f(t_k, y_{u_0}(t_k), u_{k+1})    (26)

according to this, we obtain y_{u_0}(t_{k+1}) = r(t_{k+1}). the structure of the proposed method is illustrated in figure 4. fig. 4. schema of the proposed control strategy. v. mathematical model of the wastewater treatment process the activated sludge process is a biological treatment that uses microorganisms (biomass) to remove organic matter, nitrogen, and phosphorus; organic and nitrogen removal are the most common in wastewater treatment. the schema of the wastewater treatment process is illustrated in figure 5. fig. 5. schema of activated sludge process. the process consists of a biological reactor (aeration tank), where the microorganism (biomass) population develops and removes the substrate from the reactor, and a settler. in the settler tank, the solids are separated from the wastewater, and a part of the removed sludge is recycled back to the aeration tank.
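the sensitivity (23) is just the analytic derivative of the gaussian sum (14) with respect to the first input; a sketch checking it against a central finite difference (again with arbitrary illustrative network values):

```python
import math

def rbf_out(x1, x2, centers, widths, weights):
    """gaussian-sum output (14) for a two-input rbfnn."""
    return sum(w * math.exp(-((x1 - c1) ** 2 + (x2 - c2) ** 2) / (2 * b * b))
               for (c1, c2), b, w in zip(centers, widths, weights))

def dy_du(x1, x2, centers, widths, weights):
    """sensitivity (23): dy_nn/dx1 = sum_j w_j * (c_{1,j} - x1) / b_j^2 * h_j."""
    s = 0.0
    for (c1, c2), b, w in zip(centers, widths, weights):
        h = math.exp(-((x1 - c1) ** 2 + (x2 - c2) ** 2) / (2 * b * b))
        s += w * (c1 - x1) / (b * b) * h
    return s

centers = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]        # illustrative values
widths, weights = [1.0, 0.8, 1.2], [0.5, -0.2, 0.8]
u0, y0 = 0.7, 1.3                                     # current input and output
analytic = dy_du(u0, y0, centers, widths, weights)
eps = 1e-6
numeric = (rbf_out(u0 + eps, y0, centers, widths, weights)
           - rbf_out(u0 - eps, y0, centers, widths, weights)) / (2 * eps)
# analytic and central-difference derivatives agree closely
u1 = u0 + 0.05 * (1.5 - y0) * analytic   # update (24): lambda = 0.05, r = 1.5 (illustrative)
```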
the mathematical model considered in this paper contains four differential equations, for the biomass concentration x, the substrate concentration s, the do concentration do, and the recycled biomass concentration x_r. the model is given by the following equations [24, 25]:

\frac{\partial x(t)}{\partial t} = f_1 = \mu(t)\,x(t) - D(1+r)\,x(t) + D\,r\,x_r(t)    (27)

\frac{\partial s(t)}{\partial t} = f_2 = -\frac{\mu(t)}{Y}\,x(t) - D(1+r)\,s(t) + D\,s_{in}    (28)

\frac{\partial do(t)}{\partial t} = f_3 = -\frac{K_0\,\mu(t)}{Y}\,x(t) - D(1+r)\,do(t) + kLa\left(do_{max} - do(t)\right) + D\,do_{in}    (29)

\frac{\partial x_r(t)}{\partial t} = f_4 = D(1+r)\,x(t) - D(\beta+r)\,x_r(t)    (30)

with:

\mu(t) = \mu_{max}\,\frac{s(t)}{K_s + s(t)}\,\frac{do(t)}{K_{do} + do(t)}    (31)

and kLa = \alpha\,W(k), where W is the air flow rate, which is considered as the control input used to maintain the oxygen concentration level in the aeration tank. the used step size is h = 0.5. more details about the model parameters can be found in the appendix. vi. results and discussion the proposed method has been used to control the organic cod5 in the aeration tank through do concentration control. figures 6 and 7 show the do and substrate concentrations in open loop (without control). it can clearly be seen that the substrate concentration is above the standard limit of 20mg/l, so the control of the substrate becomes a necessity. in order to test the effectiveness and the performance of the proposed method, the set-point of the dissolved oxygen concentration changes stepwise from 5mg/l to 5.5mg/l, from 5.5mg/l to 6.5mg/l, and from 6mg/l to 7mg/l. for comparison, two controllers have been used: the pi controller with parameters kp=3, ki=0.9, and the pso-pi controller with the optimized parameters kp=7.3618 and ki=8.8304.
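model (27)-(31) can be integrated with the same forward euler scheme and the step size h = 0.5 used in the paper; a sketch of an open-loop run at a constant, illustrative aeration rate w = 40 m3/h, with parameter and initial values taken from the appendix (the constant-w run is illustrative, not one of the paper's reported experiments):

```python
# appendix parameter values
Y, MU_MAX, KS, KDO = 0.65, 0.15, 100.0, 2.0
DO_MAX, K0, ALPHA, R, BETA = 10.0, 0.5, 0.018, 0.6, 0.2
S_IN, DO_IN, D = 200.0, 0.5, 0.04   # constant dilution rate d = 0.04 h^-1

def asp_derivs(x, s, do, xr, w):
    """right-hand sides of (27)-(30), with mu from (31) and kla = alpha*w."""
    mu = MU_MAX * s / (KS + s) * do / (KDO + do)          # (31)
    kla = ALPHA * w
    dx  = mu * x - D * (1 + R) * x + D * R * xr           # (27)
    ds  = -mu / Y * x - D * (1 + R) * s + D * S_IN        # (28)
    ddo = (-K0 * mu / Y * x - D * (1 + R) * do
           + kla * (DO_MAX - do) + D * DO_IN)             # (29)
    dxr = D * (1 + R) * x - D * (BETA + R) * xr           # (30)
    return dx, ds, ddo, dxr

h = 0.5                                  # step size used in the paper
x, s, do, xr = 20.0, 88.0, 2.0, 320.0    # appendix initial values
for _ in range(2000):                    # 1000 h of simulated time
    dx, ds, ddo, dxr = asp_derivs(x, s, do, xr, w=40.0)
    x, s, do, xr = x + h * dx, s + h * ds, do + h * ddo, xr + h * dxr
```

the proposed controller would replace the constant w=40.0 with the update (25) at every step, driving do toward its set-point.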
at the beginning, the dilution rate and the influent substrate concentration are considered constant (d=0.04h-1 and sin=200mg/l). fig. 6. dissolved oxygen concentration. fig. 7. chemical oxygen demand cod5. fig. 8. a) dissolved oxygen concentration with constant dilution rate d=0.04h-1, b) zoomed view. fig. 9. aeration rate (control variable). the do concentration with constant dilution rate is depicted in figure 8. from the simulation results it can be seen that the proposed controllers are able to control the do level to track the desired set-point doref, in contrast to the pi controller, which does not track the desired reference doref. initially the set-point for the do level doref is 5mg/l and the control variable or aeration rate w is at 40m3/h (figure 9). after a while, when the do set-point doref suddenly changes to 5.5mg/l, the aeration rate w increases to 45m3/h to satisfy the augmented demand for oxygen (the do level changes to track the set-point level doref). so, w depends on the demand for oxygen (when w increases the do level increases, and vice versa). the dilution rate and the influent substrate concentration are then considered variable (as in real wastewater treatment systems). in figure 10, different values of the dilution rate were considered to cover the working domain (the water flow entering the reactor is not constant throughout the operation). figure 11 shows the influent substrate concentration sin with different values to ensure a realistic study of the wastewater system. fig. 10. dilution rate.
when the influent substrate concentration increases from 200mg/l to 300mg/l, the pi controller (figure 12) is strongly affected by this change and is not able to track the set-point reference, in contrast with the proposed method, which rejects the disturbance generated by the influent substrate concentration while the do concentration shows good tracking of the set-point reference. the evolution of the aeration rate obtained by the control methods under different values of dilution rate and influent substrate concentration is depicted in figure 13. it can be seen that the power signal (control variable) of the proposed method is higher compared with those of the pi and pso-pi controllers. fig. 11. influent substrate concentration. fig. 12. a) dissolved oxygen concentration with modified dilution rate and influent substrate, b) zoomed view. in figure 14 it can be seen that the chemical oxygen demand cod5 is biologically degraded below 20mg/l (the legislated limit for wastewater treatment) and the desired objective is achieved in the case of a variable set-point of the dissolved oxygen concentration. in order to compare the different control strategies, their performance should be assessed by the integral of absolute error (iae) and the integral of square error (ise). these criteria are computed as:

iae = \int_0^\infty |e(t)|\,dt    (32)

ise = \int_0^\infty e(t)^2\,dt    (33)

fig. 13. aeration rate (control variable). fig. 14. chemical oxygen demand cod5. table i. simulated iae and ise of the used controllers.

control method                   iae      ise
pi controller                    0.0498   0.0259
pso-pi controller                0.0309   0.0135
gradient method based on rbfnn   0.0048   0.0010

vii.
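the criteria (32)-(33) can be approximated from sampled error data by simple numerical quadrature; a sketch (the exponential error trace below is illustrative, not a simulated controller error):

```python
import math

def iae_ise(errors, dt):
    """discrete approximations of iae = integral of |e(t)| dt and
    ise = integral of e(t)^2 dt, via the rectangle rule on uniform samples."""
    iae = sum(abs(e) for e in errors) * dt
    ise = sum(e * e for e in errors) * dt
    return iae, ise

dt = 1e-3
errors = [math.exp(-k * dt) for k in range(20000)]   # e(t) = e^{-t}, t in [0, 20)
iae, ise = iae_ise(errors, dt)
# analytically, the integrals over [0, inf) are 1 and 0.5, respectively
```

applying the same two sums to the error traces of the three controllers yields the figures reported in table i.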
conclusion wastewater treatment processes are markedly nonlinear systems, and the limited measurement data available on biological processes complicate the control task when classical methods are used. in this paper, the proposed control method based on the euler and gradient methods has been established to control the chemical oxygen demand cod5 via the control of the do concentration in an activated sludge process of wastewater treatment (no measurements of the substrate concentration are needed). the effectiveness of the proposed method was evaluated through a comparison with the classical pi controller, using a variable set-point reference for the do concentration. based on the above results, it can be seen that the proposed controller is the better choice in terms of performance, settling time, and process overshoot.

appendix

model parameters:

description                                            symbol    units      value
biomass yield factor                                   Y         -          0.65
maximum specific growth rate                           mu_max    h^-1       0.15
half-saturation coefficient for microorganisms         K_s       mg.l^-1    100
oxygen half-saturation coefficient for microorganisms  K_do      mg.l^-1    2
maximum do concentration                               do_max    mg.l^-1    10
model constant                                         K_0       -          0.5
oxygen transfer rate                                   alpha     -          0.018
ratio of recycled flow                                 r         -          0.6
ratio of waste flow                                    beta      -          0.2
influent substrate concentration                       s_in      mg.l^-1    200
influent do concentration                              do_in     mg.l^-1    0.5
oxygen mass transfer coefficient                       kla       h^-1       -
aeration rate                                          w         m^3.h^-1   -
dilution rate                                          d         h^-1       -

initial values:

variable concentration            symbol    units      value
substrate concentration           s         mg.l^-1    88
biomass concentration             x         mg.l^-1    20
dissolved oxygen concentration    do        mg.l^-1    2
recycled biomass concentration    x_r       mg.l^-1    320

references [1] j. boer and p. blaga, “optimizing production costs by redesigning the treatment process of the industrial waste water,” procedia technology, vol. 22, pp. 419-424, jan. 2016, doi: 10.1016/j.protcy.2016.01.071. [2] y. song, y. xie, and d. yudianto, “extended activated sludge model no. 1 (asm1) for simulating biodegradation process using bacterial technology,” water science and engineering, vol. 5, no. 3, pp. 278-290, sep. 2012, doi: 10.3882/j.issn.1674-2370.2012.03.004. [3] h. a. maddah, “numerical analysis for the oxidation of phenol with tio2 in wastewater photocatalytic reactors,” engineering, technology & applied science research, vol. 8, no. 5, pp. 3463-3469, oct. 2018. [4] m. yong, p. yongzhen, and u. jeppsson, “dynamic evaluation of integrated control strategies for enhanced nitrogen removal in activated sludge processes,” control engineering practice, vol. 14, no. 11, pp. 1269-1278, nov. 2006, doi: 10.1016/j.conengprac.2005.06.018. [5] r. tzoneva, “optimal pid control of the dissolved oxygen concentration in the wastewater treatment plant,” in africon 2007, windhoek, south africa, sep. 2007, doi: 10.1109/afrcon.2007.4401608. [6] a. traoré et al., “fuzzy control of dissolved oxygen in a sequencing batch reactor pilot plant,” chemical engineering journal, vol. 111, no. 1, pp. 13-19, jul. 2005, doi: 10.1016/j.cej.2005.05.004. [7] c.-s. chen, “robust self-organizing neural-fuzzy control with uncertainty observer for mimo nonlinear systems,” ieee transactions on fuzzy systems, vol. 19, no. 4, pp. 694-706, aug. 2011, doi: 10.1109/tfuzz.2011.2136349. [8] b. holenda, e. domokos, á. rédey, and j. fazakas, “dissolved oxygen control of the activated sludge wastewater treatment process using model predictive control,” computers & chemical engineering, vol. 32, no. 6, pp. 1270-1278, jun. 2008, doi: 10.1016/j.compchemeng.2007.06.008. [9] m. li, l. zhou, j.
wang, “neural network predictive control for dissolved oxygen based on levenberg-marquardt algorithm,” trans. chin. soc. agric. mach., vol. 47, pp. 297-302, 2016. [10] g. s. ostace, v. m. cristea, and p. ş. agachi, “cost reduction of the wastewater treatment plant operation by mpc based on modified asm1 with two-step nitrification/denitrification model,” computers & chemical engineering, vol. 35, no. 11, pp. 2469-2479, nov. 2011, doi: 10.1016/j.compchemeng.2011.03.031. [11] c. a. c. belchior, r. a. m. araújo, and j. a. c. landeck, “dissolved oxygen control of the activated sludge wastewater treatment process using stable adaptive fuzzy control,” computers & chemical engineering, vol. 37, pp. 152-162, feb. 2012, doi: 10.1016/j.compchemeng.2011.09.011. [12] y. han, m. a. brdys, and r. piotrowski, “nonlinear pi control for dissolved oxygen tracking at wastewater treatment plant,” ifac proceedings volumes, vol. 41, no. 2, pp. 13587-13592, jan. 2008, doi: 10.3182/20080706-5-kr-1001.02301. [13] c. vlad, m. i. sbarciog, m. barbu, and a. v. wouwer, “indirect control of substrate concentration for a wastewater treatment process by dissolved oxygen tracking,” journal of control engineering and applied informatics, vol. 14, no. 1, pp. 38-47, mar. 2012. [14] t. yang, w. qiu, y. ma, m. chadli, and l. zhang, “fuzzy model-based predictive control of dissolved oxygen in activated sludge processes,” neurocomputing, vol. 136, pp. 88-95, jul. 2014, doi: 10.1016/j.neucom.2014.01.025. [15] m. bahita and k. belarbi, “fuzzy adaptive control of dissolved oxygen in a waste water treatment process,” ifac-papersonline, vol. 48, no. 24, pp. 66-70, jan. 2015, doi: 10.1016/j.ifacol.2015.12.058. [16] r. piotrowski and a. skiba, “nonlinear fuzzy control system for dissolved oxygen with aeration system in sequencing batch reactor,” information technology and control, vol. 44, no. 2, pp. 182-195, jun. 2015, doi: 10.5755/j01.itc.44.2.7784. [17] r. piotrowski, k. błaszkiewicz, and k.
duzinkiewicz, “analysis the parameters of the adaptive controller for quality control of dissolved oxygen concentration,” information technology and control, vol. 45, no. 1, pp. 42–51, mar. 2016, doi: 10.5755/j01.itc.45.1.9246. [18] m.-j. lin and f. luo, “adaptive neural control of the dissolved oxygen concentration in wwtps based on disturbance observer,” neurocomputing, vol. 185, pp. 133–141, apr. 2016, doi: 10.1016/j.neucom.2015.12.045. [19] m.-j. syu and b.-c. chen, “back-propagation neural network adaptive control of a continuous wastewater treatment process,” industrial & engineering chemistry research, vol. 37, no. 9, pp. 3625–3630, sep. 1998, doi: 10.1021/ie9801655. [20] c. j. b. macnab, “stable neural-adaptive control of activated sludge bioreactors,” presented at the 2014 american control conference, portland, or, usa, jun. 2014, pp. 2869–2874, doi: 10.1109/acc.2014.6858627. [21] m. mahshidnia and a. jafarian, “forecasting wastewater treatment results with an anfis intelligent system,” engineering, technology & applied science research, vol. 6, no. 5, pp. 1175–1181, oct. 2016. [22] j. qiao, w. fu, and h. han, “dissolved oxygen control method based on self-organizing t-s fuzzy neural network,” ciesc journal, vol. 67, pp. 960–966, mar. 2016. [23] h. hasanpour, m. h. beni, and m. askari, “adaptive pid control based on rbf nn for quadrotor,” international research journal of applied and basic sciences, vol. 11, no. 2, pp. 177–186, 2017. [24] g. olsson and b. newell, wastewater treatment systems: modelling, diagnosis and control, vol. 4. london, uk: iwa publishing, 2015. [25] h. zhou, “dissolved oxygen control of wastewater treatment process using self-organizing fuzzy neural network,” ciesc j. vol. 68, pp.1516–1524, 2017. microsoft word 03-3290_s_etasr_v10_n2_pp5361-5366 engineering, technology & applied science research vol. 10, no. 
2, 2020, 5361-5366 5361 www.etasr.com mangi et al.: parametric study of pile response to side-by-side twin tunneling in stiff clay

parametric study of pile response to side-by-side twin tunneling in stiff clay

naeem mangi, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, naeem08ce30@gmail.com
daddan khan bangwar, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, daddan@quest.edu.pk
hemu karira, department of civil engineering, mehran university of engineering & technology, khairpur mir’s, sindh, pakistan, engr.hemu07civil@gmail.com
samiullah kalhoro, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, samiullahkalhoro63@gmail.com
ghulam rasool siddiqui, department of civil engineering, mehran university of engineering & technology, khairpur mir’s, sindh, pakistan, ghulamrasoolsiddiqui@gmail.com

abstract—a three-dimensional coupled-consolidation numerical parametric study was carried out in order to gain new insight into the response of a single pile to side-by-side twin tunneling in saturated stiff clay. an advanced hypoplasticity (clay) constitutive model with small-strain stiffness was adopted. the effects of tunnel depth relative to the pile were investigated by simulating the twin tunnels near the pile at various depths, namely near the pile shaft, adjacent to the pile toe, and below the pile toe. it was found that the second tunneling in each case resulted in a larger settlement than the one due to the first tunneling, with a maximum percentage difference of 175% in the case of twin tunneling near the mid-depth of the shaft. this occurred due to the degradation of clay stiffness around the pile during the first tunneling. conversely, the first tunneling-induced bending moment was reduced substantially during the second tunneling.
the most critical location of twin tunnels relative to the pile was found to be below the pile toe.

keywords-twin tunneling; pile foundation; parametric study

i. introduction

tunnel excavation-induced stress relief and ground movements inevitably cause additional deformation and stress on adjacent pile foundations. it is a major concern for designers and engineers to evaluate the adverse effects on existing piles. to understand the pile–soil–tunnel interaction mechanism, many field monitoring studies, centrifuge model tests [1-3], analytical solutions, and numerical modeling studies [4-7] have been conducted. they all concluded that tunneling adjacent to existing pile foundations causes pile settlement, additional axial load on piles, and induced bending moments along the piles, which is unfavourable for piled foundations. their magnitudes likely depend on the relative locations of tunnels and piles. however, most previous studies have focused on the effects of a single tunnel on single piles and pile groups. in fact, twin tunneling is particularly favoured when developing underground transportation systems [8]. to obtain a satisfactory numerical model of the single pile response to side-by-side twin tunneling, the analysis needs to take account of the soil’s small-strain non-linearity. in view of the aforementioned issues, this study aims to systematically investigate the settlement and load transfer mechanism of an existing single pile due to side-by-side twin tunnels in saturated stiff clay. to achieve these objectives, a three-dimensional coupled-consolidation numerical parametric study was carried out by varying the twin tunnel depths relative to the pile.

ii. three-dimensional coupled consolidation analysis

to investigate the single pile response to side-by-side twin tunneling in stiff saturated clay, a three-dimensional coupled-consolidation numerical parametric study was conducted.
to facilitate validation of the numerical model, the tunnel diameter, pile diameter, embedded length, and clear distance between the pile and the tunnel were identical (in prototype scale) to those in the centrifuge test [9]. figure 1 shows the elevation view of the configuration of a typical numerical simulation in which twin tunnels were excavated adjacent to the pile toe. the diameter of each tunnel (d) was 6m. the embedded length (lp) and diameter (dp) of the pile were 18m and 0.8m, respectively. the modeled pile represents a cylindrical reinforced concrete pile (grade 40, reinforcement ratio=1) with a bending moment capacity of 800knm. the center-to-center distance between each tunnel and the pile was 5.5m (0.92d). it is worth noting that, in reality, high-rise buildings are unlikely to be built on a single pile; this hypothesized study is therefore an idealized case [3]. this simplification was made to understand the settlement and load transfer mechanism more clearly. the length of each model tunnel (along its longitudinal direction) is 72m, which is equivalent to 12d. each tunnel excavation was simulated in 28 steps. at each step, the tunnel advanced a distance of 2.5m (0.42d) [2]. a time increment of one day for each step was adopted in the finite element analysis. a monitoring section was selected at the transverse centreline of the pile (i.e. y/d=0) as a reference for the tunnel advancements.
(corresponding author: naeem mangi)
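the staged excavation described above (28 steps of 2.5m each, with one day of consolidation time per step) can be sketched as a small bookkeeping routine. the helper and constant names below are illustrative, not taken from the study's abaqus input:

```python
# Hypothetical helper sketching the staged tunnel advancement described
# in the text: 28 excavation steps of 2.5 m (0.42D) each, one day per step.
D = 6.0       # tunnel diameter (m)
STEP = 2.5    # advance per excavation step (m)
N_STEPS = 28  # steps per tunnel, as stated in the text

def advance_schedule(n_steps=N_STEPS, step=STEP, diameter=D):
    """Return (day, face position in m, face position in diameters) per step."""
    return [(day, round(day * step, 2), round(day * step / diameter, 2))
            for day in range(1, n_steps + 1)]

schedule = advance_schedule()
print(schedule[0])   # first excavation step
print(schedule[-1])  # face position after the final step
```

the same table can then be used to map each analysis increment to the face position relative to the monitoring section at y/d=0.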
the parametric study consisted of nine different numerical simulations (in total) in which the existing single pile was located between the twin tunnels, which were excavated one after the other on either side of the pile at various tunnel depths (zt) relative to the pile length (lp), namely near the pile shaft (zt/lp=0.67 and 0.83), adjacent to the pile toe (zt/lp=1.00), and below the pile toe (zt/lp=1.17, 1.33, 1.50, 1.67, 1.83 and 2.00). in addition to these simulations, a pile load test (l) was conducted numerically in “greenfield” conditions (i.e. with no tunnels present) to obtain the ultimate capacity of the pile in stiff clay. based on this, the working load was then calculated with a factor of safety of 3.0. the obtained working load was applied to the pile in the parametric analysis simulating twin tunneling. table i summarises the conducted numerical simulations.

fig. 1. configuration of a typical numerical run representing the case of zt/lp=1.00

table i. numerical simulations summary

description of numerical run          zt/lp   c/d
twin tunneling near the pile shaft    0.67    1.5, 1.5
                                      0.83    2.0, 2.0
twin tunneling next to the pile toe   1.00    2.5, 2.5
twin tunneling below the pile toe     1.17    3.0, 3.0
                                      1.33    3.5, 3.5
                                      1.50    4.0, 4.0
                                      1.67    4.5, 4.5
                                      1.83    5.0, 5.0
                                      2.00    5.5, 5.5
zt=tunnel depth, lp=pile length, c/d=cover-to-diameter ratio of the tunnel

iii. finite element mesh and boundary conditions

figure 2 shows an isometric view of a typical finite element mesh (for the case of zt/lp=1.0). the size of the mesh for each numerical simulation is 72m×72m×60m. these dimensions were sufficiently large to minimize the boundary effects in the numerical simulation, because a further increase in the dimensions of the finite element mesh did not lead to any change in the computed results. regarding the element size in the mesh, it was found that further halving the adopted mesh size changed the computed results by no more than 0.2%, suggesting that the mesh is sufficiently fine.
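the mesh-sensitivity argument above amounts to a simple acceptance criterion: a mesh is deemed fine enough when halving the element size changes the monitored result by no more than 0.2%. a minimal sketch (the helper names are hypothetical; the 0.2% tolerance is the one stated in the text):

```python
# Illustrative mesh-convergence check, not the study's actual workflow:
# compare a monitored quantity (e.g. pile settlement) computed on the
# adopted mesh and on a mesh with half the element size.
def relative_change(coarse, fine):
    """Relative change of the monitored result between two meshes."""
    return abs(fine - coarse) / abs(coarse)

def mesh_converged(coarse, fine, tol=0.002):
    """Accept the coarse mesh if refinement changes the result by <= tol."""
    return relative_change(coarse, fine) <= tol

# e.g. a computed settlement of 25.00 mm vs. 25.04 mm after refinement
print(mesh_converged(25.00, 25.04))  # 0.16% change: accepted
print(mesh_converged(25.00, 25.60))  # 2.4% change: refine further
```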
eight-noded hexahedral brick elements were used to model the soil and the pile. four-noded shell elements were adopted to model each tunnel lining. roller and pin supports were applied to the vertical sides and the base of the mesh, respectively. therefore, movements normal to the vertical boundaries and in all directions at the base were restrained. the water table was assumed to be at the ground surface. initially, the pore water pressure distribution was assumed to be hydrostatic. free drainage was allowed at the top boundary of the mesh. the tunnel lining was assumed to be continuous and impervious.

fig. 2. finite element mesh and boundary conditions of a typical numerical analysis (zt/lp=1.00)

interaction between the pile and the surrounding soil was modeled by the surface-to-surface contact provided in the abaqus software package [10]. the surface-to-surface contact formulation considers the shape of both the master and slave surfaces in the contacting region and allows pile-soil friction. the penalty approach was used for tangential contact, and the normal behavior was modeled as hard contact with no normal relative displacement between the pile and the surrounding soil. the interface was modelled by coulomb’s friction law, in which the interface friction coefficient (µ) and the limiting displacement (γlim) are required as input parameters. a limiting shear displacement of 5mm was assumed to achieve full mobilization of the interface friction, equal to µ×p', where p' is the normal effective stress between the two contact surfaces; a typical value of µ=0.35 for a bored pile was used [11]. the tunneling stress release process was modelled by the “element death” technique, which is widely used in finite element analysis. in this technique, elements and nodes can be deactivated and activated. in this study, the volume loss is predefined before tunneling by specifying the area of the annulus gap between the tunnel lining and the excavated soil.
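the interface law described above can be illustrated with a short routine. this is a sketch of the stated coulomb model (µ=0.35, γlim=5mm, capacity µ×p'), assuming linear mobilisation of shear with relative displacement up to the limiting value; it is not the abaqus contact code itself:

```python
# Illustrative Coulomb interface model with a limiting shear displacement,
# as described in the text (linear mobilisation up to gamma_lim is an
# assumption for this sketch).
MU = 0.35          # interface friction coefficient for a bored pile
GAMMA_LIM = 0.005  # limiting shear displacement (m), i.e. 5 mm

def interface_shear(p_eff, gamma, mu=MU, gamma_lim=GAMMA_LIM):
    """Mobilised interface shear stress (kPa) for a normal effective
    stress p_eff (kPa) and relative shear displacement gamma (m)."""
    tau_max = mu * p_eff                       # fully mobilised friction
    mobilisation = min(abs(gamma), gamma_lim) / gamma_lim
    return tau_max * mobilisation

print(interface_shear(100.0, 0.0025))  # half-mobilised: 17.5 kPa
print(interface_shear(100.0, 0.02))    # fully mobilised: 35.0 kPa
```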
the pattern of the non-uniform displacement boundaries was determined according to the displacement controlled model (dcm) in [5]. the soil inside the segment to be excavated is removed by deactivating the soil elements inside it and by specifying zero horizontal displacement at the tunnel face of that segment. in the meantime, the shell elements representing the tunnel lining are activated.

iv. constitutive model and model parameters used in finite element analysis

a basic hypoplastic model was developed to capture the nonlinear behavior (upon monotonic loading at medium- to large-strain levels) of granular materials [12]. the basic model consists of five parameters (table ii). to account for the strain dependency and path dependency of the soil stiffness (at small strains), the authors in [12] further improved the basic hypoplastic model by incorporating the concept of intergranular strain, which requires five additional parameters. the hypoplastic clay model with small-strain stiffness has been implemented in the finite element software package abaqus through a user-defined subroutine. the coefficient of lateral earth pressure at rest, ko, was estimated by the relevant equation in [14]. the concrete pile and tunnel lining were assumed to be linear elastic with a young's modulus of 35gpa and a poisson's ratio of 0.25. the thickness of the lining was taken as 0.25m. the unit weight of concrete was taken as 24kn/m³. the parameters for the pile and the tunnel lining are summarized in table iii.

table ii.
adopted model parameters of kaolin clay

description                                                               value
effective angle of shearing resistance at critical state, φ’              22°
parameter controlling the slope of the isotropic normal compression
  line in the ln(1+e) versus lnp plane, λ* [13]                           0.11
parameter controlling the slope of the isotropic unloading line
  in the ln(1+e) versus lnp plane, κ* [13]                                0.026
parameter controlling the position of the isotropic normal compression
  line in the ln(1+e)–lnp plane, n                                        1.36
parameter controlling the shear stiffness at medium- to large-strain
  levels, r                                                               0.65
parameter controlling the initial shear modulus upon 180° strain path
  reversal, mr                                                            14
parameter controlling the initial shear modulus upon 90° strain path
  reversal, mt                                                            11
size of the elastic range, R                                              1×10⁻⁵
parameter controlling the rate of degradation of the stiffness with
  strain, βr                                                              0.1
parameter controlling the degradation rate of the stiffness with
  strain, χ                                                               0.7
initial void ratio, e                                                     1.05
dry density (kg/m³)                                                       1136
coefficient of permeability, k (m/s)                                      1×10⁻⁹

table iii. adopted concrete parameters in fem

description           value
young's modulus, e    35gpa
poisson's ratio, ν    0.3
density, ρ            2400kg/m³

v. interpretation of computed results

a. induced pile settlement due to twin tunnels

figure 3 illustrates the pile settlement induced after the first and the second tunnel at different tunnel depths relative to the pile (zt/lp). the long-term pile settlement (i.e. 15 years after twin tunneling completion) is included in the figure. the pile settlement induced by tunneling at different depths, measured in the centrifuge modeling reported in [9, 15], is also shown in the figure for comparison. a linear increase in twin tunneling-induced settlement was observed as the tunnel depth increased from zt/lp=0.67 to 1.33. however, as the depth increased further (1.50≤zt/lp≤2.0), the induced settlement decreased with a similar trend.
this computed result is consistent with the one measured in centrifuge tests simulating tunneling at different depths relative to piles in stiff clay [9] and in sand [8]. this settlement mechanism with zt/lp can be attributed to the influence zone of tunneling-induced ground movement and the stress-release-affected regions. in the case of zt/lp=1.33, the entire pile stood within the influence zone of tunneling-induced ground movement, and the stress release region developed directly underneath the pile toe. when zt/lp=0.67, 0.83 and 1.00, the pile was located only partially within the influence zones. therefore, the pile experienced larger settlement in the case of zt/lp=1.33 than in these cases. although the entire pile in the cases of zt/lp=1.50, 1.67, 1.83 and 2.00 was located inside the tunneling-induced ground movement zone, the tunneling-induced pile settlement was less than that of zt/lp=1.33. this is because the location of the pile toe was beyond the tunneling-induced stress-release-affected region in these cases.

fig. 3. computed load settlement curve from the pile load test without tunneling

qualitatively, the second tunneling-induced settlement trend with respect to zt/lp was similar to the first tunneling-induced settlement. however, the magnitude of the pile settlement induced by the second tunnel was higher than that induced by the first tunnel in each case. this can be attributed to the degradation of clay stiffness around the pile due to stress release and the development of shear strains as a result of the first tunnel excavation. the largest settlement (175% of sp due to the first tunnel) was induced by the second tunnel when zt/lp=0.67. it can be seen that the settlement of the pile reduced in the long term (i.e. 15 years after completion of the twin tunnels) in all cases.
this pile heave is attributed to the dissipation of the excess negative pore water pressure generated around the pile due to the twin tunnels. the maximum reduction of the pile settlement (17.2% of the settlement after twin tunnels) was observed in the case of zt/lp=1.33.

b. changes in axial load distribution

figure 4 illustrates the axial force distribution along the pile with normalized depth (i.e. z/lp) below the ground surface after twin tunneling when zt/lp=0.67, 1.00 and 1.33. the axial load distribution before tunneling (after applying the working load) is also included in the figure as a reference. before tunneling, the pile carried approximately 75% of the working load (i.e. 1010kn) with its shaft resistance and the remainder with its end-bearing resistance. because tunneling was carried out near the mid-depth of the pile shaft in the case of zt/lp=0.67, the tunnel-induced reduction in normal stresses on the pile shaft and the downward soil movement caused an increase in axial load along the entire length of the pile after the first and second tunnel excavations. at the end of the first tunneling, the maximum increment in the axial force (63% of that at working load) was computed at z/lp=0.7, which is above the tunnel spring line.
by inspecting the axial load distribution after the first tunnel excavation, it is observed that along the upper half of the pile (0≤z/lp≤0.6) the shaft resistance decreases to zero. consequently, the load was transferred to the lower half of the pile. to maintain equilibrium, the pile had to settle to further mobilize the end-bearing and shaft resistance along the lower portion (z/lp>0.6). this led to increases of 73% and 24% in the mobilized end-bearing and shaft resistance at the lower portion (z/lp>0.5), respectively, upon completion of the first tunnel. the second tunneling in the case of zt/lp=0.67 caused a further reduction of the normal stresses on the pile shaft. consequently, the soil settled more than the pile, resulting in negative skin friction (nsf) along the upper half of the pile (0≤z/lp≤0.6). this suggests that this portion of the pile is subjected to drag load by the surrounding soil. this caused the pile to settle more than it did due to the first tunnel. to maintain vertical equilibrium of the pile, the soil surrounding the lower part of the pile (z/lp>0.6) resisted its settlement by mobilizing positive skin friction (psf) at the pile–soil interface and end-bearing resistance at the toe of the pile. owing to the second tunneling, the end-bearing and mobilized shaft resistance increased to 190% and 18%, respectively. owing to twin tunneling adjacent to the pile toe (zt/lp=1.00), the axial load increased along the pile at depths ranging from 0.42 to 1.0lp. the increase in axial load resulted from reduced shaft resistance, which was caused by stress release from twin tunneling adjacent to the pile toe. the reductions in shaft resistance were 22% and 43% after the first and twin tunneling (cumulative), respectively.
consequently, the axial load borne by the shaft resistance along the pile at depths ranging from 0.42 to 1.0lp was transferred downward to the pile toe, leading to a 75% and 156% increase in mobilized end-bearing resistance after the first and twin (cumulative) tunneling, respectively. in contrast to the tunneling near the mid-depth of the pile shaft (zt/lp=0.67) and adjacent to the pile toe (zt/lp=1.00), the axial load decreased along the entire length of the pile when twin tunnels were excavated below the pile toe (zt/lp=1.33). the advancement of the twin tunnels led to reductions in the end-bearing of the pile as a result of the stress release from the 2% volume loss. to compensate for the decrease in end-bearing resistance, the pile had to settle substantially to mobilize the shaft resistance along the entire pile length. this result is similar to those measured in the field [16]. the end-bearing decreased by 12% and 28% owing to the first and twin tunnels, respectively.

fig. 4. axial load distribution along the pile length

c. twin-tunneling induced bending moment along the pile

figure 5 illustrates the induced bending moment along the pile after the first and second tunneling in the cases of zt/lp=0.67, 1.00 and 1.33. a positive bending moment means that tensile stress was induced along the pile shaft facing the first tunnel. the measured bending moment of a single pile subjected to single tunneling in a centrifuge model test [9] is also included for comparison. since there was no rigid constraint at the pile head (the pile head was free to move and rotate), no bending moment was induced at or near the head of the pile in any of the cases. it can be observed that in the case of zt/lp=0.67, the maximum positive bending moment was induced in the pile at z/lp=0.6, near the spring line of the first tunnel. the authors in [17] have also reported a measured maximum bending moment occurring at the spring line of the tunnel near the shaft of a pile group.
this happened because the pile was subjected to lateral soil movement towards the tunnel resulting from significant stress release. the magnitude of the maximum positive bending moment was 400knm (which is 50% of the pile bending moment capacity). on the other hand, a negligible bending moment was induced near the pile toe (below the tunnel spring line), because the tunneling-induced soil movement below the tunnel spring line was insignificant [2]. subsequently, the pile was subjected to stress release on the opposite side as a result of the second tunnel’s excavation near the mid-depth of the pile shaft. consequently, the induced bending moment along the pile decreased after the second tunneling. the induced bending moment did not return to zero after the second tunneling, as the soil does not behave elastically. finally, a positive bending moment with a magnitude of 50knm was induced. owing to the soil’s inward movement towards the first tunnel when zt/lp=1.00, a positive bending moment was induced at the lower part of the pile (0.7≤z/lp≤1.0). to counter-balance this induced positive bending moment, a negative bending moment developed along the upper part of the pile (0≤z/lp≤0.7). the magnitudes of the maximum induced positive and negative bending moments were 130knm.
the induced bending moment along the pile due to single tunneling measured in the centrifuge and reported in [9] was positive and smaller in magnitude than the computed one. this may be caused by the tunneling-induced volume loss, which was modeled as 1% in the centrifuge test. the subsequent tunneling at zt/lp=1.00 on the opposite side of the pile reduced the induced bending moment significantly. the maximum induced bending moment was 37knm at z/lp=0.6 after the twin tunnel excavation.

fig. 5. induced bending moment along the pile length

in contrast to the induced bending moment in the case of zt/lp=0.67, a negative bending moment was induced along the entire pile length due to the first tunnel advancement in the case of zt/lp=1.33. however, the magnitude of the maximum bending moment (100knm at z/lp=0.7) was less than the one in the case of zt/lp=0.67. also, no bending was induced at the pile toe and head. it can be seen that the bending moment induced during the first tunnel advancement in the case of tunneling below the pile toe (zt/lp=1.33) was less than that in the cases of tunneling near the mid-depth of the pile (zt/lp=0.67) and adjacent to the pile toe (zt/lp=1.00). therefore, in the case of zt/lp=1.33, the most critical issue to be considered is the relatively large settlement.

vi. discussion

it is well recognized that the stress-strain relationship of soils is highly nonlinear even at very small strains. the stiffness of most soils decreases as strain increases and depends on the recent stress or strain history of the soil. owing to this nonlinear soil behavior, a tunnel excavation can cause a reduction in the stiffness of the soil. therefore, it is vital to investigate the pile responses not only to the first tunnel but also to the subsequent tunnel in clay in a twin-tunneling transportation system. keeping these issues in consideration, the effects of side-by-side twin tunneling on a single pile in stiff clay were investigated in this study.
it was revealed that the second tunneling caused larger settlement and a smaller bending moment in the pile. this finding is the major contribution of this parametric study.

vii. conclusions

based on the modeled ground conditions, geometry, and tunneling method, the following conclusions can be drawn:
• due to the degradation of the stiffness of the clay surrounding the pile as a result of tunneling-induced stress release and shear strain, the second tunneling caused larger settlement than the first in each case. when zt/lp=0.67, the second tunneling-induced settlement was the largest (i.e. 175% of sp from the first tunnel) of all cases. the cumulative settlements of the pile resulting from the working load and twin tunneling in the cases of zt/lp=0.67, 1.00 and 1.33 were 25, 39, and 47mm (i.e. 3.1%, 4.9%, and 5.9% of the pile diameter), respectively.
• the first tunneling in the case of zt/lp=0.67 induced the largest bending moment (50% of the pile bending moment capacity) at the spring line of the tunnel. however, the induced bending moment along the pile decreased significantly due to the excavation of the second tunnel in each case, because of the side-by-side twin tunneling configuration in which the second tunnel caused a stress release on the opposite side of the pile.
• when zt/lp=0.67, the second tunneling-induced soil movement due to stress release mobilized negative shaft friction at the upper part of the pile. consequently, a downward load transfer was observed along the pile, further mobilizing the pile end-bearing. similarly, in the case of zt/lp=1.00, the load borne by the pile shaft was transferred to the pile toe as a result of the twin tunneling-induced reduction in the shaft resistance at the lower part of the pile (0.3≤z/lp≤0.9).

acknowledgement

the authors would like to acknowledge the financial support provided by the quaid-e-awam university of engineering, science & technology, sindh, pakistan.

references

[1] g.
gudehus, “a comprehensive constitutive equation for granular materials”, soils and foundations, vol. 36, no. 1, pp. 1-12, 1996
[2] k. ishihara, “liquefaction and flow failure during earthquakes”, geotechnique, vol. 43, no. 3, pp. 351-451, 1993
[3] c. w. w. ng, “the state-of-the-art centrifuge modelling of geotechnical problems at hkust”, journal of zhejiang university science a, vol. 15, pp. 1-21, 2014
[4] j. jaky, “the coefficient of earth pressure at rest”, journal of the society of hungarian architects and engineers, pp. 355-358, 1944
[5] k. h. chiang, c. j. lee, “responses of single piles to tunneling-induced soil movements in sandy ground”, canadian geotechnical journal, vol. 44, no. 10, pp. 1224-1241, 2007
[6] a. niemunis, i. herle, “hypoplastic model for cohesionless soils with elastic strain range”, mechanics of cohesive-frictional materials, vol. 2, no. 4, pp. 279-299, 1997
[7] c. w. w. ng, m. a. soomro, y. hong, “three-dimensional centrifuge modelling of pile group responses to side-by-side twin tunnelling”, tunnelling and underground space technology, vol. 43, pp. 350-361, 2014
[8] c. w. w. ng, h. lu, s. y. peng, “three-dimensional centrifuge modelling of the effects of twin tunnelling on an existing pile”, tunnelling and underground space technology, vol. 35, pp. 189-199, 2013
[9] n. loganathan, h. g. poulos, d. p.
stewart, “centrifuge model testing of tunnelling-induced ground and pile deformations”, geotechnique, vol. 50, no. 3, pp. 283-294, 2000
[10] abaqus user’s manual, version 6.8.2, dassault systemes, 2008
[11] d. masin, “hypoplastic cam-clay model”, geotechnique, vol. 62, no. 6, pp. 549-553, 2012
[12] a. m. marshall, r. j. mair, “tunneling beneath driven or jacked end-bearing piles in sand”, canadian geotechnical journal, vol. 48, no. 12, pp. 1757-1771, 2011
[13] s. w. jacobsz, j. r. standing, r. j. mair, t. hagiwara, t. sugiyama, “centrifuge modelling of tunnelling near driven piles”, soils and foundations, vol. 44, no. 1, pp. 49-56, 2004
[14] p. w. mayne, f. h. kulhawy, “k0-ocr relationships in soils”, journal of geotechnical engineering, asce, vol. 108, no. 6, pp. 851-872, 1982
[15] d. a. mangejo, m. a. soomro, n. mangi, i. a. halepoto, i. a. dahri, “a parametric study of effect on single pile integrity due to an adjacent excavation induced stress release in soft clay”, engineering, technology & applied science research, vol. 8, no. 4, pp. 3189-3193, 2018
[16] m. a. soomro, a. s. brohi, d. k. bangwar, s. a. bhatti, “3d numerical modelling of pile group responses to excavation-induced stress release in silty clay”, engineering, technology & applied science research, vol. 8, no. 1, pp. 2577-2584, 2018
[17] d. r. coutts, j. wang, “monitoring of reinforced concrete piles under horizontal and vertical loads due to tunnelling”, international conference on tunnels and underground structures, singapore, november 26-29, 2000

engineering, technology & applied science research vol. 9, no.
5, 2019, 4718-4723 4718 www.etasr.com khattara et al.: an efficient metaheuristic approach for the multi-period technician routing and … an efficient metaheuristic approach for the multiperiod technician routing and scheduling problem abouliakdane khattara electrical engineering department, ferhat abbas setif 1 university, setif, algeria yekdane87@univ-setif.dz wahiba ramdane cherif-khettaf loria, umr 7503, university of lorraine, nancy, france ramdanec@loria.fr mohammed mostefai electrical engineering department, ferhat abbes setif 1 university, setif, algeria mostefai@univ-setif.dz abstract—in this paper, we address a new variant of the multiperiod technician routing and scheduling problem. this problem is motivated by a real-life industrial application in a telecommunication company. it is defined by a set of technicians having distinct skills that could perform a set of geographically scattered tasks over a multi-period horizon. each task is subject to time constraints and must be done at most once over the horizon by one compatible technician. the objective is to minimize the total working time (composed by routing time, service time, and waiting time), the total cost engendered by the rejected tasks, and the total delay. two variants of variable neighborhood descent are proposed, and three variants of variable neighborhood search to solve this problem. computational experiments are conducted on benchmark instances from the literature. an analysis of the performance of the proposed local search procedures is given. the results show that our methods outperform the results of a mimetic method published in the literature. keywords-technician routing and scheduling problem (trsp); variable neighborhood search (vns); variable neighborhood descent (vnd) i. 
Introduction

The technician routing and scheduling problem (TRSP) is a recent challenge in logistics for the service sector, especially for utility companies in the energy (gas, electricity), telecommunications, and water distribution areas [1]. The TRSP consists of planning tasks assigned to commercial or technical personnel over a set of periods (days) in order to visit industrial facilities or customers for different types of activities: installation, inspection, repair, and maintenance. Until recently, TRSPs, in both their static and dynamic forms, had received limited attention, and the number of publications and scientific reports is small, although several variants of the TRSP have been studied in the literature. These variants can be divided into two classes: (i) the one-period TRSP and (ii) the multi-period TRSP. The one-period TRSP has been studied by authors who consider constraints such as skills, time windows, tools, spare parts, stochastic service and travel times, multiple depots, and customer priority [1-6]. For the multi-period TRSP, we can mention [7], which introduced the multi-period technician scheduling problem with experience-based service times and stochastic customers; the aim is to minimize the expected sum of each day's total service times over a finite horizon. Another multi-period TRSP was proposed in 2007 [8]: it consists in computing a schedule for technicians to perform a set of tasks over a five-day horizon. The routing aspect is not considered, and tasks have different proficiency skill-level requirements that call for a team of technicians. The authors of [10] studied the one-period variant of this problem, namely the service technician routing and scheduling problem, taking the routing aspect into consideration. The authors of [9] presented a multi-period technician routing problem faced by a water distribution and treatment company.
In [9], requests were divided into two categories (user-requested interventions and company-scheduled visits), and skill constraints were not included. In this paper, we study a new multi-period TRSP variant in which skill constraints and routing aspects are considered simultaneously, inspired by a realistic application in the telecommunication field. From the above survey, it appears that most papers on the TRSP consider several realistic constraints, but to the best of our knowledge, the multi-period variant of the TRSP with skill constraints and routing aspects has not been considered in the literature. The papers that consider a multi-period TRSP with skill constraints and routing aspects include other specific constraints, such as the technician-team constraint [10]. Our study is also an extension of the problem studied in [9, 11, 12], in which the skill constraints are ignored. As the considered problem is NP-hard and results from the combination of complex constraints, large instances can hardly be solved by exact methods, so metaheuristic approaches are the most practical way to tackle it. We chose variable neighborhood search (VNS) because its effectiveness has been proven on a number of variants of the vehicle routing problem (VRP), such as the VRP with time windows [13], the VRP with multiple depots and time windows [14, 15], the periodic VRP [16], and the workforce scheduling and routing problem [17]. In this paper, we propose two variants of variable neighborhood descent, as well as three variants of variable neighborhood search, to solve the TRSP with skill constraints and routing aspects. Corresponding author: Abouliakdane Khattara
II. Problem Description

We consider a multi-period horizon H of several days (typically one week). For each day h ∈ H, a set of technicians K with different skills is available (a technician has one skill or more). Each technician k ∈ K has a known starting and ending location d ∈ D, which corresponds to the technician's home or office (the starting location is the same as the ending location). Each technician has a working-time limit per day, maxtime_k,h. Requests belong to two categories: non-urgent tasks (NT) generated by the company, and urgent tasks (UT) formulated by customers through a call center for emergency reasons. Note that UT ∪ NT = T, with T the set of all tasks, known in advance. Let s_i be the service time of task i. The urgent tasks i ∈ UT are planned on a fixed day h_i and are subject to customer appointments within a given time window (b_i, e_i), where b_i is the beginning and e_i the end of the time window. Task i can be assigned to technician k only if the arrival time at task i (denoted a_ik) falls before the end of the time window (a_ik ≤ e_i); if service finishes after e_i, a delay l_ik occurs, with l_ik = (a_ik + s_i) − e_i. Non-urgent tasks are characterized by a validity period of one or several days [hb_i, he_i] ⊆ H, where hb_i is the earliest day and he_i the deadline for the execution of i. A request i requires certain skills (qualifications) and must be executed by exactly one compatible technician. The goal is to build a set of routes per day and per technician (at most |K_h| routes per day). Each route r_hk is a sequence of tasks assigned to a single technician k and a single day h.
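The time-window bookkeeping just described (arrival time a_ik, waiting before b_i, delay l_ik past e_i) can be sketched as follows; this is an illustrative fragment with hypothetical names, not the paper's MATLAB implementation:

```python
# Per-task timing rules: a technician arriving at time a_ik waits if early,
# and a delay l_ik is incurred when service ends after the window closes.

def timing(a_ik: float, s_i: float, b_i: float, e_i: float):
    """Return (start_time, waiting, delay) for serving task i."""
    wait = max(0.0, b_i - a_ik)      # arriving before the window opens
    start = a_ik + wait
    finish = start + s_i
    delay = max(0.0, finish - e_i)   # l_ik = (a_ik + s_i) - e_i when positive
    return start, wait, delay

# Example: window (9.0, 10.0), service time 0.5 h, arrival at 8.5
print(timing(a_ik=8.5, s_i=0.5, b_i=9.0, e_i=10.0))  # -> (9.0, 0.5, 0.0)
```

A late arrival, e.g. `timing(9.8, 0.5, 9.0, 10.0)`, produces no waiting but a positive delay of about 0.3.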
The following constraints must be satisfied: 1) each task must be executed at most once, within its validity period or time window; 2) the total time of each route r_hk must not exceed maxtime_k,h; 3) the competence requirements must be respected; 4) each route must start and end at the same location d ∈ D. The objective function f(x), measured in monetary units, minimizes three costs: (i) the total working time, composed of the routing time (which depends on the number of kilometers travelled by each technician), the service time, and the waiting time; (ii) the total cost incurred by rejected tasks; and (iii) the total delay.

III. Solution Methodology

In this section, we describe the general framework of variable neighborhood search (VNS), and then present the basic components of the VNS that we developed to solve our problem.

A. Variable Neighborhood Search

VNS is a metaheuristic framework created in 1997 [18, 19] for approximately solving optimization problems, including combinatorial and non-linear continuous optimization problems [20, 21]. VNS is based on systematic changes of neighborhood structures during the search for a (near-)optimal solution of the considered problem. These changes occur both in the descent phase, to improve the solution, and in the shaking (perturbation) phase, which aims to escape local-optimum traps. The main structure of VNS (Algorithm 1) is shown in Figure 1.

Fig. 1. Algorithm 1: Variable neighborhood search (VNS)

The inputs of the VNS heuristic are x, k_max, and t_max, which represent the initial solution, the number of neighborhoods to be explored, and the maximum allowed CPU time, respectively. The main ingredients of VNS are an improvement procedure, used to improve the current solution, and a shaking procedure, used to perturb the search and escape from the current valley (see lines 3 and 4).
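The loop structure of Algorithm 1 can be sketched generically, with a basic sequential VND (introduced in Section III.D below) standing in for the improvement step; `shake`, the neighborhood list, and `f` are problem-specific placeholders, not the authors' MATLAB code:

```python
import time

def b_vnd(x, neighborhoods, f):
    """Basic sequential VND: on improvement, restart from the first
    neighborhood in the ordered list; otherwise move to the next one."""
    l = 0
    while l < len(neighborhoods):
        x1 = neighborhoods[l](x)
        if f(x1) < f(x):
            x, l = x1, 0          # improvement: back to first neighborhood
        else:
            l += 1                # no improvement: next neighborhood
    return x

def vns(x, k_max, t_max, shake, neighborhoods, f):
    """x: initial solution, k_max: number of shaking neighborhoods,
    t_max: CPU-time budget in seconds."""
    start = time.time()
    while time.time() - start < t_max:
        k = 1
        while k <= k_max and time.time() - start < t_max:
            x1 = shake(x, k)                  # line 3: random point in N_k(x)
            x2 = b_vnd(x1, neighborhoods, f)  # lines 4-5: improvement phase
            if f(x2) < f(x):
                x, k = x2, 1                  # improvement: restart from N_1
            else:
                k += 1                        # otherwise: next neighborhood
    return x
```

The same skeleton accommodates a single local search or a U-VND in place of `b_vnd`, which is exactly the design space explored in Section IV.D.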
The improvement procedure in line 5 can be a single local search or an ordered list of neighborhoods.

B. Initial Solution

As initial solution we use a best-insertion method with a sorted list, performed in two steps. In the first step, the list of unserved tasks (L, with L = T) is sorted in increasing order of validity duration (VD), i.e. the length of the period (number of days) in which a task can be done, vd_i = he_i − hb_i. In the second step, the algorithm selects a task i from the head of L and scans all feasible insertions in all routes r_hk. The insertion cost of i in a route r_hk between two tasks x and y, denoted δ(i, r_hk, x, y), is calculated as in (1), where w_jk and l_jk are the waiting time and delay of task j served by technician k, and the algorithm performs the best (cheapest) insertion:

δ(i, r_hk, x, y) = c_xi + c_iy − c_xy + Σ_{j ∈ r_hk ∪ {i}} w_jk + Σ_{j ∈ r_hk ∪ {i}} l_jk    (1)

C. Local Search Procedures

We propose five local search operators, used either individually or together to focus the search in the inner loop of VNS. We consider three intra-route and two inter-route local search methods, each using the best-improvement strategy:

• One intra-route relocate: one node (task) is removed from the route and reinserted in another position in the same route.
• One intra-route exchange: two nodes (tasks) are exchanged within the same route.
• 2-opt: two arcs are removed and the route is reconnected within the same route.
• One inter-route relocate: one node (task) is removed from its route and reinserted in another route of the solution.
• One inter-route exchange: two nodes (tasks) are exchanged between two different routes.

D.
Variable Neighborhood Descent Procedures

Variable neighborhood descent (VND) procedures explore several neighborhood structures, either sequentially or in a nested (composite) fashion, to improve a given solution [21]: a solution that is a local optimum with respect to several neighborhood structures is more likely to be a global optimum than a solution that is a local optimum for just one neighborhood structure. The order of the neighborhoods may play an important role in the quality of the final solution [22]. Two variants of VND are discussed in this paper, differing in the decision made in the neighborhood-change procedure when an improvement has been detected in some neighborhood: (1) basic VND (B-VND) returns to the first neighborhood in the list; (2) union VND (U-VND) uses, at each iteration, all the neighborhoods in the list to explore the search space, and the next incumbent solution is the best one found over all neighborhoods. The outline of basic VND is presented in Algorithm 2 (Figure 2). The steps of the sequential neighborhood change, used in line 5 of Algorithm 1 and line 7 of Algorithm 2, are given in Algorithm 3 (Figure 3). If an improvement of the incumbent solution occurs in some neighborhood structure, the search resumes in the first neighborhood structure (according to the defined order) of the new incumbent solution; otherwise the search continues in the next neighborhood (according to the defined order).

Fig. 2. Algorithm 2: Variable neighborhood descent (VND)
Fig. 3. Algorithm 3: Neighborhood change procedure

E. Shaking Procedure

The shaking procedure is used in VNS, as mentioned in line 3 of Algorithm 1, to escape local-minimum traps.
Our shaking procedure consists in selecting a random solution from the k-th neighborhood structure, N_k(x).

IV. Computational Results

All algorithms were implemented in MATLAB, and all tests were carried out on a MacBook Pro with an Intel Core i7-3520M CPU at 2.90 GHz and 8 GB of memory (the algorithms use only one CPU core). Our problem is an extension of the problem studied in [9, 11, 12], in which the skill constraints are ignored. We therefore use the data instances of [9, 11, 12] and evaluate the performance of our methods against their memetic algorithm. We first compare the performance of the different local search procedures against the initial solution; different VND procedures are then tested and compared; finally, we compare and evaluate the VNS procedures proposed in this paper.

A. Description of the Experimental Data Sets

To evaluate and assess the performance of the proposed approaches, we compare them with the methods proposed by Tricoire in [9, 11, 12], whose instances are used for the tests. For this purpose, the skill constraints of our problem are relaxed and lunch-break constraints are added. The instances are inspired by a real-life case and are available, with detailed experimental results, in [9] as well as on the web site http://www.emn.fr/z-auto/routing-pbs/. All instances have a five-day planning horizon and three technicians available every day. The demands are randomly distributed over a 40 km² map, and Euclidean distances are used. The technicians drive at a constant average speed of 35 km/h. Two instance sizes are tested, C1 with 100 customers and C2 with 180 customers, each with 5 variants according to the distribution and percentage of time windows and the percentage of urgent and non-urgent tasks.

B. Evaluation of the Performance of the Local Search Procedures

We study the impact of the local search procedures. The results are shown in Table I. The first column indicates the name of each instance.
Column 2 presents the objective value of the initial solution, which is based on the best-insertion strategy. The remaining columns provide the gap and the computing time for each local search operator; the gap is calculated by (2). The "Avg" row gives the average results over all instances, and the ranks of the local search operators according to solution quality and computing time are provided in the last two rows.

Gap% = (f(x)_heuristic − f(x)_heuristic+LS) / f(x)_heuristic+LS    (2)

From Table I, we note that the 2-opt operator is the best one on almost all instances, although it is only third in terms of computing time. It is also worth noting that all the operators perform well: on average, they improve the results of the construction heuristic by at least 7.14% and at most 9.66%.

C. Variable Neighborhood Descent Procedures

The aim of this section is to evaluate and compare the variants of the VND procedure according to the manner in which the neighborhood is changed after each improvement. Since the order of the neighborhoods in the list affects the performance of VND procedures [22], we consider two possible orderings of the local search procedures: 1) by the value of f(x), and 2) by computing time. The orders used are given in Table II. The results of the VND procedures on the Tricoire instances are summarized in Table III, where the settings of each VND variant are given in the column and row headings. For example, in Table III, the two cells at the intersection of row c100_1 and the 4th column give the value achieved by B-VND exploring neighborhoods in the 1st order, together with its execution time in seconds. The next column reports the percentage deviation of the obtained solution from the best solution of the memetic algorithm
proposed by Tricoire [9, 11, 12]. The deviations are calculated by:

Dev% = (f(x)_VND − f(x)_memetic) / f(x)_memetic    (3)

The following column reports the percentage deviation of the obtained solution from the initial solution, calculated by (2). Table III reports the results obtained by B-VND and U-VND using the two proposed orders; the average results are given in the last two rows, and values marked with a star (*) represent new best solutions obtained by our method.

TABLE I. COMPARISON BETWEEN LOCAL SEARCH PROCEDURES (gap / time in s)

Instance | f(x) heuristic | 2-opt          | Intra-route relocate | Intra-route exchange | Inter-route exchange | Inter-route relocate
c1_1     | 21024.64       | 14.04% / 0.40  | 12.74% / 0.47        | 12.79% / 0.47        |  0.00% / 0.29        |  3.74% / 5.79
c1_2     | 19347.33       |  6.15% / 0.40  |  4.63% / 0.45        |  4.65% / 0.41        |  3.11% / 5.49        |  5.30% / 4.31
c1_3     | 19658.79       |  8.68% / 0.39  |  6.55% / 0.43        |  5.74% / 0.36        |  0.72% / 1.53        |  4.29% / 3.45
c1_4     | 22232.21       |  9.86% / 0.40  |  7.29% / 0.29        |  7.32% / 0.35        |  7.13% / 5.26        |  6.10% / 2.96
c1_5     | 18219.65       | -2.63% / 0.19  | -2.75% / 0.28        | -1.84% / 0.24        | -2.46% / 2.58        |  8.58% / 7.05
c2_1     | 39085.88       | 10.03% / 1.76  |  7.04% / 1.45        |  7.07% / 1.96        | 15.04% / 44.40       |  8.27% / 30.95
c2_2     | 34873.74       | 12.22% / 1.76  |  6.69% / 1.62        |  5.89% / 1.34        |  4.29% / 22.04       |  6.47% / 20.96
c2_3     | 36349.52       | 13.70% / 1.70  | 13.22% / 1.75        | 11.36% / 1.52        | 13.09% / 39.18       | 15.12% / 24.57
c2_4     | 36679.56       |  8.59% / 1.46  |  7.32% / 1.95        |  6.00% / 1.64        |  7.72% / 29.95       |  7.97% / 25.62
c2_5     | 33700.93       | 11.00% / 1.74  |  8.42% / 1.27        |  9.58% / 1.46        | 12.25% / 28.76       |  6.23% / 19.96
Avg      | 28117.23       |  9.66% / 1.02  |  7.47% / 0.99        |  7.14% / 0.98        |  7.17% / 17.95       |  7.58% / 14.56
Rank (f(x)) |             | 1              | 3                    | 5                    | 4                    | 2
Rank (time) |             | 3              | 2                    | 1                    | 5                    | 4

TABLE II.
ORDERS OF THE LOCAL SEARCH PROCEDURES

Local search operator  | 1st order | 2nd order
2-opt                  | 1         | 3
Intra-route relocate   | 3         | 2
Intra-route exchange   | 5         | 1
Inter-route exchange   | 4         | 5
Inter-route relocate   | 2         | 4

From the results presented in Table III, we may conclude the following. The VND variants that explore the neighborhoods in the 1st order offer the best results, in both objective function and CPU time, compared with the 2nd order. Considering the average results over all test instances, the best averages are obtained by U-VND under either neighborhood order, so U-VND is more effective than B-VND in terms of both objective function and CPU time. The averages also show that all the VND procedures implemented and discussed in this paper are competitive and perform better than the memetic algorithm on the same problem; for 6 instances out of 10, a new best solution is found by our method.

TABLE III.
EVALUATION OF DIFFERENT VARIANTS OF VND (value; % dev. vs. memetic / % dev. vs. heuristic; time in s; * = new best solution)

Instance | Memetic of Tricoire | B-VND, 1st order            | B-VND, 2nd order            | U-VND, 1st order            | U-VND, 2nd order
c100_1   | 17893.91 | 17578.37* (-1.76% / 19.61%) | 17594.18 (-1.68% / 19.50%)  | 17578.37* (-1.76% / 19.61%) | 17594.03 (-1.68% / 19.50%)
  time   |          | 9.92   | 11.42  | 8.97   | 13.28
c100_2   | 15977.12 | 17202.92 (7.67% / 12.47%)   | 17136.03 (7.25% / 12.90%)   | 17153.67 (7.36% / 12.79%)   | 17164.61 (7.43% / 12.72%)
  time   |          | 9.52   | 9.58   | 8.06   | 8.73
c100_3   | 16714.03 | 17493.50 (4.66% / 12.38%)   | 17529.38 (4.88% / 12.15%)   | 17491.94 (4.65% / 12.39%)   | 17538.11 (4.93% / 12.09%)
  time   |          | 7.32   | 5.93   | 5.03   | 6.74
c100_4   | 17489.36 | 18285.77 (4.55% / 21.58%)   | 18265.73 (4.44% / 21.72%)   | 18229.85 (4.23% / 21.95%)   | 18031.33 (3.10% / 23.30%)
  time   |          | 11.37  | 12.68  | 9.50   | 14.03
c100_5   | 16025.91 | 16535.47 (3.18% / 10.19%)   | 16611.10 (3.65% / 9.68%)    | 16535.47 (3.18% / 10.19%)   | 16364.41 (2.11% / 11.34%)
  time   |          | 9.83   | 9.78   | 9.91   | 15.47
c180_1   | 28945.36 | 28607.43 (-1.17% / 36.63%)  | 29299.56 (1.22% / 33.40%)   | 28405.93* (-1.86% / 37.60%) | 28579.51 (-1.26% / 36.76%)
  time   |          | 113.28 | 107.17 | 84.19  | 96.87
c180_2   | 31191.12 | 28156.24 (-9.73% / 23.86%)  | 27780.88 (-10.93% / 25.53%) | 27748.30 (-11.04% / 25.68%) | 27729.19* (-11.10% / 25.77%)
  time   |          | 66.39  | 72.82  | 58.22  | 46.09
c180_3   | 27728.44 | 26464.43 (-4.56% / 37.35%)  | 27472.29 (-0.92% / 32.31%)  | 26034.96* (-6.11% / 39.62%) | 26886.39 (-3.04% / 35.20%)
  time   |          | 85.99  | 89.47  | 78.46  | 77.85
c180_4   | 30245.61 | 29348.92 (-2.96% / 24.98%)  | 29522.29 (-2.39% / 24.24%)  | 30124.57 (-0.40% / 21.76%)  | 29238.94* (-3.33% / 25.45%)
  time   |          | 57.23  | 76.22  | 52.22  | 61.71
c180_5   | 28158.57 | 26880.26 (-4.54% / 25.37%)  | 26566.25 (-5.65% / 26.86%)  | 26395.74* (-6.26% / 27.68%) | 26625.00 (-5.45% / 26.58%)
  time   |          | 78.74  | 93.93  | 70.71  | 60.92
Average  | 23036.94 | 22655.33 (-1.66% / 24.11%)  | 22777.77 (-1.13% / 23.44%)  | 22569.88 (-2.03% / 24.58%)  | 22575.15 (-2.00% / 24.55%)
  time   |          | 44.96  | 48.90  | 38.53  | 40.17
It is worth noting that all the VND procedures also perform well: on average, they improve the results of the construction heuristic by at least 23.44% and at most 24.58%, i.e. the VND procedures improve the solution on average about 15% more than the single local search operators (Table I).

D. Variable Neighborhood Search Procedures

In this section we evaluate and compare three variants of the VNS procedure, which differ in the improvement procedure of the inner loop: (1) B-VNS, which at each iteration uses a single local search in the inner loop and moves to the next one, as in Algorithms 1 and 3; (2) VNS_B-VND, which uses a B-VND procedure; and (3) VNS_U-VND, which uses a U-VND in the improvement phase. The neighborhoods of the VND procedures are arranged in the 1st order. The performance of the VNS procedures has been tested on class C1 of the instances described above, using 4 different time limits ranging from 360 s to 1440 s. For each instance and each VNS variant, the algorithm was run 5 times. The results are summarized in Table IV and Figure 4.

TABLE IV. EVALUATION OF DIFFERENT VARIANTS OF VNS (average over class C1; memetic of Tricoire: 16820.07)

Time limit (s) | B-VNS % dev_best | B-VNS % dev_avg | VNS_B-VND % dev_best | VNS_B-VND % dev_avg | VNS_U-VND % dev_best | VNS_U-VND % dev_avg
360            | -0.27% | 1.56% | -1.04% |  0.18% | -1.25% |  0.14%
720            | -0.56% | 1.06% | -1.59% | -0.37% | -1.87% | -0.27%
1080           | -0.56% | 0.99% | -1.69% | -0.53% | -2.11% | -0.47%
1440           | -0.57% | 0.89% | -1.96% | -0.66% | -2.19% | -0.60%
Average        | -0.49% | 1.12% | -1.57% | -0.34% | -1.85% | -0.30%

For each time limit and each VNS variant, we report the deviation of the best solution found over 5 runs on all class-C1 instances from the best solution found by the memetic algorithm of [9, 11, 12] (% dev_best in Table IV).
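The two comparison metrics of (2) and (3) can be computed directly from the objective values. The helpers below (hypothetical names) return percentages, i.e. the fractions of (2) and (3) scaled by 100, and the sanity check reproduces one Table III entry:

```python
# Gap% (2): construction heuristic vs. heuristic + local search.
def gap_pct(f_heuristic: float, f_heuristic_ls: float) -> float:
    return 100.0 * (f_heuristic - f_heuristic_ls) / f_heuristic_ls

# Dev% (3): our VND/VNS solution vs. the memetic-algorithm solution
# (negative values mean we improved on the memetic result).
def dev_pct(f_ours: float, f_memetic: float) -> float:
    return 100.0 * (f_ours - f_memetic) / f_memetic

# Sanity check against Table III (c100_1, U-VND, 1st order):
print(round(dev_pct(17578.37, 17893.91), 2))  # -> -1.76
```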
We also report the deviation of the average solution value over the 5 runs on all C1 instances from the best solution found by the memetic algorithm (% dev_avg in Table IV). The deviations are calculated by (3).

Fig. 4. Comparison between VNS procedures

From the results presented in Table IV and Figure 4, we may draw the following conclusions. First, the performance of the VNS procedures depends on the time limit: the longer the allowed time, the better the results. All VNS variants outperform the memetic algorithm, even when our VNS methods are stopped at 360 s. The collaboration of all local search procedures is more beneficial than the use of a single local search in the inner loop of the VNS procedure. Considering the average results over all C1 instances, the best averages are obtained by VNS_U-VND, which confirms the findings of the previous section.

V. Conclusion and Perspectives

In this paper, we considered a new variant of the multi-period technician routing and scheduling problem, motivated by a real-life industrial application in a telecommunication company. To solve the problem, two variants of variable neighborhood descent, B-VND and U-VND, as well as three variants of variable neighborhood search, B-VNS, VNS_B-VND and VNS_U-VND, were proposed. All heuristic methods were tested and compared with the methods proposed by Tricoire [9, 11, 12], and the results confirm the effectiveness of our methods. Regarding future work, we will generate other instances to intensify the experimentation, and we will consider the dynamic aspect, where demands appear dynamically over the planning horizon.

References

[1] V. Pillac, C. Gueret, A. L. Medaglia, "A parallel matheuristic for the technician routing and scheduling problem", Optimization Letters, Vol. 7, No. 7, pp. 1525-1535, 2013
[2] J. Xu, S. Y.
Chiu, "Effective heuristic procedures for a field technician scheduling problem", Journal of Heuristics, Vol. 7, No. 5, pp. 495-509, 2001
[3] E. Hadjiconstantinou, D. Roberts, "Routing under uncertainty: an application in the scheduling of field service engineers", in: The Vehicle Routing Problem, pp. 331-352, Society for Industrial and Applied Mathematics, 2001
[4] E. Delage, Re-optimization of Technician Tours in Dynamic Environments with Stochastic Service Time, Ecole des Mines de Nantes, 2010
[5] C. E. Cortes, F. Ordonez, S. Souyris, A. Weintraub, "Routing technicians under stochastic service times: a robust optimization approach", TRISTAN VI: The Sixth Triennial Symposium on Transportation Analysis, Phuket Island, Thailand, June 10-15, 2007
[6] V. Pillac, C. Gueret, A. Medaglia, "On the dynamic technician routing and scheduling problem", 5th International Workshop on Freight Transportation and Logistics, Mykonos, Greece, May 21-25, 2012
[7] X. Chen, B. W. Thomas, M. Hewitt, "Multi-period technician scheduling with experience-based service times and stochastic customers", Computers and Operations Research, Vol. 82, pp. 1-14, 2017
[8] W. Jaskowski, ROADEF Challenge 2007: Technicians and Interventions Scheduling for Telecommunications, Poznan University of Technology, 2007
[9] F. Tricoire, Optimisation de Tournees de Vehicules et de Personnels de Maintenance: Application a la Distribution et au Traitement des Eaux, PhD Thesis, Universite de Nantes, 2006 (in French)
[10] A. A. Kovacs, S. N. Parragh, K. F. Doerner, R. F. Hartl, "Adaptive large neighborhood search for service technician routing and scheduling problems", Journal of Scheduling, Vol. 15, No. 5, pp. 579-600, 2012
[11] N. Bostel, P. Dejax, P. Guez, F.
Tricoire, "Multiperiod planning and routing on a rolling horizon for field force optimization logistics", in: The Vehicle Routing Problem: Latest Advances and New Challenges, pp. 503-525, Springer, 2008
[12] F. Tricoire, N. Bostel, P. Dejax, P. Guez, "Exact and hybrid methods for the multiperiod field service routing problem", Central European Journal of Operations Research, Vol. 21, No. 2, pp. 359-377, 2013
[13] O. Braysy, "A reactive variable neighborhood search for the vehicle routing problem with time windows", INFORMS Journal on Computing, Vol. 15, No. 4, pp. 347-368, 2003
[14] M. Polacek, R. F. Hartl, K. Doerner, M. Reimann, "A variable neighborhood search for the multi depot vehicle routing problem with time windows", Journal of Heuristics, Vol. 10, No. 6, pp. 613-627, 2004
[15] S. Salhi, A. Imran, N. A. Wassan, "The multi-depot vehicle routing problem with heterogeneous vehicle fleet: formulation and a variable neighborhood search implementation", Computers & Operations Research, Vol. 52B, pp. 315-325, 2014
[16] M. Elbek, S. Wohlk, "A variable neighborhood search for the multi-period collection of recyclable materials", European Journal of Operational Research, Vol. 249, No. 2, pp. 540-550, 2016
[17] R. L. Pinheiro, D. Landa-Silva, J. Atkin, "A variable neighbourhood search for the workforce scheduling and routing problem", in: Advances in Nature and Biologically Inspired Computing, pp. 247-259, Springer, 2016
[18] N. Mladenovic, P. Hansen, "Variable neighborhood search", Computers & Operations Research, Vol. 24, No. 11, pp. 1097-1100, 1997
[19] P. Hansen, N. Mladenovic, "Variable neighborhood search: principles and applications", European Journal of Operational Research, Vol. 130, No. 3, pp. 449-467, 2001
[20] P. Hansen, N. Mladenovic, J. A. M. Perez, "Variable neighbourhood search: methods and applications", Annals of Operations Research, Vol. 175, No. 1, pp. 367-407, 2010
[21] P. Hansen, N. Mladenovic, R. Todosijevic, S.
Hanafi, "Variable neighborhood search: basics and variants", EURO Journal on Computational Optimization, Vol. 5, No. 3, pp. 1-32, 2016
[22] A. Mjirda, R. Todosijevic, S. Hanafi, P. Hansen, N. Mladenovic, "Sequential variable neighborhood descent variants: an empirical study on the traveling salesman problem", International Transactions in Operational Research, Vol. 24, No. 3, pp. 615-633, 2016

Engineering, Technology & Applied Science Research Vol. 8, No. 5, 2018, pp. 3360-3365 www.etasr.com

A Fuzzy Control Chart Approach for Attributes and Variables

Nilufer Pekin Alakoc, College of Engineering and Technology, American University of the Middle East, Kuwait (nilufer.alakoc@aum.edu.kw)
Aysen Apaydin, Department of Insurance and Actuary Sciences, Ankara University, Ankara, Turkey (aapaydin@ankara.edu.tr)

Abstract—The purpose of this study is to present a new approach for fuzzy control charts. The procedure is based on the fundamentals of Shewhart control charts and fuzzy theory. The proposed approach is developed in such a way that it can be applied to a wide variety of processes: the type of the fuzzy control chart is not restricted to variables or attributes, and the approach can easily be modified for different processes and types of fuzzy numbers with the evaluation or judgment of the decision maker(s). To present the procedure in detail, the approach is designed for a fuzzy c quality control chart and an example of the chart is explained. Moreover, the performance of the fuzzy c chart is investigated and compared with that of the Shewhart c chart. Simulation results show that the proposed approach performs better and can detect process shifts efficiently.

Keywords—fuzzy set theory; statistical process control; fuzzy control charts; average run length

I.
Introduction

Recently, quality improvement has become a main interest of firms all over the world. Achieving better standards brings several benefits: increases in revenue, productivity, customer satisfaction, and market share. Statistical methods such as design of experiments, hypothesis testing, and statistical process control play an important role in quality improvement. The primary tool of statistical process control is the quality control chart, first introduced by Walter A. Shewhart [1]. A control chart provides information on changes of the process mean and variance so that corrective actions can be undertaken as early as possible, which reduces variability and improves productivity and quality. On the other hand, statistical process control problems, like most real-world systems, involve uncertainty. If there is uncertainty in the process, or if quality characteristics are described through human subjectivity, then the process cannot be defined accurately by Shewhart control charts; fuzzy set theory is therefore used to explain and model such problems. The idea of fuzzy set theory was first introduced in [2], and the theory has since been used to define and model systems in many procedures, approaches, and fields. Applications of fuzzy theory to statistical process control have attracted wide attention, resulting in a new perspective on quality improvement. Many studies [3-10] focused mainly on adapting linguistic terms such as perfect, good, medium, etc. to control charts. In [3, 4], the authors proposed two fuzzy control chart approaches: the probabilistic approach and the membership approach. As an extension of these studies, a new approach for considering linguistic terms to express process outcomes was introduced [5, 6]. In another study on linguistic data [7], fuzzy control chart generation procedures were compared. In [8], the authors studied linguistic terms and suggested fuzzy approaches for attributes.
recently, an approach called the transition probability approach, based on markov chain theory, was developed [9]. attribute control charts were discussed in [10, 11] and reviewed in [12]. a fuzzy approach for the determination of the variable sampling interval was developed with a composition function [13]. in [14], the authors suggested a fuzzy control chart based on fuzzy regression analysis with a neural network and degree of fuzziness, and generated the fuzzy data by combining experts' opinions and measurements. in another study, a direct fuzzy approach (dfa) was introduced for c control charts without using any defuzzification method [15]. studies on fuzzy statistical quality control were reviewed in [16, 17] and the open fields and challenges for future work were discussed briefly. in [18-21], the authors studied fuzzy approaches for attribute control charts and emphasized the critical role of fuzzy data. a fuzzy c chart monitored with the weighted possibilistic mean and the weighted interval valued possibilistic mean of fuzzy numbers was introduced in [22]. there are many studies on monitoring variable control charts for uncertain observations. the earliest fuzzy control charting approach for variables is based on plotting control charts by considering uncertain process parameters for both variables and attributes [23]. in [24], the authors developed a fuzzy chart with the pearson goodness of fit statistic, which includes a warning line besides its upper control limit. different procedures for variable control charts, in which the shewhart control limits are modified for uncertainty and randomness, were proposed in [25-27]. contributions to fuzzy process control from a different point of view were also proposed, e.g. fuzzy ewma and cusum control charts [28], and the adaptation of run rules and recognition of unnatural patterns of fuzzy control charts [29-31]. the authors in [32] investigated fuzzy multivariate control charts, [33] a nonparametric shewhart control chart for fuzzy data, and [34] control charts for autocorrelated fuzzy observations.

this paper introduces a new approach for constructing fuzzy control charts. the approach is proposed as an extension of shewhart control charts and differs from the previous studies in its flexible assumptions, which do not restrict the type of the chart or the application area. the charting procedure is based on the membership degrees of fuzzy numbers, the pattern of α-cut fuzzy numbers and the fuzzy limits on the chart. the out of control condition is determined by a decision function which is developed by considering the distribution of membership degrees, the probability of type i error and 3σ control limits. a simulation study is performed to demonstrate the performance of the approach.

ii. design of fuzzy control chart

a. the procedure

the approach is flexible and not restricted to a specified type of control chart, because no assumptions about the type of quality characteristic or its distribution are required. for this reason, the approach can be modified easily for control charts with different purposes. during the development process, several different fuzzy control charts were plotted and different applications were examined. a sensitivity analysis was performed based on the applications, and the effects of the fuzzy numbers and the parameters of the membership function were investigated. if the measurements and control limits of an approach are fuzzy numbers, then the most essential part of constructing a fuzzy control chart is to define the intersection situations of the fuzzy limits and numbers. if all the fuzzy numbers are between the fuzzy limits, then the process is in control and the membership degree of being in control is 1.
similarly, if any fuzzy number is completely out of the fuzzy control limits, then the process is out of control and the membership degree is 0. however, it is not easy to define the process state if any fuzzy number intersects with the fuzzy control limits. the approach classifies these situations by a formula and assigns a membership degree to each fuzzy number in such a way that the importance of having a fuzzy number between the fuzzy limits and the importance of intersecting with the fuzzy limits are differentiated from the perspective of the decision maker. the proposed fuzzy control chart plots measurements of a quality characteristic in the form of α-cut fuzzy numbers. the fuzzy control chart represents the fuzzy control limits and center line by two parallel lines which show the upper and lower values of the α-cut fuzzy limits; α-cut fuzzy numbers are illustrated with lines perpendicular to the limits. let ã be a trapezoidal fuzzy number, ã = (a, b, c, d); then the α-cut trapezoidal fuzzy number is the interval

ã^α = [a + α(b − a), d − α(d − c)], 0 ≤ α ≤ 1

if the measurements are denoted by trapezoidal fuzzy numbers, then the fuzzy control limits, lcl and ucl, and the fuzzy center line, cl, of the fuzzy x̄ and r charts are defined as:

fuzzy x̄ chart:
lcl = (x̄_a − A2 r̄_d, x̄_b − A2 r̄_c, x̄_c − A2 r̄_b, x̄_d − A2 r̄_a)
cl = (x̄_a, x̄_b, x̄_c, x̄_d)
ucl = (x̄_a + A2 r̄_a, x̄_b + A2 r̄_b, x̄_c + A2 r̄_c, x̄_d + A2 r̄_d)

fuzzy r chart:
lcl = (D3 r̄_a, D3 r̄_b, D3 r̄_c, D3 r̄_d)
cl = (r̄_a, r̄_b, r̄_c, r̄_d)
ucl = (D4 r̄_a, D4 r̄_b, D4 r̄_c, D4 r̄_d)

where x̄_j = Σ_i x_ij / n and r̄_j is the mean of the sample ranges r_i = max x_i − min x_i, j = a, b, c, d, i = 1, …, n, and A2, D3 and D4 are the usual shewhart control chart constants. similarly, the fuzzy control limits and the center line of the fuzzy c control chart are calculated by:

lcl = (c̄_a − 3√c̄_d, c̄_b − 3√c̄_c, c̄_c − 3√c̄_b, c̄_d − 3√c̄_a)
cl = (c̄_a, c̄_b, c̄_c, c̄_d)
ucl = (c̄_a + 3√c̄_a, c̄_b + 3√c̄_b, c̄_c + 3√c̄_c, c̄_d + 3√c̄_d)

triangular fuzzy numbers are special cases of trapezoidal fuzzy numbers; a trapezoidal fuzzy number reduces to a triangular one when b = c. let ã be a triangular fuzzy number, denoted by ã = (a, b, c); then the α-cut triangular fuzzy number is the interval ã^α = [a + α(b − a), c − α(c − b)].
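the α-cut intervals above can be computed directly. a minimal sketch (the function names are illustrative); the check values are the α = 0.6 cut of the fuzzy center line cl = (18.317, 27.567, 36.817) used in the application of section iii:

```python
def alpha_cut_trapezoid(a, b, c, d, alpha):
    # alpha-cut of the trapezoidal fuzzy number (a, b, c, d):
    # [a + alpha*(b - a), d - alpha*(d - c)]
    return (a + alpha * (b - a), d - alpha * (d - c))

def alpha_cut_triangle(a, b, c, alpha):
    # a triangular number (a, b, c) is the trapezoid (a, b, b, c)
    return alpha_cut_trapezoid(a, b, b, c, alpha)

# fuzzy center line of the c chart application, cut at alpha = 0.6
lo, hi = alpha_cut_triangle(18.317, 27.567, 36.817, 0.6)
print(round(lo, 3), round(hi, 3))  # 23.867 31.267
```

the same helper yields the α-cut control limit intervals when applied to the fuzzy limits.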
so the fuzzy control limits of the c chart, lcl and ucl, are triangular fuzzy numbers:

lcl = (c̄_a − 3√c̄_c, c̄_b − 3√c̄_b, c̄_c − 3√c̄_a)
cl = (c̄_a, c̄_b, c̄_c)
ucl = (c̄_a + 3√c̄_a, c̄_b + 3√c̄_b, c̄_c + 3√c̄_c)

the approach is not limited to any specified type of fuzzy numbers, but for simplicity the procedure is explained with triangular fuzzy numbers. if r_i^α is the α-cut range of the ith fuzzy number, then this range is decomposed as:

r_i^α = r_{i,out}^α + r_{i,in}^α + r_{i,int}^α

where r_{i,int}^α is the part of the range of the α-cut fuzzy number that intersects with either of the α-cut fuzzy control limits, and r_{i,in}^α and r_{i,out}^α are the parts of the range of the α-cut fuzzy number that lie between and outside the α-cut control limits, respectively. figure 1 represents an example of these definitions on triangular fuzzy numbers: for the first fuzzy number (fn1), r_{1,out}^α > 0, r_{1,in}^α = 0 and r_{1,int}^α > 0, and for the second one (fn2), r_{2,out}^α = 0, r_{2,in}^α > 0 and r_{2,int}^α > 0. the membership function, which is a weighted sum of the ranges r_{i,in}^α and r_{i,int}^α, is stated as follows:

μ_i = (w1 r_{i,in}^α + w2 r_{i,int}^α) / (w1 r_i^α)

where w1 = 1 − λ and w2 = min(λ, 1 − λ) are defined to standardize the membership function, and λ is the weight of the part of the α-cut fuzzy number that intersects with any one of the α-cut control limits. even though the value of λ is based on the production process and the experts' experience, the value is expected to lie in [0, 0.5], so that 0 < λ/(1 − λ) ≤ 1.

fig. 1. examples of the range components on triangular fuzzy numbers.

λ is a random variable and has a probability distribution based on the process. this yields multiple values for the membership degree of a fuzzy number, i.e. fuzziness of the fuzziness. hence, the proposed fuzzy membership function necessitates the use of type-2 fuzzy set theory. type-2 fuzzy sets, first introduced in [35], are the higher order form of type-1 fuzzy sets.
in type-1 fuzzy sets, the membership degree of each element is a crisp number in [0, 1], whereas in type-2 fuzzy sets the membership degree of each element is a fuzzy set in [0, 1]. for the simplicity of the approach and to increase its applicability, it is assumed that λ, which is determined by the decision maker(s), is a single constant value. this assumption reduces the procedure to type-1 fuzzy theory. after the calculation of the membership degree of each fuzzy number, the next step of the procedure is to describe the state of the process by the decision function given in (7) and to monitor the process:

process = { in control, if τ ≤ μ_i ≤ 1; out of control, if 0 ≤ μ_i < τ }  (7)

where τ is a parameter such that 0 ≤ τ ≤ 1.

b. estimation of the decision function parameter

the determination of the decision function parameter τ is an important part of the approach, because if the assigned value is greater than it should be, the probability of type i error increases. this situation has the same effect as moving the control limits closer to the center line on shewhart control charts. on the other hand, when the estimate is less than it should be, the probability of type i error decreases and the probability of type ii error increases. in this section, the estimation of the parameter is explained through studies of the membership degrees and their probability distributions. in the development process of the approach, a detailed study was performed to estimate τ. different data sets were experimented with, and fuzzy control charts were plotted under various process scenarios. data sets from binomial, poisson and normal distributions with a variety of parameter values were generated randomly for constructing fuzzy p, np, c, u, x̄, r and s control charts. the membership degrees of the fuzzy numbers were calculated and the distribution of these membership degrees was investigated individually. sixty-one different continuous probability distributions were fitted to all data sets with easyfit 5.5.
as a consequence of these applications, it was statistically shown that for all shewhart control charts the distribution of membership degrees is a beta distribution with two shape parameters, left-skewed with one peak. in order to standardize the estimates, all the data sets were considered together and the parameters were estimated by the maximum likelihood method, which provides shape parameters 3.6974 and 1.1807. figure 2 illustrates the histogram of a random sample of membership degrees of 1000 triangular fuzzy numbers randomly generated from different distributions. the membership degrees were calculated from the corresponding control charts.

fig. 2. histogram of membership degrees and beta(3.6974, 1.1807) (λ = 0.33 and α = 0.60).

the next step is to determine the value of the decision function parameter τ. it is estimated from the probability of observing a point outside the control limits when the process is in control, which is the 0.0027 type i error probability of the shewhart control chart. the decision function for a fuzzy control chart is then:

process = { in control, if 0.1856 ≤ μ_i ≤ 1; out of control, if 0 ≤ μ_i < 0.1856 }

iii. an application of the fuzzy c control chart

in this section, an application of the approach is presented with the fuzzy c control chart. the number of defective products in each package is assumed to be expressed by fuzzy numbers. the defects or nonconformities occur according to a poisson distribution, which forms the basis of the c control chart. in order to plot a fuzzy control chart, a set of fuzzy data is generated randomly from the poisson distribution. it is assumed that the process is in control when c = 25, and triangular fuzzy numbers are formed by subtracting and adding 1.5 times the standard deviation of the data. a random sample of 30 α-cut triangular fuzzy numbers and their membership degrees is given in table i.
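the cut-off value 0.1856 can be reproduced by inverting the cdf of the fitted beta(3.6974, 1.1807) distribution at the 0.0027 shewhart false alarm probability. a sketch using plain numerical integration and bisection (the helper names are illustrative; the paper itself used easyfit and maximum likelihood):

```python
import math

A, B = 3.6974, 1.1807  # fitted beta shape parameters

def beta_pdf(x):
    # density of beta(A, B); math.gamma gives the normalizing constant
    const = math.gamma(A + B) / (math.gamma(A) * math.gamma(B))
    return const * x ** (A - 1) * (1 - x) ** (B - 1)

def beta_cdf(t, n=2000):
    # composite trapezoidal rule on [0, t]
    h = t / n
    return h * (0.5 * (beta_pdf(0.0) + beta_pdf(t))
                + sum(beta_pdf(k * h) for k in range(1, n)))

# bisect for the tau with p(mu < tau) = 0.0027
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if beta_cdf(mid) < 0.0027:
        lo = mid
    else:
        hi = mid
tau = (lo + hi) / 2
print(tau)  # close to the reported 0.1856
```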
the fuzzy control limits and fuzzy central line are illustrated in figure 3 and obtained as follows: lcl = (0.114, 11.815, 23.977), cl = (18.317, 27.567, 36.817), ucl = (31.156, 43.318, 55.020), with α-cuts lcl^0.6 = [7.135, 16.680], cl^0.6 = [23.867, 31.267] and ucl^0.6 = [38.453, 47.999]. in this application, the membership degrees of the fuzzy numbers are computed under the assumption that the importance of having a number between the fuzzy limits is twice the importance of intersecting with the fuzzy limits, which means λ = 1/3. the fuzzy c control chart is given in figure 4. the membership degrees of 12 fuzzy numbers are 1, which means these numbers are completely between the α-cut fuzzy control limits. the rest of the values are smaller than 1, but none is smaller than τ. consequently, the fuzzy c control chart shows a pattern in which the α-cut fuzzy numbers are randomly distributed and the process is in control.

table i. sample 1: α-cut triangular fuzzy numbers (a, b, c) and membership degrees μ

no   a      b      c      μ        no   a      b      c      μ
 1   14.30  18.00  21.70  0.8392   16   18.30  22.00  25.70  1.0000
 2   32.30  36.00  39.70  0.9158   17   39.30  43.00  46.70  0.5000
 3   14.30  18.00  21.70  0.8392   18   38.30  42.00  45.70  0.5104
 4   16.30  20.00  23.70  0.9743   19   27.30  31.00  34.70  1.0000
 5   10.30  14.00  17.70  0.5689   20   19.30  23.00  26.70  1.0000
 6   27.30  31.00  34.70  1.0000   21   16.30  20.00  23.70  0.9743
 7   32.30  36.00  39.70  0.9158   22   21.30  25.00  28.70  1.0000
 8   25.30  29.00  32.70  1.0000   23   10.30  14.00  17.70  0.5689
 9   32.30  36.00  39.70  0.9158   24   33.30  37.00  40.70  0.8482
10   17.30  21.00  24.70  1.0000   25   38.30  42.00  45.70  0.5104
11   21.30  25.00  28.70  1.0000   26   12.30  16.00  19.70  0.7040
12   36.30  40.00  43.70  0.6455   27   18.30  22.00  25.70  1.0000
13   19.30  23.00  26.70  1.0000   28   19.30  23.00  26.70  1.0000
14   15.30  19.00  22.70  0.9068   29   33.30  37.00  40.70  0.8482
15   19.30  23.00  26.70  1.0000   30   37.30  41.00  44.70  0.5779

fig. 3. fuzzy control limits and central line.
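the limit values above follow from the triangular c chart formulas of section ii; a sketch reproducing them from the fuzzy center line (small third-decimal differences are possible because the published center line is itself rounded):

```python
import math

def fuzzy_c_limits(cl):
    # triangular fuzzy c chart limits: lcl = cl (-) 3*sqrt(cl), ucl = cl (+) 3*sqrt(cl),
    # with the components of the subtracted term in reversed order, as in fuzzy arithmetic
    a, b, c = cl
    lcl = (a - 3 * math.sqrt(c), b - 3 * math.sqrt(b), c - 3 * math.sqrt(a))
    ucl = (a + 3 * math.sqrt(a), b + 3 * math.sqrt(b), c + 3 * math.sqrt(c))
    return lcl, ucl

lcl, ucl = fuzzy_c_limits((18.317, 27.567, 36.817))
print([round(v, 2) for v in lcl])  # [0.11, 11.82, 23.98]
print([round(v, 2) for v in ucl])  # [31.16, 43.32, 55.02]
```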
fig. 4. fuzzy c control chart for the data in table i.

the application is repeated for a second data set generated from the poisson distribution with c = 27. thirty triangular fuzzy numbers are generated randomly for the second set and their membership degrees are calculated. table ii presents the α-cut triangular fuzzy numbers and membership degrees of the second sample. the fuzzy c control chart in figure 5 shows the α-cut fuzzy control limits and the central line, which are calculated from the first data set. the membership degrees of the corresponding fuzzy numbers that are smaller than τ are also presented on the chart. figure 5 indicates that "the process is out of control". this result is due to the 48th and 58th fuzzy numbers, which have membership values smaller than τ. the 48th number is completely outside the α-cut fuzzy upper control limit, with μ = 0.000, while the α-cut of the 58th fuzzy number intersects with the α-cut fuzzy lower control limit, with a membership degree of 0.1253. moreover, the pattern of the chart provides information about the randomness of the process. in figure 5 there is a shift in the process mean up to the 43rd fuzzy number, followed by a clear increasing trend; after the 48th number a descending trend can be observed. consequently, all these symptoms and the membership degrees point toward nonrandomness and an out of control state in the process output.
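the out of control signals above can be traced with a small sketch of the membership computation, assuming the weighted form μ = ((1 − λ)·r_in + λ·r_int) / ((1 − λ)·r) from section ii and measuring the support of each number against the α = 0.6 limit intervals of the first data set (the helper names are illustrative; results agree with the tables to within rounding of the published limits):

```python
def overlap(lo, hi, a, b):
    # length of the intersection of the intervals [lo, hi] and [a, b]
    return max(0.0, min(hi, b) - max(lo, a))

def membership(lo, hi, lcl, ucl, lam):
    # lcl = [l1, l2] and ucl = [u1, u2] are the alpha-cut limit intervals;
    # [lo, hi] is the range of the plotted fuzzy number
    (l1, l2), (u1, u2) = lcl, ucl
    r_in = overlap(lo, hi, l2, u1)                             # between the limits
    r_int = overlap(lo, hi, l1, l2) + overlap(lo, hi, u1, u2)  # on a limit band
    return ((1 - lam) * r_in + lam * r_int) / ((1 - lam) * (hi - lo))

LCL, UCL = (7.135, 16.680), (38.453, 47.999)        # sample-1 limits at alpha = 0.6
mu48 = membership(49.84, 58.16, LCL, UCL, lam=1/3)  # completely above ucl
mu50 = membership(23.84, 32.16, LCL, UCL, lam=1/3)  # completely between the limits
mu58 = membership(0.84, 9.16, LCL, UCL, lam=1/3)    # intersects the lcl band
print(mu48, mu50)  # 0.0 and 1.0
print(mu58 < 0.1856)  # number 58 signals out of control
```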
sample 2: -cut triangular fuzzy numbers and membership degrees () no no 31 18.84 23.00 27.16 1.0000 46 33.84 38.00 42.16 0.7812 32 10.84 15.00 19.16 0.6540 47 31.84 36.00 40.16 0.9004 33 24.84 29.00 33.16 1.0000 48 49.84 54.00 58.16 0.0000 34 23.84 28.00 32.16 1.0000 49 44.84 49.00 53.16 0.1929 35 7.84 12.00 16.16 0.5000 50 23.84 28.00 32.16 1.0000 36 22.84 27.00 31.16 1.0000 51 24.84 29.00 33.16 1.0000 37 14.84 19.00 23.16 0.8925 52 27.84 32.00 36.16 1.0000 38 18.84 23.00 27.16 1.0000 53 28.84 33.00 37.16 1.0000 39 21.84 26.00 30.16 1.0000 54 23.84 28.00 32.16 1.0000 40 19.84 24.00 28.16 1.0000 55 24.84 29.00 33.16 1.0000 41 20.84 25.00 29.16 1.0000 56 21.84 26.00 30.16 1.0000 42 6.84 11.00 15.16 0.4830 57 15.84 20.00 24.16 0.9521 43 27.84 32.00 36.16 1.0000 58 0.84 5.000 9.16 0.1253 44 32.84 37.00 41.16 0.8408 59 5.84 10.00 14.16 0.4234 45 23.84 28.00 32.16 1.0000 60 18.84 23.00 27.16 1.0000 fig. 5. fuzzy c control chart for the data in table ii. iv. fuzzy control chart performance the most effective and commonly used performance measure is the average run length (arl), which is the average number of points plotted on a control chart before an out of control condition is observed. if the process observations are uncorrelated, then arl is calculated as given below arl = = (one point plots out of control) if the process is in control, arl is denoted by arl where arl0 = 1 (type i error probability)⁄ . it is desired to have a large value for arl0 which gives fewer false alarm rates. on the other hand, if an assignable variable occurs, then the probability of being in the out of control state increases and more numbers give out of control signals. when the process is out of control, the arl is denoted by arl and defined by arl = 1 (1 − type ii error probability)⁄ . in order to reduce the time to detect out of control situation, small values for arl engineering, technology & applied science research vol. 8, no. 
in this section, the performance of the fuzzy c control chart is calculated and the fuzzy control chart is compared with the shewhart c control chart. all the computations are carried out in the c++ programming language and one million simulation runs are performed for each arl. the fuzzy numbers are generated randomly, and it is assumed that the process is in control when the mean number of nonconformities is μ0 = c = 14 and out of control when μ1 = μ0 + δ, where δ = 1, 2, …, 9. first, the reference quantities of the fuzzy chart are determined so that it has the same arl0 as the shewhart control chart: α = 0.6, τ = 0.1856 and λ = 1/3. then the process parameter is shifted and the run lengths are averaged to obtain arl1. table iii presents a summary of the results. the second column gives the shewhart c control chart arl values, which are calculated by (12) and (13):

p = p(x < lcl | μ1) + p(x > ucl | μ1)  (12)
1 − p = p(lcl ≤ x ≤ ucl | μ1)  (13)

table iii. performance of shewhart and fuzzy c control charts

shift (δ)   arl, shewhart c chart   arl, fuzzy c chart
0           370.1580                374.6512
1           160.6629                139.1889
2            76.1332                 69.5200
3            39.6020                 36.0776
4            22.4160                 21.4669
5            13.6749                 14.0383
6             8.9138                  9.4132
7             6.1615                  6.7849
8             4.4863                  5.2133
9             3.4206                  4.3388

it can be concluded that for small shifts of the parameter, the fuzzy control chart arl1 is significantly less than the shewhart control chart arl1, which means that the fuzzy control chart performance is better. as the shift increases, the arl values approach those of the shewhart control chart and both charts give almost the same performance.

v. conclusions and discussion

many real life problems cannot be modeled or defined by classical methods. for this reason, fuzzy logic has been applied to real life applications and science.
fuzzy control charts constructed with fuzzy set theory reflect uncertainty better than shewhart control charts. this paper presents a fuzzy approach that integrates fuzzy set theory with the basics of shewhart control charts. the approach is built on a decision function and a membership function based on α-cut fuzzy numbers, fuzzy 3σ control limits, and the intersection situations of fuzzy numbers and fuzzy control limits. an example is included to demonstrate the applicability and efficiency of the proposed fuzzy control charts. moreover, the performance of fuzzy control charts is investigated and the fuzzy c control chart is compared with the shewhart c control chart. the approach was investigated with various applications in the development stage, and the functions, parameters and decisions were tested for verification and validation. as a result of these studies, some advantages of the proposed approach can be stated. first, the type of fuzzy numbers is not specified; the choice of fuzzy numbers depends on the decision maker. second, the process is defined without using any transformation techniques, which minimizes the loss of information and biased decisions; the membership function is calculated as a weighted mean of ratios. third, the weights can be changed with respect to the process. the approach is easy to understand and calculate. it is flexible and does not require any important assumptions that restrict the application area; therefore, it can be modified easily for different processes and applied to both variables and attributes control charts with small modifications. another advantage of the approach is that the decision function has two linguistic decisions, "the process is in control" and "the process is out of control". depending on the process, the number of decisions can be increased or a warning decision can be added.
finally, the process is defined by the membership function, which provides more flexibility compared to shewhart control charts and previous studies.

references

[1] d. c. montgomery, introduction to statistical quality control, 7th edition, john wiley & sons inc., ny, usa, 2013
[2] l. a. zadeh, "fuzzy sets", information and control, vol. 8, no. 3, pp. 338-353, 1965
[3] t. raz, j. h. wang, "probabilistic and memberships approaches in the construction of control charts for linguistic data", production planning & control, vol. 1, no. 3, pp. 147-157, 1990
[4] j. h. wang, t. raz, "on the construction of control charts using linguistic variables", international journal of production research, vol. 28, no. 3, pp. 477-487, 1990
[5] a. kanagawa, f. tamaki, h. ohta, "control charts for process average and variability based on linguistic data", international journal of production research, vol. 31, no. 4, pp. 913-922, 1993
[6] h. taleb, m. limam, "on fuzzy and probabilistic control charts", international journal of production research, vol. 40, no. 12, pp. 2849-2863, 2002
[7] f. franceschine, d. romano, "control chart for linguistic variables: a method based on the use of linguistic quantifiers", international journal of production research, vol. 37, no. 16, pp. 3791-3800, 1999
[8] m. gulbay, c. kahraman, d. ruan, "α-cuts fuzzy control charts for linguistic data", international journal of intelligent systems, vol. 19, no. 12, pp. 1173-1196, 2004
[9] k. thaga, r. sivasamy, "control chart based on transition probability approach", journal of statistical and econometric methods, vol. 4, no. 2, pp. 61-82, 2015
[10] j. h. wang, c. h. chen, "economic statistical np-control chart designs based on fuzzy optimization", international journal of quality & reliability management, vol. 12, no. 1, pp. 88-92, 1995
[11] p. grzegorzewski, o. hryniewicz, "soft methods in statistical quality control", control cybernet, vol. 29, no. 1, pp. 119-140, 2000
[12] w. woodall, k. l. tsui, g. l.
tucker, "a review of statistical and fuzzy control charts based on categorical data", in: frontiers in statistical quality control, vol. 5, pp. 83-89, springer-verlag, berlin heidelberg, 1997
[13] y. k. chen, c. yeh, "an enhancement of dsi x control charts using a fuzzy genetic approach", the international journal of advanced manufacturing technology, vol. 24, no. 1-2, pp. 32-40, 2004
[14] c. b. cheng, "fuzzy process control: construction of control charts with fuzzy numbers", fuzzy sets and systems, vol. 154, no. 2, pp. 287-303, 2005
[15] m. gulbay, c. kahraman, "an alternative approach to fuzzy control charts: direct fuzzy approach", information sciences, vol. 77, no. 6, pp. 1463-1480, 2007
[16] o. hryniewicz, "statistics with fuzzy data in statistical quality control", soft computing, vol. 12, no. 3, pp. 229-234, 2007
[17] m. h. zavvar sabegh, z. sabegha, a. mirzazadeha, s. salehiana, g. w. weber, "a literature review on the fuzzy control chart; classifications & analysis", international journal of supply and operations management, vol. 1, no. 2, pp. 167-189, 2014
[18] d. j. fonseca, m. e. elam, l. tibbs, "fuzzy short-run control charts", mathware & soft computing, vol. 14, pp. 81-101, 2007
[19] k. l. hsieh, l. i. tong, m. c. wang, "the application of control chart for defects and defect clustering in ic manufacturing based on fuzzy theory", expert systems with applications, vol. 32, no. 3, pp. 765-776, 2007
[20] v. amirzadeh, m. mashinchi, a. parchami, "construction of p-charts using degree of nonconformity", information sciences, vol. 179, no. 12, pp. 1501-1560, 2009
[21] m. h. shu, h. c. wu, "monitoring imprecise fraction of nonconforming items using p control charts", journal of applied statistics, vol. 37, no. 8, pp. 1283-1297, 2010
[22] d. wang, p. li, m.
yasuda, "construction of fuzzy control charts based on weighted possibilistic mean", communications in statistics - theory and methods, vol. 43, no. 15, pp. 3186-3207, 2014
[23] m. h. fazel zarandi, i. b. turksen, h. kashan, "fuzzy control charts for variable and attribute quality characteristic", iranian journal of fuzzy systems, vol. 3, no. 1, pp. 31-44, 2006
[24] a. faraz, m. b. moghadam, "fuzzy control chart a better alternative for shewhart average chart", quality & quantity, vol. 41, no. 3, pp. 375-385, 2007
[25] a. faraz, r. b. kazemzadeh, m. b. moghadam, a. bazdar, "constructing a fuzzy shewhart control chart for variables when uncertainty and randomness are combined", quality & quantity, vol. 44, no. 5, pp. 905-914, 2009
[26] a. faraz, a. f. shapiro, "an application of fuzzy random variables to control charts", fuzzy sets and systems, vol. 161, pp. 2684-2694, 2010
[27] m. h. shu, h. c. wu, "fuzzy x and r control charts: fuzzy dominance approach", computers & industrial engineering, vol. 61, no. 3, pp. 676-686, 2011
[28] s. b. akhundjanov, f. pascual, "moving range ewma control charts for monitoring the weibull shape parameter", journal of statistical computation and simulation, vol. 85, no. 9, pp. 1864-1882, 2015
[29] j. d. t. tannock, "a fuzzy control charting method for individuals", international journal of production research, vol. 41, no. 5, pp. 1017-1032, 2003
[30] m. gulbay, c. kahraman, "development of fuzzy process control charts and fuzzy unnatural pattern analyses", computational statistics & data analysis, vol. 51, no. 1, pp. 434-451, 2006
[31] n. pekin alakoc, a. apaydin, "sensitizing rules for fuzzy control charts", world academy of science, engineering and technology, international journal of mechanical, aerospace, industrial, mechatronic and manufacturing engineering, vol. 7, no. 5, pp. 931-935, 2013
[32] m. n. pastuizaca fernandez, a. carrion garcia, o.
ruiz barzola, "multivariate multinomial t2 control chart using fuzzy approach", international journal of production research, vol. 53, no. 7, pp. 2225-2238, 2015
[33] d. wang, o. hryniewicz, "a fuzzy nonparametric shewhart chart based on the bootstrap approach", international journal of applied mathematics and computer science, vol. 25, no. 2, pp. 389-401, 2015
[34] b. sadeghpour gildeh, n. shafiee, "x-mr control chart for autocorrelated fuzzy data using dp,q-distance", the international journal of advanced manufacturing technology, vol. 81, no. 5-8, pp. 1047-1054, 2015
[35] l. a. zadeh, "the concept of a linguistic variable and its application to approximate reasoning 1", information sciences, vol. 8, no. 3, pp. 199-249, 1975

authors profile

nilufer pekin alakoc graduated from the department of statistics at middle east technical university in turkey and received her msc degree from the industrial engineering department at the same university. she obtained her phd degree in statistics. her research interests are mainly in the areas of statistical quality control, statistical applications in industrial engineering and operations research. she is currently working as an assistant professor at the american university of the middle east in kuwait.

aysen apaydin is a professor at the department of insurance and actuary sciences at ankara university in turkey. she has more than 35 years of experience in statistics and statistical applications. she has published 4 textbooks and more than 115 scientific publications and conference papers. she has worked in several administrative positions and served as organizer and council member of more than 20 congresses, symposiums and colloquiums. dr. apaydin is currently working as student and information coordinator at ankara university.

engineering, technology & applied science research vol. 7, no.
3, 2017, 1699-1707 1699 www.etasr.com rezaei and babaei: designing a model for knowledge socialization using sociability processes of human…

designing a model for knowledge socialization using sociability processes of human resource management: a case study

kiana rezaei
department of information technology management, college of management and economics, science and research branch, islamic azad university, tehran, iran

mohammadreza babaei
department of industrial management, college of management and accounting, yadegar-e-imam khomeini (rah) shahrerey branch, islamic azad university, tehran, iran

abstract—this study develops a model for knowledge socialization using the sociability processes of human resources through an applied research approach. two types of participants took part in this study: the first included academic and industrial experts; the second included employees and managers of ansar bank. ten experts were asked to identify criteria and to weight the identified criteria. using simple random sampling, the sample size was estimated at 207. field and archival studies were used to collect data. the validity and reliability of the distributed questionnaire were confirmed by organizational experts. using the theoretical literature and a survey of experts, 18 criteria were identified, of which 12 (desirable and joyful workplace, management and leadership support in the sociability process, training courses, transparency in working relations, team work, organizational trustful climate, job description and job knowledge, tangible incentives, participatory system, informal technique, defined career path, and individual values aligned with organizational values) were selected by screening for prioritization and analysis. fuzzy ahp and structural equation modelling based on partial least squares were used for prioritization and weighting.
the fuzzy ahp model showed that desirable workplace (0.163), participatory systems and brainstorming (0.149), transparency in working relations (0.114), and informal techniques (0.111) gained the highest weights; finally, the pls model showed that all 12 identified criteria affect the socialization of knowledge management.

keywords—sociability of human resources; organizational knowledge; knowledge socialization

i. introduction

currently, the world around organizations is increasingly changing; as scientists assert, the only thing that will not change in this period is change itself [1]. due to the complexity of the environment and growing competition in any industry, suitable strategies alone cannot lead to competitive advantage and a good competitive position [2]. knowledge is the first strategic resource for organizations and acts as a key competitive factor in the global economy. nevertheless, research has shown that half of human information is completely outdated every five years and is replaced by new knowledge and information [3]. therefore, organizations need to adopt processes and strategies to share knowledge among members and allow them to gain from the experiences of others. however, evidence suggests that this is seriously challenged in iranian organizations, where members do not share knowledge with each other; knowledge sharing, one of the most fundamental pillars of a knowledge-based organization, is the missing link in these organizations [4]. a plan cannot be successful merely by determining plans and adopting strategic decisions. in other words, even a well-developed strategy will be useless if it is not implemented [5, 6]. however, evidence suggests that the literature and the practical activities and efforts made in organizations mostly focus on the development of strategy rather than its implementation; implementation and its aspects have been neglected [5-7].
Nevertheless, it can be claimed that although the socialization of organizational knowledge as a basic approach has been adopted by various organizations, its successful implementation depends on suitable conditions and requirements such as financial, material, technical and human resources. This study focuses on human resources. Experience shows that the success and failure of organizations directly depend on the quality and effectiveness of employees. Modern successful organizations have realized that they need global HR managers to compete in global markets. More importantly, the technological revolution of recent decades has highlighted the role of human resources as an important organizational resource. The current study addresses this by focusing on Ansar Bank and using the Feldman model [17]. Therefore, this study examines the extent to which socialization of organizational knowledge can be expected by using the socialization process of human resource management, and the extent to which each socialization process of HR management can predict the development and socialization of knowledge.

II. Literature Review

Organizational sociability is a process in which a new employee is converted from an outsider to an effective insider for the organization; this happens when an employee enters an organizational domain [8]. Sociability of HR management is the process whereby new people acquire necessary and sufficient information about the organization, adapt to its conditions by adopting its values, norms and culture, and learn their tasks and what is expected of them [9].
To measure this dimension, a researcher-made questionnaire was used with the following aspects: 1) pre-entry practices, involving all thoughts and attitudes delivered to employees; 2) starting practices, or the encounter with the organization, involving all knowledge, lessons and experiences gained at the first encounters with the organization; 3) evolutionary practices, involving changes which occur after several years of work and experience and help in harmonizing with the governing conditions of the organization. Knowledge is a competitive advantage and one of the most important factors of production, which must be directed and managed. Knowledge is one of the most important intangible components of organizations, employed in organizational mechanisms and processes, and it allows innovation in the organization. Accordingly, the measurement of knowledge and other intangible assets is very important in business processes [10]. Numerous definitions have been presented for knowledge management. One definition that has standardized and integrated various others is the Australian standard definition of 2003, according to which knowledge management is a disciplined approach to achieving organizational goals by the optimum use of organizational knowledge. Another definition considers knowledge management in terms of business: in [11], it was asserted that knowledge management encompasses all systemic activities associated with knowledge creation and knowledge sharing within the organization in relation to customers, partners and owners of knowledge. In a knowledge environment, knowledge management is defined as any systemic activity compatible with the usage, dissemination and encoding of organizational goals. Knowledge socialization refers to the quantitative and qualitative development of the knowledge needed by the organization, knowledge sharing between members, and its proper management [12, 13].
In [2], the authors examined the role of knowledge-based leadership in knowledge management practices and innovation. They evaluated the effect of knowledge-based leadership on knowledge management practices for innovation and competitive advantage using several hypotheses, and found that knowledge management practices mediated the relationship between knowledge-based leadership and innovative performance; moreover, knowledge management practices were effective on innovative performance. In [14], the authors evaluated the effect of implicit sociability practices on the job satisfaction and engagement of newcomers. They developed a new self-evaluation model for newcomers to evaluate the effect of employee sociability practices on organizational commitment and engagement in work. Proper sociability improved implementation, commitment and, ultimately, the job satisfaction of Chinese hotel employees. In [15], the authors addressed interactive leadership and innovation, focusing on the mediating role of knowledge absorption capacity. They asserted that proper strategies alone are not enough for organizations; organizations need to adapt to their surrounding environment. Analysis of questionnaires distributed among 28 top managers showed a positive and significant relationship between interactive leadership and knowledge absorption capacity, and between knowledge management capacity and organizational innovation, supporting the mediating role of knowledge absorption capacity. In [16], the authors examined the relationship between leadership styles, knowledge management and the increased commitment of employees of the Khuzestan bus company. Findings indicated that increased knowledge management and leadership styles promoted organizational commitment among employees; moreover, both leadership styles and knowledge management predicted the organizational commitment of employees.

III. Conceptual Model

Based on the literature review, the conceptual model was developed, as shown in Figure 1.

Fig. 1.
Knowledge socialization model

IV. Hypotheses

- Hypothesis 1: Desirable workplace is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 2: Top management support is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 3: Transparency in working relations is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 4: Training courses are effective on the socialization of knowledge management in the sociability process.
- Hypothesis 5: Teamwork is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 6: Organizational trustful climate is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 7: Job description is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 8: Organizational incentive is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 9: Participatory system is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 10: Defined career path is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 11: Informal technique is effective on the socialization of knowledge management in the sociability process.
- Hypothesis 12: Alignment of individual values with organizational values is effective on the socialization of knowledge management in the sociability process.

V. Research Methodology

This was an extensive study using descriptive and survey methodologies. Experts were surveyed to identify criteria. To test the model, the developed questionnaires were distributed among a target population selected by sampling methods.
The criteria identified through interviews and archival studies were classified and compiled. The developed model was tested using structural equation modeling, and statistical tests were used to prioritize the factors effective on knowledge socialization. Two types of participants were enrolled in this study because two questionnaires were used. The first type included academic and industrial experts; the second type included 450 employees and managers of Ansar Bank. Ten experts were asked to identify criteria and weigh the identified criteria (first questionnaire). Using simple random sampling, the sample size for hypothesis testing was estimated at 207 (second questionnaire). Archival and field studies were used to collect data: archival studies included the literature review; field studies included interviews and questionnaires. The validity and reliability of the distributed questionnaire were confirmed by organizational experts. By reviewing the literature and surveying experts, 18 criteria were identified, of which 12 (desirable and joyful workplace, management and leadership support in the sociability process, training courses, transparency in working relations, teamwork, organizational trustful climate, job description and job knowledge, tangible incentives, participatory system, informal technique, defined career path, individual values aligned with organizational values) were selected by screening for prioritization and analysis. Fuzzy analytic hierarchy process (AHP) and structural equation modeling based on partial least squares were used for prioritization and weighting.

VI. Results and Findings

A. Prioritization Using Fuzzy AHP

As shown in the review of literature regarding the factors effective on knowledge socialization in the sociability process of human resources, the following criteria were extracted for evaluating knowledge socialization (Table I). These criteria were confirmed by experts and supervisors for their effect on knowledge socialization in the sociability process.
Table I. Effective factors on knowledge socialization in sociability

Symbol | Criterion
C1 | Desirable and joyful workplace
C2 | Management and leadership support in sociability process
C3 | Training courses
C4 | Transparency in working relations
C5 | Teamwork
C6 | Organizational trustful climate
C7 | Job description and job knowledge
C8 | Tangible incentives
C9 | Participatory system
C10 | Informal technique
C11 | Defined career path
C12 | Individual values aligned with organizational values

Since 10 experts were surveyed, 10 different matrices were formed for the comparison of criteria. First, these matrices were converted to a single matrix. Table II lists the fuzzy numbers used. The best method to integrate the pairwise comparison tables of all respondents is the geometric mean, because pairwise comparisons provide data in the form of ratios; moreover, the reciprocal property of the pairwise comparison matrix explains the use of this method, because the geometric mean preserves this property. Let $\tilde{a}_{ij}^{k}$ be the element given by the $k$-th respondent for the comparison of criterion $i$ relative to criterion $j$; the geometric mean was calculated for corresponding elements by

$\tilde{a}_{ij} = \left( \prod_{k=1}^{n} \tilde{a}_{ij}^{k} \right)^{1/n}$

With 10 respondents this becomes $\tilde{a}_{ij} = \left( \tilde{a}_{ij}^{1} \otimes \tilde{a}_{ij}^{2} \otimes \cdots \otimes \tilde{a}_{ij}^{10} \right)^{1/10}$, for example

$\tilde{a} = \left( (1,2,3) \otimes (1,1,1) \otimes (2,3,4) \otimes (1,2,3) \otimes (2,3,4) \otimes (0.25,0.33,0.5) \otimes (1,1,1) \otimes (2,3,4) \otimes (1,2,3) \otimes (2,3,4) \right)^{1/10} = (0.9, 1.22, 1.53)$

Using the above formula, the criteria were compared as shown in Table III.

Table II. Fuzzy numbers used

Preference | (lowest, moderate value, highest)
Equally preferred | (1, 1, 1)
Intermediate | (1, 2, 3)
Moderately preferred | (2, 3, 4)
Intermediate | (3, 4, 5)
Strongly preferred | (4, 5, 6)
Intermediate | (5, 6, 7)
Very strongly preferred | (6, 7, 8)
Intermediate | (7, 8, 9)
Extremely preferred | (9, 9, 9)

Table III.
Primary pairwise comparison matrix by integrating expert judgments

Criterion | C1 | C2 | C3 | C4 | C5 | C6
C1 | (1.00, 1.00, 1.00) | (0.90, 1.22, 1.53) | (1.53, 1.83, 2.13) | (1.15, 1.57, 2.02) | (1.41, 1.89, 2.35) | (1.89, 2.51, 3.12)
C2 | (0.65, 0.82, 1.12) | (1.00, 1.00, 1.00) | (1.28, 1.68, 2.17) | (1.10, 1.57, 2.11) | (1.26, 1.61, 1.97) | (1.74, 2.45, 3.16)
C3 | (0.47, 0.55, 0.65) | (0.46, 0.60, 0.78) | (1.00, 1.00, 1.00) | (0.55, 0.75, 1.07) | (0.92, 1.16, 1.47) | (0.88, 1.12, 1.45)
C4 | (0.49, 0.64, 0.87) | (0.47, 0.64, 0.91) | (0.93, 1.34, 1.81) | (1.00, 1.00, 1.00) | (1.00, 1.23, 1.46) | (1.41, 1.86, 2.29)
C5 | (0.43, 0.53, 0.71) | (0.51, 0.62, 0.79) | (0.68, 0.92, 1.21) | (0.68, 0.81, 1.00) | (1.00, 1.00, 1.00) | (0.93, 1.13, 1.41)
C6 | (0.32, 0.40, 0.53) | (0.32, 0.41, 0.57) | (0.69, 0.90, 1.14) | (0.44, 0.58, 0.79) | (0.71, 0.88, 1.07) | (1.00, 1.00, 1.00)
C7 | (0.23, 0.29, 0.39) | (0.28, 0.32, 0.38) | (0.40, 0.48, 0.63) | (0.34, 0.41, 0.52) | (0.43, 0.49, 0.61) | (0.42, 0.55, 0.78)
C8 | (0.38, 0.50, 0.65) | (0.42, 0.52, 0.68) | (0.67, 0.90, 1.20) | (0.44, 0.57, 0.78) | (0.45, 0.59, 0.85) | (0.65, 0.81, 1.07)
C9 | (0.21, 0.26, 0.33) | (0.26, 0.31, 0.38) | (0.31, 0.40, 0.54) | (0.36, 0.49, 0.67) | (0.29, 0.36, 0.47) | (0.34, 0.44, 0.58)
C10 | (0.50, 0.67, 0.93) | (0.61, 0.71, 0.85) | (1.01, 1.27, 1.56) | (0.76, 1.06, 1.37) | (0.88, 1.14, 1.47) | (1.01, 1.40, 1.91)
C11 | (0.20, 0.23, 0.27) | (0.20, 0.23, 0.28) | (0.26, 0.32, 0.43) | (0.21, 0.25, 0.31) | (0.22, 0.27, 0.35) | (0.31, 0.38, 0.47)
C12 | (0.16, 0.18, 0.20) | (0.17, 0.20, 0.24) | (0.21, 0.25, 0.31) | (0.18, 0.22, 0.28) | (0.18, 0.22, 0.29) | (0.21, 0.25, 0.31)

Criterion | C7 | C8 | C9 | C10 | C11 | C12
C1 | (2.59, 3.41, 4.26) | (1.53, 2.01, 2.60) | (3.02, 3.85, 4.68) | (1.07, 1.49, 1.99) | (3.76, 4.39, 5.02) | (4.89, 5.55, 6.18)
C2 | (2.63, 3.12, 3.57) | (1.47, 1.94, 2.38) | (2.60, 3.19, 3.87) | (1.22, 1.46, 1.71) | (3.41, 4.15, 4.82) | (4.20, 5.00, 5.86)
C3 | (1.58, 2.22, 2.81) | (0.84, 1.11, 1.49) | (1.87, 2.52, 3.19) | (0.64, 0.79, 0.99) | (2.32, 3.11, 3.89) | (3.23, 4.05, 4.81)
C4 | (1.91, 2.43, 2.94) | (1.28, 1.76, 2.27) | (1.49, 2.06, 2.74) | (0.73, 0.94, 1.31) | (3.19, 3.99, 4.93) | (3.57, 4.62, 5.69)
C5 | (1.64, 2.02, 2.35) | (1.18, 1.71, 2.23) | (2.14, 2.79, 3.46) | (0.66, 0.85, 1.10) | (2.86, 3.72, 4.49) | (3.42, 4.52, 5.53)
C6 | (1.28, 1.81, 2.39) | (0.93, 1.24, 1.55) | (1.71, 2.29, 2.94) | (0.52, 0.72, 0.99) | (2.13, 2.61, 3.19) | (3.23, 4.03, 4.80)
C7 | (1.00, 1.00, 1.00) | (0.57, 0.73, 0.90) | (1.15, 1.40, 1.68) | (0.37, 0.45, 0.56) | (1.52, 1.92, 2.32) | (1.94, 2.60, 3.29)
C8 | (1.12, 1.37, 1.76) | (1.00, 1.00, 1.00) | (1.69, 2.21, 2.75) | (0.70, 0.81, 0.95) | (1.83, 2.50, 3.23) | (2.49, 3.33, 4.21)
C9 | (0.59, 0.72, 0.87) | (0.36, 0.45, 0.59) | (1.00, 1.00, 1.00) | (0.33, 0.37, 0.42) | (1.23, 1.68, 2.11) | (1.62, 2.18, 2.67)
C10 | (1.78, 2.23, 2.73) | (1.14, 1.37, 1.67) | (2.39, 2.73, 3.02) | (1.00, 1.00, 1.00) | (2.81, 3.43, 4.09) | (4.09, 5.14, 6.10)
C11 | (0.43, 0.52, 0.66) | (0.31, 0.40, 0.55) | (0.47, 0.59, 0.81) | (0.24, 0.29, 0.36) | (1.00, 1.00, 1.00) | (1.22, 1.60, 2.11)
C12 | (0.30, 0.38, 0.51) | (0.23, 0.28, 0.37) | (0.37, 0.46, 0.62) | (0.16, 0.19, 0.24) | (0.47, 0.62, 0.82) | (1.00, 1.00, 1.00)

B. Consistency Rate of the Integrated Matrix

The fuzzy numbers of the above table were defuzzified by

$s_j = \frac{a_j + 4 b_j + c_j}{6}, \quad j = 1, 2, \ldots, m$

Then, the weighted sum vector (WSV) was calculated by multiplying the primary values of the group comparisons by the total prioritization vector (the final weights of the criteria) and calculating the sum of each row; the results are listed in Table IV.

Table IV. WSV values

Criterion | WSV
C1 | 1.96
C2 | 1.8
C3 | 1.138
C4 | 1.39
C5 | 1.196
C6 | 0.97
C7 | 0.62
C8 | 0.92
C9 | 0.5
C10 | 1.34
C11 | 0.371
C12 | 0.273

The consistency vector (C.V.) was calculated by dividing the elements of this vector by the prioritization vector of the criteria, as shown in Table V.

Table V. C.V. values

Criterion | C.V.
C1 | 12.02
C2 | 12.17
C3 | 12.09
C4 | 12.2
C5 | 12.04
C6 | 12.16
C7 | 12.18
C8 | 12.23
C9 | 12.11
C10 | 12.14
C11 | 12.15
C12 | 12.13
Mean | 12.135

Then, the consistency index was calculated by

$CI = \frac{\lambda_{max} - n}{n - 1} = \frac{12.135 - 12}{12 - 1} \approx 0.012$

where $n$ denotes the number of criteria (here 12) and $\lambda_{max}$ denotes the mean C.V. Finally, taking R.I. = 1.56, the consistency rate (C.R.) was calculated as

$CR = \frac{CI}{RI} = \frac{0.012}{1.56} \approx 0.008$

The calculated C.R. < 0.1 indicates that the pairwise comparisons are well consistent.

C. Fuzzy Weights of Criteria

Following fuzzy AHP, the data available in the integrated matrix of criteria were analyzed as follows.
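The aggregation, defuzzification and consistency steps above can be sketched in Python. This is a minimal illustration, not the authors' code; the function names are my own, and the inputs are the values reported above (λmax = 12.135, n = 12, R.I. = 1.56), which reproduce CI ≈ 0.012 and CR ≈ 0.008.

```python
import math

def geo_mean_tfn(tfns):
    """Element-wise geometric mean of triangular fuzzy numbers (l, m, u),
    as used above to merge the 10 expert matrices into a single matrix."""
    n = len(tfns)
    return tuple(math.prod(t[i] for t in tfns) ** (1.0 / n) for i in range(3))

def defuzzify(l, m, u):
    """Weighted-average defuzzification s = (l + 4m + u) / 6."""
    return (l + 4 * m + u) / 6

def consistency(lambda_max, n, ri):
    """CI = (lambda_max - n) / (n - 1) and CR = CI / RI; CR < 0.1 means the
    pairwise comparisons are acceptably consistent."""
    ci = (lambda_max - n) / (n - 1)
    return ci, ci / ri

# Values reported above: mean consistency vector 12.135, 12 criteria, RI = 1.56
ci, cr = consistency(12.135, 12, 1.56)
```

Note that `defuzzify(0.102, 0.159, 0.241)` gives 0.163, matching the defuzzified weight of the first criterion in Table VII.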
Using the geometric mean, the value of the $i$-th criterion relative to the other criteria was determined by

$\tilde{r}_i = \left( \tilde{a}_{i1} \otimes \tilde{a}_{i2} \otimes \cdots \otimes \tilde{a}_{i12} \right)^{1/12}$

For example, the value of the first criterion was calculated as

$\tilde{r}_1 = \left( (1,1,1) \otimes (0.9, 1.22, 1.53) \otimes (1.53, 1.83, 2.13) \otimes (1.15, 1.57, 2.02) \otimes (1.41, 1.89, 2.35) \otimes \cdots \otimes (1.07, 1.49, 1.99) \otimes (3.76, 4.39, 5.02) \otimes (4.89, 5.55, 6.18) \right)^{1/12} = (1.778, 2.24, 2.707)$

where the triangular fuzzy number (0.9, 1.22, 1.53) is the fuzzy value of the first criterion versus the second criterion, and the triangular fuzzy number (1.778, 2.24, 2.707) is the fuzzy value of the first criterion versus the other eleven criteria (Table VI).

Table VI. Fuzzy values of pairwise comparisons

$\tilde{r}_i$ | l | m | u
r1 | 1.778 | 2.240 | 2.707
r2 | 1.632 | 2.033 | 2.465
r3 | 1.012 | 1.281 | 1.604
r4 | 1.204 | 1.549 | 1.961
r5 | 1.080 | 1.355 | 1.672
r6 | 0.858 | 1.092 | 1.382
r7 | 0.567 | 0.690 | 0.853
r8 | 0.813 | 1.020 | 1.301
r9 | 0.460 | 0.566 | 0.705
r10 | 1.227 | 1.517 | 1.852
r11 | 0.344 | 0.414 | 0.517
r12 | 0.256 | 0.305 | 0.380

The fuzzy weights of the criteria were then determined by

$\tilde{w}_i = \tilde{r}_i \otimes \left( \tilde{r}_1 \oplus \tilde{r}_2 \oplus \cdots \oplus \tilde{r}_{12} \right)^{-1}$

i.e. the value of each criterion was multiplied by the inverse of the fuzzy sum of all values. For example, the fuzzy weight of the first criterion was calculated as

$\tilde{w}_1 = (1.778, 2.24, 2.707) \otimes \left( \frac{1}{17.399}, \frac{1}{14.062}, \frac{1}{11.231} \right) = (0.102, 0.159, 0.241)$

where 11.231, 14.062 and 17.399 are the component-wise sums of the $\tilde{r}_i$ in Table VI. Thus, the fuzzy weight of the first criterion was (0.102, 0.159, 0.241). The fuzzy weights are listed in Table VII.

Table VII.
Fuzzy weights of criteria

$\tilde{w}_j$ | l | m | u | Defuzzified weight | Rank
w1 | 0.102 | 0.159 | 0.241 | 0.163 | 1
w2 | 0.094 | 0.145 | 0.220 | 0.149 | 2
w3 | 0.058 | 0.091 | 0.143 | 0.094 | 6
w4 | 0.069 | 0.110 | 0.175 | 0.114 | 3
w5 | 0.062 | 0.096 | 0.149 | 0.099 | 5
w6 | 0.049 | 0.078 | 0.123 | 0.080 | 7
w7 | 0.033 | 0.049 | 0.076 | 0.051 | 9
w8 | 0.047 | 0.073 | 0.116 | 0.075 | 8
w9 | 0.026 | 0.040 | 0.063 | 0.042 | 10
w10 | 0.071 | 0.108 | 0.165 | 0.111 | 4
w11 | 0.020 | 0.029 | 0.046 | 0.031 | 11
w12 | 0.015 | 0.022 | 0.034 | 0.023 | 12

As shown in Table VII, as the last step of fuzzy AHP, desirable workplace (0.163), participatory systems and brainstorming (0.149), transparency in working relations (0.114), and informal techniques (0.111) gained the highest weights; in other words, these criteria are expected to influence the socialization of knowledge management in the sociability process.

VII. Model Testing

Structural equation modeling was used to analyze the conceptual model with the SmartPLS software. The structural model is reported below. Significance coefficients (t-values) were used to analyze the significance of the relationships, as shown in Figures 2 and 3. In these figures, blue circles show variables and rectangles show the measurement indexes of the variables (questions of the questionnaire). Figure 2 shows the PLS model with significance estimates (t-values) and Figure 3 the standardized estimates (β-values). A hypothesis is confirmed when |t| > 1.96; otherwise, it is rejected. The β-value ranges from zero to one; β-values close to one indicate a higher effect of the independent variable on the dependent variable.

VIII. Measurement Model Evaluation

To measure the reliability of the measurement model, convergent validity and discriminant validity were tested by confirmatory factor analysis (CFA) and average variance extracted (AVE). As shown in Table VIII, all factor loadings were at least 0.5; therefore, the convergent validity of the data is confirmed.

IX.
Hypothesis Testing

The hypotheses were tested using β-values and t-values. For any path, t-values > 1.96 indicate the significance of the path and the hypothesis is confirmed (α = 0.05). Table IX shows the results of the t-test. A β-value of 0.158 indicates a direct and positive effect of desirable workplace on knowledge socialization. The results indicate that top management support is effective on the socialization of knowledge management at the 99% confidence level (t-value = 3.202); moreover, β = 0.218 indicates a direct and positive effect of top management support on knowledge socialization. The results indicate that transparency of working relations is effective on the socialization of knowledge management (t-value = 3.905); moreover, β = 0.454 indicates a direct and positive effect of transparency of working relations on knowledge socialization. The results indicate that training courses are effective on the socialization of knowledge management at the 99% confidence level (t-value = 5.197); moreover, β = 0.311 indicates a direct and positive effect of training courses on knowledge socialization. The results indicate that teamwork is effective on the socialization of knowledge management at the 99% confidence level (t-value = 3.761); moreover, β = 0.349 indicates a direct and positive effect of teamwork on knowledge socialization. The results indicate that a trustful climate is effective on the socialization of knowledge management (t-value = 4.075); moreover, β = 0.193 confirms this hypothesis. The results indicate that job description is effective on the socialization of knowledge management (t-value = 3.045); moreover, β = 0.178 indicates the effectiveness of job description on knowledge socialization.
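The decision rule applied above (confirm a path when |t| > 1.96 at the 5% level, or |t| > 2.576 at the 1% level, two-tailed) can be sketched as follows. This is a minimal illustration with values taken from the results above; the function name is my own.

```python
def path_is_significant(t_value, alpha=0.05):
    """Two-tailed significance check for a PLS path coefficient:
    |t| > 1.96 confirms at the 5% level, |t| > 2.576 at the 1% level."""
    critical = {0.05: 1.96, 0.01: 2.576}[alpha]
    return abs(t_value) > critical

# Illustrative checks against values reported above:
# top management support (t = 3.202) clears even the 1% threshold,
# while the participatory system (t = 1.996) is significant only at 5%.
strong = path_is_significant(3.202, alpha=0.01)
weak = path_is_significant(1.996) and not path_is_significant(1.996, alpha=0.01)
```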
The results indicate that organizational incentives are effective on the socialization of knowledge management (t-value = 2.721); moreover, β = 0.156 indicates a direct and positive effect of organizational incentives on knowledge socialization. The results indicate that the participatory system is effective on the socialization of knowledge management at the 95% confidence level (t-value = 1.996); moreover, β = 0.048 indicates a slight but positive effect of participatory systems on knowledge socialization. The results indicate that a defined career path is effective on the socialization of knowledge management (t-value = 3.125); moreover, β = 0.206 confirms this hypothesis. The results indicate that the informal technique is effective on the socialization of knowledge management (t-value = 5.011); moreover, β = 0.210 confirms this hypothesis. The results indicate that the alignment of individual values with organizational values is effective on the socialization of knowledge management (t-value = 4.463); moreover, β = 0.188 indicates a direct and positive effect of this criterion on knowledge socialization.

Table VIII.
Factor loadings of the observed variables

Construct | Question | Factor loading | t-value | AVE | CR | Cronbach's α
Desirable and joyful workplace | 2 | 0.871 | 48.972 | 0.667 | 0.856 | 0.747
 | 3 | 0.839 | 30.503 | | |
 | 1 | 0.734 | 15.479 | | |
Management and leadership support in sociability process | 4 | 0.905 | 56.300 | 0.813 | 0.929 | 0.885
 | 6 | 0.905 | 49.244 | | |
 | 5 | 0.896 | 58.745 | | |
Transparency in working relations | 7 | 0.902 | 48.226 | 0.776 | 0.912 | 0.856
 | 8 | 0.872 | 42.100 | | |
 | 9 | 0.869 | 46.201 | | |
Training courses | 10 | 0.877 | 50.171 | 0.755 | 0.902 | 0.838
 | 11 | 0.873 | 36.717 | | |
 | 12 | 0.857 | 33.684 | | |
Teamwork | 13 | 0.931 | 95.993 | 0.773 | 0.910 | 0.851
 | 15 | 0.887 | 50.892 | | |
 | 14 | 0.817 | 24.659 | | |
Organizational trustful climate | 18 | 0.928 | 88.707 | 0.822 | 0.932 | 0.892
 | 17 | 0.907 | 77.356 | | |
 | 16 | 0.886 | 46.635 | | |
Job description and job knowledge | 20 | 0.897 | 65.738 | 0.729 | 0.889 | 0.813
 | 21 | 0.859 | 41.596 | | |
 | 19 | 0.804 | 17.737 | | |
Tangible incentives | 23 | 0.947 | 126.782 | 0.820 | 0.932 | 0.890
 | 24 | 0.890 | 47.000 | | |
 | 22 | 0.879 | 42.789 | | |
Participatory system | 27 | 0.885 | 28.224 | 0.639 | 0.840 | 0.714
 | 26 | 0.812 | 17.165 | | |
 | 25 | 0.69 | 18.751 | | |
Defined career path | 28 | 0.892 | 42.595 | 0.741 | 0.895 | 0.825
 | 30 | 0.867 | 52.105 | | |
 | 29 | 0.822 | 28.737 | | |
Informal technique | 32 | 0.900 | 41.677 | 0.770 | 0.909 | 0.851
 | 31 | 0.873 | 51.942 | | |
 | 33 | 0.860 | 30.460 | | |
Alignment of individual values with organizational values | 34 | 0.875 | 35.547 | 0.738 | 0.894 | 0.824
 | 36 | 0.855 | 50.204 | | |
 | 35 | 0.848 | 37.128 | | |
(construct not labeled in source) | 39 | 0.901 | 52.242 | | |
 | 41 | 0.881 | 48.142 | | |
 | 38 | 0.846 | 49.881 | | |
 | 42 | 0.846 | 47.248 | | |
 | 37 | 0.817 | 48.111 | | |

Table IX.
T-test results for hypothesis testing

Hypothesis | Independent variable | Dependent variable | β-value | t-value | Result
1 | Desirable and joyful workplace | Knowledge socialization | 0.158 | 4.702 | Confirmed
2 | Management and leadership support in sociability process | Knowledge socialization | 0.218 | 3.202 | Confirmed
3 | Transparency in working relations | Knowledge socialization | 0.454 | 3.905 | Confirmed
4 | Training courses | Knowledge socialization | 0.311 | 5.197 | Confirmed
5 | Teamwork | Knowledge socialization | 0.349 | 3.716 | Confirmed
6 | Organizational trustful climate | Knowledge socialization | 0.193 | 4.075 | Confirmed
7 | Job description and job knowledge | Knowledge socialization | 0.178 | 3.045 | Confirmed
8 | Tangible incentives | Knowledge socialization | 0.156 | 2.721 | Confirmed
9 | Participatory system | Knowledge socialization | 0.048 | 1.996 | Confirmed
10 | Defined career path | Knowledge socialization | 0.206 | 3.125 | Confirmed
11 | Informal technique | Knowledge socialization | 0.210 | 5.011 | Confirmed
12 | Alignment of individual values with organizational values | Knowledge socialization | 0.188 | 4.463 | Confirmed

Fig. 2. PLS model with significance estimates.
Fig. 3. PLS model with standardized estimates.

Sociability is a process performed by an organization to introduce its values, culture and organizational goals to newcomers. This process is transformative because it enables the organization to provide an optimal level of learning; indeed, it is believed that the main driver of an organization's transformation into a learning organization is knowledge socialization. Many organizations therefore tend to use dialogue in the sociability process to enable learning and, consequently, a learning organization.
In this study, the first factor identified in the sociability process and ranked most important in the prioritization was a desirable and joyful workplace. Different studies have been conducted on the desirable and joyful workplace; individually, they emphasized that such a workplace is effective in increasing commitment, work ethics, performance and personal productivity. Another important factor of the sociability process which can improve socialization and explicit-implicit knowledge exchange is top management support. Conversely, managers and leaders who are indifferent to these actions in the sociability process, and who do not provide the opportunity for creating and disseminating people's explicit and implicit knowledge, will prevent a culture of organizational knowledge socialization from forming. Managers can therefore provide the opportunity for improving the socialization of knowledge management by increasing organizational incentives. Transparency in working conditions, as well as in knowledge flow, can establish trust and security in the organization. In other words, employees will trust the organization and share their implicit knowledge with other departments as trust in work and organizational relations grows, through greater transparency in relations, meritocracy in the organization, and better visibility of the required knowledge flow. Trust is a long-standing topic in the studies conducted; for human resources, the most important factors are job security and trust in work relations. Finally, informal tactics can be considered a major factor in increasing and improving the socialization of organizational knowledge: using the experience of experts, managers provide newcomers with the required organizational knowledge correctly, which creates the opportunity for improving the required organizational knowledge.
By increasing the socialization of knowledge, managers and organizations tend to provide more innovative services which fit market demands; otherwise (with knowledge socialization unrealized), organizations will not be able to use their intangible capital to gain competitive advantage.

X. Conclusion

In this paper, the fuzzy AHP method was used for prioritization. Ten experts were asked to rank the criteria and perform pairwise comparisons. The results showed that desirable workplace (0.163), participatory systems and brainstorming (0.149), transparency in working relations (0.114), and informal techniques (0.111) gained the highest weights; in other words, these criteria are expected to influence the socialization of knowledge management in the sociability process. Experts believed that a desirable and joyful environment, participatory systems and brainstorming promote knowledge socialization in the organization, and that transparency of working relations and knowledge flow, as well as the informal techniques used for knowledge socialization, increase when experienced elites are used in the sociability process. This study evaluated the effect of the identified criteria on the socialization of knowledge management in Ansar Bank using a PLS model. The results indicate that all 12 criteria had a positive and significant effect on the socialization of knowledge management; in particular, the statistical analysis indicates that a desirable workplace is effective on the socialization of knowledge management. Informal techniques can be considered the main criterion for increasing and improving the socialization of organizational knowledge: using informal techniques, managers draw on the experience of experts to provide newcomers with the required organizational knowledge, creating the opportunity for improving organizational knowledge. By promoting knowledge socialization, managers tend to deliver more innovative services fitted to market demand.
Otherwise, organizations will not be able to use their intangible assets to achieve competitive advantage.

References

[1] K. Choi, "A structural relationship analysis of hotel employees turnover intention", Asia Pacific Journal of Tourism Research, Vol. 11, No. 4, pp. 321-337, 2006
[2] M. J. Donate, J. D. S. de Pablo, "The role of knowledge-oriented leadership in knowledge management practices and innovation", Journal of Business Research, Vol. 68, No. 2, pp. 360-370, 2015
[3] M. A. Jalilvand, "Role of education in improvement of human resources and development", Journal of Tehran University, Vol. 72, No. 5, pp. 68-70, 2009
[4] Z. Nazarzadeh, K. Abili, M. A. Arein, S. Mohamadi, "Feasibility of conversion of education department of health ministry to a knowledge-based organization", Vol. 60, No. 1, pp. 15-24, 2015
[5] S. M. Al-Ghamdi, "The obstacles to successful implementation of strategic decisions", Journal of ProQuest, Vol. 6, No. 98, pp. 10-1, 1998
[6] L. G. Hrebiniak, "Obstacles to effective strategy implementation", Organizational Dynamics, Vol. 35, No. 1, pp. 12-31, 2006
[7] M. Heide, K. Gronhaung, S. Johannessen, "Exploring barriers to the successful implementation of a formulated strategy", Scandinavian Journal of Management, Vol. 40, No. 5045, pp. 217-231, 2000
[8] J. Van Maanen, E. Schein, "Toward a theory of organizational socialization", Research in Organizational Behavior, Vol. 1, No. 8, pp. 209-264, 1979
[9] E. Saadat, Human Resource Management, SAMT Publication, Tehran, Iran, 1996
[10] M. R. Hamidizadeh, S. Azizi, "Factors affecting marketing knowledge sharing (MKS): the case of Iranian food and auto industries", Indian Journal of Marketing, Vol. 39, No. 12, pp. 40-48, 2009
[11] H. Rafiee, "The link between knowledge management system and performance evaluation, effective HR system, case study: Iran Technology Analysts Network", Especial Journal of Growth Parks and Centers (Technology Growth), Vol. 22, No. 1, pp. 11-18, 2015
[12] R. L.
Daft, Organization Theory and Design, South-Western Cengage Learning, USA, 2010
[13] T. Bush, D. Middlewood, Leading and Managing People in Education, Sage Publications, London, 2005
[14] Z. Song, K. Chon, G. Ding, C. Gu, "Impact of organizational socialization tactics on newcomer job satisfaction and engagement: core self-evaluations as moderators", International Journal of Hospitality Management, Vol. 46, No. 1, pp. 180-189, 2015
[15] L. Shahmiri, M. Khorakian, A. Maharati, "The relationship between interactive leadership and innovation considering the mediating role of knowledge absorption capacity", 2nd National Conference on Sustainable Development Imperatives, Iran, Vol. 4, No. 1, pp. 20-25, 2014
[16] S. Tarafi, N. Amiri, E. Fazeli, M. H. Falah, "Relationship between knowledge management and leadership styles on organizational commitment", 1st National Congress of Industrial Cluster of Auto Parts, Tehran, Iran, Vol. 5, No. 1, pp. 15-16, 2012
[17] D. C. Feldman, "A contingency theory of socialization", Administrative Science Quarterly, Vol. 21, No. 3, pp. 433-452, 1976

Engineering, Technology & Applied Science Research Vol. 9, No. 4, 2019, 4574-4580 www.etasr.com

Acceptable Wait Time Models at Transit Bus Stops

Stephen A.
arhin, adam gatiba, melissa anderson, babin manandhar, and melkamsew ribbisso
howard university transportation research center, washington dc, usa

abstract—this study aimed to determine patrons' acceptable wait times beyond the scheduled bus arrival time at bus stops in washington, dc, and to develop accompanying prediction models that give decision-makers additional tools to improve patronage. the research relied primarily on a combination of manual and video-based data collection. manual field data collection was used to survey patrons for their suggested acceptable wait times at bus stops, while video-based data collection was used to obtain bus stop characteristics and operations. in all, 3,388 bus patrons at 71 selected bus stops were surveyed, and operational data for 2,070 bus arrival events on 226 routes were extracted via video playback. data were collected for the am peak, pm peak and mid-day periods over a nine-month period from may 2018 through january 2019. the survey showed that the minimum acceptable wait time beyond the scheduled arrival time was 1 minute, while the maximum acceptable wait time was 20 minutes. regression analyses were conducted to develop models that predict the maximum acceptable wait time from factors including temperature, presence of shelter at the bus stop, average headway of buses, and patrons' knowledge of bus arrival times. models were developed for the a.m., p.m. and mid-day periods. the f-statistics for the models were statistically significant (p<0.001) at the 5% level of significance, and the variance explained by the models (r²) ranged from 64% to 82%.
further, a test of hypothesis revealed that although female patrons generally had lower maximum acceptable wait times than male patrons, the mean difference was not statistically significant. however, the mean differences in the maximum acceptable wait time of patrons based on ethnicity were statistically significant at the 5% level of significance. the study revealed that caucasian patrons have significantly lower maximum acceptable wait times compared to patrons of other ethnic groups.

keywords-wait time; bus stops; transit reliability; headway; regression models

i. introduction
the wait time at bus stops is one of the primary measures for assessing the reliability of transit services, especially in urban areas. the uncertainty associated with waiting affects bus patrons' perception of the quality of the service provided. if transit buses arrive at scheduled times, passengers are less likely to need to find alternative modes of transportation. however, if buses are chronically late at bus stops, patrons may feel that the bus system is unreliable and will most likely seek alternative modes of transportation. studies in this subject area have therefore been of interest to transit service agencies and officials in a bid to gain more insight into improving quality of service.

ii. literature review
a. wait time as a measure of transit service reliability
in assessing the reliability of transit services, transit agencies and officials have, among other indicators, used passenger wait times as a performance measure. passengers' perception of transit service quality is affected by wait times. wait time is considered an appropriate measure of service reliability for high frequency routes, where the arrival of passengers is random and the average wait time approximates half the headway [1].
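the half-headway approximation cited above can be illustrated with a small simulation; the sketch below is illustrative only (the function name and the 10-minute headway are hypothetical, not from the study):

```python
import random

def mean_wait_regular_service(headway_s, n_passengers=100_000, seed=42):
    """Simulate passengers arriving uniformly at random on a perfectly
    regular service; each wait is the time until the next bus."""
    rng = random.Random(seed)
    # with buses every headway_s seconds, a random arrival waits
    # uniformly between 0 and headway_s seconds
    waits = [rng.uniform(0.0, headway_s) for _ in range(n_passengers)]
    return sum(waits) / len(waits)

print(mean_wait_regular_service(600.0))  # close to 600/2 = 300 s
```

with a regular 10-minute headway, the simulated mean wait converges to about 300 seconds, i.e. half the headway.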
for low frequency services, passengers usually synchronize their arrival time at bus stops with the arrival of buses, thus minimizing wait times [2]. authors in [3] considered waiting cost functions to account for headway and service reliability. the study contends that, by analyzing the behavior of passengers, the cost of waiting can be broken down into two components: the actual mean time spent waiting and the potential waiting time. the potential waiting time is the additional time passengers have to budget for waiting and is determined as the 95th percentile of the waiting time. it has been found to be very sensitive to service reliability; hence, by minimizing the waiting cost function, service reliability can be improved. a similar conclusion was reached in [4], which analyzed the service reliability of a high frequency bus line in helsinki using avl and apc data. the study found that passengers assessed the reliability of bus services mainly in terms of additional waiting and travel time. it recommended reducing wait and travel times, since this increases passenger satisfaction, which in turn increases patronage.

b. relationship between waiting time and headway
headway is the time between two vehicles passing the same point traveling in the same direction on a given route. several studies have sought to establish the relationship between headway and the waiting times of passengers.
corresponding author: stephen a. arhin (saarhin@howard.edu)
one of the earliest studies that focused (among other issues) on passenger wait times for bus services with short headways was conducted in 1957 [2]. it concluded that the average waiting time of passengers who arrive randomly at a boarding point is minimized when the service is perfectly regular.
the following model to estimate the average wait time was suggested:

w̄ = Σhᵢ² / (2 Σhᵢ) (1)

where hᵢ is the headway (in seconds). an analysis of the behavior of passengers of a bus network in stuttgart (germany) showed that passengers' arrival at a bus stop is schedule-dependent when headways exceed 8 minutes. thus, most passengers synchronize their arrivals with those of the buses, reducing the time spent waiting [5]. another model was developed in [6] which took into consideration the random arrival of passengers during peak periods. the random waiting time, wr, was related to the headway h by (2):

wr = (h/2)[1 + (σ/h)²] (2)

where σ is the standard deviation of the bus headway h. an analysis of passenger wait time and bus headway data in manchester (england) showed a linear relationship between wait time and headway [7]. the findings corroborated the earlier study and concluded that the arrival behavior of passengers is schedule-dependent when headways exceed 8 minutes. a higher headway threshold of 12 minutes was, however, established in a study that utilized passenger arrival data in london [8]. further, a comprehensive review of key elements of service reliability in boston, massachusetts revealed that irregular headways lead to variability in expected waiting times [9]. the average wait time of passengers has commonly been estimated as one-half of the headway. this simple model is valid when the arrival of passengers at the bus stop is random and the headways are regular; realistically, however, these conditions are never satisfied, leading to model inadequacy.

c. passenger wait time distribution and modeling
a number of studies have examined the distribution of passenger wait times and developed models to estimate wait time. authors in [10] developed arrival distribution curves based on data collected at 28 bus, tram and commuter rail stations in zurich (switzerland).
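formulas (1) and (2) above are algebraically equivalent when σ is taken as the population standard deviation of the observed headways, since Σh²/(2Σh) = (h̄² + σ²)/(2h̄). a minimal python sketch with hypothetical headway values:

```python
import statistics

def expected_wait(headways):
    """Equation (1): mean wait for randomly arriving passengers,
    sum(h^2) / (2 * sum(h))."""
    return sum(h * h for h in headways) / (2.0 * sum(headways))

def random_wait(headways):
    """Equation (2): w_r = (h/2) * (1 + (sigma/h)^2), with h the mean
    headway and sigma the population standard deviation."""
    h = statistics.fmean(headways)
    sigma = statistics.pstdev(headways)
    return (h / 2.0) * (1.0 + (sigma / h) ** 2)

hw = [540, 600, 660, 900, 480]  # hypothetical headways in seconds
print(expected_wait(hw), random_wait(hw))  # the two forms agree
```

both forms give the same value (about 334.5 s for the sample above), which exceeds half the mean headway (318 s) because the headways are irregular.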
the stations were served by scheduled public transit with headways ranging from 2.33 to 30 minutes. the observations were made on weekdays during the morning, evening and mid-day periods. the analysis showed that both passenger arrivals and wait times have a logarithmic relationship with headway. it further concluded that passengers begin to arrive at stations near the scheduled departure times, even for very short headways. the arrival rate of passengers transferring from rail to buses was fitted to normal, exponential, lognormal and gamma distributions [11]. it was concluded that the lognormal and gamma distributions had the most appropriate fit for passengers transferring directly and non-directly, respectively. similar conclusions were reached in a study conducted in beijing (china) [12]. in that study, passenger arrival times were fitted to extreme value, exponential, lognormal, gamma and normal distributions. the results showed that the arrival times of passengers at bus stops connected to rail stations were best fitted by the lognormal distribution, while the arrival times of passengers at bus stops not connected to rail stations were best fitted by the gamma distribution. the distribution of actual passenger wait times and perceived wait times, based on data collected from bus stops in london (uk), was investigated in [13]. the results showed that the actual wait time of passengers followed the gamma distribution, while the perceived wait time followed the lognormal distribution. also, a study was conducted to develop a multiple linear regression model to predict the perceived wait time of passengers based on data collected at three bus stops in harbin (china); in all, 234 passengers were surveyed.
factors considered in the development of the model included gender, level of education, having a time device, presence of a companion, travel purpose, riding frequency, walking time, reserved waiting, waiting mood, waiting behavior, and waiting time interval (morning or evening peak). the significance of the factors in the model was tested at the 5% significance level. anova results showed that gender, level of education, and walking time were not statistically significant predictors of perceived waiting time. beyond generalized linear models, other studies have used machine learning techniques to develop passenger wait time models. artificial neural networks (anns) were used in [14] to develop passenger wait time models based on data collected from passengers using a high-speed train service in beijing. the predictors used in the model were trip distance, transport mode, travel time, familiarity with the service facility, and level of education. the architecture of the developed ann model consisted of one input layer with 5 neurons, two hidden layers with 8 and 3 neurons respectively, and an output layer with a single neuron. sigmoid and purelin transfer functions were used as activation functions in the hidden and output layers, respectively, and the conjugate gradient method was used as the learning algorithm. the model was trained with a data set of 720 samples and validated with a data set of 336 samples. the developed model predicted passenger wait time with an average error of 9.2%.

iii. methodology
a. study area description
this research is based on data obtained in the district of columbia (dc). dc is divided into four (unequal) quadrants: northwest (nw), northeast (ne), southeast (se), and southwest (sw), which are further divided into eight (8) wards. as of 2017, the population of dc was approximately 694,000, with an annual growth rate of approximately 1.41%.
the city is highly urbanized and is ranked as the sixth most congested city in the united states, with each driver spending an average of 63 hours per year in traffic. the washington metropolitan area transit authority (wmata) is the agency that oversees the operations of the metrobus service in the area. wmata has a fleet of 1,595 buses that make more than 400,000 trips each day. these buses serve about 11,500 bus stops and operate on 325 routes in dc and portions of maryland and virginia, covering a total land area of about 1,500 square miles. of the total number of bus stops, 2,556 (22.2%) have shelters, while the remaining do not.

b. data collection
1) selection of bus stops
the study considered 71 bus stops in dc at which bus operational and survey data were collected. two main types of bus stops were considered: bus stops with and without shelter. the bus stops were selected based on the criteria of being on routes with longer headways, having high patronage, and proximity to metro rail stations. data collection at the selected bus stops was conducted over a nine-month period from may 2018 through january 2019. data were collected during the am peak (7:00 am - 9:30 am), pm peak (4:00 pm - 6:30 pm) and mid-day (10:00 am - 2:30 pm) periods. two forms of data collection were performed: a bus passenger survey and bus operational data collection. the data collection schedule was organized to achieve a robust sample size.
2) survey data collection
passengers waiting for the arrival of the next bus at the selected bus stops were randomly selected and interviewed during the morning, evening and mid-day periods from monday to friday. the field researchers conducted the survey using electronic forms on computer tablets and paper questionnaires.
the following information was obtained during the survey: the temperature at the bus stop, the presence of shelter at the bus stop, the arrival time and gender of passengers, knowledge of the bus arrival time, and the maximum and minimum acceptable wait times beyond the scheduled bus arrival time for which the passenger is willing to wait. a total of 3,388 passengers were surveyed over the period of the study. when the minimum number of responses was not obtained during a particular peak period due to weather or low passenger turnout, additional passengers were surveyed during the same day and peak period the following week.
3) bus operational data
bus operational data were collected at each of the 71 selected bus stops. the data were collected by installing video recording cameras at the bus stops. the video recordings took place on weekdays (monday to friday) over a 12-hour duration (6:30 am to 6:30 pm). the following data were obtained for each bus arrival event during the morning, evening and mid-day periods via video playback:
• bus arrival time: a bus was determined to have arrived at a bus stop when it came to a complete stop allowing boarding and alighting.
• bus departure time: a bus had departed the bus stop when the last passenger had either boarded or alighted and the doors were shut.
from the collected data, bus arrival and departure times were used to compute the headway as the difference between the arrival time of a bus and that of the preceding bus on the same route:

ha = ata − atb (3)

where ha is the actual bus headway, ata the arrival time of bus a, and atb the arrival time of the preceding bus b. in all, a total of 2,070 bus arrival events on 226 routes were extracted, computed and compiled in an excel spreadsheet for further analysis.
c.
data analysis
1) descriptive statistics
descriptive statistics such as frequencies, mean, median, and standard deviation were computed for the bus stop, passenger and bus operational characteristics data.
2) model development
to investigate the relationship between the maximum acceptable wait time and variables such as average headway, knowledge of bus arrival time, presence of shelter, and temperature at the bus stops, linear regression analyses were conducted. regression models were developed for the a.m., p.m. and mid-day periods. the general regression model for the maximum acceptable wait time took the following form:

mawt_i = β0i + β1i(ah) + β2i(t) + β3i(kbat) + β4i(ps) + ε (4)

where mawt is the maximum acceptable wait time, ah the average headway, t the temperature, kbat the knowledge of bus arrival time, and ps the presence of shelter. mawt is the dependent variable, while t, ah, kbat, and ps are independent variables. the constants βki are the regression coefficients, with an associated error ε ~ n(0, σ²), where k = 0, 1, …, 4 indexes the first through fifth regression coefficients and i = 1, 2, 3 denotes the a.m., mid-day, and p.m. peak periods, respectively. in order to develop a robust model, the variables were tested to ensure they satisfied the assumptions of normality of errors, multicollinearity, and homoscedasticity.
3) hypothesis testing
the test statistic primarily used in this study for the comparisons is that of the mean. the hypothesis that there is a significant difference in the average mawt of passengers based on their gender and ethnicity was tested at the 5% level of significance.
4) difference in mawt based on gender
it is hypothesized that there is a statistically significant difference in the average mawt based on the passenger's gender. this is mathematically expressed as:

h0: x̄1 = x̄2 (5)
h1: x̄1 ≠ x̄2 (6)

where x̄1 is the mean mawt of female passengers and x̄2 the mean mawt of male passengers.
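returning to the regression model in (4), a minimal sketch of fitting it by ordinary least squares; all data, variable ranges and coefficient values below are synthetic placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# synthetic predictors: temperature t, average headway ah (min),
# knowledge of bus arrival time kbat (0/1), presence of shelter ps (0/1)
t = rng.uniform(40, 95, n)
ah = rng.uniform(5, 30, n)
kbat = rng.integers(0, 2, n).astype(float)
ps = rng.integers(0, 2, n).astype(float)
# assumed "true" coefficients, for illustration only
mawt = 2.0 + 0.20 * ah - 0.01 * t - 1.5 * kbat + 1.0 * ps + rng.normal(0, 0.5, n)

# fit mawt = b0 + b1*ah + b2*t + b3*kbat + b4*ps by ordinary least squares
x = np.column_stack([np.ones(n), ah, t, kbat, ps])
beta, *_ = np.linalg.lstsq(x, mawt, rcond=None)
pred = x @ beta
r2 = 1 - ((mawt - pred) ** 2).sum() / ((mawt - mawt.mean()) ** 2).sum()
print(beta.round(2), round(r2, 3))
```

with enough samples the fitted coefficients recover the assumed ones, and r² reflects the signal-to-noise ratio of the synthetic data.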
5) difference in mawt based on ethnicity
it is hypothesized that there is a significant difference in the average mawt based on ethnicity. this is mathematically expressed as:

h0: ȳ1 = ȳ2 = ȳ3 = ȳ4 = ȳ5 (7)
h1: not all of ȳ1, …, ȳ5 are equal (8)

where ȳ1 to ȳ5 are the mean mawts of african american, caucasian, hispanic, asian and other passengers, respectively. a preliminary analysis of the data to test for the parametric assumptions of normality and equality of variance indicated a statistically significant violation of these assumptions. the preliminary analysis showed a log-normal distribution of mawt across gender and ethnicity, confirming the findings of previous studies. in order to test for statistically significant differences in the mawt of passengers based on gender and ethnicity, the non-parametric wilcoxon rank-sum test and kruskal-wallis test were used, respectively. the wilcoxon rank-sum test is a statistical analysis used to determine whether there is any significant difference between the means of two groups of independent variables. this method tests the null hypothesis by comparing the ranks of the observations of the two groups to decide whether or not the mean ranks differ significantly. the statistical significance of the wilcoxon rank-sum test statistic ws is determined as follows:

w̄s = n1(n1 + n2 + 1)/2 (9)
se_ws = sqrt[n1 n2 (n1 + n2 + 1)/12] (10)
z = (ws − w̄s)/se_ws (11)

where ws is the wilcoxon rank-sum test statistic, w̄s the mean of the test statistic, se_ws the standard error of the test statistic, n1 the sample size of the male passengers, n2 the sample size of the female passengers, and z the z-score of the test statistic.
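both non-parametric tests named above are available in scipy; a minimal sketch with hypothetical mawt samples (minutes), not the study's data:

```python
from scipy import stats

# hypothetical mawt samples (minutes), for illustration only
female = [6, 7, 8, 5, 9, 7, 6, 8, 7, 6]
male = [8, 9, 7, 10, 8, 9, 11, 8, 9, 10]

# wilcoxon rank-sum test of (5)-(6): z statistic and two-sided p-value
z, p = stats.ranksums(female, male)
print(z, p)

# kruskal-wallis h test of (7)-(8) across three hypothetical groups
g1, g2, g3 = [7, 8, 9, 8], [5, 6, 7, 6], [9, 10, 8, 11]
h_stat, p_kw = stats.kruskal(g1, g2, g3)
print(h_stat, p_kw)
```

`stats.ranksums` returns the z-score of (11) directly, so the 1.96 threshold discussed in the text can be applied to its statistic, or the p-value compared to 0.05.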
for a significance level set at 5%, z-score values greater than 1.96 in absolute value are deemed statistically significant. the kruskal-wallis test is used to determine whether there is any significant difference between the means of several groups of independent variables. this method tests the null hypothesis by comparing the ranks of the observations of three or more groups of a variable to decide whether or not the mean ranks differ significantly. the statistical significance of the kruskal-wallis test statistic h is determined as:

h = [12/(n(n + 1))] Σj (rj²/nj) − 3(n + 1) (12)

where n is the total sample size, rj the sum of ranks for group j, and nj the sample size of group j. h is then compared to a critical value hc, which follows approximately the chi-square distribution. if h is higher than hc, the null hypothesis is rejected.

iv. results
a. descriptive statistics
the mean acceptable passenger wait times are presented in table i, and the descriptive statistics of the bus headways in table ii. the mean headway was measured to be 1,195.2s (19.92 minutes). the minimum headway was measured to be 290.75s (4.83 minutes), while the maximum headway was measured to be 3,500s (58.33 minutes).

table i. mean acceptable wait times (minutes: avg. max. / avg. min.)
time of day: am 7.0/2.5, mid 10.5/4.0, pm 7.5/3.0
shelter: without shelter 7.5/3.0, with shelter 9.0/3.0
gender: male 8.5/3.0, female 8.0/3.0
ethnicity: white 7.0/2.0, black 8.5/3.0, hispanic 8.3/3.0, asian 8.4/3.5, other 8.5/3.0
kbat: no 10.5/4.0, yes 7.0/2.5
quadrant: ne 8.0/3.0, nw 7.0/3.0, se 10.0/3.0, sw 9.0/3.5

table ii. descriptive statistics for bus headway (s)
mean: 1,195.23; median: 1,097.28; minimum: 290.75; maximum: 3,500.00

b. regression analysis
this section presents the results of the regression analyses conducted to develop models that predict the mawts of bus passengers. models were developed for the a.m., p.m., and mid-day periods.
thus, three models were developed. the adequacy and significance of the regression models were tested at the 5% level of significance. the overall performance of the models was evaluated using the p-values of the models' f-statistics and the r² and adjusted r² values. the statistical significance of the models' predictors was also evaluated using the p-values of the predictors' f-statistics. in order to achieve the optimal relationship between the dependent variable, mawt, and the independent variables temperature (t), average headway (ah), presence of shelter (ps) and knowledge of bus arrival time (kbat), several curve estimations between the dependent variable and each independent variable were performed. the transformations were necessary to obtain the best relationship between the dependent and independent variables. the expressions used to transform each independent variable are shown in table iii. logistic and cubic transformations of ah and t, respectively, resulted in the most favorable relationships with the mawt, while ps and kbat remained untransformed. the summaries of the results of the regression analyses are presented in table iv.

c. model testing
1) kolmogorov-smirnov (k-s) test
the results of the k-s tests for mawt show that the maximum difference d between the cumulative distributions of the predicted and observed mawts for all the models was less than the critical value (1.36/√n at the 5% level of significance). this implies that the models sufficiently predict the observed values.

table iii.
data transformation
variable | transformed variable | selected relationship with dependent variable | transformation formula
ah | ahtr | logistic | f1(x) = ln(1/x)
t | ttr | cubic | f2(x) = x³
kbat | kbat | linear | f3(x) = x
ps | ps | linear | f4(x) = x

2) normality of errors
the normality of errors assumption was tested using the normal probability plot, in which the observed cumulative probabilities of the standardized residuals are plotted against the expected cumulative probabilities. the plots showed that the data follow the diagonal lines for all the models, indicating that the errors are normally distributed.
3) multicollinearity
the test for multicollinearity showed that the vif values of all the variables in the models were less than the maximum value of 10. thus, multicollinearity between the independent variables is absent.

table iv. summary of results
# | peak period | model | r² | adj. r² | f-statistic | sig.
1 | am | mawt_am = −0.40 + 1.07(…)

high elevation was assigned to mean ndsm values greater than 2.0. medium elevation was assigned to mean ndsm values such that 0.25 ≤ ndsm ≤ 2.0. the objects identified as medium elevation were sugarcane and shrub. the "shrub" class consisted of vegetation or crops within the height range of 0.25 to 2 meters; a method still needs to be developed for this kind of vegetation. the remaining objects, with heights less than or equal to 0.25m, were considered low elevation objects. after assigning objects, another segmentation process follows: multi-resolution segmentation. figure 7 shows examples of quadtree, spectral difference and multi-resolution segmentation.
fig. 6. segmentation process of high and medium elevation objects.
fig. 7. examples of (a) quadtree, (b) spectral difference, and (c) multi-resolution segmentations.
segmentation algorithms were used to subdivide entire images at the pixel level, or specific image objects from other domains, into smaller image objects. to discriminate objects, ndsm was used as a discriminating factor to group the objects according to their heights.
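the height-based grouping of ndsm values described above can be sketched in a few lines of numpy (the function name is hypothetical; the thresholds are those given in the text):

```python
import numpy as np

def classify_elevation(ndsm):
    """Group pixels by the height rules in the text: high > 2.0 m,
    medium 0.25 to 2.0 m (sugarcane and shrub), low otherwise."""
    labels = np.full(np.shape(ndsm), "low", dtype=object)
    labels[(ndsm >= 0.25) & (ndsm <= 2.0)] = "medium"
    labels[ndsm > 2.0] = "high"
    return labels

ndsm = np.array([[0.10, 0.50], [1.80, 6.30]])  # hypothetical heights (m)
print(classify_elevation(ndsm))
```

in practice this thresholding would be applied to the ndsm raster before segmentation, so that each segment inherits a height class.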
the texture information of the image was obtained from the (rgb) orthoimages by visual inspection.

engineering, technology & applied science research vol. 9, no. 3, 2019, 4085-4091 www.etasr.com villareal & tongco: multi-sensor fusion workflow for accurate classification and mapping of …

f. accuracy validation using svm
svm is a supervised learning algorithm used to classify entities in an image; here, it was used to classify land features in the images. the classified objects were obtained from the segmentation procedure, and ndsm was used as a feature to develop the svm model. the process was done by collecting samples through manual classification. sugarcane was extracted from the segmented images, and 5,846 samples were collected for sugarcane. only a quarter of the original training samples in an image were required to produce equally high classification accuracy; the ability of svm to generalize well from a limited amount and/or quality of training data is its most important characteristic [39, 40]. svm was applied to the validation sites to test the accuracy of the samples.

g. classification map
the final classification map was generated using the qgis software (qgis ver. 2.18 las palmas de g.c.). segmented images from ecognition were exported to qgis as raster files and converted to polygons to generate the classified map for barangay poblacion. sugarcane, shrub, high, and low elevation objects were identified.

iii. results and discussion
a. classified images
sugarcane crops were extracted and classified using ndsm to discriminate and group objects according to their height. multi-resolution segmentation in ecognition was used to delineate medium elevation objects with the settings: scale parameter = 40, shape = 0.2, compactness = 0.5. sugarcane was identified as a medium elevation object based on ndsm.
fig. 8.
classification results on the training and validation sites.
then, samples were collected for sugarcane. orthoimages were used for texture and visual inspection to verify sugarcane crops in the study area. as a result, in the classification process using the svms applied to the training and validation sites, sugarcane was identified as shown in figure 8. a final classified map of barangay poblacion, with sugarcane crops, is shown in figure 9.
fig. 9. land cover classification map of barangay poblacion.

b. accuracy assessment
the overall accuracy and kia for sugarcane are 98.74% and 97.47%, respectively. accuracy results were obtained after svm using ecognition. table i is an example of the confusion matrix; the accuracy assessment and kia for each validation site are shown in table ii.

table i. accuracy assessment after svm (confusion matrix)
user class / sample: sugarcane | shrub | sum
sugarcane: 5846 | 109 | 5955
shrub: 30 | 5043 | 5073
sum: 5876 | 5152
producer accuracy: 0.9948945 | 0.978843
user accuracy: 0.9816961 | 0.994
hellden: 0.9882512 | 0.986406
short: 0.9767753 | 0.973176
kia per class: 0.989 | 0.96082
overall accuracy: 0.9873957; kia: 0.9746584

the accuracy and kia results in validation site 2 were low compared to the other sites due to the area coverage and misclassification of the sugarcane class as the shrub class. the similarities in the spectral characteristics of sugarcane crops and other vegetation such as shrub result in low accuracy [29]. experimenting with other lidar derivatives is needed to improve classification and segmentation [22]. the overall accuracy assessment for each validation site shows that sugarcane is correctly classified. according to [51], the kappa statistic or kia indicates the extent to which the classification result is better than pure chance: the higher the kia value, the greater the classification accuracy. with these overall accuracy results, this study will be able to address the existing problem of sra and provide accurate information for policy makers in the sugar industry.

table ii. accuracy assessment after svm for each validation site
validation site: accuracy assessment (%) | kia (%)
1: 94.23 | 78.91
2: 82.76 | 58.63
3: 94.50 | 71.90
4: 93.59 | 85.46
5: 93.22 | 67.67

c. assessment of the system workflow
the graph of the overall accuracy assessment, kia and accuracy for sugarcane in the five validation sites is shown in figure 10.
fig. 10. graph of sugarcane accuracy assessment and kia.
the high accuracy of the results indicates that the process workflow developed in this study is applicable and useful in the classification of sugarcane crops in other areas. the overall high accuracy is comparable with that obtained using traditional data and techniques [22]. this study extracted sugarcane crops in barangay poblacion, medellin, cebu. the high accuracy results of the developed process can be used for the mapping and monitoring of sugarcane crops in the philippines and in other sugar producing countries.

iv. conclusion
this study developed a mapping workflow to assess classification accuracy for sugarcane crop identification using obia, lidar data and orthoimages. the accuracy results in the validation sites were 94.23%, 82.76%, 94.50%, 93.59% and 93.22%, and the overall accuracy result was 98.74% (kia was 97.47%). therefore, using this workflow to identify sugarcane in other areas with acceptable accuracy is possible. this can contribute, at regional and global scales, to providing information on sugarcane growing areas and growth conditions. further research with additional samples is necessary in order to improve the workflow. the data used in the process are essential because the workflow is dependent on the images provided by lidar or any other remote sensing technology.
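as a check, the overall accuracy and kia reported in table i follow directly from the confusion-matrix counts; a minimal numpy sketch:

```python
import numpy as np

# confusion matrix from table i (rows: classified class, columns: reference)
cm = np.array([[5846, 109],   # classified as sugarcane
               [30, 5043]])   # classified as shrub

total = cm.sum()
overall_accuracy = np.trace(cm) / total  # observed agreement p_o

# kappa index of agreement: (p_o - p_e) / (1 - p_e), where p_e is the
# agreement expected by chance from the row and column totals
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kia = (overall_accuracy - p_e) / (1 - p_e)
print(round(overall_accuracy, 4), round(kia, 4))  # 0.9874 0.9747
```

the computed values match the overall accuracy (0.9873957) and kia (0.9746584) reported in table i.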
acknowledgement authors wish to thank phil-lidar research center at the university of san carlos, talamban campus for the provided data, and silliman university for the financial support. references [1] canadian sugar institute, global sugar trade (wto), available at: https://sugar.ca/international-trade/global-sugar-trade-(wto).aspx [2] l. hui, j. chen, z. pei, s. zhang, x. hu, “monitoring sugarcane growth using envisat asar data”, ieee transactions on geoscience and remote sensing, vol. 47, no. 8, pp. 2572–2580, 2009 [3] bureau of agricultural statistics, 23rd annual publication, 2012 [4] e. m. abdel-rahman, f. b. ahmed, “the application of remote sensing techniques to sugarcan (saccharum spp. hybrid) production: a review of the literature”, international journal of remote sensing, vol. 29, no. 13, pp. 3753-3767, 2008 [5] g. j. hay, g. castilla, “object-based image analysis: strengths, weaknesses, opportunities and threats (swot)”, in: isprs archives, vol. 36, 2006 [6] t. blaschke, k. johansen, d. tiede, “object-based image analysis for vegetation mapping and monitoring”, advances in environmental remote sensing: sensors algorithms and applications, pp. 241-271, crc press taylor & francis group, 2011 [7] m. v. japitana, j. e. d. cubillas, a. g. apdohan, “coupling lidar data and landsat 8 oli in delineating corn plantations in butuan city, philippines”, 36th asian conference on remote sensing, quezon city, metro manila, philippines, october 19-23, 2015 [8] m. baatz, m. schape, “multiresolution segmentation — an optimization approach for high quality multi-scale image”, angewandte geographische informations, vol. 12, pp. 12-23, 2000 [9] d. flanders, m. hall-beyer, j. pereverzoff, “preliminary evaluation of ecognition object-based software for cut block delineation and feature extraction”, canadian journal of remote sensing, vol. 29, no. 4, pp. 441-452, 2003 [10] u. c. benz, g. hofmann, i. willhauck, m. lingenfelder, m. 
Heynen, "Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 58, No. 3-4, pp. 239-258, 2004
[11] Z. Zhou, J. Huang, J. Wang, K. Zhang, Z. Kuang, S. Zhong, X. Song, "Object-oriented classification of sugarcane using time-series middle-resolution remote sensing data based on AdaBoost", PLOS ONE, Vol. 10, No. 11, p. e0142069, 2015
[12] A. R. Formaggio, M. A. Vieira, C. D. Renno, D. A. Aguiar, M. P. Mello, "Object-based image analysis and data mining for mapping sugarcane with Landsat imagery in Brazil", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 38, pp. 553-562, 2012
[13] M. A. Vieira, A. Formaggio, C. Renno, C. Atzberger, D. Aguiar, M. Mello, "Object based image analysis and data mining applied to a remotely sensed Landsat time-series to map sugarcane over large areas", Remote Sensing of Environment, Vol. 123, pp. 553-562, 2012
[14] C. J. Cechim, J. A. Johann, J. F. G. Antunes, "Mapping of sugarcane crop area in the Parana state using Landsat/TM/OLI and IRS/LISS-3 images", Revista Brasileira de Engenharia Agricola e Ambiental, Vol. 21, No. 6, pp. 427-432, 2017
[15] D. Fonseca-Luengo, A. Garcia-Pedrero, M. Lillo-Saavedra, R. Costumero, E. Menasalvas, C. Gonzalo-Martin, "Optimal scale in a hierarchical segmentation method for satellite images", in: Lecture Notes in Computer Science, Vol. 8537, Springer, 2014
[16] C. D. Alves, T. G. Florenzano, D. S. Alves, M. N. Pereira, "Mapping land use and land cover changes in a region of sugarcane expansion using TM and MODIS data", Revista Brasileira de Cartografia, Vol. 66, No. 2, pp. 337-347, 2014
[17] C. H. W. de Souza, R. A. C. Lamparelli, J. V. Rocha, P. S. G. Magalhaes, "Mapping skips in sugarcane fields using object-based analysis of unmanned aerial vehicle (UAV) images", Computers and Electronics in Agriculture, Vol. 143, pp. 49-56, 2017
[18] W. S. Lee, V.
Alchanatis, C. Yang, M. Hirafuji, D. Moshou, C. Li, "Sensing technologies for precision specialty crop production", Computers and Electronics in Agriculture, Vol. 74, No. 1, pp. 2-33, 2010
[19] J. R. Rosell, J. Llorens, R. Sanz, J. Arno, M. Ribes-Dasi, J. Masip, "Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LiDAR scanning", Agricultural and Forest Meteorology, Vol. 149, No. 9, pp. 1505-1515, 2009
[20] T. F. Canata, J. P. Molin, A. F. Colaco, R. G. Trevisan, M. Martello, P. R. Fiorio, "Measuring height of sugarcane plants through LiDAR technology", 13th International Conference on Precision Agriculture, Missouri, USA, July 31-August 3, 2016
[21] A. L. Hiscox, "LiDAR measurement techniques for understanding smoke plume dynamics in sugarcane production", in: Optical Instrumentation for Energy and Environmental Applications, Optical Society of America, 2013
[22] A. V. Pada, M. A. Silapan, F. Cabanlit, J. Campomanes, J. Garcia, "Mangrove forest cover extraction of the coastal areas of Negros Occidental, Western Visayas, Philippines using LiDAR data", 23rd ISPRS Congress, Prague, Czech Republic, July 12-19, 2016
[23] R. V. Peralta, R. L. Jalbuena, C. A. Cruz, A. M. Tamondong, "Development of an object-based classification technique for extraction of aquaculture features using LiDAR and WorldView-2 satellite image data", 13th South East Asian Survey Congress: Expanding the Geospatial Future, Singapore, July 28-31, 2015
[24] A. Charaniya, R. Manduchi, S. Lodha, "Supervised parametric classification of aerial LiDAR data", 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, USA, June 27-July 2, 2004
[25] Y. Huang, B. Yu, J. Zhou, C. Hu, W. Tan, Z.
Hu, "Toward automatic estimation of urban green volume using airborne LiDAR data and high-resolution remote sensing images", Frontiers of Earth Science, Vol. 7, pp. 43-54, 2013
[26] M. D. McCoy, G. P. Asner, M. W. Graves, "Airborne LiDAR survey of irrigated agricultural landscapes: an application of the slope contrast method", Journal of Archaeological Science, Vol. 38, No. 9, pp. 2141-2154, 2011
[27] Q. Man, P. Dong, H. Guo, "Pixel- and feature-level fusion of hyperspectral and LiDAR data for urban land-use classification", International Journal of Remote Sensing, Vol. 36, No. 6, pp. 1618-1644, 2015
[28] M. B. Cadalin, J. Silapan, M. Remolador, M. R. C. Ang, "Biomass resource assessment on theoretical and available potential of sugarcane using LiDAR-derived agricultural land-cover map in Victorias City, Negros Occidental, Philippines", 36th Asian Conference on Remote Sensing, Quezon City, Philippines, October 19-23, 2015
[29] M. Luck-Vogel, C. Mbolambi, K. Rautenbach, J. Adams, L. van Niekerk, "Vegetation mapping in the St Lucia estuary using very high-resolution multispectral imagery and LiDAR", South African Journal of Botany, Vol. 107, pp. 188-199, 2016
[30] L. C. G. David, A. H. Ballado, "Mapping mangrove forest from LiDAR data using object-based image analysis and support vector machine: the case of Calatagan, Batangas", 8th IEEE International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management, Cebu, Philippines, December 9-12, 2015
[31] L. C. G. David, A. H. Ballado, "Object-based land use and land cover mapping from LiDAR data and orthophoto: application of decision tree-based data selection for SVM classification", IEEE Region 10 Humanitarian Technology Conference, Agra, India, December 21-23, 2016
[32] R. J. Candare, M. V. Japitana, J. E. Cubillas, C. B.
Ramirez, "Mapping of high value crops through an object-based SVM model using LiDAR data and orthophoto in Agusan del Norte, Philippines", 23rd ISPRS Congress, Prague, Czech Republic, July 12-19, 2016
[33] R. Devadas, R. J. Denham, M. Pringle, "Support vector machine classification of object-based data for crop mapping, using multi-temporal Landsat imagery", International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 39, pp. 185-190, 2016
[34] V. Vapnik, Statistical Learning Theory, Wiley, 1998
[35] V. Vapnik, Estimation of Dependences Based on Empirical Data, Springer-Verlag, 2006
[36] B. E. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers", Fifth Annual Workshop on Computational Learning Theory, Pennsylvania, USA, July 27-29, 1992
[37] C. C. Chang, C. J. Lin, "LIBSVM: a library for support vector machines", ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 3, Article No. 27, 2011
[38] G. M. Foody, A. Mathur, "Toward intelligent training of supervised image classifications: directing training data acquisition for SVM classification", Remote Sensing of Environment, Vol. 93, No. 1-2, pp. 107-117, 2004
[39] G. M. Foody, "Supervised image classification by MLP and RBF neural networks with and without an exhaustively defined set of classes", International Journal of Remote Sensing, Vol. 25, No. 15, pp. 3091-3104, 2004
[40] G. Mountrakis, J. Im, C. Ogole, "Support vector machines in remote sensing: a review", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 66, pp. 247-259, 2011
[41] B. Mulianga, A. Begue, P. Clouvel, P. Todoroff, "Mapping cropping practices of a sugarcane-based cropping system in Kenya using remote sensing", Remote Sensing, Vol. 7, pp. 14428-14444, 2015
[42] Kenya Sugar Board, Year Book of Statistics, KSB, 2012
[43] T. Blaschke, "Object based image analysis for remote sensing", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 65, pp.
2-16, 2010
[44] D. C. Duro, S. E. Franklin, M. G. Dube, "A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery", Remote Sensing of Environment, Vol. 118, pp. 259-272, 2012
[45] J. Lowry, R. D. Ramsey, K. Thomas, D. Schrupp, T. Sajwaj, J. Kirby, E. Waller, S. Schrader, S. Falzarano, L. Langs, G. Manis, C. Wallace, K. Schulz, P. Comer, K. Pohs, W. Rieth, C. Velasquez, B. Wolk, W. Kepner, K. Boykin, L. O'Brien, D. Bradford, B. Thompson, J. Prior-Magee, "Mapping moderate-scale land-cover over very large geographic areas within a collaborative framework: a case study of the Southwest Regional Gap Analysis Project (SWReGAP)", Remote Sensing of Environment, Vol. 108, pp. 59-73, 2007
[46] O. Nevalainen, E. Honkavaara, S. Tuominen, N. Viljanen, T. Hakala, X. Yu, J. Hyyppa, H. Saari, I. Polonen, N. N. Imai, A. M. G. Tommaselli, "Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging", Remote Sensing, Vol. 9, No. 3, Article No. 185, 2017
[47] N. Ekhtari, M. J. V. Zoej, M. R. Sahebi, A. Mohammadzadeh, "Automatic building extraction from LiDAR digital elevation models and WorldView imagery", Journal of Applied Remote Sensing, Vol. 3, No. 1, p. 033571, 2009
[48] L. Zhu, H. Shimamura, K. Tachibana, Y. Li, P. Gong, "Building change detection based on object extraction in dense urban areas", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 27B, pp. 905-908, 2008
[49] M. A. Friedl, D. K. McIver, J. C. F. Hodges, X. Y. Zhang, D. Muchoney, A. H. Strahler, C. E. Woodcock, S. Gopal, A. Schneider, A. Cooper, A. Baccini, F. Gao, C. Schaaf, "Global land cover mapping from MODIS: algorithms and early results", Remote Sensing of Environment, Vol. 83, No. 1-2, pp. 287-302, 2002
[50] J. P. M. O'Neil-Dunne, S. W. MacFaden, A. R. Royar, K. C.
Pelletier, "An object-based system for LiDAR data fusion and feature extraction", Geocarto International, Vol. 28, No. 3, pp. 227-242, 2013
[51] T. M. Lillesand, R. W. Kiefer, J. Chipman, Remote Sensing and Image Interpretation, John Wiley & Sons, 2004

Engineering, Technology & Applied Science Research Vol. 8, No. 4, 2018, 3116-3120 www.etasr.com Elitas & Demir: The Effects of the Welding Parameters on Tensile Properties of RSW Junctions of …

The Effects of the Welding Parameters on Tensile Properties of RSW Junctions of DP1000 Sheet Steel

Muhammed Elitas
Faculty of Technology, Department of Manufacturing Engineering, Karabuk University, Karabuk, Turkey
melitas@karabuk.edu.tr

Bilge Demir
Faculty of Engineering, Department of Mechanical Engineering, Karabuk University, Karabuk, Turkey
bdemir@karabuk.edu.tr

Abstract—In this study, the maximum tensile shear load bearing capacity of resistance spot welded DP1000 steel was measured and the tensile shear properties of the joints were evaluated. The effects of different welding parameters on microstructure, microhardness, and tensile shear properties were investigated. Weld processes were performed using 5 kA and 7 kA weld currents and 2-6 bar electrode pressures.
The weldability of the welded materials was evaluated and the hardness profiles were determined. Experimental results showed that increasing electrode pressure and weld current increased the tensile shear load bearing capacity. It was also observed that expulsion had a negative effect on the tensile shear load bearing capacity.

Keywords—DP1000 steel; welding; microhardness; tensile properties; expulsion

I. Introduction
Resistance spot welding (RSW) is one of the oldest electrical welding processes. It is used in the automotive industry especially for sheet materials, for which it is often the preferred joining technique: the automobile body structure is typically joined with spot welds [1, 2]. The advantages of spot welding are that it is relatively fast, durable, and economical. However, to apply spot welds and to meet user requirements, the parts must have somewhat large dimensions. For this reason, spot welding is expected to be replaced by laser welding or a combination of joining techniques in the future, which will allow the industry to build structures that are lighter and have improved rigidity and strength [1-4]. RSW is a simple manufacturing process. For automotive applications, it is accepted that the available material has good spot weldability, which depends on the sheet thickness, the morphology, and the mechanical properties of the base metal [1, 2]. After spot welding, considerable changes occur in the mechanical and metallurgical properties of the RSW regions and the heat affected zone (HAZ). It is very important to investigate these changes for the safety strength of the welded joints. The primary welding parameters (weld time, weld current, electrode pressure) determine the heat input, and they also affect welding quality factors like surface appearance, weld nugget size, weld penetration, and weld internal discontinuities [5]. It is reported that the effect of reducing the thickness of the sheet metal used in automobiles on vehicle weight is about 24%.
The production of these parts with new generation steels is therefore of significant importance, and dual phase (DP) steels in particular are widely preferred. The use of these steels has increased rapidly in the automotive industry due to their higher strength values per unit weight. While DP350, DP500, and DP600 steels were used initially, today the use of DP760 and DP1000 is becoming more widespread. The number after the DP abbreviation expresses the tensile strength of the material in MPa, and materials with higher tensile strength and thinner section are increasingly preferred [6]. Dual phase steel is the first generation of advanced automotive sheet steel. It is currently heavily used in the automotive industry, and it is estimated that it will continue to be used in the future due to its unique economic and technologic properties.
In this study, DP1000 steel was used. There are very few studies in the literature on the effect of the RSW process on the tensile shear properties of DP1000 steel. Due to its importance in both academic and industrial areas, RSW of this dual phase steel was studied in detail. It is very important to determine the optimum levels of the welding parameters to obtain a weld of the desired quality [7]. In this experimental study, optimization of different welding currents and electrode pressures in RSW of DP1000 steel was pursued.

II. Experimental Procedure
The commercial DP1000 automotive sheet steel was 500x500 mm and had 1.2 mm thickness. Its microstructure mainly consists of ferrite and martensite phases. The microstructure and chemical composition of the commercial DP1000 automotive sheet steel are shown in Figure 1 and Table I, respectively.

Fig. 1. Microstructure of DP1000 steel

Table I. Chemical composition of DP1000 steel (wt.%)
C 0.136   Si 0.203   Mn 1.57    P 0.021   S 0.003    Cr 0.022   Ni 0.039
Al 0.044  Ti 0.001   V 0.009    Nb 0.021  Cu 0.01    Fe 97.897  Co 0.021

The specimens were prepared in accordance with the EN ISO 14273 standard for RSW. The technical drawing of the specimen is shown in Figure 2. Specimens were subjected to RSW using copper electrodes with a flat conical tip at different welding parameters. The applied currents were 5 kA and 7 kA and the electrode pressures were 2-6 bar. RSW was applied by taking three samples for each electrode pressure. The RSW process used in this study was carried out considering some of the ideal parameters used in the literature, and these parameters are shown in Table II [9, 10].

Fig. 2. The technical drawing of the specimen
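For context, the composition in Table I can be condensed into a single weldability indicator. The sketch below computes the IIW carbon equivalent, CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15; this metric is an assumption added for illustration and is not reported in the paper.

```python
# Illustrative IIW carbon-equivalent calculation (an assumption for context;
# the paper itself does not report CE).

def carbon_equivalent_iiw(c, mn, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon equivalent from alloy contents in wt.%."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Values taken from Table I (wt.%).
ce = carbon_equivalent_iiw(c=0.136, mn=1.57, cr=0.022, v=0.009, ni=0.039, cu=0.01)
```

The manganese term dominates here, which is consistent with the paper's later point that the alloying content of commercial dual phase steels drives their high hardenability.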
The time unit is cycle-based (1 cycle = 0.02 s). A resistance spot welded specimen is shown in Figure 3.

Fig. 3. Resistance spot welded specimen

Transverse metallographic specimens of the DP1000 steel, passing through the central part of the welded pieces, were prepared by the standard method. The polished specimens were etched with a 2% nital solution (2% nitric acid + 98% methanol). The microstructure was analyzed using a Nikon Epiphot 200 light microscope. The specimens used in the microstructure analyses were also used for hardness measurements. Microhardness testing was conducted using a Qness Vickers microhardness testing machine with an HV0.2 (1.961 N) load and 15 s holding time. Microhardness mapping with 0.2 mm grid spacing revealed the hardness distribution and the individual hardness values in selected regions of the welded joints [8, 10]. On each sample, measurements were made in one direction, along the radius of the nugget. Tensile shear tests were performed on resistance spot welded specimens with 1.2 mm thickness, 30 mm width, and 110 mm gauge length. The crosshead speed used for the tensile shear test was 2 mm/min [8].
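The hardness numbers discussed in the results follow the standard Vickers relation. As a sketch, assuming the ISO 6507 definition (which the paper does not spell out), HV = 0.1891 F/d², with the test force F in newtons and the mean indentation diagonal d in millimetres:

```python
# Sketch of the Vickers hardness number (assumption: the standard ISO 6507
# relation, not taken from the paper): HV = 0.1891 * F / d^2,
# with test force F in newtons and mean indentation diagonal d in mm.

def vickers_hardness(force_n, diagonal_mm):
    """Vickers hardness number from test force (N) and mean diagonal (mm)."""
    return 0.1891 * force_n / diagonal_mm ** 2

HV02_FORCE_N = 1.961  # the HV0.2 scale used in the study

# A hypothetical 0.030 mm diagonal corresponds to roughly 412 HV0.2,
# i.e. in the martensitic weld-metal range discussed in the results.
hv = vickers_hardness(HV02_FORCE_N, 0.030)
```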
III. Results and Discussion

A. Microstructure
In this study, ten different RSW samples were obtained. Their structures were examined visually and by stereo and optical microscopy. Visual inspection showed that the welding samples were obtained normally. Figure 4 shows an example of a weld profile, the macrostructure of the DP1000 RSW sample welded at 7 kA-2 bar. Microstructure images of the HAZ and weld metal of the RSW samples are shown in Figures 5 and 6. As seen in these figures, the microstructure of the weld metal is formed as martensite. As mentioned in the related literature, these results could be expected for general grades of dual phase steel because of the higher alloying content of commercial dual phase steels and therefore their higher hardenability.

Fig. 4. Macrostructure of DP1000 RSW sample welded with 7 kA-2 bar
Fig. 5. HAZ and weld metal microstructures at different electrode pressures and 5 kA welding current: a) 2 bar, b) 3 bar, c) 4 bar, d) 5 bar, e) 6 bar
Fig. 6. HAZ and weld metal microstructures at different electrode pressures and 7 kA welding current: a) 2 bar, b) 3 bar, c) 4 bar, d) 5 bar, e) 6 bar

In contrast to the related literature, the HAZs of the RSW samples show nearly wholly martensite, with only a slight lowering throughout the transition zone. Dual phase microstructures consisting of ferrite and martensite phases were observed for all electrode pressures [11, 12]. Indeed, the main difference between DP600 and DP1000 microstructures is the martensite volume fraction (MVF): DP1000 comprises a higher MVF than DP600 to ensure a higher strength of 1000 MPa, because the strength of dual phase steel rises essentially with the hard martensite phase. The cooling rate in the weld metal is quite high because of the thinness of the sheet metal specimens and the water-cooled electrodes. Therefore, there is not enough time for carbon diffusion. As a result, the microstructure in the weld metal and HAZ predominantly consists of martensite phase [1, 11].
Generally, the MVF increased throughout the base metal, HAZ, and weld metal for all electrode pressures. This is explained as "the rate of the austenite dissolved from the base metal towards the weld metal increases because of rapid cooling after the welding process, and therefore this may increase the MVF" [10].

Table II. Welding parameters for RSW processing
Electrode pressure (bar):    2-6
Welding current (kA):        5-7
Electrode tip diameter (mm): 8
Down time (cycles):          15
Squeeze time (cycles):       35
Welding time (cycles):       20
Hold time (cycles):          10
Separation time (cycles):    15

B. Microhardness
The microhardness results obtained after the RSW process are shown in Figures 7 and 8. It can be seen that the hardness of the weld metal in DP steels increases considerably after RSW, and can become about 2 times that of the base metal. As hardness increases, strength also increases; however, embrittlement rises and toughness may decrease. Therefore, it is important to analyze and understand the hardness and metallurgical structure of DP steels [13, 14]. Martensitic transformation is a hardening and strengthening mechanism for steels. It is well known that martensite is a strong phase due to its microstructure, which combines a high carbon content with a high density of immobile dislocations caused by the volume expansion during the austenite-to-martensite transformation. Hard and strong martensite is a good barrier against dislocation slip and movement in the dual phase structure. DP steels contain martensite and ferrite phases at the beginning of the RSW; the martensite and ferrite volume fractions are 70% and 30% respectively in DP1000 steel. During welding, the very fast heating and cooling cycle leads to the austenite-to-martensite transformation. The weld metal, and even some of the HAZ, cools down from austenite, and thus these zones have higher hardness than the base metal [15]. The hardness values of the specimens can be evaluated as being in the usable range for all welding parameters. The dependence on martensite is a good way to explain hardness [9, 10, 12, 15]. The authors in [11] reported that increasing heat input increases the amount of martensite formed, which causes higher hardness. In other words, dual phase steels essentially contain two phases. During spot welding, melting and then solidification occur at the current flowing and squeezing stages. Throughout weld solidification, austenitization and then austenite decomposition occur.
Here, the cooling rates are quite fast. Because of these fast cooling rates and the higher alloy content, the weld metal of DP1000 has higher hardenability, higher compared to DP600. There is also a very interesting result concerning the HAZs: the HAZ of the DP1000 RSW samples showed higher hardness than that of DP600. These results may also be explained by what was discussed above about the effects of hardenability and cooling rates on properties.
author in se of exceedi cted to be mo f the used stee metal specimen shown hardne 7 and 8 whic mples hardness values of the 5k values of the 7k erties y, the maxim specimens du the strength the effects of operties were to the joined s ng capacity w the joint quali en in figure d bearing capa 0 properties of r higher marten ers have giv nsite mass ca n [16] reporte ing 0.05% o ore than 350h el sheet mater ns’ hardness o ess values high h show the d s results. ka weld current ka weld current mum tensile s uring the tens properties of different weld investigated. samples to dete which is one ity. tensile she 9. as can be acity pertains t 3118 rsw junctions nsite means h ven some u arbon content ed that, marte f mass of ca hv. in this s rials confirmed f different we her than 350h differences bet samples with di samples with di shear load be sile shear test f the joints ding paramete tensile shear ermine their te of the param ear forces of a e seen, the hi to the sample o s of … higher useful t and ensite arbon study, d this elding hv as tween fferent fferent earing t was were ers on tests ensile meters all the ighest of the joi co see we fig par has the exp res q= r= ele cur the we cap ele ob inc the she det ele alw she occ we pre we liq del tou ene 7k exp she litt spo wh see engineerin www.etasr ined at 3bar onversely, the en for the sa elding current. g. 9. maximum rameters when the ef s been observ e tensile shea planation can sistance spot w =i2rt equation =total resistan ectrode pressu rrent increase e heat input of elding. this, in pacity of weld ectrode pressur served from creases heat in ey all stated th ear load bearin expulsion fo termined by m ectrical, metall ways occurs at eets or the curring at the elding time, essure. as the eld metal exce quid nugget a lineates the w ughness agains ergies of the ka-6bar) are pulsion. comp ear load beari tle smaller. 
As can be seen, the highest tensile shear load bearing capacity pertains to the sample joined at 3 bar electrode pressure and 7 kA welding current. Conversely, the lowest tensile shear load bearing capacity is seen for the sample joined at 4 bar electrode pressure and 5 kA welding current.

Fig. 9. Maximum tensile shear force values obtained at different welding parameters

When the effects of the electrode pressures were evaluated, it was observed that increasing electrode pressure increases the tensile shear load bearing capacity of the joints. An explanation is that the heat input in the welding zone in resistance spot welding is calculated by the equation Q = I²Rt, where I is the current passing through the sample, R the total resistance, and t the welding time. With increasing electrode pressure, the total resistance R decreases while the current increases; the resulting increase in weld current raises the heat input of the sample during the current passing time, which in turn increases the tensile shear load bearing capacity of the welded joints [17]. The author in [18] also indicated that electrode pressure in RSW affects the joint's strength. It is observed from Figure 9 that increasing welding current increases heat input, and the authors in [19-22], who investigated RSW, all stated that increasing welding current increased the tensile shear load bearing capacity of the samples. Expulsion formation during the welding process is determined by many complicated thermal, electrical, metallurgical, and mechanical processes [23, 24]. It always occurs at the faying surface of the upper and lower steel sheets or at the electrode/workpiece interfaces. Expulsion at the faying surfaces is often a result of excessive welding current or insufficient electrode pressure: as the total useful welding heat delivered to the weld metal exceeds a critical value, the molten metal cracks the liquid nugget and expulsion happens [25]. Failure energy delineates the weld's energy absorption capability and fracture toughness under load. As can be seen in Figure 9, the failure energies of the welds with expulsion (5 kA-4 bar, 7 kA-4 bar, 7 kA-6 bar) are smaller than those of the welds without expulsion, and their tensile shear load bearing capacity is also a little smaller. Consequently, quality control of resistance spot welded joints is usually performed by tensile shear tests. When the tensile shear curves in Figure 9 are analyzed, it is seen that increasing electrode pressure and weld current increases the tensile shear load bearing capacity, and that this in turn increases the toughness of the joints.
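The heat input argument can be made concrete with the Q = I²Rt relation quoted above. In the sketch below, the weld time comes from Table II (20 cycles at 0.02 s per cycle), while the resistance value is a hypothetical contact resistance, not a figure measured in the study:

```python
# Sketch of the heat-input relation Q = I^2 * R * t used in the discussion.
# The resistance below is an assumed contact resistance for illustration only.

def joule_heat(current_a, resistance_ohm, time_s):
    """Resistance-welding heat input Q = I^2 * R * t, in joules."""
    return current_a ** 2 * resistance_ohm * time_s

WELD_TIME_S = 20 * 0.02    # 20 cycles at 0.02 s per cycle (Table II)
R_CONTACT_OHM = 100e-6     # hypothetical 100 micro-ohm dynamic resistance

q_5ka = joule_heat(5_000, R_CONTACT_OHM, WELD_TIME_S)
q_7ka = joule_heat(7_000, R_CONTACT_OHM, WELD_TIME_S)
# Raising the current from 5 kA to 7 kA scales the heat input by
# (7/5)^2 = 1.96 at fixed R and t, consistent with the higher load
# bearing capacities observed at 7 kA.
```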
IV. Conclusions
In this study, the maximum tensile shear load bearing capacity of the specimens during tensile shear tests was measured and the strength properties of the joints were evaluated. The effects of different welding parameters on microstructure, microhardness, and tensile shear properties were investigated. The conclusions resulting from this study are:
1. The microstructure of the weld metal was formed as martensite, and the HAZs of the RSW samples were nearly wholly martensitic.
2. Generally, martensite volume fractions increased for all welding parameters.
3. The hardness of the weld metal in DP steels increases considerably after the RSW process, by up to about 2 times compared to the base metal. The HAZ and weld metal had higher hardness than the base metal.
4. All RSW samples, welded using different parameters, showed hardness values higher than 350 HV. This was explained by the carbon content rule discussed in this study.
5. The highest and the lowest tensile shear load bearing capacities pertain to the samples joined at 7 kA-3 bar and 5 kA-4 bar, respectively.
6. Increasing electrode pressure and weld current increased the tensile shear load bearing capacity.
7. Compared to the welds without expulsion, the tensile shear load bearing capacity of the welds with expulsion (5 kA-4 bar, 7 kA-4 bar, 7 kA-6 bar) was a little smaller.

Acknowledgment
This work was supported by the Scientific Research Projects Coordination Unit of Karabuk University (Karabuk, Turkey), project number KBU-BAP-17-KP-463.

References
[1] M. Elitas, B. Demir, O. Yazici, "The effects of the electrode pressure on tensile strength and fracture modes of the RSW junctions of DP600 sheet steel", 2nd International Conference on Material Science and Technology in Cappadocia, Nevsehir, Turkey, October 11-13, 2017
[2] B. Demir, "An investigation on the production of dual-phase steel from AISI 4140 and its impact strength at different martensite volume fractions", Metallofizika i Noveishie Tekhnologii, Vol. 29, No. 9, pp. 1159-1166, 2007
[3] Y. Cho, S. Rhee, "Experimental study of nugget formation in resistance spot welding", Welding Journal, Vol. 82, No. 8, pp. 195-201, 2003
[4] C. A. Campos, M. P. Guerrero-Mata, R. Colas, R. Garza, "Weldability of galvannealed interstitial free steel", ISIJ International, Vol. 42, pp. 876-881, 2002
[5] F. Hayat, B. Demir, M. Acarer, S. Aslanlar, "Effect of welding time and current on the mechanical properties of resistance spot welded IF (DIN EN 10130-1999) steel", Metallic Materials, Vol. 47, No. 1, pp. 11-17, 2009
[6] B. Aydemir, E. Kaluc, "Investigation of tensile and fatigue properties of DP1000 steel sheet joints welded with remote laser (RLW) and resistance spot welding (RSW)", Engineer and Machinery, Vol. 58, No. 687, pp. 17-28, 2017
[7] a. w. el-morsy, m. ghanem, h. bahaitham, “effect of friction stir welding parameters on the microstructure and mechanical properties of aa2024-t4 aluminum alloy”, engineering, technology & applied science research, vol. 8, no. 1, pp. 2493-2498, 2018
[8] f. hayat, i. sevim, “the effect of welding parameters on fracture toughness of resistance spot-welded galvanized dp600 automotive steel sheets”, the international journal of advanced manufacturing technology, vol. 58, no. 9-12, pp. 1043-1050, 2012
[9] c. ma, d. l. chen, s. d. bhole, g. boudreau, a. lee, e. biro, “microstructure and fracture characteristics of spot-welded dp600 steel”, materials science and engineering: a, vol. 485, no. 1-2, pp. 334-346, 2008
[10] m. i. khan, m. l. kuntz, e. biro, y. zhou, “microstructure and mechanical properties of resistance spot welded advanced high strength steels”, materials transactions, vol. 49, no. 7, pp. 1629-1637, 2008
[11] t. k. pal, k. bhowmick, “resistance spot welding characteristics and high cycle fatigue behavior of dp780 steel sheet”, journal of materials engineering and performance, vol. 21, no. 2, pp. 280-285, 2012
[12] o. holovenko, m. g. ienco, e. pastore, m. r. pinasco, p. matteis, g. scavino, d. firrao, “microstructural and mechanical characterization of welded joints on innovative high-strength steels”, la metallurgia italiana, vol. 3, pp. 3-12, 2013
[13] f. hayat, b. demir, m. acarer, “tensile shear stress and microstructure of low-carbon dual-phase mn-ni steels after spot resistance welding”, metal science and heat treatment, vol. 49, no. 9-10, pp. 484-489, 2007
[14] b. demir, m. erdogan, “the hardenability of austenite with different alloy content and dispersion in dual-phase steels”, journal of materials processing technology, vol. 208, no. 1-3, pp. 75-84, 2008
[15] h. karakus, b. demir, m.
elitas, “the effects of the electrode type on microstructure and hardness of the rsw of dp600 steel”, 2nd international conference on material science and technology in cappadocia, nevsehir, turkey, october 11-13, 2017
[16] w. d. callister, fundamentals of materials science and engineering, john wiley and sons, 2004
[17] y. kaya, n. kahraman, “the effects of electrode force, welding current and welding time on the resistance spot weldability of pure titanium”, the international journal of advanced manufacturing technology, vol. 60, no. 1-4, pp. 127-134, 2012
[18] l. m. gourd, principles of welding technology, british library cataloguing in publication data, 1995
[19] m. vural, a. akkus, “on the resistance spot weldability of galvanized interstitial free steel sheets with austenitic stainless steel sheets”, journal of materials processing technology, vol. 153-154, pp. 1-6, 2004
[20] p. marashi, m. pouranvari, s. amirabdollahian, a. abedi, m. goodarzi, “microstructure and failure behavior of dissimilar resistance spot welds between low carbon galvanized and austenitic stainless steels”, materials science and engineering: a, vol. 480, no. 1-2, pp. 175-180, 2008
[21] s. fukumoto, k. fujiwara, s. toji, a. yamamoto, “small-scale resistance spot welding of austenitic stainless steels”, materials science and engineering: a, vol. 492, no. 1-2, pp. 243-249, 2008
[22] d. q. sun, b. lang, d. x. sun, j. b. li, “microstructures and mechanical properties of resistance spot welded magnesium alloy joints”, materials science and engineering: a, vol. 460-461, pp. 494-498, 2007
[23] m. r. arghavani, m. movahedi, a. h. kokabi, “role of zinc layer in resistance spot welding of aluminium to steel”, materials & design, vol. 102, pp. 106-114, 2016
[24] a. ramazani, k. mukherjee, a. abdurakhmanov, m. abbasi, u. prahl, “characterization of microstructure and mechanical properties of resistance spot welded dp600 steel”, metals, vol. 5, no. 3, pp. 1704-1716, 2015
[25] d. zhao, y. wang, d.
liang, p. zhang, “an investigation into weld defects of spot-welded dual-phase steel”, the international journal of advanced manufacturing technology, vol. 92, no. 5-8, pp. 3043-3050, 2017

etasr engineering, technology & applied science research vol. 1, no. 3, 2011, 54-62 www.etasr.com suresh and panda: simulation and rtds hardware implementation of shaf…

simulation and rtds hardware implementation of shaf for mitigation of current harmonics with p-q and id-iq control strategies using pi controller

suresh mikkili, ph.d scholar, nit rourkela, india, msuresh.ee@gmail.com
anup kumar panda, professor, nit rourkela, india, akpanda.ee@gmail.com

abstract: control strategies for extracting the three-phase reference currents for shunt active power filters are compared, and their performance is evaluated under different source conditions in the matlab/simulink environment as well as on real time digital simulator (rtds) hardware. when the supply voltages are balanced and sinusoidal, the two control strategies converge to the same compensation characteristics, but when the supply voltages are distorted and/or unbalanced, the control strategies result in different degrees of harmonic compensation. the p-q control strategy is unable to yield an adequate solution when the source voltages are not ideal. extensive simulations are carried out with a pi controller for both the p-q and id-iq control strategies under different voltage conditions and adequate results are presented. the 3-ph 4-wire shaf system is also implemented on rtds hardware to further verify its effectiveness. detailed simulation and rtds hardware results are included.

keywords: harmonic compensation, shaf, p-q control strategy, id-iq control strategy, pi controller, rtds hardware

i. introduction

harmonics surfaced as a buzzword in the 1980s and have always threatened the normal operation of power systems and user equipment.
highly automated electric equipment, in particular, causes enormous economic loss every year, so both power suppliers and power consumers are concerned about power quality problems and compensation techniques. a sinusoidal voltage is a conceptual quantity produced by an ideal ac generator built with finely distributed stator and field windings that operate in a uniform magnetic field. since neither the winding distribution nor the magnetic field is uniform in a working ac machine, voltage waveform distortions are created, and the voltage-time relationship deviates from the pure sine function. the distortion at the point of generation is very small (about 1% to 2%), but it nonetheless exists. since this is a deviation from a pure sine wave, the deviation takes the form of a periodic function, and by definition the voltage distortion contains harmonics [1]. it is noted that non-sinusoidal current causes many problems for the utility power supply company, such as low power factor, low energy efficiency, electromagnetic interference (emi), and distortion of the line voltage. moreover, in a three-phase four-wire system, the neutral (zero) line may overheat or even cause a fire as a result of the excessive harmonic currents flowing through it, since the triplen (third and multiples of third) harmonic components of the three phases add up in the neutral line. thus a perfect compensator is necessary to avoid the consequences of harmonics [2]. though several control strategies have been developed, two control theories, the instantaneous active and reactive current (id-iq) method and the instantaneous active and reactive power (p-q) method [3-4], remain dominant. the present paper is mainly focused on these two control strategies (p-q and id-iq) with a pi controller. to validate the observations, extensive simulations are carried out with a pi controller for both the p-q and id-iq methods under different voltage conditions, namely sinusoidal, non-sinusoidal, and unbalanced, and adequate results are presented.
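the hazard described above, triplen harmonic currents adding up in the neutral line, can be illustrated with a short sketch (pure python; the 100 a fundamental and 20 a third-harmonic amplitudes are illustrative, not from the paper):

```python
import math

def phase_current(t, shift, i1=100.0, i3=20.0):
    # fundamental (50 hz) plus a 3rd-harmonic component for one phase;
    # the amplitudes i1 and i3 are illustrative, not from the paper
    w = 2 * math.pi * 50
    return i1 * math.sin(w * t - shift) + i3 * math.sin(3 * (w * t - shift))

def neutral_current(t):
    # kirchhoff: the neutral (zero) line carries the sum of the phase currents
    shifts = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
    return sum(phase_current(t, s) for s in shifts)

# the balanced fundamentals cancel, while the 3rd-harmonic terms are in
# phase in all three conductors and therefore add in the neutral:
peak = max(abs(neutral_current(n / 10000.0)) for n in range(200))
print(round(peak))  # close to 60, i.e. three times the per-phase 3rd-harmonic amplitude
```

the neutral thus carries no fundamental at all but three times the per-phase triplen harmonic, which is why it can overheat even under a balanced load.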
the 3-ph 4-wire shaf system is also implemented on a real time digital simulator (rtds hardware) [5] to further verify its effectiveness.

ii. control strategy

in this section the two control strategies are discussed in detail. an ideal analysis is performed under steady-state conditions of the active power filter. a steady-state analysis of the two presented control methods, using the fast fourier transform (fft), is briefly given below. figure 1 shows the basic architecture of a three-phase four-wire shunt active filter.

fig. 1. three-phase four-wire shunt active filter.
a. instantaneous real and reactive power method

fig. 2. control block diagram of shunt active power filter.

the transformation of the phase voltages va, vb, vc and the load currents ila, ilb, ilc into the α-β orthogonal coordinates is given in equations (1)-(2). the compensation objectives of active power filters are the harmonics present in the input currents. the present architecture represents a three-phase four-wire system and is realized with the constant power control strategy [6]. figure 2 illustrates the control block diagram; the inputs to the system are the phase voltages and the line currents of the load. it was recognized that resonance at relatively high frequency might appear with the source impedance, so a small high-pass filter is incorporated in the system. the power calculation is given in detail in equation (3).

$$\begin{bmatrix} v_0 \\ v_\alpha \\ v_\beta \end{bmatrix} = \sqrt{\tfrac{2}{3}} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} \quad (1)$$

$$\begin{bmatrix} i_0 \\ i_\alpha \\ i_\beta \end{bmatrix} = \sqrt{\tfrac{2}{3}} \begin{bmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} i_{La} \\ i_{Lb} \\ i_{Lc} \end{bmatrix} \quad (2)$$

$$\begin{bmatrix} p_0 \\ p \\ q \end{bmatrix} = \begin{bmatrix} v_0 & 0 & 0 \\ 0 & v_\alpha & v_\beta \\ 0 & v_\beta & -v_\alpha \end{bmatrix} \begin{bmatrix} i_0 \\ i_\alpha \\ i_\beta \end{bmatrix} \quad (3)$$

from figure 2 we can observe that a high-pass filter with cut-off frequency 50 hz separates the oscillating power $\tilde{p}$ from $p$, and a low-pass filter separates $\bar{p}_0$ from $p_0$. the powers $\tilde{p}$ and $\bar{p}_0$ of the load, together with q, should be compensated to provide optimal power flow to the source. it is important to note that the system used is three-phase four-wire, so additional neutral currents have to be supplied by the shunt active power filter; thus ploss is incorporated to correct the compensation error caused by the feed-forward network being unable to suppress the zero-sequence power. since the active filter compensates the whole neutral current of the load in the presence of zero-sequence voltages, the shunt active filter eventually supplies p0. consequently, if the active filter supplies p0 to the load, this changes the dc voltage regulator, and hence an additional amount of active power is automatically added to ploss, which mainly provides energy to cover all the losses in the power circuit of the active filter [7]. thus, with this control strategy the shunt active filter gains the additional capability to reduce neutral currents and thereby supply the necessary compensation when it is most required in the system. the αβ reference currents can then be found with the following equation.
      +−       −+ =         qp∆p~1 22* * αβ βα βαβ νν νν ννc ca i i (4) ∆ p = 0 loss p p+ where p~ is the ac component / oscillating value of p, 0 p is the dc component of p0, etasr engineering, technology & applied science research vol. 1, �o. 3, 2011, 54-62 56 www.etasr.com suresh and panda: simulation and rtds hardware implementation of shaf… loss p is the losses in the active filter, loss p is the average value of loss p , ∆ p provides energy balance inside the active power filter and using equation (5) inverse transformation can be done. * * 1 1 0 2 -ii * 0ca 2 1 1 3 i * = icαcb 3 2 22 ii * cβcc 1 1 3 2 22                                   (5) where ica*, icb*, icc* are the instantaneous three phase current references in addition pll (phase locked loop) employed in shunt filter tracks automatically, the system frequency and fundamental positive–sequence component of three phase generic input signal. appropriate design of pll allows proper operation under distorted and unbalanced voltage conditions. controller includes small changes in positive sequence detector as harmonic compensation is mainly concentrated on three phase four wire [8]. as we know in threephase three wire, va′, vb′, vc′ are used in transformations which resemble absence of zero sequence component and it is given in equation (6). thus in three phase four wire it was modified as vα′, vβ′ and it is given in equation (7). 1 0 v 'a v 'α2 1 3 v ' = v 'b 3 2 2 β v 'c 1 3 2 2                             (6) i ' -i 'v ' α βα p'1 = v ' 2 2 i ' i ' q'i ' + i 'β αβα β                   (7) the dc capacitor voltages vdc1 and vdc2 may be controlled by a dc voltage regulator. a low-pass filter with cut-off frequency 20hz is used to render it insensitive to the fundamental frequency (50hz) voltage variations. 
the filtered voltage difference $\Delta v = v_{dc2} - v_{dc1}$ produces the voltage regulation signal ε according to the following limit function generator:

$$\varepsilon = \begin{cases} -1, & \Delta v < -0.05\,v_{ref} \\ \dfrac{\Delta v}{0.05\,v_{ref}}, & -0.05\,v_{ref} \le \Delta v \le 0.05\,v_{ref} \\ 1, & \Delta v > 0.05\,v_{ref} \end{cases}$$

where vref is a pre-defined dc voltage reference and 0.05vref was arbitrarily chosen as an acceptable tolerance margin for voltage variations. if (vdc1 + vdc2) < vref, the pwm inverter should absorb energy from the ac network to charge the dc capacitor; the inverse occurs if (vdc1 + vdc2) > vref. the signal $\bar{p}_{loss}$ generated in the dc voltage regulator is useful for correcting voltage variations due to compensation errors that may occur during the transient response of the shunt active filter.

b. instantaneous active and reactive current method (id-iq)

in this method the reference currents are obtained through the instantaneous active and reactive currents id and iq of the nonlinear load [9-10]. the calculation follows the instantaneous power theory, and the dq load currents can be obtained from equation (8). two-stage transformations relate the stationary and rotating reference frames in the active and reactive current method. figure 4 shows the voltage and current vectors in the stationary and rotating reference frames. the transformation angle θ is sensitive to all voltage harmonics and unbalanced voltages; as a result dθ/dt may not be constant. the arithmetical relations are given in equations (8) and (9); finally the reference currents can be obtained from equation (10).

fig. 3. active power filter control circuit.

$$\begin{bmatrix} i_d \\ i_q \end{bmatrix} = \frac{1}{\sqrt{v_\alpha^2 + v_\beta^2}} \begin{bmatrix} v_\alpha & v_\beta \\ -v_\beta & v_\alpha \end{bmatrix} \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} \quad (8)$$

where iα, iβ are the instantaneous α-β axis current references,

$$\begin{bmatrix} i_d \\ i_q \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} \quad (9)$$

$$\begin{bmatrix} i_{c\alpha} \\ i_{c\beta} \end{bmatrix} = \frac{1}{\sqrt{v_\alpha^2 + v_\beta^2}} \begin{bmatrix} v_\alpha & -v_\beta \\ v_\beta & v_\alpha \end{bmatrix} \begin{bmatrix} i_{cd} \\ i_{cq} \end{bmatrix} \quad (10)$$

where icd, icq are the compensation currents.
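a minimal python sketch of the limit function generator and of the id-iq transformation pair of equations (8) and (10); the function names are illustrative assumptions:

```python
import math

def epsilon(dv, vref):
    # limit function generator: saturates at +/-1 outside the 5% band
    band = 0.05 * vref
    if dv < -band:
        return -1.0
    if dv > band:
        return 1.0
    return dv / band

def to_dq(val, vbe, ial, ibe):
    # eq. (8): load currents projected onto the voltage-aligned d-q axes
    m = math.hypot(val, vbe)
    return (val * ial + vbe * ibe) / m, (-vbe * ial + val * ibe) / m

def from_dq(val, vbe, icd, icq):
    # eq. (10): filtered d-q compensation currents back to alpha-beta
    m = math.hypot(val, vbe)
    return (val * icd - vbe * icq) / m, (vbe * icd + val * icq) / m
```

because the two matrices form a rotation pair, `from_dq(v, w, *to_dq(v, w, a, b))` recovers `(a, b)` exactly; in the filter, the dc part of id is removed between the two steps so that only the harmonic content is fed back.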
one of the advantages of this method is that the angle θ is calculated directly from the main voltages, which makes the method frequency independent by avoiding the pll in the control circuit. consequently, synchronizing problems under unbalanced and distorted main voltage conditions are also avoided. thus the id-iq method achieves a large frequency operating limit, set essentially by the cut-off frequency of the voltage source inverter (vsi) [11]. figures 3 and 5 show the control diagram for the shunt active filter and the harmonic injection circuit. the load currents id and iq are obtained from the park transformation and are then passed through a high-pass filter to eliminate the dc components of the nonlinear load currents. the filters used in the circuit are of butterworth type; to reduce the influence of the high-pass filter, an alternative high-pass filter (ahpf) can be used in the circuit. it is obtained from a low-pass filter (lpf) of the same order and cut-off frequency by simply taking the difference between the input signal and the filtered one, as shown in figure 5. the butterworth filters used in the harmonic injection circuit have a cut-off frequency equal to one half of the mains frequency (fc = f/2); with this, a small phase shift in the harmonics and a sufficiently fast transient response can be obtained.

fig. 4. instantaneous voltage and current vectors.

fig. 5. park transformation and harmonic current injection circuit.

the function of the voltage regulator on the dc side is performed by a proportional-integral (pi) controller. the inputs to the pi controller are the change in the dc link voltage (vdc) and the reference voltage (vdc*). by regulating the first-harmonic active current of positive sequence id1h+, it is possible to control the active power flow in the vsi and thus the capacitor voltage vdc.
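the ahpf construction described above, the input minus its own low-pass-filtered copy, can be sketched with a first-order discrete low-pass standing in for the butterworth lpf of the paper; alpha is an assumed illustrative smoothing factor, not a value from the paper:

```python
def ahpf(samples, alpha=0.06):
    # alternative high-pass filter: input minus its low-pass-filtered copy;
    # a first-order stand-in for the butterworth lpf, alpha is illustrative
    y = samples[0]                # low-pass filter state
    out = []
    for x in samples:
        y += alpha * (x - y)      # discrete one-pole low-pass step
        out.append(x - y)         # hpf output = input - lpf(input)
    return out
```

a constant (dc) input is cancelled exactly while oscillating content passes through, which is precisely the role the ahpf plays in stripping the dc part of the d-axis current.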
in a similar fashion, the reactive power flow is controlled by the first-harmonic reactive current of positive sequence iq1h+. however, since the primary aim of the active power filter is the elimination of the harmonics caused by nonlinear loads, the current iq1h+ is always set to zero.

iii. construction of pi controller

figure 6 shows the internal structure of the control circuit. the control scheme consists of a pi controller [12], a limiter, and a three-phase sine wave generator for reference current generation and generation of the switching signals.

fig. 6. conventional pi controller.

the peak value of the reference currents is estimated by regulating the dc link voltage: the actual capacitor voltage is compared with a set reference value, and the error signal is processed through a pi controller, which contributes to zero steady-state error in tracking the reference current signal. the output of the pi controller is taken as the peak value of the supply current (imax), which is composed of two components: (a) the fundamental active power component of the load current, and (b) the loss component of the apf, to maintain the average capacitor voltage at a constant value. the peak current imax so obtained is multiplied by unit sine vectors in phase with the respective source voltages to obtain the reference compensating currents. these estimated reference currents (isa*, isb*, isc*) and the sensed actual currents (isa, isb, isc) are compared in a hysteresis band, which gives the error signal for the modulation technique. this error signal decides the operation of the converter switches. in this current control circuit configuration, the source/supply currents isabc are made to follow the sinusoidal reference currents iabc within a fixed hysteresis band. the width of the hysteresis window determines the source current pattern, its harmonic spectrum and the switching frequency of the devices. the dc link capacitor voltage is kept constant throughout the operating range of the converter.
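the regulation chain of this section, pi regulation of the dc link to obtain imax, multiplication by unit sine vectors, and a fixed-band hysteresis decision per phase, can be sketched as follows (the gains, band width and function names are illustrative assumptions, not the paper's values):

```python
class PI:
    # discrete pi regulator for the dc link; kp, ki, dt are illustrative
    def __init__(self, kp=0.5, ki=10.0, dt=1e-4):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.acc = 0.0  # integral accumulator

    def step(self, vdc_ref, vdc):
        e = vdc_ref - vdc
        self.acc += e * self.dt
        return self.kp * e + self.ki * self.acc  # taken as peak current imax

def reference_currents(imax, unit_sines):
    # imax times unit sine vectors in phase with the source voltages
    return [imax * s for s in unit_sines]

def lower_switch_on(i_ref, i_meas, prev_state, band=0.5):
    # fixed-band hysteresis rule for one phase leg: lower switch on (True)
    # raises the phase current, upper switch on (False) lowers it
    e = i_ref - i_meas
    if e > band:
        return True
    if e < -band:
        return False
    return prev_state  # inside the band keep the previous switching state
```

the complementary upper switch state is simply the negation of the lower one, reflecting the dead-short constraint on a phase leg discussed below.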
in this scheme, each phase of the converter is controlled independently. to increase the current of a particular phase, the lower switch of the converter associated with that phase is turned on, while to decrease the current the upper switch of the respective converter phase is turned on. with this, one can realize the potential and feasibility of the pi controller. the actual source currents are monitored instantaneously and then compared to the reference currents [13] generated by the proposed algorithm. in order to obtain accurate instantaneous control, the switching of the igbt devices should be such that the error signal approaches zero, thus providing a quick response. for this reason, a hysteresis current controller with a fixed band, which derives the switching signals of the three-phase igbt-based vsi bridge, is used. the upper and lower devices in one phase leg of the vsi are switched in a complementary manner, otherwise a dead short circuit would take place. the apf reference currents isa*, isb*, isc* are compared with the sensed source currents isa, isb, isc, and the error signals are operated on by the hysteresis current controller to generate the firing pulses which activate the inverter power switches in a manner that reduces the current error.

iv. rtds hardware

the real time digital simulator (rtds) allows developers to accurately and efficiently simulate electrical power systems and to test their ideas to improve them. the rtds simulator operates in real time, therefore not only allowing the simulation of the power system, but also making it possible to test physical protection and control equipment. this gives developers the means to prove their ideas, prototypes and final products in a realistic environment. the rtds is a fully digital power system simulator capable of continuous real time operation.
it performs electromagnetic transient power system simulations with a typical time step of 50 microseconds, utilizing a combination of custom software and hardware. the proprietary operating system used by the rtds guarantees “hard real time” during all simulations. it is an ideal tool for the design, development and testing of power system protection and control schemes. with a large capacity for both digital and analogue signal exchange (through numerous dedicated high-speed i/o ports), physical protection and control devices are connected to the simulator to interact with the simulated power system.

v. simulator hardware

the real time digital simulation hardware used in the implementation of the rtds is modular, making it possible to size the processing power to the simulation tasks at hand. figure 7 illustrates typical hardware configurations for real time digital simulation equipment. as can be seen, the simulator can take several forms, including a new portable version which can easily be transported to a power plant or substation for on-site pre-commissioning tests. each rack of simulation hardware contains both processing and communication modules. the mathematical computations for individual power system components and for network equations are performed using one of two different processor modules. an important aspect in the design and implementation of any real time simulation [14] tool is its ability to adapt to future developments. since the power system industry itself continues to advance with the introduction of new innovative devices, both the hardware and software of the simulator must be able to follow such changes. great care has been taken to ensure such upward compatibility in all aspects of the real time simulator. adhering to this approach provides significant benefit to all simulator users, since they are able to introduce new features to already existing simulator installations.

vi.
system performance

in this section the 3-phase 4-wire shunt active power filter responses are presented under transient and steady state conditions. in the present simulation an ahpf (alternative high-pass filter) was used, built from butterworth filters with cut-off frequency fc = f/2. the simulations shown here cover different voltage conditions: sinusoidal, non-sinusoidal, and unbalanced. the simulation is carried out with a pi controller for both the instantaneous real and reactive power control strategy (p-q) and the active and reactive current control strategy (id-iq). figures 8, 9 and 10 illustrate the performance of the shunt active power filter under different main voltages; as the load is highly inductive, the current drawn by the load is rich in harmonics. figure 8 illustrates the performance under the balanced sinusoidal voltage condition: the thd for the p-q method with pi controller is 2.15% in matlab simulation and 2.21% with rtds hardware, while the thd for the id-iq method with pi controller is 1.97% in matlab simulation and 2.04% with rtds hardware. figure 9 illustrates the performance under the unbalanced sinusoidal voltage condition: the thd for the p-q method with pi controller is 4.16% in matlab simulation and 4.23% with rtds hardware, while the thd for the id-iq method with pi controller is 3.11% in matlab simulation and 3.26% with rtds hardware.

fig. 7. rtds hardware.

figure 10 illustrates the performance under the balanced non-sinusoidal voltage condition: the thd for the p-q method with pi controller is 5.32% in matlab simulation and 5.41% with rtds hardware, while the thd for the id-iq method with pi controller is 4.92% in matlab simulation and 5.05% with rtds hardware.
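the thd figures quoted above come from harmonic spectra of the source current; a minimal sketch of such a computation from one sampled cycle (pure python dft; an illustration of the metric, not the authors' measurement chain):

```python
import math

def thd_percent(samples, fundamental=1):
    # thd from a dft of one full cycle of uniformly sampled current:
    # rms of harmonics 2..n/2 relative to the fundamental magnitude
    n = len(samples)
    def mag(k):
        re = sum(s * math.cos(2 * math.pi * k * j / n) for j, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * j / n) for j, s in enumerate(samples))
        return math.hypot(re, im)
    fund = mag(fundamental)
    harm = math.sqrt(sum(mag(k) ** 2 for k in range(2, n // 2)))
    return 100.0 * harm / fund

# one cycle of a fundamental carrying a 5% fifth harmonic
n = 256
wave = [math.sin(2 * math.pi * j / n) + 0.05 * math.sin(2 * math.pi * 5 * j / n)
        for j in range(n)]
print(round(thd_percent(wave), 2))  # close to 5.0
```

sampling an exact integer number of cycles keeps the dft leakage-free, which is why a single 5% harmonic component reads back directly as a thd of about 5%.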
figure 11 gives the comparison between the p-q and id-iq control strategies with pi controller using matlab/simulink and rtds hardware. it is shown that the total harmonic distortion (thd) of the id-iq control strategy with pi controller is better than the thd of the p-q control strategy with pi controller, both in the matlab/simulink environment and when using rtds hardware.

[figure: panels of source voltage, source current, load current, filter current, dc link voltage and harmonic spectra; thd = 2.15% for p-q, 1.97% for id-iq]

fig. 8. 3-ph 4-wire shunt active filter response with pi controller under balanced sinusoidal voltage using (a) p-q with matlab, (b) p-q with rtds hardware, (c) id-iq with matlab, (d) id-iq with rtds hardware.

[figure: corresponding waveform panels and harmonic spectra for the unbalanced sinusoidal case; thd = 4.16% for p-q, 3.11% for id-iq]

fig. 9. 3-ph 4-wire shunt active filter response with pi controller under unbalanced sinusoidal voltage using (a) p-q with matlab, (b) p-q with rtds hardware, (c) id-iq with matlab, (d) id-iq with rtds hardware.

vii. conclusion

in the present paper two control strategies are developed and verified with a three-phase four-wire system. though both strategies are capable of compensating current harmonics in the 3-phase 4-wire system, it is observed that the instantaneous active and reactive current (id-iq) control strategy with pi controller leads to better results under unbalanced and non-sinusoidal voltage conditions compared to the instantaneous active and reactive power (p-q) control strategy. further, the p-q theory needs an additional pll circuit for synchronization, since it is a frequency-variant method, whereas in the id-iq method the angle θ is calculated directly from the main voltages. this makes the id-iq method frequency independent, so a large number of synchronization problems with unbalanced and non-sinusoidal voltages are avoided. overall, it is shown that the performance of the id-iq control strategy with pi controller is superior to that of the p-q control strategy with pi controller.
3, 2011, 54-62 61 www.etasr.com suresh and panda: simulation and rtds hardware implementation of shaf… 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -400 -300 -200 -100 0 100 200 300 400 time (sec) s o u rc e v o lt a g e (v o lt s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -150 -100 -50 0 50 100 150 time (sec) s o u rc e c u rr e n t ( a m p s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -150 -100 -50 0 50 100 150 time (sec) f il te r c u rr e n t ( a m p s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 0 200 400 600 800 1000 time (sec) d c l in k v o lt a g e (v o lt s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -40 -30 -20 -10 0 10 20 30 40 time (sec) l o a d c u rr e n t ( a m p s) 0 10 20 30 40 50 0 2 4 6 8 10 harmonic order thd= 5.31% m a g ( % o f f u n d a m e n ta l) 3ph 4w non-sin p-q with pi controller ( matlab simulation) 3ph 4w non-sin p-q with pi controller ( rt ds hardware) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -400 -300 -200 -100 0 100 200 300 400 time (sec) s o u rc e v o lt a g e (v o lt s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -150 -100 -50 0 50 100 150 time (sec) s o u rc e c u rr e n t ( a m p s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -150 -100 -50 0 50 100 150 time (sec) f il te r c u rr e n t ( a m p s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 0 200 400 600 800 1000 time (sec) d c l in k v o lt a g e (v o lt s) 0.352 0.354 0.356 0.358 0.36 0.362 0.364 0.366 0.368 0.37 0.372 -40 -30 -20 -10 0 10 20 30 40 time (sec) l o a d c u rr e n t ( a m p s) 0 10 20 30 40 50 0 1 2 3 4 5 harmonic order thd= 4.92% m a g ( % o f f u n d a m e n ta l) 3ph 4w bal non-sin id-iq with pi controller ( matlab simulation) 3ph 4w bal non-sin id-iq with pi controller ( rt ds hardware) (a) (b) (c) (d) fig. 10. 
3ph 4-wire shunt active filter response with pi controller under balanced non-sinusoidal conditions using (a) p-q with matlab, (b) p-q with rtds hardware, (c) id-iq with matlab, (d) id-iq with rtds hardware

fig. 11. thd for p-q and id-iq control strategies with pi controller using matlab and rtds hardware

authors profile

suresh mikkili was born in bapatla, andhra pradesh, india, on 5 august 1985. he received the b.tech degree in electrical and electronics engineering from jntu university, hyderabad, in may 2006, and the m.tech degree in electrical engineering from n.i.t. rourkela, india, in may 2008. he worked as an assistant professor in electrical engineering at s.i.t.e., t.p. gudem, from june 2008 to december 2009, and at v.k.r & v.n.b engineering college from december 2009 to july 2010.
he has been pursuing the ph.d. degree in electrical engineering at n.i.t. rourkela, india, since july 2010. his main research areas include power quality improvement issues, active filters, and soft computing techniques.

anup kumar panda was born in 1964. he received the b.tech degree in electrical engineering from sambalpur university, india, in 1987, the m.tech degree in power electronics and drives from the indian institute of technology, kharagpur, india, in 1993, and the ph.d. degree from utkal university in 2001. he joined igit, sarang, as a faculty member in 1990, served there for eleven years, and then joined the national institute of technology, rourkela, in january 2001 as an assistant professor, where he is currently a professor in the department of electrical engineering. he has published over forty articles in journals and conferences, has completed two mhrd projects and one nampet project, and has guided two ph.d. scholars, currently guiding four more in the area of power electronics & drives. his research interests include the analysis and design of high-frequency power conversion circuits, power factor correction circuits, power quality improvement in power systems, and electric drives.

engineering, technology & applied science research vol. 10, no. 4, 2020, 5979-5985 www.etasr.com salemdeeb & erturk: multi-national and multi-language license plate detection using convolutional …

multi-national and multi-language license plate detection using convolutional neural networks

mohammed salemdeeb, electronics & telecommunications engineering department, kocaeli university, kocaeli, turkey, en_mis@hotmail.com

sarp erturk, electronics & telecommunications engineering department, kocaeli university, kocaeli, turkey, sertur@kocaeli.edu.tr

abstract—many real-life machine and computer vision applications focus on object detection and recognition.
in recent years, deep learning-based approaches have gained increasing interest due to their high accuracy levels. license plate (lp) detection and classification have been studied extensively over the last decades; however, more accurate and language-independent approaches are still required. this paper presents a new approach to detect lps and recognize their country, language, and layout. furthermore, a new lp dataset for both multi-national and multi-language detection, with either one-line or two-line layouts, is presented. the yolov2 detector with a resnet feature extraction core was utilized for lp detection, and a new low-complexity convolutional neural network architecture was proposed to classify lps. results show that the proposed approach achieves an average detection precision of 99.57%, whereas the country, language, and layout classification accuracy is 99.33%.

keywords-license plate detection; license plate classification; lpd; yolo detector; convolutional neural network; deep learning

i. introduction

object detection and classification has attracted a lot of research in recent years, with the advancements in vision technology, computer technology, and deep learning algorithms [1]. object detection aims to estimate the location of objects of interest contained in an image, while object classification aims to categorize an object within a certain number of categories [2]. traditional object detection and classification approaches have three steps, namely informative region selection, feature extraction, and classification. in region selection, it is possible to scan the entire image using a multi-scale sliding window, as numerous objects may appear in different locations with various sizes and aspect ratios [1]. feature extraction aims to obtain visual features providing a semantic and robust representation.
some popular feature extraction methods used in the literature are haar-like features [3], the scale-invariant feature transform (sift) [4], histograms of oriented gradients (hog) [5], and hybrid feature selection techniques [6]. classification aims to assign a target object to one of many categories. traditional classification approaches include the support vector machine (svm) [7], adaboost [8], and deformable part-based models (dpm) [9]. recent breakthroughs in convolutional neural network (cnn)-based approaches [10] attracted researchers to use regions with cnn (r-cnn) features for object detection [11]. cnn-based methods have the capacity to learn complex features with deeper architectures and utilize training algorithms to learn informative object representations without the need to design the features manually [12]. furthermore, researchers have extensively studied various cnn models such as alexnet [10], vgg [13], googlenet [14], resnet [15], and fdrenet [16] to improve the accuracy of classification and regression problems in machine learning.

generic object detection refers to the detection of objects from predefined classes, obtaining the spatial location (e.g. a bounding box) inside an image. it can typically be categorized into two types, namely regression/classification-based and region-based methods [17]. region-based methods include r-cnn [11], fast r-cnn [18], faster r-cnn [19], and mask r-cnn [20]. on the other hand, regression/classification-based methods include yolo (you only look once) [21], ssd [22], yolov2 [23], and yolov3 [24].

automatic license plate recognition (alpr) is a group of techniques that use license plate detection (lpd), character segmentation, and character recognition on images to identify vehicle lp numbers. alpr is also referred to as license plate detection and recognition (lpdr). alpr is used in various real-life applications such as parking systems, electronic toll collection, and traffic security and control [25].
state-of-the-art object detection algorithms based on deep learning have provided promising results for lp country and layout classification. however, the multi-orientation and multi-scale nature of lps, in addition to distortion and illumination issues, make lpd a challenging task [26]. lpd using deep learning has been extensively studied over the last decade. authors in [27] proposed a cnn-based multi-directional (md)-yolo framework for lpd, but their method does not successfully detect small lps. in [28] a faster r-cnn approach was presented, first detecting vehicle regions and then locating the lp in each vehicle region; its performance evaluation showed 98.39% precision and 96.83% recall. a new approach was proposed in [29], referred to as yolo-l, where the prospective number and size of lp candidate boxes are selected using "k-means++" clustering with a modified yolov2 model, and pre-identification is used to distinguish lps from similar objects. this method achieved a precision of 98.86% and a recall of 98.86%. researchers in [30] introduced the largest brazilian lp dataset, referred to as the ufpr dataset, and proposed a four-stage lpdr system comprising vehicle detection, lpd, character segmentation, and character recognition; the lpd stage used fast-yolo as the cnn core and obtained a recall of 98.33%. furthermore, researchers in [31] introduced a large and comprehensive chinese lp dataset called ccpd, and proposed an end-to-end lpdr system using rpnet in the lpd phase, comparing the detection average precision (ap) results to the ssd, yolov2, and faster r-cnn detection techniques using 250k unique car lps.

corresponding author: mohammed salemdeeb
on the other hand, little research has been performed on multi-language and multi-national lp detection, mostly due to the lack of international lp datasets. nevertheless, a few recent studies focused on developing a global end-to-end alpr system, as reported in [32]. authors in [32] proposed an approach for multi-national license plate detection in images with complex backgrounds, in which the yuv color space was initially used to detect the rear vehicle lights, and the lp area was then detected using a histogram-based approach on the edge energy map. the utilized dataset comprised lps from america, china, serbia, italy, pakistan, the united arab emirates (uae), and hungary; it contained only single-line lps, and a detection accuracy of 90% was obtained. researchers in [33] used vgg with lstm to classify the registration country of lps from latvia, lithuania, estonia, russia, sweden, poland, germany, finland, and belarus. a recent study used tiny yolov3 to detect lps from south korea, taiwan, greece, usa, and croatia [34]. several approaches expressed interest in multi-national lps, but tested their detectors on each country's dataset separately, rather than accumulating them into one dataset [35-38]. moreover, multi-language lps were addressed in a few approaches: authors in [38] proposed a mask r-cnn detector for lps with english and arabic characters from usa and tunisia, while in [39] korean and english lps were targeted, using the term multi-style detection to refer to different country, language, and one or two-line lp styles. most of the reported studies treated the lp classification (lpc) problem inside the lpd stage; in these cases, the detector determines the bounding box and at the same time gives the class label of an lp. however, in [32, 37] multi-national lpd was presented by just detecting lps, without providing any other information on nation, language, or layout.
in [33] the classification of detected lps by issuing country was studied, reporting a classification accuracy of 92.8%. on the other hand, authors in [39] proposed a module to classify the detected lps into single and double-line, without reporting its accuracy, but only the entire system results. in this paper, multi-national lps from usa, europe (eu), turkey (tr), uae, and the kingdom of saudi arabia (ksa) are targeted, using the yolov2 detector with resnet50 feature extraction for lpd. for this purpose, a new dataset, named lpdc2020, was constructed and is presented. after the segmentation of the detected lps, a cnn was used to detect the country, language, and the one or two-line layout of the lp. the proposed detector and classifier were also tested on several benchmark datasets from those countries, in addition to lpdc2020. the proposed approach aims to close the gap in the multi-national, multi-language, and multi-layout lp detection problem by utilizing a single unified system, and to the best of our knowledge it is the first and only study incorporating lps from north and south america, europe, and the middle east (tr, uae, and ksa).

ii. datasets

a. lp datasets available in the literature

most of the frequently used lp datasets utilized in previous research are available online, and their details are summarized in table i. private datasets that are not publicly accessible are disregarded.

table i. a summary of publicly available lp datasets

dataset | year | # of images | accuracy % | country
caltech [40] | 1999 | 126 | - | usa
zemris [41] | 2002 | 510 | 86.2 | croatia
ucsd [42] | 2005 | 405 | 89.5 | usa
snapshots [43] | 2007 | 97 | 85 | croatia
medialab [44] | 2018 | 730 | - | greece
reid [45] | 2017 | 77k | 96.5 | czech
ufpr [30] | 2018 | 4500 | 78.33 | brazil

b. lpdc2020 dataset

this paper introduces a new lp dataset, named lpdc2020, which was collected manually using mobile cameras in turkey. it has two image sets: vehicular images to train the lpd module, and cropped lp images to train the lpc module.
in addition, due to the lack of publicly available arabic lp datasets, images of ksa and uae lps available on the internet were used. all images were processed and annotated manually in a labor-intensive process. table ii shows the number of lpd images collected for each country. some sample lps from different countries with one and two-line layouts included in the dataset are shown in figure 1. table iii shows the structure of the lpdc2020 classification dataset. it is noted that, taking one and two-line layouts into account, the lpc dataset incorporates 11 different classes. the total number of cropped lp images is 29030, containing lp images from the previously mentioned countries.

table ii. a summary of the lpdc2020 lpd dataset

country | tr | eu | usa | ksa | uae | total
# of images | 4182 | 2636 | 715 | 1000 | 488 | 9021

fig. 1. some sample lps from different countries with various layouts.

table iii. structure of the lpdc2020 lpc dataset

country | language of characters | layout | number of instances
br | latin | one-line | 3714
br | latin | two-line | 900
uae | arabic | one-line | 500
uae | arabic | two-line | 276
eu | latin | one-line | 5296
eu | latin | two-line | 4350
ksa | arabic | one-line | 290
ksa | arabic | two-line | 792
tr | latin | one-line | 7771
tr | latin | two-line | 3560
usa | latin | one-line | 1401

iii. fundamentals of cnn

the fundamental components of any cnn are the convolutional layers, consisting of learnable filters with small spatial size and a specific depth. for an input image I and kernel K, the general equation of 2d convolution [46] used in computer vision and machine learning is defined as:

(I ∗ K)(i, j) = Σ_m Σ_n I(i + m, j + n) K(m, n) (1)

with i and m being the row indexes, while j and n are the column indexes. the activation layer produces the output value of a neuron by applying an activation function to a given input value.
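as an illustration, the operation in (1) can be sketched in a few lines of python with numpy. this is a minimal sketch for clarity, not an efficient implementation; note that indexing the kernel this way (without flipping) gives the cross-correlation form commonly used in cnn layers:

```python
import numpy as np

def conv2d(image, kernel):
    """2d 'valid' convolution as in (1):
    (I * K)(i, j) = sum_m sum_n I(i + m, j + n) K(m, n)."""
    kh, kw = kernel.shape
    rows = image.shape[0] - kh + 1
    cols = image.shape[1] - kw + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # elementwise product of the current window with the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

I = np.arange(16, dtype=float).reshape(4, 4)
K = np.ones((2, 2))
print(conv2d(I, K).shape)  # (3, 3): a 4x4 input and 2x2 kernel give a 3x3 map
```

a 2×2 all-ones kernel simply sums each 2×2 window, which makes the result easy to check by hand.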
an example is the rectified linear unit (relu) [10], whose output is zero for negative input values and equal to the input otherwise. the second important part of a cnn is the pooling layer, which reduces the input's spatial size while keeping the most important activations. this reduces the amount of computation and the number of learnable parameters. a dropout layer is used to combat overfitting by randomly omitting some neurons in each training step, i.e. setting their activation values to zero. as a result, the network learns using random combinations of neurons.

the fully connected (fc) layer, also called a dense layer [47], is the third important part of cnns. each neuron in the input layer is connected to all output neurons of this layer. the purpose of the fc layer is to learn non-linear combinations of features. for an input vector x, learnable weight matrix W, and learnable bias vector b, the output y of the fully connected layer can be expressed as:

y = Wx + b (2)

at the end of the architecture, i.e. after the last fully connected layer, a softmax layer is used. this layer is used for classification problems, providing a probabilistic interpretation of each input with respect to the sum of all input exponentials:

softmax(x)_i = e^(x_i) / Σ_j e^(x_j) (3)

this layer is also called the loss function layer, since during training a loss function is applied at the end of the cnn. in general, for N samples, the mean square error (mse) can be used in object detection as in (4), and the cross-entropy function is used for classification problems as in (5) [47]:

L_mse = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² (4)

L_cross-entropy = −Σ_{i=1}^{N} [y_i ln(ŷ_i) + (1 − y_i) ln(1 − ŷ_i)] (5)

where y_i is the i-th actual output and ŷ_i is the i-th predicted output.

iv. proposed approach

this research addresses two problems: the detection of an lp in an image, and the classification of the detected lp's country, language, and layout.

a.
license plate detection

the proposed approach is based on the yolov2 detector with the resnet50 [15] network as the core cnn of the lp detector. the utilized resnet50 architecture is displayed in table iv.

table iv. resnet50 architecture

layer | size | filters
input | 224 × 224 × 3 | -
conv1 | 112 × 112 × 64 | 7 × 7, 64, stride 2
max pooling | 56 × 56 × 64 | 3 × 3 max pool, stride 2
conv2 | 56 × 56 × 256 | [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3
conv3 | 28 × 28 × 512 | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4
conv4 | 14 × 14 × 1024 | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 6
conv5 | 7 × 7 × 2048 | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3
average pooling | 1 × 1 × 2048 | 7 × 7
fully connected | 1 × 1 × 1000 | 1000
softmax | 1 × 1 × 1000 | -

the input layer size of resnet50 was redesigned to be 672×672 instead of the original 224×224 pixels, as the original size did not provide adequate features for lpd: for an original vehicular image of small size, it is difficult to detect the lp region after reducing its resolution. naturally, there is a restriction on the minimum lp size required inside the detector's input image, due to the network forward propagation size (overall stride) of resnet50, which is 224/7 = 32. hence, lps sized 32×32 pixels will correspond to a single point in the output feature map, and consequently any smaller regions will vanish. the proposed detector core network was designed to have a forward propagation size of 672/42 = 16. the first 40 layers of resnet50 were used in the proposed yolov2 core cnn. the input size was set to 672×672 pixels, and the output feature map was 42×42 pixels. the minimum lp size was thus set to 16×16 pixels. it should be noted that smaller lps can still be detected, but with lower precision. in addition, the proposed approach can detect lps sized up to 670×670 pixels. figure 2 shows the block diagram of the proposed approach. the proposed detector had 27992604 ≈ 28m total learnable parameters.

the yolov2 detector divides the input image into an s×s grid, where s is the output feature map size of the yolov2 core resnet40 (i.e. the output of the conv4 layer), and s was set to 42. anchor boxes were downsized by the forward propagation size. yolov2 uses A anchor boxes to predict objects. the
the yolov2 detector divides the input image to an s×s grid, where s is the output feature map size of the yolov2 core resnet40 (i.e. the output of conv4 layer), and s was set to 42. anchor boxes were downsized by forward propagation size. yolov2 uses a anchor boxes to predict objects. the engineering, technology & applied science research vol. 10, no. 4, 2020, 5979-5985 5982 www.etasr.com salemdeeb & erturk: multi-national and multi-language license plate detection using convolutional … detection results are the bounding boxes and the confidence scores, so that for c class probabilities [23] the number of filters is given by: e��fg �� ��h�fg� = (i +5�< j (6) fig. 2. block diagram of the proposed approach. the lp sizes in lpdc2020 were analyzed to select their anchor boxes, using the pyramid of anchors method of faster r-cnn [19]. as shown in figure 3, lp sizes span on a range of 10 to 670 pixels. hence, in order to select anchor boxes of high intersection of union (iou), six minimum lp sizes were used. these sizes were defined as 10×10, 10×20, 10×30, 10×40, 10×50, and 30×14 pixels, with a pyramid level of 15 and anchor box pyramid scale of 1.3. as a result, 90 anchor boxes with a minimum of 0.625 and mean iou of 0.85 were obtained. according to (6), the proposed last yolov2 layer had 540 filters. fig. 3. lp sizes in lpdc2020 dataset. b. license plate classification α simple cnn was designed for lp classification, and its accuracy is compared to vgg [13]. the input image size is set to 224×224 pixels, being the same as the input size of vgg network for fair comparison. the classification cnn construction is shown in table v. the proposed classifier design has a total number of 2635773≈2.64m learnable parameters, being much less than the vgg learnable parameter amount of 138m. both a batch normalization (bn) [48] and a relu non-linear activation layer [10] follow each convolutional layer. 
bn normalizes the input batch mean and standard deviation, and then performs scaling and shifting based on learnable scale and shift parameters [48]. all convolution kernels have a size of 5×5 with stride 1 and no padding; hence, each convolutional layer shrinks the dimensions by 4 rows/columns. the dimension of the output feature map is computed according to (7):

W_out = (W_in − W_k + 2P) / S + 1 (7)

where W_out is the output feature map width, W_in is the input feature map width, W_k is the kernel width, P is the padding, and S is the kernel stride in the horizontal direction. for the input/output height relation, (7) can be applied using H instead of W.

the input size is 224×224×3. after 4 pooling and 8 convolutional layers, the output size is reduced to 6×6×128. after that, the conv9 and conv10 layers shrink the output to 1×1×512 neurons. using this design, the input image is convolved down to a single neuron with 512 channels. afterwards, these neurons are fitted to the 11 classes in the fc layer by applying (2). this layer weights all input neurons and forwards them to the softmax layer, which provides a score for each of the 11 classes and performs the classification task as described in (3). it is worth noting that the proposed design is a simple stacked cnn with a low number of learnable parameters.

table v. proposed cnn design for classification

layer | filters & size | output size | learnable parameters
input | - | 224×224×3 | -
conv1 | 5×5×32 | 220×220×32 | 2496
conv2 | 5×5×32 | 216×216×32 | 25696
max pooling | 2×2 | 108×108×32 | -
conv3 | 5×5×64 | 104×104×64 | 51392
conv4 | 5×5×64 | 100×100×64 | 102592
max pooling | 2×2 | 50×50×64 | -
conv5 | 5×5×96 | 46×46×96 | 153888
conv6 | 5×5×96 | 42×42×96 | 230688
max pooling | 2×2 | 21×21×96 | -
conv7 | 5×5×128 | 17×17×128 | 307584
conv8 | 5×5×128 | 13×13×128 | 409984
max pooling | 2×2 | 6×6×128 | -
conv9 | 5×5×256 | 2×2×256 | 819968
conv10 | 2×2×512 | 1×1×512 | 525824
fully connected | 11 | 1×1×11 | 5643
softmax | - | 1×1×11 | -

c. practical aspects

the training process used stochastic gradient descent with momentum (sgdm) [46].
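the size column of table v follows directly from (7): each unpadded 5×5 convolution removes 4 rows/columns, and each 2×2 max pooling halves the size. a short python sketch of this bookkeeping (the layer list is written out here purely for illustration):

```python
def conv_out(w_in, w_k, p=0, s=1):
    # (7): w_out = (w_in - w_k + 2p) / s + 1
    return (w_in - w_k + 2 * p) // s + 1

w = 224  # input width (and height) of the proposed classifier
for layer in ["conv1", "conv2", "pool", "conv3", "conv4", "pool",
              "conv5", "conv6", "pool", "conv7", "conv8", "pool"]:
    # 2x2 pooling halves the size; a 5x5 conv shrinks it by 4
    w = w // 2 if layer == "pool" else conv_out(w, 5)
print(w)  # 6: the 6x6x128 feature map that enters conv9
```

the trace reproduces the sequence 224 → 220 → 216 → 108 → … → 13 → 6 given in table v.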
the sgdm training was carried out for 10 epochs, with a learning rate (lr) drop factor of 0.5 applied every 2 epochs. the training set was shuffled every epoch. in yolov2 training, the mini-batch size was only six images due to memory constraints, and the lr was set to 1×10^−5. for the lp classification cnn, the mini-batch size was 120 images and the lr was set to 2.5×10^−2. after the first results, model parameter tuning was applied to continue training, using adam adaptive learning rate optimization [46]. in adam, the batch size was doubled and the lr halved every 10 epochs, as long as the final error showed improvement.

v. results and discussion

a matlab environment was used to evaluate the proposed approach. a geforce 1060 gpu with 6gb ram and compute capability 6.1 was used for training and testing. the next subsections describe the evaluation criteria for both lpd and lpc.

a. lpd

the lp detection performance evaluation was performed using precision (p), recall (r), and average precision (ap) values. any detected lp bounding box having an overlap greater than iou = 0.5 with the ground truth bounding box is considered a correct detection. precision is the percentage of correctly detected lps over the total number of detected lps, while recall is the percentage of correctly detected lps over the total number of ground truth lps. ap is the area under the precision-recall curve. p and r are calculated by (8) and (9), where tp is the number of true positive, fp false positive, and fn false negative detections:

P = TP / (TP + FP) (8)

R = TP / (TP + FN) (9)

table vi shows the proposed detector's ap performance compared to the previous approaches presented in [32, 33, 37]. the proposed detector outperforms the previous approaches in terms of ap.
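for completeness, (8) and (9) in code form, with made-up detection counts purely for illustration:

```python
def precision(tp, fp):
    # (8): p = tp / (tp + fp)
    return tp / (tp + fp)

def recall(tp, fn):
    # (9): r = tp / (tp + fn)
    return tp / (tp + fn)

# hypothetical counts: 95 correct detections, 5 false alarms,
# and 5 missed ground-truth lps
print(precision(95, 5))  # 0.95
print(recall(95, 5))     # 0.95
```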
it should be noted that in [32] only the accuracy, i.e. the ratio of detected to all lps in a private dataset, is evaluated. authors in [37] evaluated only the lpd precision, without presenting any ap values. it is evident that the proposed approach provides a better detection score.

table vi. multi-set lpd comparison results

approach | detector | score | processing time (s)
[32] | image processing | 90.4% ap | 0.25
[33] | vgg + lstm | 98.07% ap | not reported
[37] | image processing + alexnet + svm | 99.03% p | 0.16
proposed | resnet40 + yolov2 | 99.57% ap | 0.09

these approaches were selected because they evaluated performance using images from all the countries of interest together in one dataset. hence, they can be considered multi-national and multi-language lpd methods. furthermore, some studies trained and tested detectors on different datasets separately, in order to evaluate the performance on each dataset. table vii provides a comparison in terms of p, r, and ap for these methods. in order to conduct a fair comparison, the proposed detector was trained on every dataset separately. the proposed detector had a higher recall rate and ap on all datasets, which is partly due to the large number of different lps used in lpdc2020 and to its superior architecture. it is noted that one and two-line lp layout classification was studied in [34], with classification results combined in the character recognition stage for multi-national korean, taiwanese, chinese, and latin lps. table viii shows the proposed method's ap results per country. it is apparent that the performance is similar across countries, with slightly lower results for ksa lps.

table vii.
single-set lpd comparison results (values listed as p/r or p/r/ap, as reported)

approach | caltech dataset | zemris dataset | medialab dataset | various datasets
[34] tiny yolov3 | p=100%, r=100% | 98% / 99% | 98.8% / 99.7% | taiwan: 100/100%, korea: 98.3/99%
[35] vgg + faster r-cnn | ap = 98.03% | - | - | china: 98.33%, taiwan: 98.80%
[36] vgg + ssd | ap = 98.4% | 97.83% | 99.8% | -
[38] mask r-cnn | p=98.9%, r=98.6% | - | - | taiwan: 99.1%, china: 99.4%, tunisia: 97.9%
proposed resnet40 + yolov2 | p=98.43%, r=100%, ap=99.96% | 97.88% / 100% / 99.99% | 98.4% / 99.75% / 99.74% | snapshots: 98/100/99.99%, ucsd: 99/100/99.93%

table viii. lpd results for the lpdc2020 dataset per country

dataset | tr | eu | usa | uae | ksa
proposed resnet40 + yolov2 | 99.48% | 99.91% | 99.95% | 99.55% | 98.67%

b. lpc

the proposed cnn for classifying the lp's issuing country, language, and layout was evaluated in terms of overall accuracy. table ix shows the classification accuracy of the proposed cnn, which is only 0.38% less accurate than vgg16, regarded as state-of-the-art, but with significantly fewer learnable parameters. the number of learnable parameters of the proposed approach is only 1.9% of the parameters used in vgg16. as a result, the proposed cnn is faster and less complex, with a small penalty in classification accuracy.

table ix. proposed cnn lp classification accuracy

cnn architecture | accuracy | learnable parameters
vgg16 | 99.71% | 136 m
proposed cnn | 99.33% | 2.635 m

table x shows the misclassifications of the proposed approach. it is noted that turkish and european union lps have a higher classification error, as they share the same lp style standard. in contrast, br and uae lps have a unique style, and usa lps can include object shapes differing from standard lp characters, making them easy to classify with a small error.

table x.
misclassification in the lpdc2020-lpc dataset

country | language of characters | layout | number of instances | misclassified lps
br | latin | one-line | 3714 | 0
br | latin | two-line | 900 | 0
uae | arabic | one-line | 500 | 0
uae | arabic | two-line | 276 | 0
eu | latin | one-line | 5296 | 14
eu | latin | two-line | 4350 | 0
ksa | arabic | one-line | 290 | 0
ksa | arabic | two-line | 792 | 4
tr | latin | one-line | 7771 | 18
tr | latin | two-line | 3560 | 0
usa | latin | one-line | 1401 | 3

vi. conclusion

detecting the country and language is important for building a global alpr system, while correct layout classification is essential in order to read the detected characters in the right order. this paper focused on the detection and classification of multi-national and multi-language lps with different layouts from br, usa, eu, tr, ksa, and uae, proposing a method that can detect lps regardless of their country of origin, language, or layout. furthermore, a second classification stage was used to recognize the lp's issuing country, language, and layout. in addition, a new multi-national, multi-language, and multi-layout lp dataset was introduced in order to enable benchmarking and to close the gap in this field. the developed detection and classification approach was based on deep learning. the results were promising: the lp detection average precision was 99.57%, while the lp classification accuracy was 99.33%. the current study paves the way to designing a global alpr system. in the future, an end-to-end training process could be developed to test the whole system as a unified alpr model.

references

[1] z.-q. zhao, p. zheng, s.-t. xu, and x. wu, "object detection with deep learning: a review," ieee transactions on neural networks and learning systems, vol. 30, no. 11, pp. 3212–3232, nov. 2019, doi: 10.1109/tnnls.2018.2876865.

[2] p. f. felzenszwalb, r. b. girshick, d. mcallester, and d.
ramanan, “object detection with discriminatively trained part-based models,” ieee transactions on pattern analysis and machine intelligence, vol. 32, no. 9, pp. 1627–1645, sep. 2010, doi: 10.1109/tpami.2009.167. [3] r. lienhart and j. maydt, “an extended set of haar-like features for rapid object detection,” presented at international conference on image processing, rochester, ny, usa, sep. 22-25, 2002, doi: 10.1109/icip.2002.1038171. [4] d. g. lowe, “distinctive image features from scale-invariant keypoints,” international journal of computer vision, vol. 60, no. 2, pp. 91–110, nov. 2004, doi: 10.1023/b:visi.0000029664.99615.94. [5] n. dalal and b. triggs, “histograms of oriented gradients for human detection,” in 2005 ieee computer society conference on computer vision and pattern recognition (cvpr’05), jun. 2005, vol. 1, pp. 886– 893, doi: 10.1109/cvpr.2005.177. [6] p. matlani and m. shrivastava, “an efficient algorithm proposed for smoke detection in video using hybrid feature selection techniques,” engineering, technology & applied science research, vol. 9, no. 2, pp. 3939–3944, apr. 2019. [7] c. cortes and v. vapnik, “support vector networks”, machine learning, vol. 20, no. 3, pp. 273–297, sep. 1995. [8] y. freund and r. e. schapire, “a desicion-theoretic generalization of on-line learning and an application to boosting,” in european conference on computational learning theory, mar. 1995, pp. 23–37, doi: 10.1007/3-540-59119-2_166. [9] p. f. felzenszwalb, r. b. girshick, d. mcallester, and d. ramanan, “object detection with discriminatively trained part-based models,” ieee transactions on pattern analysis and machine intelligence, vol. 32, no. 9, pp. 1627–1645, sep. 2010, doi: 10.1109/tpami.2009.167. [10] a. krizhevsky, i. sutskever, and g. e. hinton, “imagenet classification with deep convolutional neural networks,” in advances in neural information processing systems 25, lake tahoe, nv, usa, dec. 2012, pp. 1097–1105. [11] r. girshick, j. donahue, t. 
darrell, and j. malik, “rich feature hierarchies for accurate object detection and semantic segmentation,” in proceedings of the ieee conference on computer vision and pattern recognition, columbus, oh, usa, jun. 2014, pp. 580–587. [12] y. lecun, y. bengio, and g. hinton, “deep learning,” nature, vol. 521, no. 7553, pp. 436–444, may 2015, doi: 10.1038/nature14539. [13] k. simonyan and a. zisserman, “very deep convolutional networks for large-scale image recognition,” presented at the international conference on learning representations, may 2015, arxiv: abs/1409.1556. [14] c. szegedy et al., “going deeper with convolutions,” presented at the ieee conference on computer vision and pattern recognition (cvpr), boston, ma, usa, jun. 7-12, 2015. [15] k. he, x. zhang, s. ren, and j. sun, “deep residual learning for image recognition,” in proceedings of the ieee conference on computer vision and pattern recognition, 2016, pp. 770–778. [16] d. virmani, p. girdhar, p. jain, and p. bamdev, “fdrenet: face detection and recognition pipeline,” engineering, technology & applied science research, vol. 9, no. 2, pp. 3933–3938, apr. 2019. [17] l. liu et al., “deep learning for generic object detection: a survey,” international journal of computer vision, vol. 128, no. 2, pp. 261–318, feb. 2020, doi: 10.1007/s11263-019-01247-4. [18] r. girshick, “fast r-cnn,” in proceedings of the ieee international conference on computer vision, 2015, pp. 1440–1448. [19] s. ren, k. he, r. girshick, and j. sun, “faster r-cnn: towards realtime object detection with region proposal networks,” ieee transactions on pattern analysis and machine intelligence, vol. 39, no. 6, pp. 1137–1149, jun. 2017, doi: 10.1109/tpami.2016.2577031. [20] k. he, g. gkioxari, p. dollár, and r. girshick, “mask r-cnn,” in 2017 ieee international conference on computer vision (iccv), oct. 2017, pp. 2980–2988, doi: 10.1109/iccv.2017.322. [21] j. redmon, s. divvala, r. girshick, and a. 
farhadi, “you only look once: unified, real-time object detection,” in 2016 ieee conference on computer vision and pattern recognition (cvpr), jun. 2016, pp. 779–788, doi: 10.1109/cvpr.2016.91. [22] w. liu et al., “ssd: single shot multibox detector,” in european conference on computer vision – eccv 2016, pp. 21–37, doi: 10.1007/978-3-319-46448-0_2. [23] j. redmon and a. farhadi, “yolo9000: better, faster, stronger,” in proceedings of the ieee conference on computer vision and pattern recognition, jul. 2017, pp. 7263–7271. [24] j. redmon and a. farhadi, “yolo9000: better, faster, stronger,” in 2017 ieee conference on computer vision and pattern recognition (cvpr), jul. 2017, pp. 6517–6525, doi: 10.1109/cvpr.2017.690. [25] s. du, m. ibrahim, m. shehata, and w. badawy, “automatic license plate recognition (alpr): a state-of-the-art review,” ieee transactions on circuits and systems for video technology, vol. 23, no. 2, pp. 311–325, feb. 2013, doi: 10.1109/tcsvt.2012.2203741. [26] j. han, j. yao, j. zhao, j. tu, and y. liu, “multi-oriented and scaleinvariant license plate detection based on convolutional neural networks,” sensors, vol. 19, no. 5, p. 1175, jan. 2019, doi: 10.3390/s19051175. [27] l. xie, t. ahmad, l. jin, y. liu, and s. zhang, “a new cnn-based method for multi-directional car license plate detection,” ieee transactions on intelligent transportation systems, vol. 19, no. 2, pp. 507–517, feb. 2018, doi: 10.1109/tits.2017.2784093. [28] s. g. kim, h. g. jeon, and h. i. koo, “deep-learning-based license plate detection method using vehicle region extraction,” electronics letters, vol. 53, no. 15, pp. 1034–1036, jun. 2017, doi: 10.1049/el.2017.1373. [29] w. min, x. li, q. wang, q. zeng, and y. liao, “new approach to vehicle license plate location based on new model yolo-l and plate pre-identification,” iet image processing, vol. 13, no. 7, pp. 1041– 1049, mar. 2019, doi: 10.1049/iet-ipr.2018.6449. [30] r. 
laroca et al., “a robust real-time automatic license plate recognition based on the yolo detector,” in 2018 international joint conference on neural networks (ijcnn), jul. 2018, pp. 1–10, doi: 10.1109/ijcnn.2018.8489629. [31] z. xu et al., “towards end-to-end license plate detection and recognition: a large dataset and baseline,” in european conference on computer vision – eccv 2018, 2018, pp. 261–277, doi: 10.1007/978-3-030-01261-8_16. [32] m. r. asif, q. chun, s. hussain, m. s. fareed, and s. khan, “multinational vehicle license plate detection in complex backgrounds,” journal of visual communication and image representation, vol. 46, pp. 176–186, jul. 2017, doi: 10.1016/j.jvcir.2017.03.020. [33] n. dorbe, a. jaundalders, r. kadikis, and k. nesenbergs, “fcn and lstm based computer vision system for recognition of vehicle type, license plate number, and registration country,” automatic control and computer sciences, vol. 52, no. 2, pp. 146–154, mar. 2018, doi: 10.3103/s0146411618020104. [34] c. henry, s. y. ahn, and s.-w. lee, “multinational license plate recognition using generalized character sequence detection,” ieee access, vol. 8, pp. 35185–35199, 2020, doi: 10.1109/access.2020.2974973. [35] h. li, p. wang, and c. shen, “toward end-to-end car license plate detection and recognition with deep neural networks,” ieee transactions on intelligent transportation systems, vol. 20, no. 3, pp. 1126–1136, mar. 2019, doi: 10.1109/tits.2018.2847291. [36] j. yépez, r. d. castro-zunti, and s. b. ko, “deep learning-based embedded license plate localisation system,” iet intelligent transport systems, vol. 13, no. 10, pp. 1569–1578, jul. 2019, doi: 10.1049/iet-its.2019.0082. [37] m. r. asif, c. qi, t. wang, m. s. fareed, and s. a.
raza, “license plate detection for multi-national vehicles: an illumination invariant approach in multi-lane environment,” computers & electrical engineering, vol. 78, pp. 132–147, sep. 2019, doi: 10.1016/j.compeleceng.2019.07.012. [38] z. selmi, m. b. halima, u. pal, and m. a. alimi, “delp-dar system for license plate detection and recognition,” pattern recognition letters, vol. 129, pp. 213–223, jan. 2020, doi: 10.1016/j.patrec.2019.11.007. [39] s. park, h. yoon, and s. park, “multi-style license plate recognition system using k-nearest neighbors,” ksii transactions on internet and information systems, vol. 13, no. 5, pp. 2509–2528, may 2019, doi: 10.3837/tiis.2019.05.015. [40] caltech computational vision: archive, california institute of technology, 1999. [online]. available: http://www.vision.caltech.edu/html-files/archive.html. [41] k. kraupner, “using multilayered perceptron for recognition of alphanumeric characters on license plates,” ph.d. dissertation, university of zagreb, croatia, 2003. [42] l. dlagnekov, “video-based car surveillance: license plate, make, and model recognition,” m.s. thesis, university of california, san diego, 2005. [43] o. martinsky, “algorithmic and mathematical principles of automatic number plate recognition systems,” b.sc. thesis, brno university of technology, czech republic, 2007. [44] medialab lpr database, multimedia technology laboratory, national technical university of athens, greece. [online]. available: http://www.medialab.ntua.gr/research/lprdatabase.html. [45] j. spanhel, j. sochor, r. juranek, a. herout, l. marsík, and p. zemcik, “holistic recognition of low quality license plates by cnn using track annotated data,” in 2017 14th ieee international conference on advanced video and signal based surveillance (avss), aug. 2017, pp. 1–6, doi: 10.1109/avss.2017.8078501. [46] i. goodfellow, y. bengio, and a. courville, deep learning. cambridge, ma, usa: mit press, 2016. [47] c. m.
bishop, pattern recognition and machine learning, 1st ed., new york, ny, usa: springer, 2006. [48] s. ioffe and c. szegedy, “batch normalization: accelerating deep network training by reducing internal covariate shift,” in proceedings of the 32nd international conference on machine learning, jun. 2015, pp. 448–456.

engineering, technology & applied science research vol. 6, no. 6, 2016, 1307-1315 www.etasr.com mobarak and alshehri: perspectives of safe work practices: improving personal electrical safety of …

perspectives of safe work practices: improving personal electrical safety of low-voltage systems from electrical hazards

youssef mobarak, electrical engineering department, faculty of engineering, king abdulaziz university, rabigh, saudi arabia, ysoliman@kau.edu.sa
abdullah alshehri, electrical engineering department, faculty of engineering, king abdulaziz university, rabigh, saudi arabia, aaashehri@gmail.com

abstract—a person’s understanding of a safety hazard has a dramatic effect on his or her behavior. an in-depth understanding of a hazard usually results in a healthy respect for what can happen. people who know the most about a specific hazard tend to rely more heavily on procedures and plans to guide their actions. personal protective equipment selection and use are influenced by increased understanding of a hazard. training and training programs are influenced by the depth of knowledge held by all members of the line organization. recent work has focused attention on the thermal effects of arc flashes. however, when electrical energy is converted into thermal energy in an arcing fault, still another energy conversion is taking place. cases are on record that suggest that a considerable amount of force is created during an arcing fault. concrete block walls can be destroyed by the increased pressure that is created during an arcing fault. this study is about preventing injuries to people.
we will study injuries and develop some understanding of electrical hazards. we will also present safe work practices and responsibilities, and then discuss what makes us act as we do.
keywords—personal electrical safety; injuries; electrical hazards; safe work practices; responsibility.
i. introduction
electricity hazards have been well documented through the years, and various papers, guides and books have been published that focus on such hazards, their causes, analysis, prevention measures etc. in various applications [1-65]. an extended list is provided in the references section. historically, the obvious issue of direct contact was first reported, but in the mid-80s the issue of arc flashes also started to gain attention. since most arc-flash burns are recorded simply as burns, some estimates suggest that 80 percent of all injuries from an electrical hazard are the result of an arc. the plasma in an electrical arc can reach 35,000 °f. in fact, it will reach that temperature unless the energy source is removed before it gets there. people have been fatally burned at distances greater than 10 feet from the arc. in one arcing-fault incident, two people who were standing about 18 feet from an electrical arc were fatally burned. more than 2000 people are admitted to burn centers annually with severe electrical burns. several standards and guides have been published that focus on arc flashes. arcpro is commercially available software that performs incident-energy calculations. many commercially available system analysis computer programs, such as edsa, also contain software that calculates incident energy. many of the calculation methods do not correlate with one another, and they may provide different results. insufficient information is available to suggest that one method provides more accurate information than another. where do these conditions exist? the arc-flash issue can be reduced to these facts.
you can calculate incident energy by one of several methods. an employer/owner should provide enough information about the electrical circuit to enable a worker to select protective equipment. national electrical codes usually require a label on equipment where potential for an arc-flash injury exists. however, you cannot know in advance exactly how much protective equipment will prevent an injury. a worker should wear clothing that provides significant flash protection as his or her normal work clothes. you should also be advised that the ppe selected by any method will not necessarily eliminate an injury. incident energy is calculated at a prescribed distance; even if the ppe is 100 percent effective at that distance, some part of the worker’s body probably will be closer and subjected to greater thermal energy. the best alternative is to create an electrically safe work condition. if the source of energy is removed with assurance that it cannot reaccumulate, all exposure to an electrical hazard has been removed. this practice should always be the first option. stop, stand still, think: does something seem out of place? smell: equipment that is beginning to fail frequently gives off an unusual smell. feel: does the equipment or device feel warm or hot to the touch? listen: is there an unusual sound? the first generally recognized hazard associated with electrical energy was fire, and the conditions above frequently result in a fire. there is much left to discuss. review your employer’s plan and procedures. minimize exposure to the hazard by doing as much work as possible before exposing the hazard. a barrier should be installed to cover any conductor that must remain energized.
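the text notes that incident energy can be calculated by one of several methods that do not always agree. as a minimal sketch of one of them, here is the ralph lee formulation as commonly reproduced (e.g. in ieee 1584); the constant, the units, and all numeric inputs below are assumptions for illustration, not values taken from this paper:

```python
# hedged sketch: incident-energy estimate with the lee method.
# assumed form: e = 5.12e5 * v * i_bf * t / d**2, with e in cal/cm^2,
# v in kv, i_bf (bolted-fault current) in ka, t in seconds, d in mm.

def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """estimate incident energy (cal/cm^2) at working distance d_mm."""
    return 5.12e5 * v_kv * i_bf_ka * t_s / (d_mm ** 2)

# illustrative values only: 480 v system, 20 ka bolted-fault current,
# 0.2 s clearing time, 455 mm (about 18 in.) working distance.
e = lee_incident_energy(0.48, 20.0, 0.2, 455.0)
print(round(e, 2))  # a few cal/cm^2, above the ~1.2 cal/cm^2
                    # second-degree-burn threshold, so ppe is needed
```

note how the distance enters squared: doubling the working distance cuts the estimated incident energy to one quarter, which is one reason working distance is such an effective control.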
a rented generator can be installed to permit the equipment to be completely locked out, and safety grounds help to eliminate the possibility of an unexpected backfeed.
ii. injuries
for an injury to occur, an unintended release of energy or an unexpected contact with some source of energy must occur. only an unintended interaction with some source of energy can cause an injury. the exposure, whether intended or unintended, can only be the result of an unsafe condition, an unsafe act, or the use of unsafe equipment. an unsafe act is when an energy release is the result of a person’s action, such as when a person cuts the ground probe from a nema 5-15 cord cap. an unsafe condition is when the working environment is influenced by a condition that results in a release of energy, such as when a person leaves a hole in the floor unguarded or uncovered. unsafe equipment might be poorly maintained equipment, or it might be an electrical circuit that has oversized fuses. if we lumped all unsafe conditions and all unsafe equipment together, they would account for about one-third of all injuries; unsafe acts are the basic cause of the other two-thirds. we could also categorize all injuries by the type of energy, and if we did, electrical injuries would be the largest category.
a. causes of injuries: this section compares unsafe equipment and unsafe conditions with unsafe acts. as the chart in figure 1a suggests, unsafe acts are the major cause of injury. this chart also suggests that if we could somehow eliminate unsafe acts as a cause of injury, we could reduce the number of electrical injuries by a significant degree. once an incident is in progress, a person can do little to avoid being injured. the trick, then, is to take some action before an incident has a chance to begin.
b. heinrich’s relationship: a theory developed by h.w.
heinrich states that for every 300,000 unsafe acts, there are 30,000 near misses, 300 recordable injuries, 30 lost-time injuries, and 1 fatality, as shown in figure 1b. over the years, these relationships have proven to be relatively accurate. some people feel that if the energy source is electrical, then a zero can be taken from the relationship, since contact with an energized electrical conductor has a very significant chance of causing electrocution.
c. injury analysis: this analysis of data suggests that an injury from electrical energy fits into these categories. the study used data that was collected over a ten-year period from 120,000 employees. the data shows that a population of this size can expect to have 125 lost-time injuries each year. of these injuries, 25.7% involve the eyes, 21% result in permanent disability and 2.4% are fatal. for every 25,000 workers, a fatality is experienced each year. it should be noted that these statistics don’t include burn injuries from either current flow or arc flash, because they are categorized as burns.
fig. 1. injuries and heinrich’s relationship
iii. electrical hazards
a. electric fire: a fire caused by high-resistance connections might occur when mechanical joints in an electrical conductor loosen as the conductor material heats and cools in its normal use cycle. the heating and cooling cycle causes the connector to expand and contract. the connector material stretches during this cycle, resulting in decreased contact pressure. conductor material can also flow away from the point where pressure is applied, which further decreases the pressure. the high-resistance connection generates heat that can, in turn, ignite any nearby flammable material. an improper welding path can cause sparks at remote locations; if any flammable material is nearby, the sparks can result in ignition. if electrical insulating material is inadequately rated, the conductor can contact a surface at a different potential.
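the loosening-joint mechanism described above can be put into numbers with joule's law, p = i²r; the resistance values below are illustrative assumptions, not measurements from the paper:

```python
# illustrative numbers only: power dissipated in a single connection
# point as its contact resistance rises (joule's law, p = i^2 * r).

def connection_heat_watts(current_a, resistance_ohm):
    """joule heating, in watts, dissipated in one connection."""
    return current_a ** 2 * resistance_ohm

# a sound bolted joint might measure well under a milliohm; a loosened,
# pitted joint can climb to tenths of an ohm (assumed values).
for r in (0.001, 0.05, 0.5):
    print(r, "ohm ->", connection_heat_watts(20.0, r), "w")
```

even a fraction of an ohm carrying 20 a dissipates on the order of a hundred watts in a very small volume, which is more than enough localized heat to ignite nearby insulation or dust.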
of course, hazardous flammable material can be ignited by either of the above means or by a static discharge. in either case, a fire will likely result.
b. electric shock: no one really knows how many non-fatal shock accidents happen each year. however, records show that at least 30,000 do occur. now consider that perhaps 1 in 50, maybe 1 in 100, shock accidents are recorded. an electrocution is an electrical shock that is of a magnitude large enough or a duration long enough to result in a fatality. records show that, in industry, over 600 people are electrocuted each year. electrocution is the sixth leading cause of industrial fatalities. figure 2 illustrates the number of electrocutions, by year, from 1992 to 1998. a short glossary follows:
touch potential: electricity always takes the path of least resistance. if a person touches an energized point with a hand, and the other hand is in contact with ground or a grounded object (figure 3a), the current will likely flow from one hand to the other. this type of contact is called hand to hand.
step potential: a similar current path can exist from one foot to the other. this foot-to-foot contact is called step potential, as in figure 3b. a potential difference exists between a person’s feet, and the resulting current will flow through the trunk of the body.
touch potential: still another type of touch potential can exist (figure 3c), where the current takes a hand-to-foot path.
when contact is first made with an energized conductor, the contact resistance between the skin and the conductor is high. as the current increases, the contact resistance is driven lower. if the skin’s surface should break, contact resistance effectively disappears and only internal impedance remains. blood and nerve tissue are very good conductors.
body tissue is primarily a saline solution that conducts electricity very well. at first contact, the current probably will flow across the surface of the skin.
fig. 2. electrocutions by year
fig. 3. electrical shock types
characteristics of the body: the body can be considered to be essentially an electrical system. a small voltage is chemically generated within the brain, and the nerves deliver the signal to the muscle. the current flow is in the microampere range. the muscle reacts to the strength of the signal: a stronger signal means the muscle constricts more. if an external source of voltage sends a signal to a muscle, the muscle reacts as if the signal were a normal one. if the external signal is greater than the signal generated by the brain, the muscle is told to stay clamped; the let-go threshold has been reached. automatic body functions such as heartbeat and breathing become confused by the powerful signal and cease to operate normally. fibrillation of the heart occurs quite rapidly. figure 4 shows how much current can be expected to flow in case a person makes contact with ordinary utilization voltages.
best dry conditions: on this chart, the green line indicates the amount of current that will flow under the best of conditions. the worker is wearing dry gloves, and the worker’s shoes are in good condition. the black vertical line at the left represents 110 volts. reading across to the current line, we can see that the worker will experience a current flow of about 14 ma. the vertical line on the right represents 480 volts. again reading to the left axis, we can see that the worker will experience a current flow of about 55 ma. the notes on the right side of the chart suggest what kind of reaction a person’s body might experience.
worst but normal conditions: these are shown in figure 4b. this graph represents a different set of conditions: the worker has been at it for a while, and his or her gloves are damp from perspiration.
the impedance introduced by the gloves is reduced. the green line still represents the dry conditions we saw in figure 4a. the red line has been added to represent the amount of current flow that is likely in the event of contact with an energized conductor with the damp gloved hand. again, the vertical line on the left represents 110 volts and the vertical line on the right represents 480 volts. as you can see, the current flow at 110 volts is well into the let-go threshold, and the current flow at 480 volts is well into the range that will cause fibrillation.
fig. 4. expected current flow in a person
exposed to shock: a person is likely to receive an electrical shock any time he or she contacts an exposed energized conductor. an inadequate ground of any type can cause a voltage to exist at points where it is unexpected. poor equipment design or installation can result in conductive components that are exposed. for instance, if a hot and neutral conductor are interchanged, an external surface can be energized. equipment must be maintained so that the installed condition is approximated for the life of the installation. the most common means of exposure to shock is poor work practice or procedures. injuries frequently occur when the worker believes that the conductor is de-energized. should a condition exist that permits a large current to flow through earth, such as a significant fault or lightning discharge, a voltage gradient is generated in the earth path. if a person contacts two points along the path of current flow, some current is likely to be diverted through the person’s body. some informative pictures are shown in figure 5.
electric shock hazard and protection: table i shows what a body’s reaction might be to various amounts of current. the differences in the two columns to the right are not really related to males and females; instead, females are generally assumed to have a smaller body frame. the issue seems to be current density.
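as a rough sketch of the figure-4 reasoning, body current can be estimated with ohm's law and compared against the table i thresholds quoted in the text; the fixed 8-kilohm circuit resistance is an assumed value (the real skin-contact behavior is nonlinear), and the reaction labels paraphrase table i's male column:

```python
# rough model: body current from ohm's law with an assumed fixed circuit
# resistance, then classified against the male thresholds of table i.

THRESHOLDS_MA = [  # (minimum ma, reaction), paraphrasing table i, males
    (100.0, "possible ventricular fibrillation"),
    (23.0, "painful and severe shock"),
    (9.0, "painful shock"),
    (1.8, "shock, not painful"),
    (1.1, "perception threshold"),
    (0.4, "slight sensation in hand"),
]

def body_current_ma(volts, resistance_ohm=8000.0):
    """milliamps through the body; ~8 kohm assumed for dry-glove contact."""
    return volts / resistance_ohm * 1000.0

def reaction(current_ma):
    """worst reaction whose threshold the current reaches."""
    for level, name in THRESHOLDS_MA:
        if current_ma >= level:
            return name
    return "below sensation"

i_110 = body_current_ma(110.0)   # about 14 ma, matching figure 4a
print(round(i_110, 1), reaction(i_110))
```

with the same assumed resistance, 480 v gives roughly 60 ma, in the fibrillation-risk range, broadly consistent with the chart's reading of about 55 ma; the gap is a reminder that a fixed resistance is only a crude stand-in for the nonlinear curve.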
protect from exposure to shock: so, how do you avoid exposure to electrical shock or electrocution? shut it off – lock it out. stay outside the safe approach boundary. wear protective equipment that is adequately rated for the potential exposure. keep your grounding systems in good repair; this means that you have to test them from time to time. keep all doors closed and covers in place: if the door is closed, there is no exposed energized conductor. treat energized electrical conductors with respect; the insulation could be damaged, or it could be deteriorated with no visual indication. train people to recognize when and how exposure to electrical shock can exist. train people to understand how to completely avoid or minimize their exposure to shock by selecting and wearing adequate protective equipment. train people to understand and accept their personal limitations; they should know the limits of their knowledge and their skill. train people to practice continual awareness of their exposure to electrical shock. use signs and labels to warn people that an electrical hazard exists and that their exposure is elevated.
fig. 5. electric shock hazard and protection
table i. effect of current on the human body
effect | ac in ma (males) | ac in ma (females)
slight sensation in hand | 0.4 | 0.3
perception threshold | 1.1 | 0.7
shock, not painful, muscular control not lost | 1.8 | 1.2
shock, painful, muscular control not lost | 9.0 | 6.0
shock, painful and severe, muscular control lost | 23.0 | 15.0
possible ventricular fibrillation effect from 3-second shocks | 100.0 | 100.0
c. arc flash: when an arc-flash event happens, pressure that is created by the superheated air and vaporizing metal expels droplets of molten metal and other parts and pieces with great force.
arc-flash events usually happen very quickly. although many people have seen the effects of an arcing fault, most have never seen the event itself. figures 6-7 show stills from a video taken by a camera that shoots 30,000 frames per second. in the picture are two electrician mannequins, not real people: one is near the equipment, and the other is near the right side of the photo. figure 7 is intended to illustrate the kind of pressure force that the person would feel on his or her body. arcing faults are not confined to a starter unit or a circuit-breaker enclosure; a fault can occur at any place in the circuit.
fig. 6. a photograph of a grab
fig. 7. the kind of pressure force that the person would feel on his or her body
electrical arc burn hazards: the plasma of an electrical arc can reach a temperature of 35,000 °f. the plasma does not reach that temperature instantaneously. normally, the overcurrent device removes the energy source within two or three seconds; however, the rate of temperature rise is considerable. usually an overcurrent device operates within the first second and quenches the arc. sometimes the overcurrent device does not operate as intended, and the arc temperature gets quite high. in a normal situation where the overcurrent device removes the energy in less than one second, the plasma temperature can reach 15,000 to 17,000 °f. ordinary street clothing can ignite if its temperature reaches 700 to 1,400 °f. the ignition temperature varies as the construction material changes from nylon, polyester, or similar material to cotton and wool. if a person’s clothing ignites, the burning material will subject the person to about 1,400 °f.
however, the person will be subjected to that high temperature for several seconds before the flame is extinguished or the clothing is removed. some materials will melt when burning and deposit the molten material onto the surface of the person’s skin. copper melts at about 1,800 °f, and the metal droplets that are expelled during the faulted condition are also at that temperature. sometimes the droplets will melt through the clothing, but sometimes the clothing will be ignited by the molten copper. table ii shows what might happen to a person’s skin if subjected to elevated temperature.
table ii. skin temperature tolerance relationship
skin temperature | time at temperature | damage caused
110 °f | 6.0 hours | cell breakdown begins
158 °f | 1.0 second | total cell destruction
176 °f | 0.1 second | second-degree burn
200 °f | 0.1 second | third-degree burn
important event factors: arc-flash events happen very quickly. many people who were present when an arc flash occurred did not even see the flash. the events are very unpredictable: an arc flash might occur in one set of conditions and, in similar conditions a second time, might not occur. these events are normally started by a person doing something; even when equipment fails, the event is usually precipitated by a person’s action. these events are not related to the system of grounding: whether the system is solidly grounded, resistance grounded, or ungrounded, the events and their results seem to be the same. these events are not related to voltage. instead, they are related to energy: specifically, the amount of energy that is available within the system at the point of the fault. these events usually happen as a result of movement: a contactor operating, a switch handle moving, an errant movement by a worker, or similar events normally initiate arc-flash events.
approach boundaries: the arc-flash protection boundary is related to arc flash only, with no relationship to electrical shock or electrocution.
the limited, restricted, and prohibited approach boundaries are intended to trigger additional protective measures to prevent shock or electrocution. it is important to understand that the prohibited approach boundary represents a distance beyond which contact with an exposed energized conductor is likely; a work task that requires or enables an approach closer than this dimension should be prohibited. figure 8 illustrates the four approach boundaries. again, the limited, restricted, and prohibited approach boundaries represent increased exposure to shock. these boundaries are fixed, based on the circuit voltage; they do not change from one circuit to another. the arc-flash protection boundary is not fixed: the distance moves in and out from the exposed energized conductor, based on the amount of energy that is available in the system. the limited approach boundary may change, depending on the relative position of the worker; if the relative position can change, then the distance is movable. movable means that the conductor might move, such as in an overhead line construction, or that the worker is on a movable platform such as an articulating basket. fixed means that the worker is on a stable platform, such as a floor, and the conductor is held in place, such as a bus within a piece of equipment. shock approach boundaries are shown in table iii.
fig. 8. four approach boundaries
table iii. shock approach boundaries (distances in feet-inches)
system voltage range (phase to phase) | limited boundary, exposed movable conductor | limited boundary, exposed fixed circuit part | restricted boundary (includes inadvertent movement adder) | prohibited boundary
0 to 50 v | not specified | not specified | not specified | not specified
51 to 300 v | 10 ft. 0 in. | 3 ft. 6 in. | avoid contact | avoid contact
over 300 v, not over 750 v | 10 ft. 0 in. | 3 ft. 6 in. | 1 ft. 0 in. | 0 ft. 1 in.
over 750 v, not over 2 kv | 10 ft. 0 in. | 4 ft. 0 in. | 2 ft. 0 in. | 0 ft. 3 in.
over 2 kv, not over 15 kv | 10 ft. 0 in. | 5 ft. 0 in. | 2 ft. 2 in. | 0 ft. 7 in.
over 15 kv, not over 36 kv | 10 ft. 0 in. | 6 ft. 0 in. | 2 ft. 7 in. | 0 ft. 10 in.
protective clothing and personal protective equipment (ppe): a flash-hazard analysis is intended to determine the amount of available fault energy that the system can provide. available energy is dependent on the size of the transformer together with the impedance of the circuit. technical papers have defined incident energy as the amount of energy that might be “incident” on a material that is at a specified distance from the arc. if the incident energy is known and protective clothing is selected that has a rating equal to or greater than the available energy, then an injury is unlikely. protective clothing can be selected based on the amount of incident energy.
flash protection: up to 6 inches, where the transformer ahead of the equipment is 500 kva or smaller and the overcurrent protection is current limiting; up to 18 inches, where the transformer ahead of the equipment is 75 kva or smaller and without current-limiting overcurrent protection; more than 18 inches, where the transformer is larger than 500 kva and without current-limiting overcurrent protection. as the size of the transformer increases above 500 kva, the amount of needed protection also increases. where the transformer is more than 750 kva, incident energy should be calculated or determined as described in the electrical safety program guide.
use protective clothing: always wear flame-resistant clothing. the greater the protective value of the clothing, the greater the protection. cover every body part that is within the flash-protection boundary with protection. keep all fasteners closed.
buttons and zippers should be buttoned or zipped. wear heavy-duty leather gloves. leather is not classified with an established rating; however, an arc-flash event is normally so fast that the leather will provide the necessary protection. sometimes the cotton stitching that holds the gloves together will burn, but the gloves will hold together long enough to afford significant protection. wear heavy-duty leather shoes. like heavy-duty leather gloves, leather shoes afford significant protection; workers should not wear sneakers or shoes of similarly light construction. wear polycarbonate safety glasses in addition to any other face protection. the polycarbonate material protects the eyes from ultraviolet energy.

d. arc blast: the kinds of injuries typical of an arc blast include broken bones when a body is literally thrown across a room, and injuries from metal parts and pieces propelled across the room. there is a tremendous increase in pressure during an arcing fault. when the temperature of the plasma exceeds the melting point of copper, the conductor changes state from solid to liquid. as the temperature exceeds the boiling point of the liquid copper, the copper liquid becomes copper vapor. when water changes state from liquid to steam at atmospheric pressure, its volume expands roughly 1,600 times; when copper vaporizes, it expands many thousands of times. even without a containing enclosure, the speed at which the change of state occurs is so fast that there is a very significant increase in pressure surrounding the plasma. a pressure wave is created by the leading edge of this pressure buildup. without an enclosure, the pressure wave travels outward from the arc until the volume is large enough for the atmospheric pressure to stabilize. the air surrounding the arc plasma is also heated very rapidly, increasing the pressure buildup. the result is similar to a lightning bolt.
the conducting plasma is very hot, and thunder is the acoustic response to the pressure wave. pressure is force applied per unit of surface area. if we make some assumptions about the surface area of an average electrician, then we can estimate the amount of force that an electrician would feel from the leading edge of the pressure wave. figure 9 is intended to help make that judgment: one axis is marked in distance from an electrical arc, the other axis represents the amount of force that an electrician would feel, and the diagonal lines represent different levels of fault current.

fig. 9. arc-blast pressure on human body

arc-blast injuries: when the wave front hits the worker from the front, the pressure at his or her back is still atmospheric. a differential pressure will therefore exist from the front to the back of the worker, and also between the external surface of the worker and the internal surfaces of the body. injuries that might result from these differences in pressure include broken bones, cuts, and contusions. sensitive components of the inner ear can easily be damaged, and internal organs can receive significant damage. the rapid increase in pressure can also destroy the electrical equipment and expel parts and pieces with tremendous force.

avoid arc-blast injury: if an arcing fault is impossible, then the chance of an arc-blast injury does not exist. any person who happens to be nearby when an arcing fault occurs is exposed to injury from arc blast. experience shows that if the equipment is not arc-resistant, the chance of the enclosure being destroyed is significant. maintain equipment and systems adequately: the integrity of electrical enclosures is very important. coordinate the overcurrent devices so that minimum time is required to clear the fault. flame-retardant ppe is not intended for protection from arc blast.
however, arc-flash ppe might be a blend that includes abrasion-resistant textile. it is not possible to be exposed to injury from arc blast without a simultaneous exposure to arc flash.

e. safe work practices: the plan should identify each step in the job and consider electrical hazards at each step. the plan should identify all hazards to which a worker might be exposed, and should consider the type and degree of exposure to each hazard. the grounding system is a primary strategy of the nec and serves to limit potential differences between conductive components and structures; inadequate maintenance will permit dangerous potentials to exist. procedures and policies contain wisdom that has been derived in the past, and workers should always implement each requirement of the procedure.

electrically safe work condition: identify all sources of energy from drawings. the drawings must be up to date; if the information on the drawing is inadequate, the worker is at an elevated risk of injury. it is important to ensure that disconnecting devices are rated for operation under load. as equipment ages, the failure rate increases: sometimes the mechanical linkage in a disconnecting means fails, and all phases fail to open. where it is physically possible, open the door and look at each phase contact to make sure that an opening exists in each phase conductor. install lockout devices together with tags on all lockout points identified in the first step. always test for voltage with an adequately rated voltmeter. we recommend that a single-function device be the instrument of choice, to avoid the possibility of setting the meter on the wrong scale. the device should be listed by an independent testing laboratory.
if there is any possibility that the equipment could become reenergized by an overhead line that falls or by induction coupling from another source or by any component failure, then a ground set should be installed. plan the work: a work plan is a sequential listing of all the steps necessary to accomplish a job assignment. begin the process by identifying each step and writing the plan on paper. if the work sequence is simple and each step easily remembered, it may not be necessary to write the plan on paper. however, a written plan is always an advantage. identify and gather necessary procedures, manufacturer’s information, or drawings. review the work plan with someone else who is qualified to execute the job. identify all hazards associated with the work. be sure to consider both electrical and nonelectrical hazards. create an electrically safe work condition. if any ppe is needed, gather it all together, then inspect it to make certain that it will function as needed. assemble all test equipment that will be needed to perform the task and inspect it to ensure that it is not cracked, broken, or otherwise damaged. seek authority to perform the work. no exposure to an electrical hazard should be accepted without questioning the necessity to do so. ensure that the line organization is willing to accept any increased exposure. plan every job: a plan is a step-by-step list of all steps necessary to complete a job. the job might be either small or large. however, no job should be started until a plan is made. every person who will participate in or be associated with the job must have the same plan. if someone does not understand the plan, it is likely that something will go wrong. to plan the job, break it down into small steps. the steps identify the sequential process that must be accomplished to execute the job. all tasks should be planned. the plan should clearly identify the scope of the job. 
any change in scope should be cause to stop the work process and either generate a new plan with the new scope or modify the original plan; everyone must be advised of the change. the plan should identify the boundaries of the job. everyone must understand those boundaries and respect them; work that is not within the recognized boundary must not be performed. the plan should clearly point out the time frame in which the job is expected to be completed. it may not be necessary for the plan to be written; the key is whether everyone involved in the job understands the plan. it is important that the plan be reviewed by someone who was not involved in producing it, and if the plan is in writing, the review is more reliable. when reviewing the work plan, think about what could go wrong. the job lineup should include information about emergency procedures: what device will be used in case of an emergency? where is the communication device? where is the fire extinguisher? what is to be done if there is a technical problem? make sure that all tools that are needed to perform the task are available. when exposure to an electrical hazard already exists, it is not the right time to be looking for a tool, and workers are inclined to improvise a tool and use it improperly if a tool is needed and not readily available. if a special tool is needed, be sure to procure the tool and have it available.

isolate the equipment: the term clt-3 means clear, lock, tag, try, and test.
• clear – people should be cleared away from the equipment and the electrical circuit that will be involved in the work task.
• lock – locks should be installed in accordance with an established procedure or plan.
installation of lockout devices should be one of the steps executed when establishing an electrically safe work condition.
• tag – together with locks, tags and their attachment devices make up a lockout device that should be installed when establishing an electrically safe work condition.
• try – equipment that has a push button and is capable of running should be tried. trying to run the equipment is one indication that the correct disconnecting means has been opened.
• test – for our purposes, testing for the absence of voltage is a critical step to ensure that no voltage exists on the exposed conductor.

where it is physically possible, visibly verify that a break in all the power conductors exists. always test every conductor before touching it. test every conductor every time. if it is necessary to leave the work site, even for a few minutes, test every conductor when you return to the work location.

assess people's abilities: consider the qualifications of the person(s) who will perform the work task. when a work task is begun, the worker's occupation, title, or job classification has little bearing on whether they can avoid an incident; knowledge and skill are the only characteristics that will help prevent one. think about the person's experience and state of training:
• has the person had sufficient training to execute the physical aspects of the work task?
• has the person had sufficient training to recognize and avoid electrical hazards?
• does the person have knowledge of protective strategies to avoid initiating an incident?
• does the worker have the physical dexterity to perform the task?
• is the worker in a mental condition that enables him or her to remain focused on the work task?
also think about whether you have the skill to evaluate the condition of the worker, whether you understand the hazards well enough to evaluate the exposure and determine if the risk is acceptable, and whether it is necessary to review the work with someone who is better qualified to determine if the risk is both necessary and acceptable.

the employer, the employee, and the owner are all responsible. each is responsible for some element of the process of keeping people from being injured, and for the process to be effective, each of these parties must be involved. the employer is responsible for:
• establishing an electrical safety program for the overall organization
• defining and publishing safety policies and procedures
• providing safety equipment that is needed to minimize exposure to electrical hazards
• providing safety training that enables each worker to know what hazards exist
the employee is responsible for implementing the procedures that are defined by the employer. however, it is not that simple: each employee should provide the feedback that is necessary to keep procedures and practices up to date. employees are responsible for ensuring that the training provided by the employer is understood, and must be an integral part of the process for preventing injury to themselves and their fellow workers. the owner is inherently responsible for contractors that are working on site. in the strictest sense, the contractor might be the employer; however, the owner must make sure that the contractor is advised of all safety hazards to which contract employees might be exposed. the owner might be the landlord for a multinational corporation, or the person who operates the facility for a small company. the owner must make certain that the contractor has been informed about hazards that exist on the site, and should make certain that the contractor has an electrical safety program that addresses those hazards.
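the clt-3 sequence described above is, in effect, an ordered checklist that must be completed in full before energized parts are treated as de-energized. the sketch below models that idea; it is a hypothetical illustration only — the `CLT3Checklist` class and its method names are inventions for this example, not part of nfpa 70e or any cited procedure:

```python
# hypothetical sketch of the clt-3 sequence: clear, lock, tag, try, test.
# the class and step names are illustrative inventions, not a standard api.

CLT3_STEPS = ["clear", "lock", "tag", "try", "test"]

class CLT3Checklist:
    def __init__(self):
        self.completed = []

    def complete(self, step):
        """record a step, enforcing the clear-lock-tag-try-test order."""
        if len(self.completed) == len(CLT3_STEPS):
            raise ValueError("all steps already complete")
        expected = CLT3_STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step '{expected}', got '{step}'")
        self.completed.append(step)

    def safe_to_work(self):
        """true only after every step, ending with the absence-of-voltage test."""
        return self.completed == CLT3_STEPS

checklist = CLT3Checklist()
for step in ["clear", "lock", "tag", "try", "test"]:
    checklist.complete(step)
print(checklist.safe_to_work())  # True
```

the point of the order check is the one the text makes: testing for the absence of voltage is the last gate, and skipping or reordering any step (for example, locking before clearing people away) should stop the process rather than silently continue.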
conclusions: this study focuses on safe work practices, especially with respect to electrical hazards and, in particular, arc-flash hazards. potential hazards, causes, impacts, and protection measures are discussed, and a point-by-point guide is given for the person in charge. the basic steps are: think through the work plan and consider all possible exposure to the hazards of shock, arc flash, or arc blast; determine what approach boundaries will apply to each step in the work plan; consider every electrical conductor and every electrical component to be energized until the absence of voltage is verified; and ensure that all employees are properly trained and equipped and that safety procedures are updated and known to all.

acknowledgment: this project was funded by the deanship of scientific research (dsr), king abdulaziz university, jeddah, under grant no. (829-19-d1437). the authors, therefore, acknowledge with thanks dsr's technical and financial support.

engineering, technology & applied science research vol. 8, no. 5, 2018, 3488-3491 3488 www.etasr.com maniam et al.: a comparative study of construction waste generation rate based on different …

a comparative study of construction waste generation rate based on different construction methods on construction projects in malaysia

haritharan maniam, sasitharan nagapan (sasitharan@uthm.edu.my), abd halid abdullah, shivaraj subramaniam, samiullah sohu – faculty of civil and environmental engineering, universiti tun hussein onn malaysia (uthm), parit raja, malaysia

abstract—high construction waste (cw) generation in malaysia has serious impacts, although there are very few available data regarding the issue in malaysia.
this lack of data results in improper cw management and cw disposal without proper control measures. to control the implications of cw, it is very important to understand its quantity, which is currently unknown. past research in malaysia found that cw generation was affected by the construction methods (cms) practiced on site. the aim of this study is to compare the cw generation rate between different cms for on-going construction projects in malaysia. common cms practiced in malaysia are the conventional construction method (ccm), the mixed construction method (mcm), and the industrialized building system (ibs). to obtain cw generation data, the site visit (sv) method, which consists of direct measurement (dm) and indirect measurement (im), is applied in this study. ccm was recorded to have the highest average amount of waste; the ibs method recorded 77.188 tons and mcm 53.191 tons. regarding the average waste generation rate (awgr), ibs recorded 0.018 tons per square meter, mcm 0.030 tons per square meter, and ccm the highest at 0.046 tons per square meter.

keywords—construction waste generation; construction method

i. introduction
the construction sector has an important role in promoting economic growth in malaysia [1]. many infrastructure projects and buildings have been built [2], and cw in landfills results in a large burden and a costly issue for solid waste management [3]. wastes have the potential to affect human well-being and the environment [4]. despite the fact that this problem has caught the attention of the media for a long time, the measures taken to control waste generation are very few [5]. attention towards cw was only given after its environmental implications had increased [6]. there are no printed and reliable data related to cw in malaysia [1, 7], and malaysia still lacks research on cw generation [8]. the general components of cw are inert materials (e.g.
concrete, timber, metal, bricks, etc.), which cause little damage to the environment. proper cw measurement is vital to initiate effective management at both the project and the regional level [9]. cw generation is affected by several factors in the construction field, such as improper management, low awareness, and rules and regulations. cw generation also depends on the cm practiced and the materials utilized at construction sites [10]. the limited amount of cw generation data attracts the attention of local researchers to explore this field.

ii. literature review

a. types of cms
according to the studies in table i, the cms implemented in malaysia are the conventional construction method (ccm), the mixed construction method (mcm), and the industrialized building system (ibs) method.

table i. construction methods used in malaysia

  reference   ccm   ibs   mcm
  [1]          ●     ●     ●
  [8]          ●
  [13]         ●     ●     ●

b. cw issues in malaysia
the construction industry plays a significant role in malaysia's development in both the infrastructure and economic sectors, and it has experienced vast development over the last 20 years. almost all projects carried out are very complex and require higher skills, superior technologies, fast-track and concurrent work practices, and highly competitive prices [11]. in the malaysian construction industry, data availability is not satisfactory even for current projects [12]. moreover, the construction industry's impact on nature is noteworthy, as the high demands of major infrastructure projects and residential and commercial constructions generate high volumes of cw [13].
research in countries like malaysia tends to concentrate on construction and demolition waste generation, including waste causes, waste generation rates, and the factors affecting waste generation, because these topics have received higher attention.

c. construction waste density
the construction waste calculated in this study is expressed either in m³ or in metric tons. the waste composition density is used to convert the waste volume into tonnage. table ii shows the waste composition densities obtained by the solid waste and public cleansing management corporation (swcorp) [5].

table ii. density of waste composition

  waste composition                density, k (ton/m³)
  concrete                         1.27
  soil and aggregates              1.25
  brick                            1.20
  tiles and ceramics               0.59
  metal                            0.42
  timber                           0.34
  glass                            0.61
  plastics                         0.23
  paper and cardboard              0.21
  mixed waste / demolition waste   1.40

d. study objectives
this study aims to compare cw generation rates between different cms for on-going construction projects in malaysia. to achieve this aim, the objectives of this study are:
1. to identify the current cms practiced at construction sites.
2. to quantify the cw generation rate for each cm.
3. to compare the cw generation rates among different cms.

iii. research methodology
the methodology is implemented by visiting the construction sites for a field survey, defined in terms of field measurement (fm). this method consists of direct and indirect measurements to collect cw generation data.

a. direct measurements
this method measures on site the weight of the waste produced or its volume. some assumptions must be made prior to direct measurement; four assumptions were made depending on how the cw was stockpiled, gathered, scattered, or stacked. for stockpiled waste, a rectangular-based pyramid was assumed, and the volume (vs) was calculated by:

  vs = (1/3) × l × b × h    (1)

for gathered waste, the layout shape was assumed to be a cuboid, and the volume (vg) was derived from:

  vg = l × b × h    (2)

b.
indirect measurements for indirect measurements, truck load records were used to estimate the cw volume generated on site. the containers’ volume and the number of trucks for waste collecting were recorded. c. cw generation rate calculation the principle of this methodology is to obtain the waste generation rate in ton/m 2 (weight per construction area). total area of the project floor needs to be calculated from the building plan and recorded for calculation of waste generation rate. the waste generation rate can be calculated from: c w / gfa= (3) where w is the total waste generated from construction project (tons), gfa is the gross floor area and c is the waste generation rate in ton/m². iv. results and discussion obtained results were sorted according to the three cms stated previously, namely conventional construction method (ccm), industrialized building system (ibs), and mixed construction method (mcm). each building site was monitored for 3 months in order to obtain the data. all chosen sites were at the construction stage. a. total construction waste waste data are shown in table iii. table iii. total waste for each site and cm project/site total waste (tons) ccm 1 276 ccm 2 241.334 ccm 3 192.414 ccm 4 80.878 ibs 1 98.89 ibs 2 49.83 ibs 3 112 ibs 4 48.03 mcm 1 10.824 mcm 2 28.36 mcm 3 25.02 mcm 4 148.56 b. construction waste generation 1) conventional construction method (ccm) figure 1 illustrates, three months of construction waste data collected for ccm sites. these data were collected continuously by following the site progress. total waste generated by every site was calculated. we see that ccm 1 records the highest amount of waste. the least amount of wastes is recorded in ccm 4. 2) ibs method four sites were selected regarding ibs method. data were collected separately for each site. the collected data are shown engineering, technology & applied science research vol. 8, no. 
5, 2018, 3488-3491 3490 www.etasr.com maniam et al.: a comparative study of construction waste generation rate based on different … in figure 2. the highest amount of total waste was produced from ibs 3. the second highest amount of waste was recorded from ibs 1. ibs 2 and 4 produced the same amount of total waste. the waste generation for each site is not the same every month. the least amount of waste production was during the first month of measurements on ibs 4. fig. 1. construction waste obtained from ccm sites fig. 2. construction waste obtained from ibs sites 3) mcm the next four sites were chosen for using mcm during construction. mcm involves combinations of ccm and ibs, therefore it is also known as partial ibs. in general terms, it is the introduction of ibs elements into conventional construction. figure 3 shows the relative collected cw data. fig. 3. construction waste obtained from mcm site the highest waste generation is from mcm 4. during the first month, mcm 4 recorded a huge amount of waste generation compared to the other four sites that used mcm. the second highest waste was produced from mcm 2. the third was mcm 3 and the least amount of waste was from mcm 1. c. waste generation rate waste data obtained for every site of the previous part are analyzed in this section. all data from the 12 sites are compared according to their construction methods. following this, the gross floor area (gfa) of all sites is considered to draw the waste generation rate (wgr). wgr is used as a tool or a reference point at the construction industry to identify the waste generation rate per square meter. thus, the more the gfa, the lesser the wgr should be. 1) ccm the relative data are shown table iv. the total average amount of waste generated for ccm sites is 197.657 tons. table iv shows that, the highest wgr was produced by ccm 1, which is 0.130ton/m 2 , and the second highest was from ccm 4 with 0.046ton/m 2 . the average wgr for the ccm sites is 0.046ton/m 2 . 
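The measurement workflow above, volume estimation via (1) and (2), density conversion via Table II, and the rate of (3), can be sketched in a few lines of Python. The function names and the example stockpile dimensions are illustrative; the densities and the CCM 1 figures come from Tables II and IV.

```python
# Sketch of the field-measurement workflow (assumed helper names; the
# paper defines only the formulas and the density table).

DENSITY = {  # Table II, ton/m^3
    "concrete": 1.27, "soil_aggregates": 1.25, "brick": 1.20,
    "tiles_ceramics": 0.59, "metal": 0.42, "timber": 0.34,
    "glass": 0.61, "plastics": 0.23, "paper_cardboard": 0.21,
    "mixed_demolition": 1.40,
}

def stockpile_volume(l, b, h):
    """Eq. (1): a rectangular-based pyramid is assumed for stockpiled waste."""
    return l * b * h / 3.0

def gathered_volume(l, b, h):
    """Eq. (2): a cuboid layout is assumed for gathered waste."""
    return l * b * h

def to_tons(volume_m3, composition):
    """Convert a measured volume to tonnage using the Table II density."""
    return volume_m3 * DENSITY[composition]

def wgr(total_waste_tons, gfa_m2):
    """Eq. (3): waste generation rate C = W / GFA, in ton/m^2."""
    return total_waste_tons / gfa_m2

# Example: a 3 m x 2 m x 1.5 m concrete stockpile, and site CCM 1 overall
v = stockpile_volume(3.0, 2.0, 1.5)   # 3.0 m^3
t = to_tons(v, "concrete")            # 3.81 tons
c = wgr(276.0, 2121.17)               # ~0.130 ton/m^2, as in Table IV
```

Keeping the density table as data rather than hard-coding conversions makes it easy to re-run the same workflow with updated SWCorp figures.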
Table IV. Waste generation rate for CCM

Project | Total CW (tons) | GFA (m²) | WGR (ton/m²)
CCM 1   | 276             | 2121.17  | 0.130
CCM 2   | 241.334         | 187000   | 0.001
CCM 3   | 192.414         | 38853    | 0.005
CCM 4   | 80.878          | 1745     | 0.046
Average | 197.657         |          | 0.046

2) IBS

Table V shows the WGR for the IBS method sites. IBS 1 had the highest GFA, at 58680 m², and the second highest GFA was at IBS 2, with 43200 m². The average waste amount generated by this construction method was 77.188 tons. IBS 4 recorded the highest WGR, at 0.012 ton/m², followed by IBS 3, IBS 1, and IBS 2 with 0.003, 0.002, and 0.001 ton/m² respectively. The average WGR of the IBS sites is 0.018 ton/m².

Table V. Waste generation rate for IBS

Project | Total CW (tons) | GFA (m²) | WGR (ton/m²)
IBS 1   | 98.89           | 58680    | 0.002
IBS 2   | 49.83           | 43200    | 0.001
IBS 3   | 112             | 38410    | 0.003
IBS 4   | 48.03           | 4064     | 0.012
Average | 77.188          |          | 0.018

3) MCM

The measured results are shown in Table VI. The highest GFA is at MCM 4, with 7460 m², and the second biggest area is at MCM 1, with 1856 m². The average waste amount of the MCM sites is 53.191 tons. The highest WGR was at MCM 3, with 0.06 ton/m². Overall, the average WGR of the MCM sites is 0.03 ton/m².

Table VI. Waste generation rate for MCM

Project | Total CW (tons) | GFA (m²) | WGR (ton/m²)
MCM 1   | 10.824          | 1856     | 0.006
MCM 2   | 28.36           | 1808     | 0.016
MCM 3   | 25.02           | 416      | 0.060
MCM 4   | 148.56          | 7460     | 0.020
Average | 53.191          |          | 0.030

D. Average Waste Generation Rate (AWGR)

Figure 4 shows the average waste generation rate of each construction method.

Fig. 4. AWGR for the different construction methods (ton/m²).

Compared with the other two methods, CCM recorded the highest waste per area, at 0.046 ton/m². The second highest was MCM with 0.03 ton/m², and the least waste generation per area was recorded at the IBS sites, at 0.018 ton/m².

V. Conclusion

This paper presented a study relating construction method to construction waste generation rate, conducted on the construction methods existing in Malaysia. The aim and the three objectives of the research were achieved, and the relationship between construction method and construction waste generation rate was revealed: the conventional method generates more construction waste than the modern construction method, IBS. Construction waste generation research is broad and still at an early stage in Malaysia. Exploring infrastructure projects would be pioneering for Malaysian construction waste generation studies and would enhance the importance of this line of work. Furthermore, the relationship between project type and construction waste generation is a suggested future topic, as it would reveal whether the waste generation rate is affected by the type of project. Existing project types in Malaysia such as residential, non-residential, social amenity, and infrastructure projects are recommended for exploration, and both private and government projects should be considered in future studies. The recycle, reuse, and reduce (3R) element will also be an interesting part of future studies: by applying 3R to construction waste, effective construction waste management practices would be identified, and local contractors would be exposed to sustainable development in the construction industry.

References

[1] C. Mach, T. Fujiwara, C. S. Ho, "Construction and demolition waste generation rates for high-rise buildings in Malaysia", Waste Management & Research, Vol. 34, No. 12, pp. 1224-1230, 2016
[2] V. W. Y. Tam, "Rate of reusable and recyclable waste in construction", The Open Waste Management Journal, Vol. 4, pp. 28-32, 2011
[3] P. J. Dolan, R. G. Lampo, J. C. Dearborn, Concepts for Reuse and Recycling of Construction and Demolition Waste, CERL Technical Report 99/58, US Army Corps of Engineers, Construction Engineering Research Laboratories, 1999
[4] H. Arslan, N. Cosgun, B. Salg, "Construction and demolition waste management in Turkey", in: Waste Management, An Integrated Vision, IntechOpen, 2012
[5] S. Nagapan, I. A. Rahman, A. Asmi, N. F. Adnan, "Study of site's construction waste in Batu Pahat, Johor", Procedia Engineering, Vol. 53, pp. 99-103, 2013
[6] T. U. Ganiron Jr, "Recycling concrete debris from construction and demolition waste", International Journal of Advanced Science and Technology, Vol. 77, pp. 7-24, 2015
[7] S. Mahayuddin, J. Pereira, W. Badaruzzaman, M. Mokhtar, "Construction waste index for waste control in residential house project", SB10 New Zealand, Te Papa, New Zealand, May 26-28, 2010
[8] R. Noor, A. Ridzuan, I. Endut, B. Noordin, Z. Shehu, A. Ghani, "The quantification of local construction waste for the current construction waste management practices: a case study in Klang Valley", 2013 IEEE Business Engineering and Industrial Applications Colloquium, Langkawi, Malaysia, April 7-9, 2013
[9] H. Bergsdal, R. A. Bohne, H. Brattebo, "Projection of construction and demolition waste in Norway", Journal of Industrial Ecology, Vol. 11, No. 3, pp. 27-39, 2008
[10] U. F. A. R. Mohammed, A. S. H. Mohamed, "A glance on construction solid waste management in Khartoum", International Journal of Science, Engineering and Technology Research, Vol. 5, No. 1, pp. 101-106, 2016
[11] M. A. Eusuf, M. Ibrahim, R. Islam, "The construction and demolition wastes in Klang Valley, Malaysia", Planning Malaysia Journal, Vol. 10, No. 3, pp. 99-124, 2012
[12] R. A. Begum, C. Siwar, J. J. Pereira, A. H. Jaafar, "A benefit–cost analysis on the economic feasibility of construction waste minimisation: the case of Malaysia", Resources, Conservation and Recycling, Vol. 48, No. 1, pp. 86-98, 2006
[13] S. K. Lachimpadi, J. J. Pereira, M. R. Taha, M. Mokhtar, "Construction waste minimisation comparing conventional and precast construction (mixed system and IBS) methods in high-rise buildings: a Malaysia case study", Resources, Conservation and Recycling, Vol. 68, pp. 96-103, 2012

Engineering, Technology & Applied Science Research Vol. 8, No. 6, 2018, 3657-3667 www.etasr.com Kiamehr et al.: A Multi-Objective Optimization Model for Designing Business Portfolio in the Iranian …

A Multi-Objective Optimization Model for Designing Business Portfolio in the Oil Industry

Amir Kamran Kiamehr
Faculty of Management and Economics, Department of Industrial Management, Tarbiat Modares University, Tehran, Iran

Adel Azar
Faculty of Management and Economics, Department of Industrial Management, Tarbiat Modares University, Tehran, Iran
azara@modares.ac.ir

Mahmoud Dehghan Nayeri
Faculty of Management and Economics, Department of Industrial Management, Tarbiat Modares University, Tehran, Iran

Abstract—Designing a business portfolio is one of the key decisions in developing corporate strategy. Most of the previous models are either non-quantitative or financial, with an emphasis on optimizing a portfolio of investments or projects. This research presents a multi-objective optimization model that, firstly, employs quantitative methods in strategic decision-making and, secondly, quantifies and considers non-financial, strategic variables in problem modeling. In this regard, the links between businesses within a portfolio have been classified into four groups (market synergy, capabilities synergy, parenting costs, and sharing benefits) and structured as a conceptual model. Although the conceptual model can be applied to various industries, it is formulated here for designing the portfolio of multi-business companies in the Iranian oil industry. The model has been solved for three cases by the NSGA-II algorithm, and strategic insights have been explored for different corporate types.
Keywords-corporate strategy; business portfolio; multi-objective optimization; oil industry

I. Introduction

The historical trend of how business portfolios form shows a tendency of companies to engage in diverse businesses during the 1950s to 1980s and the emergence of large multi-purpose companies; the trend then reversed from the 1980s onward, refocusing on a major, specialized business or, ultimately, a portfolio of "related" businesses [1]. Over the past decades, how to define a business portfolio has been one of the key issues in strategic planning. Studies in this regard have been developed in areas such as diversification, vertical integration, outsourcing, M&A, partnerships, and the allocation of resources among businesses [2]. The studies and models in this area can be divided into two main groups. The first includes famous models such as the GE/McKinsey matrix and the Boston Consulting Group growth/share matrix [3, 4], which consider entering or not entering a business using limited, conceptual criteria such as industry attractiveness and competitive status. On the contrary, the second group of financially-focused studies seeks to maximize the profit or minimize the risk of investment portfolios, like the numerous studies based on Markowitz's famous portfolio optimization model [5]. However, the studies and models developed in this area face weaknesses in terms of efficiency for strategic decision-making: the first group of models is conceptual and high-level and cannot provide the analysis necessary to support management decisions, while the second group is suitable for optimizing stock and investment portfolios, whereas the components in the design of a business portfolio cannot be described only in terms of financial indicators.
In order to meet this need, this research attempts to develop a quantitative model that can support strategic decision-making at the corporate level for designing a business portfolio.

II. Literature Review

Over the past decades, two key levels of strategy have been defined for organizations: first, "corporate strategy", which defines the territory of the corporation and shows which businesses a corporation should enter; and second, "business strategy", which deals with how corporations compete within an industry or market [2, 6]. The authors in [7] structured the approaches related to corporate strategy in four groups: portfolio management, restructuring, transferring skills, and sharing activities. Each approach is based on a different mechanism for value adding by the corporation. Portfolio management is based on diversification, mainly through the acquisition of attractive businesses, providing capital, goal setting, and monitoring of business outcomes. In the restructuring approach, the focus is on the potential of business units that are prone to change. In the skills transfer approach, the focus is on synergy and the transfer of skills and knowledge across the value chains of businesses. In the sharing activities approach, value is created through shared operations in the value chain, such as the use of a shared distribution system, or through the creation of competitive advantage via cost reduction. In this structure, the first two approaches are based on creating value through the association of the parent company with each of the independent businesses, while the other two are focused on extracting value through the links between businesses and their synergy. In a general view, corporate strategy can be considered a decision on product range and vertical range, discussed in the literature in terms of diversification and vertical integration, respectively.
Evidence suggests that between the 1950s and 1980s, companies began to design diverse and unrelated business portfolios; they set up multifunctional companies and diversified businesses that ultimately led to the formation of so-called "conglomerates". Based on this experience, the process reversed from 1980 onward, and in that period unprofitable businesses were left aside; refocusing has once again been placed on major, specialized businesses or on the formation of a portfolio of related businesses [2]. Some studies have shown that the focus strategy has a positive impact on a corporation's value for shareholders, but others argue that corporate devaluation after diversification is due to the ownership of new businesses at lower prices during the diversification process [8]. Despite decades of debate in this area, portfolio design and issues like the degree of diversification and vertical integration are still at the heart of the attention of businesses and researchers [9-11]. These questions persist because, although there is evidence of the failure of many diversification strategies in the age of diversification, there are still undeniable advantages to diversity, including the possibility of growth (in the sense of going beyond the current industry's boundaries), risk reduction due to its distribution across a portfolio of businesses, savings through globalization, the benefit of the parent company, and reaching an internal market [2, 12, 13]. It can be concluded that the effectiveness of diversification is subject to conditions [14, 15].
Moreover, the importance and complexity of decision making on the business portfolio varies among industries according to the volume of required investment, the variety of components in the value chain, and the existing risks. In this regard, the oil industry ranks high. For example, in the field of exploration and production, decisions are very complex and risky due to the uncertainty and the large number of factors involved. The selection of the optimal portfolio of projects in this sector is influenced by diverse topics including corporate strategies and constraints, reservoir geology assessment, engineering data, economic forecasts, the financial model, and regulatory aspects [16]. Accordingly, mathematical optimization models have been widely developed for the oil industry. The author in [17] used a genetic algorithm for optimizing a portfolio in the oil and gas industry. In his model, the portfolio is composed of projects, not businesses; in other words, it is the investment portfolio of the oil company. He also argues the advantages of portfolio theory for this purpose: it enables decision makers to consider the whole set of available opportunities beyond merely examining the independent economic indicators of each project and approving or rejecting investment in it, and it gives them the opportunity to reach a portfolio of projects with maximum return at a certain level of risk. In general, the mathematical optimization models offered for the oil industry can be classified into three areas: strategic, such as portfolio selection; tactical, such as production planning; and operational, such as well optimization [18]. In this framework, and given the complexity of decision making in the oil industry, portfolio design models have always been considered, used, and developed [19, 20]. New tools such as the theory of real options have also been used for this purpose [21].
However, in previous research on portfolio optimization, financial approaches based on Markowitz's foundation have been applied. In the oil industry, the use of the portfolio concept and portfolio theory has focused primarily on portfolio selection methods for projects and assets. Hence, quantitative models for designing a business portfolio that consider strategic concepts have not been developed in this industry.

III. Designing the Model

A. Conceptual Model

In order to develop a model that provides the optimal business portfolio, we first need to determine the goals we are seeking to optimize. According to the existing literature and workshops organized with a focus group of experts, four main objectives were identified, namely maximum profitability, minimum investment, maximum growth, and maximum profit robustness (minimum profitability risk). The first two objectives can be integrated through the economic value added (EVA) concept. Economic value added, one of the most widely used criteria in economic profitability models [22], is a good indicator for differentiating portfolios with higher returns from those with lower returns [23]. Despite fundamental similarities, EVA provides a better strategic and managerial illustration of the economic returns of a company compared to NPV, which was basically developed to model the profitability of a project [24]:

$EVA = \left( \frac{P}{C} - WACC \right) C$  (1)

In this equation, P is the net operating profit after tax, C is the invested capital, and WACC is the weighted average cost of capital. Accordingly, each of the businesses in the portfolio creates an economic value added independently at each period, and this trend of value adding over the periods leads to growth and robustness. Based on the existing literature, we formulate the cross-business impact as follows:

1. Capability synergy: a business in a portfolio can affect the level of capability in another business in different ways, such as sharing activities or utilizing existing knowledge and expertise.
2. Market share synergy: participating in a business can change the market of another business, for example by creating a market for another business or preventing its presence in a market because of legal constraints.
3. Parenting costs: having a portfolio of businesses instead of an individual business can create costs such as overheads, management costs, or so-called holding costs, which are called "parenting costs" in this research.
4. Sharing benefits: engaging in a business can save money in the cost structure of another business, which is called "sharing benefits". These benefits can be realized in a variety of ways, such as using shared resources or services, or increased bargaining power with suppliers.

Therefore, we can present a high-level conceptual model (Figure 1) in which the value creation of any business is a function of the revenue, cost, and profitability mechanisms of the business. These factors are independently influenced by the internal variables (capabilities) and the external variables (market) of each business, while being systematically affected by the links between the businesses in the fourfold form above.

Fig. 1. Conceptual model.

Now, if x represents the presence or absence in a business, the above model can be formulated for business i in period t in the form of the following mathematical relations:

$\max Z_1 = f(EVA_{i,t})$  (2)

$\max Z_2 = f\left( \frac{dE_{i,t}}{dt} \right)$  (3)

$\min Z_3 = f(\sigma^2_{P_{i,t}})$  (4)

$EVA_{i,t} = f(P_{i,t}, I_{i,t})$  (5)

$P_{i,t} = f(E_{i,t}, C_{i,t})$  (6)

$E_{i,t} \text{ and } C_{i,t} = f(I, S, CB, PC, SB)$  (7)

$S, CB, PC, SB = f(G)$  (8)

$G = f(x_j)$  (9)

B. Structuring Industry Businesses

According to the existing literature [25-27], industry evidence, and expert opinions, a structure for oil industry businesses is presented at three levels in Figure 2.

Fig. 2. Three-level structure for the value chain of oil industry businesses.

C. Modeling Level-1 Businesses

For modeling the development and production business, a simplified model of Iran's new oil contracts, known as IPC, is considered. In this model, the contract period for a field is 20 years, during which the development of the field takes place over a period of 5 years, followed by three 5-year periods, equal to 15 years of production. Capital expenditures related to the development period are repaid by the national oil company during the operation period (with a delay of 5 years), including financial costs. Operating costs are repaid on an annual basis, and the oil company receives a certain remuneration (reward) per barrel of production. In the refining business, we likewise consider five years as the construction (or acquisition) period, with 25 years (5 periods) of operation. We assume that there is no competition to obtain the license for investment in these two level-1 businesses. Since this research seeks to model a business and not a project, we consider another type of cost in the model: the indirect costs of a business, which include non-project costs such as the costs of administration or key personnel of the headquarters.

Table I. Model variables for level-1 businesses

Variable  | Description
CX_{i,t}  | Investment in business i in period t
u_t       | Increase in oil production capacity in period t
U_t       | Oil production in period t
q_t       | Increase in refining capacity in period t
Q_t       | Production of refining products in period t
R_t       | Reward of oil production (remuneration) in period t
OX_t      | Direct costs of refinery operation in period t
E_{i,t}   | Income from business i in period t
IC_{i,t}  | Indirect costs of business i in period t
P_{i,t}   | Profit of business i in period t
P_i^f     | Future potential benefits from investment in business i
EVA_{i,t} | Economic value added from business i in period t

In development and production, if cx dollars are needed to increase production capacity by a barrel per day, with cb_1 as the capability level of the company in this business (defined in the range 0 to 1, an average of 0.5 being the normal capability in the industry), we have:

$u_t = \frac{(cb_1 + 0.5) \cdot CX_{1,t}}{cx}$  (10)

The power relationship is used to estimate the amount of investment required to build a refinery. It is widely applied to estimate the initial investment cost of a refinery based on the investment and capacity of a reference refinery [28]. According to this relationship, if c_0 is the investment required to build a reference refinery with an annual capacity of q_0, while accounting for the company's capability cb_2, then:

$CX_{2,t} = \frac{c_0 \cdot (q_t / q_0)^{0.6}}{cb_2 + 0.5}$  (11)

The production capacity added in a period is realized not in that period but in future periods. Therefore, the amounts of oil and refined product produced in each period are:

$U_t = \sum_{i=t-3}^{t-1} u_i$  (12)

$Q_t = \sum_{i=t-5}^{t-1} q_i$  (13)

In the contracts described for development and production, the corporation's profit in each period is equivalent to a reward of f dollars per barrel of oil production:

$R_t = (5 \times 365) \cdot f \cdot U_t$  (14)

In the refining business, income equals the sales of refined products. If p is the weighted average price of the corporation's refined products, assuming 330 days of operation annually, we have:

$E_{2,t} = (5 \times 330) \cdot Q_t \cdot p$  (15)

In development and production, the investment and operating costs are both reimbursed by the National Iranian Oil Company and are therefore eliminated from the model calculations. In the refining business, the operating costs, which mainly include the costs of feed, fuel, catalysts, etc., are directly related to the amount of refined products produced per period. Therefore, if producing a barrel of petroleum products costs ox dollars, the operation cost in each period, considering the level of the corporation's capability in the refining business, is:

$OX_t = \frac{(5 \times 330) \cdot Q_t \cdot ox}{cb_2 + 0.5}$  (16)

Meanwhile, indirect costs are affected by both production/operation and the development of fields or new refineries. Therefore, they can be considered a function of income or profit in each period (a_i dollars per barrel), the volume of investment in development (b_i dollars per investment dollar), and the fixed costs of the business (c_i):

$IC_{1,t} = x_1 \cdot \frac{a_1 R_t + b_1 CX_{1,t} + c_1}{cb_1 + 0.5}$  (17)

$IC_{2,t} = x_2 \cdot \frac{a_2 E_{2,t} + b_2 CX_{2,t} + c_2}{cb_2 + 0.5}$  (18)

In these equations, c_i represents the costs of having an operating business regardless of contracts and projects, such as the costs of key personnel, buildings, etc.
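The level-1 relations (10), (12), (14), and (17) translate directly into code. The sketch below uses invented parameter values (cx, f, and the indirect-cost coefficients are not the paper's calibration) and a 0-based period index:

```python
# Illustrative sketch of the development & production relations.
# All parameter values are assumptions, not the paper's calibration.

CX_PER_BPD = 20_000.0          # cx: $ per barrel/day of new capacity (assumed)
REWARD_PER_BBL = 4.0           # f: reward in $ per barrel produced (assumed)
A1, B1, C1 = 0.05, 0.01, 5e6   # indirect-cost coefficients (assumed)

def added_capacity(cx_invested, cb1):
    """Eq. (10): u_t = (cb1 + 0.5) * CX_{1,t} / cx, in barrels/day."""
    return (cb1 + 0.5) * cx_invested / CX_PER_BPD

def production(u_history, t):
    """Eq. (12): U_t sums the capacity added in the three preceding periods."""
    return sum(u_history[i] for i in range(max(t - 3, 0), t))

def reward(U_t):
    """Eq. (14): R_t = 5 * 365 * f * U_t over a 5-year period."""
    return 5 * 365 * REWARD_PER_BBL * U_t

def indirect_cost(R_t, cx_invested, cb1):
    """Eq. (17): IC_{1,t} = x_1 (a1 R_t + b1 CX_{1,t} + c1) / (cb1 + 0.5),
    here with the business active (x_1 = 1)."""
    return (A1 * R_t + B1 * cx_invested + C1) / (cb1 + 0.5)

# A $1B investment at average capability (cb1 = 0.5) adds 50,000 bpd,
# which earns its reward only in the following periods:
u = added_capacity(1e9, 0.5)             # 50,000 bpd
U2 = production([0.0, u, 0.0, 0.0], 2)   # capacity built in period 1
R2 = reward(U2)                          # reward collected in period 2
```

The one-period lag between `added_capacity` and `production` is what makes the later future-profit terms necessary: capacity built near the end of the contract pays off after the 20-year horizon.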
Now we can calculate the profit of the development and production business in each period:

$P_{1,t} = R_t - IC_{1,t}$  (19)

$P_{2,t} = E_{2,t} - OX_t - IC_{2,t}$  (20)

Therefore, the economic value added is:

$EVA_{i,t} = P_{i,t} - WACC_t \cdot CX_{i,t}$  (21)

In addition, it should be noted that a part of the investments in the 20-year period of the model leads to future production in the years after the problem period. Hence, at the end of the 20th year, potential future profits have been generated in addition to the profits received. If the average profitability per unit of production over the 20 years is calculated and called $\bar{p}_i$, we assume that the profitability of production in future periods continues according to this average. By discounting future profits at the rate WACC, we have:

$\bar{p}_1 = \frac{\sum_{t=1}^{4} P_{1,t}}{\sum_{t=1}^{4} U_t}$  (22)

$\bar{p}_2 = \frac{\sum_{t=1}^{4} P_{2,t}}{\sum_{t=1}^{4} Q_t}$  (23)

$P_1^f = \sum_{t=5}^{7} \frac{U_t \cdot \bar{p}_1}{(1 + WACC)^t}$  (24)

$P_2^f = \sum_{t=5}^{9} \frac{Q_t \cdot \bar{p}_2}{(1 + WACC)^t}$  (25)

D. Modeling Level-2 and Level-3 Businesses

While investment is a key function in level-1 businesses, in the level-2 and level-3 businesses, which are contracting and service businesses, investment is not essential. In these businesses, the cash flow depends on the company's contracts and invoices. Nevertheless, drilling services (k=2) and construction and installation (k=4) are "equipment-based", so investment in machines and equipment is required in these businesses. The variables for each business in level 2 (j=1-3) and level 3 (k=1-5) are defined in Table II.

Table II. Model variables for level-2 and level-3 businesses

Level-2    | Level-3     | Description
-          | K''_{k,t}   | Investment in period t
E'_{j,t}   | E''_{k,t}   | Income in period t
DC'_{j,t}  | DC''_{k,t}  | Direct costs in period t
IC'_{j,t}  | IC''_{k,t}  | Indirect costs in period t
P'_{j,t}   | P''_{k,t}   | Profit in period t
EVA'_{j,t} | EVA''_{k,t} | Economic value added in period t

If we denote the total market value of business j in period t as M'_{j,t} and the total market value of business k in period t as M''_{k,t}, we have:

$E'_{j,t} = y_j \cdot M'_{j,t} \cdot s'_j \cdot (cb'_j + 0.5)$  (26)

$E''_{k,t} = z_k \cdot M''_{k,t} \cdot s''_k \cdot (cb''_k + 0.5)$  (27)

where s'_j and s''_k represent the corporation's potential market share (percentages) in businesses j and k, which is affected by market conditions, the number of competitors, and the intensity of competition. Since service contracts are often awarded competitively through tenders, not all of this potential market share is actualized; indeed, the corporation's capability in a business (cb'_j and cb''_k) is a determinant factor in this regard. Therefore, in the above relations, s'_j and s''_k indicate the effect of external factors outside the control of the company, while cb'_j and cb''_k represent the effect of internal factors such as bid prices in tenders, capabilities, competitive advantage, and so on. We also assume that each business has a specific profit margin (m'_j and m''_k), which is the remainder of the contract after deducting direct costs. Thus, the direct costs are:

$DC'_{j,t} = \frac{(1 - m'_j) \cdot E'_{j,t}}{cb'_j + 0.5}$  (28)

$DC''_{k,t} = \frac{(1 - m''_k) \cdot E''_{k,t}}{cb''_k + 0.5}$  (29)

The indirect costs of a business are a function of the amount of contracts in hand (a' and a'' dollars per dollar of contract value) as an indicator of the current workload of the corporation.
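For a single period, (26) and (28) reduce to two one-line functions. A minimal sketch, assuming an active business (y = 1), a $500M market, a 10% potential share, average capability (cb' = 0.5), and a 15% margin, none of which are the paper's values:

```python
def service_income(y, market_value, share, cb):
    """Eq. (26): E'_{j,t} = y_j * M'_{j,t} * s'_j * (cb'_j + 0.5)."""
    return y * market_value * share * (cb + 0.5)

def direct_cost(income, margin, cb):
    """Eq. (28): DC'_{j,t} = (1 - m'_j) * E'_{j,t} / (cb'_j + 0.5)."""
    return (1.0 - margin) * income / (cb + 0.5)

# Illustrative values: active business, $500M market, 10% share, cb' = 0.5
E = service_income(1, 500e6, 0.10, 0.5)   # $50M of contract income
DC = direct_cost(E, 0.15, 0.5)            # $42.5M of direct costs
```

Note that at average capability the two (cb' + 0.5) factors cancel, so the margin m' is realized exactly; above-average capability both wins more work via (26) and widens the realized margin via (28).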
in the case of equipment-based businesses, indirect costs are also a function of the in-hand equipment (b'' dollars per each dollar of investment in equipment) as an indicator of business size and the related maintenance costs:

ic'_{j,t} = y_j · (a' · e'_{j,t} + c') / (cb'_j + 0.5)    (30)
ic''_{k,t} = z_k · (a'' · e''_{k,t} + b'' · k''_{k,t} + c'') / (cb''_k + 0.5)    (31)

where c' and c'' indicate the fixed costs that occur in a period regardless of having a contract, simply because of the presence of that business, such as the cost of key personnel and staff. based on the above, the profit can be calculated as:

p'_{j,t} = e'_{j,t} - dc'_{j,t} - ic'_{j,t}    (32)
p''_{k,t} = e''_{k,t} - dc''_{k,t} - ic''_{k,t}    (33)

and the economic value added as:

eva'_{j,t} = p'_{j,t}    (34)
eva''_{k,t} = p''_{k,t} - wacc · k''_{k,t}    (35)

e. synergy of businesses

so far, the model has treated the businesses as independent. however, the links and synergy between them are a decisive factor in portfolio design. in other words, adopting a systemic look at the portfolio suggests that being or not being in one business can affect the parameters of another. if cb_i, cb'_j and cb''_k are the corporate's capability parameters in the level-1, level-2 and level-3 businesses respectively, a system approach implies that presence in one business can increase or decrease the capability level in another. we therefore define λ^{cb}_{m,p,n,q} ∈ [-1,1] as the capability impact factor of business m from level p on business n from level q. the effect of all other businesses on business n of level q can then be expressed by the "capability synergy function" for business n of level q, defined as:

g^{cb}_{n,q} = (Σ_{i=1..2} λ^{cb}_{i,1,n,q} · x_i + Σ_{j=1..3} λ^{cb}_{j,2,n,q} · y_j + Σ_{k=1..5} λ^{cb}_{k,3,n,q} · z_k) / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (36)

therefore, we will have:

cb_i = cb_i · (1 + g^{cb}_{i,1})    (37)
cb'_j = cb'_j · (1 + g^{cb}_{j,2})    (38)
cb''_k = cb''_k · (1 + g^{cb}_{k,3})    (39)

with the above function, the corporate's capability level in a business, previously treated as an independent parameter, is modified by the synergistic effect of the other businesses in the portfolio. in the same way, if s_i, s'_j and s''_k are the potential market shares in the level-1, level-2 and level-3 businesses, presence in other businesses can increase or decrease this share and change the market position. considering the links between businesses in a portfolio, we define λ^{s}_{m,p,n,q} ∈ [-1,1] as the market impact factor of business m from level p on business n from level q, and the "market synergy function" for business n from level q as:

g^{s}_{n,q} = (Σ_{i=1..2} λ^{s}_{i,1,n,q} · x_i + Σ_{j=1..3} λ^{s}_{j,2,n,q} · y_j + Σ_{k=1..5} λ^{s}_{k,3,n,q} · z_k) / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (40)

s_i = s_i · (1 + g^{s}_{i,1})    (41)
s'_j = s'_j · (1 + g^{s}_{j,2})    (42)
s''_k = s''_k · (1 + g^{s}_{k,3})    (43)

f. parenting cost and sharing benefits

by establishing a portfolio of businesses, new overhead costs, called "parenting costs", are generated. since these costs are directly related to the size of the corporate and its operation, we estimate them as a percentage (h) of the total income:

pc_t = h · (Σ_{i=1..2} e_{i,t} + Σ_{j=1..3} e'_{j,t} + Σ_{k=1..5} e''_{k,t})    (44)

moreover, given that these costs are mainly spent on strategic management, financial management, audits and controls, etc., we distribute them equally among the active businesses:
pc_{i,t} = x_i · pc_t / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (45)
pc'_{j,t} = y_j · pc_t / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (46)
pc''_{k,t} = z_k · pc_t / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (47)

conversely, by forming a business portfolio it becomes possible to share costs among some businesses. for this purpose, we define λ^{sb}_{m,p,n,q} ∈ [-1,1] as the sharing impact factor between business m at level p and business n at level q, and the "sharing benefit function" for business n from level q as:

g^{sb}_{n,q} = (Σ_{i=1..2} λ^{sb}_{i,1,n,q} · x_i + Σ_{j=1..3} λ^{sb}_{j,2,n,q} · y_j + Σ_{k=1..5} λ^{sb}_{k,3,n,q} · z_k) / (Σ_{i=1..2} x_i + Σ_{j=1..3} y_j + Σ_{k=1..5} z_k)    (48)

to apply the benefits of cost sharing in the model, we assume that the sharing function reduces the indirect costs of each business in each period. since this cost sharing cannot be unlimited and can only cover a part of the indirect costs, we cap it at a fraction θ of the indirect costs. therefore, we will have:

sb_{i,t} = min(g^{sb}_{i,1}, θ) · ic_{i,t}    (49)
sb'_{j,t} = min(g^{sb}_{j,2}, θ) · ic'_{j,t}    (50)
sb''_{k,t} = min(g^{sb}_{k,3}, θ) · ic''_{k,t}    (51)

considering parenting costs and cost-sharing benefits, the business profits are modified as follows:

p_{1,t} = r_{1,t} - ic_{1,t} + sb_{1,t} - pc_{1,t}    (52)
p_{2,t} = e_{2,t} - ox_{2,t} - ic_{2,t} + sb_{2,t} - pc_{2,t}    (53)
p'_{j,t} = e'_{j,t} - dc'_{j,t} - ic'_{j,t} + sb'_{j,t} - pc'_{j,t}    (54)
p''_{k,t} = e''_{k,t} - dc''_{k,t} - ic''_{k,t} + sb''_{k,t} - pc''_{k,t}    (55)

g.
objective function

according to the conceptual model, the objective functions are:

max z_1 = Σ_{t=1..4} [1 / (1 + wacc)^t] · (Σ_{i=1..2} eva_{i,t} + Σ_{j=1..3} eva'_{j,t} + Σ_{k=1..5} eva''_{k,t})    (56)

max z_2 = Σ_{t=2..4} (Σ_{i=1..2} (e_{i,t} - e_{i,t-1}) + Σ_{j=1..3} (e'_{j,t} - e'_{j,t-1}) + Σ_{k=1..5} (e''_{k,t} - e''_{k,t-1}))    (57)

min z_3 = Σ_{t=1..4} (Σ_{i=1..2} p_{i,t} + Σ_{j=1..3} p'_{j,t} + Σ_{k=1..5} p''_{k,t} - p̄)²    (58)

where p̄ is the mean total portfolio profit over the planning periods.

h. constraints

the first constraint is the limitation of resources for investment. if the total capital available for period t is c_t, then:

cx_{1,t} + cx_{2,t} + k''_{2,t} + k''_{4,t} ≤ c_t    (59)

moreover, the minimum required investment for level-1 businesses and for equipment-based businesses in level 3 can be defined as:

cx_{i,t} ≥ c_i(min)    (60)
k''_{k,t} ≥ k''_k(min)    (61)

for equipment-based businesses (k = 2, 4), we require that reinvestment in each period at least compensates the depreciation in that business (τ_k):

k''_{k,t+1} ≥ τ_k · k''_{k,t}    (62)

although size, the potential contribution of the market and capabilities remain decisive for equipment-based businesses ("drilling" and "construction and installation"), available equipment is also a limiting factor, because the services provided by these businesses depend on their equipment and machinery. hence, we apply the following limitation for these businesses (k = 2, 4):

e''_{k,t} ≤ η_k · z_k · Σ_{t'=1..t} k''_{k,t'}    (63)

where η_k is the "revenue generating ratio", defined for each equipment-based business (k = 2, 4) as the maximum annual revenue that can be generated per unit of investment in equipment and machinery. meanwhile, investing in a business is conditional on being present in it, so:

(1 - x_i) · cx_{i,t} = 0    (64)
(1 - z_k) · k''_{k,t} = 0    (65)

finally:

x_i, y_j, z_k ∈ {0,1}    (66)
all other variables ≥ 0    (67)

iv.
solving the model and results

to solve the model, three companies in the iranian oil and gas industry were studied (cases 1-3). each of these companies represents a type of corporate whose competitive advantage and capability lie mainly in level 1, 2 or 3 of the hierarchy presented in figure 2, respectively. in order to analyze the results better, the model is solved in two modes:

• system inter-relationship mode (si-mode): the links among businesses, in the form of market synergy, capability synergy, parenting costs and sharing benefits, are included in the model.

• elements independency mode (ei-mode): the relationships between businesses and their impact on each other are removed from the model, so all variables and parameters are considered independent of the other businesses in the portfolio. to solve in the ei mode, the values of λ^{cb}_{m,p,n,q}, λ^{s}_{m,p,n,q}, h, and θ are set to zero.

the fuzzy delphi method was used to determine the parameters of synergy, sharing benefits and the capability level of the case studies (tables iv-vi). the linguistic variables used for this purpose are the seven-point scale presented in table iii [29]. defuzzification of the values is done by the center of gravity (cog) method [30]. the other parameters used to solve the model are presented in tables vii and viii.

table iii. linguistic variables used in the research

linguistic variable    fuzzy number
very high (vh)         (0.9, 1, 1)
high (h)               (0.7, 0.9, 1)
medium high (mh)       (0.5, 0.7, 1)
medium (m)             (0.3, 0.5, 0.7)
medium low (ml)        (0.1, 0.3, 0.5)
low (l)                (0, 0.1, 0.3)
very low (vl)          (0, 0, 0.1)

table iv. market impact factor (λ^{s}_{m,p,n,q})

              q=1            q=2                   q=3
p    m      n=1    n=2    n=1    n=2    n=3    n=1    n=2    n=3    n=4    n=5    total
p=1  m=1    0.000  0.190  0.430  0.430  0.033  0.550  0.430  0.550  0.367  0.033  3.013
p=1  m=2    0.240  0.000  0.033  0.033  0.430  0.033  0.033  0.033  0.367  0.550  1.752
p=2  m=1    0.097  0.033  0.000  0.240  0.033  0.430  0.430  0.097  0.033  0.033  1.426
p=2  m=2    0.097  0.033  0.240  0.000  0.033  0.097  0.033  0.430  0.367  0.033  1.363
p=2  m=3    0.033  0.097  0.033  0.190  0.000  0.033  0.033  0.097  0.367  0.430  1.313
p=3  m=1    0.097  0.033  0.240  0.097  0.033  0.000  0.190  0.190  0.033  0.033  0.946
p=3  m=2    0.033  0.033  0.240  0.097  0.033  0.190  0.000  0.033  0.033  0.033  0.725
p=3  m=3    0.067  0.033  0.097  0.240  0.033  0.190  0.033  0.000  0.240  0.033  0.966
p=3  m=4    0.033  0.033  0.033  0.097  0.097  0.033  0.033  0.240  0.000  0.240  0.839
p=3  m=5    0.033  0.097  0.033  0.097  0.240  0.033  0.033  0.033  0.240  0.000  0.839
total       0.730  0.582  1.379  1.521  0.965  1.589  1.248  1.703  2.047  1.418

table v. capability impact factor (λ^{cb}_{m,p,n,q})

              q=1            q=2                   q=3
p    m      n=1    n=2    n=1    n=2    n=3    n=1    n=2    n=3    n=4    n=5    total
p=1  m=1    0.000  0.217  0.190  0.083  0.033  0.083  0.083  0.083  0.067  0.033  0.872
p=1  m=2    0.217  0.000  0.033  0.083  0.217  0.033  0.033  0.033  0.083  0.083  0.815
p=2  m=1    0.217  0.033  0.000  0.067  0.033  0.083  0.190  0.067  0.033  0.033  0.756
p=2  m=2    0.190  0.033  0.067  0.000  0.217  0.033  0.033  0.083  0.190  0.083  0.929
p=2  m=3    0.033  0.217  0.033  0.217  0.000  0.033  0.033  0.083  0.190  0.083  0.922
p=3  m=1    0.430  0.033  0.273  0.190  0.033  0.000  0.217  0.190  0.033  0.033  1.432
p=3  m=2    0.067  0.033  0.217  0.033  0.033  0.067  0.000  0.033  0.033  0.033  0.549
p=3  m=3    0.217  0.083  0.083  0.273  0.067  0.190  0.033  0.000  0.190  0.190  1.326
p=3  m=4    0.033  0.067  0.033  0.258  0.258  0.033  0.033  0.067  0.000  0.067  0.849
p=3  m=5    0.033  0.430  0.033  0.083  0.273  0.033  0.033  0.217  0.190  0.000  1.325
total       1.437  1.146  0.962  1.287  1.164  0.588  0.688  0.856  1.009  0.638

table vi. sharing impact factor (λ^{sb}_{m,p,n,q})

              q=1            q=2                   q=3
p    m      n=1    n=2    n=1    n=2    n=3    n=1    n=2    n=3    n=4    n=5    total
p=1  m=1    0.000  0.387  0.175  0.175  0.033  0.175  0.058  0.175  0.033  0.058  1.269
p=1  m=2    0.387  0.000  0.033  0.033  0.175  0.033  0.033  0.083  0.083  0.175  1.035
p=2  m=1    0.175  0.033  0.000  0.175  0.058  0.175  0.217  0.083  0.058  0.058  1.032
p=2  m=2    0.175  0.033  0.175  0.000  0.387  0.083  0.058  0.083  0.217  0.083  1.294
p=2  m=3    0.033  0.175  0.058  0.387  0.000  0.083  0.058  0.083  0.217  0.083  1.177
p=3  m=1    0.175  0.033  0.175  0.083  0.083  0.000  0.083  0.258  0.033  0.058  0.981
p=3  m=2    0.058  0.033  0.217  0.058  0.058  0.083  0.000  0.058  0.033  0.033  0.631
p=3  m=3    0.175  0.083  0.083  0.083  0.083  0.258  0.058  0.000  0.083  0.387  1.293
p=3  m=4    0.033  0.083  0.058  0.217  0.217  0.033  0.033  0.083  0.000  0.083  0.840
p=3  m=5    0.058  0.175  0.058  0.083  0.083  0.058  0.033  0.387  0.083  0.000  1.018
total       1.269  1.035  1.032  1.294  1.177  0.981  0.631  1.293  0.840  1.018

table vii. capability level of the cases in each business

parameter   cb_1   cb_2   cb'_1  cb'_2  cb'_3  cb''_1  cb''_2  cb''_3  cb''_4  cb''_5
case 1      0.78   0.78   0.40   0.22   0.22   0.40    0.78    0.40    0.08    0.08
case 2      0.40   0.22   0.78   0.92   0.92   0.78    0.08    0.92    0.08    0.92
case 3      0.08   0.08   0.08   0.22   0.22   0.08    0.03    0.50    0.92    0.60

table viii. model parameters

parameter      value            parameter       value
a_1            0.0002           cx              19,800
a_2            0.0002           ox              56
a'             0.0020           f               4.8
a''            0.0020           c_1(min)        75,000,000
b_1            0.0100           c_2(min)        150,000,000
b_2            0.0100           k''_2(min)      6,000,000
b''            0.0500           k''_4(min)      5,500,000
c_1            4,950,000        c_t (case 1)    250,000,000
c_2            3,700,000        c_t (case 2)    500,000,000
c'             3,050,000        c_t (case 3)    750,000,000
c''            3,050,000        η_2             1.62
h              0.031            η_4             1.12
τ_2/τ_4        0.5              θ               0.42
wacc           0.515            p_t             65
c_0            2,700,000,000    q_0             150,000
s'_1           0.05             m'_1            0.07
s'_2           0.05             m'_2            0.08
s'_3           0.10             m'_3            0.08
s''_1          0.05             m''_1           0.19
s''_2          0.05             m''_2           0.41
s''_3          0.15             m''_3           0.15
s''_4          0.15             m''_4           0.08
s''_5          0.10             m''_5           0.15
M'_1           32,000,000,000   M''_1           2,400,000,000
M'_2           48,000,000,000   M''_2           12,800,000,000
M'_3           25,000,000,000   M''_3           2,400,000,000
M''_4          25,550,000,000   M''_5           1,250,000,000

the model developed in this research is highly complex and np-hard, requiring meta-heuristic methods to solve. therefore, we used the non-dominated sorting genetic algorithm, one of the most efficient and widely applied multi-objective methods for obtaining good solutions [31]. studies have shown that the improved version of this algorithm (nsga-ii) has better performance and lower computational complexity than the previous versions, and provides better answers than other presented algorithms [32]. using this algorithm, the model is solved in matlab for each of the three cases in both ei and si modes. the initial population is 200, the number of generations is 100, and the crossover rate is 0.8. the chromosome defined for solving this model is a string of length 26, 10 of which (to enter or not to enter a business) are boolean variables; the rest (investment variables) are continuous. the objective function values for the solution sets are shown in figures 3 to 8 in the form of pareto fronts. second-order (square) polynomial fitting is used in order to gain a better picture of the answers.
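such a second-order polynomial fit can be made robust to outlying points with tukey's bisquare weights via iteratively reweighted least squares. the sketch below is an illustrative, stdlib-only reimplementation, not the matlab routine used by the authors; the constants 4.685 and 1.4826 are the conventional bisquare tuning value and MAD scale factor, assumed here rather than taken from the paper:

```python
def _solve3(a, b):
    # gaussian elimination with partial pivoting for a 3x3 system a*x = b
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def weighted_quadfit(xs, ys, w):
    # weighted least squares for y = c0 + c1*x + c2*x^2 (normal equations)
    a = [[sum(wi * xi ** (p + q) for wi, xi in zip(w, xs)) for q in range(3)]
         for p in range(3)]
    b = [sum(wi * yi * xi ** p for wi, xi, yi in zip(w, xs, ys)) for p in range(3)]
    return _solve3(a, b)

def bisquare_quadfit(xs, ys, iterations=5, c=4.685):
    # iteratively reweighted least squares with tukey bisquare weights:
    # points with large residuals are down-weighted, so out-of-range
    # answers pull less on the fitted pareto-front curve
    w = [1.0] * len(xs)
    for _ in range(iterations):
        c0, c1, c2 = weighted_quadfit(xs, ys, w)
        r = [yi - (c0 + c1 * xi + c2 * xi * xi) for xi, yi in zip(xs, ys)]
        s = 1.4826 * sorted(abs(ri) for ri in r)[len(r) // 2]  # ~MAD scale
        if s == 0:
            break
        w = [(1 - (ri / (c * s)) ** 2) ** 2 if abs(ri) < c * s else 0.0
             for ri in r]
    return c0, c1, c2

# hypothetical data: a clean parabola plus one out-of-range point;
# the robust fit recovers coefficients close to (0, 0, 1)
xs = list(range(10))
ys = [x * x for x in xs]
ys[5] += 50
c0, c1, c2 = bisquare_quadfit(xs, ys)
```

an ordinary least-squares fit would let the single outlier bend the whole curve; the bisquare reweighting drives its weight towards zero after a few iterations.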
in order to improve the results and reduce the impact of out-of-range answers, the bi-square method has been used for robust fitting.

fig. 3. pareto front of objective functions for the first case, si mode
fig. 4. pareto front of objective functions for the first case, ei mode
fig. 5. pareto front of objective functions for the second case, si mode
fig. 6. pareto front of objective functions for the second case, ei mode
fig. 7. pareto front of objective functions for the third case, si mode
fig. 8. pareto front of objective functions for the third case, ei mode

in figures 9-11, the values of each objective function for the three cases are shown in both si and ei modes.

fig. 9. z1 for the three cases in si and ei modes
fig. 10. z2 for the three cases in si and ei modes
fig. 11. z3 for the three cases in si and ei modes

in order to analyze the results obtained for the decision variables, which represent entering or not entering a business, the simple mean of the values in each solution set is calculated and presented separately for the ei and si modes in figures 12 and 13.

fig. 12. enter/not enter decision variable for the three cases, si mode
fig. 13. enter/not enter decision variable for the three cases, ei mode

v. discussion and conclusion

the main achievement of this research is the structuring and formulation of qualitative concepts of corporate strategy in the form of a quantitative optimization model. it can support top management in the process of strategic decision making and designing the business portfolio. analyzing the synergistic impact factors shows that having the engineering business in the portfolio has the greatest impact on improving the capability level of the other businesses.
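the impact factors analyzed here enter the model through the synergy functions (36), (40) and (48), which are activity-weighted averages of the tabulated values, after the linguistic delphi assessments are defuzzified by the center-of-gravity rule. the following sketch is illustrative only; the portfolio vector and impact values are hypothetical, and the centroid formula assumes triangular fuzzy numbers as in table iii:

```python
def cog(tfn):
    # center-of-gravity defuzzification of a triangular fuzzy number
    # (a, b, c): the centroid of the triangle is (a + b + c) / 3
    a, b, c = tfn
    return (a + b + c) / 3.0

def synergy(impacts, activity):
    # eqs. (36)/(40)/(48): weighted average of the impact factors of all
    # active businesses on the target business; `impacts` holds the
    # lambda_{m,p,n,q} values, `activity` the 0/1 decisions x_i, y_j, z_k
    total_activity = sum(activity)
    if total_activity == 0:
        return 0.0
    return sum(l * a for l, a in zip(impacts, activity)) / total_activity

# "medium" from the seven-point scale defuzzifies to 0.5
m = cog((0.3, 0.5, 0.7))
# hypothetical: three of four businesses active, with made-up impacts
g = synergy([0.2, -0.1, 0.4, 0.3], [1, 1, 0, 1])
cb_modified = 0.78 * (1.0 + g)  # eq. (37): capability after synergy
```

negative impact factors lower the synergy function and hence shrink the modified capability, which is how crowding-out effects between businesses would show up.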
on the other hand, the "development and production" and "refining" businesses, followed by the level-2 businesses, have the most positive impact on increasing the market share of the other businesses. such a pattern shows that specialized knowledge cores, which in the studied industry mean engineering capabilities, play a significant role in enhancing the firm's capabilities even in upstream businesses. hence, maintaining such businesses in a vertical integration structure instead of outsourcing them is feasible. for example, engaging in "subsurface engineering" (k=1), which provides capabilities such as reservoir engineering and geology, can affect and promote the firm's capabilities in "development and production" (i=1). moreover, entering the upstream businesses of the value chain can create internal markets for the downstream businesses. in other words, in addition to its direct benefits, investing in an upstream-level business can indirectly generate new sources of income for the downstream businesses of a portfolio. nevertheless, such benefits should be evaluated against the costs incurred by entering the business, the required investment and the parenting costs needed for holding management. legal barriers or strategic directions for assigning work to subsidiaries should also be considered in this regard. the findings also show that the portfolio of the level-2 firm (case 2) performs better from the value-adding point of view than the level-1 company (case 1). although these results are obtained within our defined assumptions and constraints, they show that presence in higher-level businesses does not necessarily mean more economic value added.
corporate managers should choose the optimum portfolio by considering internal factors such as the level of capabilities and external factors such as market potential, as well as the interactions among businesses and their impact on each other. besides, the optimal diversification-concentration strategy for each type of corporation can be proposed based on the results:

• type 1 (high level of competitive advantage and capabilities in level-1 businesses): the portfolio should be formed with a focus on level-1 capital-based businesses, including "development and production" and "refining", along with outsourcing of level-2 e&c operations. in order to play a better role as an employer, protect corporate investments, benefit from internal markets and reduce risk, a "limited and relevant diversification" can be considered by entering some level-3 businesses, especially engineering businesses.

• type 2 (high level of competitive advantage and capabilities in level-2 businesses): the best strategy for these firms is "diversification". these firms are better placed in all level-2 e&c businesses, upstream and downstream. it is also better for these companies to include "engineering" in their business portfolio, as it makes them more capable and better performing in level-2 contracting tasks.

• type 3 (high level of competitive advantage and capabilities in level-3 businesses): the appropriate strategy for these firms is "focus". these firms should refrain from engaging in field development and production contracts, investment, ownership of refineries and level-2 e&c businesses. instead, they should focus on those specialized engineering services in which they are more capable and competitive.

the findings also confirm that the performance of the business portfolio (as a whole) differs from the total revenue-expenditure structure of the individual businesses.
presence in a business can change the components of another business and, consequently, its revenue-expenditure structure. this effect does not necessarily mean that the values of the objective functions improve when considering the whole portfolio, but it provides a more realistic picture of the portfolio's value-adding potential.

vi. future work suggestions

given the wide range of stakeholders and decision criteria, further objectives could be included in prospective models, especially non-financial and non-economic ones such as environmental impacts or sustainable development, which have considerable importance in decision making for the oil industry. criteria such as employment creation and energy security, which are of interest to national oil companies, could also be considered in future optimizations. moreover, the dynamics of the relationships between variables over time can be incorporated in the model; for example, a continuous presence in a business can improve the market position in an upcoming period, providing the firm with a competitive advantage over a newcomer. in addition, many of the variables used here as deterministic could be defined as probabilistic, turning the model into a probabilistic optimization model in which the risk concept comes closer to real situations by covering issues beyond the fluctuations of profitability considered in this research.

references
[1] l. g. franko, "the death of diversification? the focusing of the world's industrial corporates, 1980-2000", business horizons, vol. 47, no. 4, pp. 41-50, 2004
[2] r. m. grant, contemporary strategy analysis: text and cases edition, john wiley & sons, 2016
[3] p. ghemawat, "competition and business strategy in historical perspective", business history review, vol. 76, no. 1, pp. 37-74, 2002
[4] r. a. proctor, j. s. hassard, "towards a new model for product portfolio analysis", management decision, vol. 28, no. 3, 1990
[5] r. mansini, w. ogryczak, m. g. speranza, "twenty years of linear programming based portfolio optimization", european journal of operational research, vol. 234, no. 2, pp. 518-535, 2014
[6] g. johnson, k. scholes, r. whittington, exploring corporate strategy: text & cases, pearson education, 2008
[7] m. e. porter, "from competitive advantage to corporate strategy", in: managing the multibusiness company: strategic issues for diversified groups, cengage learning emea, 1996
[8] j. r. graham, m. l. lemmon, j. g. wolf, "does corporate diversification destroy value?", the journal of finance, vol. 57, no. 2, pp. 695-720, 2002
[9] o. a. lamont, c. polk, "does diversification destroy value? evidence from the industry shocks", journal of financial economics, vol. 63, no. 1, pp. 51-77, 2002
[10] j. d. martin, a. sayrak, "corporate diversification and shareholder value: a survey of recent literature", journal of corporate finance, vol. 9, no. 1, pp. 37-57, 2003
[11] j. a. doukas, o. b. kan, "does global diversification destroy corporate value?", journal of international business studies, vol. 37, no. 3, pp. 352-371, 2006
[12] r. p. rumelt, "diversification strategy and profitability", strategic management journal, vol. 3, no. 4, pp. 359-369, 1982
[13] s. a. mansi, d. m. reeb, "corporate diversification: what gets discounted?", the journal of finance, vol. 57, no. 5, pp. 2167-2183, 2002
[14] n. berger, j. d. cummins, m. a. weiss, h. zi, "conglomeration versus strategic focus: evidence from the insurance industry", journal of financial intermediation, vol. 9, no. 4, pp. 323-362, 2000
[15] s. p. ferris, n. sen, c. y. lim, g. h. yeo, "corporate focus versus diversification: the role of growth opportunities and cashflow", journal of international financial markets, institutions and money, vol. 12, no. 3, pp. 231-252, 2002
[16] s. b. suslick, d. schiozer, m. r. rodriguez, "uncertainty and risk analysis in petroleum exploration and production", terræ, vol. 6, no. 1, pp. 30-41, 2009
[17] d. p. fichter, "application of genetic algorithms in portfolio optimization for the oil and gas industry", spe annual technical conference and exhibition, dallas, texas, usa, october 1-4, 2000
[18] m. shakhsi-niaei, s. h. iranmanesh, s. a. torabi, "a review of mathematical optimization applications in oil-and-gas upstream & midstream management", international journal of energy and statistics, vol. 1, no. 2, pp. 143-154, 2013
[19] s. v. de barros bruno, c. sagastizabal, "optimization of real asset portfolio using a coherent risk measure: application to oil and energy industries", optimization and engineering, vol. 12, no. 1-2, pp. 257-275, 2011
[20] q. xue, z. wang, s. liu, d. zhao, "an improved portfolio optimization model for oil and gas investment selection", petroleum science, vol. 11, no. 1, pp. 181-188, 2014
[21] z. lin, j. ji, "the portfolio selection model of oil/gas projects based on real option theory", in: lecture notes in computer science, vol. 4489, pp. 945-952, springer, berlin, heidelberg, 2007
[22] k. sharma, s. kumar, "economic value added (eva) - literature review and relevant issues", international journal of economics and finance, vol. 2, no. 2, pp. 200-200, 2010
[23] d. fountaine, d. j. jordan, g. m. phillips, "using economic value added as a portfolio separation criterion", quarterly journal of finance and accounting, vol. 47, no. 2, pp. 69-81, 2008
[24] p. modesti, "eva and npv: some comparative remarks", mathematical methods in economics and finance, vol. 2, pp. 55-70, 2007
[25] m. chima, d. hills, "supply-chain management issues in the oil and gas industry", journal of business & economics research, vol. 5, no. 6, pp. 27-36, 2011
[26] s. tordo, b. s. tracy, n. arfaa, natural oil companies and value creation, world bank working paper no. 218, the world bank, washington dc, 2011
[27] y. y. yusuf, a. gunasekaran, a. musa, m. dauda, n. m. el-berishy, s. cang, "a relational study of supply chain agility, competitiveness and business performance in the oil and gas industry", international journal of production economics, vol. 147b, pp. 531-543, 2014
[28] m. s. peters, k. d. timmerhaus, r. e. west, plant design and economics for chemical engineers, mcgraw-hill education, 2003
[29] t. y. chen, t. c. ku, "importance-assessing method with fuzzy number-valued fuzzy measures and discussions on tfns and trfns", international journal of fuzzy systems, vol. 10, no. 2, pp. 92-103, 2008
[30] w. van leekwijck, e. e. kerre, "defuzzification: criteria and classification", fuzzy sets and systems, vol. 108, no. 2, pp. 159-178, 1999
[31] konak, d. w. coit, a. e. smith, "multi-objective optimization using genetic algorithms: a tutorial", reliability engineering & system safety, vol. 91, no. 9, pp. 992-1007, 2006
[32] k. deb, s. agrawal, a. pratap, t. meyarivan, "a fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: nsga-ii", lecture notes in computer science, vol. 1917, pp. 849-858, springer, berlin, heidelberg, 2000

engineering, technology & applied science research vol. 9, no. 6, 2019, 4917-4924 | www.etasr.com | naeem et al.: a review of shaped charge variables for its optimum performance

a review of shaped charge variables for its optimum performance

khalid naeem, school of chemical and materials engineering, national university of sciences and technology, islamabad, pakistan (khalid.phd@scme.nust.edu.pk)
arshad hussain, school of chemical and materials engineering, national university of sciences and technology, islamabad, pakistan (principal@scme.nust.edu.pk)
shakeel abbas, al-technique corporation of pakistan, islamabad, pakistan (shakeelabbas@hotmail.com)

abstract—a shaped charge is a device for focusing the chemical energy of explosives onto a particular point or line, for penetration or cutting purposes respectively. shaped charges are used for the penetration or cutting of various types of targets on land, on water, underground, underwater, or in the air. their shape is either conical or linear, and they consist of explosive, casing and liner. the liner is bent towards the central axis, producing a thin hypervelocity jet from the energy released by the explosive detonation. this jet is utilized against the target. shaped charges can perforate or penetrate targets like aircraft, ships, submarines, armored vehicles, battle tanks, and bunkers. this paper presents a detailed review of analytical works, computer simulations, and experimental results related to the liner. among modern diagnostic techniques, flash x-ray radiography is the best and most widely used in the experiments performed in the last 40 years. powder metallurgy, which started in the late twentieth century, raised the efficiency of shaped charges to new heights. the efficiency of a shaped charge depends on numerous factors such as the explosive's type, the liner's material, geometry and metallurgy, the manufacturing technique, and the casing thickness. factors concerning the liner's material, metallurgical advancements, and geometry are discussed chronologically and in detail.
keywords-conical liners; hemispherical liners; sintered powder liners; rolled homogenous armor

i. introduction

a shaped charge is an explosive cylinder with a cavity at the end opposite to the detonation point. the shape of the cavity may vary from conical to bow-shaped. the cavity in the explosive is either empty or may contain a liner made of metal, alloy or any other material, configurations known as hollow charge and shaped charge respectively. a typical shaped charge is shown in figure 1. the shaped charge is known by different names in different parts of the globe, for example hohlladung in germany [1] and cumulative charge in the soviet union [2]. liners for shaped charges are manufactured according to the requirements in the form of a cone, parabola, hemisphere, v-shape or any other suitable form [3]. the v-shaped liner, also known as a linear cutting charge, is used mainly for demolition purposes [4]. detonating a hollow cylindrical charge on or near the target produces a deeper cavity in the target than a cylindrical explosive without a cavity. this is known as the neumann or munroe effect [5]. in the case of a lined cavity and axially symmetric geometry, an outgoing spherical detonation wave is produced from the detonation point. the detonation wave moves toward the liner at a speed of 5000 to 8000 m/s, dictated by the type of explosive [6]. the detonation wave exerts a pressure in the range of twenty to two thousand gpa on the liner, which behaves as an inviscid incompressible fluid [7]. this pressure causes the liner to collapse at a strain rate of 10^4 s^-1 to 10^7 s^-1. the collapse of the liner starts from the apex toward the centerline to form a jet. as the detonation wave progresses along the liner towards the base, a big chunk of the liner flows into a slug. typical velocities of the jet and slug are 10,000 and 1,000 m/s respectively [3]. this velocity gradient produces elongation in the jet, eventually causing necking or breakup of the jet [8].
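as a back-of-the-envelope illustration of the velocity gradient just described (simple kinematics, not a formula from the review), a jet whose tip moves at about 10,000 m/s while its rear moves at about 1,000 m/s stretches linearly with time until it necks and particulates; the initial length below is hypothetical:

```python
def jet_length(initial_length, v_tip, v_tail, t):
    # simple kinematic stretch: the tip outruns the tail, so the jet
    # elongates by (v_tip - v_tail) * t before necking and breakup
    return initial_length + (v_tip - v_tail) * t

# typical velocities from the text: tip ~10,000 m/s, slug/tail ~1,000 m/s;
# a hypothetical 0.1 m jet grows to ~1 m after 100 microseconds
stretch = jet_length(0.1, 10_000.0, 1_000.0, 100e-6)
```

the roughly tenfold elongation over tens of microseconds is what makes the breakup-time formulas discussed later in the review so important for penetration performance.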
the high-velocity jet exerts pressure on the target which exceeds the target strength, resulting in penetration [9]. at these high velocities the penetration of the jet is independent of the target strength and instead depends on its density [10]. the cavity is produced by the lateral displacement of material under intense pressure. the cavity deepens further when the shaped charge is moved a certain optimal distance away from the target, known as standoff (so). the so depends on the size, geometry and type of explosive used in the shaped charge [11]. shaped charges are also utilized for hypervelocity impact in space-related studies [12].

fig. 1. cutaway view of a shaped charge with a target

corresponding author: khalid naeem

ii. history of shaped charges

the history of shaped charges begins with the use of the unlined cavity charge by franz in 1792 using black powder, which deflagrates. at that time there was no concept of detonation or detonators. after the invention of the detonator, foerster in 1833 successfully detonated an unlined cavity charge [13]. the detonating caps patented by bloem can be regarded as the first lined cavity [14]. later on, munroe also performed experiments on unlined and lined cavities in the united states [15]. to optimize the performance of shaped charges, multiple variables have been studied individually or in parallel: casing material, casing thickness, high explosive type, liner-to-explosive mass ratio, cone angle, manufacturing technique of the liner, insertion of a wave-shaper, grain size control, liner material, thickness, geometry, so distance, bimetallic liners and multi-material liners. optimization by wave-shaper and its effect on the detonation wave-front was studied in [16].
a detailed treatment of shaped charges can be found in [17]. the current article focuses particularly on the utilization of different materials for the liner, metallurgical advancements, and the geometry of the liner.

iii. materials for the liner of a shaped charge

at first, the material employed for liner fabrication was copper (cu). its selection was based on its good elongation and moderate density [18]. it was found that steel liners with thickness up to 1 mm perform better against armor plates than grey-iron castings with thickness up to 1.5 mm [19]. the microstructures of cu and tantalum (ta) liners were compared in [20]. the density of the liner plays an important role, because penetration is directly proportional to the square root of the jet density [21]:

p ∝ √ρ_jet    (1)

the exact form of the equation is:

p = λ · l · √(ρ_jet / ρ_target)    (2)

where p, l, λ and ρ denote penetration, jet length, a constant with value between 1 and 2, and density respectively [22]. it was found that the performance of brass and lead (pb) is not better than cu's, and that cu with smaller grain performs better than cu with larger grain. an alloy of cu and tungsten (w) was used for the liner in [23] and the experiments were observed using flash x-ray (fxr) radiography. improved penetration for this alloy was observed due to the increased density and break-up time. the effects of jet rotation and so on penetration were also studied. because of its higher density, molybdenum (mo) was tested as a liner in [24, 25]. at high strain rates the jet penetration depends on the square root of the jet density and on the jet length, as shown in (2); therefore, using high-density materials like mo and w gives greater penetration in comparison to low-density liners. tungsten liners were simulated and studied analytically in [26]. a shaped charge with reduced slug was patented in [27]: the slug was reduced by introducing a double-layer liner, in which the layer contacting the explosive disintegrates, reducing the mass of the slug, while the second layer produces the jet.
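the hydrodynamic penetration law in (2) is straightforward to evaluate; the jet length, densities and λ below are illustrative values for a rough comparison, not data from the cited experiments:

```python
import math

def penetration(jet_length, rho_jet, rho_target, lam=1.0):
    # eq. (2): p = lam * l * sqrt(rho_jet / rho_target), with lam a
    # constant between 1 and 2 [22]
    return lam * jet_length * math.sqrt(rho_jet / rho_target)

# illustrative: a 0.4 m copper jet (8960 kg/m^3) vs a steel target
# (7850 kg/m^3), compared with a denser molybdenum jet (10220 kg/m^3)
depth_cu = penetration(0.4, 8960.0, 7850.0)
depth_mo = penetration(0.4, 10220.0, 7850.0)
```

with equal jet length, the denser mo jet penetrates deeper than the cu jet, which is the density argument behind testing mo and w liners.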
the layer contacting the explosive disintegrates, reducing the mass of the slug, while the second layer produces the jet. a liner made of al was used in [28] for penetration in concrete. the development of an al jet and its penetration was modeled with the hydrodynamic tool cale. copper liners preheated to 400-600°c were investigated in [29]. it was found that heating increased the penetration power. a liner of mo was used in [30]. the material was analyzed with the analytical tool jetform, with the hydro-code grim and experimentally. the liner was manufactured using six different techniques. the jets captured with fxr were compared with the analytical and hydro-code results and were found in close agreement. cu liners were used in [31-32]. jet formation, elongation and particulation for aluminium (al), nickel (ni), cu, mo, ta, uranium and w as liner materials were studied in [31] and the conclusion was that cu, mo and w have the better properties for a shaped charge liner. silver, zirconium, titanium and depleted uranium (du) were tested as liners in [33]. all produced a ductile jet having a longer breakup time than cu, and it was concluded that these four materials give better performance at longer so when compared to cu. the use of du for liners was abandoned on the basis of its toxicity and radiation hazards. a study of a liner made of a w-cu-ni alloy was carried out in [34]. cu with 20% pb was investigated with a 2-d hydro-code in [35] with the aim of creating a bigger hole in the well casing and a small hole in the firing gun at the same time. because of their high densities and high or moderate sound velocities, mo, ta, w and their alloys were studied in [36] for liners of shaped charges and explosively formed penetrators. the jet breakup was studied experimentally and numerically in [37]. the formula for the jet breakup time published in 1979 was further investigated in [38] using a cu liner.
the effects on the constants used in the formula of changing the liner material, thickness, metallurgical state and type of explosive were also included. al, cu and w liners were investigated analytically and experimentally against hard rocks and concrete in [39]. the effect of current flow in a cu jet was investigated in [40]. it was found that when a high current is injected in the liner the jet tip is less affected, whereas fxr revealed distortion of the jet where the jet particles are not aligned along the jet axis. the effect of a magnetic field on a cu liner showed that increasing the field decreases the penetration [41]. the cu liner was further analyzed in [42] using autodyn. changes in the shape of the jet caused by detachment of the explosive from the casing, air bubbles inside the explosive, eccentric initiation and dimensional inaccuracies in the liner were analyzed at different so distances. a tandem warhead consists of two warheads: the precursor fitted at the front and the main warhead fitted at some distance behind it. it is mostly used against armored vehicles protected by explosive reactive armor (era). the precursor of the tandem detonates the era mounted on the armored vehicle and the main warhead then attacks the bare armored vehicle. cu, al and steel liners were tested experimentally in [43] considering their incorporation in tandem warheads against concrete. it was found that the cu liner produces a deeper hole of smaller diameter when compared to the al liner, while the hole diameter and depth produced by the steel liner are intermediate between those of cu and al. a comparative numerical and experimental study of cu and cu-w liners with the same charge-to-liner mass ratio was carried out in [44]. the tip velocities of the jets obtained from both liners were the same in air, but when the target was submerged in water the jet tip velocity of the cu-w liner decreased more slowly than that of the cu liner.
zirconium was utilized as a liner material in [45]. bimetallic liners were studied in [46], whereas bimetallic and reactive liners were studied experimentally in [47]. it was concluded that a single-metal liner is better than a bimetallic liner. multi-material liners were studied numerically in [48] and it was suggested that they can be utilized for military purposes or in well perforation. the effect of a magnetic field on the penetration of a magnetized liner was analyzed in [49]. jet penetration was reduced in the presence of an axial magnetic field in the liner just before the shot, and of a transverse field in the conducting target. the decreased penetration was caused by the amplification of the magnetic field in the jet formation region. a study of the jet formation in the initial stages from al liners was conducted in [50]. the simulated results were validated by experiments. ofhc (oxygen free high conductivity) copper was used in both simulations and experiments against steel submerged in water behind an air gap in [51]. polytetrafluoroethylene (ptfe) with added cu powder was used in [52] as a shaped charge jet material. cu powder was added to improve the penetration and mechanical properties of normal ptfe. the improved mechanical properties were verified by static compression and split hopkinson pressure bar tests. the interaction of a cu-w jet with the target material was investigated in [53]. for this purpose, three target materials, cu, carbon steel and ti-6al-4v alloy, were selected for penetration. the penetration depth ratio was found to be 10:28:21 respectively, which indicates that the same jet behaves differently when the target material is changed.
ofhc cu was utilized in [54] in order to study the effects of drift velocity and so distance between jet particles on penetration. the addition of zinc (zn) and ni in the cu-w alloy was studied in [55]. it was found that w-cu-zn and w-cu-ni alloys have decreased target penetration because of the transverse dissipation of jet energy. the addition of zn in the alloy decreased the melting point of the matrix, making it easier to melt and squeeze out; this reduced the lubrication effect of the matrix and facilitated the interaction between the w particles and the targets. the addition of ni in the alloy increased the melting point of the matrix phase, making it harder to melt, and as a result the lubrication effect also decreased. the interaction between jets and targets and the transverse dissipation of the jet energy are consequences of the reduced lubrication, leading to decreased penetration. the damage mechanism in thick concrete was investigated numerically and experimentally in [56] with al and cu liners. experiments showed that the conical al liner had higher destructive power than the cu liners in concrete. al and cu were also investigated numerically in [57] using ls-dyna. less penetration but a bigger hole diameter in concrete-like targets was observed for the al liner. low-density materials, namely float glass, lucite and perspex, were simulated as precursor shaped charge liners for a tandem warhead against era in [58]. simulation by autodyn-2d showed that the jets from low-density liners were better than the cu liner's in terms of jet tip velocity, ductility and jet tip diameter. these low-density liners did not create any hindrance to the main jet, so they can be utilized as precursor liner materials against era. steel, al and cu were utilized against spaced and layered concrete in [59]. the length and diameter of the shaped charges were 60mm and the target was 120mm away.
it was found by experiments and simulations that for steel and cu liners having a cone angle smaller than 120°, the spaced target provided better protection than the layered target, but this was not true for the al liner. considering cost effectiveness, cu and al are the best choices for rha and concrete-like targets respectively. for greater penetration, high-density materials like mo, w and du are used, but they are toxic and, above all, du is radioactive. against composite and spaced armor, reactive metal and reactive metal powder liners perform better than the earlier mentioned ones. recently, research has focused on liners made of reactive materials, which produce enhanced damage to the target [60].

iv. metallurgical advancements

metallic liners are mostly formed by machining, die-pressing or rolling. a liner produced by such techniques collapses into two distinct parts, the jet and the slug. the slug is heavier and slower than the jet. the low-velocity slug pulls the jet from behind, decreasing its performance, and in some cases it fills the hole created by the jet, which is undesirable in the perforation of well casing. this drove researchers to powder metallurgy, where liners are manufactured from metal powder by sintering, the process of compacting granules into a solid material by heat or pressure without melting [61]. early powder metallurgy was intended for high-strength materials [62]. initial experiments on liners made of a mixture of w or titanium powder with carbon were conducted in [63]. liners fabricated from metal powder were studied in [64] using fxr. the hole diameter and penetration in rolled homogeneous armor (rha) were investigated at different stand-off distances. various theoretical aspects encountered when interpreting the penetration of the jet from an un-sintered powdered metal liner were discussed in [65]. the penetration model was improved by incorporating the jet's porous compressible nature.
experimental and numerical data on powder liners made of cu and cu-w are available in [66]. it was found that powder metallurgy technology has the potential to manufacture liners of small calibres up to several dozen millimetres and enables designing and manufacturing liners in various shapes and chemical compositions, which is beyond the scope of traditional manufacturing technology [66, 67]. electroformed cu liners were analyzed in [68] with backscattered kikuchi patterns and optical microscopy. to get different grain sizes, the liners were annealed at 410, 530 and 650°c. grain growth was at first uniform and normal, but became different and abnormal at ascending temperatures. the performance of a w-cu powder liner is given in [69]. the penetration depths of non-sintered and sintered powder liners and of a spinning cu plate liner were tested at various sos, showing that the sintered powder liner performed better. a scanning electron microscope was used to study the morphology of the liners. a powder liner of ni-al was investigated in [70]. fxr was used to photograph the jet, whereas x-ray diffraction and optical and electron microscopy were utilized to investigate the reactive behavior of the liner. the effect of porosity on powder liners was investigated in [71]. three types of liners made of cu, al and cu-w-pb with different cone angles and sos were tested. it was found that penetration increases linearly in metal targets whereas the diameter of the entrance hole decreases linearly with decreasing al content in the liner material. a detailed study of sintered and non-sintered cu liners with particle size below 20μm is given in [72].
examination showed that the sintered powder liner of cu has higher purity and density, lower wall thickness and better penetration compared to the non-sintered one. to get a reactive liner, al powder was sprayed over cu [73]. the reactivity was confirmed by the recovered jet. phase transformation also took place, as traces of γ-alumina and α-alumina were found. a cu liner was sprayed with al-ni powder using a kinetic spray deposition method for an enhanced reactive liner in [74]. the formation and penetration of a jet composed of a polymer-based reactive liner were studied numerically and observed by fxr in [75]. the liners were made by cold isostatic pressing at a pressure of 250mpa. the numerical results were in good agreement with the experiments. a straight and continuous jet was formed from the reactive liner. the jet formation and particulation times for polymeric reactive liners are shorter than for cu liners due to lack of ductility. the hole produced by the reactive liners had a bigger diameter and less depth compared to the one produced by cu. the penetration of a jet having variable density distribution was studied in [76]. a jet having variable density was produced from an un-sintered cu-w powder liner. a new analytical model was presented to describe this variable-density jet phenomenon by incorporating modifications in the earlier established virtual origin model. the analytical model was validated by numerical and experimental results. the behavior of reactive liners and their demolition power was investigated in [77]. experiments, numerical simulation and theoretical analysis were carried out for this purpose. liners manufactured by pressing and sintering of al-ptfe in 26.5/73.5 weight percent caused much greater collateral damage to the target due to the release of the chemical energy contained in reactive materials. the effect of density variation on penetration was studied in [78].
to achieve different densities of the same material, liners were manufactured by cold isostatic and uniaxial pressing. liners produced by the former technique have no density variation, whereas those produced by the latter have a density variation along the liner height. autodyn was utilized to study the effects of metal powder liner density. it was numerically found and experimentally confirmed that liners produced by uniaxial pressing are more efficient against rha targets than isostatically pressed ones. new manufacturing techniques and new alloys were employed with the development of metallurgy. the new techniques are robust and reproducible in comparison to previous techniques like machining, rolling etc., and the new alloys perform better. the grain size was controlled for better performance. currently, graphene powder is pressed to get shaped charge liners [79]. isostatically and die-pressed liners were compared and it was found that higher density is achieved by the former, while the latter is more suitable for liner mass production. sintered powder metal liners need to be further investigated as there is room for improvement.

v. liner geometries

the shape or geometry of a liner is an important factor regarding penetration [80]. the shape can be optimized according to the nature of the target, the stand-off distance and the required hole diameter. shape manipulation started as early as the history of the lined cavity. the liner was introduced by munroe and neumann, but it cannot be concluded who discovered it first. the wasag (westfalische anhaltische sprengstoff actien gesellschaft) patents of 1911 are about the lined cavity [19]. the british navy studied liners for shaped charges for torpedoes in 1913. scaled conical shaped charges with an apex angle of 42° were studied in 1960 [81]. work on conical liners was carried out in 1963 [82] and a new analytical method of computing penetration variables for shaped-charge jets was found.
in the same year, open-apex conical liners with varied cone angles were tested to obtain a hypervelocity pellet [83]. the ways to isolate the tip of a shaped charge jet from the rest of the jet and the slug were discussed. in [84], conical liners of 60° and 42° were utilized to find their penetration in granite. magnetic field lines can be traced out by barium (ba) ions, because sunlight is resonantly scattered by the ions into several visible wavelengths and the ions move in a spiral path about the magnetic field lines when travelling parallel to them. to study this effect, hollow conical liners of ba were detonated at an altitude of 500km [85]. the study in [1] was conducted on 42° conical liners to find which portion of the cone contributes to the tip of the jet. experiments were performed on cones with the apex filled to various heights to obtain an efficient jet for penetration. the study of conical liners was pushed further by more recent research. in [86], authors observed that the ductility of metals like al and cu increases by an order of magnitude when the strain rate is increased. the same was supported by the finding that the jet elongation at breakup is proportional to the jet velocity gradient times the breakup time. this gave a correct size prediction of the average fragment dimension, and a formula for the breakup time of the jet from conical liners was deduced. a unified approach for the penetration of conical shaped charges is given in [87]. fxr was used to find the shape of the jet at specified intervals and the results were compared with the analytical predictions. small conical shaped charges were utilized to initiate explosives in [88]. an analytical model for the calculation of the dynamic penetration of inclined shaped charges at some arbitrary standoff and for various obliquities of the target plate was given in [89].
the obtained analytical model can be easily applied to get penetration estimations by changing the parameters of the shaped charge and target. hemispherical shaped charges with tapered liners were introduced in [90]. numerical simulations of hemispherical and conical liners were conducted in [91] using the help and epic hydro-codes. the results predicted by the hydro-codes were confirmed by the experiments. authors in [92] showed that the maximum depth of penetration of a jet can be calculated theoretically using the tip velocity of the jet, the distance of the target plate from the virtual origin of the jet, the particulation time, the efficient residual velocity and the square root of the ratio of target to jet density. to prove the above relations, experiments were conducted with 150mm base diameter shaped charges at 6, 12 and 24 calibres stand-off against rha. conical, hemispherical and efp liners of various materials were utilized in [11] against concrete at varying so. two-dimensional simulations were carried out in [93] to study the penetration of conical shaped charges and long rods. the shape was further innovated to a star shape and its three-dimensional simulations were carried out with trek-up [94]. it was shown that the star shape can be optimized to get better efficiency. in [95], al conical and trumpet liners with different thicknesses were investigated experimentally. the liners were tested against steel and sand targets at about 15 and 3 cd (charge diameter) stand-off. fxr was utilized to find the jet characteristics. hemispherical liners of degressive thickness were investigated numerically in [96]. the simulations were compared with the jets from conical liners having 10 cd penetrations.
it was found that the jets produced by hemispherical liners of degressive thickness have comparable head velocity and penetration power to the jets obtained from conical liners. the effects of conical, hemispherical and spherical-segment shaped liners on water-submerged steel plates were studied numerically using a smooth particle hydrodynamics model in [80]. it was found that in a submerged target the shock waves, which reach the target earlier than the jet, produce damage. the efp has better motion dynamics due to its spherical shape, so it produced a high-pressure shock wave having higher damaging effects. shaped charges are meant for deep penetration in the target, but they produce a smaller diameter hole. a conical cu liner was utilized in [97] as a precursor for a tandem warhead against concrete targets. to get a 1:1 ratio of hole diameter to depth, a w-type shaped charge was introduced in [98]. w-type liners were tested numerically and observed experimentally by high-speed photography. reproducibility is a key requirement in the production of any manufactured goods. in [99], an optimized and reproducible linear shaped charge was presented. precision linear shaped charges were developed in [100]. linear shaped charges were designed in [101] using the lesca code and in [102] using autodyn. it was observed that linear shaped charges behave like conical shaped charges in the process of jet penetration. an analytical steady-state equation of motion for the jet of linear shaped charges was given in [103], incorporating changes to the birkhoff theory. the equation was verified by autodyn simulations. a smooth particle hydrodynamics model was applied to study the process of linear shaped charge formation and penetration in a steel target in [104]. the simulation results were in agreement with the experimental ones. linear shaped charges were investigated using ls-dyna in [105] and the simulation results were compared with the experimental ones.
liner thickness and so distance were studied in [106] using abaqus and the results were validated by experiments. it was shown that jet penetration increased by choosing an optimum so distance and by decreasing the liner thickness. the availability of simulation software has made it possible to predict the effect of minor changes in the geometry of the liner, something that is almost impossible to do otherwise. software can simulate minor changes like angle variations up to half a degree and thickness variations up to 0.1mm, depending upon grid fineness. simulations save a lot of money and time and make the process robust. geometry plays a pivotal role in penetration enhancement and must be considered seriously.

vi. conclusion

the optimization of a shaped charge is a great challenge, as it involves multiple variables to be investigated. this paper is intended to update and provide guidelines for readers in designing a shaped charge liner against numerous targets. the three variables, liner material, liner geometry and liner metallurgy, which play an important role in the modeling of shaped charges, have been discussed with references from the initial experimental and analytical studies, whereas numerical simulations have also been reviewed and discussed. a design optimized for one type of target can give very little or even no penetration in another, so it is not possible to use a universal design of a shaped charge for various targets. reactive metal liners were found well suited for concrete and concrete-like targets. with the development of powder metallurgy in the field of shaped charges, powder metal and reactive powder metal liners were introduced and performed better than the earlier versions of metal liners. cu liners created a deeper hole in rha compared to al and steel liners. ignoring the cost of production, sintered powder liners give better efficiency for all types of targets if tuned properly. the current research status is given at the end of each section.

references

[1] j. carleone, r. jameson, p. c.
chou, “the tip origin of a shaped charge jet”, propellants, explosives, pyrotechnics, vol. 2, no. 6, pp. 126-130, 1977 [2] m. a. lavrent'ev, “cumulative charge and its operating principles”, uspekhi matematicheskikh nauk, vol. 12, no. 4, pp. 41-56, 1957 [3] h. shekhar, “theoretical modelling of shaped charges in the last two decades (1990-2010): a review”, central european journal of energetic materials, vol. 9, no. 2, pp. 155-185, 2012 [4] j. hetherington, p. smith, blast and ballistic loading of structures, crc press, 2014 [5] d. c. pack, w. m. evans, “penetration by high-velocity ('munroe') jets: i”, proceedings of the physical society, section b, vol. 64, no. 4, p. 298, 1951 [6] h. shekhar, “explosive characteristics and shaped charge applications of nitromethane (nm): a review”, central european journal of energetic materials, vol. 9, no. 1, pp. 87-97, 2012 [7] w. walters, “an overview of the shaped charge concept”, 11th annual arl/usma technical symposium, 2003 [8] p. c. chou, w. flis, “recent developments in shaped charge technology”, propellants, explosives, pyrotechnics, vol. 11, no. 4, pp. 99-114, 1986 [9] s. s. samudre, u. r. nair, g. m. gore, r. k. sinha, a. k. sikder, s. n. asthana, “studies on an improved plastic bonded explosive (pbx) for shaped charges”, propellants, explosives, pyrotechnics, vol. 34, no. 2, pp. 145-150, 2009 [10] v. p. alekseevskii, “penetration of a rod into a target at high velocity”, combustion, explosion and shock waves, vol. 2, no. 2, pp. 63-66, 1966 [11] m. j. murphy, r. m. kuklo, “fundamentals of shaped charge penetration in concrete”, 18th international symposium on ballistics, san antonio, texas, november 15-19, 1999 [12] k. naeem, a. hussain, “development of a matlab code for plane wave lens and its validation by autodyn-2d”, engineering, technology & applied science research, vol. 8, no. 6, pp. 3614-3618, 2018 [13] w. walters, a brief history of shaped charges, army research laboratory, 2008 [14] g. bloem, shell for detonating caps, u.s.
patent 342423, 1886 [15] g. birkhoff, d. p. macdougall, e. m. pugh, g. taylor, “explosives with lined cavities”, journal of applied physics, vol. 19, no. 6, pp. 563-582, 1948 [16] k. naeem, a. hussain, “numerical and experimental study of wave shaper effects on detonation wave front”, defence technology, vol. 14, no. 1, pp. 45-50, 2018 [17] m. ahmed, a. q. malik, “a review of works on shaped charges”, engineering, technology & applied science research, vol. 7, no. 5, pp. 2098-2103, 2017 [18] w. b. li, w. b. li, x. m. wang, h. zhou, “effect of the liner material on the shape of dual mode penetrators”, combustion, explosion and shock waves, vol. 51, no. 3, pp. 387-394, 2015 [19] w. p. walters, the shaped charge concept, part 2, the history of shaped charges, technical report brl-tr-3158, usa army ballistic research laboratory, aberdeen proving ground, 1990 [20] a. c. gurevitch, l. e. murr, h. k. shih, c. s. niou, a. h. advani, d. manuel, l. zernow, “characterization and comparison of microstructures in the shaped-charge regime: copper and tantalum”, materials characterization, vol. 30, no. 3, pp. 201-216, 1993 [21] m. held, hydrodynamic theory of shaped charge jet penetration, messerschmitt-bolkow-blohm, 1991 [22] a. doig, “some metallurgical aspects of shaped charge liners”, journal of battlefield technology, vol. 1, no. 1, pp. 1-3, 1998 [23] w. t. fu, z. h. rong, “copper-tungsten shaped charge liner and its jet”, propellants, explosives, pyrotechnics, vol. 21, no. 4, pp. 193-195, 1996 [24] a. lichtenberger, n. verstraete, d. salignon, m. t. daumas, j. collard, “shaped charges with molybdenum liner”, 16th international ballistics symposium, san francisco, usa, september 23-28, 1996 [25] e. l. baker, a. daniels, g. p. voorhis, t. vuong, j.
pearson, “development of molybdenum shaped charge liners”, 127th annual meeting and exhibition of the minerals, metals & materials society, february 15-19, san antonio, usa, 1998 [26] k. g. cowan, k. j. a. mawella, d. j. standing, b. bourne, j. s. jones, a. c. kitney, “analytical code and hydrocode modelling and experimental characterisation of shaped charges containing conical tungsten liners”, 18th international symposium on ballistics, san antonio, usa, november 15-19, 1999 [27] b. m. grove, j. f. lands, r. a. parrott, shaped charges having reduced slug creation, us patent 6021714, 2000 [28] m. j. murphy, d. w. baum, d. b. clark, e. m. mcguire, s. c. simonson, “numerical simulation of damage and fracture in concrete from shaped charge jets”, 6th international conference on mechanical and physical behaviour of materials under dynamic loading, krakow, poland, september 25-29, 2000 [29] a. v. babkin, p. a. bondarenko, s. v. fedorov, s. v. ladov, v. i. kolpakov, s. g. andreev, “limits of increasing the penetration of shaped-charge jets by pulsed thermal action on shaped-charge liners”, combustion, explosion and shock waves, vol. 37, no. 6, pp. 727-733, 2001 [30] k. g. cowan, b. bourne, “analytical code and hydrocode modelling and experimental characterisation of shaped charges containing conical molybdenum liners”, 19th international ballistic symposium, interlaken, switzerland, may 7-11, 2001 [31] m. held, “liners for shaped charges”, journal of battlefield technology, vol. 4, no. 3, pp. 1-7, 2001 [32] w. h. tian, a. l. fan, h. y. gao, j. luo, z. wang, “comparison of microstructures in electroformed copper liners of shaped charges before and after plastic deformation at different strain rates”, materials science and engineering: a, vol. 350, no. 1-2, pp. 160-167, 2003 [33] b. bourne, k. g. cowan, j. p. curtis, “shaped charge warheads containing low melt energy metal liners”, 19th international ballistic symposium, interlaken, switzerland, may 7-11, 2001 [34] w. f. 
l. t. g. ruijun, w. yuling, “a study on tungsten-copper-nickel alloy as shaped charge liner”, acta armamentarii, vol. 1, pp. 28, 2001 [35] w. h. lee, “oil well perforator design using 2d eulerian code”, international journal of impact engineering, vol. 27, no. 5, pp. 535-559, 2002 [36] z. w. hu, z. k. li, t. j. zhang, x. m. zhang, “advanced progress in materials for shaped charge and explosively formed penetrator liners”, rare metal materials and engineering, vol. 33, no. 10, pp. 1009-1012, 2004 [37] j. petit, v. jeanclaude, c. fressengeas, “breakup of copper shaped-charge jets: experiment, numerical simulations, and analytical modeling”, journal of applied physics, vol. 98, no. 12, article id 123521, 2005 [38] e. hirsch, “scaling of the shaped charge jet break-up time”, propellants, explosives, pyrotechnics, vol. 31, no. 3, pp. 230-233, 2006 [39] m. huerta, m. g. vigil, “design, analyses, and field test of a 0.7 m conical shaped charge”, international journal of impact engineering, vol. 32, no. 8, pp. 1201-1213, 2006 [40] m. wickert, “electric armor against shaped charges: analysis of jet distortion with respect to jet dynamics and current flow”, ieee transactions on magnetics, vol. 43, no. 1, pp. 426-429, 2007 [41] s. v. fedorov, a. v. babkin, s. v. ladov, g. a. shvetsov, a. d. matrosov, “on the possibility of reducing the penetration capability of shaped-charge jets in a magnetic field”, journal of applied mechanics and technical physics, vol. 48, no. 3, pp. 393-400, 2007 [42] o. ayisit, “the influence of asymmetries in shaped charge performance”, international journal of impact engineering, vol. 35, no. 12, pp. 1399-1404, 2008 [43] c. wang, t. ma, j. ning, “experimental investigation of penetration performance of shaped charge into concrete targets”, acta mechanica sinica, vol. 24, no. 3, pp. 345-349, 2008 [44] x. zhang, c. wu, f.
huang, “penetration of shaped charge jets with tungsten-copper and copper liners at the same explosive-to-liner mass ratio into water”, shock waves, vol. 20, no. 3, pp. 263-267, 2010 [45] t. elshenawy, q. m. li, “breakup time of zirconium shaped charge jet”, propellants, explosives, pyrotechnics, vol. 38, no. 5, pp. 703-708, 2013 [46] p. y. chanteret, a. lichtenberger, “bimetallic liners and coherence of shaped charge jets”, 15th international symposium on ballistics, jerusalem, israel, may 21-24, 1995 [47] j. s. mason, experimental testing of bimetallic and reactive shaped charge liners, msc thesis, university of illinois at urbana-champaign, 2010 [48] j. p. curtis, r. cornish, “formation model for shaped charged liners comprising multiple layers of different materials”, 18th international symposium on ballistics, san antonio, usa, november 15-19, 1999 [49] g. a. shvetsov, a. d. matrosov, s. v. fedorov, a. v. babkin, s. v. ladov, “effect of external magnetic fields on shaped-charge operation”, international journal of impact engineering, vol. 38, no. 6, pp. 521-526, 2011 [50] e. scheid, t. d. burleigh, n. u. deshpande, m. j. murphy, “shaped charge liner early collapse experiment execution and validation”, propellants, explosives, pyrotechnics, vol. 39, no. 5, pp. 739-748, 2014 [51] m. ahmed, a. q. malik, s. a. rofi, z. x. huang, “penetration evaluation of explosively formed projectiles through air and water using insensitive munition: simulative and experimental studies”, engineering, technology & applied science research, vol. 6, no. 1, pp. 913-916, 2016 [52] b. h. chang, j. p. yin, z. q. cui, t. x. liu, “improved dynamic mechanical properties of modified ptfe jet penetrating charge with shell”, strength of materials, vol. 48, no. 1, pp. 82-89, 2016 [53] w. q. guo, j. x. liu, y. xiao, s. k. li, z. y. zhao, j.
cao, “comparison of penetration performance and penetration mechanism of w-cu shaped charge liner against three kinds of target: pure copper, carbon steel and ti-6al-4v alloy”, international journal of refractory metals & hard materials, vol. 60, pp. 147-153, 2016 [54] q. q. xiao, z. x. huang, x. d. zu, x. jia, “influence of drift velocity and distance between jet particles on the penetration depth of shaped charges”, propellants, explosives, pyrotechnics, vol. 41, no. 1, pp. 7683, 2015 [55] z. zhao, j. liu, w. guo, s. li, g. wang, “effect of zn and ni added in w–cu alloy on penetration performance and penetration mechanism of shaped charge liner”, international journal of refractory metals and hard materials, vol. 54, pp. 90-97, 2016 [56] n. d. gerami, g. h. liaghat, g. h. r. s. moghadas, n. khazraiyan, “analysis of liner effect on shaped charge penetration into thick concrete targets”, journal of the brazilian society of mechanical sciences and engineering, vol. 39, no. 8, pp. 3189-3201, 2017 [57] f. hu, h. wu, q. fang, j. c. liu, “numerical simulations of shaped charge jet penetration into concrete-like targets”, international journal of protective structures, vol. 8, no. 2, pp. 237-259, 2017 engineering, technology & applied science research vol. 9, no. 6, 2019, 4917-4924 4923 www.etasr.com naeem et al.: a review of shaped charge variables for its optimum performance [58] l. ding, w. tang, x. ran, “simulation study on jet formability and damage characteristics of a low-density material liner”, materials, vol. 11, no. 1, pp. 72, 2018 [59] c. wang, w. xu, s. c. k. yuen, “penetration of shaped charge into layered and spaced concrete targets”, international journal of impact engineering, vol. 112, pp. 193-206, 2018 [60] j. xiao, x. zhang, z. guo, h. wang, “cover picture: enhanced damage effects of multi-layered concrete target produced by reactive materials liner”, propellants, explosives, pyrotechnics, vol. 43, no. 9, pp. 851851, 2018 [61] r. m. 
german, sintering theory and practice, wiley-interscience, 1996 [62] j. r. pickens, “aluminium powder metallurgy technology for highstrength applications”, journal of materials science, vol. 16, no. 6, pp. 1437-1457, 1981 [63] y. a. trishin, s. a. kinelovskii, “effect of porosity on shaped-charge flow”, combustion, explosion and shock waves, vol. 36, no. 2, pp. 272-281, 2000 [64] w. walters, p. peregino, r. summers, d. leidel, a study of jets from unsintered-powder metal lined nonprecision small-caliber shaped charges, usa army ballistics research laboratory, aberdeen proving ground, 2001 [65] b. grove, “theoretical considerations on the penetration of powdered metal jets”, international journal of impact engineering, vol. 33, no. 112, pp. 316-325, 2006 [66] b. zygmunt, z. wilk, “the research of shaped charges with powder liners for geological borehole perforation”, archives of mining sciences, vol. 52, no. 1, pp. 121-133, 2007 [67] b. zygmunt, z. wilk, “formation of jets by shaped charges with metal powder liners”, propellants, explosives, pyrotechnics, vol. 33, no. 6, pp. 482-487, 2008 [68] a. fan, s. k. li, w. h. tian, “grain growth and texture evolution in electroformed copper liners of shaped charges”, materials science and engineering: a, vol. 474, no. 1-2, pp. 208-213, 2008 [69] y. gao, x. gu, t. liu, “sintering effect on the performance of tungstencopper powder liner”, journal of wuhan university of technologymaterials science edition, vol. 27, no. 6, pp. 1133-1136, 2012 [70] p. church, r. claridge, p. ottley, i. lewtas, n. harrison, p. gould, c. braithwaite, d. williamson, “investigation of a nickel-aluminum reactive shaped charge liner”, journal of applied mechanics, vol. 80, no. 3, article id 031701, 2013 [71] y. i. voitenko, s. v. goshovskii, a. g. drachuk, v. p. bugaets, “mechanical effect of shaped charges with porous liners”, combustion, explosion and shock waves, vol. 49, no. 1, pp. 109-116, 2013 [72] n. duan, y. gao, j. wang, w. du, f. 
wang, “the properties of the sintered copper powder liner”, journal of wuhan university of technology-materials science edition, vol. 29, no. 2, pp. 269-272, 2014 [73] j. won, g. bae, k. kang, c. lee, s. j. kim, k. a. lee, s. lee, “bonding, reactivity, and mechanical properties of the kinetic-sprayed deposition of al for a thermally activated reactive cu liner”, journal of thermal spray technology, vol. 23, no. 5, pp. 818-826, 2014 [74] g. byun, j. kim, c. lee, s. j. kim, s. lee, “kinetic spraying deposition of reactive-enhanced al-ni composite for shaped charge liner applications”, journal of thermal spray technology, vol. 25, no. 3, pp. 483-493, 2016 [75] y. wang, q. yu, y. zheng, h. wang, “formation and penetration of jets by shaped charges with reactive material liners”, propellants, explosives, pyrotechnics, vol. 41, no. 4, pp. 618-622, 2016 [76] t. elshenawy, a. elbeih, q. m. li, “a modified penetration model for copper-tungsten shaped charge jets with non-uniform density distribution”, central european journal of energetic materials, vol. 13, no. 4, pp. 927-943, 2016 [77] j. xiao, x. zhang, y. wang, f. xu, h. wang, “demolition mechanism and behavior of shaped charge with reactive liner”, propellants, explosives, pyrotechnics, vol. 41, no. 4, pp. 612-617, 2016 [78] t. elshenawy, “density effect of the compacted copper-tungsten shaped charge powder liners on its penetration performance”, journal of powder metallurgy & mining, vol. 6, no. 2, pp. 1-6, 2017 [79] t. majewski, a. jackowski, “use of graphene for shaped charge liner materials”, problemy mechatroniki: uzbrojenie, lotnictwo, inzynieria bezpieczenstwa, vol. 9, no. 3, pp. 15-28, 2018 [80] z. zhang, l. wang, v. silberschmidt, “damage response of steel plate to underwater explosion: effect of shaped charge liner”, international journal of impact engineering, vol. 103, pp. 38-49, 2017 [81] r. dipersio, j. simon, t. h. 
martin, a study of jets from scaled conical shaped charge liners, ballistic research laboratories, 1960 [82] f. e. allison, r. vitali, a new method of computing penetration variables for shaped-charge jets, usa army ballistics research laboratory, aberdeen proving ground, 1963 [83] a. merendino, j. m. regan, s. kronman, a method of obtaining a massive hypervelocity pellet from a shaped charge jet, usa army ballistics research laboratory, aberdeen proving ground, 1963 [84] r. r. rollins, g. b. clark, h. n. kalia. “penetration in granite by jets from shaped-charge liners of six materials”, international journal of rock mechanics and mining sciences & geomechanics abstracts, vol. 10, no. 3, pp. 183-200, 1973 [85] e. m. wescott, e. p. rieger, h. c. stenbaek-nielsen, t. n. davis, h. m. peek, p. j. bottoms, “l=1.24 conjugate magnetic field line tracing experiments with barium shaped charges”, journal of geophysical research, vol. 79, no. 1, pp. 159-168, 1974 [86] e. hirsch, “a formula for the shaped charge jet breakup time”, propellants, explosives, pyrotechnics, vol. 4, no. 5, pp. 89-94, 1979 [87] m. j. murphy, shaped-charge penetration in concrete: a unified approach, phd thesis, university of california, 1983 [88] m. vigil, explosive initiation by very small conical shaped charge jets, sandia national labs, 1985 [89] m. held, r. fischer, “penetration theory for inclined and moving shaped charges”, propellants, explosives, pyrotechnics, vol. 11, no. 4, pp. 115122, 1986 [90] c. aseltine, w. walters, a. arbuckle, j. lacetera, “hemispherical shaped charges utilizing tapered liners”, 4th international symposium on ballistics, monterey, usa, september 17-19, 1978 [91] w. p. walters, s. k. golaski, hemispherical and conical shaped-charge liner collapse and jet formation, technical report brl-tr-2781, usa army ballistic research laboratory, aberdeen proving ground, 1987 [92] m. 
held, “penetration cutoff velocities of shaped charge jets”, propellants, explosives, pyrotechnics, vol. 13, no. 4, pp. 111-119, 1988 [93] z. rosenberg, e. d. rafael, “use of 2d simulations to study penetration mechanisms of long rods and shaped charge jets”, chemical physics reports, vol. 18, no. 10, pp. 2047-2059, 2000 [94] v. i. tarasov, y. v. yanilkin, y. a. vedernikov, “three-dimensional simulation of shaped charges with a star-shaped liner”, combustion, explosion and shock waves, vol. 36, no. 6, pp. 840-844, 2000 [95] s. saran, o. ayisit, m. s. yavuz, “experimental investigations on aluminum shaped charge liners”, procedia engineering, vol. 58, pp. 479-486, 2013 [96] s. v. fedorov, “numerical simulation of the formation of shaped-charge jets from hemispherical liners of degressive thickness”, combustion, explosion and shock waves, vol. 52, no. 5, pp. 600-612, 2016 [97] n. d. gerami, g. h. liaghat, g. h. rahimi, n. khazraiyan, “the effect of concrete damage on the penetration depth by the tandem projectiles”, proceedings of the institution of mechanical engineers, part c: journal of mechanical engineering science, vol. 232, no. 6, pp. 1020-1032, 2018 [98] c. wang, f. huang, j. ning, “jet formation and penetration mechanism of w typed shaped charge”, acta mechanica sinica, vol. 25, no. 1, pp. 107-120, 2009 [99] m. g. vigil, j. g. harlan, optimal design and fabrication of reproducible linear shaped charges, technical report, sandia national labs, 1986 engineering, technology & applied science research vol. 9, no. 6, 2019, 4917-4924 4924 www.etasr.com naeem et al.: a review of shaped charge variables for its optimum performance [100] m. g. vigil, “design and development of precision linear shaped charges”, international symposium on detonation, portland, usa, august 28-september 1, 1989 [101] m. vigil, design of linear shaped charges using the lesca code, technical report sand90-0243, sandia national laboratories, 1990 [102] m. johnston, s. 
lim, “numerical observation of the jet flight patterns of linear shaped charges”, applied sciences, vol. 2, no. 4, pp. 629-640, 2012 [103] s. lim, “steady state analytical equation of motion of linear shaped charges jet based on the modification of birkhoff theory”, applied sciences, vol. 2, no. 4, pp. 35-45, 2012 [104] d. feng, m. b. liu, h. li, g. r. liu, “smoothed particle hydrodynamics modeling of linear shaped charge with jet formation and penetration effects”, computers & fluids, vol. 86, pp. 77-85, 2013 [105] a. wojewodka, t. witkowski, “methodology for simulation of the jet formation process in an elongated shaped charge”, combustion, explosion and shock waves, vol. 50, no. 3, pp. 362-367, 2014 [106] p. dehestani, a. fathi, h. m. daniali, “numerical study of the stand-off distance and liner thickness effect on the penetration depth efficiency of shaped charge process”, proceedings of the institution of mechanical engineers, part c: journal of mechanical engineering science, vol 233, no. 3, pp. 977-986, 2018 microsoft word 39-3426_s1_etasr_v10_n2_pp5561-5564 engineering, technology & applied science research vol. 10, no. 
2, 2020, 5561-5564 5561 www.etasr.com vekteris et al.: an efficiency study of the aerodynamic sound generators suitable for acoustic particle … an efficiency study of the aerodynamic sound generators suitable for acoustic particle agglomeration vladas vekteris department of mechanics and materials engineering vilnius gediminas technical university vilnius, lithuania vladas.vekteris@vgtu.lt darius ozarovskis department of mechanics and materials engineering vilnius gediminas technical university vilnius, lithuania darius.o@vilpra.lt vadim moksin department of mechanics and materials engineering vilnius gediminas technical university vilnius, lithuania vadim.moksin@vgtu.lt vytautas turla department of mechatronics, robotics and digital manufacturing vilnius gediminas technical university vilnius, lithuania vytautas.turla@vgtu.lt eugenijus jurkonis department of mechatronics, robotics and digital manufacturing vilnius gediminas technical university vilnius, lithuania eugenijus.jurkonis@vgtu.lt abstract—the object of this study is the acoustic field generated by aerodynamic acoustic generators of various types and designs. six types of aerodynamic acoustic generators were studied experimentally and theoretically to determine the parameters of their generated acoustic field. it was established that the aerodynamic hartmann type sound generator produces the necessary for acoustic particle agglomeration acoustic field and can be used in acoustic air cleaning equipment. it was established that classical theoretical calculation methods underestimate the design features of aerodynamic acoustic generators and cannot be used to calculate their characteristics. keywords-acoustic generator; frequency; sound pressure level i. introduction studies show that the efficiency of air cleaning equipment can be significantly improved if the distributed particles in the air being cleaned are treated by an acoustic field. 
Authors in [1, 2] established that the relative air humidity measured above an open surface tank decreases up to 1.6 times in the presence of an acoustic field, compared with the humidity obtained with a conventional push-pull air removal system. Authors in [3] increased the average cleaning efficiency of a cyclone separator from 87.2% to 97.5% by placing an aerodynamic sound generator at the bottom of the conical part of the separator. Authors in [4] significantly improved fine particle removal efficiency through the combined effect of acoustic agglomeration and vapor condensation, reaching up to 80% at a sound pressure level of 150 dB. An acoustic agglomeration process that promotes the formation of particle clusters to enhance particle capture efficiency without adding flow resistance in the air distribution ductwork provides an energy-efficient solution [5]. However, this method requires reliable, energy-efficient sound generators that can operate in various environments, including aggressive ones. Such generators should be able to produce high-intensity sound fields with a sound pressure level above 100 dB [5], which is typically required to achieve efficient acoustic agglomeration within several seconds. The most efficient are sound generators that emit acoustic waves in the ultrasonic frequency range [6]. In order to design a suitable sound generator, it is necessary to ensure that the acoustic field characteristics correspond to the characteristics of the dust or aerosol particles exposed to that field [7, 8]. Increased radiation intensity causes increased relative speed of the particles. Particle adhesion is characterized by the characteristic frequency of the acoustic field. For 1 μm particles the characteristic frequency equals 7.2 MHz [7]; if the particle size is increased to 10 μm, the characteristic frequency decreases to 72 kHz [7].
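The two values quoted from [7] (7.2 MHz at 1 μm, 72 kHz at 10 μm) imply an inverse-square dependence of the characteristic frequency on particle diameter. As an illustration only, the following sketch interpolates on that assumption; the power law is inferred from those two points and is not the model used in [7]:

```python
# Characteristic agglomeration frequency vs. particle size.
# The two values quoted from [7] (1 um -> 7.2 MHz, 10 um -> 72 kHz)
# imply f_c ~ 1/d^2; we calibrate that power law from the 1 um point.
# Illustrative interpolation only, not the formula used in [7].

F_REF_HZ = 7.2e6   # characteristic frequency at d = 1 um [7]
D_REF_UM = 1.0

def characteristic_frequency_hz(d_um: float) -> float:
    """Inverse-square power-law estimate of the characteristic frequency."""
    return F_REF_HZ * (D_REF_UM / d_um) ** 2

for d in (1.0, 2.0, 5.0, 10.0):
    print(f"d = {d:4.1f} um -> f_c ~ {characteristic_frequency_hz(d) / 1e3:9.1f} kHz")
```

At 10 μm the interpolation reproduces the 72 kHz value quoted in the text, which is what motivates the inverse-square reading of the two data points.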
However, at very high frequencies relative to the characteristic frequency, the particle oscillation amplitude becomes independent of frequency and depends only on the ratio of the densities of the particle and the surrounding medium. Therefore, in order to investigate the acoustic characteristics of a designed sound generator, it is necessary to analyze the pressure of the generated sound.

II. OBJECTS OF RESEARCH AND EXPERIMENTAL SETUP

Six aerodynamic acoustic field generator prototypes (Figures 1-6) were designed and manufactured. The prototypes were experimentally studied to determine the parameters of the generated acoustic field: sound pressure and frequency.

Corresponding author: Vadim Moksin

The first acoustic generator, AAG-1, is shown in Figure 1. Compressed air enters the generator through coupling 1 and is then distributed to the central channel 5 and the peripheral channels 6, which direct the air into the resonance chamber 7. After the pressure in chamber 7 reaches a certain critical value, the compressed air breaks through the flow passing through channel 5 and leaves through nozzle 4. After this outburst, the pressure in chamber 7 decreases, and air again flows through channel 5 and out through nozzle 4 until the pressure in chamber 7 builds up to the critical value. The cycle then repeats, creating air flow pulsation and, as a result, an acoustic field.

Fig. 1. Acoustic generator AAG-1: (a) 3D model, (b) longitudinal section.
Fig. 2. Acoustic generator AAG-2: (a) 3D model, (b) longitudinal section.

The second acoustic generator, AAG-2, is presented in Figure 2. Compressed air is fed through coupling 1 into the primary chamber 5.
The compressed air then enters the secondary chamber 6 of the generator through the diverging hole 7. Because of the increasing diameter of hole 7, air vortices and pressure fluctuations are created in chamber 6. Two air flows (primary and secondary) are created in chamber 6 as a result of the turbulence. These flows exit through hole 8 of nozzle 3. The pressure fluctuations of the outgoing compressed air flow generate a high-frequency acoustic field.

The third acoustic generator, AAG-3, is shown in Figure 3. Compressed air is supplied through coupling 1 into diffuser 2. The air flow, bypassing tab 5 of the diffuser, enters nozzle 7 through slots 4. Through holes 6, additional air flows are ejected into nozzle 7 and mixed with the main flow of compressed air. Tab 5 and the stepped narrowing of nozzle 7 make the flow turbulent. The mixed air flow generates an acoustic field while passing through nozzle 7.

Fig. 3. Acoustic generator AAG-3: (a) 3D model, (b) longitudinal section.

The fourth acoustic generator, AAG-4, is shown in Figure 4. Compressed air enters the central channel 4 through coupling 1. In addition, a secondary compressed air flow is supplied through coupling 3; it passes through channel 5 and crosses the main flow near the end of nozzle 2. The intersection of the two compressed air flows causes pressure pulsations which generate a high-frequency acoustic field.

The fifth acoustic generator, AAG-5, is shown in Figure 5. Compressed air is supplied through coupling 1 into the central channel 5 and is directed toward nozzle 4. In addition, a secondary air flow is supplied through coupling 2 into housing 3. This air flow enters channel 5 through the inclined channels 6, which direct it against the main flow. The intersecting opposed flows create pressure pulsations and generate a high-frequency acoustic field leaving nozzle 4.
The sixth acoustic generator, AAG-6, is shown in Figure 6. Compressed air coming out of nozzle 1 periodically fills resonator 2. The air bursts coming out of the resonator then collide with the compressed air exiting nozzle 1. The resulting density fluctuations generate a high-pressure acoustic field. Aerodynamic acoustic generators of this type are known as Hartmann-type generators.

Fig. 4. Acoustic generator AAG-4: (a) 3D model, (b) longitudinal section.
Fig. 5. Acoustic generator AAG-5: (a) 3D model, (b) longitudinal section.
Fig. 6. Acoustic generator AAG-6: (a) 3D model, (b) longitudinal section.

A Bruel & Kjaer compact sound level meter type 2250-S, with frequency analysis software BZ-7222 and BZ-7223 and sound recording option BZ-7226, was used for the measurements of sound pressure level and for frequency analysis. The experimental setup is shown in Figure 7.

III. RESULTS AND DISCUSSION

The results of the parameter calculations for the aerodynamic acoustic generators are presented in Table I, and a graphical representation of the results is shown in Figure 8. The calculation results show that the air flow rate and the generator outlet diameter have the greatest influence on the frequency and power of the acoustic field emitted by an aerodynamic acoustic generator. The AAG-2 and AAG-4 sound generators, which have small-diameter holes, theoretically generate an acoustic field at a frequency of 47.2 kHz. This frequency is well above the human hearing range, more than double its upper limit. The calculations show that these two generators are the most powerful. The AAG-6 generator, with its large outlet, should theoretically emit a low-frequency (138 Hz) acoustic field; its power is correspondingly the lowest.

Fig. 7. Experimental setup.
1: sound level meter/analyzer; 2: acoustic chamber; 3: compressed air port.

The experimental results are presented in Figure 9, which compares the acoustic characteristics of the manufactured generators. It can be seen from Figure 9 that the AAG-6 acoustic generator is the most powerful. It can also be noticed that the sound pressure peaks generated by this generator repeat every 8 kHz, i.e. at 8 and 16 kHz.

TABLE I. THEORETICAL PARAMETERS OF THE AERODYNAMIC ACOUSTIC GENERATORS

Generator | Frequency, Hz | Sound power Wm, W | Sound power Wd, W | Sound power Wk, W
AAG-1     | 1.99e+04      | 3.13e+01          | 4.51e+01          | 6.50e+01
AAG-2     | 4.72e+04      | 1.76e+02          | 8.01e+02          | 3.65e+03
AAG-3     | 3.71e+03      | 1.09e+00          | 1.67e-01          | 2.57e-02
AAG-4     | 4.72e+04      | 1.76e+02          | 8.01e+02          | 3.65e+03
AAG-5     | 2.49e+03      | 4.89e-01          | 4.40e-02          | 3.97e-03
AAG-6     | 1.38e+02      | 1.49e-03          | 2.84e-06          | 5.38e-09

Fig. 8. Theoretical results for the aerodynamic acoustic generators.
Fig. 9. Sound pressure levels of the acoustic fields generated by the acoustic generators.

IV. CONCLUSIONS

The results of the theoretical calculations for the acoustic generator prototypes differ from the experimental data, since classical theoretical calculation methods do not adequately account for the design features of aerodynamic acoustic generators. It is therefore necessary to formulate appropriate assumptions and determine empirical correction factors that account for the design peculiarities of aerodynamic acoustic sources.

The experimental studies have shown that the aerodynamic efficiency of a generator is influenced not only by its principle of operation and the proportions of its structural elements, but also by the manufacturing quality of the acoustic source.

The experimental studies have shown that the most powerful aerodynamic generator is AAG-6.
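The 8 kHz peak spacing noted in the discussion is the signature of a fundamental tone and its harmonic. As a side note, such spacing can be recovered from a recorded pressure signal with a discrete Fourier transform; the sketch below uses a synthetic stand-in signal (not the authors' measurement data) and assumes NumPy is available:

```python
import numpy as np

# Synthetic stand-in for a recorded pressure signal: a fundamental at
# 8 kHz plus its first harmonic at 16 kHz, as observed for AAG-6.
fs = 96_000                       # sample rate, Hz
t = np.arange(fs) / fs            # 1 s of samples
signal = np.sin(2 * np.pi * 8_000 * t) + 0.5 * np.sin(2 * np.pi * 16_000 * t)

# Magnitude spectrum and the frequencies of its two largest peaks.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_freqs = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peak_freqs)
```

With a 1 s record the frequency bins fall on integer hertz, so the two dominant bins land exactly on the fundamental and its harmonic; a real measurement would show broader peaks at the same spacing.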
Its resonant chamber was the largest among the prototypes and its edges were the sharpest, i.e. the chamfers were not removed. The AAG-6 acoustic field generator also exhibits the clearest sound pressure pulsations: the sound pressure peaks repeat every 8 kHz, i.e. at 8 and 16 kHz, and exceed 100 dB. This prototype was therefore chosen to generate sound waves in an acoustic cyclone separator.

REFERENCES
[1] V. Vekteris, I. Tetsman, V. Mokshin, "Investigation of the efficiency of the lateral exhaust hood enhanced by aeroacoustic air flow", Process Safety and Environmental Protection, Vol. 109, pp. 224-232, 2017
[2] V. Vekteris, I. Tetsman, V. Moksin, "Experimental investigation of influence of acoustic wave on vapour precipitation process", Engineering, Technology & Applied Science Research, Vol. 3, No. 2, pp. 408-412, 2013
[3] V. Vekteris, V. Strishka, D. Ozarovskis, V. Mokshin, "Numerical simulation of air flow inside acoustic cyclone separator", Aerosol and Air Quality Research, Vol. 15, No. 2, pp. 625-633, 2015
[4] J. Yan, L. Chen, L. Yang, "Combined effect of acoustic agglomeration and vapor condensation on fine particles removal", Chemical Engineering Journal, Vol. 290, pp. 319-327, 2016
[5] B. F. Ng, J. W. Xiong, M. P. Wan, "Application of acoustic agglomeration to enhance air filtration efficiency in air-conditioning and mechanical ventilation (ACMV) systems", PLOS ONE, Vol. 12, No. 6, pp. 1-26, 2017
[6] K. Kilikeviciene, R. Kacianauskas, A. Kilikevicius, A. Maknickas, J. Matijosius, A. Rimkus, D. Vainorius, "Experimental investigation of acoustic agglomeration of diesel engine exhaust particles using new created acoustic chamber", Powder Technology, Vol. 360, pp. 421-429, 2020
[7] V. N. Khmeliov, A. V. Shalunov, R. V. Barsukov, S. N. Tsyganok, D. S. Abramenko, "Acoustic coagulation of aerosols", Vestnik of the I. I. Polzunov Altai State Technical University, No. 1-2, pp. 66-74, 2008 (in Russian)
[8] A. V. Gridchin, A. V. Shalunov, K. V.
Shalunova, A. N. Galakhov, V. N. Khmeliov, S. N. Tsyganok, A. N. Lebedev, "Multifrequency ultrasonic oscillating system with stepped disk emitter", 10th International Conference and Seminar on Micro/Nanotechnologies and Electron Devices, Novosibirsk, Russia, July 1-6, 2009 (in Russian)

Engineering, Technology & Applied Science Research, Vol. 11, No. 3, 2021, p. 7290
www.etasr.com

Erratum and Addendum: "Contractor's Attitude Towards Risk and Risk Management in Construction in Two Western Provinces of Vietnam"

Van Tien Phan, Department of Construction, Vinh City, Viet Nam, vantienkxd@vinhuni.edu.vn

Since the publication of the original paper in Vol. 10, No. 6 of this journal [1], the author has detected several misprints and errors that are corrected here, in the order in which they appear in the original paper.

A new reference ([2]) should be added to the reference list of the original paper as reference number 2.

The last sentence of the abstract should be rewritten as follows: "The importance of applying an effective risk management has been investigated, which is shared between the planning and production phase, whereas risk identification is the most important in the risk management process." Similarly, the conclusions of [1] should be rewritten as above.

In Part II, the first sentence should be modified to "The questionnaire investigation method applied in [2] to study the risk management in construction projects of Swedish contractors is also applied in this paper." The next sentence, "Therefore, this form of survey integrates two types of data and the core assumption of this approach is that a combination of qualitative and quantitative methods leads to a better understanding of the problem [10]", should be removed.
The last paragraph of Part II should include a citation of [2] in its first part, and the research data should be updated as follows: "An invitation email was sent to 215 contractors in two western provinces of Vietnam, and 120 responses were received. The response rate is about 55%." The last part of the paragraph should be removed as redundant. The text to be removed is: "About 70% of the respondents had more than 15 years of experience within the construction industry and the majority (88%) were contractors, 24% were developers and 2.38% were consultants. The size of the companies was equally represented: approximately 48% had more than 1000 employees while 52% had less than 1000 employees."

In Part III, the following passage should also be removed as redundant: "Contractors were the 78%, developers (clients) the 19%, and consultants the 13% of the participants. Interviews were conducted only with contractors. An equal distribution among company sizes was attained in the data collection, as stated above. A difference of opinion related to the company size will merely be mentioned when a significant differentiation can be observed between them, otherwise an overall picture of the industry will be presented due to similar answers to the questions."

The following sentence in Part III should also be removed: "The overwhelming majority of the respondents in both the questionnaire and the interviews described themselves as being risk-neutral rather than risk averse or risk-seekers, which coincides with previous studies."

In the discussion of question 1, "How do you perceive risk within the construction industry?", the collected data should be adjusted as follows: "The results indicate that most of the respondents consider risk as a combination of threats and opportunities. About 94% (113 respondents) chose this option. The rest (7 respondents) perceive risk as a natural hazard." Figure 1 may be removed as redundant.
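As a quick arithmetic check, the corrected survey figures above are mutually consistent; a small illustrative snippet (all numbers taken from the corrected text):

```python
# Quick check of the corrected survey figures quoted above.
invited, responses = 215, 120
q1_threats_opportunities = 113  # respondents choosing "threats and opportunities"
q1_natural_hazard = 7           # remaining respondents

# The two answer groups account for all 120 responses.
assert q1_threats_opportunities + q1_natural_hazard == responses

print(f"response rate: {100 * responses / invited:.1f}%")                  # "about 55%"
print(f"Q1 share:      {100 * q1_threats_opportunities / responses:.0f}%")  # "about 94%"
```

The computed response rate is 55.8%, matching the "about 55%" in the corrected text, and 113 of 120 rounds to the quoted 94%.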
In the discussion of question 2, "What is your attitude in relation to risk?", the collected data should be adjusted as follows: "Most of the respondents dislike risks (92%, or 110 respondents), and some selected 'risk seeking' or 'risk avoiding' as their answers. This result is almost contrary to a result published in [2], in which the majority of the respondents chose a neutral approach balanced between avoiding and seeking risks. This hints that in Vietnam, contractors seem to be more prone to avoid and limit risks in construction." Finally, the interview part should be perceived as a subjective view of the matter that cannot offer objectivity.

REFERENCES
[1] V. T. Phan, "Contractor's attitude towards risk and risk management in construction in two western provinces of Vietnam", Engineering, Technology & Applied Science Research, Vol. 10, No. 6, pp. 6418-6421, 2020. https://doi.org/10.48084/etasr.3339
[2] D. Petrovic, "Risk management in construction projects: a knowledge management perspective from Swedish contractors", M.S. thesis, Department of Real Estate and Construction Management, Royal Institute of Technology, Stockholm, Sweden, 2017

Engineering, Technology & Applied Science Research, Vol. 9, No. 2, 2019, pp. 4071-4074
www.etasr.com

Investigation of an Induction Wound Rotor Motor to Work as a Synchronous Generator

Ayman Y. Al-Rawashdeh, Department of Electrical Engineering, Faculty of Engineering Technology, Al-Balqa Applied University, Amman, Jordan, dr.ayman.rawashdeh@bau.edu.jo

Abstract—This paper aims at investigating the use of an induction wound rotor motor to generate voltage instead of the old diesel engines that are still used in many factories in Jordan, such as old cement factories. A simulation model of the induction wound rotor motor was implemented in MATLAB/Simulink.
The DC excitation current was connected to a two-phase rotor circuit, and the voltage-current performance characteristics were investigated and evaluated under different load types. The simulation results confirmed the possibility of using the induction wound rotor motor as a synchronous generator.

Keywords—slip ring; induction motor; synchronous generator; model; gear box; coupling; excitation; load variation

I. INTRODUCTION

One of the main problems of electrical supply in industry and agriculture is the interruption of the main power supply source and the resulting need for an emergency power supply source. In most cases, the emergency power supply sources are diesel engines, e.g. the Shinko diesel engine (1982) used in Jordan cement factories (Figure 1, Table I). Diesel generators are known to cause problems, especially the old engine models: they need to be run for at least one hour per week, and their maintenance and operating costs are high. Moreover, spare parts are not readily available, and long order and delivery times are needed.

Fig. 1. A Shinko diesel engine.

Induction motors are classified by their rotor construction [1]. Of the different types, the current study investigates the use of the wound rotor motor. From the known speed-torque characteristic of the induction motor, the machine can act either as a motor or as a generator [2]. When the speed of the induction motor is less than the synchronous speed and its slip is positive, the induction motor operates as a motor. When its speed is more than the synchronous speed and its slip is negative, the machine works as a generator. In the current investigation, a new method that allows an induction motor to operate as a generator is suggested. A wound rotor induction motor can operate as a synchronous generator because its construction is similar to that of a synchronous generator with non-salient poles.
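The slip sign convention described above can be made concrete with a short sketch. The 50 Hz, 4-pole figures echo Table I; the example shaft speeds are arbitrary illustrations:

```python
# Synchronous speed and slip for an induction machine.
# s > 0: motor operation (n < n_s); s < 0: generator operation (n > n_s).

def synchronous_speed_rpm(f_hz: float, poles: int) -> float:
    """n_s = 120 f / p."""
    return 120.0 * f_hz / poles

def slip(n_rpm: float, f_hz: float, poles: int) -> float:
    """Per-unit slip (n_s - n) / n_s."""
    n_s = synchronous_speed_rpm(f_hz, poles)
    return (n_s - n_rpm) / n_s

# 50 Hz, 4-pole machine (cf. Table I): n_s = 1500 rpm.
for n in (1450.0, 1500.0, 1550.0):
    s = slip(n, 50.0, 4)
    mode = "motor" if s > 0 else ("generator" if s < 0 else "synchronous")
    print(f"n = {n:6.1f} rpm, s = {s:+.3f} -> {mode}")
```

Driving the shaft above 1500 rpm makes the slip negative, which is the generator regime the paper exploits.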
in wound rotor induction motors, the dc excitation current can be supplied to the rotor circuit through slip rings and carbon brushes. this gives the possibility of converting an asynchronous machine into a synchronous generator operation mode which can be used as an emergency power supply, allowing power sources to be utilized more effectively and efficiently.

table i. shinko diesel engine parameters
synchronous generator: type gfy45648-4 h 3; output 1251 kva, pf 0.8; voltage 400 v, 50 hz; current 1804 a, 1500 rpm; excitation voltage 95 v, 4 poles; excitation current 183 a; ip 22s; serial no. 82128145051
brushless exciter: type gjg219ta; output 18 kw; dc 95 v; dc 189 a; excitation voltage 70 v; excitation current 8.5 a; insulation class f

ii. proposed system description and model
to investigate the voltage-current characteristics of the wound rotor induction motor in the generator operation mode, matlab simulations (table ii) were performed under different load types, including resistive, inductive and capacitive loads [3-5]. to supply the excitation dc current, a two-phase rotor circuit of the slip ring motor can be connected [6, 7]. as shown in the model in figure 2 and figure 4(c), the rotor circuit includes just two phases. this scheme of connection induces more magnetomotive force than the three-phase connection [8, 9]. corresponding author: ayman y. al-rawashdeh

table ii. block description of the matlab simulation
simple gear: represents a fixed-ratio gear or gear box. no inertia or compliance is modeled in this block. connections b (base) and f (follower) are mechanical rotational conserving ports. the relation between base and follower rotation directions is specified with the output shaft rotates parameter.
inertia: the block represents an ideal mechanical rotational inertia.
it has one mechanical rotational conserving port. the block positive direction is from its port to the reference point. this means that the inertia torque is positive if the inertia is accelerated in the positive direction.
ideal torque source: represents an ideal source of torque that generates torque at its terminals proportional to the input physical signal.
ideal rotational motion sensor: a device that converts an across variable measured between two mechanical rotational nodes into a control signal proportional to the angular velocity or angle. the sensor is ideal since it does not account for inertia, friction, delays, energy consumption, etc. connections r and c are mechanical rotational conserving ports and connections w and a are physical signal output ports for velocity and angular displacement, respectively.
torque sensor: a device that converts a variable passing through the sensor into a control signal proportional to the torque with a specified coefficient of proportionality. the sensor is ideal since it does not account for inertia, friction, delays, energy consumption, etc. connections r and c are mechanical rotational conserving ports that connect the sensor to the line whose torque is being monitored. connection t is a physical signal port that outputs the measurement result. the sensor positive direction is from port r to port c.
dc voltage source: ideal dc voltage source.
dc machine: implements a (wound-field or permanent magnet) dc machine. for the wound-field dc machine, access is provided to the field connections so that the machine can be used as a separately excited, shunt-connected or series-connected dc machine.
asynchronous machine: implements a three-phase asynchronous machine (wound rotor, squirrel cage or double squirrel cage) modeled in a selectable dq reference frame (rotor, stator, or synchronous). stator and rotor windings are connected in wye to an internal neutral point.

fig. 2.
the rotor circuit of the slip ring motor with two phases.

the total magnetomotive force of this connection for each harmonic ν can be calculated from:

f_ν = 2 f_φν cos(νπ/6)    (1)

the ratio of the first-harmonic magnetomotive force of this connection to the first-harmonic magnetomotive force of one phase is:

f_1 / f_φ1 ≅ 1.5    (2)

where f_1 and f_φ1 are the total magnetomotive force and the magnetomotive force of one phase, respectively. it can be clearly concluded that when the dc current is supplied through two phases of the rotor circuit, the total induced magnetomotive force is 1.5 times the induced magnetomotive force of one phase. the dc excitation current flowing through the rotor circuit (figure 2) is used to induce the total machine magnetomotive force in all three rotor phases [10, 11], and:

i_a = i_b + i_c    (3)

where i_b = i_c = 0.5 i_a. the suggested connection method of the rotor circuit shown in figure 2 gives 1.15 times more magnetomotive force than the traditional connection in which the dc excitation current is supplied to each phase separately. as the prime mover for the system a dc motor was used, coupled with the wound rotor motor by a gear box with a 1:1 ratio (figure 3(a)).

fig. 3. (a) general model blocks of the system, (b) general model of the system, (c) the general model used to collect data.

figure 3(b) shows the prime mover, a dc motor, and a wound rotor induction motor which is driven by the prime mover and is used to generate ac power. the excitation current of the induction motor is dc and is supplied from an external dc source through slip rings and carbon brushes [11]. firstly, the prime mover is run to the rated speed and then, step by step, the dc excitation current is supplied to the wound rotor motor to record the measurements at no-load and load operation conditions. the investigation process was done at a constant speed (the speed of the prime mover), power factor and excitation current. at first, the experiment was done without load, to investigate the no-load operation condition e = f(i_f) at i = 0 and constant speed. then the experiment was repeated with the same connection circuit but with different loads, resistive, inductive and capacitive, connected to the terminals of the generator [8, 12, 13]. the output voltages and load currents were recorded using measurement devices. the rated parameters of the investigated motor were v_s = 220 v, p = 60 w, n = 1500 rpm, i_f = 1.7 a, i_0 = 0.47 a.

iii. results and discussion
firstly, the experiment was done without load to investigate the no-load operation condition e = f(i_f) at i = 0 and constant speed. table iii lists the experimental outputs of the module measured without load. tables iv, v and vi list the experimental outputs of the module as a function of the r-, l- and c-loads, respectively.

table iii. experimental outputs of the module measured at no-load condition
if       ea
0        0
0.2108   31.46
0.2499   37.40
0.3000   44.44
0.4989   75.68
0.6001   89.98
0.9996   148.06
1.4990   205.04

table iv. experimental outputs of the module measured as a function of r-load
i        v        p
0.00     229.90   0.00
0.0300   220.88   9.96
0.0696   211.86   19.98
0.9960   203.94   30.00
0.1297   194.92   39.69
0.1598   187.88   49.98
0.1899   179.96   60.00
0.2096   177.92   69.96
0.2298   165.88   79.98

table v. experimental outputs of the module measured as a function of l-load
i        v        l
0.0696   211.86   0.000
0.9960   188.98   0.416
0.1396   169.84   0.833
0.1795   154.00   1.250
0.2096   139.92   1.666

the measurements of voltage, current and power were recorded using a voltmeter, an ammeter and a wattmeter.
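one plausible reading of the two-phase rotor connection described in (1)-(3) is that the dc current enters phase a and returns split equally through phases b and c (i_b = i_c = 0.5 i_a). under that assumption, a phasor sum of the per-phase contributions along the three rotor phase axes reproduces the 1.5 ratio quoted after (2). a sketch, independent of the paper's simulink model:

```python
import cmath

# unit vectors of the three rotor phase magnetic axes, 120 degrees apart
axes = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

i_a = 1.0                                  # dc current into phase a (per-unit)
currents = [i_a, -0.5 * i_a, -0.5 * i_a]   # return path split equally through b and c

# resultant magnetomotive force as a vector sum of per-phase contributions
F = sum(i * u for i, u in zip(currents, axes))

ratio = abs(F) / i_a   # resultant mmf relative to a single phase carrying i_a
print(round(ratio, 3))  # 1.5, matching the ratio quoted after (2)
```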
results for the voltage, current and power outputs with different load types (tables iii, iv, v and vi) are plotted graphically in figure 5. the obtained simulation results were in total agreement with the experimental results and the findings in [1, 3]. the results for the r- and l-loads revealed that the output voltage of the generator decreased slightly as the load increased. the results also reveal the increase in the current of the generator with increasing load (figure 5). the obtained results showed that the voltage-current characteristics with different load types had a drooping (resistive and inductive) or rising (capacitive) character, which can be explained by the effect of the rotor magnetic field (armature field) on the main magnetic field, especially with an increase in the load.

fig. 4. (a) a simulation model of the prime mover, (b) a simulation model of the gear box, (c) a simulation model of the wound rotor.

table vi. experimental outputs of the module measured as a function of c-load
i        v        c
0.0696   211.86   0.000
0.0696   216.92   0.416
0.0898   224.84   0.833
0.1099   231.00   1.250
0.1297   236.94   1.666

fig. 5. experimental relation between generated voltage and load current of a wound rotor induction motor with different load types.

iv. conclusion
the results obtained from the experiments conducted in this paper revealed similarities between the wound rotor induction motor in generating operation mode and the known characteristics of synchronous generators. based on the observed similarities, the current study provides proof of the possible use of the wound rotor induction motor as an emergency power supply instead of the currently used old diesel generators.

references
[1] k. y. patil, d. s.
chavan, "use of slip ring induction generator for wind power generation", international journal of engineering research and applications, vol. 2, no. 4, pp. 1107-1110, 2012
[2] s. djurovic, d. s. vilchis-rodriguez, a. c. smith, "investigation of wound rotor induction machine vibration signal under stator electrical fault conditions", the journal of engineering, vol. 2014, no. 5, pp. 248-258, 2014
[3] m. barakat, s. elmasry, m. e. bahgat, a. a. sayed, "effect of rotor current control for wound rotor induction generator on the wind turbine performance", international journal of power electronics and drive system, vol. 2, no. 2, pp. 117-126, 2012
[4] f. i. bakhsh, m. m. shees, m. s. j. asghar, "performance of wound rotor induction generators with the combination of input voltage and slip power control", russian electrical engineering, vol. 85, no. 6, pp. 403-417, 2014
[5] o. albarbarawi, a. al-rawashdeh, g. qaryouti, "simulink modelling of the transient cases of three phase induction motors", international journal of electrical & computer sciences, vol. 17, no. 4, pp. 6-15, 2017
[6] d. aguilar, g. vazquez, a. rolan, j. rocabert, f. corcoles, p. rodriguez, "simulation of wound rotor synchronous machine under voltage sags", 2010 ieee international symposium on industrial electronics, bari, italy, july 4-7, 2010
[7] e. tuinman, p. piers, r. de weerdt, "simulation of a direct on line start of a large induction motor connected to a salient pole synchronous generator", international conference on simulation '98, york, uk, september 30-october 2, 1998
[8] s. devabhaktuni, s. v. jayaram kumar, "different self excitation techniques for slip ring self excited induction generator", international journal of computer applications, vol. 38, no. 2, pp. 19-26, 2012
[9] c. h. watanabe, a. n. barreto, "self-excited induction generator/force-commutated rectifier system operating as a dc power supply", iee proceedings b electric power applications, vol. 134, no. 5, pp. 225-260, 1987
[10] s.
s. murthy, o. p. malik, a. k. tandon, "analysis of self-excited induction generators", iee proceedings c generation, transmission and distribution, vol. 129, no. 6, pp. 260-265, 1982
[11] s. devabhaktuni, s. v. jayaram kumar, "selection of capacitors for the self excited slip ring induction generator with external rotor capacitance", journal of energy technologies and policy, vol. 2, no. 2, pp. 66-77, 2012
[12] b. s. srikanth, r. anguraja, p. r. khatei, "experimental investigation on an induction motor to work as an alternator", international journal of scientific & engineering research, vol. 4, no. 5, pp. 129-132, 2013
[13] d. aguilar, a. luna, a. rolan, g. vazquez, g. acevedo, "modeling and simulation of synchronous machine and its behavior against voltage sags", ieee international symposium on industrial electronics, seoul, korea, july 5-8, 2009

authors profile
dr. ayman y. al-rawashdeh, phd in mechatronics engineering, was born in 1970 in jordan. he obtained his diploma degree in 1995 and his phd in 2008 in the field of mechatronics engineering. currently he works as an assistant professor at the electrical department, faculty of engineering technology, al-balqa applied university, jordan. his main interests are renewable energy and drive system analysis and simulations.

engineering, technology & applied science research vol. 9, no.
5, 2019, 4842-4845 www.etasr.com

construction waste estimation analysis in residential projects of malaysia
kumanan kupusamy, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, k.kumanankupusamy@gmail.com
sasitharan nagapan, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, sasitharan@uthm.edu.my
abd halid abdullah, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, abdhalid@uthm.edu.my
suaathi kaliannan, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, suaathikaliannan@gmail.com
samiullah sohu, department of civil engineering, quaid-e-awam university of engineering, science & technology, campus larkana, pakistan, sohoosamiullah@gmail.com
shivaraj subramaniam, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, shivaraj103@gmail.com
haritharan maniam, department of building & construction engineering, university tun hussein onn malaysia, batu pahat, malaysia, haritharanmaniam@gmail.com

abstract—construction and demolition (c&d) waste accounts for an oversized share of all solid waste generated worldwide. statistical data confirm that, globally, 10-30% of waste originates from construction and demolition works. the types of waste from construction activities are wood, metals, concrete waste and mixed wastes. waste generation continues to increase with economic and population growth. a great challenge is providing more waste disposal facilities, such as landfills, to treat the waste.
rapid urbanization and insufficient attention to c&d waste generation, particularly in developing countries like malaysia, have contributed to an urgent need for additional research on waste generation, as there is a lack of information. the aim of this study is to predict the construction waste generation for peninsular malaysia by quantifying construction waste generation data for kuala lumpur and to estimate the construction waste generation in 2016 for peninsular malaysia. the estimation approach used was the proportion method with 20% of the total sites in each state. an indirect waste measurement method was used to estimate waste generation. the total waste generation for kuala lumpur was around 6,101.46 metric tons. the predicted total amount of c&d waste generated for residential projects throughout peninsular malaysia is 63,101.93 metric tons. the initial prediction of construction waste generation for peninsular malaysia can be used as a baseline for future studies.

keywords-construction; waste; estimation; proportional

i. introduction
construction and demolition (c&d) waste is produced during the construction, renovation, and demolition of buildings and structures. construction waste is anything generated as a result of construction and then abandoned, regardless of whether it has been processed or stockpiled. it comprises surplus materials from site clearance, excavation, construction, refurbishment, renovation, demolition and road works. the construction industry is a key part of the economy of any country. in malaysia, this industry has been playing a vital role in the economy's growth [1]. nowadays, the construction industry is developing quickly as a result of the modernization of the way of life, the demands of infrastructure projects, changes in consumption habits, and the population increment [2]. the construction industry is commonly environmentally unfavorable [3]. c&d waste accounts for an oversized share of the total solid waste generated worldwide.
this industry contributes significantly to the environmental problem in terms of natural resources exploitation, irreversible transformation of the natural environment and accumulation of pollutants in the atmosphere [4]. construction waste is generated throughout the construction process, during site clearance, material damage, material use, material non-use, excess procurement and human errors. moreover, statistical data confirm that 10-30% of total waste originates from construction and demolition works [5]. the main types of waste from construction activities are wood, metals, concrete waste, plastics, papers and cardboards, glass, hazardous wastes (such as paints and glues), etc. [6]. construction waste is produced throughout the project, from the pre-construction stage to the rough construction stage and the finishing stage. generation of construction waste can be caused by various factors. it is vital to recognize and comprehend those causes for controlling waste generation at its source [7].

corresponding author: sasitharan nagapan

causes of construction waste generation on-site are: lack of skills and experience of construction workers, lack of skills and experience of demolition contractors, wasteful use of materials in construction activities, inappropriate methods for loading and shipment of building materials from suppliers to sites, inappropriate methods for handling building materials on-site, frequent demolitions due to reworks and change orders, traditional methods of construction, inappropriate packaging of building materials and components, inappropriate inventory of building materials and components, and low quality of building materials and components. there is still a lack of data on construction waste generation for malaysia.
field measurement research on construction waste generation is negligible in malaysia. questionnaire-based data are less accurate because they draw conclusions from individual assumptions on waste generation without proper evidence and data. moreover, the research done by previous researchers mainly focuses on specific sites and does not include state- or country-wide data. field measurement data on construction waste are more accurate than questionnaire survey data.

ii. construction waste categories
construction waste is classified by source and type. in order to quantify the construction waste adequately, it is useful to have a classification of wastes by source and type of generated waste [8]. hence, waste generated on the construction site can be classified in the following two classes:
• building waste, generated during the construction process due to defects, damages, breakage or simply due to excess.
• packaging waste, generated from the packaging of materials and products delivered to the construction site.
the main types of waste from construction activities are wood, metals, mineral debris (such as stone, bricks, mortar and concrete), plastics, papers and cardboards, glass, and hazardous waste (such as paints and glues).

iii. the construction waste issue in other countries
various minimization programs have been conducted to enhance sustainability in construction. in australia, c&d waste accounts for 16-40% of the total generated solid waste. a total of 19.0 million tons of c&d waste was produced in australia in 2008-2009, out of which 8.5 million tons were disposed to landfills while 10.5 million tons, or 55%, were recovered and recycled [9]. in romania, c&d waste accounted for 4% of the total waste in 2003, 10% in 2004 and 7% in 2005. in total amounts, the quantity of c&d waste produced in 2004 was 646,400 tons, dropping to 466,893 tons in 2005 [10].
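the australian recovery share quoted above is a simple ratio and can be reproduced directly; a minimal arithmetic check, using only the figures given in the text:

```python
total_mt = 19.0      # total australian c&d waste, 2008-2009 (million tons)
landfilled_mt = 8.5  # disposed to landfills
recovered_mt = 10.5  # recovered and recycled

# the two categories together account for the whole total
assert abs(total_mt - (landfilled_mt + recovered_mt)) < 1e-9

recovery_rate = 100.0 * recovered_mt / total_mt
print(round(recovery_rate, 1))  # 55.3, i.e. roughly the 55% quoted in the text
```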
in the usa, 136 million tons of building-related c&d debris are generated each year [11], out of which only 20-30% is recycled [12]. in the uk, every year around 70 million tons of c&d materials and soil end up as waste [13], and the wastage rate in the uk construction industry is as high as 10-15% [14]. the waste amount from c&d activities has remained at around 100 million tons annually in recent years, while demolition accounted for around 32.7 million tons in 2007, which means that demolition waste makes up about 30% of all annual construction generated waste [15]. in tehran, iran, about 18,250,000 tons of c&d waste are produced annually [16]. in china, according to a report by the environment protection department (epd), about 2,900 tons of c&d waste were received at landfills per day in 2007. china produces 29% of the world's municipal solid waste (msw) each year, with nearly 40% of this amount produced by construction activities [17]. in japan, the amount of construction waste dropped from 99 million tons to 77 million tons in a ten-year period (1995-2005), while the recycling rate increased from 58% to 92% in the same period [4].

iv. construction waste in malaysia
malaysia has experienced quick infrastructure development over the last decade. c&d waste constitutes around 20% to 30% of the total waste in landfills. the amount of demolition waste is double the amount of construction waste. construction waste management has become an issue that needs high concern in many developing countries because it has an adverse effect on economic, environmental and social aspects [7]. illegal dumping is a common issue, and also a common practice among contractors, in malaysia [18]. the construction industry development board (cidb) mainly focuses on solid waste [7]. there are poor regulations and guidelines for managing c&d waste generation in malaysia. in this manner, a satisfactory approach to c&d waste management must be determined.
there is no reliable data and information related to construction waste in malaysia. in addition, malaysia still lacks research on construction waste generation [19].

v. waste amount estimation
waste quantification utilizes site accounting, record keeping and waste characterization to recognize the generation of construction waste. it is a means to estimate the quantity of generated construction waste and thus to assess the potential for waste reduction. waste quantification can also help decision making in assessing the feasibility of recycling programs, as practiced in countries like the usa, hong kong and taiwan. nonetheless, malaysia is still falling behind in establishing a quantified benchmark for the construction waste generation rate among its contractors compared to other countries [17]. site visits and field measurements were used to investigate the waste generation rates. field measurement surveys, in direct or indirect approaches, can be utilized to collect c&d waste generation data. direct measurement requires weighing the produced waste or measuring its volume on site. indirect measurements are frequently used for practical estimations.

vi. research methodology
indirect measurements were used. authors in [20] employed truck load records to estimate the volume of c&d waste generated on site. they recorded the number of trucks for waste collection and the containers' volume for deriving the total waste volume at a project level. for the purpose of indirect quantification at a regional level, authors in [21] obtained truck load records from landfills. mixed waste data were calculated based on bin measurements and waste bin trips per day or week, depending on the amount of waste produced on site. after that, monthly data were calculated based on the addition of daily and weekly data. this process was repeated for the twelve months of the year. with the limited data available, the cross-proportion method [21] was deemed appropriate to be adopted in predicting the total construction generated waste [22].

vii. data collection and analysis
according to the data from cidb in 2015, kuala lumpur is one of the leading states in malaysia in construction development. construction waste in kuala lumpur is managed by the local authorities. this research focuses on construction waste generation mainly for residential projects. the residential sector in the city is growing rapidly in line with the supply and demand of the population [23]. however, the produced waste is mixed in the majority of sites and not separated according to respective types. indirect data were collected from 38 sites during 2016. the data were taken from delivery orders of construction waste truck loads. the average tonnage of waste generation for kuala lumpur was calculated. based on these data, the estimation of waste generation for the other states in peninsular malaysia was calculated. table i shows the total data collection for kuala lumpur residential projects in 2016 (1-year period). the data were taken using indirect methods. thirty-eight sites were visited for data collection in the kuala lumpur region. waste quantification was high in january of 2016 (699.7633 tons), july (857.192 tons) and september (725.1662 tons). waste generation was less than 500 tons in the other months. the month with the lowest waste production was april and july had the highest production. table i.
total data collection for kuala lumpur
month   average (tons)
jan     699.7633
feb     312.0925
mac     378.054
apr     194.34
may     326.044
jun     352.5443
july    857.192
aug     363.608
sept    725.1662
oct     405.8421
nov     411.632
dec     311.7453

apparently, the total number of construction projects for each state was important in estimating the required number of samples using the cross-proportion method. cidb has published statistics for construction projects in malaysia in 2015. using the number of project sites in kuala lumpur which were measured through the indirect measurement method, the number of required project sites to be measured in other states can be estimated, and the proportionate total weight of c&d waste for another state relative to the measured total can then be determined. the basic mathematical formula used in the cross-proportion method to determine the proportionate number of estimated project sites to be measured in another state relative to the number of project sites measured in a certain state is (1):

c1 / q1 = c2 / q2    (1)

where c1 is the number of project sites measured in a certain state, c2 is the number of estimated project sites to be measured in another state, q1 is the total number of project sites in a certain state, and q2 is the total number of project sites in another state. using the number of project sites in kuala lumpur which were measured through the indirect measurement method (i.e. 38 sites), and the total number of project sites in kuala lumpur (190) and johor (350), the number of required project sites to be measured in the state of johor can be estimated using (1) as c2=70 sites. therefore, 70 project sites in johor should be measured in order to be proportionate to the 38 project sites measured in kuala lumpur. employing the cross-proportion method, the basic mathematical formula to determine the proportionate total weight of c&d waste (i.e.
based on the required number of project sites) for another state relative to the total weight of c&d waste (i.e. based on the number of project sites measured) obtained in a certain state is shown in (2):

t1 / c1 = t2 / c2    (2)

where t1 is the total tonnage of c&d waste in a certain state (i.e. based on the number of project sites measured), t2 is the total tonnage of c&d waste estimated in another state (i.e. based on the number of estimated project sites to be measured), c1 is the number of project sites measured in a certain state, and c2 is the number of estimated project sites to be measured in another state. therefore, the total weight of c&d waste generated (i.e. based on the number of estimated project sites to be measured) in the state of johor can be estimated using (2) as t2 = (6,101.46/38)×70 = 11,239.53 metric tons. based on the limited data collected in kuala lumpur, a rough assumption and prediction using the cross-proportion method was made to generate the estimated c&d waste tonnage for the 12 states of peninsular malaysia. it is worth noting that the estimated amount obtained represents only 20% of the total residential project sites in each state. table ii shows that the predicted total amount of c&d waste generated for residential projects throughout peninsular malaysia was 600,727.39 metric tons, based on the 20% of the total project sites considered in each state. it was shown that selangor was the highest waste generator in residential projects with 142,156.86 tons in the year 2016, followed by johor with 106,999.79 tons. the least waste was generated in perlis (4,585.71 tons) and kedah (30,571.37 tons). malaysian industrial areas such as selangor and johor produce more waste than less developed states such as perlis.
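the two proportions in (1) and (2) reduce to simple ratios and can be reproduced directly. a minimal sketch using the figures quoted in the text (38 of 190 kuala lumpur sites measured, 6,101.46 t of waste, 350 sites in johor); the function names are illustrative, not from the paper:

```python
def proportional_sites(c1, q1, q2):
    """eq. (1): sites to measure in another state, c2 = c1 * q2 / q1."""
    return c1 * q2 / q1

def proportional_waste(t1, c1, c2):
    """eq. (2): estimated tonnage in another state, t2 = t1 * c2 / c1."""
    return t1 * c2 / c1

c2 = proportional_sites(c1=38, q1=190, q2=350)   # kuala lumpur -> johor sample size
t2 = proportional_waste(t1=6101.46, c1=38, c2=c2)

print(c2)            # 70.0 sites, as derived in the text
print(round(t2, 2))  # 11239.53 metric tons, matching the value in the text
```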
5, 2019, 4842-4845 4845 www.etasr.com kupusamy et al.: construction waste estimation analysis in residential projects of malaysia table ii. predicted annual amount of c&d waste in malaysia peninsular state predicted waste (tons) wilayah persekutuan 58085.60 johor 106999.79 kedah 30571.37 kelantan 10699.98 melaka 25985.66 negeri sembilan 48914.19 pahang 48914.19 pulau pinang 56557.03 perak 50442.76 perlis 4585.71 selangor 142156.86 terengganu 16814.25 total 600,727.39 viii. conclusion this study successfully achieved its aim of predicting the construction waste generation for the malaysian peninsular. there is a limited number of studies conducted on this field. using the proportion methodology along with indirect measurements, this research had done the estimation of waste generation for residential projects in peninsular malaysia using kuala lumpur field measurement data. however, the projection of data includes the 20% of construction waste sites in kuala lumpur and other states. the predicted total annual amount of construction waste generated for residential projects throughout peninsular malaysia was 600,727.39 metric tons. it is shown that selangor was the highest waste generator in the residential projects in 2016, followed by johor. the least waste were generated in perlis and kedah, because industrial areas produce more waste than less developed states. this study had done the initial prediction by visiting and taking measurements on the 20% of the construction sites. in the future, this percentage could be augmented. acknowledgement authors would like to thank the cidb and the solid waste and public cleansing management corporation (swcorp) for the information provided in this study. also, the universiti tun hussein onn malaysia and the ministry of education, malaysia. funds for the study were provided by the fundamental research grant scheme (frgs) no.1624, and grant no. u704. references [1] m. f. hasmori, i. said, r. deraman, n. h. abas, s. nagapan, m. 
h. ismail, f. s. khalid, a. f. roslan, “significant factors of construction delays among contractors in klang valley and its mitigation”, international journal of integrated engineering, vol. 10, no. 2, pp. 3236, 2018 [2] s. nagapan, i. a. rahman, a. asmi, a. h. memon, i. latif, “issues on construction waste: the need for sustainable waste management”, ieee colloquium on humanities, science and engineering, kota kinabalu, malaysia, december 3-4, 2012 [3] s. nagapan, s. kaliannan, a. h. abdullah, s. sohu, r. deraman, m. f. hasmori, n. h abas, “preliminary survey on the crucial root causes of material waste generation in malaysian construction industry”, vol. 8, no. 6, pp. 3580–3584, 2018 [4] a. p. kern, m. f. dias, m. p. kulakowski, l. p. gomes, “waste generated in high-rise buildings construction: a quantification model based on statistical multiple regression”, waste management, vol. 39, pp. 35–44, 2015 [5] m. f. b. yusof, study on construction & demolition waste management in construction site, bsc thesis, university college of engineering & technology malaysia, 2006 [6] s. jalali, “quantification of construction waste amount”, available at: https://core.ac.uk/download/pdf/55608453.pdf, 2007 [7] s. nagapan, i. a. rahman, a. asmi, a. h. memon, r. m. zin, “identifying causes of construction waste-case of central region of peninsula malaysia”, international journal of integrated engineering, vol. 4, no. 2, pp. 22-28, 2012 [8] a. f. masudi, c. r. c. hassan, n. z. mahmood, s. n. mokhtar, n. m. sulaiman, “waste quantification models for estimation of construction and demolition waste generation: a review”, international journal of global environmental issues, vol. 12, no. 2-4, pp. 269-281, 2012 [9] s. zakar, overview of demolition waste in the uk, bre, 2009 [10] o. f. kofoworola, s. h. gheewala, “estimation of construction waste generation and management in thailand”, waste management, vol. 29, no. 2, pp. 731–738, 2010 [11] k. sandler, p. 
swingle, oswer innovations pilot: building deconstruction and reuse, epa 2006 [12] a. a. najafpoor, a. zarei, f. j. behnam, m. v. shahroudi, a. zarei, “a study identifying causes of construction waste production and applying safety management on construction site”, iranian journal of health sciences, vol. 2, no. 3, pp. 49–54, 2014 [13] waste strategy 2000 for england and wales, crown, 2000 [14] c. mcgrath, m. anderson, “waste minimizing on a construction site”, building research establishment digest, vol. 447, pp. 441-454, 2000 [15] x. chen, w. lu, “identifying factors influencing demolition waste generation in hong kong”, journal of cleaner production, vol. 141, pp. 799–811, 2017 [16] b. r. broujeni, g. a. omrani, r. naghavi, s. s. afraseyabi, “construction and demolition waste management (tehran case study)”, engineering technology & applied science research, vol. 6, no. 6, pp. 1249-1252, 2016 [17] h. wu, h. duan, l. zheng, j. wang, y. niu, g. zhang, “demolition waste generation and recycling potentials in a rapidly developing flagship megacity of south china: prospective scenarios and implications”, construction and building materials, vol. 113, pp. 1007– 1016, 2016 [18] c. s. poon, a. t. w. yu, l. jaillon, “reducing building waste at construction sites in hong kong”, construction management and economics, vol. 22, no. 5, pp. 461–470, 2004 [19] h. maniam, s. nagapan, a. h. abdullah, s. subramaniam, s. sohu, “a comparative study of construction waste generation rate based on different construction methods on construction project in malaysia”, engineering, technology & applied science research, vol. 8, no. 5, pp 3488-3491, 2018 [20] n. kartam, n. a. mutairi, i. a. ghusain, j. a. humoud, “environmental management of construction and demolition waste in kuwait”, waste management, vol. 24, no. 10, pp. 1049–1059, 2004 [21] s. d. sawaitul, k. p. wagh, p. n. 
chatur, “classification and prediction of future weather by using back propagation algorithm: an approach”, international journal of emerging technology and advanced engineering, vol. 2, no. 1, pp. 110–113, 2012 [22] d. stanley, d. mcgowan, s. h. hull, “pitfalls of over-reliance on cross multiplication as a method to find missing values”, texas mathematics teacher, vol. 11, no. 1, pp. 9-11, 2003 [23] a. a. mustaffa, m. f. hasmori, a. s. sarif, n. f. ahmad, n. y. zainun, “the use of uav in housing renovation identification: a case study at taman manis 2”, iop conference series: earth and environmental science, vol. 140, article id 012003, 2018 microsoft word 26-2746_s engineering, technology & applied science research vol. 9, no. 3, 2019, 4209-4212 4209 www.etasr.com bhell et al.: use of rice husk ash as cementitious material in concrete use of rice husk ash as cementitious material in concrete naraindas bheel department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan naraindas04@gmail.com abdul wahab abro department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan ablwab82@gmail.com irfan ali shar department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan irfanshar2000@gmail.com ali aizaz dayo department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan aliaizaz890@gmail.com sultan shaikh department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan sultan11civil@gmail.com zubair hussain shaikh department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan azubair.shaikh56@gmail.com abstract—in this research, rice husk ash (rha) was used as a partial substitute for cement in concrete to reduce its cost, and alternative processing methods using agricultural/industrial waste were found. 
the main objective of this study was to determine the fresh (flowability) and hardened (splitting tensile strength and compressive strength) concrete properties using rha at 0%, 5%, 10%, 15% and 20% by weight. a total of 90 concrete samples (45 cubes and 45 cylinders) were prepared, cured for 7, 14, and 28 days, and designed for a target strength of 28 n/mm²; ultimately, these concrete specimens were tested on a universal testing machine (utm). three concrete specimens were cast for each proportion and the average of the three was taken as the final result. the flowability of fresh concrete decreases with increasing rha content. the results showed that the compressive and tensile strength of the concrete specimens increased by 11.8% and 7.31% respectively when using 10% rha at 28 days of curing. keywords-rice husk ash; cement replacement material; improved strength; reduced construction cost; utilizing disposal waste i. introduction concrete is widely and globally used throughout the history of humankind [1]. concrete is a mixture of sand and crushed rock bound together by a hardened paste of hydraulic cement and water [2]. the increasing use of concrete raises the demand for its ingredients (cement, sand, and gravel). the cost of concrete constituents is rising rapidly, hence there is a need for an unconventional material that is low-cost and readily available and that gives similar or greater strength when used in concrete [3]. cement, one of the constituents of concrete, is costly and its production releases large amounts of co2 [4-8]. manufacturing one tonne of cement releases about one tonne of co2 into the atmosphere, while 1.6 tonnes of natural resources are required to produce it [9-11].
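the per-tonne figures above lend themselves to a quick savings estimate. the following is a minimal sketch, assuming a hypothetical project cement demand and replacement level; only the two per-tonne ratios (about 1 t of co2 and 1.6 t of natural resources per tonne of cement) come from the text:

```python
# Sketch: CO2 and raw-material savings from partial cement replacement.
# The ~1 t CO2 / t cement and 1.6 t resources / t cement ratios are from
# the text; the project demand and replacement level are hypothetical.
CO2_PER_TONNE_CEMENT = 1.0       # t CO2 per t cement (from the text)
RESOURCES_PER_TONNE_CEMENT = 1.6 # t raw materials per t cement (from the text)

def replacement_savings(cement_demand_t, replacement_fraction):
    """Return (cement saved, CO2 avoided, resources saved), all in tonnes."""
    saved_cement = cement_demand_t * replacement_fraction
    return (saved_cement,
            saved_cement * CO2_PER_TONNE_CEMENT,
            saved_cement * RESOURCES_PER_TONNE_CEMENT)

# Hypothetical 1000 t cement demand with 10% RHA replacement:
cement, co2, resources = replacement_savings(1000.0, 0.10)
print(f"cement saved: {cement} t, CO2 avoided: {co2} t, resources saved: {resources} t")
```

with these illustrative numbers, a 10% replacement avoids 100 t of co2 and 160 t of raw-material extraction per 1000 t of cement demand.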
in many studies cement has been partially replaced by agricultural/industrial waste such as glass powder, sugar cane bagasse ash, rice husk ash (rha), blast furnace slag, maize cob ash, millet husk ash, fly ash etc. in order to reduce cost, waste and co2 emissions, while these resources are easily available [9, 12]. rha is a by-product of agricultural waste [13-15]; it is considered unwanted and is mostly open-air burned [16, 17]. the disposal of agro/industrial waste is currently a serious problem, and rice husk is one such agro waste. about 120 million tonnes of rice husk are produced annually in paddy fields [17-19]. the husk left over from rice processing is either burnt or dumped. when rice husk is burnt at a certain temperature under atmospheric conditions, the resulting rha possesses about 85% silica content, known as non-crystalline silica, and it can be utilized as a partial cement replacement material [20-24]. rha is considered a highly pozzolanic material [25-29] and can be used as an additional material in concrete, decreasing the environmental problem. a study of the hardened properties of concrete using 10% rha was carried out in [30]. concrete samples were cured and tested after 7, 14, 28, and 56 days, using a mixing ratio of 1:2:4 with water-cement ratios of 0.45, 0.50, and 0.60. the results showed that the compressive and tensile strengths increased by 14.51% and 10.71% respectively at 0.45 water-cement ratio when 10% rha was used in concrete with a curing time of 56 days. authors in [31] reported that rha is beneficial in reducing the temperature of concrete compared to plain cement concrete. authors in [32] carried out research on the hardened properties of concrete blended with 5%, 10%, 15% and 20% cement replacement by weight. the concrete samples were designed for a target strength of 25 n/mm² and were cured for 7 and 28 days. the crushing strength improved by 15.74% when using 10% rha in concrete at 28 days [32].
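replacement by weight, as used throughout the studies above and in this paper, can be made concrete with a short sketch; the 100 kg binder batch below is a hypothetical figure, and only the 0-20% replacement levels come from the text:

```python
# Sketch: cement/RHA split for the replacement-by-weight levels used in
# the study (0%, 5%, 10%, 15%, 20%). The 100 kg total binder mass is a
# hypothetical batch size, not a figure from the paper.
def binder_split(total_binder_kg, rha_percent):
    """Return (cement kg, RHA kg) for a given replacement percentage."""
    rha = total_binder_kg * rha_percent / 100.0
    return total_binder_kg - rha, rha

for pct in (0, 5, 10, 15, 20):
    cement, rha = binder_split(100.0, pct)
    print(f"{pct:2d}% RHA -> cement {cement:5.1f} kg, RHA {rha:4.1f} kg")
```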
rha plays an essential role in the characteristics of cementitious materials [33]. the size of the rha particles is finer than that of opc, which augments concrete properties [34]. the size of rha particles is around 25 microns, which makes rha capable of playing a vital role as a filler in cement [35]. in this experimental study, rha was blended in concrete by weight in proportions up to 20%. ii. research methodology research was conducted on the fresh and hardened properties of concrete using 0%, 5%, 10%, 15% and 20% of rha as cement substitute material. a total of 90 concrete samples (45 cubes and 45 cylinders) were prepared and cured for 7, 14, and 28 days, with a design target strength of 28 n/mm². to get an optimum mix, a number of trial mixes were completed using variable cement (binder), coarse aggregates, fine aggregates, and water. after obtaining the desirable mix, rha was used as cement substitute material to determine the characteristic strength of the concrete specimens, and ultimately these specimens were tested on a utm following the british standard (bs) code. in this study, concrete cubes were cast for compressive strength and cylinders for splitting tensile strength. three concrete specimens were cast for each proportion and the average value of the three was taken as the final result. this research work was completed in the concrete laboratory of the department of civil engineering, college of science and technology, hyderabad, sindh, pakistan. iii. materials used a. cement the cement that was used is available locally in the market under the brand name pak land. b. fine aggregates hilly sand of zone-ii, passing through sieve no. 4 (4.75mm), was used as fine aggregate.
the fineness modulus, water absorption and specific gravity of the fine aggregates are 2.61, 1.8% and 2.60 respectively. c. coarse aggregates the coarse aggregates used were 20mm in size and were available in the local market. the water absorption and specific gravity of the coarse aggregates were 1.4% and 2.73 respectively. d. rice husk ash rice husk was collected from the region of sakrand and air-dried. rha was acquired using an uncontrolled-temperature burning method. the resulting ash was passed through a #30 sieve. e. water drinking water, available in the lab, was used. iv. results and discussion a. fresh concrete workability the flow of fresh concrete was measured with a slump cone having a top diameter of 10cm, a bottom diameter of 20cm and a height of 30cm. the maximum flow of fresh concrete was 65mm when using 0% rha as cement substitute material and the minimum workability value was 25mm at 20% rha by weight. it was concluded that the flow of fresh concrete decreases with an increase in the amount of rha. the experimental results are shown in figure 1. fig. 1. workability of fresh concrete b. compressive strength compressive strength tests were conducted on cubes (100mm×100mm×100mm) using different rha percentages. three specimens were cast for each proportion and the average value of the three was taken as the final result. the compressive strength was maximum at 10% rha used as cement substitute material and minimum at 20% rha, at 7, 14, and 28 days. the results are shown in figure 2. fig. 2. compressive strength of concrete at 7, 14, and 28 days c. splitting tensile strength splitting tensile strength tests were conducted on cylinders (200mm×100mm) with various rha percentages, cured for 7, 14, and 28 days. three concrete samples were cast for each proportion and the average was taken as the final result.
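the reported strengths follow from the standard cube and splitting-cylinder formulas: compressive strength is the failure load over the loaded face area, and splitting tensile strength is 2P/(πLD) for a cylinder of length L and diameter D. the paper does not state these formulas explicitly, so the sketch below is offered under that assumption, and the failure loads used are hypothetical:

```python
import math

# Sketch: standard strength formulas for the specimen sizes used in the
# study (100 mm cubes, 200 mm x 100 mm cylinders). The failure loads
# below are hypothetical illustration values, not measured data.
def cube_compressive_strength(load_n, side_mm=100.0):
    """Compressive strength in N/mm^2: failure load over loaded face area."""
    return load_n / (side_mm * side_mm)

def splitting_tensile_strength(load_n, length_mm=200.0, diameter_mm=100.0):
    """Splitting tensile strength in N/mm^2: 2P / (pi * L * D)."""
    return 2.0 * load_n / (math.pi * length_mm * diameter_mm)

# A hypothetical 280 kN cube failure load gives the 28 N/mm^2 target:
print(cube_compressive_strength(280000.0))          # 28.0
print(round(splitting_tensile_strength(200000.0), 2))
```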
the maximum splitting tensile strength of concrete was noted for the 10% rha specimens and the minimum was recorded at 20%. the cylinders were tested on a utm. the experimental results are shown in figure 3. v. conclusions • the flow of fresh concrete was maximum when using 0% rha as cement substitute material. the minimum workability value was recorded at 20% rha. • it was concluded that the flow of fresh concrete decreases with an increase in the amount of rha in concrete. • the compressive strength was maximum for 10% rha concrete and minimum for 20% rha concrete at 7, 14, and 28 days. • the maximum splitting tensile strength was noted for 10% rha concrete and the minimum for 20% rha concrete at 7, 14, and 28 days. fig. 3. split tensile strength at 7, 14, and 28 days references [1] a. manimaran, m. somasundaram, p. t. ravichandran, “experimental study on partial replacement of coarse aggregate by bamboo and fine aggregate by quarry dust in concrete”, international journal of civil engineering and technology, vol. 8, no. 8, pp. 1019-1027, 2017 [2] m. k. nemati, “chapter 5: aggregates for concrete”, in: concrete technology-fiber reinforced concrete final report, university of washington, 2013 [3] p. a. shirule, a. rahman, r. d. gupta, “partial replacement of cement with marble dust powder”, international journal of advanced engineering and studies, vol. 1, no. 3, pp. 175–177, 2012 [4] j. alex, j. dhanalakshmi, b. ambedkar, “experimental investigation on rice husk ash as cement replacement on concrete production”, construction and building materials, vol. 127, pp. 353–362, 2016 [5] e. aprianti, p. shafgh, s. bahri, j.
nodeh, “supplementary cementitious materials origin from agricultural wastes—a review”, construction and building materials, vol. 74, pp. 176–187, 2015 [6] r. khan, a. jabbar, i. ahmad, w. khan, a. n. khan, j. mirza, “reduction in environmental problems using rice-husk ash in concrete”, construction and building materials, vol. 30, pp. 360–365, 2012 [7] o. a. u. uche, m. adamu, m. a. bahuddeen, “influence of millet husk ash on the properties of plain concrete”, epistemics in science, engineering and technology, vol. 2, no. 2, pp. 68–73, 2012 [8] n. bheel, s. l. meghwar, s. sohu, a. r. khoso, a. kumar, z. h. shaikh, “experimental study on recycled concrete aggregates with rice husk ash as partial cement replacement”, civil engineering journal, vol. 4, no. 10, pp. 2476-3055, 2018 [9] j. p. broomfield, corrosion of steel in concrete, understanding, investigation, and repair, e & fn spon, 2006 [10] p. schiessl, corrosion of steel in concrete, report of the technical committee, 60-csc rilem, chapman and hall, 1998 [11] h. muga, k. betz, j. walker, c. pranger, a. vidor, development of appropriate and sustainable construction materials, sustainable futures institute, michigan technological university, 2005 [12] r. r. hussain, t. ishida, “critical carbonation depth for initiation of steel corrosion in fully carbonated concrete and development of electrochemical carbonation induced corrosion model”, international journal of electrochemical science, vol. 4, pp. 1178-1195, 2009 [13] n. kad, m. vinod, “review research paper on influence of rice husk ash on the properties of concrete”, international journal of research, vol. 2, no. 5, pp. 873–877, 2015 [14] m. anwar, t. miyagawa, m. gaweesh, “using rice husk ash as a cement replacement material in concrete”, international conference on the science and engineering of recycling for environmental protection, harrogate, uk, may 31–june 2, 2000 [15] s. d. nagrale, h. hemant, r. m. 
pankaj, “utilization of rice husk ash”, international journal of engineering research and applications, vol. 2, no. 4, pp. 1–5, 2012 [16] i. b. ologunagba, a. s. daramola, a. o. aliu, “feasibility of using rice husk ash as partial replacement for concrete”, international journal of engineering trends and technology, vol. 30, no. 5, pp. 267–269, 2015 [17] a. n. givi, s. a. rashid, f. n. a. aziz, m. a. m. salleh, “contribution of rice husk ash to the properties of mortar and concrete: a review”, journal of american science, vol. 6, no. 3, pp. 157–165, 2010 [18] h. b. mahmud, n. anjang, a. hamid, k. y. chin, “production of high strength concrete incorporating an agricultural waste—rice husk ash”, 2nd international conference on chemical, biological and environmental engineering, cairo, egypt, november 2-4, 2010 [19] h. thanh, k. siewert, h. m. ludwig, “alkali-silica reaction in mortar formulated from self-compacting high-performance concrete containing rice husk ash”, construction and building materials, vol. 88, pp. 10–19, 2015 [20] r. g. smith, g. a. kamwanja, “the use of rice husk for making a cementitious material”, joint symposium on the use of vegetable plants and their fibers as building material, baghdad, iraq, 1986 [21] m. h. zhang, r. lastra, v. m. malhotra, “rice husk ash paste and concrete: some aspects of hydration and the microstructure of the interfacial zone between the aggregate and paste”, cement and concrete research, vol. 6, no. 26, pp. 963–977, 1996 [22] n. p. hasparyk, p. j. m. monteiro, h. carasek, “effect of silica fume and rice husk ash on alkali-silica reaction”, aci structural journal, vol. 4, no. 97, pp. 486–492, 2000 [23] k. sakr, “effects of silica fume and rice husk ash on the properties of heavy weight concrete”, journal of materials in civil engineering, vol. 18, no. 3, pp. 367–376, 2006 [24] v. sata, c. jaturapitakkul, k.
kiattikomol, “influence of pozzolan from various by-product materials on mechanical properties of high-strength concrete”, construction and building materials, vol. 21, no. 7, pp. 1589–1598, 2007 [25] m. m. tashima, c. a. r. da silva, j. l. akasaki, m. b. barbosa, “the possibility of adding the rice husk ash (rha) to the concrete”, in: proceedings of the irilem conference on the use of recycled materials in building and structures, pp. 778–786, rilem, 2004 [26] s. k. antiohos, j. g. tapali, m. zervaki, j. sousa-coutinho, s. tsimas, v. g. papadakis, “low embodied energy cement containing untreated rha: a strength development and durability study”, construction and building materials, vol. 49, pp. 455–463, 2013 [27] l. prasittisopin, d. trejo, “hydration and phase formation of blended cementitious systems incorporating chemically transformed rice husk ash”, cement and concrete composites, vol. 59, pp. 100–106, 2015 [28] m. f. m. zain, m. n. islam, f. mahmud, m. a. jamil, “production of rice husk ash for use in concrete as a supplementary cementitious material”, construction and building materials, vol. 25, no. 2, pp. 798–805, 2011 [29] r. bie, x. song, q. liu, x. ji, p. chen, “studies on effects of burning conditions and rice husk ash (rha) blending amount on the mechanical behavior of cement”, cement and concrete composites, vol. 55, pp. 162–168, 2015 [30] n. d. bheel, s. l. meghwar, s. a. abbasi, l. c. marwari, j. a. mugeri, r. a. abbasi, “effect of rice husk ash and water-cement ratio on strength of concrete”, international civil engineering journal, vol. 4, no. 10, pp. 2373-2382, 2018 [31] p. k. mehta, d. pirtz, “use of rice husk ash to reduce the temperature in high strength mass concrete”, international concrete abstracts portal, vol. 75, no. 2, pp. 60–63, 1978 [32] m.
akhter, “experimental study on effect of wood ash on strength of concrete”, international research journal of engineering and technology, vol. 4, no. 7, pp. 1252–1254, 2017 [33] v. m. malhotra, p. k. mehta, pozzolanic and cementitious materials, taylor & francis, 2004 [34] a. m. nevile, j. j. brooks, concrete technology, longman scientific and technical, 1990 [35] s. rukzon, p. chindaprasirt, r. mahachai, “effect of grinding on chemical and physical properties of rice husk ash”, international journal of minerals, metallurgy and materials, vol. 16, no. 2, pp. 242-247, 2009 engineering, technology & applied science research vol. 9, no. 2, 2019, 4057-4061 4057 www.etasr.com iqbal: modern control laws for an articulated robotic arm: modeling and simulation modern control laws for an articulated robotic arm: modeling and simulation jamshed iqbal department of electrical and electronics engineering, university of jeddah, saudi arabia department of electrical engineering, fast national university, islamabad, pakistan jmiqbal@uj.edu.sa abstract—the robotic manipulator has become an integral component of modern industrial automation. this paper deals with the mathematical modeling and non-linear control of such a manipulator. dh-parameters are used to derive the kinematic model, while the dynamics are based on the euler-lagrange equation. two modern control strategies, h∞ and model predictive control (mpc), are investigated to develop the control laws. for optimal performance, the controllers have been fine-tuned through simulation conducted in the matlab/simulink environment. the designed control laws are subjected to various inputs and tested for effectiveness in transient parameters like settling time and overshoot, as well as steady-state error. simulation results confirm the effectiveness of the developed controllers in precisely tracking the reference motion trajectories.
keywords-non-linear control; robotic arm manipulator; mechatronics; dh-parameters; euler-lagrange equation i. introduction robots are considered key elements in automation, thus their application horizon is expanding [1]. autonomy and intelligence in robots are primarily driven by advancements in technology and research in domains like modeling, design, control and artificial intelligence (ai) [2]. modeling and simulation in different scientific domains are gaining enormous interest in the scientific community as a means to develop an in-depth understanding of real-world applications. studying and modeling an anthropomorphic system may help the better understanding of general human biomechanics and may also lead to formulating control laws for the actual biological agent. the basis of the control system for a robotic manipulator is the feedback loop, which plays a pivotal role in damping uncertainties. the control system neutralizes numerous disturbances and uncertainties in the plant. the solution to the control problem involves defining input signals, like torque or actuator input voltage, to achieve the desired behavior. the controller must be capable of handling the effects of nonlinearities, dynamic coupling and complexity. trivial strategies based on linear control laws are not able to handle the above-mentioned issues [3], thus implementations of nonlinear control laws have been presented [4-7]. to meet the performance requirements of controlling multi-degree of freedom (dof) robotic manipulators, nonlinear control based on sliding mode control (smc) [8, 9], computed torque control (ctc), h∞ and model predictive control (mpc) [10] has been reported. mpc is an optimal control technique which predicts the future behavior of the system based on current states and responses. to ensure better tracking performance in a constrained environment, an online process is utilized to compute future values.
the optimized control signal is then formulated considering both prediction results and past behavior. an in-depth review of mpc-based control strategies has been presented in [11]. authors in [12] used mpc for position and force control of a human-arm-like seven-dof robotic manipulator. in [13], a real-time computation method for mpc has been presented, which maps an offline approximation approach onto neural networks (nn). the proposed technique has been implemented on a low-cost field programmable gate array (fpga) to show its low hardware and computation-time requirements. a comparison of mpc with proportional integral derivative (pid) control and ctc has been presented in [14]. authors in [15] have proposed an nn-based mpc and a pid law to control the vibration and position of a 2-link flexible arm. the h∞ control law provides system robustness and high performance in spite of uncertainties and disturbances. in this control law, it is assumed that all the system states and disturbances can be fed back to create a closed-loop system. authors in [16] presented feedback control of a linearized model of a selective compliance assembly robot arm (scara) based on h∞ control. in [17], the authors suggested a discrete-time dynamical method as a solution of the h∞ control problem. a state-space h∞ solution using the riccati equation has been proposed in [18]. the h∞ framework has been used in [19] to solve the problem of controlling and managing trade-offs in the specifications. this paper presents the design of h∞ and mpc control laws for a six-dof robotic arm whose links and joints are serially connected. the formulation of the control laws is based on the derived kinematics and dynamics of the robotic arm. the efficacy of both control strategies has been demonstrated through tracking results for various inputs. ii. mathematical model of the robotic arm the robotic manipulator ed7220c is a commercial robot developed for academic purposes.
this anthropomorphic arm is used for modeling and control in the present work. the end-effector is a gripper. all joints have a single dof except the wrist, which can move in the roll and pitch planes. specifications of the robotic arm are presented in [20]. a. kinematic models kinematic modeling describes the joint and end-effector positions without considering the associated forces. it provides the position and orientation of the end-effector based on the angular positions of the robot joints. in the present research, the dh-parameter based approach has been used to derive the kinematics of the manipulator. the axis assignment is shown in figure 1 and the resulting dh parameters are listed in table i. fig. 1. kinematic representation of the arm showing assignment of frames on various joints

table i. denavit-hartenberg (dh) parameters

parameter       symbol | joint 1 | joint 2 | joint 3 | joint 4 | joint 5 | joint 6
link twist      α_i    |    0    |  -90°   |    0    |    0    |  -90°   |    0
link length     a_i    |    0    |    0    |   l2    |   l3    |    0    |    0
joint angle     θ_i    |   θ1    |   θ2    |   θ3    |   θ4    |   θ5    |    0
joint distance  d_i    |   l1    |    0    |    0    |    0    |    0    |   l4

the transformation matrix for each link, calculated from the dh parameters in table i, is given in (1), while the overall transformation is computed as in (2):

A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}   (1)

{}^{0}T_{6} = A_1 A_2 A_3 A_4 A_5 A_6 = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2)

where the entries of (2) are compact trigonometric expressions of the joint angles, and the nomenclature used is: sab=sin(a+b), sabc=sin(a+b+c), cab=cos(a+b), and cabc=cos(a+b+c).
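the chain of link transforms above can be sketched numerically: the code below builds a standard dh homogeneous transform for each row of table i and multiplies them. the link-length values l1-l4 are hypothetical placeholders, since the ed7220c's actual dimensions come from its specifications in [20]:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform as a 4x4 nested list."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(p, q):
    """4x4 matrix product."""
    return [[sum(p[i][k] * q[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Link lengths: hypothetical placeholders for the ED7220C's dimensions.
l1, l2, l3, l4 = 0.38, 0.22, 0.22, 0.15

def forward_kinematics(q):
    """q = [theta1..theta5]; DH rows (theta, d, a, alpha) follow Table I."""
    rows = [(q[0], l1, 0.0, 0.0),
            (q[1], 0.0, 0.0, -math.pi / 2),
            (q[2], 0.0, l2, 0.0),
            (q[3], 0.0, l3, 0.0),
            (q[4], 0.0, 0.0, -math.pi / 2),
            (0.0,  l4, 0.0, 0.0)]
    T = dh_transform(*rows[0])
    for row in rows[1:]:
        T = matmul(T, dh_transform(*row))
    return T  # last column holds the end-effector position

T = forward_kinematics([0.0] * 5)
print([round(T[i][3], 3) for i in range(3)])  # end-effector x, y, z
```

with all joint angles at zero, the arm stretches along x to l2 + l3, while the final wrist twist points the tool frame's z-axis downward, so the height is l1 - l4.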
given the required position and orientation of the tool, the transformation matrix in (2) is used to determine the corresponding joint angles. the computed joint angle positions are achieved through the controller, which generates the appropriate signals for the dc motors; this, however, requires the dynamic model presented below. b. dynamic model the dynamic model of the robotic arm gives information on the torques and other forces producing the motion of the robot. the dynamic model can be formulated using various methods, including recursive lagrange, recursive newton-euler and euler-lagrange. the dynamic model derived here uses the euler-lagrange equations, the most commonly followed approach due to its simplicity and compact description. the nomenclature for deriving the dynamics of the arm is presented in table ii.

table ii. nomenclature for dynamic model

symbol  | remarks
m_i     | mass of link i
p_{ci}  | position of center of mass (com) of link i
ω_i     | angular velocity of link i with reference to its frame
v_{ci}  | linear velocity of link i w.r.t. its com
K       | total kinetic energy related to each link
P       | total potential energy related to each link
P_{ref} | potential energy reference
I_i     | inertia tensor of link i with reference to its frame
g       | acceleration of gravity

the potential and kinetic energy of each link of the arm have been computed using (3) and (4) respectively:

P_i = -m_i\, g^T p_{ci} + P_{ref}   (3)

K_i = \frac{1}{2} m_i\, v_{ci}^T v_{ci} + \frac{1}{2} \omega_i^T I_i\, \omega_i   (4)

the lagrangian has been calculated as the difference of these energies for the complete system, L = K - P. the torque corresponding to each link is determined by partial differentiation of the lagrangian w.r.t. q and q̇ as given in (5):

\tau_i = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i}   (5)

the resulting torque is given in (6):

\tau = M(q)\,\ddot{q} + G(q) + V(q,\dot{q})   (6)

where τ is the 4×1 torque vector applied to the robot's joints, and G(q), V(q,q̇) and M(q) are respectively the 4×1 vector of gravitational forces, the 4×1 vector of coriolis and centrifugal forces and the 4×4 inertia matrix.
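as a minimal worked instance of the torque equation, consider a single rotating link (a toy 1-dof case, not the ed7220c's 4-dof model): the inertia matrix reduces to a scalar, the coriolis/centrifugal vector vanishes, and gravity contributes m·g·l_c·cos(q). all numbers below are hypothetical:

```python
import math

# Toy 1-DOF instance of tau = M(q)*qdd + G(q) + V(q, qd): a single rigid
# link of mass m with COM at distance l_c from the joint and inertia I_c
# about its COM. Hypothetical numbers; the paper's 4-DOF model is in [20].
m, l_c, I_c, g = 2.0, 0.15, 0.01, 9.81

def joint_torque(q, qd, qdd):
    M = I_c + m * l_c ** 2            # scalar inertia (parallel-axis theorem)
    V = 0.0                           # no Coriolis/centrifugal coupling with 1 DOF
    G = m * g * l_c * math.cos(q)     # gravity torque about the joint
    return M * qdd + G + V

# Holding the link horizontal (q = 0) at rest needs only the gravity torque:
print(round(joint_torque(0.0, 0.0, 0.0), 4))  # 2.943
```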
q̈, q̇ and q are the 4×1 vectors of angular acceleration, angular velocity and angular position. for the complete derivation of the system dynamics, see [20]. iii. the controller design a. mpc based controller the key concept behind the design of the mpc law is to consider a discrete-time model of the system and to formulate an optimization problem which is solved based on an objective cost function. consider a plant in the discrete-time representation (7)-(8), where the number of inputs is m, the number of outputs is q and the number of states is n1:

x(k+1) = A\, x(k) + B\, u(k)   (7)

y(k) = C\, x(k)   (8)

where u is the control input vector, x is the state vector, and y is the output vector. a is a square matrix termed the state matrix, while b is the input matrix; a and b are properties of the system, determined by the system's elements and structure. c is the output matrix, which depends on the particular choice of output variables. the optimal sequence of control increments can be written as in (9):

\Delta U = (\Phi^T \Phi + \bar{R})^{-1}\, \Phi^T \left(\bar{R}_s\, r(k) - F\, x(k)\right)   (9)

where

\Phi = \begin{bmatrix} CB & 0 & \cdots & 0 \\ CAB & CB & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ CA^{N_p-1}B & CA^{N_p-2}B & \cdots & CA^{N_p-N_c}B \end{bmatrix}, \qquad F = \begin{bmatrix} CA \\ CA^2 \\ \vdots \\ CA^{N_p} \end{bmatrix}

N_p and N_c represent the number of samples used for prediction and the number of samples used for control respectively, and \bar{R}_s r(k) in (9) carries the set-point information, with \bar{R}_s = [1\ 1\ \cdots\ 1]^T. the principle of the receding horizon is employed in developing the incremental optimal control: only the first increment of \Delta U is applied at each step, giving

u(k) = u(k-1) + \Delta u(k), \qquad \Delta u(k) = K_y\, r(k) - K_{mpc}\, x(k)

where K_y and K_{mpc} are the first rows of (\Phi^T\Phi + \bar{R})^{-1}\Phi^T \bar{R}_s and (\Phi^T\Phi + \bar{R})^{-1}\Phi^T F respectively. in the present work, feedback linearization has been used to linearize the nonlinear system given in (6). the state feedback control law for linearization of the model is given in (10).
τ = M(q)u_d + G(q) + V(q, q̇)    (10)

where u_d is given by the receding horizon algorithm, i.e.:

u_d(k) = Σ Δu_d(k_i), Δu_d(k_i) = K_y r(k_i) − K_mpc x(k_i)

Thus, the complete MPC law can be expressed as in (11):

τ(k) = M(q)[K_y r(k) − K_mpc x(k) + Σ Δu_d(k_i)] + V(q, q̇) + G(q)    (11)

The developed law rests on the matrices K_y and K_mpc, whose values are based on the numbers of samples Np and Nc; thus, the overall computation time depends on the window sizes.

B. H∞ Control Law

To design the H∞ control law, a minimization problem is formulated considering stability, robustness and performance normalization. The minimization problem is solved using the infinity norm of the feedback-loop transfer function matrix. For a linear plant, (12) gives the generalized state-space model:

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (12)

where x(t), u(t) and y(t) represent the state vector, control input and output respectively. D is the feed-forward (direct transmission) matrix, determined by the selected output variables. Considering a perturbed system with disturbance vector w(t) and error vector z(t), (13) represents the state-space model, while (14) gives the transfer function matrix:

ẋ(t) = A x(t) + B₁ w(t) + B₂ u(t)
z(t) = C₁ x(t) + D₁₁ w(t) + D₁₂ u(t)    (13)
y(t) = C₂ x(t) + D₂₁ w(t) + D₂₂ u(t)

P(s) := [ A  | B₁  B₂
          C₁ | D₁₁ D₁₂
          C₂ | D₂₁ D₂₂ ]    (14)

The H∞ control law is designed using the state feedback linearization approach based on (6). The developed control law is given in (15):

τ = M(q)u_c + G(q) + V(q, q̇)    (15)

where u_c is the auxiliary control signal. After applying (15), appropriate values of the weight coefficients W_u and W_p are selected for the input and the perturbation respectively. The calculation of K is based on the S over KS design approach and the solution of two Riccati equations, ensuring the stability of the system, i.e.:

min over stabilizing K: ‖ [ W_p (I + GK)^−1 ; W_u K (I + GK)^−1 ] ‖_∞ < γ    (16)

K serves as an auxiliary controller which outputs the signals u_ci based on the input joint angles q_i.
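The effect of the linearizing laws (10)/(15) can be checked numerically: substituting τ = M(q)u + G(q) + V(q, q̇) into the forward dynamics leaves q̈ = u, a chain of double integrators. A one-joint sketch with invented pendulum parameters:

```python
import numpy as np

# Illustrative 1-DOF pendulum standing in for (6): M(q) qdd + G(q) + V = tau
m, l, g = 1.2, 0.4, 9.81
M = lambda q: np.array([[m * l**2]])
G = lambda q: np.array([m * g * l * np.cos(q[0])])
V = lambda q, qd: np.zeros(1)   # no Coriolis/centrifugal terms for one joint

def linearizing_torque(q, qd, u):
    """Feedback-linearizing law of (10)/(15): tau = M(q) u + G(q) + V(q, qd)."""
    return M(q) @ u + G(q) + V(q, qd)

def forward_dynamics(q, qd, tau):
    """Plant acceleration: qdd = M(q)^-1 (tau - G(q) - V(q, qd))."""
    return np.linalg.solve(M(q), tau - G(q) - V(q, qd))

q, qd = np.array([0.7]), np.array([0.2])
u = np.array([3.0])             # auxiliary control signal
qdd = forward_dynamics(q, qd, linearizing_torque(q, qd, u))
# After exact cancellation the plant behaves as a double integrator: qdd == u
```

This is why both the MPC and the H∞ designs in the paper can be carried out on a linear model: the auxiliary signal u commands joint acceleration directly.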
u_c = [u_c1 u_c2 u_c3 u_c4]^T can be calculated after all u_ci are known.

IV. SIMULATION RESULTS AND DISCUSSION

MATLAB/Simulink is used for the simulation; S-functions implement the plant and the controller in the simulation environment. A sampling time of 5 ms is selected for controlling the modeled robotic manipulator.

A. MPC Simulation Results

The performance of the MPC control law is investigated for different target trajectories, including the effect of the control horizon Nc and the prediction horizon Np. Keeping Nc constant while changing Np, and vice versa, revealed the importance of both horizons in the controller design. It is observed that the response of the system is related to the size of the control window: performance is enhanced as the size is increased. Tuning based on trial and error resulted in the optimal values Np = 100 and Nc = 20. Figures 2 and 3 present the trajectory tracking results when the system is subjected to ramp and step inputs respectively. It can be inferred from the results that all the joints demonstrated an identical response with different torques applied to the joints. The shoulder joint exhibited relatively higher torque requirements in comparison with the wrist, elbow and waist joints.

Fig. 2. Ramp response: (a) trajectory tracking, (b) corresponding torques
Fig. 3. Step response: (a) trajectory tracking, (b) corresponding torques

B. H∞ Simulation Results

The weight functions have a significant effect on the performance of the designed control law. In the present research, the weight functions have been selected based on the guideline reported in [21]. The selected weight functions are given in (17):

W_p(s) = 0.95 (s² + …)/(s² + …) and W_u(s) = 0.01    (17)

The effect of changing the selected weight values on the developed controller is investigated.
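The mixed-sensitivity bound (16) with weights of the form (17) can be estimated on a frequency grid. Everything below is a toy stand-in: the plant, the static gain and a first-order W_p are assumed for illustration (the paper's W_p is second-order and its coefficients are not reproduced here):

```python
import numpy as np

# Frequency-gridded estimate of the mixed-sensitivity cost in (16) for a
# toy SISO loop. Plant, controller gain and Wp are assumed values.
w = np.logspace(-3, 3, 2000)
s = 1j * w
Gp = 1.0 / (s * (s + 1.0))             # illustrative plant
K = 5.0                                # stabilizing static gain (illustrative)
Wu = 0.01                              # input weight, as in (17)
Wp = 0.95 * (s / 3 + 1) / (s + 1e-3)   # assumed first-order performance weight

S = 1.0 / (1.0 + Gp * K)               # sensitivity function
KS = K * S
cost = np.sqrt(np.abs(Wp * S)**2 + np.abs(Wu * KS)**2)
gamma_est = cost.max()                 # grid estimate of the norm in (16)
```

A full synthesis solves the two Riccati equations mentioned above; the grid sweep only evaluates the achieved γ for a candidate controller, which is useful when iterating on weight choices.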
The results reveal that the selected value is a good choice. The weight functions (17) are used to plot the ramp, step and sinusoidal responses of the system. Figures 4 and 5 present the tracking results for ramp and sinusoidal references. It is evident from the plots that all the joints of the robotic arm show similar behavior. A delay is also observed in the controller's response to the reference trajectories.

Fig. 4. Ramp response: (a) trajectory tracking, (b) corresponding torques
Fig. 5. Sinusoidal response: (a) trajectory tracking, (b) corresponding torques

The results obtained by the MPC and H∞ control laws are compared. The performance achieved by both controllers has been characterized w.r.t. various parameters. Table III summarizes the comparative results based on the step responses offered by both control strategies. For the settling time, ±5% of the desired joint angle has been considered. It is pertinent to mention that the given results are based on the selected gains and may vary with a different gain selection.

TABLE III. COMPARATIVE PERFORMANCE

Parameter            | H∞    | MPC
Rise time tr (s)     | 4.305 | 0.77
Peak time tp (s)     | 5.06  | 1.005
Settling time ts (s) | 3.68  | 0.68
Overshoot %OS        | 1.9%  | 4.5%

V. CONCLUSION

A model of a six-DOF robotic arm is presented in this paper, followed by the derivation of two modern control laws, MPC and H∞. Simulation results confirmed that both control laws offer adequate tracking performance. A comparative analysis of the performance achieved by both controllers reveals that MPC outperforms H∞ in controlling the robotic arm, at the expense of a higher overshoot. The size of the prediction window can be increased to reduce the overshoot in the MPC response.
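The step-response figures of merit compared in Table III (rise time, peak time, ±5% settling time, percent overshoot) can be extracted from sampled simulation data. A sketch with our own conventions (a 10-90% rise time is assumed):

```python
import numpy as np

def step_metrics(t, y, y_final=None, band=0.05):
    """Rise time (10-90%), peak time, settling time (+/- 5% band by default)
    and percent overshoot of a sampled step response, as in Table III."""
    if y_final is None:
        y_final = y[-1]
    t10 = t[np.argmax(y >= 0.1 * y_final)]   # first crossing of 10% of final
    t90 = t[np.argmax(y >= 0.9 * y_final)]   # first crossing of 90% of final
    tr = t90 - t10
    tp = t[np.argmax(y)]                     # time of the peak value
    overshoot = max(0.0, (y.max() - y_final) / abs(y_final) * 100.0)
    outside = np.abs(y - y_final) > band * abs(y_final)
    idx = np.nonzero(outside)[0]
    ts = t[min(idx[-1] + 1, len(t) - 1)] if idx.size else t[0]
    return tr, tp, ts, overshoot
```

Applied to a classic underdamped second-order response, the function reproduces the textbook peak time π/ωd and overshoot 100·exp(−ζπ/√(1−ζ²)), which is a convenient self-test before using it on logged joint trajectories.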
In the future, it is planned to realize both control strategies on a real robotic platform. For this purpose, a custom platform named AUTAREP (AUTonomous Articulated Robotic Educational Platform) has already been designed and fabricated. It is also planned to investigate the control performance when the robotic arm is subjected to disturbances and uncertainties. Moreover, an application-oriented study to explore practical avenues of the proposed research is anticipated.

REFERENCES
[1] J. Iqbal, R. U. Islam, S. Z. Abbas, A. A. Khan, S. A. Ajwad, "Automating industrial tasks through mechatronic systems - a review of robotics in industrial perspective", Tehnicki Vjesnik, Vol. 23, No. 3, pp. 917-924, 2016
[2] S. G. Khan, G. Herrmann, M. Al Grafi, T. Pipe, C. Melhuish, "Compliance control and human-robot interaction: Part 1 - survey", International Journal of Humanoid Robotics, Vol. 11, No. 3, pp. 1430001-1430028, 2014
[3] S. A. Ajwad, J. Iqbal, R. U. Islam, A. Alsheikhy, A. Almeshal, A. Mehmood, "Optimal and robust control of multi DOF robotic manipulator: design and hardware realization", Cybernetics and Systems, Vol. 49, No. 1, pp. 77-93, 2018
[4] I. Ahmad, A. Saaban, A. Ibrahin, M. Shahzad, "A research on the synchronization of two novel chaotic systems based on a nonlinear active control algorithm", Engineering, Technology & Applied Science Research, Vol. 5, No. 1, pp. 739-747, 2014
[5] S. Irfan, A. Mehmood, M. T. Razzaq, J. Iqbal, "Advanced sliding mode control techniques for inverted pendulum: modelling and simulation", Engineering Science and Technology, an International Journal, Vol. 21, pp. 753-759, 2018
[6] W. Alam, A. Mehmood, K. Ali, U. Javaid, S. Alharbi, J. Iqbal, "Nonlinear control of a flexible joint robotic manipulator with experimental validation", Strojniski Vestnik - Journal of Mechanical Engineering, Vol. 64, No. 1, pp. 47-55, 2018
[7] O. Khan, M. Pervaiz, E. Ahmad, J.
Iqbal, "On the derivation of novel model and sophisticated control of flexible joint manipulator", Revue Roumaine des Sciences Techniques - Serie Electrotechnique et Energetique, Vol. 62, No. 1, pp. 103-108, 2017
[8] A. Rezoug, B. Tondu, M. Hamerlain, "Experimental study of nonsingular terminal sliding mode controller for robot arm actuated by pneumatic artificial muscles", IFAC Proceedings, Vol. 47, pp. 10113-10118, 2014
[9] J. Iqbal, M. I. Ullah, A. A. Khan, M. Irfan, "Towards sophisticated control of robotic manipulators: an experimental study on a pseudo-industrial arm", Strojniski Vestnik - Journal of Mechanical Engineering, Vol. 61, No. 7-8, pp. 465-470, 2015
[10] M. I. Ullah, S. A. Ajwad, M. Irfan, J. Iqbal, "MPC and H-infinity based feedback control of non-linear robotic manipulator", IEEE International Conference on Frontiers of Information Technology, Islamabad, Pakistan, December 19-21, 2016
[11] A. Bemporad, "Model predictive control design: new trends and tools", IEEE Conference on Decision and Control, San Diego, USA, December 13-15, 2006
[12] J. de la Casa Cardenas, A. S. Garcia, S. S. Martinez, J. G. Garcia, J. G. Ortega, "Model predictive position/force control of an anthropomorphic robotic arm", IEEE International Conference on Industrial Technology, Seville, Spain, March 17-19, 2015
[13] H. Ayala, R. Sampaio, D. M. Munoz, C. Llanos, L. Coelho, R. Jacobi, "Nonlinear model predictive control hardware implementation with custom-precision floating point operations", 24th IEEE Mediterranean Conference on Control and Automation, Athens, Greece, June 21-24, 2016
[14] M. Makarov, M. Grossard, P. Rodriguez-Ayerbe, D. Dumur, "Generalized predictive control of an anthropomorphic robot arm for trajectory tracking", IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Budapest, Hungary, July 3-7, 2011
[15] J. O. Pedro, T.
Tshabalala, "Hybrid NNMPC/PID control of a two-link flexible manipulator with actuator dynamics", 10th IEEE Asian Control Conference, Kota Kinabalu, Malaysia, May 31-June 3, 2015
[16] H. S. Ali, L. Boutat-Baddas, Y. Becis-Aubry, M. Darouach, "H∞ control of a SCARA robot using polytopic LPV approach", 14th IEEE Mediterranean Conference on Control and Automation, Ancona, Italy, June 28-30, 2006
[17] B. Chen, X. Liu, C. Lin, K. Liu, "Robust H∞ control of Takagi-Sugeno fuzzy systems with state and input time delays", Fuzzy Sets and Systems, Vol. 160, No. 4, pp. 403-422, 2009
[18] J. C. Doyle, K. Glover, P. P. Khargonekar, B. A. Francis, "State-space solutions to standard H2 and H∞ control problems", IEEE Transactions on Automatic Control, Vol. 34, No. 8, pp. 831-847, 1989
[19] M. Makarov, M. Grossard, P. Rodriguez-Ayerbe, D. Dumur, "Modeling and preview H∞ control design for motion control of elastic-joint robots with uncertainties", IEEE Transactions on Industrial Electronics, Vol. 63, No. 10, pp. 6429-6438, 2016
[20] S. Manzoor, R. U. Islam, A. Khalid, A. Samad, J. Iqbal, "An open-source multi-DOF articulated robotic educational platform for autonomous object manipulation", Robotics and Computer-Integrated Manufacturing, Vol. 30, No. 3, pp. 351-362, 2014
[21] B. A. Francis, A Course in H∞ Control Theory, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1987

AUTHOR PROFILE
J. Iqbal holds a PhD in robotics from the Italian Institute of Technology (IIT), Italy, and three master degrees in various fields of engineering from Finland, Sweden and Pakistan. He is currently working as an associate professor at the University of Jeddah, Saudi Arabia. With more than 15 years of multi-disciplinary experience, his research interests include robot analysis and control. He has more than 50 ISI-indexed journal papers to his credit, with an h-index of 24. He is a Senior Member of IEEE, USA.
Engineering, Technology & Applied Science Research, Vol. 9, No. 4, 2019, 4500-4503 www.etasr.com Mohamed: Effects of Cold Rolling and Aging Treatment on the Properties of Cu-Be Alloy

Effects of Cold Rolling and Aging Treatment on the Properties of Cu-Be Alloy

Masoud Ibrahim Mohamed
Chemical and Materials Engineering Department, Northern Border University, Arar, Saudi Arabia, on leave from Mechanical Engineering Department, Fayoum University, Egypt
ibrahim_64@yahoo.com

Abstract—The effects of the phases precipitated during aging treatment on the properties of the Cu-Be alloy have been extensively studied. In this study, the effect of cold rolling on the precipitated phases of the Cu-Be alloy, compared with the non-deformed alloy, during isothermal aging and low-heating-rate aging of 2°C/min was investigated. Hardness measurements, differential scanning calorimetry (DSC), dilatation analysis, and transmission electron microscopy (TEM) were used in this study. Hardening and contraction were strongly increased at early aging times for the cold rolled Cu-Be alloy. In addition, the DSC curves revealed an exothermic peak from the γ′′ phase; this peak increased and shifted to shorter aging times with increasing cold rolling reduction. The hardness also increased remarkably at lower aging temperatures for the cold rolled specimens, and the contraction in the dilatation curves and the exothermic peaks shifted to lower aging temperatures. The hardening of the Cu-Be alloy is believed to arise from the γ′ phase, and the contraction and the first exothermic peak in the DSC curves from the γ′′ phase. TEM observations are in good agreement with this explanation and strongly indicate that the γ′′ and γ′ phases were highly accelerated by the effect of cold rolling.

Keywords—precipitation hardening; transmission electron microscopy; age hardening; solution treatment; cold rolling

I.
Introduction

The Cu-Be alloy has been used very widely for springs, diaphragms, bearings and non-sparking tools because it has excellent mechanical properties, high electrical conductivity, and high corrosion resistance. Aging after quenching from solution treatment remarkably hardens the alloy [1-4]. The precipitation sequence in this alloy has been extensively studied and can be summarized as follows [5-8]: α supersaturated solid solution → G.P. zones → γ′′ → γ′ → γ (cube). The G.P. zones are monolayer plates that form coherently on {100} matrix planes and appear as streaks along ⟨100⟩_α directions; γ′′ is a metastable phase with a monoclinic structure appearing as intensity maxima in the streaks along the ⟨100⟩_α directions during aging treatment [9-13]. With further aging the intensity maxima begin to change to an arrowhead-like shape, which shows the precipitation of the γ′ phase; this phase is metastable with a b.c.c. structure. The stable γ phase precipitate has an equilibrium b.c.c. structure [8, 14-19]. Few studies have examined the effect of cold rolling on the hardening behavior of the Cu-Be alloy. In this paper the precipitation of the γ′′ and γ′ phases during aging of this alloy under the effect of cold rolling is studied using hardness measurements, thermo-mechanical analysis (TMA), and differential scanning calorimetry (DSC). Transmission electron microscopy (TEM) was used for phase transformation studies.

II. EXPERIMENTAL METHOD

Cold-rolled plates of Cu-Be alloy (Japanese Industrial Standard #C1720), containing 1.9 mass% beryllium and 0.2 mass% cobalt, were used in this study. The alloy was received as a cold rolled plate of 2.5 mm thickness. Test pieces of 10 mm width and 120 mm length were cut from the 2.5 mm thick plate in the rolling direction (Figure 1(a)).

Fig. 1.
(a) Specimen for cold rolling, (b) specimen for dilatation measurements, and (c) illustration of the heat treatments for the ND and CR specimens: (i) solution treatment at 800°C for 2 h, (ii) isothermal aging at 360°C for a given time.

Corresponding author: Masoud Ibrahim Mohamed

The cut test pieces were first solution heat-treated at 800°C for 2 hours, followed by water quenching. The quenched specimens were cold rolled at room temperature with different reduction ratios (1, 2, 4, 8 and 12%). The cold rolled specimens were then aged at 360°C for different time intervals up to 180 min; Figure 1(c) shows the heat treatment cycle. Specimens of 10 mm × 10 mm × 2.5 mm were cut from the non-deformed (ND) and cold rolled (CR) plates for hardness measurements after aging at 360°C. Hardness was taken as the mean of 10 measurements using a Vickers hardness tester with a load of 9.8 N. Specimens of 2.5 mm × 2.5 mm × 40 mm were cut from the aged plates for dilatation measurements during heating of the Cu-Be alloy (Figure 1(b)). Specimens of 20 mg in weight were used for the DSC tests. Specimens of 2.5 mm width and 13 mm length were cut for the TMA during heating to 800°C. Thin foils suitable for TEM observation were prepared using the double-jet electropolishing technique. Diffraction patterns and bright-field images were obtained; the diffraction patterns were taken along the [001] direction. The transmission electron microscope employed was a Hitachi H-9000NAR.

III. RESULTS AND DISCUSSION

Hardness changes for the ND and CR plates during aging at 360°C are shown in Figure 2. The cold rolled Cu-Be alloy shows higher hardness values compared with the non-deformed one.

Fig. 2.
Changes in hardness of the Cu-Be alloy during isothermal aging at 360°C.

Even a little cold rolling enhances the hardness remarkably at the early stage of aging, and the hardness increases slightly with increasing cold rolling reduction ratio. It is clear that the hardness of the CR specimens increases earlier than that of the ND specimen. After 20 min of aging, the hardness increased from about 150 HV to 220 HV and 310 HV for the 1% and 12% CR specimens respectively. Thus, cold rolling appears to promote the hardening of this alloy at early aging times. Dilatation tests revealed higher shrinkage at early aging times after cold rolling, as shown in Figure 3. It is clear from Figures 2 and 3 that the hardening and dilatation curves of the CR and ND specimens are strongly correlated, showing a remarkable increase of hardness and shrinkage at the early aging stage and becoming almost steady after about one hour of aging. Previous studies on the effect of cooling rate on the aging behavior of the Cu-Be alloy showed that the contraction was caused by the precipitation of the γ′′ phase, while the hardening was mainly related to the γ′ phase [8, 9]. Therefore, it can be concluded that cold rolling enhanced the precipitation of both the γ′′ and γ′ phases. This is clarified below by TEM observation of the CR and ND specimens after different heat treatments.

Fig. 3. Dilatation behavior of the Cu-Be alloy during isothermal aging at 360°C.

Hardness changes were measured for the ND and CR specimens (up to 30% reduction) after heating to different temperatures at a heating rate of 2°C/min, followed by water quenching. Figure 4 shows that as the heating temperature increases, the hardness increases to a maximum at about 380°C and then decreases. At 300°C, a remarkable hardness increase was revealed for the CR specimens compared with the ND specimens. The maximum hardness slightly increases and shifts to lower temperatures as the cold rolling reduction ratio increases.
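The Vickers hardness numbers quoted above (mean of 10 indents at a 9.8 N load) follow the standard definition HV ≈ 0.1891·F/d², with F in newtons and d the mean indent diagonal in mm. A minimal sketch; the diagonal readings below are invented for illustration:

```python
def vickers_hv(force_n, d1_mm, d2_mm):
    """Vickers hardness from the two indent diagonals:
    HV = 0.1891 * F / d^2, F in newtons, d the mean diagonal in mm."""
    d = 0.5 * (d1_mm + d2_mm)
    return 0.1891 * force_n / d**2

# Mean of repeated indents, as in the paper's protocol (10 indents at 9.8 N);
# these three diagonal pairs are made-up example readings.
indents = [(0.110, 0.112), (0.109, 0.111), (0.113, 0.110)]
hv_mean = sum(vickers_hv(9.8, d1, d2) for d1, d2 in indents) / len(indents)
```

Averaging repeated indents, as done here, is what limits the scatter from local microstructure when tracking the aging curves of Figure 2.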
Generally, the hardness of the Cu-Be alloy increases mainly due to γ′ phase precipitation. At higher temperatures, above 380°C, the hardness decreased due to γ′ phase dissolution and/or stable γ phase formation, as reported in [8, 9, 12].

Fig. 4. Effect of cold rolling on hardening of the Cu-Be alloy, heating rate 2°C/min.

Figure 5 shows the dilatation curves for the ND and CR specimens of this Cu-Be alloy during heating up to 600°C at a 2°C/min heating rate. For the cold rolled specimens, the shrinkage starts to increase at about 280°C. With increasing temperature the shrinkage increases to a maximum of 0.20% at about 370°C, and then decreases. As the cold rolling reduction ratio increases, the maximum shrinkage slightly decreases and shifts to lower temperatures.

Fig. 5. Effect of cold rolling on dilatation of the Cu-Be alloy, heating rate 2°C/min.

On the other hand, the ND specimens show a very slight expansion under the same heating conditions. It is interesting to note that the hardening and dilatation curves for this alloy (Figures 4 and 5) behave similarly, increasing to a maximum and then decreasing as the heating temperature increases. The maximum contraction and maximum hardness are obtained at the same temperature of about 370-380°C and shift to lower temperatures with increasing cold rolling reduction ratio. Figure 6 shows the DSC curves for the ND and CR specimens during heating up to 500°C. The first exothermic peak is highest for the ND specimen and appears at around 350°C; this peak decreases and shifts to lower temperatures for the cold rolled specimens. On further heating, a shoulder-like peak appears at around 380°C.

Fig. 6. DSC curves of the Cu-Be alloy, heating rate 2°C/min.

The first exothermic peak may be due to γ′′ phase and/or G.P. zone precipitation.
On the other hand, the shoulder may be due to γ′ phase precipitation; this assumption is based on the maximum hardness being obtained at the same temperature of 380°C, as shown in Figure 4. TEM observation was carried out for the CR and ND Cu-Be alloys after heat treatment at different conditions corresponding to the different exothermic heat changes, dilatation and hardness changes. If the height of the peaks is taken as an indication of the amount of the precipitated phases, it can be concluded that cold rolling enhanced the formation of the γ′′ phase and G.P. zones while slightly decreasing their amount; on the other hand, the amount of the γ′ phase increased with cold rolling. As mentioned earlier, shrinkage occurs mainly due to the precipitation of the γ′′ phase. From this point of view, it seems that the first peak obtained in the DSC curves of Figure 6 for the ND specimen is mainly due to G.P. zone formation, not γ′′ phase precipitation. Figures 7(a) and (d) show the electron diffraction patterns of the ND and 4% CR specimens respectively. The ND specimen showed mainly weak G.P. zones, while the 4% CR specimen revealed G.P. zones with a small amount of γ′′ phase (marked with arrows). The electron diffraction pattern of the ND specimen after aging at 360°C for 20 min revealed only G.P. zones, as shown in Figure 7(b). It is noted, however, that the 4% CR specimen precipitates the γ′′ and γ′ phases after the same aging conditions (Figure 7(e)). Figures 7(c) and (f) show the electron diffraction patterns of the ND and 4% CR specimens after heating to 348°C and 316°C respectively. The ND specimen showed G.P. zones and a small amount of γ′′ phase, while the 4% CR specimen showed mainly γ′′ phase (marked with arrows) after heating to the lower temperature of 316°C. It is noted, however, that the 4% CR specimen precipitates the γ′′ and γ′ phases after the same aging condition (Figure 7(e)).

Fig. 7.
TEM diffraction patterns: (a) as solution treated (ST), (b) ST + 20 min aging, (c) ST + 348°C, (d) 4% CR, (e) 4% CR + 20 min aging, and (f) 2% CR + 316°C. The incident beam is parallel to [001].

Fig. 8. TEM micrograph showing the bright field image corresponding to Figure 7.

Figure 8 shows the bright field image (BFI) corresponding to the electron diffraction patterns in Figure 7. The electron diffraction patterns of the ND and 2% CR specimens after heating to 380°C and 360°C respectively are shown in Figures 9(a) and (d). The ND specimen revealed mainly the γ′ phase as an arrowhead structure at 1/3[002]. On the other hand, the 2% CR specimen showed the γ′ phase after heating to the lower temperature of 360°C. This means that cold rolling promotes the precipitation of the γ′′ and γ′ phases, consistent with the results of the hardness changes, DSC curves, TMA curves and dilatation. Figures 9(b) and (e) show the electron diffraction patterns for the ND and 2% CR specimens after heating to 500°C respectively. At this high temperature mainly the γ phase precipitated in both the ND and 2% CR specimens; the γ phase was revealed more clearly in the 2% CR specimen. Figures 9(c) and (f) show schematics of the electron diffraction patterns. Thus, phase precipitation depends strongly on cold rolling; it is apparent that the precipitation of the γ′′ and γ′ phases was accelerated by the effect of cold rolling.

Fig. 9. Electron diffraction patterns: (a) ST + 380°C, (b) ST + 500°C, (c) schematic illustration of the diffraction pattern, (d) 2% CR + 360°C, (e) 2% CR + 500°C, (f) schematic illustration of the diffraction pattern. The incident beam is parallel to [001].

IV. CONCLUSION

The influences of cold rolling and heat treatment on the age-hardening behavior of the Cu-Be alloy were investigated for ND and CR specimens.
The obtained results are summarized as follows:
• At the early stage of aging, the hardening and shrinkage of the Cu-Be alloy were strongly promoted by the effect of cold rolling. Maximum hardness and shrinkage appeared at almost the same temperature of about 370-380°C.
• The DSC curves showed that cold rolling promotes the first exothermic peak and shifts it to lower temperatures; thus cold rolling enhanced the precipitation of the γ′′ phase.
• The TMA curves revealed that the cold rolled specimens shrink strongly on heating, and the maximum shrinkage shifts to lower temperature with increasing cold rolling reduction.
• The DSC, dilatation, TMA, and TEM studies agree and reveal that the precipitation of the γ′′ and γ′ phases was accelerated by cold rolling.

ACKNOWLEDGEMENT
The author wishes to acknowledge the approval and the support of this research study by grant no. 7593-ENG2018-3-9-F from the Deanship of Scientific Research, Northern Border University, Arar, Saudi Arabia.

REFERENCES
[1] E. Rocha-Rangel, J. A. Rodrguez-Garcia, C. A. Hernandez-Bocanegra, "Precipitation hardening of Cu-Be alloys", ChemXpress, Vol. 5, No. 4, pp. 132-136, 2014
[2] M. Mankani, S. S. Sharma, "Heat treatment of mill-hardened beryllium copper for space applications", Universal Journal of Mechanical Engineering, Vol. 3, No. 4, pp. 147-150, 2015
[3] Z. Zhu, Y. Cai, K. Song, Y. Zhou, J. Zou, "Precipitation characteristics of the metastable γ″ phase in a Cu-Ni-Be alloy", Materials, Vol. 11, No. 8, Article ID 1394, 2018
[4] T. Tang, Y. L. Kang, L. J. Yue, X. L. Jiao, "Precipitation behavior of Cu-1.9Be-0.3Ni-0.15Co alloy during aging", Acta Metallurgica Sinica (English Letters), Vol. 28, No. 3, pp. 307-315, 2015
[5] Y. Tang, Y. Kang, L. Yue, X. Jiao, "Mechanical properties optimization of a Cu-Be-Co-Ni alloy by precipitation design", Journal of Alloys and Compounds, Vol. 695, pp. 613-625, 2017
[6] Y. Tang, Y. Kang, L. Dejia, M. Shen, Y. Hu, L.
Zhao, "Tuning low cycle fatigue properties of Cu-Be-Co-Ni alloy by precipitation design", Metals, Vol. 8, Article ID 444, 2018
[7] S. Montecinos, S. Tognana, W. Salgueiro, "Influence of microstructure on the Young's modulus in a Cu-2Be (wt.%) alloy", Journal of Alloys and Compounds, Vol. 729, pp. 43-48, 2017
[8] L. Yagmur, O. Duygulu, B. Aydemir, "Investigation of metastable γ′ precipitate using HRTEM in aged Cu-Be alloy", Materials Science and Engineering A, Vol. 528, No. 12, pp. 4147-4151, 2011
[9] W. Bonfield, B. C. Edwards, "Precipitation hardening in Cu 1.81 wt% Be 0.28 wt% Co", Journal of Materials Science, Vol. 9, No. 3, pp. 398-408, 1974
[10] I. M. Masoud, K. Naito, H. Era, K. Kishitake, "A shape memory behavior newly revealed in Cu-Be alloy", 7th Cairo University International MDP Conference, Cairo, Egypt, February 15-17, 2000
[11] L. Yang, F. Y. Zhang, M. F. Yan, M. L. Zhang, "Microstructure and mechanical properties of multiphase layer formed during thermo-diffusing of titanium into the surface of C17200 copper-beryllium alloy", Applied Surface Science, Vol. 292, No. 1, pp. 225-230, 2014
[12] K. Esmati, H. Omidvar, J. Jelokhani, M. Naderi, "Study on the microstructure and mechanical properties of diffusion brazing joint of C17200 copper beryllium alloy", Materials & Design, Vol. 53, pp. 766-773, 2014
[13] A. Khodabakhshi, V. Abouei, N. Mortazavi, S. H. Razavi, H. Hooshyar, M. Esmaily, "Effects of cold working and heat treatment on microstructure and wear behaviour of Cu-Be alloy C1720", Tribology - Materials, Surfaces & Interfaces, Vol. 9, No. 3, pp. 118-127, 2015
[14] R. Monzen, S. Okawara, C. Watanable, "Stress-assisted nucleation and growth of γ″ and γ′ precipitates in a Cu-1.2wt%Be-0.1wt%Co alloy aged at 320°C", Philosophical Magazine, Vol. 92, No. 14, pp. 1826-1843, 2012
[15] R. J. Price, A. Kelly, "Deformation of age hardened crystals of copper-1.8 wt.% beryllium", Acta Metallurgica, Vol. 11, No. 8, pp.
915-922, 1963
[16] W. Ozgowicz, E. Kalinowska-Ozgowicz, B. Grzegorczyk, "Thermo-mechanical treatment of low-alloy copper alloys of the kind CuCo2Be and CuCo1NiBe", Journal of Achievements in Materials and Manufacturing Engineering, Vol. 46, No. 2, pp. 161-168, 2011
[17] R. O. Galicia, C. G. Garcia, M. A. Alcantara, A. H. Vazquez, "Influence of heat treatment and composition variations on microstructure, hardness, and wear resistance of C18000 copper alloy", ISRN Mechanical Engineering, Vol. 2012, Article ID 248989, 2012
[18] Y. Lim, K. Lee, S. Moon, "Effects of a post-weld heat treatment on the mechanical properties and microstructure of a friction-stir-welded beryllium-copper alloy", Metals, Vol. 9, No. 4, Article ID 461, 2019
[19] M. Hariram, D. Theerath, P. Chakravarthy, R. A. Kumar, "Influence of cold work on aging response of beryllium copper alloy C17200", Materials Today: Proceedings, Vol. 4, No. 10, pp. 11188-11193, 2017

Engineering, Technology & Applied Science Research, Vol. 9, No. 5, 2019, 4649-4653 www.etasr.com Musa et al.: Experimental Study of the Two-Phase Flow Patterns of Air-Water Mixture at Vertical Bend …

Experimental Study of the Two-Phase Flow Patterns of Air-Water Mixture at Vertical Bend Inlet and Outlet

Veyan A. Musa
Department of Mechanical Engineering, University of Zakho, Zakho, Kurdistan Region, Iraq
veyan.musa@staff.uoz.edu.krd

Lokman A. Abdulkareem
Department of Petroleum Engineering, University of Zakho, Zakho, Kurdistan Region, Iraq
lokman.abdulkareem@uoz.edu.krd

Omar M. Ali
Department of Mechanical Engineering, University of Zakho, Zakho, Kurdistan Region, Iraq
omar.ali@uoz.edu.krd

Abstract—Air-water two-phase flow in pipes introduces a noticeable challenge due to the complexity of the fluids.
Thus, to achieve the best design and a reasonable cost for transportation pipelines, in which bends form part of the accessories, investigators must be able to estimate the flow regimes occurring in different directions. An experiment was carried out using a 90° bend fitted with two pipes, with upward flow from a vertical to a horizontal pipe representing the bend inlet and outlet respectively. Two wire-mesh sensors were used to obtain void fraction (α) data at water superficial velocities (Usl) from 0.052 to 0.419 m/s and air superficial velocities (Usg) from 0.05 to 4.7 m/s. Furthermore, the flow regimes of the air-water flow at both the bend inlet and outlet were characterized accurately using time-series void fraction analysis, power spectral density (PSD), tomographic images observed by the sensor software, and the probability density function (PDF) method. The flow regimes in the vertical line at the bend inlet were observed to be bubbly, cap-bubble, slug, and churn flow, whereas the flow regimes in the horizontal line at the bend outlet were characterized as stratified, stratified wavy, bubbly, plug, slug, wavy annular, and semi-annular flow due to gravity and bend effects.

Keywords—flow pattern in vertical pipes; flow pattern in horizontal pipes; air-water flow; wire mesh sensor (WMS); two-phase flow at bends

I. INTRODUCTION

Generally, two-phase flow exists in a wide scope of mechanical applications and can be noticed in many scientific fields [1]. In chemical processes, various two-phase flow forms occur in pipelines, reactors, plant parts, components, and bubble columns. In each case, the two-phase flow is considered an important factor in improving the productivity and safety of the procedures. Therefore, suitable control of such flow properties is of significant importance to raise the efficiency of the operating system [2].
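The superficial velocities quoted in the abstract (Usl, Usg) are per-phase volumetric flow rates divided by the full pipe cross-section. A minimal sketch; the flow rate and pipe diameter used in the example are made-up values:

```python
import numpy as np

def superficial_velocity(q_m3_s, diameter_m):
    """Superficial velocity u_s = Q / A: the velocity one phase would have
    if it flowed alone through the full pipe cross-section."""
    area = np.pi * diameter_m**2 / 4.0
    return q_m3_s / area

# Example: an assumed air flow of 0.5 l/s in an assumed 67 mm pipe
usg = superficial_velocity(0.5e-3, 0.067)
```

Working in superficial velocities rather than actual phase velocities is what makes flow-regime maps comparable across facilities with different pipe sizes.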
Air-water mixtures flowing in transportation pipelines and bends occur in many applications, such as heat-exchange pipelines, water transportation pipelines, thermal-hydraulic reactor systems, etc. This kind of flow is unpredictable: compared with single-phase flow, it behaves in a complex way as it passes through bends of different geometries. In fact, two-phase flow in bends behaves as an inhomogeneous phase dispersion, and flow inversion occurs under the influence of gravity, buoyancy, and centrifugal forces. The primary experimental research on this topic was presented in [3]. The authors investigated the distribution of an air-water mixture flowing through a vertical riser pipe of 7.6 cm diameter into a 90° bend, and they studied the impact of the angular position (φ) and the phase velocity on the flow distribution. They proposed a straightforward model to predict how each phase behaves along the bend and expressed the balance between gravity and centrifugal forces through a modified Froude number, defined as:

Fr = j² / (g × sin(φ) × Rbend)    (1)

where Fr is the Froude number, j is the phase velocity, g is the gravitational acceleration, Rbend is the bend curvature radius, and φ is the angular position. At the point where Fr=1, the phases are in equilibrium and tend to stay on their original trajectory. When Fr<1, the gas phase moves to the outer side of the bend while the water phase flows along the inner side, and when Fr>1 the gas tends to move to the inner side of the bend. In an experimental investigation of air-water flow patterns [4], the authors estimated the effect of a 90° elbow of 5.03 cm diameter on bubbly flow structures in a developing horizontal flow line, using a dual-sensor conductivity probe. Fifteen different conditions were tested within the bubbly flow regime, and the effect of the elbow on the void fraction (α) distribution was clearly illustrated.
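As a quick numerical illustration of the criterion in (1), the sketch below evaluates the modified Froude number and the corresponding phase-trajectory tendency described in [3]. This is a minimal sketch; the function names, the message strings, and the example values are ours, not from the paper.

```python
import math

def modified_froude(j, r_bend, phi_deg, g=9.81):
    """Modified Froude number of (1): Fr = j^2 / (g * sin(phi) * R_bend)."""
    return j**2 / (g * math.sin(math.radians(phi_deg)) * r_bend)

def phase_tendency(fr):
    """Interpret Fr per [3]: equilibrium near 1, gas outward below 1, gas inward above 1."""
    if math.isclose(fr, 1.0, rel_tol=1e-2):
        return "phases stay on their original trajectory"
    if fr > 1:
        return "gas moves to the inner side of the bend"
    return "gas moves to the outer side of the bend"

# Example: phase velocity 1.0 m/s, bend radius 0.1535 m (as in this study), phi = 45 deg
fr = modified_froude(1.0, 0.1535, 45.0)
print(f"Fr = {fr:.2f}: {phase_tendency(fr)}")  # Fr ≈ 0.94 -> gas tends to the outer side
```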
In addition, the test demonstrated that the bend increases the fluctuation of the flow over both the horizontal and vertical directions of the pipe cross-section. The authors in [5] obtained similar results for a 45° elbow of the same diameter under the same conditions. In [6], an experimental study was carried out on the fluctuating forces acting on a 90°, 5.25 cm diameter elbow. A total of 36 tests covering the annular, bubbly, and slug flow regimes was completed, using an impedance probe as the measurement technique.

Corresponding author: Veyan A. Musa

The dominant force frequency was observed most clearly in the slug flow pattern. The root-mean-square value of the fluctuating force rises persistently with increasing gas flow rate, reaching its greatest value in annular flow. The authors in [7] experimentally studied two-phase flow in a 90° bend of 3.4 cm internal diameter. Void fractions were measured in two configurations: horizontal to vertical, and vertical downstream to a horizontal flow line. The tests were performed at superficial velocities varying from 0.21 to 0.90 m/s for water and from 0.3 to 4 m/s for air, using time-series PDF analysis and visual imaging to characterize the flow behavior and patterns. The outcome was that wavy, stratified, slug, and plug regimes were observed in the horizontal pipe, whereas churn and slug flow patterns were observed in the vertical pipe. The author of [8] studied gas-liquid flow regimes in a transparent inclined pipe using PDF analysis, liquid holdup time series, and PSD.
The experimental study in [9] identified air-water flow regimes in a 50 mm horizontal pipe using the PDF and the void fraction time series under different water and air superficial velocities, following the analysis of [10]. Using a WMS to obtain the void fraction values, the following flow regimes were observed in the horizontal pipe at air superficial velocities from 0.23 to 10.5 m/s and water superficial velocities from 0.05 to 1.7 m/s: annular-wavy, slug, bubbly, and stratified flow. The authors in [11] identified air-water flow patterns in a horizontal pipe of 5 cm diameter using a WMS, and improved pattern identification through a PDF and void-fraction time-series correlation technique. Many techniques have been used to characterize gas-liquid flow patterns, such as X-ray tomography [2], the wire-mesh sensor (WMS) [12], and electrical capacitance tomography (ECT). The authors in [13] developed and used a novel WMS that relies on the difference in conductivity between the flowing fluids. The WMS was introduced in 1998 for tomographic imaging, and it was later improved in [14] as a promising technique for obtaining the water holdup (h) in oil based on the conductivity of the liquid. Ultimately, understanding air-water flow behavior rests on knowing the flow patterns occurring inside the pipes. The exact prediction of the two-phase flow pattern is therefore the starting point for designing systems that save energy and avoid system failure, by calculating the void fractions and estimating the bend effect on the air-water flow regimes from the vertical riser pipe to the horizontal flow line.

II. Experimental Design

The current investigation was conducted at the research center of Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Germany. The flow consisted of air and tap water at a room temperature of 22°C and atmospheric pressure.
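The conductivity-based WMS principle described above can be sketched in a few lines: if air is treated as effectively non-conducting, the instantaneous local void fraction at each wire-crossing point follows from the measured conductance relative to a water-filled reference. This is a simplified illustration under that assumption; the function names and the 4×4 frame of conductances are invented for the example, not taken from the instrument.

```python
import numpy as np

def local_void_fraction(conductance, cond_water):
    """Assumed WMS simplification: air conducts ~0, so local alpha = 1 - G/G_water."""
    return np.clip(1.0 - conductance / cond_water, 0.0, 1.0)

def cross_section_void_fraction(frame, cond_water):
    """Average the local void fraction over one wire-mesh frame (all crossing points)."""
    return float(np.mean(local_void_fraction(frame, cond_water)))

# Hypothetical 4x4 frame of crossing-point conductances (arbitrary units), water reference = 10
frame = np.array([[10, 10,  2,  0],
                  [10,  8,  0,  0],
                  [10, 10,  5,  1],
                  [10, 10, 10,  4]], dtype=float)
print(cross_section_void_fraction(frame, 10.0))
```

Repeating this per frame over time yields the void-fraction time series α(t) that the analysis methods below operate on.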
A 90° bend with a 15.35 cm curvature radius joined the vertical riser pipe and the horizontal pipe of 6.7 cm diameter. Two wire-mesh sensors were mounted on the flow path before and after the bend, at a distance of 20 cm from the bend ends. The flow direction was vertically upward into the horizontal flow line. The void fraction (α) time series data were obtained by the two sensors (WMS1 and WMS2), as shown in Figure 1. The flow patterns of the flow lines were examined for 13 different values of air superficial velocity (Usg) ranging from 0.05 to 4.7 m/s, at constant water superficial velocities (Usl) between 0.052 and 0.419 m/s.

Fig. 1. The 90° bend frame.

III. Results

The following methods were used to classify the flow regimes occurring at the bend inlet in the vertical pipe and at the bend outlet in the horizontal pipe: the void fraction time series, the tomographic images from the WMS software, the power spectral density (PSD), the probability density function (PDF), and the mean void fraction and liquid holdup (h).

A. At a steady liquid superficial velocity (Usl) of 0.052 m/s

At Usl=0.052 m/s and gas superficial velocity Usg=0.05 m/s at the bend inlet, the mean void fraction fluctuates around 0.1 with a few peaks (Figure 2(a)). The PDF plot exhibits a single peak with a high PDF value of 0.17 at a void fraction α=0.1 (Figure 2(b)). The amplitude of the spectrum reaches its maximum at 1.5 Hz and declines gradually to the right (Figure 2(c)). These plot features indicate the bubbly pattern. At the bend outlet, the void fraction plot appears as a straight line with some waves below a mean void fraction of about 0.5, the PDF curve shows a hill shape with α values between 0.3 and 0.6, and the PSD plot is characterized by a peak value of about 70 near zero frequency.
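The PDF/PSD treatment applied throughout these results can be sketched with NumPy. This is a minimal illustration: the synthetic slug-like signal, the 100 Hz sampling rate, and the 50-bin histogram are our assumptions for the example, not the experimental settings.

```python
import numpy as np

def pdf_of_void_fraction(alpha, bins=50):
    """Probability density function of a void-fraction time series on [0, 1]."""
    density, edges = np.histogram(alpha, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

def psd_of_void_fraction(alpha, fs=100.0):
    """One-sided power spectral density via the periodogram."""
    alpha = np.asarray(alpha, dtype=float)
    alpha = alpha - np.mean(alpha)  # remove DC so peaks reflect the fluctuations
    spectrum = np.abs(np.fft.rfft(alpha))**2 / (fs * len(alpha))
    freqs = np.fft.rfftfreq(len(alpha), d=1.0 / fs)
    return freqs, spectrum

# Synthetic slug-like signal: void fraction alternating between ~0.1 and ~0.7 at 1.75 Hz
t = np.arange(0.0, 20.0, 1 / 100.0)
alpha = 0.4 + 0.3 * np.sign(np.sin(2 * np.pi * 1.75 * t))
f, p = psd_of_void_fraction(alpha, fs=100.0)
print("dominant frequency:", f[np.argmax(p)])  # near 1.75 Hz
```

For this signal the PDF is bimodal (peaks at low and high α) and the PSD peaks at the slug frequency, mirroring how the paper distinguishes slug flow from the single-peaked bubbly and near-zero-frequency stratified signatures.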
Considering the amounts of air and water entering the bend, the stratified wavy regime at the outlet arises naturally, because the turbulence occurring at the bend outlet sets waves on the phase interface. Increasing the Usg value to 0.28 m/s at the bend inlet, the mean void fraction fluctuates frequently between values of 0.2 and 0.7 (Figure 3(a)), the PDF plot exhibits two peaks at PDF values of 0.01 and 0.05 (Figure 3(b)), and the PSD plot shows a single peak with a maximum of about 60 at 1.75 Hz (Figure 3(c)) that declines and fluctuates as the frequency increases to the right; this is characteristic of slug flow. The increase in Usg lengthened the Taylor bubbles and drove the air bubbles into the water, so slug flow was obtained. At the bend outlet, in the horizontal flow line, the flow was observed to be stratified wavy, indicated by a void fraction plot that is a straight line at 0.7 with a few waves. The PDF plot is characterized by a single peak at 0.1, and the PSD plot shows a maximum spectral peak of about 80 near 0 Hz. The waves in the flow tend to increase the PSD value, so the stratified wavy plot shows a higher value than a stratified flow plot. The stratified wavy pattern results from the bend effect, where a small amount of water is mixed with a considerably high Usg. When Usg reached 1.4 m/s, the flow pattern at the bend inlet, in the vertical flow line, was the churn regime. The mean void fraction oscillates below a high value of about 0.8 (Figure 4(a)), the PDF plot has one peak at 0.75 (Figure 4(b)), and the maximum spectral peak shows a small value of 30 at 2 Hz with a wider base (Figure 4(c)).
Increasing Usg raises the instability of the water slugs; when Usg reaches a critical point, the water slugs are penetrated, and this prompts the flow pattern to change into churn flow. At the bend outlet, in the horizontal flow line, the flow regime is observed to be stratified wavy, because the increase of Usg combined with a small water mass flow rate makes the flow wavy and unstable under the influence of the bend.

Fig. 2. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.052 m/s and Usg=0.05 m/s.

Fig. 3. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.052 m/s and Usg=0.28 m/s.

At Usg=2.36 m/s, the flow regime at the bend inlet is identified as churn flow, as displayed in Figure 5, while the wavy annular flow at the bend outlet is characterized through the PDF chart by small peaks fluctuating around 0.15 with α tailing from 0.7 to 0.95 (Figure 5(b)). The PSD curve for the wavy annular pattern shows about 10% less spectral power than the plot of the stratified wavy regime (Figure 5(c)). The increase of Usg, together with the density difference between the two phases and the effect of the bend, forces the water phase to flow along the pipe bottom with waves. When Usg is increased to 4.7 m/s, as illustrated in Figure 6, semi-annular flow is observed at the bend outlet, in contrast with the churn flow arriving from the vertical flow line at the bend inlet. At the bend outlet, the mean void fraction behaves as a straight line with some waves at the high value of 0.9 (Figure 6(a)), and the PDF plot has a single peak shifted to the right (Figure 6(b)), indicating semi-annular flow. The interaction between air and water at the bend pushed the small amount of water to flow along the pipe bottom, with air flowing over it at high velocity.
Fig. 4. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.052 m/s and Usg=1.4 m/s.

Fig. 5. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.052 m/s and Usg=2.36 m/s.

Fig. 6. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.052 m/s and Usg=4.7 m/s.

B. At a steady liquid superficial velocity Usl=0.262 m/s

For the horizontal flow line (the bend outlet) at Usg=0.05 m/s, as in Figure 7, bubbly flow is inferred when the mean void fraction fluctuates around 0.1 with a few peaks at 0.3 (Figure 7(a)). The bubbly flow is also characterized through the PDF chart: the plot shows a single peak to the left with a small base from 0.1 to 0.2 (Figure 7(b)), whereas the power spectrum shows a maximum value of 10 at 0.75 Hz (Figure 7(c)). The amount of water was sufficient to fill the pipe cross-section in spite of the bend effects, so the bubbly flow in the vertical pipe kept the same pattern in the horizontal flow line. When Usg reached 0.28 m/s (Figure 8), the flow pattern at the bend outlet, in the horizontal flow line, resembled plug flow, in contrast with the slug flow observed in the vertical flow line. The plug flow is characterized by the time-series method, in which the mean void fraction fluctuates between very low values near zero and a high void fraction of 0.7 (Figure 8(a)). The PDF curve shows two peaks, of 0.13 and 0.05, at void fractions of 0.1 and 0.6 respectively (Figure 8(b)).
Furthermore, the power spectrum exhibits a high value of about 350 near 0 Hz (Figure 8(c)).

Fig. 7. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.262 m/s and Usg=0.05 m/s.

Fig. 8. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.262 m/s and Usg=0.28 m/s.

Slug flow is observed at Usg values from 0.34 to 0.9 m/s at the bend outlet, in the horizontal flow line, matching the slug flow arriving from the bend inlet, as shown in Figure 9. For the horizontal flow line, as Figure 9(a) shows, the slug pattern is identified when the void fraction fluctuates between high values (when the water height is less than the pipe radius) and low values near zero (when the horizontal cross-section is filled with water). The PDF curve shows two or more peaks around 0.08 at low and high void fraction values (Figure 9(b)). Moreover, the power spectrum of the slug flow exhibits a higher value of 450 at very low frequencies (Figure 9(c)). The water slugs arriving from the vertical pipe, owing to the higher void fraction and the moderate Usg values of 0.34 to 0.9 m/s, reverse inside the bend centerline under the gravity force, which drives the water to move along the pipe bottom while considerable water slugs still flow frequently at the top. Stratified wavy flow is observed as Usg increases from 1.4 to 2.36 m/s, as shown in Figure 10, and the wavy annular regime is recognized at Usg=4.7 m/s, as demonstrated in Figure 11. When Usg increases further, to 2.83 m/s, the high flow velocity breaks the water slugs down, producing the wavy annular flow noticed at the bend outlet of the horizontal flow line.

Fig. 9.
Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.262 m/s and Usg=0.34 m/s.

Fig. 10. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.262 m/s and Usg=1.4 m/s.

Fig. 11. Flow pattern identification methods by (a) void fraction time series, (b) PDF technique, (c) PSD analysis, and (d) tomographic images at Usl=0.262 m/s and Usg=2.83 m/s.

C. Mean void fraction and liquid holdup (h)

The liquid holdup (h) of the two-phase flow is defined as the liquid volumetric flow rate divided by the total mixture volumetric flow rate. The mean liquid holdup values are obtained by averaging all the local cross-sectional liquid holdup values over the time series. The mean h of the air-water flow diminishes linearly as the air superficial velocity Usg gradually increases at a steady water superficial velocity Usl, which may be explained by bubbles expanding steeply with increasing Usg [15]. Additionally, it is observed that increasing or decreasing the liquid phase flow rate directly and noticeably affects h. Figure 12 shows the plots of the mean h for constant Usl values of 0.052, 0.157, 0.262, 0.314, and 0.419 m/s against Usg values varying from 0.05 to 4.7 m/s in the vertical and undeveloped horizontal flow lines. The mean liquid holdup h is seen to diminish steeply with increasing Usg under all conditions in both flow lines (panels (b) and (c)). At low Usl, the mean liquid holdup diminishes considerably more at the bend outlet than at the bend inlet.
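For WMS data, the holdup definition above reduces to averaging the local h = 1 − α values over the time series. A minimal sketch (the numbers are synthetic, for illustration only, not measured data):

```python
import numpy as np

def mean_liquid_holdup(alpha_series):
    """Mean liquid holdup: average of the local h = 1 - alpha values over the time series."""
    return float(np.mean(1.0 - np.asarray(alpha_series, dtype=float)))

# Illustrative trend only: h falls as the gas superficial velocity Usg rises (cf. Figure 12)
for usg, alpha in [(0.05, [0.08, 0.12, 0.10]), (4.7, [0.88, 0.92, 0.90])]:
    print(f"Usg = {usg} m/s -> mean h = {mean_liquid_holdup(alpha):.2f}")
```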
The correlations between the mean liquid holdups at different Usg values in this test are compared with [7-9].

Fig. 12. Liquid holdup (h) versus Usg for the vertical and horizontal flow lines.

IV. Conclusion

Increasing the water superficial velocity did not change the pattern in the vertical pipe, but it had a clear impact on the flow patterns of the horizontal flow lines through the combined effects of the 90° bend, the gravitational force, and the buoyancy force. These forces acted on the flow and forced the water to move toward the inner region of the bend centerline while the air flowed along the outer side of the bend. As a result, large bubbles were broken up at the bend entrance by the imbalance caused by surface tension and centrifugal force. The flow patterns at the bend outlet may show different behavior depending on the bend curvature radius and diameter, the fluid properties, temperature, etc. Consequently, the bubbly flow regime in the vertical pipe changed to stratified-wavy flow at low Usl values, while at higher Usl values the bubbly flow pattern in the vertical pipe changed to bubbly and plug flow regimes in the horizontal pipe. Likewise, the slug flow in the vertical pipe changed to the stratified-wavy pattern at low Usl values, and to the slug pattern as Usl increased. The churn pattern in the vertical pipe changed to the stratified-wavy and semi-annular patterns at low Usl in the horizontal flow lines, and to the wavy-annular pattern at higher Usl. Finally, no stratified flow regime was observed in the horizontal flow lines under the tested conditions, and no bubbly, plug, or slug flow was observed in the horizontal flow lines at the low Usl values.

References

[1] C. T. Crowe, Multiphase Flow Handbook, CRC Press, 2005
[2] M. J. da Silva, Impedance Sensors for Fast Multiphase Flow Measurement and Imaging, PhD Thesis, Technische Universität Dresden, 2008
[3] G. C. Gardner, P. H.
Neller, “Phase distributions in flow of an air-water mixture round bends and past obstructions at the wall of a 76-mm bore tube”, Proceedings of the Institution of Mechanical Engineers, Vol. 184, No. 33, pp. 93-101, 1969
[4] S. Kim, J. H. Park, G. Kojasoy, J. M. Kelly, “Local interfacial structures in horizontal bubbly flow with 90-degree bend”, 14th International Conference on Nuclear Engineering, July 17-20, 2006
[5] J. D. Talley, S. Kim, T. Guo, G. Kojasoy, “Geometric effects of 45-deg elbow in horizontal air-water bubbly flow”, Nuclear Technology, Vol. 167, No. 1, pp. 2-12, 2009
[6] Y. Liu, M. Shuichiro, H. Takashi, I. Mamoru, M. Hideyuki, K. Yoshiyuki, K. Koichi, “Experimental study of internal two-phase flow induced fluctuating force on a 90° elbow”, Chemical Engineering Science, Vol. 76, pp. 173-187, 2012
[7] F. Saidj, R. Kibboua, A. Azzi, N. Ababou, B. J. Azzopardi, “Experimental investigation of air-water two-phase flow through vertical 90° bend”, Experimental Thermal and Fluid Science, Vol. 57, pp. 226-234, 2014
[8] L. A. Abdulkareem, Tomographic Investigation of Gas-Oil Flow in Inclined Risers, PhD Thesis, University of Nottingham, 2011
[9] M. De Salve, G. Monni, B. Panella, “Horizontal air-water flow analysis with wire mesh sensor”, 6th European Thermal Sciences Conference (Eurotherm 2012), IOP Publishing, 2012
[10] A. E. Dukler, M. G. Hubbard, “A model for gas-liquid slug flow in horizontal and near horizontal tubes”, Industrial & Engineering Chemistry Fundamentals, Vol. 14, No. 4, pp. 337-347, 1975
[11] W. Liu, C. Tan, F. Dong, “Local characteristic of horizontal air-water two-phase flow by wire-mesh sensor”, Transactions of the Institute of Measurement and Control, Vol. 40, No. 3, pp. 746-761, 2016
[12] H. F. Velasco Pena, O. M. H. Rodriguez, “Applications of wire-mesh sensors in multiphase flows”, Flow Measurement and Instrumentation, Vol. 45, pp. 255-273, 2015
[13] H. M. Prasser, A. Bottger, J.
Zschau, “A new electrode-mesh tomograph for gas-liquid flows”, Flow Measurement and Instrumentation, Vol. 9, No. 2, pp. 111-119, 1998
[14] I. D. Johnson, Method and Apparatus for Measuring Water in Crude Oil, United States Patent 4644263, 1987
[15] D. J. Nicklin, J. O. Wilkes, J. F. Davidson, “Two-phase flow in vertical tubes”, Transactions of the Institution of Chemical Engineers, Vol. 40, No. 1, pp. 61-68, 1962

Engineering, Technology & Applied Science Research, Vol. 10, No. 2, 2020, pp. 5367-5370 www.etasr.com
Shaikh et al.: A Short Review on Green Supply Chain Management Practices: The Impact on Operational and Environmental Performance

A Short Review on Green Supply Chain Management Practices: The Impact on Operational and Environmental Performance

Fazal Ali Shaikh, Department of Economics, University of Sindh, Jamshoro, Pakistan
Muhammad Saeed Shahbaz, Department of Management Sciences, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Pakistan
Nasurullah Odhano, Department of Economics, University of Sindh, Jamshoro, Pakistan

Abstract—The aspects of sustainability and ecology have gradually become matters of significant concern within supply chain management processes. The aim of this study is to investigate the impact of the green supply chain on the environment and on operational performance. This study considers environmental management practices within firms, sustainable supply chain management practices relating to suppliers and customers, and environmentally conscious product and process design, adopting a case-study approach focusing on four major firms. The findings of this study reveal that the companies applying green supply chain management achieve better environmental performance, but at an extra cost. Meanwhile, green practices provide improved customer satisfaction and attraction for retailers, distributors, and authorities.

Keywords—green supply chain; environmental performance; operational performance; case study

I.
Introduction

The concepts of supply chain environmental management (SCEM) or green supply chain management (GSCM) are usually understood by industry as the monitoring of the environmental performance of suppliers. However, the exercise of conscious trade has been gaining escalating consideration, and a growing number of companies are pondering the incorporation of ecological practices into their policy designs [1]. Firms are given several motives to become more environmentally friendly, and there are several motivational factors for introducing and applying the GSCM concept [2], such as market opportunities, trade efficiency, regulatory fulfillment, and risk management [3]. GSCM has a major role in securing all of the above elements [4]. Environmental impact may occur at all stages of a product's production chain and lifetime, and GSCM has emerged as a significant novel practice through which enterprises can lower their environmental impact [5, 6].

II. Literature Review

An empirical survey of American purchasing supervisors regarding green buying showed that the fundamental motivating element for green buying is meeting the regulations; the effectiveness of environmental regulations on buying practices is expected to become the second most significant concern in the future [4]. Though the link between GSCM and firms' performance has been explored, the outcomes have not been decisive. There are two opposite theories regarding the connection between performance and environmental impact [7]: the first proposes that environmental administration should merely ensure the fulfillment of regulations, while the second holds that environmental administration should be allowed to increase costs and investments in order to achieve better results. A study of the impact of environmental parameters on investment, in terms of electricity use, concluded that they are connected with the decline in industrial manufacturing [8].
Corresponding author: Fazal Ali Shaikh (fazal_110_shaikh@hotmail.com)

An optimistic connection between firms' performance and environmentally friendly practices has been documented in [9]: the suggested framework and empirical outcomes point to a positive impact of ecological practices on market share and cost. Recent research provided an understanding of how supply chain practices might be designed to improve eco-efficient performance; smaller and localized firms were found to adopt eco-friendly approaches more easily [10]. Connection with proprietors supports the acceptance and progress of creative environmental technologies [11], while engaging with clients and staff and cooperative R&D lead towards better environmental performance. It is vague, however, whether GSCM leads to a positive or negative economic outcome: the actual long-term economic impact is not easily assessed through a single factor such as short-span profitability or sales performance [12]. Firms that reduce their environmental impact face an increased production cost, but are also expected to gradually gain an increased market share [5]. The authors in [13] pointed out that environmental administration is in fact a creative environmental design for improving institutional performance, and it has been indicated that an eco-efficient administration approach is capable of enhancing a firm's functional performance [14]. A strong connection has been found between the meeting of objectives and staff contribution to environmental administration [7]. Returns on cost can be positively influenced when clients prefer the products and services of environmentally friendly companies, while investment can be minimized by proactively handling environmental parameters that may otherwise cause hurdles. In addition, eco-friendly approaches may result in innovations that
may provide a head-start advantage to firms, at least from a marketing point of view [15]. A positive connection between corporate communal performance and profit has also been documented [16]. It should be mentioned, however, that empirical studies regarding GSCM practices are scarce. The questions set in this study are: i) what are the effects of the green supply chain on the environmental and functional performance of firms, and ii) what type of environmental administration practices are recommended to enhance a firm's eco-efficient performance. The framework of the study is developed to explore the connection between different GSCM practices [17]. There is a consensus in the literature that eco-friendly practices are a key factor in enhancing a firm's progress [18]. Previous studies highlighted several dimensions of GSCM [19-21] (Tables I-II).

Table I. Environmental management practices

Environmental management practices within a firm:
- commitment to GSCM from senior and middle-level managers
- total quality environmental management
- environmental compliance and auditing program
- ISO 14000 certification

GSCM practices relating to suppliers and customers:
- cooperation with suppliers for environmental objectives
- suppliers' ISO 14000 certification
- company-wide environmental audits
- environmental management for suppliers' internal management
- training to build supplier environmental management capacity
- cooperation with customers for eco-design and cleaner production
- cooperation with customers for green packaging

Environmentally conscious product and process design:
- environmentally friendly raw materials
- design of products for reduced consumption of material and energy
- design of products for reuse, recycling, and recovery of materials
- product design aiming to avoid or reduce the use of hazardous products and/or their manufacturing processes
- optimization of processes to reduce solid/liquid waste and emissions
- use of reverse logistics

Table II. Environmental and operational performance constructs

Environmental performance:
- reduction of solid/liquid waste and emissions
- reduction of the consumption of hazardous/toxic materials
- reduction of the frequency of environmental accidents
- reduction of electricity usage

Operational performance:
- cost savings and increased efficiency
- product quality improvement
- increase in market share
- new market opportunities
- enhanced employee motivation and performance
- increase in sales

It has been found that encouragement from middle-level directors, apart from top managers' guidance, is also a key factor for fruitful GSCM application [22]. It has also been observed that GSCM may offer several advantages, from cost reduction to increased public involvement in the firm's policy (i.e. creating a trend) and thus an increased market share [2, 23]. Environmental concerns are therefore becoming a visible ingredient of tactical patterns within firms [24]. Green advertising and environmentally friendly packaging are practices that may improve the environmental impact of the delivery chain [2, 3]. Considering the environmental influence of packaging, several countries have plans that aim to reduce the cost of wrapping [25]. It has been reported that standardized recyclable containers and fine merchandising designs minimize scarcity and recovery time, making the product cost-friendly as well as eco-friendly [26]. Eco-efficient manufacturing and process patterns could integrate several such ideas, from reducing the consumption of materials and energy during the first stages of the production chain and implementing cleaner practices that minimize solid and liquid wastes, to the utilization of eco-friendly logistics. The return on investment (ROI) has thus been considered a crucial dimension of GSCM [27].

III.
Case Studies

The public data regarding the GSCM practices of four major firms are considered as case studies. The firms were selected considering their market share, overall status, data availability, and their overall environmental policy. Major firms were selected so that the principles and practices described may serve as a roadmap of future trends for smaller firms and policy makers.

A. Eastman Chemical Company

Eastman is devoted to durable supply chain administration practices and functional performance improvement practices, such as measuring brokers' contributions, evolving substitute supply strategies, establishing brokers' solutions, enhancing packaging, utilizing recyclable packaging, and encouraging supply chain networking, besides developing clients' solutions and managing material recovery [28, 29]. Eastman follows several environmental quantifiers and evolved its eco-efficient function practices by adding a greenhouse-gas decrease objective, called TRI (Toxic Release Inventory). The design of Eastman's energy management policy has combined the demand for reasonable energy consumption with the demand to reduce production cost, e.g. by ensuring that the produced heat is used in more than one chemical process. The firm also claims to apply efficient water administration practices and to use recyclable materials in order to reduce waste [25].

B. Westpac Bank, Australia

The basic advantage of this firm is that it works along with brokers and clients to enhance its manufacturing and products, e.g. in redesigning packaging and in utilizing recyclable materials in several products. That process has brought enhancements in production and reduced logistics costs [30]. The company has declared a commitment to comply with, or even exceed, the environmental legislation requirements in the areas where it operates. The firm claims to have ensured reduced energy consumption and emissions in transportation, while it is a certified carbon-neutral business [31].

C.
coca cola enterprises the firm endorses five strategic corporate responsibility and sustainability (crs) focus dimensions [28]. coca cola invested us $34.8 million in 2008 on capital schemes concerning its three eco-efficiency focus fields. in addition, it is developing a cost evaluation process to focus on crs investments.
engineering, technology & applied science research vol. 10, no. 2, 2020, 5367-5370 5369 www.etasr.com shaikh et al.: a short review on green supply chain management practices: the impact on …
the firm has highlighted certain goals in the fields of energy consumption, water management, sustainable packaging, product assortment, wellbeing, and a diverse and inclusive culture. it has committed to decreasing its entire carbon footprint by 15% by 2020 in comparison to the 2007 baseline. the firm also follows a sustainable water function in which the usage of water is reduced and water-neutral effectiveness is achieved. to decrease the influence of package-related waste, the utilization of recyclable resources has been set as a major goal [32, 33]. the company claims it has reduced its carbon footprint by utilizing a hybrid fleet and by establishing a sustainable water function, thus decreasing its water utilization ratio and preserving more than 300 million liters of water, while it also carried out a pilot study of embedded water footprint [19]. to reduce the influence of packaging materials, the company avoided the use of around 31,000 metric tons of packaging materials, while 2.7% of the total quantity was recovered. d. ernst and young the firm has adopted the sustainability assurance methodology and is a member of the china financial institution's green finance committee. the provided services are eco-friendly and comply with international standards and regulations.
the company specializes in providing tailored sustainability services to its clients, such as consultations and funding regarding carbon emission reduction, supply chain management, risk management, etc. [34]. iv. discussion it is sufficient to say that in the days to come, carbon emissions will be considered tantamount to currency. hence, it is significant for international firms to take measures to control their supply chain and to estimate future costs and liabilities. in order to counter rising energy costs and to decrease in-house emissions, approximately 40% of the firms have invested in renewable energy generation, which provides a firm grip on the cost of energy, improves the firm's credibility, and may even become profitable when selling the surplus of the produced electricity [30]. a growing number of traders are considering the launch of sustainable production to enhance their market share. a large proportion of participants regard sustainability as an opportunity for revenue growth. credibility and brand name are the areas where prospects for sustainability and carbon-related policies arise. several prominent firms have yet to determine the full potential of advantages and profits of sustainable chain administration [20]. a firm's reputation is harmed if its supply chain is found to be socially irresponsible [35]. environmentally friendly and sustainable practices, on the other hand, improve the reputation of a firm and, ultimately, its market share, even with higher-cost products [36]. major firms devote significant resources towards improving their environmental impact, developing and establishing alternative techniques and approaches, reducing energy consumption, improving packaging, enhancing the use of reusable materials, etc. besides the prominent eco-friendly practices, other minor practices, such as greenhouse gas emission reduction and preservation of logistics effectiveness, have also been documented.
other frequently reported advantages are augmented effectiveness, reduced cost, enhanced risk management, revenue growth, and a credible reputation. it is significant for a company to have an ethical supply chain. the functional aspects that need to be amalgamated into the existing framework are enhanced risk management and credibility [21]. firms that adopted sscm practices observed a prominent impact on their environmental and functional performance. a limitation of this research was its convenience sampling, so more detailed case studies can be carried out. v. conclusion this study was carried out with the aim of evaluating the association of sscm practices in corporations with functional performance, besides environmental performance. a framework was established and an effort was made to validate it through case studies. more specifically, the implementation of environmental practices in supply chain administration, along with the functional performance of organizations, was evaluated. this study is meant to assist organizations in implementing ecological or environmental supply chain administration practices in order to raise their competitiveness in the global market. carbon emission reduction is currently the most significant environmental issue. a major aim of this study is the exploration of the ecological dimensions of supply chain administration and of the course that must be followed in order to address this issue. references [1] q. zhu, j. sarkis, “relationships between operational practices and performance among early adopters of green supply chain management practices in chinese manufacturing enterprises”, journal of operations management, vol. 22, no. 3, pp. 265–289, 2004 [2] g. c. wu, “the influence of green supply chain integration and environmental uncertainty on green innovation in taiwan’s it industry”, supply chain management, vol. 18, no. 5, pp. 539–552, 2013 [3] c. i. yang, s.
lien, “governance mechanisms for green supply chain partnership”, sustainability, vol. 10, no. 8, article id 2681, 2018 [4] q. zhu, j. sarkis, k. h. lai, “confirmation of a measurement model for green supply chain management practices implementation”, international journal of production economics, vol. 111, no. 2, pp. 261–273, 2008 [5] a. longoni, d. luzzini, m. guerci, “deploying environmental management across functions: the relationship between green human resource management and green supply chain management”, journal of business ethics, vol. 151, no. 4, pp. 1081–1095, 2018 [6] k. w. green jr, p. j. zelbst, j. meacham, v. s. bhadauria, “green supply chain management practices: impact on performance”, supply chain management, vol. 17, no. 3, pp. 290–305, 2012 [7] s. zailani, k. jeyaraman, g. vengadasan, r. premkumar, “sustainable supply chain management (sscm) in malaysia: a survey”, international journal of production economics, vol. 140, no. 1, pp. 330–340, 2012 [8] a. touboulic, h. walker, “theories in sustainable supply chain management: a structured literature review”, international journal of physical distribution & logistics management, vol. 45, no. 1-2, pp. 16–42, 2015 [9] c. busse, j. meinlschmidt, k. foerstl, “managing information processing needs in global supply chains: a prerequisite to sustainable supply chain management”, journal of supply chain management, vol. 53, no. 1, pp. 87–113, 2017 [10] c. r. carter, d. s. rogers, “a framework of sustainable supply chain management: moving toward new theory”, international journal of physical distribution & logistics management, vol. 38, no. 5, pp. 360–387, 2008 [11] s. seuring, m. muller, “from a literature review to a conceptual framework for sustainable supply chain management”, journal of cleaner production, vol.
16, no. 15, pp. 1699–1710, 2008 [12] s. l. t. berger, g. l. tortorella, c. m. t. rodriguez, “lean supply chain management: a systematic literature review of practices, barriers and contextual factors inherent to its implementation”, in: progress in lean manufacturing, pp. 39-68, springer, 2018 [13] t. ng, m. ghobakhloo, “what derives lean manufacturing effectiveness : an interpretive structural model”, international journal of advanced and applied sciences, vol. 4, no. 8, pp. 104–111, 2017 [14] a. b. daud, a study on lean supply chain implementation in malaysia’s electrical and electronics industry: practices and performances, msc thesis, universiti sains malaysia, 2010 [15] z. h. zhang, b. f. li, x. qian, l. n. cai, “an integrated supply chain network design problem for bidirectional flows”, expert systems with applications, vol. 41, no. 9, pp. 4298–4308, 2014 [16] r. z. r. m. rasi, a. abdekhodaee, r. nagarajah, “understanding drivers for environmental practices in smes: a critical review”, ieee international conference on management of innovation & technology, singapore, june 2-5, 2010 [17] m. s. shahbaz, a. g. kazi, b. othman, m. javaid, k. hussain, r. z. r. m. rasi, “identification, assessment and mitigation of environment side risks for malaysian manufacturing”, engineering, technology & applied science research, vol. 9, no. 1, pp. 3851–3857, 2019 [18] m. hasan, “sustainable supply chain management practices and operational performance”, american journal of industrial and business management, vol. 3, no. 1, article id 26787, 2013 [19] s. sohu, a. halid, s. nagapan, a. fattah, i. latif, k. ullah, “causative factors of cost overrun in highway projects of sindh province of pakistan”, iop conference series: materials science and engineering, vol. 
271, article id 012036, 2017 [20] us resilience project, dow chemical: strategies for supply chain security and sustainability, available at: https://usresilienceproject.org/ wp-content/uploads/2014/09/pdf-usrpdow_cs_012312.pdf, 2011 [21] r. z. r. m. rasi, r. ramlan, t. perera, n. j. azmi, ““enviroprenurial” value chain-a conceptual framework for malaysian small and medium enterprises”, international conference on industrial engineering and operations management, bali, indonesia, january 7–9, 2014 [22] a. hussain, r. m. yusoff, m. a. khan, m. l. m. diah, m. s. shahbaz, “the effect of transformational leadership on employee job performance through mediating role of organizational commitment in logistic sector of pakistan”, international journal of supply chain management, vol. 8, no. 4, pp. 162–176, 2019 [23] m. s. shahbaz, r. z. r. m. rasi, m. f. bin ahmad, s. sohu, “the impact of supply chain collaboration on operational performance: empirical evidence from manufacturing of malaysia”, international journal of advanced and applied sciences, vol. 5, no. 8, pp. 64–71, 2019 [24] m. s. shahbaz, r. z. r. m. rasi, m. h. zulfakar, m. f. b. ahmad, m. m. asad, “theoretical framework development for supply chain risk management for malaysian manufacturing”, international journal of supply chain management, vol. 7, no. 6, pp. 325–338, 2018 [25] m. s. shahbaz, s. sohu, f. z. khaskhelly, a. bano, m. a. soomro, “a novel classification of supply chain risks: a review”, engineering, technology & applied science research, vol. 9, no. 3, pp. 4301–4305, 2019 [26] m. s. shahbaz, r. z. r. rasi, m. f. b. ahmad, “a novel classification of supply chain risks: scale development and validation”, journal of industrial engineering and management, vol. 12, no. 1, pp. 201–218, 2019 [27] m. s. shahbaz, a. f. chandio, m. oad, a. ahmed, r. 
ullah, “stakeholders’ management approaches in construction supply chain: a new perspective of stakeholder’s theory”, international journal of sustainable construction engineering and technology, vol. 9, no. 2, pp. 16–26, 2018 [28] carbon disclosure project, missing link: harnessing the power of purchasing for a sustainable future, available at: https://www.bsr.org/reports/report-supply-chain-climate-change-2017.pdf, 2017 [29] eastman, corporate environmental policy, available at: www.eastman.com/literature_center/misc/corporateenvironmentalpolicy.pdf [30] supply chain council, supply chain operations reference model, revision 11.0, supply chain council, 2012 [31] westpac group, westpac group environment policy, available at: https://www.westpac.com.au/docs/pdf/aw/environmentalpolicy.pdf [32] coca cola european partners, environment policy: our approach to environmental management, available at: www.cocacolaep.com/assets/sustainability/documents/98d216dc36/environment-policy-our-approach-to-environmental-management.pdf, 2019 [33] cips, the global standard for procurement and supply, cips [34] ernst & young, climate change and sustainability services, available at: https://www.ey.com/publication/vwluassets/ey-climate-change-and-sustainability-services-brochure-en/$file/ey-climate-change-and-sustainability-services-brochure-en.pdf [35] s. mubarik, n. naghavi, m. f. mubarak, “governance-led intellectual capital disclosure: empirical evidence from pakistan”, humanities and social sciences letters, vol. 7, no. 3, pp. 141–155, 2019 [36] m. s. mubarik, c. govindaraju, e. s. devadason, “human capital development for smes in pakistan: is the “one-size-fits-all” policy adequate”, international journal of social economics, vol. 43, no. 8, pp. 804-822, 2016 engineering, technology & applied science research vol. 10, no.
3, 2020, 5769-5774 5769 www.etasr.com chakraborty & tharini: pneumonia and eye disease detection using convolutional neural networks pneumonia and eye disease detection using convolutional neural networks parnasree chakraborty electronics & communication engineering bsa crescent institute of science & technology chennai, india c. tharini electronics & communication engineering bsa crescent institute of science & technology chennai, india abstract—automatic disease detection systems based on convolutional neural networks (cnns) are proposed in this paper for helping the medical professionals in the detection of diseases from scan and x-ray images. cnn based classification helps decision making in a prompt manner with high precision. cnns are a subset of deep learning which is a branch of artificial intelligence. the main advantage of cnns compared to other deep learning algorithms is that they require minimal preprocessing. in the proposed disease detection system, two medical image datasets consisting of optical coherence tomography (oct) and chest x-ray images of 1-5 year-old children are considered and used as inputs. the medical images are processed and classified using cnn and various performance measuring parameters such as accuracy, loss, and training time are measured. the system is then implemented in hardware, where the testing is done using the trained models. the result shows that the validation accuracy obtained in the case of the eye dataset is around 90% whereas in the case of lung dataset it is around 63%. the proposed system aims to help medical professionals to provide a diagnosis with better accuracy thus helping in reducing infant mortality due to pneumonia and allowing finding the severity of eye disease at an earlier stage. keywords-convolutional neural network; artificial intelligence; x-rays; pneumonia i. introduction a medical image based disease detection system using cnn is proposed in this paper. 
the suggested system is able to detect pneumonia and eye diseases from x-ray and scan images respectively. the novel feature of the proposed system is that it has been implemented using low-cost hardware. in [1], a diagnostic system is proposed for detecting retinal diseases. the results show that the performance of the proposed method is comparable to that of human experts. however, the implementation of the system in hardware is not suggested. a computationally efficient algorithm is introduced in [2]. the adam stochastic optimization method is used to train the neural network. empirical results demonstrate that adam works well in practice and compares favourably to other stochastic optimization methods. in [3], the effect of the convolutional network depth on its accuracy is investigated and changes in architectural configuration which improve the accuracy of the algorithm are proposed. a deep-learning-based approach to detect diseases and pests in tomato plants using images is presented in [4]. the images are captured in-place by camera devices with various resolutions and are processed. the experimental results show that the proposed system can effectively recognize nine different types of diseases and pests in tomato plants. in [5], the face detection and face recognition pipeline framework (fdrenet) is proposed, which involves face detection through histograms of oriented gradients and uses the siamese technique and contrastive loss to train a deep learning architecture. however, disease detection is not investigated in that paper. on the other hand, a review of the applications of ai in soil management, crop management, weed management, and disease management can be seen in [6], but disease management and disease detection in humans using ai are not investigated. ii. datasets used in order to test the proposed idea, two datasets were considered. the lung dataset consists of images from [7] and the eye dataset of images from [8].
data are essential to train any neural network. the neural network, apart from other parameters, is only as good as the data it is trained on. for training the cnn, medical image data are used. two different kinds of publicly available medical image datasets are considered for training two convolutional neural networks. oct images in the iris region of the eye are considered for eye disease detection. oct is a non-invasive method of imaging biological tissues using low-coherence light. it can capture two-dimensional and three-dimensional images at micrometer-level resolution. the oct scan images are classified under 4 categories: i) choroidal neovascularization, ii) diabetic macular edema, iii) multiple drusen, and iv) normal. choroidal neovascularization is the creation of new blood vessels in the choroid region of the eye. this problem is a major cause of vision loss. macular edema is a build-up of fluid in an area in the center of the retina. this build-up causes the macula to thicken, distorting vision. drusen are deposits of lipids and fatty proteins under the retina. having drusen may increase the possibility of age-related macular degeneration. the dataset contains normal/healthy iris scan images too. the images are collected from the [8] dataset, which contains more than 5gb of 84438 images from [9, 10], classified in the above mentioned categories. chest x-ray images of children belonging to 3 classifications: i) viral pneumonia, ii) bacterial pneumonia, and iii) normal, were taken from [7] and are considered in this study. pneumonia is an infection that accumulates in the lung’s air sacs, causing hindrance to breathing. (corresponding author: p. chakraborty, prernasree@crescent.education)
the lung image dataset contains 1gb of 5238 images belonging to the 3 above mentioned categories. both datasets were split into three sets: train, validation, and test. figure 1 describes the training of a neural network. the given data are split into training, validation, and testing data, utilizing 70%, 20%, and 10% of the data respectively. after each iteration of training, the neural network is tested with the validation data to see its performance at that instant. after completing the whole training process, the performance is evaluated using the testing data. the proposed method is heavily inspired by [11], and a similar neural network with the lung data was presented in [12]. the image datasets [7-10] are first collected and annotated or labeled in order to distinguish the normal images from the images with diseases. to generate the training dataset, the existing labeled data are further used to generate a new dataset using a technique called augmentation. the annotated and augmented data are used for training the proposed neural network. fig. 1. block diagram of the proposed system. iii. convolutional neural networks cnns [3] are a type of deep artificial neural networks, used mainly to identify and cluster images and to perform object recognition. a cnn consists of image processing layers and neural network layers, namely: (a) convolutional layer, (b) pooling layer, (c) flattening layer, (d) relu layer, and (e) softmax layer. these layers are described briefly below. a. convolutional layer the convolutional layer is the main building block of a cnn. the layer's parameters consist of a set of user-defined learnable filters (or kernels), each generally a 3×3 matrix, which iterates through each submatrix of the input. the number of filters used is generally of the order of 2^n.
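the 70/20/10 train/validation/test split described above can be sketched in a few lines; this is a minimal illustration (the function name and the use of index lists instead of image files are assumptions, not the authors' code):

```python
import random

def split_dataset(samples, train_frac=0.70, val_frac=0.20, seed=42):
    """Shuffle and split a list of samples into train/validation/test
    subsets (70%/20%/10% by default), as described for both datasets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# e.g. the 5238 lung images -> 3666 train, 1047 validation, 525 test
train, val, test = split_dataset(list(range(5238)))
```

in practice the lists would hold image file paths rather than integers, and the augmented images would be added to the training subset only.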
during a forward pass, each filter is convolved across the dimensions of the input image matrix, the mathematical function carried out being a dot product, thus producing a 2-dimensional feature-extracted matrix for that filter. this reveals various details, like vertical or horizontal edges of the images, which are extracted and fed into the next layer. the weights that are used are generated randomly using the glorot uniform distribution function. figure 2 shows the filters. figure 3(c) demonstrates the output image when an input image, shown in figure 3(a), is convolved with one of the displayed filters. fig. 2. the 32 filters used in the proposed cnn. b. pooling layer another important concept used in cnns is pooling, which is a form of non-linear down-sampling. out of the several pooling functions analyzed in [13], max-pooling is the most effective. max-pooling partitions the input image into a set of n×n (generally 2×2) submatrices, and the output is the maximum value of each. the convolved image is first converted into arrays and then max-pooling is performed. figure 3 displays the convolution and max-pooling steps. after convolution and max-pooling, the dimensions of the image are reduced from a 50×50 matrix to a 24×24 matrix. fig. 3. (a) input image, (b) filter function, (c) resultant convolved image, and (d) the pooling layer's output shrinking the 50×50 image to a 24×24 image. c. flattening layer the output from the pooling layer will be in a matrix form, which can't be fed into the neural network. the flattening layer converts the n×n matrix from the pooling layer into an n²×1 matrix, which is a compatible format to be fed into the neural network. d. relu layer relu is the abbreviation of rectified linear unit, which applies a non-saturating activation function.
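the convolution and max-pooling steps of figures 2-3 can be sketched with plain numpy; this is a didactic illustration (the edge kernel and random input are assumptions, not the paper's learned filters):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-d convolution: dot product of the kernel with each
    kernel-sized window of the input, as described for the conv layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping size x size max-pooling (keeps the maximum
    of each submatrix)."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return img[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.rand(50, 50)        # one 50x50 grayscale input
edge = np.array([[1, 0, -1]] * 3)   # illustrative vertical-edge 3x3 kernel
feat = conv2d_valid(img, edge)      # 50x50 -> 48x48 feature map
pooled = max_pool(feat)             # 48x48 -> 24x24, as in figure 3
```

the shape sequence 50×50 → 48×48 → 24×24 matches the conv2d and max_pooling2d output shapes reported for the eye model.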
these functions remove negative values of weights by replacing them with zero. this increases the nonlinear properties of the decision function. this activation function is used in the input and hidden layers of the neural network. the type of relu used is the leaky relu. relu, as explained in [14], is used in the neural network layers. figure 4 shows the leaky relu activation function. mathematically, the leaky relu can be defined as:

y = 0.01x when x < 0 (1)
y = x when x ≥ 0 (2)

fig. 4. graphical representation of the relu. e. softmax layer this layer is predominantly used when the neural network solves multiclass-classification problems. it usually consists of a number of output nodes with softmax as the activation function. the softmax function assigns a probability to each node in the output layer. these probability values are normalized to one. the node with the highest value is the prediction of the neural network. the relu layer and the softmax layer both use backpropagation [15] and forward propagation to train the cnn. figure 5 shows the softmax function. mathematically, the softmax function can be defined as:

σ(z)_i = e^(z_i) / Σ_{j=1}^{K} e^(z_j) (3)

where i = 1, 2, …, K and z = (z_1, z_2, …, z_K). equation (3) applies the standard exponential function to each element z_i of the input vector z and normalizes these values by dividing by their sum. this normalization ensures that the sum of the components of the output vector σ(z) is 1. 1) loss function the loss function (or cost function) generally is the difference between the actual output and the predicted output. the main aim of the loss function is to reduce the error, i.e. to minimize the difference between the predicted value and the actual value. the loss function predominantly used for both datasets is the mean squared error. in this method, the difference between the predicted and the actual output is squared. it is better than the gradient descent methods for decreasing loss [16]. the sum of all these squares is divided by their total number.
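the activations of (1)-(3) can be written out directly; a minimal numpy sketch (the example score vector is an assumption for illustration):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU of (1)-(2): 0.01*x for negative inputs, x otherwise."""
    return np.where(x < 0, alpha * x, x)

def softmax(z):
    """Softmax of (3): exponentiate each score and normalize so the
    outputs form probabilities that sum to 1."""
    e = np.exp(z - np.max(z))  # subtracting the max improves stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1, -1.0])   # raw scores for 4 output nodes
p = softmax(z)                        # probabilities over the 4 classes
prediction = int(np.argmax(p))        # node with the highest probability
```

the node returned by argmax corresponds to the predicted class, as described for the softmax layer.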
mathematically, this can be represented as:

mse = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² (4)

where n is the number of inputs, y_i is the actual output, and ŷ_i is the predicted output. fig. 5. graphical representation of the softmax function. 2) optimizer the optimizer is a function which is guided by the loss function to update the weights so that the loss is minimized. it does so by changing the learning rate after every iteration in accordance with the calculated loss function. the weights of each node change based on the learning rate. if the learning rate is too high, the neural network may not learn enough to generalize. if the learning rate is too low, the neural network may learn very slowly. the neural network needs to learn at an optimum speed and in an optimum manner, and that is helped by the optimizer function. the optimizers used were the adam optimizer and the root mean square propagation (rmsprop) optimizer. 3) adam optimizer it is one of the best optimizers available. it is computationally efficient, it augments optimized learning, and it has very small memory requirements. adam [2] stands for adaptive moment estimation. instead of changing the weights based on the first moment (mean) alone or based on the second moment (variance) alone, it uses both the first and the second moment to update the learning parameters:

θ_{t+1} = θ_t − (η / (√v̂_t + ε)) m̂_t (5)

where m̂_t and v̂_t are the (bias-corrected) first and second moment estimates respectively, η is the learning rate, and ε is a small constant that prevents division by zero. iv. results and discussion neural networks with different architectures have been considered. the architecture of a neural network mainly depends on parameters such as the optimizer type, the number of nodes in each layer, etc. the results are discussed below. a. lung dataset table i shows the results for the lung dataset [7].
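equations (4) and (5) can be checked numerically; the sketch below is a single adam step with the standard defaults from [2] (the parameter and gradient values are illustrative assumptions):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error of (4)."""
    return np.mean((y_true - y_pred) ** 2)

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of (5): moving averages of the first and second
    moments of the gradient, bias-corrected, then a scaled step."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([0.5, -0.3])                 # illustrative parameters
m = np.zeros_like(theta)
v = np.zeros_like(theta)
theta, m, v = adam_step(theta, grad=np.array([0.2, -0.1]), m=m, v=v, t=1)
```

on the first step the bias-corrected moments reduce the update to roughly lr times the sign of the gradient, so theta moves to about (0.499, −0.299).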
the architecture comprises an input layer, multiple hidden layers, and an output layer. the training accuracy, training loss, validation accuracy, and validation loss with respect to the number of iterations used for the simulation are listed. the simulation is performed using the python integrated development environment (ide) spyder. the maximum validation accuracy obtained in the case of the lung dataset is only around 63%, with 10 epochs/iterations and 5215 steps per epoch. this result can be further improved with a larger dataset. in table ii, the complete architecture, consisting of two pairs of convolution layers (named conv2d_13 and conv2d_14) and max-pooling layers (named max_pooling2d_13 and max_pooling2d_14), and a flattening layer, is shown. the optimum artificial neural network consists of a dense (hidden) layer of 7 nodes and an output layer of 3 nodes, one for each classification.

table i. observations of the lung dataset

iterations  optimizer  training accuracy  training loss  validation accuracy  validation loss
10          rmsprop    0.4848             0.2122         0.4706               0.2131
15          adam       0.7549             0.1215         0.4706               0.2901
10          adamax     0.4849             0.2108         0.4118               0.2217
15          rmsprop    0.4779             0.3009         0.4290               0.2308
12          rmsprop    0.4851             0.2108         0.5294               0.2040
15          adamax     0.7682             0.2286         0.5294               0.2286
10          adam       0.7698             0.2201         0.6309               0.2252
15          adam       0.5396             0.3547         0.3509               0.3688

table ii. summary of the nn for the lung dataset which yielded the best parameters during training and testing

layer (type)                     output shape         param #
conv2d_13 (conv2d)               (none, 30, 30, 32)   896
max_pooling2d_13 (maxpooling2d)  (none, 15, 15, 32)   0
conv2d_14 (conv2d)               (none, 13, 13, 64)   18496
max_pooling2d_14 (maxpooling2d)  (none, 6, 6, 64)     0
flatten_7 (flatten)              (none, 2304)         0
dense_13 (dense)                 (none, 7)            16135
dense_14 (dense)                 (none, 3)            24

total params: 35,551; trainable params: 35,551; non-trainable params: 0

param # in tables ii and iv indicates the number of weights processed through that given layer.
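the param # values of table ii can be reproduced from the layer shapes; a small sketch assuming 3×3 kernels and 32×32 rgb inputs (the input shape is inferred from the (none, 30, 30, 32) output shape and is not stated explicitly in the paper):

```python
def conv_params(kernel, in_ch, out_ch):
    """Conv layer weights + biases: (kernel*kernel*in_ch + 1) * out_ch."""
    return (kernel * kernel * in_ch + 1) * out_ch

def dense_params(in_nodes, out_nodes):
    """Dense layer weights + biases: (in_nodes + 1) * out_nodes."""
    return (in_nodes + 1) * out_nodes

# layers of Table II, assuming 3x3 kernels and 3-channel (RGB) inputs
params = [
    conv_params(3, 3, 32),        # conv2d_13 -> 896
    conv_params(3, 32, 64),       # conv2d_14 -> 18496
    dense_params(6 * 6 * 64, 7),  # dense_13: 2304 flattened inputs -> 16135
    dense_params(7, 3),           # dense_14 -> 24
]
total = sum(params)               # 35551, matching "total params" in Table II
```

pooling and flattening layers contribute 0 parameters, which is why only the conv and dense layers appear in the sum.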
total params is the sum of all the weights in the total architecture of the neural network. the output shape denotes the number of inputs at a time (given by none) followed by the expected shape of the input. b. eye dataset table iii shows the observations for the eye dataset [8-10]. the maximum validation accuracy obtained in the case of the eye dataset is around 90%, which can be further improved with a larger dataset. table iv shows the summary of the neural network model which yielded the best parameters during training and validation for eye disease detection. the number of epochs used for the eye dataset ranged from 5 to 64, and the maximum validation accuracy was obtained for 15 epochs.

table iii. observations of the eye dataset

iterations  optimizer  training accuracy  training loss  validation accuracy  validation loss
15          adam       0.7060             0.1016         0.9062               0.039
5           adam       0.6823             0.1079         0.7812               0.078
20          adam       0.6886             0.1062         0.8438               0.072
32          adam       0.7031             0.1031         0.0982               0.718
34          rmsprop    0.4702             0.1608         0.5625               0.143
13          adam       0.6625             0.1120         0.7812               0.085
64          adam       0.7379             0.0942         0.7241               0.098
10          adam       0.7969             0.5274         0.7325               0.688

table iv. summary of the nn for the eye dataset which yielded the best parameters during training and testing

layer (type)                    output shape         param #
conv2d (conv2d)                 (none, 48, 48, 32)   320
max_pooling2d (maxpooling2d)    (none, 24, 24, 32)   0
conv2d_1 (conv2d)               (none, 22, 22, 32)   9248
max_pooling2d_1 (maxpooling2d)  (none, 11, 11, 32)   0
flatten (flatten)               (none, 3872)         0
dense (dense)                   (none, 4)            15492

total params: 25,060; trainable params: 25,060; non-trainable params: 0

it can be seen from table iii that for the eye dataset the optimizer predominantly used was the adam optimizer. in the 5th trial of table iii, the rmsprop optimizer was used. the loss was the mean squared error loss function, with the exception of the 8th trial, where the categorical cross entropy loss function was used.
the complete architecture consists of two pairs of convolution layers (named conv2d and conv2d_1) and max-pooling layers (named max_pooling2d and max_pooling2d_1), and a flattening layer named flatten. the ann has an output layer consisting of four nodes, one for each kind of classification. the output shape consists of a 4-dimensional array for the 2 pairs of convolutional and max-pooling layers. the 1st dimension (denoted by none in all the given pairs) is the number of inputs that will be fed into that layer at a given time. it is mentioned as none because these observations were taken after training, when there was no input to be fed in at that instant of time. the remaining 3 dimensions are the dimensions of a single input unit. the same holds true for the remaining flattening and neural network layers. v. deployment the neural networks which yielded the best parameters were saved in h5 format and were deployed on a raspberry pi, which runs raspbian with features to program in python 3.5.3. a simple graphical user interface (gui) was made, where the user is asked to enter the directory of the image, and the neural network makes the prediction and displays the result. snapshots of the results and of the gui output for both datasets can be seen in figures 6-11. the complete setup used to implement the proposed system in hardware is shown in figure 12. the hardware part includes the raspberry pi board interfaced with the gui using the tkinter library in the python ide. further, there are two ways to connect the lcd to the raspberry pi board: 4-bit mode and 8-bit mode. in this work, the 4-bit mode was used, in which the byte to be sent is split into two sets (upper bits and lower bits) of 4 bits each, which are sent one by one over 4 data wires.
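at inference time the deployed system loads the saved .h5 model and maps its 4-node softmax output to the label shown in the gui/lcd; a minimal sketch of the label-mapping step (the class ordering in EYE_CLASSES and the file name eye_model.h5 are illustrative assumptions, not confirmed by the paper):

```python
import numpy as np

# labels for the eye model's 4 output nodes (order is illustrative)
EYE_CLASSES = ["choroidal neovascularization", "diabetic macular edema",
               "multiple drusen", "normal"]

def predict_label(probabilities, classes=EYE_CLASSES):
    """Map a softmax output vector to the label displayed in the GUI."""
    probabilities = np.asarray(probabilities)
    return classes[int(np.argmax(probabilities))]

# in the deployed system the vector would come from the saved model, e.g.:
#   model = tensorflow.keras.models.load_model("eye_model.h5")
#   probs = model.predict(image_batch)[0]
probs = [0.05, 0.85, 0.07, 0.03]
label = predict_label(probs)   # -> "diabetic macular edema"
```

the lung model would use the same mapping with its 3 output classes (viral pneumonia, bacterial pneumonia, normal).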
3, 2020, 5769-5774 5773 www.etasr.com chakraborty & tharini: pneumonia and eye disease detection using convolutional neural networks
fig. 6. gui output of the neural network predicting that the given iris image has diabetic macular edema
fig. 7. gui output of the neural network predicting that the given iris image has choroidal neovascularisation
fig. 8. gui output predicting that the given iris image has multiple drusen
fig. 9. gui output predicting that the given iris image doesn't have any disease
fig. 10. gui output predicting that the given x-ray image has viral pneumonia
fig. 11. gui output of the neural network predicting that the given x-ray image has bacterial pneumonia
fig. 12. experimental setup
fig. 13. result obtained for a normal human iris scan
fig. 14. result obtained for a human chest x-ray with bacterial pneumonia

figures 13 and 14 show the eye disease detection system and the pneumonia detection system implemented in hardware.

vi. conclusions and future work

a medical image based disease detection system using convolutional neural networks was proposed and developed. the eye disease detection system effectively classifies normal eye images and eye images with diseases such as choroidal neovascularization, diabetic macular edema, and multiple drusen. the lung image dataset [7] consisted of bacterial pneumonia, viral pneumonia, and normal lung x-ray images of children in the age group of 1-5 years. the training model was implemented using python libraries such as tensorflow, keras, and skimage to improve training speed. the enhanced training speed made the real-time implementation of the systems more feasible.
the proposed system has the potential to be used in high-end biomedical imaging applications and provides a cost-effective solution on a single-board computer (raspberry pi). regarding future work, the focus will be on improving the current results. another promising application is to extend the idea to the identification of various diseases, not only in humans but also in plants and crops.

references
[1] d. s. kermany, m. goldbaum, w. kai et al., "identifying medical diagnoses and treatable diseases by image-based deep learning", cell, vol. 172, no. 5, pp. 1122-1131, 2018
[2] d. p. kingma, j. ba, "adam: a method for stochastic optimization", international conference on learning representations, san diego, usa, may 7-9, 2015
[3] k. simonyan, a. zisserman, "very deep convolutional networks for large-scale image recognition", international conference on learning representations, san diego, usa, may 7-9, 2015
[4] a. fuentes, s. yoon, s. cheol kim, d. sun park, "a robust deep-learning-based detector for real-time tomato plant diseases and pests recognition", sensors, vol. 17, no. 19, article id 2022, 2017
[5] d. virmani, p. girdhar, p. jain, p. bamdev, "fdrenet: face detection and recognition pipeline", engineering, technology & applied science research, vol. 9, no. 2, pp. 3933-3938, 2019
[6] n. c. eli-chukwu, "applications of artificial intelligence in agriculture: a review", engineering, technology & applied science research, vol. 9, no. 4, pp. 4377-4383, 2019
[7] d. kermany, k. zhang, m. goldbaum, "labeled optical coherence tomography (oct) and chest x-ray images for classification", mendeley data, vol. 2, 2018
[8] k. s. mader, "eye oct datasets: retina oct datasets with accompanying fundus images from published studies", available at: https://www.kaggle.com/kmader/eye-oct-datasets
[9] t. mahmudi, r. kafieh, h.
rabbani, "comparison of macular octs in right and left eyes of normal people", in: proceedings spie 9038, medical imaging 2014: biomedical applications in molecular, structural, and functional imaging, 90381k, san diego, california, usa, feb. 15-20, 2014
[10] m. k. jahromi, r. kafieh, h. rabbani, a. m. dehnavi, a. peyman, f. hajizadeh, m. ommani, "an automatic algorithm for segmentation of the boundaries of corneal layers in optical coherence tomography images using gaussian mixture model", journal of medical signals and sensors, vol. 4, no. 3, pp. 171-180, 2014
[11] p. rajpurkar, j. irvin, r. l. ball et al., "deep learning for chest radiograph diagnosis: a retrospective comparison of the chexnext algorithm to practicing radiologists", plos medicine, vol. 15, no. 11, article id e1002686, 2018
[12] x. gu, l. pan, h. y. liang, r. yan, "classification of bacterial and viral childhood pneumonia using deep learning in chest radiography", 3rd international conference on multimedia and image processing, guiyang, china, march 16-18, 2018
[13] d. scherer, a. muller, s. behnke, "evaluation of pooling operations in convolutional architectures for object recognition", 20th international conference on artificial neural networks, thessaloniki, greece, september 15-18, 2010
[14] k. he, x. zhang, s. ren, j. sun, "delving deep into rectifiers: surpassing human-level performance on imagenet classification", in: proceedings of the ieee international conference on computer vision, pp. 1026-1034, ieee, 2015
[15] y. lecun, l. bottou, g. b. orr, k. r. muller, "efficient backprop", in: neural networks: tricks of the trade, 2nd edition, springer-verlag, 1998
[16] y. lecun, l. bottou, y. bengio, p. haffner, "gradient-based learning applied to document recognition", proceedings of the ieee, vol. 86, no. 11, pp. 2278-2324, 1998

authors profile

parnasree chakraborty is an assistant professor (sr. grade) in the department of electronics and communication engineering at b. s.
abdur rahman crescent institute of science & technology. her research interests include digital signal processing, ai & robotics, wireless sensor networks, and digital communication. she is a life member of the iste. she has published many papers in journals and conferences in the areas of signal processing and wireless sensor networks.

c. tharini is a professor and the head of the department of electronics and communication engineering at b. s. abdur rahman crescent institute of science & technology. she received her phd in information and communication engineering from anna university in 2011. her research interests include wireless communication, wireless sensor networks, and signal processing algorithms for wireless sensor networks. she is an active member of the computer society of india. she has a teaching experience of more than 15 years. her students are presently working in the wireless sensor networks, signal processing, and wireless communication domains. she has published many papers in international journals and conferences in the area of signal processing and wireless sensor networks.

engineering, technology & applied science research vol. 10, no.
2, 2020, 5520-5523 5520 www.etasr.com soomro et al.: contractor's selection criteria in construction works in pakistan

contractor's selection criteria in construction works in pakistan

noor ul islam soomro, department of civil engineering, mehran university of engineering and technology, jamshoro, sindh, pakistan, noorsoomro93@gmail.com
nafees ahmed memon, department of civil engineering, mehran university of engineering and technology, jamshoro, sindh, pakistan, nafees.memon@faculty.muet.edu.pk
aftab hameed memon, department of civil engineering, quaid-e-awam university of engineering, science and technology, nawabshah, sindh, pakistan, aftabm78@hotmail.com
kashif rafique memon, department of civil engineering, mehran university of engineering and technology, jamshoro, sindh, pakistan, kashif.memon33@gmail.com

abstract—the contractor is the primary stakeholder in materializing a project concept. for the successful completion of any project, it is essential for the contractor to have relevant experience. the selection of the appropriate contractor depends on various criteria, which this study examines. based on 71 questionnaire forms received from representatives of contractor, consultant, and client firms involved in the execution of construction works, it was found that quality, bid amount, technical capability, financial stability, and experience are the five most commonly adopted criteria for contractor selection in construction works in pakistan. on the other hand, quality, technical capability, financial stability, equipment availability, and management capability are reported as the top five effective criteria for appropriate contractor selection for any construction project.

keywords-contractor selection; selection criteria; pakistan; sindh; construction works

i. introduction

the success of a project depends on the performance of the contractor, hence its selection is essential.
the selection of the right contractor reduces cost while assuring high-quality work [1]. a research study carried out in hong kong identified critical factors for project success, including the establishment and communication of a conflict resolution strategy, willingness to share resources among project participants, and clearly defined responsibilities [2]. in traditional contracting, contractors are mostly selected based on the lowest-bid criterion, which often results in a decline in the quality, cost, and completion time of the project. thus, it is essential to select a suitable contractor for the project in order to achieve effective performance and successful completion. this paper focuses on studying the contractor selection criteria adopted in construction works in pakistan, identifying the level of adoption of each selection criterion and its level of effectiveness. the level of adoption refers to the occurrence, i.e. how commonly each criterion is used in the construction industry, while the level of effectiveness reflects the importance of the criterion.

ii. contractor selection criteria

contractor selection is a very important phase for the success of any construction work. selection criteria are used to judge and measure the potential of contractors. some common criteria considered in contractor selection are:

• technical capability: contractors must have the capability of completing project activities successfully [3].
• experience: experience plays an important role in performing any task with ease and perfection. hence, it is essential to select the contractor based on its relevant past work experience [3].
• management capability: the planning, organizing, and handling of a project express the management capability of the contractor [3].
• financial stability: the overall financial position and capability must be examined based on the cash flow of the contractor [3].
• past performance: past performance should be considered with regard to time, quality, and cost control requirements [4].
• past relationship: clients need to gather all the information regarding the contractor and evaluate its past affiliations in construction activities [4].
• reputation: the project manager must have an opinion about the contractor's past performance and reputation regarding successful project completion [4].
• occupational health and safety: contractors must apply occupational health and safety principles [3].
corresponding author: noor ul islam soomro
• quality: the contractor must be capable of maintaining the desirable quality standards [3].
• organizational skills: these concern the effective utilization of the required resources and the creation of an environment for teamwork, which directly reduces the level of individual stress.
• current workload: it refers to the present workload of the contractor's running projects [3].
• equipment: it should be ensured that the contractor has the sufficient equipment required for the project [6].
• human resources: human resource management deals with employee selection, recruiting, orientation, training and development, appraising employee performance, motivating employees, deciding compensation, and ensuring safety, welfare, and health in compliance with labor laws [6].
• project-specific requirements: these are specific requirements necessary to ensure a particular project's success and are meant to align the project resources with the objectives of the owner. the benefits of collecting project requirements include cost reduction, a higher project success rate, enhanced stakeholder communication, and effective management of change [5].
• business location: the location of a business is the place where it is situated; it matters in terms of how easily all the project requirements can be met from there. the owner should look at the advantages which each area has to offer [5].
• bid amount: the offered price is colloquially known as a "bid" and is often lower than the asking price [4].

if the contractor is technically sound and has adequate experience, equipment availability, and personnel to satisfy the client, it can achieve the successful completion of the project. further, the financial soundness of the contractor may help to achieve the best quality and project success within the desired time. if the contractor's management capability is high, then it is easy for the contractor to deal with technical personnel regarding experience, quality, and project management. if the contractor's reputation is high, then its past performance and past relationships are considered by clients, making the contractor favorable for being awarded more contracts. selecting a suitable contractor is key to project success; rather than simply awarding the contract to the lowest price, multi-criteria selection practices consider both technical and financial criteria. most changes to these practices are introduced by local official authorities and are useful for maintaining construction quality. a study of the gaza strip showed that a central bidding committee provides a fair bid evaluation process, with equal opportunities for all bidders and suitable accountability [4]. authors in [3] revealed that financial soundness, technical ability, management capability, and health and safety are major criteria for selecting a contractor. various researchers have highlighted different criteria for selecting the appropriate contractor, as shown in table i.
table i.
contractor selection criteria highlighted in the literature ([3-6], [8-20]); the number of studies supporting each criterion is:
experience: 15
technical capability: 12
financial stability: 17
management capability: 14
past relationship: 3
past performance: 12
occupational health and safety: 14
reputation: 7
organizational skills: 4
quality: 10
environmental aspects: 2
current workload: 3
equipment: 1
human resources/managerial resources: 4
project-specific requirements: 2
business location: 1
bid amount: 3

iii. data collection and analysis results

the data for this study were gathered through a survey among construction practitioners. out of 120 sent survey forms, 71 were received from various professionals in relevant fields. prior to the analysis of the gathered data, the reliability of the data was assessed by cronbach's alpha, computed with the statistical software package spss. cronbach's alpha for the level of adoption of the criteria is 0.949, while the alpha for the level of effectiveness is 0.859. both values are higher than 0.7 (the alpha value is considered satisfactory if it is greater than 0.7 [7]), hence the data are considered reliable and can be used for further analysis and for drawing conclusions. the forms were analyzed statistically and the results are discussed below.

a. respondents profile

the respondents participating in the survey had a strong technical profile and were employed in the construction sector. the characteristics of the participants are summarized in table ii.
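the cronbach's alpha reliability check described above can be reproduced outside spss; a minimal sketch of the standard formula (the score matrix below is illustrative dummy data, not the actual survey responses):

```python
# cronbach's alpha from an item-score matrix (respondents x items).
# the paper computed alpha with spss; this reproduces the standard formula.
# the dummy matrix below is illustrative only, not the survey data.

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    def var(xs):                            # sample variance (n - 1), as spss uses
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

dummy = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(dummy), 3))   # -> 0.975
```

a value above 0.7, as obtained here for both rating sets (0.949 and 0.859), indicates acceptable internal consistency.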
from this table it can be observed that the majority of the participants (33 out of 71) are representatives of client firms, while 24 participants are engaged in contractor companies and 14 are working with consultants. among the respondents, 36 have completed a civil engineering degree, 26 have finished a master's, 6 have a diploma, and 3 are phd holders. the respondents have been working in construction works for several years, with experience spanning from less than 5 years (49 respondents) to more than 20 years (5 respondents). among the participants, 13 are involved in project executions with contract sums above rs. 3000m, 14 are working in projects costing rs. 400m to 3000m, and the other practitioners are working in projects costing below rs. 400m. eight participants are directors of their companies, 11 are working at a managerial level, 28 are resident and planning engineers, and 24 respondents are site supervisors and site engineers.

table ii. profile of respondents (frequency / percent / cumulative percent)
organization type: consultant 14 / 19.7 / 19.7; contractor 24 / 33.8 / 53.5; client 33 / 46.5 / 100.0
education level: diploma 6 / 8.5 / 8.5; degree 36 / 50.7 / 59.2; master 26 / 36.6 / 95.8; phd 3 / 4.2 / 100.0
experience level: 0-5 years 49 / 69.0 / 69.0; 6-10 years 9 / 12.7 / 81.7; 11-15 years 5 / 7.0 / 88.7; 16-20 years 3 / 4.2 / 93.0; 21-25 years 2 / 2.8 / 95.8; more than 25 years 3 / 4.2 / 100.0
project size (rs.): less than 20m 18 / 25.4 / 25.4; 20m-50m 16 / 22.5 / 47.9; 50m-150m 4 / 5.6 / 53.5; 150m-400m 6 / 8.5 / 62.0; 400m-800m 8 / 11.3 / 73.2; 800m-1800m 5 / 7.0 / 80.3; 1800m-3000m 1 / 1.4 / 81.7; above 3000m 13 / 18.3 / 100.0

b. level of adoption of contractor's selection criteria

the participants in the survey were required to rate the adoption level of the various contractor selection criteria considered by their respective companies.
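the percent and cumulative-percent columns of table ii follow directly from the raw frequencies; a short sketch (the function name is illustrative):

```python
# reproducing the percent and cumulative-percent columns of table ii
# from the raw frequencies (71 respondents in total).

def percent_columns(freqs):
    n = sum(freqs)
    percents = [round(100 * f / n, 1) for f in freqs]
    cumulative, running = [], 0.0
    for f in freqs:
        running += 100 * f / n
        cumulative.append(round(running, 1))
    return percents, cumulative

org_type = [14, 24, 33]   # consultant, contractor, client
print(percent_columns(org_type))   # -> ([19.7, 33.8, 46.5], [19.7, 53.5, 100.0])
```

the same function applied to the education, experience, and project-size frequencies reproduces the remaining rows of table ii.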
the respondents were asked to indicate the level of adoption using a 5-point scale, where 1 = never, 2 = sometimes, 3 = moderately, 4 = usually, and 5 = always. the mean value, standard deviation, and ranking obtained from the statistical analysis of the collected data for these criteria are presented in table iii. it can be seen that quality, with a mean value of 3.90 and a standard deviation of 1.232, is reported as the most commonly adopted criterion and is ranked first by the participants. bid amount, with a mean value of 3.87 and a standard deviation of 1.120, is placed second, and technical capability is third with a mean value of 3.87 and a standard deviation of 1.170.

table iii. contractor selection criteria adoption level (frequencies for scale 1-5 / mean / s.d. / rank)
quality: 6, 3, 12, 21, 29 / 3.90 / 1.232 / 1
bid amount: 3, 6, 13, 24, 25 / 3.87 / 1.120 / 2
technical capability: 3, 8, 11, 22, 27 / 3.87 / 1.170 / 3
financial stability: 4, 8, 14, 18, 27 / 3.78 / 1.229 / 4
experience: 8, 13, 4, 13, 33 / 3.70 / 1.487 / 5
past performance: 3, 10, 14, 23, 21 / 3.69 / 1.166 / 6
equipment: 4, 12, 10, 27, 18 / 3.60 / 1.200 / 7
management capability: 3, 11, 17, 22, 18 / 3.57 / 1.154 / 8
reputation: 5, 9, 17, 21, 19 / 3.56 / 1.215 / 9
organizational skills: 5, 9, 17, 21, 19 / 3.56 / 1.215 / 10
project-specific requirements: 3, 13, 16, 23, 16 / 3.50 / 1.157 / 11
human resources/managerial resources: 6, 14, 14, 19, 18 / 3.40 / 1.293 / 12
current workload: 7, 11, 19, 18, 16 / 3.35 / 1.266 / 13
environmental aspects: 8, 12, 17, 18, 16 / 3.30 / 1.304 / 14
occupational health and safety: 7, 11, 24, 15, 14 / 3.25 / 1.227 / 15
business location: 7, 15, 16, 22, 11 / 3.21 / 1.229 / 16
past relationship: 5, 16, 21, 20, 9 / 3.16 / 1.133 / 17

the participants were also asked to mark the level of effectiveness of all these criteria.
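the mean and standard deviation columns of tables iii and iv can be reproduced from the frequency counts; a minimal sketch, using the sample (n - 1) standard deviation that spss reports:

```python
# reproducing the mean and standard deviation in table iii from the raw
# frequency counts (sample standard deviation, n - 1, as spss reports).

def likert_stats(freqs):
    # freqs[i] = number of respondents choosing rating i + 1 (scale 1..5)
    n = sum(freqs)
    mean = sum((i + 1) * f for i, f in enumerate(freqs)) / n
    ss = sum(f * ((i + 1) - mean) ** 2 for i, f in enumerate(freqs))
    sd = (ss / (n - 1)) ** 0.5
    return mean, sd

quality = [6, 3, 12, 21, 29]        # frequencies for the quality criterion
m, sd = likert_stats(quality)
print(round(m, 2), round(sd, 3))    # -> 3.9 1.232
```

applying the same function to any other row of the tables reproduces the published mean and standard deviation for that criterion.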
table iv. effectiveness level of contractor selection criteria (frequencies for scale 1-5 / mean / s.d. / rank)
quality: 4, 2, 17, 18, 30 / 3.95 / 1.139 / 1
experience: 6, 3, 11, 19, 32 / 3.95 / 1.247 / 2
technical capability: 4, 10, 11, 17, 29 / 3.80 / 1.271 / 3
financial stability: 5, 8, 9, 24, 25 / 3.78 / 1.241 / 4
equipment: 4, 7, 12, 26, 22 / 3.77 / 1.161 / 5
management capability: 2, 8, 17, 23, 21 / 3.74 / 1.091 / 6
bid amount: 3, 8, 15, 27, 18 / 3.69 / 1.103 / 7
reputation: 3, 7, 22, 20, 19 / 3.63 / 1.111 / 8
current workload: 4, 8, 15, 28, 16 / 3.61 / 1.125 / 9
past performance: 3, 11, 15, 24, 18 / 3.60 / 1.152 / 10
organizational skills: 2, 11, 18, 23, 17 / 3.59 / 1.102 / 11
project-specific requirements: 3, 8, 20, 28, 12 / 3.53 / 1.039 / 12
human resources/managerial resources: 5, 10, 16, 24, 16 / 3.50 / 1.193 / 13
past relationship: 4, 13, 25, 20, 9 / 3.23 / 1.075 / 14
business location: 4, 16, 20, 21, 10 / 3.23 / 1.127 / 15
occupational health and safety: 4, 17, 18, 22, 10 / 3.23 / 1.139 / 16
environmental aspects: 6, 15, 17, 23, 10 / 3.22 / 1.185 / 17

the respondents were asked to indicate the level of effectiveness using a 5-point scale, where 1 = not effective, 2 = less effective, 3 = moderately effective, 4 = very effective, and 5 = extremely effective. the analysis of the data was performed by calculating the mean value and standard deviation, and the results are presented in table iv. it can be seen that quality, with a mean value of 3.95 and a standard deviation of 1.139, is reported as the most effective criterion, with technical capability and financial stability following with mean values of 3.80 and 3.78 respectively. equipment availability and management capability were in the fourth and fifth places.

iv. conclusion

this paper focused on highlighting the contractor selection criteria adopted in construction works in pakistan. this aim was achieved through a survey of construction practitioners using a form based on 17 criteria identified in the literature.
the mean value and standard deviation analysis of the 71 received survey forms revealed that quality, bid amount, technical capability, financial stability, and experience are the criteria commonly adopted in construction projects in pakistan for selecting the most appropriate contractor. the study also highlighted that quality, technical capability, financial stability, equipment availability, and management capability are the most effective criteria for selecting a contractor for any construction project. the findings of this study will be very useful for consultants and clients in selecting a suitable contractor for construction works.

references
[1] a. p. c. chan, a. p. l. chan, "key performance indicators for measuring construction success", benchmarking: an international journal, vol. 11, no. 2, pp. 203-221, 2004
[2] a. p. c. chan, d. w. m. chan, y. h. chiang, b. s. tang, e. h. w. chan, k. s. ho, "exploring critical success factors for partnering in construction projects", journal of construction engineering and management, vol. 130, no. 2, pp. 188-198, 2004
[3] z. hatush, m. skitmore, "criteria for contractor selection", construction management & economics, vol. 15, no. 1, pp. 19-38, 1997
[4] a. enshassi, s. mohamed, z. modough, "contractors' selection criteria: opinions of palestinian construction professionals", international journal of construction management, vol. 13, no. 1, pp. 19-37, 2013
[5] s. t. ng, r. m. skitmore, "contractor selection criteria: a cost-benefit analysis", ieee transactions on engineering management, vol. 48, no. 1, pp. 96-106, 2001
[6] e. palaneeswaran, m. m. kumaraswamy, "contractor selection for design/build projects", journal of construction engineering and management, vol. 126, no. 5, pp. 331-339, 2000
[7] m. a. munir, m. a. zaheer, m. haider, m. z. rafique, m. a. rasool, m. s. amjad, "problems and barriers affecting total productive maintenance implementation", engineering, technology & applied science research, vol. 9, no. 5, pp.
4818-4823, 2019
[8] j. s. russell, m. j. skibniewski, "decision criteria in contractor prequalification", journal of management in engineering, vol. 4, no. 2, pp. 148-164, 1998
[9] g. d. holt, p. o. olomolaiye, f. c. harris, "evaluating prequalification criteria in contractor selection", building and environment, vol. 29, no. 4, pp. 437-448, 1994
[10] z. hatush, m. skitmore, "contractor selection using multicriteria utility theory: an additive model", building and environment, vol. 33, no. 2-3, pp. 105-115, 1998
[11] l. f. alarcon, c. mourgues, "performance modeling for contractor selection", journal of management in engineering, vol. 18, no. 2, pp. 52-60, 2002
[12] e. k. zavadskas, t. vilutiene, "a multiple criteria evaluation of multi-family apartment block's maintenance contractors: i. model for maintenance contractor evaluation and the determination of its selection criteria", building and environment, vol. 41, no. 5, pp. 621-632, 2006
[13] d. singh, r. l. tiong, "contractor selection criteria: investigation of opinions of singapore construction practitioners", journal of construction engineering and management, vol. 132, no. 9, pp. 998-1008, 2006
[14] f. waara, j. brochner, "price and nonprice criteria for contractor selection", journal of construction engineering and management, vol. 132, no. 8, pp. 797-804, 2006
[15] h. doloi, "analysis of pre-qualification criteria in contractor selection and their impacts on project success", construction management and economics, vol. 27, no. 12, pp. 1245-1263, 2009
[16] g. v. manideepak, a. bhatla, b. pradhan, "methodologies for contractor selection in construction industry", acsge-2009, bits pilani, india, october 25-27, 2009
[17] d. j. watt, b. kayis, k. willey, "the relative importance of tender evaluation and contractor selection criteria", international journal of project management, vol. 28, no. 1, pp. 51-60, 2010
[18] p. jaskowski, s. biruk, r.
bucon, "assessing contractor selection criteria weights with fuzzy ahp method application in group decision environment", automation in construction, vol. 19, no. 2, pp. 120-126, 2010
[19] z. morkunaite, v. podvezko, v. kutut, "selection criteria for evaluating contractors of cultural heritage objects", procedia engineering, vol. 208, pp. 90-97, 2017
[20] d. n. a. ayettey, h. danso, "contractor selection criteria in ghanaian construction industry: benefits and challenges", journal of building construction and planning research, vol. 6, pp. 278-297, 2018

engineering, technology & applied science research vol. 9, no. 5, 2019, 4735-4740 4735 www.etasr.com alghamdi: suitability of quaternary sediments of wadi arar, saudi arabia as construction materials

suitability of quaternary sediments of wadi arar, saudi arabia, as construction materials: an environmental radioactivity approach

mohammed a. m. alghamdi, faculty of earth science, king abdulaziz university, jeddah, saudi arabia, mmushrif@kau.edu.sa

abstract—the surficial quaternary deposits of wadi arar were radioactively evaluated for construction purposes. the concentrations of 226ra, 232th, and 40k were used to evaluate the radioactive suitability of wadi arar. the gamma-spectrometry technique with an hpge detector was used to measure the concentrations of ra, th, and k. the average specific activities of ra, th, and k were 22.92, 16.99, and 223.66 bq/kg respectively. the average value of the air-absorbed dose rate (d) was 30.47 ngy/h. the average values of the indoor and outdoor annual effective dose equivalent (aede) were 149.46 and 37.36 µsv/y respectively. the average value of the radium equivalent activity index (raeq) was 64.44 bq/kg. the maximum values of the external and internal hazard index (h) were 0.20 and 0.27 respectively.
the radioactivity concentration and hazard index values are within the acceptable global values and do not pose any significant radiological threat to the population. these results reflect the safety of wadi arar as a construction site and the potential to use the depositional sediments at the site as construction materials.

keywords-environmental; geology; construction; radiation; hpge

i. introduction

wadis, coasts, and deserts are possible construction sites and sources of construction materials. radioactive hazard is one of the factors that affect the selection of construction material sites. geological, geochemical, pathological, and ecological processes, along with seasonal changes, are some of the main processes that influence natural radioactivity [1]. radiation level concentrations differ depending on the rock type, soil, or sediment [2]. the discharge of gamma radiation from naturally occurring radioisotopes depends on land conditions and is globally characterized by various levels [3]. due to the presence of active faults and lineaments, some areas exhibit elevated concentrations of k, ra, and th in soil samples [4]. similar studies have reported varying results. gamma-ray spectroscopy was used to assess the average effective dose of 226ra, 232th, and 40k in punjab, india in [5]. assessments of the natural radionuclide contents of 238u, 232th, and 40k at tushki, egypt, using gamma-spectrometry analysis showed high background radiation, fortunately far from habitation and cultivated regions [6]. the naturally occurring radioactivity evaluated in soil samples at akwa ibom was less than the recommended safety limits [7]. all health hazard indices were well below their recommended limits for samples collected from locations at aden, south yemen [8]. the maximum and minimum 40k activity concentrations of sediments in water samples at abuja, nigeria were ranked in [9].
soil sediments in the udi and ezeagu areas of enugu state, nigeria have low concentrations of 40k, 226ra, and 232th [10]. authors in [11] found that the radiological effects of soil samples from geregu were below the standard limits and posed no potentially significant effects on public health. authors in [2] analyzed the radionuclide activity concentrations of 40k, 226ra, and 232th in sand deposits from the bharathapuzha river, india, and found that the concentrations were higher than the internationally recommended values. the specific natural radionuclide activities in sediment samples collected from the beni haroun dam, algeria had no hazardous indices compared with analog measurements from other locations [12]. in saudi arabia, the strategic road that connects arar and aljouf crosses wadi arar, while the urban expansion of arar extends in the southeastern direction towards wadi arar (figure 1). radon concentrations in this wadi showed a significant correlation between the rad7 and cr-39 detection techniques [14], as well as a significant correlation with coarse and fine sand grain sizes [15]. authors in [16-19] used gamma-spectroscopic analysis to measure the radioactivity of ra, th, and k and obtain hazard indices in alkhobar, jeddah, aqabah, and ad-dahna respectively. according to them, the hazard indices at ad-dahna were below the global average value, but the values of k at jeddah and aqabah were higher than the global average. considering 226ra, 232th, and 40k, this study was conducted on 22 km of the surficial deposits of wadi arar (figure 1). the assessment of radiation concentrations and hazard indices at this wadi will allow us to i) evaluate the site's validity for future urbanization and the potential use of its sediment deposits as construction materials, ii) support interpretations of subsurface structural geology, and iii) perform comparisons and interpretations of radiation hazards in wadi environments against the global environment.

corresponding author: mohammed a. m. alghamdi

fig. 1. study area profiles in wadi arar. (screenshot from google earth [13], © 2018 google, image © 2019 maxar technologies, image © 2019 cnes/airbus)

ii. study area

a. geological setting

the arar quadrangle is underlain by the late cretaceous aruma formation and paleogene and neogene sedimentary rocks [20]. sedimentary rock units of the devonian, silurian, and ordovician are also representative of the subsurface formations. wadi arar cuts these formations from the southwest to the northeast, and the area is filled with quaternary deposits, such as gravel, sand, and silt, which lie above the sedimentary rocks. from the structural geology perspective, the arar arch traverses the study area, trending from southwest to northeast. flood seasons have continually transported these deposits and soil sediments along the same trend as the arar arch.

b. location and sampling

geological and topographic maps of the northern border region [13, 20] were used to delineate the study area. the study area of wadi arar is located between 30°50'30''n and 30°56'30''n and between 40°50'30''e and 41°02'30''e. seven profiles (a, b, c, d, e, f, and g) were chosen at the peaks of water deposition or sediment erosion. the total distance from profile a to g was 22 km, with an average distance of 3 km between consecutive profiles (figure 1). five samples of 1 kg each from every station were sealed in plastic bags and stored for laboratory tests.

iii. hazard parameters

a. detector

samples with an average weight of 180 g were placed in sealed cylindrical 100 ml plastic containers and used to measure the naturally occurring radioactive materials (norm).
the containers were stored for one month to obtain secular equilibrium in each natural radioactive series, where the rate of daughter decay reaches equilibrium with that of the parents. activity concentration measurements were performed using a gamma-ray spectrometer equipped with a high-purity germanium (hpge) detector enclosed in a 10cm cylindrical multilayer graded shield (canberra 747e). the hpge detector has an efficiency of 60% and an energy resolution of 2.4kev at 1,332.5kev from a 60co gamma-ray. the detector was coupled through an amplifier to the computer using a multichannel analyzer. calibration of the energy, the efficiency of the detector, and the efficiency of the sample geometry was performed using the methods described in [21-23]. the 1,461kev γ-line was used to determine the 40k activity, while the 226ra and 232th activities were determined indirectly using the most intense noninterfering gamma lines (295 and 352kev for 214pb; 609, 1120, and 1764kev for 214bi; 583 and 2614kev for 208tl; 338, 911, and 968kev for 228ac). each sample was measured for 24h in order to obtain a sufficient amount of data [24].
b. air-absorbed dose rate (d)
the measured concentrations of 226ra, 232th, and 40k were converted to a total absorbed gamma dose rate in the air at one meter above the ground using the monte carlo method [3] based on the following equation:

$D\,(\mathrm{nGy/h}) = 0.462\,C_{Ra} + 0.621\,C_{Th} + 0.0417\,C_{K}$    (1)

where d is the air-absorbed dose rate and $C_{Ra}$, $C_{Th}$, and $C_{K}$ are the activities, in bq/kg, of ra, th, and k, respectively.
c. annual effective dose equivalent (aede)
the annual effective dose provides a measure of the total radiation risk to an individual organism. the conversion coefficient from the absorbed dose in the air to the effective dose and the indoor occupancy factor were used to estimate the annual effective dose, with a conversion factor of 0.7sv/gy [2].
assuming that people spend, on average, approximately 20% of their time outdoors and 80% indoors [3], the annual effective dose was calculated with the following equations:

$\mathrm{AEDE_{indoor}}\,(\mathrm{mSv/y}) = D\,(\mathrm{nGy/h}) \times 8760\,\mathrm{h} \times 0.8 \times 0.7\,\mathrm{Sv/Gy} \times 10^{-6}$    (2)
$\mathrm{AEDE_{outdoor}}\,(\mathrm{mSv/y}) = D\,(\mathrm{nGy/h}) \times 8760\,\mathrm{h} \times 0.2 \times 0.7\,\mathrm{Sv/Gy} \times 10^{-6}$    (3)

d. γ-ray radiation hazard index (raeq)
the natural radiation in building materials is not uniform and is typically determined by the concentrations of 226ra, 232th, and 40k [2]. uniformity with respect to radiation exposure is expressed in terms of the radium equivalent activity raeq, in bq/kg, which represents the specific activities of a material containing different quantities of 226ra, 232th, and 40k by a single quantity. it is a commonly used hazard index, which is calculated using the following equation [2]:

$Ra_{eq} = C_{Ra} + 1.43\,C_{Th} + 0.077\,C_{K}$    (4)

where $C_{Ra}$, $C_{Th}$, and $C_{K}$ are the activity concentrations of 226ra, 232th, and 40k, in bq/kg, respectively. it has been assumed that 370bq/kg of 226ra, 259bq/kg of 232th, or 4810bq/kg of 40k produce the same gamma dose rate [2].
e. hazard index (hex, hin)
in their research on sandy soil, authors in [24] obtained an external hazard index from the raeq expression in (4) by suggesting that the maximum allowed value (equal to unity) corresponds to the upper limit of raeq (370bq/kg). this index value must be less than unity to maintain an insignificant level of radiation hazard, i.e. the radiation exposure due to construction material radioactivity is limited to 1.0msv/y.
the external hazard index can be defined with the following equation:

$H_{ex} = \frac{C_{Ra}}{370} + \frac{C_{Th}}{259} + \frac{C_{K}}{4810} \leq 1$    (5)

where $C_{Ra}$, $C_{Th}$, and $C_{K}$ are the specific activities of ra, th, and k in bq/kg respectively, while 370, 259, and 4810 are the activities, in bq/kg, of ra, th, and k that produce the same gamma dose rate. in addition to the external hazard index, radon and its short-lived daughter products are hazardous to respiratory organs [3]. internal exposure to radon and its daughter products can be quantified with the internal hazard index hin [3], which is given by the following equation:

$H_{in} = \frac{C_{Ra}}{185} + \frac{C_{Th}}{259} + \frac{C_{K}}{4810} \leq 1$    (6)

where 185, 259, and 4810 are the activities of ra, th, and k respectively that produce the same gamma dose rate. the value of the internal hazard index hin must be less than unity to maintain a negligible level of radiation hazard [25].
iv. results
the concentrations of the naturally radioactive elements (k, ra, and th) and the hazard indices from soil sediments in the surficial layer at different locations of wadi arar are listed in table i and plotted in figures 2–7.

table i. natural radionuclide activity levels and radiation risk indices

location | 226ra (bq/kg) | 232th (bq/kg) | 40k (bq/kg) | d (ngy/h) | aede indoors (µsv/y) | aede outdoors (µsv/y) | raeq (bq/kg) | hex | hin
a | 19.80 | 8.56 | 132.89 | 20.00 | 98.14 | 24.53 | 42.27 | 0.11 | 0.17
b | 21.85 | 16.59 | 260.00 | 31.24 | 153.25 | 38.31 | 65.59 | 0.18 | 0.24
c | 20.79 | 18.89 | 305.96 | 34.09 | 167.25 | 41.81 | 71.36 | 0.19 | 0.25
d | 22.59 | 17.16 | 169.20 | 28.15 | 138.09 | 34.52 | 60.16 | 0.16 | 0.22
e | 26.20 | 20.13 | 241.95 | 34.69 | 170.20 | 42.55 | 73.62 | 0.20 | 0.27
f | 25.93 | 19.13 | 265.30 | 34.92 | 171.32 | 42.83 | 73.71 | 0.20 | 0.27
g | 23.31 | 18.45 | 190.30 | 30.16 | 147.96 | 36.99 | 64.35 | 0.17 | 0.24
average | 22.92 | 16.99 | 223.66 | 30.47 | 149.46 | 37.36 | 64.44 | 0.17 | 0.24
max. allowable value [2] | 35 | 30 | 400 | 57 | 450 | 70 | 370 | ≤1 | ≤1

the average concentrations of ra, th, and k at wadi arar are 22.9, 17.0, and 223.7bq/kg respectively, while the value ranges were 19.8–26.2 for ra, 8.56–20.13 for th, and 132.89–305.96 for k. it was observed that k>ra>th, which is consistent with their global order, while the average concentrations of ra, th, and k were lower than the average internationally recommended concentrations of 35, 30, and 400bq/kg respectively [3]. th and k were about 0.6 times the average internationally recommended values, whereas ra was about 0.7 times. depending on the similarity between the three radioactive isotopes at each location, the data were plotted using a 3-d method to classify wadi arar from a radioactivity cluster perspective (figure 3). wadi arar was classified into three clusters: the first cluster included profiles a, d, and g, the second included profiles b and c, and the third included profiles e and f.
fig. 2. ra, th, and k concentrations at wadi arar, which show a generally increasing trend in k concentration (red arrows)
fig. 3. a 3-d plot of the ra, th, and k concentrations (bq/kg) for the different study area profiles
fig. 4. the absorbed dose in air, which shows a generally increasing trend in concentration (red arrow)
fig. 5. the annual indoor and outdoor effective dose. doses are characterized by a generally increasing trend (red arrow)
fig. 6. the radium equivalent activity raeq showing a generally increasing trend (red arrow)
fig. 7. internal and external hazard indices characterized by a generally increasing trend (red arrow)
assuming that the naturally occurring radionuclides have a uniform distribution [3], the absorbed dose rates (d) were calculated from the gamma radiation in the air at 1m above the ground.
the rates varied from 20 to 34.92ngy/h, with an average value of 30.47ngy/h (table i). figure 4 shows two peaks of the d values at stations c and f. the d values increase from the northeast (location a) to the southwest (location g) (red arrow). based on these results, wadi arar can be classified into two absorbed dose zones, i.e. a–d and d–g. the calculated values for the indoor aede were between 98.14 and 171.32µsv/y, with an average of 149.46µsv/y. the annual effective outdoor dose rate ranges from 24.53 to 42.83µsv/y, with an average of 37.36µsv/y (table i). figure 5 shows the changes in both the indoor and outdoor annual effective dose, which are characterized by a general trend (red arrow) that increases from location a to g with two peaks at c and f. table i also summarizes the raeq values estimated for the study area. figure 6 shows the changes that occur at each location, which are characterized by a general trend that increases from a to g with two peaks at c and f. the calculated values for the external and internal hazard indices (hex and hin) ranged from 0.11 to 0.20 and 0.17 to 0.27 respectively. figure 7 illustrates those changes, which are characterized by a general hazard index trend (red arrow) that increases in the southwest direction (towards location g). based on the results for both the isotope concentrations and the hazard indices, two radioactive zones can be assessed. the values increase from the northeast to the southwest and are lower than the average global values.
v. discussion
it was observed that the absorbed dose values were less than the international average for a gamma radiation dose level from terrestrial sources [3] and less than the average values reported by numerous countries such as the united states, switzerland, spain, greece, egypt, iran, india, china, and korea in 2005 [4]. the results for the indoor and outdoor aede were within the average global limits, which are 450 and 70µsv/y, respectively.
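the dose and hazard calculations in (1)–(6) can be checked against table i. the following python sketch (added here for illustration; it is not part of the original study, and uses only the conversion factors quoted in the text) reproduces the profile a row from its measured concentrations:

```python
# sketch of (1)-(6), assuming the conversion factors quoted in the text;
# input: profile a activity concentrations from table i, in bq/kg
c_ra, c_th, c_k = 19.80, 8.56, 132.89

# (1) air-absorbed dose rate at 1 m above ground, in ngy/h
d = 0.462 * c_ra + 0.621 * c_th + 0.0417 * c_k

# (2), (3) annual effective dose equivalents, converted here to µsv/y
# (8760 h/y, occupancy factors 0.8 indoor / 0.2 outdoor, 0.7 sv/gy)
aede_in = d * 8760 * 0.8 * 0.7 * 1e-3
aede_out = d * 8760 * 0.2 * 0.7 * 1e-3

# (4) radium equivalent activity, in bq/kg
ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k

# (5), (6) external and internal hazard indices (must stay below unity)
h_ex = c_ra / 370 + c_th / 259 + c_k / 4810
h_in = c_ra / 185 + c_th / 259 + c_k / 4810

print(round(d, 2), round(aede_in, 2), round(aede_out, 2),
      round(ra_eq, 2), round(h_ex, 2), round(h_in, 2))
# → 20.0 98.14 24.53 42.27 0.11 0.17
```

running the same computation over the other profiles reproduces the remaining rows of table i within rounding.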
therefore, the sampled sediments can be safely used as construction materials. the estimated average raeq value was lower than the maximum permissible value of 370bq/kg suggested for building materials concerning radiation hazards [2]. the hex and hin values were lower than unity, so the soil samples at wadi arar are considered safe and can be used as construction materials without posing any significant radiological threat to the population, according to [26]. the concentrations of radioactive elements generally increase from the northeast to the southwest. in comparison with 232th and 226ra, 40k has the highest radioactive concentrations throughout wadi arar, including two concentration peaks at profiles c and f (figure 2), indicative of the availability of potash feldspar minerals. fluctuations in the radioactive concentrations may reflect the availability of more rock and mineral resources in the study area, such as carbonate and silicate minerals from the sedimentary rocks of the badanh and zallum formations, i.e. limestone, sandstone, and shale [20, 27]. based on a similarity analysis, figure 3 shows profiles a, d, and g as one cluster, which has reduced 40k concentrations, whereas profiles e and f represent another cluster with increased 226ra concentrations. profiles b and c have the highest 40k concentrations. the availability of more sha’ibs [28], or tributaries, that pass through profiles a, d, and g and possibly transport new sediment to the wadi deposits may reduce the isotope radiation concentrations. additional sediments modify the deposit composition by adding new minerals. on the other hand, the proximity of geological structures, such as the arar arch folds or graben faults, possibly changes the radioactive element concentrations (figure 8). for example, weathering and erosion can affect the fold’s hinge zone and cover it with sediments. profiles a, d, and g are located on identical rock types, where profile a is located on the first fold limb, d on the second fold limb, and g on the third fold limb. these locations may create a situation that yields identical radioactive concentrations at a small scale. background radiation is present everywhere, so spectrum analysis should be conducted using lead absorbers around the instruments. furthermore, high temperature can create electrical noise and damage the detector, so the detector must be cooled. from a radiation perspective, local authorities have to take the conclusions of this paper into consideration along with the construction codes in wadi arar. the results of this study on the radioactivity in wadi arar can be used for global comparison and mapping. this requires essential communication between the construction sector and the population regarding the radiation issues in wadi arar.
fig. 8. i: a map view of the study area, ii: a virtual fold cross-section, and iii: a virtual fault cross-section
vi. conclusion
based on the results acquired from this study on the radioactivity of 226ra, 232th, and 40k in wadi arar, the following can be concluded:
• the average radioactivity concentrations of 226ra, 232th, and 40k of the soil surface deposits in wadi arar are 22.92, 16.99, and 223.66bq/kg respectively, while their ranges are 19.8–26.2, 8.56–20.13, and 132.89–305.96bq/kg, respectively.
• radioactivity tends to increase from the northeast to the southwest.
• based on the fluctuations in radioactivity, wadi arar deposits can be divided into two radiological zones: from locations a to d and from d to g, with maximum 40k concentrations of 306 and 265bq/kg, respectively.
• from a radiological perspective and regardless of other geotechnical properties, the soil surface deposits in wadi arar can be used as construction materials without posing any significant radiological threat.
• potassium has a higher concentration compared with the two other radioactive elements, which indicates a high occurrence rate of potash feldspar minerals, such as orthoclase, in the wadi arar deposits.
• fluctuations in the radioactive concentrations in wadi arar may reflect the occurrence of geological structures, such as faults and folds, or changes in lithology.
acknowledgement
this work was supported by the deanship of scientific research, northern border university, saudi arabia, under grant no 45/2001.
references
[1] l. guagliardi, n. rovella, c. apollaro, a. bloise, r. d. rosa, f. scarciglia, g. buttafuoco, “modelling seasonal variations of natural radioactivity in soils: a case study in southern italy”, journal of earth system science, vol. 125, no. 8, pp. 1569–1578, 2016 [2] n. krishnamurthy, s. mullainathan, r. mehra, m. a. e. chaparro, m. a. e. chaparro, “radiation impact assessment of naturally occurring radionuclides and magnetic mineral studies of bharathapuzha river sediments, south india”, environmental earth sciences, vol. 71, no. 8, pp. 3593–3604, 2014 [3] unscear, sources, effects, and risks of ionizing radiation, united nations scientific committee on the effects of atomic radiation, 2000 [4] s. singh, a. rani, r. k. mahajan, “226ra, 232th, and 40k analysis in soil samples from some areas of punjab and himachal pradesh, india using gamma-ray spectrometry”, radiation measurements, vol. 39, no. 4, pp. 431–439, 2005 [5] r. mehra, s. singh, k. singh, r. sonkawade, “226ra, 232th, and 40k analysis in soil samples from some areas of malwa region, punjab, india using gamma ray spectrometry”, environmental monitoring and assessment, vol. 134, no. 1-3, pp. 333, 2007 [6] f. ahmed, h. a. shousha, h. m.
diab, “comparative study of natural radioactivity concentrations in soil samples from the newly developed tushki and giza regions in egypt”, radiation effects & defects in solids, vol. 161, no. 4, pp. 257–266, 2006 [7] m. c. bede, a. a. essiett, e. inam, “an assessment of absorbed dose and radiation hazard index from natural radioactivity in soils from akwa ibom state, nigeria”, international journal of science and technology, vol. 4, no. 3, pp. 80–92, 2015 [8] s. harb, a. h. e. kamel, a. m. zahran, a. abbady, f. ahmed, “assessment of natural radioactivity in soil and water samples from aden governorate south of yemen region”, international journal of recent research in physics and chemical sciences, vol. 1, pp. 1–7, 2014 [9] a. m. umar, m. y. onimisi, s. a. jonah, “baseline measurement of natural radioactivity in soil, vegetation and water in the industrial district of the federal capital territory (fct) abuja, nigeria”, british journal of applied science & technology, vol. 2, no. 3, pp. 266–274, 2012 [10] g. o. avwiri, j. c. osimobi, e. o. agbalagba, “evaluation of radiation hazard indices and excess lifetime cancer risk due to natural radioactivity in soil profile of udi and ezeagu local government areas of enugu state, nigeria”, comprehensive journal of environmental and earth sciences, vol. 1, pp. 1–10, 2012 [11] m. hassan, j. s. karniliyus, j. m. egieya, “radioassay of geregu soil north-central nigeria”, academic research international, vol. 5, no. 4, pp. 69-78, 2014 [12] g. bouhila, f. benrachi, m. ramdhane, “evaluation of natural radioactivity and assessment of radiation hazard indices in some sediment samples from streams of east algeria”, international journal of nuclear and radiation science and technology, vol. 1, pp.
7–11, 2016 [13] google earth v 7.3.2.5776, “wadi arar, saudi arabia”, eye alt 19.24 km, landsat, copernicus, http://www.earth.google.com, 2016 [14] m. a. m. alghamdi, h. m. diab, “measurement of radon content in silty sand soil using rad7 and cr-39 techniques at wadi arar, saudi arabia: comparison study”, international journal of management and applied science, vol. 2, no. 5, pp. 2394–7926, 2016 [15] m. a. m. alghamdi, “relationship between grain size distribution and radon content in surficial sediments of wadi arar, saudi arabia”, engineering, technology & applied science research, vol. 8, no. 1, pp. 2447-2451, 2018 [16] f. alshahri, “radioactivity of 226ra, 232th, 40k, and 137cs in beach sand and sediment near to desalination plant in eastern saudi arabia: assessment of radiological impacts”, journal of king saud university– science, vol. 29, no. 2, pp. 174–181, 2017 [17] s. h. q. hamidalddin, “measurements of the natural radioactivity along red sea coast (south beach of jeddah saudi arabia)”, life science journal, vol. 10, no. 1, pp. 121–128, 2013 [18] h. a. a. trabulsy, a. e. m. khater, f. i. habbani, “radioactivity levels and radiological hazard indices at the saudi coast line of the gulf of aqaba”, radiation physics and chemistry, vol. 80, no. 3, pp. 343–348, 2011 [19] a. s. alaamer, “measurement of natural radioactivity in sand samples collected from ad-dahna desert in saudi arabia”, world journal of nuclear science and technology, vol. 2, no. 4, pp. 187–191, 2012 [20] a. f. a. khattabi, s. m. dini, c. a. wallace, a. s. banakhar, m. h. a. kaff, a. m. a. zahrani, geological map of the arar quadrangle, saudi geological survey, 2010 [21] t. vidmar, “efftran-a monte carlo efficiency transfer code for gammaray spectrometry”, nuclear instruments and methods in physics research section a: accelerators, spectrometers, detectors and associated equipment, vol. 550, no. 3, pp. 603–608, 2005 [22] t. vidmar, n. celik, n. c. diaz, a. dlabac, i. o. b. ewa, j. a. c. 
gonzalez, m. hult, s. jovanovic, m. c. lepy, n. mihaljevic, o. sima, f. tzika, m. j. vargas, t. vasilopoulou, g. vidmar, “testing efficiency transfer codes for equivalence”, applied radiation and isotopes, vol. 68, no. 2, pp. 355–359, 2010 [23] t. vidmar, g. kanisch, g. vidmar, “calculation of true coincidence summing corrections for extended sources with efftran”, applied radiation and isotopes, vol. 69, no. 6, pp. 908–911, 2011 [24] r. veiga, n. sanches, r. m. anjos, k. macario, j. bastos, m. iguatemy, j. g. aguiar, a. m. a. santos, b. mosquera, c. carvalho, m. b. filho, n. k. umisedo, “measurement of natural radioactivity in brazilian beach sands”, radiation measurements, vol. 41, no. 2, pp. 189–196, 2006 [25] h. m. diab, s. a. nouh, a. hamdy, s. a. e. fiki, “evaluation of natural radioactivity in a cultivated area around a fertilizer factory”, journal of nuclear and radiation physics, vol. 3, no. 1, pp. 53–62, 2008 [26] european commission, radiation protection 112, radiological protection principles concerning the natural radioactivity of building materials, european commission, office for official publications of the european communities, 1999 [27] m. a. m. alghamdi, “grain size distribution and mineral composition of surficial quaternary sediments of wadi arar, saudi arabia”, international journal of advances in science engineering and technology, vol. 6, no. 1, pp. 40–43, 2018 [28] m. a. m. alghamdi, a. a. e. hegazy, “physical properties of soil sediment in wadi arar, kingdom of saudi arabia”, international journal of civil engineering, vol. 2, pp. 1-8, 2013 engineering, technology & applied science research vol. 9, no.
6, 2019, 4912-4916 4912 www.etasr.com jamali et al: analysis of co2, co, no, no2, and pm particulates of a diesel engine exhaust
analysis of co2, co, no, no2, and pm particulates of a diesel engine exhaust
qadir bakhsh jamali, department of mechanical engineering, quest, nawabshah, pakistan, qjamali@quest.edu.pk
muhammad tarique bhatti, department of mechanical engineering, quest campus larkana, pakistan, trqbhatti@quest.edu.pk
qamar abbas qazi, department of mechanical engineering, quest, nawabshah, pakistan, qaziqamarabbas@yahoo.com
bakar hussain kaurejo, department of mechanical engineering, indus university, karachi, pakistan, baqar.hussain@indus.edu.pk
ishfaque ali qazi, department of mechanical engineering, quest campus larkana, pakistan, ishfaquealiqazi@gmail.com
shafquat hussain solangi, department of mechanical engineering, quest, nawabshah, pakistan, shafquat13me31@gmail.com
abdul sattar jamali, department of mechanical engineering, quest, nawabshah, pakistan, jamali_sattar@quest.edu.pk
abstract—exhaust emissions of a diesel engine are considered to be a substantial source of environmental pollution. diesel engines are mainly used in vehicles and power generation. the usage of diesel engines is unavoidable as they give more power and performance, but at the same time, higher usage of diesel engines leads to increased air pollution, sound pollution, and emissions to the environment. therefore, various attempts have been made to control the harmful emissions of engines, and different devices, such as catalytic converters, have been made to overcome emission problems and purify the harmful gases. to this end, a new system was designed to contribute to controlling the air pollution of engines. the system is also known as an aqua silencer, and although its design is somewhat different, it can still be used as a silencer.
the newly designed emission controller was installed in a test-bed diesel engine, and a total of twenty experiments were conducted with and without the new emission controller at constant speed and at constant load. during these experiments, the exhaust gases were analyzed with flue gas analyzers measuring co2, co, no2, no, and pm. the study concluded that the contaminants of the diesel engine exhaust gases were controlled by the developed emission controller.
keywords-emission control; diesel engine; aqua silencer; carbon dioxide; nitrogen oxide
i. introduction
internal combustion (ic) engines have become highly imperative in transportation and industry. diesel engines are the most commonly preferred engines, especially in heavy-duty vehicle applications. besides other sources, these engines are counted among the largest environmental pollution contributors due to their exhaust emissions (figure 1). the use of ti nanotubes in an aqua silencer along with charcoal can absorb toxic gases [1]. the performance and emissions using alcohol fumigation in the presence of hot exhaust gas recirculation (egr) were examined in [2]. the egr results in great reductions in nox emissions, amounting to 30–40% at higher loads [2]. in [3], an aqua silencer was used for the reduction of toxic gases and noise, providing an effective way to reduce emission gases from the engine exhaust. moreover, aqueous ammonia solution can be used as an absorber for the reduction of co2, so2, and nox from the exhaust gases of ic engines. the aqueous ammonia process can simultaneously remove co2, so2, and nox, and also hydrocarbons that may be present in the exhaust gas. a study was conducted on a single cylinder, four-stroke, direct injection diesel engine at a constant speed with a fuel injection pressure of 200bar. tests were conducted using commercial diesel fuel and diesel fuel with 10% and 20% water by volume.
it was found that water emulsification has the potential to improve brake thermal efficiency and brake specific fuel consumption [4]. in order to check the system's performance, an aqua silencer was directly integrated in the exhaust of the engine and its effects were studied in [6]. the use of a new catalyst converter to replace the noble metals platinum (pt), palladium (pd), and rhodium (rh) has been studied. materials such as zeolite, nickel oxide, and metal oxide have been found to effectively reduce emissions. besides this, ultrasonic treatment combined with electroplating, the citrate method, and plasma electrolytic oxidation (peo) has been carried out to produce an effective catalyst for reducing exhaust emissions [7]. the aqua silencer reduces the black smoke, nox emissions, and sound of the exhaust gases. after implementing the aqua silencer, the engine tended to have nox emissions and sound completely eliminated, whereas carbon monoxide (co) emissions were reduced up to 53%, unburned hydrocarbons (ubhc) up to 41%, and co2 emissions up to 44% in comparison with the existing system [8].
corresponding author: qadir bakhsh jamali
fig. 1. composition of diesel exhaust gases [5]
the aqua silencer is thermally effective and technically feasible for reducing engine noise and toxic emissions, but certain improvements are in order so that it fits the application with the engine exhaust unit [9]. in [10], the toxic contents in petrol engine exhausts at various running speeds were studied by changing the limestone content in the rtp silencer. reductions of hc and co emissions were observed by changing the limestone content from 25 to 150 grams. in [11], co emissions were reduced up to 53% with the use of an aqua silencer, ubhc by 41%, and co2 emissions by 44%. catalytic converters are used to reduce the amounts of nitrogen oxides, co, and ubhc in automotive emissions. during vehicle use, the converter is exposed to heat, which causes the metal particles to agglomerate and their overall surface area to decrease. as a result, catalyst activity deteriorates [12]. the effect of various engine parameters on the control of these emissions is reported for different versions of the engine in [13]. authors in [14] reported that the ubhc emissions in a twin spark engine are reduced up to 12% compared to a single spark engine, while the co emissions in the twin spark engine are reduced to a great extent. in [15], a prototype emission control system was designed and tested on a gasoline-fueled vehicle. federal test procedure (ftp) emission results showed a 35% reduction in hydrocarbons emitted during the cold transient segment due to adsorption.
ii. materials and methods
the design of the new emission controller was carried out based on the engine parameters and material requirements. in the first step, a cad model was designed in creo parametric 3.0 with all suitable dimensions. the section view of the emission control unit is shown in figures 2 and 3. the emission controller was installed, and the experiments were carried out at low speeds of the diesel engine. the research and test bed model was the dwe-6/10-js-dv (figure 4), which is available at the thermodynamics laboratory of quaid-e-awam university of engineering, science & technology. the detailed specifications of this test bed diesel engine are given in table i. the gases obtained from the diesel engine exhaust were analyzed by a flue gas analyzer (testo 350xl) to determine whether they were purified and to estimate the extent of purification. to measure the total suspended particles (tsp) in the flue gases, a device named aerocet 531s was used to count the pm.
table i.
specifications of diesel engine test unit

number of cylinders: 01
bore: 80mm
stroke (piston displacement): 95mm (477cc)
compression ratio: 23:1
starting method: manual (cell starter upon request)
output/rated speed: 8.5ps/2200rpm (max)
cooling system: water cooled
type: horizontal

fig. 2. section view of the emission control unit
fig. 3. detailed drawing of the emission control unit (dimensions are given in inches)
figure 4 shows the complete unit of the aqua silencer. when the exhaust gases of the diesel engine enter the device, they pass through the perforated tube. the holes in the perforated tube are designed in a way that breaks the large gas masses into smaller gas bubbles. it is a closed-end tube, and all the gases must pass through the holes. the perforated tube is completely immersed in a lime water solution, where the gases chemically react and form precipitates. around the circumference of the perforated tube, there is a double layer of activated charcoal. the charcoal is highly porous and possesses extra free valences, so it has the ability to absorb flue gases.
fig. 4. assembly of the system
fig. 5. hand held pm and flue gas analyzer
iii. results
measurements were taken in order to compare the emissions of the test bed diesel engine with and without the installation of the newly designed system at different loads and different speeds (rpm). a total of twenty experiments were performed at constant load and constant speed to observe the behavior of the emissions with respect to these parameters.
a. performance evaluation
the performance evaluation experiment data sheets for constant load and constant speed are illustrated in tables ii and iii.
table ii. engine emissions at constant speed

system | speed | torque | co2 (ppm) | co (ppm) | no2 (ppm) | no (ppm) | pm (mg/l)
without | 950 | 0.2 | 990 | 283 | 9.0 | 52 | 0.09
without | 950 | 0.4 | 1505 | 298 | 8.4 | 54 | 0.088
without | 950 | 0.6 | 4020 | 240 | 6.4 | 58 | 0.103
without | 950 | 0.8 | 7605 | 330 | 4.1 | 56 | 0.144
without | 950 | 1.0 | 9700 | 382 | 4.3 | 50 | 0.212
with | 950 | 0.2 | 690 | 325 | 3.0 | 46 | 0.101
with | 950 | 0.4 | 1150 | 320 | 6.1 | 48 | 0.108
with | 950 | 0.6 | 3410 | 230 | 5.4 | 55 | 0.105
with | 950 | 0.8 | 4024 | 265 | 2.6 | 48 | 0.107
with | 950 | 1.0 | 5500 | 308 | 1.2 | 44 | 0.148

b. co2 emissions
carbon dioxide (co2) is a colorless, non-combustible gas released when carbon-containing fuels burn completely. consequently, co2 is a significant parameter in engine exhaust emissions. the co2 emissions at constant speed and at constant load of the diesel engine, with and without the emission control unit, are shown in figures 6 and 7 respectively. these graphs show that the co2 obtained with the newly developed emission controller is less than without it.
table iii. engine emissions at constant torque

system | torque | speed | co2 (ppm) | co (ppm) | no2 (ppm) | no (ppm) | pm (mg/l)
without | 0.4 | 950 | 1508 | 298 | 8.5 | 53 | 0.088
without | 0.4 | 1050 | 3900 | 200 | 10.2 | 48 | 0.112
without | 0.4 | 1150 | 6690 | 220 | 13.6 | 60 | 0.086
without | 0.4 | 1250 | 8660 | 290 | 16.5 | 77 | 0.094
without | 0.4 | 1350 | 9800 | 188 | 17.8 | 80 | 0.142
with | 0.4 | 950 | 1206 | 242 | 6.3 | 48 | 0.086
with | 0.4 | 1050 | 3875 | 296 | 5.6 | 42 | 0.094
with | 0.4 | 1150 | 4300 | 180 | 12.5 | 51 | 0.096
with | 0.4 | 1250 | 5560 | 290 | 15 | 63 | 0.103
with | 0.4 | 1350 | 6078 | 205 | 17.2 | 77 | 0.118

fig. 6. comparative results of co2 emissions at constant speed
fig. 7. comparative results of co2 emissions at constant load
c. co emissions
carbon monoxide (co) is a colorless, odorless, and toxic gas produced by the incomplete burning of carbon. co emission results from the incomplete oxidation of the carbon and hydrogen in the fuel. the co emissions at constant speed and load, with and without the emission control unit, are shown in figures 8 and 9. it can be seen that the co emission is higher in the absence of the emission controller.
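as an illustrative reading of the constant-speed measurements (a sketch added here, not part of the original experiments; the values are taken from table ii at 950rpm and maximum load, torque 1.0), the relative reduction achieved by the controller for each pollutant can be computed directly:

```python
# hedged sketch: percent reduction of each pollutant with the emission
# controller, at constant speed (950 rpm) and torque 1.0 (table ii values)
without = {"co2": 9700, "co": 382, "no2": 4.3, "no": 50, "pm": 0.212}
with_unit = {"co2": 5500, "co": 308, "no2": 1.2, "no": 44, "pm": 0.148}

# reduction (%) = (without - with) / without * 100
reduction = {k: round((without[k] - with_unit[k]) / without[k] * 100, 1)
             for k in without}
print(reduction)
# → {'co2': 43.3, 'co': 19.4, 'no2': 72.1, 'no': 12.0, 'pm': 30.2}
```

the roughly 43% co2 reduction at full load is of the same order as the aqua silencer reductions reported in [8, 11].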
engineering, technology & applied science research vol. 9, no. 6, 2019, 4912-4916 www.etasr.com jamali et al: analysis of co2, co, no, no2, and pm particulates of a diesel engine exhaust

fig. 8. comparative results of co emissions at constant speed
fig. 9. comparative results of co emissions at constant load

d. nox emissions

fig. 10. comparative results of no2 emissions at constant speed
fig. 11. comparative results of no2 emissions at constant load

the emissions of nitrogen oxides (nox) are affected by the emission controller due to the use of limewater: the hydroxides in the water absorb the noxious gases, so the emitted nox is lower than the raw engine-out emissions. the no2 emissions at constant speed and at constant load of the diesel engine, with and without the emission control unit, are shown in figures 10 and 11. the no emissions at constant speed and at constant load, with and without the emission control unit, are shown in figures 12 and 13 respectively. the nox emissions decreased when the newly developed emission controller was used.

fig. 12. comparative results of no emissions at constant speed
fig. 13. comparative results of no emissions at constant load

e. particulate matter (pm) emissions

particulate matter (pm) is an air-suspended mixture of solid and liquid particles, characterized by size, shape, surface area, number, chemical composition, solubility, and source. diesel fuel is known to be a major source of pm. the pm emissions at constant speed and at constant load of the diesel engine, with and without the emission control unit, are shown in figures 14 and 15.

fig. 14. comparative results of pm emissions at constant speed
fig. 15. comparative results of pm emissions at constant load

iv. comparative study

a comparative study was conducted between the developed system (with and without the implementation of the control unit at the exhaust of the diesel engine) and the standardized values of diesel engine exhaust. the comparison is illustrated in table iv.

table iv. comparative study results

pollutant | standard | with the system | without the system
co (ppm) | 100 | 325 | 382
co2 (ppm) | 5000 | 5500 | 9700
no (ppm) | 25 | 55 | 58
no2 (ppm) | 5 | 6.1 | 9
pm (mg/l) | 0.02 | 0.148 | 0.212

v. conclusion

in this study, experiments were conducted on a diesel engine with an emission control system (commonly known as an aqua silencer) to investigate its impact on the engine's emission characteristics. the study concluded that this system can be used along with or instead of a catalytic converter. with this unit the emissions at the tailpipe of an exhaust system can be easily lowered towards the specified levels. with the use of lime water in the silencer, the toxic levels of nox gases are decreased along with the temperature of the final exhaust gases, which also has a positive effect on the environment. the contamination of the water is found to be negligible, because the alkalinity of the lime water absorbs the noxious products of combustion. co is not highly controlled due to its small presence in the emissions (0.20% by volume), and it poses a lower health hazard than in gasoline engines. the double layer of activated charcoal helped in adsorbing several harmful constituents. with the perforated tubing there is no excessive back-pressure formation, as high-mass bubbles are converted into low-mass bubbles, and the noise is slightly reduced by the tubing and the water. fuel consumption remains the same after the implementation of this system. the system is also cheap to build and maintain compared to other emission control methods.

engineering, technology & applied science research vol. 9, no.
1, 2019, 3822-3825 www.etasr.com reddy & chaganti: investigating optimum sio2 nanolubrication during turning of aisi 420 ss

investigating optimum sio2 nanolubrication during turning of aisi 420 ss

nune madan mohan reddy, department of mechanical engineering, bits pilani hyderabad campus, telangana, india, madan008phd@gmail.com
phaneendra kiran chaganti, department of mechanical engineering, bits pilani hyderabad campus, telangana, india, phaneendrakiran@yahoo.co.in

abstract—aisi 420 martensitic stainless steel is used for making gas and steam turbine blades, steel balls and medical instruments due to its anti-corrosive properties. turning of aisi 420 ss is a suitable process for manufacturing parts with a high surface finish. in this work, an effort has been made to investigate the cooling and lubricating performance of sio2 (silicon dioxide) nanoparticles at different weight concentrations (0.1g, 0.5g and 1g) mixed in a newly developed synthetic base fluid. the performance of the optimum sio2-based cutting fluid is evaluated in the turning process through output responses such as surface finish and cutting temperature. the taguchi technique was used with a standard l9(3**4) orthogonal array. the responses, surface roughness and cutting temperature, were analyzed using s/n (signal-to-noise) ratios and anova (analysis of variance). this analysis identifies the significant input parameter combinations that give minimum surface roughness and temperature.

keywords-taguchi; sio2 nanoparticles; anova; orthogonal array; cutting fluid

i. introduction
water was first proposed as a cutting fluid to reduce temperature and to enhance surface finish and tool life in metal cutting [1]. since then, many cutting fluids have been introduced and used by the machining industry. recently, industries realized the cost, environmental and health issues in the use of these cutting fluids [2-4]. the industry expects a cutting fluid with minimal cost, higher output and best quality [5].
a few attempts were made in the past to develop such a cutting fluid by adding nanoparticles to it; the result was named a nanolubricant, i.e. a cutting fluid containing metallic or non-metallic nanometer-sized particles. in the past decade, more attention was paid to nanolubricants due to their enhanced thermal properties, such as thermal conductivity and convective heat transfer coefficient [6]. the most commonly used nanoparticles in base fluids were titanium oxide, molybdenum disulphide and silicon dioxide. compared to the others, sio2 nanoparticles have shown a significant improvement in machining and thermal properties and a reasonable improvement in the lubrication effect [7-11]. these sio2 nanoparticles impinge between metal surfaces and create a rolling action that enhances lubrication [12]. another study on an sio2 nanolubricant [13] showed 62.67% and 30.86% lower forces compared to dry machining and to a conventional oil-based cutting fluid respectively. turbine blade materials like aisi 420 have a high chromium content (13% to 14%). turning such high-hardness materials raises high temperatures at the cutting zone and increases the roughness of the machined surface. the surface finish of these materials is critical, as a poor surface finish leads to surface cracks and to failure under high centrifugal forces [14]. there are a few studies in the literature on the machinability of aisi 420 using nanolubricants [8-10], and some of them used doe (design of experiments) [15]. the most frequently used doe approaches are the response surface and taguchi methods. the response surface method is expensive compared to the taguchi method, which suggests fewer experiments while keeping the analysis at par with other methods [16]. this work aims to minimize the surface roughness of machined aisi 420 material using an sio2 nanolubricant. the control factors were speed, feed, depth of cut and the weight of sio2 nanoparticles in the base fluid.
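the l9(3**4) design mentioned above assigns four three-level factors so that only nine runs are needed; a minimal sketch of the layout, with the factor values taken from the paper's design table (the variable names are ours):

```python
# standard taguchi L9(3^4) orthogonal array: level indices 1-3 per factor
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

speed = {1: 150, 2: 175, 3: 200}      # m/min
feed = {1: 0.10, 2: 0.15, 3: 0.20}    # mm/rev
depth = {1: 0.10, 2: 0.20, 3: 0.30}   # mm
sio2 = {1: 0.1, 2: 0.5, 3: 1.0}       # g

runs = [(speed[a], feed[b], depth[c], sio2[d]) for a, b, c, d in L9]

# in an L9 array every factor takes each level exactly three times
for col in range(4):
    assert sorted(row[col] for row in L9) == [1, 1, 1, 2, 2, 2, 3, 3, 3]
```

this balance is what lets the later s/n response tables average each factor level over exactly three runs.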
the responses considered were the surface finish of the workpiece and the cutting temperature in the cutting zone. the experiments were planned and conducted using an l9 orthogonal array.

ii. materials and methods

a. workpiece, tool and nanoparticles
the experiments were carried out on a cnc lathe machine (hmt praga model) as shown in figure 1. the workpiece material considered for turning was aisi 420 with 50mm diameter and 120mm length.

fig. 1. cutting temperature measurement with thermocouple.

an uncoated hss (high speed steel) carbide insert (cnmg 120408) with 0.8mm nose radius was used. the workpiece and cutting tool properties are given in table i. each experiment was conducted using a new cutting edge. the size of the sio2 nanoparticles used in the cutting fluid was 521.4nm (table ii). adding sio2 nanoparticles to the cutting fluid may improve the convective heat transfer coefficient and the net heat carrying capacity, which is an essential requirement for a cutting fluid.

corresponding author: n. m. m. reddy

table i. workpiece and tool material properties

property | workpiece | cutting tool
type | aisi 420 martensitic stainless steel | uncoated hss carbide insert
conductivity (w/mk) | 24.9 | 105
density (kg/m3) | 7800 | 15000
modulus (gpa) | 200 | 620
poisson's ratio | 0.28 | 0.22
specific heat cp (j/kg°c) | 460 | 670

table ii. sio2 nanoparticle properties

property | sio2 nanoparticles
physical structure | amorphous crystalline powder
conductivity (w/cm k) | 0.015
density (g/cm3) | 2.1

b. plan of experiments
all the experiments followed a taguchi l9 orthogonal array. the control parameters considered were speed, feed, depth of cut and sio2 nanoparticle concentration in the base fluid. all parameters were kept at 3 levels: low, medium and high.
the responses measured were surface roughness and temperature at the cutting zone. each experiment was repeated three times to ensure repeatability, and the average value of the three repetitions is reported below. the list of planned experiments is specified in table iii.

table iii. design of experiments and control factors

# | a | b | c | d | speed-a vc (m/min) | feed-b f (mm/rev) | depth of cut-c ap (mm) | sio2 concentration-d (g)
1 | 1 | 1 | 1 | 1 | 150 | 0.10 | 0.10 | 0.1
2 | 1 | 2 | 2 | 2 | 150 | 0.15 | 0.20 | 0.5
3 | 1 | 3 | 3 | 3 | 150 | 0.20 | 0.30 | 1
4 | 2 | 1 | 2 | 3 | 175 | 0.10 | 0.20 | 1
5 | 2 | 2 | 3 | 1 | 175 | 0.15 | 0.30 | 0.1
6 | 2 | 3 | 1 | 2 | 175 | 0.20 | 0.10 | 0.5
7 | 3 | 1 | 3 | 2 | 200 | 0.10 | 0.30 | 0.5
8 | 3 | 2 | 1 | 3 | 200 | 0.15 | 0.10 | 1
9 | 3 | 3 | 2 | 1 | 200 | 0.20 | 0.20 | 0.1

iii. results and discussion

the responses obtained were analyzed with the s/n ratio, and the significant factors were identified. the smaller-the-better s/n ratio equation was chosen for the responses surface roughness and cutting temperature:

s/n = -10 log10( (1/n) * sum(yi^2) )    (1)

where n is the number of repetitions and yi is the measured response. the responses and corresponding s/n ratios are given in table iv. anova was done to find the significant parameters and the optimal concentration of sio2 nanoparticles in the base fluid for the least surface roughness and cutting temperature.

table iv. experimental and s/n results

# | vc | f | ap | sio2 (g) | ra (µm) | ra s/n (db) | t (°c) | t s/n (db)
1 | 150 | 0.10 | 0.1 | 0.1 | 0.47 | 6.55 | 169 | -44.5
2 | 150 | 0.15 | 0.2 | 0.5 | 0.35 | 9.11 | 162 | -44.1
3 | 150 | 0.20 | 0.3 | 1 | 0.26 | 11.7 | 148 | -43.4
4 | 175 | 0.10 | 0.2 | 1 | 0.49 | 6.19 | 153 | -43.6
5 | 175 | 0.15 | 0.3 | 0.1 | 0.64 | 3.87 | 178 | -45.0
6 | 175 | 0.20 | 0.1 | 0.5 | 0.53 | 5.51 | 167 | -44.4
7 | 200 | 0.10 | 0.3 | 0.5 | 0.58 | 4.73 | 173 | -44.7
8 | 200 | 0.15 | 0.1 | 1 | 0.29 | 10.7 | 164 | -44.2
9 | 200 | 0.20 | 0.2 | 0.1 | 0.61 | 4.29 | 182 | -45.2

a 5% level of significance was considered for the anova. the anova results for surface roughness and cutting temperature are discussed in the following sections.

a. surface roughness
the s/n ratios of surface roughness varied from 3.87db to 11.7db.
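equation (1) is easy to check against table iv; a minimal sketch (the averaged response per run is used, so n = 1 in (1) — the function name is ours):

```python
import math

def sn_smaller_is_better(values):
    """smaller-the-better s/n ratio: -10 * log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# averaged responses per run, as reported in table iv
ra = [0.47, 0.35, 0.26, 0.49, 0.64, 0.53, 0.58, 0.29, 0.61]   # surface roughness, µm
temp = [169, 162, 148, 153, 178, 167, 173, 164, 182]          # cutting temperature, °C

ra_sn = [sn_smaller_is_better([r]) for r in ra]
t_sn = [sn_smaller_is_better([t]) for t in temp]
# e.g. ra_sn[0] is about 6.56 db and t_sn[0] about -44.56 db,
# close to the 6.55 db and -44.5 db reported in table iv
```

lower responses map to higher (less negative) s/n values, which is why the optimum levels are read off the largest s/n entries in the response tables.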
the optimal combination of factors for minimum surface roughness can be obtained from the plot in figure 2, which shows that a cutting speed of 150m/min, a feed of 0.15mm/rev, a depth of cut of 0.1mm and 1g of sio2 nanoparticles give the minimum surface roughness. among all 4 factors, the concentration of sio2 nanoparticles had the highest s/n ratio (9.55db) at its best level, emphasizing its importance in controlling the surface roughness of the produced parts. the detailed s/n ratios of all input parameters at the different levels are presented in table v.

fig. 2. optimum combination of control factors for minimum surface roughness (µm).

table v. s/n response for surface roughness

level | cutting speed | feed | depth of cut | sio2 concentration
1 | 9.12 | 5.82 | 7.60 | 4.90
2 | 5.19 | 7.91 | 6.53 | 6.45
3 | 6.59 | 7.16 | 6.76 | 9.55
delta | 3.93 | 2.08 | 1.07 | 4.64
rank | 2 | 3 | 4 | 1

the anova results for surface roughness are given in table vi. they show a 50.65% contribution from the concentration of sio2 nanoparticles, confirming its significance. regarding the other factors, speed contributes 37.66%, feed 7.14% and depth of cut 4.55% in reducing surface roughness.

table vi. anova for surface roughness

source | degrees of freedom | sum of squares | mean of squares | contribution (%)
cutting speed | 2 | 0.058 | 0.029 | 37.66
feed | 2 | 0.011 | 0.005 | 7.14
depth of cut | 2 | 0.007 | 0.003 | 4.55
sio2 nanoparticle concentration | 2 | 0.078 | 0.039 | 50.65
error | 0 | - | - | -
total | 8 | 0.154 | - | 100

the interaction plot for the control factors is shown in figure 3. in general, parallel lines in an interaction plot indicate no interaction, while intersecting lines represent interactions between control factors. figure 3 shows that most of the factors have two-way interactions in influencing the surface roughness.

fig. 3.
control factors interaction for surface roughness (µm).

fig. 4. optimum combination of control factors for lower cutting temperature (°c).

b. cutting temperature
a similar study was carried out for the cutting temperature. the optimal parameter combination for minimal temperature may be interpreted from figure 4. the s/n ratio for cutting temperature ranged from -45.2db to -43.4db and is given in table iv. the s/n ratio response at the different levels of the input parameters is given in table vii. the optimal parameter combination was chosen based on the s/n ratio value at each level: 150m/min speed (-44.05db), 0.10mm/rev feed (-44.34db), 0.2mm depth of cut (-44.38db) and 1g sio2 nanoparticle concentration (-43.80db). as before, the nanoparticle concentration has the highest s/n ratio for the cutting temperature, confirming its significance among the factors. the ranking of the parameters by s/n ratio is specified in table vii. the anova results for the cutting temperature are given in table viii. they show that the concentration of sio2 nanoparticles contributed 70.53% of the temperature variance. among the other factors, cutting speed contributes 27.97%, while the contributions of depth of cut and feed are negligible.

table vii. s/n response for cutting temperature

level | cutting speed | feed | depth of cut | sio2 concentration
1 | -44.05 | -44.34 | -44.44 | -44.94
2 | -44.39 | -44.50 | -44.38 | -44.47
3 | -44.77 | -44.37 | -44.39 | -43.80
delta | 0.72 | 0.16 | 0.06 | 1.14
rank | 2 | 3 | 4 | 1

table viii. anova for cutting temperature

source | degrees of freedom | sum of squares | mean of squares | contribution (%)
cutting speed | 2 | 281 | 140 | 27.97
feed | 2 | 14 | 7 | 1.40
depth of cut | 2 | 1 | 0 | 0.10
sio2 nanoparticle concentration | 2 | 708.7 | 354.3 | 70.53
error | 0 | - | - | -
total | 8 | 1004.7 | - | 100

the interaction effect of the factors is shown in figure 5.
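the contribution percentages in tables vi and viii are simply each factor's sum of squares divided by the total; a minimal sketch with the cutting-temperature values of table viii (the l9 design is saturated here — four factors at 2 df each use all 8 df, so the error term has zero degrees of freedom):

```python
# sums of squares for cutting temperature, from table viii
ss = {"cutting speed": 281.0, "feed": 14.0, "depth of cut": 1.0,
      "sio2 concentration": 708.7}

total = sum(ss.values())  # total sum of squares, 1004.7
contribution = {k: 100.0 * v / total for k, v in ss.items()}

for factor, pct in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {pct:.2f}%")
```

running the same computation on the table vi sums of squares reproduces the 50.65% roughness contribution of the sio2 concentration.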
in figure 5, most of the lines representing the factors are non-parallel, showing two-way interactions. irrespective of the other factors, the temperature at the 1g concentration of sio2 nanoparticles is lower, which can be attributed to the increased heat carrying capacity and thermal conductivity of the nanoparticles in the base fluid. general regression equations were obtained from the experimental results:

t (°c) = 130.0 + 0.273*vc + 10.0*f - 0.167*ap - 24.1*x    (2)

ra (µm) = 0.144 + 0.00267*vc - 0.467*f + 0.317*ap - 0.253*x    (3)

where vc is the speed, f is the feed, ap is the depth of cut and x is the concentration of sio2 nanoparticles. the r-square values obtained for cutting temperature and surface roughness are 98% and 73.3% respectively. the given regression equations are useful for predicting the cutting temperature and surface roughness.

fig. 5. control factors interaction for cutting temperature (°c).

iv. conclusions

in the present work, the optimal level of sio2 nanoparticle concentration in a base fluid for minimal cutting temperature and surface roughness was determined. the machining experiments were conducted on aisi 420. speed, feed, depth of cut and sio2 nanoparticle concentration were considered as factors, and a taguchi l9 orthogonal array was used to design the experiments.
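the regression coefficients of (2) and (3) are hard to read in the typeset original; a minimal sketch that takes them as t = 130.0 + 0.273*vc + 10.0*f - 0.167*ap - 24.1*x and ra = 0.144 + 0.00267*vc - 0.467*f + 0.317*ap - 0.253*x (a best reading, which reproduces the measured responses of table iv closely, consistent with the reported r-square values):

```python
def predict_temperature(vc, f, ap, x):
    """regression (2): cutting temperature (°C) from speed vc (m/min),
    feed f (mm/rev), depth of cut ap (mm) and sio2 weight x (g).
    coefficients are a transcription of the paper's damaged typesetting."""
    return 130.0 + 0.273 * vc + 10.0 * f - 0.167 * ap - 24.1 * x

def predict_roughness(vc, f, ap, x):
    """regression (3): surface roughness ra (µm), same caveat on coefficients."""
    return 0.144 + 0.00267 * vc - 0.467 * f + 0.317 * ap - 0.253 * x

# run 1 of table iv (vc=150, f=0.10, ap=0.1, x=0.1): measured t=169 °C, ra=0.47 µm
t1 = predict_temperature(150, 0.10, 0.1, 0.1)   # about 169.5 °C
ra1 = predict_roughness(150, 0.10, 0.1, 0.1)    # about 0.50 µm (r-square is only 73.3%)
```

the temperature fit tracks the measurements to within about 1.5 °c across the nine runs, matching its 98% r-square; the roughness fit is looser, as its 73.3% r-square suggests.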
based on the responses measured in the experiments, the following observations were made:
• minimum surface roughness was observed on the machined surface of the workpiece for 1g of sio2 nanoparticles in the base fluid, 150m/min cutting speed, 0.15mm/rev feed and 0.1mm depth of cut.
• minimum cutting temperature was observed for 1g of sio2 nanoparticles in the base fluid, 150m/min cutting speed, 0.10mm/rev feed and 0.2mm depth of cut.
• based on the anova, the sio2 nanoparticle concentration contributes 50.65% in reducing surface roughness and 70.53% in obtaining the minimum cutting temperature.
• polynomial equations were proposed to relate the input parameters to the responses, i.e. cutting temperature and surface roughness.
from these results, it can be concluded that the machining performance, in terms of cutting temperature and surface finish of the workpiece, was improved by adding 1g of sio2 nanoparticles in the base fluid. these results give a direction for developing new cutting fluids with sio2 nanoparticles.

acknowledgment
the authors are thankful to bits, hyderabad campus and anurag group of institutions for providing the experimental facilities.

references
[1] w. f. sales, a. e. diniz, a. r. machado, "application of cutting fluids in machining processes", journal of the brazilian society of mechanical sciences, vol. 23, no. 2, pp. 227-240, 2001
[2] f. klocke, g. eisenblatter, "dry cutting", cirp annals, vol. 46, no. 2, pp. 519-526, 1997
[3] e. kalhofer, "dry machining principles and applications", 2nd international seminar on high technology, santa barbara d'oeste, brazil, 1997
[4] u. heisel, m. lutz, d. spath, r. wassmer, u. walter, "application of minimum quantity cooling lubrication technology", in: production engineering vol. ii, universities of stuttgart and karlsruhe, institute for machine tools and production science processes, pp. 4954, 1998
[5] s. debnath, m. m. reddy, q. s.
yi, “environmental friendly cutting fluids and cooling techniques in machining: a review”, journal of cleaner production, vol. 83, pp. 33-47, 2014 [6] n. a. c. sidik, s. samion, j. ghaderian, m. n. a. w. m. yazid, “recent progress on the application of nanofluids in minimum quantity lubrication machining: a review”, international journal of heat and mass transfer, vol. 108a, pp. 79-89, 2017 [7] m. sayuti, a. a. d. sarhan, f. salem, “novel uses of sio2 nanolubrication system in hard turning process of hardened steel aisi4140 for less tool wear, surface roughness and oil consumption”, journal of cleaner production, vol. 67, pp. 265-276, 2014 [8] r. k. singh, a. k. sharma, a. r. dixit, a. mandal, a. k. tiwari, “experimental investigation of thermal conductivity and specific heat of nanoparticles mixed cutting fluids”, materials today: proceedings, vol. 4, no. 8, pp. 8587-8596, 2017 [9] m. m. r. nune, p. k. chaganti, “experimental investigation on turning of turbine blade material aisi 410 under minimum quantity cutting fluid”, materials today: proceedings, vol. 4, no. 2, pp. 1057-1064, 2017 [10] n. m. m. reddy, c. p. kiran, “investigating the machining performance of turbineblade profile using sio2 nanoparticle based eco-friendly cutting fluid”, xiii international conference on high speed machining, montigny-les-metz, france, october 4-5, 2016 [11] a. a. minea, “hybrid nanofluids based on al2o3, tio2 and sio2: numerical evaluation of different approaches”, international journal of heat and mass transfer, vol. 104, pp. 852-860, 2017 [12] m. sayuti, o. m. erh, a. a. d. sarhan, m. hamdi, “investigation on the morphology of the machined surface in end milling of aerospace al6061-t6 for novel uses of sio2 nanolubrication system”, journal of cleaner production, vol. 66, pp. 655-663, 2014 [13] r. k. singh, a. k. sharma, a. r. dixit, a. mandal, a. k. 
tiwari, “experimental investigation of thermal conductivity and specific heat of nanoparticles mixed cutting fluids”, materials today: proceedings, vol. 4, no. 8, pp. 8587-8596, 2017 [14] c. p. kiran, s. clement, “surface quality investigation of turbine blade steels for turning process”, measurement, vol. 46, no. 6, pp. 1875-1895, 2013 [15] a. m. el-tamimi, t. m. el-hossainy, “investigating the machinability of aisi 420 stainless steel using factorial design”, materials and manufacturing processes, vol. 23, no. 4, pp. 419-426, 2008 [16] s. k. khare, s. agarwal, “optimization of machining parameters in turning of aisi 4340 steel under cryogenic condition using taguchi technique”, procedia cirp, vol. 63, pp. 610-614, 2017 microsoft word etasr_7-1_1345-1352.doc engineering, technology & applied science research vol. 7, no. 1, 2017, 1345-1352 1345 www.etasr.com abu bakar et al.: cumulative effect of crumb rubber and steel fiber on the flexural toughness of concrete cumulative effect of crumb rubber and steel fiber on the flexural toughness of concrete badorul hisham abu bakar school of civil engineering universiti sains malaysia, engineering campus penang, malaysia cebad@usm.my ahmed tareq noaman school of civil engineering universiti sains malaysia, engineering campus penang, malaysia atn_en@yahoo.com hazizan md. akil school of materials and mineral resources eng., universiti sains malaysia, engineering campus penang, malaysia hazizan@usm.my abstract—concrete properties, such as toughness and ductility, are enhanced to resist different impacts or blast loads. rubberized concrete, which could be considered a green material, is produced from recycled waste tires grinded into different crumb rubber particle sizes and mixed with concrete. in this study, the behavior of rubberized steel fiber-reinforced concrete is investigated. 
the flexural performance of concrete beams (400×100×100 mm) manufactured from plain, steel fiber, crumb rubber, and combined crumb rubber and steel fiber concrete is evaluated. similarly, concrete slabs (500×500×50 mm) are tested under flexural loading. the flexural performance of the sfrrc mixtures was significantly enhanced, and the toughness and maximum deflection of the specimens with rubber were considerably improved. steel fiber/crumb rubber-reinforced concrete can be used in practical applications, which requires further studies.

keywords-rubberized steel fiber; toughness; flexural behavior

i. introduction

the resistance of concrete structures against the effects of impact loading, which may occur suddenly, has been the focus of considerable attention. concrete properties, such as toughness and ductility, are tuned to enhance the resistance against different impact or blast loads. various studies have enhanced these properties by utilizing natural or artificial waste products to improve the toughness of cement composites or concrete [1-3]. rubberized concrete could be considered a green material, produced by recycling waste tires into crumb rubber particles that are mixed with concrete, replacing a specified portion of the aggregate. the public disposal of waste tires poses a serious environmental concern, and combining these tires with concrete as a partial replacement of aggregate presents a potential solution to this environmental problem, beside the other innovative techniques used to recycle them in civil engineering [4]. crumb rubber concrete exhibits reduced mechanical properties depending on the rubber content [5, 6]. however, in [7] it was observed that concrete toughness and ductility are enhanced because of the ability of the rubber aggregate to deform after failure. rubber crumbs were incorporated into the concrete mixture at volumetric replacement ratios from 5% to 20% of the coarse aggregate.
the damping ratio was determined for the concrete column from a free vibration test. rubberized concrete columns showed better performance than normal concrete due to an enhancement in damping ratios of about 3%. this characteristic indicates that concrete containing crumb rubber aggregate is more capable of dissipating kinetic energy. rubberized concrete may be used in various civil engineering applications, such as blast walls, bollards, and road traffic barriers, in which the design strength is not a critical parameter. rubber combined with steel fiber is an innovative material with high tensile properties. in [8], the authors evaluated the post-cracking performance of concrete with steel fibers recycled from waste tires by testing flexural beams and slabs. the post-cracking behavior of recycled steel fiber reinforced concrete showed reduced brittleness, which indicates that more energy is absorbed and that toughness can possibly be increased by more than 100%. waste recycled rubber interacts with steel fibers and bonds with the cement paste such that the steel fiber also increases concrete deformability under applied loads [9]. thus, the flexural performance of rubberized and rubberized steel fiber concrete beams was analyzed experimentally. the effect of crumb rubber inclusion as a partial replacement of fine aggregate volume at three different ratios, namely 17.5%, 20%, and 22.5%, in plain concrete (pc) was investigated. similar ratios of crumb rubber were combined with the steel fiber concrete (sfc) mixture. finally, flexural tests on rubberized concrete slabs were conducted to determine their flexural properties.

ii. experimental program

a. materials and mix design
the mixture used in this study was prepared to achieve a concrete compressive strength of 45 mpa on day 28. ordinary portland cement, crushed aggregate from a local source with a maximum aggregate size of 14 mm, and natural river sand as fine aggregate were used in this work.
crumb rubber was produced by grinding waste tires to the desired size for this study (0.15–2.36 mm). the properties of the fine and coarse aggregate and of the crumb rubber aggregate are listed in table i. figure 1 shows the recycled crumb rubber particles used in the experiments. steel fiber (hooked-end, bundled) with an aspect ratio of 80, a circular diameter of 0.75 mm, a specific gravity of 7.85, and a tensile strength of 1050 mpa was used, as shown in figure 1. a polycarboxylic (ether-based) superplasticizer (sp) was used to enhance the low workability of the sfc mixtures.

table i. properties of natural aggregate and crumb rubber

type of aggregate | specific gravity | absorption (%) | moisture content (%) | fineness modulus | specific area (mm2/g)
coarse | 2.65 | 1.0 | 1.0 | 6.27 | na
fine | 2.64 | 2.0 | 3.0 | 3.93 | 0.672
rubber | 0.73 | 10.6 | 1.4 | na | 0.774

fig. 1. crumb rubber and steel fiber.

concrete mixtures were prepared with different replacement ratios of the fine river sand volume by crumb rubber particles. the replacement ratios were 17.5%, 20%, and 22.5%, and the corresponding samples were designated cr17.5, cr20, and cr22.5. the sfc mixture was considered the reference mix without crumb rubber. the volume fraction of the steel fiber was fixed at 0.5%. rubberized steel fiber reinforced concrete mixtures were prepared by adding crumb rubber at the same replacement ratios as the rubberized concrete (i.e., 17.5%, 20%, and 22.5%), and the corresponding samples were denoted sfcr17.5, sfcr20, and sfcr22.5. the mix proportions of the plain (pc) and steel fiber (sfc) concrete mixtures are presented in table ii, while the compositions of the rubberized and rubberized steel fiber concrete mixes are presented in tables iii and iv, respectively. for all mixes the water/cement ratio was fixed at 0.47.

iii.
experimental details and procedures

a. compressive strength test

compressive strength tests were conducted on three 100 mm cubes for each mixture, in accordance with british standard bs 1881: part 116 [10], using an oil hydraulic machine with a capacity of 3,000 kn. the compressive strength was calculated as the measured load divided by the loaded area of the cube.

b. four-point bending test

four-point bending tests were carried out on the rubberized steel fiber beams to investigate the effect of the combination of crumb rubber and steel fiber on the flexural performance of concrete. the flexural test setup and specimen geometry are illustrated in figure 2. three beams of each mixture were prepared for this test. the beams were 100 mm wide, 100 mm deep, and 400 mm long, with a loaded span of 300 mm. the experimental setup is in accordance with astm c1609 [11]. the test was carried out using a 100 kn ag-x series shimadzu universal testing machine. the specimens were loaded to ultimate failure at a constant displacement rate of 0.1 mm/min. the load versus mid-span deflection data were recorded, and the modulus of rupture (mor) was calculated. three specimens from each mixture were tested at 28 days.

table ii. concrete mix design for plain and steel fiber mixtures (quantities in kg/m3)

mix   cement   steel fiber   coarse aggregate   fine aggregate
pc    430      0             907                814
sfc   430      39            907                814

table iii. concrete mix design for rubberized mixtures (quantities in kg/m3)

mix       cement   coarse aggregate   fine aggregate   rubber
cr17.5%   430      907                670              39.5
cr20%     430      907                649              45.3

table iv. concrete mix design for steel fiber reinforced rubberized mixtures (quantities in kg/m3)

mix        cement   steel fiber   coarse aggregate   fine aggregate   rubber
sfrc17.5%  430      39            907                670              39.5
sfrc20%    430      39            907                649              45.3
sfrc22.5%  430      39            907                630              50.3

fig. 2. test set-up for the four-point bending test.

c.
flexural test of slabs

square slabs with dimensions of 500×500×50 mm were prepared to investigate their behavior under static loading. the tested slabs were fixed on the built-in testing machine after 28 days of moist curing (figure 3). the slab specimens were placed on steel supports consisting of four rollers supported by four-sided rigid square steel hollow tubes. the span between the steel rollers was 430 mm (figure 4). the load was applied through a circular steel ram with a diameter of 100 mm aligned with the centerline of the slab. the loading rate was 1.5 mm/min, following the method presented in [12]. the test was stopped when the measured mid-span deflection of the concrete slabs reached several millimeters. this test was conducted to investigate the effect of combined steel fiber and crumb rubber on the static test results of slabs with different percentages of crumb rubber as a green material with enhanced energy absorption capacity, as previously reported [7].

fig. 3. flexural test of concrete slabs.

fig. 4. test set-up of the concrete slabs.

iv. results and discussion

a. compressive strength

the results of the compressive strength tests of the different concrete cubes at 28 days are presented in figure 5. the compressive strength decreased with the increase of the crumb rubber ratio. the combined effect of steel fiber and crumb rubber limited this reduction; however, the improvement in compressive strength from the small amount of hooked-end steel fiber (0.5% by volume) was negligible. the compressive strengths of sfcr17.5, sfcr20, and sfcr22.5 were reduced by 21%, 25%, and 27%, respectively, compared with sfc.

b.
flexural behavior of beams

the flexural performance of the concrete mixtures was assessed in terms of the first crack load, first crack strength, ultimate flexural load, ultimate flexural strength (mor), mid-span deflection at maximum load, flexural stiffness, and flexural toughness, according to the four-point bending test. typical load–deflection curves of the different concrete mixtures with varying percentages of crumb rubber aggregate are shown in figures 6(a) and (b). the averages of three four-point bending tests of loads and stresses for each batch are presented in table v. the load–deflection curves showed that the maximum load that can be sustained by the rubberized concrete is generally lower than that of the non-rubberized concrete (with or without steel fiber). this reduction is similar to the results of the compressive loading tests; by contrast, the reduction rate in the flexural loads is smaller than that in the compressive loads. the load–deflection curves for the rubberized concrete mixes have similar shapes. the load capacity declined with increasing content of crumb rubber aggregate in the concrete mix, but the increase in crumb rubber led to more ductile behavior by increasing the deflection before failure. the curves for the rubberized steel fiber concrete became less sharp than that of the original mix with only steel fiber. this result was also observed in [13], where it was reported that crumb rubber does not affect the post-peak resistance capacity offered by the steel fiber. an enhancement of the deflection measured up to failure was obtained with the increase of rubber aggregate particles in the fibrous cement composites (table vi). maximum deflection was observed at 20% crumb rubber content for the plain concrete mixes. a similar trend was observed for the rubberized steel fiber concrete mixes.
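the mor values follow from the standard third-point loading relation of astm c1609, mor = p·l/(b·d²), valid when fracture initiates within the middle third of the span. a minimal sketch (function name is illustrative), applied to the pc ultimate load from table v:

```python
def modulus_of_rupture(load_n, span_mm, width_mm, depth_mm):
    """modulus of rupture (mpa) for a third-point (four-point) bending
    test: sigma = p * l / (b * d**2), with p in n and dimensions in mm."""
    return load_n * span_mm / (width_mm * depth_mm ** 2)

# 100x100x400 mm beam on a 300 mm span, ultimate load 15569 n (pc, table v)
mor = modulus_of_rupture(15569, 300, 100, 100)
print(round(mor, 2))  # prints 4.67 (table v reports 4.60; small
                      # differences likely reflect measured dimensions)
```
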
the increase in the deflection capacity at ultimate load, which is defined as the strain capacity [13], indicates an increase in the deformability of rubberized concrete. the cumulative effect of steel fiber and crumb rubber was confirmed through the enhanced ductility and reduced cracking caused by the steel fiber [14], with additional restraint of cracks provided by the presence of crumb rubber particles [15]. a similar trend was observed in the interaction between recycled and industrial steel fiber and rubber granulates [16]. according to astm c1018 [17], the first crack load is defined as the point on the load–deflection curve where the curve changes from linear to nonlinear behavior, and the first crack deflection is the deflection value corresponding to this load. the first crack strength was calculated from the load obtained at first crack for the different mixtures. the effect of crumb rubber incorporation on the first cracking strength is shown in figure 7(a). the first cracking strength decreased by up to 18% because of the partial replacement of fine aggregate sand by recycled crumb rubber particles, and the combined effect of steel fiber was to minimize this reduction. in the fiber mixes, crumb rubber did not enhance the first crack deflection; in plain concrete, however, the partial replacement of 20% of the fine aggregate with crumb rubber enhanced the first cracking deflection by about 30% compared with pc. this tendency could be attributed to the effect of crumb rubber, which is neglected in the presence of hooked-end steel fiber with a relatively high aspect ratio. figure 7(b) also shows that the crumb rubber content had an evident effect: the ultimate strength of concrete (plain or fiber) decreased with the increase in rubber ratio. the reduction rate increased from 16% to 22% with partial substitution of fine aggregate by crumb rubber in the steel fiber reinforced concrete.
this trend of systematic reduction has been attributed to poor bonding between the cement paste and the rubber aggregate [4]. flexural stiffness k was also determined from the flexural behavior of the concrete beams (table vi). k is taken as the slope of the linear part of the load–deflection curve of rubberized concrete between 10% and 50% of the ultimate load [18, 19]. the reduction in stiffness of concrete indicates an increase in deformability, which enhances the capacity of concrete sections to absorb more energy and exhibit better ductile behavior when subjected to high loading rates. toughness, determined by calculating the area under the load–deflection curve up to failure, was enhanced considerably in the presence of crumb rubber (table vi). the rates of increase were 28.8%, 31.1%, and 26.4% at 17.5%, 20%, and 22.5% crumb rubber content, respectively. these values indicate that the flexural toughness of rubberized concrete beams is sensitive to the rubber ratio: toughness increased with rubber content up to the optimum replacement ratio of 20%, while further replacement of fine aggregate by crumb rubber reduced the toughness values. a similar trend was observed in [20], where the highest fracture energy value was recorded at 25% replacement of coarse aggregate by chipped tire rubber, with reduced fracture energy beyond this ratio. rubber aggregate improves concrete deformability by reducing the local stresses that occur around microcracks [3]; beyond the optimum ratio, however, the rubberized cement composite is weakened under loading, which is detrimental to the energy absorbed [20].
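the two derived quantities, stiffness k (slope of the load–deflection curve between 10% and 50% of the ultimate load) and toughness (area under the curve up to failure), can be computed directly from recorded load–deflection data. a minimal sketch (function names are illustrative; a least-squares fit is assumed for the slope):

```python
import numpy as np

def flexural_stiffness(load_kn, defl_mm):
    """slope k (kn/mm) of the load-deflection curve, fitted by least
    squares over the pre-peak points between 10% and 50% of the
    ultimate load."""
    load = np.asarray(load_kn, dtype=float)
    defl = np.asarray(defl_mm, dtype=float)
    p_ult = load.max()
    pre_peak = np.arange(len(load)) <= load.argmax()
    mask = pre_peak & (load >= 0.1 * p_ult) & (load <= 0.5 * p_ult)
    slope, _ = np.polyfit(defl[mask], load[mask], 1)
    return slope

def toughness(load_kn, defl_mm):
    """area (kn.mm) under the load-deflection curve up to failure,
    by the trapezoidal rule."""
    load = np.asarray(load_kn, dtype=float)
    defl = np.asarray(defl_mm, dtype=float)
    return float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(defl)))
```

for example, an idealized elastic record with p = 10·δ up to δ = 1 mm gives k = 10 kn/mm and a toughness of 5 kn·mm (the area of the triangle under the line).
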
it is clear that the rubberized steel fiber concrete mixes exhibited the highest toughness values of all the samples. the enhancement was due to the increase in peak deflection capacity caused by the inclusion of rubber aggregate from waste tires in the concrete. higher toughness is generally an excellent property for civil engineering materials used in sound barriers and paving constructions [21].

table v. results of flexural loads and strength of beams

mix        first crack load (n)   ultimate flexural load (n)   first crack strength (mpa)   ultimate flexural strength (mpa)
pc         15569                  15569                        4.60                         4.60
sfc        15856                  16684                        4.75                         4.95
cr17.5     12491                  13275                        3.74                         3.92
cr20       12110                  12933                        3.63                         3.81
cr22.5     11950                  12671                        3.58                         3.75
sfcr17.5   13694                  14030                        4.11                         4.15
sfcr20     13557                  13752                        4.06                         4.13
sfcr22.5   12743                  12925                        3.82                         3.87

table vi. deflections, stiffness and toughness of concrete beams under flexure

mix        first crack deflection (mm)   maximum mid-span deflection (mm)   flexural stiffness (kn/mm)   toughness (kn.mm)
pc         1.07                          1.07                               14.18                        7.048
sfc        0.80                          1.41                               19.45                        16.032
cr17.5     1.35                          1.51                               9.05                         9.086
cr20       1.40                          1.55                               8.87                         9.243
cr22.5     1.39                          1.54                               8.69                         8.913
sfcr17.5   0.81                          1.80                               17.21                        20.576
sfcr20     0.82                          1.83                               16.91                        21.343
sfcr22.5   0.83                          1.78                               15.94                        20.331

fig. 5. compressive strength results.

fig. 6. load–deflection curves of beams: (a) rubberized, (b) steel fiber rubberized concrete.

c. flexural behavior of slabs

table vii presents the average test results for the slabs of each mixture. the flexural test results of the slabs are also presented as load–deflection curves in figures 8(a) to 8(h).
all tested rubberized steel fiber concrete specimens had similar load–deflection trends, with a linear branch until the ultimate load was reached, followed by a descending branch with strain softening. scatter was observed in the descending branch, which can be attributed to the distribution of fiber in the cracked portions at post-peak load [8]. the curves for the rubberized steel fiber concrete slabs showed a higher descending branch than the plain rubberized slabs because of the presence of steel fiber. the rubber aggregate increased the ultimate deflection capacity up to a specified replacement ratio (20%); further increase in the ratio resulted in lower values of the slope in the post-cracking phase. this phenomenon could be explained by the weakness in the matrix of the slabs produced by further rubber replacement, at which stage the bridging effect of crumb rubber is no longer provided. an increase in concrete strength could lead to an increase in energy absorption capacity [12]; however, the rubberized mixes, with or without fiber reinforcement, exhibited reduced ultimate load and increased strain capacity [13]. moreover, the rubber mixes tended to exhibit a reduced elastic stage compared with the pc or sfc slabs. this finding was also previously reported and attributed to the lower tangential elastic modulus of the rubbercrete slabs [22]. further studies on the combined effect of steel fiber and crumb rubber aggregate with long rubber fibers should be conducted, so that the bridging action could be duplicated and the post-cracking deflection capacity enhanced. the rubber aggregate has a considerable effect on the toughness of concrete slabs (figures 9(a) and (b)).
table vii shows that the energy absorption capacity (toughness) of the concrete slabs increased, with rates of increase of approximately 17%, 20%, and 15% at the 17.5%, 20%, and 22.5% replacement ratios, respectively. a minimum enhancement of 15% in toughness was thus noted, although toughness decreased beyond the 20% ratio. a similar trend was observed when crumb rubber was combined with steel fiber. the 0.5% steel fiber alone provided the sfc mixture with an initial enhancement of about 14 times the toughness of pc. further improvements in the toughness values were achieved by the inclusion of crumb rubber: the rates of increase were approximately 16%, 19%, and 14% for the 17.5%, 20%, and 22.5% slabs, respectively, compared with the sfc slabs. this property indicates that utilizing crumb rubber with steel fiber results in lower weight, higher toughness, and better energy absorption capacity before failure of concrete slabs. in [23], the authors combined steel fiber and crumb rubber to produce layered concrete slabs subjected to impact force by replacing a specified portion of the steel fiber concrete thickness. the results showed a reduction in weight and more energy dissipation in the rubberized steel fiber concrete plates, which act as a cushion layer to reduce the effects of impact force. figures 10(a) and (b) show adequate residual strengths for the four mixtures sfc, sfcr17.5, sfcr20, and sfcr22.5 after the test, because the steel fiber improved the ultimate flexural strength, energy absorption capacity, and ductility. an increase in mid-span deflection with increasing crumb rubber ratio was also observed. this finding indicates enhanced flexural performance through improved ductility and energy absorption capacity of the concrete slabs (figure 11).

table vii.
results obtained from flexure test of concrete slabs

mix        ultimate load (kn)   maximum deflection at middle (mm)   toughness (kn.mm)
pc         35.16                18.98                               42.05
sfc        37.94                22.97                               588.65
cr17.5     29.17                20.75                               49.15
cr20       28.39                21.40                               50.40
cr22.5     27.38                20.62                               48.23
sfcr17.5   31.80                25.76                               682.68
sfcr20     31.18                26.65                               701.17
sfcr22.5   29.82                25.61                               671.95

fig. 7. flexural strength at (a) first crack, and (b) ultimate strength.

v. conclusions

in the present study, concrete beams and slabs were prepared by combining hooked-end steel fiber with crumb rubber aggregate to produce a green innovative material with desirable properties. the concrete beams, whether plain with crumb rubber or steel fiber with crumb rubber, were subjected to four-point bending load. the slabs were subjected to a center point load to investigate their flexural performance. slump and compressive strength tests were also conducted. the following conclusions were drawn:

fig. 8. load–mid-span deflection graphs for concrete slabs: (a) pc, (b) cr17.5, (c) cr20, (d) cr22.5, (e) sfc, (f) sfcr17.5, (g) sfcr20, (h) sfcr22.5.

fig. 9. effect of crumb rubber inclusion on toughness of concrete slabs: (a) plain concrete, (b) steel fiber concrete.
• the compressive strength of all rubberized mixtures decreased with the increase in the partial substitution of crumb rubber aggregate in plain or steel fiber concrete. the loss of compressive strength was at least 20%.
• the inclusion of crumb rubber into the concrete mixtures, with or without fiber, resulted in reduced flexural strength at first crack and at ultimate failure for concrete beams under four-point bending load.
• by contrast, toughness, determined by calculating the area under the load–deflection curve up to maximum deflection, improved significantly in the presence of crumb rubber. the optimum replacement ratio of sand aggregate by recycled crumb rubber aggregate was 20%.
• all rubberized steel fiber reinforced concrete mixes exhibited the highest toughness values of all the samples in this study. the enhancements were due to the smaller reduction in load and the increase in ultimate deflection capacity.
• the ultimate deflection values increased as the crumb rubber ratio increased. in addition, better flexural performance and improved concrete slab bendability before failure were observed when steel fiber was incorporated into the concrete mixture.
• the post-cracking behavior of the rubberized concrete slabs became more ductile with the increase in the waste tire crumb rubber aggregate ratio up to 20%. a significant increase in toughness of approximately 16 times was observed for the sfcr20 mixture compared with pc.
• the experimental results showed a promising application of steel fiber/crumb rubber reinforced concrete, which requires further studies.

fig. 10. failure of steel fiber rubberized concrete slabs under flexural loading.

fig. 11. failure of steel fiber rubberized concrete (sfcr22.5): (a) top, (b) bottom.
acknowledgements

the work presented herein was funded by the universiti sains malaysia grant (cluster for polymer composite: 1001/pkt/8640013).

references

[1] h. toutanji, “the use of rubber tire particles in concrete to replace mineral aggregates”, cem. conc. comp., vol. 18, no. 2, pp. 135-139, 1996
[2] z. ismail, e. al-hashmi, “use of waste plastic in concrete mixture as aggregate replacement”, waste management, vol. 28, no. 11, pp. 2041-2047, 2008
[3] c. wang, y. zhang, z. zhao, “fracture process of rubberized concrete by fictitious crack model and ae monitoring”, comp. concrete, vol. 9, no. 1, pp. 51-61, 2012
[4] e. ganjian, m. khorami, a. maghsoudi, “scrap-tyre-rubber replacement for aggregate and filler in concrete”, const. build. mats., vol. 23, no. 5, pp. 1828-1836, 2009
[5] m. bekhiti, h. trouzine, a. asroun, “properties of waste tire rubber powder”, eng. technol. appl. sci. res., vol. 4, no. 4, pp. 669-672, 2014
[6] k. najim, m. hall, “workability and mechanical properties of crumb-rubber concrete”, proc. of the ice – const. mats., vol. 166, no. 1, pp. 7-17, 2013
[7] j. xue, m. shinozuka, “rubberized concrete: a green structural material with enhanced energy-dissipation capability”, const. build. mats., vol. 42, pp. 196-204, 2013
[8] g. centonze, m. leone, m. aiello, “steel fibers from waste tires as reinforcement in concrete: a mechanical characterization”, const. build. mats., vol. 36, pp. 46-57, 2012
[9] d. flores-medina, n. flores medina, f. hernández-olivares, “static mechanical properties of waste rests of recycled rubber and high quality recycled rubber from crumbed tyres used as aggregate in dry consistency concretes”, materials struct., vol. 47, no.
7, pp. 1185-1193, 2013
[10] british standards institute (bs), method for determination of compressive strength of concrete cubes, bs 1881: part 116, london, 1983
[11] american society for testing and materials (astm), standard test method for flexural performance of fiber-reinforced concrete (using beam with third-point loading), astm c1609-12, astm international, west conshohocken, pa, 2012
[12] a. khaloo, m. afshari, “flexural behaviour of small steel fiber reinforced concrete slabs”, cem. and concrete comp., vol. 27, no. 1, pp. 141-149, 2005
[13] a. turatsinze, j. granju, s. bonnet, “positive synergy between steel fibers and rubber aggregates: effect on the resistance of cement-based mortars to shrinkage cracking”, cem. conc. resch., vol. 36, no. 9, pp. 1692-1697, 2006
[14] r. olivito, f. zuccarello, “an experimental study on the tensile strength of steel fiber reinforced concrete”, composites part b: eng., vol. 41, no. 3, pp. 246-255, 2010
[15] a. turatsinze, s. bonnet, j. granju, “mechanical characterisation of cement-based mortar incorporating rubber aggregates from recycled worn tyres”, build. envt., vol. 40, no. 2, pp. 221-226, 2005
[16] d. bjegovic, a. baricevic, s. lakusic, d. damjanovic, i. duvnjak, “positive interaction of industrial and recycled steel fibers in fiber reinforced concrete”, j. of civil eng. manag., vol. 19, sup. 1, pp. 50-60, 2013
[17] american society for testing and materials (astm), standard test method for flexural toughness and first-crack strength of fiber-reinforced concrete (using beam with third-point loading), astm c1018-97, astm international, west conshohocken, pa, 1997
[18] a. turatsinze, m. garros, “on the modulus of elasticity and strain capacity of self-compacting concrete incorporating rubber aggregates”, res. conserv. recy., vol. 52, no. 10, pp. 1209-1215, 2008
[19] k. najim, m. hall, “mechanical and dynamic properties of self-compacting crumb rubber modified concrete”, const. build. mats., vol. 27, no. 1, pp.
521-530, 2012
[20] m. reda taha, a. el-dieb, m. abd el-wahab, m. abdel-hameed, “mechanical, fracture, and microstructural investigations of rubber concrete”, j. of materials in civil eng., vol. 20, no. 10, pp. 640-649, 2008
[21] t. c. ling, “effects of compaction method and rubber content on the properties of concrete paving blocks”, const. build. mats., vol. 28, no. 1, pp. 164-175, 2012
[22] b. mohammed, “structural behavior and m–k value of composite slab utilizing concrete containing crumb rubber”, const. build. mats., vol. 24, no. 7, pp. 1214-1221, 2010
[23] p. sukontasukkul, s. jamnam, m. sappakittipakorn, n. banthia, “preliminary study on bullet resistance of double-layer concrete panel made of rubberized and steel fiber reinforced concrete”, materials struct., vol. 47, no. 1-2, pp. 117-125, 2013

engineering, technology & applied science research vol. 10, no. 5, 2020, 6237-6241, www.etasr.com, mubarak: effect of carrier phase on gps multipath tracking error

the effect of carrier phase on gps multipath tracking error

omer mohsin mubarak
department of electrical engineering, jouf university, saudi arabia
ommubarak@ju.edu.sa

abstract—multipath is one of the main sources of tracking error in gps receivers. this tracking error has previously been analyzed against the relative delay of the line of sight (los) and reflected signals. however, only carrier phase differences of 0 and π were used, since they give tracking errors of maximum magnitude. this paper shows that tracking error does not change linearly with changing carrier phase difference. tracking error plots against the relative carrier phase difference of the los and reflected signals have been used to analyze the relationship between the two in various scenarios. while the maximum positive and negative errors are found at carrier phase differences of 0 and π, a sharp increase in tracking error is found around the phase difference of π.
there is a zero crossing in all plots, but that point depends on the relative amplitude, delay, and carrier phase difference of the two signals. the analysis has also been extended to a narrow correlator receiver. the tracking error is significantly reduced in this case; however, similar characteristics are observed when the tracking error is analyzed against the relative carrier phase difference. moreover, the tracking error was found to be less dependent on the relative delay between the two signals when the correlator spacing is reduced.

keywords-multipath; global navigation satellite system; global positioning system; carrier phase; tracking error

i. introduction

the reflection of the line of sight (los) satellite signal from nearby objects, known as multipath, causes tracking errors in global positioning system (gps) receivers [1-3]. tracking errors caused by multipath can be positive or negative and depend on the relative amplitude, carrier phase, and code delay of the reflected signal with respect to the los signal [4]. a plot of the maximum positive and maximum negative code tracking errors against the relative delay of a reflected signal is called the multipath error envelope and is generally used to represent error characteristics [5-7] and the effects of mitigation techniques [8, 9]. it shows the maximum deviation in tracking error that can be caused by variations of the carrier phase difference between the los and the reflected signals for a given relative amplitude and code delay between the two signals. the two maximums are obtained for relative carrier phase differences of 0 and π radians. however, the error envelope does not provide details of how the tracking error changes when the carrier phase difference between the two signals changes from 0 to π or vice versa. this paper analyzes the effect of the carrier phase difference between the los and the reflected signals on multipath tracking error.

ii.
experimental setup

in a gps receiver, correlators are placed on the correlation function to track the code phase of a received gps signal. typically, three correlators are used: one at the prompt (on-time) correlation position and the other two (early and late) symmetrically placed on either side [7, 10]. the early and late correlators use equally advanced and delayed versions of the prompt code respectively, such that for a triangular correlation function their equal energy implies that the prompt correlator tracks the peak. equal energy is ensured using a discriminator function, which in its simplest form is the difference in energy of the two. the tracking loops adjust the local code and carrier, aiming to maintain the discriminator output at zero. in the presence of a reflected signal, the shape of the correlation function is distorted. in this case, even with zero discriminator output, the prompt correlator code is not aligned with the received signal, resulting in a tracking error [2]. in this paper, a perfect triangular function v(t) is used as the autocorrelation function for determining the tracking error. the counterfeit (reflected) signal is given as a scaled, phased, and delayed version of v(t). the received signal g(t) is then given as the sum of the los and counterfeit signals by:

g(t) = v(t) + α e^(jφ) v(t − d)    (1)

where α, φ, and d are respectively the relative amplitude, the carrier phase difference, and the delay of the counterfeit signal with respect to the los. the discriminator function is given by:

D(τ) = DL(τ) DL*(τ) − DE(τ) DE*(τ)    (2)

where DL and DE are given by (3) and (4) respectively:

DL(τ) = v(τ + Δ) + α e^(jφ) v(τ + Δ − d)    (3)

DE(τ) = v(τ − Δ) + α e^(jφ) v(τ − Δ − d)    (4)

where Δ is the correlator spacing between the early and prompt, or prompt and late, correlators. wide correlators, i.e. with 1 chip spacing between the early and late correlators (Δ = 0.5 chips), are used in this paper, except in section v.

corresponding author: omer mohsin mubarak
iii. motivation

figure 1 shows the tracking error for a multipath signal with a relative amplitude of 0.5 and carrier phase offsets (φ) of 0, π/4, π/2, 3π/4, and π with respect to the los signal. a relative amplitude of 0.5 implies that the reflected signal has half the amplitude of the los signal, and φ=0 implies that the reflected signal is in phase with the los signal. the tracking error is zero in all cases for reflected signal delays of over 1.5 chips. the plots confirm that the maximum error magnitude is obtained when the reflected signal is in phase (φ=0) or completely out of phase (φ=π) with the los signal. however, it can also be noted that the plots are not uniformly spaced as the phase difference is increased by π/4. for example, the plots for φ=0 and φ=π/4 are much closer together than the plots for φ=π/4 and φ=π/2, although in both cases the difference in phase is π/4.

fig. 1. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.5 and varying carrier phase offsets (φ) with respect to the los signal.

similar patterns are observed for other relative amplitudes of the multipath signal with respect to the los signal. figures 2 and 3 show the tracking errors for multipath signals with relative amplitudes of 0.3 and 0.8 respectively. it can be observed that the overall error magnitude is smaller for a relative amplitude of 0.3 and larger for a relative amplitude of 0.8, compared with figure 1. however, similar to figure 1, the plots in both figures are not uniformly spaced as the phase difference is increased by π/4.

fig. 2. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.3 and varying carrier phase offsets (φ) with respect to the los signal.
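such tracking error curves can be reproduced numerically from the section ii model. a minimal sketch (function names are illustrative; the reflected signal is modeled as the complex phasor αe^(jφ), consistent with the conjugate products in (2)), which finds the code offset where the early-late power discriminator crosses zero:

```python
import numpy as np

def tri(t):
    """ideal triangular code autocorrelation function v(t)."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def tracking_error(alpha, phi, d, delta=0.5):
    """code tracking error (chips) of an early-late power discriminator:
    the code offset where DL·DL* - DE·DE* crosses zero, per (2)-(4).
    alpha, phi, d: relative amplitude, carrier phase difference (rad),
    and delay (chips) of the reflected signal; delta is the spacing
    between the prompt and the early/late correlators."""
    tau = np.linspace(-0.7, 0.7, 20001)
    m = alpha * np.exp(1j * phi)                      # multipath phasor
    dl = tri(tau + delta) + m * tri(tau + delta - d)  # per (3)
    de = tri(tau - delta) + m * tri(tau - delta - d)  # per (4)
    disc = np.abs(dl) ** 2 - np.abs(de) ** 2
    idx = np.where(np.diff(np.sign(disc)) != 0)[0]    # sign changes
    i = idx[np.argmin(np.abs(tau[idx]))]              # crossing nearest prompt
    t0, t1, d0, d1 = tau[i], tau[i + 1], disc[i], disc[i + 1]
    return t0 - d0 * (t1 - t0) / (d1 - d0)            # linear interpolation
```

for example, with α=0.5 and d=0.25 chips the error is positive at φ=0 and negative at φ=π, and it vanishes when no multipath is present (α=0); passing delta=0.05 models a narrow correlator instead.
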
the non-linear change in tracking error with changing carrier phase difference provided the motivation to explore the effect of the phase difference between the los and reflected signals on the tracking error.

fig. 3. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.8 and varying carrier phase offsets (φ) with respect to the los signal.

iv. tracking error analysis

this section analyzes the changes in tracking error with changing carrier phase difference between the los and reflected signals. figures 4-6 show the tracking error plots against the relative carrier phase difference of the los and reflected signals, with relative amplitudes of 0.3, 0.5, and 0.8, respectively.

fig. 4. tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.3 and varying multipath signal delay (d) with respect to the los signal.

fig. 5. tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.5 and varying multipath signal delay (d) with respect to the los signal.

the following can be observed from these plots:

• it is confirmed that for a given relative amplitude and delay between the signals, the maximum positive error is obtained for φ=0 and the maximum negative error for φ=π, in all cases.
• the change in tracking error with increasing carrier phase difference is non-linear in all cases.
• there is a sharp increase in tracking error around φ=π in all cases.
• there is no single phase difference which gives zero tracking error in all cases, since the zero crossing of each plot is different. the zero crossing of a plot depends on all three parameters, i.e. the relative amplitude, delay, and carrier phase difference between the los and reflected signals.
• the figure 4 plots have the least variation in tracking error as the phase difference is changed from 0 to π radians. this implies that the dependence of the tracking error on carrier phase difference is higher for higher relative amplitudes of the reflected signal. fig. 6. tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.8 and varying multipath signal delay (d) with respect to the los signal. v. tracking error with narrow correlator the previous analysis used wide correlators, i.e. 1 chip spacing between the early and late correlators of a receiver. a reduced spacing of 0.1 chip between the early and late correlators, termed the narrow correlator, has been used to mitigate the tracking error caused by multipath for various global navigation satellite signals [8, 11-13]. this can be confirmed from figure 7, which shows the tracking error for a reflected signal with relative amplitude of 0.5 using a narrow correlator. fig. 7. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.5 and varying carrier phase offsets (φ) with respect to the los signal, using narrow correlators in the receiver. comparing this with figure 1, it can be seen that the tracking error has been significantly reduced. the tracking error is zero when the separation between the los and reflected signals is more than 1.05 chips, instead of 1.5 chips in figure 1. the maximum tracking error is 0.025 chips instead of the 0.25 chips obtained with wide correlators. similarly, reduced error can be observed in figures 8-9, as compared with the corresponding relative amplitude plots in figures 2-3 respectively. it can again be noted that the plots in figures 7-9 are not uniformly spaced. similar to wide correlators, the plots for φ=0 and φ=π/4 are much closer than the plots for φ=π/4 and φ=π/2, although in both cases the difference in phase is π/4. 
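the wide-versus-narrow comparison above can be illustrated with the same toy model, restated here so the snippet is self-contained. again this is a sketch under stated assumptions (triangular correlation, coherent early-late discriminator, one reflected ray), not the paper’s implementation:

```python
import math

def corr(tau):
    # ideal triangular c/a-code autocorrelation (tau in chips)
    return max(0.0, 1.0 - abs(tau))

def tracking_error(amp, delay, phase, spacing):
    # coherent early-late dll error (chips) for a los ray plus one echo
    a = amp * math.cos(phase)

    def disc(eps):
        return (corr(eps - spacing / 2) + a * corr(eps - delay - spacing / 2)
                - corr(eps + spacing / 2) - a * corr(eps - delay + spacing / 2))

    lo, hi = -0.5, 0.5  # bisect for the discriminator zero crossing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if disc(lo) * disc(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

delays = [i / 100 for i in range(1, 201)]  # 0.01 .. 2.00 chips
for spacing in (1.0, 0.1):  # wide vs narrow correlator
    worst = max(abs(tracking_error(0.5, d, 0.0, spacing)) for d in delays)
    print(spacing, round(worst, 3))  # narrow gives a much smaller worst case
```

in this toy model the error vanishes beyond 1 + spacing/2 chips, i.e. 1.5 chips for the wide correlator and 1.05 chips for the narrow correlator, matching the thresholds observed in figures 1 and 7.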
therefore, the change in tracking error against carrier phase difference is analyzed in this section for a narrow correlator receiver. fig. 8. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.3 and varying carrier phase offsets (φ) with respect to the los signal, using narrow correlators in the receiver. fig. 9. tracking error plot against signal’s relative delay for a multipath signal with relative amplitude of 0.8 and varying carrier phase offsets (φ) with respect to the los signal, using narrow correlators in the receiver. figures 10-12 show the tracking error plots against the relative carrier phase difference of the los and reflected signals using narrow correlators in a receiver, with relative amplitudes of 0.3, 0.5, and 0.8 respectively. all five observations noted in the previous section for wide correlators are also valid for these narrow correlator based plots. moreover, out of the delays of 0.25, 0.5, 0.75, 1, and 1.25 chips, the 1.25 chips plot stays at zero in all three figures, as a reflected signal delay of over 1.05 chips gives zero tracking error irrespective of the relative amplitude and carrier phase difference of the reflected signal with respect to the los signal. it can also be noted that the plots are much closer than the plots obtained using wide correlators for the same relative amplitude. for example, the plots in figure 10 are much more closely spaced than the plots in figure 4, although both are obtained for a relative amplitude of 0.3 and the same set of relative delays. fig. 10. tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.3 and varying multipath signal delay (d) with respect to the los signal, using narrow correlators in the receiver. fig. 11. 
tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.5 and varying multipath signal delay (d) with respect to the los signal, using narrow correlators in the receiver. fig. 12. tracking error plot against signal’s relative phase difference for a multipath signal with relative amplitude of 0.8 and varying multipath signal delay (d) with respect to the los signal, using narrow correlators in the receiver. vi. conclusion multipath is a source of tracking error in gps receivers, which leads to positioning errors [3, 14-17]. multipath error envelopes have been used to analyze the tracking error caused by multipath [5-9]. however, they only provide the maximum positive and negative error for a given relative amplitude and code delay between the two signals. the two maxima are obtained for relative carrier phase differences of 0 and π radians, whereas the tracking error for relative carrier phase differences between 0 and π radians had not been explored earlier. this paper has analyzed the tracking error caused by multipath and specifically the effect of carrier phase on the error. tracking error plots against the relative carrier phase difference of the los and reflected signals have been used instead of conventional tracking error plots against the relative delay of the two signals. this novel analysis confirmed that the maximum positive and negative errors are obtained for φ=0 and φ=π respectively. it has also been observed that the change in tracking error with increasing carrier phase difference is non-linear and that the error increases sharply around φ=π. the zero crossing of a tracking error plot is found to depend on the relative amplitude, delay and carrier phase difference between the los and the reflected signals, i.e. there is no single carrier phase difference between the two signals which gives zero tracking error. 
moreover, the dependence of tracking error on carrier phase difference is found to be higher for higher relative amplitudes of the reflected signal. the tracking error has also been analyzed for a receiver using narrow correlators, which are generally used to reduce the tracking error caused by multipath. the spacing between the early and late correlators was reduced to 0.1 chip, instead of the 1 chip of wide correlators. as a result, the maximum tracking error was reduced to 0.025 chips instead of 0.25 chips. moreover, the tracking error is zero when the separation between the los and reflected signals is more than 1.05 chips, instead of 1.5 chips for wide correlator receivers. the characteristics observed for the tracking error using wide correlators were also found to be valid when narrow correlators were used. moreover, the plots for different multipath signal delays were found to be much closer than the plots obtained using wide correlators for the same set of signal parameters. this implies that the tracking error is less dependent on the relative delay between the two signals when narrow correlators are used. these findings can be useful for finding better estimates of the tracking error in a gps receiver for multipath with given relative amplitude, carrier phase difference, and delay between the two signals. references [1] x. chen and f. dovis, “enhanced cadll structure for multipath mitigation in urban scenarios,” in proceedings of the 2011 international technical meeting of the institute of navigation, san diego, ca, 2011, pp. 678–686. [2] o. m. mubarak and a. dempster, “carrier phase analysis to mitigate multipath effect,” presented at the ignss symposium 2007, the university of new south wales, sydney, australia, dec. 2007. [3] o. m. mubarak and a. g. dempster, “exclusion of multipath-affected satellites using early late phase,” journal of global positioning systems, vol. 9, no. 2, pp. 145–155, 2010. [4] k. yedukondalu, a. d. sarma, and v. s. 
srinivas, “estimation and mitigation of gps multipath interference using adaptive filtering,” progress in electromagnetics research m, vol. 21, pp. 133–148, 2011, doi: 10.2528/pierm11080811. [5] t. g. ferreira and f. d. nunes, “advanced multipath mitigation techniques for gnss receivers,” presented at the 1st seminar of the portuguese committee, lisbon, portugal, nov. 2007. [6] o. m. mubarak and a. g. dempster, “analysis of early late phase in single- and dual-frequency gps receivers for multipath detection,” gps solutions, vol. 14, no. 4, pp. 381–388, sep. 2010, doi: 10.1007/s10291-010-0162-z. [7] a. pirsiavash, a. broumandan, and g. lachapelle, “characterization of signal quality monitoring techniques for multipath detection in gnss applications,” sensors (basel, switzerland), vol. 17, no. 7, jul. 2017, doi: 10.3390/s17071579. [8] a. j. v. dierendonck, p. fenton, and t. ford, “theory and performance of narrow correlator spacing in a gps receiver,” navigation, vol. 39, no. 3, pp. 265–283, 1992, doi: 10.1002/j.2161-4296.1992.tb02276.x. [9] a. pirsiavash, a. broumandan, and g. lachapelle, “performance evaluation of signal quality monitoring techniques for gnss multipath detection and mitigation,” presented at the international technical symposium on navigation and timing (itsnt), toulouse, france, nov. 2017. [10] e. d. kaplan, understanding gps: principles and applications, 2nd edition. artech house, 2005. [11] m. e. cannon, g. lachapelle, w. qiu, s. l. frodge, and b. remondi, “performance analysis of a narrow correlator spacing receiver for precise static gps positioning,” in proceedings of 1994 ieee position, location and navigation symposium plans’94, apr. 1994, pp. 355–360, doi: 10.1109/plans.1994.303337. [12] z. xuefen, c. xiyuan, and c. 
xin, “comparison between strobe correlator and narrow correlator on mboc dll tracking loop,” in 2011 ieee international instrumentation and measurement technology conference, may 2011, pp. 1–4, doi: 10.1109/imtc.2011.5944083. [13] j. h. lee et al., “a gps multipath mitigation technique using correlators with variable chip spacing,” e3s web of conferences, vol. 94, 2019, doi: 10.1051/e3sconf/20199403006, art. no. 03006. [14] m. orabi, j. khalife, a. a. abdallah, z. m. kassas, and s. s. saab, “a machine learning approach for gps code phase estimation in multipath environments,” in 2020 ieee/ion position, location and navigation symposium (plans), apr. 2020, pp. 1224–1229, doi: 10.1109/plans46316.2020.9110155. [15] t. kos, i. markezic, and j. pokrajcic, “effects of multipath reception on gps positioning performance,” presented at elmar-2010, zadar, croatia, sep. 2010. [16] i. rumora, n. sikirica, and r. filjar, “an experimental identification of multipath effect in gps positioning error,” transnav, international journal on marine navigation and safety of sea transportation, vol. 12, no. 1, pp. 29–32, mar. 2018, doi: 10.12716/1001.12.01.02. [17] t. l. dammalage, “the effect of multipath on single frequency c/a code based gps positioning,” engineering, technology & applied science research, vol. 8, no. 4, pp. 3270–3275, aug. 2018. author’s profile omer mohsin mubarak received the b.s. degree in electronics engineering from the ghulam ishaq khan institute of engineering sciences & technology, pakistan, and the m.e. and ph.d. degrees from the university of new south wales, australia, in 2006 and 2010 respectively. from 2013 to 2016, he was with iqra university, pakistan, where he served as head of the electronics engineering department and head of the computing & technology department. he is currently working as an assistant professor at jouf university, saudi arabia. 
his research interests include multipath mitigation, spoofing detection and other signal processing techniques for gnss receivers. he is a senior member of ieee, usa. engineering, technology & applied science research vol. 9, no. 4, 2019, 4463-4468 www.etasr.com mangi et al.: crack pattern investigation in the structural members of a framed two-story building due to excavation-induced ground movement crack pattern investigation in the structural members of a framed two-floor building due to excavation-induced ground movement naeem mangi, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, naeem08ce30@gmail.com; dildar ali mangnejo, department of civil engineering, mehran university of engineering & technology, shaheed zulfiqar ali bhutto campus, khairpur mir’s, pakistan, dildarali72@gmail.com; hemu karira, department of civil engineering, mehran university of engineering & technology, shaheed zulfiqar ali bhutto campus, khairpur mir’s, pakistan, engr.hemu07civil@gmail.com; m. kumar, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, manojbhoopani475@gmail.com; ashfaque ahmed jhatial, department of civil engineering, mehran university of engineering & technology, shaheed zulfiqar ali bhutto campus, khairpur mir’s, pakistan, ashfaqueahmed@muetkhp.edu.pk; f. r. lakhair, department of civil engineering, mehran university of engineering & technology, shaheed zulfiqar ali bhutto campus, khairpur mir’s, pakistan, faisallakhair722@gmail.com abstract—increased urbanization causes traffic and parking issues, especially in metropolitan cities like karachi, london and shanghai. to accommodate parking, mainly in urban areas, excavated underground parking areas under or near high-rise buildings are preferred. as a result of excavation, ground movements occur that have a major impact on structures, buildings and utilities. 
the past research usually oversimplified the surface structure as an equivalent elastic beam, which is unable to represent the behavior of a framed building realistically. in this study, the detrimental effects (i.e. crack pattern) on a two-floor rcc framed building founded on piles due to adjacent excavation-induced ground movement are investigated. an elasto-plastic coupled-consolidation analysis was adopted. a hypoplastic constitutive model was used to capture soil behavior. it is an advanced model which is able to capture the unique features of soil, namely non-linear behavior, stiffness degradation (stress, strain and path dependent), and stress-strain dependent soil dilatancy. the concrete damaged plasticity (cdp) model was used to capture the cracking behavior in the concrete beams, columns and piles. it was revealed that the induced slope and tilting are not equal. consequently, the frame was distorted. as a result, tension cracks were induced at the inner side of the column. keywords-excavation; rcc framed building; crack pattern i. introduction high-rise building load is transferred to the surrounding soil through pile foundations. as a result, a high stress regime is generated around the pile [1]. on the other hand, ground excavations result in ground movement due to induced stress release [2]. with the increase in population, urbanization is also increasing, causing traffic and parking issues, especially in metropolitan cities. to accommodate parking, mainly in urban areas, excavations under or near high-rise buildings are preferred. as a result, ground movements occur that have a major impact on nearby structures [3]. excavations carried out near a building may cause its distortion, affect its pile foundation or even cause collapse. it is important for a geotechnical engineer to assess these dangerous situations when an excavation is carried out near a building. 
to cope with parking issues in congested cities, underground transportation systems (underground parking) have been developed. such excavations are sometimes unavoidably constructed adjacent to existing buildings. this condition leads to the challenge of assessing and protecting the integrity of the framed building. authors in [3] investigated the damage mechanism and behavior of a framed building on a shallow foundation with separate footings and different infill configurations, due to tunnelling-induced ground movements, by carrying out a series of numerical analyses. they concluded that a framed building performs more sensitively on stiffer ground than on softer ground. besides that, the infill configuration plays a significant role in the performance of a framed building. when infill walls are subjected to tunnelling-caused ground movements, the structural distortion is significantly reduced. authors in [4] investigated the effects of selected geotechnical and structural parameters on the response of masonry buildings subjected to tunnelling-induced settlements. a sensitivity study was carried out to determine the influence of building cracking and soil-structure interface parameters. it was concluded that the high dependency of the structural response on the soil-structure interface stiffness and material cracking suggests that this model could be used in extensive parametric analyses to improve the existing damage design curves. authors in [1] carried out a sensitivity study on a 2d finite element model that was validated by comparison with experimental results. corresponding author: naeem mangi. 
the study investigated the effects of building weight, openings, initial damage, material properties, the applied settlement profile and the normal and shear behavior of the base interface. the results assessed the major role played by the normal stiffness of the soil-structure interface and the quasi-brittle masonry behavior. the results showed the high dependency of the final damage on soil–structure interaction and on material cracking. authors in [5-8, 27] reported that existing buildings affect the tunnelling-induced ground movement profile in a similar way as tunnelling-induced settlement impacts existing adjacent buildings. authors in [9] compared the response of different types of buildings founded on shallow foundations, which are subjected to excavation-induced ground settlement, and provided a better understanding of the complex soil-structure interaction in the building response. authors in [10] suggested estimating the stiffness of a framed building by simply adding the separate bending stiffnesses of all the floor slabs. authors in [11] carried out three centrifuge tests to study the effects of a multi-propped deep excavation on the behavior of piles in dry toyoura sand. it was concluded that lateral restraints imposed on the pile head have a significant influence on the induced pile bending moment, which can exceed the pile bending capacity. authors in [12] proposed an analytical method to evaluate the reduction of capacity and increase in settlement of a nearby pile during excavation. the pile settlement due to excavation depends on the shaft friction of the pile and the soil movement pattern. authors in [13] studied the lateral response of a pile group using a finite element method and concluded that pile bending moment and lateral deformation increase significantly with increasing excavation depth. authors in [29, 32] developed design charts for the computation of the lateral behavior of a single pile near a deep excavation in soft ground. 
before carrying out the multi-strutted deep excavation, the numerical model and its soil parameters were calibrated by using the centrifuge test results and the triaxial tests reported in [15, 18]. the aim of this study is to give an insight into the tilting behavior and settlement of a pile group by varying the pile length, excavation depth, supporting system stiffness, pile group distance from the excavation, permeability, initial working load, and soil state. in addition, the continuous changes in excess pore water pressure and the long-term settlement of the pile group for different positions of the pile toe relative to the final excavation level are studied. authors in [17] found that the maximum apparent earth pressure for the upper 10% of height exceeded the trapezoidal boundary of the apparent earth pressure diagrams initially proposed for both soft clay and stiff clay. no significant difference among the apparent earth pressure values of excavations supported by walls of different stiffness was found. authors in [11] carried out field studies on a 10m deep multi-propped excavation in overconsolidated and fissured gault clay, comparing the measured earth pressure with peck’s. the measured values were near the lower bound of peck’s chart. the strut load of the lowest prop was found to be somewhat smaller due to the low lateral stress in the ground following the construction of the diaphragm wall. authors in [17] studied numerically a case where the clear distance between the piles and the 0.8m thick diaphragm wall was 3m. there was reasonable agreement between the measured and predicted results regarding the location of maximum deflection and bending moment. authors in [28] concluded that the computed bending moments in the step-tapered piles based on upper-bound finite element results were smaller than their moment capacity, indicating that the actual moments in the piles were not large enough to cause cracking. 
apart from field monitoring, this problem has been investigated by means of centrifuge modeling in soft kaolin clay [18]. the authors concluded that the distance between the pile and the diaphragm wall is an important parameter which plays a major role in the bending moment generated in the pile. in the presence of an initial applied load, the soil surrounding the pile foundation experiences a higher stress level before the start of the adjacent excavation. the excavation-induced stress release in the ground then induces bending moment and settlement. continuum numerical methods using finite-element computer programs offer powerful tools to model complex construction processes, including deep excavations. the ability to predict excavation-induced ground movements reliably, however, is wholly dictated by the input of representative parameters for the soil and the other structural components of the excavation [6, 8]. existing numerical codes are extremely demanding of such prior information. a practical alternative to complement numerical analysis is to discover mechanisms of behavior by means of model tests and to use them to understand the performance of buildings when subjected to excavation-induced deformations. in order to observe these mechanisms, authors in [24, 25] studied small-scale models tested in an 8m diameter geotechnical centrifuge machine at cambridge university. the aim of these centrifuge tests was to simulate the excavation-soil-building interaction and to understand the mechanisms involved, rather than to model a specific prototype. therefore, it was decided to use sand for all the centrifuge tests due to the significantly smaller preparation time required before testing compared to clay. an excavation system, which adopted a new technique for simulating the excavation process in-flight, was developed to investigate the interaction between the soil and model buildings. the test results provided new insight into the fundamental mechanisms involved. 
in this paper, a detailed description of the centrifuge models (and associated test procedures) is presented, followed by the results of two key centrifuge tests. most of the previous research was focused on the damaging effects of excavation on structures founded on shallow foundations. surface structures were usually oversimplified as equivalent elastic beams, which does not represent the behavior of a framed building realistically. the damaging effects on piles due to excavation were rarely reported. structural components such as piles, beams and columns were assumed to be elastic. so, the main objective of this study is to estimate the crack patterns induced in the structural members due to an adjacent excavation. to achieve this, a three-dimensional finite element analysis (i.e. a coupled-consolidation parametric study) was conducted. the angular distortion and crack pattern in the framed building are reported and discussed. ii. three-d coupled consolidation analysis a. excavation description the 12m deep excavation is carried out near a building in vertical stages of 3m. a diaphragm wall with a depth of 18m and thickness of 0.6m is used to support the soil mass on the excavation side. the clear distance between the diaphragm wall and the adjacent pile is kept at 3m. the wall penetration depth to excavation depth ratio is typically 0.5 to 2 [22], so a value of 0.5 is taken in this study. props are used at vertical intervals of 3m along the excavated soil mass to support the diaphragm wall. the props are modeled with an axial rigidity of 81×10³ knm [17]. the horizontal spacing of the props is 8m. figure 1 illustrates the geometry of a typical case with an excavation depth of 12m. fig. 1. geometry of the problem adopted in this study b. 
finite element mesh and boundary conditions figure 2 describes the finite element mesh used in this case, with a size of 20m×25m×22m. in this three-dimensional parametric study, solid elements (8-node trilinear continuum) were used to model the soil, pile and diaphragm wall. solid continuum elements are small material blocks. they can be used to build models of nearly any shape subjected to different loadings because they are connected to each other like bricks in a building or tiles in a mosaic. each node of a solid continuum element has three degrees of freedom describing displacements in the x, y and z directions and one degree of freedom taking pore pressure into account. the props were modeled using truss elements, which are long, slender structural members that can transmit only axial force and cannot transmit moment; accordingly, each node of a truss element has translational degrees of freedom only. roller supports were applied to restrain the horizontal movement of the sides of the mesh. the base of the mesh was restrained in every direction by applying pin/fixed supports. the ground water table was taken at the ground surface. at the geostatic state, the pore pressure distribution was taken as hydrostatic. the top of the mesh was considered a drainage boundary. frictional interfaces were assumed between the pile and the soil and between the wall and the soil. the interface was modeled by coulomb’s friction law, in which the interface friction coefficient (µ) and limiting displacement (γlim) are the input parameters. an interface friction coefficient of 0.35 and a limiting shear displacement of 5mm were assumed to achieve full mobilisation of the interface friction [19]. excavation was carried out by deactivating the soil elements within the excavation zone. props were installed on the wall by activating the corresponding truss elements. fig. 2. finite element mesh and boundary conditions c. 
constitutive model and its parameters 1) hypoplastic model a hypoplastic constitutive model was used to capture soil behavior. it is an advanced model which is able to capture the unique features of soil such as non-linear behavior, degradation of stiffness and stress-strain dependent dilatancy. basically, the non-linear behavior of granular material within the medium to large strain range (due to monotonic loading) is captured by using the basic hypoplastic model [20-22]. five parameters are required for the basic model: n, λ*, κ*, φc and r. the parameters n and λ* control the location of the normal compression line and its slope in the ln(1+e) vs ln p′ diagram. similarly, the parameter κ* regulates the slope of the unloading line. the value of r is responsible for controlling the large shear strain modulus. the critical state friction angle is represented by φc. authors in [31] further implemented the concept of intergranular strain to improve the basic hypoplastic model. moreover, five additional parameters (required by the intergranular strain) were included in the basic model for considering the strain and path dependency of soil stiffness: r, βr, χ, mt and mr. βr and χ determine the rate of stiffness degradation and r is an elastic range. k0 (the coefficient of lateral earth pressure at rest) was calculated by jaky’s equation: k0 = 1 − sin φ′ (1). the hypoplastic clay model parameters are given in table i. table i. 
hypoplastic model parameters. critical state angle, φ′: 22°; slope of normal compression line, λ*: 0.11; slope of unloading line, κ*: 0.026; position of normal compression line, n: 1.36; shear stiffness at medium to large strain level parameter, r: 0.65; initial shear stiffness with 180° strain path, mr: 14; initial shear stiffness with 90° strain path, mt: 11; elastic range, r: 1×10⁻⁵; rate of degradation of stiffness with strain, βr: 0.1; degradation rate of stiffness with strain, χ: 0.7; void ratio at initial condition, e: 1.05; density: 1136 kg/m³; coefficient of permeability, k: 1×10⁻⁹ m/s. d. concrete damaged plasticity (cdp) model the cdp model was used to capture the cracking behavior in the concrete beams, columns and piles. the cdp model provides a general capability for modeling concrete and other quasi-brittle materials in all types of structures (beams, trusses, shells, and solids). this model uses the concepts of isotropic damaged elasticity in combination with isotropic tensile and compressive plasticity to represent the inelastic behavior of concrete. it can be used for plain concrete, even though it is intended primarily for the analysis of reinforced concrete structures, and it can be used with rebar to model concrete reinforcement. it is designed for applications in which concrete is subjected to monotonic, cyclic, and dynamic loading under low confining pressures. table ii. adopted concrete parameters: young’s modulus, e: 35 gpa; poisson’s ratio, ν: 0.3; density, ρ: 2400 kg/m³. e. numerical modeling procedure the numerical modeling procedure for a typical case is summarized as: step 1: set up the initial boundary and initial stress conditions (i.e. static stress conditions with k0=0.63). step 2: activate the brick elements representing the single piles (modeled as wished-in-place). step 3: the construction of the two-floor rcc building was carried out as follows: (a) construct the plinth beam by activating its elements on the single piles. 
(b) erect the ground floor columns on the plinth beam, followed by the ground floor beams, by activating their elements. (c) similarly, construct the columns and beams of the first floor of the frame. step 4: allow the excess pore pressure, generated as a result of the application of the frame load on the piles, to dissipate. step 5: activate the brick elements representing the diaphragm wall. step 6: simulate the staged multi-propped excavation. after excavating to 3m depth, the first level of props is installed at 1m below the ground surface. step 7: repeat step 6 to excavate the next stages and install props until the last stage of excavation (i.e. he=12m) is completed. iii. interpretation of computed results a. induced frame angular distortion due to excavation angular distortion (defined as the shearing distortion of the frame) can be induced in the frame by the adjacent excavation. mathematically, the angular distortion is calculated as: angular distortion = slope − tilting (2). figure 3 shows the induced angular distortion in the two-floor frame (i.e. bay 1 and bay 2) due to the excavation-induced stress release. the horizontal axis represents the normalized excavation depth and the vertical axis shows the angular distortion induced in the bays. the angular distortion at any vertex is taken as positive if the frame is compressed diagonally at that vertex. fig. 3. induced angular distortion in the frame due to excavation it can be observed that negative angular distortion is induced in bay 1 during excavation. this implies that the diagonals bd and ac of bay 1 were stretched and compressed during excavation, respectively. this can be attributed to the differential settlement and tilting of the frame due to the excavation-induced stress release. consequently, the column ad of the frame (in bay 1) is subjected to tension towards its inner side. as a result, a tension crack can be induced at the inner side of the column (discussed later). 
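equations (1) and (2) can be checked numerically. the sketch below uses hypothetical helper names; it reproduces the k0 = 0.63 used in step 1 from the critical state angle of 22° in table i:

```python
import math

def jaky_k0(phi_deg):
    # eq. (1): coefficient of lateral earth pressure at rest, k0 = 1 - sin(phi')
    return 1.0 - math.sin(math.radians(phi_deg))

def angular_distortion(slope, tilting):
    # eq. (2): shearing distortion of a bay (both terms in radians)
    return slope - tilting

print(round(jaky_k0(22.0), 2))           # critical state angle from table i
print(angular_distortion(0.004, 0.001))  # nonzero whenever slope != tilting
```

this also illustrates the conclusion drawn later: a bay is distorted only when the excavation-induced slope and tilting differ.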
Unlike Bay 1, Bay 2 was distorted positively at vertex C during excavation, and as the excavation progresses the positive distortion at vertex C of Bay 2 increases.

Engineering, Technology & Applied Science Research, Vol. 9, No. 4, 2019, 4463-4468 | www.etasr.com | Mangnejo et al.: Crack Pattern Investigation in the Structural Members of a Framed Two-Story Building …

This implies that the diagonals CE and BF of Bay 2 were compressed and elongated, respectively, during excavation. This can be attributed to the differential settlement and tilting of the frame due to the excavation-induced stress release. Consequently, the column CF of the frame (in Bay 2) is subjected to tension towards its inner side. As a result, a tension crack can be induced at the inner side of the column (discussed below).

B. Progressive Development of the Tension-Induced Crack Pattern in the Frame

As discussed above, angular distortion is induced in the frame, and as a result tension develops in the columns of Bay 1 and Bay 2. Figure 4 illustrates the development of tension-induced cracks (i.e. the crack pattern) in the plinth beam and columns on excavation completion. For reference, the tension damage before excavation is included in the figure. The tension damage is expressed in terms of the variable dt (i.e. DAMAGET) in the numerical modeling. The values of dt range from 0 to 1, where 0 represents no tension damage and 1 represents complete tension damage in the structural member. As expected, no tension damage (DAMAGET = 0) develops in the frame before excavation. However, as the excavation progresses, tensile damage is induced in the plinth beam and the lower part of the ground floor columns.
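In the CDP framework, the tension damage variable dt degrades the elastic stiffness, so that the unloading modulus becomes (1 − dt)·E0. The snippet below is only an illustrative sketch of that relation, not the constitutive integration performed by the finite element code; the value dt = 0.97 matches the maximum reported for the plinth beam.

```python
def degraded_modulus(e0_gpa, dt):
    """CDP isotropic damaged elasticity: E = (1 - dt) * E0.

    dt = 0 means undamaged; dt = 1 means complete loss of tensile
    stiffness in the structural member.
    """
    if not 0.0 <= dt <= 1.0:
        raise ValueError("damage variable dt must lie in [0, 1]")
    return (1.0 - dt) * e0_gpa

# Undamaged concrete modulus from Table II, and the peak tension damage
# dt = 0.97 computed in the plinth beam on excavation completion.
print(degraded_modulus(35.0, 0.0))   # stiffness before excavation
print(degraded_modulus(35.0, 0.97))  # small residual stiffness
```

This makes concrete the reported dt = 0.97: the elastic stiffness of the damaged material drops to 3% of its initial value, which is why the affected portion of the plinth beam is interpreted as cracked.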
On excavation completion, the magnitude of the tension damage (dt,max) became as high as 0.97 in the bottom layer of the portion of the plinth beam resting on the pile. This suggests that the maximum induced tensile stress in that portion of the plinth beam exceeded the tensile yield stress of the concrete. Consequently, the stiffness and strength of the concrete degraded significantly.

Fig. 4. Excavation-induced cracking pattern in the frame.

IV. Conclusion

The induced slope and tilting of the frame are not equal; consequently, the frame was distorted and, as a result, tension cracks were induced at the inner side of the columns. On completion of the excavation, the magnitude of dt became as high as 0.97 in the bottom layer of the portion of the plinth beam resting on the pile.
ETASR Engineering, Technology & Applied Science Research, Vol. 3, No. 1, 2013, 363-367 | www.etasr.com

Comparison Between Conditions of Major Roads Within and Outside the Port of Durban

Oscar Kunene, Durban University of Technology, Durban, South Africa, mbongenik@yahoo.com
Dhiren Allopi, Durban University of Technology, Durban, South Africa, allopid@dut.ac.za

Abstract—Road traffic at the Port of Durban has increased over the past years and, combined with a lack of maintenance, this has resulted in road deterioration. Roads are considered the most important transport mode at the Port of Durban and an important means of facilitating the economic growth of local, regional and national industries. For the port to remain globally competitive under the current trend of globalization, it must be ensured that its roads are well maintained. The purpose of this paper is to provide an overview of road condition within and outside the port. Verification and assessment of the condition of the eight existing major roads was conducted, and comparisons between the condition of these roads within and outside the port are highlighted. Conclusions and recommendations are drawn based on the findings.

Keywords—road condition; verification; assessment; pavement management system; Port of Durban

I. Introduction

A. Background of the Study

In the past, all roads within the Port of Durban were owned and maintained by Transnet National Ports Authority (TNPA, formerly known as Portnet). Currently, not all roads within the port are owned and maintained by TNPA; examples include public roads (eThekwini Municipality) and access roads to private terminals (lessees). Some roads are divided into two, with one section falling within the port boundary and another falling outside it; these roads are maintained by both TNPA and eThekwini Municipality. TNPA is the landlord of the Port of Durban.
Private companies and other Transnet divisions, such as Transnet Port Terminals and Transnet Freight Rail, lease land from TNPA. eThekwini Municipality is the local government of the city of Durban, in which the Port of Durban is located. In the eighty-five years leading up to 1995, the South African Railways and Harbours held a monopoly on transport over a 50 km lead distance from the port, and therefore all cargo owners, both import and export, were obliged to dispatch their products by rail. This led to large areas of the Bayhead becoming the preserve of the railways, and large marshalling yards and carriage and wagon workshops were established in the area. When rail was the dominant mode of transport to the port, all the marshalling yards were used and, in fact, the lack of marshalling space often proved to be the bottleneck of the port [1].

In the last fifteen years, with the deregulation of road transport, there was an immediate and extensive switch of general cargo from rail to road, the current split being close to 80% road and 20% rail. This switch has placed tremendous pressure on the road network, while railway facilities are now greatly under-utilised and the usage of this prime space needs to be incorporated into the future planning of the port [1].

Based on the service level agreement of eThekwini Municipality, which specifies response times for road maintenance, the reasonable time for the repair of potholes is within 48 hours; sinkholes and traffic signals are attended to within 24 hours; and road and sidewalk repairs, reinstatement of trenches, broken kerbs and road signs are dealt with within 10 days [2].

B. Objectives of the Study

The main purposes of the study were:
• To compare the condition of roads within and outside the port which are owned by different authorities.
• To identify and assess the condition of the eight major roads within and outside the port.
• To determine factors that affect road condition.
• To recommend measures based on the findings.

C. Study Limitations

The study focuses on comparisons between the conditions of roads within and outside the Port of Durban. It covers the eight major roads which are the main accesses to the port. Figure 1 shows the eight major roads within the Port of Durban that connect the south, west and north of eThekwini Municipality: Bayhead, Quayside, Maydon, Rick Turner (formerly known as Francois Road), Wisely, South Coast, Bluff and Iran Roads. Sections of these roads extend outside the port boundary; all roads highlighted in dotted lines are sections outside the port.

Fig. 1. Road network inside and outside the Port of Durban.

II. Methodology

The pavement management system manual was used as a reference for the physical site measurements conducted as part of the asset verification process. Verification and assessment were conducted as per Table I.

Table I. Schedule of verification and assessments

Road name     Date of survey   Length within port (km)   Length outside port (km)   Lanes in each direction
Quayside      25/10/2011       3.0                       0.3                        1
Maydon        27/10/2011       2.4                       0                          1
Rick Turner   01/11/2011       0.4                       4.0                        1
Wisely        01/11/2011       0.6                       0                          2
Bayhead       03/11/2011       5.0                       0.5                        2
South Coast   03/11/2011       2.3                       7.0                        1
Bluff         10/11/2011       2.4                       6.0                        1
Iran          10/11/2011       1.6                       0                          1

Note: The full lengths of Maydon, Wisely and Iran Roads fall within the port. Bayhead, Quayside, Rick Turner, South Coast and Bluff Roads are divided into two sections; the measurements recorded represent the sections within and outside the port.

Visual inspection (the eyeball method) was identified as the means of assessing the condition of the road infrastructure. This method is a quick visual inspection of the road on a routine basis to identify problems.
The visual inspections were conducted on all eight major roads within the port and on the sections of roads outside the port, and a comparison between the two was made. During the visual inspection of each road, an inspection report was compiled covering the following components: road markings, traffic signs, potholes, cracks, rutting, aggregate loss, riding quality, surface drainage and unpaved shoulders. Each component was rated using the rating method in Table II.

Table II. Rating method [3]

Percentage   Description   Rating   Detailed description
100-90%      Excellent     A        New and perfect; no maintenance work required at this stage
89-70%       Very good     B        Looks like new; minor maintenance work may be required at a later stage
69-50%       Good          C        Moderate; maintenance work may be required within 12 months
49-30%       Fair          D        Reasonable, but maintenance work may be required within 6 months
29-10%       Poor          E        Not safe; needs urgent attention
9-0%         Very poor     F        Very poor; reconstruction work required urgently

The findings of the assessment for the eight major roads within and outside the port (where applicable) were recorded on inspection reports. Table III shows an example of an inspection report for the section of Bayhead Road within the port.

Table III. Inspection report conducted on Bayhead Road

Component           Weight   Rating   Weighted average
Road markings       10       65%      6.5
Traffic signs       10       65%      6.5
Potholes            20       45%      9.0
Cracks              10       45%      4.5
Rutting             10       45%      4.5
Aggregate loss      10       45%      4.5
Riding quality      10       50%      5.0
Surface drainage    15       45%      6.75
Unpaved shoulders   5        65%      3.25
Total               100               50.5

The weighting of each component was chosen based on its importance and on the damage that could be caused if that component were not repaired. The rating score was based on the condition of the component, using Table II during the rating process.
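The overall score in Table III is a simple weight-by-rating sum, which is then mapped back to the condition bands of Table II. The sketch below reproduces that arithmetic for Bayhead Road; the function names are illustrative, not from the paper.

```python
def weighted_score(components):
    """Overall road score: sum of weight * rating over all components.

    `components` maps component name -> (weight out of 100, rating as a
    fraction). Returns the 0-100 score used in Table III.
    """
    return sum(weight * rating for weight, rating in components.values())

def category(score):
    """Map a 0-100 score onto the rating bands of Table II."""
    bands = [(90, "A"), (70, "B"), (50, "C"), (30, "D"), (10, "E")]
    for threshold, letter in bands:
        if score >= threshold:
            return letter
    return "F"

# Bayhead Road (section within the port), data taken from Table III.
bayhead = {
    "road markings": (10, 0.65), "traffic signs": (10, 0.65),
    "potholes": (20, 0.45), "cracks": (10, 0.45),
    "rutting": (10, 0.45), "aggregate loss": (10, 0.45),
    "riding quality": (10, 0.50), "surface drainage": (15, 0.45),
    "unpaved shoulders": (5, 0.65),
}
score = weighted_score(bayhead)
print(round(score, 2), category(score))  # Table III total: 50.5, band C ("good")
```

Because potholes and surface drainage carry the largest weights, poor ratings in those two components pull the overall score down fastest, which matches the weighting rationale stated above.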
III. Results

The results from the inspection reports conducted on the eight major roads within the port are shown in Figure 2. Maydon and South Coast Roads are the lowest-rated roads and are in a poor condition, while Quayside Road is the highest rated and in a very good condition.

Fig. 2. Summary of major road conditions within the port.

The following results were obtained during the assessment of roads within the port:
• Quayside Road (71.5%) falls under category B (very good): it looks like new, and minor maintenance work may be required at a later stage.
• Rick Turner Road (59.5%), Iran Road (59.5%) and Bayhead Road (50.5%) fall under category C (good): their condition is moderate, and maintenance work may be required within 12 months.
• Wisely Road (49%) and Bluff Road (45%) fall under category D (fair): their condition is reasonable, but maintenance work may be required within 6 months.
• Maydon Road (28.8%) and South Coast Road (28%) fall under category E (poor): they are not safe and need urgent attention.

Fig. 3. Longitudinal and crocodile cracks on South Coast Road.

Figure 3 shows the longitudinal cracks found on sections of Bayhead Road within the port, where the asphalt layer failed at the joint. The results from the inspection reports conducted on the five major roads that have sections falling outside the port, namely Bayhead, Quayside, Rick Turner, South Coast and Bluff Roads, are shown in Figure 4. The full lengths of Maydon, Wisely and Iran Roads fall within the port and hence could not be assessed. South Coast Road is low rated and in a fair condition, while all other roads are rated above 70%.

Fig. 4. Assessment of road conditions outside the port.

The following results were obtained during the assessment of roads outside the port:
• Bayhead Road (94%) and Quayside Road (91.5%) fall under category A (excellent): they are new and perfect.
No maintenance work is required at this stage.
• Rick Turner Road (71%) and Bluff Road (75%) fall under category B (very good): they look like new, and minor maintenance work may be required at a later stage.
• South Coast Road (40%) falls under category D (fair): its condition is reasonable, but maintenance work may be required within 6 months.

Figure 5 shows the typical defects found on sections of Rick Turner Road outside the port, near King Edward Hospital.

Fig. 5. Aggregate loss on Rick Turner Road.

As shown in Figure 6, the sections of roads falling outside the port are in a good condition compared to the sections within the port.

Fig. 6. Comparison between roads within and outside the port.

IV. Factors Affecting Road Conditions

A number of factors contribute to the poor condition of roads within the Port of Durban:
• growth of container cargo,
• increase in the dimensions and weight of trucks,
• transport deregulation over the past years and overloading.

A brief discussion of each factor follows.

A. Growth of Container Cargo

The Port of Durban has been experiencing a high growth rate in container traffic, which impacts the condition of the road infrastructure. About 70% of South African container cargo is handled at the Port of Durban. The port has a dedicated container terminal that handles 2.7 million twenty-foot equivalent units (TEU) per annum, and the average growth of container volumes over the past few years has been between 5% and 7% per annum.

Fig. 7. Container volumes at the Port of Durban [4].

Figure 7 shows the container volumes handled at the Port of Durban over the years, increasing from 1.9 million TEU in 2005 to 2.7 million TEU in 2011. This has put pressure on the roads within the port and resulted in deterioration of the road condition [4].
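The quoted 5-7% annual growth can be checked against the endpoint volumes in Figure 7 with a compound annual growth rate (CAGR) calculation. The sketch below is a simple illustration of that check.

```python
def cagr(start_volume, end_volume, years):
    """Compound annual growth rate between two traffic volumes."""
    return (end_volume / start_volume) ** (1 / years) - 1

# Container volumes at the Port of Durban, from Figure 7:
# 1.9 million TEU in 2005 growing to 2.7 million TEU in 2011.
growth = cagr(1.9, 2.7, 2011 - 2005)
print(f"{growth:.1%}")  # about 6% per annum, within the reported 5-7% range
```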
B. Increase in the Dimensions and Weight of Trucks

There has been an increase in the dimensions and weight of trucks over the years. The overall length of trucks increased rapidly from 13 m in 1960 to the 22 m still in use today (set in 1996), almost double, while the gross combination mass (GCM) increased from 38000 kg to 58800 kg. These changes resulted from the global growth of truck capacity, and they have a major impact on road condition at the Port of Durban [5]. The history of truck dimensions and weights is as follows:
• 1970: overall length increased from 13 m to 17 m; GCM increased from 38000 kg to 41020 kg.
• 1980: overall length increased from 17 m to 20 m; GCM increased from 41020 kg to 47007 kg.
• 1990: overall length increased from 20 m to 22 m; GCM increased from 47007 kg to 56000 kg.
• 1996: overall length remained at 22 m; GCM increased from 56000 kg to 58800 kg, with a 5% overload allowance [6].

C. Transport Deregulation over the Past Years and Overloading

By the 1970s, the government realized that transport deregulation was necessary and that the railway administration would have to be relieved of its former social obligations (i.e. the transport of uneconomic traffic on money-losing branch and secondary lines, and passenger services in general). The form of transport deregulation was debated for another ten years; by 1989, de facto deregulation had taken place [7]. A government white paper on transport was published in 1991. While specific issues were identified, consensus could not be reached on implementing the necessary control mechanisms, such as the Road Traffic Quality System (RTQS), or on how fair and equitable road-user fees could be levied on motor vehicles of different sizes.
Nevertheless, the legislation was enacted, the road permit system was abolished, and a transport "free-for-all" was allowed to develop [7]. The government enacted further legislation, while the Department of Transport unilaterally changed existing statutes, which resulted in larger heavy vehicles appearing on the highways. Axle loads were increased and the bridge formula was relaxed, but the RTQS was not implemented. Competition within the road industry, and not just against rail, led to price cutting, overloading, un-roadworthy vehicles and excessive pressure on truck drivers to work long and uncontrolled hours [7].

Enforcement is a particular issue in the area of truck overloading, where some haulers are able to improve their costs by overloading their vehicles, secure in the knowledge that enforcement will be sporadic at best. While the haulers realize a cost advantage, they create an additional cost in road maintenance and repair. Truck overloading is one of the principal sources of road damage in the country: the 30% to 40% of trucks that are overloaded cause 60% of the damage to the road network [1].

V. Conclusions and Recommendations

Maydon Road and the sections of South Coast Road which fall within the port are low-rated roads in a poor condition; they are not safe and require urgent attention. Quayside Road is in a better condition than the other roads and falls under category B (very good). The section of South Coast Road which falls outside the port is low rated and in a fair condition. All sections of roads falling outside the port are owned by eThekwini Municipality, and most of them are in a good condition except for South Coast Road. The public is involved in ensuring that these roads are well maintained by informing eThekwini Municipality via a 24-hour toll-free number.
Transnet National Ports Authority owns all sections of roads which fall within the port; they are in a fair condition, but much work still needs to be done. The public is not well informed about who can assist when there are defects on these roads, and the major problems are experienced when there is a change of ownership. It is recommended that these eight major roads have their own budget and be assessed separately from other roads because of their importance. Ownership must be made clear by installing boards or signs on each road indicating the owner's name and contact details, so that road users can easily report any defect on any road. Fast-tracking the handover period of ownership will ensure continuity of maintenance. More attention must be paid to regular inspections of the roads within the port in order to raise their standard to that of the roads outside the port. It is recommended that the area supervisor, maintenance manager and road engineer of both parties (eThekwini Municipality and Transnet National Ports Authority) conduct visual inspections annually, and that problem areas be inspected as often as required. It is further recommended that on-site or laboratory material testing be conducted as and when there are failures of the base, sub-base and road surface (asphalt) layers. The road freight industry in South Africa needs to be regulated, especially regarding overloading and truck driving hours; weighbridges and truck toll fees are possible solutions. Incidences of motor vehicle roadworthiness non-compliance have been highlighted by the large number of trucks, and truck driver working hours have become a major issue that must be addressed urgently. The establishment of a railway safety regulator has set standards for the rail industry, but a similar regulator is urgently needed for the road industry.

Acknowledgment

The author thanks supervisor Professor Dhiren Allopi for his guidance and support throughout the study.
Thanks to Transnet National Ports Authority and eThekwini Municipality for allowing access to their information. The financial support of the Durban University of Technology is also greatly appreciated.

References
[1] Department of Transport KwaZulu-Natal, National Transport Master Plan 2005-2050, 2008.
[2] City of Durban, Standard Engineering Specification, 1992.
[3] Transnet National Ports Authority, Port Maintenance Manual, Road Module, 2004.
[4] Transnet National Ports Authority, Summary of Cargo Handled at Ports of South Africa, 2012. [Online]. Available: http://www.transnetnationalportsauthority.net/doingbusinesswithus/calendar%20years/calendar%20year%202005.pdf [accessed October 2012].
[5] H. Ghoos, J. Korsgaard, L. Runge-Schmidt, H. Agerschou, "Berth and terminal design in general. Storage facilities and cargo handling systems", in Planning and Design of Port Marine Terminals, Thomas Telford Ltd, 2004.
[6] B. Sheat, "Truck dimensions and weight", Railways Africa, Issue 5, pp. 13-20, 1997.
[7] Road and Rail Association of South Africa, Modal Issues, 2007. [Online]. Available: http://www.rra.co.za/?page_id=15738 [accessed October 2012].

Engineering, Technology & Applied Science Research, Vol. 8, No.
4, 2018, 3168-3171 | www.etasr.com

Photovoltaic Systems and Net Metering in Greece

Fotis Mavromatakis, Department of Electrical Engineering, Technological Educational Institute of Crete, Heraklion, Crete, Greece, fotis@staff.teicrete.gr
George Viskadouros, Department of Electrical Engineering, Technological Educational Institute of Crete, Heraklion, Crete, Greece, viskadouros@staff.teicrete.gr
Hara Haritaki, Department of Accounting and Finance, Technological Educational Institute of Crete, Heraklion, Crete, Greece, haritaki@staff.teicrete.gr
George Xanthos, Department of Business Administration, Technological Educational Institute of Crete, Heraklion, Crete, Greece, xanthos@staff.teicrete.gr

Abstract—The latest measure for the development of photovoltaics in Greece utilizes the net metering scheme. Under this scheme, the energy produced by a PV system may either be consumed by the local loads or be injected to the grid. The final cost reported in an electricity bill depends upon the energy produced by the PV system, the energy absorbed from the grid and the energy injected to the grid. Consequently, the actual electricity consumption profile is important in estimating the benefit from the use of this renewable energy source. The latest state statistics for households in Greece reveal that the typical electrical consumption is 3750 kWh, while 10244 kWh are consumed in the form of thermal energy. We adopt the above amount of electrical energy in our calculations, but assume four different scenarios. These different hourly profiles are examined to study the effects of synchronization upon the final cost of energy.
The above scenarios are applied to areas in different climate zones in Greece (Heraklion, Athens and Thessaloniki) to examine the dependence of the financial results upon the hourly profiles and the solar potential, with respect to internal rate of return, payback time, net present value and levelized cost of energy. These parameters are also affected by the initial system cost and the financial parameters.

Keywords—photovoltaic; net metering; modeling; financial

I. Introduction

One of the support mechanisms for promoting photovoltaic (PV) technology and reducing energy costs for residential and commercial customers is the local production of renewable energy (self-production). The net metering scheme involves grid-connected systems and is available in many countries worldwide. Utilities can better manage peak loads, since PV systems generate most of their power around noon, and local power production reduces the strain on the electrical distribution system as well as the transmission and distribution losses. Part of the energy generated by the PV system may be consumed by the loads (self-consumption) of the owner of the PV system, e.g. lights, refrigerators, pumps, etc., while any surplus energy is exported to the grid. The owner of the PV system is billed for the net energy, which is basically equal to the energy retrieved from the grid minus the energy injected to the grid. In Greece, the relevant law was introduced in 2014. Recently, in 2017, the Greek state set up the details of net metering and introduced the concept of virtual net metering, under which the points of energy production and consumption may differ electrically or spatially. In addition, it is foreseen that a customer may incorporate more points of consumption into the energy balance agreement. In this work we focus on the net metering scheme for residential customers. The total billing of energy in households involves basically two parts.
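The financial figures of merit named above (net present value, internal rate of return, payback time, levelized cost of energy) can all be computed from an annual net cash-flow series. The sketch below is a generic illustration with hypothetical numbers (a 4000 EUR system saving 600 EUR per year for 25 years at a 3% discount rate, producing 6000 kWh per year); it is not the authors' model, which additionally accounts for the detailed billing structure described in the paper.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at year 0 (the investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    """Internal rate of return by bisection on NPV(rate) = 0.

    Valid for the usual pattern of one negative outlay followed by
    positive returns, for which NPV decreases monotonically with rate.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cashflows):
    """First year in which the cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None

def lcoe(capex, annual_cost, annual_energy_kwh, rate, years):
    """Levelized cost of energy: discounted costs over discounted energy."""
    factor = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    return (capex + annual_cost * factor) / (annual_energy_kwh * factor)

# Hypothetical residential case: 4000 EUR investment, then 600 EUR/yr
# net savings for 25 years; 50 EUR/yr O&M; 6000 kWh/yr; 3% discount rate.
flows = [-4000.0] + [600.0] * 25
print(round(npv(0.03, flows), 2))
print(round(irr(flows), 4))
print(payback_years(flows))
print(round(lcoe(4000.0, 50.0, 6000.0, 0.03, 25), 4))
```

With these inputs, a higher degree of synchronization would enter through larger annual savings, which raises the NPV and IRR and shortens the payback time.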
the first part refers to the cost of the actual energy consumed by the customer ("energy supply charges"). the second part refers to a few regulated costs like those related to transmission, distribution, services of common wealth, reduction of gas pollution, special fees and other charges ("regulated costs"). the "energy supply charges" depend upon the specific energy supplier company, which may be other than the greek public power corporation (ppc), whereas the "regulated costs" are common to all energy suppliers. both categories of costs are calculated in a specific way and in this work those defined by the ppc are adopted. we explore the implications of net metering at three different geographical areas in greece. for each area, we assume a number of hourly profiles for the loads resulting in the same annual energy consumption. adopting typical values for the financial data, the payback times, net present value and levelized cost of energy are calculated and compared.

ii. model data

the input model data may be distinguished into two categories. the first involves the data related to the photovoltaic system while the second involves the data related to the consumer load. the pv*sol software was utilized for the hourly pv calculations [1]. this software makes use of the meteonorm database to retrieve solar irradiance and meteorological data for the selected sites: heraklion (crete), athens (attica) and thessaloniki (northern greece) [2]. the peak power of each pv system was chosen to fully match the consumed energy. in all cases, the pv system comprises a typical commercial inverter with a nominal power around 2.5 kw, while the modules are characterized by peak powers in the range of 250-300 w at stc conditions. table i summarizes the basic data for all three sites.

table i. basic data
site | ghi (kwh/m2) | gt (kwh/m2) | pp (kw)
heraklion | 1870 | 1960 | 2.43
athens | 1710 | 1800 | 2.65
thessaloniki | 1580 | 1670 | 2.80

according to the greek statistical authority, in 2013 the average annual electrical energy consumption of a household was 3750 kwh, while the corresponding total energy requirement amounted to 13994 kwh [3]. the additional energy of 10244 kwh refers to thermal energy which is used for space heating (84.9%), water heating (4.4%) and cooking (9.7%). the fuel mostly used to generate thermal energy is diesel oil (60.3%), followed by wood burning (23.8%); other sources of energy follow with much smaller percentages. in this work we focus on the electrical energy and it is the energy of 3750 kwh that will be considered in the net metering scheme. since the "regulated costs" depend mainly upon the energy consumed from the grid, it is important to understand the concept of synchronization between the generation of pv power and the energy profiles of the loads.

a. the hourly load profiles

formally, the degree of synchronization, s, is defined as the ratio of the self-consumed energy over the energy produced by the pv system. the self-consumed energy is calculated as the difference between the ac energy (eac) produced by the pv system minus the ac energy injected to the grid (einj). the degree of synchronization may vary from 0% to 100%. the minimum degree refers to loads that are basically completely out of phase with the solar energy (e.g. only night loads), while the maximum refers to loads that exactly match the produced pv energy. consequently, the billing costs depend upon the degree of synchronization since it affects the amount of energy consumed from the grid. the first load curve that is adopted assumes that all loads are on during the night, resulting in the worst-case scenario (s1), i.e. the highest cost for the consumer under the net metering scheme. the second scenario (s2) utilizes results from a short survey performed by the centre of renewable energy sources (cres) in the area of athens, and the average hourly profile of all measurements is adopted in the current work [4]. the third load profile (s3) is available as a built-in profile in the software and is also adopted in this study; the specific profile refers to the energy consumption in a block of flats. finally, the last scenario (s4) adopts a flat load curve centered close to noon time to increase the degree of synchronization. this may not be a real profile, but it is implemented to study the effects of increased synchronization. all profiles are shown in figure 1 in arbitrary units. the software normalizes these profiles, so that the total annual energy consumed by the household loads is 3750 kwh.

fig. 1. the different hourly profiles.

b. financial data

as already mentioned, this work focuses on residential customers and in this case, the typical cost of energy is calculated with data provided by the ppc [5]. the energy bills are issued every four months, i.e. there are three clearance bills in one calendar year. according to the residential tariff stated by the ppc, the cost of a typical four-month summer period in heraklion, without net metering, amounts to 220 euros. this value does not include charges that are collected by the ppc and are, then, attributed to the local municipality according to the greek law. in the worst-case scenario of net metering (s1), where all loads are active during the night, the corresponding cost drops to around 92 euros. contrary to scenario s1, the best scenario is s4, where the loads are active around noon time. for the same four-month summer period, the total cost under this scenario is reduced to 28 euros. this scenario (s4) is an ideal scenario and is shown for comparison purposes; it is unlikely to encounter a residence with such a profile. during summer time the pv energy is usually higher than the energy required by the loads, but still there is a nonzero contribution to the total cost. this is because even in the case of zero net energy, there is a minimum cost set by the ppc. for every site and every scenario, the four-month billing costs are calculated to determine the annual electricity costs and savings under the net metering scheme. these costs are adopted in the financial calculations concerning the viability of the investment. another interesting parameter is the annual degree of synchronization, which is shown in table ii for all three sites and scenarios. the dependence of the savings upon the degree of synchronization is evident. similar costs are calculated for the other two cities and are not repeated here. the total annual cost of energy without net metering rises to 664 euros for a residence. it is clear that even in the worst-case scenario it is still worth considering net metering to reduce the cost of energy.

table ii. degree of synchronization
site | s1 | s2 | s3 | s4
heraklion | 0.0% | 37.8% | 44.5% | 71.3%
athens | 0.0% | 37.3% | 43.4% | 70.1%
thessaloniki | 0.0% | 36.7% | 42.4% | 68.2%
cost of energy, heraklion (1 yr) | 285 € | 204 € | 200 € | 132 €
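the degree of synchronization defined above reduces to a few lines of arithmetic. the following sketch uses hypothetical 4-hour profiles; the function name and the numbers are mine, for illustration only, not the paper's data:

```python
# illustrative sketch (not the paper's code): degree of synchronization s
# from hourly pv production and load profiles, as defined in section ii.a.
def degree_of_synchronization(pv, load):
    """s = self-consumed energy / energy produced by the pv system."""
    e_ac = sum(pv)                                          # ac energy produced
    e_inj = sum(max(p - l, 0.0) for p, l in zip(pv, load))  # injected to grid
    return (e_ac - e_inj) / e_ac                            # self-consumed / produced

# toy 4-hour example: night-only load (s1-like) vs noon-centred load (s4-like)
pv    = [0.0, 2.0, 2.0, 0.0]      # kwh per hour
night = [1.5, 0.0, 0.0, 1.5]
noon  = [0.0, 1.5, 1.5, 0.0]

print(degree_of_synchronization(pv, night))  # -> 0.0  (fully out of phase)
print(degree_of_synchronization(pv, noon))   # -> 0.75 (mostly self-consumed)
```

a night-only load self-consumes nothing, so s = 0, matching the s1 column of table ii; a noon-centred load pushes s toward 100%, matching s4.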
iii. discussion

the use of net metering by residential customers in three different areas and climate zones in greece is considered in this work. the cost of energy for each scenario is practically the same for all sites explored, while the annual degree of synchronization shows minor variations (table ii). the financial metrics used to evaluate the viability of the investment in such domestic pv systems involve the levelized cost of electricity (lcoe), the simple payback time (pbt), the net present value (npv) and the internal rate of return (irr). these are calculated following the formulation provided by the national renewable energy laboratory (nrel) [6]. the lcoe represents the cost per kwh of energy produced by the pv system over the investment horizon, while the simple pbt represents the time it takes for the net revenues to equal the initial investment cost. the npv represents the savings over the same period in terms of the current value of money. the irr represents the (deflated) discount rate at which the npv becomes zero during the lifetime of the project. the basic financial parameters used in the calculations are summarized in table iii. in this work the investment time interval is set to 25 years and basically reflects the duration of the contract signed with the hellenic electricity distribution network operator s.a. under the net metering scheme. however, it must be made clear that this does not represent the actual lifetime of a pv module. it formally represents the time interval at the end of which a module may not produce less than 80% of its initial power rating (guaranteed by the manufacturers). a module will keep producing energy, although degraded by about 0.4%/yr [7, 8].

table iii. financial data
initial cost of pv system (€/kw) | system degradation per year | inverter replacement per ten years (€/kw) | nominal wacc & inflation | scenarios
1,000.0 | 0.4% | 350 | 7% & 4% | s1, s2, s3, s4
1,250.0 | 0.4% | 350 | 7% & 4% | s1, s2, s3, s4
1,500.0 | 0.4% | 350 | 7% & 4% | s1, s2, s3, s4
(wacc: weighted average cost of capital; inflation: the annual inflation rate)

a replacement time of 10 years for the string inverter, considering a good quality inverter, is adopted. in such small pv systems the owner may carry out the operation and maintenance (o&m) tasks and thus not pay any costs for the cleaning of the modules and the inverter, the inspection of cable connections, etc. finally, a fixed cost of 300 euros is incorporated in the year-0 costs as a grid connection fee foreseen by the state law. a single-phase grid connection is assumed since the examined pv peak power is small. to examine the viability of the investment, it is crucial to integrate local economy parameters such as the inflation and the nominal weighted average cost of capital (wacc). inflation is the increase in the cost of goods and services per unit time; it is customary to refer inflation to a year. based on the performance of economic indicators, an average annual inflation rate of 4% was selected [9]. on the other hand, the wacc is associated with the return expected on the invested capital. net metering is essentially an agreement with a power company enabling the consumer to install a solar power system to meet part or all of the energy consumption. the power company compensates the energy generated by the solar modules against the power consumed by the owner of the photovoltaic system. when there is excess energy because the consumption is low, it will be supplied to the grid. on the other hand, when the pv system does not produce enough energy (e.g.
clouds or night time), energy will be consumed from the grid. when the demand for electricity is consistent with the production, the compensation increases and thus the depreciation of the capital spent for the installation of the system is faster. it should be noted that the excess energy does not generate income, due to the absence of a feed-in tariff agreement for net metering systems. the analysis of a household in heraklion shows that, in the worst-case scenario s1 (night loads only), the breakeven point is 13 years for a pv system price of 1,250 €/kwp. this parameter ranges from 9 to 16 years considering system prices of 1,000 and 1,500 €/kwp and all possible scenarios. the cash flow analysis for this household is consistent with the electrical energy demand mentioned earlier and shows that for the consumption profile s2 (mixed daytime and night loads) the breakeven point is 8 years for a pv system price of 1,250 €/kwp. generally, in the case of scenarios s2 and s3, which better represent everyday profiles, the breakeven points drop to 7, 8 and 12 years for the corresponding initial system costs reported in table iii. the financial savings for these consumption profiles in heraklion are very close to 3,600 euros (npv), while the internal rate of return is close to 11-12% for the price of 1,250 €/kwp, which is quite satisfactory since the real average cost of capital in the calculations is 2.9%. if the real cost of capital is below the irr value of 11-12%, then the investment is viable (table iv). the analysis for the cities of athens and thessaloniki also confirms that the worst-case scenario is s1 while scenario s4 is the best one, as expected. however, both are unlikely to occur under typical conditions. scenarios s2 and s3 are more realistic and provide very similar results for all cities despite the different origins of the hourly profile data.
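the financial reasoning in this section can be sketched numerically. the helper functions below are my own, loosely following the nrel formulation cited in [6]; the cash flow numbers are toy inputs, not the paper's results:

```python
# sketch of the financial relations used in the analysis (helper functions are
# mine, not the nrel code): real cost of capital, npv and a discounted lcoe.
def real_rate(wacc_nominal, inflation):
    """fisher relation: deflate the nominal wacc by the inflation rate."""
    return (1.0 + wacc_nominal) / (1.0 + inflation) - 1.0

def npv(rate, cashflows):
    """net present value; cashflows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def lcoe(discounted_costs, energy_y1, degradation, rate, years):
    """discounted lifetime costs over discounted lifetime energy (€/kwh)."""
    energy = sum(energy_y1 * (1.0 - degradation) ** (t - 1) / (1.0 + rate) ** t
                 for t in range(1, years + 1))
    return discounted_costs / energy

r = real_rate(0.07, 0.04)            # 7% nominal wacc, 4% inflation (table iii)
print(round(r * 100, 1))             # -> 2.9 (%), the real rate quoted above

# toy cash flow: 3,338 € outlay (2.43 kwp at 1,250 €/kwp plus a 300 € grid
# fee) against an assumed constant 330 €/yr of avoided costs for 25 years
flows = [-3338.0] + [330.0] * 25
print(npv(r, flows) > 0.0)           # -> True: viable at the real rate
```

with these inputs the real (deflated) cost of capital evaluates to the 2.9% quoted in the text, and the toy cash flow has a positive npv, i.e. the investment clears the hurdle rate; the lcoe helper follows the standard discounted definition with the 0.4%/yr module degradation.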
the financial savings range from around 2,300 € (1,500 €/kwp) to 3,700 € (1,000 €/kwp), which are, of course, lower than the corresponding values in heraklion due to the increased cost of the pv system (2.80 kwp vs 2.43 kwp). it was necessary to increase the installed pv power since the solar irradiance in northern greece is reduced by around 17% with respect to the solar irradiance in heraklion, crete, while the household electrical loads remain the same among all cities. the internal rates of return for the typical scenarios s2 and s3 are around 12% for heraklion, 10% for athens and 9% for thessaloniki, adopting the unit cost of 1,250 €/kwp. since the real cost of capital of 2.9% (7% nominal and 4% inflation) is less than these irrs, the investment is viable with a net present value ranging from around 3,000 to 3,600 €. the breakeven times are nine (9) years for heraklion and twelve (12) years for the other two cities for the same scenarios, suggesting that a residential customer will benefit from the net metering scheme. furthermore, depending on the size of the pv system, the savings in co2 emissions range from 53,000 to 62,000 kg [10]. finally, the levelized cost of energy (lcoe) is also calculated, although it is not directly related to the net metering measure. the high solar potential in greece offers the opportunity to establish low lcoe values with respect to other european countries. under the data given, it is calculated that the lcoe ranges from a minimum of 0.052 €/kwh in heraklion for a system cost of 1,000 €/kwp to a maximum of 0.081 €/kwh in thessaloniki for a system cost of 1,500 €/kwp. table iv.
economic metrics for a household in heraklion (pp = 2.43 kwp, cost 1,250 €/kwp)
consumption profile | s1 | s2 | s3 | s4
synchronization (%) | 0.0 | 37.8 | 44.5 | 71.3
lcoe (€/kwh) | 0.0613
breakeven (year) | 13 | 9 | 9 | 8
irr | 8.4% | 11.5% | 11.6% | 14.0%
npv (€) | 2,164 | 3,593 | 3,625 | 4,776
simple payback (year) | 8.8 | 7.3 | 7.2 | 6.3

these results agree with published studies of the lcoe [11]. it is estimated that the final savings will increase, since the formal clearance time interval for the energy consumed and produced is three years while the simulations are conducted on an annual timeline. future work involves the simulation of solar data with 1-minute resolution to explore the effect upon the degree of synchronization, since hourly data smooth out any shorter time variations. furthermore, it is interesting to examine the case where the energy required to supply all kinds of residential loads is provided in the form of electrical energy, excluding the use of wood, natural gas, lp gas or diesel oil for e.g. heating and cooking. heat pumps with a high coefficient of performance can account for the heating loads (space heating, hot water).

iv. conclusions

in this paper we present the results of the economic analysis of pv systems under the net metering scheme in different cities and climate zones in greece (heraklion, athens and thessaloniki). several input parameters like the initial system cost, the inflation, the cost of capital and others are used to examine different scenarios for potential investors of small residential pv systems. residential customers may adopt the net metering scheme to reduce the cost of their energy bills. the lifetime earnings (npv) for a residence in heraklion amount to around 3,600 €, while the breakeven occurs at the ninth (9th) year of operation for a system cost of 1,250 €/kwp. an irr of 12% is calculated for the same site. the degree of synchronization is around 40% for typical household hourly profiles and it affects the final energy cost.
under the specific scenarios, the breakeven point is 12 years, the irr is around 9% and the npv is a bit less than 3,000 € for a site in northern greece.

references
[1] valentin energy software, pv*sol simulation program for photovoltaic systems, berlin, 2018
[2] meteonorm, global meteorological database for engineers, planners and education, version 4.0.95, switzerland
[3] greek statistical authority, yearly report, 2013, http://www.statistics.gr
[4] centre for renewable energy sources, http://www.cres.gr
[5] ppc, residential tariffs, https://www.dei.gr/en
[6] e. drury, p. denholm, r. margolis, the impact of different economic performance metrics on the perceived value of solar photovoltaics, nrel technical report tp-6a20-52197, 2011
[7] r. m. smith, d. c. jordan, s. r. kurtz, outdoor pv module degradation of current-voltage parameters, nrel, world renewable energy forum, denver, colorado, may 13-17, 2012
[8] f. vignola, j. peterson, r. kessler, f. lin, b. marion, a. anderberg, f. mavromatakis, pv module performance after 30 years without washing, 43rd conference of the american solar energy society (solar 2014), san francisco, california, july 6-10, 2014
[9] interest rates of deposits and loans, bank of greece, https://www.bankofgreece.gr/
[10] european co2 emission data, https://www.eea.europa.eu/data-and-maps/indicators/overview-of-the-electricity-production-2/assessment
[11] c. kost, s. shammugam, v. juelch, h. t. nguyen, t. schegl, levelized cost of electricity, renewable energy technologies, fraunhofer institute for solar energy systems ise, 2018

engineering, technology & applied science research vol. 10, no. 5, 2020, 6294-6300 6294 www.etasr.com khan & hoque: highly stable photonic local carriers for phased array receiver system

highly stable photonic local carriers for phased array receiver system

md.
rezaul hoque khan, islamic university of technology, dhaka, bangladesh and university of twente, netherlands, rhkhan@iut-dhaka.edu
md. ashraful hoque, islamic university of technology, dhaka, bangladesh, mahoque@iut-dhaka.edu

abstract—in this paper, a complete system analysis of a photonic local carrier generation technique is investigated. the generated carrier is potentially suitable to replace the existing microwave/rf local carrier (lc) used in commercial low noise blocks (lnbs) for the phased array (pa) receiver system. the optical lc generated from the heterodyning of two commercial lasers is stabilized with an optical frequency lock loop (ofll). this approach results in a generated carrier suitable for the ku-band (10.7ghz to 12.75ghz) signal received from a pa receiver. various loop parameters of the ofll have been investigated to comply with the requirements of the commercial lnbs. the proposed ofll shows a 2400-fold improvement in the frequency stability at 1000s averaging time compared to its free-running condition. it is also demonstrated that with an optimized loop gain of 30db, the loop response time of the proposed ofll becomes 11μs.

keywords-optical frequency lock loop (ofll); microwave carrier generation; locking range; frequency stability

i. introduction

stabilization of laser frequency differences is essential in many modern experimental schemes. applications extend from advanced optical fiber telecommunications, atomic clocks and high resolution atomic and molecular spectroscopy [1-4] to precision spectroscopy and sensing [5, 6]. phase coherence of the two laser fields locked at a frequency offset is not required in many applications, and a mere frequency lock is an adequate solution. in any case, the optical phase lock loop (opll) is one of the most commonly utilized locking techniques.
when the opll is designed to ensure only the stabilization of the frequency drift and not the consistency of the phase, it is called an optical frequency locked loop (ofll). in all these applications, a precisely defined optical frequency is needed and long-term stability must be ensured for correct system operation, with a required degree of stability and accuracy that depends on the application. practically speaking, a beat signal is produced between the laser to be frequency-stabilized (slave) and a second laser with known frequency (master) [7]. to stabilize the optically generated carrier, the beat signal is often mixed down to a lower frequency, which is far easier to handle electronically [8, 9]. the main difference among the various frequency locking schemes developed in the last decade is the method used to generate and process the error signal employed for locking the slave laser [10]. a very simple locking scheme was demonstrated in [11], using an electronic delay line in conjunction with a phase detector as a frequency dependent phase shifter. this scheme has the advantage of a large capture range but, as a drawback, the beat frequency must be tuned by a manual adjustment of the delay line length. as a consequence, real-time, rapid tuning of the beat frequency is impossible. other methods use a frequency multiplier on the beat signal. a different approach converts the beat frequency to a proportional voltage by a frequency-to-voltage converter (fvc). the voltage is then compared to a reference voltage, which sets the beat frequency [12]. this approach is limited by the maximum operating frequency of commercial fvcs and can be improved by using a hybrid analog-digital locking scheme with high performance fvcs [13]. nevertheless, the scheme suffers from the limited bandwidth of the fvc.
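the fvc-based locking idea described above can be illustrated with a toy discrete-time simulation; all gains, names and numbers below are illustrative assumptions, not the cited implementations:

```python
# toy discrete-time simulation of fvc-based frequency locking (illustrative
# only): the beat frequency is converted to a voltage, compared against a
# reference voltage, and the error steers the slave laser toward the set point.
def simulate_fvc_lock(f_beat0_hz, f_set_hz, k_fvc_v_per_hz, k_laser_hz_per_v,
                      gain=0.2, steps=200):
    f_beat = f_beat0_hz
    v_ref = k_fvc_v_per_hz * f_set_hz               # reference voltage sets f_set
    for _ in range(steps):
        v_err = k_fvc_v_per_hz * f_beat - v_ref     # fvc output vs reference
        f_beat -= gain * k_laser_hz_per_v * v_err   # act on the slave laser
    return f_beat

# start 40mhz away from a 150mhz set point with assumed conversion factors
f_final = simulate_fvc_lock(190e6, 150e6, 1e-9, 1e9)
print(abs(f_final - 150e6) < 1.0)   # -> True: converged to the set point
```

each iteration shrinks the frequency error by a constant factor (here 0.8), which is the essence of a first-order frequency lock; the fvc bandwidth limit mentioned in the text corresponds to the range of f_beat for which the conversion k_fvc remains valid.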
recently, a highly stable and widely tunable frequency locking scheme has been presented, where a high-speed frequency divider (prescaler) is used on the beat signal before processing the error signal [14]. the main disadvantage of the prescaler is that it degrades the loop response time and limits the loop bandwidth. generating the error signal from the amplitude response of an rf filter to realize a sensitive analog fvc has also been proposed [8, 15]. an alternative to these modulation-free error conversion techniques is the use of electrical filters [16] or frequency multipliers [17]. however, to the best of our knowledge, none of the above schemes reported any investigation of the long term frequency stability of the beat spectrum of the ofll. a simple ofll technique based on the concept presented in [11] is proposed in this paper and its performance is improved with the use of a variable delay line in the frequency discriminator to facilitate beat frequency tuning.

ii. proposed optical frequency locked loop scheme

the frequency of a diode laser depends on the injection current and the temperature and is very sensitive to fluctuations of those parameters. for example, the dfb laser (avanex inc., a1905lmi) used in our experiment has a frequency sensitivity to injection current and temperature of 325mhz/ma and 10ghz/°c respectively [18]. several studies and experiments show that the beat signal generated by a free-running heterodyning system suffers a substantial frequency drift [19]. laser frequency drifts by hundreds of mhz in an ordinary environment [20]. the maximum permissible frequency drift of the generated carrier is 5mhz [21], according to the dvb-s specification [22].

(corresponding author: m. r. h. khan)
the carrier signal, generated by heterodyning of two lasers, is used for the downconversion of the 10.7ghz to 12.75ghz ku-band signal received from a phased array antenna (paa). the block diagram of the proposed ofll is presented in figure 1. the proposed scheme is able to generate an adjustable carrier as specified in [22].

fig. 1. proposed ofll: (a) block diagram, (b) experimental setup.

the two lasers used for the experiments are a dfb laser (avanex a1905lmi) and a tunable laser diode (tld, santec tsl-210). in the experiment, the tunable laser and the dfb laser are employed as master laser and slave laser, having optical frequencies f_ml and f_sl respectively, to produce the beat frequency δf_b = f_ml − f_sl. the beat signal is provided by a 20ghz bandwidth photodetector (discovery semiconductor dsc30s) and is amplified by 20db by a commercial rf amplifier (minicircuits zx60-183). a power splitter (minicircuits zfrsc-123-s) is used to tap the beat signal for monitoring. the output signal from the power splitter is mixed with a reference signal, f_ref, at ∼10ghz provided by a signal generator (agilent psg e8267d). an error frequency, f_if = δf_b − f_ref, is produced at the output of the rf mixer (minicircuits zx05-153lh-s). the error signal, v·cos(2πf_if·t), is then passed through a variable frequency discriminator, where v is the amplitude of the error signal. in the frequency discriminator, the signal is divided into two equal parts by an rf splitter and recombined at a mixer (minicircuits sbl-48) after a variable delay line (narda 3752) has delayed one part. hence, we can write:

v_e1(t) = (v/2)·cos(ωt)    (1)
v_e2(t) = (v/2)·cos(ω(t − τ1))    (2)

where v_e1 and v_e2 are the signals combined in the mixer, ω = 2π(δf_b − f_ref) = 2πf_if, and τ1 is the time delay. the resultant signal of the frequency discriminator at the output of the mixer, v_if0, is [23]:

v_if0 = (v_if/2)·[cos(2πf_if·τ1) + cos(2πf_if·(2t − τ1))]    (3)

the high-frequency term (the 2nd term in the bracket) is eliminated by a low pass filter (lpf) (minicircuits slp-50), called the loop filter. as a result, the lpf output produces a series of nulls when ω·τ1 = (2n + 1)π/2, where n = 0, ±1, ±2, ... the feedback system acts on the slave laser injection current and allows active control of the emission frequency of the slave laser, so that a constant frequency difference between the slave and the master laser is maintained. the frequency of the slave laser is tuned by applying an external voltage to its injection current controller (ilx lightwave ldc-3724). the conversion factor between this external voltage and the change of the emission frequency of the slave laser, k_sl, is measured to be 31.25×10⁻³ mhz/mv.

a. characterization of the rf discriminator

utilizing passive rf components and realizing a delay of τ1 = 3ns, the discriminator output at the lpf as a function of the beat frequency is shown in figure 2. the offset of the first null from the reference frequency is given by δf = 1/(2τ1), where 1/τ1 denotes the spacing of the nulls that the ofll locks on. points a and b in figure 2 are the nearest nulls to the reference frequency, where the frequency-to-voltage conversion factor, k_fvc, is maximum and amounts to 300mv/ghz. a higher τ1 will reduce δf and will increase the slope at the nulls, which ultimately gives a higher conversion factor k_fvc. the various parameters and their values used in our analysis are summarized in table i.

fig. 2. beat frequency as a function of error voltage for a delay of τ1 = 3ns.

table i.
various loop parameters
parameter | symbol | value
total gain of the rf amplifiers | k_a | 30db
external voltage-to-frequency conversion factor of the slave laser | k_sl | 31.25×10⁻³ mhz/mv
frequency-to-voltage conversion factor of the discriminator | k_fvc | 300mv/ghz
time constant of the current controller of the slave laser | τ_c | 300μs
time constant of the lpf | τ_l | 1.58μs

iii. optical frequency lock loop analysis

the proposed ofll in figure 1 can be represented as a generic model for a feedback system and its s-domain representation is presented in figure 3. in this model of the ofll, the optical and the electrical connections are represented by dotted and solid lines respectively. the optical fields from the master and the slave laser are represented by e_ml and e_sl respectively. the fields are combined and fed to a photodetector (pd). the beat frequency is mixed with the reference frequency, f_ref. the downconverted signal is then amplified by an rf amplifier having a gain of k_a. an rf frequency discriminator having a frequency-to-voltage conversion factor k_fvc converts the downconverted frequency into a proportional error voltage. the error signal is passed through a lpf, having a laplace transform of the frequency response f(s), to filter out the high frequency component from the error signal. the error signal is applied to the current controller of the slave laser. the external voltage-to-frequency conversion factor and the time constant of the slave laser controller are denoted by k_sl and τ_c respectively.

fig. 3. linearized s-domain representation of the ofll.
from the linearized s-domain representation of the ofll, the open-loop transfer function of the ofll is defined as the product of the transfer functions of all the elements in the loop:

g(s) = k_pd·k_mix·k_a·k_fvc·k_sl·f(s) = k·f(s)    (4)

where k = k_pd·k_mix·k_a·k_fvc·k_sl is the total gain. the closed-loop transfer function of the ofll is [24]:

h(s) = 4π²f_n² / (s² + 2ζ(2πf_n)·s + 4π²f_n²)    (5)

where f_n and ζ represent the natural frequency and the damping factor of the loop and are expressed, in terms of the total loop gain k and the corner frequencies f_c and f_r set by the loop filter and the current controller, by:

ζ = f_c/f_n    (6)

f_n = √(k·f_r)    (7)

iv. experimental results

in this section the stability of the ofll will be investigated and analyzed. moreover, some other functionalities of a carrier, namely the maximum tuning range and the maximum tuning rate (the speed at which the generated carrier can be tuned to a certain frequency), need to be investigated. like any feedback system, the dynamics of these functionalities is determined by parameters like the loop gain and the loop response. also, the effect of various loop parameters (i.e. loop natural frequency and damping factor) on these functionalities of the ofll will be investigated.

a. locking offset range and capture range

the discriminator signal at the output of the lpf as a function of the beat frequency is shown in figure 3. the error voltage is applied to the external tuning port of the current controller of the slave laser. this will ultimately change the emission frequency of the slave laser until the error voltage becomes zero. hence, the generated carrier signal is eventually locked to the reference signal with a fixed offset frequency. the relation between the delay in the frequency discriminator and the offset frequency, as given in (4), is plotted in figure 4. the measured values are also plotted and found to be close to the calculated values.

fig. 4.
The relation between the delay in the frequency discriminator and the offset frequency voltage for a delay of τ_d = 3 ns.

From Figure 3, with τ_d = 3.3 ns, the loop locks at a 150 MHz offset frequency from the reference signal and the nulls are spaced by 300 MHz. Figure 4 shows the relation between the delay in the frequency discriminator and the offset frequency. This offset frequency range also gives the range of frequencies from the reference signal at which the free-running carrier signal becomes locked, also called the capture range of the OFLL. In Figure 3, the error signal provides a capture range of ±1/(2τ_d) = ±150 MHz.

B. Beat Frequency Tuning

Once the optically generated carrier signal is locked with the reference signal, it can be tuned by simply varying the delay. As given in (4), for delays of 3, 3.3, 3.5, 3.7, and 4 ns the beat frequency is locked at offset frequencies of 166.6, 151, 142, 135, and 125 MHz respectively from the reference frequency. The beat frequency tuning for various delays is shown in Figure 5.

Fig. 5. Beat frequency tuning by varying the delay.

C. Frequency Resolution

The frequency resolution is determined by the slope of the error signal at the locking point. As indicated by (4), a longer delay line τ_d reduces the capture range but enhances the resolution. In order to demonstrate the effect of the delay on the sensitivity, for a fixed delay the reference frequency was tuned from 9.66 GHz to 9.54 GHz, as shown in Figure 6. Depending on the delay, for example 3, 3.5, and 4 ns, the beat frequency is also tuned, with an offset frequency of 166.6, 142, and 125 MHz respectively from the reference frequency. Figure 6 shows the frequency resolution of the locked signal for various delays.
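The quoted delay/offset pairs are consistent with the locking offset scaling as the inverse of twice the discriminator delay, with discriminator nulls spaced by the inverse of the delay. A minimal sketch under that assumption (the 1/(2τ_d) relation is inferred from the quoted numbers, not stated explicitly in this excerpt):

```python
def lock_offset_hz(tau_d_s):
    """Locking offset (and capture-range edge) for a delay-line frequency
    discriminator: the locking point sits at 1/(2*tau_d) from the reference,
    and the discriminator nulls are spaced by 1/tau_d."""
    return 1.0 / (2.0 * tau_d_s)

# Delays quoted in the paper, in ns, with the corresponding quoted offsets in MHz.
for tau_ns, quoted_mhz in [(3.0, 166.6), (3.3, 151.0), (3.5, 142.0), (3.7, 135.0), (4.0, 125.0)]:
    f_mhz = lock_offset_hz(tau_ns * 1e-9) / 1e6
    print(f"tau_d = {tau_ns} ns -> offset ~ {f_mhz:.1f} MHz (quoted: {quoted_mhz} MHz)")
```

For τ_d = 3.3 ns this gives a locking offset of about 151.5 MHz and a null spacing of about 303 MHz, matching the quoted 151 MHz and 300 MHz within rounding.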
In the measurement the reference frequency was changed to demonstrate the frequency resolution. Moreover, the inset in Figure 6 shows the change of resolution due to the change of delay.

Fig. 6. Frequency resolution of the locked signal for various delays. The inset shows a highlighted portion of the measurement.

D. Loop Response Time

The step response provides insight into the loop response time and the loop settling time. The time taken for a feedback system to stay within 10% of its final value is called the settling time, while a widely used measure of the response speed of a feedback system, the time it requires to reach 90% of its final value, is called the loop response time. With the help of (6) and (7), for a given loop bandwidth (1 MHz in our experiment), the calculated loop response time values are plotted in Figure 7 for various loop natural frequencies and damping factors. From Figure 7 we can see that, for a 1 MHz loop bandwidth, the loop response becomes faster for a lower damping factor, but at the cost of a higher settling time: there is a trade-off between the loop response time and the settling time. With a higher damping factor, the loop response time increases while the settling time decreases. The optimum value of the damping factor should therefore be chosen by carefully considering both quantities; it can be indicated as ζ = 1, where both the loop response time and the settling time are acceptable. The optimized loop natural frequency f_n is 1 MHz.

Fig. 7. Loop response time for various natural frequencies and damping factors.

The step response measurement procedure is shown in Figure 8. A step signal from a vector signal generator (Agilent PSG E8267D) introduces an external disturbance into the current controller of the slave laser (a DFB laser) of the OFLL and, due to this disturbance, the signal suffers a temporary deviation from the desired frequency.
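The response-time trend of Figure 7 can be sketched numerically. This is a minimal simulation of a canonical second-order loop (numerator zero, filter details, and discriminator delay all ignored), so absolute times will not match the measured values; it only illustrates the qualitative result that a higher damping factor slows the 90% response:

```python
import math

def response_time_90(f_n, zeta, t_max=5e-6, dt=1e-10):
    """Forward-Euler integration of the unit-step response of
    H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2); returns the time at which
    the output first reaches 90% of its final value (None if never)."""
    wn = 2 * math.pi * f_n
    y, v, t, t90 = 0.0, 0.0, 0.0, None
    while t < t_max:
        a = wn * wn * (1.0 - y) - 2 * zeta * wn * v  # y'' from the ODE
        v += a * dt
        y += v * dt
        t += dt
        if t90 is None and y >= 0.9:
            t90 = t
    return t90

t90_crit = response_time_90(1e6, 1.0)  # critically damped
t90_over = response_time_90(1e6, 1.7)  # overdamped: slower 90% response
print(t90_crit, t90_over)
```

With f_n = 1 MHz the critically damped case reaches 90% well before the overdamped case, in line with the trade-off discussed around Figure 7.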
The OFLL recovers the system to its former stable condition soon after the disturbance, and the duration of the generated carrier signal's deviation from its stable condition is measured by an oscilloscope (Agilent Infiniium 54854A) synchronized with the vector signal generator through an external trigger. Employing the values of the parameters in Table I, for a gain of K_A = 14 dB the natural frequency f_n and the damping factor ζ can be calculated using (6) and (7) as 1 MHz and 1 respectively. For K_A = 10 dB the calculated values become f_n = 0.62 MHz and ζ = 1.7.

Fig. 8. Loop response time measurement setup.

From Figure 9, for ζ = 1 and ζ = 1.7 the response times of the loop are found to be 0.35 and 0.85 μs respectively. From Figure 9(a) it is evident that for ζ = 1 (critical damping) the calculated loop settling time is 0.78 μs, which is slower than the 0.58 μs (Figure 9(b)) obtained for ζ = 1.7. The experimental results are in good agreement with the calculated values. Figure 9 again shows the trade-off between the loop response time and the settling time: with a higher damping factor, the loop response time increases while the settling time decreases. The optimum value of the damping factor can be indicated as ζ = 1, and the optimized loop natural frequency f_n is 1 MHz.

(a) (b)

Fig. 9. Loop settling time comparison, using the step response, of calculated and measured values for a damping factor of (a) 1 and (b) 1.7.

E. Frequency Response

The frequency response of the OFLL is plotted in Figure 10. The x-axis has been normalized by dividing the frequency by the loop natural frequency f_n. The plot shows how the loop behaves in the frequency domain.
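The low-pass character of the closed loop can be illustrated by evaluating the second-order closed-loop transfer function of (5) on the jω axis. This is an illustrative sketch of that canonical form (as reconstructed here), not the paper's measured response:

```python
import math

def h_closed_loop(f, f_n, zeta):
    """Closed-loop response of the canonical second-order loop,
    H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2),
    evaluated at s = j*2*pi*f."""
    wn = 2 * math.pi * f_n
    s = 1j * 2 * math.pi * f
    return (2 * zeta * wn * s + wn ** 2) / (s ** 2 + 2 * zeta * wn * s + wn ** 2)

f_n = 1e6  # optimized loop natural frequency: 1 MHz
for zeta in (1.0, 1.7):
    dc = abs(h_closed_loop(0.0, f_n, zeta))        # unity gain at DC
    hf = abs(h_closed_loop(100 * f_n, f_n, zeta))  # strong roll-off far above f_n
    print(f"zeta={zeta}: |H(0)|={dc:.3f}, |H(j*2*pi*100*f_n)|={hf:.4f}")
```

At DC the loop passes the reference with unity gain, while components far above f_n are strongly attenuated, which is exactly the low-pass behavior discussed for Figure 10.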
Figure 10 shows the frequency response of the OFLL for various damping factors. The frequency response of the loop in Figure 10 looks very similar to that of a low-pass filter, which is how an OFLL behaves in practice. If the frequency approaches the natural frequency (i.e. when f/f_n = 1), the oscillation becomes very large; this phenomenon is reflected in Figure 10(a) for ζ = 1. Larger damping factors have lower overshoot, i.e. a better-behaved response, but also a longer response time [29], as can be observed in Figure 10(b) for ζ = 1.7. Note that the higher-frequency term in (3) needs to be filtered out; the loop does this by keeping the loop bandwidth narrow. For the same values of natural frequency and damping factor, the calculated frequency response is also presented in Figure 10, and the experimental results are in good agreement with the calculated values. The step/impulse response of the system, a time-domain response, can be converted into a frequency-domain response by simply performing a Fourier transformation; the calculated frequency response of the loop in Figure 10 is obtained in this way from the time-domain (step/impulse) response of Figure 7. The time- and frequency-domain measurement values from Figure 7 and Figure 10 are in good agreement.

(a) (b)

Fig. 10. Experimental and calculated results of the frequency response of the OFLL; the x-axis is the normalized frequency and the y-axis is the response in dB.

F. Long-Term Frequency Stability Analysis

Typically, a free-running microwave carrier frequency deviates from the required frequency with variable direction and rate of change. In order to express the long-term frequency stability of a carrier, the Allan deviation σ_y is often used.
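As a minimal sketch, the Allan deviation of a set of fractional-frequency samples can be computed directly from its defining first-difference sum (the two-sample, non-overlapping estimator):

```python
import math

def allan_deviation(y):
    """Non-overlapping Allan deviation of M fractional-frequency samples y,
    each averaged over the sampling interval tau:
    sigma_y = sqrt( sum_{i=1}^{M-1} (y[i+1] - y[i])^2 / (2*(M-1)) )."""
    m = len(y)
    if m < 2:
        raise ValueError("need at least two fractional-frequency samples")
    s = sum((y[i + 1] - y[i]) ** 2 for i in range(m - 1))
    return math.sqrt(s / (2 * (m - 1)))

# A perfectly steady carrier has zero Allan deviation; any fluctuation raises it.
print(allan_deviation([5e-10] * 10))        # -> 0.0
print(allan_deviation([0.0, 1e-9, 0.0]))    # small, non-zero
```

Longer averaging intervals τ are handled by averaging the raw samples into τ-long blocks before applying the same estimator.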
To evaluate this frequency stability, σ_y has proven to be a valuable tool to quantify the stability of an optically generated microwave carrier [26]:

σ_y(τ) = [ (1 / (2(M − 1))) Σ_{i=1}^{M−1} (ȳ_{i+1} − ȳ_i)² ]^{1/2}    (8)

where ȳ_i is the i-th of the M fractional frequency values averaged over the measurement (sampling) interval τ. The measured frequency stability of the generated carrier is expressed as the Allan deviation in Figure 11 for both the locked and the free-running conditions. For the free-running condition, as the averaging time increases from τ = 1 s to τ = 1000 s, the Allan deviation increases from 2.5×10⁻¹⁰ to 6×10⁻⁹. However, by implementing the OFLL the corresponding improved long-term frequency stabilities become 3×10⁻¹⁰ and 2.5×10⁻¹². The OFLL thus shows a 2400-fold improvement in the frequency stability at 1000 s averaging time. A typical quartz oscillator (Wenzel 501-04623E) has an Allan deviation frequency stability of 4×10⁻⁷ [27]. A typical atomic oscillator used as a clock source may provide an Allan deviation frequency stability of 1×10⁻¹⁰ for 1 s averaging time [28]. A radio frequency (RF) clock signal transmission over 100 m via a fiber link results in an Allan deviation of 1.3×10⁻¹⁰ for 1 s averaging time [29]. The long-term frequency stability of the presented work shows superior performance, with an Allan deviation of 1×10⁻¹⁰ for an averaging time of 10³ s, compared to OPLL setups involving integrated phase-frequency detectors (PFD) [30].

Fig. 11. Allan deviation of the frequency stability of the OFLL under free-running and locked conditions.

V. Conclusion

System analysis and experimental demonstration of an optically generated LC, designed to comply with the requirements of the standard LC signal used in commercial LNBs, have been investigated in this paper.
The LC signal was used for the downconversion of the 10.7–12.75 GHz signal received from a PA receiver. An OFLL was implemented to stabilize the generated LC signal, and a detailed analysis of the OFLL scheme has been presented. The loop filter and the loop gain of the OFLL should be chosen properly to make the feedback system stable and fast. It was also demonstrated that with an optimized gain of 14 dB, the loop response time becomes 0.35 μs with a settling time of 11.5 μs. The presented results emphasize the effectiveness of the OFLL in improving the long-term stability of the free-running microwave carrier, with a 2400-fold improvement in the frequency stability at 1000 s averaging time. Previous research merely presents straightforward time-domain measurements and lacks complete frequency-domain measurements of the loop response; the presented research focused on this topic, and the time- and frequency-domain measurement values are in good agreement with each other. Moreover, the presented implementation shows better performance in terms of the capture range of 150 MHz compared to the few hundred kHz of the very recent investigation in [31].

Acknowledgments

The authors acknowledge the support of the Smart Mix program of the Dutch Ministry of Economic Affairs and the Dutch Ministry of Education, Culture and Science.

References

[1] S. H. Yim, S.-B. Lee, T. Y. Kwon, and S. E. Park, “Optical phase locking of two extended-cavity diode lasers with ultra-low phase noise for atom interferometry,” Applied Physics B, vol. 115, no. 4, pp. 491–495, Jun. 2014, doi: 10.1007/s00340-013-5629-5.
[2] M. Dąbrowski, R. Chrapkiewicz, and W. Wasilewski, “Hamiltonian design in readout from room-temperature Raman atomic memory,” Optics Express, vol. 22, no. 21, pp. 26076–26091, Oct. 2014, doi: 10.1364/oe.22.026076.
[3] M. Parniak, A. Leszczyński, and W.
Wasilewski, “Coupling of four-wave mixing and Raman scattering by ground-state atomic coherence,” Physical Review A, vol. 93, no. 5, p. 053821, May 2016, doi: 10.1103/physreva.93.053821.
[4] C.-H. Shin and M. Ohtsu, “Heterodyne optical phase-locked loop by confocal Fabry-Perot cavity coupled AlGaAs lasers,” IEEE Photonics Technology Letters, vol. 2, no. 4, pp. 297–300, Apr. 1990, doi: 10.1109/68.53268.
[5] M. Lyon and S. D. Bergeson, “Precision spectroscopy using a partially stabilized frequency comb,” Applied Optics, vol. 53, no. 23, pp. 5163–5168, Aug. 2014, doi: 10.1364/ao.53.005163.
[6] R. Matthey, S. Schilt, D. Werner, C. Affolderbach, L. Thévenaz, and G. Mileti, “Diode laser frequency stabilisation for water-vapour differential absorption sensing,” Applied Physics B, vol. 85, no. 2, pp. 477–485, Nov. 2006, doi: 10.1007/s00340-006-2358-z.
[7] M. R. H. Khan and M. A. Hoque, “A photonic frequency discriminator based laser linewidth estimation technique,” International Journal of Advanced and Applied Sciences, vol. 6, no. 4, pp. 65–74, Apr. 2019, doi: 10.21833/ijaas.2019.04.008.
[8] G. Ritt, G. Cennini, C. Geckeler, and M. Weitz, “Laser frequency offset locking using a side of filter technique,” Applied Physics B, vol. 79, no. 3, pp. 363–365, Aug. 2004, doi: 10.1007/s00340-004-1559-6.
[9] M. R. H. Khan, M. F. Islam, G. Sarowar, T. Reza, and M. A. Hoque, “Carrier generation using a dual-frequency distributed feedback waveguide laser for phased array antenna (PAA),” Journal of the European Optical Society–Rapid Publications, vol. 13, no. 1, p. 30, Oct. 2017, doi: 10.1186/s41476-017-0058-4.
[10] M. R. H. Khan et al., “Dual-frequency distributed feedback laser with optical frequency locked loop for stable microwave signal generation,” IEEE Photonics Technology Letters, vol. 24, no. 16, pp. 1431–1433, Aug. 2012, doi: 10.1109/lpt.2012.2205379.
[11] U. Schünemann, H. Engler, R. Grimm, M. Weidemüller, and M.
Zielonkowski, “Simple scheme for tunable frequency offset locking of two lasers,” Review of Scientific Instruments, vol. 70, no. 1, pp. 242–243, Jan. 1999, doi: 10.1063/1.1149573.
[12] T. Stace, A. N. Luiten, and R. P. Kovacich, “Laser offset-frequency locking using a frequency-to-voltage converter,” Measurement Science and Technology, vol. 9, no. 9, pp. 1635–1637, Sep. 1998, doi: 10.1088/0957-0233/9/9/038.
[13] J. Hughes and C. Fertig, “A widely tunable laser frequency offset lock with digital counting,” Review of Scientific Instruments, vol. 79, no. 10, p. 103104, Oct. 2008, doi: 10.1063/1.2999544.
[14] A. Castrillo, E. Fasci, G. Galzerano, G. Casa, P. Laporta, and L. Gianfrani, “Offset-frequency locking of extended-cavity diode lasers for precision spectroscopy of water at 1.38 µm,” Optics Express, vol. 18, no. 21, pp. 21851–21860, Oct. 2010.
[15] N. Strauß, I. Ernsting, S. Schiller, A. Wicht, P. Huke, and R.-H. Rinkleff, “A simple scheme for precise relative frequency stabilization of lasers,” Applied Physics B, vol. 88, no. 1, pp. 21–28, 2007.
[16] S. Schilt, R. Matthey, D. Kauffmann-Werner, C. Affolderbach, G. Mileti, and L. Thévenaz, “Laser offset-frequency locking up to 20 GHz using a low-frequency electrical filter technique,” Applied Optics, vol. 47, no. 24, pp. 4336–4344, Aug. 2008, doi: 10.1364/ao.47.004336.
[17] D. M. Perisic, A. C. Zoric, and Z. Gavric, “A frequency multiplier based on time recursive processing,” Engineering, Technology & Applied Science Research, vol. 7, no. 6, pp. 2104–2108, Dec. 2017.
[18] M. R. H. Khan, M. Burla, C. G. H. Roeloffzen, D. A. I. Marpaung, and W. van Etten, “Phase noise analysis of an RF local oscillator signal
generated by optical heterodyning of two lasers,” in 14th Annual Symposium of the IEEE Photonics Benelux Chapter, Brussels, Belgium, Nov. 2009, pp. 161–164.
[19] F. Friederich et al., “Phase-locking of the beat signal of two distributed-feedback diode lasers to oscillators working in the MHz to THz range,” Optics Express, vol. 18, no. 8, pp. 8621–8629, Apr. 2010, doi: 10.1364/oe.18.008621.
[20] D.-H. Yang and Y.-Q. Wang, “Preliminary results of an optically pumped cesium beam frequency standard at Peking University,” IEEE Transactions on Instrumentation and Measurement, vol. 40, no. 6, pp. 1000–1002, Dec. 1991, doi: 10.1109/19.119781.
[21] E. Casini, R. D. Gaudenzi, and A. Ginesi, “DVB-S2 modem algorithms design and performance over typical satellite channels,” International Journal of Satellite Communications and Networking, vol. 22, no. 3, pp. 281–318, 2004, doi: 10.1002/sat.791.
[22] ETSI EN 301 790 V1.5.1 (2009-05): Digital Video Broadcasting (DVB); Interaction Channel for Satellite Distribution Systems. Sophia Antipolis Cedex, France: ETSI, 2009.
[23] C. Toumazou, G. S. Moschytz, and B. Gilbert, Eds., Trade-Offs in Analog Circuit Design: The Designer's Companion. Springer US, 2002.
[24] D. R. Stephens, Phase-Locked Loops for Wireless Communications: Digital, Analog and Optical Implementations, 2nd ed. Springer US, 2002.
[25] F. M. Gardner, Phaselock Techniques, 3rd ed. USA: John Wiley & Sons, 2005.
[26] H. Y. Ryu, S. H. Lee, and H. S. Suh, “Widely tunable external cavity laser diode injection locked to an optical frequency comb,” IEEE Photonics Technology Letters, vol. 22, no. 14, pp. 1066–1068, Jul. 2010, doi: 10.1109/lpt.2010.2049101.
[27] E. Rubiola, Phase Noise and Frequency Stability in Oscillators. Cambridge, UK: Cambridge University Press, 2008.
[28] S.
Knappe et al., “Microfabricated atomic clocks and magnetometers,” Journal of Optics A: Pure and Applied Optics, vol. 8, no. 7, pp. S318–S322, May 2006, doi: 10.1088/1464-4258/8/7/s04.
[29] B. Sprenger, J. Zhang, Z. H. Lu, and L. J. Wang, “Atmospheric transfer of optical and radio frequency clock signals,” Optics Letters, vol. 34, no. 7, pp. 965–967, Apr. 2009, doi: 10.1364/ol.34.000965.
[30] M. Lipka, M. Parniak, and W. Wasilewski, “Optical frequency locked loop for long-term stabilization of broad-line DFB laser frequency difference,” Applied Physics B, vol. 123, no. 9, p. 238, Aug. 2017, doi: 10.1007/s00340-017-6808-6.
[31] B. Chen, K. Wu, L. Yan, J. Xie, and E. Zhang, “Stabilization of synthetic wavelength using offset-frequency locking for the measurement accuracy improvement of the laser synthetic wavelength interferometer,” Optical Engineering, vol. 57, no. 3, p. 034106, Mar. 2018, doi: 10.1117/1.oe.57.3.034106.

Engineering, Technology & Applied Science Research Vol. 9, No. 6, 2019, 4883-4885  4883
www.etasr.com  Chakravorty & Saraswat: Improving Power Flow Capacity of Transmission Lines Using DPFC with …

Improving Power Flow Capacity of Transmission Lines Using DPFC with a PEM Fuel Cell

Jaydeep Chakravorty, Electrical Engineering Department, Indus University, Ahmedabad, India
Jyoti Saraswat, Alpla India Pvt. Ltd., Dadra & Nagar Haveli, Gujarat, India

Abstract—The electrical power system is a complex architecture integrating generation, transmission, distribution, and utilization sections. The exponential increase in power requirements has made this system more complex and dynamic, and providing good-quality, uninterrupted power has become a challenge. In this respect, FACTS devices are playing a vital role in improving power quality and in increasing the transmission capacity of lines. In this paper,
a distributed power flow controller (DPFC) with a PEM fuel cell has been used in an IEEE-14 bus system to improve the system's power flow capacity. The proposed IEEE-14 bus system with DPFC has been simulated in MATLAB/Simulink, and the effects are exhibited and analyzed.

Keywords—DPFC; PEM; power quality

I. Introduction

Uninterrupted electrical power supply is a major requirement for the development of a country, and meeting the increasing need for uninterrupted good-quality power is a big challenge. Due to the increasing demand for power, power system networks are becoming very complex, and with the very fast increase in non-linear loads, the supply of good-quality power has become a major problem. To cater to these needs, FACTS devices play a vital role in increasing the efficiency of the transmission system [1-2]. Studies on improving power quality by incorporating FACTS devices can be seen in [3-11]. Various algorithms have also been developed with the help of which it is now feasible to place FACTS devices efficiently in the power system, which has drastically reduced the cost of operation and improved the quality of power transfer [12-13]. This paper proposes a method to improve the power transfer capability of the system with the proposed DPFC with a PEM fuel cell [14]. The optimal location of the proposed DPFC has been decided with the help of the artificial algae algorithm [15]. The complete proposed system has been simulated in MATLAB/Simulink and the result has been compared with the same system without DPFC.

II. Transmission Line Representation

A simple representation of a transmission line between bus i and bus j is shown in Figure 1. The series line admittance is y_ij = g_ij + j·b_ij, and the bus voltages are V_i∠δ_i and V_j∠δ_j respectively. The real power P_ij flowing from bus i to bus j can be written as:

P_ij = V_i² g_ij − V_i V_j (g_ij cos δ_ij + b_ij sin δ_ij)    (1)

and the reactive power Q_ij as:
Q_ij = −V_i² (b_ij + b_sh) − V_i V_j (g_ij sin δ_ij − b_ij cos δ_ij)    (2)

where δ_ij = δ_i − δ_j and b_sh is the shunt susceptance of the line. The real power P_ji and reactive power Q_ji flowing from bus j to bus i are given by:

P_ji = V_j² g_ij − V_i V_j (g_ij cos δ_ji + b_ij sin δ_ji)    (3)

Q_ji = −V_j² (b_ij + b_sh) − V_i V_j (g_ij sin δ_ji − b_ij cos δ_ji)    (4)

where δ_ji = δ_j − δ_i = −δ_ij.

Fig. 1. Transmission line representation

III. DPFC Model

In this model, a DPFC with a PEM fuel cell has been used. The complete representation of the DPFC with a PEM fuel cell has been discussed in [14], and the complete MATLAB model is shown in Figure 2.

IV. IEEE-14 Bus System

The IEEE-14 bus system Simulink model is shown in Figure 3. The location of the DPFC has been decided by the application of the artificial algae algorithm [15]. The numerical data and the parameters are taken from [16]. The proposed IEEE-14 bus system has 19 lines, 11 load buses, 1 slack bus and 2 generator buses.

V. Artificial Algae Algorithm

The optimal location of the DPFC has been determined by the application of the artificial algae algorithm [15], which gives very good results for nonlinear optimization [17].

Corresponding author: Jaydeep Chakravorty (jaydeepchak@yahoo.co.in)

In this algorithm an artificial algae colony represents each individual. The technique has three steps: helical movement, reproduction, and adaptation. After each cycle of operation, the population in the colonies is modified in the helical movement phase. It is assumed that each colony swims in all three dimensions in order to reach the light. A colony's energy increases and its movement slows down as it reaches the light. To increase the local search ability of the algorithm as the colonies approach the light, the algorithm searches the space with smaller and smaller steps.
On the other hand, the colonies far away from the light search the space with bigger steps, which in turn increases the global search ability. The artificial algae algorithm thus has a strong balance between exploration and exploitation. The pseudocode is given in Figure 4.

Fig. 2. DPFC with a PEM fuel cell [14]

Fig. 3. IEEE-14 bus system

VI. Results and Discussion

The voltage profile of the system with and without the application of the DPFC in the IEEE-14 bus system is shown in Figure 5. The optimal location of the DPFC was obtained by the application of the artificial algae algorithm. The IEEE-14 bus system was tested first without the DPFC and then with the DPFC embedded in the system.

1. Generate an initial population of n algal colonies with random solutions.
2. Evaluate f(x_i) for i = 1, 2, 3, …, n
3. While the stopping condition is not reached
4.   For i = 1 to n
5.     While the energy of the i-th colony is not exhausted
6.       Modify the colony
7.     End while
8.   End for
9.   Apply the evolution strategy
10.  Apply the adaptation strategy
11. End while

Fig. 4. Artificial algae algorithm pseudocode

Fig. 5. Voltage profile

The real and reactive power loss values for the normal load condition, 125% load and 150% load are shown in Tables I and II respectively.

Table I. Real power loss (MW):
- Normal load: 14.5 without DPFC, 13.2 with DPFC
- 125% load: 26.01 without DPFC, 24.99 with DPFC
- 150% load: 38.32 without DPFC, 35.57 with DPFC

Table II. Reactive power loss (MVAr):
- Normal load: 29.3 without DPFC, 28.5 with DPFC
- 125% load: 69.45 without DPFC, 67.3 with DPFC
- 150% load: 98.23 without DPFC, 96.6 with DPFC

From the above results it can be concluded that by the proper application of the DPFC in the system, its voltage profile can be improved.
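The loss figures above come from load-flow calculations built on the per-line flow equations (1)–(4). A minimal sketch of those equations as reconstructed here, with the shunt susceptance b_sh taken as zero for simplicity and illustrative per-unit values (not the paper's IEEE-14 data):

```python
import math

def line_flows(vi, vj, di, dj, g, b, b_sh=0.0):
    """Real and reactive power flowing from bus i to bus j over a line with
    series admittance y = g + jb and shunt susceptance b_sh, per (1)-(2).
    Voltages in per unit, angles in radians; returns (P_ij, Q_ij)."""
    dij = di - dj
    p = vi ** 2 * g - vi * vj * (g * math.cos(dij) + b * math.sin(dij))
    q = -vi ** 2 * (b + b_sh) - vi * vj * (g * math.sin(dij) - b * math.cos(dij))
    return p, q

# Flow in each direction; the sum of the two real flows is the line's loss.
p_ij, q_ij = line_flows(1.02, 1.00, 0.05, 0.0, g=5.0, b=-15.0)
p_ji, q_ji = line_flows(1.00, 1.02, 0.0, 0.05, g=5.0, b=-15.0)
print(p_ij, p_ji, p_ij + p_ji)  # the real-power loss p_ij + p_ji is non-negative
```

Swapping the bus arguments gives (3)–(4) directly, and with equal voltages and angles both flows vanish, which is a quick sanity check on the signs.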
The application of the proposed DPFC with a PEM fuel cell has reduced the real and reactive power loss in the system at different load conditions.

VII. Conclusion

In this paper, an IEEE-14 bus system with DPFC has been simulated in MATLAB/Simulink. It was observed that the power flow capacity of the system with DPFC is higher than that of the same system without DPFC. The simulation of the proposed system took a very long time to produce the output; in the future, modifications to the design of the system may be applied in order to reduce the simulation time.

References

[1] N. G. Hingorani, L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems, IEEE Press, 2000
[2] ABB, Technologies that changed the world: FACTS, available at: https://new.abb.com/facts/about-facts/technologies-that-changed-theworld-facts
[3] L. Gyugyi, C. D. Schauder, S. L. Williams, T. R. Rietman, D. R. Torgerson, A. Edris, “The unified power flow controller: a new approach to power transmission control”, IEEE Transactions on Power Delivery, vol. 10, no. 2, pp. 1085-1092, 1995
[4] C. Chengaiah, R. V. S. Satyanarayana, “Power flow assessment in transmission lines using Simulink model with UPFC”, International Conference on Computing, Electronics and Electrical Technologies, Kumaracoil, India, March 21-22, 2012
[5] Z. Huang, Y. Ni, C. M. Shen, F. F. Wu, S. Chen, B. Zhang, “Application of unified power flow controller in interconnected power systems: modeling, interface, control strategy, and case study”, IEEE Transactions on Power Systems, vol. 15, no. 2, pp. 817-823, 2000
[6] C. Chengaiah, R. V. S. Satyanarayana, G. V. Marutheswar, “Study on effect of UPFC device in electrical transmission system: power flow assessment”, International Journal of Electrical and Electronics Engineering, vol. 1, no. 4, pp. 66-70, 2012
[7] P. Kannan, S.
Chenthur Pandian, “Case study on power quality improvement of thirty bus system with UPFC”, International Journal of Computer and Electrical Engineering, vol. 3, no. 3, pp. 417-420, 2011
[8] A. R. Bhowmik, C. Nandi, “Implementation of unified power flow controller (UPFC) for power quality improvement in IEEE 14-bus system”, International Journal of Circuit Theory and Applications, vol. 2, no. 6, pp. 1889-1896, 2011
[9] E. Gholipour, S. Saadate, “Improving of transient stability of power systems using UPFC”, IEEE Transactions on Power Delivery, vol. 20, no. 2, pp. 1677-1682, 2005
[10] A. K. Sahoo, S. S. Dash, T. Thyagarajan, “An improved UPFC control to enhance power system stability”, Modern Applied Science, vol. 4, no. 6, pp. 37-48, 2010
[11] A. J. F. Keri, A. S. Mehraban, X. Lombard, A. Eiriachy, A. A. Edris, “Unified power flow controller (UPFC): modeling and analysis”, IEEE Transactions on Power Delivery, vol. 14, no. 2, pp. 648-654, 1999
[12] A. M. Vural, M. Tumay, “Steady state analysis of unified power flow controller: mathematical modelling and simulation studies”, IEEE Bologna Power Tech Conference, Bologna, Italy, June 23-26, 2003
[13] S. N. Singh, I. Erlich, “Locating unified power flow controller for enhancing power system loadability”, International Conference on Future Power Systems, Amsterdam, Netherlands, November 18, 2005
[14] J. Chakravorty, J. Saraswat, V. Bhatia, “Modeling a distributed power flow controller with a PEM fuel cell for power quality improvement”, Engineering, Technology & Applied Science Research, vol. 8, no. 1, pp. 2585-2589, 2018
[15] J. Chakravorty, J. Saraswat, “Deciding optimal location of DPFC in transmission line using artificial algae algorithm”, Engineering, Technology & Applied Science Research, vol. 9, no. 2, pp. 3978-3980, 2019
[16] M. P. Aghababa, M. E. Akbari, A. M. Shotorbani, “An efficient modified shuffled frog leaping optimization algorithm”, International Journal of Computer Applications, vol. 32, no. 1, pp.
26-30, 2011
[17] S. A. Uymaz, G. Tezel, E. Yel, “Artificial algae algorithm (AAA) for nonlinear global optimization”, Applied Soft Computing, vol. 31, pp. 153-171, 2015

Engineering, Technology & Applied Science Research Vol. 7, No. 1, 2017, 1398-1404  1398
www.etasr.com  Mollamotalebi et al.: A Weight-Based Query Forwarding Technique for Super-Peer-Based Grid Resource…

A Weight-Based Query Forwarding Technique for Super-Peer-Based Grid Resource Discovery

Mahdi Mollamotalebi, Department of Computer Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Iran, mmahdi2@live.utm.my
Raheleh Maghami, Department of Computer Engineering, Buinzahra Branch, Islamic Azad University, Buinzahra, Iran, maghamy@qiau.ac.ir
Abdul Samad Ismail, Faculty of Computing, Universiti Teknologi Malaysia, Skudai, Malaysia, abdsamad@utm.my

Abstract—Grid computing environments include heterogeneous resources shared by a large number of computers to handle data- and process-intensive applications. The required resources must be accessible to grid applications on demand, which makes resource discovery a critical service. In recent years, different techniques have been provided to index and discover grid resources. The response time and message load during the search process highly affect the efficiency of resource discovery. This paper proposes a technique to forward queries based on the resource types accessible through each neighbor in super-peer-based grid resource discovery approaches. The proposed technique was simulated in GridSim and the experimental results indicate that it is able to reduce the response time and message load during the search process, especially when the grid environment contains a large number of nodes.

Keywords—grid computing; resource discovery; super-peer; weight-table; query forwarding

I. Introduction

Grid computing aims to handle data- and process-intensive applications.
Grid environments typically include a large number of nodes, each of which owns one or more resources to be shared and used by the applications [1, 2]. With the continuous increase in network bandwidth and resource variety, grid computing is emerging as the next-generation computing platform in government, science, and business [3]. Resource discovery is the process that takes the execution requirements of grid applications, searches the network for the required resources, and returns a set of grid nodes with resources matching the application requests [4]. Grid environments are inherently large-scale and dynamic; as a result, grid resource discovery is a time- and message-consuming process which affects the efficiency of the entire grid [5]. The response time and message load are two important factors for evaluating the efficiency of resource discovery approaches. The response time refers to the time between issuing a resource request and returning the resource owners' addresses, and the message load indicates the number of messages transferred between the grid nodes per second during the search process [6, 7]. The super-peer is a prevalent resource discovery approach in grid environments: the large-scale grid environment is divided into several small-scale environments in order to increase the scalability of resource discovery. Other approaches are based on hierarchical [8-12], peer-to-peer [13, 14], agent-based [15], and centralized [16] structures. This paper proposes a weight-based technique to improve super-peer-based resource discovery approaches in terms of the response time and message load. In the super-peer structure, each node has the role of either regular-peer or super-peer. The super-peer nodes are connected to each other in a peer-to-peer fashion, and each super-peer node additionally acts as a central server for a set of regular-peers.
A regular-peer sends its resource information and resource queries to its related super-peer node [17, 18]. When a regular-peer needs to explore the grid for a required resource, it sends a query message to its local super-peer. If the local super-peer finds the desired resource in its local index, it returns the resource reference to the requesting regular node; otherwise, it forwards the query to its neighboring super-peer nodes. Some recently proposed super-peer-based resource discovery techniques for grid environments are described in the following paragraphs.

In [19], the authors proposed a resource discovery system which uses super-peer nodes in a framework based on Chord [13]. Chord has a single ring to index the resources, but this technique uses multiple rings. Each node of a ring can keep the addresses of some resources matching the queries; a node in a ring does not necessarily own the real resources, but may maintain their IP addresses. A super-ring is used to keep the nodes pointing to other rings. This technique decreases the message load, but it depends heavily on the information existing in the cache. It also benefits from the inherent load balancing of Chord. On the other hand, it cannot control the traffic of different rings. It uses some monitoring mechanisms, but in dynamic conditions the monitoring potentially needs to transfer more messages.

In [12], the authors proposed a technique in which the super-peer nodes connect different physical organizations and maintain the local domain's resource information. Some contact peers are defined in each domain to handle the grid registrations. Different domains can have their own grid frameworks, and they communicate only through the super-peers. In this technique, the query is not forwarded to all the neighboring nodes to traverse the network. Instead, the receiving super-peer node forwards the query to a selected number of neighbors. To this end, the super-peer nodes record the number of query-hits obtained through each neighbor in previous search attempts. In subsequent searches, the super-peer node forwards the received query to the neighbors that had the highest number of query-hits in the past, thereby decreasing the message load.

In [20], the authors proposed a resource discovery technique for super-peer-based structures using semantics and fuzzy theory. It considers delay as a major parameter of the fuzzy system; the amount of semantic similarity between node services is the other parameter. Thus, it uses delay, bandwidth, and semantic similarity as the input parameters of the fuzzy system to create a semantic overlay network. The scheme consists of node grouping and resource discovery, and fuzzy theory is used in both phases. It uses a hybrid P2P structure where the nodes are divided into groups. This technique could increase the number of adequate discovered resources and reduce the response time. On the other hand, the precision of resource discovery is decreased because it emphasizes geographical factors when grouping and searching the resources.

In [21], the authors presented a super-peer-based technique for the discovery of resources in grid environments which supports multi-attribute and range queries. The technique summarizes resource information to improve its scalability. It also exploits routing indices (RIs) to handle the priority of tasks and resolves the problem of access to all domains' summaries. It uses clustering to perform the summarization and builds a tree of gathered information. Then it creates a summary of the database by applying two steps, pruning and leafing.
In the first step, branches are pruned at a given depth such that no branch is deeper than the chosen depth. Despite its scalability, this technique is not as efficient in terms of message load and response time, due to the processing overheads of the summarization steps.

II. DESCRIPTION OF WEIGHTED-SP

The weight-based super-peer (Weighted-SP) technique aims to reduce the message load and response time of super-peer resource discovery by limiting the set of neighbor nodes that receive the forwarded queries. This is done by adding a weight table to the indexing nodes, indicating the weight of each neighbor as a forwarding candidate. In each indexing node, the weight table records the number of resource types accessible through each path. In this way, when a query is received by an indexing node, it forwards the query to a limited number of neighbors with the highest weights for the requested resource type. It is not possible in the super-peer structure to collect comprehensive resource information during the join or update processes, because the super-peer nodes are related to each other in a P2P fashion without any hierarchical organization; moreover, publishing all resource join and update messages to the other super-peer nodes would impose a dramatic message load on the system. Thus, Weighted-SP updates the number of resource types in the weight tables during the search process instead of the join or update processes. For this purpose, whenever a query reaches a resource owner, the resource discovery process returns the found resource owner's address to the requester node, and then the weight tables of the super-peer nodes on the backward path of the successful query are updated. The Weighted-SP scheme is illustrated in Figure 1. Each super-peer node keeps the resource information of its local regular nodes together with the weight table. The weight table includes a row for each neighbor of the current super-peer.
In each row, there are columns corresponding to the resource types existing in the grid. The weight tables of the super-peer nodes are empty in the initial state of the grid environment establishment. In this situation, super-peer nodes which have no information about the resource types in their weight tables forward the queries to all of their neighbors; after each query forwarding, the weight tables are updated for the requested resource type.

Fig. 1. The schema of Weighted-SP resource discovery

The search process is initiated by the regular nodes in the grid. A regular node issues its resource query and sends it to the local super-peer node. The super-peer node first investigates the local regular nodes to find the required resource. If the resource is found, the address of the found resource owner is returned to the requester node and the search process is terminated. Otherwise, the super-peer node sorts the weight-table entries related to the requested resource type and sends the query to a specified percentage of the neighboring nodes through which the highest number of resources of that type is accessible. This process is continued until the required resource is found or all the super-peer nodes have been investigated. In this manner, the query is forwarded to the neighbors with the highest probability of success at each step of the search process. Weighted-SP uses backward messages to be informed about successful search results and updates the weight tables of the super-peer nodes located on the path of the successful search processes. The implementation design of Weighted-SP is presented in Figure 2.
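The neighbor selection at the heart of this search step can be sketched as follows. This is an illustrative Python version (the paper's implementation is Java/GridSim); the dictionary layout of the weight table is our own assumption:

```python
# Sketch of weighted query forwarding: given a weight table mapping each
# neighbor to per-resource-type counts, pick the top `percent` of neighbors
# for the requested type. Empty tables (initial state) mean forward to all.

def pick_neighbors(weight_table, rtype, percent):
    """weight_table: {neighbor_name: {resource_type: weight}}.
    Returns the neighbors to which the query should be forwarded."""
    weights = {n: w.get(rtype, 0) for n, w in weight_table.items()}
    if all(v == 0 for v in weights.values()):
        return sorted(weights)                     # no information yet: flood
    k = max(1, round(len(weights) * percent / 100))
    ranked = sorted(weights, key=lambda n: weights[n], reverse=True)
    return ranked[:k]                              # highest-weight neighbors
```

For example, with four neighbors and a 50% forwarding percentage, the query goes only to the two neighbors through which the most resources of the requested type have been reached before.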
Fig. 2. Implementation design of Weighted-SP

The request manager module generates the resource requests and the nodes' join/leave events. The generated resource requests are assigned to users randomly and sent to the broker module. The broker module is responsible for scheduling the resource requests and responses; it acts as a medium between the resource discovery system and the grid environment. It analyzes the received resource requests according to their receive time, separates multi-attribute requests into independent queries, schedules all queries, and forwards them in turn to the GIS. It then waits for the resource owner address to be returned from the GIS module. When the matched resource owner address is discovered by the GIS and returned, the broker module sends the allocation request to the found resource owner. The indexing nodes and resource information are handled by the GIS; almost all modules interact with the GIS to transfer resource information or obtain search results. When a resource is allocated to an application, its value is updated and the new resource information is sent to the GIS. The resource handler module is responsible for sending any resource information changes caused by resource allocation, join, and leave events to the GIS, upon which the GIS updates all indices related to the changed resource information. The resource handler also manages the allocation requests sent by the broker: if a resource is available, it is allocated to the requesting application for a period of time determined by the initial resource requester. The GIS is responsible for indexing the resource information and locating the resource owners according to the received requests.
After finding the resource owner addresses, the GIS returns them to the broker to handle the allocation process. The report writer module receives statistical information about the requested resources, the transferred messages, and the successful search results in order to produce the appropriate reports according to the comparison and analysis requirements. For each experiment, R resource requests are issued. Resources join and leave the network, and these events cause additional messages to be transferred. Moreover, after a required resource is found, it is allocated to the requester node, so the resource information changes caused by the allocation events must also be considered. The update messages related to the allocation events are estimated by (1), where s is the success rate of queries, R is the number of issued resource requests, and k is the rate of multi-attribute queries that request two resources:

    n1 = 2 (k + 1) s R    (1)

The messages related to the join and leave events are estimated by (2), where n is the number of regular nodes, N is the number of super-peer nodes, and h1 is the rate of join and leave events:

    n2 = h1 n / N    (2)

In addition to the above update messages, some messages are transferred for the random change events that the system applies to the resources; each grid node shares one to three resources. Equation (3) estimates these messages together with the allocation and join/leave events, where r is the maximum number of resources in each grid node and h2 is the rate of change events on the resources:

    mc = n1 + n2 + Σ(i=1..r) h2 i (n / N)    (3)

Considering the number of issued resource requests R and the rate of multi-attribute queries k, the number of resource requests reaching a super-peer node is estimated by (4).
    mr = 2 (k + 1) R / N    (4)

Considering the update and request messages at each super-peer node, and with p the percentage of neighbors chosen by the super-peer nodes to forward the queries, the overall number of transferred messages can be estimated by (5):

    M = (p / n) Σ(i=1..N) (mc + mr)    (5)

The implementation methods of Weighted-SP, including the join/leave, request issue, and search processes, are presented and described in the following paragraphs. Figure 3 presents the algorithm of the join events for grid resources in Weighted-SP.

Fig. 3. The algorithm of grid nodes' join events in Weighted-SP

Once the request manager issues a join event, the method joinRequest() is performed. This method chooses one of the free nodes to join the grid environment. Since join and leave events are issued randomly, there are usually some free nodes available to be chosen. After choosing a free node, its status is changed to active and its resource is added to the existing resources of the grid environment. Then a super-peer node that has free capacity to index the newly received node is found and assigned to the joined grid node; the new node is also added to the regular nodes of the found super-peer. Figure 4 presents the pseudo-code of the leave events for grid resources in Weighted-SP. The method leaveRequest() first chooses a random node to leave. The leave event can be issued for a grid node or for a resource of a grid node; this is specified by the request manager through the parameter 'type', which is set to 'complete leave' if the grid node should leave the grid. If the leave event is issued for a resource of an existing grid node, a random number between one and three is chosen and the corresponding resource is removed from the chosen grid node.
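The join handling of Figure 3 (and, analogously, the leave handling of Figure 4) can be sketched as follows. This is an illustrative Python version of the described steps, not the paper's Java code; the capacity limit and the data structures are our own assumptions:

```python
# Sketch of joinRequest(): activate a free node and assign it to a
# super-peer with free indexing capacity. CAPACITY is an assumed limit.

CAPACITY = 4  # assumed maximum number of regular nodes per super-peer

def join_request(free_nodes, super_peers, index):
    """free_nodes: list of free node names (one is activated and removed);
    super_peers: list of super-peer names;
    index: {super_peer: [regular nodes it indexes]}.
    Returns (node, assigned super-peer), or None if no join is possible."""
    if not free_nodes:
        return None
    node = free_nodes.pop()                      # choose a free node
    for sp in super_peers:                       # find free indexing capacity
        if len(index.setdefault(sp, [])) < CAPACITY:
            index[sp].append(node)               # add to its regular nodes
            return node, sp
    return None                                  # no super-peer has capacity
```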
Moreover, the request manager issues events to make super-peer nodes leave. In this case, the method leaveRequestSP() is performed, which substitutes a free super-peer for the leaving one. For this purpose, the process copies the neighbors and related nodes of the leaving super-peer node to the new one. The method requestIssue() in Figure 5 chooses a regular node at random to issue a resource request. Since multi-attribute queries are possible in Weighted-SP and each grid node holds one to three resources, this process adds a random number of resources to the query. Considering that Weighted-SP uses the successful query messages to update the weight tables of the super-peers on the backward path, the super-peer of the current regular node is kept in the global variable originSP; this variable is used subsequently in the method wtableUpdate. The query is then forwarded to the local super-peer of the current regular node.

Fig. 4. The algorithm of grid nodes' leave events in Weighted-SP

Fig. 5. The algorithm of resource request issue in Weighted-SP

The receivedRequest() method shown in Figure 6 is responsible for handling the search process of the requested resources. The super-peer node investigates its related regular nodes to satisfy the received query. If one of its regular nodes can satisfy the query, the super-peer performs the wtableUpdate process in order to update the weight tables of the super-peers on the backward path, and the search process is terminated. If the super-peer node does not find any matched resource among its related regular nodes, it sets the number of branches to forward the query according to the forwarding percentage specified by the user. The weight table of the current indexing node is then copied to tempwtable in order to find the branches through which the highest number of resources of the requested type is accessible; this copy is needed to prevent changes to the original weight table of the indexing node during the forwarding steps. At each forwarding step, the neighbor with the highest number of accessible resources of the requested type is found and the query is forwarded to it. The index of the appropriate neighbor is kept in the variable hindex, and in order to exclude the highest value from the next forwarding steps, its value is set to zero in tempwtable.

Fig. 6. The algorithm of resource discovery in Weighted-SP

The wtableUpdate process shown in Figure 7 is responsible for updating the weight tables of the super-peer nodes on the backward path once the required resource has been found. Its first parameter is the address of the super-peer node that should update its weight table. The query is sent as the second parameter to inform the receiving super-peer which resource type this update refers to, and the third parameter is the address of the current super-peer. The receiving super-peer increments the value of the resource type that is accessible through its neighbor; it then fetches the previous super-peer in order to repeat the update process. The updates are repeated until the original super-peer related to the initial requester regular node is reached.

Fig. 7. The algorithm of weight table update in Weighted-SP

III. SIMULATION RESULTS AND EVALUATION

In order to perform the simulation experiments, the proposed technique was implemented in Java under the simulation toolkit GridSim version 5.2.
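The backward-path weight-table update described for Figure 7 can be sketched as follows. This is an illustrative Python rendering (the actual implementation is Java/GridSim), with the path passed explicitly as a list rather than fetched super-peer by super-peer:

```python
# Sketch of wtableUpdate: after a successful search, every super-peer on
# the query's path credits the neighbor through which the result came,
# for the requested resource type. Data structures are illustrative.

def wtable_update(path, rtype, weight_tables):
    """path: super-peer names from the requester's super-peer to the
    resource owner's super-peer;
    weight_tables: {super_peer: {neighbor: {resource_type: weight}}}.
    Increments, at each super-peer, the weight of its forwarding neighbor."""
    for prev, nxt in zip(path, path[1:]):
        row = weight_tables.setdefault(prev, {}).setdefault(nxt, {})
        row[rtype] = row.get(rtype, 0) + 1
```

Repeated successful searches along the same path keep raising those weights, which is what steers later queries toward productive neighbors.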
The hardware platform includes an Intel Core 2 Duo 2.93 GHz CPU and 4 GB of RAM. The experiments were performed with different numbers of grid nodes, i.e. 1000, 5000, 10000, and 20000. Also, to investigate the impact of the number of forwarded neighbors, three values of the forwarding percentage, i.e. 25%, 50%, and 75%, were considered for the indexing nodes. In addition, to obtain reliable results, 1000 resource requests were issued per experiment and the average message load and response time were calculated. In order to compare and evaluate the results, an existing super-peer-based technique [21] was simulated on the same hardware and software platform; this technique is referred to as SP in the rest of this paper. The message load in the experiments is measured by dividing the number of messages transferred during the search process by the time spent issuing all the requests. The response time is obtained by averaging the response times of all requests, where the response time of each request is the duration between issuing the request and obtaining the result. The experimental results of Weighted-SP are presented in the following paragraphs. Table I presents the message load of Weighted-SP for different numbers of grid nodes and forwarding percentage values, and the reduction rates of the message load for Weighted-SP relative to SP are presented in Table II.

TABLE I. THE MESSAGE LOAD OF WEIGHTED-SP FOR DIFFERENT NUMBERS OF GRID NODES AND FORWARDING PERCENTAGE VALUES

    Number of grid nodes    1000    5000    10000    20000
    Pr = 25%                 4.6    13.6     30.6     61.1
    Pr = 50%                 6.1    17.58    40.52    80.1
    Pr = 75%                 7.39   22.34    51.4    104.73
    SP                       8.41   25.74    60.49   124.97

The reduction rates of the message load indicate that choosing lower percentages of neighbors to forward the queries results in higher reductions of the message load.
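The reduction rates in Table II follow directly from the loads in Table I as (SP load minus Weighted-SP load) divided by the SP load. A short Python check, using values copied from Table I:

```python
# Recompute Table II reduction rates from the Table I message loads.

def reduction_rate(sp_load, weighted_load):
    """Percentage reduction of the message load relative to SP."""
    return round((sp_load - weighted_load) / sp_load * 100, 1)

# 20000 grid nodes, Pr = 25%: SP = 124.97, Weighted-SP = 61.1
r25 = reduction_rate(124.97, 61.1)   # 51.1, as reported in Table II
# 20000 grid nodes, Pr = 50%: SP = 124.97, Weighted-SP = 80.1
r50 = reduction_rate(124.97, 80.1)   # 35.9, as reported in Table II
```

The other entries reproduce similarly, within the rounding of the published tables.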
The reduction rate of the message load is not equal to the percentage of excluded neighbors, since the weight tables are used only to limit the query forwards; the update messages between the super-peer nodes and their related regular nodes are transferred regardless of the weight tables. Table III presents the average response time of SP and Weighted-SP for different numbers of grid nodes. The response time of resource discovery is affected by the weight-based forwarding of the queries because the neighbors with lower accessibility of the required resource types are excluded from the search process; therefore, the average response time of the issued queries is reduced.

TABLE II. THE MESSAGE LOAD REDUCTION RATES OF WEIGHTED-SP FOR DIFFERENT FORWARDING PERCENTAGES

    Number of grid nodes    1000    5000    10000    20000    Overall reduction rate
    Pr = 25%               45.2%    47%     49.3%    51.1%    48.15%
    Pr = 50%               27.4%   31.7%    33%      35.9%    32%
    Pr = 75%               12%     13.2%    15%      16.1%    14%

TABLE III. THE AVERAGE RESPONSE TIME OF SP AND WEIGHTED-SP FOR DIFFERENT NUMBERS OF GRID NODES

    Number of grid nodes    1000    5000    10000    20000
    Weighted-SP             0.19    0.32    0.45     0.59
    SP                      0.2     0.35    0.51     0.67
    Reduction rate          5%      8.5%    11.5%    11.9%

In order to identify appropriate values of the forwarding percentage, the success rates of the search processes were measured. Because the weight-based query forwarding excludes some neighbors from the search domain, it negatively affects the success rate of the search process. Table IV presents the success rate of Weighted-SP for different numbers of grid nodes and query forwarding percentages. The success rate of Weighted-SP is lower than that of SP in all cases, because Weighted-SP excludes some neighbors from the investigation during the search process.
However, the success rate does not increase significantly when the queries are forwarded to more neighbors, because the success rate of Weighted-SP depends on the information in the weight tables more than on the forwarding percentage. If the weight tables do not contain useful information about the resource types, the technique cannot forward the queries to the appropriate grid nodes. In Weighted-SP, the super-peer nodes are not informed about each other's resource information, and the weight tables are updated using the results of resource queries; thus, a larger number of requests enriches the information in the weight tables. In order to investigate the impact of the number of issued queries on the success rate of Weighted-SP, the experiments were repeated with three more values of issued resource queries, i.e. 5000, 10000, and 20000. Table V presents the success rate of Weighted-SP for 1000 grid nodes and different numbers of issued queries. The results indicate that issuing more queries increases the success rate of Weighted-SP, because the weight-table information of the super-peer nodes depends on the search results of the issued queries. A higher number of issued queries enriches the information in the weight tables, and subsequently the queries can be forwarded more efficiently toward the appropriate neighbors. As a result, Weighted-SP reduces the message load and response time of the search process by excluding some neighbors from the search domain based on the specified forwarding percentage, and because the search results are used to update the weight tables of the super-peer nodes, a larger number of issued queries enriches the weight tables and increases the success rate of the technique.
TABLE IV. THE SUCCESS RATE OF WEIGHTED-SP FOR DIFFERENT NUMBERS OF GRID NODES AND QUERY FORWARDING PERCENTAGES

    Number of grid nodes    1000    5000    10000    20000
    Pr = 25%               44.8%    45%     45.2%    45.5%
    Pr = 50%               47.2%   47.6%    47.9%    48.1%
    Pr = 75%               49.7%    51%     51.7%    52.3%

TABLE V. THE IMPACT OF THE NUMBER OF ISSUED QUERIES ON THE SUCCESS RATE OF WEIGHTED-SP

    Number of requests      1000    5000    10000    20000
    Pr = 25%               44.8%   47.1%    51.7%    57.8%
    Pr = 50%               47.2%   58.8%    64.1%    74.9%
    Pr = 75%               49.7%   69.1%    75.3%    79.6%
    Average success rate    47%     58%      63%      71%

IV. CONCLUSION

The resource discovery service, responsible for finding the resources required by grid applications, plays an important role in grid computing systems. This paper proposed a weight-based technique to improve super-peer-based grid resource discovery solutions in terms of message load and response time. In the proposed technique, each indexing node keeps a weight table consisting of its neighbors and the number of different resource types that are accessible through each neighbor. In the super-peer structure, the resource information of other super-peer nodes cannot be collected during the join or update processes, so such information is collected using the successful search results. The experimental results indicated that the proposed technique reduces the message load and response time during the search process. With regard to the contents of the weight tables, choosing a high percentage of neighbors to forward the queries causes the system to act similarly to conventional super-peer-based systems, while choosing a very low percentage excludes many resource owners from the search domain. The results showed that choosing more than 75% of the neighbors can yield suitable message load and response time values while keeping the success rate high. In addition, a higher number of resource requests enriches the weight tables of the proposed technique and causes it to act more efficiently.
REFERENCES

[1] D. Puppin, S. Moncelli, R. Baraglia, N. Tonellotto, F. Silvestri, "A grid information service based on peer-to-peer", 11th International Euro-Par Conference, Lisbon, Portugal, August 30-September 2, 2005
[2] M. A. Arafah, H. S. Al-Harbi, S. H. Bakry, "Grid computing: a STOPE view", International Journal of Network Management, Vol. 17, pp. 295-305, 2007
[3] M. R. Islam, M. T. Hasan, G. Ashaduzzaman, "An architecture and a dynamic scheduling algorithm of grid for providing security for real-time data-intensive applications", International Journal of Network Management, Vol. 21, pp. 402-413, 2011
[4] M. Hauswirth, R. Schmidt, "An overlay network for resource discovery in grids", 16th International Workshop on Database and Expert Systems Applications, pp. 343-348, 2005
[5] A. Hameurlain, D. Cokuslu, K. Erciyes, "Resource discovery in grid systems: a survey", International Journal of Metadata, Semantics and Ontologies, Vol. 5, pp. 251-263, 2010
[6] B. Beverly Yang, H. Garcia-Molina, "Designing a super-peer network", 19th International Conference on Data Engineering, pp. 49-60, 2003
[7] D. Talia, P. Trunfio, P. Fragopoulou, H. Papadakis, M. Mordacchini, M. Pennanen, K. Popov, V. Vlassov, S. Haridi, "Peer-to-peer models for resource discovery on grids", Future Generation Computer Systems, Vol. 23, No. 7, pp. 864-878, 2007
[8] A. Padmanabhan, S. Ghosh, S. Wang, "A self-organized grouping (SOG) framework for efficient grid resource discovery", Journal of Grid Computing, Vol. 8, pp. 365-389, 2010
[9] Y. Gong, F. Dong, W. Li, Z. Xu, "VEGA infrastructure for resource discovery in grids", Journal of Computer Science and Technology, Vol. 18, pp. 413-422, 2003
[10] Y. Ma, B. Gong, L. Zou, "Resource discovery algorithm based on small-world cluster in hierarchical grid computing environment", 7th International Conference on Grid and Cooperative Computing, pp. 110-116, 2008
[11] R.-S. Chang, M.-S. Hu, "A resource discovery tree using bitmap for grids", Future Generation Computer Systems, Vol. 26, pp. 29-37, 2010
[12] C. Mastroianni, D. Talia, O. Verta, "Designing an information system for grids: comparing hierarchical, decentralized P2P and super-peer models", Parallel Computing, Vol. 34, pp. 593-611, 2008
[13] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, H. Balakrishnan, "Chord: a scalable peer-to-peer lookup service for internet applications", SIGCOMM Computer Communication Review, Vol. 31, pp. 149-160, 2001
[14] P. Merz, K. Gorunova, "Fault-tolerant resource discovery in peer-to-peer grids", Journal of Grid Computing, Vol. 5, pp. 319-335, 2007
[15] M. Marzolla, M. Mordacchini, S. Orlando, "Resource discovery in a dynamic grid environment", 16th International Workshop on Database and Expert Systems Applications, pp. 356-360, 2005
[16] D. Cokuslu, A. Hameurlain, K. Erciyes, "Grid resource discovery based on centralized and hierarchical architectures", International Journal for Infonomics, Vol. 3, pp. 227-233, 2010
[17] C. Mastroianni, D. Talia, O. Verta, "A super-peer model for resource discovery services in large-scale grids", Future Generation Computer Systems, Vol. 21, pp. 1235-1248, 2005
[18] P. Trunfio, D. Talia, H. Papadakis, P. Fragopoulou, M. Mordacchini, M. Pennanen, K. Popov, V. Vlassov, S. Haridi, "Peer-to-peer resource discovery in grids: models and systems", Future Generation Computer Systems, Vol. 23, pp. 864-878, 2007
[19] J. Salter, N. Antonopoulos, "An optimized two-tier P2P architecture for contextualized keyword searches", Future Generation Computer Systems, Vol. 23, pp. 241-251, 2007
[20] S. Javanmardi, S. Shariatmadari, M. Mosleh, "A novel decentralized fuzzy based approach for grid resource discovery", International Journal of Innovative Computing, Vol. 3, No. 1, pp. 23-32, 2013
[21] A. C. Caminero, A. Robles-Gomez, S. Ros, R. Hernandez, L. Tobarra, "P2P-based resource discovery in dynamic grids allowing multi-attribute and range queries", Parallel Computing, Vol. 39, pp. 615-637, 2013

Engineering, Technology & Applied Science Research Vol. 9, No. 5, 2019, 4801-4807 www.etasr.com Jake et al.: Spectral Re-Growth Suppression in the FBMC-OQAM Signal under the Non-Linear Behavior of a Power Amplifier

Spectral Re-Growth Suppression in the FBMC-OQAM Signal under the Non-Linear Behavior of a Power Amplifier

Jimmy Jake
Department of Electrical Engineering, Pan African University, Institute for Basic Sciences, Technology and Innovation, Nairobi, Kenya
jakeloponi@gmail.com

Elijah Mwangi
School of Engineering, University of Nairobi, Nairobi, Kenya
mwangiel2010@gmail.com

Kibet Langat
Department of Telecommunication and Information Engineering, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
kibetlp@jkuat.ac.ke

Abstract—The filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) scheme has some impressive properties that make it popular as one of the substitutes for orthogonal frequency division multiplexing (OFDM) in the upcoming broadband wireless communication systems. Although FBMC-OQAM preserves the multicarrier modulation (MCM) features, its spectrum usually suffers from impairments when subjected to the nonlinear behavior of a power amplifier (PA), which results in spectral re-growth. Due to the spectrum limitation and low energy efficiency foreseen in the forthcoming 5G networks, it is vital to confine the spectrum of the FBMC-OQAM signal within the allocated band of interest.
in this paper, the suppression of the spectral regrowth experienced on the fbmc-oqam signal due to the nonlinear distortion effects introduced by the pa is investigated. the crest factor reduction (cfr) method in combination with an adaptive digital pre-distortion (dpd) are used. the peak windowing technique based on sequential asymmetric superposition windowing (sasw) algorithm is used in the cfr part while the least square estimation with qr-decomposition (lse/qr) has been used as the coefficient’s estimator and adaptation algorithm in the dpd part. the performance of the two combined techniques has been evaluated on systemvue2018 simulation platform. the adjacent channel leakage ratio (aclr) and the error vector magnitude (evm) have been considered as the performance merits. the simulation results show that the proposed techniques significantly improve the spectrum, first by reducing the papr of the fbmc-oqam signal by about 1.5db. secondly, the spectral re-growth has been reduced by about -45.74db adjacent channel leakage suppression and the error vector magnitude measure has been obtained to be about 7.12%. (-22.95db). these values lead to better average input power of the fbmc-oqam signal and improvement in the spectral efficiency and they are in accordance with the 3gpp standard for wideband signals in nonlinear systems. keywords-fbmc-oqam; spectral re-growth; nonlinear pa; cfr; sasw; adaptive dpd; lse with qr decomposition i. introduction the dynamic growth in the number of users that have different demands for accessing information under wireless communication is leading to spectrum scarcity. this has prompted a concern for the investigation of new techniques to address the issues of spectrum and energy efficiency improvement. 
the orthogonal frequency division multiplexing (ofdm) modulation scheme has been widely used in the existing wireless domain due to its ability to effectively mitigate delay spread in broadband wireless channels in long-term evolution (lte) 4g cellular networks [1-3]. however, the ofdm signal is built on rectangular pulses in the time domain, which leads to slowly decaying side lobes in the frequency domain. these features make ofdm inappropriate, especially in situations where users need to operate asynchronously and strict limits on the out-of-band radiation levels are required. to overcome these limitations, the fbmc-oqam scheme has been studied as an alternative waveform to ofdm and is becoming the leading contender among the multicarrier modulation schemes proposed for upcoming wireless communication systems, because of its low out-of-band emission (oobe), cyclic prefix (cp)-free transmission, and robustness to asynchronous operation [4-6]. unfortunately, the fbmc-oqam signal still suffers from a high peak-to-average power ratio (papr), which makes it susceptible to the nonlinear behavior of the pa and, as a result, to spectral re-growth. high-peak signals require amplification with highly linear power amplifiers, but such amplifiers come at high cost and compromised energy efficiency [7, 8], which may not suit many telecommunication network providers. for this reason, nonlinear power amplifiers are still common in many wireless transmitters. however, they cause signal distortion, which requires compensation techniques for signals with high peak values.
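to make the papr problem concrete, the following numpy sketch (not from the paper; tone count and seed are illustrative) compares the envelope of a single constant-modulus tone with the envelope of a sum of many random-phase subcarriers, which is the situation a multicarrier waveform such as fbmc-oqam creates:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

t = np.arange(1024) / 1024.0
# one tone: constant envelope, so PAPR is exactly 0 dB
single = np.exp(2j * np.pi * 32 * t)
# 64 random-phase tones: peaks add up occasionally, raising the PAPR
multi = sum(np.exp(2j * np.pi * (k * t + rng.uniform())) for k in range(1, 65))

print(papr_db(single))
print(papr_db(multi))
```

the multicarrier signal typically lands several db above the single tone, which is exactly the headroom a linear pa must provide.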
among these techniques are clipping and filtering, peak cancellation, and peak windowing for crest factor reduction [9, 10], while feedforward, feedback, and pre-distortion [10, 11] are used for nonlinearity compensation. corresponding author: jimmy jake. these techniques allow the nonlinear pa to operate at higher output power while maintaining linearity and increasing energy efficiency. reducing the crest factor enhances both the energy efficiency and the linearization approaches required to compensate the nonlinearities introduced by the pas. in practice, however, the crest factor reduction (cfr) technique slightly degrades the signal quality, which reduces data throughput. for multicarrier signals such as fbmc-oqam, degradation within the required bandwidth is less significant than degradation resulting from the broadening of the signal bandwidth [10]. in this work, the peak windowing approach [9, 10] based on the kaiser window together with the sequential asymmetric superposition windowing (sasw) algorithm [12] is used to reduce the high peak values of the fbmc-oqam signal. this algorithm was chosen because it is applicable to both single- and multicarrier signals, and because it can handle the over-attenuation that arises from unnecessary window superposition. the concurrent application of crest factor reduction and digital pre-distortion can meet the linearity requirements while retaining the energy efficiency benefits [13]. digital pre-distortion (dpd) techniques are currently popular in the cellular communication domain as nonlinearity compensating techniques [14].
due to its high flexibility and excellent linearization performance, dpd has been widely used to linearize pas and tends to be an essential linearization technique in current and next-generation wireless communication systems. owing to the dynamic nature of pa nonlinearity, adaptive digital pre-distortion is used here as the nonlinearity compensating technique to deal with the spectral re-growth experienced by the fbmc-oqam signal, in order to maintain the characteristics of the signal under the nonlinear behavior of the pa. although adaptive digital pre-distortion can be modeled as a nonlinear technique using the volterra series, a look-up table (lut), or a memory polynomial (mp) [15, 16], the main focus here is on the memory polynomial model, a truncated version of the volterra series, due to its elegance and simplicity of implementation. the least square estimation (lse) algorithm has been used on many occasions to estimate the coefficients of nonlinear systems [10, 17]. here, lse with qr decomposition is used to estimate and adapt the model coefficients, because of its ability to deal with the over-determined problems that arise from the large number of samples used, as well as the ill-conditioned and rank-deficient scenarios that may be encountered during the inversion of the transfer function of the nonlinear power amplifier. to the best of our knowledge, no published study has used the combination of these two algorithms to concurrently minimize both the papr and the spectral re-growth of fbmc-oqam under the nonlinear effects of the pa. ii. related work in [15], the impact of hpa nonlinearity on the performance of ofdm and fbmc-oqam systems was investigated, with saleh's model adopted for the nonlinear hpa. two pre-distortion schemes based on the indirect learning architecture were presented.
the first scheme aimed to compensate the amplitude and phase distortions induced by the nonlinear hpa simultaneously, while the second aimed to compensate these distortions separately. it was shown that the first pre-distortion scheme performs worse in the fbmc-oqam system than in ofdm. with the second scheme, where the phase and amplitude pre-distortions were made separately, the ofdm and fbmc-oqam systems reached the same performance, showing that more attention must be paid to phase correction in fbmc-oqam. authors in [18] presented an innovative algorithm for scalar feedback digital pre-distortion, known as orthogonal scalar feedback linearization, to compensate the nonlinear distortion and reduce the spectral re-growth in nonlinear power amplifiers. the adaptation of the discrete model coefficients becomes orthogonal in the intermodulation domain. this scheme achieved lower intermodulation at the power amplifier output and compared favorably with existing scalar feedback digital pre-distortion algorithms in terms of convergence time and output power variation. the problem with this method is that the coefficients are adjusted independently, while the lower-order coefficients are affected by the higher-order ones; moreover, the adjustment of the current coefficient affects the adjustments made to the previous coefficients, which causes non-orthogonality of the coefficients in the intermodulation domain. the distortion caused by nonlinear power amplifiers thus motivates further correction schemes. in [19], the performance of 5g candidate waveforms such as fbmc and ufmc with nonlinear power amplifiers was evaluated using digital pre-distortion and an iterative correction algorithm with hard detection, and was compared to the ofdm waveform. it was observed that, for a power amplifier without memory effects, ufmc presents the most robust behavior while fbmc suffers more impact in terms of ber.
it was also noted that the iterative correction with hard detection algorithm is effective on all three waveforms and that not many iterations are needed to reach a result close to linear performance. the scenario with a memory power amplifier changes this result somewhat: in this case, fbmc is the one that overcomes the others in terms of ber performance. overall, the 5g waveforms showed more robustness than the 4g waveform (ofdm). iii. system model a. overview of the fbmc-oqam system the overall concept of fbmc-oqam is the transmission of complex symbols whose in-phase and quadrature components are interleaved by half a symbol duration, T/2. in the oqam pre-processing, the complex input symbol vectors c_{m,n} are converted into real symbols, with the in-phase and quadrature components time-staggered by half a symbol period in order to maintain orthogonality between carriers. for 0 ≤ m ≤ 2M-1 and 0 ≤ n ≤ N-1, where m and n are the symbol and subcarrier indices respectively, the oqam pre-processing is formulated mathematically as in [20]. the procedure of the oqam pre-processing in the fbmc transmitter is illustrated in figure 1. after the oqam pre-processing, the real symbols undergo poly-phase filtering that involves ifft transformations along with filtering by a synthesis filter bank (sfb) with an impulse response g[k], as illustrated in figure 2. fig. 1. oqam pre-processing in the fbmc transmitter fig. 2. sfb implementation in the fbmc transmitter a discrete-time baseband fbmc-oqam signal s[k] upsampled by a factor N/2 is then obtained [21]:

a_n = [a_{0,n}, a_{1,n}, ..., a_{2M-1,n}]    (1)

s[k] = \sum_{m=0}^{2M-1} \sum_{n=0}^{N-1} a_{m,n} \, g[k - mN/2] \, e^{j(2\pi/N) n k} \, e^{j\phi_{m,n}}    (2)

where g[.] is the fbmc-oqam modulation (prototype filter) function, a_{m,n} are the oqam-processed symbols obtained from the c_{m,n} vectors, and \phi_{m,n} is the phase term, equal to (\pi/2)(m+n) - \pi m n. the physical layer for dynamic spectrum access and cognitive radio (phydyas) prototype filter introduced in [22, 23] with overlapping factor k=4 is considered here as the pulse shaping filter in the fbmc-oqam system, because of its optimal localization of the signal in both the time and frequency domains. b. cfr-based sequential asymmetric superposition windowing algorithm a major challenge in multicarrier systems such as fbmc-oqam is the resulting high peak values. this mainly occurs because each subcarrier is modulated and filtered individually, so the instantaneous power can greatly exceed the average power. the peak-to-average power ratio (papr) of the discrete-time fbmc-oqam signal is [24]:

PAPR\{s[k]\} = \max_{k} |s[k]|^2 \,/\, E[|s[k]|^2]    (3)

where E[.] is the expectation operator. here, the sequential asymmetric superposition windowing (sasw) algorithm is adopted because it applies to both single- and multicarrier signals. first, a window length is specified; when peaks are detected within the window, they are grouped into blocks and their locations are indexed. in order to minimize the over-attenuation introduced by unnecessary window superposition, an overall windowing function for each block is constructed iteratively to deal with the dynamic fluctuation of the peaks. the window segment for the grouped peaks is then smoothed by the addition of all the asymmetric peak windows in the block. the sasw algorithm is applied to the peaks in each block, where only the peaks with large values take part in the decision making of the scaling function.
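a deliberately simplified peak-windowing sketch (a single-pass stand-in, not the full sasw algorithm; the function name, threshold, and window parameters are illustrative) shows the basic idea of attenuating peaks with a smooth kaiser-shaped window rather than hard clipping:

```python
import numpy as np

def peak_window_cfr(x, threshold, win_len=33, beta=15.0):
    """Attenuate samples around each peak exceeding `threshold`, smoothing
    the attenuation with a normalized Kaiser window so the gain changes
    gradually (hard clipping would create wideband spectral re-growth)."""
    mag = np.abs(x)
    # ideal per-sample clipping gain (1 where the envelope is below threshold)
    c = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    b = 1.0 - c                              # required attenuation depth
    w = np.kaiser(win_len, beta)
    w /= w.sum()
    b_smooth = np.convolve(b, w, mode="same")  # spread dips over neighbors
    # taking the deeper of the raw/smoothed dips keeps peaks below threshold
    return x * (1.0 - np.maximum(b, b_smooth))

rng = np.random.default_rng(1)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
thr = 2.0
y = peak_window_cfr(x, thr)
```

the full sasw algorithm additionally groups peaks into blocks, uses asymmetric left/right windows, and suppresses redundant window superpositions; the sketch only reproduces the smooth-attenuation principle.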
the crest factor reduction (cfr) based on sequential asymmetric superposition windowing is given by [12]:

c(n) = 1 - \sum_{i} b(n_i) \left[ w_L(n - n_i) + w_R(n - n_i) \right]    (4)

where w_L(n) and w_R(n) are the left and right sides of the window function, with different window lengths, and b(n_i) is the weighting factor for the peak indexed by n_i, with b(n_i) = 0 for the concealed peaks. c. digital pre-distortion in wireless systems fundamentals the basic concept of the dpd technique is illustrated in figure 3. a transfer function of a nonlinear pa is shown with x_DPD(n) as the input and y(n) as the output. if the dpd model manipulates the signal with a proper inverse transfer function, with x(n) as input and x_DPD(n) as output, the final output y(n) of the pa will be linear with respect to the original input x(n). unlike circuit-level nonlinearity compensation, digital pre-distortion uses black-box behavioral modeling procedures to describe and invert the input signal before the pa [25]. under this concept, only the pa input-output relationship is considered in the nonlinearity compensation process, which significantly relieves the burden of analogue circuit design and debugging. fig. 3. basic concept of digital pre-distortion d. digital pre-distortion model description the power amplifier is an essential component of the transmitter. however, its operation near the saturation zone introduces nonlinear distortion to the input signals, which results in spectral re-growth. therefore, these nonlinearities need to be compensated in order to maintain the spectrum of the input signal within the specified band of interest.
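the inverse-transfer-function idea of figure 3 can be illustrated with a toy memoryless pa (a cubic compression with an illustrative coefficient A, not the paper's pa model) and its first-order polynomial inverse as the pre-distorter:

```python
import numpy as np

A = 0.1  # toy third-order compression coefficient (illustrative only)

def pa(x):
    """Toy memoryless PA: unit linear gain with odd-order compression."""
    return x - A * x * np.abs(x) ** 2

def predistort(x):
    """First-order inverse of the toy PA: pre-expand the envelope so the
    PA's compression cancels to first order."""
    return x + A * x * np.abs(x) ** 2

rng = np.random.default_rng(2)
x = 0.3 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

err_raw = np.abs(pa(x) - x).max()            # distortion without DPD
err_dpd = np.abs(pa(predistort(x)) - x).max()  # residual after DPD
```

the residual after pre-distortion is of higher order in the compression term, so for moderate drive levels it is much smaller than the raw distortion; a real pre-distorter inverts a measured behavioral model rather than a known formula.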
the nonlinearity of the pa is usually dynamic due to its built-in components, aging, and other environmental factors. therefore, an adaptive identification of the dynamic nonlinear behavior of the pa is essential. to compensate for this nonlinear and dynamic distortion in the pa, adaptive dpd models based on the indirect learning architecture (ila) with a memory polynomial (mp) are used because of their simplicity [10]. this concept is illustrated in figure 4, where the adaptive dpd model adjusts the signal samples x(n) in the digital domain to compensate for the nonlinear behavior of the pa output y(n) in the analogue domain. the pa output is monitored by an observation path and converted to the digital domain, where the input signal x_DPD(n) to the pa and the feedback signal z(n) from the pa output are compared; an adaptive algorithm then updates the dpd coefficients in the pre-distorter block accordingly. the overall idea of using adaptive dpd is to adjust the fbmc-oqam signal x(n) so as to minimize, in a dynamic manner, the distortion introduced by the nonlinear pa. fig. 4. ila for an adaptive digital pre-distortion model extraction e. digital pre-distortion authentication procedures dpd can be carried out in a multi-step procedure in which the adjacent-channel-power (acp) assessment of the transmitter is verified. due to the dynamic nonlinear distortion and memory effects of the nonlinear pa, the dpd parameters are trained over several iterations. this, however, requires a robust testing platform to evaluate the dpd performance properly. this work only considers a simulation-based platform.
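the ila feedback loop can be reduced to its bare skeleton with a single complex pre-distorter coefficient and a purely linear toy pa (the gain value, step size, and iteration count are illustrative, not from the paper): the post-distorter is fitted on the observed pa output, and the pre-distorter coefficient is nudged toward it each iteration:

```python
import numpy as np

rng = np.random.default_rng(5)
g = 0.8 * np.exp(0.3j)          # "unknown" complex PA gain (toy stand-in)

def pa(s):
    return g * s                # linear toy PA, so the loop is easy to follow

x = rng.standard_normal(256) + 1j * rng.standard_normal(256)

c = 1.0 + 0j                    # pre-distorter coefficient, initial guess
mu = 0.5                        # adaptation step size
for _ in range(50):
    x_dpd = c * x               # pre-distort
    y = pa(x_dpd)               # amplify (observation path)
    # post-distorter: least-squares fit of c_post so that c_post*y ~ x_dpd
    c_post = np.vdot(y, x_dpd) / np.vdot(y, y)
    c = c + mu * (c_post - c)   # copy the post-distorter toward the pre-distorter
```

with a linear pa the post-distorter fit is exactly 1/g, so the loop converges geometrically to the inverse gain; with a memory-polynomial model the same loop runs over a coefficient vector instead of a scalar.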
dpd algorithms are intrinsically mathematical and need to run on a processor with high computational ability; therefore, a simulation-based environment, systemvue2018, has been used as the modeling and verification platform. the modeling procedure is:
step 1 - baseband source: generate a digital baseband data sequence with the required sampling rate, such as the fbmc-oqam signal x(n).
step 2 - crest factor reduction: reduce the peaks of the fbmc-oqam baseband signal in order to minimize clipping during amplification.
step 3 - signal pre-distortion: the original digital baseband signal is pre-distorted using an inverse transfer function model of the pa, compensating the nonlinear distortion effects of the pa.
step 4 - signal up-conversion: up-convert the digital baseband signal to the required analogue frequency, with appropriate signal power to drive the pa.
step 5 - signal amplification: a power amplifier amplifies the signal before transmission, to compensate for the attenuation the signal may encounter in free-space propagation.
step 6 - feedback signal attainment: the output of the nonlinear pa is down-converted to baseband, and the baseband analogue signal is captured with an adc.
step 7 - dpd parameter extraction and updating: the dpd parameters are calculated and the coefficients are updated in the digital pre-distorter.
step 8 - time alignment: the original input and output signals of the pa are captured and aligned in the time domain, so that the sample delay introduced by the feedback path can be characterized and compensated without system deadlock.
step 9 - performance assessment: a spectrum analyzer is used to assess the frequency-domain nonlinearity compensation performance (e.g. the aclr), or the signal gain is captured to assess the time-domain performance (e.g. the evm).
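the baseband part of the steps above can be sketched end to end in a few lines (everything here is a simplified stand-in: random 16qam symbols instead of the fbmc-oqam modulator, envelope limiting instead of sasw, a toy cubic pa instead of a behavioral model, and a normalized rms error instead of a standards-grade evm):

```python
import numpy as np

rng = np.random.default_rng(3)

# step 1 - baseband source: random 16QAM symbols stand in for the
# FBMC-OQAM modulator, which is outside the scope of this sketch
x = (rng.integers(0, 4, 2048) * 2 - 3 + 1j * (rng.integers(0, 4, 2048) * 2 - 3)) / 3.0

# step 2 - crest factor reduction: simple envelope limiting
thr = 1.2
x_cfr = x * np.minimum(1.0, thr / np.maximum(np.abs(x), 1e-12))

# steps 3 and 5 - toy memoryless PA with third-order compression and its
# first-order inverse as the pre-distorter (A is illustrative)
A = 0.05

def pa(s):
    return s - A * s * np.abs(s) ** 2

def predistort(s):
    return s + A * s * np.abs(s) ** 2

y_plain = pa(x_cfr)            # amplify without pre-distortion
y_dpd = pa(predistort(x_cfr))  # pre-distort, then amplify

# step 9 - assessment: normalized rms error against the wanted signal,
# an EVM-like figure of merit
def nrmse(y):
    return np.sqrt(np.mean(np.abs(y - x_cfr) ** 2) / np.mean(np.abs(x_cfr) ** 2))
```

the pre-distorted chain yields a visibly lower error figure, mirroring the evm improvement the paper measures on the full platform.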
these procedures are modeled in the systemvue2018-based platform as illustrated in figure 5. fig. 5. a simulation-based dpd procedure iv. behavior modeling of the digital pre-distortion system a. memory polynomial model the mp model is a special case of the volterra model; due to its elegance and simple structure for incorporating memory effects into the static nonlinear polynomial model, it has been used extensively in a variety of applications, especially in the linearization of power amplifiers with memory [10]. nonlinearities with memory effects can generally be described as the sum of the outputs of polynomial functions and can be written as [11]:

x_DPD(n) = \sum_{p=1}^{P} \sum_{q=0}^{Q} c_{p,q} \, x(n-q) \, |x(n-q)|^{p-1}    (5)

where P and Q are the nonlinearity order and memory depth of the mp-based model respectively, and c_{p,q} are the coefficients of the model. the output y(n) of the nonlinear pa is a function of the dpd signal x_DPD(n) and is given by:

y(n) = \sum_{p=1}^{P} \sum_{q=0}^{Q} c_{p,q} \, x_DPD(n-q) \, |x_DPD(n-q)|^{p-1}    (6)

in the indirect learning architecture dpd (post-dpd, figure 4), the nonlinear function of the pa is inverted in order to identify the nonlinearity behavior of the pa, essentially acting as a post-distorter on the pa output signal. since the nonlinearity behavior of the pa is modeled by the dpd using the mp, the pre-distortion of the nonlinear pa is functionally the same as the post-distortion. the input to the post-dpd block is the direct inversion of the nonlinear pa transfer function and is given as:

z(n) = \sum_{p=1}^{P} \sum_{q=0}^{Q} c_{p,q} \, y(n-q) \, |y(n-q)|^{p-1}    (7)

where the parameters P, Q, and c_{p,q} are the same as in the pre-distortion model. b.
coefficients estimation and adaptation algorithm fbmc-oqam is a demanding modulation scheme for the next generations of the wireless communication industry: a relatively large number of samples has to be used to identify the small number of coefficients of the dpd algorithm. therefore, the lse with qr decomposition (lse/qr) algorithm is used to estimate the coefficients of the model, because of the over-determined problem caused by the large number of samples. lse is commonly used because of its easy implementation and satisfactory performance [17]. 1) coefficients estimation the coefficients c_{p,q} in (7) can be solved for using the lse by defining a new signal variable as in [17]. the output of the pre-distorter training block (post-dpd) of figure 4, with the gain of the amplifier set to unity, can be solved using the lse solution given in [17], where the coefficients of the pre-distorter training block (post-dpd) become:

\hat{c} = (X^H X)^{-1} X^H \hat{y}    (8)

the matrix X is ill-conditioned even if the signal is scaled and normalized over the range of values of the polynomial orders. using the qr decomposition method, the matrix X can be factorized as:

X = QR    (9)

where Q is an n × m matrix with orthonormal columns and R is an invertible m × m upper triangular matrix. the orthonormal matrix Q preserves the norm (distance) under any transformation. the qr decomposition rotates the matrix X to the point where the set of linear equations can be solved by back-substitution in the matrix R, since Q is orthonormal, i.e.:

Q^H Q = I    (10)

then:

Q^H \hat{y} = Q^H Q R \hat{c} = R \hat{c}    (11)

since R is invertible, the estimate of the coefficients becomes:

\hat{c} = R^{-1} Q^H \hat{y}    (12)

where [.]^H stands for the complex conjugate transpose. equation (12) is relatively straightforward to solve: since R is triangular, the coefficients can be found by back-substitution applied to the vector Q^H \hat{y}.
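the estimation step of (8)-(12) can be sketched with numpy: build the memory-polynomial regression matrix whose columns are the basis terms x(n-q)|x(n-q)|^(p-1) of (5), synthesize an output from known coefficients, and recover them via a qr factorization (here a general triangular solve stands in for explicit back-substitution; the coefficient values are arbitrary illustrative numbers):

```python
import numpy as np

def mp_basis(x, P, Q):
    """Regression matrix whose columns are x(n-q)|x(n-q)|^(p-1),
    for p = 1..P and q = 0..Q, as in the memory polynomial model (5)."""
    x = np.asarray(x, dtype=complex)
    cols = []
    for p in range(1, P + 1):
        for q in range(Q + 1):
            xd = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
            cols.append(xd * np.abs(xd) ** (p - 1))
    return np.column_stack(cols)

def lse_qr(X, y):
    """Least-squares estimate via QR, cf. (9)-(12): X = QR,
    c = R^{-1} Q^H y (R is triangular, so this is a back-substitution)."""
    Qm, Rm = np.linalg.qr(X)          # reduced QR of the tall matrix
    return np.linalg.solve(Rm, Qm.conj().T @ y)

# sanity check: recover known coefficients from a noiseless synthesis
rng = np.random.default_rng(4)
x = rng.standard_normal(500) + 1j * rng.standard_normal(500)
P, Q = 3, 1
c_true = np.array([1.0, 0.1, -0.05 + 0.02j, 0.01, 0.003j, -0.001])
X = mp_basis(x, P, Q)
y = X @ c_true
c_hat = lse_qr(X, y)
```

working through Q and R avoids forming X^H X, whose squared condition number is what makes the normal-equation route of (8) numerically fragile.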
this procedure avoids the inversion of X^H X in the normal equation (8) and the associated numerical instabilities. this algorithm is built into the systemvue2018 software and is generally robust in dealing with ill-conditioned and rank-deficient matrices. the coefficients of the pre-distorter obtained off-line through (12) are copied to the pre-distorter on the feed-forward path as the initial coefficients. because of the dynamic nature of the pa nonlinearity stated above, the coefficients can then be updated adaptively as demonstrated in figure 4. 2) coefficients adaptation the pre-distortion coefficient adaptation is quite straightforward for the indirect learning architecture. considering the nonlinear memory polynomial model of the dpd as in (5), the pre-distorted input signal to the pa, x_DPD(n), is compared to the post-distorted output signal z(n) from the pa. the absolute error signal e at any sample instant n is then obtained by the equation given in [10]. then, the estimate of the coefficient error \Delta c can be minimized by taking the difference between the error term and the post-distortion update in the least-squares manner given in [10]. the coefficient update relation can be expressed starting from the lse expression, by describing the coefficient estimation from the output of the post-distorted signal z(n) and the pre-distorted signal x_DPD(n) as:

x_{DPD,k} = Z_k c    (13)

in block matrix form:

\hat{c}_k = (Z_k^H Z_k)^{-1} Z_k^H x_{DPD,k}    (14)

then the error term for the estimation block uses the previously calculated coefficients and is written as:

e = Z_{k+1} \hat{c}_k - x_{DPD,k+1}    (15)

expanding (15) and simplifying, we get:

\hat{c}_k - \hat{c}_{k+1} = -\Delta c    (16)

therefore, the error expression for the pre-distorter coefficients becomes:

e = -Z_{k+1} \Delta c    (17)

this coefficient error is used to update the pre-distorter coefficients through:

\hat{c}_{k+1} = \hat{c}_k - \mu \Delta c    (18)

where the parameter \mu is used to speed up or stabilize the convergence of the pre-distorted signal, and it depends on the number of coefficients used. v. simulation results and performance analysis this section presents the simulation results for the performance of the fbmc-oqam signal in terms of the error vector magnitude (evm) and the adjacent channel leakage ratio (aclr) when subjected to the nonlinear power amplifier. these performances are evaluated when the nonlinearities are compensated with the concurrent application of cfr and dpd. the 16qam mapping method has been used for the fbmc-oqam system because it is less susceptible to noise and data errors. an fbmc-oqam signal with the phydyas prototype filter with overlapping factor k=4 is employed. the simulation parameters for the fbmc-oqam signal are based on 3gpp release 13 to 15 and beyond [26], and the nonlinear power amplifier parameter settings are shown in tables i and ii respectively.

table i. fbmc-oqam simulation parameters
carrier frequency: 6ghz
sampling rate with oversampling: 320mhz
sampling rate without oversampling: 20mhz
oversampling ratio: 4
modulation type: 16qam
number of subcarriers: 512
ifft length: 2048
filter overlapping factor: 4
filter bank structure: polyphase network ifft

table ii.
nonlinear power amplifier parameter settings
amplifier gain: 1db
output 1db gain compression power: 0.01w
output third-order intercept power: 0.1w
saturation power: 0.032w
gain compression at saturation: 3db
reference impedance: 50ω

in the crest factor reduction (cfr), peak windowing based on sequential asymmetric superposition windowing (sasw) is used, with a maximum kaiser window length of 500 to enable a smooth transition at the window edges. a kaiser window adjustment parameter \beta equal to 15, which determines the roll-off of the window edge, and a maximum iteration count of 20 were considered due to the steady convergence level. a block size of 1000 for each cfr operation and a target papr value of 6db were used. the cfr performance results are shown in figure 6, where the peak value of the original fbmc-oqam signal has been reduced by about 1.5db. for the dpd technique, the memory polynomial model was used with nonlinearity order p=7 and memory depth q=3, considering only odd-order polynomials because of the odd-order intermodulation product characteristics of the nonlinear pa transfer function. figure 7(a) shows the power spectral density (psd) plot of the original fbmc-oqam signal; the amplified version of the fbmc-oqam signal without dpd is labeled (b), and (c) is the dpd response. although the dpd suppressed a significant amount of the spectral re-growth, less gain is obtained after the nonlinearity compensation with the dpd. this is because of the high peak characteristics of the fbmc-oqam signal and the nonlinear behavior of the pa, which exhibited high spectral re-growth after the amplification of the fbmc-oqam signal. fig. 6. ccdf plot of the performance of the cfr technique fig. 7. psd of (a) original fbmc-oqam signal, (b) pa output without dpd, (c) pa output with dpd fig. 8.
psd of (a) reduced-crest-factor fbmc-oqam signal, (b) pa output, (c) pa output with combined cfr and dpd it can be observed in figure 8 that using cfr simultaneously with dpd significantly reduces the spectral re-growth, achieving about -45.74db of adjacent channel leakage suppression. the error vector magnitude (evm) is found to be about 7.12% (-22.95db), which clearly meets the 3gpp standard evm target for wideband signals of 8% (-22db). vi. conclusion the suppression of spectral re-growth of the fbmc-oqam signal under the nonlinear behavior of a power amplifier has been studied in this paper. for the cfr of the fbmc-oqam signal, a peak windowing technique with the sequential asymmetric superposition windowing (sasw) algorithm has been proposed, which consistently reduced the large peak values of the fbmc-oqam signal by about 1.5db. this leads to an increase in the average input power of the fbmc-oqam signal. for nonlinearity compensation, memory polynomial-based dpd with lse along with qr decomposition (lse/qr), utilizing the indirect learning architecture (ila-dpd), has been employed. the cfr and dpd techniques have been jointly applied to mitigate both the high peaks and the nonlinearities experienced by the fbmc-oqam signal. the application of both cfr and dpd significantly reduced the spectral re-growth resulting from the nonlinear behavior of the pa, achieving about -45.74db of adjacent channel leakage ratio (aclr) suppression. the error vector magnitude (evm) was found to be about 7.12% (-22.95db), which is below the 8% (-22db) evm limit standardized for wideband signals by 3gpp. in this paper, much emphasis has been given to the fbmc-oqam waveform.
nevertheless, it would be of equal interest to study other waveforms and compare them with fbmc-oqam under the same proposed techniques, in the 5g context. references [1] w. jiang, t. kaiser, “from ofdm to fbmc: principles and comparisons”, in: signal processing for 5g: algorithms and implementations, john wiley & sons, 2016 [2] s. patil, s. patil, u. kolekar, “implementation of 5g using ofdm and fbmc (filter bank multicarrier)/oqam (offset quadrature amplitude modulation)”, international journal of innovative science, engineering & technology, vol. 5, no. 1, pp. 11–15, 2018 [3] h. zhang, d. l. ruyet, d. roviras, y. medjahdi, h. sun, “spectral efficiency comparison of ofdm/fbmc for uplink cognitive radio networks”, eurasip journal on advances in signal processing, vol. 2010, article id 621808, 2010 [4] r. gerzaguet, n. bartzoudis, l. g. baltar, v. berg, j. b. dore, d. ktenas, o. f. bach, x. mestre, m. payaro, m. farber, k. roth, “the 5g candidate waveform race: a comparison of complexity and performance”, eurasip journal on advances on wireless communications and networking, vol. 1, no. 1, pp. 1–14, 2017 [5] m. renfors, x. mestre, e. kofidis, f. bader, orthogonal waveforms and filter banks for future communication systems, first edition, academic press, 2017 [6] r. nissel, s. schwarz, m. rupp, “filter bank multicarrier modulation schemes for future mobile communications”, ieee journal on selected areas in communications, vol. 35, no. 8, pp. 1768–1782, 2017 [7] m. azhar, a. shabbir, “5g networks: challenges and techniques for energy efficiency”, engineering, technology & applied science research, vol. 8, no. 2, pp. 2864–2868, 2018 [8] a. shabbir, h. r. khan, s. a. ali, s. rizvi, “design and performance analysis of multi-tier heterogeneous network through coverage, throughput and energy efficiency”, engineering, technology & applied science research, vol. 7, no. 6, pp. 2345–2350, 2017 [9] y. rahmatallah, s.
mohan, “peak-to-average power ratio reduction in ofdm systems: a survey and taxonomy”, ieee communications surveys and tutorials, vol. 15, no. 4, pp. 1567–1592, 2013 [10] j. wood, behavioral modeling and linearization of rf power amplifiers, first edition, artech house, 2014 [11] z. he, w. ye, s. feng, “digital predistortion of power amplifiers based on compound memory”, ieice electronic express, vol. 10, no. 21, pp. 1–5, 2013 [12] m. v. d. nair, r. giofre, p. colantonio, f. giannini, “sequential asymmetric superposition windowing for crest factor reduction and its effects on doherty power amplifier”, integrated nonlinear microwave and millimetre-wave circuits workshop, taormina, italy, october 1-2, 2015 [13] m. v. d. nair, r. giofre, p. colantonio, f. giannini, “effects of digital predistortion and crest factor reduction techniques on efficiency and linearity trade-off in class ab gan-pa”, 10th european microwave integrated circuits conference, paris, france, september 7-8, 2015 [14] f. m. ghannouchi, o. hammi, m. helaoui, behavioral modeling and predistortion of wideband wireless transmitters, first edition, john wiley & sons, 2015 [15] r. zayani, y. medjahdi, h. bouhadda, h. shaiek, d. roviras, r. bouallegue, “adaptive predistortion techniques for non-linearly amplified fbmc-oqam signals”, ieee 79th vehicular technology conference, seoul, south korea, may 18-19, 2014 [16] m. sajedin, a. ghorbani, h. r. a. dava, “nonlinearity compensation for high power amplifiers based on look-up table method for ofdm transmitters”, international journal of advanced computer science and information technology, vol. 3, no. 4, pp. 354–367, 2014 [17] w. gao, linearization techniques for rf power amplifiers, springer, 2017 [18] h. d. rodrigues, t. c. pimenta, r. a. a. d. souza, l. l. mendes, “orthogonal scalar feedback digital pre-distortion linearization”, ieee transactions on broadcasting, vol. 64, no. 2, pp. 319–330, 2018 [19] v. vasconcellos, g. c. ornelas, a. n.
barreto, “performance of 5g candidate waveforms with non-linear power amplifiers”, ieee 9th latin-american conference on communications, guatemala city, guatemala, november 8-10, 2017 [20] s. s. k. c. bulusu, h. shaiek, d. roviras, “pa linearization of fbmcoqam signals with overlapped recursive error correcting predistortion”, international symposium on wireless communication systems, poznan, poland, september 20-23, 2016 [21] t. jiang, d. chen, c. ni, d. qu, oqam/fbmc for future wireless communications principles technologies and applications, first edition, academic press, 2018 [22] m. bellanger, “fbmc physical layer: a primer”, european project, vol. 1, no. 1, pp. 1–31, 2010 [23] a. sahin, i. guvenc, h. arslan, “a survey on multicarrier communications: prototype filters, lattice structures, and implementation aspects”, ieee communications surveys and tutorials, vol. 16, no. 3, pp. 1312–1338, 2014 [24] h. wang, “a hybrid papr reduction method based on slm and multi-data block pts for fbmc/oqam systems”, information, vol. 9, article id 246, 2018 [25] l. guan, a. zhu, “green communications : digital predistortion for wideband rf power amplifier”, ieee microwave magazine, vol. 15, no. 7, pp. 84-99, 2014 [26] 5g americas, wireless technology evolution towards 5g: 3gpp release 13 to release 15 and beyond, 5g americas, 2017 microsoft word khelil-ce_r3.doc etasr engineering, technology & applied science research vol. 3, no. 4, 2013, 488-496 488 www.etasr.com khelil et al.: modeling of fatigue crack propagation in aluminum alloys using an energy… modeling of fatigue crack propagation in aluminum alloys using an energy based approach f. khelil laboratoire de mécanique de lille (lml), university of lille1, france foudil.khelil@univ-lille1.fr b. aour laboratory of environmental technology research, enset d’oran, algeria ben_aour@yahoo.fr m. belhouari department of mechanical engineering, university of sidi bel abbes, algeria m_belhouari@yahoo.com n. 
Benseddiq, LML, University of Lille 1, France, noureddine.benseddiq@univ-lille1.fr

Abstract—Materials fatigue is a particularly serious and unsafe kind of material failure. Investigation of the fatigue crack growth rate and the fatigue life constitutes a very important and complex problem in mechanics. Understanding the cracking mechanisms, taking into account various factors such as the load pattern, the strain rate and the stress ratio, is of primary importance. In this work an energy approach to fatigue crack growth (FCG) is proposed. This approach is based on the numerical determination of the plastic zone by introducing a novel form of the plastic radius. Experimental results obtained on two aluminum alloys, 2024-T351 and 7075-T7351, were exploited to validate the developed numerical model. A good agreement has been found between the two types of results.

Keywords—fatigue crack growth; energetic approach; plastic zone; aluminum alloys

I. Introduction

In recent years, the concepts of fracture mechanics have allowed a better definition of the stress and strain fields in the vicinity of crack tips under static and dynamic loadings. Cracking laws, empirical or formal, were developed in order to describe fatigue crack growth with an acceptable approximation. Indeed, crack growth is related to the existence of a plastic zone (PZ) at the crack tip, whose formation and intensification is accompanied by energy dissipation. Thus, the amount of cyclic plastic strain energy may represent with precision the rate of damage at the crack tip. The use of a cyclic plastic dissipation criterion for fatigue crack growth was first proposed by Rice [1]. Since then, plastic energy approaches to fatigue crack extension prediction have been the subject of several experimental, analytical and numerical investigations [2-10].
Weertman [11] proposed that the crack advances when the accumulated plastic energy at the crack tip reaches a critical value. Then, Shozo et al. [2] measured the cyclic work needed to produce a unit area of fatigue crack for a low-carbon steel and for high-strength aluminum alloys, using micro strain gages placed in the plastic zone associated with a fatigue crack. Subsequently, different techniques have been developed to evaluate the plastic energy, such as sub-grain size measurements [3], infrared thermography [4], micro-calorimetry [6] and direct measurement of the hysteresis energy under the loading line of a compact tension (CT) specimen [12, 20]. Following the work of Bodner et al. [5], Klingbeil [7] proposed a crack growth law in which the fatigue crack growth rate is related to the total plastic energy dissipated ahead of a crack tip under cyclic loading. This model has been further extended to mixed-mode fatigue delamination of layered materials across the interface [10, 14]. Recently, Mazari et al. [15] proposed an empirical correction factor that accounts for the overestimates obtained from hysteresis loops and separates the effects of plasticity, crack closure and opening mode. In this paper, a new approach for the evaluation of the cyclic plastic strain energy at the crack tip in mode I is proposed. This approach is based on the numerical determination of the plastic zone by introducing a novel form of the plastic radius. The theoretical basis related to the creation of surface energy and the evolution of the energy parameters is discussed in Section II. Section III presents the experimental data exploited for the validation. Section IV describes the numerical algorithm used for the evaluation of the cyclic plastic strain energy. The obtained results are presented and discussed in the last section.

II. Theoretical Background

A.
Energetic description of fatigue crack growth

The description of the kinetics of fatigue failure is very important for estimating the fatigue lifetime of a component. The knowledge of the crack propagation direction and the crack growth rate makes it possible to predict the lifetime by means of kinetic fatigue failure diagrams (KFFD). For an estimation of the cyclic plastic strain energy ΔWp, the area of the hysteresis loop (Figure 1), which characterizes the energy dissipated in one loading cycle, can be used. To this end, a power-law relationship between stress and strain has been proposed by Morrow [13] as follows:

ΔW = ((1 − n″)/(1 + n″)) Δσ Δεp   (1)

where n″ is the exponent linking the stress amplitude Δσ and the plastic strain amplitude Δεp. On the other hand, a specific energy U is defined as:

U = 2Wp / (B · da/dN)   (2)

where B is the specimen thickness and da/dN is the fatigue crack growth rate.

Fig. 1. Schematic drawing showing the hysteresis loop.

The evolution of U as a function of da/dN can be subdivided according to the three stages of the KFFD. It should be noted that this diagram can easily be obtained experimentally by measuring crack propagation as a function of the stress intensity factor (see Figure 2). The obtained curve is characterized by three stages, commonly referred to as stages I, II and III respectively [16, 17].

Fig. 2. A typical fatigue crack growth rate curve (kinetic fatigue failure diagram: KFFD).

The relation between log(da/dN) and log ΔK is linear [11, 18, 19], whereas the relation between da/dN and U can be written as:

da/dN = A ΔK⁴ / (G² U)   (3)

B.
The cyclic plastic strain energy – proposed model

Assuming that the energy is primarily dissipated in the plastic zone, a comparison can be made between the measured values and those predicted theoretically by assuming propagation in mode I. Rice [1] and Tracey [21] gave an expression for the equivalent shear strain γ̄ near the crack tip, defined in terms of the amplitude function R_n(θ) as follows:

γ̄/γ̄₀ = (R_n(θ)/r)^(1/(1+n′))   (4)

where r and θ are the polar coordinates at the crack tip, G is the shear modulus, γ̄₀ = τ̄₀/G with τ̄₀ = σ̄₀/√3 the yield stress under pure shear, and R_n(θ) can be considered the dominant singular-term approximation to the elastic-plastic boundary, which depends on the hardening exponent n′ and is given as a function of θ in the normalized form [21]:

R_n(θ) = f_n(θ) (K/σ̄₀)²   (5)

where f_n(θ) is a dimensionless function which defines the profile of iso-deformation as a function of the polar coordinates at the crack tip. According to Rice [1], the singularity of the equivalent cyclic strain can be described by applying the tensile form of (4)-(5), simply replacing K by ΔK and σ̄₀ by Δσ̄₀ related to the cyclic stress-strain law:

Δγ̄_p = √3 Δε̄_p = Δτ̄/G   (6)

where Δε̄_p and Δτ̄ are the plastic strain and shear stress amplitudes. Consequently, the cyclic stress-strain curve gives the hardening law for the material at the crack tip [22]:

Δε̄/Δε̄₀ = (Δσ̄/Δσ̄₀)^(1/n′)   (7)

where Δε̄ and Δσ̄ are the equivalent strain and stress amplitudes, and Δσ̄₀ and Δε̄₀ = Δσ̄₀/(√3 G) represent the cyclic yield strength and strain of the material. Using the expressions Δγ̄ = √3 Δε̄ and Δτ̄₀ = Δσ̄₀/√3, equation (4) can be rewritten as:

Δε̄(r) = (Δσ̄₀/(√3 G)) (R_n(θ)/r)^(1/(1+n′))   (8)

Therefore, within the hypothesis given by Rice [1], the equivalent strain amplitude near the crack tip is given by:

Δε̄ = (Δσ̄₀/(√3 G)) [f_n(θ) (ΔK/Δσ̄₀)² / r]^(1/(1+n′))   (9)
where ΔK is the stress intensity range, which can be given as a function of the maximum and minimum stresses and the crack length a as ΔK = (σ_max − σ_min)√(πa). In order to evaluate the amplitude of the equivalent average strain Δε̄_m, Chalant [23] considered an element with a rectangular or circular form located at the crack tip, so that:

Δε̄_m = (1/S) ∫_S Δε̄ dS   (10)

where S is the surface of the element at the crack tip. In the case of a rectangular element with dimensions d₁ and d₂ (see Figure 3a), we get [16, 23]:

Δε̄_m = (Δσ̄₀/(√3 G)) ((1+n′)/n′) (ΔK/Δσ̄₀)^(2/(1+n′)) (I₁ d₁^(−1/(1+n′)) + I₂ d₂^(−1/(1+n′))) / (d₁ d₂)   (11)

The terms I₁ and I₂ are given by:

I₁ = ∫₀^θ₀ [f_n(θ)]^(1/(1+n′)) (cos θ)^(n′/(1+n′)) dθ
I₂ = ∫_θ₀^(π/2) [f_n(θ)]^(1/(1+n′)) (sin θ)^(n′/(1+n′)) dθ   (12)

with

θ₀ = tan⁻¹(d₂/(2d₁))   (13)

Fig. 3. Elements at the crack tip: (a) rectangular, (b) circular.

In the case of a circular crack-tip element with a radius r₁ (Figure 3b), the expression for the amplitude of the equivalent average strain is given by [12]:

Δε̄_m = (Δσ̄₀/(√3 G)) (2(1+n′)/(1+2n′)) (ΔK/Δσ̄₀)^(2/(1+n′)) r₁^(−1/(1+n′)) I   (14)

with

I = ∫₀^(π/2) [f_n(θ)]^(1/(1+n′)) dθ   (15)

The plastic energy throughout the plastic zone is obtained by integrating the plastic energy per surface element given by (1), i.e.:

ΔW(pz) = 4 ((1−n″)/(1+n″)) ∫_Sp Δσ̄ Δε̄_p dS   (16)

where S_p is the surface of a quarter of the plastic zone. Hence:

ΔW(pz) = 4 ((1−n″)/(1+n″)) ∫₀^(π/2) ∫₀^rp Δσ̄ Δε̄_p r dr dθ   (17)

where r_p indicates the limit of the cyclic plastic zone.
If we assume that r_p is defined as the distance at which the total equivalent strain is equal to Δε̄₀, we have, from (9):

r_p = f_n(θ) (ΔK/Δσ̄₀)²   (18)

The integral (17) can be evaluated by substituting the expressions (5)-(8) for Δσ̄ and Δε̄_p, which gives after simplification [12]:

ΔW(pz) = 2(1+n′) ((1−n″)/(1+n″)) Δσ̄₀ Δε̄₀ (ΔK/Δσ̄₀)⁴ ∫₀^(π/2) f_n²(θ) dθ   (19)

This expression can be rewritten as:

ΔW(pz) = (1+n′) ((1−n″)/(1+n″)) Δσ̄₀ Δε̄₀ S_pz   (20)

with

S_pz = 2 (ΔK/Δσ̄₀)⁴ ∫₀^(π/2) f_n²(θ) dθ   (21)

simply indicating the surface of the plastic zone. On the other hand, Engerand [24] proposed the Tresca or von Mises criteria to compute the limit of the plastic zone. It is interesting to note that (19) enables us to express the energy dissipated throughout the plastic zone per unit thickness as a function of ΔK⁴, which conforms to the theoretical models given by Klingbeil [7], Mazari et al. [15] and Ranganathan et al. [8]. Hence, in order to obtain a similar variation as a function of ΔK⁴, we propose to compute the plastic zone surface using the following expressions for the plastic radii:

r_p(PD) = (ΔK⁴/(2π b Δσ̄₀⁴)) cos²(θ/2) [(1−2ν)² + 3 sin²(θ/2)]   (22)

for plane strain, and:

r_p(PS) = (ΔK⁴/(2π b Δσ̄₀⁴)) cos²(θ/2) [1 + 3 sin²(θ/2)]   (23)

for plane stress. By using (22) and (23), the new model proposed for the calculation of the plastic energy per unit thickness can be written as follows:

ΔW(pz)_m = α_m Δσ̄₀ Δε̄₀ S_m   (24)

where α_m is a constant which depends on the material and the criterion used, and S_m indicates the surface of the plastic zone determined by expressions (22) or (23).
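The surface S_m of a zone bounded by a polar radius can be computed numerically as A = ½∮r²(θ)dθ. As an illustration, the sketch below applies this to the classical first-order (ΔK²) plane-stress plastic radius, which shares the angular profile of (23); the midpoint quadrature and the numeric values are assumptions of this sketch, not the paper's program:

```python
import math

def r_p_plane_stress(theta, dk, sigma0):
    # classical first-order plastic radius (plane stress), with the same
    # angular profile cos^2(t/2)*(1 + 3 sin^2(t/2)) as in (23)
    return ((dk / sigma0) ** 2 / (2.0 * math.pi)
            * math.cos(theta / 2.0) ** 2
            * (1.0 + 3.0 * math.sin(theta / 2.0) ** 2))

def zone_area(dk, sigma0, n=2000):
    # area enclosed by the polar boundary r_p(theta):
    # A = 1/2 * integral of r_p^2 over (-pi, pi), midpoint rule
    h = 2.0 * math.pi / n
    return 0.5 * h * sum(
        r_p_plane_stress(-math.pi + (i + 0.5) * h, dk, sigma0) ** 2
        for i in range(n))
```

Since r_p ∝ ΔK² here, the enclosed area scales as ΔK⁴, which is exactly the ΔK⁴ dependence of S_pz in (21).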
Hence, the total dissipated energy Q in the specimen is given by:

Q = ΔW(pz) · B   (25)

III. Experimental Details

A. Material and specimen configurations

The tests were conducted on two aluminum alloys, the 2024 alloy in the T351 condition and the 7075 alloy in the T7351 condition. The nominal composition and mechanical properties of these alloys are given in Tables I and II respectively. In Table II, the parameters K and n are computed from the relationship of Ludwik [25]:

σ = K (ε_p)ⁿ   (26)

Table I. Nominal composition (in %) of the studied alloys

alloy | Si   | Fe   | Cu   | Mn   | Mg   | Cr   | Zn   | Ti   | Al
2024  | 0.10 | 0.22 | 4.46 | 0.66 | 1.50 | 0.01 | 0.04 | 0.02 | remainder
7075  | 0.07 | 0.16 | 1.52 | 0.04 | 2.55 | 0.20 | 6.00 | 0.04 | remainder

Table II. Nominal mechanical properties of the studied alloys

material                                                | 2024-T351 | 7075-T7351
yield stress at 0.2% plastic strain σ0.2 (MPa)          | 318       | 470
stress at fracture σr (MPa)                             | 524       | 539
elongation A%                                           | 12.8      | 11.7
strength coefficient K (MPa)                            | 652       | 960.5
hardening coefficient n                                 | 0.104     | 0.051

The tests were carried out on compact tension (CT) specimens with thicknesses of 10 mm for 2024 and 6 mm for 7075 (Figure 4). The cracking direction was taken along the rolling direction. All mechanical tests were conducted on an Instron servohydraulic machine at a typical test frequency of 20 Hz, at room temperature, with a stress ratio of R = 0.5:

R = P_min/P_max   (27)

where P_min and P_max are the minimum and maximum loads in the cycle.

Fig. 4. Cyclic test configuration on a compact tension specimen.

The crack propagation rate was measured by optical techniques on the polished side of the specimen using a travelling microscope with a precision of 0.01 mm. The total dissipated energy was evaluated from the area enclosed by the recorded hysteresis cycles. The stress intensity range for this geometry is given by Newman [26]:
ΔK = (ΔP/(B√W)) f(z), with z = a/W   (28)

where W and B are respectively the width and the thickness of the specimen, a is the crack length, and ΔP = P_max − P_min is the amplitude of the applied load. In order to obtain more precision, two compliance functions f(z) were used:

for 0.2 ≤ z ≤ 0.3 [26]:

f(z) = 4.55 − 40.32z + 414.7z² − 1698z³ + 3781z⁴ − 4287z⁵ + 2017z⁶   (29)

for 0.3 ≤ z ≤ 0.7 [27]:

f(z) = 29.6z^0.5 − 185.5z^1.5 + 655.7z^2.5 − 1017z^3.5 + 638.9z^4.5   (30)

B. Identification of the cyclic plastic strain energy parameters

The results obtained after the identification of the cyclic plastic strain energy parameters are summarized in Table III.

Table III. Cyclic plastic strain energy parameters

material   | Δσ̄₀ (MPa) | Δε̄₀     | α_m     | n′    | n″     | integral | ΔW/ΔK⁴
2024-T351  | 914       | 0.0111  | 6.67e-4 | 0.148 | 0.078  | 0.0138   | 2.92e-13
7075-T7351 | 705       | 0.00849 | 1.26e-5 | 0.166 | 0.0765 | 0.0140   | 4.85e-13

For the particular materials under study, we finally obtain:

for 2024-T351: ΔW = 2.92×10⁻¹³ ΔK⁴ (J/m)   (31)

for 7075-T7351: ΔW = 4.85×10⁻¹³ ΔK⁴ (J/m)   (32)

C. Identification of the Paris law

Fatigue crack growth in the CT specimens was modeled according to the Paris law, where the FCG rate da/dN is described in terms of the stress intensity range ΔK according to the following relationship [28]:

da/dN = C (ΔK)^m   (33)

where C and m are the fatigue crack growth coefficient and exponent, respectively.

Fig. 5.
Fitting of experimental data with a Paris law.

Figure 5 shows typical results of the experiments used in constructing plots of log(da/dN) in terms of log(ΔK) for each aluminum alloy. A power-law model was fitted to the steady-state region (stage II) of fatigue crack growth, and the Paris law coefficient C and exponent m were determined for each specimen that underwent stable fatigue crack growth. A logarithmic scale was used so that the curve appears as a straight line; a linear regression then returns the material parameters C and m. By identifying the Paris law parameters from the experimental data of the two alloys, we found C = 6.0×10⁻⁹ and m = 4.5849 (slope of 4.58) for 2024-T351, and C = 3.0×10⁻⁸ and m = 3.1987 (slope of 3.20) for 7075-T7351. It is worth noting that the 2024 aluminum alloy remains an important aircraft structural material due to its very good damage tolerance and high resistance to fatigue crack propagation [29], while 7075-T7351 offers good stress-corrosion cracking resistance [30].

IV. Implementation of the Proposed Approach

A computer program was written in the MATLAB language to calculate the stress intensity factor, the size of the plastic zone, the crack growth rate, the number of cycles and the cyclic plastic strain energy. Conditions of plane strain and the von Mises criterion were considered. First, we specify the CT specimen dimensions (B: thickness, W: width, a₀: initial crack length, a_end: crack length for forced termination), the quantities relevant to the material properties (E: Young's modulus, ν: Poisson's ratio, σ_y: yield stress, C and m: material constants of the Paris law) and the amplitude of the applied load. Then, the program computes the stress intensity factor range ΔK, the plastic radius r_p, the fatigue crack growth rate da/dN and the energetic parameters, as shown by the flow chart in Figure 6.

Fig. 6. Flow chart of the numerical process.
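The numerical process of Figure 6 can be sketched as follows (plastic-zone and energy bookkeeping omitted): the compliance polynomials follow (29)-(30), ΔK follows (28), and the rate follows (33). The sign pattern of the first polynomial and all numeric inputs in the example are assumptions of this sketch, in SI units (N, m, Pa·√m):

```python
import math

def f_compliance(z):
    """Compliance function f(z), z = a/W, per (29)-(30); the sign
    pattern of the 0.2-0.3 branch is an assumption of this sketch."""
    if 0.2 <= z < 0.3:
        return (4.55 - 40.32 * z + 414.7 * z ** 2 - 1698 * z ** 3
                + 3781 * z ** 4 - 4287 * z ** 5 + 2017 * z ** 6)
    if 0.3 <= z <= 0.7:
        return (29.6 * z ** 0.5 - 185.5 * z ** 1.5 + 655.7 * z ** 2.5
                - 1017 * z ** 3.5 + 638.9 * z ** 4.5)
    raise ValueError("z = a/W outside the tabulated range")

def delta_k(dp, b, w, a):
    """Stress intensity range for the CT specimen, (28)."""
    return dp / (b * math.sqrt(w)) * f_compliance(a / w)

def grow_crack(a0, a_end, dp, b, w, c, m, da=1e-5):
    """Integrate the Paris law (33), da/dN = C*dK^m, over fixed crack
    increments; returns total cycles and an (a, dK, da/dN) history.
    C and m must be consistent with the units chosen for dK."""
    a, cycles, history = a0, 0.0, []
    while a < a_end:
        dk = delta_k(dp, b, w, a)
        rate = c * dk ** m          # crack growth per cycle
        cycles += da / rate         # cycles spent on this increment
        history.append((a, dk, rate))
        a += da
    return cycles, history
```

To use the identified 2024-T351 constants (C = 6.0×10⁻⁹, m = 4.5849, with ΔK in MPa·√m and da/dN in mm/cycle), the inputs must first be converted to those units.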
V. Results and Discussion

It is worth noting that the evolution of the cyclic plastic strain energy is directly related to the change of the plastic zone size at the crack tip, as described in Section II.B. In what follows, we first deal with the evolution of the plastic zone, and then analyze in detail the evolution of the cyclic plastic strain energy and the crack propagation rate.

A. Evolution of the plastic zone

The purpose of this section is to study in detail the evolution of the plastic zone at the crack tip during crack growth in the two aluminum alloys. It should be noted that, at the crack tip of a ductile material, the strain fields are significant, leading to a considerable extent of the plastic zone. In this case, the mechanical energy at the crack tip is absorbed by the material in the form of linear defects (dislocations) [20, 31]. On the other hand, the size of the plastic zone depends not only on the nature of the material, but also on the intensity of the mechanical energy at the crack tip, the geometry and the size of the crack. Note that hardening (maximum consolidation of the material), characterized by its rate, can significantly slow the extension of the plastic zone. Figure 7 shows the evolution of the plastic zone for both aluminum alloys (2024 and 7075) using the Tracey model (Figure 7a) and the proposed model (Figure 7b). It is found that the size of the plastic zone calculated by the proposed model is about 16 times (for 2024) and 18 times (for 7075) greater than that calculated by the Tracey model. Furthermore, we observe that the 2024 alloy, whose ductility is higher than that of 7075, presents the largest plastic zone. We obtained for the Tracey model: S(pz) of 2024 = 2.37 × S(pz) of 7075.
For the proposed model: S(pz) of 2024 = 2.12 × S(pz) of 7075.

B. Evolution of the total dissipated energy in the specimen

It should be noted that the crack extension which leads to fracture occurs when the provided energy is sufficient to overcome the material strength. Assuming that this energy is mainly dissipated in the plastic zone, a comparison can be made between the measured values and those predicted theoretically, as shown in Figure 8, by assuming propagation in mode I.

Fig. 7. Evolution of the normalized plastic zone in the case of (a) the Tracey model and (b) the proposed model, for both aluminum alloys.

Figure 8 illustrates the evolution of the total dissipated energy according to the stress intensity factor amplitude ΔK for both aluminum alloys, in the case of a constant stress ratio R = 0.5. It can be seen that the theoretical estimates given by the Tracey model are much lower than the experimental measurements, whereas a good prediction is obtained by the proposed model. For 2024
(Figure 8a), Q calculated by the Tracey model is about 11 to 76 times lower than the measured Q, the difference being smaller for high values of ΔK. The same trend is found for 7075 (Figure 8b): the gap between the theoretical estimates of Tracey and the measured values ranges from 3, for high values of ΔK, to 10 for low values of ΔK. This difference can be attributed in great part to the size of the estimated plastic zone, which is much lower than that measured, especially for materials that exhibit a high ductility, such as 2024-T351 in comparison with 7075-T7351. On the other hand, it may be noted from Figure 8 that the results obtained by the proposed model are in good agreement with the experimental data for both aluminum alloys. It is also observed that, for the same value of ΔK, the total dissipated energy of 2024-T351 is higher than that of 7075-T7351.

Fig. 8. Comparison of measured and estimated dissipated energy per cycle for (a) 2024-T351 and (b) 7075-T7351.

VI. Relationship Between Q and da/dN

The evolution of the crack growth rate is studied as a function of the energetic parameters in order to interpret the crack behavior in the various elucidated regimes. Figure 9 shows the evolution of da/dN in terms of the total dissipated energy Q for R = 0.5. For both aluminum alloys, the experimental results can be subdivided into two distinct stages, as shown in Figure 9. In the case of 2024, stage I is defined by da/dN ≤ 2×10⁻⁵ mm/cycle. In this stage we note a strong decrease of the crack propagation rate with the total dissipated energy.
A logarithmic approximation can be used in this stage; the relationship obtained is given by:

da/dN = 2×10⁻⁵ ln(Q) + 10⁻⁴   (34)

Fig. 9. Comparison of the measured evolution of da/dN with Q for 2024-T351.

Fig. 10. Evolution of da/dN with Q for 7075-T7351. The black lines represent the regression curves of the experimental results using power and logarithmic functions.

Stage II is defined by da/dN ≥ 2×10⁻⁵ mm/cycle. This region exhibits a stable evolution of the crack growth, and a power law can be used as an approximation in this stage:

da/dN = 0.7346 Q^1.272   (35)

In the case of 7075, stage I is defined by da/dN ≤ 2×10⁻⁴ mm/cycle. The evolution of the crack growth in this regime is unstable. Using a logarithmic function, we can determine a relationship of the form:

da/dN = 10⁻⁴ ln(Q) + 1.1×10⁻³   (36)

Stage II is defined by da/dN ≥ 2×10⁻⁴ mm/cycle. The obtained law has the following form:

da/dN = 0.1123 Q^0.8382   (37)

One can observe that there is a good agreement in stage II between the experimental data and the results obtained by the proposed model for both aluminum alloys. Furthermore, straight lines of coefficients 0.316 for 2024 and 0.086 for 7075 can be deduced from the proposed model. Indeed, we obtain the following expressions:

da/dN = 0.316 Q^1.1462 for 2024-T351   (38)

da/dN = 0.086 Q^0.7997 for 7075-T7351   (39)

However, a significant difference was found in stage I between the experimental data and the results of the proposed model.
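The power-law fits above are obtained by linear regression in log-log coordinates. The sketch below demonstrates this on synthetic data generated from (35); the data are fabricated for the demonstration, not the measured points:

```python
import math

def fit_power_law(q, dadn):
    """Least-squares fit of da/dN = c * Q^m by linear regression on logs."""
    xs = [math.log(v) for v in q]
    ys = [math.log(v) for v in dadn]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope

# synthetic stage-II data following da/dN = 0.7346 * Q^1.272, as in (35)
qs = [1e-4 * 1.5 ** i for i in range(12)]
rates = [0.7346 * q ** 1.272 for q in qs]
c, m = fit_power_law(qs, rates)
```

On noise-free data the regression recovers the generating coefficient and exponent; on measured data it returns the least-squares estimates of C and m.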
The application of the proposed model is therefore limited to the stage in which the crack propagation is stable.

VII. Conclusions

In this study an energy-based approach to fatigue crack growth has been proposed. This approach provides a direct link between the cyclic plastic strain energy and the plastic zone at the crack tip. In order to validate the model, experimental data obtained on CT specimens of aluminum alloys (2024-T351 and 7075-T7351), under constant amplitude with a stress ratio of R = 0.5 and mode I loading, have been exploited. The following conclusions have been drawn:

 The measured values of the cyclic plastic strain energy for both aluminum alloys are substantially higher than those calculated by the Tracey model. However, a relatively good agreement has been found between the experimental data and the results obtained by the proposed model.

 A correct modeling of the plastic zone is necessary to accurately determine the total cyclic plastic dissipation at the crack tip. The size of the plastic zone calculated by the proposed model is about 16 to 18 times greater than that calculated by the Tracey model.

Finally, it should be noted that the simplicity of the current modeling approach limits its ability to account for crack closure, environment and variable-amplitude loading effects, which are topics of ongoing research.

Acknowledgements

The experimental tests were carried out and provided within the framework of the cooperation project "CMEP 389 MDU 97" between the Laboratory of Mechanics and Energetics (University Djillali Liabes of Sidi Bel Abbes, Algeria) and the Laboratory of Mechanics and Rheology (University François Rabelais of Tours, France). The authors especially thank Professor N. Ranganathan and his team for their support and assistance.

References

[1] J. R.
rice, “the mechanics of crack tip deformation and extension by fatigue”, fatigue crack propagation special technical publication 415, astm, pp. 247-311, philadelphia, 1967 [2] i. shozo, i. yoshito, m. e. fine, “plastic work during fatigue crack propagation in a high strength low alloy steel and in 7050 al-alloy”, engineering fracture mechanics, vol. 9, no. 1, pp. 123-136, 1977 [3] p. k. liaw, s. i. kwun, m. e. fine, “plastic work of fatigue crack propagation in steels and aluminum alloys”, metallurgical transactions a, vol. 12, no. 1, pp. 49-55, 1981 [4] c. saix, p. jouanna, “analyse de la dissipation plastique dans des pièces métalliques minces”, journal de mécanique appliquée, vol. 5, no. 1, pp. 65-93, 1981 [5] s. r. bodner, d. l. davidson, j. lankford “a description of fatigue crack growth in terms of plastic work”, engineering fracture mechanics, vol. 17. no. 2, pp. 189-191, 1983 [6] a. d. joseph, t. s. gross, “comparison of techniques for the measurement of plastic work of fatigue crack growth in low carbon steel”, engineering fracture mechanics, vol. 21, no. 1, pp. 63-74, 1985 [7] n. w. klingbeil, “a total dissipated energy theory of fatigue crack growth in ductile solids”, international journal of fatigue, vol. 25, no. 2, pp. 117-128, 2003 [8] n. ranganathan, f. chalon, s. meo, “some aspects of the energy based approach to fatigue crack propagation”, international journal of fatigue, vol. 30, no. 10-11, pp. 1921-1929, 2008 [9] r. jones, m. krishnapillai, k. cairns, n. matthews, “application of infrared thermography to study crack growth and fatigue life extension procedures”, fatigue & fracture of engineering materials & structures, vol. 33, no. 12, pp. 871-884, 2010 [10] j. s. daily, n. w. klingbeil, “plastic dissipation energy at a bimaterial crack tip under cyclic loading”, international journal of fatigue, vol. 32, no. 10, pp. 1710-1723, 2010 [11] j. 
weertman, “theory of fatigue crack growth based on a bcs crack theory with work hardening”, international journal of fracture, vol. 9, no. 2, pp. 125-131, 1973 [12] m. mazari, “contribution à l’étude d’une approche énergétique de la propagation des fissures de fatigue”, thèse de doctorat, université de sidi bel abbès, algérie, 2003 [13] j. morrow, “cyclic plastic strain energy and fatigue of metals”, in internal friction, damping, and cyclic plasticity, astm stp 378, american society for testing and materials, 1965 [14] j. s. daily, n. w. klingbeil, “plastic dissipation in fatigue crack growth under mixed mode loading”, international journal of fatigue, vol. 26, no. 7, pp. 727-738, 2004 [15] m. mazari, b. bouchouicha, m. zemri, m. benguediab, n. ranganathan, “fatigue crack propagation analyses based on plastic energy approach”, computational materials science, vol. 41, no. 3, pp. 344-349, 2008 [16] n. ranganathan, “contribution au développement d’une approche énergétique à la propagation d’une fissure de fatigue”, thèse de doctorat, université de poitiers, france, 1985 [17] w. wang, h. hsu, “fatigue crack growth rate of metal by plastic energy damage accumulation theory”, journal of engineering mechanics, vol. 120, no. 4, pp. 776-795, 1994 [18] s. m. beden, s. abdullah, a. k. ariffin, “review of fatigue crack propagation models for metallic components”, european journal of scientific research, vol. 28, no. 3, pp. 364-397, 2009 [19] r. o. ritchie, “mechanisms of fatigue-crack propagation in ductile and brittle solids”, international journal of fracture, vol. 100, no. 1, pp. 55-83, 1999 [20] n. ranganathan, k. jendoubi, m. benguediab, j. petit, “effect of r ratio and k level on the hysteretic energy dissipated during fatigue crack propagation”, scripta metallurgica, vol. 21, no.
8, pp. 1045-1049, 1987 [21] d. m. tracey, “finite element solution for crack-tip behavior in smallscale yielding”, journal of engineering materials and technology, vol. 98, no. 2, pp. 146-151, 1976 [22] g. chalant, l. remy, “plastic strain distribution at the tip of a fatigue crack. application to fatigue crack closure in the threshold regime”, engineering fracture mechanics, vol. 16, no. 5, pp. 707-720, 1982 [23] g. chalant, “fissuration par fatigue d’alliages cobalt-nickel : discussion d’un modèle mécanique de propagation”, thèse de doctorat de l’ecole des mines de paris, 1981 [24] j. l. engerand, mécanique de la rupture, ed. techniques ingénieur, 1990 [25] p. ludwik, elemente der technologischen mechanik, springer-verlag ohg, berlin, 1909 [26] j. c. newman, “stress analysis of the compact specimen including the effects of pin loading fracture analysis”, astm stp 560, pp. 105-121, 1974 [27] j. e. srawley, b. gross, “stress intensity factors for bend and compact specimens”, engineering fracture mechanics, vol. 4, no. 3, pp. 587-589, 1972 [28] p. c. paris, f. a. erdogan, “a critical analysis of crack propagation laws”, journal of basic engineering, vol. 85, no. 4, pp. 528-533, 1963 [29] d. altenpohl, aluminium: technology, application and environment. a profile of a modern metal: aluminum from within, 6th edition, wiley, 2010 [30] p. s. pao, s. j. gill, c. r. feng, “on fatigue crack initiation from corrosion pits in 7075-t7351 aluminum alloy”, scripta materialia, vol. 43, no. 5, pp. 391-396, 2000 [31] j. w. kysar, “energy dissipation mechanisms in ductile fracture”, journal of the mechanics and physics of solids, vol. 51, no. 5, pp. 795-824, 2003 Engineering, Technology & Applied Science Research, Vol. 9, No.
Engineering, Technology & Applied Science Research, Vol. 9, No. 5, 2019, 4605-4611
www.etasr.com

Environmental Economic Dispatch with the Use of Particle Swarm Optimization Technique Based on Space Reduction Strategy

T. Manoj Kumar
Department of Electrical and Electronics Engineering, Saveetha School of Engineering, SIMATS, Chennai, India
manojkumart99@gmail.com

N. Albert Singh
Bharat Sanchar Nigam Limited, Chennai, India
basisngc@gmail.com

Abstract—This paper introduces an enhanced version of the particle swarm optimization (PSO) technique, intended to address the environmental economic dispatch problem of thermal electric power units. A space reduction (SR) strategy based PSO is proposed in order to obtain the Pareto optimal solution in the prescribed search space by enhancing the speed of the optimization process. PSO is a nature-inspired algorithm that can be applied to a wide range of engineering problems. Many papers have illustrated different techniques that solve various types of dispatch problems, with numerous pollutants as constraints. The SR strategy is applied to the PSO algorithm to improve the particles' moving behavior by using the search space effectively, thereby increasing the convergence rate so as to attain the Pareto optimal solution. The validity of the SR-PSO algorithm is demonstrated through its application to an Indian system with 6 generators and three IEEE systems with 30, 57 and 118 buses respectively, for variable load demands. The minimum fuel cost and least-emission solutions are achieved by examining various load conditions.

Keywords—search space reduction; particle swarm optimization (PSO); environmental/economic dispatch (EED) problem; Pareto optimal solution

I. INTRODUCTION

The process of satisfying energy demands raises concerns regarding energy sustainability and environmental protection, in conjunction with market and regulatory demands.
Environmental/economic dispatch (EED) is a technique to plan the generating units' output against the load demand. EED is essential to create sufficient capacity to meet continually varying customer load demands at minimal cost under various constraints. The EED problem is nonlinear, discontinuous and multimodal. It is essential to deliver the power generated by several units on an economically optimal basis in order to attain the best results in the generating system. Many methods have been used to solve the dispatch problem. In [1] the modulated particle swarm optimization (MPSO) technique was presented to solve the EED problem of thermal units, modulating the particles' velocity for better exploration and exploitation of the search space. This modulation of velocity is controlled by introducing a sinusoidal constraint function in the control equation. A fuzzified multi-objective particle swarm optimization (FMOPSO) algorithm was proposed in [2], where, in order to validate the effectiveness of the proposed method, a comparative study was conducted with other techniques such as weighted aggregation (WA) and multi-objective evolutionary algorithms (MOEA). Many further issues need to be considered, such as the nonlinear characteristics of ramp-rate limits and the prohibited operating zones of power system operations. In [3] a strength Pareto evolutionary algorithm (SPEA) was proposed in order to cope with nonlinear objectives; a diversity-preserving mechanism was developed to resolve the Pareto optimality issue. An artificial bee colony improved with dynamic population size (ABCDP) was introduced in [4] to demonstrate efficiency and effectiveness in handling the nonlinear multi-objective function. In [5] a modified bacterial foraging algorithm (MBFA) was developed for the solution of the EED problem, using the globally optimal bacterium with the most successful foraging strategy.
This method shows the capability of obtaining a quality compromise solution, where the operator has to weigh the different objectives according to the system constraints. Another method, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and differential evolution (DE), was proposed in [6], where the search space is thoroughly explored by the PSO while DE takes the initiative in exploiting the subspaces with sparse solutions. In this approach, the exploration and exploitation capability is improved by the effective usage of crowding distance and time-variant acceleration coefficients. A new multi-objective particle swarm optimization (MOPSO) that resolves the EED issue is explained in [7]. This technique is the multi-objective version of PSO, offering a redefinition of the global and local best individuals in the search space. The potential and efficiency of this method are proved by obtaining multiple optimal solutions in one simulation run and validating the diversity and well-distributed characteristics of the non-dominated solutions. A flower pollination technique used to resolve the economic load dispatch (ELD) and combined economic and emission dispatch (CEED) problems is suggested in [8]. Simulation results were compared to other swarm-based techniques to show its effectiveness. Its main advantage is that it can be used for large-scale power systems with valve-point effects. Many parameters were examined, such as convergence properties, computational efficiency, and economic effect. The spiral optimization algorithm (SOA) is used in [9] to resolve the economic and emission dispatch problem, in order to obtain the minimum fuel cost and emission level while satisfying the required load demand and operational constraints. (Corresponding author: Manoj Kumar)
This is a metaheuristic optimization algorithm with several advantages, including few control variables, local search capability, fast results, ease of use and a simple structure. The multi-objective differential evolution (MODE) algorithm is described in [10] to resolve the EED problem. A crowding-entropy diversity tactic is used in order to preserve the diversity of the Pareto optimal solutions. Also, fuzzy set theory is employed to extract the best compromise between fuel cost and emissions. A stochastic PSO method was used in [11], formulating both deterministic and stochastic models to deal with the economic load dispatch problem while considering the environmental impacts as constraints. In [12] a niched Pareto genetic algorithm (NPGA) approach is presented, considering the whole EED problem as a multi-objective issue with total fuel cost and emissions as competing objective functions. The main advantage of this method is that there is no restriction on the number of objectives to optimize. The flower pollination algorithm (FPA) is explained in [13], where only fuel cost is considered as the objective function to resolve the ELD problem, while both fuel cost and emissions are considered for CEED. The superiority of FPA over other algorithms is discussed, even for a large power system with valve-point effects. A nonlinear fractional approach for resolving the EED is elaborated in [14], presenting two simultaneous models of objective functions with nonlinear constraints. The first model minimizes the quotient of the fuel cost and emission functions, while the latter minimizes the total fuel cost expressed as a quadratic objective function. The non-dominated sorting genetic algorithm (NSGA-II) faces several drawbacks, such as lack of uniform diversity and absence of lateral diversity. These can be eliminated by introducing dynamic crowding distance (DCD) in a modified NSGA-II algorithm [15].
A differential evolution (DE) algorithm is developed for the emission-constrained economic dispatch problem in [16], to minimize the fuel cost; due to the heavy environmental impact created by thermal power plants, emission has to be considered as an additional objective function. The tribe-modified differential evolution (Tribe-MDE) algorithm is presented in [17] to solve the multi-objective EED problem; this multi-objective problem is recast as a min-max problem in order to be resolved with Tribe-MDE. A bare-bones multi-objective PSO algorithm to resolve the EED issue is presented in [18]. Several advantages are reported, such as a particle updating strategy without tuning procedures or mutation operation, with expanded search capability. The time-varying acceleration based PSO (PSO-TVAC) technique is proposed in [22] to resolve the EED problem; the standard PSO algorithm is improved by adjusting the acceleration constants in order to balance the exploration and exploitation capability. In [24], an exact method to resolve the EED problem, with PSO used to obtain the Pareto optimal solution, is elaborated. In [25], different evolutionary algorithms are applied to solve EED problems in various electrical power systems and their performance is compared. Resolving the economic dispatch problem by an improved version of the random drift PSO algorithm is explained in [26]. In [27], an improved PSO, called biogeography-based learning particle swarm optimization (BLPSO), is presented for solving ED problems involving different constraints. The chaotic bat algorithm is deployed in [28] for solving the ED problem, involving equality and inequality constraints such as power balance, prohibited operating zones, and ramp-rate limits. An improved differential evolution algorithm for the ELD problem, with or without valve-point effects, is explained in [29].
An adaptive PSO with heterogeneous multicore parallelism and GPU acceleration is deployed in [30], while a new swarm intelligence technique is applied in [32]. In [31], the resolution of the multi-objective EED problem by utilizing the grey wolf optimization algorithm is explained, analyzing various operating constraints. A new optimization technique named elephant herd optimization (EHO) was proposed in [33] for global optimization. Like most other metaheuristic algorithms, EHO does not use the preceding individuals in the later updating process. In [34], a combined heat and power economic dispatch problem is resolved, utilizing an advanced modified PSO technique on different systems. In [35], a TLABC algorithm is proposed, which employs three hybrid search stages in its search for the optimization parameters. The primary purpose of this study is to present the utilization of the space reduction particle swarm optimization (SR-PSO) technique in power systems. SR-PSO is a versatile and well-balanced mechanism that improves exploration and exploitation. The SR-PSO method is proposed to resolve environmental/economic dispatch (EED) problems for different power systems: an Indian utility system having 6 units, and three IEEE systems with 30, 57 and 118 buses respectively. Results are compared with other studies from the relevant literature.

II. PROBLEM FORMULATION

In order to resolve the EED problem, we should examine the best mixture of power generation that minimizes the aggregate cost, considering fuel and emission expenditure, under numerous working conditions. Multi-objective optimization is commonly used to obtain the ideal solution for such real-world challenges.
The main objectives are:

• Minimization of fuel cost: in a power system with n generators, the total fuel cost can be determined by:

F(P_g) = \sum_{i=1}^{n} F_i(P_i) = \sum_{i=1}^{n} \left( a_i P_i^2 + b_i P_i + c_i \right)    (1)

where F(P_g) denotes the total generation fuel cost, P_i represents the electrical output of generator i, a_i, b_i and c_i signify the cost coefficients of generator i, and n denotes the number of generators assigned to the operating system.

• Minimization of emissions: total emissions for fossil-fuel based heat generation are given by:

E(P_g) = \sum_{i=1}^{n} E_i(P_i) = \sum_{i=1}^{n} \left( \alpha_i P_i^2 + \beta_i P_i + \gamma_i \right)    (2)

where E(P_g) denotes the total emissions and α_i, β_i and γ_i are the emission coefficients of the i-th unit.

The multi-objective optimization problem has two goals, economy and emissions. It is transformed into a single-objective optimization problem as:

T(P_g) = u \cdot F(P_g) + (1 - u) \cdot E(P_g)    (3)

where T is the total cost and 0 ≤ u ≤ 1 is a compromising factor. When u is zero, the objective function is only the emission dispatch problem, which limits the emissions of the plant. When u is 1, the objective function becomes entirely the conventional economic load dispatch problem, which restricts the operating expenses of the scheme.

The EED problem has two sets of constraints. Firstly, there are the generator constraints: the real power output of every generator must lie between its lower and upper limits, so the inequality constraints for every generator must be assured:

P_{gi}^{min} \le P_{gi} \le P_{gi}^{max}, \quad i = 1, \ldots, n    (4)

where P_{gi}^{min} and P_{gi}^{max} are the lower and upper power limits of the i-th unit.
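The quadratic cost and emission models in (1)-(3) can be sketched in Python (the paper's experiments used MATLAB; the three-unit coefficient values below are purely illustrative and are not taken from any of the paper's test systems):

```python
import numpy as np

def fuel_cost(P, a, b, c):
    """Total fuel cost per (1): F = sum(a_i*P_i^2 + b_i*P_i + c_i)."""
    P, a, b, c = map(np.asarray, (P, a, b, c))
    return float(np.sum(a * P**2 + b * P + c))

def emissions(P, alpha, beta, gamma):
    """Total emissions per (2): E = sum(alpha_i*P_i^2 + beta_i*P_i + gamma_i)."""
    P, alpha, beta, gamma = map(np.asarray, (P, alpha, beta, gamma))
    return float(np.sum(alpha * P**2 + beta * P + gamma))

def total_objective(P, cost_coef, emis_coef, u):
    """Weighted single objective per (3): T = u*F + (1-u)*E, with 0 <= u <= 1."""
    return u * fuel_cost(P, *cost_coef) + (1.0 - u) * emissions(P, *emis_coef)

# Hypothetical 3-unit data for illustration only
P = [100.0, 80.0, 60.0]                 # unit outputs in MW
cost = ([0.010, 0.012, 0.008],          # a_i
        [2.00, 1.80, 2.20],             # b_i
        [100.0, 120.0, 90.0])           # c_i
emis = ([0.004, 0.005, 0.003],          # alpha_i
        [0.10, 0.12, 0.09],             # beta_i
        [10.0, 12.0, 8.0])              # gamma_i
T = total_objective(P, cost, emis, u=0.5)
```

Setting u = 1 recovers the pure economic dispatch objective and u = 0 the pure emission dispatch objective, exactly as the text describes.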
Secondly, there is the power balance constraint, where the aggregate generation must cover the real power loss in the transmission lines P_L and the total demand P_D:

\sum_{i=1}^{n} P_{gi} - P_D - P_L = 0    (5)

in which the system loss P_L is given by:

P_L = \sum_{i=1}^{n} \sum_{j=1}^{n} P_i B_{ij} P_j \;\; (MW)    (6)

III. SR-PSO IMPLEMENTATION IN THE EED PROBLEM

Inspired by the social behavior of animals, such as bird flocking and fish schooling, and by basic swarm principles, PSO is broadly utilized for the resolution of the heavily constrained EED problem. The particles move throughout the multi-dimensional search space until they discover the optimal solution. Using its own knowledge (p_best) and the knowledge achieved by the nearest particles (g_best), every particle updates its position throughout the flight. The velocity and position of the i-th particle, for fitness evaluation at iteration (k+1) in the m-dimensional search space, are given by:

v_i^{k+1} = w \, v_i^k + c_1 r_1 (p_i^{best} - x_i^k) + c_2 r_2 (g^{best} - x_i^k)    (7)

x_i^{k+1} = x_i^k + v_i^{k+1}, \quad v_i^0 = 0    (8)

where i is the particle's index, k represents the discrete time index, n denotes the number of particles in a group, m signifies the dimensions of a particle, w is the inertia weight factor, p_i^{best} represents the best position found by the i-th particle, g^{best} symbolizes the best position found by the swarm, c_1 and c_2 are the acceleration coefficients, r_1 and r_2 represent uniform random values in the range [0, 1], and x_i^k and v_i^k indicate the position and velocity of the i-th particle at the k-th iteration. A new strategy of the PSO algorithm is introduced for solving EED problems. Indications will be given on how to handle the inequality and equality constraints of the EED problem when adjusting every search point in PSO. A new PSO method with an SR strategy based optimization technique is used in order to improve the convergence rate of the process.
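A minimal sketch of the velocity and position update (7)-(8), assuming NumPy; the swarm size, dimensions and random seed are illustrative choices, not values from the paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration per (7)-(8):
    v_new = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x_new = x + v_new.
    x, v, pbest are (n_particles, dims) arrays; gbest is a (dims,) array."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # uniform r1 in [0, 1], drawn per element
    r2 = rng.random(x.shape)   # uniform r2 in [0, 1], drawn per element
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# Tiny example: 4 particles in 2 dimensions, initial velocity zero per (8)
rng = np.random.default_rng(0)
x0 = rng.random((4, 2)) * 100.0
v0 = np.zeros_like(x0)
x1, v1 = pso_step(x0, v0, pbest=x0.copy(), gbest=x0[0], rng=rng)
```

Note that when a particle sits exactly at both its personal best and the global best, the attraction terms vanish and only the inertia term w·v remains, which is the standard PSO behavior.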
This approach is activated when the performance does not improve over a pre-specified time. In this method, the search space is regulated with the proper usage of a constant Δ: a fraction of the distance between the global best and the maximum or minimum position bound is subtracted from or added to that bound, giving:

p_{i,max}^{k+1} = p_{i,max}^{k} - \Delta \left( p_{i,max}^{k} - gbest_i^{k} \right)    (9)

p_{i,min}^{k+1} = p_{i,min}^{k} + \Delta \left( gbest_i^{k} - p_{i,min}^{k} \right)    (10)

A. Updating the Inertia Weight

The update of the inertia weight is essential for resolving optimization problems. For high-dimensional optimization problems, local optima exist close to the global optimum solutions. Consequently, the exploitation capability of the search algorithm should be adequate to acquire the best solutions. Hence, the inertia weight must be updated during the iteration process, cycle by cycle. The inertia weight update is given by:

w = w_{max} \exp\left( -\eta \, \frac{itr}{itr_{max}} \right); \quad \eta = \log_e \frac{w_{max}}{w_{min}}    (11)

B. Updating the Position and Velocity of Particles

The perceptive manners are taken into consideration, regarding the best and worst experience of particles in the given search space. Consequently, using the preceding knowledge of position and velocity of each particle is recommended, where the present fitness of every particle is weighed against its previous fitness value; if this value is smaller, it can be treated as its experience. This particle experience generates significantly less diversity compared to the worst experience, and affords enhanced exploration and exploitation of the space without utilizing supplementary local random exploration.

C. Formulating the Objective Function

Numerous objectives are converted into one by pre-multiplying each target with a weight supplied by the user, using the weighted sum approach. The weights are usually picked in such a way that each one provides comparable importance for the problem.
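The bound-shrinking rule (9)-(10) and the decaying inertia weight (11) can be sketched as follows; this is a minimal interpretation of the reconstructed formulas, with `delta` playing the role of the constant Δ and the limits 0.9/0.4 taken from the text:

```python
import numpy as np

def shrink_bounds(p_max, p_min, gbest, delta):
    """Space-reduction step per (9)-(10): pull the upper and lower search
    bounds toward the global best position by a fraction delta of the
    remaining distance."""
    p_max_new = p_max - delta * (p_max - gbest)
    p_min_new = p_min + delta * (gbest - p_min)
    return p_max_new, p_min_new

def inertia_weight(itr, itr_max, w_max=0.9, w_min=0.4):
    """Exponentially decaying inertia weight per (11):
    w = w_max * exp(-eta * itr/itr_max), with eta = ln(w_max/w_min),
    so w starts at w_max and ends exactly at w_min."""
    eta = np.log(w_max / w_min)
    return w_max * np.exp(-eta * itr / itr_max)
```

With delta = 0.5, for instance, each activation halves the gap between the bounds and the incumbent global best, which is how the strategy concentrates the swarm in a promising region.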
Usually the weights are chosen so that their arithmetical sum equals one. The EED problem is represented as:

F_t = u \cdot F_{cost} + (1 - u) \cdot E_{cost}    (12)

Each objective function can take any value within its range, and u is the weighting factor which decides the distribution of pressure over every objective function. The weighting factor u can take distinct values between 0 and 1. Equal chances to reduce both objective functions are obtained when u is set to 0.5, where equal weight is given to both objective functions. An array of several solutions is usually obtained, known as Pareto optimal alternatives.

D. Algorithm of the Proposed Method

The implementation of SR-PSO in the EED problem consists of the following steps:

i) Initialization, setting the initial velocity to zero and randomizing the particles' positions.
ii) Position update, where every particle's position and velocity are updated, considering the corresponding constraints.
iii) Fitness value calculation for all particles.
iv) Obtain p_best, g_best and save the corresponding positions.
v) Apply the proposed strategy based on space reduction.
vi) Go to step (ii) until the required criterion is met.

Here the optimization problem has the equality and inequality constraints explained before. Equality constraints pose a difficulty for stochastic optimization algorithms, as they are hard to satisfy during the optimization procedure. Here the constraints are managed as follows:

1) Equality Constraints

A new scheme is suggested to tackle this constraint in the EED issue, so that at every iteration (5) is satisfied:

i) Initially disregard the network losses and randomly generate the power levels of units P_1, P_2, ..., P_{n-1}.
ii) Determine the last unit's power level from the power balance.
iii) Determine the transmission losses by utilizing (6).
iv) Include the losses by regulating the last unit's power level as:

P_n = P_D + P_L - (P_1 + P_2 + \cdots + P_{n-1})    (13)

2) Inequality Constraints

The lower and upper power limits are checked after each iteration to make sure they concur with (4). If a particle flies out of the limits, its current position is reset to its prior best position (p_best).

Figure 1 demonstrates the flowchart of the suggested SR-PSO technique. The cognitive and the social elements in this algorithm are not constants. The position and velocity of the i-th particle can be calculated using the following equations:

v(k+1) = w \cdot v(k) + c_1 \cdot rand_1 \cdot (p_{best} - x(k)) + c_2 \cdot rand_2 \cdot (g_{best} - x(k))    (14)

x(k+1) = x(k) + v(k+1)    (15)

c_1 = (c_{1f} - c_{1i}) \cdot \frac{iter}{iter_{max}} + c_{1i}    (16)

c_2 = (c_{2f} - c_{2i}) \cdot \frac{iter}{iter_{max}} + c_{2i}    (17)

where iter_max is the maximum iteration number and the inertia weight is limited between 0.4 and 0.9. c_{1i}, c_{2i}, c_{1f} and c_{2f} denote the initial and final values of the cognitive and social factors, while c_1 and c_2 represent the cognitive and social factors respectively.

Fig. 1. Flowchart for the proposed SR-PSO EED problem resolution.

IV. SIMULATION RESULTS AND ANALYSIS

A. Test System 1: Indian Utility System

The suggested technique is analyzed using an Indian utility system with six generators. The fuel rate and emission constants, lower and upper limits, and the transmission loss coefficient matrix are extracted from [20]. For the simplification of the multi-objective problem, only one pollutant emission is considered, and valve-point effects are not considered. Simulation results are obtained for a power demand of 900 MW. The algorithms were implemented in the MATLAB programming language. During the simulation process, the compromise factor α takes values from 0 to 1.
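The equality- and inequality-constraint handling described above can be sketched as follows. This is a fixed-point iteration over the loss term (since P_L in (6) depends on the last unit's output), assuming NumPy and hypothetical B-coefficients; it is not the paper's exact code:

```python
import numpy as np

def transmission_loss(P, B):
    """System loss per (6): P_L = sum_i sum_j P_i * B_ij * P_j."""
    return float(P @ B @ P)

def enforce_power_balance(P, Pd, B, iters=50):
    """Equality constraint (5): keep units 1..n-1 fixed and repeatedly set
    the last unit to P_n = Pd + P_L - sum(P_1..P_{n-1}), iterating because
    P_L itself depends on P_n."""
    P = np.array(P, dtype=float)
    for _ in range(iters):
        Pl = transmission_loss(P, B)
        P[-1] = Pd + Pl - P[:-1].sum()
    return P

def clip_to_limits(P, p_min, p_max):
    """Inequality constraint (4): keep each unit inside its limits."""
    return np.clip(P, p_min, p_max)

# Lossless illustration: with B = 0 the last unit simply absorbs the slack
P = enforce_power_balance([50.0, 30.0, 0.0], Pd=100.0, B=np.zeros((3, 3)))
```

In the lossless case the loop converges in one pass; with a realistic (small) B matrix a few iterations suffice because the loss term is a small quadratic correction. The paper instead resets a violating particle to its prior best position, of which `clip_to_limits` is a simpler stand-in.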
The optimization results with different α values are specified in Table I. These simulation results are compared with [23] in Table II. The variation of total fuel cost with total emissions is shown in Figure 2, from which the best compromise solution can be obtained.

TABLE I. OPTIMIZATION RESULTS ON AN INDIAN UTILITY SYSTEM

| Test system | Control variables | PL (MW) | Total fuel cost ($/hr) | Total emissions (kg/hr) |
|---|---|---|---|---|
| IEEE 30 bus system | PD=200 MW, α=0 | 2.8283 | 545.2791 | 210.6876 |
| | PD=200 MW, α=0.5 | 3.5697 | 526.7653 | 217.5804 |
| | PD=200 MW, α=1 | 4.4955 | 518.5650 | 235.4381 |

TABLE II. EED RESULTS WITH 900 MW DEMAND

| Generator | Economic dispatch (MW) | Emission dispatch (MW) |
|---|---|---|
| 1 | 120.5966 | 86.9897 |
| 2 | 35.2975 | 30.5767 |
| 3 | 16.6013 | 22.9873 |
| 4 | 10.0000 | 20.6892 |
| 5 | 10.0000 | 20.4620 |
| 6 | 12.0000 | 21.1235 |
| PL (MW) | 4.4955 | 2.8283 |
| Cost ($/hr) | 518.5650 | 545.2791 |
| Emissions (kg/hr) | 235.4381 | 210.6876 |

Fig. 2. Variation of generation cost with emission for different values of α.

B. Test System 2: IEEE 30-Bus System

This system has 6 generators and transmission losses are taken into consideration, while valve-point effects are not. Fuel and emission rate constants, along with various inequality power constraints, are taken from [19]. The simulation results are described for a 200 MW power demand. Table III shows the optimal solution of SR-PSO. The economic and emission dispatch results are shown in Table IV using compromise factor α of 1 and 0 respectively. The variation of total fuel cost with total pollutants is calculated for different values of the compromise factor. The variation is plotted in Figure 3, and the best solution with the least fuel cost and pollutant values can be found from the graph.

TABLE III.
OPTIMIZATION RESULTS OF THE IEEE 30 BUS SYSTEM

| Test system | Control variables | PL (MW) | Total fuel cost ($/hr) | Total emissions (kg/hr) |
|---|---|---|---|---|
| IEEE 30 bus system | PD=200 MW, α=0 | 2.8283 | 545.2791 | 210.6876 |
| | PD=200 MW, α=0.5 | 3.5697 | 526.7653 | 217.5804 |
| | PD=200 MW, α=1 | 4.4955 | 518.5650 | 235.4381 |

C. Test System 3: IEEE 57-Bus System

This system has 7 generators and 42 loads connected to various buses. The entire load demand is 1250.8 MW and 336.4 MVAr, without line limitations. The analysis has been conducted without considering transmission losses. Fuel and emission rate constants, alongside various inequality power constraints, are taken from [21]. Table V shows the optimization results for 3 different values of α. The EED results are shown in Table VI. The variation of total fuel cost with total emissions is shown in Figure 4 for various values of α. The best compromise solution can be obtained with the best execution time from the resultant graph. The simulation results of SR-PSO are described for a power demand of 900 MW.

TABLE IV. EED RESULTS WITH 200 MW DEMAND

| Generator | Economic dispatch (MW) | Emission dispatch (MW) |
|---|---|---|
| 1 | 120.5966 | 86.9897 |
| 2 | 35.2975 | 30.5767 |
| 3 | 16.6013 | 22.9873 |
| 4 | 10.0000 | 20.6892 |
| 5 | 10.0000 | 20.4620 |
| 6 | 12.0000 | 21.1235 |
| PL (MW) | 4.4955 | 2.8283 |
| Cost ($/hr) | 518.5650 | 545.2791 |
| Emissions (kg/hr) | 235.4381 | 210.6876 |

Fig. 3. Variation of generation cost with emission for different values of α.

TABLE V. OPTIMIZATION RESULTS OF THE IEEE 57 BUS SYSTEM

| Test system | Control variables | Total fuel cost ($/hr) | Total emissions (kg/hr) |
|---|---|---|---|
| IEEE 57 bus system | PD=900 MW, α=0 | 5086.2578 | 1667.0086 |
| | PD=900 MW, α=0.5 | 2717.3640 | 2595.5687 |
| | PD=900 MW, α=1 | 2534.5068 | 3114.8429 |

TABLE VI.
EED RESULTS WITH 900 MW DEMAND

| Generator | Economic dispatch (MW) | Emission dispatch (MW) |
|---|---|---|
| 1 | 384.5697 | 188.4421 |
| 2 | 10.0000 | 93.8984 |
| 3 | 20.0000 | 104.1802 |
| 4 | 10.0000 | 94.1156 |
| 5 | 341.1844 | 173.7787 |
| 6 | 10.0000 | 93.9535 |
| 7 | 124.2458 | 151.6316 |
| Cost ($/hr) | 2534.5068 | 5086.2578 |
| Emissions (kg/hr) | 3114.8429 | 1667.0086 |

Fig. 4. Variation of generation cost with emission for different values of α.

D. Test System 4: IEEE 118-Bus System

Simulations were run on a typical 118-bus system, one of the largest power systems considered here, with 14 generating units. The fuel and emission rate constants, alongside various inequality power constraints, were taken from [19]. The optimization results are listed in Table VII. Economic and emission dispatch results are demonstrated in Table VIII. The system load demand is 950 MW. The variation of total fuel cost with total emissions is given in Figure 5.

TABLE VII. OPTIMIZATION RESULTS OF THE IEEE 118 BUS SYSTEM

| Test system | Control variables | PL (MW) | Total fuel cost ($/hr) | Total emissions (kg/hr) |
|---|---|---|---|---|
| IEEE 118 bus system | PD=950 MW, α=0 | 7.6657 | 4594.7261 | 23.4037 |
| | PD=950 MW, α=0.5 | 7.3835 | 4441.8930 | 91.2697 |
| | PD=950 MW, α=1 | 10.0603 | 4347.8062 | 398.2422 |

TABLE VIII. EED RESULTS WITH 950 MW DEMAND

| Generator | Economic dispatch (MW) | Emission dispatch (MW) |
|---|---|---|
| 1 | 102.6883 | 70.5071 |
| 2 | 90.7391 | 50.0000 |
| 3 | 50.0000 | 77.7310 |
| 4 | 50.0000 | 88.6522 |
| 5 | 50.0000 | 67.6157 |
| 6 | 50.0020 | 50.0007 |
| 7 | 50.0000 | 73.4310 |
| 8 | 50.0002 | 72.3253 |
| 9 | 63.2259 | 73.2658 |
| 10 | 63.1861 | 89.5837 |
| 11 | 62.8433 | 50.0000 |
| 12 | 177.3753 | 72.4813 |
| 13 | 50.0000 | 72.0719 |
| 14 | 50.0000 | 50.0000 |
| PL (MW) | 10.0603 | 7.6657 |
| Cost ($/hr) | 4347.8062 | 4594.7261 |
| Emissions (kg/hr) | 398.2422 | 23.4037 |

V. CONCLUSION

The SR-PSO analysis was conducted in order to resolve the EED problem for the above-mentioned power systems.
Power dispatch is planned as two objective functions, simultaneously diminishing total operating cost and pollutant emissions. The two objectives converge into one function by utilizing a mathematical modeling method. Optimal values of the required variables are obtained for several loading conditions for the IEEE systems having 30, 57 and 118 buses and for an Indian utility system with six generators. The results satisfy all the chosen constraints. The comparison shows that the proposed approach has competitive performance in resolution quality and computation time. The proposed SR-PSO is robust, efficient and simple. This paper does not impose any constraints on the number of objectives, and the approach may be extended to incorporate more objectives by utilizing various algorithms.

Fig. 5. Variation of generation cost with emission for different values of α.

REFERENCES

[1] V. K. Jadoun, N. Gupta, K. R. Niazi, A. Swarnkar, "Modulated particle swarm optimization for economic emission dispatch", International Journal of Electrical Power and Energy Systems, Vol. 73, pp. 80-88, 2015
[2] L. Wang, C. Singh, "Environmental/economic power dispatch using a fuzzified multi-objective particle swarm optimization algorithm", Electric Power Systems Research, Vol. 77, No. 12, pp. 1654-1664, 2007
[3] M. A. Abido, "Environmental/economic power dispatch using multiobjective evolutionary algorithms", 2003 IEEE Power Engineering Society General Meeting, Toronto, Canada, July 13-17, 2003
[4] D. Aydin, S. Ozyon, C. Yasar, T. Liao, "Artificial bee colony algorithm with dynamic population size to combined economic and emission dispatch problem", International Journal of Electrical Power and Energy Systems, Vol. 54, pp. 144-153, 2014
[5] P. K. Hota, A. K. Barisal, R. Chakrabarti, "Economic emission load dispatch through fuzzy based bacterial foraging algorithm", International Journal of Electrical Power and Energy Systems, Vol. 32, No. 7, pp. 794-803, 2010
[6] D. W. Gong, Y. Zhang, C. L.
Qi, "Environmental/economic power dispatch using a hybrid multi-objective optimization algorithm", International Journal of Electrical Power and Energy Systems, Vol. 32, No. 6, pp. 607-614, 2010
[7] M. A. Abido, "Multiobjective particle swarm optimization for environmental/economic dispatch problem", Electric Power Systems Research, Vol. 79, No. 7, pp. 1105-1113, 2009
[8] A. Y. Abdelaziz, E. S. Ali, S. M. Abd Elazim, "Combined economic and emission dispatch solution using flower pollination algorithm", International Journal of Electrical Power and Energy Systems, Vol. 80, pp. 264-274, 2016
[9] L. Benasla, A. Belmadani, M. Rahli, "Spiral optimization algorithm for solving combined economic and emission dispatch", International Journal of Electrical Power and Energy Systems, Vol. 62, pp. 163-174, 2014
[10] L. H. Wu, Y. N. Wang, X. F. Yuvan, S. W. Zhou, "Environmental/economic power dispatch problem using multi-objective differential evolution algorithm", Electric Power Systems Research, Vol. 80, No. 9, pp. 1171-1181, 2010
[11] L. Wang, C. Singh, "Stochastic economic emission load dispatch through a modified particle swarm optimization algorithm", Electric Power Systems Research, Vol. 78, pp. 1466-1476, 2008
[12] M. A. Abido, "A niched Pareto genetic algorithm for multiobjective environmental/economic dispatch", International Journal of Electrical Power and Energy Systems, Vol. 25, No. 2, pp. 97-105, 2003
[13] A. Y. Abdelaziz, E. S. Ali, S. M. Abd Elazim, "Flower pollination algorithm to solve combined economic and emission dispatch problems", Engineering Science and Technology, an International Journal, Vol. 19, No. 2, pp. 980-990, 2016
[14] F. Chen, G. H. Huang, Y. R. Fan, R. F.
Liao, "A nonlinear fractional programming approach for environmental-economic power dispatch", International Journal of Electrical Power and Energy Systems, Vol. 78, pp. 463-469, 2016
[15] S. Dhanalakshmi, S. Kannan, K. Mahadevan, S. Baskar, "Application of modified NSGA-II algorithm to combined economic and emission dispatch problem", International Journal of Electrical Power and Energy Systems, Vol. 33, No. 9, pp. 992-1002, 2011
[16] A. A. Abou El Ela, M. A. Abido, S. R. Spea, "Differential evolution algorithm for emission constrained economic power dispatch problem", Electric Power Systems Research, Vol. 80, No. 10, pp. 1286-1292, 2010
[17] T. Niknam, H. D. Mojarrad, B. B. Firouzi, "A new optimization algorithm for multi-objective economic/emission dispatch", International Journal of Electrical Power and Energy Systems, Vol. 46, pp. 283-293, 2013
[18] Y. Zhang, D. W. Gong, Z. Ding, "A bare-bones multi-objective particle swarm optimization algorithm for environmental/economic dispatch", Information Sciences, Vol. 192, pp. 213-227, 2012
[19] M. Modiri-Delshad, N. Abd Rahim, "Multi-objective backtracking search algorithm for economic emission dispatch problem", Applied Soft Computing, Vol. 40, pp. 476-494, 2016
[20] M. Basu, "Economic environmental dispatch using multi-objective differential evolution", Applied Soft Computing, Vol. 11, No. 2, pp. 2845-2853, 2011
[21] S. P. Karthikeyan, K. Palanichami, C. Rani, I. J. Raglend, D. P. Kothari, "Security constrained unit commitment problem with operational, power flow and environmental constraints", WSEAS Transactions on Power Systems, Vol. 4, pp. 53-66, 2009
[22] B. Hadji, B. Mahdad, K. Srairi, N. Mancer, "Multi-objective PSO-TVAC for environmental/economic dispatch problem", Energy Procedia, Vol. 74, pp. 102-111, 2015
[23] J. Cai, X. Ma, Q. Li, L. Li, H. Peng, "A multi-objective chaotic ant swarm optimization for environmental/economic dispatch", International Journal of Electrical Power and Energy Systems, Vol. 32, No.
5, pp. 337-344, 2010 [24] l. bayon, j. m. grau, m. .m. ruiz, p. m. suarez, “the exact solution of the environmental/economic dispatch problem”, ieee transactions on power systems, vol 27, no. 2, pp. 723-731, 2012 [25] b. y. qu, y. s. zhu, y. c. jiao, m. y. wu, p. n. suganthan, j. j. liang, “a survey on multi-objective evolutionary algorithms for the solution of the environmental/economic dispatch problems”, swarm and evolutionary computation, vol. 38, pp. 1-11, 2018 [26] w. t. elsayed, y. g. hegazy, m. s. el-bages, f. m. bendary, “improved random drift particle swarm optimization with self-adaptive mechanism for solving the power economic dispatch problem”, ieee transactions on industrial informatics, vol. 13, no. 3, pp. 1017–1026, 2017 [27] q. qin, s. cheng, x. chu, x. lei, y. shi, “solving non-convex/nonsmooth economic load dispatch problems via an enhanced particle swarm optimization”, applied soft computing, vol. 59, pp. 229–242, 2017 [28] b. r. adarsh, t. raghunathan, t. jayabarathi, x. s. yang, “economic dispatch using chaotic bat algorithm”, energy, vol. 96, pp. 666–675, 2016 [29] d. zou, s. li, g. g. wang, z. li, h. ouyang, “an improved differential evolution algorithm for the economic load dispatch problems with or without valve-point effects”, applied energy, vol. 181, pp. 375–390, 2016 [30] m. p. wachowiak, m. c. timson, d. j. du val, “adaptive particle swarm optimization with heterogeneous multicore parallelism and gpu acceleration”, ieee transactions on parallel distributed systems, vol. 28, no. 10, pp. 2784–2793, 2017 [31] y v. k. reddy, m d. reddy, “solution of multi objective environmental economic dispatch by grey wolf optimization algorithm”, international journal of intelligent systems and applications, vol. 7, no. 1, pp. 34-41, 2019 [32] m. jevtic, n. jovanovic, j. radosavljevic, d. klimenta, “moth swarm algorithm for solving combined economic and emission dispatch problem”, elektronika ir elektrotechnika, vol. 23, no. 5, pp. 21-28, 2017 [33] h. 
engineering, technology & applied science research vol. 1, no. 1, 2011, 17-22 17 www.etasr.com david et al: experimental evaluation of a dedicated pinhole spect system … experimental evaluation of a dedicated pinhole spect system for small animal imaging and scintimammography s. david department of medical instruments technology technological educational institute of athens sdavid@teiath.gr m. georgiou department of medical instruments technology technological educational institute of athens mary_georgiou@yahoo.gr e. fysikopoulos department of electrical and computer engineering national technical university of athens lefteris.fysikopoulos@gmail.com g. loudos department of medical instruments technology technological educational institute of athens gloudos@teiath.gr abstract—nuclear medicine (spect and pet) provides functional information, which is complementary to the structural one. in cancer imaging, radiopharmaceuticals allow visualization of the functionality of cancer cells, so that small cell populations can be imaged. this allows early diagnosis, as well as fast assessment of the response to therapy. our system is a single-head gamma camera based on an r3292 position sensitive photomultiplier tube (pspmt), coupled to a 10cm diameter csi:tl crystal. we have assessed two csi:tl crystals with pixel sizes of 2x2mm2 and 3x3mm2 respectively.
three collimators were tested: a) a hexagonal, 1.1mm in diameter, general purpose parallel hole collimator, b) a 1mm pinhole and c) a 2mm pinhole. the systems were tested using capillary phantoms. all measurements were carried out in photon counting mode with gamma radiation produced by 99mtc. using the 2x2mm2 crystal and the 1mm pinhole collimator a resolution better than 1mm was achieved. this allows very detailed imaging of small animals. using the 3x3mm2 crystal and the 2mm pinhole collimator a resolution of 1.3mm was possible, with suitable sensitivity for breast imaging. these results indicate that this system is suitable for animal and breast studies. the next step will be the clinical evaluation of the camera. keywords—dedicated gamma camera; pspmt; small animal imaging; scintimammography; pinhole i. introduction recently there has been a growing interest in compact and high resolution small gamma cameras, which are used for applications ranging from small animal imaging to scintimammography [1]-[3]. several groups have been working on the development of small gamma cameras using position sensitive photomultiplier tubes (pspmts) and pixellated scintillator crystals [2]-[6]. the spatial resolution of these systems is mainly determined by the degree of pixellation; thus the use of discrete crystals allows the selection of the intrinsic spatial resolution of the system independently of the scintillator's light yield. this technique assigns an event to a specific crystal location and improves the spatial linearity of the fov at the edges, compared with a continuous scintillator plate [7], [8]. image quality in single photon emission computed tomography (spect) is strongly influenced by the capability of the gamma detector to estimate the energy and position of the γ-event.
traditionally, gamma rays are detected by scintillator crystals that produce a scintillation light pulse, which is amplified, commonly by a position-sensitive photomultiplier tube (pspmt), and read out via charge division techniques [8], [9]. detectors based on pspmts have the advantage of a continuous photodetector surface (i.e. no gaps between pmts, as in an anger camera). cameras based on pspmts are increasingly used for in vivo small animal studies [10], [11]. the benefits of small field of view systems have been explored in the clinical environment and mainly in scintimammography [13], [14]. scintimammography is a nuclear medicine breast imaging technique that involves the injection of a radioactive tracer (dye) into the patient. since the tracer accumulates differently in cancerous and non-cancerous tissues, scintimammography allows physicians to determine whether cancer is present or not. scintimammography is a supplemental breast exam that is used in specific patient populations to investigate a breast abnormality, following conventional x-ray mammography and before biopsy. over the past fifteen years there has been increased interest in dedicated breast imagers using single photon emission mammography (spem) or positron emission mammography (pem) methods; thus a number of such dedicated systems have been developed and some are commercially available. although x-ray mammography remains the best screening method, it is characterized by low sensitivity, especially in the case of dense breasts. this leads to a large number of negative biopsies, part of which could be avoided. since needle biopsy has obvious financial and psychological effects, there is a need for a more sensitive second examination. scintimammography using conventional gamma cameras or pet systems is not sensitive enough, due to the large size of those systems and their limited performance.
typical cases where scintimammography with dedicated breast imagers is clinically useful are dense breasts, or therapy follow-up, so that metabolic changes in breast tumors can be depicted a few days after treatment. in this work, a dedicated single-head round gamma camera based on an r3292 pspmt, coupled to a 10cm diameter csi:tl crystal, was tested. two crystals, with pixel sizes of 2x2mm2 for small animal imaging and 3x3mm2 for breast examinations, were assessed. pinhole collimator (1mm and 2mm) and crystal combinations are compared in terms of both spatial resolution and sensitivity. images resulting from flooding the two round csi:tl crystals with 99mtc gamma rays are shown. moreover, profiles of line images from a 1mm capillary tube containing 50µci to 100µci of 99mtc were used in order to determine the best full width at half maximum (fwhm) of the system with respect to the magnification factor of the pinhole apparatus. ii. materials and methods a. dedicated gamma ray imager description the dedicated small gamma camera consists of a discrete array of 2x2x3mm3 or 3x3x5mm3 csi:tl crystals with 0.22mm spacing, coupled to a 12.7cm round cross-wire anode pspmt (hamamatsu r3292) with simple resistive charge division readout of the 28x and 28y cross-wire anodes [15]. the general purpose collimator used has 1.5mm diameter, 2.2cm long hexagonal holes with 0.2mm septa. a resistive current divider network reduces the number of signals from 56 to 4. the four charges arriving at the resistive divider ends (qxa, qxb, qyc and qyd) [16] were preamplified and finally amplified by two mech-tronics nuclear model 519 dual amplifiers (4 channels).
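the position and energy estimates from the four divided charges follow anger-type ratio equations (stated below for the acquisition system). a minimal sketch of this computation — variable and function names are illustrative, not from the original readout software:

```python
def anger_centroid(qxa, qxb, qyc, qyd):
    """Estimate event position and energy from the four charge-division
    outputs of the resistive divider (Anger-type ratio equations)."""
    energy = qxa + qxb + qyc + qyd      # total charge ~ deposited energy
    x = (qxb - qxa) / (qxa + qxb)       # normalized x centroid in [-1, 1]
    y = (qyd - qyc) / (qyc + qyd)       # normalized y centroid in [-1, 1]
    return x, y, energy

# four equal charges correspond to an event at the detector center
x, y, e = anger_centroid(1.0, 1.0, 1.0, 1.0)
```

the exact normalization used by the mpa/win software may differ; only the ratio structure is essential.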
the acquisition system is based on a fast 7070 analog to digital converter (adc) module connected to a fast multiparameter acquisition system operating in the windows environment (mpa/win). the multiparameter system controls the four adcs (dependent mode) and acquires data in list mode through a 1mbyte first-in first-out (fifo) register inside the mpa card. the centroid position (x, y) of the incident light pulse distribution on the photocathode is obtained by anger's equations. the sum of the four charges provides the total energy deposited by the incident gamma ray. raw data processing and visualisation were carried out using custom software written in c++. the camera is shielded by 1cm of lead and by 5mm of tungsten on the side facing the radiation. a high voltage of 950v was applied; this voltage takes advantage of the full dynamic range of the adcs, without noise amplification. b. pinhole imaging characteristics and collimator designs the imaging geometry of the pinhole collimator as compared to that of the parallel-hole collimator is shown in figure 1. the object is positioned close to the pinhole aperture, and a reversed and magnified image is projected onto the detector. comprehensive discussions of the principle of pinhole imaging in nuclear medicine can be found in [17]. the unique feature of pinhole imaging is that the image is magnified as compared to the image of the parallel-hole collimator. due to this magnification, the limits imposed by the intrinsic resolution of the camera system can be overcome. in addition, the pinhole collimator also provides a better trade-off between image resolution and photon detection efficiency than the parallel-hole collimator. however, the sensitivity is not uniform over the entire field of view, thus corrections have to be taken into account. figure 1. schematic view of pinhole imaging as compared with a parallel-hole collimator.
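a numeric sketch of how pinhole magnification relaxes the intrinsic-resolution limit. the geometric (aperture) term and the demagnified intrinsic term are combined in quadrature, a standard pinhole approximation that the text does not spell out; the numbers below are illustrative:

```python
import math

def pinhole_resolution(a_eff_mm, l_mm, z_mm, r_intrinsic_mm):
    """System resolution at the object plane for a pinhole collimator:
    geometric term a_eff*(1 + 1/M) combined in quadrature with the
    intrinsic resolution demagnified by the magnification M = l/z."""
    m = l_mm / z_mm                      # magnification factor
    r_geo = a_eff_mm * (1.0 + 1.0 / m)   # geometric (aperture) term
    r_int = r_intrinsic_mm / m           # intrinsic term, demagnified by M
    return m, math.sqrt(r_geo**2 + r_int**2)

# 1mm aperture, 80mm pinhole-to-detector distance, source 10mm away,
# 3.7mm intrinsic resolution (2x2mm crystal) -> magnification 8
m, r = pinhole_resolution(1.0, 80.0, 10.0, 3.7)
```

note how the 3.7mm intrinsic figure contributes only ~0.46mm at magnification 8; the measured values also depend on the effective keel-edge aperture diameter, which this sketch does not model.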
reproduced from [18]. in the basic pinhole geometry shown in figure 1, z is the object-to-collimator distance, l is the collimator length, and a is the effective hole diameter of the pinhole aperture. the magnification factor m is given by:

m = l / z   (1)

the resolution of pinhole imaging is determined by the size of the pinhole aperture, the gamma photon energy, and the material used to fabricate the pinhole aperture [18]. c. flood correction for the parallel collimator flood correction is a necessary step for pixellized scintillators and was performed following the procedure reported in [19]. the uniformity correction was based on a flood source, consisting of a plastic container filled with a radioactive solution of 2mci of 99mtc. the source was sufficiently large to cover the entire detector and was placed in direct contact with the collimated detector. a large number of counts (~4,000,000) was collected to minimize the statistical noise. using the raw flood image (figure 2(a)), a grid that maps each crystal pixel is determined (figure 2(b)) and the values in each crystal pixel are summed, leading to the flood matrix (figure 2(c)) that is used to correct raw images. figure 3 demonstrates an example of the uncorrected raw image (a), the summed image (b) and the flood corrected image (c) for a thin capillary filled with a 99mtc solution. figure 2. the raw flood image (a); the grid that maps each crystal pixel (b); and the summed flood matrix (c) that is used for flood correction. a flood 99mtc source has been used. figure 3. an example of the uncorrected raw image (a), summed image (b) and flood corrected image (c) of a thin capillary filled with a 99mtc solution. d.
experimental method 1) system sensitivity the sensitivity was defined as the fraction of the events emitted by the line source that were actually detected by the system. we used a 1mm capillary tube source filled with a 3.7mbq (100µci) 99mtc solution, placed perpendicular to the x-direction across the gamma camera fov, at 0mm source-to-collimator distance. measurements were carried out for 300sec. corrections for radioactive decay and background counts were applied in the sensitivity calculations. 2) system spatial resolution spatial resolution was measured as a function of the distance from the source to the detector surface, using a capillary filled with 99mtc. measurements were repeated as the capillary was stepped (vertically and horizontally) to several positions across the detector fov. the fwhm of the capillary tube profiles was calculated using a gaussian fit. fwhm values (in mm) were extracted by multiplying the standard deviation σ of the distribution by 2.35. for the evaluation of the pinhole apparatus, line source images were obtained. two different keel edge pinhole collimators, with 1 and 2mm holes, were evaluated. the pinhole-to-line-source distance varied horizontally from 1mm to 90mm. the images were acquired using a 100% energy window. the fwhm was calculated as the average of three profiles of the line images. 3) system linearity linearity was measured with a flood source phantom consisting of a plastic container filled with 99mtc radioactive solution, with radioactivity varying from ~1mci to 22mci with increasing activity (in the same volume). the source was placed in direct contact with the collimated detector. the maximum activity for which the system has a linear response was determined. 4) system energy resolution system energy resolution is expressed as a percentage and is equal to the photopeak fwhm divided by the photopeak center energy, measured with the collimator in place.
the system's energy resolution is always larger than the intrinsic one. energy spectra were obtained by placing a 99mtc point source at the center of the fov, above the detector surface. all measurements were carried out after the optimization of the adjustable parameters (hv, amplifier gain, discriminator settings, adc coincidence time and conversion gain) that provided the maximum count rate combined with the best imaging performance. iii. results and discussion table i shows the results of the measurements regarding system spatial resolution and system sensitivity of the dedicated gamma camera equipped with 2x2x3mm3 and 3x3x5mm3 discrete csi:tl pixellated crystals, at zero source-to-parallel-collimator distance.

table i. gamma camera characteristics

detector characteristics                    | 3x3x5mm3 csi:tl | 2x2x3mm3 csi:tl
coating thickness (g/cm2)                   | 0.85*           | 0.68*
crystal array pitch (mm)                    | 0.22            | 0.22
fwhm spatial resolution (mm)                | 4.5             | 3.7
fwhm energy resolution @ 140 kev (%)        | 22              | 23
sensitivity (cps/mbq)                       | 198             | 156
quantum detection efficiency @ 140 kev (%)  | 85.7            | 68.9
* assuming 100% packing density

the crystal array with 2x2mm2 crystal size shows improved detector spatial resolution compared to the 3x3mm2 crystal (3.7mm fwhm and 4.5mm fwhm respectively). figure 4 shows the variation of spatial resolution with increasing source-to-parallel-collimator distance. figure 4. variation of spatial resolution with source to collimator distance, for the 3x3x5mm3 and 2x2x3mm3 csi:tl crystals. the csi:tl pixellated scintillator of 2x2x3mm3 has better resolution properties at distances smaller than 15cm. beyond this point, the spatial resolution properties of the two pixellated csi:tl scintillators are almost the same. the sensitivity of the system remains the same with the variation of the source-to-parallel-collimator distance, as expected.
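the fwhm extraction described in the experimental method (gaussian fit of the capillary profile, fwhm ≈ 2.35σ) and the percentage definition of energy resolution can be sketched as follows; the σ and fwhm inputs are illustrative, not fitted from real profiles:

```python
import math

def gaussian_fwhm(sigma_mm):
    # FWHM of a Gaussian: 2*sqrt(2*ln 2)*sigma, i.e. ~2.355*sigma
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_mm

def energy_resolution_percent(fwhm_kev, peak_kev):
    # system energy resolution as defined in the text:
    # photopeak FWHM divided by photopeak center energy, as a percentage
    return 100.0 * fwhm_kev / peak_kev

fwhm = gaussian_fwhm(1.57)                   # sigma of a fitted line profile (mm)
er = energy_resolution_percent(30.8, 140.0)  # photopeak FWHM at 140 keV (99mTc)
```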
the imaging geometry of the pinhole collimator as compared to that of the parallel-hole collimator was shown in figure 1. the unique feature of pinhole imaging is that the image is magnified as compared to the image of the parallel-hole collimator. due to this magnification, the limits imposed by the intrinsic resolution of the camera system can be overcome. however, the sensitivity is not uniform over the entire field of view, thus corrections have to be taken into account. a pinhole-to-absorption-plane distance of 8cm ensures less than a 50% efficiency reduction at the edges of the fov. that distance was experimentally determined by positioning a point source at the center of the fov and at the critical angle, while varying the pinhole-to-absorption-plane distance. the critical angle is the tapered angle that 'sees' the edge of the detector. figure 5. spatial resolution for the 2x2mm2 csi:tl crystal using the 1mm and 2mm pinhole collimators, as a function of source to pinhole distance. figure 6. system sensitivity for the 2x2mm2 csi:tl crystal with the 1 and 2mm aperture pinholes, recorded with increasing source to pinhole distance. a line source with ~50µci of 99mtc radioactivity was imaged; the distance between the crystal surface and the collimator was 8cm. the source-to-pinhole distance varied from 1 to 9cm. the spatial resolution as well as the sensitivity recorded by the detector equipped with pinhole collimators are shown in figures 5 and 6. the pixellated csi:tl scintillators were set 8cm away from the center of the 1 and 2mm keel edge pinhole collimators.
measurements were carried out by changing the distance of the line source from the collimator. that distance was increased in steps of 1cm. the measurements were stopped when the magnification factor decreased to close to 1. figures 7 and 8 show the variation of spatial resolution as well as the sensitivity performance achieved by the 3x3mm2 csi:tl scintillator combined with the 1 and 2mm keel edge pinhole collimators. figure 7. spatial resolution for the 3x3mm2 csi:tl crystal using the 1mm and 2mm pinhole collimators, as a function of source to pinhole distance. figure 8. system sensitivity for the 3x3mm2 csi:tl crystal with the 1 and 2mm aperture pinholes, recorded with increasing source to pinhole distance. a line source with 50µci of 99mtc radioactivity was imaged; the distance between the crystal surface and the collimator was 8cm. the source-to-pinhole distance varied from 1 to 9cm. in all cases the best spatial resolution (i.e. 0.79mm) was found for: (a) a magnification factor equal to eight (at 1cm source-to-collimator distance), and (b) the 1mm pinhole aperture, which overcomes the physical limitation of the crystal pixel size. the best spatial resolution achieved with the 2mm pinhole aperture was 1.2-1.3mm, depending on the crystal pixel size. this result is significantly better than that found for the parallel collimator. in all cases, the sensitivity recorded by the system with the pinhole collimator drops exponentially when the source-to-collimator distance increases. this is in agreement with what is theoretically expected.
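the steep drop in recorded counts is what the standard pinhole sensitivity relation predicts: geometric efficiency scales with the square of the effective aperture diameter and inversely with the square of the source-to-pinhole distance. a sketch of the commonly used approximation g ≈ d²·cos³θ/(16z²); the numbers are illustrative, not the paper's measurements:

```python
def pinhole_sensitivity(d_eff_mm, z_mm, cos_theta=1.0):
    """Geometric efficiency of a pinhole aperture for a point source:
    g ~ d_eff^2 * cos^3(theta) / (16 * z^2)."""
    return (d_eff_mm ** 2) * cos_theta ** 3 / (16.0 * z_mm ** 2)

# doubling the source-to-pinhole distance quarters the sensitivity
g1 = pinhole_sensitivity(1.0, 10.0)
g2 = pinhole_sensitivity(1.0, 20.0)
```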
for distances greater than 8cm, the sensitivity increases again as the image dimensions decrease. point sensitivity and counting rate linearity were evaluated for a range of activities from ~37mbq to 0.814gbq (~1 to 22mci), with a 99mtc source in air. figure 9. counting rate linearity for the 3x3mm2 and 2x2mm2 csi:tl pixellated scintillator plates. good linearity is evident up to 10mci (figure 9), which is adequate for the activity ranges of small animal imaging and scintimammography. for activity values higher than 10mci, non-linearities and saturation effects begin to appear. at 20mci, only a noise signal was imaged. at high count rates, both counts and spatial resolution are lost. the former is due to dead time (the time during which the system cannot process another event), and the latter is primarily due to pulse summation (pile-up) of low energy pulses, which produces mispositioned signals. beyond 22mci of radioactivity, a paralyzing effect drops the counts recorded by the system [20]. iv. conclusions and future work a dedicated system suitable for performing in vivo imaging of small animals and scintimammography has been built and tested. using the 2x2mm2 crystal and the 1mm pinhole collimator, a resolution better than 1mm was achieved. using the 3x3mm2 crystal and the 2mm pinhole collimator, a resolution of 1.3mm was possible, with suitable sensitivity for breast imaging. these results indicate that this system is suitable for animal and breast studies. future work includes energy spectra correction for each crystal in the array. for each individual crystal spectrum, the location of the photopeak will be identified. improvement of the readout using a subtractive resistive technique will improve raw image quality. small animal studies are being carried out, in order to assess the system's performance in dynamic animal studies.
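the paralyzing effect noted in the count-rate results (counts dropping beyond ~22mci) matches the standard paralyzable dead-time model of [20], where the observed rate is m = n·exp(−nτ) for true rate n and dead time τ. a sketch with an illustrative (assumed) dead time:

```python
import math

def observed_rate_paralyzable(true_rate_cps, dead_time_s):
    # paralyzable model: m = n * exp(-n * tau); the observed rate peaks
    # at n = 1/tau and then *decreases* as the true rate keeps rising
    return true_rate_cps * math.exp(-true_rate_cps * dead_time_s)

tau = 1e-5                                  # illustrative 10 us dead time
low = observed_rate_paralyzable(5e4, tau)   # below the peak
high = observed_rate_paralyzable(5e5, tau)  # far beyond 1/tau: fewer counts
```

the roll-over (high < low despite a 10x higher true rate) is exactly the behaviour seen past 22mci.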
the next step is the object's rotation, in order to acquire projection data for pinhole spect imaging. finally, breast phantoms are being imaged as an initial step before the clinical evaluation of the camera. references [1] r. pani, m. n. cinti, r. pellegrini et al., "compact large fov gamma camera for breast molecular imaging", nucl. instr. and meth. a, vol. 569, pp. 255-259, 2006 [2] c. b. hruska, m. k. o'connor, d. a. collins, "comparison of small field of view gamma camera systems for scintimammography", nucl. med. commun., vol. 26, pp. 441-445, 2005 [3] s. vecchio, n. belcari, p. bennati, "a single photon emission computer tomograph for breast cancer imaging", nucl. instr. and meth. a, vol. 581, pp. 84-87, 2007 [4] r. wojcik, s. majewski, d. steinbach, a. g. weisenberger, "high spatial resolution gamma imaging detector based on 5" diameter r3292 hamamatsu pspmt", ieee trans. nucl. sci., vol. 45, no. 3, pp. 487-491, 1998 [5] d. p. mcelroy, l. r. macdonald, f. j. beekman, y. wang, b. e. patt, j. s. iwanczyk, b. m. w. tsui, e. j. hoffman, "performance evaluation of a-spect: a high-resolution desktop pinhole spect system for imaging small animals", ieee trans. nucl. sci., vol. 49, no. 5, pp. 2139-2147, 2002 [6] r. pani, r. pellegrini, m. n. cinti et al., "imaging detector designs based on flat panel pmt", nucl. instr. and meth. a, vol. 527, pp. 54-57, 2004 [7] a. del guerra, ionizing radiation detectors for medical imaging, world scientific publishing, 2004 [8] w. moses, v. gayshan, a. gektin, "the evolution of spect - from anger to today and beyond", radiation detectors for medical applications, pp. 37-80, 2006 [9] s. siegel, r. w. silverman, y. shao, s. r. cherry, "simple charge division readouts for imaging scintillator arrays using a multi-channel pmt", ieee trans. nucl. sci., vol. 43, pp. 1634-1641, 1996 [10] v. popov, s. majewski, a. g. weisenberger, r. wojcik, "analog readout system with charge division type output", ieee nss, 2001 [11] d. p. mcelroy et al., "ultra high resolution in vivo i-125 and tc-99m small animal pinhole spect imaging", high resolution imaging in small animals: instrumentation, applications and animal handling conf., rockville, md, 2001 [12] r. pani et al., "very high resolution gamma camera based on position sensitive photomultiplier tube", physica medica, vol. 9, no. 2-3, pp. 233-236, 1993 [13] a. trueman et al., "pixellated csi(tl) arrays with position-sensitive pmt readout", nucl. instrum. methods a, vol. 353, pp. 375-378, 1994 [14] r. wojcik et al., "high spatial resolution gamma imaging detector based on 5" diameter r3292 hamamatsu pspmt", ieee trans. nucl. sci., vol. 45, pp. 487-491, 1998 [15] hamamatsu corporation, bridgewater, new jersey, usa [16] r. l. clancy, c. j. thompson, j. l. robar, a. m. bergman, "a simple technique to increase the linearity and field-of-view in position sensitive photomultiplier tubes", ieee trans. nucl. sci., vol. 44, no. 3, pp. 494-498, 1997 [17] h. barrett, w. swindell, radiological imaging: the theory of image formation, detection, and processing, vols. 1 & 2, academic press, 1981 [18] y. j. qi, "high-resolution spect for small-animal imaging", nuclear science and techniques, vol. 17, no. 3, pp. 164-169, 2006 [19] d. steinbach, s. majewski, m. williams, b. kross, a. g. weisenberger, r. wojcik, "development of a small field of view scintimammography camera based on a yap crystal array and a position sensitive pmt", ieee med. imag. conf. rec., pp. 1251-1256, 1997 [20] g. f. knoll, radiation detection and measurement, john wiley & sons, 2nd edition, 1989 authors profile stratos david received his diploma from the department of medical instruments technology, technological educational institute of athens, greece, in 2004. he received his master's degree in medical physics from the medical school of the university of patras in 2006. he received his phd from the medical school of the university of patras in 2010. he is now working in the field of small field of view detectors for spect and pet and the evaluation of detector components. maria georgiou received her diploma in electrical and computer engineering from the national technical university of athens (ntua), greece, in 2007. since 2009 she has been a phd student at the medical school of thessaly. her work is mainly focused on the development of electronics for small field of view spect and pet systems and scintimammography. eleytherios fysikopoulos received his diploma in electrical and computer engineering from the national technical university of athens (ntua), greece, in 2007. since 2009 he has been a phd student at the national technical university of athens. he is working in the field of data acquisition using compact adcs and fpgas for application in small spect and pet systems. george loudos has been an assistant professor in the department of medical instruments technology, technological educational institute of athens, since 2008. he received his diploma in electrical engineering from the national technical university of athens (ntua), greece, in 1998 and his phd in biomedical engineering in 2003 (ntua). his research interests are focused on molecular imaging using nuclear medicine techniques and medical instrumentation. more specifically, he is interested in the development of dedicated systems for small animal imaging and scintimammography, software for spect and pet, monte carlo simulations of pet/spect systems and radionuclide dosimetry.
on the application level, he is interested in the in vivo imaging of radiopharmaceuticals, radiolabelled nanoparticles, and the application of spect/pet in the study of biological processes. engineering, technology & applied science research vol. 8, no. 5, 2018, 3492-3495 3492 www.etasr.com alzahougi et al.: rsw junctions of advanced automotive sheet steel by using different … rsw junctions of advanced automotive sheet steel by using different electrode pressures abdulkarim alzahougi faculty of technology karabuk university karabuk, turkey aalzahougi@yahoo.com muhammed elitas faculty of technology karabuk university karabuk, turkey melitas@karabuk.edu.tr bilge demir faculty of engineering karabuk university karabuk, turkey bdemir@karabuk.edu.tr abstract—the effects of different welding currents and electrode pressures on the tensile shear properties of resistance spot welded (rsw) joints of commercial dp600 steel have been investigated in this study. in the tensile shear tests of the welded joints, tensile shear force and maximum displacement were utilized to characterize the performance of the welding process. the nugget diameter was measured to create a clear definition of the rsw physical properties. experimental results show that the tensile shear load bearing capacity increases as the electrode pressure increases. at a low current value this holds at both low and high electrode pressures, while at a high current value the material can exhibit superior mechanical properties. the effect of electrode pressure on the tensile shear load bearing capacity increases as the welding current increases. keywords-dp600; rsw; electrode pressure; tensile shear force; mechanical properties i. introduction the global demand for energy saving and the increasing concern for environmental pollution and global warming affect the scientific community, and relevant studies are on the rise.
the improvement of the strength, capacity and properties of materials, most importantly metals, allows the reduction of material cross sections and part weight; the resultant decrease in fuel consumption has made it possible to reduce greenhouse gas emissions. for the various functional requirements of current vehicles, advanced high strength steel is the ideal solution [1]. one of the most important and most valuable properties of advanced high strength steels is their excellent strength-ductility relationship. dp and trip steels have been developed for this purpose. dp steels usually have a tensile strength of 600-1200mpa and an elongation of 15-25% [2]. dp steels are good energy absorbers and are endowed with high tensile strength, a low yield strength to tensile strength ratio, perfect formability, and high tensile energy. these advantages make dp steels attractive for automotive applications [3]. rsw is the major joining technique utilized in automobile production and manufacturing. a common automotive body contains a broad number of rsws, between 3000 and 5000 spots [4]. during rsw, broad changes in the mechanical and metallurgical properties of the weld metal and the heat affected zone (haz) take place. the investigation of these changes is important and relevant for the safety, protection and strength of the welded joints [5]. some studies have examined the mechanical properties of rsw junctions of dp steels. the authors in [6] laid emphasis on the carbon equivalence, which increases the hardness of the weld zone and thus forms a very strong correlation between chemical composition and mechanical properties. advanced high strength steel (ahss) reaches much higher tensile strength than conventional high strength steel (hss). the authors in [7] studied and examined the welding process design and the optimization of rsw parameters. they observed that the tensile strength values revealed a weaker correlation with higher current.
authors in [8] studied the microstructure of rsw and friction stir welded (fsw) dp600 samples. the results of the rsw hss showed that the high strength-to-weight ratio and mechanical properties are ideal for automotive applications. however, it has been noticed that microstructure changes accompanying rsw affect the mechanical properties, with transformations occurring in the base metal microstructure. authors in [9] noted that the effect of welding current on the welding strength capacity lasts longer than that of welding time. authors in [10] observed the effect of the weld nugget size on the overload fracture mode of rsw samples. in [11], optimal welding parameters were selected through the utilization of the taguchi experimental design method. authors in [12] examined the transition of rsw dp780 steel from the interface fracture mode to the pull-out fracture mode in tension and in transverse stress loading situations. authors in [13] investigated the effect of holding time on the microstructure, hardness and mechanical properties of welded joints with different thicknesses. authors in [14] investigated the effects of a multistage tempering process on nugget size, microstructure, microhardness and tensile fracture behavior of rsw dp590 steel while utilizing finite element models. authors in [15] investigated welded joints of dp980 steel sheets and the effects of pulsed current on stiffness properties. there are many studies on welding, however they typically concern welding parameters like weld current and weld time in relation to the electrode pressure or force. to the best of our knowledge, no research has been conducted on rsw junctions of ahss, particularly of dp600 sheet steel, through the utilization of different electrode pressures. among advanced automotive sheet steels, dp600 has a proper place and relevance based on its properties and low cost when compared to other ahss. in this study, the rsw process was applied to dp600 at different welding currents and electrode/weld pressures. tensile shear tests were applied to the base material (bm) and to rsw samples welded at electrode pressures of 2-6bar and at 5ka and 7ka welding currents. the effects of the different electrode/weld pressures and welding currents on the tensile shear properties of the dp600 steel were investigated. ii. experimental procedure commercial dp600 steel was supplied as sheet metal of 250mm*250mm*1mm. a spectrolab model lavfa18a spectrometer (230vac, 50hz frequency, 2500va power capacity) was utilized to determine the chemical composition of the dp600 steel. the steel's chemical composition is shown in table i while its microstructure is shown in figure 1. fig. 1. microstructure view of dp600 steel
table i. chemical composition of dp600 steel (%)
element:  c      si     mn    s      cr     ni     al     ti     v      sn     fe
dp600:    0.077  0.253  1.86  0.006  0.177  0.012  0.127  0.002  0.004  0.006  97.472
the samples were designed according to the en iso 14273 standard for the rsw operation. the rsw samples were cut to dimensions of 100mm*30mm*1mm through a shearing process from the 250mm*250mm sheets. the samples were overlapped by 35mm, so the resulting length of the overlapped rsw samples was 165mm. rsw was carried out through the utilization of two different welding currents, 5ka and 7ka, and 5 different electrode pressures of 2-6bar. the ac spot welding machine was equipped with a device for the pneumatic control of the phase shift of the ac current.
before joining, the surfaces of the test pieces were cleaned, and the welding was performed using conical water-cooled cu–cr alloy electrodes. the diameter of the contact surface of the electrode was 8mm [16]. a woodworking mould was utilized to position the samples during welding, to avoid axis misalignment and to protect against spark spattering. three resistance spot welded samples were produced for each set of experiment parameters. the water-cooling system of the electrodes was kept under constant control because of the excess heat input. the rsw parameters utilized in this study are presented in table ii [17, 18]. the time unit is cycle-based (1 cycle = 0.02s). the rsw process steps and a sample photograph are shown in figures 2 and 3 respectively.
table ii. welding parameters for rsw processing
electrode pressure (bar): 2-6
welding current (ka): 5, 7
electrode tip diameter (mm): 8
down time (cycle): 15
squeeze time (cycle): 35
welding time (cycle): 20
hold time (cycle): 10
separation time (cycle): 15
fig. 2. rsw process steps fig. 3. rsw sample image
the tensile shear test parameters are the 5ka and 7ka welding currents and the 2bar to 6bar electrode pressures. the samples were tested for each of the experimental parameters and the arithmetic means of the measured values were calculated. in addition, a tensile test was applied to the base material for comparison. in addition to the tensile shear data generally utilized in the literature, reference is made to the parts of the joint on the nugget welding profile shown in figure 3; in this way, the shear data can also be considered as stress data. the rsw samples had 1mm thickness, 30mm width and 110mm gauge length. the crosshead speed utilized for the tensile shear test was 2mm/min [16]. iii.
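the cycle-based timings of table ii can be converted to seconds with the stated relation 1 cycle = 0.02s (50hz mains); a minimal sketch:

```python
# Convert the cycle-based RSW timings of Table II to seconds,
# using the stated relation 1 cycle = 0.02 s (50 Hz mains).
CYCLE_S = 0.02

timings_cycles = {
    "down time": 15,
    "squeeze time": 35,
    "welding time": 20,
    "hold time": 10,
    "separation time": 15,
}

timings_s = {name: n * CYCLE_S for name, n in timings_cycles.items()}

for name, t in timings_s.items():
    print(f"{name}: {t:.2f} s")
# e.g. welding time: 20 cycles -> 0.40 s
```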
results and discussion during the tensile shear tests, the maximum tensile shear force at which fracture occurred was evaluated to assess the tensile shear load bearing capacity of the joint [18]. the base material was also subjected to a tensile test. the effects of electrode pressure and welding current on tensile shear load bearing capacity were studied. authors in [17] reported that micro voids of large size were formed at the low electrode forces of the rsw process. it was observed that the nugget solidifies before the electrodes are moved away from the weld region. since the weld nugget was not sufficiently suppressed at relatively low electrode forces, shrinkage voids were formed because of the low stress during this time [17]. in addition, because of increased electrode progression and clamping as the electrode force increases, a significant part of the voids is closed and fewer shrinkage voids occur in the welded area. any increment of the welding current provides a capable level of current efficiency, hence the effectiveness of the welding process increases [3, 17]. accordingly, variations of the nugget diameter were investigated in the rsw joints depending on the welding parameters. the strong relationship between the increase in maximum tensile shear load bearing capability and the increase in the area subjected to the applied stress is governed by the nugget diameter [5]. the obtained results of the tensile shear test are presented in tables iii and iv for samples joined with 7ka and 5ka respectively. table iii.
tensile shear test results with 7ka welding current
electrode pressure (bar) | nugget diameter (mm) | tensile shear force (kn) | maximum displacement (mm)
2 | 7.49 | 13.1687 | 5.382
3 | 7.62 | 13.8750 | 6.877
4 | 7.89 | 13.9156 | 6.818
5 | 7.22 | 13.4875 | 7.434
6 | 7.06 | 12.2219 | 4.430
base material* | - | 19.2125 | 13.665
*ultimate tensile force of the unwelded base material: unlike the other samples, this 100mm*30mm piece was not spot-welded and was tested in order to evaluate the maximum load bearing capacity.
table iv. tensile shear test results with 5ka welding current
electrode pressure (bar) | nugget diameter (mm) | tensile shear force (kn) | maximum displacement (mm)
2 | 7.41 | 13.5594 | 7.979
3 | 7.63 | 13.6938 | 7.968
4 | 7.71 | 13.7605 | 7.136
5 | 7.91 | 13.8359 | 7.953
6 | 7.15 | 13.5609 | 8.097
the maximum tensile shear load bearing capacity of the samples increased with electrode pressure up to 4bar; beyond this pressure level it showed a strong tendency to decrease. this shows that 4bar is the optimum pressure for rsw at 7ka. at a welding current of 5ka, the maximum tensile shear load bearing capacity was reached at 5bar, decreasing afterwards (table iv). as seen in figure 4, the tensile shear force drops once the electrode pressure exceeds 4bar at 7ka and 5bar at 5ka respectively. at that stage, the electrodes sink much deeper under the excessive pressure and thus the nugget thickness is reduced [17, 19]. as the nugget thickness decreases, the lateral load-bearing area of the joint, which forms the basis of the tensile load bearing capacity, decreases with it [5]. the indentation depth ought not to exceed 30% of the sheet thickness [2, 9]. the experimental electrode indentation depth values were within standard levels.
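the optimum pressures quoted above follow directly from tables iii and iv; a minimal sketch reproducing that selection from the tabulated data:

```python
# Tensile shear force (kN) vs. electrode pressure (bar), from Tables III and IV.
force_7ka = {2: 13.1687, 3: 13.8750, 4: 13.9156, 5: 13.4875, 6: 12.2219}
force_5ka = {2: 13.5594, 3: 13.6938, 4: 13.7605, 5: 13.8359, 6: 13.5609}

def optimum_pressure(forces):
    """Electrode pressure giving the maximum tensile shear force."""
    return max(forces, key=forces.get)

print(optimum_pressure(force_7ka))  # 4 (bar) at 7 kA
print(optimum_pressure(force_5ka))  # 5 (bar) at 5 kA
```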
nugget formation depends strongly on the material type and the welding parameters. excessive sinking of the electrodes into the rsw samples causes an increment in the welding heat input because of the excessive increase of the indentation contact. excessive heat input, strong electrode contact, and electrode retention can disrupt the nugget geometry and weld profile. moreover, the pressure can cause cracks on the nugget surface and lateral surface areas. in addition, excessive heat input causes grain growth in the haz and weld metal and causes the strength to decrease [3, 5]. studies on advanced strength steels show a strong relationship between nugget size and tensile shear strength [3, 20]. the mean diameter of the nugget was measured as being up to approximately 7mm in [21-24], and this value can be used for dp600 steel [4]. these results can be explained by the 5√t rule (t: material thickness), which was recommended as the most appropriate rule according to the japanese [25] and german [26] standards. after examination of the results in tables iii and iv, it was seen that the strength of the dp600 steel increased in direct proportion to the nugget diameter up to an optimum value and decreased thereafter [18]. the maximum tensile shear force values obtained at these different welding currents depended on the electrode pressure (figure 4). the 7ka-4bar rsw values were near to ideal. the maximum tensile shear load bearing capacity occurred at 5bar electrode pressure in the 5ka samples. the effect of the electrode pressure on the tensile shear force is greater in the 7ka samples. the graph of the 5ka samples is almost horizontal. this shows that the effect and the sensitivity of the electrode pressure were lower at lower current values. because of this, it can be asserted that as the current increases, the effect of the pressure increases. as the electrode pressure increases, the required welding current can decrease. fig. 4.
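as a rough check, the measured nugget diameters can be compared against the minimum implied by the √t-type sizing rule mentioned above; the coefficient 5 (with t in mm) is an assumption here, since the exact coefficient varies between standards:

```python
import math

# Minimum recommended nugget diameter per a 5*sqrt(t) sizing rule
# (coefficient assumed; t = sheet thickness in mm).
def min_nugget_diameter(t_mm, k=5.0):
    return k * math.sqrt(t_mm)

# Measured nugget diameters (mm) at 7 kA, from Table III.
measured = [7.49, 7.62, 7.89, 7.22, 7.06]

d_min = min_nugget_diameter(1.0)  # 1 mm DP600 sheet -> 5.0 mm
print(all(d >= d_min for d in measured))  # all nuggets exceed the minimum
```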
maximum tensile shear force values obtained. the highest previously reported tensile shear force values were from 12.5kn to 16.67kn [17], 13.69kn [21], 11.67kn [18], 3.125-9.75kn [2], and 10-14.7kn [27]. the greatest tensile shear load bearing capacity was reported in [17] and the lowest in [2]. some of the differences observed between the maximum tensile shear force values can be attributed to differences in the chemical composition of the steels used, the rsw parameters, and the sheet thickness. the tensile shear load bearing capacity obtained in our study can be classified as being close to the high values. iv. conclusions the rsw process was applied to the commercial automotive sheet steel dp600 at different welding currents and electrode/weld pressures. tensile shear tests were applied to the base material and to the rsw samples at electrode pressures of 2bar to 6bar at 5ka and 7ka welding currents. the effects on the tensile shear properties of the dp600 steel were studied and the following results were obtained: • at lower current values, the effect and sensitivity of the electrode pressure were observed to be smaller. as the current increased, the effect of the pressure increased. as the electrode pressure increased, the required welding current could also be reduced. • welding parameters and nugget diameter are important factors that determine the load-bearing capacity of the overloaded rsw junction samples. • the welding current efficiency increased up to 4bar at 7ka welding current and up to 5bar at 5ka welding current. as the welding current increased, the heat input, nugget diameter, tensile shear force, and maximum tensile shear load bearing capacity increased. • the tensile shear load bearing capacity was directly proportional to the nugget diameter.
• the tensile strength of the base material decreased after rsw. acknowledgment this work was supported by the scientific research projects coordination unit of karabuk university (karabuk, turkey), project number: kbübap-17-kp-463. references [1] m. a. erden, “the effect of the sintering temperature and addition of niobium and vanadium on the microstructure and mechanical properties of micro alloyed pm steels”, metals, vol. 7, no. 9, pp. 1-16, 2017 [2] s. m. mule, s. u. ghunage, b. b. ahuja, “process characterization of resistance spot welding of dual phase stainless (dp600) steel”, 6th international & 27th all india manufacturing technology, design and research conference, maharashtra, india, december 16-18, 2016 [3] m. elitas, b. demir, o. yazici, “the effects of the electrode pressure on tensile strength and fracture modes of the rsw junctions of dp600 sheet steel”, 2nd international conference on material science and technology in cappadocia, nevsehir, turkey, october 11-13, 2017 [4] h. fujimoto, h. ueda, r. ueji, h. fujii, “improvement of fatigue properties of resistance spot welded joints in high strength steel sheets by shot blast processing”, isij international, vol. 56, no. 7, pp. 1276-1284, 2016 [5] f. hayat, b. demir, m. acarer, s. aslanlar, “effect of weld time and weld current on the mechanical properties of resistance spot welded if (din en 10130–1999) steel”, kovove materialy, vol. 47, no. 1, pp. 11-17, 2009 [6] m. l. kuntz, e. biro, y. zhou, “microstructure and mechanical properties of resistance spot welded advanced high strength steels”, materials transactions, vol. 49, no. 7, pp. 1629-1637, 2008 [7] m. pradeep, n. s. mahesh, r. hussain, “process parameter optimization in resistance spot welding of dissimilar thickness materials”, international journal of mechanical, aerospace, industrial and mechatronics engineering, vol. 8, no. 1, pp. 80-83, 2014 [8] m. i. khan, m. l. kuntz, p. su, a. gerlich, t. north, y.
zhou, “resistance spot welding (rsw) and friction stir welding (fsw) of dp600: a comparative study”, science and technology of welding and joining, vol. 12, no. 2, pp. 175-181, 2007 [9] x. q. zhang, g. l. chen, y. s. zhang, “characteristics of electrode wear in resistance spot welding dual-phase steels”, materials & design, vol. 29, no. 1, pp. 279-283, 2008 [10] m. pouranvari, h. r. asgari, s. m. mosavizadch, p. h. marashi, m. goodarzi, “effect of weld nugget size on overload failure mode of resistance spot welds”, science and technology of welding and joining, vol. 12, no. 3, pp. 217-225, 2007 [11] v. k. prashanthkumar, n. venkataram, n. s. mahesh, kumarswami, “process parameter selection for resistance spot welding through thermal analysis of 2mm crca sheets”, procedia materials science, vol. 5, pp. 369-378, 2014 [12] m. pouranvari, s. p. h. marashi, h. l. jaber, “dp780 dual-phase steel spot welds: critical fusion-zone size ensuring the pull-out failure mode”, materials and technology, vol. 49, no. 4, pp. 579-585, 2015 [13] h. long, y. hu, x. jin, j. shao, h. zhu, “effect of holding time on microstructure and mechanical properties of resistance spot welds between low carbon steel and advanced high strength steel”, computational materials science, vol. 117, pp. 556-563, 2016 [14] b. wang, l. hua, x. wang, j. li, “effects of electrode tip morphology on resistance spot welding quality of dp590 dual phase steel”, the international journal of advanced manufacturing technology, vol. 83, no. 9-12, pp. 1917-1926, 2016 [15] c. sawanishi, t. ogura, k. taniguchi, r. ikeda, k. oi, k. yasuda, a. hirose, “mechanical properties and microstructures of resistance spot welded dp980 steel joints using pulsed current pattern”, science and technology of welding and joining, vol. 19, no. 1, pp. 52-59, 2014 [16] f. hayat, i. 
sevim, “the effect of welding parameters on fracture toughness of resistance spot-welded galvanized dp600 automotive steel sheets”, the international journal of advanced manufacturing technology, vol. 58, no. 9-12, pp. 1043-1050, 2012 [17] c. ma, d. l. chen, s. d. bhole, g. boudreau, a. lee, e. biro, “microstructure and fracture characteristics of spot-welded dp600 steel”, materials science and engineering: a, vol. 485, no. 1-2, pp. 334-346, 2008 [18] m. i. khan, m. l. kuntz, e. biro, y. zhou, “microstructure and mechanical properties of resistance spot welded advanced high strength steels”, materials transactions, vol. 49, no. 7, pp.1629-1637, 2008 [19] t. k. pal, k. bhowmick, “resistance spot welding characteristics and high cycle fatigue behaviour of dp780 steel sheet”, journal of materials engineering and performance, vol. 21, no. 2, pp. 280-285, 2012 [20] y. kaya, n. kahraman, “the effects of electrode force, welding current and welding time on the resistance spot weldability of pure titanium”, the international journal of advanced manufacturing technology, vol. 60, no. 1-4, pp. 127-134, 2012 [21] x. long, s. k. khanna, “fatigue properties and failure characterization of spot welded high strength steel sheet”, international journal of fatigue, vol. 29, no. 5, pp. 879-886, 2007 [22] h. zhang, a. wei, x. qiu, j. chen, “microstructure and mechanical properties of resistance spot welded dissimilar thickness dp780/dp600 dual-phase steel joints”, materials & design, vol. 54, pp. 443-449, 2014 [23] s. daneshpour, m. kocak, s. riekehr, c. h. j. gerritsen, “mechanical characterization and fatigue performance of laser and resistance spot welds”, welding in the world, vol. 53, no. 9-10, pp. 221-228, 2009 [24] m. elitas, b. demir, “the effects of the welding parameters on tensile properties of rsw junctions of dp1000 sheet steel”, engineering, technology & applied science research, vol. 8, no. 4, pp. 3116-3120, 2018 [25] jis z3140. 
method of inspection for spot welds, japanese industrial standard, 1989 [26] dvs 2923, resistance spot welding, german standard [27] d. zhao, y. wang, d. liang, p. zhang, “an investigation into weld defects of spot-welded dual-phase steel”, the international journal of advanced manufacturing technology, vol. 92, no. 5-8, pp. 3043-3050, 2017 engineering, technology & applied science research vol. 9, no. 5, 2019, 4627-4630 4627 www.etasr.com duc: experimental water quality analysis from the use of high sulfuric fly ash as base course … experimental water quality analysis from the use of high sulfuric fly ash as base course material for road building nguyen viet duc faculty of civil engineering thuyloi university hanoi, vietnam ducnv@tlu.edu.vn abstract—water quality directly influences human life, and drinking water contamination can result in severe health problems. this paper deals with the analysis of water specimens from the submergence of material containing high sulfuric fly ash used as base course material for road building. the specimens were obtained from a real road test. results showed that for the material that used fly ash and chemical admixture, the water quality was suitable for drinking in accordance with the standard parameters prescribed by the vietnam ministry of health, while for the material that used the same fly ash without chemical admixture, the total arsenic content was eight times higher than that of the former. thus, if one desires to utilize fly ash with high sulfur as base course material for road building, it needs to be used in combination with an appropriate chemical admixture, so that it does not affect ground water quality. keywords-fly ash with high sulfur; base course material; real road testing; water quality analysis; chemical admixture i. introduction electricity consumption is growing nowadays, especially in developing countries like vietnam, along with economic transformation [1].
according to recent consumption forecasts, the vietnam electricity system will need more than 500gwh by the year 2030 [2-5]. the latest master plan for the power system of the country revealed that half of that figure will be supplied by coal-fired thermal power plants [6]. along with that, millions of tons of fly ash from these plants will be dumped to the environment. apart from ordinary fly ash, there will also be a vast amount of fly ash with high sulfuric content, which can be used for concrete or cement if it is properly treated [7-10]. in order to improve the environmental acceptability and reduce the construction cost of the deep mixing method, the replacement of ordinary portland cement by supplementary cementing materials such as ordinary fly ash has recently been introduced into road building [11-16]. when mixed with lime and water, fly ash forms a compound similar to cement. when used with cement, fly ash improves strength and durability, particularly where the locally available soil is poor [17, 18]. fly ash with high sulfur has been attempted to be used as base course material for road building [19-21]. this type of fly ash improves the physical and mechanical properties of soil similarly to ordinary fly ash. however, when it rains, water falling on the road diffuses into the soil along with other materials used for road building, which might be detrimental to ground water quality and ultimately affect public health [22-24]. the aim of this paper is to evaluate the water that was used for submerging the specimens up to a saturated condition. the specimens were obtained from a road which was built using fly ash with high sulfur as base course material. the water quality was evaluated in accordance with the standards prescribed by the vietnam ministry of health [25]. ii. water sample collection in this study, a real road building test was performed.
a one-kilometer rural road in huu lung ward, lang son province, vietnam was taken into consideration. the road was divided into four segments of 250 meters each, which served for testing four different materials, as shown in table i. the first material was local soil with 5% cement and 5% fine crushed stone. the second and third ones were mixes with 9% and 5% high sulfur fly ash respectively, plus 5% cement. in the third mix, chemical admixture was also used in an amount equal to 5% of the cement weight. fly ash with high sulfur was not added to the fourth mix, which used only 5% cement and admixture.
table i. materials used for the road building test (proportions by weight of local soil)
component | material 1 | material 2 | material 3 | material 4
local soil | 1 | 1 | 1 | 1
cement pc40 | 0.05 | 0.05 | 0.05 | 0.05
fine crushed stone | 0.05 | - | - | -
fly ash with high sulfur | - | 0.09 | 0.05 | -
chemical admixture | - | - | 0.0025 | 0.0025
all road building tests were carried out with equipment provided by vietraco jsc, such as a sakai® stabilizer machine, rollers, rammers, etc. at first, the optimum moisture content of the local soil was determined, which dictated the amount of water that had to be added to the soil. for the first road segment (material 1), after the placement of fine crushed stone, cement was dispersed in a thin layer, as can be seen in figure 1. for the remaining road segments (materials 2-4), cement and/or fly ash with high sulfur were placed above the local soil. after the placing of the raw materials (fine crushed stone, cement, and/or fly ash), the stabilizer machine came in and blended the local soil and raw materials together. chemical admixture and water were added during the mixing process (figure 2). (corresponding author: nguyen viet duc, thuyloi university, 175 tay son, dong da, hanoi, vietnam)
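the admixture dosage in table i can be cross-checked against the stated rule (admixture = 5% of the cement weight, cement = 5% of the soil weight); a minimal sketch:

```python
# Mix proportions from Table I, expressed as fractions of local soil weight.
cement = 0.05                 # cement PC40: 5% of soil weight
admixture = 0.05 * cement     # stated as 5% of the cement weight

print(round(admixture, 6))    # 0.0025, matching the Table I entry
```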
finally, a rammer and a roller came in to compact and vibrate the composite material until its surface became plane and dense. fig. 1. cement and/or fly ash placement on the road before mixing with the local soil underneath. at 28 days, specimen extraction was carried out from all the road segments with a hand-held coring machine. the specimens from those segments are shown in figure 3. after extraction from the field, all specimens were moved to the laboratory for further study. the specimens were submerged in distilled water for 72 hours until reaching a saturated condition, as can be seen in figure 4. after that, water sample collection was conducted, as shown in figure 5. the water quality was estimated following the directives of [25] (table ii). fig. 2. on-site equipment during the test. fig. 3. sample extraction from the four road segments: a) material 1, b) material 2, c) material 3, d) material 4. fig. 4. specimen submergence in distilled water. fig. 5. water sample collection used for analysis.
table ii. standard values of potable water [25]
parameter | units | standard value
ph indicator | - | 6.5-8.5
ammonium content | mg/l | ≤ 3
total arsenic content | mg/l | ≤ 0.01
chloride content | mg/l | ≤ 300
total iron content | mg/l | ≤ 0.3
lead content | mg/l | ≤ 0.01
total mercury content | mg/l | ≤ 0.001
total manganese content | mg/l | ≤ 0.3
nitrite content | mg/l | ≤ 3
e coli and/or coliform bacteria | - | no bacteria / 100ml
iii. results and discussion the results of the water quality analysis are presented in table iii. it can be seen that the ph indicator of all samples, including the ones with fly ash (materials 2-3), complies with the standard potable water values in table ii. regarding ammonium content, a similar outcome is observed.
the total arsenic content of all samples is within the standard values except that of material 2. the chloride content of the water of material 2 is the highest, however all values are within the standard limits. the other parameters, such as total iron content, lead content, total mercury content, total manganese content and nitrite content, are in compliance with the standard values for all samples. eventually, no e coli and/or coliform bacteria were found in the water of any material.
table iii. water quality analysis results
parameter | units | material 1 | material 2 | material 3 | material 4
ph indicator | - | 7.93 | 7.88 | 7.94 | 8.27
ammonium content | mg/l | 2.99 | 2.79 | 2.59 | 2.47
total arsenic content | mg/l | 0.006 | 0.08 | 0.009 | 0.009
chloride content | mg/l | 5.68 | 6.75 | 5.68 | 6.04
total iron content | mg/l | 0.262 | 0.21 | 0.08 | 0.08
lead content | mg/l | 0.009 | 0.009 | 0.008 | 0.001
total mercury content | mg/l | 0.001 | 0.001 | 0.001 | 0.001
total manganese content | mg/l | 0.16 | 0.074 | 0.0016 | 0.0043
nitrite content | mg/l | 0.545 | 0.593 | 0.591 | 0.583
e coli and/or coliform bacteria | - | no | no | no | no
it is noteworthy that, because the total arsenic content in the water of material 2 is about eight times higher than that of the rest and exceeds the standard limit, this material, which used fly ash with high sulfur without admixture, should not be applied for road building, because arsenic is very harmful [22]. on the other hand, material 3 also involves fly ash with high sulfur, but the difference here is the use of a proper chemical admixture. according to the supplier, this admixture is able to disperse around heavy metals in the specimen and prevent them from being emitted to the water. iv. conclusion although fly ash with high sulfur can improve the physical and mechanical properties of road building soil, the environmental issues related to the use of this industrial waste must be examined. water from the specimen submergence of the material that used fly ash with high sulfur for road building was analyzed in this study.
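the pass/fail screening of the measurements against the limits of table ii can be sketched as follows (upper-bound parameters only; the ph range and bacteria presence are checked separately in the paper):

```python
# Screen measured water parameters (Table III) against the upper-bound
# limits of Table II (all in mg/l).
limits = {
    "ammonium": 3.0, "arsenic": 0.01, "chloride": 300.0, "iron": 0.3,
    "lead": 0.01, "mercury": 0.001, "manganese": 0.3, "nitrite": 3.0,
}

material_2 = {
    "ammonium": 2.79, "arsenic": 0.08, "chloride": 6.75, "iron": 0.21,
    "lead": 0.009, "mercury": 0.001, "manganese": 0.074, "nitrite": 0.593,
}
material_3 = {
    "ammonium": 2.59, "arsenic": 0.009, "chloride": 5.68, "iron": 0.08,
    "lead": 0.008, "mercury": 0.001, "manganese": 0.0016, "nitrite": 0.591,
}

def violations(measured):
    """Parameters whose measured value exceeds the standard limit."""
    return [p for p, v in measured.items() if v > limits[p]]

print(violations(material_2))  # ['arsenic'] -> exceeds the 0.01 mg/l limit
print(violations(material_3))  # [] -> compliant
```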
the main conclusions of this study are: • water derived from the specimen submergence of all materials in this study complies with the quality parameters prescribed by the vietnam ministry of health, except for the water sampled from the material that used fly ash with high sulfur without chemical admixture. • for the material that used the same fly ash together with chemical admixture, the water quality was suitable for drinking. the admixture seemed to prevent metallic substances, which are very harmful for human health, from being emitted to the water during submergence. • thus, if one desires to utilize high sulfur fly ash for road building, it is recommended to be used in combination with the appropriate chemical admixture, so that it does not affect ground water quality. acknowledgment the author would like to express his gratitude to mr. dao minh, ceo of vietraco jsc, for his help and contribution to the present paper. references [1] http://data.worldbank.org/country/vietnam [2] general statistics office of vietnam, master investigation on population and households, general statistics office of vietnam, 2009 [3] general statistics office of vietnam, master investigation on population and households, general statistics office of vietnam, 2014 [4] general statistics office of vietnam, united nations fund for population activities, vietnam population projection 2014-2049, vietnam news agency publishing house, 2016 [5] v. h. m. nguyen, c. v. vo, k. t. p. nguyen, b. t. t. phan, “forecast on 2030 vietnam electricity consumption”, engineering, technology & applied science research, vol. 8, no. 3, pp. 2869-2874, 2018 [6] institute of energy, evn, revised power master plan no. vii, ministry of industry and trade, 2015 [7] l. d. luong, d. v. nguyen, h. t. luu, h. v. le, t. m. nguyen, “study on fluidized bed combustion fly ash with high sulfur from cao ngan coal-fired thermal power plant for production of construction materials”, vietnamese journal of science and technology, vol. 1-2, no. 6, pp. 8-16, 2010 [8] c. s.
shon, d. saylak, s. mishra, “evaluation of manufactured fluidized bed combustion ash aggregate as road base course materials”, world of coal ash conference, denver, usa, may 9-12, 2011 [9] d. gazdic, m. fridrichova, k. kulisek, l. vehovska, “the potential use of the fbc ash for the preparation of blended cements”, procedia engineering, vol. 180, pp. 1298-1305, 2017 [10] x. m. xie, l. guo, “study on preparation and properties of fly ash concrete with high sulfur and high-calcium fly ash”, 2nd ieee international conference on information management and engineering, chengdu, china, april 16-18, 2010 [11] h. afrin, “a review on different types of soil stabilization techniques”, international journal of transportation engineering and technology, vol. 3, no. 2, pp. 19-24, 2017 [12] b. r. phanikumar, s. s. radhey, “effect of fly ash on engineering properties of expansive soil”, journal of geotechnical and geoenvironmental engineering, vol. 130, no. 7, pp. 764-767, 2004 [13] a. hilmi, m. aysen, “analyses and design of a stabilized fly ash as pavement base material”, fuel, vol. 85, no. 16, pp. 2359-2370, 2006 [14] s. karthik, e. k. ashok, p. gowtham, g. elango, d. gokul, s. thangaraj, “soil stabilization by using fly ash”, journal of mechanical and civil engineering, vol. 10, no. 6, pp. 20-26, 2014 [15] j. vestin, m. arm, d. nordmark, a. lagerkvist, p. hallgren, b. lind, “fly ash as a road construction material”, wascon 2012 conference proceedings, iscowa and sgi, 2012 [16] a. mouratidis, “stabilization of pavements with fly-ash”, conference on use of industrial byproducts in road construction, thessaloniki, greece, 2004 [17] n. s. pandian, k. c. krishna, b. leelavathamma, “effect of fly ash on the cbr behavior of soils”, indian geotechnical conference, allahabad, india, 2002 [18] m. r.
Engineering, Technology & Applied Science Research, Vol. 8, No. 3, 2018, pp. 3054-3059, www.etasr.com

Fuzzy Sliding Mode Control of DC-DC Boost Converter

Zeynep Bala Duranay, EEE Department, Firat University, Technology Faculty, Elazig, Turkey, zbduranay@firat.edu.tr
Hanifi Guldemir, EEE Department, Firat University, Technology Faculty, Elazig, Turkey, hguldemir@firat.edu.tr
Servet Tuncer, EEE Department, Firat University, Technology Faculty, Elazig, Turkey, stuncer@firat.edu.tr

Abstract-A sliding mode fuzzy control method which combines sliding mode and fuzzy logic control for a DC-DC boost converter is designed to achieve robustness and better performance. A fuzzy sliding mode controller is used to control the inductor current, with a sliding surface whose current reference is obtained from the output of the outer voltage loop. A linear PI controller is used for the outer voltage loop. The control system is simulated using MATLAB/Simulink, and simulation results under input voltage and load variations are presented to show the effectiveness of the control system.

Keywords-boost converter; DC-DC converter; FLC; fuzzy logic; sliding mode

I. Introduction

Voltage-mode control and current-mode control are the two methods used to control DC-DC converters [1]. Voltage-mode control is robust to disturbances, but slow. Current-mode control has a fast transient response but is more complex than voltage-mode control. Classical PI and hysteretic controllers are the most used controllers for DC-DC converters. The linearized converter model around an operating point, obtained from the state-space average model [2], is used for conventional linear controllers. The classical controllers are simple to implement, but the effect of variation of system parameters cannot be avoided, due to the dependence of the linearized model parameters on the converter's operating point [3]. The controller for DC-DC converters must therefore account for the nonlinearity and parameter variations.
It should maintain stability and provide a fast response in any operating condition. Classical control methods for DC-DC converters are not very effective in achieving the desired performance [4, 5]. A nonlinear control technique derived from variable structure control theory, developed in [6], is called sliding mode control (SMC); it has the advantages of simple implementation, robustness, and fast transient response [7]. SMC is used to keep the output voltage of the converter independent of parameter, input, and load variations [8]. It provides system dynamics that are invariant to uncertainties when the system is controlled in the sliding mode [9]. SMC, however, suffers from chattering: undesirable oscillations of finite amplitude and frequency caused by unmodeled dynamics or discrete-time implementation [10]. Some methods, such as equivalent control and the boundary layer approach, are used to reduce the chattering. Equivalent control based methods cannot fully remove chattering because of their finite number of output values. The boundary layer approach has a problem of reaching the sliding mode, due to the replacement of the discontinuous control action with a continuous saturation function [10]. Fuzzy sliding mode control (FSMC) is another method used to avoid the chattering problem [11]. Fuzzy logic control is a nonconventional and robust control technique which is suitable for nonlinear systems characterized by parametric fluctuations or uncertainties [12, 13]. Unlike SMC, FSMC has the advantage of not being directly tied to the mathematical model of the controlled system. FSMC combines fuzzy logic and SMC to control the DC-DC converter and achieve better performance: the fuzzy system is used to estimate the upper bound of the uncertain disturbances in order to reduce the chattering. A fuzzy logic controller (FLC) offers an increased level of efficiency for nonlinear converters.
In this method, the control action is generated by linguistic rules which do not require an accurate mathematical model of the system; hence the complexity of the nonlinear model is decreased [14, 15]. FLC overcomes the deficiency resulting from using linearized small-signal models and improves the dynamic behavior.

II. DC-DC Boost Converter Mathematical Model

The output voltage of a boost-type DC-DC converter is higher than the input source voltage. This is achieved by periodically opening and closing the switching element in the converter circuit. The DC-DC boost converter is shown in Figure 1. The switching period is T: the switch is kept closed for time DT and open for time (1-D)T. The analysis is done by examining the voltage across and the current through the capacitor for both intervals, when the switch is open and when it is closed. Continuous conduction mode (CCM) is assumed, in which the inductor current always has a nonzero value.

Fig. 1. Boost converter.

When the switch is closed, the diode in the circuit is reverse biased and acts as an open circuit, as shown in Figure 2(a). Then the voltage across the inductor and the current through the capacitor are:

L di/dt = Vs    (1)
C dv/dt = -v/R    (2)

If the switch changes to the off position, as shown in Figure 2(b), the inductor current cannot change suddenly, so the diode becomes forward biased, providing a path for the inductor current. Then the voltage across the inductor and the current through the capacitor become:

L di/dt = Vs - v    (3)
C dv/dt = i - v/R    (4)

Fig. 2. Boost converter operation: (a) switch closed, (b) switch open.

Rearranging (1)-(4) and representing them as state equations, taking the voltage across the capacitor and the current through the inductor as state variables, and combining (1) with (3) and (2) with (4) using the control input u, which takes the values 0 and 1 to represent the switch position (the switch is closed when u=1 and open when u=0), we have:

L di/dt = Vs - (1-u)v    (5)
C dv/dt = (1-u)i - v/R    (6)

Taking x1 = i and x2 = v, the state equations become:

dx1/dt = (Vs - (1-u)x2)/L    (7)
dx2/dt = ((1-u)x1 - x2/R)/C    (8)

III. Sliding Mode Control

The SMC theory uses a high-speed switching strategy to force the system trajectory to move to and stay on a path in the state space called the sliding surface. The regime of the system trajectory before reaching the sliding surface is called the reaching mode, and the regime of the control system on the sliding surface is known as the sliding mode. In the sliding mode, the system response remains insensitive to parameter variations and disturbances. Assuming we have the state equation in state-space form:

dx/dt = f(x, t, u)    (9)

where x is the state vector, u is the control input, and f is the state function vector. If the function vector f is discontinuous on a surface s(x) = 0, then:

f(x, t, u) = f+(x, t, u+) for s(x) > 0, and f(x, t, u) = f-(x, t, u-) for s(x) < 0    (10)

The aim is to find a control action u such that the state vector x tracks a desired trajectory x* even in the presence of model uncertainties and disturbances; the tracking error is:

e = x - x*    (11)

The required control input is given by (12):

u = 0 for s > 0, u = 1 for s < 0    (12)

Since the aim is to force the system states to reach the sliding surface and slide towards the origin, the control strategy should ensure stability, and the following inequality must be fulfilled [16]:

s ds/dt <= -η|s|    (13)

where η is a positive constant that guarantees the system trajectories hit the sliding surface in finite time [17].

IV. Sliding Mode Controller Design

The sliding mode controller design procedure and the necessary equations governing the controller were derived and presented in [18, 19], where the performance of the sliding mode controller was also presented against input voltage and load variations. The main objective in sliding mode control is to force the error to reach the switching surface and stay on that surface [20]. Using the state equations for the boost converter given in (3) and (4), the steady-state output voltage should be the desired voltage v*, that is:

v = v*, i = i*    (14)
dv/dt = di/dt = 0    (15)

The sliding function is formed by the state variable error, defined as:

s = i - i*    (16)

where the reference value i* is obtained from the output of the linear PI voltage controller as:

i* = kp(v* - v) + ki ∫(v* - v) dt    (17)

In order to force the system states to the sliding line s = 0, the control signal is chosen as:

u = (1/2)(1 - sign(s))    (18)

To guarantee that the state trajectory reaches the sliding line and slides over it, the reaching law condition:

s ds/dt < 0    (19)

should be satisfied. Since the state variables are constant and coincide with the reference values at the steady state:

di*/dt = 0    (20)

Solving the inequality given in (19) by replacing (17), we get the existence condition of the sliding mode:

v > Vs    (21)

Thus the output voltage should be higher than the source voltage for the sliding mode to exist.
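The state equations and the switching law above can be exercised in a short numerical experiment. The following is an illustrative sketch, not the authors' Simulink model: a forward-Euler integration of (7)-(8) with the switch driven by the sign of the sliding function s = i - i*. The proportional outer-loop gain, the current-reference clamp, and the 1 µs step size are assumptions made for the sketch; the component values match the simulation section (Vs = 20 V, L = 4 mH, C read as 1200 µF, R = 200 Ω).

```python
# Illustrative sketch (not the authors' code): boost converter state
# equations (7)-(8) integrated with forward Euler, with the switch u
# driven by the sliding function s = i - i_ref (current-mode SMC).
Vs, L, C, R = 20.0, 4e-3, 1200e-6, 200.0   # Table II values (C assumed in uF)
dt = 1e-6                                   # 1 us integration step (assumed)
v_ref = 30.0                                # desired output voltage

i, v = 0.0, 0.0                             # x1 = inductor current, x2 = output voltage
for _ in range(500_000):                    # 0.5 s of simulated time
    i_ref = 2.0 * (v_ref - v)               # crude P-type outer voltage loop (assumed gain)
    i_ref = max(0.0, min(i_ref, 10.0))      # clamp the reference current (assumed limit)
    s = i - i_ref                           # sliding function (16)
    u = 0 if s > 0 else 1                   # switching law (18): u = (1 - sign(s)) / 2
    di = (Vs - (1 - u) * v) / L             # equation (7)
    dv = ((1 - u) * i - v / R) / C          # equation (8)
    i, v = i + di * dt, v + dv * dt

print(round(v, 1))   # the output voltage settles near the 30 V reference
```

Note how the existence condition (21) shows up in the simulation: while v < Vs the inductor current cannot be forced down even with u = 0, and sliding motion only begins once the output voltage exceeds the source voltage.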
V. Fuzzy Logic Control

The most important feature of fuzzy logic is that it uses linguistic variables rather than numerical values [14]. Linguistic variables are sentences in a natural language, such as big and less, and are represented by fuzzy sets. Fuzzy logic is closer to human thinking and natural language: it converts a set of linguistic control rules, based on expert knowledge, into an automatic control strategy. FLC is used with systems whose processes are too complex for analysis by conventional techniques [21]. The fuzzy logic controller for a boost converter consists of fuzzification, inference, and defuzzification, as shown in Figure 3.

Fig. 3. Fuzzy logic controlled boost converter.

Input data are converted into linguistic values by the use of membership functions in the fuzzifier. Using the knowledge in the rule base and the linguistic variable definitions, the fuzzy rules are evaluated and the controller action is obtained. Defuzzification is then used to convert the fuzzy results into a control action. The error e(k), which is the difference between the reference and the measured current values, is used as input to the fuzzy controller. The output of the fuzzy controller is the change in duty cycle Δd, which is used to obtain the duty cycle of the PWM signal for the switch in the converter.

A Mamdani-type fuzzy architecture is used, in which max-min inference and the center of gravity method are used in the inference engine and defuzzification. Minimum and maximum functions are used to describe the AND and OR operators respectively in the control rules. The sum-product composition method is used to change the qualitative action into a quantitative action, and the weighted average of the centroids of all output membership functions is used to obtain the crisp output.

The design of a fuzzy logic controller starts with the definition of the membership functions for the inputs. The input resolution increases with the increasing number of fuzzy levels. The triangular membership function, due to its simplicity, is chosen for the controller inputs; the degree of membership for a given input is determined in the fuzzifier interface. The control rules for the designed fuzzy controller are determined from the DC-DC boost converter behavior. The input and output membership functions are given in Figures 4 and 5. The values of the input membership functions are determined from the maximum current flowing through the inductor, and the values of the output membership functions are determined by considering the duty ratio interval, which is between 0 and 1.

Fig. 4. Input membership functions.
Fig. 5. Output membership functions.

The inputs of the fuzzy controller are the error (e) and the change of error (ce). The fuzzy control rules are obtained based on the following criteria:
1. If the error between the reference and measured current is big, meaning that the output of the converter is far from the reference point, then Δd should be big, to take the output to the reference value quickly.
2. If the output of the converter is near the desired reference, a small change in duty cycle is needed.
3. If the desired reference is achieved and the output is steady, there should be no change in the duty cycle.

The fuzzy rules are given in Table I, in which e is the error and ce the change of error. A surface view of the fuzzy rules, which are the combinations of the two inputs as a function of the output, is presented in Figure 6.

Table I. SMF rule table (e: error, ce: change of error).
ce \ e:  N    Z    P
N:       NB   NM   Z
Z:       NM   Z    PM
P:       Z    PM   PB

Fig. 6. Surface view of the fuzzy rules.

Fuzzy logic based controller design procedures have been presented for DC-DC [22], buck [16], boost [23], and buck-boost converters [24]. The performance of the fuzzy controller was also presented against input voltage and load variations, and it was found that the fuzzy controlled system is highly reliable and robust to changes in circuit parameters and external disturbances [16].

VI. Fuzzy Sliding Mode Controller

Fuzzy logic is a robust technique that has been successfully applied to stability analysis and controller design for uncertain nonlinear systems [25-28]. It is a model-free technique based on heuristic methods: it provides an approach for collecting human knowledge and dealing with nonlinearities or uncertainties, and it is used for modeling and control of uncertain systems that cannot be easily controlled by conventional techniques. Fuzzy controller design is based on expert knowledge of the system besides the mathematical model. Here, sliding mode and fuzzy logic control are combined to control the DC-DC boost converter, to achieve better performance and to improve robustness. Lyapunov stability criteria ensure the stability of the controlled system. The fuzzy system is used to reduce the chattering of the converter by estimating the bound of the uncertain disturbances, which enhances the system robustness [21]. The sliding surface, obtained from the difference between the measured current and the reference inductor current (which is the output of a PI controller acting on the error between the reference and measured output voltages), is used as the input to the fuzzy controller. The fuzzy control input is determined by a set of fuzzy rules expressed as conditional statements. The duty cycle is obtained by adding the scaled fuzzy output s1 Δd[k] to the previous sampling period's duty cycle d[k-1]:

d[k] = d[k-1] + s1 Δd[k]    (22)

This represents a discrete-time integration of the fuzzy controller output; integrating the fuzzy controller's output reduces the steady-state error. The duty cycle calculation is given in Figure 7.

Fig. 7. Duty cycle calculation.
The pulse width modulated control signal u is obtained by comparing the signal d with a triangular carrier signal. The Simulink representation of the designed overall FSM controller is given in Figure 8.

Fig. 8. FSM controller.

VII. Simulations

The aim is to implement a robust controller with a good dynamic performance even under input voltage variations and load changes. The system should have an invariant dynamic performance under different operating conditions. A cascaded PI control and FSM control approach is implemented to improve both the disturbance rejection and the tracking performance. The output voltage loop controller is a linear PI-type controller. Since the dynamics of the current are much faster than those of the output voltage, a fuzzy sliding mode controller is used in the inner current loop. The output voltage of the DC-DC boost converter is controlled by the duty ratio of the switch used in the converter circuit, and the FSM controller is used to obtain the desired output voltage. Figure 9 shows the block diagram of the FSM controlled boost converter. The performance of the FSM controlled DC-DC boost converter is monitored with different reference voltages, under input voltage variations, and under load variations. A boost converter with the parameters given in Table II is used for the simulations.

Fig. 9. Simulink block of the FSM controlled boost converter.

Table II. Parameters of the boost converter.
Vs (V) = 20, L (mH) = 4, C (µF) = 1200, R (Ω) = 200

The performance of the FSM controlled boost converter is first tested for a reference voltage change. The output voltage and current waveforms are shown in Figure 10 for a step change in the desired reference voltage from 30 V to 40 V at time t=0.5 s. Load variation is then applied to the FSM controlled boost converter to test its robustness: Figure 11 shows the voltage and current waveforms when the load resistance is changed from R=200 Ω to R=100 Ω at time t=0.5 s. The 30 V output voltage is maintained during the load change. A test for input voltage changes is also made to see the effects of input voltage variations on the output voltage. A step change in input voltage is applied when the converter is at steady state with a 30 V output voltage. The performance of the controlled system is shown in Figures 12 and 13 when a change in input voltage from 20 V to 15 V and from 20 V to 25 V occurs at t=0.5 s.
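The carrier-comparison step described at the start of this section (the duty command d compared against a triangular carrier to produce the gate signal) can be sketched as follows. The symmetric carrier shape and the 10 kHz switching period are assumptions for the illustration, not values from the paper.

```python
# Illustrative sketch: PWM gate signal obtained by comparing the duty
# command d with a symmetric triangular carrier (assumed shape/frequency).

def carrier(t, period):
    """Symmetric triangular carrier sweeping 0 -> 1 -> 0 over one period."""
    phase = (t % period) / period
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

def gate(d, t, period=1e-4):
    """Switch is on while the duty command exceeds the carrier (10 kHz assumed)."""
    return 1 if d > carrier(t, period) else 0

# Sampling one period with d = 0.6: the on-time fraction approximates d
period = 1e-4
samples = [gate(0.6, k * period / 100) for k in range(100)]
duty_measured = sum(samples) / len(samples)
```

Sampling the gate signal over one carrier period recovers an on-time fraction close to the commanded duty, which is the property the modulator relies on.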
In Figure 12 a decrease and in Figure 13 an increase in input voltage occurs at t=0.5 s. Figures 10-13 prove the robustness of the FSM control against changes in the load and input voltage; the recovery capability of the FSM controlled boost converter can be clearly seen.

Fig. 10. Output voltage and output and input current waveforms for a step change in reference voltage.
Fig. 11. Output voltage and output and input current waveforms for step load variations.
Fig. 12. Output voltage and output and input current waveforms when the input voltage is decreased from 20 V to 15 V.
Fig. 13. Output voltage and output and input current waveforms when the input voltage is increased from 20 V to 25 V.

VIII. Conclusions

A sliding mode fuzzy controller was designed and simulated for a DC-DC boost converter to improve performance and achieve robustness. The error obtained from the load current and the reference current, which forms the sliding surface, and its derivative are used as inputs to the fuzzy controller, which controls the duty ratio of the signal driving the switch in the converter circuit. The MATLAB/Simulink programming environment was used for the simulations. The obtained results show that the controlled system is robust against load and input voltage variations, and a good dynamic performance is also achieved.

References
[1] L. H. Dixon, "Average current-mode control of switching power supplies", Unitrode Power Supply Design Seminar Handbook, Unitrode Corporation, 1990
[2] A. J. Forsyth, S. V. Mollov, "Modelling and control of DC-DC converters", Power Engineering Journal, Vol. 12, No. 5, pp. 229-236, 1998
[3] P. Mattavelli, L. Rossetto, G. Spiazzi, "Small-signal analysis of DC-DC converters with sliding mode control", IEEE Transactions on Power Electronics, Vol. 12, No. 1, pp. 96-102, 1997
[4] C. K. Tse, K. M. Adams, "Quasi-linear analysis and control of DC-DC converters", IEEE Transactions on Power Electronics, Vol. 7, No. 2, pp. 315-323, 1992
[5] M. Ahmed, M. Kuisma, K.
Tolsa, P. Silventoinen, "Standard procedure for modelling the basic three converters (buck, boost, and buck-boost) with PID algorithm applied", 13th International Symposium on Electrical Apparatus and Technologies, Plovdiv, Bulgaria, May 2003
[6] V. I. Utkin, "Sliding mode control design principles and applications to electric drives", IEEE Transactions on Industrial Electronics, Vol. 40, No. 1, pp. 23-36, 1993
[7] O. Kaynak, F. Harashima, "Disturbance rejection by means of sliding mode", IEEE Transactions on Industrial Electronics, Vol. 32, No. 1, pp. 85-87, 1985
[8] H. Guldemir, "Sliding mode speed control for DC drive systems", Mathematical and Computational Applications, Vol. 8, No. 3, pp. 337-384, 2003
[9] J. J. E. Slotine, W. Li, Applied Nonlinear Control, Prentice Hall, 1991
[10] Y. M. Alsmadi, V. Utkin, M. A. Haj-Ahmed, L. Xu, "Sliding mode control of power converters: DC/DC converters", International Journal of Control, pp. 1-22, 2017
[11] M. B. Ghalia, A. T. Alouani, "Sliding mode control synthesis using fuzzy logic", American Control Conference, Seattle, WA, USA, Vol. 2, pp. 1528-1532, June 21-23, 1995
[12] A. Kandel, G. Langholz, Fuzzy Control Systems, CRC Press, 1993
[13] K. M. Passino, S. Yurkovich, Fuzzy Control, Addison Wesley, 1998
[14] P. Mattavelli, L. Rossetto, G. Spiazzi, P. Tenti, "General purpose fuzzy controller for DC/DC converters", IEEE Transactions on Power Electronics, Vol. 12, No. 1, pp. 79-86, 1997
[15] A. I. Al-Odienat, A. A. Al-Lawama, "The advantages of PID fuzzy controllers over the conventional types", American Journal of Applied Sciences, Vol. 5, No. 6, pp. 653-658, 2008
[16] T. Govindaraj, R. Rasila, "Development of fuzzy logic controller for DC-DC buck converters", International Journal of Engineering Techsci, Vol. 2, No. 2, pp. 192-198, 2010
[17] L. Guo, J. Y. Hung, R. M. Nelms, "Evaluation of DSP-based PID and fuzzy controllers for DC-DC converters", IEEE Transactions on Industrial Electronics, Vol. 56, No. 6, pp.
2237-2248, 2009
[18] H. Guldemir, "Sliding mode control of DC-DC boost converter", Journal of Applied Sciences, Vol. 5, No. 3, pp. 588-592, 2005
[19] H. Guldemir, "Study of sliding mode control of DC-DC buck converter", Energy and Power Engineering, Vol. 3, No. 4, pp. 401-406, 2011
[20] Z. H. Akpolat, H. Guldemir, "Trajectory following sliding mode control of induction motors", Electrical Engineering, Vol. 85, No. 4, pp. 205-209, 2003
[21] W. C. So, C. K. Tse, Y. S. Lee, "Development of a fuzzy logic controller for DC-DC converters: design, computer simulation, and experimental evaluation", IEEE Transactions on Power Electronics, Vol. 11, No. 1, pp. 24-32, 1996
[22] C. P. Ugale, R. B. Dhumale, V. V. Dixit, "DC-DC converter using fuzzy logic controller", International Research Journal of Engineering and Technology, Vol. 2, No. 4, pp. 593-596, 2015
[23] A. Z. Ahmad Firdaus, M. Normahira, K. N. Syahirah, J. Sakinah, "Design and simulation of fuzzy logic controller for boost converter in renewable energy application", IEEE International Conference on Control System, Computing and Engineering, Mindeb, Malaysia, November 29-December 1, 2013
[24] M. E. Sahin, H. I. Okumus, "Fuzzy logic controlled buck-boost DC-DC converter for solar energy-battery system", International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, June 15-18, 2011
[25] A. Sabanovic, "Variable structure systems with sliding modes in motion control - a survey", IEEE Transactions on Industrial Informatics, Vol. 7, No. 2, pp. 212-223, 2011
[26] S. E. Beid, S. Doubabi, "DSP-based implementation of fuzzy output tracking control for a boost converter", IEEE Transactions on Industrial Electronics, Vol. 61, No. 1, pp. 196-209, 2014
[27] G.
Feng, "A survey on analysis and design of model-based fuzzy control systems", IEEE Transactions on Fuzzy Systems, Vol. 14, No. 5, pp. 676-697, 2006
[28] T. Gupta, R. R. Boudreaux, R. M. Nelms, J. Y. Hung, "Implementation of a fuzzy controller for DC-DC converters using an inexpensive 8-b microcontroller", IEEE Transactions on Industrial Electronics, Vol. 44, No. 5, pp. 661-669, 1997

Engineering, Technology & Applied Science Research, Vol. 10, No. 4, 2020, pp. 6116-6125, www.etasr.com

FEATHER: A Proposed Lightweight Protocol for Mobile Cloud Computing Security

Ahmed Alamer, Department of Computer Science and Information Technology, School of Engineering and Mathematical Sciences, La Trobe University, Australia, and Department of Mathematics, Tabuk University, Saudi Arabia, a.alamer@latrobe.edu.au
Ben Soh, Department of Computer Science and Information Technology, School of Engineering and Mathematical Sciences, La Trobe University, Victoria, Australia, b.soh@latrobe.edu.au

Abstract-Ensuring security for lightweight cryptosystems in mobile cloud computing is challenging. Encryption speed and battery consumption must be maintained while securing mobile devices, the server, and the communication channel. This study proposes a lightweight security protocol called FEATHER, which implements MICKEY 2.0 to generate the keystream in the cloud server and to perform mobile device decryption and encryption. FEATHER can be used to implement secure parameters and lightweight mechanisms for communication among mobile devices and between them and a cloud server. FEATHER is faster than the existing CLOAK protocol, consumes less battery power, allows more mobile devices to communicate at the same time during very short time periods, and maintains security for more applications with minimal computation ability.
FEATHER meets the mobile cloud computing requirements of speed, identity and confidentiality assurances, compatibility with mobile devices, and effective communication between cloud servers and mobile devices over an unsafe communication channel.

Keywords-mobile cloud computing; lightweight encryption; battery consumption; offloading tasks; MICKEY 2.0

I. Introduction

Data transfer between two mobile devices, and from a mobile device to the cloud, should be done securely through multiple different communication channels, such as Wi-Fi, 4G, and 5G. A secure protocol for data transfer through insecure communication methods is therefore required. As mobile devices have limited computation power, it can be difficult to address all security cryptosystem tasks on the device. The authors in [1] proposed a mobile cloud computing enterprise that consists of mobile devices, a wireless core, Wi-Fi access points, and regional information centres. In addition to the limited computational capabilities of mobile devices, battery consumption due to heavy computations adds another challenge. The authors in [2] showed that mobile computing can save energy, such as battery life and wireless energy, by offloading some tasks to a cloud server, which is used to transfer the data in some applications; however, some applications are not energy efficient. To meet the security challenges, as well as the demand for a lighter security protocol that saves time and addresses computation power, device hardware limitations, and battery consumption, the research questions to answer are:
• How can MICKEY 2.0 be implemented efficiently to secure communication between mobile devices in mobile cloud computing?
• How can the performance of a new security protocol be evaluated against the existing protocols?
• How can the claim that the proposed protocol is immune from attack be justified?

The aims of this research are:
• To implement MICKEY 2.0 efficiently to secure communication between mobile devices in mobile cloud computing.
• To evaluate the performance of the new security protocol against the existing protocols.
• To provide a clear justification that the new security protocol is immune from possible attacks.

This paper proposes a new protocol, FEATHER, to meet the security and energy needs of mobile cloud computing and mobile devices better than existing protocols.

II. Background

A. Mobile Computing

Mobile cloud computing serves important applications, such as mobile learning, mobile commerce, mobile gaming, e-health applications, and web searching [3], and is growing at a fast rate, with 4.78 billion mobile devices globally predicted by the end of 2020 [4]. With many devices connected to each other via large networks, there is a vulnerability to attacks that requires the use of reliable security protocols. Encryption systems suitable for these devices on insecure communication channels are needed. A secure communication protocol must meet the following requirements: speed, identity protection, confidentiality, compatibility with mobile devices, and effective communication between cloud servers and mobile devices through a communication channel that is not safe. Many cryptosystems meet the demand for private and secure transfer of confidential information; however, some require a large computation capability. The Advanced Encryption Standard (AES) is widely used because it is a very strong and secure cryptosystem [5]. However, it is a "heavy" system that requires large computational resources, has high power consumption, and is therefore not suitable for mobile devices with limited computation capacity.
Corresponding author: Ahmed Alamer
Some researchers have introduced lightweight versions of AES for small devices, such as ALE [6], to reduce the demand on resources, such as central processing units (CPUs) and memory, used to generate the keystream. Some components in cloud computing, such as embedded systems with 32-bit, 16-bit, and 8-bit microcontrollers, cannot meet real-time demands with conventional methods of cryptography [7]. Therefore, AES is a poor solution for the many embedded devices in cloud computing that have low computation ability.
B. Cloud Computing
For a lighter encryption method in cloud computing, lightweight stream ciphers can be implemented to provide the required security. Lightweight stream ciphers include a decryption function and an encryption function that handle messages of arbitrary length. Thus, they are better suited than block ciphers, such as AES, that only handle inputs of a fixed length. Due to these functionalities, they are well adapted to low-bandwidth or noisy communications and thus are appropriate for cloud computing. Speed, memory, number of CPUs, and cost efficiency are also important factors [8]. In [9], a MICKEY 2.0 variant, MICKEY 2.0.85, was proposed as the preferred choice over other lightweight stream ciphers. It is lighter and has lower energy consumption, which means it is more cost efficient [10]. However, the protocol can be adapted to implement other lightweight ciphers, such as Trivium or Grain. Al-Omari [11] proposed a lightweight block cipher-based encryption mechanism and tested a faster algorithm by comparing it to an AES cipher in terms of speed. Ali et al. [12] focused on a cloud-based file distribution and management model, and showed that the ability of cloud computing to adapt is important for users, and not only in terms of data storage. The study also addressed the problem of offloading tasks to the server by using multiple servers and demonstrated how this method provides more security when sharing data. Hassan et al.
[13] discussed cloud computing applications using machine learning approaches as a useful direction for predicting loading using statistical analysis, as well as for ensuring service-level agreements.

III. LITERATURE REVIEW

Bahrami and Singhal [14] studied the adequacy of using AES in mobile cloud computing and explained that, due to cost, cryptosystems such as AES are not suitable for mobile devices, because mobile devices have limited resources, such as limited power, low-speed processors, and tiny RAM capacity. AES is not an appropriate encryption technique when offloading and downloading are done for every single transferred file. They introduced lightweight methods, such as pseudorandom permutation, based on chaos systems. Another solution is using lightweight security methods that provide a balance between energy efficiency and security. A lightweight security technique can rely on an easy operation (i.e. a permutation) instead of the complicated and expensive operations used in secret key or public key encryptions [15-17].
A. The Advantage of Using Stream Ciphers in Small Devices
A stream cipher is a symmetric cryptosystem that uses the same key for encryption and decryption. Stream ciphers can transform data faster than other ciphers, such as block ciphers, and also faster than ciphers in an asymmetric cryptosystem [18, 19]. Stream ciphers are less secure than other symmetric cryptosystems such as block ciphers, of which AES is one of the most secure. The encryption process in AES involves permutations and a substitution process and requires a number of rounds, which increases the power and storage requirements. On the other hand, lightweight stream ciphers such as MICKEY 2.0, Trivium, and Grain [20] need much less power and memory. Widely used lightweight stream ciphers for small applications include E0 (Bluetooth), RC4 (web), and the A5 family (GSM) [21].
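The reason stream ciphers are so cheap is that encryption and decryption are the same XOR operation against a generated keystream. A minimal sketch follows; a toy SHA-256 counter-mode generator stands in for a real lightweight cipher such as MICKEY 2.0, and all names are illustrative rather than the paper's implementation:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream generator (SHA-256 in counter mode) standing in
    # for a real lightweight stream cipher such as MICKEY 2.0.
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    # Stream-cipher encryption and decryption are the same XOR operation.
    return bytes(d ^ k for d, k in zip(data, ks))

key = b"shared-secret-key"
plaintext = b"offload heavy work to the cloud"
ks = keystream(key, len(plaintext))
ciphertext = xor_bytes(plaintext, ks)
recovered = xor_bytes(ciphertext, ks)
```

Because the transform is byte-by-byte XOR, messages of arbitrary length are handled with no padding or rounds, which is what makes this class of cipher attractive for constrained devices.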
Stream ciphers have advantages due to their high throughput and low computational complexity. Lightweight stream ciphers [22] are a better choice than block ciphers because they need less memory and less hardware complexity.
B. Using Lightweight Stream Ciphers in Cloud Computing and Mobile Cloud Computing
Lightweight stream ciphers have several advantages for cloud computing. They provide fast encryption by generating a secure keystream faster than other popular ciphers, such as AES. They need fewer computation facilities, such as CPUs and memory from the cloud, which reduces cost and power consumption significantly. Additional advantages include faster encryption, lower battery power consumption, and lower bandwidth requirements.
C. AES and Cloak Protocols
Cloak is a lightweight protocol based on the AES cipher that enables two mobile devices to communicate while leaving the keystream generation to an external server [23]. As Cloak can get the keystream from either trusted or untrusted external servers, the main security concern is to protect the keystream. Security can be compromised when fetching the keystream from an external server and over the communication media. Lightweight stream ciphers that can be used in mobiles include Trivium [24], Grain [25], and MICKEY 2.0 [26]. The MICKEY 2.0 cipher is more resistant to statistical attacks [27-29] and can produce large throughput. The lightweight protocol developed in this study does not rely on the server being secure, and so will not be compromised as the Cloak protocol can be, since Cloak assumes the security of the server is the responsibility of the server provider [23]. Using MICKEY 2.0 in this lightweight protocol to provide a secure keystream is significantly faster than using AES. For example, the time needed by the server to generate the keystream is reduced, which in turn reduces the time to transfer the data between the server and the mobile. Adithya et al.
[30] introduced another enhancement of Cloak security, which is compared to the proposed FEATHER protocol in this study in Figure 3 and discussed in Section VIII.

IV. THE LIGHTWEIGHT PROTOCOL FEATHER

A. Overview
The study designed a MICKEY 2.0 cipher-based protocol called FEATHER to strengthen confidentiality and protection during messaging between mobile devices, as well as communication between devices and the cloud server (see Figure 1). The MICKEY 2.0 cipher produces a secure keystream on the external server to reduce the reliance on mobile devices, which have limited computing power and memory. The role of the mobile devices is only encryption and decryption, which limits the computation mobile devices must perform and reduces the amount of energy consumed by the device battery.
Fig. 1. Mobile cloud basic communication.
A lightweight secure protocol is introduced for communication between devices and the external server over the cloud, together with application designs on mobile devices for the processes of verification, encryption, and decryption. The proposed protocol is faster and can move larger files than the Cloak cipher [23], while maintaining a high level of security. The protocol was designed to achieve security through the application of the MICKEY 2.0 cipher with additional protection systems for identity verification, such as hash functions, time stamps, and out-of-band passwords. A lightweight stream cipher is needed to generate the keystream faster and use fewer resources, so that more secure applications can take advantage of advances in mobile cloud computing. If keystream generation on the server is faster, more mobiles will be able to get keystreams from the cloud compared to a heavy encryption system like AES. Thus, it will be more efficient and will reduce cost.
Using MICKEY 2.0 meets most of these needs.
B. Design Principles
There are ten design principles for a lightweight protocol:
1) Avoid implementing a heavy encryption method
As some popular encryption algorithms, such as AES, require considerable resources in terms of CPU time and/or memory usage, the protocol should offload the more computing-intensive steps to a server in the cloud while simplifying the steps carried out on the mobile device. Therefore, a lightweight protocol can offload generation and storage of the keystream to a server using the MICKEY 2.0 algorithm.
2) Avoid relying entirely on the server
It is important to avoid relying entirely on the server to ensure communication security. Then, even if an adversary compromises the server, it cannot easily use the captured keystream data to decrypt messages directly. Although the client receives a keystream from the server, the client does not use it directly. Instead, the client selects a few random values, using primitive polynomials, to apply the keystream to the plaintext and compute the encrypted data.
3) Send messages between client and server over the Internet
The protocol must assume that an adversary may intercept messages or an impostor may try to insert invalid messages into the client-server communication. One popular approach is to use a key-exchange algorithm, such as Diffie-Hellman (which is vulnerable to a man-in-the-middle attack), or the more sophisticated station-to-station protocol [31], which avoids this vulnerability. However, the significant computation of these approaches may not be appropriate for simple mobile or microcontroller devices. The protocol therefore needs to assume the ability to send brief out-of-band messages using a different communication medium. For example, if the protocol is implemented on top of the HTTP protocol, a secret out-of-band message may be sent by email or SMS.
In this protocol, an out-of-band message is sent from the server to the client to convey a one-time-pad, and from one client to another to convey a file token and the secret values (using primitive polynomials) used to step through the keystream.
4) Focus authentication on unique security parameters
For authentication, this protocol uses a "bring something, know something" technique. The protocol assumes that each mobile device (or microcontroller device) has a universally unique identifier (UUID). It also allows each user to select a username that is not necessarily unique. These are combined using a hash function to generate a unique identifier (UID) for each user. At the initiation of the protocol, each user registers its UID and then communicates an encrypted copy of its secret password to the server. For subsequent communication, all messages between the client and the server are validated using a digital signature based on hashing the message and the secret password. In this case, the "bring something" refers to the device and its UUID and the "know something" refers to the user's secret password. Since an adversary does not know the secret password, it cannot generate a valid signature, so the client and server can reject messages with invalid signatures.
5) Secure the communication between the client and the cloud server
Client-server communication security relies on a shared keystream. This shared keystream is first generated by the server when the client sends a message to register the user. In its response to the client, the server sends the shared keystream, encrypted with the one-time-pad, to prevent an adversary from capturing the keystream.
6) Offload the keystream generation to the cloud server
The server implementation may use any reasonable technique for keystream generation. In practice, a method is needed that is computationally efficient and still provides a reasonable level of security. To generate a new keystream for each user, the server must first create an initial key (or key+IV pair).
7) Ensure client requests for the keystream from the cloud are authenticated
When the client submits a request to generate a new keystream, it includes a token and expiry time. There are two possible implementations. The server may simply generate and store a key, and then generate the actual keystream "on the fly" whenever requested. Alternatively, the server may generate the keystream right away and store it as a file to be retrieved later when the client submits the corresponding token. The expiry time allows the client to limit the time the keystream is stored on the server. This reduces the availability of the keystream if an adversary tries to compromise the server.
8) Ensure there are possible and flexible variations for secure data transfer
To enhance the security of the protocol, the server never has access to the unencrypted data. The data are encrypted by the client, using a modified version of the keystream, and this modification is unknown to the server. When transferring encrypted data from one client to another, there are three main options available.
• In one variation, since the data are securely encrypted, the file can be uploaded to any simple file server. This may provide an increased level of security, since it introduces a separation between the keystream server and the file server. In fact, clients would be free to use a variety of different file servers to transfer encrypted data files, as long as these are communicated between the sender and receiver.
• In a simpler implementation, the clients can upload or download the encrypted data to or from the server, identified by a unique token which can be pseudo-randomly generated. Any other client can download the encrypted file, asynchronously, once it receives the appropriate token from the first client. Some efficiency can be gained if the file upload and download are implemented on the keystream server, since the same protocol mechanism can be used to download a keystream (given a token) or to download encrypted data (given a token). In fact, once a keystream is generated and stored as a file, the keys used to generate the keystream could be deleted, reducing the vulnerability of the protocol.
• In the third option, the encrypted data could also be transferred directly and synchronously from one client to another. This approach could make sense when a pair of clients wants to send and receive a number of smaller messages, as in a secure chat session. This can be accomplished by first generating and downloading a keystream and then sending encrypted messages back and forth without requiring an intermediate file server.
9) Modify the keystream to further enhance security
For efficiency, the client uses a keystream generated by the remote server, but for security, the keystream is modified in a way unknown to the server. In particular, the client randomly selects a few parameters that describe a particular pseudorandom permutation of keystream values. By sharing these secret permutation parameters with the other client through an out-of-band communication, the other client will be able to decrypt the encrypted file.
10) Ensure data in the cloud server are tied to an expiry time
The security of the protocol is enhanced by reducing how long information is retained before being deleted. The keystream and the encrypted files have an associated expiry time, after which the server deletes them. This reduces the information that is exposed if the server is compromised.
C.
Algorithmic Demonstration of the FEATHER Protocol
1) Channels
1. Insecure channel, e.g. Internet HTTP
2. Out-of-band channel, e.g. SMS
2) Algorithm 1: mobile device
Step 1: Register mobile with server
i. Pick a unique username
ii. Create UID: hash(username, device id)
iii. Get the timestamp t
iv. Send register action (via channel 1) with payload [mobile phone number, UID, t]
v. Wait for response
vi. If OK status received, go to Step 2. Otherwise (error status received), go to Step 1 (ii).
Step 2: Update password with server
i. Wait for one-time-pad, OTP
ii. Provide a password
iii. Create hashed password, pass: hash(password, UID)
iv. Create an encryption d: XOR(pass, OTP)
v. Get the timestamp t
vi. Create the payload, x: hash(UID, d, t)
vii. Send update action with payload
viii. Wait for response
ix. If OK status received, go to Step 3. Otherwise (error status received), go to Step 2 (ii).
Step 3: Validate password with server, if the first time
i. Send validate action with payload x
ii. Go to Step 4.
Step 4: Generate keystream from server
i. Provide a unique 32-byte token
ii. Send generate action with no payload
iii. Wait for keystream response, with byte size n
iv. Go to Step 5.
Step 5: Share keystream with another mobile
i. Specify expiry time, e
ii. Get the timestamp t
iii. Create a unique token: hash(UID, e, t)
iv. Create an encryption f: XOR(token, keystream)
v. Create the payload, x: hash(message, pass, UID, e, t, f, n)
vi. Send payload x (via channel 2)
vii. Go to Step 6.
Step 6: Upload to server
i. Provide a 32-byte file-id
ii. Create a file: hash(UID, file-id, e, t)
iii. Create an encryption f: XOR(file, keystream)
iv. Create an encryption d: XOR(file-contents, token, keystream)
v. Send upload action with payload [UID, f, d].
Step 7: Request from server
i. If requesting a token-keystream, create an encryption f: XOR(token, keystream). Otherwise, create an encryption f: XOR(file-id, keystream)
ii. Get the timestamp t
iii. Create the payload x: hash(pass, UID, f, t)
iv. Send request action with payload x.
3) Algorithm 2: server
Step 1: Wait for registration request from mobile
i. Receive registration action with UID
ii. Get the timestamp t
iii. If no account with the UID exists:
a. Create a new account
b. Respond with [OK status, t]
c. Send one-time-pad (via channel 2)
d. Go to Step 2
Otherwise (an account with the UID exists):
a. Respond with [error status, error code, t]
b. Go to Step 1.
Step 2: Wait for update request from mobile
i. Receive update action with encrypted payload, d, and signature
ii. Recompute the signature
iii. Decrypt the hashed password, pass
iv. Validate the message
v. If the message is valid:
a. Respond with [OK status]
b. Go to Step 3.
Otherwise:
a. Respond with [error status]
b. Go to Step 2
Step 3: If a validate request is received (only the first time)
i. Go to Step 2 (ii). Otherwise, go to Step 4.
Step 4: Wait for generate request from mobile
i. If no payload, generate keystream:
a. Generate random MICKEY 2.0 keystream
b. Respond with the key
c. Go to Step 4.
Otherwise (payload received):
a. Create hashed payload: hash(payload, keystream)
b. Store hashed payload
c. Go to Step 5.
Step 5: Wait for upload request from mobile
i. Receive upload action with encrypted file payload
ii. Store the file
iii. Respond with [OK status]. Otherwise, if something goes wrong, respond with [error status].
Step 6: Wait for 'request' request from mobile
i. Receive request action with encrypted token or file-id
ii. Look up the token and create the requested data d: XOR(token-keystream, keystream). Otherwise d: XOR(file-contents, keystream)
iii. Get the timestamp t
iv. Create the payload x: hash(pass, d, t, OK status)
v. Respond with payload x

V.
PROTOCOL IMPLEMENTATION

The FEATHER communication protocol enables mobile devices with limited computational resources to share encrypted files with the help of an external server that has greater computing, storage, and bandwidth resources. The protocol uses two communication channels. The first channel is assumed to be insecure, such as the Internet using HTTP to transport messages between the mobile devices and the external server. The second channel, carrying "out-of-band" messages, is assumed to be secure and could be implemented using SMS messages to mobile devices, or possibly email. The first channel allows mobile devices to initiate six actions by sending a message to the external server and receiving a response. The second, out-of-band channel is used to send and receive three kinds of secret information:
• A one-time-pad (which could be replaced by a more secure parameter).
• A file id.
• A token id (and some additional parameters).
The protocol also uses a cryptographic hash function, such as SHA-256, which outputs a 32-byte hash value. For distinct pairs of strings s and t, h(s) ≠ h(t) (with very high probability). Messages in the protocol are simply concatenated key = value pairs of parameters. Each of the 11 possible parameters is identified by a unique character:
a = action
s = status
c = code (error code)
u = uid
p = phone
f = token or file
d = data
n = number
e = expire
t = timestamp
x = signature
The timestamp is Unix time in seconds, and can help prevent "replay attacks". The cryptographic signature is a hash of the entire message string (before the signature is added) and is used to authenticate messages. The six actions and messages are: register, update, validate, generate, upload, and request.
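The key = value message format and its hash-based signature can be sketched as follows. The helper names, the separator, and the exact layout are assumptions for illustration; the paper does not specify the wire encoding:

```python
import hashlib

def sign_message(params: dict, hashed_password: str) -> str:
    # Build a concatenated key=value message and append a signature x,
    # the hash of the whole message plus the user's hashed password.
    body = "".join(f"{k}={v};" for k, v in params.items())
    sig = hashlib.sha256((body + hashed_password).encode()).hexdigest()
    return body + f"x={sig}"

def verify_message(message: str, hashed_password: str) -> bool:
    # Split off the trailing signature and recompute it over the body.
    body, _, sig = message.rpartition("x=")
    expected = hashlib.sha256((body + hashed_password).encode()).hexdigest()
    return sig == expected

msg = sign_message({"a": "register", "u": "uid123", "t": "1596240000"}, "pass-hash")
```

Since the signature depends on the secret hashed password, a receiver that shares it can validate the message, while an adversary who tampers with any parameter (or lacks the password) produces a signature mismatch.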
Figure 2 illustrates the secure communication between the basic components: server, mobile devices, and communication channel.
Fig. 2. FEATHER protocol message communications between the mobile devices and the cloud server.
A. Register
The person using the mobile device app provides a username (e.g. "jason"). The device hardware is also assumed to have a unique hardware identifier (e.g. device id). The mobile app combines these strings using a hash function to get a unique id that can be sent to the external server without revealing any private information:
uid = h(device id, username); a 32-byte value.
The mobile device also has a telephone number at which it can receive an out-of-band message via SMS. The person registers an account on the external server by sending a message:
a = register
u = uid
p = phone
t = timestamp
When the external server receives this message, if no account exists for that uid, a new account is created, and this message is sent back:
s = ok
t = timestamp
If an account already exists for that uid, the server responds:
s = error
c = code (indicating the type of error)
t = timestamp
In that case, the person needs to pick a new username to create a different uid.
One-time-pad via SMS: Following a successful register message, the external server sends a one-time-pad to the mobile device via an out-of-band channel, using SMS to the phone number provided. The person would need to cut-and-paste this string into the mobile device app to be stored.
B. Update
In the mobile device app, the person also provides a password (e.g. "mysecret"), which provides a type of "bring something, know something" security (bring something = mobile device, know something = username, password). The user's simple password is combined with the uid to create a "hashed password", which will be sent to the external server:
pass = h(uid, password)
The hashed password is encrypted using XOR with the secret one-time-pad.
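The derivation of the uid and hashed password, and the one-time-pad encryption just described, can be sketched as follows (SHA-256 is used as the 32-byte hash; the device identifier, credentials, and toy OTP are assumptions for illustration):

```python
import hashlib

def h(*parts: str) -> str:
    # Stand-in for the protocol's 32-byte hash function (e.g. SHA-256).
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

device_id = "3f2a-device"   # hypothetical hardware identifier
username = "jason"
password = "mysecret"

uid = h(device_id, username)    # sent to the server; reveals nothing private
hashed_pass = h(uid, password)  # "pass" in the protocol; never sent in the clear

# The hashed password is XOR-encrypted with the one-time-pad before upload.
otp = bytes(range(32))          # toy OTP; the real one arrives out-of-band via SMS
pass_bytes = bytes.fromhex(hashed_pass)
d = bytes(a ^ b for a, b in zip(pass_bytes, otp))

# The server, knowing the OTP it sent, recovers the hashed password.
recovered = bytes(a ^ b for a, b in zip(d, otp))
```

Note that only hashes and XOR operations run on the device, which is consistent with the design goal of keeping the client-side computation light.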
The entire message (before the signature) is hashed to create a cryptographic signature for authentication:
a = update
u = uid
d = xor(pass, otp (one-time-pad))
t = time
x = h(message)
The external server confirms the validity of the message by recomputing the signature, and then decrypts and stores the hashed password in the account. The response is either ok or error.
C. Validate
This message is optional but useful for debugging purposes when implementing this protocol for the first time. The mobile device sends the following message asking the external server to confirm that the hashed password and signature are valid:
a = validate
u = uid
d = xor(pass, otp)
t = time
x = h(message, pass)
The external server decodes the hashed password, recomputes the signature, and responds with either ok or error.
D. Generate
The mobile device provides a unique 32-byte token and asks the external server to generate a new encryption key, which will be used to generate a keystream of "number" bytes to be stored until a given "expire" time. The unique token is created by hashing the uid, expire, and timestamp:
token = h(uid, expire, timestamp)
The token is XOR-encrypted with the shared keystream. The message sent to the server has these parameters:
a = generate
u = uid
f = xor(token, shared-keystream)
n = number (of bytes in the keystream)
e = expire
t = timestamp
x = h(message, pass)
The external server generates a random MICKEY 2.0 key (20 bytes of key+IV). There are two implementation-dependent choices:
• The server can simply store the 20-byte key in association with the token and generate the keystream on the fly when requested, or
• The server can generate and store the keystream and then discard the 20-byte key.
With this option, the token becomes equivalent to a file-id, and the keystream becomes equivalent to the file contents.
E. Upload
The mobile device asks the external server to store a file by providing a 32-byte file-id, the encrypted contents of the file, and an expiration time, after which the file will be deleted. The unique file-id is created by hashing the uid, filename, expire, and timestamp:
file = h(uid, filename, expire, timestamp)
The file-id is XOR-encrypted with the shared keystream. The mobile device sends a message with these parameters:
a = upload
u = uid
f = xor(file, shared-keystream)
d = xor(file-contents, token-keystream)
The external server stores the file and responds with ok, or else error if something goes wrong.
F. Request
A mobile device can request a token-keystream or encrypted file contents by providing the appropriate 32-byte token or file-id. The message has these parameters:
a = request
u = uid
f = xor(token, shared-keystream) or f = xor(file, shared-keystream)
t = timestamp
x = h(message, pass)
The external server uses the token (or file-id) to look up the requested data and sends it back to the mobile device:
s = ok
d = xor(token-keystream, shared-keystream) or d = xor(file-contents, shared-keystream)
t = timestamp
x = h(message, pass)
The protocol assumes the first mobile device (the sender) is able to communicate the "token" and "file" to the second mobile device (the receiver) through a secure out-of-band channel, here assumed to be an SMS message. It is important that the communication remains secure even if the external server is compromised by an adversary. Therefore, the token-keystream is not used directly to encrypt the file contents, since someone with access to the server could then easily decrypt the file. Instead, the first mobile device must pick several random numbers r1, r2, r3, ... that are used to walk through the bytes of the token-keystream in a deterministic but difficult-to-predict order.
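A minimal sketch of this keystream walk, assuming an LCG-style index sequence driven by the client-chosen secrets r1 and r2 (the helper name and parameter values are illustrative, not the paper's implementation):

```python
import hashlib

def keystream_walk(token_keystream: bytes, r1: int, r2: int, length: int) -> bytes:
    # Walk the token-keystream in a deterministic but hard-to-predict order
    # using client-chosen secrets r1, r2 (shared out-of-band with the receiver).
    n = len(token_keystream)
    out = bytearray()
    index = r1 % n
    for _ in range(length):
        out.append(token_keystream[index])
        index = (r2 * index + r1) % n
    return bytes(out)

# Toy keystream standing in for the server-generated MICKEY 2.0 output.
ks = hashlib.sha256(b"token-keystream-seed").digest()
modified = keystream_walk(ks, r1=7, r2=11, length=16)

plaintext = b"secret file data"
cipher = bytes(p ^ k for p, k in zip(plaintext, modified))
# The receiver, knowing r1 and r2, rebuilds the same modified keystream.
decrypted = bytes(c ^ k for c, k in zip(cipher, keystream_walk(ks, 7, 11, 16)))
```

The point of the walk is that a server-side adversary holding the raw token-keystream still cannot decrypt the file without r1 and r2, which travel only over the out-of-band channel.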
These sets of random numbers must also be communicated to the second mobile device through a secure out-of-band channel. For example, for a token-keystream whose length n = 2^k - 1 is a prime number, the index of the next byte to be used could be calculated as:
index(i) = r1 mod n
index(i+1) = (r2 * index(i) + r1) mod n
The mobile app was designed in Android Studio and then transferred as a file to be converted into a local mobile app. The code was written in Java on the Android Studio platform, which works on all major operating systems (i.e. Windows, macOS and Linux).

VI. RESULTS AND ANALYSIS

The performance of the FEATHER protocol was measured on two items: the overall speed and the battery consumption.
A. Speed Performance
Five different mobile devices with Android-based operating systems, shown in Table I, were used to test the protocol performance. The total time for downloading the keystream, encryption, and writing to storage was measured. Tables II-V show the computations on the five different Android-based devices.

TABLE I. SPECIFICATIONS OF THE USED MOBILE DEVICES

             D-1                         D-2                  D-3                              D-4                      D-5
Model name   LG V20                      Huawei Nova 3e       Samsung Galaxy S9+               Samsung Galaxy A6+       Lenovo M10 Tablet
OS           Android 7.0 Nougat          Android 8.1, EMUI 8.0  Android 9.0 P                  Android 8.0 Oreo         Android 8.0 Oreo
API level    24                          26                   28                               26                       27
CPU          quad-core 2.15GHz + 1.6GHz  quad-core 2.36GHz    octa-core (4x2.7GHz & 4x1.7GHz)  octa-core 1.8GHz         octa-core 1.8GHz
Chipset      Qualcomm Snapdragon 820     HiSilicon Kirin 659  Qualcomm Snapdragon 845          Qualcomm Snapdragon 450  Qualcomm Snapdragon 450
RAM          4GB                         4GB                  6GB                              4GB                      3GB
GPU          Adreno 530                  Mali-T830 MP2        Adreno 630                       Adreno 506               Adreno 506
Battery      3200mAh, Li-ion             3000mAh, Li-polymer  3500mAh, Li-ion                  3500mAh, Li-ion          4850mAh, Li-ion polymer

Table II shows the total time average for the five different devices. The LG V20 device was the slowest, at 18.44169s; even so, this is fast for an 8MB file.
The Samsung Galaxy S9+ device had the fastest total time average (for download, decode, and write) at 10.3438s. The total time for all five devices was 71.6456s and the average was 14.3291s. In the experiments, 15 different file sizes from 1KB to 16MB were used to measure the overall performance, as shown in Tables III-V. It is clear that FEATHER can handle large files, and a 16MB file size is sufficient to transfer documents and photos. These calculations use the Samsung Galaxy S9+; a 16MB file needs only about 19.0s overall, which includes downloading the encrypted file from the external server, decryption, and storing it on the device (write).

TABLE II. RUNNING AN 8MB FILE 60 TIMES AND AVERAGE TIME (s)

            D-1       D-2          D-3         D-4        D-5
Download    18.0833   11.594383    10.162450   17.28501   13.083
Decode      0.13299   0.090583     0.0872030   0.151201   0.1143
Write       0.22536   0.12371666   0.0941666   0.2327666  0.1841
Total time  18.44169  11.80868266  10.3438196  17.668978  13.382

TABLE III. RUNNING 32 TO 512KB FILES AND CALCULATING TIME (s)

File size   32KB     64KB       128KB      256KB      512KB
Download    0.324    0.38       0.424      0.743      1.001
Decode      0.00102  0.0015886  0.0023127  0.0085227  0.01105
Write       0.085    0.066      0.068      0.057      0.052
Total time  0.41002  0.4475886  0.4943127  0.8085227  1.0640

TABLE IV. RUNNING 1 TO 16MB FILES AND CALCULATING TIME (s)

File size   1MB       2MB      4MB        8MB        16MB
Download    1.684     2.957    5.439      9.625      18.664
Decode      0.036893  0.02132  0.0324316  0.0836791  0.1598017
Write       0.057     0.085    0.092      0.106      0.19
Total time  1.777893  3.06332  5.5634316  9.8146791  19.013801

B. Power Consumption
An Android-based application, GSam Battery Monitor [32], was used to measure the overall battery power consumption of FEATHER using a Samsung Galaxy S9+ with a 3500mAh Li-ion battery.
After running GSam and the mobile app for FEATHER, the results showed that performing the operations on 10 files varying in size from 2 to 16MB consumed less than 1% of the power used by all apps running in the background, which together consumed 1% of battery power, so FEATHER consumes only 0.0001% of battery power.
C. FEATHER vs. Cloak
The proposed FEATHER protocol is lighter than Cloak and is much faster. Comparing the performance for file sizes of 1, 2, 4, and 8MB shows that FEATHER is faster. For example, in Table V, the total time for an 8MB file is 110s for Cloak and about 9.8s for FEATHER. Therefore, FEATHER is even more practical if multiple devices need to communicate at the same time. In addition, FEATHER consumes 80% less battery power than Cloak. Adithya et al. [30] presented another secure application of the Cloak protocol on an Apache server, using a graphical user interface to provide more security; however, it takes 1 to 2s for users to enter the digits. FEATHER, Cloak, and [30] are compared in Figure 3.

TABLE V. CLOAK AND FEATHER PROTOCOLS: TOTAL SPEED TIME FOR DIFFERENT FILE SIZES

File size (MB)   Cloak (s)   FEATHER (s)
1                20          1.77789342
2                30          3.06332185
4                60          5.56343165
8                110         9.81467915

VII. ATTACK ANALYSIS

This section provides an analysis of common attacks and shows how FEATHER is resistant to these types of attack.
A. Man-in-the-Middle Attacks
The attacker can interrupt data, inject information, and redirect the traffic. This can happen between the two devices or between the devices and the external server; that is, the attack works on the communication channel. It can be prevented by providing strong mutual authentication and end-point authentication, as the FEATHER protocol does, and by using hashing for messages, so that all messages are wrapped in hash functions. Thus, FEATHER is immune from man-in-the-middle attacks.
Fig. 3. Cloak, FEATHER and Adithya et al. [30] speed performance comparison.
B.
insider attacks
on the server side, if an insider gains access to the stored information, they only obtain the keystream. the message itself is protected by a hash function, together with the one-time-pad and another secure parameter such as the timestamp or a random number known only to the mobile device users. on the mobile side, the device validates the messages received from the server and from other mobiles.

c. denial of service attacks
the feather protocol has steps in the external server to authenticate users before they access the service: 1) authentication of users' credentials, 2) updating the accessing parameters, and 3) validating the users' messages and hash functions. as verification by the server and the devices is mutual, a denial of service attack is not applicable.

d. chosen iv-attacks
the keystream is generated using mickey 2.0 with (key, iv) as the initial input. in feather, an iv is never used more than once with the same key, so feather eliminates this threat by preventing iv reuse, as well as by including the iv in the hash function, so an attacker choosing the iv cannot recover the key.

e. two-time pad attacks
assume there are two messages m1 and m2. if the same key k is used for both (called a two-time pad), the two ciphertexts are c1 = m1 ⊕ k and c2 = m2 ⊕ k. it is then easy for the attacker to xor the two ciphertexts: c1 ⊕ c2 = (m1 ⊕ k) ⊕ (m2 ⊕ k) = m1 ⊕ m2, and statistical frequency analysis of m1 ⊕ m2 can reveal the plaintexts. in the feather protocol, each file is encrypted with a different keystream as well as a different one-time-pad for every session and timestamp. thus, this attack is not applicable. f.
impersonation attacks
this kind of attack occurs when the attacker gains access to a mobile device and requests a response from the server. the server will validate and authenticate the request. as mobile users use a hash function that includes a one-time-pad (as discussed in the protocol implementation), and the server likewise hashes the keystream with a one-time-pad among other user credentials, this attack is not feasible with feather.

g. brute force attacks
with an 80-bit key, the complexity of a brute force attack is in general 2^80. on top of this, the feather protocol uses a hash function, for example as measured on d-3 (a user may choose other, stronger hash functions, and that will not affect the speed performance, as the slower part is the downloading time); the computational power required depends on the implementation. adding other secure parameters, such as the otp, which is similar to the one-time-pad cipher, substantially raises the computational power needed to break the protocol.

viii. discussion
in feather, downloading is the most time-consuming task. if more than two mobile devices are required to communicate at the same time, the external server generating the keystream in the feather protocol is much faster than cloak's. this reduces the overall time, as decoding is just xor-ing the messages with the keystream, which is fast. the mobile battery lifetime is also longer. the proposed lightweight security protocol feather provides confidentiality, authorisation, and security for users of mobile cloud computing and iot technology. it also helps reduce power consumption, which improves the overall performance of mobile applications. the proposed protocol was analyzed against possible known attacks, and the analysis showed that it is secure for implementation. the mickey 2.0 cipher was used as a pseudo-random number generator.
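the decode step described above is a single xor pass over the downloaded ciphertext, and the two-time-pad risk from the attack analysis follows directly from the same operation. a minimal python sketch, with random bytes standing in for the mickey 2.0 keystream (an illustrative stand-in, not the actual cipher):

```python
import os

def xor(data: bytes, keystream: bytes) -> bytes:
    """Encrypt/decrypt: XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(32)            # stand-in for MICKEY 2.0 output
m1 = b"meet at the usual place, 9pm    "
ciphertext = xor(m1, keystream)
assert xor(ciphertext, keystream) == m1   # decoding is one XOR pass

# Two-time pad: reusing the keystream for a second message m2 lets an
# attacker cancel the key entirely: c1 XOR c2 == m1 XOR m2.
m2 = b"meet at the backup place, 9pm   "
c2 = xor(m2, keystream)
leak = xor(ciphertext, c2)
assert leak == xor(m1, m2)            # key is gone; frequency analysis applies
```

feather avoids the second situation by never reusing a (key, iv) pair, so every session gets a fresh keystream.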
however, the feather protocol can be adapted to use other iv-based lightweight synchronous stream ciphers. the mickey 2.0.85 variant [9], which is 23% faster in generating pseudo-random numbers, can also be used; however, even using mickey 2.0 in feather is fast enough. mickey 2.0.85 is useful for other, smaller applications. the feather protocol offers a secure contribution to mobile cloud computation. the comparison in figure 3 shows that feather is much faster than the recent cloak and [30] protocols, and it also provides more security. the limitations of this study include testing on only five devices, although the cloak protocol [23] was also tested on five devices. future work could involve further testing of the performance of feather on a wider range of devices and comparing it to a wider range of existing protocols. another important direction for future research is adapting other lightweight ciphers, such as trivium, grain, and other lightweight block ciphers, to generate the keystream in the server, and then implementing feather and calculating the overall execution time.

ix. conclusion
ensuring security in mobile cloud computing is critical but challenging. the proposed lightweight security protocol, feather, reduces the cost and time used in the external server. therefore, it can increase the number of devices communicating at the same time and enhance mobile cloud computing applications. the feather protocol has better performance than existing protocols and can help meet the requirements for secure mobile cloud computing with internet connectivity.

acknowledgment
the authors would like to thank dr david jones for his advice on the mobile cloud computing settings.

references
[1] p. bahl, r. y. han, l. e. li, and m. satyanarayanan, “advancing the state of mobile cloud computing,” in proceedings of the third acm workshop on mobile cloud computing and services, low wood bay, lake district, uk, jun. 2012, pp. 21–28, doi: 10.1145/2307849.2307856.
[2] k.
kumar and y. lu, “cloud computing for mobile users: can offloading computation save energy?,” computer, vol. 43, no. 4, pp. 51–56, apr. 2010, doi: 10.1109/mc.2010.98.
[3] h. dinh thai, c. lee, d. niyato, and p. wang, “a survey of mobile cloud computing: architecture, applications, and approaches,” wireless communications and mobile computing, vol. 13, no. 18, pp. 1587–1611, dec. 2013, doi: 10.1002/wcm.1203.
[4] “number of mobile phone users worldwide 2015-2020,” statista. https://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/ (accessed jul. 23, 2020).
[5] g. singh and s. kinger, “a study of encryption algorithms (rsa, des, 3des and aes) for information security,” international journal of computer applications, vol. 67, no. 19, pp. 33–38, apr. 2013, doi: 10.5120/11507-7224.
[6] a. bogdanov, f. mendel, f. regazzoni, v. rijmen, and e. tischhauser, “ale: aes-based lightweight authenticated encryption,” in fast software encryption, berlin: springer, 2014, pp. 447–466.
[7] g. desolda, c. ardito, h.-c. jetter, and r. lanzilotti, “exploring spatially-aware cross-device interaction techniques for mobile collaborative sensemaking,” international journal of human-computer studies, vol. 122, pp. 1–20, aug. 2018, doi: 10.1016/j.ijhcs.2018.08.006.
[8] t. eisenbarth, s. kumar, c. paar, a. poschmann, and l. uhsadel, “a survey of lightweight-cryptography implementations,” ieee design & test of computers, vol. 24, no. 6, pp. 522–533, dec. 2007, doi: 10.1109/mdt.2007.178.
[9] alamer, soh, and brumbaugh, “mickey 2.0.85: a secure and lighter mickey 2.0 cipher variant with improved power consumption for smaller devices in the iot,” symmetry, vol. 12, no. 1, dec. 2019, doi: 10.3390/sym12010032, art no. 32.
[10] p. kitsos, n. sklavos, g. provelengios, and a. n. skodras, “fpga-based performance analysis of stream ciphers zuc, snow3g, grain v1, mickey v2, trivium and e0,” microprocessors and microsystems, vol. 37, no. 2, pp. 235–245, mar. 2013, doi: 10.1016/j.micpro.2012.09.007.
[11] a. h. al-omari, “lightweight dynamic crypto algorithm for next internet generation,” engineering, technology & applied science research, vol. 9, no. 3, pp. 4203–4208, jun. 2019.
[12] m. ali, n. q. soomro, h. ali, a. awan, and m. kirmani, “distributed file sharing and retrieval model for cloud virtual environment,” engineering, technology & applied science research, vol. 9, no. 2, pp. 4062–4065, apr. 2019.
[13] m. k. hassan, a. babiker, m. baker, and m. hamad, “sla management for virtual machine live migration using machine learning with modified kernel and statistical approach,” engineering, technology & applied science research, vol. 8, no. 1, pp. 2459–2463, feb. 2018.
[14] m. bahrami and m. singhal, “a light-weight permutation based method for data privacy in mobile cloud computing,” in 3rd ieee international conference on mobile cloud computing, services, and engineering, san francisco, ca, usa, apr. 2015, pp. 189–198, doi: 10.1109/mobilecloud.2015.36.
[15] j. daemen and v. rijmen, the design of rijndael: aes - the advanced encryption standard. berlin: springer, 2002.
[16] d. a. osvik, j. bos, d. stefan, and d. canright, “fast software aes encryption,” presented at the 17th international workshop on fast software encryption, seoul, korea, feb. 2010, vol. 6147, pp. 75–93, doi: 10.1007/978-3-642-13858-4_5.
[17] m. yoshikawa and h. goto, “security verification simulator for fault analysis attacks,” international journal of soft computing and software engineering, vol. 3, no. 3, pp. 467–473, 2013, doi: 10.7321/jscse.v3.n3.71.
[18] j. daemen and v. rijmen, the design of rijndael: aes - the advanced encryption standard. berlin heidelberg: springer-verlag, 2002.
[19] d. a. osvik, j. w. bos, d.
stefan, and d. canright, “fast software aes encryption,” in fast software encryption, s. hong and t. iwata, eds. berlin, heidelberg: springer, 2010, pp. 75–93.
[20] m. robshaw and o. billet, eds., new stream cipher designs. berlin, heidelberg: springer, 2008.
[21] a. j. menezes, p. c. van oorschot, and s. a. vanstone, handbook of applied cryptography. boca raton, florida: crc press, 2018.
[22] l. diedrich, p. jattke, l. murati, m. senker, and a. wiesmaier, “comparison of lightweight stream ciphers: mickey 2.0, wg-8, grain and trivium,” 2016.
[23] a. banerjee, m. hasan, m. a. rahman, and r. chapagain, “cloak: a stream cipher based encryption protocol for mobile cloud computing,” ieee access, vol. 5, pp. 17678–17691, 2017, doi: 10.1109/access.2017.2744670.
[24] c. de canniere, “trivium: a stream cipher construction inspired by block cipher design principles,” in information security, vol. 4176, berlin, heidelberg: springer, 2006, pp. 171–186.
[25] m. hell, t. johansson, and w. meier, “grain: a stream cipher for constrained environments,” ijwmc, vol. 2, no. 1, pp. 86–93, jan. 2007, doi: 10.1504/ijwmc.2007.013798.
[26] s. babbage and m. dodd, “the mickey stream ciphers,” in new stream cipher designs: the estream finalists, m. robshaw and o. billet, eds. berlin, heidelberg: springer, 2008, pp. 191–209.
[27] m. s. turan and a. dog, “detailed statistical analysis of synchronous stream ciphers,” presented at the sasc 2006 stream ciphers revisited, leuven, belgium, feb. 2006.
[28] s. al hinai, l. m. batten, and b. colbert, “mutually clock-controlled feedback shift registers provide resistance to algebraic attacks,” in information security and cryptology, vol. 4990, berlin, heidelberg: springer, 2008, pp. 201–215.
[29] a. r. kazmi, m. afzal, m. f. amjad, h. abbas, and x. yang, “algebraic side channel attack on trivium and grain ciphers,” ieee access, vol. 5, pp. 23958–23968, 2017, doi: 10.1109/access.2017.2766234.
[30] v. adithya, r. ramya, d. v. kumar, and m. m.
krishnan, “cloak encryption in apache,” international journal of advance research and development, vol. 3, no. 3, pp. 184–187, 2018.
[31] s. anand and v. perumal, “eecdh to prevent mitm attack in cloud computing,” digital communications and networks, vol. 5, no. 4, pp. 276–287, nov. 2019, doi: 10.1016/j.dcan.2019.10.007.
[32] “gsam battery monitor – apps on google play.” https://play.google.com/store/apps/details?id=com.gsamlabs.bbm&hl=en_au (accessed jul. 23, 2020).

etasr engineering, technology & applied science research vol. 3, no. 2, 2013, 413-415 www.etasr.com caccetta et al: an improved clarke and wright algorithm to solve the capacitated vehicle…

an improved clarke and wright algorithm to solve the capacitated vehicle routing problem

louis caccetta, department of mathematics and statistics, curtin university of technology, australia, l.caccetta@curtin.edu.au
mamoon alameen, department of engineering, the australian college of kuwait, kuwait, m.radiy@ack.edu.kw
mohammed abdul-niby, department of engineering, the australian college of kuwait, kuwait, m.nibi@ack.edu.kw

abstract—this paper proposes an effective hybrid approach that combines domain reduction with the clarke and wright algorithm to solve the capacitated vehicle routing problem. the hybrid approach is applied to solve 10 benchmark capacitated vehicle routing problem instances. the dimension of the instances ranged from 21 to 200 customers. the results show that domain reduction can improve the classical clarke and wright algorithm by about 18%. the hybrid approach improves the large instances significantly more than the smaller ones. this paper does not report the time taken to solve each instance, as the clarke and wright algorithm and the hybrid approach took almost the same cpu time.

keywords-clarke and wright; capacitated vehicle routing problem; domain reduction

i.
introduction
the vehicle routing problem (vrp) is an important problem in distribution networks and plays a significant role in cost reduction and service improvement. the problem is one of visiting a set of customers using a fleet of vehicles, respecting constraints on the vehicles, customers, drivers, etc. [1]. the goal is to produce a minimum cost routing plan specified for each vehicle. the problem of vehicle scheduling was first formulated in 1959 [2] and may be stated as follows: a set of customers, each with a known location and a known requirement for some commodity, is to be supplied from a single depot by delivery vehicles, subject to the following conditions and constraints:
• the demands of all customers must be met.
• each customer is served by only one vehicle.
• the capacity of the vehicles may not be violated (for each route the total demand must not exceed the capacity).
the objective of a solution may be stated, in general terms, as minimizing the total cost of delivery, namely the costs associated with the fleet size and the cost of completing the delivery routes [3]. the problem frequently arises in many diverse physical distribution situations, for example bus routing, preventive maintenance inspection tours, salesman routing, and the delivery of any commodity such as mail, food or newspapers [4]. the vehicle routing problem is an integer programming problem that falls into the category of np-hard problems. as the problems become larger, there is no guarantee that optimal tours will be found within reasonable computing time [5].

ii. problem formulation
the capacitated vehicle routing problem (cvrp) is to satisfy the demand of a set of customers using a fleet of vehicles with minimum cost. the problem is described as follows [4]: let:
• c = {1, 2, …, n}: the set of customer locations.
• 0: the depot location.
• g = (v, e): the graph representing the vehicle routing network, with v = {0, 1, …, n} and e = {(i, j): i, j ∈ v, i < j}. at time t > 0, the plate at y = 0 starts moving with a constant velocity and a constant heat flux, while the plate at y = h is heated with a periodic temperature and maintained stationary. to set up the mathematical model of the problem, the following assumptions are made:
• the fluid physical properties are constant.
• the fluid flow is unsteady, laminar, incompressible, viscous, electrically conducting and fully developed.
• the magnetic reynolds number is small, hence the induced magnetic field is negligible.
fig. 1. schematic diagram of the system
based on the above assumptions and applying the magnetic field in the presence of a periodic plate temperature, the transient governing equations of the mhd flow for the fluid motion and temperature are:

\partial u/\partial t = g\beta(T - T_0) + \nu\,\partial^2 u/\partial y^2 - (\sigma B_0^2/\rho)\,u   (1)
\rho c_p\,\partial T/\partial t = k\,\partial^2 T/\partial y^2   (2)

where u is the axial velocity, t is the fluid temperature, ν is the kinematic viscosity, k is the thermal conductivity, and cp is the specific heat, with initial and boundary conditions:

t \le 0: u = 0,\; T = T_0 \text{ for all } y
t > 0: u = u_0,\; -k\,\partial T/\partial y = q \text{ at } y = 0   (3)
       u = 0,\; T = T_w + \varepsilon (T_w - T_0)\cos(\omega t) \text{ at } y = h

to write the governing equations in dimensionless form, the following dimensionless variables are introduced:

Y = y/h,\; U = u/u_0,\; \tau = \nu t/h^2,\; \theta = (T - T_0)/(T_w - T_0),\; \bar{\omega} = \omega h^2/\nu,\; Gr = g\beta h^2 (T_w - T_0)/(\nu u_0),\; M = \sigma B_0^2 h^2/(\rho\nu),\; Pr = \mu c_p/k   (4)

the dimensionless equations become:

\partial U/\partial \tau = Gr\,\theta + \partial^2 U/\partial Y^2 - M U   (5)
\partial \theta/\partial \tau = (1/Pr)\,\partial^2 \theta/\partial Y^2   (6)

the corresponding boundary conditions can be specified as:

\tau \le 0: U = 0,\; \theta = 0 \text{ for all } Y
\tau > 0: U = 1,\; \partial \theta/\partial Y = -1 \text{ at } Y = 0   (7)
          U = 0,\; \theta = 1 + \varepsilon \cos(\bar{\omega}\tau) \text{ at } Y = 1

engineering, technology & applied science research vol. 9, no. 2, 2019, 4007-4011 www.etasr.com abdullah & saada: free convection mhd couette flow with application of periodic temperature and … iii.
numerical analysis
the finite difference technique is applied to solve the dimensionless momentum and energy equations and determine the velocity and temperature distributions for different parameter values. a uniform grid consisting of a large number of nodes in the y direction is used. the crank-nicolson method is used to solve the equations with a large number of time steps. the crank-nicolson method is implicit and numerically stable, and has a higher order of accuracy; the solutions are directly obtained with the thomas algorithm.

iv. analytical solution
the numerical solution can be verified by comparison with the analytical solution for the case of constant wall temperature (ε = 0). the eigenfunction expansion method is used to solve the energy equation. the dimensionless time dependent energy equation can be written as:

\partial \theta/\partial \tau = (1/Pr)\,\partial^2 \theta/\partial Y^2   (8)

the boundary conditions for constant wall temperature are:

\tau \le 0: \theta = 0 \text{ for all } Y
\tau > 0: \partial \theta/\partial Y = -1 \text{ at } Y = 0   (9)
          \theta = 1 \text{ at } Y = 1

since the boundary conditions are non-homogeneous, we can make them homogeneous by introducing:

f(Y, \tau) = \theta(Y, \tau) - (2 - Y)   (10)

where 2 − y is the steady-state solution satisfying both boundary conditions. the energy equation then becomes:

\partial f/\partial \tau = (1/Pr)\,\partial^2 f/\partial Y^2   (11)

and the boundary conditions can be specified as:

\tau \le 0: f = -(2 - Y) \text{ for all } Y
\tau > 0: \partial f/\partial Y = 0 \text{ at } Y = 0   (12)
          f = 0 \text{ at } Y = 1

let f(Y, \tau) = \phi(Y)\,\delta(\tau). taking the derivatives and substituting into (11) yields the eigenvalue problem:

d^2\phi/dY^2 + \lambda\phi = 0, \quad d\phi/dY(0) = 0,\; \phi(1) = 0   (13)

where λ is the separation constant. the solution of the above equation is:

\phi_n(Y) = \cos(\sqrt{\lambda_n}\,Y)   (14)

with eigenvalues:

\lambda_n = \left((2n-1)\pi/2\right)^2   (15)

for each n, the solution for δ(τ) is \delta_n(\tau) = e^{-\lambda_n \tau/Pr}. hence the series solution for f(y, τ) is:

f(Y, \tau) = \sum_{n=1}^{\infty} b_n \cos(\sqrt{\lambda_n}\,Y)\, e^{-\lambda_n \tau/Pr}   (16)

which satisfies the non-homogeneous initial condition f(Y, 0) = -(2 - Y). hence b_n = -(2/\lambda_n)\left(1 + \sqrt{\lambda_n}\,\sin\sqrt{\lambda_n}\right), where n = 1, 2, …, ∞.
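a minimal python sketch of the scheme described in section iii, applied to the energy equation for the constant wall temperature case (ε = 0): crank-nicolson time stepping with the tridiagonal systems solved by the thomas algorithm. grid and step sizes are illustrative, and the heat-flux condition at y = 0 is discretized first order for simplicity:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a/b/c are the sub-/main/super-diagonals."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_theta(pr=7.0, ny=51, dtau=0.01, tau_end=20.0):
    """Crank-Nicolson for d(theta)/d(tau) = (1/Pr) d2(theta)/dY2."""
    dy = 1.0 / (ny - 1)
    r = dtau / (2.0 * pr * dy * dy)          # CN weight
    theta = [0.0] * ny                       # theta = 0 everywhere at tau <= 0
    for _ in range(round(tau_end / dtau)):
        a = [0.0] + [-r] * (ny - 2) + [0.0]
        b = [1.0] + [1.0 + 2.0 * r] * (ny - 2) + [1.0]
        c = [0.0] + [-r] * (ny - 2) + [0.0]
        d = [0.0] * ny
        for i in range(1, ny - 1):           # explicit half of the CN step
            d[i] = r * theta[i - 1] + (1.0 - 2.0 * r) * theta[i] + r * theta[i + 1]
        b[0], c[0], d[0] = 1.0, -1.0, dy     # d(theta)/dY = -1 at Y = 0
        d[-1] = 1.0                          # theta = 1 at Y = 1
        theta = thomas(a, b, c, d)
    return theta
```

for large τ the computed profile approaches the steady solution θ = 2 − y (e.g. θ(0) → 2), consistent with the substitution used in the analytical solution.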
then the final form of the solution is:

\theta(Y, \tau) = \sum_{n=1}^{\infty} -\frac{2}{\lambda_n}\left(1 + \sqrt{\lambda_n}\sin\sqrt{\lambda_n}\right)\cos(\sqrt{\lambda_n}\,Y)\, e^{-\lambda_n \tau/Pr} + (2 - Y)   (17)

the coefficient of heat transfer (nusselt number) is given by:

Nu = -\frac{h}{T_w - T_0}\left.\frac{\partial T}{\partial y}\right|_{wall}   (18)

using the dimensionless temperature expression, the nusselt numbers at the moving and stationary plates can be written as:

Nu_0 = -\frac{1}{\theta(0,\tau)}\left.\frac{\partial \theta}{\partial Y}\right|_{Y=0} = \frac{1}{\theta(0,\tau)}   (19)

Nu_1 = \left.\frac{\partial \theta}{\partial Y}\right|_{Y=1} = \sum_{n=1}^{\infty}\left(\frac{2}{\sqrt{\lambda_n}}\sin\sqrt{\lambda_n} + 2\sin^2\sqrt{\lambda_n}\right) e^{-\lambda_n \tau/Pr} - 1   (20)

v. results and discussion
in this section, the effect of the different dimensionless parameters on the velocity and temperature profiles is discussed. the numerical solution using the crank-nicolson technique for the velocity and temperature profiles is computed for different values of the magnetic parameter, prandtl number, grashof number, and temperature frequency. the following parameter values are used to get the results: gr=5, pr=7, m=1, ω=10, ε=0.2, τ=1. figure 2 illustrates the effect of the grashof number on the dimensionless velocity profile. it is seen that increasing gr increases the velocity u when all other parameters are held constant. figure 3 shows the effect of the magnetic field on the velocity profile, which clearly indicates that an increase in the applied magnetic intensity results in a decrease in velocity.
fig. 2. velocity profile for different grashof numbers (gr)
the effect of the prandtl number pr on the velocity profile is shown in figure 4: increasing pr decreases the velocity. the effect of pr on the temperature field is illustrated in figure 5. it is observed that as pr increases, the temperature in the fluid decreases. it is also seen that pr has a significant influence on the temperature of the plate with constant heat flux for higher values of pr, as shown in figure 5. fig. 3.
velocity profile for different magnetic parameters (m)
fig. 4. velocity profile for different prandtl numbers (pr)
fig. 5. temperature profile for different prandtl numbers (pr)
the transient velocity and temperature profiles are shown in figures 6 and 7 for different locations in the fluid. it is noticed that the effect of increasing time is to increase the velocity and temperature until they reach a steady state, and the required time is reduced as we move towards the stationary plate, which has a periodic temperature. it is also noticed that both velocity and temperature have an oscillatory behavior with higher amplitude near the stationary plate, which decays as we move away from the plate.
fig. 6. transient velocity at different points on the y coordinate
fig. 7. transient temperature at different points on the y coordinate
the effect of the plate temperature frequency on the transient temperature profile is shown in figure 8: as the frequency increases, the temperature difference appears to fade away with time. hence, the temperature at high frequency appears continuous rather than oscillating.
fig. 8. transient temperature for different values of temperature frequency
a comparison of the transient temperature results of the crank-nicolson solution with the analytical solution for the case ε=0 is shown in figure 9. it is seen that the results are in good agreement with each other.
fig. 9. comparison of temperature numerical and analytical results

vi. conclusions
an implicit numerical solution to the problem of transient free convective couette flow of a viscous incompressible fluid confined between two vertical parallel plates in the presence of constant heat flux and periodic temperature on the walls has been presented.
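the analytical curves used for the verification in figure 9 come from evaluating the series solution; a small python sketch of the truncated series (17), with illustrative parameter values:

```python
import math

def theta_series(y, tau, pr=7.0, terms=1000):
    """Evaluate the eigenfunction series solution for theta (epsilon = 0)."""
    total = 2.0 - y                            # steady-state part, 2 - Y
    for n in range(1, terms + 1):
        root = (2 * n - 1) * math.pi / 2.0     # sqrt(lambda_n)
        lam = root * root
        bn = -(2.0 / lam) * (1.0 + root * math.sin(root))
        total += bn * math.cos(root * y) * math.exp(-lam * tau / pr)
    return total
```

at y = 1 every cosine term vanishes, so θ(1, τ) = 1 for all τ, and as τ grows the transient sum dies out, leaving the steady profile θ = 2 − y; at τ = 0 the series recovers the initial condition θ = 0, though only slowly, since convergence there is conditional.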
the dimensionless governing partial differential equations are solved by the crank-nicolson technique and verified by an eigenfunction expansion method. the effects of different parameters such as the magnetic parameter, grashof number, prandtl number and time are studied. the conclusions of the study are:
• the effect of mhd on the fluid flow appears through the magnetic parameter. the magnetic field exerts a retarding influence on the fluid velocity, which means that a slightly conductive fluid is only slightly affected by the presence of the magnetic field. thus, control of the flow can be obtained by controlling the magnetic flux and by a good choice of electrical conductivity.
• increasing the grashof number causes an increase in velocity.
• increasing the prandtl number decreases the velocity and temperature.
• the periodic behavior of the plate temperature is reflected in the transient velocity and temperature profiles.

nomenclature
b0   magnetic flux density (t)
cp   specific heat (j·kg⁻¹·k⁻¹)
gr   grashof number
k    thermal conductivity (w·m⁻¹·k⁻¹)
m    magnetic parameter
pr   prandtl number
t    time (s)
t    temperature (k)
tw   wall temperature (k)
t0   initial fluid temperature (k)
u    velocity component in the x direction (m·s⁻¹)
u    dimensionless velocity
x, y cartesian coordinates
y    dimensionless coordinate
ε    small reference parameter
ω    frequency of oscillation (rad·s⁻¹)
ω̄    dimensionless frequency
φ, δ separation variables
θ    dimensionless temperature
τ    dimensionless time
λ    separation constant
ν    kinematic viscosity (m²·s⁻¹)
σ    electrical conductivity (siemens·m⁻¹)
ρ    density (kg·m⁻³)
µ    dynamic viscosity (kg·m⁻¹·s⁻¹)

references
[1] a. singh, “natural convection in unsteady couette motion”, defence science journal, vol. 38, no. 1, pp.
35-41, 1988
[2] b. jha, “natural convection in unsteady mhd couette flow”, heat and mass transfer, vol. 37, no. 4, pp. 329-331, 2001
[3] m. m. rashidi, n. kavyani, s. abelman, “investigation of entropy generation in mhd and slip over a rotating porous disk with variable properties”, international journal of heat and mass transfer, vol. 70, pp. 892-917, 2014
[4] a. m. rashad, m. m. rashidi, g. lorenzini, s. e. ahmed, a. m. aly, “magnetic field and internal heat generation effects on the free convection in a rectangular cavity filled with a porous medium saturated with cu–water nanofluid”, international journal of heat and mass transfer, vol. 104, pp. 878–89, 2017
[5] m. sheikholeslami, k. vajravelu, m. m. rashidi, “forced convection heat transfer in a semi annulus under the influence of a variable magnetic field”, international journal of heat and mass transfer, vol. 92, pp. 339–348, 2016
[6] r. chaudhary, p. jain, “exact solutions of incompressible couette flow with constant temperature and constant heat flux on walls in the presence of radiation”, turkish journal of engineering and environmental sciences, vol. 31, no. 5, pp. 297–304, 2007
[7] m. narahari, “effects of thermal radiation and free convection currents on the unsteady couette flow between two vertical parallel plates with constant heat flux at one boundary”, wseas transactions on heat and mass transfer, vol. 5, no. 1, pp. 21-30, 2010
[8] k. bunonyo, e. amos, i. c. eli, “unsteady oscillatory couette flow between vertical parallel plates with constant radiative heat flux”, asian research journal of mathematics, vol. 11, no. 2, pp. 1-11, 2018
[9] p. sharma, b. sharma, r. c. tamkang, “unsteady free convection oscillatory couette flow through a porous medium with periodic wall temperature”, journal of mathematics, vol. 38, no. 1, pp. 93-102, 2007
[10] c. israel-cookey, e. amos, c.
nwaigwe, “mhd oscillatory couette flow of a radiating viscous fluid in a porous medium with periodic wall temperature”, american journal of scientific and industrial research, vol. 1, no. 2, pp. 326-331, 2010
[11] m. narahari, b. dutta, “free convection flow and heat transfer between two vertical parallel plates with variable temperature at one boundary”, acta technica, vol. 56, pp. 103–113, 2011
[12] m. raju, s. varma, “unsteady mhd free convection oscillatory couette flow through a porous medium with periodic wall temperature”, i-manager’s journal on future engineering & technology, vol. 6, no. 4, pp. 7-12, 2011
[13] n. ahmed, k. sarma, d. p. barua, “magnetic field effect on free convective oscillatory flow between two vertical parallel plates with periodic plate temperature and dissipative heat”, applied mathematical sciences, vol. 6, no. 39, pp. 1913-1924, 2012
[14] s. das, b. sarkar, r. n. jana, “radiation effects on free convection mhd couette flow started exponentially with variable wall temperature in presence of heat generation”, open journal of fluid dynamics, vol. 2, no. 1, pp. 14-27, 2012
[15] h. k. mandal, s. das, r. n. jana, “transient free convection in a vertical channel with variable temperature and mass diffusion”, chemical and process engineering research, vol. 23, pp. 38-54, 2014
[16] b. zigta, p. koya, “the effect of mhd on free convection with periodic temperature and concentration in the presence of thermal radiation and chemical reaction”, international journal of applied mechanics and engineering, vol. 22, no. 4, pp. 1059-1073, 2017

engineering, technology & applied science research vol. 8, no.
3, 2018, 2963-2968 www.etasr.com tuballa and abundo: operational impacts of renewable energy systems on a remote diesel-powered …

operational impact of res penetration on a remote diesel-powered system in west papua, indonesia

maria lorena l. tuballa, school of engineering, university of san carlos, cebu city, philippines, and college of engineering and design, silliman university, dumaguete city, philippines, mlt_kin@yahoo.com
michael lochinvar s. abundo, nanyang technopreneurship centre, nanyang technological university, singapore, and school of engineering, university of san carlos, cebu city, philippines, michael.abundo@ntu.edu.sg

abstract—when a new power source connects to the distribution or transmission grid, an assessment of its impact is necessary. technical studies must assess the possible effects of a proposed expansion, reinforcement or modification to evaluate the possible incidents that may occur. typically, the calculations or analyses done are load flow, short-circuit, and transient stability. the possible renewable energy (re) sources are determined first. the details of the existing electrical system, including the specifications of the elements used, are obtained, and logical assumptions are made for those that are not known. the load flow analysis in the considered case revealed that the presence of re reduces diesel generation. the 119 kw pv array and the 54 kw tidal turbine displace most diesel generation: 22% of gen 4 and 21.8% of gen 5. the diesel-solar system brought the diesel generation down by 20.05% of gen 4 and 20% of gen 5. the diesel-tidal combination lessened the diesel generation by 1.92% of gen 4 and 1.83% of gen 5. the short-circuit analysis produced alerts indicating circuit breakers operating beyond their interrupting ratings. the transient stability analysis shows that the re sources affect the existing system and appear to add stress to it.
based on the results, the studied systems are not transient-stable. while it is relatively simple to plan to put up renewables in remote island systems, there are many factors to consider, such as the possible impacts of the re sources.

keywords-electrical grid; renewable energy; transient stability; diesel generation; tidal turbine

i. introduction
in a typical distribution system, where a traditional electric grid interconnects smaller grids, whenever a new significant power generation source connects to the distribution or transmission grid, whether conventional or a renewable energy (re) resource, a grid impact assessment or grid impact study is necessary. this is a set of technical studies used to assess the possible effects of the proposed expansion, reinforcement, or modifications and to evaluate the possible incidents that may occur. power system issues mostly depend on system size, geographical distribution, planned capacities of variable renewable resources, system operation scheme, market structure, size of the balancing area, and interconnection capacity. typically, the calculations or analyses done in a grid impact study are load flow, short-circuit and transient stability studies. for remote islands that are not connected to the usual grid system, it is also important to see the impact of new generation. the objective of the current study is to assess the impact of solar, wind and tidal energy systems on the operational characteristics of an electrical system powered only by diesel generators. the specific location in indonesia is pt bintuni utama murni wood industries (pt bumwi), a mangrove processing and woodchip plant. the company specializes in mangrove utilization and forest management. it started as a primary supplier of mangrove logs to the japanese paper industry. the concession area is in pulao amutu besar, bintuni bay, west papua, with a total area of 82,120 hectares.
The site activities include harvesting mangrove logs and processing them into woodchips. The woodchip factory is located in the southern part of the island, with no conventional electricity grid available, so the energy demand is met by off-grid diesel generators. The diesel fuel supply comes from Sorong, approximately 500 km away from the island. The reported fuel cost is high, around 13,000 IDR ($0.89 USD) per liter, and base consumption for power generation is around 220,600 liters per year. The company aims to lessen its dependence on diesel fuel and achieve a sustainable power system by integrating RE. It plans to include tidal energy in its supply as the site is surrounded by a large bay.

II. West Papua Woodchip Factory Site

Indonesia is an archipelago and the Indonesian transmission network is segregated into many power grids—eight interconnected networks and isolated grids that are all operated by PLN, a government-owned monopoly on electricity distribution in Indonesia. PLN prioritizes the development of renewable resources to supply local grids where available and interconnects grids where feasible [1]. PLN operates a total of 4,600 diesel systems outside Java-Bali, and there are approximately 30,000 small diesel generator sets in the rural areas [2]. Indonesia's latest electrification ratio is approximately 86% and, at present, its total plant capacity is 54 GW, 17% of which comes from new and renewable energy (NRE) sources. By 2025, total plant capacity is expected to grow to 115 GW and the NRE contribution to increase to 37% [3]. As of 2014, the electrification ratio targets reflect Papua as needing most of the capacity additions [4].

A. The Existing Electrical System

The PT BUMWI electrical plant houses five generator sets which supply power to the camp.
The main loads are three 220 kW three-phase wood chipper machines. Other loads include residential, workshop, bulldozer rooms, conveyor belts, and ship loading.

B. Solar Resource

Indonesia, like any tropical country located on the equator, has abundant solar energy potential. The Indonesian solar energy potential averages 4.8 kWh/m2/day [5]. The PT BUMWI site is located at 02° 31' 11" S and 133° 35' 48" E. HOMER, a microgrid optimization software, provides the average horizontal global solar radiation, expressed in kWh/m2, for each hour of the year when the coordinates of the site are specified. Table I shows the representative clearness indices and daily radiation for the different months of the year, while Figure 1 illustrates the global solar radiation.

Table I. Site clearness indices and daily radiation

Month        Clearness index   Daily radiation (kWh/m2/d)
January      0.489             5.04
February     0.493             5.2
March        0.482             5.07
April        0.508             5.12
May          0.531             5.01
June         0.511             4.62
July         0.488             4.48
August       0.479             4.67
September    0.479             4.93
October      0.497             5.2
November     0.514             5.3
December     0.491             5

C. Wind Resource

In general, a 2 m/s minimum is required to start rotating most small wind turbines, and the typical cut-in speed (when a small turbine starts generating power) is 3.5 m/s, with speeds of 10-15 m/s producing maximum generation power [6]. The average recorded wind speed here is 2.32 m/s. Although this meets the minimum speed required to start rotating most small wind turbines, the typical cut-in speed of 3.5 m/s occurs only around 7% of the time [7]. Figure 2 shows the wind rose. This limitation makes wind turbine generation at PT BUMWI impractical.

D. Tidal Resource

Tidal stream energy is a form of energy arising from the movement of masses of water, or tides. Certain types of tidal stream generators or tidal turbines function similarly to wind turbines, the only difference being that they are underwater.
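As a rough illustration of why the measured current speeds matter so much, the kinetic power per square meter of flow scales with the cube of the speed, P/A = ½ρv³, exactly as for wind. The sketch below (not part of the original study) applies this textbook relation to the current speeds reported for the strait in the resource assessment (0.7625 m/s average, 1.6 m/s maximum); the seawater density, rotor diameter and power coefficient are illustrative assumptions.

```python
import math

RHO_SEA = 1025.0  # kg/m^3, typical seawater density (assumption)

def tidal_power_density(v):
    """Kinetic power per unit swept area, in W/m^2: 0.5 * rho * v^3."""
    return 0.5 * RHO_SEA * v ** 3

def turbine_power(v, rotor_diameter, cp=0.35):
    """Extractable power for a hypothetical rotor with power coefficient cp."""
    area = math.pi * (rotor_diameter / 2) ** 2
    return cp * tidal_power_density(v) * area

# Current speeds reported for the strait at PT BUMWI
for v in (0.7625, 1.6):
    print(f"v = {v} m/s -> {tidal_power_density(v):.0f} W/m^2")
```

The cubic dependence means the 1.6 m/s peaks carry almost ten times the power density of the average flow, which is why turbine sizing is driven by the full speed distribution rather than the mean alone.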
The PT BUMWI site is situated near a strait, pointing to a good tidal in-stream resource in the area. The assessment of the tidal resource potential at the PT BUMWI factory was made by OceanPixel, a spin-off of Nanyang Technological University dedicated to increasing ocean renewable energy uptake in South East Asia. The data of the technical report [8] are taken into account in the current study. Bathymetric data were collected using an echo sounder. The bathymetry is computed by interpolating the combined data sets of the Indonesian nautical maps and the collected depth data. Tidal flows and velocities were determined using an acoustic Doppler current profiler (ADCP). The recorded maximum current speed is 1.6 m/s, while the average current speed flowing through the center of the narrow strait at PT BUMWI is 0.7625 m/s. From the assessment, the monthly available resource is roughly 1.733 GWh. Table II shows the monthly speed averages of the projected tidal resource.

Fig. 1. Plot of PT BUMWI solar radiation
Fig. 2. Wind rose

E. Modeling Parameters

The diesel generators are modeled based on their specifications. Details that are not known or specified are left at default values. Wood chippers are modeled as induction motors, corresponding to nameplate ratings. Other loads are modeled as lumped loads. Ratings for the lumped loads require real and reactive power components, and these are taken from energy loggers deployed on their respective panels on site. The capacitor bank at PT BUMWI consists of six parallel three-phase capacitors, each rated 23.2 kVAr. The inputs are taken from the PHMKP440.3.28, 10-84 LVAC power capacitors data sheet, the capacitors installed at the site. Whenever solar panels are involved in ETAP, there is an option to calculate the irradiance for a specified location; in this case, it is 1.3361° S, 133.1747° E. Figure 3 shows the irradiance calculator and the PV array editor configurations used.
The tidal turbine is not an available element in ETAP, hence it is modeled as a wind turbine generator. The characteristics of the Schottel in-stream turbine are used for the wind turbine inputs and specified according to the actual type that would be running with the diesel generators at PT BUMWI. In a separate microgrid simulation, the suggested optimal size for the tidal turbine is 54 kW. Figure 4 shows the info and turbine specification tabs.

Table II. Monthly tidal current speed averages

Month        Tidal speed (m/s)
January      1.042
February     0.909
March        0.956
April        1.019
May          1.03
June         0.96
July         0.894
August       0.933
September    0.891
October      0.969
November     1.031
December     1.044

Fig. 3. Irradiance calculator and PV array editor
Fig. 4. Tidal turbine info and ratings tabs (modeled as wind turbine)

III. Results and Discussion

Figure 5 shows the simplified single-line diagram of the PT BUMWI electrical system implemented in ETAP. It consists of 7 buses, 15 branches, 5 diesel generators (two running at a time, usually Gen 4 and Gen 5), 9 different connected loads and 6 capacitors, each rated 23.2 kVAr.

A. Load Flow Study

Load flow studies aim to determine whether an existing or reinforced system can satisfy the voltage and current limits under steady-state conditions. The loading levels of all transmission lines and substation equipment must remain below 90% of the maximum continuous ratings of phase conductors and transformers. Deviations may only be acceptable under a contingency, depending on the condition of the facility. Voltage variations in the system, for all voltage levels, shall remain within ±5% of the nominal value during normal conditions and during single-outage contingency events.
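The two acceptance criteria above (voltages within ±5% of nominal, loadings below 90% of continuous rating) amount to a simple screening pass over the solved load flow. A minimal sketch of that check follows; the bus voltages, branch loadings and the 0.44 kV nominal level are hypothetical values, not results from the PT BUMWI model.

```python
NOMINAL_KV = 0.44  # assumed LV bus nominal voltage, kV (illustrative)

def voltage_ok(v_kv, nominal_kv=NOMINAL_KV, tol=0.05):
    """Steady-state criterion: voltage within +/-5% of nominal."""
    return abs(v_kv - nominal_kv) / nominal_kv <= tol

def loading_ok(loading_pct, limit=90.0):
    """Branch/transformer loading must stay below 90% of continuous rating."""
    return loading_pct < limit

# Hypothetical solved load-flow results: bus voltages (kV), branch loadings (%)
buses = {"Main Bus": 0.438, "Bus 36": 0.431, "Bus 17": 0.465}
branches = {"Cable1": 62.0, "Cable2": 95.5}

violations = [b for b, v in buses.items() if not voltage_ok(v)]
violations += [br for br, pct in branches.items() if not loading_ok(pct)]
print(violations)
```

In an actual grid impact study these limits come from the grid code or planning criteria; the point of the sketch is only that each configuration (diesel, diesel-solar, diesel-tidal, diesel-solar-tidal) is screened against the same fixed thresholds.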
The load flow results are obtained for different configurations: the existing diesel system, the diesel-solar system, the diesel-tidal system and the diesel-solar-tidal system. No significant issues are found in the load flow. The presence of the RE reduces the percent generation of the diesel generators, with the diesel-solar-tidal configuration displacing the most diesel generation: 22% of Gen 4 and 21.8% of Gen 5. The diesel-solar system brought the diesel generation down by 20.05% of Gen 4 and 20% of Gen 5. The diesel-tidal combination lessened the diesel generation by 1.92% of Gen 4 and 1.83% of Gen 5. The diesel-solar-tidal configuration would have displaced only roughly 6% of Gen 4's generation and 6.1% of Gen 5's generation if the solar PV were sized to match the tidal turbine. The load flow studies for the other configurations do not reflect any voltage issue. Table III displays the load flow analyzer results.

Table III. Load flow analyzer results

System              ID      % PF     % Generation
Diesel              Gen 4   83.48%   36.4%
                    Gen 5   83.48%   65.5%
Diesel-solar        Gen 4   77.14%   29.1%
                    Gen 5   77.14%   52.4%
Diesel-tidal        Gen 4   85.51%   35.7%
                    Gen 5   85.51%   64.3%
                    WTG1    77.14%   16.9%
Diesel-solar-tidal  Gen 4   79.55%   28.4%
                    Gen 5   79.55%   51.2%
                    WTG1    77.14%   16.9%

B. Short-Circuit Study

Short-circuit studies determine the magnitude of the currents that flow through electrical faults and compare these against the ratings of the equipment to ensure that proper protection is present. Short-circuit levels through all circuit breakers have to be within acceptable limits, not just to prevent costly replacement projects but to ensure the safety of equipment and personnel. In addition to the short-circuit study, a protection coordination study can be done to determine the trip settings of the protective devices in the system in order to achieve maximum protection with minimum interruption for all faults that can possibly occur. In this study, protection coordination is not covered. The majority of fault studies include only three-phase and single line-to-ground faults.
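The "operating percentage" alerts reported by the short-circuit analysis are simply the calculated fault duty through each breaker expressed as a percentage of its interrupting rating; anything above 100% is flagged. A minimal sketch of that comparison is below; the breaker names, ratings and fault currents are hypothetical (chosen only to reproduce an alert of the 217.5% kind seen in the results).

```python
def device_duty_pct(fault_ka, interrupting_ka):
    """Operating percentage: calculated fault duty vs. interrupting rating."""
    return 100.0 * fault_ka / interrupting_ka

# Hypothetical breaker interrupting ratings and calculated 3-phase duties (kA)
breakers = {
    "CBG5":   {"rating": 10.0, "fault": 21.75},
    "CB WC1": {"rating": 10.0, "fault": 8.2},
}

# Flag every breaker whose duty exceeds 100% of its interrupting rating
alerts = {name: round(device_duty_pct(d["fault"], d["rating"]), 1)
          for name, d in breakers.items()
          if device_duty_pct(d["fault"], d["rating"]) > 100.0}
print(alerts)
```

Since the actual breaker ratings in this study were assumed, such alerts identify candidate devices to verify on site rather than confirmed replacements.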
Three-phase faults, though they rarely occur (around 5% of initial faults), are usually the most severe. On the other hand, single line-to-ground faults are the most common (80%). The rest are either line-to-line or line-to-line-to-ground faults. The short-circuit analysis program in ETAP enables the analysis of the effect of different types of faults, such as three-phase, line-to-ground, line-to-line, and line-to-line-to-ground faults, on distribution systems. The program calculates total short-circuit currents and the contributions of individual motors, generators, and utility ties in the system. Fault duties are in compliance with the latest editions of the ANSI/IEEE and IEC standards. Table IV gives the short-circuit analysis results.

Fig. 5. The PT BUMWI single-line diagram in ETAP

Table IV. Short-circuit analysis results (3-phase faults, device duty alerts)

Faulted bus   System (no RE)         System (solar)                          System (TD)            System (solar and TD)
Bus 17        CBG1-4 op. 217.5%      CBG1-4 op. 160.2%                       CBG1-4 op. 220.4%      CBG1-4 op. 162.6%
Bus 18        CBG5 op. 217.5%        CB PVA op. 192.2%, CBG5 op. 160.2%      CBG5 op. 220.4%        CB PVA op. 195.1%, CBG5 op. 162.6%
Main bus      CBG1-5 op. 217.5%      CBG1-5 op. 160.2%                       CBG1-5 op. 220.4%      CBG1-5 op. 162.6%
Bus 46        no alert               no alert                                no alert               no alert
Bus 36        CB WC1-2 op. 280.4%    CB WC1-2 op. 173.9%                     CB WC1-2 op. 175.7%    CB WC1-2 op. 175%
Bus 33        CB CAP1-6 op. 132.6%   CB CAP1-6 op. 132.2%                    CB CAP1-6 op. 133.2%   CB CAP1-6 op. 132.8%

C.
Transient Stability

Transient stability is the ability of the system to maintain synchronism when subjected to a severe fault or disturbance. Generators and large machines connected to the system should remain in synchronism and maintain stable operation during normal and contingency events. The fault clearing times must be adequate, and breakers need to trip, or be manually tripped, before any catastrophic event occurs. The clearing time used was 0.08 s (4 cycles) [9]. A three-phase fault was assumed to occur at Bus 18 at 1.000 s and be cleared at 1.080 s, with a 20-second simulation time. Figure 6 shows the details of the transient stability study case. Some of the transient stability plots for the two diesel generators, the loads and the buses are shown below. In Figure 7, the generator reactive power has strongly defined peaks and dips and highly frequent changes in amplitude in the presence of the RE sources. The case looks worse in the diesel-solar system. Figure 8 shows the load reactive power plot in the diesel-generator system. When the system is run by diesel alone, these parameters show some degree of stability, but not totally, as the load reactive power is not the same as the reactive power after fault clearing. The generator electrical power plots are shown in Figure 9. They display behavior similar to that observed in Figure 7. These almost noise-like plots need more attention.

Fig. 6. Transient stability study case details
Fig. 7. Generator reactive power
Fig. 8. Load reactive power (diesel only)

Fluctuations in the load reactive power and load electrical power are also observed in the presence of the RE sources. Figure 10 exhibits the behavior of the bus frequency in the diesel, solar and tidal configuration.
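The mechanics behind a study case like this (three-phase fault at 1.0 s, cleared at 1.08 s) can be illustrated with a textbook single-machine, infinite-bus swing-equation integration. The sketch below is purely illustrative: the inertia constant, power limits and frequency are assumed values, not parameters of the PT BUMWI generators, and a full multi-machine ETAP study captures far more detail.

```python
import math

# Single-machine/infinite-bus swing-equation sketch (illustrative values)
H, F0 = 3.0, 50.0                  # inertia constant (s), system frequency (Hz)
PM = 0.8                           # mechanical input, per unit
PMAX_PRE, PMAX_FAULT = 2.0, 0.0    # electrical power limit pre-fault / during fault

def simulate(t_fault=1.0, t_clear=1.08, t_end=5.0, dt=1e-3):
    delta = math.asin(PM / PMAX_PRE)   # initial rotor angle (rad)
    omega = 0.0                        # rotor speed deviation (rad/s)
    for step in range(int(t_end / dt)):
        t = step * dt
        pmax = PMAX_FAULT if t_fault <= t < t_clear else PMAX_PRE
        pe = pmax * math.sin(delta)
        # swing equation: d2(delta)/dt2 = pi*f0/H * (Pm - Pe)
        omega += dt * math.pi * F0 / H * (PM - pe)
        delta += dt * omega
        if delta > math.pi:            # pole slip -> loss of synchronism
            return False
    return True

print("stable" if simulate() else "unstable")
```

With the short 0.08 s clearing time this toy machine survives the disturbance, while lengthening the fault (e.g. clearing at 1.8 s) drives the rotor angle past the pole-slip limit; adequacy of the clearing time is exactly what the study's transient plots probe.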
It shows the worst behavior among the three simulated systems. As to the generator speed, the variation is most unfavorable in the diesel-solar configuration. Figure 11 shows the generator speed variations after the fault clearing, when the system is powered by diesel, by diesel and solar, and by the diesel, solar and tidal combination. The plots illustrate that after fault clearing, the generator, bus and load parameters do not return to their original state.

Fig. 9. Generator electrical power
Fig. 10. Bus frequency
Fig. 11. Generator speed

IV. Conclusion

This study gives an insight into the operational impacts of RE on an island grid. The load flow analysis revealed that the RE presence reduces the diesel generation. The 119 kW PV array and the 54 kW tidal turbine (here simulated as a wind turbine generator) displace the most diesel generation: 22% of Gen 4 and 21.8% of Gen 5. The diesel-solar system brought the diesel generation down by 20.05% of Gen 4 and 20% of Gen 5. The diesel-tidal combination lessened the diesel generation by 1.92% of Gen 4 and 1.83% of Gen 5. The short-circuit analysis alerts indicate the operating percentages of the circuit breakers that are beyond their interrupting ratings. Since the circuit breaker ratings are assumed, before actual replacements and/or modifications are done, the models of these protective devices need to be identified and the data inputs changed accordingly. The transient stability analysis shows that the intermittency of the RE sources affects the existing system and appears to add stress. The systems in the study are not transient-stable.

Acknowledgment

The authors would like to thank Engineering Research for Development and Technology (ERDT), OceanPixel Pte. Ltd. and Green Forest Product & Tech. Pte. Ltd.
References

[1] P. Tharakan, Summary of Indonesia's Energy Sector Assessment, ADB, 2015
[2] S. Blocks, "Business Assessment for Diesel Hybrid Systems in Indonesia", German-Indonesian Chamber of Industry and Commerce, 2013
[3] Ministry of Energy and Mineral Resources, Republic of Indonesia, Indonesia's Renewable Energy and Energy Conservation Development, 2015
[4] PwC, Power in Indonesia: Investment and Taxation Guide, 2017
[5] UNEP DTU Partnership, Indonesian Solar PV Rooftop Program (ISPRP): Facilitating Implementation and Readiness for Mitigation (FIRM), 2016
[6] BRANZ Ltd., "Wind Turbine Systems", available at: http://www.level.org.nz/energy/renewable-electricity-generation/windturbine-systems/, 2017
[7] J. Valleser, "A Techno-Economic Feasibility Study of Implementing a Hybrid Renewable Energy Microgrid at PT BUMWI Camp, Amutu Besar, Bintuni Bay, Indonesia", University of San Carlos, 2016
[8] Ocean Pixel, Tidal In-Stream Energy Resource Assessment in Bintuni Bay, West Papua, Indonesia, 2015
[9] D. K. Neitzel, "Protective Devices Maintenance as it Applies to the Arc/Flash Hazard", Conference Record of the 2004 Annual Pulp and Paper Industry Technical Conference, IEEE, pp. 209-215, 2004

Engineering, Technology & Applied Science Research Vol. 8, No. 3, 2018, 3018-3022 www.etasr.com Kanona et al.: A Review of Ground Target Detection and Classification Techniques in Forward …

A Review of Ground Target Detection and Classification Techniques in Forward Scattering Radars

Mohammed E. A. Kanona
School of Telecommunication, Future University, Khartoum, Sudan
mohammedkanona@gmail.com

Mohamed Ghazli Hamza
School of Telecommunication, Future University, Khartoum, Sudan
mohghazli@gmail.com

Ashraf G. Abdalla
School of Telecommunication, Future University, Khartoum, Sudan
agea33@yahoo.com

M. K.
Hassan
Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
memo1023@gmail.com

Abstract—This paper presents a review of target detection and classification in forward scattering radar (FSR), a special case of bistatic radar designed to detect and track moving targets in the narrow region along the transmitter-receiver baseline. FSR has advantages and notable features over other types of radar configurations. All previous studies proved that FSR can be used as an alternative system for ground target detection and classification. The radar and FSR fundamentals are addressed, and classification algorithms and techniques are discussed. Additionally, the current and future applications and the limitations of FSR are presented.

Keywords—neural network; PCA; z-score; target classification; recognition; forward scattering radar

I. Introduction

Radar systems and stations are used for detecting various objects in space and establishing their current position, as well as determining velocities and trajectories of moving objects [1]. From the basic point of view, this is achieved by transmitting an electromagnetic (EM) wave from the transmitting antenna. If the target is located within the radar coverage area, the wave will be reflected back to the receiving antenna [2]. There are different types of radar systems, based on the transmitter-receiver topology: (a) the monostatic radar, where the transmitter and the receiver are spatially combined, and (b) the bistatic radar, which consists of a single transmitter and a single receiver separated by a distance comparable to the maximum range [1]. A forward scattering radar (FSR) is a special case of bistatic radar [3], designed to detect and track targets moving in the narrow region along the transmitter-receiver baseline.
Its most attractive feature is its forward-scatter radar cross section (RCS), which depends on the target's physical cross-section, the wavelength, and the shape of the target's surface. A basic comparison to the traditional monostatic radar can be found in [4], whereas probably its most important feature is its robustness against stealth technology [5]. Moreover, the FSR receiver can utilize radiation from non-cooperative transmitters without revealing its location. In a hostile environment this is highly desirable, as the receiver may be used covertly. All these advantageous features created a 'come back' interest in FSR. In addition, FSR can be used for target classification, requires relatively simple hardware and has a long coherent interval of the received signal; this is the consequence of the loss in range resolution [6-7]. On the other hand, FSR presents a restrictive class of systems with a number of fundamental limitations, which include the absence of range resolution and operation within narrow angles only. This requires the target to be very close to the transmitter-receiver baseline, and the radar loses its ability to measure range when the target crosses the baseline.

II. Previous and Related Works

Some of the main literature devoted to FSR is given in [8, 9]. Generally, there is a lack of recent publications on FSR. Earlier publications pertaining to forward scattering were devoted to estimating the RCS of an object at forward scattering. Among others, [5] provides a theoretical analysis and various experimental results to prove that RAM (radar-absorbing material) coatings did not impose any effect on forward scattering when applied to highly conducting objects larger than the wavelength of the carrier. Nevertheless, the advantages of FSR became known much later. These included the increase in the RCS of the object at forward scattering.
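The forward-scatter RCS enhancement has a standard textbook approximation: for an opaque target whose silhouette area A is large compared with the wavelength, σ_FS ≈ 4πA²/λ². The sketch below evaluates it for an assumed car-sized silhouette at an assumed 1 GHz carrier; neither value comes from the reviewed papers.

```python
import math

def forward_scatter_rcs(area_m2, freq_hz):
    """Textbook forward-scatter RCS approximation: sigma = 4*pi*A^2 / lambda^2,
    valid for an opaque target whose silhouette is large vs. the wavelength."""
    lam = 3e8 / freq_hz          # wavelength from speed of light
    return 4 * math.pi * area_m2 ** 2 / lam ** 2

# Hypothetical car silhouette of 4 m x 1.5 m at a 1 GHz carrier
sigma = forward_scatter_rcs(4.0 * 1.5, 1e9)
print(f"{sigma:.0f} m^2 ({10 * math.log10(sigma):.1f} dBsm)")
```

A result in the thousands of square meters for a car-sized silhouette makes plausible the tens-of-dB advantage over the monostatic RCS that the forward-scatter literature reports.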
In [4], the author presented a fast and simple approach for estimating the effective forward scattering RCS of different targets at various operating frequencies. This was followed by [10], in which the authors experimentally confirmed that the RCS at forward scattering was bigger than in the monostatic case by 30-40 dB, depending on the carrier frequency. The authors in [11] discussed target detection and estimated the detection zones at forward scattering. They showed that the detection zone of an FSR depends on the type of the object and its flight trajectory. They calculated the bistatic RCS of the objects, related to the XY Cartesian coordinates, to estimate the detection zones. The authors in [12], on the other hand, suggested that detection is always lost at zero Doppler. This area is known as the 'dead zone'. The authors in [1] suggested that the areas worthy of investigation are the system issues of FSR. These include system configurations and system parameters, such as the operating frequencies, power levels, and baseline distance. All that delayed the deployment of FSR in practical applications.

A. Target Detection in FSR

The authors in [13] discussed FSR technology, its current and possible applications and its limitations in a feasibility study of automatic ground target detection and classification. The extraction of features from the radar measurements was also introduced. The authors proved that the FSR system has great potential to be used as an alternative for ground target detection and classification, based on PCA for feature extraction and a k-nearest neighbor classifier, through a real experiment with three vehicles carried out on a public road.
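The k-nearest-neighbor stage of such a pipeline is simple enough to sketch. Below is a minimal majority-vote kNN over Euclidean distance; the 2-D feature vectors stand in for PCA-reduced Doppler signatures and are entirely hypothetical, and the PCA step itself is omitted.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training samples
    (Euclidean distance in feature space)."""
    ranked = sorted(train,
                    key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g. after PCA) for three vehicle classes
train = [((0.2, 0.1), "small"), ((0.25, 0.15), "small"), ((0.3, 0.12), "small"),
         ((0.6, 0.5), "medium"), ((0.65, 0.55), "medium"), ((0.55, 0.45), "medium"),
         ((0.9, 0.85), "large"), ((0.95, 0.9), "large"), ((0.85, 0.8), "large")]

print(knn_predict(train, (0.62, 0.5)))
```

Because kNN stores the training set and defers all computation to query time, it suits the small, experiment-sized datasets typical of these FSR field trials.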
The authors in [14, 15] addressed the problem of extracting the Doppler signature in different interference environments by using the Hilbert transform and the wavelet technique in order to predict the existence of a target. Two experiments were carried out to collect the FSR signal under high clutter. The proposed method gave good results, with some reservations regarding wavelet issues. Soon after, the authors in [16, 17] studied the effect of clutter on automatic target classification (ATC) accuracy in FSR. It was shown that a conventional clutter-uncompensated ATC system can achieve high target classification accuracy only at high signal-to-clutter ratio (SCR), and that the accuracy drops significantly with decreasing SCR. The employment of a clutter-compensated ATC system is shown to improve significantly the classification accuracy at low SCR. The authors in [18] continued in the same field with improvements on the method proposed in [14, 15] by considering a rough environment (receiver and surrounding noises). Results showed that target detection using the Hilbert transform is applicable only under certain conditions, but target detection employing the wavelet technique is more robust against clutter and noise. An inclusive comparison of various wavelet threshold selection rules for different types of wavelet filters and levels of decomposition was conducted to study the effect on target detection with FSR. Two sets of field experiments were carried out to validate the proposed method, and target signals under the influence of large clutter were successfully detected with a confidence level exceeding 75%. Then, the authors in [19] implemented the Haar and Meyer wavelet techniques in FSR, which give more detailed scale and variation information from the measured signals. The results from the wavelet technique showed that they could find the similarity between signals of the same target and the dissimilarity between different targets.
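The core idea behind wavelet-threshold detection is easy to show in miniature: decompose the signal, shrink the small (noise-like) detail coefficients toward zero, and reconstruct. The sketch below uses a single-level Haar transform with soft thresholding on a toy step-like "signature"; it is a conceptual illustration, not the multi-level pipeline or threshold-selection rules of the cited papers.

```python
def haar_dwt(x):
    """Single-level Haar transform: pairwise (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of the single-level Haar transform above."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

# Noisy step-like "target signature": suppress noise by thresholding Haar details
signal = [0.1, -0.05, 0.08, 5.0, 5.1, 4.9, 0.02, -0.1]
a, d = haar_dwt(signal)
denoised = haar_idwt(a, soft_threshold(d, 0.2))
print(denoised)
```

The small fluctuations around zero are flattened while the 5.0-level plateau survives, which is exactly the property that makes thresholded wavelet details useful for declaring a target under clutter.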
The authors in [21] investigated accurate signal modeling for detecting moving targets in FSR by modifying the existing model algorithm using numerous simulations. They note that the existing signal models and algorithms are built on the assumptions that the baseline is long, the diffraction angle is small and the velocity direction of the target is approximately perpendicular to the baseline, whereas the ground-based FSR system is characterized by a short baseline, a large diffraction angle, and a target velocity direction that is not always perpendicular to the baseline. Therefore, in many cases, the above assumptions introduce significant errors in the results of a ground-based FSR. In the light of the ground-based FSR system, the signal model and imaging algorithm of traditional SISAR imaging technology were modified and gave good results. The authors in [22] state that the received signal in FSR depends on the target's electrical size and trajectory, which are unknown a priori. As a result, in practical situations, it is impossible to obtain an accurate reference function at the reception side. Therefore, they proposed a signal processing algorithm which includes the construction of adaptive reference functions and the identification of target velocity and observation time by the adaptation of an optimal filter (quasi-optimal). They tested the algorithm performance under practical motion trajectories, such as different motion directions and baseline crossing points, which indicates the effectiveness of the proposed algorithm in a practical FSR case. As a result, they found that the proposed methods are suitable for the identification of target parameters, in particular the target's observation time and speed. Knowing those parameters raises the possibility of obtaining accurate target recognition. The authors in [23] discussed the FSR cross section of different target specifications by conducting a simulation analysis of a multi-car model.
The study showed the effects of different target specifications on the RCS radiation pattern at different angles for each frequency. Novel studies were presented in [24, 25], where the authors used GPS radio shadows instead of previous FSR models in order to build a passive FSR system. An investigation of different moving objects was introduced. The results showed that from the FS-GPS radio shadows of different objects, information about the parameters of the object (size, speed and direction of movement, distance to the receiver) can be extracted from the width, shape and length of the received FS shadow. The occurrence of the FS shadow is an essential physical phenomenon, which can be used to extract useful information about the objects that create it. In [26], the authors proved that the forward scattering GPS radio shadow system can be used for detecting road vehicles in an urban environment. The authors in [27] followed the previous study of target shadows with the practical aspects of target profile reconstruction in a single-node ground-based FSR system and discussed the target return signal for three different cars measured in a real outdoor environment. The study proved that the modeling approach can be successfully used for simulation of the Doppler phase history, which is required in the procedure of complex envelope extraction. The similarity of the reconstructed and original profiles demonstrates that TSPR delivers results suitable both for visual interpretation of the target profile and for ATC.

B. Classification Techniques in FSR

In [2, 28-30], it was shown that FSR can be effectively used for ground target detection, and in particular for automatic vehicle recognition and classification, using different techniques and scenarios. Researchers classified four different types of vehicles into categories based on their sizes and types at a frequency of 1 GHz in an ideal-case scenario where the vehicles cross the baseline perpendicularly.
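The perpendicular baseline-crossing geometry used in these experiments has a characteristic Doppler signature: the bistatic Doppler shift passes through zero exactly at the crossing instant (the 'dead zone' noted earlier). The sketch below models it for a target crossing the mid-point of the baseline; the wavelength, baseline length and target speed are illustrative assumptions.

```python
import math

LAM = 0.3     # wavelength at an assumed 1 GHz carrier, m
BASE = 100.0  # assumed transmitter-receiver baseline length, m
V = 10.0      # assumed target speed, m/s, crossing mid-baseline perpendicularly

def doppler(t):
    """Bistatic Doppler f_d = -(1/lambda) * d(Rt + Rr)/dt for a target at
    (BASE/2, V*t), with Tx at (0, 0) and Rx at (BASE, 0); t = 0 at crossing."""
    y = V * t
    r = math.hypot(BASE / 2, y)   # range to Tx equals range to Rx by symmetry
    drdt = 2 * (y * V) / r        # d(Rt + Rr)/dt
    return -drdt / LAM

for t in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"t = {t:+.0f} s  f_d = {doppler(t):+.1f} Hz")
```

The sign flip through zero at t = 0 is the time-domain shape classifiers exploit: its width and slope encode the target's speed and size, while the zero itself explains why detection is momentarily lost on the baseline.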
In order to perform the classification, the combination of principal component analysis (PCA) and k-nearest neighbor (kNN) was proposed. The obtained results showed that good vehicle classification performance can be achieved. The authors in [31] worked on the evaluation of a network of forward scattering (FS) radar micro sensors for the detection and classification of ground targets, based on the findings of [8]. The system, operating in line-of-sight (LOS) conditions, was evaluated at both theoretical and experimental level in terms of power budget analysis and resolution. The authors demonstrated that an excellent resolution is achievable: the potential resolution of the system is equal to the target's horizontal dimension. The dynamic range of the system is also shown to be very high. In addition, a number of practical targets were considered as simulation examples over a wide range of radar carrier frequencies. In [29], the authors proved that the FSR system has great potential to be used as an alternative system for ground target detection and classification. The authors in [32] proposed another solution based on vehicle length and height profiles obtained by a microwave (MW) radar sensor. A precise feature vector can be extracted, and simple deterministic algorithms can be applied to determine the vehicle class. Field trials using a spread-spectrum MW radar sensor system operating on these principles were carried out. They confirmed that accurate classification of a large number of vehicle classes can be reached. The authors in [33] used an image-technique formulation to obtain the electric field integral equations (EFIEs) in order to classify cylindrical targets from their ultra-wide-band radar returns.
Then, the EFIEs were solved numerically by the method of moments (MoM). Because of the wide frequency range of the ultra-wide-band radar signal, the database to be used for target classification becomes very large. To deal with this problem and to provide robustness, the wavelet transform was utilized. Application of the wavelet transform significantly reduces the size of the database. The coefficients obtained by the wavelet transform are used as the inputs of artificial neural networks (ANNs). Then, the actual performances of the ANNs were investigated by receiver operating characteristic (ROC) analysis. Thus, the compressed inputs of the ANNs were determined and a dataset was formed. RBF, GRNN and MLP networks were investigated in this work. According to the testing and training rates, the best classifier is the MLP. To support this result, sensitivity and specificity values and the ROC curves were obtained, and it was observed that the MLP was better. Other authors dealt with a new system that uses a neural network (NN)-based methodology with various types of training algorithms. It reiterates the uniqueness and the ability of the neural network implementation to accurately classify an unknown vehicle signature based on the available training data. The input of the NN is defined as the vehicle length, and a back-propagation NN (BPNN) was used as the NN model. The paper proves that the NN is suitable to be used as a classifier, since the classification accuracy exceeds 90%. The authors in [35], on the other hand, proved the potential and utilization of NNs by comparing the kNN classifier and the conventional method (PCA) with the proposed one. Results showed that NNs can be effectively employed in FSR as an automatic classifier. After implementing a multi-layer perceptron (MLP), a BPNN trained with three back-propagation algorithms gave very promising results in vehicle recognition and vehicle categorization.
10% of the overall data was misclassified in vehicle recognition and only 2% in vehicle categorization. The same classification method was applied in [14, 36], but with a different trajectory, known as the angle of detection. Theoretically, the target's trajectory is one of the factors affecting the target's signature and thus contributes to poor performance of the classification system; hence, by using multiple sensors, the discrepancies in classification performance could be reduced. Later, the same classification system was tested at low frequencies, in the ultra high frequency (UHF) and very high frequency (VHF) bands [37], and the paper showed that good classification performance can be obtained even at low frequency. The authors in [38] proposed a novel ground vehicle classification approach using unmodulated CW radar. The radar was set up to look forward down the road, and vehicles were modeled as body targets composed of multiple scattering centers. Analysis showed that the spatial distribution of the scattering centers can be derived from the Doppler signature of the radar echo. A Hough transform was performed to estimate this distribution, which was then used for classification. In experiments, vehicles were classified into three types at an average accuracy of 94.8%. The authors in [39, 40] designed and developed three novel, distinct automatic target recognition (ATR) methods for classifying the observed targets into predefined classes (extremely randomized trees or subspace methods). A key feature of the approach was the breaking of the recognition problem into a set of subproblems by decomposing the parameter space, which consists of the frequency, the polarization, the aspect angle and the bistatic angle, into regions and building one recognizer for each region.
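Several of the classifiers surveyed above (kNN over PCA-reduced features, NN classifiers on vehicle-length features) ultimately vote among stored training signatures. As a minimal, purely illustrative sketch (not the authors' code; the feature vectors and labels below are invented), a pure-Python k-nearest-neighbor classifier over already-extracted feature vectors:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector.
    Returns the majority label among the k nearest training vectors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Invented example: 2-D features (e.g. a PCA-reduced signature) for two vehicle classes
train = [((0.1, 0.2), "car"), ((0.2, 0.1), "car"),
         ((0.9, 1.1), "truck"), ((1.0, 0.9), "truck")]
print(knn_classify(train, (0.15, 0.15), k=3))  # votes among the 3 nearest signatures
```

In the surveyed systems the feature vectors would come from the Doppler signature after PCA, z-score, or wavelet compression; here they are arbitrary points.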
The authors in [41] noted that previous studies did not consider rough-environment conditions in ground target classification systems: all experiments had been conducted in an ideal environment, whereas the environment significantly affects the classifier output. After adding simulated noise to the FSR signal output, an NN was used as the classifier. As a result, it was found that classification using an ANN is robust against noise up to a certain noise level. They developed an enhanced classification process in [42]; however, the performance of the classification system was still below a satisfactory level, especially under the influence of external factors such as clutter [16], target trajectory uncertainties [43], and the features used as the input to the classifier. The authors in [44] addressed the importance of the feature extraction process in the FSR system by evaluating manual and automatic reduction techniques (PCA and z-score). The main objective of that study was to identify the most suitable feature extraction algorithm for classifying ground vehicles based on their physical size. They continued in [45, 46] by improving the classification accuracy through the combination of z-score and NN. It was shown that as the number of features increases, the classification accuracy increases; the highest classification accuracy was achieved when using an NN5 system. The authors in [47] used the LTE signal as a source for passive bistatic radar (PBR) for detection and location of ground moving targets based on the bistatic RCS. Conventional processing was used as the classification approach, and simulations were performed using Computer Simulation Technology (CST) Microwave Studio. The simulation results showed that the ground moving target with the largest silhouette area had a better outcome than the other ground moving targets, which is consistent with Babinet's principle: in the forward scatter direction a target's RCS grows with its physical cross-sectional (silhouette) area, approximately as σ_fs ≈ 4πA²/λ² for a silhouette of area A at wavelength λ. In [48], the same authors extended their previous study, this time detecting humans instead of vehicles. Real experiments were performed by testing and evaluating different human sizes. It was discovered that the PSD of the individuals is inversely proportional to their heights, and in PCA the data of the individuals show good convergence within their respective groups. The authors in [49, 50] proposed a passive FSR system that exploits the enhancement in the forward scatter radar cross section (FSRCS) for target detection and recognition using LTE signals; this was the first classification result for a passive FSR system. The great potential of the passive FSR system opens a new research area in passive radar that can be used for diverse remote monitoring applications. In [51], the authors presented the latest feasibility studies and experimental results on using LTE signals in PBR applications. Details are provided about aspects such as signal characteristics, experimental configurations, and SNR studies. Six experimental scenarios were carried out to investigate the detection performance of the proposed system on ground-moving targets; the detection ability was demonstrated through the use of a cross-ambiguity function. The detection results suggested that LTE signals are suitable as a source signal for PBR.

III. Conclusion

This paper presents a review of target detection and classification in FSR and its advantages over other types of radar configuration. It is generally accepted that FSR can be used as an alternative system for ground target detection and classification.
Further, recent research has focused on the application of different classifiers to accurately classify unknown vehicle signatures. Moreover, feature extraction was addressed, especially PCA and z-score, to improve classification accuracy. The area of ground vehicle classification has attracted intensive study in the last few years, looking for alternatives to older systems and using FSR theory to reduce cost and to make use of existing transmitted signals. Therefore, GSM, LTE, GPS and FM signals were tested for detection and recognition of vehicles as well as humans using the same classification techniques. However, a number of problems remain unsolved, including the choice of the optimum frequency, the need for a more precise and intelligent speed estimation algorithm, the absence of range resolution, and operation within narrow angles. Another set of problems concerns the choice of feature extraction before injecting the signature into the classifier. Future research will focus on using artificial intelligence, especially NNs with bigger databases and feature extraction techniques as pre-processing, to improve the classification system toward automatic classification and wider use of FSR in other areas.

References
[1] M. Cherniakov (ed.), Bistatic Radar: Principles and Practice, Wiley, 2007
[2] A. R. S. A. Raja, Forward Scattering Radar for Vehicle Classification, PhD Thesis, University of Birmingham, 2007
[3] M. I. Skolnik, Introduction to Radar Systems, McGraw-Hill, NY, USA, 1962
[4] J. L. Glaser, "Bistatic RCS of complex objects near forward scatter", IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-21, No. 1, pp. 70-78, 1985
[5] R. E. Hiatt, K. M. Siegel, H. Weil, "Forward scattering by coated objects illuminated by short wavelength radar", Proceedings of the IRE, Vol. 48, No. 9, pp. 1630-1635, 1960
[6] H. Sun, D. K. P. Tan, Y. Lu, "Design and implementation of an experimental GSM based passive radar", International Radar Conference, Adelaide, Australia, IEEE, 2003
[7] D. K. P. Tan, H. Sun, Y. Lu, "Sea and air moving target measurements using a GSM based passive radar", IEEE International Radar Conference, Arlington, USA, IEEE, 2005
[8] M. Cherniakov, V. V. Chapurskiy, R. R. Abdullah, P. Jancovic, M. Salous, "Short-range forward scattering radar", International Radar Conference, pp. 322-328, 2004
[9] M. I. Skolnik, Radar Handbook, McGraw-Hill Professional, 1970
[10] Y. S. Chesnokov, M. V. Krutikov, "Bistatic RCS of aircrafts at the forward scattering", CIE International Conference of Radar, Beijing, China, IEEE, 1996
[11] A. B. Blyakhman, A. G. Ryndyk, S. B. Sidorov, "Forward scattering radar moving object coordinate measurement", IEEE 2000 International Radar Conference, Alexandria, VA, USA, IEEE, 2000
[12] D. M. Gould, R. S. Orton, R. J. E. Pollard, "Forward scatter radar detection", Radar 2002, Edinburgh, UK, pp. 36-40, 2002
[13] R. Abdullah, A. Ismail, "Forward scattering radar: current and future application", International Journal of Engineering and Technology, Vol. 3, No. 1, pp. 61-67, 2006
[14] K. H. Mohamed, M. Cherniakov, M. F. A. Rasid, R. S. A. Raja Abdullah, "Automatic target detection using wavelet technique in forward scattering radar", EuRAD 2008 European Radar Conference, Amsterdam, Netherlands, October 30-31, 2008
[15] M. K. H. M. Alla, R. S. A. Raja Abdullah, M. F. A. Raseed, "Detection of ground target in forward scattering radar using Hilbert transform and wavelet technique", International Review of Electrical Engineering, Vol. 4, No. 2, pp. 320-326, 2009
[16] N. E. Abd Rashid, P. Jancovic, M. Gashinova, M. Cherniakov, V. Sizov, "The effect of clutter on the automatic target classification accuracy in FSR", 2010 IEEE Radar Conference, Washington, DC, USA, IEEE, 2010
[17] N. E. B. Abd Rashid, Automatic Vehicle Classification in a Low Frequency Forward Scatter Micro-Radar, PhD Thesis, University of Birmingham, 2012
[18] R. S. A. Raja Abdullah, M. F. A. Rasid, M. K. Mohamed, "Improvement in detection with forward scattering radar", Science China Information Sciences, Vol. 54, No. 12, pp. 2660-2672, 2011
[19] K. A. Othman, M. I. Jusoh, N. E. Abd Rashid, C. W. F. C. Wan Fadhil, "Wavelet technique implementation in forward scattering radar (FSR) ground target signal processing", Journal of Telecommunication, Electronic and Computer Engineering, Vol. 9, No. 1-5, pp. 59-62, 2017
[20] M. Salah, M. F. A. Rasid, R. S. A. Raja Abdullah, M. Cherniakov, "Speed estimation in forward scattering radar by using standard deviation method", Modern Applied Science, Vol. 3, No. 3, pp. 16-25, 2009
[21] T. Zeng, X. Li, C. Hu, T. Long, "Investigation on accurate signal modelling and imaging of the moving target in ground-based forward scatter radar", IET Radar, Sonar & Navigation, Vol. 5, No. 8, pp. 862-870, 2011
[22] C. Hu, V. Sizov, M. Anoniou, M. Gashinova, M. Cherniakov, "Optimal signal processing in ground-based forward scatter micro radars", IEEE Transactions on Aerospace and Electronic Systems, Vol. 48, No. 4, pp. 3006-3026, 2012
[23] N. A. M. Daud, N. E. Abd Rashid, K. A. Othman, N. Ahmad, "Analysis on radar cross section of different target specifications for forward scatter radar (FSR)", Fourth International Conference on Digital Information and Communication Technology and its Applications, Bangkok, Thailand, IEEE, 2014
[24] C. Kabakchiev, I. Garvanov, V. Behar, P. Daskalov, H. Rohling, "Study of moving target shadows using passive forward scatter radar systems", 15th International Radar Symposium, Gdansk, Poland, IEEE, 2014
[25] C. Kabakchiev, K. Kabakchiev, I. Garvanov, V. Behar, K. Kulpa, H. Rohling, D. Kabakchieva, A. Yarovoy, "Experimental verification of target shadow parameter estimation", 17th International Radar Symposium, Krakow, Poland, IEEE, 2016
[26] I. Garvanov, C. Kabakchiev, V. Behar, M. Garvanova, "Target detection using a GPS forward-scattering radar", International Conference on Engineering and Telecommunication, Moscow, Russia, IEEE, 2015
[27] S. Hristov, L. Daniel, E. Hoare, M. Cherniakov, M. Gashinova, "Target shadow profile reconstruction in ground-based forward scatter radar", 2015 IEEE Radar Conference, Arlington, VA, USA, IEEE, 2015
[28] F. Kruse, F. Folster, M. Ahrholdt, H. Rohling, M. M. Meinecke, T. B. To, "Target classification based on near-distance radar sensors", 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, IEEE, pp. 722-727, 2004
[29] M. Cherniakov, M. Salous, R. S. A. Raja Abdullah, V. Kostylev, "Forward scattering radar for ground targets detection and recognition", Defence Technology Conference, Edinburgh, UK, 2005
[30] M. Cherniakov, R. S. A. R. Abdullah, P. Jancovic, M. Salous, V. Chapursky, "Automatic ground target classification using forward scattering radar", IEE Proceedings - Radar, Sonar and Navigation, Vol. 153, No. 5, pp. 427-437, 2006
[31] M. Cherniakov, M. Salous, V. Kostylev, R. S. A. Raja Abdullah, "Analysis of forward scattering radar for ground target detection", 2005 European Radar Conference, Paris, France, pp. 165-168, IEEE, 2005
[32] I. Urazghildiiev, R. Ragnarsson, P. Ridderstrom, A. Rydberg, E. Ojefors, K. Wallin, P. Enochsson, M. Ericson, G. Lofqvist, "Vehicle classification based on the radar measurement of height profiles", IEEE Transactions on Intelligent Transportation Systems, Vol. 8, No. 2, pp. 245-253, 2007
[33] S. Makal, A. Kizilay, L. Durak, "On the target classification through wavelet-compressed scattered ultrawide-band electric field data and ROC analysis", Progress in Electromagnetics Research, Vol. 82, pp. 419-431, 2008
[34] R. S. A. Raja Abdullah, M. I. Saripan, M. Cherniakov, "Neural network based for automatic vehicle classification in forward scattering radar", 2007 IET International Conference on Radar Systems, Edinburgh, UK, IEEE, 2007
[35] N. K. Ibrahim, R. S. A. Raja Abdullah, M. I. Saripan, "Artificial neural network approach in radar target classification", Journal of Computer Science, Vol. 5, No. 1, pp. 23-32, 2009
[36] M. Cherniakov, R. S. A. Raja Abdullah, P. Jancovic, M. Salous, "Forward scattering micro sensor for vehicle classification", 2005 IEEE International Radar Conference, Arlington, VA, USA, IEEE, 2005
[37] N. E. A. Rashid, M. Antoniou, P. Jancovic, V. Sizov, R. Abdullah, M. Cherniakov, "Automatic target classification in a low frequency FSR network", European Radar Conference, Amsterdam, Netherlands, IEEE, 2008
[38] J. X. Fang, H. D. Meng, H. Zhang, X. Q. Wang, "A ground vehicle classification approach using unmodulated continuous-wave radar", 2007 IET International Conference on Radar Systems, Edinburgh, UK, IEEE, 2007
[39] J. Pisane, Automatic Target Recognition Using Passive Bistatic Radar Signals, PhD Thesis, Supélec, 2013
[40] J. Pisane, S. Azarian, M. Lesturgie, J. Verly, "Automatic target recognition for passive radar", IEEE Transactions on Aerospace and Electronic Systems, Vol. 50, No. 1, pp. 371-392, 2014
[41] M. K. H. M. Alla, M. Kanona, A. G. Elsid, "Target classification in forward scattering radar in noisy environment", International Journal of Application or Innovation in Engineering & Management, Vol. 3, No. 11, pp. 1-5, 2014
[42] M. E. A. Kanona, A. G. Abdalla, M. K. H. M. Alla, Y. A. Hamdalla, "Enhanced neural network based ground target classification", Technology Horizons Journal, Vol. 1, pp. 1-5, 2018
[43] N. E. A. Rashid, N. Ahmad, N. F. Abdullah Nor, N. Ismail, A. A. Bt Jamaludin, "The effect of different ground characteristic to the stability and similarity of target spectra in FSR micro-sensor network", 2012 IEEE Student Conference on Research and Development, Pulau Pinang, Malaysia, IEEE, 2012
[44] N. F. Abdullah, N. E. A. Rashid, I. Musirin, Z. I. Khan, "Vehicles classification based on different combination of feature extraction algorithm with neural network (NN) using forward scattering radar (FSR)", Journal of Theoretical & Applied Information Technology, Vol. 77, No. 3, pp. 311-317, 2015
[45] N. F. Abdullah, N. E. A. Rashid, K. A. Othman, Z. I. Khan, I. Musirin, "Ground vehicles classification using multi perspective features in FSR micro-sensor network", Journal of Telecommunication, Electronic and Computer Engineering, Vol. 9, No. 1-5, pp. 49-52, 2017
[46] N. F. Abdullah, N. E. A. Rashid, Z. I. Khan, I. Musirin, "Analysis of different z-score data to the neural network for automatic FSR vehicle classification", IET International Radar Conference, Hangzhou, China, IEEE, 2015
[47] N. A. Aziz, R. S. A. R. Abdullah, "RCS classification on ground moving target using LTE passive bistatic radar", Journal of Scientific Research and Development, Vol. 3, No. 2, pp. 57-61, 2016
[48] N. H. A. Aziz, R. S. A. R. Abdullah, A. N. M. Yusof, "Human detection and recognition system using passive forward scattering radar", Science International, Vol. 29, No. 1, pp. 69-73, 2017
[49] R. S. A. R. Abdullah, N. H. A. Aziz, N. E. Abdul Rashid, A. A. Salah, F. Hashmin, "Analysis on target detection and classification in LTE based passive forward scattering radar", Sensors, Vol. 16, No. 10, p. 1607, 2016
[50] R. S. A. R. Abdullah, A. A. Salah, N. H. A. Aziz, N. E. Abdul Rashid, "Vehicle recognition analysis in LTE based forward scattering radar", IEEE Radar Conference, Philadelphia, PA, USA, IEEE, 2016
[51] R. S. A. R. Abdullah, A. A. Salah, A. Ismail, F. Hashim, N. E. Abdul Rashid, N. H. A. Aziz, "LTE-based passive bistatic radar system for detection of ground-moving targets", ETRI Journal, Vol. 38, No. 2, pp. 302-313, 2016

Engineering, Technology & Applied Science Research, Vol. 9, No. 3, 2019, 4176-4181 www.etasr.com Kote & Wadkar: Modeling of Chlorine and Coagulant Dose in a Water Treatment Plant by Artificial …

Modeling of Chlorine and Coagulant Dose in a Water Treatment Plant by Artificial Neural Networks

Alka S. Kote, Department of Civil Engineering, Dr. D. Y. Patil Institute of Technology, Pune, India, alkakote26@gmail.com
Dnyaneshwar V. Wadkar, Dr. D. Y. Patil Institute of Technology and AISSMS College of Engineering, Pune, India, dvwadkar_civil@yahoo.co.in

Abstract—Coagulation and chlorination are complex processes of a water treatment plant (WTP). Determination of the coagulant and chlorine dose is time-consuming, and WTP operators in India often set these doses approximately from experience, which may lead to the use of an excess or insufficient dose. Hence, there is a need to develop prediction models to determine the optimum chlorine and coagulant doses. In this paper, artificial neural networks (ANNs) are used for prediction due to their ability to learn and model non-linear and complex relationships. Separate ANN models for the chlorine and coagulant doses are explored with the radial basis function neural network (RBFNN), feed-forward neural network (FFNN), cascade feed-forward neural network (CFNN) and generalized regression neural network (GRNN). For modeling, daily water quality data of the last four years were collected from the plant laboratory of a WTP in Maharashtra (India). In order to improve performance, these models are established by varying the input variables, hidden nodes, training functions, spread factor, and epochs. The best models are selected based on the comparison of performance measures.
Based on the defined statistics, the best performing chlorine dose model is found to be the RBFNN with R=0.999. Similarly, the CFNN coagulant dose model with the Bayesian regularization (BR) training function provided excellent estimates with a 2-40-1 network architecture and R=0.947. Based on the above models, two graphical user interfaces (GUIs) were developed for real-time prediction of the chlorine and coagulant dose, which will be useful for plant operators and decision makers.

Keywords—artificial neural networks; chlorine dose; coagulant dose; water treatment; modeling

I. Introduction

Water treatment consists of many complex physical and chemical processes, whose efficiency is assessed by examining the quality of the outlet water. Generally, in India, WTP operators take the necessary remedial measures for water quality improvement using only their experience. This practice is inefficient and time-consuming in monitoring real-time responses [1, 2]. In a WTP, coagulation and disinfection are essential treatment processes, as they assure the supply of safe and clear water. Conventionally, chlorine is the most widely used disinfectant, and aluminum sulphate (alum) is used as a coagulant due to its high efficiency and low cost. Two vital factors, turbidity and the applied dosages, decide the effectiveness of chlorination and coagulation [3]. Turbidity provides a shield to microbes, which reduces the efficiency of chlorination; it raises the chlorine demand, which results in less residual chlorine being available in water distribution networks (WDNs) [4, 5]. In India, WDNs are old and have leakage issues responsible for microbial contamination, and plant operators tend to apply a higher chlorine dose to maintain the desired residual chlorine in the WDN. A high chlorine dose increases the probability of trihalomethane (THM) formation.
Consumption of THM-containing water creates adverse effects on human health such as high blood pressure, reproductive system disorders, and cancer inception [6]. A chlorine predictive model will help monitor the process and avoid complex laboratory analysis, which requires more time and money. Coagulation and chlorination processes are non-linear in nature, which is hard to express using linear mathematical models [7], and water treatment processes are difficult to model due to complex interactions among many chemical and physical reactions. Thus, the application of ANNs is considered for the prediction of the optimum coagulant and chlorine dose. An ANN is a biologically inspired system consisting of a number of interconnected elements called neurons, arranged in input, hidden and output layers. The layers are densely connected, like human brain synapses, and the weights are optimized using input and output variables [8]. An ANN has the ability to learn and model non-linear and complex relationships. Several studies have been carried out on the prediction of the coagulant dose for particular WTPs [9-15]. RBFNNs and GRNNs have shown good performance in predicting residual chlorine in a WTP [16]. Thus, two ANN models are explored for prediction of the coagulant and chlorine dose for a major WTP of Pimpri-Chinchwad Municipal Corporation (PCMC), Maharashtra, India.

II. Materials and Methods

A. Study Area

The WTP under study is located in PCMC, Maharashtra, India, at 18°37'33.87''N and 73°48'43.76''E. This WTP supplies 428 MLD of water to an area of 177 km² with 117,936 water connections and 59 elevated service reservoirs. (Corresponding author: Alka S. Kote)

B. Methodology

This study presents an ANN-based methodology for the prediction of the chlorine and coagulant dose in a WTP. Chlorine dose models were developed with the coagulant dose, outlet water turbidity, and residual chlorine as input variables and the chlorine dose as the output variable. Similarly, coagulant dose models were developed with the inlet and outlet water turbidity as input variables and the coagulant dose as the output variable. Daily data of inlet and outlet water quality were collected from the plant laboratory over a period of four years (2012-2016). The ANN models were developed using MATLAB version 16. Four types of ANN models (RBFNN, feed-forward neural network (FFNN), cascade feed-forward neural network (CFNN) and GRNN) were developed by a trial-and-error method, modifying the input variables, hidden nodes, training functions, spread factor (SF), and epochs to improve the models' performance. Establishing the optimum number of hidden nodes in ANN applications is always a challenging task, as there is no precise and easy way to determine the optimum number of nodes in each layer [17-20]. To set the hidden neurons in a hidden layer in this study, the numbers of nodes in both the input and output layers are used. During the development of the ANN models, the training and testing data are split as 75:30 and 80:20 respectively. Diversified training functions, such as Bayesian regularization (BR), Levenberg-Marquardt (LM), resilient back propagation (RP), BFGS quasi-Newton (BFG), one step secant (OSS), conjugate gradient back propagation (CGB), conjugate gradient back propagation with Fletcher-Powell (CGF), variable learning rate gradient descent (VLRGD), gradient descent (GD), and gradient descent with momentum (GDM), are used for the development of the FFNN and CFNN models. It has been reported that an SF of 1 and 0.1 provided the best testing performance of the RBFNN and GRNN models respectively [10].
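The spread factor controls the width of the Gaussian basis functions in the RBFNN and GRNN models: a small SF makes each neuron respond only very close to its center. A minimal sketch of a single Gaussian radial basis unit (illustrative only; MATLAB's RBF tools parameterize the spread somewhat differently, so this is an assumption about the general form, not the paper's exact implementation):

```python
import math

def rbf_unit(x, center, sf):
    """Gaussian radial basis activation with spread factor sf."""
    return math.exp(-((x - center) ** 2) / (2.0 * sf ** 2))

# At its center a unit always fires at 1.0, regardless of sf
print(rbf_unit(2.0, 2.0, 0.1))
# Away from the center, a smaller spread decays much faster
print(rbf_unit(2.5, 2.0, 0.1) < rbf_unit(2.5, 2.0, 1.0))
```

This is why, in the results below, small SF values fit sharp local structure in the dose data while large SF values smooth over it.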
Therefore, in this study, both the RBFNN and GRNN models are tried for SF values ranging from 0.1 to 15. The performance of these ANN models is quantified using standard statistics, namely the mean (x̄), standard deviation (σ), skewness (γ1) and kurtosis (γ2), and error statistics such as the coefficient of regression (R), mean square error (MSE), and mean absolute error (MAE). The best performing ANN model is selected for its highest R and lowest MSE and MAE values. Also, the mapping of the predicted series onto the observed series is checked with standard statistics, time series plots and scatter plots. Two GUIs for prediction of the chlorine and coagulant dose were developed for the best model in each category.

III. Results and Discussion

Based on the methodology explained above, 48 ANN models for prediction of the chlorine dose and 44 ANN models for prediction of the coagulant dose were developed. The networks were rigorously trained, and the performances of the training functions are shown in Table I. It is found that the training functions LM and BR are highly effective (R=0.943 and R=0.947 respectively) for FFNN and CFNN. The other training functions showed very poor correlation between the observed and the predicted values. Therefore, the LM and BR training functions are used for further development of the best models.

Table I. Training function performance (training stage)

  Training function    R
  LM                   0.943
  BR                   0.947
  BFG                 -0.866
  RP                   0.142
  CGB                 -0.729
  CGF                 -0.882
  OSS                  0.016
  VLRGD               -0.591
  GD                  -0.321
  GDM                  0.187

A. Chlorine Dose ANN Model

Chlorine dose ANN models were developed using 1849 data samples, with the coagulant dose, outlet water turbidity, and residual chlorine as input variables and the chlorine dose as the output variable. These variables are closely associated with the chlorination process. ANN models, namely Models I, II and III, were developed by varying the input variables.

1) Chlorine Dose ANN Model I

For the development of ANN Model I, one input variable, viz. the coagulant dose, is adopted.
Sixteen FFNN, CFNN, RBFNN, and GRNN models were developed. These models were compared using the performance measures, and it was observed that all of them resulted in poor performance (R<0.72). Figure 1 shows the plot of the observed and predicted series of the FFNN, CFNN, RBFNN, and GRNN chlorine dose models during testing.

Fig. 1. Comparison of the best chlorine dose ANN Model I (testing stage)

2) Chlorine Dose ANN Model II

For the development of ANN Model II, two input variables, the coagulant dose and residual chlorine, were adopted. Several FFNN, CFNN, RBFNN, and GRNN models were developed and tested to obtain an appropriate network with satisfactory performance. Standard statistics were observed during the testing stage: σ varied from 0.036 to 0.128, γ1 from -8.717 to -1.713, and γ2 from 17.667 to 89.15. Similarly, for the error statistics, the MSE varied from 0.001 to 0.020, the MAE from 0.015 to 0.120, and R from 0.695 to 0.97. The prediction by the RBFNN2 model with an SF of 0.1 was clearly superior. The comparison of the best chlorine dose ANN Model II among the FFNN, CFNN, RBFNN, and GRNN models is shown in Figure 2.

Fig. 2. Comparison of the best chlorine dose ANN Model II (testing stage)

3) Chlorine Dose ANN Model III

For the development of ANN Model III, three input variables, the coagulant dose, outlet water turbidity and residual chlorine, were adopted. For the RBFNN and GRNN models the SF varied from 0.1 to 15, whereas for the FFNN and CFNN models the training functions were varied, and the minimum/maximum values of the performance parameters were noted. The developed models were tested in order to obtain an appropriate network with satisfactory performance.
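The error statistics used throughout to rank the models (R, MSE, MAE) can be computed directly from the observed and predicted series; a pure-Python sketch (the data below is illustrative, not plant measurements):

```python
import math

def mse(obs, pred):
    """Mean square error between observed and predicted series."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    """Mean absolute error between observed and predicted series."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def pearson_r(obs, pred):
    """Pearson correlation coefficient (the R used to rank models)."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    num = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs) *
                    sum((p - mp) ** 2 for p in pred))
    return num / den

obs = [1.8, 1.9, 2.0, 2.1]
pred = [1.8, 1.9, 2.0, 2.1]  # a perfect model gives R=1, MSE=MAE=0
print(pearson_r(obs, pred), mse(obs, pred), mae(obs, pred))
```

The selection rule in the text then amounts to maximizing R while minimizing MSE and MAE over the candidate networks.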
A comparison of the best chlorine dose ANN Model III among the FFNN, CFNN, RBFNN, and GRNN models is shown in Figure 3, where the plot of the chlorine dose predicted by the RBFNN3 model follows the observed chlorine dose closely. The performance of all ANN models is displayed in Table II, indicating the minimum and maximum values of the standard and error statistics. The standard statistics varied as follows: σ from 0.026 to 1.005, γ1 from -10.24 to 1.032, and γ2 from 5.309 to 110.45. Similarly, for the error statistics, the MSE varied from 0.001 to 1.069, the MAE from 0.009 to 0.98, and R from -0.237 to 0.99. The RBFNN model with SF 0.1 produced the highest R compared to all other ANN models. For the RBFNN and GRNN models, it was found that the prediction efficiency increased with decreasing SF value. Further, the FFNN and CFNN models with the BR training function produced good predictions compared to all other training functions; however, these models are less efficient.

Fig. 3. Comparison of the best chlorine dose ANN Model III (testing stage)

Table II. Minimum and maximum values of standard and error statistics of chlorine dose ANN Model III

  Model   Min/Max   x̄       σ       γ1        γ2        R        MSE      MAE
  RBFNN   min       1.771    0.026   -10.24      7.813   -0.237    1.069    0.009
  RBFNN   max       1.949    1.005     1.032   110.45     0.999    0.001    0.391
  GRNN    min       1.851    0.138    -3.786    11.231    0.053    0.023    0.099
  GRNN    max       1.91     0.166    -2.324    28.395    0.477    0.051    0.199
  FFNN    min       1.867    0.166    -3.107     5.309    0.239    0.025    0.1
  FFNN    max       1.918    0.192    -1.035    20.4      0.444    1.028    0.982
  CFNN    min       1.882    0.164    -2.263     4.76     0.277    0.035    0.124
  CFNN    max       1.898    0.234    -0.51     12.078    0.433    0.143    0.344

The prediction with SF values ranging from 0.1 to 1 was clearly superior among the RBFNN models. Therefore, the performance parameters of the best RBFNN models during the training and testing stages are shown in Tables III and IV. It is also observed that the RBFNN3 model gave better performance than RBFNN2 and RBFNN1.

Table III. Standard statistics of the best RBFNN models

                     Training                            Testing
  ANN model          x̄       σ       γ1       γ2        x̄       σ       γ1       γ2
  Observed values    1.909    0.208    2.097    12.31     1.954    0.171    2.53     12.39
  RBFNN1             1.910    0.137   -1.967    15.71     1.962    0.120   -2.43     12.28
  RBFNN2             1.910    0.044   -4.286    62.15     1.954    0.036   -1.71     17.66
  RBFNN3             1.910    0.026   -3.027    98.89     1.953    0.026    1.032    21.04

Table IV. Parameters of the best RBFNN models

               Training                   Testing
  ANN model    R       MSE     MAE        R       MSE     MAE
  RBFNN1       0.715   0.014   0.068      0.753   0.018   0.077
  RBFNN2       0.978   0.002   0.013      0.977   0.001   0.015
  RBFNN3       0.989   0.001   0.006      0.999   0.001   0.009

Table III provides the standard statistics x̄, σ, γ1 and γ2 of the best ANN models during training and testing. It was found that the RBFNN3 model showed the lowest σ and a higher positive γ2. The lowest σ implies that the data points are near the mean of the database, while the higher γ2 indicates that the database has a heavier tail compared to a normal distribution. Furthermore, as shown in Table IV, the RBFNN3 model delivered excellent performance, with R=0.999, MSE=0.001 and MAE=0.009 during the testing stage. The time series and scatter plots of the RBFNN3 model during the testing stage are shown in Figures 4 and 5. The observed and predicted series of the chlorine dose are close, thereby indicating the best model. Overall, the RBFNN model showed excellent prediction results.

Fig. 4. Time series plot of RBFNN3 during the testing stage

Fig. 5. Scatter plot of the RBFNN models during the testing stage

B. Coagulant Dose ANN Model

Regarding the coagulant dose, 44 ANN models were tried using FFNN and CFNN networks, considering the inlet and outlet water turbidity as input variables. The database of input and output variables required for the ANN modeling consisted of 11688 data points.
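The standard statistics used to compare the observed and predicted series can likewise be computed from central moments. Definitions vary (sample vs. population moments, raw vs. excess kurtosis); this sketch uses population moments and raw kurtosis (m4/m2²), which may differ from the paper's exact convention:

```python
def standard_stats(data):
    """Return mean, standard deviation, skewness (γ1) and kurtosis (γ2)
    from population central moments."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    sigma = m2 ** 0.5
    skew = m3 / m2 ** 1.5   # γ1: asymmetry about the mean
    kurt = m4 / m2 ** 2     # γ2, raw: a normal distribution gives 3
    return mean, sigma, skew, kurt

mean, sigma, skew, kurt = standard_stats([1.0, 2.0, 3.0])
print(mean, sigma, skew, kurt)  # symmetric data: skewness is 0
```

A low σ with a large γ2, as reported for RBFNN3, means most values sit near the mean with a heavy-tailed minority far from it.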
The training and testing data were divided in ratios of 70:30 and 80:20 during model building. The ANN models were developed through several steps of training and testing with the various training functions described in Section II(B). It was observed that the BR and LM training functions gave good R values of 0.947 and 0.944 respectively, whereas RP, OSS, and GDM performed poorly and BFG, CGF, CGB, VLRGD, and GD showed negative correlation, indicating their incapability. Therefore, the LM and BR training functions were used to develop the FFNN and CFNN models, namely FFNN1, FFNN2, CFNN1, and CFNN2. During the development of these models, the number of hidden nodes was varied from 15 to 60, and the corresponding R ranged from 0.936 to 0.947. The best performance was produced by a CFNN model using the BR training function with 40 hidden nodes, with R=0.952 for training, R=0.922 for testing, and an overall R of 0.947. The CFNN model, owing to the weighted connections of the input layer with both the hidden and output layers, mapped the input-output relationship very well. The BR training function also provides a decisive criterion for ending the training step and counters overtraining of the network. Table V shows the performance of the best models during the testing stage. Both the FFNN and CFNN models performed equally well, but the CFNN model showed slightly better prediction, with a 46.85% reduction in MSE in the testing stage. The error statistics of the CFNN model are very close to the observed values. The prediction of coagulant dose by the best developed model, CFNN2, during the testing stage was carried out with 248 data points. The observed coagulant dose at the WTP is constant for a specific period, indicated by the straight line, whereas the coagulant dose predicted by the CFNN2 model shows variation, as shown in Figure 6.
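The R, MSE, and MAE comparisons above, together with the standard statistics x̄, σ, γ1, and γ2 from the earlier tables, can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function names and the small observed/predicted dose arrays are ours, not the paper's data:

```python
# Sketch of the evaluation metrics used in the study: standard statistics
# (mean x̄, std σ, skewness γ1, kurtosis γ2) and error statistics (R, MSE, MAE).
import numpy as np

def standard_stats(x):
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()  # standardized with population std (nonzero here)
    return {
        "mean": x.mean(),
        "sigma": x.std(ddof=1),        # sample standard deviation
        "gamma1": np.mean(z ** 3),     # skewness (moment estimator)
        "gamma2": np.mean(z ** 4),     # kurtosis (normal distribution ≈ 3)
    }

def error_stats(observed, predicted):
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return {
        "R": np.corrcoef(o, p)[0, 1],          # correlation coefficient
        "MSE": np.mean((o - p) ** 2),          # mean squared error
        "MAE": np.mean(np.abs(o - p)),         # mean absolute error
    }

# Hypothetical chlorine doses (mg/L), for illustration only
obs = np.array([1.9, 2.1, 1.8, 2.0, 2.2])
pred = np.array([1.85, 2.05, 1.9, 2.0, 2.15])
print(standard_stats(obs))
print(error_stats(obs, pred))
```

A model comparison like the one in the paper then reduces to computing `error_stats` per model on the held-out testing data and preferring higher R with lower MSE and MAE.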
Table V. Performance comparison of the best ANN models

ANN model    Training function   Epochs   Hidden nodes   R       MSE
FFNN1        LM                  26       60             0.944   185.09
FFNN2        BR                  500      50             0.945   113.13
CFNN1        LM                  36       60             0.943   59.22
CFNN2        BR                  500      40             0.947   99.28

Fig. 6. Prediction of coagulant dose by the best CFNN2 model during the testing stage

The predicted values of the coagulant dose do not follow the pattern of the actual coagulant dose between data points 60 and 90. This could be due to the wide range of inlet water turbidity and coagulant dose values in the ANN training data. Despite these variations, the average actual and predicted coagulant doses show a similar trend. Among all the models, the CFNN2 model gave the best performance, with R=0.947 and MSE=99.28. Therefore, the CFNN model is the more capable and precise choice for modeling the coagulation process.

IV. Model Implementation

The presence of residual chlorine in the WDN is a major concern in India due to water leakage and distribution issues. Hence, the required residual chlorine at the outlet of the WTP is an important aspect for the distribution network. Two GUIs were developed to determine the chlorine and coagulant dose using the best performing models, RBFNN3 and CFNN2 respectively (Figures 7-8). In the chlorine dose GUI, the chlorine dose is predicted from the coagulant dose, the outlet water turbidity, and the residual chlorine. The plant operator can set the chlorine dose according to the desired residual chlorine (0.2 mg/L) at the end of the WDN. Similarly, in the coagulant dose GUI, the plant operator can set the coagulant dose from the inlet and outlet water turbidity (less than 5 NTU).

Fig. 7. Snapshot of the chlorine dose ANN model GUI
Fig. 8. Snapshot of the coagulant dose ANN model GUI

V. Conclusion

The chlorine and coagulant doses in a WTP are typically determined through laboratory analysis that requires long experimental times. Thus, GUIs for the chlorine and coagulant dose were developed using ANNs. During the ANN development, the BR training function showed better prediction capability than LM, RP, BFG, OSS, CGB, CGF, VLRGD, GD, and GDM. Among all chlorine dose ANN models, the RBFNN3 model (R=0.999) delivered the best performance. One of the most important findings of the study is that decreasing the SF value increases the performance of the RBFNN and GRNN models. For the coagulant dose models, it was found that increasing the number of input variables improved the performance of the ANN models. All FFNN and CFNN models with the LM and BR training functions performed well, especially for lower coagulant dose values; higher values were under-predicted. The CFNN2 model (R=0.947) with the BR training function provided the best prediction of the coagulant dose. The GUIs of the best ANN models will be very useful tools for plant operators and managers when deciding the required chlorine and coagulant doses.
Engineering, Technology & Applied Science Research, Vol. 9, No. 3, 2019, 4276-4280, www.etasr.com

Controlling Measures for Cost Overrun Causes in Highway Projects of Sindh Province

Samiullah Sohu, Department of Civil Engineering, Quaid-e-Awam University of Engineering, Science and Technology, Sindh, Pakistan, engr.samiullah@quest.edu.pk
Abd Halid Abdullah, Faculty of Civil and Environmental Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia, abdhalid@uthm.edu.my
Sasitharan Nagapan, Faculty of Civil and Environmental Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia, sasitharan@uthm.edu.my
Touqeer Ali Rind, Department of Civil Engineering, Mehran University of Engineering and Technology, Shaheed Zulfiqar Ali Bhutto Campus, Khairpur Mirs', Pakistan, touqeerali@muetkhp.edu.pk
Ashfaque Ahmed Jhatial, Department of Civil Engineering, Mehran University of Engineering and Technology, Shaheed Zulfiqar Ali Bhutto Campus, Khairpur Mirs', Pakistan, ashfaqueahmed@muetkhp.edu.pk

Abstract—Cost overrun is a serious issue in the construction industry worldwide, including Pakistan, and among all construction projects in Pakistan it is most critical in highway projects. Cost overrun occurs when the final cost of a project exceeds its estimated cost. The main objective of this research is to identify the main causes of cost overrun in highway/road projects of Pakistan and to determine possible mitigation measures for these causes, from the contractors' perspective. A mixed-mode (quantitative and qualitative) approach was used. A deep literature review helped identify the 30 most common causes of cost overrun in the construction industry. In the first stage, a questionnaire was developed and a survey was carried out among professionals and experts working with contractors in highway projects.
In the second stage, a semi-structured questionnaire was developed and a survey was carried out to determine possible mitigation measures for the identified main causes of cost overrun. The data collected in the first stage were entered into SPSS and analyzed using the average index method, while the second-stage data were analyzed by content analysis. The findings of this research are expected to be useful to construction stakeholders in controlling and mitigating the major causes of cost overrun.

Keywords—cost overrun; highway projects; contractors; controlling measures

I. Introduction

The construction industry is a major source of economic growth for any country [1]. It advances the quality and standard of living through the construction of infrastructure such as schools, roads, and hospitals [2]. The industry is complex and fragmented in nature, and therefore faces critical problems of cost overrun, low quality, time overrun, construction waste, low productivity, etc. Among these, cost overrun is one of the major problems, as money is always of high importance [3, 4]. Cost overrun is a serious issue for both developing and developed countries [5]. Its basic definition is that the final cost is higher than the budget estimated at an earlier stage [6]. In the Qatari construction industry, more than 50% of construction projects have faced serious cost overrun issues [7, 8]. As reported in [9], 60% of construction projects in Singapore were badly affected by cost overrun. According to [10], most construction projects experience an average cost increase of more than 33%. Cost or budget overrun in construction projects is a universal and regular phenomenon, and only a few projects are completed within the approved and estimated budget [11].
According to [12], cost overrun can exceed 100% of the total project cost in both developed and developing countries. The Pakistani construction industry is also facing this serious issue [13, 14], mostly in highway projects. Construction projects in Pakistan have faced cost overrun, with most projects exceeding their original cost by more than 50% [15]. Although various studies have been conducted on cost overrun in construction projects, only a few have proposed mitigation measures for its major causes, probably because the causes vary from country to country [16]. Pakistan is a developing country facing cost overrun in all types of construction projects, especially highway projects. Therefore, there is a need to explore the major causes of cost overrun in highway projects and determine ways to cope with the problem. Hence, the objectives of this research are:
• To find the major causes of cost overrun in highway/road projects in Pakistan.
• To determine possible mitigation measures for the identified causes from experienced respondents.

Corresponding author: Samiullah Sohu

II. Literature Review

Many researchers have studied the causes of cost overrun in different types of construction projects, and as a result a number of causes have been identified. The authors in [17] identified the most critical causes of cost overrun in construction projects in Afghanistan by conducting a questionnaire survey of 75 respondents from different organizations.
The survey concluded that the major causes affecting total project cost were delays in the payment process from the client to the contractor, corruption in tendering and billing, financial difficulties and problems faced by the contractor/builder, security issues at the construction site, market inflation, and sudden changes from the client. The authors in [18] carried out a quantitative survey aiming to identify the major causes of cost overrun in Cambodian construction projects. The results showed that mistakes in project cost estimation, lack of communication, and unsuitable construction methods were the main causes of cost overrun. The authors in [19] carried out a survey of Indian construction projects to find the major causes of cost overrun, distributing a questionnaire among 190 professional experts in the construction industry. The results identified escalation of raw material prices, lack of communication between parties, frequent and sudden design changes, wastage and misuse of materials at the construction site, labor disputes, lack of on-site financial control, owner interference, mistakes during construction, relationship issues, and labor and management issues as the major reasons. Research was conducted in the Palestinian construction industry through a survey of 151 questionnaires distributed to contractors, clients, and consultants [20]. Analysis of the collected data revealed that materials, design and documentation, professional management, contractual relationships, external factors, owner's responsibilities, government relations, contractor's responsibilities, consultant's responsibilities, and labor and equipment were the major cause categories of cost overrun. The authors in [21] carried out a questionnaire survey and interviewed professionals and experts of the construction industry in South Africa.
The outcome of that survey was that incomplete information at the time of tendering, changes in project scope, contractual claims, improper planning of funds, additional works, and improper fund monitoring were the main causes of cost overrun. The authors in [22] investigated the main causes of cost overrun in Uganda's construction projects through a questionnaire survey. The results showed that the main causes were high interest and inflation rates, changes in project scope, poor control and monitoring, deficiencies in contract documents, and delays in the payment process. The authors in [16] identified the causes of cost overrun using a quantitative method: the contractor's poor site management, additional work, poor supervision, poor project management assistance, rapid design changes, delayed payments for completed works, material costs, unforeseen site conditions, shortages and delayed arrival of materials at the site, and inaccurate estimates. Based on [23-36], thirty common causes of cost overrun were identified: inadequate planning, delay in the payment process by the client, owner interference, poor contract management, delay in decision making, shortage of materials, fluctuation in material prices, financial difficulties of the contractor, poor site management, natural disasters, changes in material specifications and type, poor financial control at the site, mistakes and errors in design, lack of experience of the technical consultant, additional works, mistakes and discrepancies in contract documents, accidents, poor design, severe overtime, fraudulent practices and kickbacks, the relationship between management and labor, delays in approval, problems with neighbours, complicated design, incompetent subcontractors, inadequate monitoring, inaccurate site investigation, schedule delay, and high labor cost.
The author in [37] used a structured questionnaire approach to identify the major factors of cost overrun in transport/highway projects in Canada: external factors, problems in the estimated cost, and changes in design and other conditions. The authors in [38] conducted research to find the main causes of cost overrun in highway projects in the UK. These turned out to be changes in design, inflation, changes in material prices, the complex nature of the project, changes in project scope, contract procedure, and inadequate procurement in transport projects of the United Kingdom.

III. Research Methodology

To achieve the objectives of this research, a mixed-mode (questionnaire survey and semi-structured questionnaire survey) approach was used. The research process is given below. An extended literature review on cost overrun in construction projects helped identify 30 causes. A questionnaire was then developed consisting of two parts: Part A was designed to obtain the personal information of the respondents, and Part B comprised the common cost overrun causes. A five-point Likert scale was used, with numeric values ranging from 1 for "not significant" to 5 for "extremely significant". A pilot study (Pilot A) was carried out with 40 contractors to check the relevancy of the cost overrun causes in the questionnaire to highway project contractors. A total of 39 completely answered questionnaires were received, all of which were valid and considered for further analysis, giving a response rate of 96%. The mean value (MV) was used to rank the cost overrun causes in highway projects of Pakistan. The cut-off scale for the relevancy of cost overrun causes to highway projects was adapted from [39], as shown in Table I.

Table I. Test specifications and conditions

Mean score   Level of relevancy
Below 4      Non-relevant
Above 4      Relevant

Pilot B: A semi-structured questionnaire survey was carried out among 30 professionals and experts working with contractors on highway projects, in order to determine possible mitigation measures for the nine major cost overrun causes identified by the actual questionnaire survey. The collected data were analyzed by content analysis. The educational level and occupation of the respondents are shown in Figures 1 and 2.

Fig. 1. Academic qualification of respondents
Fig. 2. Occupation of respondents

A quantitative questionnaire was then designed and distributed among 100 selected respondents from contractors of highway projects to identify the most significant mitigation measures for the major causes of cost overrun. The occupation and educational level of the selected respondents (contractors) for the actual questionnaire are presented in Figures 3 and 4. The data collected from the actual questionnaire survey were evaluated in SPSS using the MV.

Fig. 3. Occupation of respondents

IV. Data Analysis, Results and Discussion

The Statistical Package for the Social Sciences (SPSS) version 22 was used to check the data of the stage-one questionnaire. Cronbach's alpha (α) was used to check the reliability of the data collected from the questionnaire survey, and the result of the reliability test was 0.894. According to [40], data are valid and reliable if the value of Cronbach's alpha is above 0.7, and not valid and reliable otherwise. Thus, since α was above 0.7, the collected data are valid.
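The reliability check above can be sketched with the standard formula for Cronbach's alpha. This is an illustrative sketch, not the study's SPSS computation; the respondent-by-item rating matrix below is made up for the example:

```python
# Cronbach's alpha sketch for Likert-scale internal consistency,
# with the usual cut-off of 0.7 applied afterwards.
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (causes)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# 6 hypothetical respondents rating 4 causes on a 1-5 scale (illustration only)
ratings = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [5, 5, 5, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])
alpha = cronbach_alpha(ratings)
print(f"Cronbach's alpha = {alpha:.3f} -> {'reliable' if alpha > 0.7 else 'not reliable'}")
```

For this made-up matrix alpha comes out well above the 0.7 threshold, analogous to the study's reported value of 0.894.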
Table II shows the major causes of cost overrun, calculated using the average index method, i.e. those with mean values above 4.0. "Inadequate planning" scored the highest mean value (4.624) and ranked first, followed by "frequent design changes" and "financial difficulties faced by the client".

Fig. 4. Academic qualification of respondents

Table II. Major causes of cost overrun

Major cause                                  Mean value   Rank
Inadequate planning                          4.624        1st
Frequent design changes                      4.468        2nd
Financial difficulties faced by the client   4.468        3rd
Owner interference                           4.309        4th
Delays in decision making                    4.298        5th
Fluctuation of material prices               4.213        6th
Poor contract management                     4.188        7th
Mistakes in design                           4.115        8th
Shortage of labor                            4.027        9th

A. Mitigation Measures for the Major Causes of Cost Overrun

Table III shows the results of the semi-structured questionnaire survey, which investigated possible mitigation measures for the major cost overrun causes. These measures were analyzed using the content analysis technique. The semi-structured questionnaire yielded possible mitigation measures for the 9 main causes of cost overrun, with each cause having at least four possible measures.

B. Significant Mitigation Measures of the Major Causes of Cost Overrun

Table IV presents the results of the structured questionnaire survey. The most significant mitigation measures for the major cost overrun causes were identified using the mean value: from the at least four possible mitigation measures per cause, the one with the highest MV was selected as the most significant. In total, nine significant mitigation measures were identified, one for each of the respective main causes of cost overrun in highway projects.
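The mean-value ranking and the cut-off from Table I can be sketched in a few lines. The MV figures below reuse a subset of Table II; the final entry and its MV are hypothetical, added only to show a cause falling below the cut-off:

```python
# Mean-value (MV) ranking sketch: keep causes with MV > 4.0 on the 5-point
# scale and rank them in descending order, as done for Table II.
causes = {
    "Inadequate planning": 4.624,
    "Frequent design changes": 4.468,
    "Financial difficulties faced by the client": 4.468,
    "Owner interference": 4.309,
    "Fluctuation of material prices": 4.213,
    "Shortage of labor": 4.027,
    "Inaccurate site investigation": 3.85,  # hypothetical, below the cut-off
}
CUTOFF = 4.0  # relevancy threshold adapted from Table I

relevant = sorted(
    ((mv, name) for name, mv in causes.items() if mv > CUTOFF),
    reverse=True,
)
for rank, (mv, name) in enumerate(relevant, start=1):
    print(f"{rank}. {name} (MV={mv:.3f})")
```

Causes at or below the cut-off are treated as non-relevant and dropped before ranking, which is how the 30 candidate causes were reduced to the 9 major ones.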
Table III. Semi-structured questionnaire results: mitigation measures for the major causes of cost overrun (with response frequencies)

Inadequate planning:
- The client should plan each activity before starting the project (27)
- The planning committee should visit the site before tendering the project (24)
- Competent staff should be appointed in the planning section (20)
- The project should be planned as per its scope/need (18)

Frequent design changes:
- Complexity of contract, wrong concepts, and poor formulation should be avoided (27)
- Violation of environmental and human safety should be avoided (24)
- Competent and well-experienced engineers should be posted/appointed (20)
- Details should be provided (18)

Financial difficulties faced by the client:
- Sufficient funds should be kept for each project (27)
- The donor/investor should start a project only when funds are available (25)
- Project funds should not be transferred to other projects (24)
- The government should allocate funds on time (21)

Owner interference:
- Appointment of favoured contractors should be avoided (26)
- Approval and procedural delays should be avoided (23)
- Changes in key posts should be avoided (20)
- Political appointments should be avoided (17)

Delay in decision making:
- Sufficient data and details should be provided to support decisions (27)
- A competent and qualified team should be appointed to take decisions (25)
- Favoritism and nepotism should be avoided (24)
- Communication between parties should be adopted to expedite project activities (22)
- The client should arrange frequent meetings with all involved engineers (19)

Fluctuation of material prices:
- Proper planning and scheduling should be adopted (28)
- Proper policies should be adopted by the government (26)
- Sufficient material should be stored (25)
- Inordinate delays in project implementation should be avoided (23)

Poor contract management:
- A qualified team should be appointed until the completion of the project (26)
- Planning and scheduling should be done for every activity (23)
- Coordination between parties should be adopted (22)
- Daily routine matters should be recorded at the site (20)

Mistakes in design:
- Favoritism should be avoided (29)
- An experienced consultant in highway construction should be appointed (27)
- Experienced staff under the consultant should be appointed (25)
- The consultant's profile should be evaluated before awarding the project (23)

Shortage of labor:
- Sufficient facilities should be given to skilled labor (28)
- High wages should be paid to skilled labor (26)
- Training should be given to unskilled labor (25)
- The government should adopt policies that maximize skilled labor (22)

Table IV. Major causes and their most significant mitigation measures

Major cause                                  Most significant mitigation measure                                        MV
Inadequate planning                          The client should plan each activity before starting the project           4.592
Frequent design changes                      Details should be provided                                                 4.297
Financial difficulties faced by the client   The donor/investor should start a project only when funds are available    4.319
Owner interference                           Approval and procedural delays should be avoided                           4.571
Delay in decision making                     A competent and qualified team should be appointed to take decisions       4.329
Fluctuation of material prices               Proper policies should be adopted by the government                        4.571
Poor contract management                     A qualified team should be appointed until the completion of the project   4.331
Mistakes in design                           An experienced consultant in highway construction should be appointed      4.297
Shortage of labor                            Sufficient facilities should be given to skilled labor                     4.235

V. Conclusion

Cost overrun is one of the major and most serious issues the construction industry is facing, especially in highway projects. A thorough literature review was conducted, and a questionnaire consisting of 30 main causes identified from previous studies was designed.
Out of these 30 causes, 9 major causes of cost overrun were identified by professionals working with contractors in highway projects of Pakistan. A semi-structured questionnaire survey then provided information, based on the participants' experience, on controlling the major causes of cost overrun. A structured questionnaire was designed from the findings of the semi-structured questionnaire and distributed among 100 randomly selected respondents working with contractors, which helped identify the most significant controlling measures for the major causes of cost overrun in highway/road projects. The findings of this research will be useful to the stakeholders of highway construction projects.

References
[1] A. A. Jhatial, S. Sohu, N. K. Bhatti, M. T. Lakhiar, R. Oad, "Effect of steel fibres on the compressive and flexural strength of concrete", International Journal of Advanced and Applied Sciences, Vol. 5, No. 10, pp. 16-21, 2018
[2] Y. Al-Emad, N. Hamid, Structural Relationships Model of Delay Factors in Makkah Construction Industry, Universiti Tun Hussein Onn Malaysia, 2016
[3] I. Mahamid, N. Dmaidi, "Risks leading to cost overrun in building construction from consultants' perspective", Organization, Technology and Management in Construction: An International Journal, Vol. 5, No. 2, pp. 860-873, 2013
[4] D. M. Matin, "Identifying the effective factors for cost overrun and time delay in water construction projects", Engineering, Technology & Applied Science Research, Vol. 6, No. 4, pp. 1062-1066, 2016
[5] S. Sohu, A. H. Abdullah, S. Nagapan, A. A. Jhatial, K. Ullah, I. A. Bhatti, "Significant mitigation measures for critical factors of cost overrun in highway projects of Pakistan", Engineering, Technology & Applied Science Research, Vol. 8, No. 2, pp. 2270-2274, 2018
[6] M.
Siemiatycki, Cost Overruns on Infrastructure Projects: Patterns, Causes, and Cures, Institute on Municipal Finance and Governance, 2015
[7] A. Senouci, A. Ismail, N. Eldin, "Time delay and cost overrun in Qatari public construction projects", Procedia Engineering, Vol. 164, pp. 368-375, 2016
[8] M. A. Akhund, H. U. Imad, N. A. Memon, F. Siddiqui, A. R. Khoso, A. A. Panhwar, "Contributing factors of time overrun in public sector construction projects", Engineering, Technology & Applied Science Research, Vol. 8, No. 5, pp. 3369-3372, 2018
[9] K. Yongjian, F. Y. Y. Ling, Y. Ning, "Public construction project delivery process in Singapore, Beijing, Hong Kong and Sydney", Journal of Financial Management of Property and Construction, Vol. 18, No. 1, pp. 6-25, 2013
[10] J. R. Hartley, Concurrent Engineering: Shortening Lead Times, Raising Quality, and Lowering Costs, Routledge, 2017
[11] A. Aljohani, D. Ahiaga-Dagbui, D. Moore, "Construction projects cost overrun: what does the literature tell us?", International Journal of Innovation, Management and Technology, Vol. 8, No. 2, pp. 137-143, 2017
[12] U. S. Vaardini, S. Karthiyayini, P. Ezhilmathi, "Study on cost overruns in construction projects - a review", International Journal of Applied Engineering Research, Vol. 11, No. 3, pp. 356-363, 2016
[13] S. Sohu, A. H. Abdullah, S. Nagapan, A. Fattah, K. Ullah, K. Kumar, "Contractor's perspective for critical factors of cost overrun in highway projects of Sindh, Pakistan", in: Proceedings of the International Conference of Global Network for Innovative Technology and AWAM International Conference in Civil Engineering (IGNITE-AICCE'17), pp. 080002-1 to 080002-6, AIP Publishing, 2017
[14] M. Ali, S. A. Mangi, S. Sohu, Q. B. Jamali, K.
ullah, “major factors of budget overrun in construction of road projects of sindh,pakistan”, engineering science and technology international research journal, vol. 1, no. 2, pp. 28-32, 2017 [15] s. sohu, a. h. abdullah, s. nagapan, n. a. memon, r. yunus, m. f. hasmori, “causative factors of cost overrun in building projects of pakistan”, international journal of integrated engineering, vol. 10, no. 9, pp. 122-126, 2018 [16] s. kim, k. n. tuan, v. t. luu, “delay factor analysis for hospital projects in vietnam”, ksce journal of civil engineering, vol. 20, no. 2, pp. 519–529, 2015 [17] g. a. niazi, n. painting, “significant factors causing cost overruns in the construction industry in afghanistan”, procedia engineering, vol. 182, pp. 510–517, 2017 [18] s. durdyev, m. omarov, s. ismail, m. lim, “significant contributors to cost overrun in construction projects of cambodia”, cogent engineering, vol. 4, no. 1, pp. 1–10, 2017 [19] s. p. wanjari, g. dobariya, “identifying factors causing cost overrun of the construction projects in india”, sadhana, vol. 41, no. 6, pp. 679693, 2016 [20] a. enshassi, j. al-najjar, m. kumaraswamy, “delays and cost overruns in the construction projects in the gaza strip”, journal of financial management of property and construction, vol. 14, no. 2, pp. 126–151, 2009 [21] m. s. ramabodu, j. j. p. verster, “factors contributing to cost overruns of construction projects”, 5th built environment conference factors contributing to cost overruns of construction projects, durban, south africa, july 18-20, 2010 [22] h. alinaitwe, r. apolot, d. tindiwensi, “investigation into the causes of delays and cost overruns in uganda’s public sector construction projects”, journal of construction in developing countries, vol. 18, no. 2, pp. 33–47, 2013 [23] p. t. gbahabo, o. s. ajuwon, “effects of project cost overruns and schedule delays in sub-saharan africa”, european journal of interdisciplinary studies, vol. 3, no. 2, pp. 46-59, 2017 [24] m. a. akhund, a. 
r. khoso, u. memon, s. h. khahro, “time overrun in construction projects of developing countries”, imperial journal of interdisciplinary research, vol. 3, no. 5, pp. 124–129, 2017 [25] a. h. memon, i. a. rahman, m. r. abdullah, a. a. a. azis, “factors affecting construction cost performance in project management projects: case of mara large projects”, international journal of civil engineering and built environment, vol. 1, no. 1, pp. 30–35, 2014 [26] z. shehu, i. r. endut, a. akintoye, “factors contributing to project time and hence cost overrun in the malaysian construction industry”, journal of financial management of property and construction, vol. 19, no. 1, pp. 55–75, 2014 [27] s. z. h. s. jamaludin, m. f. mohammad, k. ahmad, “enhancing the quality of construction environment by minimizing the cost variance”, procedia-social and behavioral sciences, vol. 153, pp. 70–78, 2014 [28] n. roslan, n. y. zainun, a. h. memon, “measures for controlling time and cost overrun factors during execution stage”, international journal of construction technology and management, vol. 1, no. 1, pp. 8–11, 2014 [29] a. s. ali, s. n. kamaruzzaman, “cost performance for building construction projects in klang valley”, journal of building performance, vol. 1, no. 1, pp. 110–114, 2010 [30] n. azhar, r. u. farooqui, s. m. ahmed, “cost overrun factors in construction industry of pakistan”, first international conference on construction in developoing countries, karachi, pakistan, august 4-5, 2008 [31] m. sambasivan, y. w. soon, “causes and effects of delays in malaysian construction industry”, international journal of project management, vol. 25, no. 5, pp. 517–526, 2007 [32] d. s. tejale, s. d. khandekar, j. r. patil, “analysis of construction project cost overrun by statistical method”, international journal of advance research in computer science and management studies, vol. 3, no. 5, pp. 349–355, 2015 [33] h. samarghandi, s. m. m. tabatabaei, p. taabayan, a. m. hashemi, k. 
willoughby, “studying the reasons for delay and cost overrun in construction projects: the case of iran”, journal of construction in developing countries, vol. 21, no. 1, pp. 51–84, 2016 [34] m. baek, k. mostaan, b. ashuri, “recommended practices for the cost control of highway project development”, construction research congress, puerto rico, 2016 [35] z. t. zewdu, g. t. aregaw, “causes of contractor cost overrun in construction projects: the case of ethiopian construction sector”, international journal of business and economics research, vol. 4, no. 4, pp. 180-191, 2015 [36] g. b. e. elanga, p. louzolo-kimbembe, c. pettang, “evaluation of cost overrun factors in the construction projects in developing countries: cameroon as case study”, international journal of emerging technology and advanced engineering, vol. 4, no. 10, pp. 533–538, 2014 [37] s. m. vidalis, f. t. najafi, “cost and time overruns in highway construction”, canadian society for civil engineering-30th annual conference: 2002 challenges ahead, montreal, canada, june 5-8, 2002 [38] w. j. hamid, a. waterman, “analysis of the main causes of cost overruns in construction industry in developing countries and the uk”, international review of civil engineering, vol. 9, no. 3, pp. 105-113, 2018 [39] l. muhwezi, l. m. chamuriho, n. m. lema, “an investigation into materials wastes on building construction projects in kampala-uganda”, scholarly journal of engineering research, vol. 1, no. 1, pp. 11–18, 2012 [40] l. xin, w. rong, “survey research on relationship among service failures, service recovery and customer satisfaction”, international conference on management science and engineering, harbin, china, august 20-22, 2007 microsoft word 36-2538_s engineering, technology & applied science research vol. 9, no. 
1, 2019, 3859-3862 www.etasr.com iqbal et al.: an experimental study on the performance of calcium carbonate extracted from eggshells as weighting agent in drilling fluid
an experimental study on the performance of calcium carbonate extracted from eggshells as weighting agent in drilling fluid
raheel iqbal, institute of petroleum & natural gas engineering, mehran university of engineering & technology, jamshoro, pakistan, 15pg46@students.muet.edu.pk
muhammad zubair, institute of petroleum & natural gas engineering, mehran university of engineering & technology, jamshoro, pakistan, 13pet04@students.muet.edu.pk
fawad pirzada, institute of petroleum & natural gas engineering, mehran university of engineering & technology, jamshoro, pakistan, fawadpirzada8@gmail.com
faisal abro, institute of petroleum & natural gas engineering, mehran university of engineering & technology, jamshoro, pakistan, faisal.abro@hotmail.com
muhammad ali, institute of petroleum & natural gas engineering, mehran university of engineering & technology, jamshoro, pakistan, muhamadali5014@gmail.com
avinash valasai, department of mining engineering, mehran university of engineering & technology, jamshoro, pakistan, amvalasai@gmail.com
abstract—drilling mud density is an important factor in drilling operations. the cost of the drilling mud used for oil and gas well drilling can be 10%-15% of the total drilling cost, and the deeper the well, the more drilling mud is needed. this research aims to prepare a mud that performs similarly to conventional mud and to reduce the dependency on conventional caco3 by extracting it from waste and naturally occurring materials. for that purpose, a mud was prepared by replacing conventional caco3 with caco3 derived from eggshells, as eggshells contain caco3 in high amounts, ranging from 70% to 95%. the success of this project will provide an affordable solution and an alternative way to explore new methodologies of obtaining caco3.
the obtained results of this research are quite satisfactory. caco3 obtained from eggshells is needed in higher amounts, 275-410g, to achieve densities ranging from 9.5 to 11.0 pounds per gallon, whereas only 150g of pure caco3 is needed to obtain a density of 10.5 pounds per gallon. apart from this, it is also observed that the eggshell-based caco3 samples show better rheological properties than the market samples of caco3. the ph of the pure caco3 sample of 10.5 pounds per gallon density is almost the same as that of the eggshell caco3 sample of 10.5 pounds per gallon density. keywords-drilling fluid; weighting agent; mud balance; calcium carbonate; rheological properties i. introduction drilling fluid has obligatory functions such as carrying rock cuttings to the surface, cleaning and cooling the bit, decreasing resistive forces, stabilizing the wellbore, and preventing formation fluids from flowing from the pores into the borehole. various methods for designing suitable drilling muds have been developed to avoid problems encountered during drilling. the drilling mud should be user friendly, cost effective and economically viable. therefore drilling muds are basically formulated to decrease the effect of damage and to ensure the feasibility and economic viability of rotary drilling in hydrocarbon-containing formations. the filter cakes formed after the intrusion of drilling mud into the pore space of the pay zone are compressible and have varying porosity and permeability characteristics, with low void spaces at the filter channel surface and maximum void spaces on the cake surface. in order to reduce filtrate invasion, fluid loss additives such as organic polymers which prohibit water invasion are used. during the formulation of the mud, the microscopic structure and composition of the associated filter cake and knowledge of the filtration characteristics are of main importance [1].
during drilling and completion, various drilling muds are used in the borehole. the most significant factor is the physical and chemical compatibility of the mud with the reservoir rock. through invasion and formation damage, these muds can reduce the productivity of the well. consequently, additives such as caco3 are used, which can reduce the chance of such damage in the formations by forming a filter cake of low permeability and optimum thickness that reduces further invasion of solids and filtrate into the pore spaces of the rock. after drilling, these cakes are washed off to maximize the flow in the wellbore. fluid loss and viscosity of the mud are important factors which must be monitored throughout the drilling of a well [2]. for that reason, the mud is treated with several types of additives, i.e. different polymers and chemicals, to meet requirements important for the particular well, such as rheology, control of fluid loss, weight of mud, etc. starch and calcite are the most important materials used to control fluid loss and to increase the weight of the mud by forming a mud cake, respectively [3].
corresponding author: r. iqbal
ii. experimental work a. caco3 analysis in eggshells by titration technique calcite is a major component of eggshells, ranging from 70% to 95%. a titration technique known as back titration is used for the reaction of acid with the calcium carbonate present in the blended eggshell powder. calcium carbonate dissolves in acids rather than in pure water: caco3(s) + 2hcl(aq) → cacl2(aq) + h2o(l) + co2(g) this reaction is slow, especially as it approaches completion, hence direct titration cannot be used. a sufficient excess of acid must be added to dissolve all the calcium carbonate. then, sodium hydroxide is titrated against the remaining hcl.
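as a small illustration of the back-titration arithmetic (the molarities, volumes and sample mass below are hypothetical, not measurements from the paper):

```python
CACO3_MOLAR_MASS = 100.09  # g/mol

def caco3_mass_percent(hcl_m, hcl_ml, naoh_m, naoh_ml, sample_g):
    """back titration: caco3 consumes 2 mol hcl per mol (caco3 + 2hcl -> ...);
    naoh then neutralizes the leftover hcl 1:1."""
    mol_hcl_added = hcl_m * hcl_ml / 1000.0
    mol_hcl_left = naoh_m * naoh_ml / 1000.0
    mol_caco3 = (mol_hcl_added - mol_hcl_left) / 2.0
    return 100.0 * mol_caco3 * CACO3_MOLAR_MASS / sample_g

# e.g. 50 ml of 1.0 m hcl on a 2.5 g sample, back-titrated with 13 ml of 1.0 m naoh,
# gives roughly 74% caco3
```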
the unreacted amount is the difference between the amount of hcl added initially and the amount of hcl remaining after the titration, determined using: naoh(aq) + hcl(aq) → h2o(l) + nacl(aq) from this difference, the amount of caco3 present in the sample is calculated. b. drilling fluid preparation and properties a few additives, along with barite (baso4) and calcite (caco3), are commonly used in water-based drilling mud. the three main factors which affect drilling fluid performance are density, viscosity and ph. fluid samples are prepared at laboratory scale by adding chemicals, weighed in grams, into 350ml barrels (the standard laboratory barrel). this research involves the preparation of five water-based drilling fluid samples which contain bentonite as a filtration controller and viscosifier, caustic soda for ph control, and starch for filtration control. along with them, soda ash and xanthan gum are also used as hardness and rheology control materials respectively. the composition of all these additives is constant for all prepared samples except calcite. the most important part of these samples is the concentration of caco3 as a weighting agent. it is used to increase the densities of the samples from 9.0 to 11 pounds per gallon (ppg). the required amount of caco3 as a weighting agent is determined by:
weighting agent (sacks/100 barrels) = 945 (w2 − w1) / (22.5 − w2)    (1)
and the formula to determine the required amount of barite as a weighting agent can be written as:
weighting agent (sacks/100 barrels) = 1470 (w2 − w1) / (35 − w2)    (2)
where w1 indicates the initial weight of the mud (ppg) and w2 indicates the required weight of the mud (ppg). c. density of mud mud density is the main parameter to consider during the study as it directly affects the formation of the filter cake.
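the two weighting-agent formulas can be sketched directly in python (the example mud weights in the comments are illustrative, not values from the paper):

```python
def caco3_sacks_per_100bbl(w1, w2):
    """sacks of caco3 per 100 barrels to raise mud weight from w1 to w2 (ppg), eq. (1)."""
    return 945.0 * (w2 - w1) / (22.5 - w2)

def barite_sacks_per_100bbl(w1, w2):
    """sacks of barite per 100 barrels to raise mud weight from w1 to w2 (ppg), eq. (2)."""
    return 1470.0 * (w2 - w1) / (35.0 - w2)

# raising a 9.0 ppg mud to 10.5 ppg:
#   caco3_sacks_per_100bbl(9.0, 10.5)  -> 945*1.5/12.0  = 118.125 sacks/100 bbl
#   barite_sacks_per_100bbl(9.0, 10.5) -> 1470*1.5/24.5 = 90.0    sacks/100 bbl
```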
the most common additive to increase the mud weight in production zones is calcite (caco3), which is widely used while drilling the zone of interest. five drilling mud samples are prepared in this study by formulating both calcites (pure calcium carbonate and eggshell calcium carbonate) to achieve densities ranging from 9.5 to 11.0 ppg. the amount used for the formulation of the pure calcite based sample is 150g, as shown in table i, and the amount for the formulation of the eggshell based samples ranges from 275g to 410g, as shown in figure 1, to achieve mud densities from 9.5 to 11.0 ppg. fig. 1. amount of calcium carbonate vs densities d. drilling fluid rheological properties rheological properties are categorized into gel strength, yield point and apparent viscosity. rheology is the basis of all investigations, ranging from wellbore hydraulics to the evaluation of mud system functionality. mud rheological properties are continuously tested throughout the drilling operation. they are critical for maintaining control while tackling wellbore problems, because inappropriate rheological properties may result in loss of time and money. besides rheological properties, filtration, ph, chemical analysis (alkalinity and lime content, chloride content, calcium content, etc.), and resistivity are also tested throughout the drilling. in the laboratory (as at the drilling site), a rotational viscometer is frequently used to measure the rheological properties of the mud. readings are taken at 600, 300, 200, 100, 60, 30 and 6 rpm (rotations per minute). these readings are then plotted on a chart of shear stress versus shear rate, which is used to determine the viscosity and the appropriate viscosity model. the rotational viscometer also provides information about other rheological properties, including effective viscosities (µa, µp, and µe), gel strength (gel) and yield point (yp), as shown in table ii.
the following equations are utilized for these purposes:
apparent viscosity µa (cp) = θ600 / 2    (3)
plastic viscosity µp (cp) = θ600 − θ300    (4)
effective viscosity µe (cp) = 300 θ / ω    (5)
yield point yp (lb/100ft2) = θ300 − µp    (6)
shear stress τ (lb/100ft2) = 1.065 θ    (7)
shear rate γ (sec-1) = 1.7023 ω    (8)
where θ indicates the dial reading (lb/100ft2) and ω indicates the rotor speed (rpm). the bingham plastic model is a basic two-parameter model widely used in the drilling industry to identify the flow properties of different mud types. it is the most common fluid model for estimating the rheology of non-newtonian fluids. the basic assumption of this model is that shear stress is a linear function of shear rate. the yield point, also named the threshold stress, is the stress at which the shear rate is zero. the optimum plastic viscosity (pv) is achieved by reducing the colloidal solids.
table i. drilling mud samples and their composition (product per lab barrel, 350 ml)
product              sample 1   sample 2   sample 3   sample 4   sample 5
water (ml)             325.50     325.50     325.50     325.50     325.50
bentonite (g)           24.50      24.50      24.50      24.50      24.50
barite (g)                  -          -          -          -          -
pure caco3 (g)         150.00          -          -          -          -
eggshell caco3 (g)          -     275.00     320.00     365.00     410.00
starch (g)               0.40       0.40       0.40       0.40       0.40
caustic soda (g)         0.20       0.20       0.20       0.20       0.20
soda ash (g)             0.25       0.25       0.25       0.25       0.25
xanthan gum (g)          1.00       1.00       1.00       1.00       1.00
table ii.
rheological properties of mud samples
property                           sample 1   sample 2   sample 3   sample 4   sample 5
plastic viscosity (cp)                   22       10.5       17.1         23         32
apparent viscosity (cp)                  38       20.5         33      39.25         50
yield point (lb/100ft2)                  32         20       31.8       32.5         36
gel strength @10 min (lb/100ft2)       16.5       12.8       19.5         20       20.6
θ600                                     76         41         66       78.5        100
θ300                                     54       30.5       48.9       55.5         68
θ200                                     45       26.5         42         46         56
θ100                                     35       21.5         33       35.5         42
θ60                                      29         19         29         31         36
θ30                                    19.5         17         25         26         29
θ6                                       15         13         19       19.5         21
to carry cuttings out of the hole, the yield point should be high enough, but not too high, because the required pump pressure would become incompatible with the drilling operation. the bingham plastic model has its own limitations for both low and high shear rate ranges. the physical reason behind this behavior is that the liquid generally contains particles (clay) or large molecules (polymers) which interact to create a weak solid structure, known as a false body, and a certain amount of stress is required to break it. as the structure breaks, the particles tend to move under viscous forces. the bingham plastic model produces results that are acceptable for drilling mud diagnosis, but its accuracy for hydraulic calculations is not very high. a bingham body does not begin to flow until a shearing stress corresponding to the yield value is exceeded. the results for the bingham plastic model are obtained from the graph of shear stress versus shear rate in figure 2, plotted using (7) and (8). e. drilling fluid ph determination it is important to know the mud ph because it affects the solubility of the organic thinners, contaminant removal, corrosion mitigation, and the dispersion of the clays present in the mud. ph is mostly used to express the acidity or alkalinity of a drilling fluid, especially water-based mud. generally, its value ranges from 0 to 14. ph is expressed by (9): ph = −log[h+] (9) where [h+] is the hydrogen ion concentration in mol/l.
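the viscometer relations (3)-(8) can be sketched in python; the dial readings in the usage line are the sample-1 values from table ii:

```python
def rheology_from_dials(theta600, theta300):
    """plastic viscosity, apparent viscosity and yield point from the
    600 and 300 rpm dial readings, eqs. (3), (4) and (6)."""
    pv = theta600 - theta300      # plastic viscosity, cp
    av = theta600 / 2.0           # apparent viscosity, cp
    yp = theta300 - pv            # yield point, lb/100ft2
    return pv, av, yp

def bingham_point(dial, rpm):
    """convert one (dial reading, rotor speed) pair to
    (shear stress, shear rate), eqs. (7) and (8)."""
    return 1.065 * dial, 1.7023 * rpm

# sample 1 of table ii: theta600 = 76, theta300 = 54
pv, av, yp = rheology_from_dials(76, 54)   # -> 22, 38.0, 32 (matches table ii)
```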
the ph value decreases as the acidity of the fluid increases with the addition of more hydrogen ions. generally, the ph of a neutral fluid is 7: values above 7 indicate an alkaline fluid, while values below 7 indicate an acidic one. the alkalinity of a drilling mud involves three main chemical components: hydroxyl ions (oh-), carbonate ions (co3 2-) and bicarbonate ions (hco3-). for ph measurement, a ph meter is mostly used rather than litmus paper, because the ph meter provides quantitative information whereas litmus paper provides only qualitative information about the acidity of the drilling mud. fig. 2. bingham plastic model of samples iii. results and discussion a. caco3 determination by back titration for the back titration, the caco3 amount is determined after washing, boiling, peeling off the membranes, and heating the eggshells at 120°c; the resulting value shows a 74% presence of caco3 in the blended eggshell powder. b. characteristics of drilling fluids the densities of the 5 drilling fluid samples of both ordinary caco3 and eggshell caco3, ranging from 9.0 to 11.0 ppg, are shown in figure 1. table ii provides comprehensive information about the rheological properties of the prepared samples. variations in the rheological properties were observed with increasing amount of eggshell caco3. besides this, the purpose of increased density was achieved by increasing the amount of caco3 obtained from eggshells. in general, an increase in sample density causes an increase in the values of the rheological properties. meanwhile, drilling fluid sample no. 4 (based on 10.5 ppg eggshell caco3) exhibits similar rheological property values with sample no. 1 (based on 10.5 ppg pure caco3). c.
bingham plastic model as discussed, non-newtonian fluids exhibit a relationship between shear rate and shear stress, measured for the formulated samples as shown in figure 2. according to the graphs plotted for the 5 water-based drilling fluid samples, shear stress increases with increasing amount of eggshell caco3, while pure caco3 sample no. 1 shows almost the same trend as eggshell caco3 sample no. 4. a general trend line is drawn for the bingham plastic fluid and it is observed that a yield point of 36 lb/100ft2 is obtained for eggshell caco3 sample no. 5, which is also measured by the viscometer and listed in table ii. d. ph determination the ph of the prepared samples is determined using a ph meter and the obtained results are shown in figure 3. if a comparison is made between the calcite based samples of the same density, it is observed that sample no. 1 of pure caco3 has almost the same ph value as sample no. 4 of eggshell caco3 with 10.5 ppg density, which indicates that the amount of caco3 has no impact on the ph of the two samples. the required amounts for the two samples vary, but as the density reaches 10.5 ppg for both, they show almost the same ph. fig. 3. ph of samples iv. conclusion on the basis of the analysis and interpretation of the laboratory measurements, the main concluded points are: • a mud density of 10.5lb/gal is optimum for the x well. it was selected among the 5 prepared mud densities, considering that it can sustain the formation pressure. with another mud density, the formation starts to create fractures in the well. • caco3 obtained from eggshells is needed in higher amounts, from 275g to 410g, to achieve densities ranging from 9.5 to 11.0lb/gal, whereas pure caco3 took only 150g to obtain a density of 10.5lb/gal. apart from this, it is also observed that the eggshell caco3 samples show better rheological properties than the samples made with market caco3.
• it is observed that the ph of the pure caco3 sample of 10.5lb/gal has almost the same value as that of the eggshell caco3 sample of 10.5lb/gal density. • it was observed that the prepared eggshell caco3 samples produce an unpleasant smell 1-2 days after preparation. • however, by heating the eggshells at a temperature ranging from 300°c to 500°c, better results for the amount of caco3 will be obtained. by doing this, the efficiency of the caco3 is improved, as the acids present in the shells are removed and the smell dissipates.
references
[1] k. a. fattah, a. lashin, "investigation of mud density and weighting materials effect on drilling fluid filter cake properties and formation damage", journal of african earth sciences, vol. 117, pp. 345-357, 2016
[2] s. gogoi, p. talukdar, "use of calcium carbonate as bridging and weighting agent in the non damaging drilling fluid for some oilfields of upper assam basin", international journal of current research, vol. 7, no. 8, pp. 18964-18981, 2015
[3] t. hudson, m. coffey, "fluid loss control through the use of a liquid thickened completion and work over brine", journal of petroleum technology, vol. 35, no. 10, pp. 1776-1782, 1983
[4] m. sajjadian, e. e. motlagh, a. a. daya, "laboratory investigation to use lost circulation material in water base drilling fluid as lost circulation pills", international journal of mining science, vol. 2, no. 1, pp. 33-38, 2016
[5] n. gaurina-medimurec, "laboratory evaluation of calcium carbonate particle size selection for drill-in fluids", rudarsko-geolosko-naftni zbornik, vol. 14, pp. 47-53, 2002
[6] a. odabasi, an experimental study of particle size and concentration effects of calcium carbonate on rheological and filtration properties of drill-in fluids, msc thesis, middle east technical university, 2015
[7] r. samavati, n. abdullah, t. k. nowtarki, s. a. hussain, d. r. a. biak, "rheological and fluid loss properties of water based drilling mud containing hcl-modified fufu as a fluid loss control agent", international journal of chemical engineering and applications, vol. 5, no. 6, pp. 446-450, 2014
[8] m. amani, j. k. hassiba, "the effect of salinity on the rheological properties of water based mud under high pressures and high temperatures for drilling offshore and deep wells", spe kuwait international petroleum conference and exhibition, kuwait city, kuwait, december 10-12, 2012
[9] p. k. jha, v. mahto, v. k. saxena, "emulsion based drilling fluids: an overview", international journal of chem tech research, vol. 6, no. 4, pp. 2306-2315, 2014
[10] p. talalay, z. hu, h. xu, d. yu, l. han, j. han, l. wang, "environmental considerations of low-temperature drilling fluids", annals of glaciology, vol. 55, no. 65, pp. 31-40, 2014
[11] p. o. ogbeide, s. a. igbinere, "the effect of additives on rheological properties of drilling fluid in highly deviated wells", futo journal series, vol. 2, no. 2, pp. 68-82, 2016
[12] n. al-malki, p. pourafshary, h. al-hadrami, j. abdo, "controlling bentonite-based drilling mud properties using sepiolite nanoparticles", petroleum exploration and development, vol. 3, no. 4, pp. 717-723, 2016
[13] c. kelessidis, "drilling fluid challenges for oil-well deep drilling", international multidisciplinary scientific geoconference sgem 2009, albena, bulgaria, june 14-19, 2009
engineering, technology & applied science research vol. 10, no.
1, 2020, 5340-5345 www.etasr.com duong et al.: available transfer capability determination for the electricity market using cuckoo …
available transfer capability determination for the electricity market using cuckoo search algorithm
thanh long duong, faculty of electrical engineering technology, industrial university of ho chi minh city, ho chi minh city, vietnam, duongthanhlong@iuh.edu.vn
thuan thanh nguyen, faculty of electrical engineering technology, industrial university of ho chi minh city, ho chi minh city, vietnam, nguyenthanhthuan@iuh.edu.vn
ngoc anh nguyen, faculty of electrical engineering technology, industrial university of ho chi minh city, ho chi minh city, vietnam, nguyenngocanh@iuh.edu.vn
tong kang, college of electrical and information engineering, hunan university, china, kangtong126@126.com
abstract—in the electricity market, power producers and customers share a common transmission network for wheeling power from generation to consumption points. all parties in this open access environment may try to produce energy from cheaper sources for a greater profit margin, which may lead to transmission congestion and, in turn, to violations of voltage and thermal limits, threatening system security. to solve this, the available transfer capability (atc) must be accurately estimated and optimally utilized. thus, the accurate determination of atc to ensure system security while serving power transactions is an open and trending research topic, and many optimization approaches have been proposed to deal with the problem. in this paper, the cuckoo search algorithm (csa) is applied to determine the atc between buses in deregulated power systems without violating system constraints such as thermal and voltage limits. the suggested methodology is tested on the ieee 14-bus and ieee 24-bus systems for normal and contingency cases. the simulation results are compared with the corresponding results of ep, pso, and gwo and show that csa is an effective method for determining atc.
keywords-csa; atc; congestion; electricity market i. introduction one of the key features of the competitive electricity market is fair and open transmission access of the network to all users, which may result in frequent overloading of transmission system facilities. the assessment of available transfer capability for the economic utilization of the available system components with regard to system security plays a vital role in the operational planning and real time operation of a system. with the development of renewable energy power generation technology and the increase of power load demand, renewable energy power generation can not only serve specific users outside the power grid, but can also be massively incorporated into the power grid. renewable energy power generation has many advantages, but its intermittent and stochastic output may influence the power system. renewable energy power generation could increase the uncertainties of the power system, which have significant effects on the transfer capability of the transmission system. hence, the transmission congestion management problem and the analysis of the impacts of renewable energy have become important challenges [1-4]. the secure and reliable operation of a transmission network requires the independent system operators (iso) to determine and update the atc at regular intervals for its optimal commercial use [5]. the atc of a transmission network is the unutilized transfer capability of the network for the transfer of power for further commercial activity, over and above the already committed usage [6]. essentially, atc is a measure of the extra transmission capability above the base case power transfer for the purpose of power marketing. the atc value can be derived by considering various parameters relating to transfer capabilities, such as the total transfer capability (ttc), the transmission reliability margin (trm), and the capacity benefit margin (cbm).
ttc is the summation of all the network transfers (base case and commercial transfers), including the margins for system security and reliability, and the existing transmission commitments (etc). trm is the network margin reserved for system uncertainties, whereas cbm is the network margin reserved for external generation in case of emergency generation outages; it is measured by the loss of load expectation. adequate atc is needed to ensure that all economic transactions can take place, while sufficient atc facilitates electricity market liquidity. it is necessary to maintain economical and secure operation over a wide range of system operating conditions and constraints. an accurate value of atc can be used in forecasting future upgrades of the transmission network. the precise calculation of atc should include system constraints such as voltage limits, thermal limits, real and reactive power generation limits, and system uncertainties.
corresponding author: thanh long duong
several approaches have been proposed for atc computation, including linear approximation methods (lams) [7], repetitive power flow (rpf) [8], continuation power flow (cpf) [9], optimal power flow (opf) [10], and artificial intelligence (ai) techniques [11]. different ai techniques have been used to solve various optimization problems [12-14]. meta-heuristic algorithms have recently been proposed for determining the atc: genetic algorithm (ga) [15], bee algorithm (ba) [16], particle swarm optimization (pso) [17], and evolutionary programming (ep) [18-19]. ai approaches are employed to avoid the local optimal solutions associated with conventional optimization techniques, especially for highly nonlinear systems.
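with these quantities, atc is commonly computed as ttc minus the two margins and the existing commitments. the paper only names the terms, so the closed form below is the standard nerc-style relation (an assumption here), and the example figures are purely illustrative:

```python
def available_transfer_capability(ttc, trm, cbm, etc):
    """atc = ttc - trm - cbm - etc (all quantities in mw).
    standard nerc-style relation; illustrative only, not taken from the paper."""
    return ttc - trm - cbm - etc

# illustrative figures: a 500 mw ttc with 25 mw trm, 15 mw cbm and
# 380 mw of existing commitments leaves 80 mw of atc
```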
The authors in [20] developed a new meta-heuristic algorithm called the Cuckoo Search Algorithm (CSA), inspired by the obligate brood parasitism of some cuckoo species: a cuckoo chooses a random nest of another bird species and lays its egg in it, and the egg is either hatched and carried over to the next generation or discovered and abandoned by the host bird. CSA is an efficient meta-heuristic that balances the local search strategy (exploitation) and the search of the whole space (exploration) [21]. In each generation, two new populations are created, one via Lévy flights and one via the alien-egg discovery mechanism. The first mechanism helps CSA explore the search space, while the second helps it exploit the search space, so the results obtained by CSA tend to be of better quality than those of other methods. In addition, CSA has only one control parameter in the search process, which makes it more reliable to apply to optimization problems. CSA has been proposed for solving power system security problems in [22]. In this paper, CSA is applied to determine the ATC of power transactions between source and sink areas in a deregulated power system, considering thermal and voltage limits. The proposed approach is demonstrated on the IEEE 14-bus and IEEE 24-bus test systems.

II. Objective Function

The main objective of this work is to determine the available power that can be transferred from a specific set of generators in a source area to loads in a sink area, subject to real and reactive power generation limits, voltage limits, and line thermal limits. The ATC is determined by starting from an initial operating point and then increasing the load by a factor λ until a system limit is reached [15].
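The idea of pushing λ up until a limit binds can be sketched with a simple bisection; `within_limits` below is a hypothetical stand-in for the power-flow feasibility check (voltage and thermal limits) used in the paper, and the numbers are purely illustrative:

```python
# A minimal sketch of the lambda-scaling idea: sink loads are scaled up by a
# common factor (1 + lambda) until a limit binds; `within_limits` is a
# hypothetical stand-in for the AC power-flow feasibility check.
def atc_by_scaling(p_d0, within_limits, lam_hi=10.0, tol=1e-6):
    """Bisect on lambda: ATC = sum(Pdi(lam_max)) - sum(Pdi(0)) = lam_max * sum(Pdi0)."""
    lo, hi = 0.0, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        scaled = [p * (1.0 + mid) for p in p_d0]   # Pdi(lam) = Pdi0 * (1 + lam)
        if within_limits(scaled):
            lo = mid                               # still feasible: push lambda up
        else:
            hi = mid                               # violation: back off
    return lo * sum(p_d0)

# Toy limit: total sink load may not exceed 150 units.
print(atc_by_scaling([40.0, 60.0], lambda loads: sum(loads) <= 150.0))
# close to 50.0 (the 50% headroom over the 100-unit base load)
```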
The details of the ATC computation are given below:

    f_obj = ATC = Σ_{i=1}^{Nload} P_di(λ_max) − Σ_{i=1}^{Nload} P_di(0)    (1)

subject to:
• the real and reactive power balance equations:

    Σ_{∀j} P_ij,c + P_di0(1 + λ) = P_gi0 + P_gi    (2)
    Σ_{∀j} Q_ij,c + Q_di0(1 + λ) = Q_gi0 + Q_gi    (3)

• the power generation limits:

    0 ≤ P_gi0 + P_gi ≤ P_gi^max    (4)
    0 ≤ Q_gi0 + Q_gi ≤ Q_gi^max    (5)

• the voltage limits:

    V_i^min ≤ V_i ≤ V_i^max    (6)

• the apparent power flow limit:

    S_ij = √(P_ij,c² + Q_ij,c²) ≤ S_ij^max    (7)

To effect the generation and load changes, the active power generation and the active and reactive loads in the source and sink areas, respectively, are modified using the scalar parameter λ:

    P_gi(λ) = P_gi0(1 + λ)    (8)
    P_di(λ) = P_di0(1 + λ)    (9)
    Q_di(λ) = Q_di0(1 + λ)    (10)

where P_gi0, Q_gi0 and P_di0, Q_di0 are, respectively, the active and reactive power generation and load at bus i in the base case. λ = 0 corresponds to no transfer (base case) and λ = λ_max corresponds to the largest power transfer that causes no limit violations. P_di(λ_max) is the total sink-area load when λ = λ_max, while P_di(0) refers to the total load when λ = 0.

III. Application of CSA to the ATC Determination Problem

The steps of solving the ATC determination problem using the proposed CSA are presented below.
Step 1: Read the power system data and set the associated parameters: the host nest population size n, the probability pa ∈ [0, 1] that an alien egg in a host nest is discovered, the number of variables to be optimized d, and the maximum number of iterations it_max.
Step 2: Initialize n host nests {x_i (i = 1, 2, …, n)}. Each nest is a concatenation of two strings and represents a feasible solution of the optimization problem.
Step 3: Evaluate the fitness function of the initial n host nests based on the results of a power flow analysis; select the best value of each nest xbest_i (i = 1, 2, …, n) and the global best nest gbest among all nests, corresponding to the best fitness function; store the fitness values and the best fitness value.
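Steps 1-3 amount to sampling n candidate solutions, scoring each with a penalized fitness, and recording the per-nest and global bests. A minimal sketch, in which a toy quadratic objective with one illustrative limit stands in for the power-flow-based fitness (the names n, d follow the text; everything else is an assumption):

```python
import random

# Toy stand-in for the power-flow-based fitness: maximize -(x1^2 + x2^2)
# and penalize violations of the illustrative limit x1 + x2 <= 1.5
# quadratically, in the spirit of the penalty terms of the paper.
def fitness(x, k_pen=100.0, limit=1.5):
    obj = -sum(xi * xi for xi in x)
    viol = max(0.0, sum(x) - limit)
    return obj - k_pen * viol ** 2

random.seed(0)
n, d = 20, 2                                        # step 1: parameters
nests = [[random.uniform(-5, 5) for _ in range(d)]  # step 2: initial nests
         for _ in range(n)]
xbest = [list(x) for x in nests]                    # step 3: per-nest bests
gbest = max(xbest, key=fitness)                     # step 3: global best nest
```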
The fitness function is formed from the objective and quadratic penalty terms for the limit violations:

    F = f_obj − K_p Σ_{i=1}^{Nb} (P_gi − P_gi^lim)² − K_q Σ_{i=1}^{Nb} (Q_gi − Q_gi^lim)² − K_v Σ_{i=1}^{Nb} (V_i − V_i^lim)² − K_s Σ_{i=1}^{Nl} (S_li − S_li^max)²    (11)

Step 4: Generate cuckoos (new solutions) randomly from the previous best nests via Lévy flights. The new solution for each nest is calculated using (12) and (13):

    x_i^new = xbest_i + α × rand1 × Δx_i^new    (12)

where α > 0 is the updating step size, rand1 is a normally distributed stochastic number, and the increment Δx_i^new is determined by:

    Δx_i^new = (rand_u × σ_u) / |rand_v × σ_v|^(1/β) × (xbest_i − gbest)    (13)

where rand_u and rand_v are two normally distributed stochastic variables with standard deviations σ_u and σ_v given in (14):

    σ_u = { Γ(1 + β) sin(πβ/2) / [ Γ((1 + β)/2) β 2^((β−1)/2) ] }^(1/β),  σ_v = 1    (14)

where β is the distribution factor (0.3 ≤ β ≤ 1.99).
Step 5: Evaluate the fitness function of the new solutions based on the results of a power flow analysis; determine the new best value of each nest xbest_i and the global best nest gbest by comparing the fitness values stored in Step 3 with the newly calculated ones; update xbest_i and gbest, and store the fitness values and the best fitness value.

Fig. 1. Flowchart of the proposed process of applying CSA to determine the ATC.

Step 6: Discovering an alien egg in a host nest with probability pa creates a new solution for the problem, similarly to the Lévy flights.
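The Lévy-flight update of (12)-(14) follows the Mantegna-style construction; a minimal sketch (the α and β values are illustrative defaults, not prescribed by the paper):

```python
import math, random

# sigma_u from (14); sigma_v = 1. beta is the Levy distribution factor.
def sigma_u(beta):
    return (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

# Levy-scaled factor of (13): rand_u / |rand_v|^(1/beta), with
# rand_u ~ N(0, sigma_u^2) and rand_v ~ N(0, 1).
def levy_step(beta=1.5):
    u = random.gauss(0.0, sigma_u(beta))   # rand_u, std sigma_u
    v = random.gauss(0.0, 1.0)             # rand_v, std sigma_v = 1
    return u / abs(v) ** (1 / beta)

# New cuckoo per (12)-(13): x_new = xbest_i + alpha * rand1 * levy * (xbest_i - gbest)
def new_solution(xbest_i, gbest, alpha=0.01, beta=1.5):
    return [xb + alpha * random.gauss(0.0, 1.0) * levy_step(beta) * (xb - gb)
            for xb, gb in zip(xbest_i, gbest)]
```

The heavy-tailed Lévy steps give mostly small moves with occasional long jumps, which is what lets this phase explore the search space.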
The new solution resulting from this action is calculated by (15), (16), and (17):

    x_i^disc = xbest_i + C × Δx_i^disc    (15)

    C = 1 if rand2 < pa, and C = 0 otherwise    (16)

    Δx_i^disc = rand3 × [randp1(xbest_i) − randp2(xbest_i)]    (17)

where rand2 and rand3 are random numbers distributed on the interval [0, 1], and randp1(xbest_i) and randp2(xbest_i) are random perturbations of the nest positions in xbest_i.
Step 7: Evaluate the fitness function of the new solutions based on the results of a power flow analysis; determine the new best value of each nest xbest_i and the global best nest gbest by comparing the fitness values of these new solutions with the values stored in Step 5; update xbest_i and gbest, and store the fitness values and the best fitness value.
Step 8: If the predefined maximum number of iterations it_max is reached, terminate the computation and display the results; otherwise, go to Step 4.
The flowchart of the proposed process is shown in Figure 1.

IV. Numerical Results

The ATC for each of the stipulated source-to-sink power transfers is tested on two IEEE systems (the IEEE 14-bus system and the IEEE 24-bus reliability test system). The IEEE 14-bus system consists of 5 generators and 20 lines, as shown in Figure 2, while the IEEE 24-bus system has 11 generators and 41 lines, as shown in Figure 3. The network and load data are given in [23]. Based on experimental results, the optimal control parameters of CSA were selected as follows: the number of nests is 20 for the 14-bus system and 25 for the 24-bus system, while the rate of detection of alien eggs and the maximum number of iterations are 0.25 and 100, respectively, for both systems.

Fig. 2. The IEEE 14-bus system.

In order to apply the proposed methodology in security studies and in congestion management, ATC values are also computed under selected line outages. In these studies, the ATC
margin is limited by bus voltage magnitudes in the range of 0.95-1.15 p.u. The variation of ATC relative to the base state is studied for both systems under line outages: the outage of line 16 (bus 13 to bus 14) in the IEEE 14-bus system and of line 8 (bus 4 to bus 9) in the IEEE 24-bus system.

Fig. 3. The IEEE 24-bus RTS system.

The ATC results for each of the stipulated source-to-sink power transfers on the IEEE 14-bus and 24-bus systems, with and without line outage, were obtained with the CSA, EP, GWO, and PSO algorithms. From the results in Tables I-IV and Figures 4-9, it can be seen that CSA converges quickly while achieving better ATC than EP, GWO, and PSO, while the branch power flows and bus voltages also remain within the allowable limits, as shown in Figures 4 and 5.

Table I. ATC with normal topology, IEEE 14-bus system
Source/sink bus   EP         GWO        PSO        CSA
1/9                54.2131    54.6663    54.6714    55.4486
1/10               43.7002    44.5024    44.3598    44.8332
1/12               28.9543    29.1568    29.0006    29.0220
1/13               28.8554    29.0571    29.3684    29.5996
1/14               38.5578    38.4526    39.1232    39.4719
1/4               213.0554   214.1656   213.9675   215.3233
1/3               149.1062   152.4437   152.9997   153.1253

Table II. ATC with normal topology, IEEE 24-bus system
Source/sink bus   EP         GWO        PSO        CSA
23/15             790.9801   794.8945   794.9189   797.3823
22/9              375.4146   375.5291   377.2215   377.5662
22/5              249.8510   249.0834   251.1657   252.8519
21/6               65.3891    65.5217    65.9981    65.9995
18/5              250.2302   251.4353   251.9639   252.8634

The analysis results show that the CSA algorithm is able to solve the nonlinear optimization problem of determining the ATC of power transactions between sources and sinks, with equality and inequality constraints, in a deregulated power system considering both thermal and voltage limits, and that the algorithm converges reliably.

Fig. 4. Branch power flows of the IEEE 14-bus system without line outage.

Fig. 5.
Bus voltage profile of the IEEE 14-bus system without line outage.

Fig. 6. Convergence characteristics of CSA compared to EP, GWO, and PSO for the IEEE 14-bus system without line outage.

Table III. ATC with line outage topology, IEEE 14-bus system
Source/sink bus   EP         GWO        PSO        CSA
1/9                46.8778    48.7042    48.7277    50.0297
1/10               46.7321    49.9837    48.7654    50.8314
1/12               30.8796    34.1418    34.1167    34.1631
1/13               26.9986    32.4435    31.9989    33.8311
1/14               37.5423    38.2378    35.2742    38.6285
1/4               206.751    209.8131   207.324    210.365
1/3               148.903    150.778    150.228    151.770

Fig. 7. Convergence characteristics of CSA compared to EP, GWO, and PSO for the IEEE 24-bus system without line outage.

Fig. 8. Convergence characteristics of CSA compared to EP, GWO, and PSO for the IEEE 14-bus system with line outage.

Table IV. ATC with line outage topology, IEEE 24-bus system
Source/sink bus   EP         GWO        PSO        CSA
23/15             781.342    793.135    794.381    795.014
22/9              372.625    371.950    372.768    376.399
22/5              227.914    226.261    226.823    227.259
21/6               49.9771    51.0006    50.5363    51.1782
18/5              228.763    228.724    227.561    229.852

Fig. 9. Convergence characteristics of CSA compared to EP, GWO, and PSO for the IEEE 24-bus system with line outage.

V. Conclusions

Accurate ATC determination, ensuring system security while serving power transactions, is one of the most challenging tasks in the electricity market. This paper has presented an implementation of the Cuckoo Search Algorithm to solve this problem, formulated as a nonlinear optimization problem with equality and inequality constraints, for the ATC of power transactions between sources and sinks in a deregulated power system considering both thermal and voltage limits.
The results for the two systems prove that the proposed CSA is remarkably robust in maximizing the ATC. In all cases, the available transfer capability obtained by CSA is higher than that obtained by EP, GWO, and PSO. Thus, CSA is an effective method for determining the ATC in an electric power system.

References
[1] M. R. Salehizadeh, M. A. Koohbijari, H. Nouri, A. Tascikaraoglu, O. Erdinc, J. P. S. Catalao, "Bi-objective optimization model for optimal placement of thyristor-controlled series compensator devices", Energies, Vol. 12, No. 13, Article ID 2601, 2019
[2] M. R. Salehizadeh, A. Rahimi-Kian, K. Hausken, "A leader-follower game on congestion management in power systems", in: Game Theoretic Analysis of Congestion, Safety and Security, pp. 81-112, Springer, 2015
[3] M. R. Salehizadeh, A. R. Rahimi-Kian, M. Oloomi-Buygi, "Security-based multi-objective congestion management for emission reduction in power system", International Journal of Electrical Power & Energy Systems, Vol. 65, No. 2, pp. 124-135, 2015
[4] M. Oloomi-Buygi, M. R. Salehizadeh, "Toward fairness in transmission loss allocation", 2007 Australasian Universities Power Engineering Conference, Perth, Australia, December 9-12, 2007
[5] Y. Ou, C. Singh, "Assessment of available transfer capability and margins", IEEE Transactions on Power Systems, Vol. 17, No. 2, pp. 463-468, 2002
[6] North American Electric Reliability Council (NERC), Available Transfer Capability Definitions and Determination, NERC, 1996
[7] P. Venkatesh, R. Gnanadass, N. P. Padhy, "Available transfer capability determination using power transfer distribution factors", International Journal of Emerging Electric Power Systems, Vol. 1, No. 2, Article ID 1009, 2004
[8] H. Farahmand, M. Rashidi-Nejad, M. Fotuhi-Firoozabad, "Implementation of FACTS device for ATC enhancement using RPF technique", Large Engineering Systems Conference on Power Engineering, Halifax, Canada, July 28-30, 2004
[9] Z. Chen, M. Zhou, G.
Li, "ATC determination for the AC/DC transmission systems using modified CPF method", International Conference on Critical Infrastructure, Beijing, China, September 20-22, 2010
[10] T. K. Hahn, M. K. Kim, D. Hur, J. K. Park, Y. T. Yoon, "Evaluation of available transfer capability using fuzzy multi-objective contingency-constrained optimal power flow", Electric Power Systems Research, Vol. 78, No. 5, pp. 873-882, 2008
[11] M. Rashidinejad, H. Farahmand, M. F. Firuzabad, A. A. Gharaveisi, "ATC enhancement using TCSC via artificial intelligent techniques", Electric Power Systems Research, Vol. 78, No. 1, pp. 11-20, 2008
[12] D. T. Long, T. T. Nguyen, N. A. Nguyen, L. A. T. Nguyen, "An effective method for maximizing social welfare in electricity market via optimal TCSC installation", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 4946-4955, 2019
[13] V. H. Nguyen, H. Nguyen, M. T. Cao, K. H. Le, "Performance comparison between PSO and GA in improving dynamic voltage stability in ANFIS controllers for STATCOM", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 4863-4869, 2019
[14] L. T. Duong, T. T. Nguyen, "Network reconfiguration for an electric distribution system with distributed generators based on symbiotic organisms search", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 4925-4932, 2019
[15] T. Nireekshana, G. K. Rao, S. S. N. Raju, "Enhancement of ATC with FACTS devices using real-code genetic algorithm", Electrical Power and Energy Systems, Vol. 43, No. 1, pp. 1276-1284, 2012
[16] R. M. Idris, A. Kharuddin, M.
Mustafa, "Available transfer capability determination using bees algorithm", 20th Australasian Universities Power Engineering Conference, Christchurch, New Zealand, December 5-8, 2010
[17] H. Su, Y. Qi, X. Song, "The available transfer capability based on a chaos cloud particle swarm algorithm", 9th International Conference on Natural Computation, Shenyang, China, July 23-25, 2013
[18] M. M. Othman, A. Mohamed, A. Hussain, "Available transfer capability assessment using evolutionary programming based capacity benefit margin", International Journal of Electrical Power & Energy Systems, Vol. 28, No. 3, pp. 166-176, 2006
[19] D. S. Ivan, "Evolutionary algorithm for evaluating available transfer capability", Journal of Electrical Engineering, Vol. 64, No. 5, pp. 291-297, 2013
[20] X. S. Yang, S. Deb, "Cuckoo search via Lévy flights", 2009 World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, December 9-11, 2009
[21] M. Shehab, A. T. Khader, M. A. A. Betar, "A survey on applications and variants of the cuckoo search algorithm", Applied Soft Computing, Vol. 61, No. 12, pp. 1041-1059, 2017
[22] T. Kang, J. Yao, T. L. Duong, S. Yang, X. Zhu, "A hybrid approach for power system security enhancement via optimal installation of flexible AC transmission system (FACTS) devices", Energies, Vol. 10, No. 9, Article ID 1305, 2017
[23] http://www.pserc.cornell.edu//matpower (accessed on 7 June 2016)

Engineering, Technology & Applied Science Research, Vol. 9, No.
4, 2019, pp. 4480-4483, www.etasr.com, Panhyar et al.: Influence of Casting Temperature on the Structural Behavior of Concrete

Influence of Casting Temperature on the Structural Behavior of Concrete

Muhammad Arif Panhyar
Department of Civil Engineering, Mehran University of Engineering and Technology, Shaheed Zulfiqar Ali Bhutto Campus, Khairpur Mirs', Pakistan

Fahad Ali Shaikh
Department of Civil Engineering, Mehran University of Engineering and Technology, Jamshoro, Pakistan

Syed Naveed Raza Shah
Department of Civil Engineering, Mehran University of Engineering and Technology, Shaheed Zulfiqar Ali Bhutto Campus, Khairpur Mirs', Pakistan

Ashfaque Ahmed Jhatial
Department of Civil Engineering, Mehran University of Engineering and Technology, Shaheed Zulfiqar Ali Bhutto Campus, Khairpur Mirs', Pakistan

Abstract—Concrete is the most preferred construction and building material, and its demand is not expected to diminish in the near future. The properties of concrete are affected by various factors, one of which is the temperature at casting. This experimental work was carried out to study the effect of high temperature on the workability and on the compressive and flexural tensile strength of concrete. It was found that higher temperatures caused water to evaporate from the mix during casting, reducing workability but simultaneously increasing strength compared to concrete cast at a controlled temperature.

Keywords-structural behavior; temperature; compressive strength; flexural strength; casting temperature

I. Introduction

Concrete is extensively used as a construction material and is the second most consumed material in the world [1]. Its popularity is mainly due to its durability, availability, cold resistance, chemical resistance, workability, and flexibility [2, 3]. Nowadays, world concrete consumption is approximately 2.5 tons per capita per year, equal to 17.5 billion tons for a population of 7 billion [4].
Concrete is a composite material and its properties depend on various factors such as proportioning, batching, mixing, transporting, pouring, and curing [5]. Depending on the site conditions, concrete can be mixed manually or with the help of a mixing machine called a mixer, and it can be transported with wheelbarrows or pumps. Curing is an important part of the process because it protects against the loss of the moisture needed for hydration, enhances strength, and reduces the permeability of the hardened concrete. Curing is carried out by various methods, such as ponding, steam curing, sprinkling water on the concrete, covering surfaces with gunny bags or membranes, and others (providing shade over the concrete work, spreading wet sand, etc.). Concrete is a widely used commodity that serves in a variety of environments. Concreting, though a relatively simple concept, is sometimes poorly understood, and even factors as ordinary as temperature play a big role in its proper and satisfactory performance. Temperature affects the properties of concrete, particularly when it varies during casting and curing, so the guidelines for concreting in extreme weather conditions should be properly followed in order to obtain a serviceable product. Concrete is cast under varying ambient temperatures: in winter, placement is done at low temperatures and curing at even lower ones, while in summer concrete is frequently poured at high temperatures and cured at much higher ones. Casting plays a key role in concrete construction, and if not properly executed it will surely have adverse effects on the slump (workability), the compressive strength, and the flexural tensile strength. Throughout the casting of concrete, the various factors that may affect its properties, especially temperature, should be taken into consideration.
Temperature directly affects workability, water demand, initial and final setting times, and compressive and flexural tensile strength. By inspecting the influence of temperature variation, the behavior of concrete can be more easily understood. Khairpur Mirs' is a city in the northern Sindh province of Pakistan and is representative of the hot-climate cities of northern Sindh: the climate remains hot for around 8 months of the year. The temperature of Khairpur Mirs' reached a record high of 49.5°C during the heat wave of May 26, 2010, and typically ranges from about 43.4°C in summer to 7.2°C in winter [6]. High atmospheric temperature affects concrete's properties by increasing the temperature of the fresh concrete and raising its water demand, resulting in quick dehydration, which accelerates initial setting and lowers the long-term strength of the concrete. High temperature also increases the evaporation rate of freshly mixed concrete, lowering the effective water content and hence the effective water-cement (w/c) ratio [7].

Corresponding author: Muhammad Arif Panhyar (arifali.panhyar@gmail.com)

Keeping concreting continuous, whether in summer or in winter, is important. There is an optimum temperature during the early life of concrete which optimizes strength at later ages [8]. In general, temperature has been given low priority and was neglected most of the time. Temperature varies with time and area; therefore, its effects also vary with time and site location. Fresh concrete should be workable, but sometimes, due to conditions at the site, the desired workability is not obtained, which results in segregation, bleeding, and loss of compressive strength.
Also, when the ambient temperature at the casting place is high, workability is lost because of the accelerated evaporation process, which is often compensated by adding excess water to the concrete mix. This disturbs the w/c ratio, to which concrete's strength is directly related: if the w/c ratio is increased, it will surely have adverse effects on the strength. Often in summer the temperature may reach 45°C during the day and 20°C at night, and this change in temperature considerably affects concrete's properties; in hot and humid environmental conditions, concrete surfaces can crack. Results in [9] show that the serviceability of hardened concrete and the mechanical properties of concrete are significantly affected by temperature. The factors which affect concrete's strength at high temperature can be divided into two groups: material properties (aggregates, cement) and environmental factors such as temperature, duration of exposure, etc. [10]. The authors in [11] worked on hardened concrete exposed to elevated temperatures. When reinforced concrete structures are exposed to elevated temperatures, they begin to deteriorate and may fail [12]. The authors in [13] studied the effect of curing regimes and temperature on the compressive strength of concrete, while the authors in [14] studied the effects of curing conditions on concrete's properties. The authors in [15] reported that exposing concrete to elevated ambient temperature after casting has a significant effect on its properties, and long-term behavior is also affected by the temperature factor and should be investigated [16]. Usually, concrete testing is performed under controlled conditions in the laboratory; in the field, however, concrete is prepared and kept in service at a variety of temperatures. The range of concrete use has grown considerably, with many modern constructions being built in countries with hot and humid climates.
Conventionally, the strength properties of concrete have been used as the standard for determining its performance, although it is not necessary for concrete to have high strength; it should have a long service life. It is known that the performance of concrete should be evaluated according to durability and strength under the expected surrounding atmospheric conditions. Of the research work carried out on concrete, only a little has addressed the effects of casting temperature on its properties; most studies were carried out under controlled conditions. Therefore, an experimental study was conducted in order to learn the effects of the ambient temperature of Khairpur Mirs' on the workability and on the compressive and flexural tensile strength of concrete during casting. The results were compared with those of conventional specimens.

II. Materials and Experimental Procedure

For this experimental study, Type I ordinary Portland cement was used. Locally acquired fine and coarse aggregates had specific gravities of 2.667 and 2.7, respectively. A mix design was prepared to determine the mix ratio needed to achieve the target strength according to ACI, which was determined to be 1:2.04:2.74 (1 part cement, 2.04 parts fine and 2.74 parts coarse aggregate), with a w/c ratio of 0.62. To determine the influence of temperature on the properties of concrete, it was cast at two different sites: one under a controlled temperature of 26°C inside the laboratory, while the other castings were performed outdoors, in open air, where the temperature varied with the time of casting. The outdoor casting was conducted at 1-hour intervals from 11:00 a.m. to 4:00 p.m. To determine the effect of the increase in concrete temperature, a total of 6 batches were prepared outdoors and 1 batch in the laboratory.
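The mix ratio above fixes all batch quantities once the cement content is chosen. A minimal sketch (the 50 kg cement figure is an arbitrary illustration, not a quantity from the study):

```python
# Batch quantities for mix ratio 1 : 2.04 : 2.74 (cement : fine : coarse)
# with w/c = 0.62, per the mix design given in the text.
def batch_quantities(cement_kg, fine_ratio=2.04, coarse_ratio=2.74, wc=0.62):
    return {
        "cement": cement_kg,
        "fine_aggregate": round(cement_kg * fine_ratio, 2),
        "coarse_aggregate": round(cement_kg * coarse_ratio, 2),
        "water": round(cement_kg * wc, 2),
    }

print(batch_quantities(50.0))
# {'cement': 50.0, 'fine_aggregate': 102.0, 'coarse_aggregate': 137.0, 'water': 31.0}
```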
For each batch, a total of 9 cubes of 150mm × 150mm × 150mm (3 cubes each for 3, 7, and 28 days of curing) and 6 beams of 500mm × 100mm × 100mm (3 beams each for 7 and 28 days of curing) were cast and cured in water for the specified period. The workability of the wet mix of each batch was determined and recorded according to [17]. The compressive strength testing of the cube specimens was conducted after 3, 7, and 28 days of curing in accordance with [18], while the flexural tensile testing was conducted at 7 and 28 days of curing in accordance with [19]. The ambient temperature was recorded during the casting of each batch. The concrete batch cast under controlled temperature was named B-1, while the batches cast under uncontrolled temperature at various times were named B-2 to B-7.

III. Results and Discussion

A. Workability

Workability is directly related to temperature: as the temperature increases, the water available in the concrete starts evaporating, which reduces workability, as shown in Table I. The slump values also vary with temperature; Figure 1 gives a graphical representation of the variation of slump with respect to time. From the results, it can be observed that the difference in the surrounding temperature has a significant impact on workability. The specimens cast under controlled temperature (batch B-1) exhibited the highest slump, while for the outdoor castings (batches B-2 to B-7) the workability decreases as the temperature increases. This is attributed to the evaporation of water, which dries the concrete and thus directly affects workability.

B. Compressive Strength

From the compressive strength results at 3, 7, and 28 days, shown in Tables II-IV respectively, it can be observed that the compressive strength of the controlled specimens (batch B-1) is significantly lower than that of the specimens cast outdoors under uncontrolled temperature.
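The "% increase w.r.t. control" figures reported in Tables II-IV are simply (batch strength − control strength) / control strength × 100. A quick check against two of the published values:

```python
# Percentage increase relative to the control batch B-1, as tabulated.
def pct_increase(batch_mpa, control_mpa):
    return round((batch_mpa - control_mpa) / control_mpa * 100, 2)

print(pct_increase(17.68, 13.70))  # 3-day,  B-7 at 42 C -> 29.05
print(pct_increase(30.56, 26.09))  # 28-day, B-7 at 42 C -> 17.13
```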
This is attributed to the reduction in workability. The w/c ratio plays a significant role in achieving the specified strength: the higher the w/c ratio, the lower the strength that can be achieved. In this experimental work, although the w/c ratio was on the high side, the difference in strength between the controlled- and uncontrolled-temperature specimens was significantly high, because in the uncontrolled environment the water evaporates faster due to the rise in temperature, reducing workability but simultaneously increasing strength. The variation in compressive strength is illustrated in Figure 2.

Table I. Workability results of concrete samples
Batch   Casting time   Temperature (°C)   Slump (mm)
B-1     ---            26                 112
B-2     11:00 a.m.     34                 102
B-3     12:00 p.m.     35                  98
B-4     01:00 p.m.     37                  91
B-5     02:00 p.m.     38                  87
B-6     03:00 p.m.     40                  82
B-7     04:00 p.m.     42                  76

Fig. 1. Slump variation versus temperature and time.

Table II. Average compressive strength at 3 days
Batch   Casting time   Temperature (°C)   Avg. compressive strength (MPa)   Increase w.r.t. control B-1 (%)
B-1     ---            26                 13.70                             ---
B-2     11:00 a.m.     34                 15.56                             +13.59
B-3     12:00 p.m.     35                 16.10                             +17.52
B-4     01:00 p.m.     37                 16.57                             +20.94
B-5     02:00 p.m.     38                 16.88                             +23.21
B-6     03:00 p.m.     40                 17.38                             +26.86
B-7     04:00 p.m.     42                 17.68                             +29.05

Fig. 2. Variation in compressive strength of concrete.

Table III. Average compressive strength at 7 days
Batch   Casting time   Temperature (°C)   Avg. compressive strength (MPa)   Increase w.r.t. control (%)
B-1     ---            26                 19.85                             ---
B-2     11:00 a.m.     34                 22.11                             +11.39
B-3     12:00 p.m.     35                 22.57                             +13.70
B-4     01:00 p.m.     37                 22.85                             +15.11
B-5     02:00 p.m.     38                 23.32                             +17.48
B-6     03:00 p.m.     40                 23.96                             +20.70
B-7     04:00 p.m.     42                 24.50                             +23.40

Table IV.
Average compressive strength at 28 days
Batch   Casting time   Temperature (°C)   Avg. compressive strength (MPa)   Increase w.r.t. control (%)
B-1     ---            26                 26.09                             ---
B-2     11:00 a.m.     34                 28.19                             +8.05
B-3     12:00 p.m.     35                 28.46                             +9.08
B-4     01:00 p.m.     37                 28.87                             +10.65
B-5     02:00 p.m.     38                 29.11                             +11.57
B-6     03:00 p.m.     40                 29.94                             +14.76
B-7     04:00 p.m.     42                 30.56                             +17.13

C. Flexural Tensile Strength

The results of the flexural tensile testing after 7 and 28 days of curing are shown in Tables V and VI, respectively, while the variation in flexural tensile strength is shown in Figure 3. The results illustrate that with the increase in temperature during casting, the water content is reduced, which lowers workability. The reduced workability in turn causes some difficulties while mixing or pouring the concrete into the moulds, but it has the advantage of yielding higher strength than samples with better workability. This is evident from the results: the control specimens (batch B-1), cast under controlled temperature, achieved higher workability but recorded significantly lower flexural tensile strength than the specimens of batches B-2 to B-7, which were cast under uncontrolled temperature.

Fig. 3. Variation in flexural strength of concrete.

Table V. Average flexural tensile strength at 7 days
Batch   Casting time   Temperature (°C)   Avg. flexural tensile strength (MPa)   Difference w.r.t. control (%)
B-1     ---            26                 3.24                                   ---
B-2     11:00 a.m.     34                 3.46                                   +6.96
B-3     12:00 p.m.     35                 3.60                                   +11.28
B-4     01:00 p.m.     37                 3.72                                   +15.00
B-5     02:00 p.m.     38                 3.86                                   +19.31
B-6     03:00 p.m.     40                 3.94                                   +21.79
B-7     04:00 p.m.     42                 4.07                                   +25.81

Table VI. Average flexural tensile strength at 28 days
Batch   Casting time   Temperature (°C)   Avg. flexural tensile strength (MPa)   Difference w.r.t. control (%)
control (%):

Batch  Casting time  Temperature (°C)  Avg flexural tensile strength (MPa)  Difference w.r.t. control (%)
B-1    ---           26                4.34                                 ---
B-2    11:00 AM      34                4.53                                 +4.38
B-3    12:00 PM      35                4.68                                 +7.83
B-4    01:00 PM      37                4.78                                 +10.14
B-5    02:00 PM      38                4.89                                 +12.67
B-6    03:00 PM      40                4.99                                 +14.98
B-7    04:00 PM      42                5.11                                 +17.74

IV. Conclusions
From the results, it can be concluded that:
• Temperature has a significant impact on the properties of concrete.
• Controlled temperature allows high workability to be achieved, whereas uncontrolled temperature tends to cause the water content to evaporate, causing a significant loss in workability.
• The reduction in workability caused by the increase in temperature has a significant impact on the compressive and flexural tensile strength of concrete.
• At the peak recorded temperature of 42 °C, after 3 days of curing, the specimens cast outside were found to have 29.05% higher compressive strength than the specimens cast at the controlled temperature of 26 °C. After 28 days, this increase was 17.13%.
• The same behavior was observed in flexural strength. After 7 days, the flexural strength of the specimen cast outside at 42 °C was 25.81% higher than that of the specimen cast at the controlled temperature of 26 °C; after 28 days of curing, the increase was 17.74%. This shows that as the temperature increases, the compressive and flexural strength of concrete increase, but the gain diminishes with time.
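The trend summarized in the conclusions (strength rising with casting temperature) can be quantified with an ordinary least-squares slope over the 28-day flexural values of Table VI. A sketch in plain Python; the roughly 0.05 MPa/°C figure is our own estimate over this narrow temperature range, not a value reported by the authors:

```python
def ols_slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

temp_c = [26, 34, 35, 37, 38, 40, 42]             # casting temperature (°C), Table VI
f28 = [4.34, 4.53, 4.68, 4.78, 4.89, 4.99, 5.11]  # 28-day flexural strength (MPa)
print(round(ols_slope(temp_c, f28), 3))           # roughly 0.05 MPa per °C
```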
Engineering, Technology & Applied Science Research Vol. 11, No. 4, 2021, 7508-7514 | www.etasr.com | Kassem et al.: A Techno-Economic Viability Analysis of the Two-Axis Tracking Grid-Connected …

A Techno-Economic Viability Analysis of the Two-Axis Tracking Grid-Connected Photovoltaic Power System for 25 Selected Coastal Mediterranean Cities

Youssef Kassem
Department of Mechanical Engineering, Engineering Faculty, Near East University, Nicosia, Cyprus
yousseuf.kassem@neu.edu.tr

Hüseyin Gökçekuş
Department of Civil Engineering, Civil and Environmental Engineering Faculty, Near East University, Nicosia, Cyprus
huseyin.gokcekus@neu.edu.tr

Hamza S.
Abdalla Lagili
Department of Civil Engineering, Civil and Environmental Engineering Faculty, Near East University, Nicosia, Cyprus
hamzasalem409@gmail.com

Abstract—Generating energy from renewable sources, particularly solar energy, offers significant benefits and supports cleaner and more sustainable development. In the present paper, the potential of developing a 4.2 kW grid-connected rooftop two-axis tracking PV system in 25 selected coastal Mediterranean cities located in different Arabic countries is evaluated using the RETScreen software. The proposed system serves the basic household energy needs according to the load profile derived from monthly electricity bills. It is found that the proposed system produces about 8824 kWh annually, which helps to reduce CO2 emissions. Also, the average energy production cost is estimated to range from 0.0334 to 0.0475 $/kWh. It is concluded that the proposed system can provide an effective solution to energy poverty in developing regions, with a very positive socio-economic and environmental impact. The small-scale grid-connected PV system will cover the domestic energy needs at a lower energy production cost than the electricity price grid-connected consumers pay. This study demonstrates that generating electricity from solar energy will help reduce the electricity tariff rates and the dependence on fossil fuels.

Keywords—coastal Mediterranean cities; two-axis sun tracking system; solar energy potential; grid-connected; small-scale PV system; RETScreen

I. Introduction
The energy sector is at the center of the economic crisis and the environmental problems in Arabic countries such as Lebanon, Syria, Palestine, and Libya [1]. This sector is the biggest waste producer and the primary cause of budget deficits and ballooning debt, in addition to being a primary cause of air pollution and related deaths.
Moreover, the electricity crisis has deepened in many Arabic countries due to population growth, rising living standards, and growing industrial sectors, which have increased the energy demand, together with the increased electricity cost associated with fossil-fuel-based electricity production [2]. Generally, most Arabic countries do not lack electrical energy sources, such as oil, gas, sunlight, and wind. For instance, Libya is a country rich in natural resources; however, it has faced power outages for several years due to poor maintenance and civil war. The electricity crisis is not new in most developing countries, and the electricity sector has suffered from decades of mismanagement, weak policies, and the absence of proper planning. This problem has been exacerbated by the dilapidation of old power stations, accompanied by sabotage operations. As a result, the hours of power cuts have increased, ranging from 8 to 20 hours per day. For this reason, citizens depend on domestic power generators or small home generators, both of which add financial burdens to the residents. Nowadays, all countries are looking to utilize renewable energy resources instead of fossil fuels to mitigate climate change [3]. Additionally, the utilization of renewable energies, such as solar, as power sources can be an alternative solution for solving the electricity crisis in most countries and reducing the consumption of fossil fuels [4, 5]. Globally, solar energy is one of the most popular alternative energy resources for electricity production. Photovoltaic (PV) panels are used to convert sunlight into electricity. In the literature, PV systems have been shown to help meet basic domestic needs globally, especially in developing countries [6]. PV systems can be categorized as stand-alone systems or grid-tied systems for domestic and commercial settings.
Corresponding author: Youssef Kassem

Grid-tied PV systems are generally installed with no mandatory requirement for storage in regions/countries with stable grids [7]. The benefits of these systems are that they are simple to design, easily manageable, require little maintenance, and are cost-effective. The disadvantage of a fixed-tilt grid-tied system is that its energy production is lower than the power produced by tracking PV systems. To maximize the output power of a PV system, it must be adjusted to the changing position of the sun throughout the day [6]. Solar tracking systems are utilized to maximize the energy production of PV systems [8]. Solar tracking systems are classified based on movement capability (single-axis and two-axis) and control system (astronomically controlled systems and sensor-controlled systems) [9]. According to [10-12], tracking systems (single-axis and two-axis) increase energy production by 20 to 40%. The performance of grid-connected PV systems with various sun-tracking systems has been investigated in several scientific studies [13-15]. For instance, the authors in [13] investigated the feasibility of a 5 kW grid-connected PV system under different tracking systems and PV technologies in Nahr el-Bared, Lebanon. The results demonstrated that the two-axis tracking solar system was an economical option for electricity production compared to other systems. The authors in [14] evaluated the performance of grid-connected solar systems under different tracking systems in the Gulf Cooperation Council countries. The results showed that vertical-axis and two-axis tracker systems could produce 20% and 34% more power, respectively, than fixed-tilt systems.
The benefit of grid-connected systems is that the excess electricity they produce can be fed back to the grid, which can help reduce electricity bills and solve the electricity crisis. The main scope of the current study is to present a techno-economic feasibility evaluation of 4.2 kW grid-connected PV systems on the rooftops of household/residential buildings in coastal Mediterranean cities in some Arabic countries. The performance of the two-axis tracking system for a grid-connected PV system is analyzed using the RETScreen software to show the benefits of solar energy utilization as a power source for solving the electricity crisis in developing countries.

II. Materials and Methods
The solar energy potential in 25 coastal locations under Mediterranean climate conditions is discussed based on a National Aeronautics and Space Administration (NASA) database, which includes solar radiation (SR) and air temperature data. Besides, the performance of the two-axis tracking system for grid-connected rooftop PV systems, as a solution for the electricity crisis and for reducing electricity bills, is investigated with the help of the RETScreen software.

A. Location Details and Collected Data
In this work, 25 coastal Mediterranean cities located in Libya, Lebanon, Syria, Palestine, Egypt, Tunisia, and Algeria have been taken into consideration. The geographical coordinates of the selected cities are listed in Table I. In the literature, the solar potential of different regions is usually evaluated using the NASA database. For example, the authors in [16] assessed the potential of solar energy in various locations in Nigeria using the NASA database. The authors in [17] found that the NASA database shows good agreement with measured global solar irradiation data. Therefore, the solar potential of the 25 coastal locations is assessed using the monthly NASA database.

Table I.
Coordinates of the selected cities

Country    City              Latitude (°)  Longitude (°)
Libya      Az Zawiyah        32.76         12.74
Libya      Tripoli           32.89         13.19
Libya      Al Khums          32.65         14.27
Libya      Misratah          32.33         15.10
Libya      Surt              31.19         16.57
Libya      Benghazi          32.12         20.09
Libya      Turbruq           32.07         23.94
Lebanon    Tripoli           34.43         35.84
Lebanon    Beirut            33.89         35.50
Syria      Tartus            34.90         35.89
Syria      Al Ladhiqiyah     35.61         36.00
Palestine  Gaza Strip        31.35         34.31
Egypt      Port Said         31.27         32.30
Egypt      Alexandria        31.20         29.92
Egypt      Marsa Matruh      31.36         27.22
Tunisia    Djerba Midoun     33.81         10.85
Tunisia    Gabes             33.89         10.10
Tunisia    Sfax              34.74         10.76
Tunisia    Sousse            35.82         10.63
Tunisia    Tunis             36.81         10.18
Algeria    Annaba            36.91         7.74
Algeria    Skikda            36.87         6.91
Algeria    Bejaia            36.75         5.06
Algeria    Algiers           36.70         3.06
Algeria    Oran              35.70         -0.63

B. Two-Axis Tracking PV Arrays
In general, solar tracking systems are utilized to maximize the energy production of the PV system by maximizing the incident beam radiation [18]. The rotation of these systems can be about a single axis or about two axes. Maximum energy can be achieved using a two-axis solar system due to its total freedom of movement. In two-axis PV systems, the solar panels are mounted on a structure that can move the modules about two axes [19], as shown in Figure 1. For a two-axis PV system, two motors are required for the rotation of the axes [19]. Thus, the panel's orientation with the two-axis tracker system depends on the solar position. Generally, this system requires a control module to direct it. Solar tracker PV systems utilize an SR sensor to control the system orientation [20]. Moreover, the performance of the PV system depends on the parameters of the system components and the weather. Additionally, existing power producers are trying to increase the output power of PV systems by improving operation and maintenance (O&M) activities [20]. O&M is one of the most important aspects of a PV solar system: improving it can help reduce the energy production cost and improve the returns on investment.
Furthermore, there are several issues that a PV system faces during its lifecycle, such as natural degradation, component failures, weather conditions, etc. [21]. Therefore, a holistic approach can address these issues under the O&M aspect, which is divided into three categories: preventative maintenance, corrective maintenance, and condition-based maintenance. In fact, tracking PV systems increase the O&M effort due to the periodic checking required to ensure the optimal performance of the system.

Fig. 1. Characteristics of two-axis tracker movements.

C. Design of the PV Power System
To build the 4.2 kW PV system, a mono-Si CS6X-300M PV module was selected. It is made of mono-crystalline silicon cells with a maximum power of 300 Wp. A total of 14 modules are required, with an area of about 28 m². A Fronius Symo 4.5-3-M Light solar inverter with a capacity of 4.5 kW and 98.6% efficiency was chosen in this study. The specifications of the selected PV panel and inverter are available in [22, 23].

D. Simulation Tool
There are many simulation tools, such as HOMER Energy, RETScreen, etc., that may be utilized to evaluate the energy production and the levelized cost of energy (LCOE); a comparison between these simulation tools is available in [24]. In this study, RETScreen is utilized to evaluate the economic feasibility of the proposed systems. RETScreen was developed by Natural Resources Canada (NRC). It utilizes long-term monthly average meteorological data from the NASA database as the source of meteorological information for a specific location [3, 16].
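The array sizing in Section C is straightforward arithmetic; a hedged sketch in plain Python (the ~2 m² per-module footprint is our assumption chosen to reproduce the quoted ~28 m², not a datasheet value):

```python
import math

system_w = 4200       # target system size (W)
module_wp = 300       # CS6X-300M rated power (Wp)
module_area_m2 = 2.0  # assumed per-module footprint (m²), matching the quoted ~28 m²

n_modules = math.ceil(system_w / module_wp)
print(n_modules, n_modules * module_area_m2)  # 14 modules, 28.0 m²
```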
In the present study, the most important economic indicators of the financial analysis, including the net present value (NPV), cost of energy (COE), simple payback (SP), and equity payback (EP), are estimated with the RETScreen software. Also, the greenhouse gas (GHG) emission reduction, energy production, and capacity factor (CF) of the proposed system are determined.

III. Results and Discussion
A. Characteristics of Solar Energy in the Selected Locations
Generally, the characteristics of the PV panels and inverters, the characteristics of the installation, and the meteorological conditions (relative humidity, air temperature, solar radiation, etc.) are the major factors that influence the performance of a PV system. Among the meteorological conditions affecting the power generated by the PV system, the main one is solar irradiance [25-27]. Therefore, global SR data were analyzed to estimate the potential of solar energy in the selected cities. Table II summarizes the average horizontal monthly daily SR for the selected locations. The average horizontal monthly daily SR varies from 2.01 kWh/m²/day to 8.50 kWh/m²/day; the maximum and minimum values of SR are recorded in Alexandria (in June) and Skikda (in December), respectively. The highest and lowest annual SR are 5.87 kWh/m²/day and 4.51 kWh/m²/day for Alexandria and Skikda, respectively, as shown in Figure 2. The highest value of average temperature (AT) was recorded in Port Said (21.23 °C), with the next highest in Alexandria (20.79 °C). Based on the SR values at the selected locations, the solar resource of the selected locations is categorized as excellent (class 5) according to [16]. Therefore, these locations are suitable for installing a PV system in the future due to their high SR values.

Fig. 2. Average SR and air temperature as a function of location.

B.
Electricity Generation and Capacity Factor
SR and the number of clear sunny days are essential factors that influence the performance of a PV system, including its output power and CF [28, 29]. The monthly electricity generation (EG) of the proposed system is shown in Table III. The monthly EG lies within the range of 390-1125 kWh. The maximum average EG occurs in Alexandria during July, with a value of 1125 kWh, while the minimum value of 390 kWh was recorded in January in Beirut. Furthermore, Figure 3 shows the annual EG and CF of the proposed systems. The annual EG lies within the range of 7628-10333 kWh for PV systems with two-axis tracking. The maximum value of EG is recorded in Alexandria, while the minimum value is obtained in Algiers. Besides, the CF values vary from 20.73% to 28.08%. These observations are supported by other scientific studies that analyzed the feasibility of grid-connected PV systems. For instance, the authors in [30] found that the CF of their proposed PV system in Oman was within the range of 16-23%. Also, the authors in [31] found that the CF of grid-connected PV systems with various technologies varied from 15.37% to 15.75%. The authors in [32] found that the CF of grid-connected PV systems with different sun-tracking modes was within the range of 17.54-27.42%. Moreover, the use of the two-axis option instead of the fixed-tilt option significantly increases the generated electricity [32, 33]. Therefore, it can be concluded that the values obtained in the present study for each location are compatible with the generally accepted values. Consequently, it is technically sustainable to build a grid-connected rooftop PV system in these locations. The results indicate that the variation of the EG and CF is a function of location.
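The CF figures quoted above follow directly from the annual EG and the 4.2 kW rating, CF = E_annual / (P_rated × 8760 h); a quick check in plain Python using the reported extremes:

```python
def capacity_factor(annual_kwh, rated_kw):
    """Fraction of the year's ideal full-power output actually produced."""
    return annual_kwh / (rated_kw * 8760.0)

# Annual EG extremes reported in the text: Alexandria (max) and Algiers (min)
for city, eg_kwh in [("Alexandria", 10333), ("Algiers", 7628)]:
    print(city, round(100 * capacity_factor(eg_kwh, 4.2), 2))  # 28.08 and 20.73 (%)
```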
Table II. Average daily solar radiation (kWh/m²/day)

Location           Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
Az Zawiyah         2.69  3.86  5.14  6.41  7.15  7.90  8.10  7.26  5.74  4.18  2.99  2.38
Tripoli (Libya)    2.67  3.66  4.79  6.15  6.98  7.67  7.79  7.05  5.52  3.96  2.75  2.35
Al Khums           2.79  3.67  4.75  5.83  6.52  7.14  7.34  6.50  5.17  3.97  2.89  2.45
Misratah           3.07  4.09  5.36  6.57  7.24  7.90  8.07  7.37  5.96  4.60  3.31  2.77
Surt               3.41  4.25  5.23  6.18  6.54  7.42  7.26  6.86  5.77  4.67  3.49  3.06
Benghazi           2.87  3.87  5.18  6.56  7.26  7.92  7.94  7.26  5.96  4.55  3.26  2.63
Turbruq            2.69  3.65  4.90  6.18  6.89  7.52  7.63  6.96  5.86  4.38  3.05  2.47
Tripoli (Lebanon)  2.76  3.68  5.03  6.37  7.62  8.31  8.08  7.37  6.32  4.65  3.24  2.49
Beirut             2.67  3.51  4.80  6.18  7.45  8.08  7.83  7.16  6.07  4.53  3.15  2.41
Tartus             2.76  3.68  5.03  6.37  7.62  8.31  8.08  7.37  6.32  4.65  3.24  2.49
Al Ladhiqiyah      2.65  3.63  4.96  6.28  7.45  8.22  7.99  7.29  6.20  4.56  3.06  2.38
Gaza Strip         3.08  3.90  5.29  6.58  7.50  8.07  7.90  7.23  6.22  4.67  3.50  2.87
Port Said          3.24  4.08  5.46  6.79  7.77  8.44  8.10  7.54  6.53  5.02  3.68  2.96
Alexandria         3.21  4.14  5.56  6.92  7.79  8.50  8.35  7.72  6.56  5.04  3.64  2.97
Marsa Matruh       2.74  3.66  4.98  6.24  7.05  7.89  7.82  7.16  5.94  4.35  3.12  2.50
Djerba Midoun      2.62  3.64  4.90  6.27  7.02  7.67  7.82  7.07  5.51  3.87  2.82  2.32
Gabes              2.62  3.64  4.90  6.27  7.02  7.67  7.82  7.07  5.51  3.87  2.82  2.32
Sfax               2.58  3.53  4.57  5.87  6.85  7.38  7.44  6.63  5.06  3.47  2.68  2.36
Sousse             2.37  3.29  4.33  5.67  6.72  7.38  7.60  6.53  4.97  3.34  2.48  2.16
Tunis              2.30  3.20  4.22  5.22  6.34  6.94  7.31  6.29  4.69  3.37  2.47  2.09
Annaba             2.28  3.15  4.25  5.21  6.25  6.98  7.14  6.08  5.00  3.57  2.45  2.02
Skikda             2.27  3.15  4.26  5.22  6.20  6.91  7.08  6.12  4.94  3.53  2.42  2.01
Bejaia             2.38  3.31  4.44  5.46  6.41  7.12  7.23  6.38  5.08  3.66  2.51  2.06
Algiers            2.48  3.38  4.59  5.69  6.49  7.20  7.13  6.44  5.28  3.82  2.63  2.15
Oran               2.72  3.64  4.74  5.94  6.55  7.08  6.92  6.26  5.25  3.94  2.80  2.40

Table III.
Average monthly EG (kWh)

Location           Jan  Feb  Mar  Apr  May   Jun   Jul   Aug   Sep  Oct  Nov  Dec
Az Zawiyah         558  593  754  771  847   860   910   868   732  625  541  430
Tripoli (Libya)    561  595  755  772  847   860   910   868   733  627  543  432
Al Khums           520  522  701  745  855   888   955   872   677  616  483  465
Misratah           592  602  821  865  979   1011  1078  1028  811  753  581  553
Surt               656  606  735  703  804   855   886   834   739  625  535  592
Benghazi           536  558  785  859  975   1007  1056  1008  811  742  568  510
Turbruq            483  511  725  799  918   950   1006  954   794  702  513  460
Tripoli (Lebanon)  554  571  786  849  1055  1083  1085  1043  905  810  610  520
Beirut             390  438  630  672  845   901   887   849   720  660  508  419
Tartus             474  480  655  731  930   979   978   930   781  678  507  444
Al Ladhiqiyah      546  574  779  835  1021  1061  1066  1029  890  806  585  511
Gaza Strip         581  557  751  805  919   989   988   967   827  792  602  582
Port Said          613  583  827  891  1065  1094  1079  1053  903  831  647  580
Alexandria         606  597  849  913  1070  1107  1125  1089  911  840  641  585
Marsa Matruh       532  540  794  854  979   1007  1031  993   837  754  572  519
Djerba Midoun      499  550  744  822  941   969   1030  973   743  609  487  453
Gabes              581  564  745  763  905   897   945   873   724  686  586  572
Sfax               508  542  690  766  917   926   970   902   675  535  471  488
Sousse             466  505  651  739  899   928   999   888   668  518  436  446
Tunis              498  479  680  754  869   921   971   911   739  695  572  518
Annaba             463  493  649  675  830   874   933   823   688  591  449  426
Skikda             457  490  649  675  821   864   924   829   676  579  438  420
Bejaia             491  525  688  715  857   897   949   876   701  611  463  436
Algiers            434  456  616  623  789   757   914   881   706  526  513  414
Oran               598  594  772  830  936   939   991   946   788  679  551  525

C. Performance of the Proposed System
The environmental impact and the economic performance of the proposed system were evaluated. In this study, the financial parameters (Table IV) are assumed based on previous scientific studies in different countries. The system cost is around $5000, an estimate based on recent market data that is consistent with the cost prices available in the literature.

Table IV.
Financial parameters

Factor                              Unit  Value
Inflation rate                      %     2.5
Discount rate                       %     3
Reinvestment rate                   %     9
Project life                        year  25
Debt ratio                          %     50
Debt interest rate                  %     7
Debt term                           year  20
Electricity export escalation rate  %     5

Fig. 3. Annual EG and CF as a function of location.

Table V lists the economic performance results of the proposed system. The obtained results show that the NPV is positive, which makes the project financially and economically feasible. Moreover, the developed PV system in Algiers has the longest EP of 4.0 years, while Alexandria and Port Said have the lowest EP value (2.8 years). Besides, the maximum and minimum values of SP are recorded in Algiers and Alexandria, respectively. These results indicate that the PV projects in all locations make financial sense. Additionally, the lowest energy production cost (COE) is found in Alexandria, with a value of 0.0334 $/kWh, followed by Tripoli (Lebanon) with 0.0336 $/kWh. Tripoli (Libya) and Algiers have the highest COE compared to the other selected regions, with values of 0.0475 $/kWh and 0.0453 $/kWh, respectively. In general, the electricity price depends on the amount of energy consumption. For instance, the energy cost calculation in Lebanon starts from 0.0255 $/kWh for 0-100 kWh of energy consumption, 0.04 $/kWh for 100-300 kWh, 0.0584 $/kWh for 300-400 kWh, 0.0875 $/kWh for 400-500 kWh, and 0.146 $/kWh for energy consumption over 500 kWh. In Syria, it starts from 0.005 $/kWh for 1-100 kWh of energy consumption, 0.007 $/kWh for 101-200 kWh, 0.01 $/kWh for 200-400 kWh, 0.015 $/kWh for 401-600 kWh, 0.015 $/kWh for 601-800 kWh, 0.061 $/kWh for 801-1000 kWh, 0.071 $/kWh for 1001-2000 kWh, and 0.081 $/kWh for energy consumption over 2000 kWh.
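Assuming the Lebanese blocks above are marginal, i.e. each rate applies only to consumption within its band (an interpretation on our part; the paper does not state it), a monthly bill can be sketched as:

```python
# (upper bound of block in kWh, rate in $/kWh); float("inf") closes the schedule
LEBANON_BLOCKS = [(100, 0.0255), (300, 0.04), (400, 0.0584),
                  (500, 0.0875), (float("inf"), 0.146)]

def tiered_bill(kwh, blocks):
    """Bill a monthly consumption against ascending marginal blocks."""
    total, prev_cap = 0.0, 0
    for cap, rate in blocks:
        if kwh <= prev_cap:
            break
        total += (min(kwh, cap) - prev_cap) * rate
        prev_cap = cap
    return total

print(round(tiered_bill(350, LEBANON_BLOCKS), 2))  # 100*0.0255 + 200*0.04 + 50*0.0584 = 13.47
```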
In Palestine, it ranges from 0.15 $/kWh to 0.17 $/kWh for energy consumption over 200 kWh. Households are charged a flat rate of 0.045, 0.04, and 0.077 $/kWh in Egypt, Algeria, and Tunisia, respectively. Hence, the energy production cost of the proposed systems is competitive with the electricity company tariff in the selected countries, except for Libya.

Table V. Economic performance of the proposed PV system for all selected locations

Location           NPV ($)  SP (yr)  EP (yr)  ALCS ($/yr)  COE ($/kWh)  GA-GHG (tCO2)
Az Zawiyah         21505.8  5.9      3.5      1235.0       0.0407       5.40
Tripoli (Libya)    14927.7  5.9      3.5      1003.4       0.0475       5.40
Al Khums           20883.3  6.0      3.6      1199.3       0.0416       5.27
Misratah           25344.5  5.2      3.0      1455.5       0.0357       6.15
Surt               21762.4  5.8      3.5      1249.8       0.0403       5.45
Benghazi           24505.5  5.3      3.1      1407.3       0.0367       5.98
Turbruq            22554.1  5.7      3.3      1295.2       0.0392       5.60
Tripoli (Lebanon)  26215.9  5.1      3.1      1505.5       0.0336       6.98
Beirut             19655.3  6.3      3.8      1128.8       0.0436       5.60
Tartus             21747.5  5.8      3.5      1248.9       0.0403       5.15
Al Ladhiqiyah      25442.0  5.2      3.0      1461.1       0.0356       5.84
Gaza Strip         24324.8  5.3      3.1      1396.9       0.0369       4.28
Port Said          26941.3  4.9      2.8      1547.2       0.0340       4.65
Alexandria         27477.5  4.8      2.8      1578.0       0.0334       4.73
Marsa Matruh       24493.0  5.3      3.1      1406.6       0.0367       4.30
Djerba Midoun      22569.0  5.7      3.3      1296.1       0.0391       4.02
Gabes              22640.7  5.7      3.3      1300.2       0.0390       4.03
Sfax               21175.8  6.0      3.6      1216.1       0.0412       3.82
Sousse             20382.1  6.1      3.7      1170.5       0.0424       3.71
Tunis              21883.5  5.8      3.4      1256.7       0.0401       3.92
Annaba             19575.6  6.3      3.8      1124.2       0.0437       4.39
Skikda             19340.5  6.4      3.9      1110.7       0.0441       4.35
Bejaia             20601.0  6.1      3.6      1183.1       0.0420       4.56
Algiers            18710.7  6.6      4.0      1074.5       0.0453       4.24
Oran               23642.9  5.5      3.2      1357.8       0.0377       5.09

The results demonstrate that the proposed system can help solve the electricity crisis while simultaneously reducing GHG emissions. Consequently, it can be concluded that the developed system provides very good insight into the economic viability of the project for all regions.
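The NPV column in Table V appears consistent with discounting each location's annual life-cycle savings (ALCS) at the 3% discount rate over the 25-year project life; a sketch that approximately reproduces the Az Zawiyah row (this is our reading of the RETScreen output, not a formula stated by the authors):

```python
def pv_of_annuity(annual, rate, years):
    """Present value of a constant annual cash flow discounted at `rate`."""
    return annual * (1 - (1 + rate) ** -years) / rate

alcs = 1235.0        # Az Zawiyah annual life-cycle savings, Table V ($/year)
npv = pv_of_annuity(alcs, 0.03, 25)
print(round(npv, 1))  # close to the 21505.8 reported in Table V
```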
Additionally, the obtained results demonstrate that the development of the proposed 4.2 kW PV power system is economically acceptable due to the favorable economic results obtained.

IV. Limitations and Conclusions
Installing PV systems has become increasingly attractive for residential consumers due to increasing electricity tariff rates, while it also reduces a country's dependence on fossil fuels. The objective of the current study was to investigate the feasibility of a two-axis tracking PV system in coastal Mediterranean cities located in different countries using the RETScreen software. Before stating the main conclusions, it is essential to acknowledge the limitations of this work. First, the assumed financial parameters were based on historical values from the literature. Second, the influence of various parameters such as dust, irradiation intensity, air temperature, and relative humidity was neglected due to the limitations of the software. Third, the cost of the proposed projects was estimated based on existing costs in the literature. The findings of the present study show that the annual SR of the selected regions is within the range of 1645.85 to 2141.33 kWh/m². Based on these data, the analysis indicates that the selected cities have the potential for the deployment of PV power systems in household/residential applications. Moreover, the average annual energy output shows that the 4.2 kW grid-connected PV system could produce 8824 kWh, indicating that it can cover the required electricity needs of one house located in each selected city. These results are supported by the findings in [13].
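The annual SR range quoted here is simply the daily averages of Figure 2 scaled to a year (the small residuals presumably come from month-length weighting in the original monthly data); a quick consistency check:

```python
# Annual SR ≈ average daily SR × 365; extremes taken from Section III.A
for city, daily, reported in [("Alexandria", 5.87, 2141.33), ("Skikda", 4.51, 1645.85)]:
    annual = daily * 365
    print(city, round(annual, 1), "vs reported", reported)  # agrees to within a few kWh/m²
```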
Based on the financial assumptions used in this study, the average energy production cost ranges from 0.0334 to 0.0475 $/kWh for the developed system. Thus, the energy production cost of the proposed system is competitive with the electricity company tariff in the selected countries, except for Libya. The results of this paper demonstrate that a small-scale grid-connected rooftop PV system has the potential to solve the electricity crisis, reduce the consumption of fossil fuel, and reduce environmental pollution by minimizing CO2 emissions. The conducted analysis showed that small-scale grid-connected rooftop PV systems are technically, economically, and environmentally feasible solutions for generating electricity and reducing the dependency on fossil fuels.

Acknowledgment
The authors would like to thank the Faculty of Engineering, especially the Mechanical Engineering Department, at Near East University for their support.

References
[1] A. J. McMichael, "The urban environment and health in a world of increasing globalization: issues for developing countries," Bulletin of the World Health Organization, vol. 78, no. 9, pp. 1117-1117, Sep. 2000.
[2] A. Shahsavari and M. Akbari, "Potential of solar energy in developing countries for reducing energy-related emissions," Renewable and Sustainable Energy Reviews, vol. 90, pp. 275-291, Jul. 2018, https://doi.org/10.1016/j.rser.2018.03.065.
[3] Y. Kassem, H. Camur, and O. A. M. Abughinda, "Solar energy potential and feasibility study of a 10 MW grid-connected solar plant in Libya," Engineering, Technology & Applied Science Research, vol. 10, no. 4, pp. 5358-5366, Aug. 2020, https://doi.org/10.48084/etasr.3607.
[4] F. Chermat, M. Khemliche, A. E. Badoud, and S. Latreche, "Techno-economic feasibility study of investigation of renewable energy system for rural electrification in south Algeria," Engineering, Technology & Applied Science Research, vol. 8, no. 5, pp. 3421-3426, Oct.
2018, https://doi.org/10.48084/etasr.2253.
[5] f. chien, h. w. kamran, g. albashar, and w. iqbal, "dynamic planning, conversion, and management strategy of different renewable energy sources: a sustainable solution for severe energy crises in emerging economies," international journal of hydrogen energy, vol. 46, no. 11, pp. 7745–7758, feb. 2021, https://doi.org/10.1016/j.ijhydene.2020.12.004.
[6] a. abdulmula, k. sopian, c. h. lim, and a. fazlizan, "performance evaluation of standalone double axis solar tracking system with maximum light detection mld for telecommunication towers in malaysia," international journal of power electronics and drive systems, vol. 10, no. 1, pp. 444–453, mar. 2019, https://doi.org/10.11591/ijpeds.v10n1.pp444-453.
[7] k. n. nwaigwe, p. mutabilwa, and e. dintwa, "an overview of solar power (pv systems) integration into electricity grids," materials science for energy technologies, vol. 2, no. 3, pp. 629–633, dec. 2019, https://doi.org/10.1016/j.mset.2019.07.002.
[8] h. mousazadeh, a. keyhani, a. javadi, h. mobli, k. abrinia, and a. sharifi, "a review of principle and sun-tracking methods for maximizing solar systems output," renewable and sustainable energy reviews, vol. 13, no. 8, pp. 1800–1818, oct. 2009, https://doi.org/10.1016/j.rser.2009.01.022.
[9] a. z. hafez, a. m. yousef, and n. m. harag, "solar tracking systems: technologies and trackers drive types – a review," renewable and sustainable energy reviews, vol. 91, pp. 754–782, aug. 2018, https://doi.org/10.1016/j.rser.2018.03.094.
[10] g. c. lazaroiu, m. longo, m. roscia, and m. pagano, "comparative analysis of fixed and sun tracking low power pv systems considering energy consumption," energy conversion and management, vol. 92, pp. 143–148, mar. 2015, https://doi.org/10.1016/j.enconman.2014.12.046.
[11] m. s. ismail, m. moghavvemi, and t. m. i.
mahlia, "analysis and evaluation of various aspects of solar radiation in the palestinian territories," energy conversion and management, vol. 73, pp. 57–68, sep. 2013, https://doi.org/10.1016/j.enconman.2013.04.026.
[12] s. a. s. eldin, m. s. abd-elhady, and h. a. kandil, "feasibility of solar tracking systems for pv panels in hot and cold regions," renewable energy, vol. 85, pp. 228–233, jan. 2016, https://doi.org/10.1016/j.renene.2015.06.051.
[13] h. camur, y. kassem, and e. alessi, "a techno-economic comparative study of a grid-connected residential rooftop pv panel: the case study of nahr el-bared, lebanon," engineering, technology & applied science research, vol. 11, no. 2, pp. 6956–6964, apr. 2021, https://doi.org/10.48084/etasr.4078.
[14] h. z. al garni, a. awasthi, and m. a. m. ramli, "optimal design and analysis of grid-connected photovoltaic under different tracking systems using homer," energy conversion and management, vol. 155, pp. 42–57, jan. 2018, https://doi.org/10.1016/j.enconman.2017.10.090.
[15] a. a. bayod-rujula, a. m. lorente-lafuente, and f. cirez-oto, "environmental assessment of grid connected photovoltaic plants with 2-axis tracking versus fixed modules systems," energy, vol. 36, no. 5, pp. 3148–3158, may 2011, https://doi.org/10.1016/j.energy.2011.03.004.
[16] a. b. owolabi, b. e. k. nsafon, j. w. roh, d. suh, and j.-s. huh, "validating the techno-economic and environmental sustainability of solar pv technology in nigeria using retscreen experts to assess its viability," sustainable energy technologies and assessments, vol. 36, dec. 2019, art. no. 100542, https://doi.org/10.1016/j.seta.2019.100542.
[17] k. belkilani, a. ben othman, and m. besbes, "assessment of global solar radiation to examine the best locations to install a pv system in tunisia," applied physics a, vol. 124, no. 2, jan. 2018, art. no. 122, https://doi.org/10.1007/s00339-018-1551-3.
[18] l. m. fernandez-ahumada, j. ramirez-faz, r. lopez-luque, m. varo-martinez, i. m.
moreno-garcia, and f. casares de la torre, "influence of the design variables of photovoltaic plants with two-axis solar tracking on the optimization of the tracking and backtracking trajectory," solar energy, vol. 208, pp. 89–100, sep. 2020, https://doi.org/10.1016/j.solener.2020.07.063.
[19] j. reca-cardena and r. lopez-luque, "design principles of photovoltaic irrigation systems," in advances in renewable energies and power technologies, i. yahyaoui, ed. amsterdam, netherlands: elsevier, 2018, pp. 295–333.
[20] t.-c. cheng, w.-c. hung, and t.-h. fang, "two-axis solar heat collection tracker system for solar thermal applications," international journal of photoenergy, vol. 2013, nov. 2013, art. no. e803457, https://doi.org/10.1155/2013/803457.
[21] h. iftikhar, e. sarquis, and p. j. c. branco, "why can simple operation and maintenance (o&m) practices in large-scale grid-connected pv power plants play a key role in improving its energy output?," energies, vol. 14, no. 13, jan. 2021, art. no. 3798, https://doi.org/10.3390/en14133798.
[22] y. kassem, h. camur, and r. a. f. aateg, "exploring solar and wind energy as a power generation source for solving the electricity crisis in libya," energies, vol. 13, no. 14, jan. 2020, art. no. 3708, https://doi.org/10.3390/en13143708.
[23] "fronius symo 4.5-3-m light 4.5 kw solar inverter," 0bills diy solar, panels, complete systems, 12v, 24v and 48v batteries for energy independence. https://www.zerohomebills.com/product/fronius-symo-45-3-m-light-4-5-kw-solar-inverter/ (accessed aug. 06, 2021).
[24] k. ram, p. k. swain, r. vallabhaneni, and a. kumar, "critical assessment on application of software for designing hybrid energy systems," materials today: proceedings, mar. 2021, https://doi.org/10.1016/j.matpr.2021.02.452.
[25] b.
brahma and r. wadhvani, "solar irradiance forecasting based on deep learning methodologies and multi-site data," symmetry, vol. 12, no. 11, nov. 2020, art. no. 1830, https://doi.org/10.3390/sym12111830.
[26] b. amrouche, l. sicot, a. guessoum, and m. belhamel, "experimental analysis of the maximum power point's properties for four photovoltaic modules from different technologies: monocrystalline and polycrystalline silicon, cis and cdte," solar energy materials and solar cells, vol. 118, pp. 124–134, nov. 2013, https://doi.org/10.1016/j.solmat.2013.08.010.
[27] w. d. lubitz, "effect of manual tilt adjustments on incident irradiance on fixed and tracking solar panels," applied energy, vol. 88, no. 5, pp. 1710–1719, may 2011, https://doi.org/10.1016/j.apenergy.2010.11.008.
[28] a. mehmood, f. a. shaikh, and a. waqas, "modeling of the solar photovoltaic systems to fulfill the energy demand of the domestic sector of pakistan using retscreen software," in international conference and utility exhibition on green energy for sustainable development, pattaya, thailand, mar. 2014, pp. 1–7.
[29] a. khandelwal and v. shrivastava, "viability of grid-connected solar pv system for a village of rajasthan," in international conference on information, communication, instrumentation and control, indore, india, aug. 2017, pp. 1–6, https://doi.org/10.1109/icomicon.2017.8279175.
[30] h. a. kazem and m. t. chaichan, "status and future prospects of renewable energy in iraq," renewable and sustainable energy reviews, vol. 16, no. 8, pp. 6007–6012, oct. 2012, https://doi.org/10.1016/j.rser.2012.03.058.
[31] m. obeng, s. gyamfi, n. s. derkyi, a. t. kabo-bah, and f. peprah, "technical and economic feasibility of a 50 mw grid-connected solar pv at uenr nsoatre campus," journal of cleaner production, vol. 247, feb. 2020, art. no. 119159, https://doi.org/10.1016/j.jclepro.2019.119159.
[32] k. mohammadi, m. naderi, and m.
saghafifar, "economic feasibility of developing grid-connected photovoltaic plants in the southern coast of iran," energy, vol. 156, pp. 17–31, aug. 2018, https://doi.org/10.1016/j.energy.2018.05.065.
[33] m. a. vaziri rad, a. toopshekan, p. rahdan, a. kasaeian, and o. mahian, "a comprehensive study of techno-economic and environmental features of different solar tracking systems for residential photovoltaic installations," renewable and sustainable energy reviews, vol. 129, sep. 2020, art. no. 109923, https://doi.org/10.1016/j.rser.2020.109923.

engineering, technology & applied science research vol. 6, no. 6, 2016, 1253-1257 www.etasr.com wang et al.: a study on ductility of prestressed concrete pier based on response surface methodology

a study on ductility of prestressed concrete pier based on response surface methodology

huili wang, institute of bridge engineering, dalian university of technology, dalian, china, wanghuili@dlut.edu.cn
yan zhang, institute of bridge engineering, dalian university of technology, dalian, china, 18342209879@163.com
sifeng qin, numerical test research center for materials fracture mechanics, dalian university, dalian, china, qsifeng@163.com

abstract—the ductility of prestressed concrete piers is studied based on response surface methodology. referring to a previous prestressed concrete pier and based on a box-behnken design, the ductility of 25 prestressed concrete piers is calculated by a numerical method. the relationship between the longitudinal reinforcement ratio, the shear reinforcement ratio, the prestressed tendon quantity, the concrete compressive strength, and the ductility factor is obtained. the influence of these four factors on curvature ductility is discussed, and the ductility regression equation is deduced.
the results show that the influence of the prestressed tendon quantity on the ductility of the prestressed concrete pier is significant: with increasing prestressed tendon quantity, the curvature ductility decreases nonlinearly. with increasing shear reinforcement ratio and concrete compressive strength, the curvature ductility increases linearly, while the influence of the longitudinal reinforcement ratio on the ductility of the prestressed concrete pier is insignificant.

keywords-response surface methodology; experiment design; prestressed concrete; pier; ductility

i. introduction

the application of prestressed concrete (prc) piers has increased because of their efficiency and high quality. precast segmental construction methods can cut construction costs by reducing construction time while maintaining quality. in addition, because of the self-centering capability of the prestressed tendon, a prc pier can meet the performance requirements of the normal use stage as well as improve the seismic performance of the whole bridge [1]. many researchers have investigated the seismic performance of prc piers. hewes and priestley investigated the performance of unbonded post-tensioned precast concrete segmental bridge columns under lateral earthquake loading [2]. in [3], the authors studied the seismic performance, identified the key design variables, and evaluated the effect of different ground motions and different column configurations for a self-centering reinforced concrete column with an unbonded prestressing strand placed at the center of the cross section. in [4], the authors investigated the seismic performance of unbonded prestressed hollow concrete columns constructed with precast segments. in [5], the authors tested several different pier bents in a four-span bridge earthquake simulation study. in [6], the authors investigated the response of segment joints using detailed non-linear time-history analyses.
a suite of ten near-field earthquake records was used to determine the median joint response as well as to quantify the effect of vertical motion on the joint response. the authors showed that a prestressed bar could increase the self-resetting capability of the pier and decrease the residual displacement of the bridge pier under earthquake loading. the mechanical properties of a prc pier are related to several parameters, such as the longitudinal reinforcement ratio, the shear reinforcement ratio, the prestressed tendon quantity, and the compressive strength of concrete. the parameter analysis can be conducted with statistical analysis methodology. response surface methodology (rsm) represents a collection of statistical and mathematical techniques and is often used for the development, improvement, and optimization of various processes in which a certain response is influenced by several variables. in [7], the authors used rsm to investigate the performance of corroding under-reinforced beams. in [8], the authors adopted rsm to create response surface functions of the specific energy for thin-walled columns. in [9], the authors used rsm to estimate representative fragility curves for horizontally curved steel i-girder bridges in conjunction with monte carlo simulation. experimental design is widely used for controlling the effects of parameters in many processes. its usage decreases the number of experiments, saving time and material resources. central composite design (ccd) and box-behnken design (bbd) are usually adopted in rsm. ccds are factorial or fractional factorial designs with center points, augmented with a group of axial points [10]. bbd is a type of response surface design that does not contain an embedded factorial or fractional factorial design [11]. in [12], bbd was employed to optimize the indomethacin-loaded chitosan nanoparticle size. in [13], bbd was used to optimize nanoscale retrograded starch formation.
in [14], bbd was applied for the fabrication of titanium alloy and 304 stainless steel joints. the present paper investigates the ductility of prestressed concrete piers through response surface methodology. the influence of the longitudinal reinforcement ratio, the shear reinforcement ratio, the prestressed tendon quantity, and the concrete strength grade on curvature ductility is discussed.

(this work was supported by the foundation of the china scholarship council (201506060044, 201508210247) and the foundation of liaoning provincial department of education funded projects (l2014027).)

ii. response surface methodology

response surface methodology (rsm) is a collection of statistical and mathematical methods that are useful for modeling and analyzing engineering problems. response surface methodology was developed by box and collaborators in the 1950s [15]. the design procedure of response surface methodology is as follows [11, 16]:
- designing a series of experiments for adequate and reliable measurement of the response of interest.
- choosing the experimental design and carrying out the experiments according to the selected experimental matrix.
- obtaining the experimental results through the serial experiments.
- mathematical–statistical treatment of the obtained experimental data through the fit of a polynomial function.

a. mathematical model

the simplest model which can be used in rsm is based on a linear function [17]:

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \varepsilon    (1)

where β0 and βi represent the coefficients of the linear parameters, xi represents the variables, k is the number of variables, and ε is the residual associated with the experiments. to evaluate curvature, a second-order model must be used.
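the polynomial-fitting step of the rsm procedure can be sketched in a few lines (my own illustration, not the authors' code): given coded factor settings and measured responses, the second-order model is fitted by ordinary least squares. the data below are synthetic, generated from known coefficients so that the fit can be checked; a real study would use the bbd runs and measured responses instead.

```python
import numpy as np

# synthetic two-factor example: y = 5 + 2*x1 - 1*x2 + 0.5*x1^2 - 0.3*x1*x2
def true_response(x1, x2):
    return 5.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1**2 - 0.3 * x1 * x2

# coded design points (a full 3^2 grid stands in for a real bbd here)
levels = [-1.0, 0.0, 1.0]
pts = [(a, b) for a in levels for b in levels]

# design matrix with columns 1, x1, x2, x1^2, x2^2, x1*x2 (the second-order model)
X = np.array([[1.0, a, b, a**2, b**2, a * b] for a, b in pts])
y = np.array([true_response(a, b) for a, b in pts])

# ordinary least squares fit of the second-order response surface
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # recovers [5.0, 2.0, -1.0, 0.5, 0.0, -0.3]
```

the same construction extends directly to four factors; statistical software then adds significance tests so that insignificant terms can be dropped, as the authors do when deriving their regression equation.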
in order to determine a critical point (maximum, minimum, or saddle), it is necessary for the polynomial function to contain quadratic terms according to the equation presented below [17]:

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{1 \le i < j \le k} \beta_{ij} x_i x_j + \varepsilon    (2)

where βii and βij represent the coefficients of the quadratic parameters.

b. experimental design

the parameter analysis can be conducted with statistical analysis methodology. in this study, bbd was chosen. for bbd, the design points fall at combinations of the high and low factor levels and their midpoints. box and behnken suggested how to select points from the three-level factorial arrangement, which allows the efficient estimation of the first- and second-order coefficients of the mathematical model. bbds have treatment combinations that are at the midpoints of the edges of the experimental space and require at least three continuous factors [18], as shown in figure 1. because bbds often have fewer design points, they can be less expensive to run than central composite designs with the same number of factors. table i contains the coded values of the factor levels for a bbd on three factors.

fig. 1. bbd cube for 3 factors: (a) cube for bbd and three interlocking 2^2 factorial designs, (b) points for bbd and three interlocking 2^2 factorial designs

table i. bbd table for 3 factors

number  x1  x2  x3
1       -1  -1   0
2        1  -1   0
3       -1   1   0
4        1   1   0
5       -1   0  -1
6        1   0  -1
7       -1   0   1
8        1   0   1
9        0  -1  -1
10       0   1  -1
11       0  -1   1
12       0   1   1
c        0   0   0

iii. the ductility capacity of prestressed concrete piers

a. constitutive relations

1) concrete

the concrete damaged plasticity (cdp) model was adopted [19]. a general constitutive relationship for the cdp model is shown in figure 2, where ε_un^m and σ_un^m denote the strain and stress at the m-th tipping point, ε_pl^m is the concrete compressive plastic strain on the m-th loading, dc is the compressive damage factor, and dt is the tensile damage factor.
the compression section was defined as [20]:

\sigma = f_c \left[ 2\frac{\varepsilon}{\varepsilon_0} - \left(\frac{\varepsilon}{\varepsilon_0}\right)^2 \right], \quad \varepsilon \le \varepsilon_0
\sigma = f_c \left[ 1 - 0.15\,\frac{\varepsilon - \varepsilon_0}{\varepsilon_u - \varepsilon_0} \right], \quad \varepsilon_0 \le \varepsilon \le \varepsilon_u    (3)

where fc is the uniaxial compressive strength of concrete, ε0 the yield strain, and εu the ultimate compressive strain. the tension section was defined as:

\sigma = f_t\,\frac{\varepsilon/\varepsilon_t}{a_t\left(\varepsilon/\varepsilon_t - 1\right)^{1.7} + \varepsilon/\varepsilon_t}, \quad \varepsilon \ge \varepsilon_t    (4)

where a_t = 0.312 f_t^2, ft is the uniaxial ultimate tensile stress of concrete, and εt is the peak tensile strain of concrete [21].

fig. 2. constitutive relationship for the cdp model

2) reinforcement

bilinear kinematic (bkin) hardening of the material was used to define the behavior of the steel bar. the material properties of the bar were as follows [20]: elastic modulus es = 200 gpa, yield stress fy = 335 mpa, yield strain εy = 0.00168, and 0.1 es as the slope of the hardening phase. the bkin model for the steel bar behavior on the unloading and reloading branches is shown in figure 3.

fig. 3. constitutive relationship for reinforcement

3) pt strands

in this study, the mechanical model of the pt strands was also defined as a bkin model [20], with elastic modulus of the prestressing tendons es = 195 gpa, yield stress fy = 1860 mpa, yield strain εy = 0.00954, and 0.1 es as the slope of the hardening phase. the effective tensile stress was σcon = 800 mpa, equivalent to 43% of the ultimate tensile strength, and the axial compression ratio was u = 22.7%.

b. definition of ductility

because the inelastic deformation capacity of reinforced concrete ductile members depends on the plastic rotation capacity of the cross section in the plastic hinge zone, the ductility capacity of a prestressed concrete member can be measured by the cross-section curvature ductility coefficient [22].
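the piecewise compressive stress–strain law of (3) is easy to sketch numerically (my own illustration; the fc, ε0, and εu values below are hypothetical placeholders, not taken from the paper):

```python
def concrete_compressive_stress(eps, fc=30.0, eps0=0.002, epsu=0.0033):
    """piecewise law of eq. (3): parabolic ascent to the peak (eps0, fc),
    then linear softening down to 0.85*fc at the ultimate strain epsu."""
    if eps <= eps0:
        r = eps / eps0
        return fc * (2.0 * r - r**2)
    if eps <= epsu:
        return fc * (1.0 - 0.15 * (eps - eps0) / (epsu - eps0))
    return 0.0  # beyond epsu the law is not defined; treat the fiber as crushed

print(concrete_compressive_stress(0.002))   # peak stress: 30.0
print(concrete_compressive_stress(0.0033))  # softened to 0.85*fc: 25.5
```

note that the 0.15 softening coefficient means the curve drops to 85% of fc at the ultimate strain, which is the behavior the fem fiber model relies on in the plastic hinge zone.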
a measure of the ductility of structures with regard to seismic loading is the displacement ductility factor, defined as δu/δy, where δu is the lateral deflection at the end of the post-elastic range and δy is the lateral deflection at first yield. a rotational ductility factor for members has been calculated in some dynamic analyses as [23]:

\mu_\varphi = \varphi_u / \varphi_y    (5)

where φu is the maximum curvature at the section and φy is the curvature of the section at first yield, as shown in figure 4.

fig. 4. moment-curvature relationship

the curvature ductility coefficient of a structure in the plastic hinge zone is usually much bigger than its displacement ductility factor. the reason is that rotation of the plastic hinge becomes the main deformation once yielding occurs [24]. in this paper, members are considered to yield when the outermost longitudinal tensile plain reinforcement of the reinforced concrete member reaches the initial yield curvature. the ductility of a prc pier is related to the longitudinal reinforcement ratio, the shear reinforcement ratio, the prestressed tendon quantity, and the compressive strength of concrete. referring to a previous prestressed concrete pier and based on bbd, it is practical to analyze the ductility of a prc pier with fem.

iv. case study

one bridge pier circular cross section is 2 meters in diameter, as shown in figure 5. the longitudinal reinforcement ratio x1 takes the values 0.80%, 1.00%, and 1.20%. the shear reinforcement ratio x2 takes the values 0.03%, 0.035%, and 0.04%. the prestressed tendon x3 takes the values 15.27-10, 15.27-15, and 15.27-20. the concrete strength grade x4 takes the values 30, 40, and 50. there are four factors with three levels each. based on bbd, the experiment design and the ductility coefficient results y are shown in table ii.

fig. 5.
geometrical characteristic of the tested pier section

table ii. the experiment design and results

no.  x1(%)  x2(%)  x3  x4(mpa)  y
1    1.0    0.040  20  40       4.204
2    1.0    0.040  15  30       4.378
3    0.8    0.035  15  50       6.998
4    0.8    0.035  20  40       4.109
5    1.0    0.030  15  30       3.822
6    1.0    0.035  10  50       11.67
7    1.0    0.035  10  30       6.979
8    1.0    0.040  15  50       7.270
9    1.0    0.035  20  50       4.778
10   1.2    0.030  15  40       5.260
11   1.0    0.030  10  40       8.720
12   1.0    0.040  10  40       9.304
13   1.2    0.035  10  40       8.995
14   1.0    0.035  15  40       5.476
15   0.8    0.035  10  40       8.940
16   0.8    0.030  15  40       5.681
17   1.0    0.030  15  50       6.687
18   1.0    0.030  20  40       3.971
19   0.8    0.040  15  40       6.098
20   1.0    0.035  20  30       2.777
21   1.2    0.035  15  50       7.042
22   0.8    0.035  15  30       4.278
23   1.2    0.040  15  40       5.648
24   1.2    0.035  20  40       4.146
25   1.2    0.035  15  30       3.989

with statistical analysis, and ignoring insignificant quadratic terms, the ductility regression equation is deduced:

y = 9.48720 - 4.05461 x_1 - 55.14804 x_2 - 1.04795 x_3 + 0.35360 x_4 - 3.51000 x_2 x_3 - 0.013450 x_3 x_4 + 1.81397 x_1^2 + 2197.35294 x_2^2 + 0.039947 x_3^2    (6)

based on (6), the influence of each factor on curvature ductility is discussed. the results are shown in figures 6-8.

fig. 6. relationship between longitudinal reinforcement ratio, prestressed tendon and ductility factor
fig. 7. relationship between shear reinforcement ratio, prestressed tendon and ductility factor
fig. 8. relationship between concrete strength, prestressed tendon and ductility factor

the results show that the influence of the prestressed tendon quantity on the ductility of the prestressed concrete pier is significant, while the influence of the longitudinal reinforcement ratio is insignificant. with the increasing of the prestressed tendon quantity, the curvature ductility decreases nonlinearly. with the increasing of shear
reinforcement ratio and compressive strength of concrete, the curvature ductility increases linearly.

v. conclusion

based on response surface methodology, the ductility of the prestressed concrete pier was studied. according to the box-behnken design, the ductility regression equation was deduced with statistical analysis. the influence of the longitudinal reinforcement ratio, the shear reinforcement ratio, the prestressed tendon quantity, and the concrete strength grade on curvature ductility was discussed. the results show that:
- the influence of the prestressed tendon quantity on the ductility of the prestressed concrete pier is significant. with the increasing of the prestressed tendon quantity, the curvature ductility decreases nonlinearly.
- with the increasing of the shear reinforcement ratio and the compressive strength of concrete, the curvature ductility increases linearly.
- the influence of the longitudinal reinforcement ratio on the ductility of the prestressed concrete pier is insignificant.

references

[1] p. m. davis, t. m. janes, m. o. eberhard, j. f. stanton, unbonded pre-tensioned columns for bridges in seismic regions, pacific earthquake engineering research center, university of california, berkeley, 2012.
[2] j. t. hewes, m. j. n. priestley, seismic design and performance of precast concrete segmental bridge columns, university of california, san diego, la jolla, california, 2002.
[3] h. i. jeong, j. sakai, s. a. mahin, shaking table tests and numerical investigation of self-centering reinforced concrete bridge columns, university of california, berkeley, california, 2008.
[4] r. yamashita, d. sanders, shake table testing and an analytical study of unbonded prestressed hollow concrete columns constructed with precast segments, report no. cceer 05-09, university of nevada, 2005.
[5] c. a. cruz-noguez, m. s.
saiidi, experimental and analytical seismic studies of a four-span bridge system with innovative materials, report no. cceer-10-04, university of nevada, 2010.
[6] s. motaref, m. saiidi, d. sanders, seismic response of precast bridge columns with energy dissipating joints, report no. cceer 11-01, university of nevada, 2011.
[7] a. n. kallias, m. i. rafiq, "performance assessment of corroding rc beams using response surface methodology", engineering structures, vol. 49, pp. 671-685, 2013.
[8] j. bi, h. fang, q. wang, x. ren, "modeling and optimization of foam-filled thin-walled columns for crashworthiness designs", finite elements in analysis and design, vol. 46, no. 9, pp. 698-709, 2010.
[9] j. seo, d. g. linzell, "horizontally curved steel bridge seismic vulnerability assessment", engineering structures, vol. 34, pp. 21-32, 2012.
[10] m. n. hosseinpour, g. d. najafpour, h. younesi, m. khorrami, z. vaseghi, "lipase production in solid state fermentation using aspergillus niger: response surface methodology", ije transactions b: applications, vol. 25, no. 3, pp. 151-159, 2012.
[11] n. aslan, y. cebeci, "application of box–behnken design and response surface methodology for modeling of some turkish coals", fuel, vol. 86, pp. 90-97, 2007.
[12] m. a. kalam, a. a. khan, s. khan, a. almalik, a. alshamsan, "optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using box–behnken experimental design", international journal of biological macromolecules, vol. 87, pp. 329-340, 2016.
[13] y. ding, j. zheng, x. xia, t. ren, j. kan, "box–behnken design for the optimization of nanoscale retrograded starch formation by high-power ultrasonication", lwt-food science and technology, vol. 67, pp. 206-213, 2016.
[14] m. balasubramanian, "application of box–behnken design for fabrication of titanium alloy and 304 stainless steel joints with silver interlayer by diffusion bonding", materials & design, vol. 77, pp. 161-169, 2015.
[15] g. e. p. box, k. b.
wilson, “on the experimental attainment of optimum conditions (with discussion)”, journal of the royal statistical society series b13, vol. 1, pp.1-45, 1951 [16] w. a. a. alqaraghuli, a. f. m. alkarkhi, h. c. low, “fitting secondorder models to mixed two-level and four-level factorial designs:is there an easier procedure?”, ije transactions b: applications, vol. 28, no. 11, pp.1644-1650, 2015 [17] m. a. bezerra, r. e. santelli, e. p. oliveira, l. s. villar, “response surface methodology (rsm) as a tool for optimization in analytical chemistry”, talanta, vol. 76, pp.965-977 ,2008 [18] s. l. c. ferreira, r. e. bruns, h. s. ferreira, g. d. matos, j. m. david, “box-behnken design: an alternative for the optimization of analytical methods”, analytica chimica acta, vol. 597, no. 2, pp. 179-186, 2007 [19] j. lee, g. l. fenves, “plastic-damage model for cyclic loading of concrete structures”, journal of engineering mechanics, vol. 124, pp. 892-900, 1998 [20] h. wang, s. liu, z. zhang, “seismic performance and effects of different joint shapes for unbonded precast segmental bridge columns”, journal of mechanics, vol. 32, no. 4, pp. 427-433, 2016 [21] o. saghaeian, f. nateghi, o. rezaifar, “comparison of using different modeling techniques on prediction of the nonlinear behavior of r/c shear walls”, ije transactions b: applications, vol. 27, no. 2, pp. 269282, 2014 [22] z. yang, y. zhang, m. chen, g. chen, “numerical simulation of ultra strength concrete-filled steel columns”, engineering review, vol. 33, no. 3, pp. 211-217, 2013 [23] b. a. suprenant, curvature ductility of reinforced and prestressed concrete columns, bozeman: montana state university, 1984 [24] a. namdar, x. feng, “economical considerations in the development of construction materials-a review”, engineering review, vol. 35, no. 3,pp. 291-297, 2015 microsoft word 12-3156_s_etasr_v9_n6_pp4933-4936 engineering, technology & applied science research vol. 9, no. 
6, 2019, 4933-4936 www.etasr.com ghabri et al.: performance optimization of 1-bit full adder cell based on cntfet transistor

performance optimization of 1-bit full adder cell based on cntfet transistor

houda ghabri, leti laboratory, national school of engineering of sfax, sfax, tunisia, houda.ghabri@gmail.com
dalenda ben issa, leti laboratory, national school of engineering of sfax, sfax, tunisia, dalenda_benissa@yahoo.fr
hekmet samet, leti laboratory, national school of engineering of sfax, sfax, tunisia, hekmet.samet@enis.rnu.tn

abstract—the full adder is a key component of many digital circuits such as microprocessors and digital signal processors. its main use is to perform logical and arithmetic operations. this has driven designers to continuously optimize this circuit and improve its characteristics, such as robustness, compactness, efficiency, and scalability. the carbon nanotube field effect transistor (cnfet) stands out as a substitute for cmos technology for circuit design in present-day technology. the objective of this paper is to present an optimized 1-bit full adder design based on cntfet transistors, inspired by the new cmos full adder design of [1], with enhanced performance parameters. for a power supply of 0.9v, the transistor count is decreased to 10 and the power is almost halved compared to the best existing cntfet-based adder. this design offers significant improvement when compared to existing designs such as c-cmos, tfa, tga, hpsc, the 18t-fa adder, etc. comparative data analysis shows 37%, 50%, and 49% improvement in terms of area, delay, and power-delay product respectively, compared to both cntfet- and cmos-based adders in existing designs. the circuit was designed in 32nm technology and simulated with hspice tools.

keywords-1-bit full adder; cntfet; pdp; low power; hspice

i. introduction

semiconductor technologies are in a constant innovation race to create new functionality and meet growing expectations.
implemented semiconductor devices for science, industry, and consumers are expected to offer high performance with high speed, scalability, and, especially, low power consumption. in embedded electronic products such as mobile phones, laptops, and connected watches, power consumption is a key element that directly influences circuit operation. metal oxide semiconductor field effect transistors (mosfets) can no longer comply with moore's law [2, 3], which led to the need of finding alternative technologies. each technology has its advantages and disadvantages. the carbon nanotube field effect transistor (cnfet) is the most promising technology, with interesting advantages. it is widely adopted and has become one of the most interesting research areas [4, 5]. cnfet is flexible in overcoming challenges like ballistic transport, short channel effects, low off-current properties, etc. because the full adder circuit is indispensable in any digital product, the performance of any digital circuit can be improved by enhancing the adder's performance. efforts to optimize performance are continuous, with designs such as the conventional complementary metal oxide semiconductor adder [1], the removed single driving full adder (rsd-fa) [6], the hybrid pass transistor logic with static cmos output drive (hpsc) [3], the 18-transistor 1-bit full adder [7], the hybrid multi-threshold full adder (hmtfa) [8], and the low-power cntfet-based adder [9]. these adder circuits are implemented using various logic families, and therefore have different advantages and disadvantages. in this paper, a new schematic with 10 transistors is proposed, inspired by the new cmos full adder design of [1]. after simulation and analysis, this design offers an interesting pdp with a limited number of transistors.
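for reference, regardless of the transistor-level implementation, every 1-bit full adder cell discussed here must realize sum = a ⊕ b ⊕ cin and cout = majority(a, b, cin). a behavioral sketch of that invariant (my own illustration, not the proposed 10t circuit):

```python
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """behavioral model of a 1-bit full adder: returns (sum, cout)."""
    s = a ^ b ^ cin                          # sum is the three-input xor
    cout = (a & b) | (a & cin) | (b & cin)   # carry-out is the majority function
    return s, cout

# exhaustive check against binary addition over all 8 input combinations
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder truth table verified")
```

hardware designs such as the proposed 10-transistor cell differ only in how these two boolean functions are factored into transistor networks; this truth table is the invariant every candidate cell must satisfy, and it is what the area/delay/pdp comparisons below hold constant.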
table i shows the comparison results for the proposed and the existing full adder designs implemented using cnfet and cmos, based on power, delay, and power delay product (pdp). the proposed full adder circuit was designed with cnfet technology, simulated at 32nm with a voltage supply of +0.9v using the hspice tool. the model used is the stanford cnfet model. ii. background of cnfet among new technologies, cntfet is placed as the one with the most potential thanks to its specific physical characteristics. it offers many advantages like quasi-ballistic transport due to high mobility, fast switching due to high carrier speed, and the almost one-dimensional structure of the carbon nanotube for better electrostatic control [10]. in fact, each carbon nanotube acts as a channel, contrary to mosfet, where the entire silicon acts as a channel [11]. the mobility in n and p types of cnfet is identical and the two types of transistor deliver the same driving current. this allows the creation of a completely novel logic not possible with mosfet transistors. iii. existing adder designs many full adder designs are available in the literature. among them is the 23t full adder cell, an improved version of the 18t adder block [12], combining two logics, pass-transistor and transmission gate. five inverter stages are needed to obtain the final output (sum), causing a longer critical path and utilizing an important number of transistors (corresponding author: houda ghabri). the logical operations are performed serially and not simultaneously. two different blocks are used to generate the sum and the cout. this strongly impacts the speed of this circuit and the consumed power. another interesting design is the hybrid full adder cell [12, 13].
two different circuits are used to generate simultaneously the sum and the cout outputs. two cascaded xor/xnor cells are used to generate the sum signal and many inverters are required. as a result, a huge number of transistors is used and the critical path becomes very long, causing a speed problem. in order to solve this issue the cntcpl architecture was proposed [14]. because its critical path is composed of only two pass transistors, the delay is very short. in addition, using cntfet technology increases speed and overcomes the inconvenience of non-full-swing nodes. although this design solves the speed problem, it is not an appropriate choice for low-power applications. this design suffers from many issues, such as low driving capability caused by the utilization of pass-transistor logic, high transistor count due to duplicate blocks for sum and cout, and signal integrity problems caused by cascading blocks in series, especially at high frequencies. the cntfa, based on cntfet transistors, is the last studied full adder design [14]. this circuit is composed of intermediate xor and xnor functions and pass-transistor logic. the generated xor/xnor signals are used as selectors in a multiplexer-based structure at the second stage to generate sum. sum and output carry are generated in parallel by a pass-transistor based block. this design has multiple merits like high-speed computation and low power consumption, but when facing high load capacitances it suffers from drawbacks due to its low driving capability. as discussed above, the known full adder circuits have many disadvantages, and performance optimization to meet the growing expectations is an open research topic. in this context we propose a new cntfet full adder design trying to solve some of these problems. iv.
proposed full adder circuit inspired by the new cmos full adder design [1] we propose a 1-bit full adder based on 10 cntfet transistors that calculates the sum and carry using fewer transistors. the sum and carry of any full adder are derived from the input bits including the previous stage carry. the expressions giving the relationship between the input and output bits are: sum = a ⊕ b ⊕ cin (1) carry = a.b + b.cin + a.cin (2) where a and b are the inputs, sum and carry are the outputs, and cin represents the carry input, if any. the proposed full adder design requires 10 transistors and consists of two xor/xnor gates. the implemented circuit, composed of two xor gates designed using four transistors each, is shown in figure 1. a lower transistor count results in lower load capacitance values, so the switching power dissipation is lower in this cntfet based full adder compared to other techniques. we are presenting the smallest full adder design in cntfet technology (figure 1). the transistor number reduction allowed us a considerable gain in consumption and generally in pdp. the simulation results of the proposed full adder are presented in figure 2. to show the improvement achieved by the proposed design we compared the simulation results with the best cntfet based adder in the literature. comparison parameters were power, delay, and pdp. the mathematical calculations needed to perform this comparison are explained below. fig. 1. the proposed full adder cell a. power consumption calculation pdp keeps a balance between delay and power; it is the product of the maximum delay and the average power consumption: pdp = max(delay) × pavg (3) the average power is the sum of static power, dynamic power, and short-circuit power [5]. static power comes from biasing and leakage currents. the most important component of power consumption, the dynamic power, is a result of the load capacitances charging and discharging.
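equations (1) and (2) above can be checked exhaustively over all input combinations; a minimal python sketch (the function name full_adder is ours, for illustration only):

```python
from itertools import product

def full_adder(a, b, cin):
    # equation (1): sum = a xor b xor cin
    s = a ^ b ^ cin
    # equation (2): carry = a.b + b.cin + a.cin (the majority function)
    carry = (a & b) | (b & cin) | (a & cin)
    return s, carry

# exhaustive check against the arithmetic identity a + b + cin = 2*carry + sum
for a, b, cin in product((0, 1), repeat=3):
    s, c = full_adder(a, b, cin)
    assert a + b + cin == 2 * c + s
```

the majority form of the carry is what allows the hardware to route cin (or a=b) through a single transmission gate, as discussed below in the delay calculation.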
the load capacitance, cload, can be presented as a mix of a fixed capacitance, cfix, and a variable capacitance, cvar, as follows: cload = cfix + cvar (4) where cfix is technology-dependent, due to diffusion and interconnect capacitances, while cvar is composed of the input capacitances of subsequent stages and a part of the diffusion capacitance at the gate output, and can therefore be taken care of by proper transistor sizing. pavg = ids × vdd × fc × cload (5) where ids is the drain to source current (a), vdd is the supply voltage (v), cload is the output load capacitance (f), and fc is the clock frequency (hz). b. calculation of propagation delay the adder is a fundamental element in most electronic systems. that is why the optimization of its response delay directly affects the speed of the whole system. the speed of the adder response mainly depends on the propagation delay of the carry signal, which is usually minimized by reducing the path length of the carry signal. the delay is calculated from the time the input signal reaches ½vdd to the time the output signal reaches the same voltage level. fig. 2. output waveforms of the proposed full adder design in the present design, the carry signal is generated by controlled transmission of the input carry signal and either of the input signals a or b (when a=b). as the carry signal propagates only through a single transmission gate, the carry propagation path is minimized, leading to a substantial reduction in propagation delay. the delay incurred in the propagation is further reduced by efficient transistor sizing and the deliberate incorporation of strong transmission gates. based on this information, power consumption and pdp are calculated for the proposed design.
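the ½vdd delay measurement and equation (3) can be sketched on sampled waveforms; crossing_time and propagation_delay are hypothetical helper names (not part of hspice), and the code assumes a rising waveform that does cross the ½vdd level:

```python
import numpy as np

VDD = 0.9  # supply voltage (v), as used in the paper

def crossing_time(t, v, level):
    """first time the sampled waveform v(t) rises through `level`
    (hypothetical helper; assumes the waveform does cross the level)."""
    idx = int(np.argmax(v >= level))  # index of first sample at/above level
    if idx == 0:
        return float(t[0])
    t0, t1 = t[idx - 1], t[idx]
    v0, v1 = v[idx - 1], v[idx]
    # linear interpolation between the two bracketing samples
    return float(t0 + (level - v0) * (t1 - t0) / (v1 - v0))

def propagation_delay(t, v_in, v_out):
    """delay from the input reaching vdd/2 to the output reaching vdd/2."""
    return crossing_time(t, v_out, VDD / 2) - crossing_time(t, v_in, VDD / 2)

def pdp(max_delay_s, avg_power_w):
    """equation (3): power-delay product = max(delay) x average power."""
    return max_delay_s * avg_power_w

# the proposed adder: 4 ps delay at 0.073 uw average power
print(pdp(4e-12, 0.073e-6))  # ~2.92e-19 j, i.e. ~0.29 aj
```

in practice the waveforms would come from the hspice transient output; here they are just arrays of time and voltage samples.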
in table i, the performance of the proposed full adder is compared with existing designs [9]. the number of transistors for the proposed full adder is 10. the calculated delay is 4ps. for a 0.9v supply the power consumption is 0.073µw and the calculated pdp is 0.295 aj.

table i. comparative analysis of transistor count, power, delay, and pdp

full adder    transistors   power (µw)   delay (ps)   pdp (aj)
c-cmos [2]    28            0.124        12.355       1.532
tga [13]      20            0.135        10.104       1.364
cpl [17]      n/a           1.33         0.84         0.38
tfa [18]      16            0.109        11.701       1.275
hpsc [3]      26            0.095        30.654       2.912
clrcl [15]    10            5.903        231.18       1364.65
ours1 [4]     28            0.163        10.866       1.771
hctg [5]      16            0.124        12.116       1.502
rsd-fa [6]    26            0.091        9.427        0.857
18t-fa [6]    18            0.088        8.93         0.785
hmtfa [7]     23            0.1216       16.909       2.056
1bfa16 [9]    16            0.073        8.12         0.592
proposed      10            0.073        4            0.295

v. conclusions a novel full adder cell inspired by a recent cmos design has been presented. although many designs have been presented recently with the aim of reducing the number of transistors, they suffer from serious problems regarding pdp and delay. although reducing the number of transistors intrinsically leads to less area and power consumption, the other performance parameters should be taken into consideration in order to make the circuit work properly in real conditions. simulation results show that the proposed full adder exhibits improvement in area, delay, and pdp of approximately 37%, 50%, and 49% respectively compared to the best cntfet-based adder found in the literature. in future work, this design can be extended to a 32-bit full adder implementation. references [1] c. venkatesan, s. m. thabsera, m. g. sumitrha, m. suriya, "analysis of 1-bit full adder using different techniques in cadence 45nm technology", 5th international conference on advanced computing & communication systems, coimbatore, india, march 15-16, 2019 [2] n. h. e. weste, k. eshraghian, principles of cmos vlsi design: a system perspective, addison-wesley, 1988 [3] c. h. chang, j.
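the percentage improvements quoted in the conclusions follow directly from the table i rows for the proposed design and the best existing cntfet adder (1bfa16 [9]); a small check, with the values copied from the table:

```python
# relative improvements of the proposed adder over the best existing
# cntfet adder (1bfa16 [9]); the values are copied from table i
best = {"transistors": 16, "delay_ps": 8.12, "pdp_aj": 0.592}
ours = {"transistors": 10, "delay_ps": 4.0, "pdp_aj": 0.295}

def improvement_pct(metric):
    """percentage reduction of the proposed design relative to 1bfa16"""
    return 100.0 * (1.0 - ours[metric] / best[metric])

for metric in ("transistors", "delay_ps", "pdp_aj"):
    print(f"{metric}: {improvement_pct(metric):.1f}% lower")
# transistors: 37.5% lower, delay: ~50.7% lower, pdp: ~50.2% lower,
# in line with the ~37/50/49% figures quoted in the abstract
```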
gu, m. zhang, "a review of 0.18-µm full adder performances for tree structured arithmetic circuits", ieee transactions on very large scale integration (vlsi) systems, vol. 13, no. 6, pp. 686-695, 2005 [4] m. a. aguirre-hernandez, m. linares-aranda, "cmos full-adders for energy-efficient arithmetic applications", ieee transactions on very large scale integration (vlsi) systems, vol. 19, no. 4, pp. 718-721, 2011 [5] p. bhattacharyya, b. kundu, s. ghosh, v. kumar, a. dandapat, "performance analysis of a low-power high-speed hybrid 1-bit full adder circuit", ieee transactions on very large scale integration (vlsi) systems, vol. 23, no. 10, pp. 2001-2008, 2015 [6] y. s. mehrabani, m. eshghi, "noise and process variation tolerant, low-power, high-speed, and low-energy full adders in cnfet technology", ieee transactions on very large scale integration (vlsi) systems, vol. 24, no. 11, pp. 3268-3281, 2016 [7] k. s. jitendra, a. srinivasulu, b. p. singh, "a new low-power full adder cell for low voltage using cnfets", ieee 9th international conference on electronics, computers and artificial intelligence, targoviste, romania, june 29-july 1, 2017 [8] m. maleknejad, s. mohammadi, k. navi, h. r. naji, m. hosseinzadeh, "a cnfet-based hybrid multi-threshold 1-bit full adder design for energy efficient low power applications", international journal of electronics, vol. 105, no. 10, pp. 1753-1768, 2018 [9] k. s. jitendra, a. srinivasulu, r. kumawat, "a low power high speed cntfets based full adder cell with overflow detection", micro and nanosystems, vol. 11, no. 1, pp. 80-87, 2019 [10] m. h. moaiyeri, r. f. mirzaee, k. navi, a. momeni, "design and analysis of a high-performance cnfet-based full adder", international journal of electronics, vol. 99, no. 1, pp. 113-130, 2012 [11] y. m. lin, j. appenzeller, p. avouris, "novel structures enabling bulk switching in carbon nanotube fets", 62nd device research conference, notre dame, usa, june 21-23, 2004 [12] m. moradi, r. f.
mirzaee, m. h. moaiyeri, k. navi, "an applicable high-efficient cntfet-based full adder cell for practical environments", 16th csi international symposium on computer architecture and digital systems, shiraz, iran, may 2-3, 2012 [13] r. f. mirzaee, m. h. moaiyeri, h. khorsand, k. navi, "a new robust and high-performance hybrid full adder cell", journal of circuits, systems, and computers, vol. 20, no. 4, pp. 641-655, 2011 [14] m. h. moaiyeri, r. f. mirzaee, k. navi, a. momeni, "design and analysis of a high-performance cnfet-based full adder", international journal of electronics, vol. 99, no. 1, pp. 113-130, 2012 [15] m. h. ghadiry, a. a. manaf, m. t. ahmadi, h. sadeghi, m. n. senejani, "design and analysis of a new carbon nanotube full adder cell", journal of nanomaterials, vol. 2011, article id 906237, 2011 [16] j. f. lin, y. t. hwang, m. h. sheu, c. c. ho, "a novel high speed and energy efficient 10-transistor full adder design", ieee transactions on circuits and systems i: regular papers, vol. 54, no. 5, pp. 1050-1059, 2007 [17] b. l. dokic, "a review on energy efficient cmos digital logic", engineering, technology & applied science research, vol. 3, no. 6, pp. 552-561, 2013 [18] n. zhuang, h. wu, "a new design of the cmos full adder", ieee journal of solid-state circuits, vol. 27, no. 5, pp. 840-844, 1992 engineering, technology & applied science research vol. 10, no.
3, 2020, 5648-5654 5648 www.etasr.com cham et al.: hydrodynamic condition modeling along the north-central coast of vietnam hydrodynamic condition modeling along the north-central coast of vietnam dao dinh cham institute of geography vietnam academy of science and technology hanoi, vietnam chamvdl@gmail.com nguyen thai son institute of geography vietnam academy of science and technology hanoi, vietnam nguyenthaison99@gmail.com nguyen quang minh institute of geography vietnam academy of science and technology hanoi, vietnam nguyenquangminh2110@gmail.com nguyen thanh hung key laboratory for river and coastal engineering vietnam academy for water resources hanoi, vietnam nthungpacific@gmail.com nguyen tien thanh department of hydrometeorological modeling and forecasting thuyloi university hanoi, vietnam thanhnt@tlu.edu.vn abstract—an extremely dynamic morphology of the estuary is observed in the coastal regions of vietnam under the governing processes of tides, waves, and river system flows. the primary target of this paper is to provide insight into the governing processes and morphological behavior of the nhat le estuary, located on the north-central coast of vietnam. based on measured data from field surveys and satellite images, combined with numerical model simulations with mike and delft3d, the influences of seasonal river flow, tides, and wave dynamics on the sediment transport and morphological changes are fully examined. the study showed that freshwater flow in the flood season plays a central role in cutting off the southern sandspit, maintaining and shaping the main channel. the prevailing waves in winter and summer induce longshore drift and sediment transport in the southeast to northwest direction. in the low flow season, this longshore sediment transport is dominant, causing sediment to deposit on the southern side of the ebb tidal delta and elongating the southern sandspit, which narrows the estuary entrance and reorients the main channel.
keywords-hydrodynamics; morphology; delft3d; nhat le estuary; mike i. introduction coastal and estuarine morphological features highly depend on the combined influence and interplay of river flows, waves, tides, and currents. additionally, meteorological phenomena significantly affect the hydrodynamic and morphodynamic processes of estuarine and coastal zones [1]. especially in the tropics, the effects of synoptic-scale atmospheric circulations on the precipitation regime and changes in temperature are carefully monitored [2-3]. the behavior of hydrologic processes is analyzed by hydrological models [4]. generally, these studies illustrate changes in river flows and evolutions of estuarine or oceanic features dominated by the effects of atmospheric circulations. the effects of dam construction on the morphology are investigated in [5]. authors in [6] used the delft3d system [5] to study fluvial erosion and to investigate the temporal evolution of hydrodynamic processes. this system is widely applied in research on hydrodynamics, sediment transport, and wave modeling [8-12]. wave-current interaction and tidal depth changes were considered in katama bay using combined delft3d-flow and swan models. these studies indicated a good reproduction of waves and currents. these studies mostly emphasize regions dominated by physical processes at a large scale. for many projects dealing with water resources and hydrodynamics, the mike system [13] is applied [14-17]. the uncertainty strongly depends on a large range of data requirements and parameter values [18]. in other words, there is a need to further investigate the performance of mike for estuary and coastal areas in the tropics like vietnam, where the dynamics are very unstable. in the coastal zones of vietnam, human activities and natural environment changes lead to imbalance in the coastal processes, changes in dynamical action, and sediment transfer.
so far, only a small number of studies have been conducted on this topic [19-20]. the nhat le estuary, along the north-central coast of vietnam, is selected as the study area. it is located in quang binh province, as shown in figure 1. it connects a drainage basin of 2647km² to the gulf of tonkin. under the governance of wave and river hydrodynamics in a tropical monsoon region, the morphology of the estuary and the adjacent coast is very dynamic and unstable. during the last ten years, the development of the southern sandspit posed difficulties for navigation, especially for the fishing boats entering the shelter areas during typhoons or rough sea conditions. since 1977, the northern and southern coasts have been eroded by the alternate attacks of the monsoon waves. many research and dredging projects have been invested and several coastal structures have been built to stabilize the estuary, but the problems remain (corresponding author: nguyen tien thanh), due to the fact that the main governing processes and the actual mechanism of the estuary's morphological instabilities are still not clearly understood. therefore, the goal of this study is to comprehend the changes in seabed topography and the causes of estuary evolution, while additionally providing more insight into the main governing processes and the behavior of the morphology at the estuary, based on measured data and numerical modeling of hydrodynamics, sediment transport, and morphological changes at the estuary. fig. 1. study area (screenshot from google earth, map data: esri, digital globe, geoeye, earthstar geographics, cnes/airbus ds, usda, usgs, aerogrid, ign, and the gis user community) ii.
the study area the nhat le river is 85km long, originates from the truong son cordillera with two major tributaries, dai giang and kien giang, and discharges into the sea at dong hoi city. near the estuary the river has a width of about 400 to 500m, an average depth of 2 to 4.5m, and a maximum depth of 7m. the river flow regime is strongly governed by the tropical monsoon climate regime with two distinct flood and dry seasons. the flood season lasts from september to december, when the northeast monsoon winds carry moisture from the sea and cause considerable rainfall and river floods. also, torrential rain may occur from july to october due to severe typhoons coming from the western pacific basin. the flood season lasts only four months but produces 76% of the annual flow. the mean annual rainfall is about 2500mm. the dry season is characterized by a dry and hot climate due to the sheltering of the truong son cordillera from the southwest monsoon winds. the tidal regime in this area is semi-diurnal with a spring tidal range of 1.2-1.6m. due to the micro-tidal regime, the tidal currents are also weak. the observed tidal currents along the coast are smaller than 0.5m/s. the wave climate strongly reflects the monsoon system. in the northeast monsoon season the prevailing wave direction is from the northeast; the average wave height is 0.8-0.9m but the highest winter wave height can be 4.0-4.5m. in the summer, the dominant wave directions are southwest and southeast with an average wave height of 0.6-0.7m, while the highest wave height can reach 3.5-4.0m. during major storms wave heights may exceed 6m. iii. materials and methodology a. materials instruments were installed at measurement stations. typically, the trimble r8s is used to survey the topography on land [21]. the jmcf-2000 and odom hydrotrac ii are used to survey the seabed topography [22].
the primary instrument used for wave data collection was the nortek acoustic wave and current profiler (awac) [23]. field data at the nhat le estuary have been measured intensively by the authors themselves for projects of the institute of geography (e.g. the project with the code vast 06.03/15-16) during the period from 2005 to 2016. the data include bottom topography, waves, tidal water level, river discharge, and sediment grain sizes. within this project, the dong hoi hydrologic station was newly installed to measure tidal water level and discharge. figure 2 shows the locations of stations, including the dong hoi hydrological station and the awac deployments to measure waves. b. numerical modeling: set-up and boundary conditions mike is an implicit finite difference model for one dimensional unsteady flow computation and can be applied to simulate surface runoff, flow, sediment transport, estuaries, water quality, or floodplains [13]. for this study, the mike 11 hd package was applied [13]. the upstream boundary conditions include river flow discharges computed for 8 tributaries using a rainfall-runoff model (i.e. the nedbor-afstromnings model (nam)) [24] coupled with mike 11 [25]. the nam model is used as a module of mike 11 under the name mike-nam. the output of nam (i.e. discharge) is used as the upper boundary condition at the 8 tributaries for mike 11 hd to compute the hydraulic boundaries. the upper boundary locations are shown in figure 2. this model was originally developed by the department of hydrodynamics and water resources at the technical university of denmark [26]. nam is a conceptual hydrological model, describing the physical characteristics of the basin, on the basis of which it calculates rainfall-runoff flows. its parameters and variables represent mean values for the entire basin.
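the conceptual style of a lumped rainfall-runoff model like nam can be illustrated with a single linear reservoir; this is not the real nam formulation, only a toy sketch in which cqof mimics the overland flow runoff coefficient and ck12_h the routing time constant that appear later in the calibration discussion:

```python
import math

def toy_runoff(rain_mm, cqof=0.5, ck12_h=24.0, dt_h=24.0):
    """drastically simplified single-linear-reservoir runoff routing
    (a toy sketch, NOT the actual nam model)."""
    k = math.exp(-dt_h / ck12_h)        # recession factor per time step
    q, out = 0.0, []
    for p in rain_mm:
        inflow = cqof * p               # effective rainfall entering the reservoir
        q = q * k + inflow * (1.0 - k)  # linear reservoir routing step
        out.append(q)
    return out
```

with constant rainfall the routed flow rises smoothly towards cqof × rain, mimicking the delayed catchment response that the real model reproduces with several interconnected storages.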
the input data for nam include the time series of rainfall at kien giang hydrologic station (17°00'40.89"n, 106°44'09.94"e), and the time series of daily mean air temperature, wind speed, relative humidity, and solar radiation at dong hoi (meteo) (17°28'19.83"n, 106°37'27.32"e) to simulate daily evapotranspiration and flow discharge for the catchments. the process-based numerical model system delft3d [7], primarily designed with a focus on applications of water flow and quality, was also applied in this study. the ocean forcings (i.e. tides, wave actions, and sediment transport) were simulated using the delft3d model, namely the coupled delft3d-flow and delft3d-wave (swan) modules [7, 27-28], to take into account the influences of tides, wave forcing, and river discharges. the flow model is the hydrodynamic component of delft3d, a three dimensional hydrodynamic and transport simulation program. it is applied to solve the depth-averaged non-linear shallow water equations for non-steady flows. simulations of hydrodynamic and sediment transport changes were conducted. the wave model is also a component of delft3d with two available modules, hiswa [29] and swan [27], as the second and third generation wave models respectively. in this study, the swan model was applied for wave propagation and transformation in the near-shore zone. fig. 2. computational domain and grid the modules rgfgrid and quickin within the delft3d system were used to create smooth, orthogonal curvilinear grids and to interpolate the topographic data. figure 2 shows the model domain, computational grid, bathymetry at the estuary, and the locations of upstream flow boundaries and observation stations. the computational grid for the nhat le estuary, its rivers, and the adjacent continental shelf consists of 512×329 nodes.
fine grid resolution was used locally and coarse resolution was used away from the regions of interest. the maximum grid size at the offshore open boundary is about 300m. the depth extends to -5m and -52m for the near-shore and offshore areas respectively. the grid areas are enlarged from about 3km up to about 40km from the coastline for the near-shore and offshore boundaries respectively. grid cells in the main estuary channels are 15m in length, while grid cells in the outer area are up to 300m in length. the bathymetry for the fine grid mesh is taken from surveys at 30m resolution in the nhat le estuary area and 500m resolution for offshore areas. the seaward open boundary forcings were assumed to be astronomical tides. based on the global ocean tide model tpxo 8.0 [30], ten tidal constituents, q₁, o₁, p₁, k₁, m₂, s₂, k₂, n₂, mf, and mm, were found to be dominant in the area and were used as the open boundary conditions of the model. note that the boundary conditions of the wave model are deep-water wave parameters (i.e. significant wave height, peak wave period, mean wave direction) from the wavewatch iii model [28]. for the simulations of sediment transport, the parameters adopted from [8-9, 32-33] were used. iv. results and discussion a. results first, the model performance of the mike and delft3d systems needs to be calibrated and validated. in the process of calibration, model parameters were modified to reduce the error between the simulated and observed discharges. the model parameters found during calibration were kept in the process of validation. the measured flow data at kien giang station were used to validate the rainfall-runoff and river flow models. figure 3 shows the discharge simulations of the mike system for calibration and validation during the flood seasons of 2015 and 2016.
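the agreement between simulated and observed discharge reported next is quantified with the nash-sutcliffe efficiency; a minimal, library-free sketch (the function name is ours, not part of mike):

```python
def nash_sutcliffe(observed, simulated):
    """nash-sutcliffe efficiency: 1 - sum((o - s)^2) / sum((o - mean(o))^2).
    nse = 1 means a perfect fit; nse = 0 means the model is no better
    than simply predicting the mean of the observations."""
    o_mean = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - o_mean) ** 2 for o in observed)
    return 1.0 - err / var
```

values close to 1, like the 0.89 and 0.98 reported below for the discharge calibration and validation, therefore indicate that the simulated hydrograph closely tracks the observed one.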
it is observed that the difference between the simulated and observed discharge at kien giang is negligible, with nash-sutcliffe indices [26] of 0.89 (in 2015) and 0.98 (in 2016). the models provide well-fitted results against the measured data at kien giang station, as shown in figure 3. the model calibration and validation showed that the timing of the peaks was captured well, but the model slightly underestimated the discharge value in november 2015 during calibration. the overland flow runoff coefficient (cqof) and the time constant for routing overland flow (ck12) are the parameters most sensitive to the simulated discharge, followed by the maximum water content in root zone storage (lmax) and the root zone threshold value for overland flow (tof). these parameters are defined on the basis of statistical performance indices (i.e. the coefficient of determination). (a) (b) fig. 3. comparison between modeled and observed flow discharges at kien giang station for the flood seasons in (a) 2015 and (b) 2016 the hydrodynamic model was calibrated for the period from 16 may to 20 may 2015 using the data collected at the awac station in the estuary. model validation is firstly performed using the water level, depth-averaged current, and wave measurements at the awac station. figures 4-5 present a comparison between model predictions and measurements for water surface levels and depth-averaged flow currents at the awac station. the nash-sutcliffe indices for the efficiencies of the model versus data for water surface elevation and the x- and y-components of flow velocity are 0.96, 0.76, and 0.63 respectively, indicating a good agreement between the model and the data. the model output is mostly influenced by the water level and discharge at the model boundaries and is especially sensitive to winds. besides water surface elevation and depth-averaged flow currents, the comparison of wave parameters, including significant wave height, wave period, and wave direction, between model and data also provides reasonable agreement (figure 6). the satellite images and field survey data were used to validate the simulations of the sediment transport. fig. 4. modeled (blue) and measured (red) water level at awac station (a) (b) fig. 5. comparison between modeled (blue) and measured (red) flow velocities for the (a) x- and (b) y-components at awac station (a) (b) (c) fig. 6. model prediction and measured data comparison for (a) significant wave height, (b) wave period, and (c) wave direction at awac station b. discussion based on the calibrated model, the hydrodynamics and morphodynamics of the nhat le estuary were simulated and analyzed to investigate the influences of different governing processes such as tides, waves, and river flows. firstly, a simulation with fresh water inflow and tides-only forcing was carried out. then a fully hydrodynamic model with all forcings (fresh water inflow, tides, winds, and waves) was simulated. two simulation periods, may 2015 and september 2015, represented the different conditions in summer and winter respectively. 1) wave characteristics the model results of wave parameters were extracted at four different locations (p1 to p4) surrounding the entrance (figure 7(a)). wave roses for the period 2015-2016 are plotted for these locations in figure 7(b). the purpose of this is to clarify the role of winds in affecting the wave characteristics. it is observed that the deep-water waves prevail from the east and north-northeast directions. when the waves approach the estuary, the dominant wave directions are east and northeast due to wave refraction.
as the northeast waves are mostly normal to the coastline, the wave action would contribute mainly to the cross-shore sediment transport but not to the long-shore sediment transport. therefore, the dominant eastern waves could be the major source inducing longshore sediment transport in the estuary. due to the dominant eastern wave action, the net long-shore sediment transport is directed from southeast to northwest, which elongates the southern sandspit and forces the main channel to head north. (a) (b) fig. 7. (a) locations for wave extraction, (b) corresponding wave roses 2) estuary hydrodynamics figure 8 shows the peak ebb and flood tide velocities in three different situations: 1) tidal forcing only, 2) tides and northeast waves in the winter monsoon season, and 3) tides and southeast waves in the summer monsoon season. snapshots of the depth-averaged currents during flood and ebb tide periods show strong velocities in the tidal channel at the entrance due to the restriction of the entrance by the elongating southern sandspit. the combination of tides and fresh water inflow causes a much stronger ebb tidal velocity than flood tidal velocity (figures 8(a) and 8(d)). the strongest flow velocities during flood and ebb tides at the entrance have an important role in maintaining the entrance and orienting the main channel. away from the entrance, the flow currents decrease quickly over the ebb tidal delta. it can be seen that the tidal currents along the coast during the ebb tide are much stronger than during the flood tide, causing the system to export sediment to the southern coast, i.e. the tidal currents transport fluvial sediment from the river mostly to the southern coast. fig. 8.
Peak velocities during flood tide (top) and ebb tide (bottom) under different tide and seasonal wave conditions (axes units are meters)

When waves are present, the contribution of the wave radiation stresses makes the flow field more complex. The results show that the long-shore currents during the winter NE waves are much stronger than during the summer SE waves, but both wave conditions generate long-shore currents and long-shore sediment transport in the SE-NW direction, which causes the southern sandspit to elongate northwestward. Unlike the fresh-water-and-tide-only conditions, with waves present both the flood and the ebb currents are stronger, and the flood currents are able to transport sediment from the ebb tidal delta, together with that delivered by the long-shore currents, further landward, thus reshaping the shores. The model results suggest that tidal and river flows dominate the main channel and the inner estuarine zone. Especially during high river discharge events, which are frequent over the winter months, river discharge and ebb tides flush sediment seaward, and the offshore tidal currents then transport it to supply the southern coast. Wave-induced circulation and alongshore currents prevail on the ebb tidal delta and in the near-shore region on both sides of the estuarine mouth. In the near-shore area away from the inlet, wave-induced circulation patterns are largely driven by the interaction between the waves and the seabed. The strong wave radiation stress modifies the depth-averaged velocity pattern, especially near the coast and the sand bar, due to wave breaking in this region. Under combined currents and waves, the flow magnitude increases considerably in the tidal channel and particularly in shallow water. Notably, the currents induced by the coastal waves, acting together with the tidal currents, can reach up to 0.5 m/s along most of the coastline.
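The link between oblique wave incidence and net longshore drift discussed above can be illustrated with a classical CERC-type bulk transport formula (Shore Protection Manual form). This is not the transport model used in the paper (which relies on van Rijn's formulations); the coefficient values below are textbook assumptions, and the point of the sketch is only the directional dependence sin(2α):

```python
import math

def cerc_transport(Hb, alpha_b_deg, K=0.39, gamma_b=0.78, s=2.65, p=0.4, g=9.81):
    """CERC-type volumetric longshore transport rate (m^3/s), a sketch.
    Hb: breaking significant wave height (m)
    alpha_b_deg: breaker angle relative to shore-normal (deg); its sign
    sets the drift direction. K, gamma_b, s, p are assumed textbook values."""
    alpha = math.radians(alpha_b_deg)
    return (K * math.sqrt(g / gamma_b) * Hb ** 2.5 * math.sin(2 * alpha)
            / (16.0 * (s - 1.0) * (1.0 - p)))

# Obliquely incident eastern waves drive a net drift; shore-normal
# (northeast) waves drive none, as argued in the text.
q_oblique = cerc_transport(Hb=1.5, alpha_b_deg=10.0)
q_normal = cerc_transport(Hb=1.5, alpha_b_deg=0.0)
```

The sign reversal with the incidence angle is what makes the dominant eastern waves, rather than the shore-normal northeast waves, responsible for the SE-NW drift.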
These currents, combined with the tidal and river flow, produce a jet that can reach a depth of 10 m under ebb tide conditions.

3) Sediment Transport and Bottom Changes
Sediment transport and bottom morphological changes were simulated for various combinations of hydrodynamic and wave forcings. The sediment grain size was set uniformly to d50 = 0.03 mm based on bed core data. Tides, waves, and river discharge are the main factors affecting the sediment transport and the morphological response.

Fig. 9. Erosion/accretion processes displayed by (a) simulation modeling, (b) Google Earth image (screenshot from Google Earth, map data: Image@2015 Digital Globe, Image@2015 TerraMetrics and Maxar Technologies), and (c) survey data for the flood season in 2015

The simulations fitted the measured data well during the high flow season. Sediment is pushed away from the estuary to the ebb tidal delta (Figure 9). Alongshore wave-driven and tidal currents redistribute this sediment to accrete along the coastline. It is worth noting that during the low flow season, as a result of the long-shore drift and sediment transport from the south, which is only weakly interrupted by the tides and river inflow, sediment continues to accumulate at the southern side of the ebb tidal delta and at the tip of the southern sandspit, causing the sandspit to develop northward (Figure 10).

Fig. 10. Erosion/accretion processes displayed by (a) simulation modeling and (b) survey data for a low flow season (2015)

V. Conclusions
This study presents the first attempt to fully couple hydrodynamic and morphodynamic models (MIKE and Delft3D) for a better insight into the morphology evolution of the Nhat Le estuary, Vietnam.
The simulations document complex trends under the combined effects of tides, waves, flows, and winds. The major physical processes governing the estuary morphology, including tides, waves, and freshwater discharge, were simulated. The model system was calibrated and validated using measured and observed data from 2005 to 2016. The simulations presented in this paper are, of course, limited to the particular sedimentary and morphological conditions of the Nhat Le estuary. Although the findings do not account for the sedimentary and geological evolution upstream of the Nhat Le river basin or at the regional scale of the north-central coast of Vietnam, they allow forming hypotheses and conducting further research on a wider scale. The simulation results are in accordance with the measured and observed data during the calibration and validation periods. They demonstrate that the seasonal variations of freshwater flow and ocean waves under the tropical monsoon regime significantly affect the behavior of the estuary morphology. The role of the freshwater flow in the flood season is to cut off the southern sandspit and to maintain and shape the main channel. Sediment from the river is exported to the ebb tidal delta by the ebb-dominant freshwater inflow and tidal currents. Outside the estuary, ebb-dominant tidal currents transport the sediment southward and supply the southern coast. The prevailing waves in winter and summer induce long-shore drift and sediment transport in the SE-NW direction. In the low flow season this long-shore sediment transport is dominant, causing sediment to deposit on the southern side of the ebb tidal delta and elongating the southern sandspit, which narrows the estuary entrance and reorients the main channel.

Acknowledgment
This research was supported by projects VAST 06.06/19-20, KC08.16/16-20, and NDT.30.RU/17 protocol.
Engineering, Technology & Applied Science Research Vol. 10, No. 2, 2020, 5512-5519 www.etasr.com Alkhafaji & Izzet: Prestress Losses in Concrete Rafters with Openings

Prestress Losses in Concrete Rafters with Openings

Falah Jarass Aied Alkhafaji, Department of Civil Engineering, College of Engineering, University of Baghdad, Iraq, falahgaras@gmail.com
Amer Farouk Izzet, Department of Civil Engineering, College of Engineering, University of Baghdad, Iraq, amer.f@coeng.uobaghdad.edu.iq

Abstract—In this paper, experimental work was conducted to evaluate the prestressing force losses of 13 (12 perforated and 1 solid) simply supported prestressed concrete rafters. All beams had the same dimensions and reinforcement. The tested beams were divided into four main groups, from which three additional subgroups were derived. The groups were classified according to the size, number, and configuration of the openings, and the orientation of the posts (vertical or inclined). Regarding the prestress losses, which are affected by the cross-section properties, the code provisions are applicable only to prismatic solid beams, so non-prismatic and, moreover, perforated beams also need to be studied. This paper proposes a method based on the same code provisions but taking into consideration the cross-section variation along the beam length. The proposed method divides the overall length of the rafter into a number of assumedly prismatic segments with heights measured at their centers. The overall prestress loss is then found as the sum of the segment contributions weighted by the ratio of each segment length to the overall length. The agreement between the experimental results and those of the proposed method ranged from 84.749% to 95.607%, denoting its validity.

Keywords—prestress losses; rafter beams; openings

I. Introduction
The stresses in the tendons of prestressed concrete members decrease with time, at a decreasing rate, and asymptotically level off after a long time.
The total stress reduction during the lifespan of the member is called the total prestress loss [1-3]. The reduction of the prestressing force can be grouped into two categories:
• Immediate elastic losses during the fabrication or construction process, including elastic shortening of the concrete, anchorage losses, and frictional losses.
• Time-dependent losses such as creep, shrinkage, and those due to temperature effects and steel relaxation, all of which are determinable at the service-load limit state of stress in the prestressed concrete element.
An exact determination of the magnitude of these losses, particularly the time-dependent ones, is not feasible, since they depend on many interrelated factors. Empirical methods of estimating the losses differ between codes of practice and recommendations, and the rigor of these methods depends on the chosen approach and the accepted practice of record [4-5]. The presence of openings in gable reinforced concrete beams has many advantages, such as flexibility, easier handling, and, most importantly, reduced overall weight. Furthermore, concrete has very low to no maintenance cost and high fire resistance, so reinforced concrete gable beams can be a good alternative to steel sections for the roofs of warehouses, industrial buildings, and airplane hangars [6-10]. The link elements (posts) between the upper and lower chords of the rafter beam have advantages such as avoiding Vierendeel-truss shear failure and enhancing the bending capacity and ductility of the beam [11]. Since concrete is inefficient in resisting tensile stress, long-span beams are very difficult to design, so the addition of prestressing reinforcement becomes necessary to reach span lengths that cannot be achieved with ordinary reinforcement [12-13].
The empirical methods recommended by the codes for estimating losses such as elastic shortening, creep, shrinkage, and relaxation are only applicable to prismatic beams, and no special recommendations are included for rafter beams with or without openings. This study proposes a method to estimate the prestress losses of rafter beams, which are divided into a number of segments assumed to be prismatic along their lengths, with heights measured at their centers.

II. Experimental Investigation
The experimental program consisted of casting and testing 13 rafter beams, 12 with openings (perforated) and one solid reference beam without openings. All tested beams had the same rafter geometry, i.e. a rectangular cross-section of 100 mm width and 400 mm height at the center, tapered to 250 mm at the two ends, with an overall length of 3000 mm and a clear span of 2800 mm. Figure 1 shows the geometrical details of the tested beams. Figures 2 and 3 exhibit the details of the solid prestressed rafter beam and of the beams with openings respectively. Mild steel reinforcement of 4, 6, and 12 mm bar diameter was used, while seven-wire low-relaxation strands, Grade 270 with a diameter of 12.7 mm, were used as prestressing steel. The tested beams were divided into four main groups (A, B, C, and D), classified according to the studied variables: size, number of openings, post inclination, and configuration of the openings. Table I exhibits the grouping according to these variables as follows:

Corresponding author: Falah Jarass Aied Alkhafaji

• Main groups: these study the effect of the openings' width versus the number of openings along the beam length. It is worth highlighting that groups A and B had vertical posts, whereas group C had inclined posts.
Groups A and C had the same upper and lower chord depth of 100 mm, whereas it was 75 mm for group B. Group D was prepared to study the effect of increasing the opening area while keeping the same number of circular openings.

• Subgroups: group E was derived to investigate the effect of increasing the opening height, or in other words decreasing the depth of the upper and lower chords of the beam (beams taken from groups A and B). Group F was derived to find the effect of the configuration of the posts linking the upper and lower beam chords, i.e. the geometric shape of the quadratic openings (beams adopted from groups A and C). Group G consists of beams with the same number of openings but different opening configurations, in order to compare circular and quadratic openings (beams chosen from groups A, B, C, and D).

A. Measurement of Prestress Losses
The prestress losses were determined at different stages (at the transfer of the prestressing force and just before the loading test). A special high-accuracy electrical resistance measuring device was used for this purpose. Electrical resistance strain gauges (FLA-6-11, length = 6 mm), with a gauge factor of 2.09 ± 1% and a resistance of 120.4 Ω, were fixed on the strand and bridged to the data logger. An initial reading was recorded on applying the prestressing force, and another reading was taken before the load test. The electrical resistance is converted to strain through:

ε = (Ri − R0) / (k R0)    (1)

where R0 is the electrical resistance at the moment of prestress transfer, Ri the electrical resistance immediately before testing, and k the strain gauge factor (for this type of strain gauge, k = 2.09). The strain obtained through (1) is compared with the strain calculated from the elongation measured immediately after applying the prestressing [14].
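Equation (1) translates directly into code. A minimal sketch, where the function name and the sample readings are ours, while the gauge factor and the nominal 120.4 Ω resistance are those reported:

```python
def resistance_to_strain(r0, ri, k=2.09):
    """Convert a strain-gauge resistance change to strain, per (1):
    eps = (Ri - R0) / (k * R0).
    r0: resistance (ohm) at the moment of prestress transfer
    ri: resistance (ohm) immediately before the load test
    k : gauge factor (2.09 for the FLA-6-11 gauges used)."""
    return (ri - r0) / (k * r0)

# Hypothetical readings around the nominal 120.4-ohm gauge resistance;
# a drop in resistance corresponds to a compressive (negative) strain,
# i.e. a loss of strand tension.
eps = resistance_to_strain(r0=120.40, ri=120.15)
```

The sign convention makes a resistance drop between transfer and testing appear as a negative strain, consistent with a prestress loss.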
B. Prestressing Process
For the post-tensioned concrete gable beams, after the concrete attained the age of 57 days, the prestressing was carried out in the following sequence:
• After fixing the strain gauges on the strand (7-wire strand, 12.7 mm diameter), it was inserted through the PVC duct that had been embedded in the mold before concrete casting.
• Bridging the strain gauge wires to the strain indicator (data logger).
• Attaching the predesigned end bearing steel plates with adequate grips at the beam ends.
• Applying the prestressing force (110 kN) from one end according to the ACI 318M-14 [15] limitations.
• Finally, releasing the jack and measuring the strand elongation and the strain that occurred in the strand. The corresponding strain was monitored and compared with the reading of the pressure gauge of the hydraulic jack.

Table I. Details of the tested beams
Group | Beam mark | Shape of openings | Number of openings | Total area of openings (mm²) | Width of openings (mm) | Height of upper chord (mm) | Height of lower chord (mm)
A | PGB | - | 0 | - | - | - | -
A | PGT6 | trapezoidal | 6 | 180000 | 200 | 100 | 100
A | PGT8 | trapezoidal | 8 | 174000 | 150 | 100 | 100
A | PGT10 | trapezoidal | 10 | 144000 | 100 | 100 | 100
B | PGB | - | 0 | - | - | - | -
B | PGTH6 | trapezoidal | 6 | 240000 | 200 | 75 | 75
B | PGTH8 | trapezoidal | 8 | 234000 | 150 | 75 | 75
B | PGTH10 | trapezoidal | 10 | 195000 | 100 | 75 | 75
C | PGB | - | 0 | - | - | - | -
C | PGP6 | trapezoidal with inclined posts | 6 | 154000 | 200 | 100 | 100
C | PGP8 | trapezoidal with inclined posts | 8 | 151000 | 150 | 100 | 100
C | PGP10 | trapezoidal with inclined posts | 10 | 138000 | 100 | 100 | 100
D | PGB | - | 0 | - | - | - | -
D | PGC1 | circular | 8 | 184200 | D | 75 | 75
D | PGC2 | circular | 8 | 128000 | 0.83D | 100 | 100
D | PGC3 | circular | 8 | 82000 | 0.67D | 120 | 120

Fig. 1. Geometrical details of the tested beams

III.
Experimental Results and Discussion
The prestressing losses, which result from end anchorage slip, strand friction with the duct, strand relaxation, and the shrinkage and creep of the concrete, were monitored. Table II shows the prestress losses and the residual (effective) prestress. The losses ranged from 17.465% to 20.309% of the initial prestress, depending on the size, number of openings, post inclination, and opening configuration. The effects of the considered parameters on the prestress losses are:
• For groups A, B, and C, the prestress losses decrease as the number of openings increases. This may be due to the smaller total opening area and the larger number of posts, which have a positive effect on the prestress losses and on the behavior of the beam in general. The following comparison demonstrates the decrease of the prestress losses with increasing number of openings:
• Group A: 2.558% and 4.33% for beams PGT8 and PGT10 respectively, in relation to beam PGT6.
• Group B: 2.175% and 3.403% for beams PGTH8 and PGTH10 respectively, in relation to beam PGTH6.
• Group C: 2.888% and 4.764% for beams PGP8 and PGP10 respectively, in relation to beam PGP6.
• Group D (circular openings): the prestress losses decrease with decreasing size of the circular openings. The decrease was 4.25% and 6.441% for beams PGC2 and PGC3 respectively, in relation to beam PGC1.
• Group E compares the beams of groups A and B. Decreasing the depth of both the upper and lower chords by 25% (group B relative to group A) increased the losses: the losses of beam PGT6 are 3.824% lower than those of beam PGTH6 (group EI), those of PGT8 are 4.201% lower than those of PGTH8 (group EII), and those of PGT10 are 4.749% lower than those of PGTH10 (group EIII).
• Group F consists of a comparison between the beams of groups A and C. The beams of the two groups have the same number of openings and the same post dimensions, but differ in the post inclination. The prestress losses of beam PGP6 are 0.438% lower than those of beam PGT6 (group FI), those of PGP8 are 0.775% lower than those of PGT8 (group FII), and those of PGP10 are 6.567% lower than those of PGT10 (group FIII).

Fig. 2. (a) Reinforcement details for solid rafter PGB, (b) section A-A (all dimensions are in mm)
Fig. 3. (a) Reinforcement details for rafter with openings PGT6, (b) section B-B, (c) section C-C

• Group G consists of beams with eight openings but different opening configurations (circular and quadratic), both restricted by the same chord depth. The results indicate lower prestress losses in the beams with circular openings than in the beams with quadratic openings. The decreases are: 2.488% for beam PGC2 relative to beam PGT8 (group GI), 2.437% for beam PGC1 relative to beam PGTH8 (group GII), and 1.726% for beam PGC2 relative to beam PGP8 (group GIII).

IV. Prestress Losses with the Proposed Method
The variation of the cross-section properties should be considered when calculating the prestress losses of solid or perforated rafter beams. The proposed method divides a rafter beam into a set of segments considered as prismatic parts, with heights measured at their centers (Figure 4). For the perforated beams, the number of segments should be chosen such that each segment is either solid or contains an opening. The overall prestress loss is found as the sum of the contributions of the beam subdivisions weighted by the ratio of each segment length to the overall length. Beam PGT6 is taken as an example to show the steps of the estimation in detail; the same procedure was applied to the other tested beams.
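The segment bookkeeping that the method relies on can be sketched as follows. The half-rafter tapers from 250 mm at the support to 400 mm at midspan over 1.45 m, so the solid-section height at a segment center x is h(x) = 0.25 + 0.15·min(x, 1.45)/1.45 m, and each segment contributes with weight li/L. The segment layout below follows the PGT6 example of Table III; the function names are ours, and only the solid-section height is computed (for perforated segments the openings additionally reduce the area and inertia):

```python
def section_height(x, h_end=0.25, h_mid=0.40, taper_len=1.45):
    """Height (m) of the solid rafter cross-section at distance x (m)
    from the support; constant beyond the taper length."""
    return h_end + (h_mid - h_end) * min(x, taper_len) / taper_len

# (length li, center xi) of the 8 segments over the 1.5 m half-span
# of beam PGT6, as in Table III
segments = [(0.6, 0.3), (0.2, 0.7), (0.1, 0.85), (0.2, 1.0),
            (0.1, 1.15), (0.2, 1.3), (0.05, 1.425), (0.05, 1.475)]
L = sum(li for li, _ in segments)          # 1.5 m half-span
weights = [li / L for li, _ in segments]   # li/L weight of each segment
heights = [section_height(xi) for _, xi in segments]
```

Any per-segment loss estimate can then be combined as sum(loss_i * w_i for each segment), which is exactly the weighting the proposed method prescribes.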
1) Instantaneous Losses
a) Anchorage seating loss: long tendons are usually less affected by seating loss; for short tendons, the seating loss should be determined and subtracted from the applied prestressing force [16]. Assuming ΔA = 1.5 mm and L = 3000 mm:

ΔfpA = (ΔA / L) Eps    (2)

Substituting, we get ΔfpA = 98.75 MPa.

b) Elastic shortening: no elastic shortening occurred, because only one strand was used in each beam: ΔfpES = 0.

c) Friction losses: the strand is straight, therefore there is no curvature, α = 0. Assuming a wobble coefficient K = 0.002:

ΔfpF = fpi (μα + KL)    (3)

Substituting, we get ΔfpF = 6.25 MPa. The stress remaining in the prestressing strand after all instantaneous losses is:

fpi = fpJ − (ΔfpA + ΔfpES + ΔfpF)    (4)

Substituting, we get fpi = 1116 − (105.8 + 0 + 6.25) = 1003.95 MPa. The net prestressing force is calculated by:

Pi = fpi Aps    (5)

Substituting, we get Pi = 99391 N.

2) Time-Dependent Losses
a) Stage I: losses 24 h after the force transfer.
• Relaxation loss, for t1 = 1 h and t2 = 24 h:

ΔfpR = fpi [(log t2 − log t1) / 10] (fpi / fpy − 0.55)    (6)

Substituting, we get ΔfpR = 6.89 MPa.

Table II.
Prestress losses and stresses in the prestressed concrete beams up to the moment of testing

Group | Beam | Initial prestress (MPa) | Age at testing (days) | Days from transfer to testing | Instantaneous losses after transfer (MPa) | Time-dependent losses at testing (MPa) | Total prestress losses ΔfpT (MPa) | Residual (effective) prestress (MPa) | Loss/initial prestress (%) | Increase ratio of losses (%) (1)
Main groups:
A | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.090 | 17.465 | 0
A | PGT6 | 1116 | 115 | 58 | 107.088 | 110.894 | 217.982 | 898.0176 | 19.533 | 11.837
A | PGT8 | 1116 | 120 | 63 | 105.494 | 106.912 | 212.406 | 903.594 | 19.033 | 8.9766
A | PGT10 | 1116 | 122 | 65 | 104.751 | 103.787 | 208.538 | 907.462 | 18.686 | 6.9922
B | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
B | PGTH6 | 1116 | 123 | 66 | 105.122 | 121.528 | 226.650 | 889.350 | 20.309 | 16.284
B | PGTH8 | 1116 | 125 | 68 | 104.244 | 117.476 | 221.720 | 894.280 | 19.867 | 13.755
B | PGTH10 | 1116 | 127 | 70 | 105.208 | 113.729 | 218.936 | 897.063 | 19.618 | 12.327
C | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.090 | 17.465 | 0
C | PGP6 | 1116 | 128 | 71 | 101.489 | 115.539 | 217.028 | 898.972 | 19.447 | 11.348
C | PGP8 | 1116 | 129 | 72 | 97.996 | 112.764 | 210.760 | 905.240 | 18.885 | 8.1322
C | PGP10 | 1116 | 130 | 73 | 105.789 | 100.899 | 206.688 | 909.312 | 18.520 | 6.0427
D | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
D | PGC1 | 1116 | 131 | 74 | 98.013 | 118.303 | 216.316 | 899.684 | 19.383 | 10.982
D | PGC2 | 1116 | 132 | 75 | 97.727 | 109.396 | 207.123 | 908.878 | 18.559 | 6.2657
D | PGC3 | 1116 | 133 | 76 | 103.649 | 98.733 | 202.382 | 913.618 | 18.135 | 3.8338
Secondary groups:
EI | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
EI | PGT6 | 1116 | 115 | 58 | 107.088 | 110.894 | 217.982 | 898.018 | 19.533 | 11.837
EI | PGTH6 | 1116 | 123 | 66 | 105.122 | 121.528 | 226.650 | 889.350 | 20.309 | 16.284
EII | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
EII | PGT8 | 1116 | 120 | 63 | 105.494 | 106.912 | 212.406 | 903.593 | 19.033 | 8.9766
EII | PGTH8 | 1116 | 125 | 68 | 104.244 | 117.476 | 221.720 | 894.240 | 19.867 | 13.755
EIII | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
EIII | PGT10 | 1116 | 122 | 65 | 104.751 | 103.787 | 208.538 | 907.462 | 18.686 | 6.992
EIII | PGTH10 | 1116 | 127 | 70 | 105.207 | 113.729 | 218.937 | 897.063 | 19.618 | 12.327
FI | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
FI | PGT6 | 1116 | 115 | 58 | 107.088 | 110.894 | 217.982 | 898.018 | 19.533 | 11.837
FI | PGP6 | 1116 | 128 | 71 | 101.489 | 115.539 | 217.027 | 898.973 | 19.447 | 11.348
FII | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
FII | PGT8 | 1116 | 120 | 63 | 105.494 | 106.912 | 212.406 | 903.594 | 19.033 | 8.977
FII | PGP8 | 1116 | 129 | 72 | 97.997 | 112.764 | 210.761 | 905.240 | 18.885 | 8.132
FIII | PGB | 1116 | 105 | 48 | 100.313 | 94.596 | 194.909 | 921.091 | 17.465 | 0
FIII | PGT10 | 1116 | 122 | 65 | 104.751 | 116.464 | 221.215 | 894.785 | 19.822 | 6.640
FIII | PGP10 | 1116 | 130 | 73 | 105.789 | 100.899 | 206.688 | 909.312 | 18.520 | 6.043
GI | PGB | 1116 | 105 | 48 | 100.3134 | 94.596 | 194.9094 | 921.0906 | 17.465 | 0
GI | PGT8 | 1116 | 120 | 63 | 105.4942 | 106.912 | 212.4062 | 903.5938 | 19.0328 | 8.9766
GI | PGC2 | 1116 | 132 | 75 | 97.72652 | 109.396 | 207.1225 | 908.8775 | 18.5594 | 6.2657
GII | PGB | 1116 | 105 | 48 | 100.3134 | 94.596 | 194.9094 | 921.0906 | 17.465 | 0
GII | PGTH8 | 1116 | 125 | 68 | 104.244 | 117.476 | 221.72 | 894.28 | 19.8674 | 13.755
GII | PGC1 | 1116 | 131 | 74 | 98.01274 | 118.303 | 216.3157 | 899.6843 | 19.3831 | 10.982
GIII | PGB | 1116 | 105 | 48 | 100.3134 | 94.596 | 194.9094 | 921.0906 | 17.465 | 0
GIII | PGP8 | 1116 | 129 | 72 | 97.99647 | 112.764 | 210.7605 | 905.2395 | 18.8853 | 8.1322
GIII | PGC2 | 1116 | 132 | 75 | 97.72652 | 109.396 | 207.1225 | 908.8775 | 18.5594 | 6.2657

(1) Increase ratio of losses (%) = (loss of beam − loss of PGB) / loss of PGB × 100

• Creep loss: ΔfpCR = 0
• Shrinkage loss: ΔfpSH = 0
• Tendon stress at the end of stage I:

fps = fpi − ΔfpR    (7)

Substituting, we get fps = 997.06 MPa.

b) Stage II: losses after 58 days.
• Creep loss. The prestressing force is P = fps Aps = 98.709 kN, and:

r² = Ic / Ac    (8)

fcsi = −(P / Aci)(1 + ei² / ri²) + MDi ei / Ici    (9)

For segment no. 1: i = 1, l1 = 0.6 m (segment length), x1 = 0.3 m, y1 = (0.15/1.45) × 0.3 = 0.031 m, h1 = 0.25 + y1 = 0.281 m (height at the segment center), b1 = 0.1 m (beam width), A1 = b1 × h1 = 0.0281 m² (cross-section area of the segment), so I1 = b1 h1³/12 = 0.00018497 m⁴, r1² = I1/A1 = 0.00658 m², and e1 = h1/2 − 0.05 = 0.0905 m. Taking h = 0.25 m, we have:
w1 = A1 × γc = 0.6 kN/m and w2 = 0.0155 kN/m. The support reaction due to self-weight is R = 0.963 kN, and the dead-load moment at the segment center is MD1 = 0.2617 kN·m. Substituting in (9) gives fcs1 = 7.75769 MPa, and fcs1 × l1/L = 3.103 MPa. We then have:

fcs = Σ fcsi × (li / L)    (10)
Ec = 4700 √f'c    (11)
n = Eps / Ec    (12)

Substituting in (10)-(12), we get fcs = 7.718 MPa, Ec = 29725 MPa, and n = 6.65. For a post-tensioned beam Kcr = 1.6, so:

ΔfpCR = n Kcr fcs    (13)

Substituting, we get ΔfpCR = 82.12 MPa. The same steps are repeated for the other segments of the beam, and the total creep loss is found by weighting fcsi by li/L for all segments, as in (10). The calculations are shown in Table III.

• Shrinkage loss:

ΔfpSHi = 8.2 × 10⁻⁶ Ksh Eps (1 − 0.06 Vi/Si)(100 − RH)    (14)
ΔfpSH = Σ ΔfpSHi × (li / L)    (15)

For i = 1: l1 = 0.6 m, x1 = 0.3 m, y1 = 31.034 mm, so h1 = 281.03 mm, b1 = 100 mm, V1 = 28103 mm² (volume per unit length), S1 = (b1 + h1) × 2 = 762.069 mm (surface perimeter of the segment), with Ksh = 0.77, RH = 70%, and Eps = 197500 MPa. Substituting, we get ΔfpSH1 = 34.0994 MPa, ΔfpSH1 × l1/L = 13.6398 MPa, and ΔfpSH = Σ ΔfpSHi × li/L = 34.48 MPa. The same steps are repeated for the other segments, and the total shrinkage loss is the cumulative sum of ΔfpSHi weighted by li/L, as in (15). The calculations are shown in Table IV.

c) Relaxation loss after 43 days: with fps = 997.06 MPa, t1 = 24 h, and t2 = 1032 h, (6) gives ΔfpR = 8.02 MPa.

The total time-dependent losses are ΔfpT = ΔfpR + ΔfpSH + ΔfpCR = 124.622 MPa, and the effective prestress is fpe = fps − ΔfpT = 872.438 MPa. The same calculations were repeated for all beams. Table V shows the prestress losses obtained by the proposed method and the stresses in the prestressed beams. For simplicity, each opening is replaced by an equivalent trapezoid of the same area as the original opening, and uniform posts are considered.
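The numerical chain above for beam PGT6 can be checked with a short script. The strand modulus and the per-segment stresses and lengths are taken from the text and Table III; the yield stress fpy = 0.9 × 1860 MPa assumed for the Grade 270 strand is ours (the paper does not state it), chosen because it reproduces the reported relaxation value. The function names are also ours:

```python
import math

E_PS = 197_500.0      # strand modulus (MPa), as used in the shrinkage table
F_PY = 0.9 * 1860.0   # assumed yield stress of the Grade 270 strand (MPa)

def anchorage_loss(slip_mm, length_mm, e_ps=E_PS):
    """(2): anchorage seating loss = (slip / tendon length) * Eps."""
    return slip_mm / length_mm * e_ps

def relaxation_loss(f_p, t1_hr, t2_hr, f_py=F_PY):
    """(6): relaxation loss between times t1 and t2 (hours)."""
    return f_p * (math.log10(t2_hr) - math.log10(t1_hr)) / 10.0 * (f_p / f_py - 0.55)

def creep_loss(f_cs_segments, n=6.65, k_cr=1.6):
    """(10) and (13): li/L-weighted concrete stress, then n * Kcr * fcs.
    f_cs_segments: list of (fcsi in MPa, segment length li in m)."""
    L = sum(li for _, li in f_cs_segments)
    f_cs = sum(f * li for f, li in f_cs_segments) / L
    return n * k_cr * f_cs

dfpa = anchorage_loss(1.5, 3000.0)        # eq. (2): 98.75 MPa
dfpr1 = relaxation_loss(1003.95, 1, 24)   # eq. (6), stage I: about 6.89 MPa
table3 = [(7.75769, 0.6), (8.24952, 0.2), (7.04862, 0.1), (8.15642, 0.2),
          (6.72019, 0.1), (8.10221, 0.2), (6.45419, 0.05), (6.42646, 0.05)]
dfcr = creep_loss(table3)                 # eq. (13): about 82.12 MPa
```

With these inputs the script reproduces the reported anchorage, stage-I relaxation, and creep losses, which is a useful sanity check on the segment-weighting scheme.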
the same simplification is used for group d, where equivalent rectangles are positioned.

fig. 4. dividing the beam into a set of segments: (a) details and number of segments, (b) section 1-1, (c) section 2-2, (d) section 3-3.

table iii. calculation steps of creep loss for beam pgt6 (all dimensions are in m)
| part no. | i | li | xi | yi | hi | bi | aci | ici | r²i | ei | pi | mdi | fcsi | fcsi×li/l |
| 1 | 1 | 0.6 | 0.3 | 0.031 | 0.281 | 0.1 | 0.0281 | 0.00018497 | 0.00658 | 0.0905 | 98.71 | 0.2597 | 7.75769 | 3.103 |
| 2 | 2 | 0.2 | 0.7 | 0.072 | 0.2 | 0.1 | 0.02 | 6.6667e-05 | 0.00333 | 0.05 | 98.71 | 0.5167 | 8.24952 | 1.1 |
| 3 | 3 | 0.1 | 0.85 | 0.088 | 0.338 | 0.1 | 0.0338 | 0.00032159 | 0.00952 | 0.119 | 98.71 | 0.585 | 7.04862 | 0.47 |
| 4 | 4 | 0.2 | 1 | 0.103 | 0.2 | 0.1 | 0.02 | 6.6667e-05 | 0.00333 | 0.05 | 98.71 | 0.6408 | 8.15642 | 1.087 |
| 5 | 5 | 0.1 | 1.15 | 0.119 | 0.369 | 0.1 | 0.0369 | 0.00041858 | 0.01134 | 0.1345 | 98.71 | 0.6849 | 6.72019 | 0.448 |
| 6 | 6 | 0.2 | 1.3 | 0.134 | 0.2 | 0.1 | 0.02 | 6.6667e-05 | 0.00333 | 0.05 | 98.71 | 0.7131 | 8.10221 | 1.080 |
| 7 | 7 | 0.05 | 1.425 | 0.147 | 0.397 | 0.1 | 0.0397 | 0.00052306 | 0.01316 | 0.1487 | 98.71 | 0.7134 | 6.45419 | 0.215 |
| 8 | 8 | 0.05 | 1.475 | | 0.4 | 0.1 | 0.04 | 0.00053333 | 0.01333 | 0.15 | 98.71 | 0.7309 | 6.42646 | 0.214 |
| Σli | | 1.5 | | | | | | | | | | | fcs | 7.718 |

table iv. calculation steps of shrinkage loss for beam pgt6 (all dimensions are in mm)
| part no. | li | xi | yi | hi | vi | si | ksh | eps | rh | Δfshi | Δfshi×li/l |
| 1 | 600 | 300 | 31.034 | 281.03 | 28103 | 762.069 | 0.77 | 197500 | 70 | 34.0994 | 13.639 |
| 2 | 200 | 700 | | 200 | 20000 | 800 | 0.77 | 197500 | 70 | 35.1658 | 4.688 |
| 3 | 100 | 850 | 87.931 | 337.93 | 33793 | 875.862 | 0.77 | 197500 | 70 | 33.9463 | 2.263 |
| 4 | 200 | 1000 | | 200 | 20000 | 800 | 0.77 | 197500 | 70 | 35.1658 | 4.688 |
| 5 | 100 | 1150 | 118.97 | 368.97 | 36897 | 937.931 | 0.77 | 197500 | 70 | 33.8785 | 2.2585 |
| 6 | 200 | 1300 | | 200 | 20000 | 800 | 0.77 | 197500 | 70 | 35.1658 | 4.688 |
| 7 | 50 | 1425 | 147.41 | 397.41 | 39741 | 994.828 | 0.77 | 197500 | 70 | 33.8237 | 1.127 |
| 8 | 50 | 1475 | | 400 | 40000 | 1000 | 0.77 | 197500 | 70 | 33.819 | 1.127 |
| Σl | 1500 | | | | | | | | | Δfsh | 34.482 |

table v.
prestress losses based on the proposed method and stresses in prestressed concrete beams up to the moment of testing (columns 3-8: instantaneous losses (1) at transfer (i); columns 9-11: time-dependent losses (2) at the age of test (ii); stresses in mpa)
| group | beam | Δfpa | Δfes | Δfpf | Δfpr | Δfcr | Δfsh | Δfpr | Δfcr | Δfsh | Δfpt (ii) | fps | fpe | Δpt (1)+(2) | prestress loss/initial prestress (%) | increasing ratio of losses (%) |
| a | pgb(ref) | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 7.65 | 76.47 | 33.975 | 118.095 | 997.06 | 878.965 | 229.985 | 20.608 | 0 |
| | pgt6 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.02 | 82.12 | 34.482 | 124.622 | 997.06 | 872.438 | 236.512 | 21.193 | 2.838 |
| | pgt8 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.18 | 81.54 | 34.48 | 124.2 | 997.06 | 872.86 | 236.09 | 21.155 | 2.654 |
| | pgt10 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.25 | 80.9 | 34.4 | 123.55 | 997.06 | 873.51 | 235.44 | 21.097 | 2.371 |
| b | pgb(ref) | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 7.65 | 76.47 | 33.975 | 118.095 | 997.06 | 878.965 | 229.985 | 20.608 | 0 |
| | pgth6 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.28 | 82.284 | 34.61 | 125.174 | 997.06 | 871.886 | 237.064 | 21.243 | 3.078 |
| | pgth8 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.33 | 81.703 | 34.609 | 124.642 | 997.06 | 872.418 | 236.532 | 21.195 | 2.846 |
| | pgth10 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.39 | 81.028 | 34.505 | 123.923 | 997.06 | 873.136 | 235.813 | 21.130 | 2.534 |
| c | pgb(ref) | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 7.65 | 76.47 | 33.975 | 118.095 | 997.06 | 878.965 | 229.985 | 20.608 | 0 |
| | pgp6 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.42 | 81.204 | 34.43 | 124.05 | 997.06 | 873.006 | 235.944 | 21.142 | 2.591 |
| | pgp8 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.45 | 80.84 | 34.43 | 123.72 | 997.06 | 873.34 | 235.61 | 21.112 | 2.445 |
| | pgp10 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.47 | 80.429 | 34.169 | 123.06 | 997.06 | 873.992 | 234.958 | 21.054 | 2.162 |
| d | pgb(ref) | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 7.65 | 76.47 | 33.975 | 118.095 | 997.06 | 878.965 | 229.985 | 20.608 | 0 |
| | pgc1 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.5 | 84.953 | 34.365 | 127.818 | 997.06 | 869.242 | 239.708 | 21.479 | 4.227 |
| | pgc2 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.53 | 80.915 | 34.23 | 123.675 | 997.06 | 873.385 | 235.565 | 21.108 | 2.426 |
| | pgc3 | 98.75 | 0 | 6.25 | 6.89 | 0 | 0 | 8.55 | 79.048 | 34.06 | 121.658 | 997.06 | 875.402 | 233.548 | 20.927 | 1.549 |

v. comparison between the experimental and the proposed method's results
as demonstrated in table vi, the experimental results converge to those of the proposed method.
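the row-wise bookkeeping behind table v can be verified with a short sketch for one beam (pgt6), assuming an initial prestress of 1116 mpa as in the experimental-results table; the variable names are illustrative.

```python
# arithmetic check of one table v row (beam pgt6); names are illustrative.
inst = [98.75, 0.0, 6.25, 6.89]     # instantaneous losses at transfer (1), mpa
timedep = [8.02, 82.12, 34.482]     # time-dependent losses at test (2): relaxation, creep, shrinkage, mpa
fps = 997.06                        # tendon stress at end of stage i, mpa
fp_initial = 1116.0                 # assumed initial prestress, mpa

dfpt2 = sum(timedep)                # Δfpt (ii): 124.622 mpa
fpe = fps - dfpt2                   # effective prestress: 872.438 mpa
total = sum(inst) + dfpt2           # Δpt (1)+(2): 236.512 mpa
loss_pct = 100 * total / fp_initial # prestress loss / initial prestress: ~21.193 %
```

the computed values match the pgt6 row of table v, which is a quick consistency check on the proposed method's totals.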
the ratio of the experimental to the proposed method's results ranged from 84.749% to 95.607%, denoting the validity of the proposed estimation.

table vi. result comparison
| group | beam | experimental loss | proposed loss | experimental loss / proposed loss (%) |
| a | pgb | 194.909 | 229.985 | 84.749 |
| | pgt6 | 217.982 | 236.512 | 92.165 |
| | pgt8 | 212.406 | 236.09 | 89.968 |
| | pgt10 | 208.538 | 235.44 | 88.574 |
| b | pgb | 194.909 | 229.985 | 84.749 |
| | pgth6 | 226.650 | 237.064 | 95.607 |
| | pgth8 | 221.720 | 236.532 | 93.738 |
| | pgth10 | 218.937 | 235.814 | 92.843 |
| c | pgb | 194.909 | 229.985 | 84.749 |
| | pgp6 | 217.028 | 235.944 | 91.983 |
| | pgp8 | 210.760 | 235.61 | 89.453 |
| | pgp10 | 206.688 | 234.958 | 87.968 |
| d | pgb | 194.909 | 229.985 | 84.749 |
| | pgc1 | 216.316 | 239.708 | 90.241 |
| | pgc2 | 207.123 | 235.565 | 87.926 |
| | pgc3 | 202.382 | 233.548 | 86.656 |

vi. conclusion
• increasing the number of quadratic openings along the beam length from 6 to 8 and then to 10 decreased the prestress losses by an average of 2.54% and 4.166%, respectively.
• reducing the size of the circular openings by 17% and 33% decreased the prestress losses by 4.25% to 6.441%.
• decreasing the depth of both the upper and lower chords of the perforated beams by 25%, i.e. increasing the openings' height, increased the prestress losses by an average of 4.258%.
• the average decrease in the prestress losses in beams having inclined posts in comparison with those having vertical ones was 2.593%.
• the average decrease in prestressing-force losses for beams with circular openings in comparison with quadratic ones was 2.217%.
• the results of the proposed estimation method converge with the experimental results; the agreement ranged from 84.749% to 95.607%.
appendix
ac: area of net concrete section
aps: area of prestressed steel in tension zone
ec: modulus of elasticity of concrete
eps: modulus of elasticity of prestressed steel
fpi: initial prestress stress in prestressed steel
fpj: stress in prestressed steel at jacking stage
f'c: cylinder concrete compressive strength at 28 days
fps: stress in prestressed steel at nominal flexural strength
rh: relative humidity
ic: second moment of area of net concrete section about an axis through its centroid
k: wobble coefficient
md: dead load moment
pi: initial prestress force
μ: curvature friction coefficient
α: total angular change of prestressing tendon profile in radians from tendon jacking end to any point x
Δa: slip in tendon from anchorage
Δfpa: prestress losses due to anchorage seating
Δfes: prestress losses due to elastic shortening
Δfpf: prestress losses due to friction
Δfpr: prestress losses due to relaxation
engineering, technology & applied science research vol. 10, no.
4, 2020, 5903-5913 5903 www.etasr.com sasaki et al.: extracting problem linkages to improve knowledge exchange between science and …
extracting problem linkages to improve knowledge exchange between science and technology domains using an attention-based language model
hajime sasaki, institute for future initiatives, the university of tokyo, tokyo, japan, sasaki@ifi.u-tokyo.ac.jp
amarsanaa agchbayar, data artist inc., tokyo, japan, amar@data-artist.com
satoru yamamoto, data artist inc., tokyo, japan, yamamoto@data-scientist.com
nyamaa enkhbayasgalan, data artist inc., tokyo, japan, enkhbayasgalan@mn.data-artist.com
abstract—science and technology activities can be considered problem-solving activities, and scientific papers and patent publications can be viewed as providing explicit knowledge gained from the problem-solving of academia and industry respectively. however, even in the same field, the approach to the same problem is not consistent between a paper and the patented technology. the creation of information silos in science and technology generates inefficiency in human intellectual production. therefore, this study examines whether insights from technical problems can be shared with academics to solve scientific problems. we propose a concept to link the problems between these two domains using a linguistic approach for knowledge discovery that connects science and technology. we extracted scientific papers from the association for computational linguistics dataset, and patent literature from the derwent innovation platform. from these, pairs of problem-defining sentences were identified and extracted using an attention-based language model. for example, we were able to extract issues, such as annotation difficulties in the analysis of social network data, that do not necessarily arise from scientific papers but can be hinted at by patented techniques prior to the paper.
these results suggest that scientific problems and industrial solutions can provide mutual insight. this knowledge discovery approach is recommended not only for benefiting corporate activities but also for grasping research trends. keywords-problem extraction; information matching; scientometrics; literature-based discovery (lbd); attention-based language model i. introduction science progress and technology change have become important issues on innovation and economics studies [1-3]. the way science and technology interact is a long-standing question. the knowledge flow in some areas, such as pharmaceuticals, can be effectively explained by linear models through basic research, applied research, development, and diffusion (production) [4-6]. linear models have been widely disseminated by academic institutions [7] lobbying for research funding, by economists [8] serving as expert advisors to policy makers and have been viewed as linear concepts of innovation by science and technology scholars [4]. on the other hand, recent research on innovation shows that such a linear model of innovation is insufficient to represent reality [5, 9-13]. the linear model does not consider the empirical evidence that technological change often results from experience and ingenuity rather than scientific theory and methods, the instrumental role of technological development in eliciting scientific explanation, and the importance of technology-based instruments for scientific research [14, 15]. it is sometimes pointed out that the linear model overlooks technology’s influence on the setting of the scientific agenda [16, 17]. innovation involves the transfer of knowledge between the scientific and industrial domains, as exemplified by the chain link model [11, 18, 19] and the network model [16]. 
of course, the linear model has some ability to explain innovation, so not all views that treat innovation as a linear process are wrong [20]; however, once the complexity of science and technology interactions is understood, it is undeniable that science pushes technology and technology pushes science. information retrieval research involving academic articles and patents has a long history [21-23]. information retrieval is defined as finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers) [24]. existing bibliometric methods regard research papers as representing scientific research and patents as representing innovation [25]. non-patent literature is sometimes used as a method for directly measuring the relationship between science and technology [22, 26, 27]. the patent examiner searches for prior non-patent literature based on the patent specification field. the number of non-patent literature citations indicates the extent to which the related technology has already been mentioned in the context of the science domain. as mentioned earlier, the relationship between science and technology is not always straightforward and simple. traditionally, patent citations to papers have been comprehensively studied to understand the transfer of knowledge from science to technology [28]. however, the transmission from technology to science has not been well studied [29]. while patents contain detailed methodological information on successful innovations, references to patents are rarely found in applied science or science texts [29, 30, 31].
corresponding author: hajime sasaki
according to glanzel and meyer [29], the publications that have such reverse citations account for only 0.98% of all total publications between 1996 and 2000, of which 30% are in chemical-related fields. however, the absence of bibliographic references does not necessarily mean that technical and scientific knowledge are unrelated [16]. further, the explosive increase in scientific and technical knowledge can be problematic. there are over three million articles written in english [32]. regarding patents, there were 3.3 million patent applications in 2018, up 5.2% from 2017 for a ninth straight yearly increase [33]. in this context, it is becoming increasingly difficult for scientific papers and patents to fully reference each other. in every field, science and technology are fragmented into information silos, resulting in a condition in which one information system is unable to interoperate with other systems that are or should be associated with it. if debate proceeds only within the corresponding silos and information is not shared, even though the science and technology fields work on similar issues, the resources devoted to humankind’s intellectual activities will be significantly wasted. in this study, we focus on the possibility of extracting common needs between science and technology that have not been fully addressed by existing articles. there is an approach known as literature-based discovery (lbd) [34-37]. one way to determine the common needs between two fields by using knowledge discovery methods and involving bibliographic information is swanson’s abc model [38]. he succeeded in hypothesizing and verifying the unknown relationship between raynaud’s disease and fish oil based on bibliographic information. 
although scientific papers and patent publications are usually used as knowledge sources for lbd, most studies using this approach focus on discovering hidden links in the same domain (scientific papers for science domain, patent articles for technology domain). for example, one paper discussed the semantic similarities between gerontology and robotics based on the clustering of direct citation networks in the scientific inner domain [39]. meanwhile, other papers focus on knowledge discovery in a certain field in a cross-domain between scientific papers and patent publications. authors in [40] identified the commercialization gap between fields amply discussed in the science domain but not nearly as well discussed in the technology domain, using the photovoltaics-related knowledge field as an example; they created clusters based on direct citation networks between scientific papers and patent publications and calculated the semantic similarities among these clusters [41]. wang [42] also applied the same method for the micro biofuel field. considerable evidence indicates the importance of linking the same fields in the science and technology domains. several studies have revealed how science pushes technological development [21, 27, 43, 44]. thus, it is effective for industries to extract knowledge and contribute to the technology domain from the science domain [39]. scientific articles provide readers with a problem-solving process in terms of an objective and reproducible knowledge. the introduction, methods, results, and discussion (imrad) format for scientific articles has been gradually adopted since the 1940s [45]. it allows the problem-solving process to be described explicitly and enables the reporting of scientific activities to follow a more standard construction. the establishment of such a document structure greatly influences the research of vocabulary patterns of written language. 
authors in [46, 47] developed a way to analyze the structure of scientific and technological documents [45]. their methods can be used as a possible approach for the purpose of automatic classification, information extraction, and automatic summarization for scientific articles. the extraction of sentences related to problem-solving can also be used for science and technology articles. research on information retrieval from patent publications has also attracted considerable attention. many techniques have been proposed to classify text data of patent publications into problem and solution statements [48-50]. a subject–action– object (sao) structure can be recognized as a problem and solution pattern, which several patents have used [51-53]. authors in [54] attempted to extract and analyze sao structures to detect patent infringement. authors in [55] focused on the identification of rapidly evolving technological trends, and authors in [56] proposed a method to recommend research and development candidates by extracting the sao structure from problem–solution patterns of patent information. however, a few studies have described the relationships between problems and solutions extracted from papers. in this study, we will focus on this particular knowledge discovery issue. regarding extracting information from technical documents, researchers have attempted to extract expressions that represent technical features from patent publications and scientific articles as subtasks of the patent mining task of ntcir–8 (nii testbeds and community for information access research). this project aims at a large-scale evaluation for technologies that support the understanding and use of information, such as information retrieval, question answering, summarization, text mining, and machine translation, from a vast amount of information [57]. this extraction is also expected to be useful for the automatic creation of a technology trend map. 
however, as described above, because the terms used in patents are often more abstract or creative than those used in research papers in order to widen the scope of the claims, problem-extraction methods for patent publications are underdeveloped. on the other hand, based on the premise that the phrase "problem to be solved" in patent publications appropriately represents the technical problem, it has been proposed that a more specific patent map can be created by paying attention to this sentence [58].
there are many approaches to information retrieval using scientific and patent texts, but there are still problems and uncertainties regarding the defining keywords. in this sense, information extraction that does not solely rely on keywords is required. heffernan and teufel [59] showed that word embeddings, a technique in which words are represented as vectors, can be used as features to extract sentences related to problems and solutions, using the association for computational linguistics (acl) anthology as a dataset. they claimed that the detection of problem and solution statements from papers can enable the comparison of similar papers and lead to the automatic generation of review articles. however, they do not describe their method's application to cross-domain articles, and they mention the linking of problem and solution statements as an area of future work. the current study aims to answer the question "is it possible to extract sentences that refer to the same problem (i.e. needs) from both the science and technology domains and obtain information that contributes to knowledge discovery across domains?", that is, whether knowledge from patents can provide insight into the scientific issues being investigated.
in this paper, the concept of inter-domain links for knowledge discovery using a linguistic approach is proposed. this study makes a concrete contribution to the literature because it demonstrates the possibility of building a needs-focused portfolio that includes both science- and technology-related information by extracting appropriate sentences from scientific articles and patents. for example, research articles often mention potential future studies, and knowledge can be obtained from existing patent information for these future investigations. thus, we show that not only does science support technology, but technology can also support science. another contribution of this research is a model that extracts problem statements (sentences) from papers without preparing clue words in advance, and performs better than the existing method [59]. to achieve this, it is hypothesized that the application of a model of language understanding that enables context-sensitive processing, which has been well evaluated in the field of natural language processing, would be effective.
ii. method
a. overview
the methodology of this study is outlined in figure 1. at first, data from scientific publications were taken from the acl database, whereas data from patent publications were taken from the derwent innovation platform, as shown in figure 1(1). problem statements for patents were then identified by whether they begin with the phrase "problem to be solved," as shown in figure 1(2). for scientific articles, the problem statements from the sampled data are extracted, as shown in figure 1(3). finally, we calculated the semantic similarity between the scientific and technical problem statements, as shown in figure 1(4).
these processes are described in more detail in the next section.
fig. 1. method overview.
b. dataset
this section describes the data acquisition and preprocessing procedure shown in figure 1(1). we considered that scientific articles contain scientific knowledge and patent articles industry knowledge. we limited the papers/patents to the field of computational linguistics. the computational linguistics corpus of scientific articles is a subset of the acl anthology released in march 2016 and contains the full text of 22,878 articles. these data were parsed using parscit [60], and tokenization, sentence splitting, and dependency analysis were done with the rasp parser [60]. we randomly sampled 2,500 articles from this dataset, which is the same one used in [59] and is under the creative commons attribution license (cc-by). this allowed easy comparison of the classification performance with [59]. patent data were extracted from the derwent innovation platform provided by clarivate analytics. computer science-related patent data classified as g06n, defined by the world intellectual property organization as "a computer system based on a particular computational model" in the international patent classification, were targeted. a total of 38,718 filtered patent publications were extracted from the database. the "problem-solving concept" is a statement describing the problem solved by the patent [61-63]. patent gazettes often include important sentences that begin with the term "problem to be solved" [48]. thus, we extracted statements containing "problem to be solved" from patents.
c. extraction of problem/solution sentences
the way sentences were classified, as shown in figure 1(2), is described in this section.
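the patent filtering step in the dataset subsection can be sketched as a simple pattern match; the sample abstract strings and the helper name below are illustrative, not from the paper.

```python
import re

# minimal sketch: keep only abstract sentences that begin with the
# "problem to be solved" marker used in patent gazettes.
PATTERN = re.compile(r"^\s*problem to be solved\s*:?", re.IGNORECASE)

def problem_sentences(sentences):
    """Return the sentences that begin with the 'problem to be solved' marker."""
    return [s for s in sentences if PATTERN.match(s)]

abstract = [
    "PROBLEM TO BE SOLVED: To reduce memory usage of a neural network model.",
    "SOLUTION: Weights are quantized to eight bits.",
]
hits = problem_sentences(abstract)   # keeps only the first sentence
```

applied to the 38,718 filtered patents, a filter of this kind is what yields the 2,385 problem sentences reported in the results section.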
we identified problem and solution sentences based on a previously established neural language approach by creating word embedding-based features [59]. word embedding involves mapping words to a vector space in order to capture the meaning of a word or its grammatical structure. it is based on the distributional hypothesis that words having similar meanings appear in similar contexts, that is, have similar distributions of surrounding words [64, 65]. heffernan and teufel [59] proposed a supervised learning model that classifies given sentences into problem or non-problem sentences. they indicated that embedding-based features were effective for classifying these sentences [59]. in this study, we used a neural network language model based on "attention", which has become common [66-68]. "attention" is a mechanism that allows machines to learn which vectors are important when there are multiple vectors. in other words, it informs the prediction model which part of the input data to focus on. we hypothesize that our method can extract problem sentences with higher accuracy by considering the entire context, whereas existing methods such as word2vec focus only on the area immediately before and after the clue word. using this methodology, we constructed a model that determines whether a sentence is a problem statement based on whether it contains words with a high probability of corresponding to a problem. in this step, we conducted unsupervised and supervised learning.
1) unsupervised pre-training
given an unsupervised corpus of tokens u = {u1, …, un}, the likelihood to be maximized in the standard stochastic language model is given by:
l1(u) = Σi log p(ui | ui−k, …, ui−1; Θ)    (1)
where k is the size of the context window, and p is the neural network model with parameters Θ. here, Θ is adjusted by stochastic gradient descent.
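the windowed log-likelihood in (1) can be illustrated with a toy stand-in for the network; the uniform "model" and the tiny vocabulary below are illustrative assumptions, not the paper's transformer.

```python
import math

# toy illustration of objective (1): sum of log-probabilities of each token
# given its k-token context; `prob` stands in for the neural model p(.|context; Θ).
def log_likelihood(tokens, prob, k=1):
    total = 0.0
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - k):i])   # preceding k tokens
        total += math.log(prob(tok, context))
    return total

# uniform stand-in model over a 4-word vocabulary
vocab = ["the", "problem", "to", "solve"]
uniform = lambda tok, ctx: 1.0 / len(vocab)

ll = log_likelihood(["the", "problem"], uniform)   # 2 * log(1/4)
```

training maximizes this quantity over the corpus by adjusting Θ with stochastic gradient descent, exactly as stated after (1).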
a model using the attention mechanism is given by:
p(u) = softmax(hn · weᵀ)    (2)
where n is the number of layers in the neural network, hn is the output of the final layer, and we is the token embedding matrix. in this study, we utilized a published learning model in which a multi-layer transformer decoder is implemented as a language model [67, 69, 70].
2) supervised fine-tuning
parameter adjustment was performed through supervised learning using the model learned in (1). we implemented the classification of problem and non-problem sentences as the supervised learning task. assume a labeled dataset c in which each instance consists of a sequence of input tokens x1, …, xm and a label y. for example, suppose that a group of words constituting a sentence is given as the input tokens; if it is a problem sentence, 1 is assigned to the label y, and 0 otherwise. the input is passed through the previously learned model and an added output layer with parameters wy for predicting y:
p(y | x1, …, xm) = softmax(hlm · wy)    (3)
the objective to be maximized is:
l2(c) = Σ(x,y) log p(y | x1, …, xm)    (4)
we conducted a five-fold cross-validation and a comprehensive evaluation with the average value of the following four evaluation indices:
• precision is the percentage of data predicted positive that is actually positive:
precision = truepositive / (truepositive + falsepositive)    (5)
• recall is the percentage of actually positive data that was predicted to be positive:
recall = truepositive / (truepositive + falsenegative)    (6)
• the f-measure (f1 score) is the harmonic mean of precision and recall:
f1 = 2 × precision × recall / (precision + recall)    (7)
• accuracy is the percentage of data correctly predicted, whether positive or negative:
accuracy = (truepositive + truenegative) / (truepositive + falsepositive + truenegative + falsenegative)    (8)
d. clustering and similarity extraction
here, the processing of clustering and the extraction of similar problem pairs, corresponding to figure 1(4), are described.
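the evaluation indices (5)-(8) can be computed with a small sketch; the confusion counts below are made-up illustrative values, not the paper's results.

```python
# sketch of the evaluation indices (5)-(8) from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)                          # eq. (5)
    recall = tp / (tp + fn)                             # eq. (6)
    f1 = 2 * precision * recall / (precision + recall)  # eq. (7)
    accuracy = (tp + tn) / (tp + fp + tn + fn)          # eq. (8)
    return precision, recall, f1, accuracy

# illustrative counts: 3 true positives, 1 false positive,
# 5 true negatives, 1 false negative
p, r, f1, acc = classification_metrics(tp=3, fp=1, tn=5, fn=1)
```

in the study these four indices are averaged over the five cross-validation folds to give the figures reported in table i.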
we vectorized the documents for the obtained scientific-paper problem sentences and patent problem sentences respectively, and performed clustering. for clustering, we used ward's method [71], which is a kind of hierarchical clustering. ward's method repeats the procedure of merging the two clusters with the smallest increment in the within-cluster sum of squares. this clustering method has shown high performance in hierarchical clustering. additionally, term frequency-inverse cluster frequency (tf-icf) is calculated to extract the characteristic keywords for each cluster. the term frequency gives a measure of the importance of a term within particular sentences. the inverse cluster frequency refers to a measure of the general importance of a term. the tficf of term i in cluster j is given as follows:
tficf(i, j) = tf(i, j) × icf(i) = tf(i, j) × log(n / ni)    (9)
where n is the total number of sentences and ni is the number of sentences containing term i. each cluster was labeled based on the resulting characteristic keywords and sentences. based on the dot products of the embedded vectors of the obtained characteristic words, the similarities of problem sentences between the papers and patents were calculated. by focusing on problem-sentence pairs with high similarity, it is possible to manually confirm whether problem-solving information provided by the patents can be helpful in solving problems mentioned in scientific research papers.
iii. results
a. extracting problem sentences
table ii shows several results of scientific-paper problem sentences classified with this model. the actual problem sentences were labeled as "1" and non-problem sentences as "0". the predicted result "1" means the sentence is predicted as a problem sentence.
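one common reading of the tf-icf weighting in (9) computes the inverse factor over clusters (the paper's n may instead count sentences); under that assumption, with made-up toy clusters, the weighting can be sketched as:

```python
import math

# sketch of a tf-icf weight as in (9): term frequency within a cluster,
# scaled by log(N / n_t), here with N and n_t counted over clusters.
def tficf(term, cluster, clusters):
    tf = cluster.count(term)                        # frequency of term in this cluster
    n_t = sum(1 for c in clusters if term in c)     # clusters containing the term
    return tf * math.log(len(clusters) / n_t)

clusters = [
    ["memory", "efficiency", "memory"],   # toy "memory efficiency" cluster
    ["image", "recognition"],
    ["data", "classification", "data"],
]
w_memory = tficf("memory", clusters[0], clusters)     # 2 * log(3/1): frequent and cluster-specific
w_data_in_c0 = tficf("data", clusters[0], clusters)   # 0: term absent from this cluster
```

terms with high weights under this scheme are frequent inside one cluster but rare elsewhere, which is why they work as the characteristic keywords used to label each cluster.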
the predicted result "0" means the sentence is predicted to be a non-problem sentence. table i shows the classification performance indices; each entry gives precision, recall, f-measure, and accuracy. the results of heffernan and teufel [59], whose study is the most similar to ours, are also shown for comparison.

table i. classification performance

           precision   recall   f1     accuracy
[59]       0.82        0.83     0.82   0.82
proposed   0.89        0.85     0.87   0.87

we were able to extract 2,385 sentences beginning with "problems to be solved" in patent publication abstracts. table iii presents examples of the actually extracted sentences.

b. problem clusters in patents

table iv shows a summary of the top ten clusters in the patents. each cluster name was manually chosen after reviewing all featured sentences. the first cluster was labeled "information system" based on the problem sentences and the keywords extracted by tf-icf, as the cluster related to information processing and input information technology. the second cluster was labeled "memory efficiency and parameter optimization for neural networks," with many problem statements addressing efficiency and optimization. the third cluster was named "data extraction and processing," with many problem statements involving challenges in data extraction. the fourth cluster, "knowledge systems and humans," focuses on knowledge rather than data and therefore features several issues related to human behavior. the fifth cluster, also concerning knowledge systems, is named "user and knowledge systems," because many of its problem statements focus on the issues faced by users of the system. the sixth cluster was named "data classification," with several problems focusing on classification, a machine learning task. the seventh cluster, "image recognition," consists mainly of tasks that used images as data. the eighth cluster was named "circuit of a neuron model" because many of its issues focused on circuit design using the nervous system.
several task statements belonging to the ninth cluster focus on mathematical probabilistic tasks, and thus we named the cluster "estimating parameters, probabilities, calculation methods, and so on." the tenth cluster is a concentration of issues in control engineering and was named "data processing for control engineering." each cluster name is representative of the set, and not all sentences match the cluster name precisely. figure 2 shows the result of vectorizing each document against the problem sentences in patents, followed by compression into two dimensions and clustering. the results are shown in different colors for the top ten clusters in order of cluster size; other clusters are in gray.

table ii. samples of extracted problem/non-problem sentences from scientific articles (n=10)

sentence | label | predicted
"this reduces the efficiency of the dynamic programming" | 1 | 1
"this is expensive" | 1 | 1
"should probably be treated separately, as a preposition modifier" | 0 | 0
"creating these rules requires much cost and that they are usually domain-dependent" | 1 | 1
"it is not capable of modeling bilexical dependencies on the right hand side of the rules" | 1 | 1
"unsupervised constituency parsing is also an active research area" | 0 | 0
"consuming very large parameter spaces" | 1 | 1
"the time required to load and watch the videos" | 1 | 0
"the need for large training data" | 1 | 1
"the possible relationships that exist among the various factors" | 0 | 1

table iii. samples of problem-related sentences in patents (n=5)

problem sentences (example) | application no.
"problem to be solved is the neuron action potential calculation speed is slow in large-scale computer simulation process, the method of the invention can greatly improve the calculation speed of the action potential, while maintaining a relatively high precision, and it is very suitable for simulation of large-scale brain nerve network." | cn106447032a
"problem to be solved: to decrease the number of sensors in use without a significant loss of control precision by constituting a 2nd control system by using a 1st and a 2nd control signal." | jp2000187504a
"problem to be solved is to easily register information for specifying the symptom not only by type but also by designating an individually managed subject." | wo2008007442a1
"problem to be solved: to effectively recognize an object in a practical time, with practical accuracy and in a practical object range." | jp2001195381a
"problem to be solved: to reduce the number of times of multiplication required for finding a covariant matrix for obtaining the coefficient of prediction for minimizing a square root error." | jp2001195586a

table iv. summary of top 10 clusters related to patent problem statements

cluster id | no. of sentences | cluster name | keywords
#1 | 21 | information system | identification, information, abnormality, assisting, creation, semi-supervised, artificial, image, input, system
#2 | 17 | memory efficiency and parameter optimization | problem, solve, difficult, included, that, capability, sample, network, neural, conventional
#3 | 17 | data extraction and processing | extracting, analyzing, annotation, correlation, expected, pattern, data, added, data, stored
#4 | 16 | knowledge systems and humans | personality, artificial, intelligence, person, human, realize, answer, concepts, defined, divided
#5 | 16 | user and knowledge systems | knowledge, user, base, contents, concept, around, document, enormous, extracted, modeling
#6 | 16 | data classification | classifying, classification, target, partial, support, enhancing, kind, source, high, unknown
#7 | 15 | image recognition | monitoring, image, costs, evaluating, holding, interpretation, intention, generating, improving, attributes
#8 | 14 | circuit of a neuron model | neuron, circuit, element, resistance, circuit, neural, output, network, element, bond
#9 | 14 | estimating parameters, probabilities, calculation methods and so on | probability, calculation, similarity, arithmetic, unit, arbitrary, cluster, continuous, decision, independence
#10 | 14 | data processing for control engineering | optimization, control, antenna, robust, ship, enhance, controller, service, efficiency

fig. 2. hierarchical clustering dendrogram in patents.

c. semantic similarity

for all the obtained scientific paper and patent problem sentences, feature words were extracted based on tf-icf, and the sentence similarity was calculated between the two different sources. table v shows the five pairs with the highest similarity.

iv. discussion

first, the proposed model's word selection and classification are discussed.
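the similarity computation described above (dot products of the embedded vectors of characteristic words) can be sketched as follows; averaging word vectors into a sentence vector is our own simplification, and the embeddings are toy values:

```python
import numpy as np

def sentence_vector(word_vectors):
    """Average the embedded vectors of a sentence's characteristic words."""
    return np.mean(word_vectors, axis=0)

def similarity(vec_a, vec_b):
    """Dot-product similarity between two sentence vectors."""
    return float(np.dot(vec_a, vec_b))

# toy 3-d embeddings of the characteristic words of one paper problem
# sentence and one patent problem sentence
paper_words = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
patent_words = np.array([[0.85, 0.15, 0.05], [0.9, 0.05, 0.1]])

score = similarity(sentence_vector(paper_words), sentence_vector(patent_words))
```

sentence pairs with high scores, such as those in table v, can then be inspected manually.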
in table ii, words and phrases such as "reduces the efficiency," "expensive," "it is not capable," and "consuming" intuitively suggest a problem.

table v. problem pairs with high semantic similarity between patent publications and scientific articles

no. | similarity | scientific article problem sentence | patent publication problem sentence
#1 | 1.393 | the present framework can handle only one anchor point (the question term) in the candidate answer sentence | to provide a question sentence candidate presentation device and a program, which allow even a questioner unfamiliar to interview to extract data of good quality from an answerer
#2 | 1.276 | the computational complexity of working with svms | to provide a re-learning method for support vector machine (svm) for improving accuracy of the svm and reducing computational complexity by using a few good samples
#3 | 1.265 | it requires the proportion of positive and negative examples in the test data be close to the proportion in the training data, which may not always hold, particularly when the training data is small | to improve the accuracy of data classification of test data even if the number of training data is small
#4 | 1.211 | the unbalanced knowledge sources shared by human beings and a computer system | to provide an experience transmission system making it possible to mutually share experiences of human beings
#5 | 1.110 | osn data come with no annotations, and it would be impossible to manually annotate the data for a quantitative analysis of self-disclosure | to provide an annotation data analysis technology of determining whether an annotation added to data relates to content of the data or not

in the conventional sentence extraction from scientific articles and patents, such clue words must be manually collected in
advance. however, this process is time-consuming, and it is difficult to collect all clue words from thousands of documents. in addition, it can be difficult to determine whether a clue word actually indicates a problem. from table i, we can confirm that the evaluation scores of the proposed problem/non-problem classifier exceed those of the existing model [59]. the pre-trained attention mechanism classified documents in context, checking whether the clue words indeed correspond to a problem. this suggests that sentence classification in the proposed model performs better than the word2vec approach, which examines only the words immediately before and after the clue words. table iii presents five sample sentences, all using "problem to be solved" as the clue phrase from the abstract information in the patent data. it can be seen that each patent states the objectives to be addressed, such as slow computation speed (cn106447032a) or the desire to reduce the number of sensors without decreasing precision (jp2000187504a). table iv outlines the top 10 clusters resulting from these patent problem statements using ward's method. from the second cluster onward in particular, the focus is on relatively specific tasks, indicating that the clustering of problem sentences was performed properly. in the dendrogram shown in figure 2, the top 10 clusters are spread across the tree; it is therefore reasonable to conclude that they capture an overview of problem awareness in the field. the third, fourth, and fifth clusters are close to each other: the keywords show that clusters 4 and 5 are close in terms of knowledge, while clusters 3 and 5 are close in terms of extraction. from this dendrogram, it is possible to read the mutual similarity of problem awareness among clusters. similar problem statements between article and patent texts (table v) are discussed below.
pair #1: the article phrase was extracted from the following complete sentence: "a more serious limitation is that the present framework can handle only one anchor point (the question term) in the candidate answer sentence," which comes from the section "shortcoming and extensions" of the paper "learning surface text patterns for a question answering system" [72]. this article examined an open-domain question answering system. the patent jp2011002872a has the most semantically similar problem sentence. this patent also refers to a question answering device and addresses interviewing problems stemming from human interviewers needing a sophisticated, adaptable interview technique. it describes an effective way of handling multiple questions and extracting respondents' true intentions in an interview: flexibly changing the question order and depth according to the flow of the conversation with respondents. based on this problem, the invention proposes a mechanism for estimating topics of interest to respondents and presenting question candidates. in the scientific article's problem sentence, the problem is that the machine generating the question can use only a single viewpoint. although viewpoints vary depending on whether the subject asking the questions is a machine or a human, the information in the patent publication could provide inspiration for solving the scientific problem.

pair #2: the article phrase was extracted from "the real drawback is the computational complexity of working with svms [support vector machines], thus the design of fast algorithm is an interesting future work," describing a limitation of the paper titled "semantic role labeling via tree kernel joint inference" [73]. this sentence appears at the end of the conclusion as a future research topic of the paper. the patent jp2011039831a contributes to reducing svm computational complexity.
the title of this patent is "relearning method for support vector machine," which provides a re-learning method for svm that can improve the accuracy of svm and reduce the computational complexity by using a small number of high-quality samples for re-learning. while the patent itself was published in the 2011 public gazette, the applicant had published a basic patent titled jp200421590a, "re-learning method for support vector machine," in 2004. this suggests that the problems described as future work in a scientific paper published in 2006 could already be solved at the industrial-technical level of the time. in other words, this finding indicates that science does not necessarily anticipate technology, as the linear model would suggest.

pair #3: the article phrase was extracted from "one drawback of his algorithm is that it requires the proportion of positive and negative examples in the test data be close to the proportion in the training data, which may not always hold, particularly when the training data is small," from the paper "semi-supervised learning for semantic parsing using support vector machines" [74]. this sentence points out imbalanced data in the estimation algorithm proposed in [75]. in general, it is desirable that the data sizes of positive and negative examples are balanced in machine learning, especially for small datasets. the problem sentence of the corresponding patent, jp2002133389, proposes a method for improving the accuracy of data classification for test data even when the amount of training data is small. the paper was published in 1999 and the patent application in 2000. the imbalanced data problem was being discussed in the artificial intelligence field around 2000.
this pair is a good example of information available at the time that could have been surfaced by inter-domain knowledge-sharing and contributed to problem-solving.

pair #4: the article phrase was extracted from "the bottleneck in artificial intelligence is the unbalanced knowledge sources shared by human beings and a computer system" in the paper "latent features in automatic tense translation between chinese and english" [76]. the paragraph containing this sentence points out that the data that can be input into artificial intelligence mechanisms constitute only a small part of the data human beings can manage. the corresponding patent, jp20033233798a, provides a system for sharing human experience. at first glance, it seems the only common terms are "human being" and "system." however, the problem at the core of this patent is that feelings and experiences cannot be sufficiently conveyed through the internet using only text-based document data. although the contexts differ, the two problems share abstract concepts, including insufficient data.

pair #5: the article phrase was extracted from "the challenge with such analysis is that osn [online social network] data come with no annotations, and it would be impossible to manually annotate the data for quantitative analysis of self-disclosure" in a paper titled "self-disclosure topic model for twitter conversations" [77]. this problem sentence points out the difficulty of annotating self-disclosure information on osns. data analysis of osns such as twitter had just begun at the time the article was written. this problem sentence appeared in the abstract and turned out to express the essential problem highlighted by the paper. the problem indicated by the corresponding patent, jp2010237864a, is that the annotations in such social annotation services contain much information unrelated to the essence of the content, so it is necessary to remove it.
as to why this pair was extracted, it is clear that "annotation" is a common word in both sentences. it is also interesting that the common issue of social services was unintentionally extracted: even though the word "social" does not appear in the patent problem sentence, the common context could still be captured. thus, a patent published in 2010 dealt with a technical problem that could have provided a clue to the essential problem highlighted by a paper published in 2014. although such an evaluation must be qualitative, we confirmed in several pairs that certain knowledge is likely to be obtained from a patent whose problem corresponds to one raised by scientific research.

v. conclusions

science and technology research involves the exploration and exploitation of knowledge. scientific research strongly emphasizes "exploration" in the pursuit of new knowledge, while industrial technology has strongly emphasized the "exploitation" of existing knowledge. however, it is natural in complex innovation processes that scientific knowledge involves new exploration through the exploitation of industrial technology. gardner [78] defines four concepts regarding the relationship between science and technology: 1) the "demarcation view," in which the two are considered independent; 2) the "idealist view," in which science development precedes technology development; 3) the "materialist view," in which technology development precedes science development; and 4) the "interaction model," in which science and technology develop interactively. in this study, we demonstrated the practicality of extracting problem sentences based on a language model and thus linking scientific and industrial knowledge. we collected data from scientific articles and patent publications related to information science (the natural language processing field). we proposed a model to extract problem-related phrases and confirmed that it shows higher performance than existing models, especially for scientific articles.
clustering was performed on the extracted problem sentences of both the scientific articles and the patent publications to categorize and map these problems. by determining the similarity between the paper and patent problem sentences, we extracted pairs with the same problem consciousness. after examining some of the pairs with high similarity, we could understand not only the reason for the common words but also the essential background of the problem. this approach showed that insights can be gained that would be difficult to obtain with a keyword search alone.

this research has several limitations. first, we did not fully consider the publication year of each of the papers and patents included in this study. for example, in pair #2, the scientific article was published in 2006 and the patent in 2011; thus, the patent presents information five years after the problem was described. knowledge is updated quickly in the information technology field in particular, and information becomes obsolete in about a year. we think this issue can be addressed by taking related information from documents published in the same year, for which a sufficient dataset is required. we did not deal with this issue in this study, but it is essential for future work in this area. there is also room for improvement regarding the length mismatch of extracted problem sentences: information from some articles is extracted as phrase units, while patents have relatively long sentences, which makes direct comparison inappropriate. we also want to improve the method of calculating similarity. since a common word string on both sides yields a high degree of similarity, it is arguable whether word-based extraction is necessarily appropriate. to capture a problem's essence, a good approach may be to consider the similarity of collocations with a series of functions represented by sao, as shown in section i.
although there are several points to be improved, this research makes an important contribution. it developed a practical model for identifying problem sentences from scientific papers and a method of utilizing them from the perspective of technology management. we showed the possibility of solving problems in scientific research by finding issues common to both science and industrial technology. identifying issues in science also contributes to the identification of important research topics, which can lead to insights into scientific trends. in addition, by clarifying the problems in industrial technology, it is possible to identify future targets for business.

acknowledgment

we would like to thank editage (www.editage.com) for the english language editing.

references

[1] c. freeman, "the economics of technical change," cambridge journal of economics, vol. 18, no. 5, pp. 463–514, 1994.
[2] h. grupp, foundations of the economics of innovation: theory, measurement and practice, illustrated edition. cheltenham, uk: edward elgar, 1998.
[3] g. dosi, innovation, organization and economic dynamics: selected essays. cheltenham, uk: edward elgar, 2000.
[4] b. godin, "the linear model of innovation: the historical construction of an analytical framework," science, technology, & human values, vol. 31, no. 6, pp. 639–667, nov. 2006, doi: 10.1177/0162243906291865.
[5] d. edgerton, "'the linear model' did not exist: reflections on the history and historiography of science and research in industry in the twentieth century," in the science-industry nexus: history, policy, implications, 2004, pp. 1–36.
[6] d. a. hounshell, "industrial research: commentary," in the science-industry nexus: history, policy, implications, science history publications, 2004, pp.
59–68.
[7] national science foundation (u.s.), basic research; a national resource. washington, dc, usa, 1957.
[8] r. r. nelson, "the simple economics of basic scientific research," journal of political economy, vol. 67, no. 3, pp. 297–306, jun. 1959, doi: 10.1086/258177.
[9] w. j. price and l. w. bass, "scientific research and the innovative process," science, vol. 164, no. 3881, pp. 802–806, 1969.
[10] s. j. kline, "innovation is not a linear process," research management, vol. 28, no. 4, pp. 36–45, jul. 1985, doi: 10.1080/00345334.1985.11756910.
[11] r. landau and n. rosenberg, eds., the positive sum strategy: harnessing technology for economic growth. washington, dc, usa: the national academies press, 1986.
[12] n. rosenberg, exploring the black box: technology, economics, and history. cambridge, uk: cambridge university press, 1994.
[13] k. grandin, n. wormbs, and s. widmalm, the science-industry nexus: history, policy, implications: nobel symposium 123. usa: science history publications, 2004.
[14] n. rosenberg, inside the black box: technology and economics. cambridge, uk: cambridge university press, 1982.
[15] m. gibbons, the new production of knowledge: the dynamics of science and research in contemporary societies. thousand oaks, ca, usa: sage, 1994.
[16] a. verbeek, k. debackere, m. luwel, p. andries, e. zimmermann, and f. deleus, "linking science to technology: using bibliographic references in patents to build linkage schemes," scientometrics, vol. 54, no. 3, pp. 399–420, 2002.
[17] w. e. steinmueller, "basic research and industrial innovation," in the handbook of industrial innovation, cheltenham, uk: edward elgar, 1995.
[18] s. j. kline, innovation styles in japan and the united states: cultural bases: implications for competitiveness: the 1989 thurston lecture. stanford university, department of mechanical engineering, thermosciences division, 1990.
[19] m. b. myers and r. s. rosenbloom, rethinking the role of industrial research.
division of research, harvard business school, 1994.
[20] m. balconi, s. brusoni, and l. orsenigo, "in defence of the linear model: an essay," research policy, vol. 39, no. 1, pp. 1–13, feb. 2010, doi: 10.1016/j.respol.2009.09.013.
[21] f. narin and d. olivastro, "status report: linkage between technology and science," research policy, vol. 21, no. 3, pp. 237–249, jun. 1992, doi: 10.1016/0048-7333(92)90018-y.
[22] f. narin and d. olivastro, "linkage between patents and papers: an interim epo/us comparison," scientometrics, vol. 41, no. 1, pp. 51–59, jan. 1998, doi: 10.1007/bf02457966.
[23] f. narin, m. rosen, and d. olivastro, "patent citation analysis: new validation studies and linkages statistics," science and technology indicators, pp. 35–47, jan. 1989.
[24] c. d. manning, p. raghavan, and h. schutze, an introduction to information retrieval. cambridge, england: cambridge university press, 2008.
[25] m. meyer, "tracing knowledge flows in innovation systems," scientometrics, vol. 54, no. 2, pp. 193–212, jun. 2002, doi: 10.1023/a:1016057727209.
[26] j. callaert, b. van looy, a. verbeek, k. debackere, and b. thijs, "traces of prior art: an analysis of non-patent references found in patent documents," scientometrics, vol. 69, no. 1, pp. 3–20, oct. 2006, doi: 10.1007/s11192-006-0135-8.
[27] f. narin, k. s. hamilton, and d. olivastro, "the increasing linkage between u.s. technology and public science," research policy, vol. 26, no. 3, pp. 317–330, 1997.
[28] m. p. carpenter and f. narin, "validation study: patent citations as indicators of science and foreign dependence," world patent information, vol. 5, no. 3, pp. 180–185, jan. 1983, doi: 10.1016/0172-2190(83)90139-4.
[29] w. glanzel and m. meyer, "patents cited in the scientific literature: an exploratory study of 'reverse' citation relations," scientometrics, vol. 58, no. 2, pp. 415–428, oct. 2003, doi: 10.1023/a:1026248929668.
[30] f. narin and e. noma, "is technology becoming science?," scientometrics, vol.
7, no. 3, pp. 369–381, mar. 1985, doi: 10.1007/bf02017155.
[31] t.-k. hsiao and v. torvik, "knowledge transfer from technology to science: the longevity of paper-to-patent citations," proceedings of the association for information science and technology, vol. 56, pp. 417–421, jan. 2019, doi: 10.1002/pra2.41.
[32] r. johnson, a. watkinson, and a. mabe, the stm report: an overview of scientific and scholarly publishing, 5th ed. hague, netherlands: international association of scientific, technical and medical publishers, 2018.
[33] world intellectual property indicators 2019. geneva, switzerland: world intellectual property organization, 2019.
[34] d. swanson, n. smalheiser, and v. torvik, "ranking indirect connections in literature-based discovery: the role of medical subject headings," journal of the american society for information science and technology, vol. 57, no. 11, pp. 1427–1439, sep. 2006, doi: 10.1002/asi.20438.
[35] m. weeber, h. klein, l. berg, and r. vos, "using concepts in literature-based discovery: simulating swanson's raynaud-fish oil and migraine-magnesium discoveries," journal of the american society for information science and technology, vol. 52, no. 7, pp. 548–557, may 2001, doi: 10.1002/asi.1104.abs.
[36] d. hristovski, b. peterlin, j. mitchell, and s. humphrey, "using literature-based discovery to identify disease candidate genes," international journal of medical informatics, vol. 74, pp. 289–298, nov. 2004, doi: 10.1016/j.ijmedinf.2004.04.024.
[37] m. d. gordon and r. k. lindsay, "toward discovery support systems: a replication, re-examination, and extension of swanson's work on literature-based discovery of a connection between raynaud's and fish oil," journal of the american society for information science, vol. 47, no. 2, pp. 116–128, feb.
1996, doi: 10.1002/(sici)1097-4571(199602)47:2<116::aid-asi3>3.3.co;2-p.
[38] d. r. swanson, "undiscovered public knowledge," the library quarterly: information, community, policy, vol. 56, no. 2, pp. 103–118, apr. 1986.
[39] v. ittipanuvat, k. fujita, y. kajikawa, j. mori, and i. sakata, "finding linkage between technology and social issues: a literature based discovery approach," in 2012 proceedings of picmet '12: technology management for emerging technologies, vancouver, bc, canada, aug. 2012, pp. 2310–2321.
[40] n. shibata, y. kajikawa, and i. sakata, "extracting the commercialization gap between science and technology: case study of a solar cell," technological forecasting and social change, vol. 77, no. 7, sep. 2010, doi: 10.1016/j.techfore.2010.03.008.
[41] n. shibata, y. kajikawa, y. takeda, and k. matsushima, "detecting emerging research fronts based on topological measures in citation networks of scientific publications," technovation, vol. 28, no. 11, pp. 758–775, nov. 2008, doi: 10.1016/j.technovation.2008.03.009.
[42] m.-y. wang, s.-c. fang, and y.-h. chang, "exploring technological opportunities by mining the gaps between science and technology: microalgal biofuels," technological forecasting and social change, vol. 92, aug. 2014, doi: 10.1016/j.techfore.2014.07.008.
[43] m. meyer, "does science push technology? patents citing scientific literature," research policy, vol. 29, no. 3, pp. 409–434, mar. 2000, doi: 10.1016/s0048-7333(99)00040-2.
[44] m. gittelman and b. kogut, "does good science lead to valuable knowledge? biotechnology firms and the evolutionary logic of citation patterns," management science, vol. 49, no. 4, pp. 366–382, apr. 2003, doi: 10.1287/mnsc.49.4.366.14420.
[45] l. sollaci and m. pereira, "the introduction, methods, results, and discussion (imrad) structure: a fifty-year survey," journal of the medical library association: jmla, vol. 92, no. 3, pp. 364–367, aug. 2004.
[46] r. d.
huddleston, sentence and clause in scientific english. communication research centre, university college, 1968.
[47] m. hoey, textual interaction: an introduction to written discourse analysis. routledge, 2013.
[48] y.-h. tseng, c.-j. lin, and y.-i. lin, "text mining techniques for patent analysis," information processing & management, vol. 43, no. 5, pp. 1216–1247, sep. 2007, doi: 10.1016/j.ipm.2006.11.011.
[49] h. sakai, h. nonaka, and s. masuyama, "extraction of information on the technical effect from a patent document," transactions of the japanese society for artificial intelligence, vol. 24, pp. 531–540, jan. 2009, doi: 10.1527/tjsai.24.531.
[50] a. shinmori, m. okumura, y. marukawa, and m. iwayama, "rhetorical structure analysis of japanese patent claims using cue phrases," in proceedings of the third ntcir workshop, tokyo, japan, oct. 2002.
[51] i. bergmann, d. butzke, l. walter, j. p. fuerste, m. g. moehrle, and v. a. erdmann, "evaluating the risk of patent infringement by means of semantic patent analysis: the case of dna chips," r&d management, vol. 38, no. 5, pp. 550–562, 2008, doi: 10.1111/j.1467-9310.2008.00533.x.
[52] g. cascini, a. fantechi, and e. spinicci, "natural language processing of patents and technical documentation," in document analysis systems vi, vol. 3163, 2004.
[53] g. cascini and m. zini, "measuring patent similarity by comparing inventions functional trees," in computer-aided innovation (cai), vol. 277, springer, 2008.
[54] h. park, j. yoon, and k. kim, "identifying patent infringement using sao based semantic technological similarities," scientometrics, vol. 90, no. 2, pp. 515–529, feb. 2012, doi: 10.1007/s11192-011-0522-7.
[55] j. yoon and k. kim, "detecting signals of new technological opportunities using semantic patent analysis and outlier detection," scientometrics, vol. 90, no. 2, pp. 445–461, feb. 2012, doi: 10.1007/s11192-011-0543-2.
[56] x.
wang et al., “identifying r&d partners through subject-actionobject semantic analysis in a problem & solution pattern,” technology analysis & strategic management, vol. 29, no. 10, pp. 1167–1180, nov. 2017, doi: 10.1080/09537325.2016.1277202. [57] h. nanba, a. fujii, m. iwayama, and t. hashimoto, “overview of the patent mining task at the ntcir-8 workshop,” presented at the proceedings of ntcir-8 workshop meeting, tokyo, japan, jun. 2010, pp. 293–302. [58] m. iwayama, a. fujii, and n. kando, “overview of classification subtask at ntcir-5 patent retrieval task,” presented at the proceedings of ntcir-5 workshop meeting, tokyo, japan, dec. 2005, pp. 359–365. [59] k. heffernan and s. teufel, “identifying problems and solutions in scientific text,” scientometrics, vol. 116, no. 2, pp. 1367–1382, 2018, doi: 10.1007/s11192-018-2718-6. [60] i. councill, c. l. giles, and m.-y. kan, “parscit: an open-source crf reference string parsing package,” in proceedings of the sixth international conference on language resources and evaluation (lrec’08), marrakech, morocco, may 2008, pp. 661–667. [61] d. j. phelps, “automatic concept identification: extracting problem solved concepts from patent documents,” presented at the irfs 2007 vienna information retrieval facility symposium, vienna, austria, 2007. [62] s. tiwana and e. horowitz, “extracting problem solved concepts from patent documents,” in proceedings of the 2nd international workshop on patent information retrieval, hong kong, china, nov. 2009, pp. 43–48, doi: 10.1145/1651343.1651356. [63] c. jeong and k. kim, “creating patents on the new technology using analogy-based patent mining,” expert systems with applications, vol. 41, no. 8, pp. 3605–3614, jun. 2014, doi: 10.1016/j.eswa.2013.11.045. [64] z. s. harris, “distributional structure,” word, vol. 10, no. 2–3, pp. 146–162, aug. 1954, doi: 10.1080/00437956.1954.11659520. [65] m. sahlgren, “the distributional hypothesis,” italian journal of linguistics, vol. 20, no. 1, pp. 
33–54, 2008. [66] a. vaswani et al., “attention is all you need,” presented at the 31st conference on neural information processing systems, long beach, ca, usa, 2017. [67] a. radford, “improving language understanding by generative pretraining.” 2018, accessed: jun. 13, 2020. [online]. available: https://www.semanticscholar.org/paper/improving-languageunderstanding-by-generativeradford/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035, (preprint) [68] s. kobayashi, soskek/chainer-openai-transformer-lm. 2020. [69] p. j. liu et al., “generating wikipedia by summarizing long sequences,” arxiv:1801.10198 [cs], jan. 2018, accessed: jun. 12, 2020. [online]. available: http://arxiv.org/abs/1801.10198. [70] “finetune quickstart guide — finetune 0.8.3 documentation.” https://finetune.indico.io/ (accessed jun. 13, 2020). [71] j. h. ward, “hierarchical grouping to optimize an objective function,” journal of the american statistical association, vol. 58, no. 301, pp. 236–244, mar. 1963, doi: 10.1080/01621459.1963.10500845. [72] d. ravichandran and e. hovy, “learning surface text patterns for a question answering system,” presented at the proceedings of the 40th annual meeting of the association for computational linguistics, philadelphia,usa, jul. 2002, pp. 41–47. [73] a. moschitti, d. pighin, and r. basili, “semantic role labeling via tree kernel joint inference,” in proceedings of the tenth conference on computational natural language learning (conll-x), new york city, jun. 2006, pp. 61–68. [74] r. j. kate and r. j. mooney, “semi-supervised learning for semantic parsing using support vector machines,” in proceedings of the human language technology conference of the north american chapter of the association for computational linguistics, short papers (naacl/hlt2007), rochester, ny, usa, apr. 2007, pp. 81–84. [75] t. 
joachims, “transductive inference for text classification using support vector machines,” in proceedings of the sixteenth international engineering, technology & applied science research vol. 10, no. 4, 2020, 5903-5913 5913 www.etasr.com sasaki et al.: extracting problem linkages to improve knowledge exchange between science and … conference on machine learning, san francisco, ca, usa, jun. 1999, pp. 200–209. [76] y. ye, v. l. fossum, and s. abney, “latent features in automatic tense translation between chinese and english,” in proceedings of the fifth sighan workshop on chinese language processing, sydney, australia, jul. 2006, pp. 48–55. [77] j. bak, c.-y. lin, and a. oh, “self-disclosure topic model for twitter conversations,” in proceedings of the joint workshop on social dynamics and personal attributes in social media, baltimore, maryland, usa, jun. 2014, pp. 42–49. [78] p. gardner, “the representation of science-technology relationships in canadian physics textbooks,” international journal of science education, vol. 21, no. 3, pp. 329–347, mar. 1999, doi: 10.1080/095006999290732. authors’ profiles hajime sasaki is an associate professor at the institute for future initiatives, the university of tokyo. he received his ph.d. degree from the university of tokyo, and his m.s. degree from the tokyo institute of technology. his research interests include innovation management and data-driven decisionmaking. satoru yamamoto is the president and ceo of data artist inc., an ai solution company based in tokyo. he majored in ai at the university of tokyo. he aims to develop new and innovative ai solutions for various industries such as advertising, medicine, and finance. amarsanaa agchbayar is the cto of data artist inc. he majored in data mining at the university of tokyo. he is a bronze medalist in the international mathematical olympiad, and is presently leading an engineering team to develop new and innovative ai solutions for various industries. 
Nyamaa Enkhbayasgalan is a data scientist at Data Artist Inc. He obtained his M.S. degree from the Tokyo Institute of Technology, majoring in natural language processing. He has been involved in numerous projects utilizing AI technology.

Engineering, Technology & Applied Science Research Vol. 8, No. 4, 2018, 3243-3248 www.etasr.com Le & Vu: Performance Evaluation of Traveling Wave Fault Locator for a 220kV Hoa Khanh …

Performance Evaluation of a Traveling Wave Fault Locator for a 220kV Hoa Khanh-Thanh My Transmission Line

Kim Hung Le, The University of Danang, University of Science and Technology, Da Nang, Viet Nam, lekimhung@dut.udn.vn
Phan Huan Vu, Central Power Corporation, Center Electrical Testing Company Limited, Da Nang, Viet Nam, vuphanhuan@gmail.com

Abstract—This paper presents the single-ended and double-ended traveling-wave-based fault location methods of the commercially available SEL-400L and SFL-2000 devices for a 66.9km, 220kV Hoa Khanh-Thanh My transmission line in central Viet Nam, both of which rely on measurements from inductive CTs and capacitive VTs. Focus was given to the process of building a Matlab Simulink model to evaluate these methods. The current and voltage signals were passed through an analog Chebyshev type II filter, which passes high-frequency components above 3kHz and rejects the low-frequency 50Hz component. These filtered signals are then fed to Clarke's transformation to obtain the 0 and α mode components. The level-1 detail coefficients of the selected components after a DWT using the db4 wavelet can be used to determine the fault type and the fault direction, and a crest-wave comparison solution is proposed to distinguish the adjacent bus's reflected wave from the fault point's reflected wave for fault location. Finally, the accuracy of fault location on the transmission line is evaluated by varying parameters such as the fault type, fault location, and fault resistance on a given power system model.
Keywords-transmission line; traveling wave fault locator; single-ended method; double-ended method; Matlab/Simulink

I. Introduction

Numerical relays are the most popular devices used for transmission line protection. They include fault location estimation based on impedance methods, which use the 50Hz voltage and current data measured at one or more points along the power network after the occurrence of a fault. The typical impedance method's error ranges from 2% to 5% depending on the relay model, according to industry standards. However, according to an operation manager at EVN of Viet Nam, the actual error in practice is usually larger than 5% [1]. It can be influenced by weather, high-resistance ground faults, measurement errors, line impedance errors, mutual coupling, compensated lines, and other factors [2]. Therefore, finding the accurate location of a fault constitutes a challenge for the power operator. Selecting an appropriate new technology for a fault location application can be a daunting task, so a performance evaluation is required when EVN wants to compare accuracy and find the optimum, most cost-effective option. In 2017, an EVN project installed traveling wave fault locator (TWFL) equipment (Kinkei SFL-2000, SEL-400L) for 220kV and 500kV transmission lines in substations such as Son Ha, Thanh My, Hoa Khanh, Hue, Dong Ha, Tam Ky, and Doc Soi in central Viet Nam. The project implementation plan followed three steps. The first step was setting up the TWFLs to work with the single-ended method. The second step was synchronizing the TWFLs with the Furuno GPS/GNSS clock receiver and using a configured communication channel; the TWFLs would send the traveling wave arrival information to an F/L server that calculates the distance to the fault based on the double-ended method, displays the results, and sends an email to the operator. In the third step, EVN installs an NPT communication network link between the TWFLs, as shown in Figure 1.
The TWFLs receive the remote traveling wave information that is necessary to provide automatic fault location. The SFL-2000's accuracy is reported to be 200m [3], whereas the accuracy of the SEL-400L is reported to be 2% [2]. Although this approach was recognized by PTC2 as a good way to overcome the shortfalls of impedance-based methods, the SFL-2000 cannot be used in step 1: if there is no communication between the two line ends, the fault location cannot be computed automatically by the F/L server. Note that the power system operator is then faced with manually collecting traveling wave event reports from each substation. Operators lack the skills to discriminate the reflected wave of the fault point and the arrival time of the wave for manual fault location estimation using the single-ended method, because the reflected surge from the adjacent bus has the same polarity as the real fault point's reflected wave, so confusion is inevitable. Almost as many respondents said that all the implementation and support work would have to be done by experienced professionals. Consequently, EVN needs to conduct training on fault location using traveling wave signals for communication technicians, field personnel, and relay engineers, together with an in-depth review of all TWFL system operations. To address this issue, this paper focuses on the TWFL double-ended and single-ended methods implemented for a 66.9km, 220kV Hoa Khanh-Thanh My transmission line in Matlab Simulink. The proposed model has been assessed through several scenarios. The results show that the methods consistently and significantly yielded the accurate location of the actual fault.

II. Traveling Wave Based Fault Location Methods

The TWFL can work based on current or voltage signals.
There are two kinds of traveling wave (TW) methods used by most fault locator systems. One is the single-ended method, which captures data from the initial traveling wave and subsequent reflections at one terminal to calculate the fault location without requiring any information from the relay at the remote terminal. The other is the double-ended method, which requires data from two terminals, both equipped with a GPS receiver to time-tag the exact moment the traveling wave reaches each end of the line.

Fig. 1. Traveling wave fault locator SFL-2000 on a transmission line

A. Single-Ended Method

Figure 2 shows a fault at location F on a line of length l=66.9km (the Bewley diagram is shown in Figure 14). The fault is m (km) away from the Hoa Khanh terminal and is suspected to be in the second half of the Hoa Khanh-Thanh My line. A current TW is launched from F at t0=40ms and arrives at the Hoa Khanh terminal at t1b. To discriminate the reflected wave of the fault point, the number of samples in the selected window is limited to the interval from t1b to 40.825ms after the fault occurs. Part of the wave travels toward the Da Nang bus and then returns to the Hoa Khanh terminal at t'1a. It has opposite polarity to the first traveling wave (t'1a-t1b=constant), so we can identify and eliminate the false peak due to this wave. Another part of the wave reflects, travels back toward the fault, reflects from the fault, and then returns to the Hoa Khanh terminal at t2b. It has a larger crest current than the reflected surge from the Thanh My terminal at t'1c, which has the same polarity. Similarly, when the fault is assumed to occur in the first half of the line, as shown in Figure 13, the number of samples in the chosen window is limited to the interval from t1b to 40.4125ms.
Now, the distance to the fault location from TWFL B is [2]:

m = v (t2b - t1b) / 2    (1)

The wave propagation velocity is v = 1/√(l1c1) = 243250 km/s, where l1=0.0013H/km is the inductance and c1=0.013μF/km is the capacitance of the propagation medium per unit length. When an external fault behind the Hoa Khanh terminal launches a traveling wave, as shown in Figure 3, TWFL B sees an initial wave behind it with t1b=t1, which travels across the transmission line to the Thanh My terminal and is reflected back to the Hoa Khanh terminal after the known TW line propagation time (tl=0.275ms), with t2b=2×tl+t1. TWFL B then displays l.

Fig. 2. Single-ended TWFL with internal fault
Fig. 3. Single-ended TWFL with external fault

Review: the single-ended method estimates the fault location of an internal fault accurately and does not require data synchronization. However, the accuracy of this method depends on the accuracy of l and v, the sampling frequency, and errors in wave detection. If the impulse wave cannot be captured successfully, or no impulse wave exists at the fault occurrence, the fault location will fail. For instance, strong buses on the power system network influence the voltage and current waveforms through the line impedances and can reduce the amplitude of the voltage waves, making them harder to detect and thus reducing the TWFL accuracy [4].

B. Double-Ended Method

When an internal fault F occurs at t0, the waves generated at F run towards the Hoa Khanh and Thanh My stations (Figure 4). The double-ended method determines the arrival times from the rising point of the surge waveform recorded at both end terminals (t1b at Hoa Khanh, t1c at Thanh My) and then locates the fault point from [2, 3]:

m = (l + v (t1b - t1c)) / 2    (2)

We can calculate v using the line length l=66.9km and the arrival times of the surge generated by manually closing a circuit breaker at the Thanh My terminal, as shown in Figure 5 [3]: v = l/(t1b - t1c) = 243270km/s.
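As a numerical check, (1), (2), and the velocity formula can be evaluated in a few lines of Python, using the line constants from Section III and arrival times reported in Section V. This is a sketch of the arithmetic only, not the locator's implementation:

```python
from math import sqrt

# Positive-sequence line constants (Section III)
L1 = 0.0013    # H/km
C1 = 0.013e-6  # F/km
v = 1.0 / sqrt(L1 * C1)  # propagation velocity, ~243250 km/s

def single_ended(t1b_ms, t2b_ms, v_kms):
    """Eq. (1): distance from terminal B using the fault-point reflection."""
    return v_kms * (t2b_ms - t1b_ms) * 1e-3 / 2.0

def double_ended(t1b_ms, t1c_ms, l_km, v_kms):
    """Eq. (2): distance from terminal B using the arrival times at both ends."""
    return (l_km + v_kms * (t1b_ms - t1c_ms) * 1e-3) / 2.0

# b-g fault 20 km from Hoa Khanh (Figure 13): single-ended estimate
m1 = single_ended(40.0836, 40.2476, v)
# a-b fault 10 km from Hoa Khanh (Figure 15): double-ended, l = 66.9 km,
# v measured as 243270 km/s by the breaker-closing test of Figure 5
m2 = double_ended(40.0426, 40.2356, 66.9, 243270.0)
```

Both estimates land within the roughly ±0.1 km accuracy reported in Section V (m1 ≈ 19.95 km, m2 ≈ 9.97 km).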
When an external fault behind the Hoa Khanh terminal launches a traveling wave, TWFL B sees an initial wave with either positive or negative polarity at t1b=t1, which travels across the transmission line to the Thanh My terminal. TWFL C sees the same initial wave tl=0.275ms later with opposite polarity (t1c=tl+t1). The distance to the fault location displayed by TWFL B is l.

Fig. 4. Double-ended TWFL with internal fault
Fig. 5. Calculation of the propagation velocity

Review: the double-ended method requires the data from both terminal ends to be synchronized. It estimates the fault location of an internal fault accurately. The accuracy of this method is affected by the communication and by the precision of the GPS time stamps. This method is more expensive than the single-ended method.

III. Power System Under Study

Recorded data from a real system were not available to evaluate the performance of the TWFL. Instead, the power system, supplied from both ends, was modeled in Matlab Simulink (Figure 6). The overhead Hoa Khanh-Thanh My line is 66.9km long, and the system nominal operating voltage is 220kV, 50Hz. The model consists of:
1. The transmission line: three-phase section lines are used to represent the distributed-parameter transmission line. The line sequence parameters are rl1=0.07Ω/km, rl0=0.2164Ω/km, ll1=0.0013H/km, ll0=0.0044H/km, cl1=0.013μF/km, cl0=0.0085μF/km.
2. A load of 220kV, 56MW, and 34kVAr connected to the Hoa Khanh and Thanh My buses.
3. A three-phase fault block to set the fault type, with the fault resistance varying from 1 to 35Ω.
4. Three-phase measuring blocks to measure the three-phase line and load currents and voltages.
5. A TWFL model located at the Hoa Khanh bus.
It has been developed for fault detection, fault classification, fault direction, and fault location, which will be presented in Section IV.

IV. Traveling Wave Fault Locator Model

This section explains how the TWFL works. The simulation model of the TWFL has been designed with six functional flow blocks (Figure 7) using Simulink.

Fig. 6. Power system model
Fig. 7. Flowchart of the traveling wave fault locator operation

A. Analog Filter

A fault generates current and voltage traveling waves that propagate along the overhead line. Most of them contain a significant amount of high-frequency components. The TWFL collects TWs from a conventional CT of class 5P20 and a CVT of class 3P. It then uses an analog Chebyshev type II filter to remove the fundamental 50Hz component and a high-pass filter with a cutoff frequency of 3kHz for the phase currents and voltages.

B. Clarke's Transformation

To reduce the effect of mutual coupling between phases, this paper utilizes the Clarke transformation to convert the three-phase currents and voltages into the α, β, and 0 mode components. We use three sets of Clarke components, with reference to the a-phase, b-phase, and c-phase of the current signals. With reference to the a-phase [5]:

i0a = (ia + ib + ic)/3,  iαa = (2ia - ib - ic)/3,  iβa = (ib - ic)/√3    (3)

The b-phase and c-phase reference sets are obtained by cyclically rotating the phase indices.

C. Discrete Wavelet Transform

This function is developed using the α mode and 0 (ground) mode components, which capture the voltages and currents with a sampling rate of 10MHz at TWFL A (uαa, uαb, uαc, iαa, iαb, iαc, ig) and TWFL B (bus_b_iαa, bus_b_iαb, bus_b_iαc, bus_b_ig). By applying the DWT with the Daubechies 4 mother wavelet (db4)
to the sampled currents and voltages during the fault surges from the steady state, we can rapidly extract the first-level detail wavelet coefficients (cd1), covering frequencies up to 5MHz, which is sufficient for the TWFL transient frequency range. It is then easy to determine the times of the traveling wave occurrences (peaks can be observed on the waveform) and to reveal their travel times between the fault point F and the TWFL.

D. Fault Detection and Fault Classification

Under normal conditions, the cd1 of phases a, b, c and ground are zero (Figure 8). Under fault conditions, if the cd1 of ground is zero, the fault is identified as an ungrounded fault; if it is nonzero, as a grounded fault. The cd1 of phases a, b, and c are available for all fault types. Consider an a-g fault occurring on the transmission line at time 40ms (Figure 9). Both reflections and refractions of the cd1 of the phase-a and ground currents occur with large amplitude from 40.0 to 40.8ms (or samples 2.0×10^5 to 2.04×10^5). They are approximately twice as large as the cd1 of the phase-b and phase-c currents (the healthy phases), and they have opposite polarity. Consider a b-c fault (Figure 10): the cd1 of the phase-b and phase-c currents are also considerably larger than the cd1 of the healthy phases (a, ground), and the faulted phases have opposite polarities. Based on the relationship between the squared cd1 magnitudes of the first current traveling waves in each phase, the TWFL can make fault type decisions, as summarized in Table I.

Fig. 8. The cd1 of phases a, b, c and ground under normal conditions
Fig. 9. The cd1 of phases a, b, c and ground for an a-g fault

E. Fault Direction

The single-ended method can make a directional decision based on the polarity relationship between the first voltage and current traveling waves.
For a fault on the transmission line in the forward direction, the voltage and current traveling waves observed by the relay have opposite polarity (Figure 11). For a fault on the transmission line in the reverse direction, the voltage and current traveling waves observed by the relay have the same polarity, as in Figure 12. The double-ended method compares the time-aligned first current TWs at both ends of the protected line. For an external fault, a TW that entered one terminal with a given polarity leaves the other terminal with the opposite polarity exactly after δt=tl. For an internal fault, a TW that entered one terminal with a given polarity leaves the other terminal with the same polarity, with δt<tl.

Table I. Fault type identification criteria
Fault type | Criteria | cd1 used
ag | mag_a/mag_b>1.5 and mag_a/mag_c>1.5 and mag_g/mag_a>0.2 | cd1_a
bg | mag_b/mag_a>1.5 and mag_b/mag_c>1.5 and mag_g/mag_b>0.2 | cd1_b
cg | mag_c/mag_a>1.5 and mag_c/mag_b>1.5 and mag_g/mag_c>0.2 | cd1_c
ab | mag_a/mag_c>1.5 and mag_b/mag_c>1.5 and mag_g/mag_a<0.2 and mag_g/mag_b<0.2 | cd1_a or cd1_b
bc | mag_b/mag_a>1.5 and mag_c/mag_a>1.5 and mag_g/mag_b<0.2 and mag_g/mag_c<0.2 | cd1_b or cd1_c
ca | mag_c/mag_b>1.5 and mag_a/mag_b>1.5 and mag_g/mag_c<0.2 and mag_g/mag_a<0.2 | cd1_c or cd1_a
abg | mag_a/mag_c>1.5 and mag_b/mag_c>1.5 and mag_g/mag_a>0.2 and mag_g/mag_b>0.2 | cd1_a or cd1_b
bcg | mag_b/mag_a>1.5 and mag_c/mag_a>1.5 and mag_g/mag_b>0.2 and mag_g/mag_c>0.2 | cd1_b or cd1_c
cag | mag_c/mag_b>1.5 and mag_a/mag_b>1.5 and mag_g/mag_c>0.2 and mag_g/mag_a>0.2 | cd1_c or cd1_a
abc | mag_a>1.5 and mag_b>1.5 and mag_c>1.5 | cd1_a or cd1_b or cd1_c

F. Distance Calculations

Because the frequency response of CTs is better than that of CCVTs [4], we chose to use the current signals for fault location. When the TWFL identifies a forward or internal fault, both the single-ended and double-ended methods are activated. We use Matlab's findpeaks function to find the values and locations of the cd1 maxima in a time window set according to the travel time of the line. The fault distance is given by (1) and (2).
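The Clarke decomposition in (3) and the ratio logic of Table I can be sketched in Python. The 1.5 and 0.2 thresholds are those of Table I; the function names and the ordering of the branch tests are illustrative choices, not the locator's firmware:

```python
from math import sqrt

def clarke_a(ia, ib, ic):
    """Clarke components referenced to phase a, per (3)."""
    i0 = (ia + ib + ic) / 3.0
    i_alpha = (2.0 * ia - ib - ic) / 3.0
    i_beta = (ib - ic) / sqrt(3.0)
    return i0, i_alpha, i_beta

def fault_type(mag_a, mag_b, mag_c, mag_g, k=1.5, kg=0.2):
    """Table I: fault type from the squared cd1 magnitudes of the first TWs."""
    a, b, c, g = mag_a, mag_b, mag_c, mag_g
    # single-phase-to-ground: one phase dominates both others, ground significant
    if a > k * b and a > k * c and g > kg * a: return "ag"
    if b > k * a and b > k * c and g > kg * b: return "bg"
    if c > k * a and c > k * b and g > kg * c: return "cg"
    # phase-to-phase: two phases dominate the third; ground level adds the "g"
    if a > k * c and b > k * c: return "abg" if (g > kg * a and g > kg * b) else "ab"
    if b > k * a and c > k * a: return "bcg" if (g > kg * b and g > kg * c) else "bc"
    if c > k * b and a > k * b: return "cag" if (g > kg * c and g > kg * a) else "ca"
    # three-phase: all magnitudes above the absolute threshold
    if a > k and b > k and c > k: return "abc"
    return "none"
```

For example, fault_type(10, 1, 1, 5) returns "ag", while a balanced fault_type(10, 10, 10, 0.1) falls through the ratio tests and returns "abc".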
Fig. 11. Current and voltage polarities of phase a for a fault in the forward direction
Fig. 12. Current and voltage polarities of phase a for a fault in the reverse direction

V. Simulation Results

Once built, the proposed model is ready to analyze the operation of the TWFL under variations in fault parameters such as the fault type, the fault location (from -5km to 65km), and the fault resistance from 1Ω to 35Ω (the fault resistance seen at the relay point is increased as though moving away along the transmission line). The fault creation time is t0=40ms. Figure 13 shows the phase currents captured at both terminals and a Bewley diagram for a b-g fault with rf=10Ω assumed to occur at a distance of 20km from the Hoa Khanh bus (forward direction). TWFL B calculates the fault location with the single-ended method from t1b=40.0836ms and t2b=40.2476ms, the times taken by the fault surge to appear at TWFL B. Based on the measured TW arrival times, (1) gives an estimated fault location of 19.9523km from the Hoa Khanh terminal. Figure 14 shows the results for a b-c-g fault with rf=25Ω at a distance of 45km from the Hoa Khanh bus (forward direction). The fault surge appears at t1b=40.1866ms and t2b=40.5566ms, and the single-ended method of TWFL B gives a fault location of 45.0016km. The double-ended method (Figure 15) is simpler in operation than the single-ended one. When an a-b fault occurs on the transmission line with rf=15Ω at 10km, it calculates the distance to the fault based on the time tags of the traveling wave records acquired at both ends of the faulty line.
The first peak at the Hoa Khanh terminal occurs at t1b=40.0426ms and the first peak at the Thanh My terminal at t1c=40.2356ms, so the distance from TWFL B to the fault point F estimated from (2) is 9.9762km. As shown in Figure 16, an a-b-c fault occurs on the transmission line at 35km with rf=15Ω. From the change in the current waveforms during the fault, TWFL B captured t1b=40.1456ms and TWFL C captured t1c=40.1326ms; thus the double-ended estimate of TWFL B is 35.0313km. Results for various fault cases (b-g, a-b, b-c-g, and a-b-c) are given in Table II.

Fig. 13. Phase current waves and Bewley diagram explaining the single-ended method for a b-g fault at 20km from the Hoa Khanh terminal
Fig. 14. Phase current waves and Bewley diagram explaining the single-ended method for a b-c-g fault at 45km from the Hoa Khanh terminal
Fig. 15. Phase current waves and Bewley diagram explaining the double-ended method for an a-b fault at 10km from the Hoa Khanh terminal

Review: the simulation results show that the single-ended method locates faults with an error of less than ±114m and the double-ended method with an error of less than ±112m.

Fig. 16. Phase current waves and Bewley diagram explaining the double-ended method for an a-b-c fault at 35km from the Hoa Khanh terminal
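The peak-picking step (the paper uses Matlab's findpeaks) feeding (1), and the ±114m / ±112m figures of the review above, can both be checked with a short Python sketch. The spike positions reproduce the Figure 13 arrival times, the rows are transcribed from Table II (the 60km b-g single-ended entry is printed as 60.889, but its difference column reads 88.9m, so 60.0889 is assumed), and the local-maximum search is a simplified stand-in for findpeaks:

```python
# --- arrival-time picking on a synthetic cd1 record (10 MHz sampling) ---
FS = 10e6          # samples per second
T0_MS = 40.0       # window start, ms (fault inception time)
cd1 = [0.0] * 5000
cd1[836] = 1.0     # first arrival          -> t1b = 40.0836 ms
cd1[2476] = 0.6    # fault-point reflection -> t2b = 40.2476 ms

def find_peaks(x, height):
    """Indices of local maxima above `height` (stand-in for findpeaks)."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > height and x[i] >= x[i - 1] and x[i] > x[i + 1]]

t1b, t2b = (T0_MS + i / FS * 1e3 for i in find_peaks(cd1, 0.1)[:2])
m_bg = 243250.0 * (t2b - t1b) * 1e-3 / 2.0   # eq. (1): ~19.95 km for the 20 km fault

# --- accuracy figures from Table II: (actual, single-ended, double-ended) in km ---
rows = [
    (10, 9.9733, 9.9762), (20, 19.9523, 19.9495), (30, 30.0416, 29.9228),   # b-g
    (40, 40.015, 39.967), (50, 49.9883, 50.112), (60, 60.0889, 60.0859),
    (10, 9.9733, 9.9744), (20, 19.9467, 19.9495), (30, 30.0416, 29.9228),   # a-b
    (40, 40.015, 39.967), (50, 49.9883, 50.112), (60, 59.9616, 60.0861),
    (5, 5.114, 5.109), (15, 14.960, 15.0831), (25, 25.055, 25.0578),        # b-c-g
    (35, 35.028, 35.031), (45, 45.016, 45.0045), (55, 54.932, 54.9778),
    (65, 64.9053, 64.9512),
    (5, 4.9867, 5.109), (15, 14.96, 15.0831), (25, 25.055, 25.0578),        # a-b-c
    (35, 35.0283, 35.0313), (45, 45.0073, 45.0045), (55, 54.9806, 54.9778),
    (65, 64.954, 64.9535),
]
err_single = max(abs(round((s - a) * 1000, 1)) for a, s, d in rows)  # -> 114.0 m
err_double = max(abs(round((d - a) * 1000, 1)) for a, s, d in rows)  # -> 112.0 m
```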
Table II. Test results of faults on the transmission line

Phase | Actual fault location [km] | Fault resistance [Ω] | Single-ended m [km] | Difference [m] | Double-ended m [km] | Difference [m]
bg | -5 | 1 | / | / | / | /
bg | 10 | 5 | 9.9733 | -26.7 | 9.9762 | -23.8
bg | 20 | 10 | 19.9523 | -53.3 | 19.9495 | -50.5
bg | 30 | 15 | 30.0416 | 41.6 | 29.9228 | -77.2
bg | 40 | 20 | 40.0150 | 15.0 | 39.9670 | -33.0
bg | 50 | 25 | 49.9883 | -11.7 | 50.1120 | 112.0
bg | 60 | 30 | 60.0889 | 88.9 | 60.0859 | 85.9
ab | -5 | 1 | / | / | / | /
ab | 10 | 5 | 9.9733 | -26.7 | 9.9744 | -25.6
ab | 20 | 10 | 19.9467 | -53.3 | 19.9495 | -50.5
ab | 30 | 15 | 30.0416 | 41.6 | 29.9228 | -77.2
ab | 40 | 20 | 40.0150 | 15.0 | 39.9670 | -33.0
ab | 50 | 25 | 49.9883 | -11.7 | 50.1120 | 112.0
ab | 60 | 30 | 59.9616 | -38.4 | 60.0861 | 86.1
bcg | -5 | 1 | / | / | / | /
bcg | 5 | 5 | 5.1140 | 114.0 | 5.1090 | 109.0
bcg | 15 | 10 | 14.9600 | -40.0 | 15.0831 | 83.1
bcg | 25 | 15 | 25.0550 | 55.0 | 25.0578 | 57.8
bcg | 35 | 20 | 35.0280 | 28.0 | 35.0310 | 31.0
bcg | 45 | 25 | 45.0160 | 16.0 | 45.0045 | 4.5
bcg | 55 | 30 | 54.9320 | -68.0 | 54.9778 | -22.2
bcg | 65 | 35 | 64.9053 | -94.7 | 64.9512 | -48.8
abc | -5 | 1 | / | / | / | /
abc | 5 | 5 | 4.9867 | -13.3 | 5.1090 | 109.0
abc | 15 | 10 | 14.9600 | -40.0 | 15.0831 | 83.1
abc | 25 | 15 | 25.0550 | 55.0 | 25.0578 | 57.8
abc | 35 | 20 | 35.0283 | 28.3 | 35.0313 | 31.3
abc | 45 | 25 | 45.0073 | 7.3 | 45.0045 | 4.5
abc | 55 | 30 | 54.9806 | -19.4 | 54.9778 | -22.2
abc | 65 | 35 | 64.9540 | -46.0 | 64.9535 | -46.5

VI. Conclusions

Performance evaluation of TWFL systems has become an increasingly important issue for their design, manufacturing, sale/purchase, use, upgrade, tuning, etc. In this study, a TWFL model built on a 220kV transmission line was simulated easily and reliably in Matlab Simulink with a 10MHz sampling frequency. The proposed model determines the fault type, the fault direction, and the fault location exactly. The obtained results show that the TWFL is more accurate than the traditional impedance-based methods used in relay protection, and that the double-ended method is more accurate than the single-ended method. Furthermore, the paper demonstrates a Bewley diagram with the arrival times of the current impulse wave at each terminal, together with results intended to help operators transition from beginners to experienced professionals.
This can assist research to eliminate factors of misoperation and contribute substantially to the safe, reliable, and economical operation and maintenance of overhead transmission lines.

Acknowledgment

The authors would like to thank Power Transmission Company No. 2, Viet Nam, for allowing the use of the fault location equipment for the 220kV and 500kV transmission lines used in this study.

References
[1] K. H. Le, P. H. Vu, "A studying of single ended fault locator on SEL relay", IETEC'13 Conference, Ho Chi Minh City, Vietnam, December 4-6, 2013.
[2] SEL, SEL-T400L Ultra High Speed Transmission Line Relay, Traveling Wave Fault Locator, High Resolution Event Recorder: Instruction Manual, 2018.
[3] Kinkei System Corporation, Surge Type Fault Locator System Specifications SFL-2000, 2016.
[4] S. Parmar, Fault Location Algorithms for Electrical Power Transmission Lines: Methodology, Design, and Testing, MSc Thesis, Delft University of Technology, 2015.
[5] B. Kasztenny, A. Guzmán, N. Fischer, M. V. Mynam, D. Taylor, "Practical setting considerations for protective relays that use incremental quantities and traveling waves", 43rd Annual Western Protective Relay Conference, Washington, USA, October 18-20, 2016.

Engineering, Technology & Applied Science Research Vol. 10, No. 3, 2020, 5713-5718 www.etasr.com Nagao: An Experimental Study on the Way Bottom Widening of Pier Foundations Affects Seismic …

An Experimental Study on the Way Bottom Widening of Pier Foundations Affects Seismic Resistance

Takashi Nagao, Research Center for Urban Safety and Security, Kobe University, Kobe City, Japan, nagao@people.kobe-u.ac.jp

Abstract—The resistance of a pier to horizontal loads, such as seismic loads, is due to the flexural rigidity of its foundations and the horizontal subgrade reaction.
In the event of a massive earthquake, the latter becomes very small because of the softening of the ground, while the structure may experience a large inertia force and lateral spreading pressure. Therefore, structures with high seismic resistance are required in areas with high seismicity. When a wide caisson is used as a pier foundation, a rotational resistance moment caused by the vertical subgrade reaction acting on the foundation bottom can be expected. Although this rotational resistance moment increases if the foundation is widened, in design practice the subgrade reaction coefficient is evaluated as being low under such circumstances. Therefore, even if the foundation is widened, the rotational resistance moment does not increase greatly, and rotational resistance commensurate with the increased construction cost of foundation widening cannot be expected. In the present study, horizontal loading experiments were performed on one pier with a normal foundation and on one with a foundation widened at the bottom, and the way the widening affected the seismic performance was examined. The results show that, compared with the normal foundation, the bottom-widened one experienced far less displacement and offered higher earthquake resistance.

Keywords-earthquake resistance; subgrade reaction; pier; displacement

I. Introduction

A pier supports vertical loads (e.g. dead weight, cargo) by means of columnar foundations (e.g. piles) that penetrate to the bedrock, and resists horizontal loads (e.g. inertia forces during an earthquake) by means of (i) the flexural rigidity of the foundations and (ii) the horizontal subgrade reaction (SR). As ships have become bigger, wharfs have had to be made deeper, and this increase in water depth results in increased seismic load.
In addition, it has been noted that (i) lateral spreading pressure may act during a massive earthquake and (ii) the maximum lateral spreading pressure may exceed that specified in seismic design codes [1]. Much damage to pile foundations during the 1995 Kobe earthquake has been reported [2], and a pier in Kobe Port buckled at points below the ground surface, which was caused by the lateral spreading pressure [3]. Many other cases have been reported of wharfs being displaced laterally during earthquakes [4-7]. Because a pier is strongly affected by ground deformation during an earthquake and experiences residual deformation even when its structural members are not damaged [8], the deformation performance of a pier against seismic loads is an important design criterion. The damage to the pile foundation becomes even greater when liquefaction occurs [9]. Pier foundations comprise steel-pipe and reinforced-concrete piles, and large-diameter caissons are also used when large earthquake loads are considered. When caissons are used as foundations, a rotational resistance moment (RRM) due to the vertical SR acting on the foundation bottom can be expected because of the wide foundations. In the event of a massive earthquake, the horizontal SR becomes very small because of the deterioration in ground stiffness [10]. However, because the foundations are embedded in a strong soil layer, the effect of lowered ground stiffness at the foundation bottom (FB) is small even during a massive earthquake, and a sufficient vertical SR can be expected. In addition, when caisson foundations are used, the area subjected to the vertical SR can conceivably be increased by widening the FB, thereby increasing the RRM and enhancing the earthquake resistance. The seismic performance can also be expected to increase because of the soil weight acting on the widened section.
However, it has been noted that although the RRM increases as the foundations are widened, the SR coefficient used in the calculation of the SR decreases with increasing foundation width [11-15]. In design practice, formulas for calculating the SR coefficient that incorporate this effect are used [16]. When such formulas are used, the RRM does not increase greatly even if the foundations are widened, and therefore one cannot expect a rotational resistance commensurate with the increased construction cost of foundation widening. However, no studies to date have clarified this effect by experiments on the frame structure, and it is very important in the earthquake-resistant design of piers to examine how the foundation width affects the earthquake resistance. In the present study, horizontal loading experiments were performed on pier models with either a normal columnar foundation or one with a widened FB, and how the latter affected the seismic performance was examined. The differences in SR and displacement performance due to the widened FB are also discussed. (Corresponding author: Takashi Nagao)

II. Method
A. Experiments Outline
In the experiments, a soil tank of 900mm (width) × 500mm (depth) was used as shown in Figure 1, and a steel rigid frame model simulating a pier was installed in the ground as shown in Figure 2. The pier model was loaded horizontally using a mega-torque motor. To avoid effects due to the soil being restrained by the side walls of the soil tank, the 150mm-deep pier model was installed in the central part of the 500mm-deep soil tank. Steady braces were installed so that the horizontally elongated model would not tilt in the depth direction of the soil tank because of the loading. Figure 3 shows the specifications of the model.
The normal type involves a rigid frame that has a columnar foundation with a circular cross section of 60mm diameter, whereas the widened type involves a column of the same diameter whose base has been widened to 115mm. The dimensions of these models are based on a length scaling factor (prototype/model) of 100, considering the recent increases in wharf water depth. The normal and widened model weights are 0.142kN and 0.150kN respectively, the latter being heavier because of its wider base.

Fig. 1. Experimental soil tank
Fig. 2. Model installation status

The ground was prepared by air pluviation, using Tohoku silica sand No. 6 in the dry state. Although pier foundations penetrate solid ground, the ground shallower than the bearing stratum is usually soft. Thus the model ground comprised an upper layer and a lower layer with relative densities of 42% and 77%, corresponding to standard penetration test N-values of 5 and 33, respectively. The upper layer was 100mm thick and the lower layer was 135mm thick, thereby ensuring that the deformation in the lower soil layer was not constrained by the rigid bottom plate of the soil tank. As shown in Figure 1, the model foundation was embedded 45mm into the stronger lower layer.

Fig. 3. Model specifications. (a) Widened, (b) normal (unit: mm)

By similitude [17], the horizontal loading rate on the model was determined to be 1/31.6 of the actual scale, in which the actual horizontal loading rate was 20cm/s as in [1]. The maximum displacement under loading was 10mm, which by similitude corresponds to an actual displacement of 10m.
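The similitude scaling [17] behind these numbers can be checked with a short sketch. The exponents used below (length λ, displacement λ^1.5, time λ^0.75, hence velocity λ^0.75) are the standard choice for 1g soil-structure model tests and are an assumption here, not quoted from the paper:

```python
def similitude_factors(lam: float) -> dict:
    """Prototype/model scale factors for a geometric scale lam (1g field).

    Assumed exponents (standard for 1g model tests): length ~ lam,
    displacement ~ lam**1.5, time ~ lam**0.75, so that
    velocity = displacement / time ~ lam**0.75.
    """
    return {
        "length": lam,
        "displacement": lam ** 1.5,
        "time": lam ** 0.75,
        "velocity": lam ** 1.5 / lam ** 0.75,
    }

factors = similitude_factors(100.0)
# Model loading rate: prototype 20 cm/s divided by the velocity factor (~31.6)
model_rate_cm_s = 20.0 / factors["velocity"]
# A 10 mm model displacement maps to prototype scale via the displacement factor
prototype_disp_m = 10e-3 * factors["displacement"]
```

With λ = 100 the velocity factor is 100^0.75 ≈ 31.6 and the displacement factor is 100^1.5 = 1000, matching the 1/31.6 loading-rate scale and the 10mm-to-10m correspondence quoted above.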
B. Measured Quantities
The quantities measured in these experiments were the SR on the pier bottom, the horizontal and vertical displacements of the pier, and the horizontal load. Time history data were recorded by a data logger. The SR on the FB was measured by installing earth pressure gauges on the model FB, two per leg in the widened type and one per leg in the normal type, on the basis of the relationship between the diameter of the earth pressure gauge and that of the model foundations. The horizontal and vertical displacements were measured by attaching displacement gauges to the model as shown in Figure 3. In the following, the loading side is referred to as the rear side, and the side toward which displacement occurs under loading is referred to as the front side. The measured data were subjected to (i) a fast Fourier transform, (ii) low-pass filtering at 1Hz, and (iii) an inverse fast Fourier transform to obtain smooth time history data, as in [1].

III. Results
A. Load-Displacement Relationship
Figure 4 shows the time history of the horizontal load (red) and the horizontal displacement (blue). Because the loading was carried out at a constant displacement speed, the horizontal displacement increased linearly with time at the same rate for each type. The load increased over time, but the degree of increase was not constant. In both types, the load increase with time was large until around 1s, when it became gradual. This was due to a change in the displacement mode of the pier around 1s: at first the rigid frame tilted because of the horizontal load; then, as the tilt increased, the horizontal resistance was reduced by the rear side leg floating up, whereupon sliding occurred.
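The FFT-based smoothing step can be sketched as follows. The brick-wall spectral cut and the sampling rate are implementation assumptions, since the text only specifies the 1Hz cutoff:

```python
import numpy as np

def smooth_low_pass(signal: np.ndarray, fs: float, cutoff_hz: float = 1.0) -> np.ndarray:
    """FFT -> zero all spectral components above cutoff_hz -> inverse FFT.

    signal: raw time history; fs: sampling rate in Hz. This mirrors the
    smoothing described in the text; treating it as an ideal (brick-wall)
    low-pass filter is an assumption.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)
```

An ideal spectral cut can ring near sharp transients; a conventional filter design (e.g. a Butterworth filter applied forward and backward) would be a common alternative for the same purpose.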
The final horizontal displacement was 10mm for both types, but the loads required to produce it differed, the maximum load being 0.40kN for the widened type and 0.34kN for the normal type. The widened type was thus less likely to deform than the normal type.

Fig. 4. Time history of horizontal load and horizontal displacement. (a) Widened, (b) normal

The relationship between the horizontal displacement (dx) and the vertical displacement (dy) is shown in Figure 5, where the red line indicates the widened type and the blue line the normal type. The vertical displacement is positive upward, and when tilting occurs under horizontal loading, the model is displaced upward by rotation. A large vertical displacement accompanied the horizontal displacement for the normal type, but the vertical displacement of the widened type was small. This occurred because the widened type was difficult to tilt, the RRM due to the vertical SR acting on the FB being large.

Fig. 5. Relationship between horizontal and vertical displacement

B. Vertical Subgrade Reaction
Figure 6 shows the time history of the vertical SR on the FB. As described above, the widened type had two earth pressure gauges per leg, namely P1-P4 from front to rear; the normal type had one earth pressure gauge per leg, namely P1 in the front and P2 in the rear. In both types, the SR at the rear leg decreased sharply with increasing load: it became zero and the rear leg floated upon application of a horizontal load of 0.113kN in the widened type and 0.091kN in the normal type.

Fig. 6. Time history of SR. (a) Widened, (b) normal

As a ratio to the dead weight, this load was 0.75 for the widened type and 0.64 for the normal type. The SR for P2 of the widened type, which was the rear side SR on the front leg, became zero when a horizontal load of 0.252kN was applied, corresponding to 1.80 as a ratio to the dead weight.
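The dead-weight ratios quoted above follow directly from the reported uplift loads and model weights; a one-line check (all values taken from the text):

```python
def uplift_ratio(uplift_load_kn: float, weight_kn: float) -> float:
    """Horizontal load at rear-leg uplift expressed as a ratio to dead weight."""
    return uplift_load_kn / weight_kn

# Reported values: uplift at 0.113 kN for the widened model (W = 0.150 kN)
# and at 0.091 kN for the normal model (W = 0.142 kN).
widened_ratio = uplift_ratio(0.113, 0.150)  # ~0.75
normal_ratio = uplift_ratio(0.091, 0.142)   # ~0.64
```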
The front SR for P1 of the normal type did not increase significantly even as the load or displacement increased after the rear SR for P2 became zero. This was because the displacement mode changed from tilting to sliding. The widened type exhibited a similar tendency.

IV. Discussion
A. Vertical Subgrade Reaction Characteristics
When a rigid frame is subjected to horizontal loading, tilted displacement occurs first. We discuss here how the tilted displacement characteristics differ between the normal type and the widened type. Figure 7 shows how the distribution of the SR changes with the applied load. The horizontal axis is the distance from the front end, and the stars mark the installation positions of the earth pressure gauges; the values in the legend are those of the horizontal load. When the load is small (0.02kN), the SR at the rear side of each leg (P2, P4) is larger than that at the front side (P1, P3) for both the front and rear legs of the widened type. This occurs because in the widened type the column is installed on the rear side rather than at the center of the base bottom, so the loading distribution of its dead weight is not uniform. Although the front side SR (P1) increases with increasing horizontal load, there is no change in the rear side SR (P2) on the front leg in the range of horizontal load up to 0.10kN.
By contrast, the SR decreases at both positions on the rear leg (P3 and P4).

Fig. 7. Distribution of SR. (a) Widened, (b) normal

In the normal type, the SR at the front (P1) increases with increasing load and the SR at the rear (P2) decreases, but the P2 decrease is larger than the P1 increase. This is because the rigid frame does not rotate about the center of the span of the superstructure but about a point closer to the front than the center. By contrast, in the widened type it is difficult to evaluate the rotational center in this state because the initial distribution of the dead weight is not uniform. Therefore, we first averaged the initial values of the four SRs P1 to P4 and subtracted this initial average from each SR. In addition, the SR values on the front and rear sides of each leg were averaged and plotted against the center of the earth pressure gauge installation positions for each leg. The results are shown in Figure 8, where the legend is the same as in Figure 7. These results indicate that the rotational center is constant regardless of the magnitude of the load, and that its position differs between the normal type and the widened type. In the widened type, the rotational center is close to the center of the superstructure span, namely at 0.47 times the span length from the front end, while in the normal type it is at 0.37 times the span length from the front end. The arm length of the RRM is therefore larger for the widened type than for the normal type, and so the RRM of the widened type is larger than that of the normal type. This is the effect of bottom widening.

Fig. 8. Distribution of SR

The RRM calculation is based on the SR distribution and the rotational center. The RRM values are plotted against the load in Figure 9, where the red line indicates the widened type and the blue line the normal type.
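The rotational center can be located as the zero-crossing of the change in SR along the span. A minimal sketch, assuming linear interpolation between the two leg-averaged SR changes (the paper does not state its interpolation method):

```python
def rotational_center(x_front: float, dp_front: float,
                      x_rear: float, dp_rear: float) -> float:
    """Distance from the front end at which the SR change crosses zero.

    x_front, x_rear: positions of the leg-averaged gauge centers;
    dp_front, dp_rear: SR changes relative to the initial average
    (positive at the front leg, negative at the rear leg).
    Linear interpolation between the two legs is an assumption.
    """
    # zero of the straight line through (x_front, dp_front), (x_rear, dp_rear)
    return x_front + dp_front * (x_rear - x_front) / (dp_front - dp_rear)
```

A symmetric change (dp_front = -dp_rear) puts the center at mid-span; when the rear-side decrease outweighs the front-side increase, the center shifts toward the front, consistent with the behavior described for the normal type.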
For a load of 0.1kN, the RRM is 0.045kN·m in the widened type and 0.035kN·m in the normal type, the former being 1.28 times the latter. In design practice, SR distributions are calculated by assuming that the foundation center of each leg is the rotational center when evaluating the seismic resistance of a pier. The SR distributions in such a conventional design differ greatly from those revealed by this study, and the conventional design underestimates the RRM of a pier as a frame structure because the arm length of the RRM is evaluated as being short.

Fig. 9. Rotational resistance moment

B. Displacement Characteristics
Figure 10 compares the relationship between the loads and the horizontal displacement for the two types. The red line indicates the widened type and the blue line the normal type. For a given load, the widened type exhibits smaller horizontal displacement than the normal type. Because the model weights differ between the normal and the widened types, as described above, the displacement characteristics were compared in seismic coefficient form by dividing the horizontal load by the model weight, and the results were converted into values on the real scale according to the similitude [17], as shown in Figure 10(b). In the range of seismic coefficient of 0.4-0.6, the range of the displacement ratio is 0.19-0.38, and the larger the seismic coefficient, the higher the seismic resistance of the widened type.
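The RRM itself is the moment of the vertical SR about the rotational center. A discretized sketch of that calculation (the gauge positions, tributary areas, and example numbers below are illustrative assumptions, not the paper's measured data):

```python
def rotational_resistance_moment(positions_m, pressures_kpa,
                                 areas_m2, center_m):
    """Moment (kN*m) of the vertical subgrade reaction about the rotational
    center: sum of pressure * tributary area * lever arm over the gauges.
    Since kPa * m^2 = kN, multiplying by the arm (m) gives kN*m."""
    return sum(p * a * (x - center_m)
               for x, p, a in zip(positions_m, pressures_kpa, areas_m2))

# Illustrative two-gauge example (not measured values): front gauge at 0.1 m
# carrying 10 kPa, rear gauge at 0.4 m carrying 30 kPa, each over a tributary
# area of 0.01 m^2, rotational center at 0.25 m from the front end.
m_example = rotational_resistance_moment([0.1, 0.4], [10.0, 30.0],
                                         [0.01, 0.01], 0.25)
```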
In comparison with the RRM ratio, the difference between the two types is larger for the horizontal displacement. The widened type can therefore be said to have especially high resistance to horizontal displacement. For the vertical displacement, as shown in Figure 11, the widened type (red) produces very little displacement in comparison with the normal type (blue): in the range of seismic coefficient up to 0.6, the displacement ratio is less than 2%.

Fig. 10. Relationship between seismic coefficient and horizontal displacement
Fig. 11. Relationship between seismic coefficient and vertical displacement

V. Conclusions
In this study, in order to discuss how widening the bottom of pier foundations affects the seismic resistance, horizontal loading experiments were performed by installing two types of pier models in the ground, namely (i) a normal type with a leg diameter of 60mm and (ii) a widened type with a base width of 115mm. The main conclusions are outlined below. In a rigid frame subjected to horizontal load, tilted displacement occurs first; the vertical SR on the front leg increases with increasing horizontal load, and the vertical SR on the rear leg decreases. The rotational center of the rigid frame is not at the center of the span but at a point closer to the front, and its position differs between the normal type and the widened type. The widened type has a larger RRM arm length than the normal type because its rotational center is closer to the center of the span. Therefore, compared to the normal type, the widened type has a larger RRM, which at maximum is 1.28 times that of the normal type. In design practice, such vertical SR distributions cannot be reproduced, and the RRM is underestimated.
When the horizontal displacement with respect to the seismic coefficient is compared at real scale, the widened type has 0.19-0.38 times the horizontal displacement of the normal type in the range of seismic coefficient from 0.4 to 0.6. Compared with the normal type, the widened type experiences less horizontal displacement, especially in the range of large seismic coefficient, and has remarkably high seismic resistance against the action of a massive earthquake. For vertical displacement, the widened type produces only 2% or less of the displacement of the normal type.

Acknowledgments
The experiments were conducted with the help of Rie Yamaoka and Daisuke Shibata. This research was supported financially by JSPS KAKENHI grant No. JP18K04324 and Oriental Shiraishi Co., Ltd.

References
[1] T. Nagao, D. Shibata, "Experimental study of the lateral spreading pressure acting on a pile foundation during earthquakes", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 5021-5028, 2019
[2] K. Tokimatsu, Y. Asaka, "Effects of liquefaction-induced ground displacements on pile performance in the 1995 Hyogoken-Nambu earthquake", Soils and Foundations, Vol. 38, Special Issue, pp. 163-177, 1998
[3] PIANC, Seismic design guidelines for port structures, A. A. Balkema Publishers, 2001
[4] G. Mondal, D. C. Rai, "Performance of harbour structures in Andaman Islands during 2004 Sumatra earthquake", Engineering Structures, Vol. 30, pp. 174-182, 2008
[5] R. A. Green, S. M. Olson, R. Brady, B. R. Cox, G. J. Rix, E. Rathje, J. Bachhuber, J. French, S. Lasley, N. Martin, "Geotechnical aspects of failures at Port-au-Prince seaport during the 12 January 2010 Haiti earthquake", Earthquake Spectra, Vol. 27, No. S1, pp. S43-S65, 2011
[6] T. Sugano, A. Nozu, E. Kohama, K. Shimosako, Y. Kikuchi, "Damage to coastal structures", Soils and Foundations, Vol. 54, No. 4, pp. 883-901, 2014
[7] G. A. Athanasopoulos, G. C. Kechagias, D. Zekkos, A. Batilas, X. Karatzia, F. Lyrantzaki, A. Platis, "Lateral spreading of ports in the 2014 Cephalonia, Greece, earthquakes", Soil Dynamics and Earthquake Engineering, Vol. 128, Article ID 105874, 2020
[8] T. Nagao, P. Lu, "A simplified reliability estimation method for pile-supported wharf on the residual displacement by earthquake", Soil Dynamics and Earthquake Engineering, Vol. 129, Article ID 105904, 2020
[9] G. Li, R. Motamed, "Finite element modeling of soil-pile response subjected to liquefaction induced lateral spreading in a large-scale shake table experiment", Soil Dynamics and Earthquake Engineering, Vol. 92, pp. 573-584, 2017
[10] I. Towhata, Geotechnical earthquake engineering, Springer-Verlag, 2008
[11] M. A. Biot, "Bending of infinite beams on an elastic foundation", Journal of Applied Mechanics, Vol. 59, pp. A1-A7, 1937
[12] K. V. Terzaghi, "Evaluation of coefficient of subgrade reaction", Geotechnique, Vol. 5, No. 4, pp. 297-326, 1955
[13] T. Yoshinaka, "Subgrade reaction coefficient and its correction based on the loading width", PWRI Report, Vol. 299, pp. 1-49, 1967 (in Japanese)
[14] R. Ziaie-Moayed, M. Janbaz, "Effective parameters on modulus of subgrade reaction in clayey soils", Journal of Applied Sciences, Vol. 9, pp. 4006-4012, 2009
[15] J. Lee, S. Jeong, "Experimental study of estimating the subgrade reaction modulus on jointed rock foundations", Rock Mechanics and Rock Engineering, Vol. 49, No. 6, pp. 2055-2064, 2016
[16] Japan Road Association, Specifications for highway bridges, Part 4: Substructures, Japan Road Association, 2016
[17] S. Iai, "Similitude for shaking table tests on soil-structure-fluid model in 1g gravitational field", Soils and Foundations, Vol. 29, No. 1, pp. 105-118, 1989

Engineering, Technology & Applied Science Research Vol. 10, No. 3, 2020, 5619-5626 www.etasr.com Keltoum: Model Reference Adaptive Controller for LTI Systems with Time-Variant Delay

Model Reference Adaptive Controller for LTI Systems with Time-Variant Delay. Ghedjati Keltoum, Department of Electrical Engineering, University Ferhat Abbas Setif 1, Setif, Algeria, ghedjat_keltoum@yahoo.com

Abstract—In this paper, a new direct model reference adaptive control procedure (DMRAC) for linear time-invariant (LTI) delay systems is presented, using the concept of the command generator tracker, which expands the class of processes that can be controlled with zero output error. The stability of the error between the system and the model is guaranteed by Lyapunov theory. The new algorithm is applied to control a perturbed delay system. MATLAB simulation examples are given to demonstrate the usefulness of the algorithm. Keywords-adaptive control; asymptotic stability; time delay systems; dynamical uncertainties

I. Introduction
The stability of time delay systems has been studied with the Lyapunov-Krasovskii and the Lyapunov-Razumikhin approaches; these two concepts have been used in order to avoid the classical Lyapunov method. The authors in [1-3] give an overview of the stability of time delay systems with some advanced results. The rightmost roots of the characteristic equation are investigated in [4]. The authors in [5] studied the control of a MIMO nonlinear time delay system. Stability analysis and stabilization for Takagi-Sugeno (T-S) fuzzy systems with time delay have been studied in [6, 7].
In [8], a delay-dependent stabilization condition was proposed for the stability of a class of T-S fuzzy time-delay systems, using a homogeneous polynomial scheme and Polya's theorem, with application to a truck-trailer model. The authors in [9] investigated pre-specified performance for time-varying delays using model reduction, fuzzy logic, and LMI techniques. The PID controller has also been used in the stabilization of time-delay systems [10]; the developed method guarantees gain and phase margins besides stability. The introduction of adaptive control in uncertain time delay systems has been studied thoroughly. In [11], the author used the backstepping transformation, where regulation was achieved despite the presence of partial measurements and disturbance. The adaptive identification of the parameters and the time delay of a time delay system was addressed in [12]; this identification is achieved by transforming the system into a parameterized form. The convergence of the identification error is guaranteed using the persistent excitation (PE) condition, and finite time convergence was assured using the terminal sliding mode. In [13], the author applied a sliding mode controller to stabilize uncertain time-delay chaotic systems; the proposed controller was robust against time delays, parameter uncertainties, and disturbances. H-infinity theory has also been used to control time-delay systems. In [14], time delays appeared in the network used in the feedback loop, and the delay-dependent stability criterion was derived from a Lyapunov-Krasovskii functional and a linear matrix inequality (LMI). The H2, H-infinity, and LMI concepts have been used for discrete time delay uncertain systems. The authors in [15] used past values of the states and the outputs and were able to stabilize systems with time-varying delays.
Finite time stability of time-delay systems has been investigated in [7, 16-18] using homogeneity theory. Observer design for time delay systems was used in [19] for a switched singular system, where two design methods were used, and in [20] a Luenberger-like observer was used to estimate the unknown inputs for a large class of linear systems. The output regulation of time-delay systems has also been investigated in [21] by using the adaptive concept and observer design with RBF neural network systems to approximate unknown functions. In [22], the well-known Lyapunov-Krasovskii theorem was used to investigate output stabilization for time-delay nonholonomic systems. The simple MRAC of MIMO plants was first proposed in [23]. This class of algorithms does not require full state access or satisfaction of perfect model conditions; asymptotic stability is ensured provided that the plant is almost strictly positive real (ASPR). The authors in [24] extended the original algorithm to a class of plants that violates this condition. This approach involved designing a supplementary feedforward filter to be included in parallel with the original plant, resulting in a new augmented plant that had to satisfy the same strictly positive real condition. Unfortunately, the tracking error was not the true difference between the plant and model outputs, since it included the contribution of the supplementary feedforward filter, which led to an asymptotically stable error [25-28]. Applications of adaptive fuzzy control can be found in [33, 34], where the authors considered the internal model for controlling DC-DC converters. Adaptive control is also used in many industrial fields; the authors in [35] used it for controlling UAV systems. (Corresponding author: Ghedjati Keltoum)

The authors in [36] developed a saturated command for planar systems where stabilization is achieved in finite time using just a simple proportional-derivative (PD) corrector whose parameters are optimally adapted. This finite time stability is analyzed with Lyapunov theory and the homogeneity concept. The author in [37] aimed to replace a mechanical cam system with an electromagnetic actuator. The electromagnetic actuator creates a force which acts on the valve shaft and allows it to move linearly, permitting the admission and exhaust of the explosion gas and therefore the combustion engine rotation. The electromagnetic force is generated by making a velocity measurement without a speed sensor, and the position is deduced adaptively from the estimated speed. This adaptive technique improves the efficiency of the mechanical engine and its longevity. The same author in [38] follows up this work and tries to remedy the problems encountered by the classic PD corrector in the presence of noise at high frequencies by optimally approximating the PD corrector parameters. After finding a model of the electromagnetic valve actuator, which replaces the classic mechanical valve actuator, the PD parameters are adjusted adaptively and online with the variance minimization method.

II. Direct Model Reference Adaptive Control
Model reference adaptive control is considered for the nonlinear plant:

\dot{x}_p(t) = A_p x_p(t) + A_1 x_p(t - \tau(t)) + B_p u_p(t) + f(x_p)
y_p(t) = C_p x_p(t)    (1)

where x_p(t) is the (n×1) state vector, u_p(t) is the (m×1) control vector, y_p(t) is the (q×1) plant output vector, f(x_p) is an (n×1) vector of nonlinearities, A_p, A_1, B_p are matrices of appropriate dimensions, and τ(t) is the time delay, which satisfies Assumption 2 below.
We assume that the parameters of the linear part of the plant model are uncertain, i.e. only known within certain finite bounds. The range of the plant parameters is assumed to be known and bounded as:

\underline{a}_{ij} \le a_p(i,j) \le \bar{a}_{ij},  i, j = 1, ..., n    (2)
\underline{b}_{ij} \le b_p(i,j) \le \bar{b}_{ij},  i, j = 1, ..., n    (3)

• Assumption 1: The nonlinear function f(x) is Lipschitz in its arguments, i.e. \| f(x_1) - f(x_2) \| < l \| x_1 - x_2 \|, where l > 0 is the Lipschitz constant, \| \cdot \| is the Euclidean norm, and x_1, x_2 belong to a compact set \Omega \subset R^n.
• Assumption 2: The derivative of the time delay τ(t) satisfies d\tau(t)/dt \le \tau_1.

The objective of this paper is to find, without explicit knowledge of A_p, B_p, and for nonlinear f(x_p), the control u_p(t) such that the plant output vector y_p(t) follows the reference model given by:

\dot{x}_m(t) = A_m x_m(t) + A_{m\tau} x_m(t - \tau) + B_m u_m(t) + B_{m\tau} u_m(t - \tau)
y_m(t) = C_m x_m(t)    (4)

The output y_m is the desired response to the set point command u_m. The model incorporates the desired behavior of the plant, but its choice is not restricted; in particular, the order of the plant may be much larger than the order of the reference model. The ideal control law that generates perfect output tracking and ideal state trajectories is assumed to be a linear combination of the model states and the model input (see [29]). In our case, we suppose that the ideal state, its delay, and the ideal input are related to the model state, its delay, and the model input by:

[ x_p^*(t) ; u_p^*(t) ] = [ S_{11}  S_{12} ; S_{21}  S_{22} ] [ x_m(t) ; u_m(t) ]    (5.0)

Perfect output tracking means that the ideal output y^*(t) is equal to the model output y_m(t), i.e.:

y^*(t) = C_p x_p^*(t) = C_p S_{11} x_m(t) + C_p S_{12} u_m(t) = y_m(t) = C_m x_m(t)    (5.1)

Taking into account (5.0), the fact that the ideal state x^*(t) satisfies the plant dynamics, and the assumption that the command u_m(t) is constant (if the input is not constant, we can always find a dynamic system that generates u_m(t) from a constant input), the derivative of x^*(t) can be written as:

\dot{x}_p^*(t) = d/dt [ S_{11} x_m(t) + S_{12} u_m(t) ]
 = S_{11} [ A_m x_m(t) + A_{m\tau} x_m(t-\tau) + B_m u_m(t) + B_{m\tau} u_m(t-\tau) ]
 = A_p x_p^*(t) + A_1 x_p^*(t-\tau) + B_p u_p^*(t)
 = A_p [ S_{11} x_m(t) + S_{12} u_m(t) ] + A_1 [ S_{11} x_m(t-\tau) + S_{12} u_m(t-\tau) ] + B_p [ S_{21} x_m(t) + S_{22} u_m(t) ]    (5.2)

Using (5.1)-(5.2) and matching coefficients, we obtain the following algebraic system:

S_{11} A_m = A_p S_{11} + B_p S_{21}
S_{11} B_m = A_p S_{12} + B_p S_{22}
S_{11} A_{m\tau} = A_1 S_{11}
S_{11} B_{m\tau} = A_1 S_{12}
C_p S_{11} = C_m
C_p S_{12} = 0    (5.3)

which can be written in block form as:

[ A_p  B_p ; C_p  0 ] [ S_{11}  S_{12} ; S_{21}  S_{22} ] = [ S_{11} A_m  S_{11} B_m ; C_m  0 ],
A_1 [ S_{11}  S_{12} ] = [ S_{11} A_{m\tau}  S_{11} B_{m\tau} ]    (6)

In system (6) we have more unknowns than equations, so a solution almost always exists. When A_1, A_{m\tau}, and B_{m\tau} are null, the system and the model are delay-free and we recover the equations given in [29].
The adaptive control law based on the extended Command Generator Tracker (CGT) approach is given by:

u_p(t) = K_e(t) e_y(t) + K_x(t) x_m(t) + K_u(t) u_m(t) \quad (7)

The adaptive law (7) has been applied to linear systems [30, 31]. Here we aim to extend it to the linear time-delay system described by (1), by adding a delay in the input and output of the model (4). The tracking error is e_y(t) = y_m(t) - y_p(t), and K_e(t), K_x(t), and K_u(t) are adaptive gains concatenated into the matrix:

K(t) = \left[K_e(t) \;\; K_x(t) \;\; K_u(t)\right] \quad (8)

Defining the vector r(t) (n_r \times 1) as:

r(t) = \left[(y_m(t) - y_p(t))^T \;\; x_m^T(t) \;\; u_m^T(t)\right]^T \quad (9)

the control u_p(t) is written in the compact form:

u_p(t) = K(t) r(t) \quad (10)

where:

K(t) = K_p(t) + K_i(t) \quad (11)

K_p(t) = \left(y_m(t) - y_p(t)\right) r^T(t)\, T_p, \quad T_p \ge 0 \quad (12)

\dot{K}_i(t) = \left(y_m(t) - y_p(t)\right) r^T(t)\, T_i, \quad T_i > 0 \quad (13)

III. Stability Study

The first step of the demonstration is to design a positive definite quadratic form in the state variables e_x(t) and K_i(t) of the adaptive system. T_i^{-1} is assumed to be a symmetric positive definite matrix. An appropriate choice of Lyapunov-Krasovskii functional [32] is then:

V = e_x^T P e_x + \int_{t-\tau}^{t} e_x^T(\alpha)\, Q\, e_x(\alpha)\, d\alpha + \mathrm{tr}\left[S\left(K_i - \tilde{K}\right) T_i^{-1} \left(K_i - \tilde{K}\right)^T S^T\right] \quad (14)

where tr denotes the trace of a matrix. Its time derivative is:

\dot{V} = \dot{e}_x^T P e_x + e_x^T P \dot{e}_x + e_x^T(t) Q e_x(t) - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2\, \mathrm{tr}\left[S\left(K_i - \tilde{K}\right) T_i^{-1} \dot{K}_i^T S^T\right] \quad (15)

where P and Q are symmetric positive definite matrices of size n \times n, \tilde{K} is an m \times n_r matrix, and S is a non-singular m \times m matrix. Since \tilde{K} appears only in the function V and not in the control algorithm, it is called a fictitious gain matrix. It has the same dimension as K, where:

\tilde{K} r = \tilde{K}_e C_p e_x + \tilde{K}_x x_m + \tilde{K}_u u_m \quad (16)

The gains \tilde{K}_e, \tilde{K}_x, and \tilde{K}_u are, like \tilde{K}, fictitious.
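The adaptive law (10)-(13) is straightforward to implement once the proportional gain (12) is evaluated algebraically and the integral gain (13) is discretized. The sketch below is a minimal one-step illustration assuming the dimensions used later in Section IV (scalar output, scalar model state, scalar command, so r is 3x1 and T_p, T_i are 3x3); the signal values fed in are placeholders, not simulation data.

```python
import numpy as np

# Minimal sketch of the adaptive law (10)-(13): proportional gain K_p and
# Euler-integrated gain K_i, both driven by the output error e_y = y_m - y_p.
T_p = np.eye(3)          # proportional adaptation weight, T_p >= 0  (12)
T_i = np.eye(3)          # integral adaptation weight, T_i > 0       (13)
dt = 0.01                # integration step (placeholder)
K_i = np.zeros((1, 3))   # integral gain state

def control(e_y, x_m, u_m, K_i):
    """One step of (9)-(13); returns u_p and the updated K_i."""
    r = np.array([[e_y], [x_m], [u_m]])          # regressor vector (9)
    K_p = e_y * r.T @ T_p                        # proportional gain (12)
    K_i = K_i + dt * (e_y * r.T @ T_i)           # forward-Euler step of (13)
    u_p = ((K_p + K_i) @ r).item()               # control law (10)-(11)
    return u_p, K_i

u_p, K_i = control(e_y=0.5, x_m=1.0, u_m=1.0, K_i=K_i)
```

With these placeholder signals, K_p = 0.5 r^T and one Euler step adds 0.005 r^T to K_i, so the gains (and hence u_p) remain bounded as long as e_y is bounded, in line with the stability study below.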
Next we form the error equation using e_x = x^* - x_p:

\dot{e}_x = \left[A_p x^* + A_1 x^*(t-\tau) + B_p u^* + f(x^*)\right] - \left[A_p x_p + A_1 x_p(t-\tau) + B_p u_p + f(x_p)\right] = A_p e_x + A_1 e_x(t-\tau) + B_p\left(u^* - u_p\right) + f(x^*) - f(x_p) \quad (17)

If we set df = f(x^*) - f(x_p) and substitute u^* from (5.0) and u_p from (7), we get:

\dot{e}_x = A_p e_x + A_1 e_x(t-\tau) + B_p\left[S_{21} x_m + S_{22} u_m - K_x x_m - K_u u_m - K_e C_p e_x\right] + df \quad (18.a)

\dot{e}_x = A_p e_x + A_1 e_x(t-\tau) + B_p\left[S_{21} x_m + S_{22} u_m - K_i r - C_p e_x\, r^T(t)\, T_p\, r\right] + df \quad (18.b)

The adaptive system is therefore described by:

\dot{e}_x = A_p e_x + A_1 e_x(t-\tau) + B_p\left[S_{21} x_m + S_{22} u_m - K_i r - C_p e_x\, r^T(t)\, T_p\, r\right] + df \quad (19)

\dot{K}_i = C_p e_x\, r^T(t)\, T_i \quad (20)

Substituting (19) and (20) into (15) gives the expanded expression (21); grouping terms, it can be written as:

\dot{V} = e_x^T\left(P A_p + A_p^T P + Q\right) e_x + e_x^T(t-\tau) A_1^T P e_x + e_x^T P A_1 e_x(t-\tau) + 2 e_x^T P B_p\left(S_{21} x_m + S_{22} u_m\right) - 2 e_x^T P B_p K_i r - 2 e_x^T P B_p C_p e_x\, r^T T_p r + 2\,\mathrm{tr}\left[S\left(K_i - \tilde{K}\right) r\, e_x^T C_p^T S^T\right] - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2 e_x^T P\, df \quad (22)

Knowing that for two vectors u (l \times 1) and v (1 \times l) we have \mathrm{tr}[u\, v] = v\, u, we therefore obtain:
\dot{V} = e_x^T\left(P A_p + A_p^T P + Q\right) e_x + e_x^T(t-\tau) A_1^T P e_x + e_x^T P A_1 e_x(t-\tau) + 2 e_x^T P B_p\left(S_{21} x_m + S_{22} u_m\right) - 2 e_x^T P B_p C_p e_x\, r^T T_p r + 2 e_x^T C_p^T S^T S\left(K_i - \tilde{K}\right) r - 2 e_x^T P B_p K_i r - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2 e_x^T P\, df \quad (23)

By setting C_p = G B_p^T P with G = (S^T S)^{-1}, we have C_p^T S^T S = P B_p, so the K_i terms cancel and the derivative of the Lyapunov functional becomes:

\dot{V} = e_x^T\left(P A_p + A_p^T P + Q\right) e_x + e_x^T(t-\tau) A_1^T P e_x + e_x^T P A_1 e_x(t-\tau) + 2 e_x^T P B_p\left(S_{21} x_m + S_{22} u_m\right) - 2 e_x^T P B_p (S^T S)^{-1} B_p^T P e_x\, r^T T_p r - 2 e_x^T P B_p \tilde{K} r - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2 e_x^T P\, df \quad (24)

Substituting \tilde{K} r = \tilde{K}_e C_p e_x + \tilde{K}_x x_m + \tilde{K}_u u_m in (24), we get:

\dot{V} = e_x^T\left[P\left(A_p - B_p \tilde{K}_e C_p\right) + \left(A_p - B_p \tilde{K}_e C_p\right)^T P + Q\right] e_x + e_x^T(t-\tau) A_1^T P e_x + e_x^T P A_1 e_x(t-\tau) - 2 e_x^T P B_p (S^T S)^{-1} B_p^T P e_x\, r^T T_p r + 2 e_x^T P B_p\left[\left(S_{21} - \tilde{K}_x\right) x_m + \left(S_{22} - \tilde{K}_u\right) u_m\right] - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2 e_x^T P\, df \quad (25)

Thus, if we set \tilde{K}_x = S_{21} and \tilde{K}_u = S_{22} (neither of which is required for implementation), the derivative of V becomes:

\dot{V} = e_x^T\left[P\left(A_p - B_p \tilde{K}_e C_p\right) + \left(A_p - B_p \tilde{K}_e C_p\right)^T P + Q\right] e_x + e_x^T(t-\tau) A_1^T P e_x + e_x^T P A_1 e_x(t-\tau) - 2 e_x^T P B_p (S^T S)^{-1} B_p^T P e_x\, r^T T_p r - (1-\dot{\tau})\, e_x^T(t-\tau) Q e_x(t-\tau) + 2 e_x^T P\, df \quad (26)

Taking into account Assumption 2, the derivative of the Lyapunov functional satisfies (27):
\dot{V} \le \begin{bmatrix} e_x(t) \\ e_x(t-\tau) \end{bmatrix}^T \begin{bmatrix} Q_1 & P A_1 \\ A_1^T P & -(1-\tau_1) Q \end{bmatrix} \begin{bmatrix} e_x(t) \\ e_x(t-\tau) \end{bmatrix} - 2 e_x^T P B_p (S^T S)^{-1} B_p^T P e_x\, r^T T_p r + 2 e_x^T P\, df \quad (27)

with:

Q_1 = P\left(A_p - B_p \tilde{K}_e C_p\right) + \left(A_p - B_p \tilde{K}_e C_p\right)^T P + Q \quad (28)

From (12), T_p is positive semi-definite, so the quadratic T_p term is non-positive and (27) becomes:

\dot{V} \le \begin{bmatrix} e_x(t) \\ e_x(t-\tau) \end{bmatrix}^T \begin{bmatrix} Q_1 & P A_1 \\ A_1^T P & -(1-\tau_1) Q \end{bmatrix} \begin{bmatrix} e_x(t) \\ e_x(t-\tau) \end{bmatrix} + 2 e_x^T P\, df \quad (29)

Let us take:

\bar{e}_x = \begin{bmatrix} e_x(t) \\ e_x(t-\tau) \end{bmatrix}, \quad Q_2 = -\begin{bmatrix} Q_1 & P A_1 \\ A_1^T P & -(1-\tau_1) Q \end{bmatrix} \quad (30)

Then (29) is rewritten as:

\dot{V} \le -\bar{e}_x^T Q_2\, \bar{e}_x + 2 e_x^T P\, df \quad (31)

When df is zero, the error is asymptotically stable if and only if Q_2 = Q_2^T is positive semi-definite. When df is nonzero and satisfies Assumption 1, so that \|df\| \le L \|x^* - x_p\| = L \|e_x\|, the derivative of the Lyapunov functional satisfies:

\dot{V} \le -\lambda_{\min}(Q_2)\, \|e_x\|^2 + L \|e_x\| = \|e_x\|\left(L - \lambda_{\min}(Q_2) \|e_x\|\right) \le 0 \;\Rightarrow\; \|e_x\| \ge \frac{L}{\lambda_{\min}(Q_2)}

where \lambda_{\min}(Q_2) stands for the smallest eigenvalue of Q_2, a positive number since Q_2 = Q_2^T \ge 0. The last inequality implies that the error e_x is uniformly ultimately bounded, which means that it converges to a compact set around the origin. This set can be made much smaller by selecting \lambda_{\min}(Q_2) to be large. One choice is Q_2 = \alpha I, where I is the identity matrix and \alpha is a positive scalar. Finally, the derivative of the Lyapunov functional is negative definite in e_x whenever \|e_x\| \ge L / \lambda_{\min}(Q_2). Since V(t) is a positive definite function, the vector e_x(t) and the matrix K_i(t) are bounded.
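The size of the residual set can be illustrated numerically: with Q_2 = alpha*I, the smallest eigenvalue is alpha itself, so the guaranteed bound L/lambda_min(Q_2) shrinks as alpha grows. The values of L and alpha below are placeholders for illustration only.

```python
import numpy as np

# Illustrating the ultimate-bound argument: for Q2 = alpha * I the smallest
# eigenvalue equals alpha, so the residual-error radius L / lambda_min(Q2)
# decreases as alpha increases. L and the alpha values are placeholders.
L = 2.0                                   # assumed Lipschitz constant
for alpha in (1.0, 10.0, 100.0):
    Q2 = alpha * np.eye(4)                # size 2n x 2n; here n = 2
    lam_min = np.linalg.eigvalsh(Q2).min()
    radius = L / lam_min                  # guaranteed ultimate bound on ||e_x||
```

After the loop (alpha = 100), the radius is L/100 = 0.02, a hundred times smaller than with alpha = 1, matching the remark that a large lambda_min(Q_2) shrinks the set the error converges to.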
We summarize the stability result in the following theorem:

• Theorem: The control (10), with the adaptive laws (11), (12) and (13), applied to the nonlinear uncertain system (1) satisfying Assumption 1, leads to a uniformly ultimately bounded tracking error between the system and the model if and only if there exist matrices P = P^T > 0 and Q = Q^T \ge 0 such that:
1) The matrix
\begin{bmatrix} H & -P A_1 \\ -A_1^T P & (1-\tau_1) Q \end{bmatrix}, \quad H = -\left[P\left(A_p - B_p \tilde{K}_e C_p\right) + \left(A_p - B_p \tilde{K}_e C_p\right)^T P + Q\right]
is positive semi-definite for some matrix \tilde{K}_e.
2) C_p = G B_p^T P, with G = (S^T S)^{-1} for a non-singular matrix S.
3) \|f(x_1) - f(x_2)\| < L \|x_1 - x_2\|, L > 0, for all x_1, x_2 \in \Omega \subset R^n.
One convenient choice is P\left(A_p - B_p \tilde{K}_e C_p\right) + \left(A_p - B_p \tilde{K}_e C_p\right)^T P + Q = -\alpha I, with \alpha \in R^+ and I the identity matrix. These relations imply that the feedback system is SPR for large \alpha, and hence that the original linear system is ASPR.

IV. Simulation

In the simulations, the output of the system is required to track the reference output. The controlled system is given by:

\dot{x}_p(t) = A_p x_p(t) + A_1 x_p(t - \tau(t)) + B_p u_p(t), \quad y_p(t) = C_p x_p(t)

with:

A_p = \begin{bmatrix} 1 & 3 \\ 3 & 4 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 5 & 6 \\ 6 & 8 \end{bmatrix}, \quad B_p = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \quad C_p = \begin{bmatrix} 5 & 6 \end{bmatrix}

The transfer function of the reference model is given by G_m(s) = \frac{2}{s+1}. The eigenvalues of A_p and A_1 are \lambda(A_p) = \{-0.85,\; 5.85\} and \lambda(A_1) = \{0.31,\; 12.68\}, which means that both matrices are unstable. The model input is illustrated in Figure 1: u_m = 1 from 0 to 20 s and u_m = -1 from 20 s to 40 s; from 40 s to 60 s a sinusoidal input u_m(t) = 2\sin(t) was chosen, and from 60 s to 100 s another sinusoidal input, u_m(t) = \sin(t/3), was selected.

Fig. 1. The model input u_m(t).
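The eigenvalue claims and the piecewise model input of this section can be reproduced with a few lines of numpy, following the matrices and input profile quoted in the simulation setup above.

```python
import numpy as np

# Numerical companion to Section IV: verify the reported eigenvalues of A_p
# and A_1, and build the piecewise model input u_m(t) of Figure 1.
A_p = np.array([[1.0, 3.0], [3.0, 4.0]])
A_1 = np.array([[5.0, 6.0], [6.0, 8.0]])
eig_Ap = np.sort(np.linalg.eigvals(A_p).real)   # approx [-0.85, 5.85]
eig_A1 = np.sort(np.linalg.eigvals(A_1).real)   # approx [0.31, 12.68]

def u_m(t):
    """Model input: +1 on [0, 20) s, -1 on [20, 40) s,
    2*sin(t) on [40, 60) s, sin(t/3) from 60 s onward."""
    if t < 20.0:
        return 1.0
    if t < 40.0:
        return -1.0
    if t < 60.0:
        return 2.0 * np.sin(t)
    return np.sin(t / 3.0)
```

Both matrices have a positive eigenvalue, confirming that the open-loop plant and its delayed part are unstable, which is what makes the tracking task nontrivial.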
A. Case 1: Without Perturbation

In this case we suppose that the controlled system is not affected by measurement noise or actuator failure. The adjustable parameters are chosen as T_p = T_i = 1 \times I_{3,3}, where I_{3,3} is the identity matrix of order 3. Figure 2 shows the two outputs: the output of the controlled system tracks the reference whatever the model input. The control signal is shown in Figure 3, where we can see that this command has the same form as the model input. The gains are presented in Figure 4; note that these gains are used to construct the controlled system input (see (11), (12) and (13)). These gains are bounded, so the controlled system input is also bounded.

B. Case 2: With Perturbation and T_p = T_i = 1 \times I_{3,3}

In this case, measurement noise and an actuator fault are added, so the perturbed system is given by:

\dot{x}_p(t) = A_p x_p(t) + A_1 x_p(t - \tau(t)) + B_p\left(u_p(t) + d(t)\right), \quad y_p(t) = C_p x_p(t) + n(t)

where d(t) is the actuator fault, d(t) = \sin(2t), and n(t) represents the measurement noise, n(t) = \sin(4t). The adjustable parameters are taken as in the previous case. Figure 5 illustrates the outputs of the system and the model: the tracking is clearly deteriorated by the perturbation affecting the system. Figure 6 shows the control input, where the controller exerts a large effort to damp the effect of the perturbation and keep the system output tracking the reference model.

Fig. 2. Outputs of the system and the model without perturbation.
Fig. 3. The system command u(t).
Fig. 4. The gains K_e, K_x, and K_u.
Fig. 5. Outputs of the system and the model with perturbation, T_p = T_i = 1 \times I_{3,3}.
Fig. 6.
The system command u(t).
Fig. 7. Outputs of the system and the model with perturbation, T_p = T_i = 10 \times I_{3,3}.

C. Case 3: With Perturbation and T_p = T_i = 10 \times I_{3,3}

In this case, in order to overcome the drawback observed in the previous case, we increased the adjustable parameters to T_p = T_i = 10 \times I_{3,3}. Figure 7 shows near-perfect tracking compared to Figure 5, and Figure 8 shows the effect of the controller in overcoming the perturbation and keeping the system output on the reference model. Note that this command is bounded and does not exhibit high oscillation. Figure 9 presents the gains, which are bounded and are adjusted to construct the system input.

Fig. 8. The system command u(t).
Fig. 9. The gains K_e, K_x, and K_u.

V. Conclusion

This paper presented an adaptive control law for a perturbed time-delay system. Lyapunov theory was employed in order to achieve a command that is robust against the uncertainty inherent in all real systems. The simulation results confirm the robustness of the developed controller.

References
[1] E. Fridman, "Tutorial on Lyapunov-based methods for time-delay systems", European Journal of Control, Vol. 20, No. 6, pp. 271-283, 2014
[2] K. Gu, V. L. Kharitonov, J. Chen, Stability of Time-Delay Systems, Springer, 2003
[3] M. Wu, Y. He, J. H. She, Stability Analysis and Robust Control of Time-Delay Systems, Springer, 2010
[4] X.
P. Chen, H. Dai, "Stability analysis of time-delay systems using a contour integral method", Applied Mathematics and Computation, Vol. 273, pp. 390-397, 2016
[5] M. Hashemi, J. Ghaisari, J. Askari, "Adaptive control for a class of MIMO nonlinear time delay systems against time varying actuator failures", ISA Transactions, Vol. 57, pp. 23-42, 2015
[6] Z. Zhang, C. Lin, B. Chen, "New stability and stabilization conditions for T-S fuzzy systems with time delay", Fuzzy Sets and Systems, Vol. 263, pp. 82-91, 2015
[7] J. Song, S. He, "Finite-time robust passive control for a class of uncertain Lipschitz nonlinear systems with time-delays", Neurocomputing, Vol. 159, pp. 275-281, 2015
[8] S. H. Tsai, Y. A. Chen, J. C. Lo, "A novel stabilization condition for a class of T-S fuzzy time-delay systems", Neurocomputing, Vol. 175, pp. 223-232, 2015
[9] Y. D. Song, H. Zhou, X. Su, L. Wang, "Pre-specified performance based model reduction for time-varying delay systems in fuzzy framework", Information Sciences, Vol. 328, pp. 206-221, 2016
[10] D. J. Wang, "A PID controller set of guaranteeing stability and gain and phase margins for time-delay systems", Journal of Process Control, Vol. 22, No. 7, pp. 1298-1306, 2012
[11] D. B. Pietri, J. Chauvin, N. Petit, "Adaptive control scheme for uncertain time-delay systems", Automatica, Vol. 48, No. 8, pp. 1536-1552, 2012
[12] J. Na, X. Ren, Y. Xia, "Adaptive parameter identification of linear SISO systems with unknown time-delay", Systems & Control Letters, Vol. 66, pp. 43-50, 2014
[13] M. C. Pai, "Chaotic sliding mode controllers for uncertain time-delay chaotic systems with input nonlinearity", Applied Mathematics and Computation, Vol. 271, pp. 757-767, 2015
[14] Y. Zhang, Q. Wang, C. Dong, Y. Jiang, "H∞ output tracking control for flight control systems with time-varying delay", Chinese Journal of Aeronautics, Vol. 26, No. 5, pp. 1251-1258, 2013
[15] L. Frezzatto, M. J. Lacerda, R. C. L. F. Oliveira, P. L. D.
Peres, "Robust H2 and H∞ memory filter design for linear uncertain discrete-time delay systems", Signal Processing, Vol. 117, pp. 322-332, 2015
[16] V. Andrieu, L. Praly, A. Astolfi, "Homogeneous approximation, recursive observer design, and output feedback", SIAM Journal on Control and Optimization, Vol. 47, No. 4, pp. 1814-1850, 2008
[17] A. Polyakov, "Nonlinear feedback design for fixed-time stabilization of linear control systems", IEEE Transactions on Automatic Control, Vol. 57, No. 8, pp. 2106-2110, 2012
[18] D. Efimov, A. Polyakov, E. Fridman, W. Perruquetti, J. P. Richard, "Comments on finite-time stability of time-delay systems", Automatica, Vol. 50, No. 7, pp. 1944-1947, 2014
[19] J. Lin, Z. Gao, "Observers design for switched discrete-time singular time-delay systems with unknown inputs", Nonlinear Analysis: Hybrid Systems, Vol. 18, pp. 85-99, 2015
[20] G. Zheng, F. J. Bejarano, W. Perruquetti, J. P. Richard, "Unknown input observer for linear time-delay systems", Automatica, Vol. 61, pp. 35-43, 2015
[21] C. Hua, G. Liu, L. Zhang, X. Guan, "Output feedback tracking control for nonlinear time-delay systems with tracking errors and input constraints", Neurocomputing, Vol. 173, pp. 751-758, 2016
[22] Y. Q. Wu, Z. G. Liu, "Output feedback stabilization for time-delay nonholonomic systems with polynomial conditions", ISA Transactions, Vol. 58, pp. 1-10, 2015
[23] K. Sobel, H. Kaufman, L. Mabius, "Implicit adaptive control for a class of MIMO systems", IEEE Transactions on Aerospace and Electronic Systems, Vol. 18, No. 5, pp. 576-590, 1982
[24] S. Ozcelik, H.
Kaufman, "Robust direct model reference adaptive controllers", 34th IEEE Conference on Decision and Control, New Orleans, USA, December 13-15, 1995
[25] G. W. Neat, H. Kaufman, R. Steinvorth, "Comparison and extension of a direct model reference adaptive control procedure", International Journal of Control, Vol. 55, No. 4, pp. 945-967, 1992
[26] I. Bar-Kana, "Positive-realness in multivariable stationary linear systems", Journal of the Franklin Institute, Vol. 328, No. 4, pp. 403-417, 1991
[27] I. Barkana, M. C. M. Teixeira, L. Hsu, "Mitigation of symmetry condition in positive realness for adaptive control", Automatica, Vol. 42, No. 9, pp. 1611-1616, 2006
[28] I. Barkana, "Gain conditions and convergence of simple adaptive control", International Journal of Adaptive Control and Signal Processing, Vol. 19, No. 1, pp. 13-40, 2005
[29] J. Broussard, M. O'Brien, "Feedforward control to track the output of a forced model", IEEE Transactions on Automatic Control, Vol. 25, No. 4, pp. 851-853, 1980
[30] D. A. Torrey, Y. Sozer, H. Kaufman, "Direct model reference adaptive control of permanent magnet brushless DC motors", IEEE International Conference on Control Applications, Hartford, USA, October 5-7, 1997
[31] H. Kaufman, G. W. Neat, "Asymptotically stable multiple-input multiple-output direct model reference adaptive controller for processes not necessarily satisfying a positive real constraint", International Journal of Control, Vol. 58, No. 5, pp. 1011-1031, 1993
[32] M. Darouach, "Linear functional observers for systems with delays in state variables", IEEE Transactions on Automatic Control, Vol. 46, No. 3, pp. 491-496, 2001
[33] K. Behih, K. Benmahammed, Z. Bouchama, M. N. Harmas, "Real-time investigation of an adaptive fuzzy synergetic controller for a DC-DC buck converter", Engineering, Technology & Applied Science Research, Vol. 9, No. 6, pp. 4984-4989, 2019
[34] Z. R. Labidi, H. Schulte, A.
Mami, "A model-based approach of DC-DC converters dedicated to controller design applications for photovoltaic generators", Engineering, Technology & Applied Science Research, Vol. 9, No. 4, pp. 4371-4376, 2019
[35] K. Mokhtari, A. Elhadri, M. Abdelaziz, "A passivity-based simple adaptive synergetic control for a class of nonlinear systems", International Journal of Adaptive Control and Signal Processing, Vol. 33, No. 9, pp. 1359-1373, 2019
[36] Y. Su, C. Zheng, P. Mercorelli, "Global finite-time stabilization of planar linear systems with actuator saturation", IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 64, No. 8, pp. 947-951, 2017
[37] P. Mercorelli, "An adaptive and optimized switching observer for sensorless control of an electromagnetic valve actuator in camless internal combustion engines", Asian Journal of Control, Vol. 16, No. 4, pp. 959-973, 2014
[38] P. Mercorelli, "Robust adaptive soft landing control of an electromagnetic valve actuator for camless engines", Asian Journal of Control, Vol. 18, No. 4, pp. 1299-1312, 2016

ETASR Engineering, Technology & Applied Science Research, Vol. 3, No. 1, 2013, 349-351, www.etasr.com — Selvam and Latha: A Simple Square Rooting Circuit Based on Operational Amplifiers (OpAmps)

A Simple Square Rooting Circuit Based on Operational Amplifiers (OpAmps)

K. C. Selvam, Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600 036, India, kcselvam@ee.iitm.ac.in
S. Latha, Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600 036, India, latha@ee.iitm.ac.in

Abstract—A simple circuit which accepts a negative voltage as input and provides an output voltage equal to the square root of the input voltage magnitude is described in this paper. The square rooting operation depends only on the ratio of two resistors and a DC voltage.
Hence, the required accuracy can be obtained by employing precision resistors and a stable reference voltage. The feasibility of the circuit is examined by testing a prototype.

Keywords—square-rooting; generators; comparators; switches; low pass filters

I. Introduction

The need to obtain the square root of a measured quantity is often met in measurement and instrumentation systems [1]. In particular, techniques that measure unknown signals buried in excessive noise by employing either a phase sensitive detector (PSD) or a tracking amplifier invariably need a final square rooting stage [2]. A square rooting circuit is also required in certain methods of determining impedances under sinusoidal excitation, as well as for obtaining the three vectors of a three-phase power system. Methods based on expensive and complex multipliers have been proposed in the past for realizing a circuit that provides an output voltage whose magnitude is the square root of the input voltage. Square rooting circuits have also been implemented with different high-performance active building blocks: second-generation current conveyors [3], OTAs [4], second-generation current controlled current conveyors (CCCIIs) [5], current differencing transconductance amplifiers (CDTAs) [6], and current follower transconductance amplifiers (CFTAs) [7]. Unfortunately, these reported circuits suffer from one or more of the following disadvantages: (a) excessive use of active/passive elements, especially external resistors [3-5], (b) use of a floating resistor, which is not convenient for further IC fabrication [3], and (c) absence of linear electronic controllability of the output signal [3-7]. In this paper, a novel, simple square rooting circuit employing operational amplifiers (opamps), which eliminates the drawbacks of the previously mentioned methods [3-7], is proposed.
Though the scheme is simple, the expression for the output indicates that good accuracy can be achieved with a precision DC reference voltage and a pair of precision resistors.

II. Circuit Description

The circuit diagram of the proposed scheme is shown in Figure 1. The sawtooth wave is generated by charging a capacitor at a specified rate and then rapidly discharging it with a switch. Let us assume that at the start, the charge, and hence the voltage at the output terminal of operational amplifier OA1, is zero. Since the inverting terminal of OA1 is at virtual ground, the current through R_1, namely V_R/R_1 amperes, flows through and charges capacitor C_1. During this charging (until the output of OA1 reaches the voltage level of V_R), the output of operational amplifier OA2, configured to work as a comparator, is in the low state and switch S_1 is kept open (off). As soon as the output of OA1 crosses the level of V_R, say after a time period T, the output of comparator OA2 goes high and switch S_1 is closed (on). Switch S_1 then shorts capacitor C_1 and hence v_s drops to zero volts. During the charging interval we have:

v_s(t) = \frac{1}{R_1 C_1} \int_0^{t} V_R\, d\alpha = \frac{V_R}{R_1 C_1}\, t \quad (1)

After a very short delay time t_d, required for the capacitor to discharge to zero volts, the comparator OA2 output returns to low and switch S_1 is opened, allowing C_1 to resume charging. This cycle therefore repeats itself with period (T + t_d). The waveforms at cardinal points of the circuit of Figure 1 are shown in Figure 2. From (1) and the fact that at time t = T, v_s = V_R, we get:

T = R_1 C_1 \quad (2)

As seen in Figure 1, the comparator OA3 compares the sawtooth waveform thus generated with the output voltage V_o and provides at its output a pulse train v_k.
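As a numerical aside, the sawtooth period T = R_1 C_1 and the square-root characteristic stated in the abstract (output equal to the square root of the input magnitude, scaled by the reference) can be checked against the component values and the measurements reported later in Section III and Table I.

```python
import math

# Cross-check of the circuit relations using the component values reported
# in Section III: sawtooth period T = R1*C1, and the ideal characteristic
# V_o = sqrt(V_i * V_R) compared against the measured values of Table I.
R1, C1, V_R = 200e3, 470e-12, 6.0
T = R1 * C1                      # sawtooth period: 9.4e-05 s (~10.6 kHz)

def v_out(v_in_mag):
    """Ideal square-rooter output for an input magnitude |V_i| < V_R."""
    return math.sqrt(v_in_mag * V_R)

# Measured outputs from Table I, keyed by input magnitude |V_i| in volts
measured = {0.5: 1.711, 1.0: 2.420, 1.5: 2.959, 2.0: 3.414, 2.5: 3.829,
            3.0: 4.200, 3.5: 4.525, 4.0: 4.820, 4.5: 5.125}
errors = {v: 100.0 * (m - v_out(v)) / v_out(v) for v, m in measured.items()}
# recomputed errors all lie between about -1.7 % and -0.9 %, consistent
# with the roughly 1 % to 1.5 % deviations reported in Table I
```

The recomputed "calculated" column (e.g. sqrt(0.5 * 6) = 1.732, sqrt(2 * 6) = 3.464) matches Table I, and all measured points fall slightly below the ideal curve, as the table's negative error percentages indicate.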
The on-time of this pulse train will be:

d = \frac{V_o}{V_R}\, T \quad (3)

This pulse train v_k controls switch S_2. Switch S_2 connects (a) V_o to the low pass filter realized with operational amplifier OA4 during the on-time d, and (b) zero volts during the off-time of v_k. Another pulse train, v_p, is thus generated at the output of switch S_2, with the same on-time d, period T, and peak value V_o. The output of the low pass filter realized with OA4 is the average value of v_p, which is:

V_f = \frac{1}{T} \int_0^{d} V_o\, dt = V_o\, \frac{d}{T} = \frac{V_o^2}{V_R} \quad (4)

Considering KCL at node j in the circuit of Figure 1, we get i_1 + i_3 = i_2, where i_1 = V_f/R_3, i_2 is set by the input through R_4, and i_3 is the small current through R_5. With R_3 = R_4 = R and R_5 \gg R, this balance reduces to:

\frac{V_o^2}{V_R} = V_i \quad \Rightarrow \quad V_o = \sqrt{V_i\, V_R} \quad (5)

Thus the output voltage V_o is proportional to the square root of the input voltage magnitude V_i.

III. Experimental Results and Conclusion

The circuit shown in Figure 1 was implemented and tested in our laboratory. LF356 ICs were used for all operational amplifiers. Switches S_1 and S_2 were realized with a CD4053. The following values were set for the circuit components: V_R = 6 V, R_1 = 200 kΩ, C_1 = 470 pF, R_3 = R_4 = 10 kΩ, R_5 = 1 MΩ. Voltage levels of ±7.5 V were chosen for the power supply. The test results are shown in Table I. The accuracy of the proposed circuit strongly depends upon the sharpness and linearity of the sawtooth waveform. The offset voltages of all operational amplifiers must be nulled for better performance. Small variations in the voltage V_R cause an error at the output; hence a stable precision voltage source must be used for V_R. Small variations in the power supply, however, do not affect the circuit at all. It should be noted that the polarity of the input voltage V_i must be negative and its maximum magnitude must be less than V_R. The experimental results indicate the practical feasibility of the proposed circuit.

Acknowledgement

The author is highly indebted to Prof. Dr.
Enakshi Bhattacharya, Prof. Dr. V. Jagadeesh Kumar, and Dr. Bharath Bhikkaji of the Electrical Engineering Department, IIT Madras, for their constant encouragement throughout the work. He also thanks Mr. Mithun for manuscript formatting.

Table I. Test results on the prototype square rooting circuit

Input (-V_i)   Output voltage (experiment)   Output voltage (calculation)   Error (%)
0.5 V          1.711                         1.732                          -1.20
1.0 V          2.420                         2.449                          -1.18
1.5 V          2.959                         3.000                          -1.34
2.0 V          3.414                         3.464                          -1.44
2.5 V          3.829                         3.872                          -1.10
3.0 V          4.200                         4.242                          -0.99
3.5 V          4.525                         4.582                          -1.24
4.0 V          4.820                         4.890                          -1.43
4.5 V          5.125                         5.196                          -1.35

References
[1] E. O. Doebelin, Measurement Systems: Application and Design, McGraw Hill, New York, 2004
[2] M. A. Atmanand, "Novel schemes for impedance measurement and their implementation through electronic circuits", Ph.D. thesis, IIT Madras, 1996
[3] S. I. Liu, "Square-rooting and vector summation circuits using current conveyors", IEE Proceedings - Circuits, Devices and Systems, Vol. 142, No. 4, pp. 223-226, 1995
[4] V. Riewruja, "Simple square rooting circuit using OTAs", Electronics Letters, Vol. 44, No. 17, pp. 1000-1002, 2008
[5] C. Netbut, M. Kumngern, P. Prommee, K. Dejhan, "New simple square rooting circuits based on translinear current conveyors", ECTI Transactions on Electrical Engineering, Electronics and Communication, Vol. 5, No. 1, pp. 10-17, 2007
[6] W. Tangsrirat, T. Pukkalanun, P. Mongkolwai, W. Surakampontorn, "Simple current mode analog multiplier, divider, square-rooter and squarer based on CDTAs", AEU - International Journal of Electronics and Communications, Vol. 65, No. 3, pp. 198-203, 2011
[7] P. Mongkolwai, D. Prasertsom, W. Tangsrirat, "CFTA-based current controlled current amplifier and its application", 25th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2010), Pattaya, Thailand, 2010
Fig. 1. Circuit diagram of the square rooter.
Fig. 2. Associated waveforms of Figure 1.

Authors Profile

K. C. Selvam was born in 1968 in the Krishnagiri district of Tamil Nadu state, India. He graduated from the Institution of Electronics and Telecommunication Engineers, New Delhi, in 1994, and received the best paper award from IETE in 1996. At present he is working as technical staff in the Controls and Instrumentation Laboratory, Department of Electrical Engineering, Indian Institute of Technology Madras, India.

S. Latha was born in 1967 in the Vilupuram district of Tamil Nadu state, India. She obtained a diploma in Electronics and Communication Engineering from the Periyar Century Polytechnic College, Vallam, Thanjavur, Tamil Nadu, India, in 1985. At present she is working as technical staff in the Department of Electrical Engineering, Indian Institute of Technology Madras, India. Her research interests include microcontroller-based digital measuring instruments.

Engineering, Technology & Applied Science Research, Vol. 9, No.
6, 2019, 4905-4911, www.etasr.com — Regis et al.: Optimal Battery Sizing of a Grid-Connected Residential Photovoltaic System for Cost Minimization Using PSO Algorithm

Optimal Battery Sizing of a Grid-Connected Residential Photovoltaic System for Cost Minimization Using PSO Algorithm

Nibaruta Regis, Department of Electrical Engineering, PAU Institute of Basic Sciences, Technology and Innovation, Nairobi, Kenya, nibaregis@gmail.com
Christopher Maina Muriithi, School of Engineering and Technology, Murang'a University of Technology, Nairobi, Kenya, cmmuriithi@mut.ac.ke
Livingstone Ngoo, Faculty of Engineering and Technology, Multimedia University of Kenya, Nairobi, Kenya, livingngoo@gmail.com

Abstract—This paper proposes a new optimization technique that uses Particle Swarm Optimization (PSO) in residential grid-connected photovoltaic systems. The optimization technique targets the sizing of the battery storage system. With the liberalization of power systems, a residential grid-connected photovoltaic system can supply power to the grid during peak hours, or charge the battery during off-peak hours for later domestic use or for selling back to the grid during peak hours. However, this can only be achieved when the battery energy system of the residential photovoltaic system is optimized. The developed PSO algorithm aims at finding the battery capacity that minimizes the operation cost of the system. The computational efficiency of the developed algorithm is demonstrated using real PV data from Strathmore University. A comparative study of a PV system with and without battery energy storage is carried out, and the simulation results demonstrate that the PV system with battery is more efficient when optimized with PSO.

Keywords—grid-connected PV; electricity surplus; sizing; battery energy storage; electricity prices; net metering; PSO

I. Introduction

Nowadays electricity access plays a vital role, and governments and the private sector are investing in the electricity domain to ensure sustainable development.
climate change and the deregulation of the electric energy market have boosted the integration of renewable energy into electric grids [1]. in many developing countries, reliable access to electricity is still a big challenge. grids are often marked by limited supply and prevailing disruptions. because of this, some electricity users, especially those who own classical grid-connected pv systems, do not derive full benefit from their installations, considering the intermittent nature of solar panels [2]. the hours of high pv production do not necessarily coincide with peak load demand hours. since customers usually experience frequent undesirable power cuts, it is possible that this issue will grow in the future. it is therefore very important to look at alternative ways of minimizing the operational cost of a grid-tied pv system. one of the alternatives is integrating optimally sized energy storage into a grid-connected pv system. there have been various contributions towards cost minimization and energy storage optimization, where different approaches have been investigated. authors in [3, 4] used dynamic programming (dp), whereas in [5, 6] the optimization was performed by means of markov decision processes and the fuzzy clustering method. authors in [7] presented an economic analysis of a pv system under a net-metering scheme. due to the randomness of renewable energy sources (res) and the serial characteristic of the decision problem in the analysis, a metaheuristic approach is preferred. whilst analytical methods usually suffer from problems like slow convergence and dimensionality, metaheuristic-based optimizations are much more effective in handling large-scale nonlinear optimization problems. in [8], a genetic algorithm was developed to optimally size lead acid batteries that run under dynamic pricing strategies in both independent and aggregated ways.
authors in [9] introduced a pso-based algorithm for optimally sizing the constituents of a hybrid renewable system, aiming to maximize the energy production to cover the load at the lowest cost and with enhanced reliability. an improved firefly algorithm was proposed in [10] to optimally locate and size the battery energy storage system for mitigating the voltage rise in a pv-dg integrated distribution network. authors in [11] proposed a two-layer optimization procedure using pso to optimize the battery size of a grid-tied pv system. authors in [12] conducted a comparative study between dp and pso for solving unit commitment problems. authors in [13] proposed a mechanism for minimizing the operation cost of a grid-tied system by optimizing the operation schedule of different energy sources in a residential complex energy system. the optimization was based on the invasive weed optimization technique (iwo), and the impact of selling and buying to/from the utility grid was considered. in this paper, storage optimization and cost minimization are based on pso. a grid-connected pv system with battery is presented with a configuration that allows the pv system owner to either sell or buy energy from the grid depending on the system’s output. the paper focuses on sizing the battery energy storage for a typical customer already owning a 5kw pv system in order to reduce daily electricity bills. corresponding author: nibaruta regis. ii. system modeling and problem formulation mathematical models of the various components that are part of this grid-connected pv system were developed in order to establish an optimal energy flow within the system. this energy flow approach is considered for the modeling process of the system’s components for a time step (δt) of one hour.
the function of battery storage in a grid-tied pv system varies according to its configuration. some configurations use the direct charging method by charging the battery with the pv panels' dc voltage. in this paper, a different topology is set up in which the battery, the pv, the load and the grid are connected to the same ac bus as in figure 1. the battery is connected to the ac bus through an inverter/charger. the pv dc output is also fed to the ac bus via a three-phase dc/ac converter, which is modeled using a 3-level insulated gate bipolar transistor (igbt) bridge. b400_2 is a three-phase v-i measurement block serving as the common ac bus interconnecting the sources and the load. the grid is modeled as an ac source and functions as a swing bus to balance the power demands of the household or absorb pv power. a power (positive sequence) block is used to monitor the active power exchange between the grid and the pv system and serves for net-metering. an arbitrary household load of 3kw has been assumed in order to show how it is connected to the rest of the system. fig. 1. simulink model of the grid-connected pv system with battery. the average hourly pv energy production along with the average hourly load consumption for the considered residence has been calculated and plotted in figure 2. the load energy used for simulation in this paper has a daily average of 12.15kwh, whereas the pv energy output has a daily average of 21.443kwh. this means that only 56.6% of the pv energy is consumed by the load and the rest must be sold to the grid. fig. 2. average hourly load and pv output throughout 2016. a. component modeling and data acquisition 1) pv array model the performance of the solar panels is highly governed by ambient temperature and solar irradiation. the optimal selection of a pv module with respect to the anticipated functional ambient conditions enhances the module’s performance and therefore increases energy production.
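the ac-bus bookkeeping and the daily-average figures above can be checked with a short script. the sign convention for the grid exchange follows the swing-bus description; the pv array area is a hypothetical value chosen only so that the array rates at roughly 5 kwp with the 14.91% module efficiency used in this work.

```python
def grid_power(p_load_kw, p_pv_kw, p_bat_kw):
    """Power balance on the AC bus: positive means buying from the grid,
    negative means selling; p_bat_kw > 0 while the battery charges."""
    return p_load_kw - p_pv_kw + p_bat_kw

def pv_power(irradiance_w_m2, area_m2=33.5, efficiency=0.1491):
    """Linear PV model: output proportional to irradiance, in kW.
    area_m2 is an assumed figure giving ~5 kWp at 1000 W/m2."""
    return irradiance_w_m2 * area_m2 * efficiency / 1000.0

# daily-average self-consumption share quoted in the text
daily_load_kwh, daily_pv_kwh = 12.15, 21.443
share = daily_load_kwh / daily_pv_kwh * 100  # ~56.7%; the paper reports 56.6%
surplus_kwh = daily_pv_kwh - daily_load_kwh  # energy available for sale
```

with these assumed panel dimensions, `pv_power(1000)` returns about 4.99 kw, i.e. roughly the 5 kwp rating of the strathmore array.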
for this work, a 5kwp pv array already installed on the top of a garage at strathmore university was considered. fig. 3. monthly pv output power throughout 2016. the pv output is modeled as linear in the solar radiation [9, 13]: p_pv(t) = g(t) × a × η_pv (1), where g(t) is the solar radiation at time interval t, a is the pv panel area and η_pv is the pv efficiency. the raw output data of the 5kwp pv array were collected from strathmore university, nairobi, kenya. the collected data were recorded hourly over the whole year of 2016 (figure 3). the module efficiency was taken as 14.91%. 2) battery storage capacity fading is normally estimated through real-life experimentation by subjecting the battery to different charge/discharge rates, which can be hard and slow. an alternative is a mathematical model, which was developed based on the arrhenius equation in [14]: c_loss = b × exp(−e_a/(r × t)) × e_p (2), where c_loss is the capacity loss, b is the pre-exponential factor, e_a is the activation energy, e_p is the processed energy, and r and t are the universal gas constant and the temperature. in a practical scenario the battery charge and discharge rates change according to the grid and pv behavior, resulting in an increase of capacity fade with increasing charge/discharge rate, soc and temperature. the arrhenius equation of battery capacity fading has been adapted to take into account the charge and discharge rates of the battery. the proposed model depends on the nominal capacity c_n, charge rate c_rc, discharge rate c_rd, energy processed for charging e_c, energy processed for discharging e_d, the gas constant r, and the temperature t: c_loss = b_c × exp(−(a × c_rc × e_c × c_n)/(r × t)) + b_d × exp(−(a × c_rd × e_d × c_n)/(r × t)) (3), where b_c, b_d and a are model constants. this model has the particular advantage of being able to determine the capacity fade of a battery subjected to different charge/discharge rates. the various parameters of (3) can be found from the capacity loss data given by the battery manufacturers. the battery investment cost is taken as 200$/kwh [15] and the battery inverter cost has been estimated at 6006$/kw [16]. 3) grid electricity is expected to either be sold to the grid or purchased from it. for simplicity, we assume that the selling and purchasing prices are equal at time instant t, and we denote them as c_price(d,t). consequently, there is an electric power interchange between the utility grid and the pv system, denoted by p_grid(t), such that p_grid(t) < 0 when electricity is sold to the utility grid and p_grid(t) > 0 when electricity is bought from the grid. the system involves costs and benefits, where costs account for the purchase of electricity from the grid and benefits account for selling electricity to the grid. the electric power exchange p_grid(t) in combination with the pv output power p_pv(t) has to satisfy the power balance requirement: p_grid(t) + p_pv(t) = p_load(t) + p_bat(t), i.e. p_grid(t) = p_load(t) − p_pv(t) + p_bat(t) (4), where p_bat(t) is the charging/discharging rate of the battery on the ac bus and p_load(t) is the load power. 4) residential load in a residential building, load appliances can have a fixed or a relatively flexible schedule, separating load categories (lights and tvs from refrigerators and air conditioners). however, the details of load priorities are not analyzed in this work, as a residential load profile has been collected from the maisy database and adapted to the kenyan context. an hourly load profile for a residential building has been collected and plotted in figure 4. fig. 4. residential monthly load profile throughout 2016. b.
problem formulation and cost calculation the formulated problem is to minimize the sum of the different costs, namely the cost of imported power, the cost of battery degradation, and the annualized inverter cost, as expressed in (5): min f = Σ_{d=1}^{365} Σ_{t=1}^{24} [c_bcl(d,t) + c_cb(d,t)] + c_inv (5), where c_bcl is the cost of battery capacity loss, c_cb is the energy cost and benefit, c_inv is the annualized battery inverter cost, and d and t are the day and time respectively. the pv system is considered as previously installed and its energy is already available to the sample customer. equation (5) emphasizes the impact of the battery capacity loss and of electricity sales/purchases to/from the grid on the overall running cost. it captures the effect of only the battery dc/ac inverter through c_inv (the annualized inverter cost). in order to compare the running costs of a grid-tied pv system with battery storage and a grid-tied pv system without battery storage, the pv investment cost, pv dc/ac inverter cost, installation cost, and replacement cost are not considered, since these four variables are common to both cases.
table i. input parameters
parameter | value | unit
pv capacity | 5 | kw
pv efficiency (η_pv) | 14.91 | %
battery investment cost | 200 | $/kwh
state of charge (soc_min, soc_max) | 30 and 90 | %
aging coefficient (z) | 3 × 10−r | n/a
nominal charging rate | 10 | hrs
self-discharging factor (σ) | 0.0000347 | n/a
pv inverter efficiency (η_inv) | 97 | %
battery inverter efficiency (η_bat) | 94 | %
sampling time interval (δt) | 1 | hr
annual real interest rate (i) | 4 | %
battery charge/discharge efficiency (η_ch/η_disch) | 90 | %
electricity charges | 0.1583 off-peak, 0.3560 on-peak | $/kwh
the cost of the battery capacity fade for a given hour of system operation is expressed as [6]: c_bcl(d,t) = (q_cl(d,t) × c_investment)/(c_n × (1 − soh_min)) (6), where q_cl(d,t) is the battery capacity loss on day d at time t, c_investment is the battery investment cost, c_n is the nominal capacity and soh_min is the battery minimum state of health.
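a minimal sketch of this capacity-fade pricing follows. the 14.4 kwh capacity and the 2880$ investment come from the results reported later in the paper, while the 80% end-of-life threshold soh_min is an assumed value, since the paper does not tabulate it.

```python
def capacity_fade_cost(q_cl_kwh, c_n_kwh=14.4, invest_cost=2880.0, soh_min=0.8):
    """Cost of battery capacity loss: the hourly loss q_cl is priced against
    the investment cost, spread over the usable capacity window
    c_n * (1 - soh_min). soh_min = 0.8 is an assumed end-of-life threshold."""
    return q_cl_kwh * invest_cost / (c_n_kwh * (1.0 - soh_min))
```

for example, a day with 0.00296 kwh of additive capacity loss (the value reported in the results section) would be priced at 2.96$ under these assumed parameters.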
the battery has the following dynamic equation [17]: de_b(d,t)/dt = p_b(d,t) (7), where e_b is the battery energy and p_b(d,t) > 0 during the battery charging state, p_b(d,t) < 0 during the battery discharging state, and p_b(d,t) = 0 during the battery inactive state. in order to take into account the battery aging effect, a usable battery capacity is considered after each sampling time and denoted as c(d,t). obviously, at the initial time t_0 the usable battery capacity is the same as the battery nominal capacity c_n, i.e. c(t_0) = c_n. the usable battery capacity defined above is updated at every sampling interval by subtracting a cumulative battery capacity loss c_bloss(d,t) from the battery nominal capacity, as shown in (8): c(d,t) = c_n − c_bloss(d,t), with c_bloss(t_0) = 0 (8). therefore, the battery capacity loss on day d at time t can be expressed as: q_cl(d,t) = c_bloss(d,t) − c_bloss(d,t − δt) (9). the model of the battery aging is [18]: dc_bloss(d,t)/dt = −z × p_b(d,t) if p_b(d,t) < 0, and 0 otherwise (10). using the conventional efficiency of the battery η_b and the sampling interval δt, the above relation becomes: c_bloss(d,t) = c_bloss(d,t − δt) − (z × p_b(d,t) × δt)/η_b if p_b(d,t) < 0, and c_bloss(d,t) = c_bloss(d,t − δt) otherwise (11), where z is the battery aging coefficient. this expression simply indicates that in the battery-aging model, capacity loss is incurred only during the discharge process. thus, in (9), q_cl(d,t) is equal to zero when the battery is charging. therefore, the calculation of the cost of battery capacity loss in (6) depends entirely on the state of the battery, which is either charging or discharging.
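the piecewise aging update of (10)-(11) can be sketched as follows. the aging coefficient value used here is illustrative only, since its exponent is garbled in the source table; the 90% battery efficiency follows table i.

```python
def update_capacity_loss(c_bloss_prev, p_b_kw, dt_h=1.0, z=3e-5, eta_b=0.9):
    """One-step battery-aging update, following (11): capacity loss grows
    only while discharging (p_b_kw < 0). z is an illustrative aging
    coefficient (its exact exponent is garbled in the source) and eta_b
    is the conventional battery efficiency."""
    if p_b_kw < 0:
        # p_b_kw is negative, so subtracting adds to the cumulative loss
        return c_bloss_prev - (z * p_b_kw * dt_h) / eta_b
    return c_bloss_prev  # charging or inactive: no additional loss
```

a 3 kw discharge for one hour adds z × 3/η_b = 0.0001 kwh of loss under these illustrative parameters, while charging leaves the cumulative loss unchanged.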
the battery state of charge is updated after each sampling period as: soc(d,t) = soc(d,t − δt) × (1 − σ) + η_ch × (p_b(d,t)/c(d,t)) × δt (12a), and soc(d,t) = soc(d,t − δt) × (1 − σ) + η_disch × (p_b(d,t)/c(d,t)) × δt (12b). equation (12a) is for battery charging and (12b) for discharging; c(d,t) is the usable battery capacity and σ is the self-discharging factor. since the system allows purchasing and selling electricity from and to the utility grid, its operation involves both costs and benefits for the system owner, which are combined and denoted together as c_cb. costs account for energy bought from the grid and benefits account for energy sold to it. equation (13) describes the mathematical expression of the cost of the energy exchanged: c_cb(d,t) = c_price(d,t) × p_grid(d,t) × δt (13). considering the two different scenarios that result from the reading on the net-meter, the above equation can be extended to account for the net power readings: c_cb(d,t) = {[c_price(d,t) × p_net1(d,t)] + [c_price(d,t) × p_net2(d,t)]} × δt (14), where p_net1(d,t) > 0 corresponds to the cost of buying power from the grid and p_net2(d,t) < 0 corresponds to the expected benefit from selling excess power to the grid. accordingly, f_1 = Σ_{d=1}^{365} Σ_{t=1}^{24} [c_price(d,t) × p_grid,import(d,t)] (15) and f_2 = Σ_{d=1}^{365} Σ_{t=1}^{24} [c_price(d,t) × p_grid,export(d,t)] (16), where p_grid,import is the power imported from the grid and p_grid,export is the power exported to the grid. the capital recovery factor and the annualized battery cost can be calculated by (17) and (18) respectively: crf = i × (1 + i)^n/((1 + i)^n − 1) (17), where i is the annual interest rate and n is the battery lifetime, and annualized battery cost = c_investment × crf (18). for easier calculations it is assumed that the prices for sales and purchases are identical.
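the soc update of (12) and the annualization of (17)-(18) can be sketched as below. with the capacity expressed in kwh, no separate battery-voltage factor is needed; efficiency and self-discharge values follow table i, and the discharge branch mirrors (12b) as written. note that the crf of 0.100143 quoted later in the paper is reproduced with i = 4% and n = 13 years.

```python
def soc_update(soc_prev, p_b_kw, cap_kwh, dt_h=1.0,
               sigma=0.0000347, eta_ch=0.9, eta_dis=0.9):
    """SOC update of (12a)/(12b): apply the self-discharge factor, then add
    the charged (p_b_kw > 0) or discharged (p_b_kw < 0) energy relative to
    the usable capacity. Parameter values follow Table I."""
    eta = eta_ch if p_b_kw > 0 else eta_dis
    return soc_prev * (1.0 - sigma) + eta * p_b_kw * dt_h / cap_kwh

def crf(i, n):
    """Capital recovery factor of (17): i(1+i)^n / ((1+i)^n - 1)."""
    return i * (1.0 + i) ** n / ((1.0 + i) ** n - 1.0)

# annualized battery cost of (18), with the 2880$ investment from the results
annualized_battery_cost = 2880.0 * crf(0.04, 13)  # ~288.4 $
```

charging a 14.4 kwh battery at 1 kw for one hour, starting from soc = 0.5, raises the soc to about 0.5625 under these parameters.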
the system is subjected to a number of operational constraints: e_b,min ≤ e_b(d,t), 0 ≤ e_b(d,t) ≤ c(d,t), soc_min ≤ soc(t) ≤ soc_max, soh(t) ≥ soh_min (19). the total annual operation cost is therefore calculated as: annual operation cost = Σ_{d=1}^{365} Σ_{t=1}^{24} (c_bcl(d,t) + c_cb(d,t)) + ac_inv (20). the lifetime of the battery is calculated by (21): battery lifespan (n) = (c_n × k)/c_bloss,year (21), where k is a constant and c_bloss,year is the annual additive battery capacity loss. iii. particle swarm optimization algorithm particle swarm optimization is used to solve this optimization problem. the algorithm initializes the particles to search the entire search space for the best solution of the objective function [19]. in this study, the swarm is initialized as s = 1:n_pop, where n_pop is the swarm size, taken as 30 particles. the particles refer to different random battery sizes from 100 to 3000ah. the number of iterations is set to a maximum of 20 and initialized as j = 1:max_it, where max_it is the maximum number of iterations. the maximum and minimum inertia weights are set to 0.9 and 0.2 respectively. the acceleration coefficients are set to 2. each particle is given an initial zero velocity, and for every particle (random battery size) the objective function is evaluated to compare the costs. each particle compares its target value with the best particle's value; if the target value is lower, it adopts this value and records the location of the corresponding particle [20]. velocities and positions are updated after each iteration [21]. each battery capacity has a position and each position has a velocity. the velocity of the k-th particle is updated by: v_k(j+1) = w × v_k(j) + c_1 × r_1 × (p_best,k − x_k(j)) + c_2 × r_2 × (g_best − x_k(j)) (22), where v_k(j+1) is the new velocity of the k-th particle at the j-th iteration, w is the inertia weight, v_k(j) is the old velocity of the k-th particle at the j-th iteration, c_1 and c_2 are the acceleration constants, and r_1 and r_2 are two random numbers in [0, 1]. the position of the k-th particle is updated using (23): x_k(j+1) = x_k(j) + v_k(j+1) (23), where x_k(j) is the old position of the k-th particle at the previous iteration. in this study, each position x is a battery size and its fitness is the corresponding running cost. the objective function is to minimize the total operating cost f. the personal best of each particle is its battery size together with the resulting operating cost. therefore, for every iteration there is a corresponding battery size which, when integrated in this grid-connected pv system, results in a minimum operating cost, which is the global best. fig. 5. flow chart of the proposed pso algorithm. iv. results and discussion in this simulation, the optimization algorithm computes the optimized battery size, the corresponding operation cost of the system, the additive battery degradation or capacity loss, and the battery lifespan. these different values are generated and compared with respect to two scenarios: a grid-connected system with and without battery energy storage. in both cases, we have distinguished import and export energies as p_grid > 0 and p_grid < 0 respectively. similarly, the battery charged and discharged energies are distinguished as p_bat > 0 and p_bat < 0 respectively. this is because the net-metering system needs to calculate the net power. a. grid-connected with battery energy storage in this pso-based optimization, a set of adequate numerical values for the pso parameters was chosen to make the algorithm converge quickly. the population size was set to 30, the maximum number of iterations was set to 20, c1 and c2 were set to 2 and wdamp to 0.99. after initializing the parameters, the pso algorithm was utilized to compute the optimum battery size with respect to cost, as depicted in figure 6.
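the pso loop of (22)-(23) with the settings above (30 particles over 100-3000ah, 20 iterations, inertia decayed from 0.9 to 0.2, c_1 = c_2 = 2) can be sketched as follows. the quadratic `cost` passed in at the bottom is a purely illustrative stand-in for the full operating-cost objective of (5), chosen so its minimum sits at 1200ah.

```python
import random

def pso_minimize(cost, lo=100.0, hi=3000.0, n_particles=30, iters=20,
                 w_max=0.9, w_min=0.2, c1=2.0, c2=2.0, seed=1):
    """Minimal PSO sketch following (22)-(23); `cost` maps a battery
    size (Ah) to an operating cost."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]  # random sizes
    v = [0.0] * n_particles                                # zero velocities
    pbest = x[:]                                           # personal bests
    pbest_f = [cost(p) for p in x]
    g = min(range(n_particles), key=lambda k: pbest_f[k])
    gbest, gbest_f = pbest[g], pbest_f[g]                  # global best
    for j in range(iters):
        w = w_max - (w_max - w_min) * j / max(iters - 1, 1)  # inertia decay
        for k in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[k] = (w * v[k] + c1 * r1 * (pbest[k] - x[k])
                    + c2 * r2 * (gbest - x[k]))            # velocity, (22)
            x[k] = min(max(x[k] + v[k], lo), hi)           # position, (23)
            f = cost(x[k])
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = x[k], f
                if f < gbest_f:
                    gbest, gbest_f = x[k], f
    return gbest, gbest_f

# toy convex cost with a minimum at 1200 Ah (illustrative only)
best, best_cost = pso_minimize(lambda c: (c - 1200.0) ** 2)
```

the fixed seed makes the run repeatable; on this toy objective the swarm settles close to the 1200ah minimum well within the 20 iterations.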
the dots represent the fitness values (total cost for a given battery capacity), and each of them represents a potential solution. according to figure 6(a), a few points lie close to the optimal battery size and could be mistaken for the best solution. this was avoided by magnifying the bottom part of the figure and narrowing the search space to 1000-1600ah, highlighting the final solution, i.e. the optimal battery size of 1200ah, in figure 6(b). this optimal size is equivalent to 14.4kwh, with a corresponding total annual cost of −449.42$, i.e. an annual income of 449.42$ paid to the pv system owner by the utility. the negative sign simply indicates that the energy exported to the grid was higher than the energy imported from the grid, resulting in a benefit for the system owner. the capital cost of the 14.4kwh battery is calculated as 2880$ at a rate of 200$/kwh. the battery lifespan is calculated by (21) as 13.5 years and the battery recovery factor is found to be 0.100143 by (17). the annualized battery cost, or real battery capacity loss cost, is then estimated as 288.4$ using (18). fig. 6. optimal battery capacity with respect to cost. figure 7 shows the optimal energy flow within the system when a 14.4kwh battery storage is installed. the plots are for one sample day of the year (the 53rd) and show that the load demand depends entirely on the grid during off-peak hours (00:00 to 7:00). pv power starts to be available after 7:00 and the peak production of 4.047kw is recorded at 13:00. the load takes only 0.799376kw out of the 4.047kw, and the excess 3.2476kw is fed to the grid to make a profit. fig. 7.
optimal energy flow schedule for one day. figure 8 illustrates how the energy varies in the battery and how the algorithm handles the soc boundary constraints. according to this figure, the battery discharges its energy to the utility grid from 8:00 until its state of charge reaches its minimum. the battery is kept inactive for one hour (13:00 to 14:00), then charges from the pv until 16:00. the battery then releases its energy to the utility grid again until it reaches its minimum soc. from 18:00 to 22:00, the load relies mostly on the grid and the battery is kept inactive. finally, during off-peak hours (22:00 onwards), the battery charges from the utility grid. the algorithm thus only discharges the battery at points where it is advantageous. fig. 8. soc variation of the battery during one regular day. in figure 9, it can be seen that during the sampled day the additive battery capacity loss of this grid-connected pv-battery system increases during discharging, due to the aging effect described in (11). the additive battery capacity loss is 0.167318kwh at the start of the day and 0.170278kwh at its end, totaling a daily additive battery capacity loss of 0.00296kwh, or 2.96wh. the figure shows that the battery degradation increases only during the discharge state, from 8:00 to 13:00 and from 16:00 to 18:00. figure 10 reflects the convergence of the proposed pso algorithm towards the optimal solution for four independent runs. as depicted in this figure, the optimal battery size is reached after about 10 iterations and the optimum solution converges to the same global best for all four runs. if the load demand is increased by 10% for each of the 8760 hours, the results show that the optimal battery capacity remains the same but the battery lifetime decreases by 3.7%. a change of 21% in the annual income is also recorded. if the load is decreased by the same percentage, the same proportions apply in favor of the owner.
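the battery figures reported in these results can be checked numerically. the ah-to-kwh conversion implies a 12 v nominal battery voltage, which is an assumption: the paper states that 1200ah is "equivalent to 14.4kwh" without giving the voltage.

```python
# capital cost of the optimal battery (1200 Ah -> 14.4 kWh at 200 $/kWh)
capacity_ah = 1200
v_nominal = 12.0                                  # assumed nominal voltage
capacity_kwh = capacity_ah * v_nominal / 1000.0   # 14.4 kWh
capital_cost = capacity_kwh * 200.0               # 2880 $

# daily additive capacity loss for the sampled day (start vs. end of day)
daily_loss_kwh = 0.170278 - 0.167318              # 0.00296 kWh = 2.96 Wh
```

both results match the values quoted in the text, which supports the 12 v assumption behind the ah-to-kwh conversion.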
on the other hand, if the energy consumption is kept constant and the hourly power profile is decreased by 5%, the results show that the optimal battery capacity changes from 1200ah to 1100ah and the annual income of the pv system owner decreases by 24%. fig. 9. additive battery capacity loss during one day. fig. 10. convergence of the pso for four independent runs. b. grid-connected without battery energy storage if the system has no energy storage, the utility grid acts as both energy storage and energy source. during peak production, the energy surplus is injected into the utility grid. the system starts feeding energy to the grid immediately after the production exceeds the load demand. during nights, cloudy days, or power cuts, the utility grid performs as an energy source to cover the load. fig. 11. energy flow for a typical day of the year. in figure 11, it can be seen that during the hours when pv was unavailable, the load relied entirely on the grid. the pv starts generating from 7:00, however its energy is not enough to fully supply the load before 9:00. during this period, the grid continues to supply energy. between 9:00 and 17:00, the pv energy exceeds the load demand and the excess is injected into the grid. according to the figure, out of the 4.047kw of peak production recorded at 13:00, only 0.79937kw were consumed by the load and the excess 3.2476kw was sold to the grid. from 19:00 onwards the load is entirely covered by the grid. the optimization algorithm returned an annual benefit of 210.4384$ in favor of the system owner, since the annual electricity costs and benefits are the only costs involved in this type of configuration. v. conclusion the current paper presented a pso method for optimally sizing the energy storage of a grid-connected residential pv system.
while satisfying a set of operation and optimization constraints, the goal was two-fold: to lower the amount of power imported from the grid and to minimize the cost of battery degradation caused by the aging effect. the goals were achieved through optimally scheduling the battery operation. simulations were carried out for a system with battery storage and a system without battery storage. the results showed that the utility's electricity charges and the battery degradation costs highly influence the optimal battery capacity determination in a grid-tied pv system. the results also demonstrated that by efficiently and optimally scheduling the battery operation, electricity bills can be significantly reduced. the system without battery returns lower benefits, and its complete dependency on the utility grid during nights and cloudy days makes it less desirable. data availability: the raw data used to obtain the results in this paper can be provided upon request. acknowledgment: the authors acknowledge the african union (au) for funding this research through the pan african university. references [1] y. ru, j. kleissl, s. martinez, “storage size determination for grid-connected photovoltaic systems”, ieee transactions on sustainable energy, vol. 4, no. 1, pp. 68-81, 2013 [2] d. abdoulaye, z. koalaga, f. zougmore, “grid-connected photovoltaic (pv) systems with batteries storage as solution to electrical grid outages in burkina faso”, 1st international symposium on electrical arc and thermal plasmas in africa, ouagadougou, burkina faso, october 17-21, 2012 [3] y. riffonneau, s. bacha, f. barruel, s. ploix, “optimal power flow management for grid connected pv systems with batteries”, ieee transactions on sustainable energy, vol. 2, no. 3, pp. 309-320, 2011 [4] y. choi, h.
kim, “optimal scheduling of energy storage system for self-sustainable base station operation considering battery wear-out cost”, eighth international conference on ubiquitous and future networks, vienna, austria, july 5-8, 2016 [5] s. grillo, a. pievatolo, e. tironi, “optimal storage scheduling using markov decision processes”, ieee transactions on sustainable energy, vol. 7, no. 2, pp. 755-764, 2016 [6] m. gitizadeh, h. fakharzadegan, “battery capacity determination with respect to optimized energy dispatch schedule in grid-connected photovoltaic (pv) systems”, energy, vol. 65, pp. 665-674, 2014 [7] f. mavromatakis, g. viskadouros, g. xanthos, “photovoltaic systems and net metering in greece”, engineering, technology & applied science research, vol. 8, no. 4, pp. 3168-3171, 2018 [8] j. li, “optimal sizing of grid-connected photovoltaic battery systems for residential houses in australia”, renewable energy, vol. 136, pp. 1245-1254, 2019 [9] m. a. mohamed, a. m. eltamaly, a. i. alolah, “pso-based smart grid application for sizing and optimization of hybrid renewable energy systems”, plos one, vol. 11, no. 8, pp. 1-22, 2016 [10] l. a. wong, h. shareef, a. mohamed, a. a. ibrahim, “optimal placement and sizing of energy storage system in distribution network with photovoltaic based distributed generation using improved firefly algorithms”, world academy of science, engineering and technology, international journal of electrical, computer, energetic, electronic and communication engineering, vol. 11, no. 7, pp. 864-872, 2017 [11] m. o. badawy, f. cingoz, y. sozer, “battery storage sizing for a grid tied pv system based on operating cost minimization”, ieee energy conversion congress and exposition, milwaukee, usa, september 18-22, 2016 [12] v. s. borra, k.
debnath, “comparison between the dynamic programming and particle swarm optimization for solving unit commitment problems”, ieee jordan international joint conference on electrical engineering and information technology, amman, jordan, april 9-11, 2019 [13] p. ahmadi, m. h. nazari, s. h. hosseinian, “optimal resources planning of residential complex energy system in a day-ahead market based on invasive weed optimization algorithm”, engineering, technology & applied science research, vol. 7, no. 5, pp. 1934-1939, 2017 [14] k. thirugnanam, h. saini, p. kumar, “mathematical modeling of li-ion battery for charge/discharge rate and capacity fading characteristics using genetic algorithm approach”, ieee transportation electrification conference and expo, dearborn, usa, june 18-20, 2012 [15] d. t. ton, c. j. hanley, g. h. peek, j. d. boyes, solar energy grid integration systems: energy storage (segis-es), sandia national laboratories, 2008 [16] e. mckenna, m. mcmanus, s. cooper, m. thomson, “economic and environmental impact of lead-acid batteries in grid-connected domestic pv systems”, applied energy, vol. 104, pp. 239-249, 2013 [17] p. mohanty, k. r. sharma, m. gujar, m. kolhe, a. n. azmi, “pv system design for off-grid applications”, in: solar photovoltaic system applications: a guidebook for off-grid electrification, springer, 2015 [18] y. riffonneau, s. bacha, f. barruel, s. ploix, “optimal power flow management for grid connected pv systems with batteries”, ieee transactions on sustainable energy, vol. 2, no. 3, pp. 309-320, 2011 [19] l. ravi, c. v. kumar, m. r. babu, “stochastic optimal management of renewable microgrid using simplified particle swarm optimization algorithm”, 4th international conference on electrical energy systems, chennai, india, february 7-9, 2018 [20] k. yenchamchalit, y. kongjeen, k. bhumkittipich, n.
mithulananthan, “optimal sizing and location of the charging station for plug-in electric vehicles using the particle swarm optimization technique”, international electrical engineering congress, krabi, thailand, march 7-9, 2018 [21] d. truong, “hybrid pso-optimized anfis-based model to improve dynamic voltage stability”, engineering, technology & applied science research, vol. 9, no. 4, pp. 4384-4388, 2019. engineering, technology & applied science research vol. 9, no. 5, 2019, 4596-4599 www.etasr.com bheel et al.: effect of tile powder used as a cementitious material on the mechanical properties … effect of tile powder used as a cementitious material on the mechanical properties of concrete. naraindas bheel, department of civil engineering, mehran uet, sindh, pakistan, naraindas04@gmail.com; rameez ali abbasi, department of civil engineering, indus university, karachi, pakistan, engr.rameez13@gmail.com; samiullah sohu, department of civil engineering, quest campus, sindh, pakistan, sohoosamiullah@gmail.com; sohail ahmed abbasi, department of civil engineering, quest campus, sindh, pakistan, engr.sohail63@gmail.com; abdul wahab abro, department of civil engineering, mehran uet, sindh, pakistan, ablwab82@gmail.com; zubair hussain shaikh, department of civil engineering, mehran uet, sindh, pakistan, azubair.shaikh56@gmail.com. abstract—this study was undertaken to reduce the usage of cement in concrete by using different proportions of tile powder as cement replacement. since an enormous amount of carbon dioxide is released into the environment during the manufacture of cement, this research aims to curtail the dependence on cement and its production. the objective of this work is to investigate the properties of fresh concrete (workability) and hardened concrete (compressive and splitting tensile strength) with different proportions of 0%, 10%, 20%, 30%, and 40% of tile powder as a cement substitute.
In this study, a total of 90 concrete samples were cast with a mix proportion of 1:1.5:3 and a 0.5 water-cement ratio, and cured for 7, 14, and 28 days. Cubical samples of 100mm×100mm×100mm were cast to determine the compressive strength, while cylindrical samples of 100mm diameter and 200mm height were tested for splitting tensile strength after 7, 14, and 28 days. The highest gain in compressive strength, 7.50%, was achieved at 10% replacement after 28 days of curing. The splitting tensile strength improved by 10.2% when 10% of the cement was replaced by tile powder and the concrete was cured for 28 days. It was also shown that the workability of the fresh concrete increases with increasing tile powder content.

Keywords-tile powder; cement replacement material; strength increase; cement use reduction

I. Introduction

Concrete is the most commonly used building material in the world. It consists of two parts: paste and filler. The paste comprises cement, water, and sometimes other chemical additives, while the aggregate comprises sand and gravel. The paste ties the aggregates together. Aggregates are moderately inert filler constituents which occupy 70% to 80% of the volume of concrete and are therefore expected to affect its properties [1]. For every ton of cement produced, about one ton of carbon dioxide is released into the atmosphere. Cement production corresponds to about 5% of global anthropogenic emissions of carbon dioxide and is also associated with dust and noise [2, 3]. The necessity for more economical and environmentally friendly cementitious materials expanded the interest in materials which can partially replace conventional Portland cement [4-6].
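The compressive and splitting tensile strengths discussed throughout the paper are obtained from failure loads on the 100mm cubes and 100mm×200mm cylinders via the standard load-over-geometry relations (cube strength P/A, splitting strength 2P/(πdL)). A minimal sketch of these relations follows; the load values used are hypothetical illustrations, not measured data from this study:

```python
import math

def compressive_strength(load_kn: float, side_mm: float = 100.0) -> float:
    """Cube compressive strength in MPa: failure load over the loaded face area."""
    area_mm2 = side_mm * side_mm
    return load_kn * 1e3 / area_mm2  # N/mm^2 == MPa

def splitting_tensile_strength(load_kn: float, d_mm: float = 100.0, l_mm: float = 200.0) -> float:
    """Cylinder splitting tensile strength in MPa: 2P / (pi * d * L)."""
    return 2 * load_kn * 1e3 / (math.pi * d_mm * l_mm)

# Hypothetical failure loads (kN), for illustration only:
print(round(compressive_strength(250.0), 2))        # 250 kN on a 100 mm cube -> 25.0
print(round(splitting_tensile_strength(110.0), 2))  # 110 kN on a 100x200 mm cylinder -> 3.5
```

The same two relations apply to every strength percentage reported in the results, since each quoted improvement is a ratio of strengths computed this way.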
The cost of natural resources is increasing constantly, leading to the search for alternatives such as recycled materials, rice husk ash, sawdust ash [7], silica fume, fly ash, coal bottom ash [8], marble powder, and millet husk ash. Ordinary Portland cement (OPC) is also related to several diseases [9-11]. The construction industry can be the final consumer of all waste tile powder and thus help solve this environmental problem [12, 13]. About 100 million tons of tiles are produced annually, and 15% to 30% of total production leaves the tile industry as unrecycled waste. Tile powder has various advantages, such as reduced cost, energy savings, and reduced environmental risks [14]. Tile waste can be used in concrete to improve some of its properties, such as strength. Various studies have been conducted on the use of by-products to increase their effectiveness [15, 16]. The authors in [17] experimented on hardened concrete blended with 10%-30% of crushed tiles as replacement of coarse aggregates and 10%-30% of granite powder as fine aggregates. Concrete samples were prepared and tested at 7, 14, 21, 28, and 90 days. Compressive and splitting tensile strength were enhanced by about 8.02% and 41.6% respectively with the addition of 20% crushed tile and 30% granite powder in concrete cured for 28 days. The authors in [18] studied concrete with 10% and 20% of waste tile used as a replacement of coarse aggregates and 10% and 20% of tile powder as replacement of fine aggregates. The compressive strength reportedly improved by 14.2% with the addition of 10% waste tile and 20% of tile powder after 28 days. The authors in [19] investigated hardened concrete with the addition of 0%-50% crushed ceramic tile powder as a cement substitute.
The compressive strength improved by 18.3% at 28 days when 30% of crushed ceramic tile powder was used in concrete. The tiles were made of natural materials sintered at high temperatures and contained no damaging chemicals; left unused, waste tiles only cause pollution [20]. Therefore, this study endeavors to use the tile powder produced in Pakistan as a cement replacement material. Experimental work was carried out to find the influence of tile powder on the properties of concrete.

Corresponding author: Naraindas Bheel

II. Research Methodology

The aim of this experimental work was to check the workability of fresh concrete and the mechanical properties of hardened concrete (compressive and splitting tensile strength) with various percentages of tile powder utilized as partial cement replacement, in order to quantify the influence of tile powder on the mechanical properties of concrete. For this reason, two types of standard samples (cubes of 100mm×100mm×100mm and cylinders of 100mm diameter and 200mm height) were made in the structural and concrete laboratory. A total of 90 concrete samples were prepared with a mix ratio of 1:1.5:3 and a 0.5 water-cement ratio, and were cured for 7, 14, and 28 days, as shown in Table I. The cube samples were used for compressive strength tests and the cylindrical samples for splitting tensile strength tests under the British Standard (BS) code. Three specimens were cast for each proportion of tile powder and the average value was taken as the final result [6].

Table I.
Concrete Samples Details

Samples                                            | Tile powder | 7 days | 14 days | 28 days
Cube samples (compressive strength testing)        | 0%          | 3      | 3       | 3
                                                   | 10%         | 3      | 3       | 3
                                                   | 20%         | 3      | 3       | 3
                                                   | 30%         | 3      | 3       | 3
                                                   | 40%         | 3      | 3       | 3
                                                   | Total       | 15     | 15      | 15
Cylindrical samples (splitting tensile testing)    | 0%          | 3      | 3       | 3
                                                   | 10%         | 3      | 3       | 3
                                                   | 20%         | 3      | 3       | 3
                                                   | 30%         | 3      | 3       | 3
                                                   | 40%         | 3      | 3       | 3
                                                   | Total       | 15     | 15      | 15

III. Materials Used

A. Cement
Locally available OPC of the brand "Pakland" was used in this experimental work. The tests conducted on the cement are given in Table II.

B. Fine and Coarse Aggregates
Aggregates were obtained from the local market in the region of Hyderabad, Pakistan. Fine aggregates were passed through a 4.75mm sieve to remove unwanted material, and the coarse aggregates used in this work had a size of 20mm. Various tests of fine and coarse aggregates were conducted in the laboratory to assure the quality of the materials, as shown in Table III.

Table II. Cement Tests

Test                  | Result
Normal consistency    | 33%
Initial setting time  | 45 min
Final setting time    | 220 min

Table III. Properties of Aggregates

Property          | Fine aggregates | Coarse aggregates
Fineness modulus  | 2.24            | –
Water absorption  | 1.30%           | 0.54%
Specific gravity  | 2.67            | 2.63
Bulk density      | 120 lb/ft³      | 98 lb/ft³

C. Tile Powder
Tile waste is generated during the finishing and polishing of tiles in the industry. This waste was collected in the form of paste and, after drying, hand crushing, and sieving through a #300 sieve, used as cement replacement in concrete.

D. Water
Drinking water was used for mixing the concrete in the laboratory.

IV. Results and Discussion

A. Workability of Fresh Concrete
The workability of concrete was measured with a slump cone. The maximum slump value recorded was 88mm at 40% of tile powder and the minimum was 58mm at 0% of tile powder as cement replacement. The experimental work showed that the workability of concrete increases with increasing quantity of tile powder [18, 21], as shown in Figure 1.

Fig. 1.
Workability of fresh concrete

B. Compressive Strength of Concrete
Cubical samples (100mm×100mm×100mm) were tested to determine the compressive strength of concrete with various proportions of tile powder as cement replacement. At each proportion, three concrete samples were cast and the average value was considered. The compressive strength of concrete improved by 7.50% at 10% of tile powder, while it decreased by about 12.60% when 40% of tile powder substituted cement in concrete cured for 28 days. The compressive strength was reduced at the initial stage of the curing period and increased at the final stages, as presented in Figure 2.

Fig. 2. Compressive strength of concrete

C. Splitting Tensile Strength of Concrete
The cylindrical samples were used for determining the splitting tensile strength of concrete. At each proportion of tile powder, three concrete samples were cast and the average value was considered. The splitting tensile strength improved by up to 10.20% when 10% of tile powder was used, and decreased by 8.0% when 40% of tile powder substituted cement in concrete cured for 28 days. The splitting tensile strength was reduced at the initial stage of the curing period and increased at the final stages, as displayed in Figure 3.

Fig. 3. Splitting tensile strength of concrete

V. Conclusions
• The maximum slump value was recorded as 88mm at 40% of tile powder and the minimum as 58mm at 0% of tile powder. The workability of fresh concrete increased with the percentage of tile powder.
• The compressive strength of concrete improved by 7.50% with 10% of tile powder and decreased by 12.60% at 40% of tile powder used as a substitute for cement in concrete cured for 28 days. The compressive strength was reduced at the initial stage of the curing period and increased at the final stages.
• The splitting tensile strength increased by 10.20% at 10% of tile powder and decreased by 8.0% at 40% of tile powder used as cement replacement in concrete cured for 28 days.

References
[1] S. Mindess, J. F. Young, D. Darwin, Concrete, Prentice Hall, 2003
[2] S. Ghosal, S. Moulik, "Use of rice husk ash as partial replacement with cement in concrete-a review", International Journal of Engineering Research, Vol. 4, No. 9, pp. 506-509, 2015
[3] N. D. Bheel, S. L. Meghwar, S. A. Abbasi, L. C. Marwari, J. A. Mugeri, R. A. Abbasi, "Effect of rice husk ash and water-cement ratio on strength of concrete", Civil Engineering Journal, Vol. 4, No. 10, pp. 2373-2382, 2018
[4] A. Goyal, A. M. Anwar, H. Kunio, O. Hidehiko, "Properties of sugarcane bagasse ash and its potential as cement-pozzolana binder", Twelfth International Colloquium on Structural and Geotechnical Engineering, Ain Shams, 2007
[5] N. D. Bheel, F. A. Memon, S. L. Meghwar, I. A. Shar, "Millet husk ash as environmental friendly material in cement concrete", 5th International Conference on Energy, Environment and Sustainable Development, Jamshoro, Pakistan, 2018
[6] N. D. Bheel, S. A. Abbasi, S. L. Meghwar, F. A. Shaikh, "Effect of human hair as fibers in cement concrete", International Conference on Sustainable Development in Civil Engineering, Jamshoro, Pakistan, November 23-25, 2017
[7] S. A. Mangi, N. Jamaluddin, M. H. W. Ibrahim, N. Mohamad, S. Sohu, "Utilization of sawdust ash as cement replacement for the concrete production: a review", Engineering Science and Technology International Research Journal, Vol. 1, No. 3, pp. 11-15, 2017
[8] S. A. Mangi, M. H. W. Ibrahim, N. Jamaluddin, M. F. Arshad, F. A. Memon, R. P. Jaya, S. Shahidan, "A review on potential use of coal bottom ash as a supplementary cementing material in sustainable concrete construction", International Journal of Integrated Engineering, Vol. 10, No. 9, pp. 28-36, 2019
[9] V. R. Vummaneni, D. S. R. Murty, M. A. K. Reddy, "Study on strength and behavior of conventionally reinforced short concrete columns with cement from industrial wastes under uniaxial bending", International Journal of Civil Engineering and Technology, Vol. 7, No. 6, pp. 408-417, 2016
[10] N. Bheel, S. L. Meghwar, S. Sohu, A. R. Khoso, A. Kumar, Z. H. Shaikh, "Experimental study on recycled concrete aggregates with rice husk ash as partial cement replacement", Civil Engineering Journal, Vol. 4, No. 10, pp. 2305-2314, 2018
[11] N. Bheel, A. W. Abro, I. A. Shar, A. A. Dayo, S. Shaikh, Z. H. Shaikh, "Use of rice husk ash as cementitious material in concrete", Engineering, Technology & Applied Science Research, Vol. 9, No. 3, pp. 4209-4212, 2019
[12] F. P. Torgal, S. Jalali, "Compressive strength and durability properties of ceramic wastes based concrete", Materials and Structures, Vol. 44, No. 1, pp. 155-167, 2011
[13] E. Fatima, A. Jhamb, R. Kumar, "Ceramic dust as construction material in rigid pavement", American Journal of Civil Engineering and Architecture, Vol. 1, No. 5, pp. 112-116, 2013
[14] V. S. N. V. L. Ganesh, N. C. Rao, E. V. R. Rao, "Partial replacement of cement with tile powder in M40 grade concrete", International Journal of Innovations in Engineering Research and Technology, Vol. 5, No. 7, pp. 34-39, 2018
[15] H. Dullah, Z. A. Akasah, N. M. Z. N. Soh, S. A. Mangi, "Compatibility improvement method of empty fruit bunch fibre as a replacement material in cement bonded boards: a review", IOP Conference Series: Materials Science and Engineering, Vol. 271, No. 1, Article ID 012076, 2017
[16] I. S. Yadav, Laboratory Investigations of the Properties of Concrete Containing Recycled Plastic Aggregates, MSc Thesis, Thapar University, 2008
[17] M. Padma, M. N. Rao, "Influence of granite powder as partial replacement of fine aggregate and crushed tiles as coarse aggregate in concrete properties", International Journal for Modern Trends in Science and Technology, Vol. 3, No. 5, pp. 9-14, 2017
[18] C. H. Kumar, K. A. Ramakrishna, K. S. Babu, T. Guravaiah, N. Naveen, S. Jani, "Effect of waste ceramic tiles in partial replacement of coarse and fine aggregate of concrete", International Advanced Research Journal of Science, Engineering and Technology, Vol. 2, No. 6, pp. 13-16, 2015
[19] S. Aswin, V. Mohanalakshmi, A. A. Rajesh, "Effects of ceramic tile powder on properties of concrete and paver block", Global Research and Development Journal for Engineering, Vol. 3, No. 4, pp. 84-87, 2018
[20] I. B. Topcu, M. Canbaz, "Utilization of crushed tile as aggregate in concrete", Iranian Journal of Science & Technology, Transaction B, Engineering, Vol. 31, No. B5, pp. 561-565, 2007
[21] S. A. Mangi, N. Jamaluddin, M. H. W. Ibrahim, A. H. Abdullah, A. S. M. A. Awal, S. Sohu, N. Ali, "Utilization of sugarcane bagasse ash in concrete as partial replacement of cement", IOP Conference Series: Materials Science and Engineering, Vol. 271, No. 1, Article ID 012001, 2017

Engineering, Technology & Applied Science Research, Vol. 9, No. 5, 2019, 4581-4585
www.etasr.com

A Novel High-Gain Quad-Band Antenna with AMC Metasurface for Satellite Positioning Systems

Amira Bousselmi, Microwave Electronics Research Laboratory, Faculty of Sciences of Tunis, Tunis El Manar University, El Manar, Tunis, Tunisia, bousselmiamira@gmail.com
Ali Gharsallah, Microwave Electronics Research Laboratory, Faculty of Sciences of Tunis, Tunis El Manar University, El Manar, Tunis, Tunisia, ali.gharsallah@fst.utm.tn
Tan Phu Vuong, IMEP-LAHC, Institute of Microelectronics, Electromagnetism and Photonics, Grenoble, France, tanphu.vuong@gmail.com

Abstract—In this paper, a new single-feed multi-band antenna design is presented. The proposed antenna is designed to operate at the 1.278GHz, 2.8GHz, 5.7GHz, and 10GHz frequency bands, which cover the Galileo satellite positioning system (1.278GHz), WLAN (2.8GHz), WiMAX (5.7GHz), and radar applications (10GHz), respectively. The antenna has a compact size: it is printed on an FR4 substrate of 60mm×27.5mm×1.67mm placed on a ground plane of 60mm×17.5mm×0.035mm. To improve the radiation performance of the proposed antenna, an artificial magnetic conductor (AMC) was used as a reflector plane with dimensions of 13.5mm×13.5mm×1mm. The simulated and measured results are in good agreement and show a significant improvement of the gain of the multiband antenna with AMC, which is a required property for novel wireless communication systems.

Keywords-antenna design; multiband antenna; Galileo; AMC metasurface

I. Introduction

Due to the strong emergence of future generations of mobile communication systems, such as wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX), low-cost, multiband, and compact antennas are required [1, 2].
In recent years, a new navigation system named Galileo has been developed by the European Union, expected to be rolled out in 2020. A high level of quality will be provided as part of the fee-based services offered to professionals, with guaranteed global positioning service under civilian control [3, 4]. It will be interoperable with GPS and GLONASS, the two other global satellite navigation systems [5]. For this reason, many techniques have been used recently to design planar multi-band antennas [6-9]. In [6], a planar dual-band monopole antenna was proposed; its radiator consisted of a short stem connecting two branches and generated two frequency bands at around 2.4 and 3.4GHz for WiMAX applications. In [7], a three-band square slot antenna with symmetrical L strips was discussed. The antenna was able to operate at 2.5, 3.5, and 5.2GHz, covering WLAN and WiMAX. A four-band slot antenna was proposed in [8] using several stubs on an ultra-wideband slot radiator. Although the size of that antenna was compact, a low gain of only -6 to -4dBi was achieved in the 1.5 to 3GHz band, which is insufficient for many practical uses. In this paper, a quad-band antenna is proposed. The planar antenna consists of two rectangular patches forming an L shape, implemented on an FR4 substrate with a ground plane. The size of the ground plane is reduced to generate the first two resonance frequencies. Thus, the proposed design covers Galileo (1.278GHz), WLAN (2.8GHz, bandwidth 2.79-3.1GHz), WiMAX (5.7GHz, bandwidth 5.65-5.75GHz), and radar applications (10GHz). Measured and simulated results of the fabricated prototype show good agreement. Then, in order to further improve the radiation performance of the proposed design, especially the gain, an artificial magnetic conductor (AMC) structure acting as a reflector plane was employed.
The AMC has been extensively used in recent decades for its outstanding advantages and its ability to increase the gain and efficiency and to reduce the size of antennas [9-11]. Therefore, a significant improvement in the gain at the four operation bands of the proposed quad-band antenna with AMC was obtained in both measurement and simulation.

II. Quad-Band Antenna Without AMC Metasurface

A. Antenna Design
The designed structure of the quad-band antenna is presented in Figure 1, where the top and the bottom views are shown. The antenna is built on an FR4 substrate with dielectric permittivity εr=4.4, loss tangent tanδ=0.025, and thickness of 1.6mm, plus a copper thickness of 0.035mm on both sides of the substrate. The size of the ground plane is reduced to 60mm×17.5mm×0.035mm to obtain the quad-band operation. Besides, two rectangular patch slots forming an L shape are created to generate the two higher resonant frequencies. To verify the proposed antenna design, a prototype was fabricated (Figure 2) and measured. Table I summarizes all the geometric parameters of the proposed design.

Corresponding author: Amira Bousselmi

B. Results of the Quad-Band Antenna Without AMC
The antenna was designed using the electromagnetic simulation software CST. A prototype was fabricated and experimentally tested. The reflection coefficient of the fabricated quad-band antenna, measured by a Rohde & Schwarz ZVA 67 vector network analyzer, is compared with the simulation results in Figure 3. A good agreement is observed between the results.
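The bandwidths reported for each band in Table II follow the usual convention of reading the span over which S11 stays below -10dB. The extraction step can be expressed as a small sketch; the sweep values below are synthetic illustrations, not the measured curve from Figure 3:

```python
def matched_bands(freq_ghz, s11_db_vals, threshold=-10.0):
    """Return contiguous frequency spans where S11 (in dB) stays below
    the matching threshold (the -10 dB bandwidth convention)."""
    bands, start, prev = [], None, None
    for f, v in zip(freq_ghz, s11_db_vals):
        if v < threshold and start is None:
            start = f                      # band opens here
        elif v >= threshold and start is not None:
            bands.append((start, prev))    # band closed at the previous point
            start = None
        prev = f
    if start is not None:                  # band still open at sweep end
        bands.append((start, prev))
    return bands

# Synthetic sweep (illustrative only):
freqs = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
s11 = [-5.0, -12.0, -15.0, -11.0, -6.0, -4.0]
print(matched_bands(freqs, s11))  # [(1.1, 1.3)]
```

On a real measurement the band edges would be interpolated between sample points rather than snapped to them; the sketch keeps the grid resolution for brevity.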
The proposed design should operate at four frequency bands, 1.278GHz, 2.8GHz, 5.7GHz, and 10GHz, whereas the measured ones are 1.3GHz, 2.82GHz, 5.5GHz, and 9.5GHz. The small discrepancy at the higher resonance frequencies is attributed to the fabrication tolerances of the SMA connector, which are not considered in the simulation. More details about the simulated and measured return loss results are recapitulated in Table II.

Fig. 1. The geometry of the proposed quad-band antenna: (a) top view, (b) bottom view, (c) left side
Fig. 2. The fabricated prototype of the quad-band antenna: (a) top view, (b) bottom view

Table I. Antenna Dimensions

Parameter | Value (mm)
L         | 60
W         | 27.5
a         | 25.5
b         | 20.6
Wa        | 1.90
Wb        | 1.4
W1        | 17.5

Fig. 3. Measured and simulated S11 of the proposed multiband antenna without AMC

Table II. Measured and Simulated Return Loss of the Quad-Band Antenna

Band                  | Quantity                  | Simulated | Measured
First band (Galileo)  | Resonant frequency (GHz)  | 1.278     | 1.3
                      | Bandwidth (GHz)           | 0.15      | 0.15
                      | Matching level (dB)       | -13       | -23
Second band (WLAN)    | Resonant frequency (GHz)  | 2.8       | 2.82
                      | Bandwidth (GHz)           | 0.31      | 0.58
                      | Matching level (dB)       | -25       | -9
Third band (WiMAX)    | Resonant frequency (GHz)  | 5.7       | 5.5
                      | Bandwidth (GHz)           | 1         | 1.2
                      | Matching level (dB)       | -20.47    | -15
Fourth band (radars)  | Resonant frequency (GHz)  | 10        | 9.5
                      | Bandwidth (GHz)           | 1.1       | 2.3
                      | Matching level (dB)       | -30.7     | -20

Figure 4 shows the simulated 3D gain of the antenna without AMC at the four resonance frequencies. The gain is about -4.24dB in the lower band at 1.278GHz, -0.74dB in the second band at 2.8GHz, 1.29dB in the third band at 5.7GHz, and 2.89dB in the upper band at 10GHz. In addition, an omnidirectional radiation pattern is obtained at all resonant frequencies. The proposed design achieves a low gain, especially at the first two resonances, which may not be suitable for many modern communication applications.
Therefore, a solution to increase the gain and the radiation performance of the design is described below.

C. Parametric Effect
This section presents a parametric study of the elements of the antenna to understand their operation and the role of the various radiating elements.

Fig. 4. Simulated gain of the proposed quad-band antenna at the resonance frequencies: (a) 1.278GHz, (b) 2.8GHz, (c) 5.7GHz, (d) 10GHz

The first parameter is the width of the second slot. For the first band, this parameter has no effect on the frequency. For the other three bands, increasing its value increases the resonant frequencies of 2.8GHz, 5.7GHz, and 10GHz. A value of 0 was found to be ideal for optimizing the antenna. The second parameter is the width b of the vertical slot; varying it from 0 to 3mm changes the reflection coefficient of the proposed antenna. For the first band, b has no effect on the resonance frequency. For the second band, increasing b decreases the resonance frequency while keeping the same bandwidth. For the third band, increasing b increases the bandwidth, and the same stands for the last band, together with a resonance frequency reduction. A value of b=1.5mm was found to be ideal for optimizing the antenna. Decreasing the length of the second slot affects the higher frequencies.

III. Quad-Band Antenna with AMC Metasurface for Gain Improvement
The AMC structure can be used as a reflector plane to improve the radiation performance and the antenna gain. Indeed, these structures allow more compact antennas by favoring their directivity.

A.
Design of the Quad-Band Antenna with AMC
Generally, an AMC is defined over the frequency band in which the phase of the reflection coefficient lies between -90° and 90°, i.e. the band where in-phase interference occurs between the incident and the reflected wave [12]. This characterization is usually performed by illuminating the AMC with a plane wave at normal incidence, considering an AMC consisting of an infinite number of cells. To reduce the interaction between the AMC and the antenna with which it will be associated, it is important that the size of the unit cells composing the AMC be smaller than the dimensions of the antenna. Figure 5 presents the designed and fabricated prototype of the AMC unit cell.

Fig. 5. The proposed AMC unit cell: (a) geometry, (b) fabricated unit cell, (c) top view of the metasurface plane

The AMC structure consists of two rings, where the inner ring enables the operation at the lower frequency and the outer ring provides the third resonance frequency. The radii of the outer and inner rings are 9.7mm and 1.1mm, respectively. The AMC cell is built on a Rogers RO4003 substrate. To obtain operation at the four desired frequency bands, the unit cell is arranged into a 2×4 array to form the AMC plane, as shown in Figure 5(c). The overall quad-band AMC structure, with dimensions of 108mm×54mm, consists of eight unit cells of 27mm×27mm. This configuration was selected through several parametric studies. The simulated reflection phase of the AMC unit cell reveals its operating bands, which are marked by the ±90° variations around the 0° phase value, as shown in Figure 6.
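The ±90° reflection-phase criterion used to read the AMC operating bands off the simulated phase curve can be stated directly as code; the phase samples below are synthetic illustrations, not the simulated curve of Figure 6:

```python
def amc_bands(freq_ghz, phase_deg):
    """Return frequency spans where the reflection phase lies within
    +/-90 degrees — the usual AMC operating-band criterion."""
    bands, start, prev = [], None, None
    for f, p in zip(freq_ghz, phase_deg):
        inside = -90.0 <= p <= 90.0
        if inside and start is None:
            start = f                      # band opens here
        elif not inside and start is not None:
            bands.append((start, prev))    # band closed at the previous point
            start = None
        prev = f
    if start is not None:                  # band still open at sweep end
        bands.append((start, prev))
    return bands

# Synthetic phase sweep around the first resonance (illustrative only):
freqs = [1.0, 1.2, 1.278, 1.4, 1.6]
phase = [150.0, 60.0, 0.0, -70.0, -140.0]
print(amc_bands(freqs, phase))  # [(1.2, 1.4)]
```

The 0° crossing inside each returned span corresponds to the in-phase reflection condition the paper reports at 1.278, 2.8, 5.7, and 10GHz.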
Hence, it can be noted that 0° reflection phase is obtained at the 1.278, 2.8, 5.7, and 10GHz operating frequencies, which correspond to the bands of the proposed quad-band antenna.

Fig. 6. Reflection phase of the proposed AMC unit cell

B. Results of the Quad-Band Antenna with AMC
Figure 7 presents the configuration of the proposed quad-band antenna integrated with the AMC plane. The spacing g between the AMC plane and the antenna is optimized to avoid mutual coupling while keeping the antenna as thin as possible. In practice, a foam substrate is used to separate the AMC and the antenna. This configuration aims to improve the gain and efficiency of the antenna without reducing its bandwidth.

Fig. 7. Fabricated multiband antenna placed above the AMC plane

The S-parameters of the quad-band antenna with AMC were measured with the vector network analyzer. The measured S11 of the antenna with the AMC is compared with the results without AMC in Figure 8. A good agreement is observed, which indicates that the placement of the AMC surface does not perturb the resonance frequencies of the proposed antenna.

Fig. 8. Measured results of the multiband antenna with and without the AMC metasurface

The radiation performance of the fabricated antenna with the AMC was measured in an anechoic chamber. Figure 9 compares the measured gain and directivity over frequency of the proposed design in both cases, with and without the AMC plane. A significant gain increase of more than 4dB is observed at all operating frequency bands. At the resonance frequencies of 1.278, 2.8, 5.7, and 10GHz, the gain values increase to 1.12, 4.5, 4.92, and 5.66dB respectively. These results confirm the behavior expected from the artificial magnetic conductor.

Fig. 9.
measured gain and directivity of the quad band antenna with and without amc metasurface the performances of the proposed quad band antenna and some other reported multi-band antennas [13-15] are compared in table iii. it is evident that the proposed design described in this study is characterized by high gain values which make it suitable for modern communication systems. engineering, technology & applied science research vol. 9, no. 5, 2019, 4581-4585 4585 www.etasr.com bousselmi et al.: a novel high-gain quad-band antenna with amc metasurface for satellite … table iii. comparison of the proposed quad-band antenna with other multi band antennas ref frequencies (ghz) gain (dbi) [13] tri band 2.46 2.33 3.59 3.134 5.69 2.89 [14] four band 1.5 -6 1.8 not reported 2.4 not reported 5.5 2.5 [15] four band 1.57 3.55 2.45 3.93 3.55 5.02 5.2 4.86 proposed 1.278 1.12 2.8 4.5 5.7 4.92 10 5.66 iv. conclusion the main objective of this paper is the design, fabrication and measurement of a new multiband antenna operating in the galileo e6, wlan, wimax and radar application bands. the design was fabricated and tested and the measured results showed a favorable agreement with the simulated ones. an amc metasurface was employed to increase the gain of the antenna by more than 4db at all operating frequencies. the proposed antenna is suitable for satellite positioning systems. references [1] t. y. wu, k. l wong., “on the impedance bandwidth of a planar inverted-f antenna for mobile handsets”, microwave and optical technology letters, vol. 32, no. 4, pp. 249-251, 2002 [2] l. m. mortensen, “growth responses of some greenhouse plants to environment. iii. design and function of a growth chamber prototype”, scientia horticulturae, vol. 16, no. 1, pp. 57-63, 1982 [3] m. c. huynh, w. stutzman, “ground plane effects on planar inverted-f antenna (pifa) performance”, iee proceedings microwaves, antennas and propagation, vol. 150, no. 4, pp. 209-213, 2003 [4] c. j. 
hegarty, e chatre, “evolution of the global navigation satellite systems (gnss)”, proceedings of the ieee, vol. 96, no. 12, pp. 1902– 1917, 2008 [5] p. ciais, r staraj, g .kossiavas, c. luxey, “design of an internal quadband antenna for mobile phones”, ieee microwave and wireless components letters, vol. 14, no. 4, pp. 148–150, 2004 [6] r. l. fante, j. j .vacarro, “cancellation of jammers and jammers multipath in a gps receiver”, ieee aerospace and electronic systems magazine, vol. 13, no.11, pp. 25–28, 1998 [7] k. c. r. gupta, r. garg, i. bahl, p. bhartia, microstrip lines and slotlines, artech house, 1996 [8] x. l. sun, s. w. cheung, t. i. yuk, “dual-band monopole antenna with frequency tunable feature for wimax applications”, ieee antennas and wireless propagation letters, vol. 12, pp. 100-103, 2013 [9] w. hu, y. z. yin, p. fei, x. yang, “compact triband square-slot antenna with symmetrical l-strips for wlan/wimax”, ieee antennas and wireless propagation letters, vol. 10, pp. 462-465, 2011 [10] m. bod, h. r. hassani, m. m. samadi taheri, “compact uwb printed slot antenna with extra bluetooth, gsm, and gps bands”, ieee antennas and wireless propagation letters, vol. 11, pp. 531-534, 2012 [11] v. a. a. filho, a. l. p. s. campos, “performance optimization of microstrip antenna array using frequency selective surfaces”, journal of microwaves, optoelectronics and electromagnetic applications, vol. 13, no. 1, pp. 31–46, 2014 [12] k. chen, z. yang, y. feng, b. zhu, j. zhao, t. jiang, “improving microwave antenna gain and bandwidth with phase compensation metasurface”, aip advances, vol. 5, no. 6, article id 067152, 2015 [13] h. oraizi, b. rezaei, “improvement of antenna radiation efficiency by the suppression of surface waves”, journal of electromagnetic analysis and applications, vol. 3, pp. 79-83, 2011 [14] a. p. saghati, m. azarmanesh, r. 
zaker, “a novel switchable single- and multi-frequency triple-slot antenna for 2.4-ghz bluetooth, 3.5-ghz wimax, and 5.8-ghz wlan”, ieee antennas and wireless propagation letters, vol. 9, pp. 534-537, 2010
[15] m. bod, h. r. hassani, m. m. samadi taheri, “compact uwb printed slot antenna with extra bluetooth, gsm, and gps bands”, ieee antennas and wireless propagation letters, vol. 11, pp. 531-534, 2012
[16] y. f. cao, s. w. cheung, t. i. yuk, “a multi-band slot antenna for gps/wimax/wlan systems”, ieee transactions on antennas and propagation, vol. 63, no. 3, pp. 952-958, 2015
engineering, technology & applied science research vol. 9, no. 5, 2019, 4612-4615 4612 www.etasr.com tunio et al.: influence of coarse aggregate gradation on the mechanical properties of concrete, part i …
influence of coarse aggregate gradation on the mechanical properties of concrete, part i: no-fines concrete
zaheer ahmed tunio, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, zaheerahmedtunio@gmail.com
fahad-ul-rehman abro, department of civil engineering, mehran university of engineering and technology, jamshoro, pakistan, fahad.abro@gmail.com
tariq ali, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, tariqdehraj@gmail.com
abdul salam buller, department of civil engineering, quaid-e-awam university of engineering, science & technology, larkana campus, pakistan, buller.salam@quest.edu.pk
muhammad ali abbasi, department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, maliengrs@gmail.com
abstract—it is an accepted fact that in concrete construction, the self-weight of the structure is a major part of its total load. reduction in the unit weight of the concrete results in many advantages.
structural lightweight aggregate concrete (lwac) of adequate strength is now commonly used. in frame structures, the partition walls are free of any loading, and the construction of these non-structural elements with lightweight concrete of low strength would lead to a subsequent reduction of the overall weight of the structure. no-fines concrete is one of the forms of lightweight concrete and is porous in nature. it can be manufactured similarly to normal concrete but with only coarse aggregates and without the sand. thus, it has only two main ingredients, the coarse aggregates and the cement. the coarse aggregates are coated with a thin cement paste layer without fine sand. the current paper is a report of a detailed experimental study carried out on nfc with a fixed cement to aggregate proportion of 1:6 and a 0.40 w/c (water-cement) ratio. coarse aggregates of ten gradations (7mm-4.75mm, 10mm-4.75mm, 10mm-7mm, 13mm-4.75mm, 13mm-7mm, 13mm-10mm, 20mm-4.75mm, 20mm-7mm, 20mm-10mm, 20mm-13mm) were used. specimens of standard sizes were cast to determine the compressive and splitting tensile strength after the specimens were cured in water up to the age of testing (28 days). keywords-no-fines; cement to aggregate mix proportion; unit weight; compressive strength; splitting tensile strength
i. introduction
concrete produced by removing the fine aggregates from normal concrete is termed no-fines concrete (nfc), which is classified as a type of lightweight porous concrete. gravels known as coarse aggregates are coated with a thin layer of cement paste with no fine particles, thus forming a two-phase material system. the coarse material is connected in a point-to-point network, with a small fillet of cement paste holding the particles together and giving strength to the concrete [1]. nfc is usually used in parking areas [2], partition walls [3], as a contamination deterrent [4], in parking lots [5], and as brick material [6].
nfc pavements provide storm water management [7, 8], allowing water and air to percolate underground [9, 10]. generally, nfc has aggregate-cement ratios ranging from 6:1 to 10:1 and w/c ratios ranging from 0.28 to 0.40 [11]. in [11], it was concluded that the highest strengths were obtained with an aggregate-cement ratio of 7:1, and that the strength properties decrease as the aggregate-cement ratio increases. the tensile and flexural strengths of nfc were significantly lower than those obtained from conventional concrete [12, 13]. the strength of nfc is less than normal concrete's because of the existence of more voids within its body [14]. nfc differs from conventional concrete in that it includes no fine aggregates in the mixture. the coarse aggregates are combined with low water content, with a w/c ratio within the range of 0.25 to 0.35, for making an nfc mixture with void contents ranging from 11% to 35%, which results in high water and air permeability [15-17]. previously, nfc was developed with only one coarse aggregate gradation, 20mm-10mm, maintained by appropriate sieves [15]. the current study shows the influence of coarse aggregate gradation on the mechanical properties of nfc with different sizes of gravel.
ii. experimental methodology
the main aim of this study is to investigate the compressive and splitting tensile strength of no-fines concrete. a cement-aggregate (c-a) proportion of 1:6 was adopted for the nfc. ten different coarse aggregate gradations were used (details in table i). the nfc was cast with a 0.4 w/c ratio. ordinary portland cement (opc) conforming to astm c150 was used to manufacture the nfc and nc and cast the specimens of both concretes. (corresponding author: abdul salam buller)
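the batch quantities implied by the mix proportions above can be estimated with a short calculation. the sketch below is illustrative and not part of the study: the fresh density of 1883 kg/m³ (the average nfc unit weight reported in this study) and the assumption of zero mixing waste are my own.

```python
import math

# specimen counts and dimensions from the study (mm converted to m)
n_cubes, cube_side = 60, 0.150
n_cyls, cyl_d, cyl_h = 60, 0.150, 0.300

vol_cubes = n_cubes * cube_side ** 3
vol_cyls = n_cyls * math.pi * (cyl_d / 2) ** 2 * cyl_h
total_vol = vol_cubes + vol_cyls  # m^3

# mix proportions from the study: cement:aggregate = 1:6 (by mass), w/c = 0.40.
# the fresh density of 1883 kg/m^3 is an assumption (average reported nfc unit weight).
density = 1883.0
total_mass = density * total_vol
parts = 1.0 + 6.0 + 0.40  # cement + aggregate + water, per unit mass of cement
cement = total_mass / parts
aggregate = 6.0 * cement
water = 0.40 * cement

print(f"total volume  : {total_vol * 1000:.1f} litres")
print(f"cement mass   : {cement:.1f} kg")
print(f"aggregate mass: {aggregate:.1f} kg")
print(f"water mass    : {water:.1f} kg")
```

for the 60 cubes and 60 cylinders of the study this gives roughly half a cubic metre of concrete; scaling the specimen counts rescales all three masses proportionally.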
the crushed stones used as coarse aggregates were obtained from the local market. they were washed and air dried, and then sieved to achieve the specified aggregate gradations. potable water was used for casting and curing of the specimens. all the ingredients of each mix were batched accordingly, followed by proper mixing in an electrically operated mixer. a total of 60 cube specimens of standard size 150mm×150mm×150mm and 60 cylinders of standard size 150mm×300mm were cast for nfc. the specimens were demoulded 24 hours after casting and were kept in a curing tank up to the age of testing. wet curing was applied. all specimens were tested after 28 days of curing.

table i. batch details
s. no.  aggregate gradation (max-min)* (mm)
01      7-4.75
02      10-4.75
03      10-7
04      13-4.75
05      13-7
06      13-10
07      20-4.75
08      20-7
09      20-10
10      20-13
*nfc with gravel varying between max and min in mm.

iii. results and discussion
a. compressive strength of nfc
the cubes were tested to measure the compressive strength of concrete. compressive strength tests were conducted in a universal testing machine (utm). the cubes were placed between the plates of the utm and load was applied gradually until the cubes were crushed. the load at crushing failure was recorded. the load is divided by the cross-sectional area of the cube to determine the ultimate compressive stress using (1). the results of the average compressive strength are presented in table ii.
fcu = p / a (1)
where p is the failure load and a is the cross-sectional area of the cube.
b. splitting tensile strength of nfc
the split cylinder test was carried out to measure the tensile strength of concrete. the cylinders were placed horizontally between the two plates of the utm and load was applied gradually on the center of the cylinder until failure. the load at failure was recorded and calculations were made with (2) to determine the tensile strength. the results of the average splitting tensile strength are presented in table iii.
ft = 2p / (πdl) (2)
where d is the diameter and l the length of the cylinder.
c.
unit weight
in the unit weight test, the weight of the specimens was recorded before testing. the results are presented in table iv, from which it is obvious that coarse aggregate gradation affects the unit weight of nfc. nfc with coarse aggregate gradation of 7mm-4.75mm had the maximum density, equal to 2089 kg/m³, and nfc with coarse aggregate gradation of 13mm-10mm had the lowest, 1754 kg/m³. the average unit weight of nfc was found to be 1883 kg/m³. it may be observed that the unit weight of nfc increases with the widening range of coarse aggregates.

table ii. average compressive strength of nfc
s. no.  aggregate gradation (mm)  fcu (mpa)  fcu (psi)
01      7-4.75    7.31   1059.95
02      10-4.75   4.27   619.15
03      10-7      11.12  1612.40
04      13-4.75   7.23   1048.35
05      13-7      6.87   996.15
06      13-10     3.29   477.05
07      20-4.75   4.66   675.70
08      20-7      7.56   1096.20
09      20-10     6.94   1006.30
10      20-13     5.17   749.65
maximum: 10-7     11.12  1612.40
minimum: 13-10    3.29   477.05

table iii. average splitting tensile strength of nfc
s. no.  aggregate gradation (mm)  ft (mpa)  ft (psi)
01      7-4.75    0.85   123.25
02      10-4.75   1.02   147.90
03      10-7      0.92   133.40
04      13-4.75   1.23   178.35
05      13-7      1.13   163.85
06      13-10     0.89   129.05
07      20-4.75   1.28   185.60
08      20-7      1.26   182.70
09      20-10     1.08   156.60
10      20-13     1.05   152.25
maximum: 20-4.75  1.28   185.60
minimum: 7-4.75   0.85   123.25

table iv. average unit weight of nfc
s. no.  aggregate gradation (mm)  average unit weight (kg/m³)
01      7-4.75    2089
02      10-4.75   1820
03      10-7      1760
04      13-4.75   2021
05      13-7      1909
06      13-10     1754
07      20-4.75   1760
08      20-7      2009
09      20-10     1852
10      20-13     1852
maximum: 7-4.75   2089
minimum: 13-10    1754

fig. 1. a cubic specimen before and after testing in utm
fig. 2. a cylinder specimen before and after testing in utm
fig. 3.
compressive strength with various aggregate gradations and 1:6 c-a proportion at 0.4 w/c ratio
it has been observed that the aggregate gradation significantly affects the compressive strength of concrete. the maximum compressive strength of nfc was 11.12 mpa with the 10mm-7mm coarse aggregate gradation, whereas the lowest compressive strength of nfc was 3.29 mpa with the 13mm-10mm coarse aggregate gradation. the compressive strength of nfc with the 10mm-7mm coarse aggregate gradation is higher by 51.12%, 162%, 52.12%, 61.86%, 236.62%, 47.08%, 60.23%, 115.08% and 120% compared to the gradations of 7-4.75, 10-4.75, 13-4.75, 13-7, 13-10, 20-4.75, 20-7, 20-10, and 20-13 respectively (in mm). it may be concluded that while producing nfc, the gradation of the coarse aggregates should be chosen carefully if the compressive strength is a major parameter. like the compressive strength of nfc, its tensile strength is also greatly affected by the gradation of the coarse aggregates used. the maximum split tensile strength of nfc obtained was 1.28 mpa with the 20mm-4.75mm coarse aggregate gradation and the minimum was 0.85 mpa with the 7mm-4.75mm coarse aggregate gradation. the split tensile strength of nfc with the 20mm-4.75mm coarse aggregate gradation is higher by 50.58%, 25%, 39.13%, 4.06%, 13.27%, 43.82%, 1.58%, 18.5% and 20.9% compared to the gradations of 7-4.75, 10-4.75, 10-7, 13-4.75, 13-7, 13-10, 20-7, 20-10, and 20-13 mm, respectively.
fig. 4. splitting tensile strength with various aggregate gradations and 1:6 c-a proportion at 0.4 w/c ratio
iv. conclusions
different gradations of coarse aggregate were used in nfc, and the effects on unit weight, compressive strength and tensile strength were studied. the results may be summarized as:
• the maximum obtained unit weight was 2089 kg/m³ for nfc with aggregate gradation range of 7mm-4.75mm. the minimum obtained unit weight was 1754 kg/m³ for nfc with aggregate gradation range of 13mm-10mm.
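the strength values discussed above follow directly from (1) and (2). a minimal sketch of both calculations; the failure loads below are back-calculated from the reported strengths purely for illustration, not measured values:

```python
import math

def cube_compressive_strength(load_n: float, side_mm: float = 150.0) -> float:
    """eq. (1): fcu = p / a, in mpa (n/mm^2) for a standard cube."""
    return load_n / (side_mm * side_mm)

def split_tensile_strength(load_n: float, d_mm: float = 150.0, l_mm: float = 300.0) -> float:
    """eq. (2): ft = 2p / (pi * d * l), in mpa for a standard cylinder."""
    return 2.0 * load_n / (math.pi * d_mm * l_mm)

# back-calculated illustrative loads: a 250.2 kn cube load reproduces the
# reported 11.12 mpa maximum, and the cylinder load reproduces 1.28 mpa.
p_cube = 11.12 * 150 * 150                # n
p_cyl = 1.28 * math.pi * 150 * 300 / 2    # n
print(round(cube_compressive_strength(p_cube), 2))   # 11.12
print(round(split_tensile_strength(p_cyl), 2))       # 1.28
```

with loads in newtons and dimensions in millimetres, both expressions come out directly in n/mm², i.e. mpa, so no unit conversion is needed.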
the average unit weight was 1883 kg/m³, which is approximately 21% lower than that of normal weight concrete (2400 kg/m³).
• the variation in the compressive strength of nfc due to coarse aggregate gradation ranged from 11.12 mpa to 3.29 mpa, exhibited by nfc with coarse aggregate gradations of 10mm-7mm and 13mm-10mm respectively.
• the tensile strength of nfc with the different coarse aggregate gradations ranged between 0.85 mpa and 1.28 mpa.
• the behavior of nfc with various coarse aggregate gradations in terms of compressive strength and tensile strength is different, because the compressive strength of nfc is maximum with the 10mm-7mm coarse aggregate gradation whereas the tensile strength is maximum with the 20mm-4.75mm coarse aggregate gradation.
• the behavior of nfc with 7mm-4.75mm, 13mm-4.75mm, and 20mm-7mm coarse aggregate gradations in terms of compressive strength is approximately similar.
• the behavior of nfc with 13mm-4.75mm and 20mm-7mm coarse aggregate gradations in terms of tensile strength is approximately similar.
• it is observed from the results that coarse aggregate gradation has considerable effects on the compressive and tensile strength of concrete.
• the relationship of nfc with various coarse aggregate gradations in terms of compressive strength and split tensile strength is different as compared to normal concrete.
based on the results of this experimental study, it may be concluded that while producing nfc, the gradation of aggregates, c-a ratio and w/c ratio should be chosen appropriately, particularly when the compressive strength is the major parameter of consideration. however, to a limited extent, unit weight and apparent texture also depend upon these factors.
acknowledgment
the authors are grateful to the quaid-e-awam university of engineering, science and technology for providing the research facilities.
references
[1] b. alam, m. javed, q. ali, n. ahmad, m. ibrahim, “mechanical properties of no-fines bloated slate aggregate concrete for construction application, experimental study”, international journal for computational civil and structural engineering, vol. 3, no. 2, pp. 302-312, 2012
[2] r. lomte, “a review on study and analysis of strength, permeability and void ratio of pervious concrete”, international journal for research in applied science & engineering technology, vol. 6, no. 1, pp. 1717-1720, 2018
[3] a. muneeb, m. a. memon, m. a. bhutto, a. lakho, i. a. halepoto, a. n. memon, “effects of uncrushed aggregate on the mechanical properties of no-fines concrete”, engineering, technology & applied science research, vol. 8, no. 3, pp. 2882-2886, 2018
[4] s. ali, s. kacha, “correlation among properties of no fines concrete–a review”, international journal of advance engineering and research development, pp. 87-91, 2017
[5] g. yuvaraj, k. sundaravadivelu, p. vembuli, r. shankaranarayanan, e. ramya, “a study on compressive strength of pervious concrete by varying the size of aggregate”, international journal of engineering science and computing, vol. 7, no. 4, pp. 10149-10152, 2017
[6] k. b. thombre, a. b. more, s. r. bhagat, “investigation of strength and workability in no-fines concrete”, international journal of engineering and technical research, vol. 5, no. 9, pp. 390-393, 2016
[7] g. divya, l. reena, “an experimental study on behaviour of pervious concrete by using addition of admixtures”, international research journal of engineering and technology, vol. 4, no. 3, pp. 2366-2370, 2017
[8] p. p. pragnya, k. b. parikh, a. r. darji, “a review on experimental investigation of pervious concrete using alternate materials”, journal of emerging technologies and innovative research, vol. 4, no. 3, pp. 68-70, 2017
[9] c.
h. s. priyanka, “experimental analysis on high strength pervious concrete”, international journal of advances in mechanical and civil engineering, vol. 4, no. 2, pp. 9-13, 2017
[10] u. m. muthaiyan, s. thirumalai, “studies on the properties of pervious fly ash–cement concrete as a pavement material”, civil & environmental engineering, vol. 4, no. 1, article id 1318802, 2017
[11] k. r. balsaraf, d. r. kurhade, k. a. varpe, n. s. lohote, d. s. mehetre, “a review paper on no fines concrete”, international journal of engineering sciences & management, vol. 7, no. 1, pp. 293-303, 2017
[12] m. kovac, a. sicakova, “pervious concrete as a sustainable solution for pavements in urban areas”, 10th international conference on environmental engineering, vilnius, lithuania, april 27-28, 2017
[13] d. s. shah, j. pitroda, “an experimental study on durability and water absorption properties of pervious concrete”, international journal of research in engineering and technology, vol. 3, no. 3, pp. 439-444, 2014
[14] i. barisic, m. galic, i. n. grubesa, “pervious concrete mix optimization for sustainable pavement solution”, iop conference series: earth and environmental science, vol. 90, article id 012091, 2017
[15] m. nallanathel, b. ramesh, p. h. vardhan, “effect of water cement ratio in pervious concrete”, journal of chemical and pharmaceutical sciences, vol. 6, pp. 200-203, 2017
[16] k. b. thombre, a. b. more, s. r. bhagat, “investigation of strength and workability in no-fines concrete”, international journal of engineering research & technology, vol. 5, no. 9, pp. 390-393, 2016
[17] w. t. kuo, c. c. liu, d. s. su, “use of washed municipal solid waste incinerator bottom ash in pervious concrete”, cement and concrete composites, vol. 37, pp. 328-335, 2013
[18] m. a. alam, s. naz, “experimental study on properties of no-fine concrete”, international journal of informative & futuristic research, vol. 2, no. 10, pp.
3687-3694, 2015
etasr engineering, technology & applied science research vol. 3, no. 4, 2013, 461-466 461 www.etasr.com mahdiuon-rad et al.: analysis of pm magnetization field effects on the unbalanced magnetic …
analysis of pm magnetization field effects on the unbalanced magnetic forces due to rotor eccentricity in bldc motors
s. mahdiuon-rad, department of electrical engineering, sahand university of technology, tabriz, iran, s_mahdiuonrad@sut.ac.ir
s. r. mousavi-aghdam, department of electrical & computer engineering, university of tabriz, tabriz, iran, rmousavi@tabrizu.ac.ir
m. reza feyzi, department of electrical & computer engineering, university of tabriz, tabriz, iran, feyzi@tabrizu.ac.ir
m. b. b. sharifian, department of electrical & computer engineering, university of tabriz, tabriz, iran, sharifian@tabrizu.ac.ir
abstract—this paper investigates both static and dynamic eccentricities in single phase brushless dc (bldc) motors and analyzes the effect of the pm magnetization field on the unbalanced magnetic forces acting on the rotor. three common types of pm magnetization field patterns, including radial, parallel and sinusoidal magnetizations, are considered. in both static and dynamic eccentricities, harmonic components of the unbalanced magnetic forces on the rotor are extracted and analyzed. based on simulation results, the magnetization fields that produce the lowest and highest unbalanced magnetic forces are determined in rotor eccentricity conditions.
keywords-finite element method; single phase brushless dc (bldc) motor; static and dynamic eccentricity; unbalanced magnetic force
i. introduction
rotor eccentricity occurs when an unbalanced air gap exists between the stator and the rotor. as a result, an unbalanced magnetic flux density between the rotor and the stator occurs, which is the main contributor to magnetically induced vibration and noise [1].
rotor eccentricity in a motor can be divided into three categories: static eccentricity, dynamic eccentricity, and their combination [2]. an unbalanced magnetic force exists as long as there is eccentricity between the rotor and the stator, because a portion of the stator is closer to the permanent magnet of the rotor, thus generating a net attraction force acting on the rotor [3]. the unbalanced magnetic force is also important because it has an exhausting effect on the bearings and also generates noise and vibration. on the other hand, when the eccentricity becomes large, the resulting unbalanced radial forces can cause stator-to-rotor rub, and this can result in stator and motor damage [4]. in a surface mounted permanent magnet motor, the permanent magnets act (magnetically) as air gaps between the iron in the stator and rotor. small changes in the actual air gap length have a negligible impact on the motor's effective air gap. this suggests that surface mounted pm motors may be less sensitive to rotor eccentricities than induction motors, and would be good candidates for applications where noise and vibration are significant [5]. therefore, a detailed study of unbalanced magnetic forces due to rotor eccentricity in such motors would be very useful. it should be noted that in the design of permanent magnet motors for high-precision applications, it is sometimes necessary to have a detailed analysis of the effect of rotor eccentricity [6]. in this paper, the pm magnetization field effects on the unbalanced magnetic forces due to both static and dynamic eccentricities in single phase bldc motors are analyzed.
ii. static and dynamic eccentricity
the external rotor permanent magnet motor with rotor eccentricity is schematically shown in figure 1.
fig. 1. schematic of the external rotor pm motor with rotor eccentricity.
the radial distance between the stator axis (o1) and the rotor axis (o2) is defined as the eccentricity of the rotor and is denoted [2] by ge:
ge = e·g (1)
where e is the eccentricity ratio and g is the nominal air gap length. the eccentricity ratio has the following limit [2]:
0 ≤ e < 1 (2)
static eccentricity, with which the rotor is displaced from the stator center but still turns about its own axis o2, can be modeled by assuming e and the eccentricity angle φ as constants [2], so that the position of the minimal radial air gap length is fixed in space and there is a steady pull in one direction. this makes the unbalanced magnetic force difficult to detect unless special equipment is used, which is impractical for motors in service. typical causes of static eccentricity include bearing wear, out-of-tolerance manufacturing and incorrect positioning of the rotor or the stator at the assembly stage. dynamic eccentricity is where the rotor does not rotate on its own axis but rotates on the stator axis, and the center of the rotor is not at the center of rotation, so that the point of minimum air gap rotates with rotor speed. this means that dynamic eccentricity is a function of space and time and can be treated by considering e and φ as functions of time and position. in this case the rotor center rotates around the stator center. dynamic eccentricity produces a radial magnetic pull that rotates at the mechanical speed of the motor and acts directly on the rotor. this makes the unbalanced magnetic force easier to detect by vibration or current monitoring [4].
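the static/dynamic distinction just described can be sketched numerically with a first-order air gap model, g(α) ≈ g(1 − e·cos(α − φ)): for static eccentricity φ is constant, for dynamic eccentricity φ rotates with the rotor. the nominal gap, eccentricity ratio and speed below are illustrative values, not taken from the motor of this paper.

```python
import math

def air_gap(alpha, t, g0, e, omega=0.0, phi0=0.0):
    # first-order approximation: g(alpha) = g0 * (1 - e*cos(alpha - phi)),
    # where phi is the direction of the rotor displacement.
    phi = phi0 + omega * t  # omega = 0 -> static eccentricity (phi constant)
    return g0 * (1.0 - e * math.cos(alpha - phi))

g0, e, omega = 1.0, 0.3, 2 * math.pi  # 1 mm gap, e = 0.3, 1 rev/s (assumed)
angles = [i * math.pi / 180 for i in range(360)]

# static: the position of the minimum gap is fixed in space at all times.
static_min = [min(angles, key=lambda a: air_gap(a, t, g0, e)) for t in (0.0, 0.25)]
# dynamic: the minimum-gap position rotates with the rotor (90 deg in 0.25 s here).
dyn_min = [min(angles, key=lambda a: air_gap(a, t, g0, e, omega)) for t in (0.0, 0.25)]

print([round(math.degrees(a)) for a in static_min])  # [0, 0]
print([round(math.degrees(a)) for a in dyn_min])     # [0, 90]
```

this mirrors the detectability argument above: a minimum gap fixed in space gives a steady pull, while a rotating minimum gap produces a force component at the mechanical rotation frequency that vibration monitoring can pick up.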
dynamic eccentricity could be caused by a bent shaft, mechanical resonances at critical speeds, bearing wear, out-of-tolerance manufacturing, the use of fluid dynamic or aerodynamic bearings, and misalignment of bearings.
iii. different types of pm magnetization fields
the single phase bldc motor geometry considered for simulation is shown in figure 2 [7]. different pm magnetization field patterns can give different air gap field distributions. therefore, the effect of the pm magnetization field on the unbalanced magnetic forces due to rotor eccentricity of the pm motor is investigated in this paper. for surface mounted permanent magnets, three common types of pm magnetization field patterns, including radial, parallel and sinusoidal, are considered as shown in figure 3. the motor specifications are listed in table i.

table i. motor specifications
stator inner diameter: 18 mm
stator outer diameter: 42 mm
rotor inner diameter: 48 mm
rotor outer diameter: 60 mm
permanent magnet thickness: 4 mm
number of poles: 4
coil turns: 72
pm material: ferrite
hc: 200 ka/m
br: 0.25 t
winding type: concentrated winding
rated speed: 6000 rpm

fig. 2. single phase bldc motor geometry.
fig. 3. (a) radial magnetization (b) parallel magnetization (c) sinusoidal magnetization.
iv. static eccentricity analysis and results
unbalanced magnetic forces, also known as radial magnetic forces, have been compared for eccentricity ratios of 0.3, 0.6 and 0.9. figure 4 shows the 2-d finite-element model of the single phase bldc motor used for the calculation of the unbalanced magnetic forces. the radial magnetic force on the rotor is calculated using the maxwell stress tensor at every 3 degrees of rotor position, so a total of 120 calculations are needed for the simulation of a complete revolution. radial magnetic forces on the rotor due to different eccentricities for parallel, radial and sinusoidal magnetization are shown in figures 5-7.
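the maxwell stress tensor calculation described above can be sketched as a numerical integration over a circle in the air gap. the sketch below uses the same 3-degree (120-point) sampling; the flux density profile and the integration radius are assumed illustrative values, not the paper's fem results.

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (h/m)

def maxwell_radial_force(br, bt, radius, n=120):
    """net force per unit axial length (n/m) on the rotor, from integrating the
    maxwell stress tensor over a circle of the given radius in the air gap.
    br(theta), bt(theta): radial/tangential flux density in tesla.
    n = 120 matches the 3-degree sampling of rotor position used in the paper."""
    dtheta = 2 * math.pi / n
    fx = fy = 0.0
    for k in range(n):
        th = k * dtheta
        s_rr = (br(th) ** 2 - bt(th) ** 2) / (2 * MU0)  # radial (normal) stress
        s_rt = br(th) * bt(th) / MU0                    # tangential (shear) stress
        fx += (s_rr * math.cos(th) - s_rt * math.sin(th)) * radius * dtheta
        fy += (s_rr * math.sin(th) + s_rt * math.cos(th)) * radius * dtheta
    return fx, fy

# illustrative field: a uniform gap field plus a one-per-revolution variation,
# as eccentricity produces; b0, db and the gap radius are assumed values.
B0, dB, r = 0.25, 0.05, 0.022
fx, fy = maxwell_radial_force(lambda t: B0 + dB * math.cos(t), lambda t: 0.0, r)
# closed form for this field: fx = pi * r * b0 * db / mu0, fy = 0
print(fx, fy)
```

a perfectly uniform field integrates to zero net force; only the asymmetric part of the gap field (here the one-per-revolution term) contributes, which is why eccentricity produces a net pull.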
in the static eccentricity analysis, the radial magnetic force directions are fixed in a complete revolution and the magnitudes of these forces fluctuate between maximum and minimum values. it is shown that the average unbalanced magnetic force on the rotor and the force ripple increase with the eccentricity ratio. harmonic components of the radial magnetic forces on the rotor for various eccentricity ratios are compared and shown in figure 8. harmonic components of the radial magnetic force increase with the eccentricity ratio, and the 4th, 8th and 12th harmonic components are greater than the other components, as shown in figure 8. as mentioned earlier, in the case of static eccentricity, the position of the minimal radial air gap length is fixed in space. in other words, the part of the stator which is closer to the magnet is stationary and experiences alternating magnet polarity np times per revolution, where np is the number of magnet poles. this leads to np·i harmonics, where i is a positive integer.
fig. 4. 2-d finite-element model of the single phase bldc motor.
fig. 5. radial magnetic forces on the rotor for parallel magnetization.
fig. 6. radial magnetic forces on the rotor for radial magnetization.
fig. 7. radial magnetic forces on the rotor for sinusoidal magnetization.
fig. 8. comparison of harmonic components of radial magnetic forces on the rotor for parallel magnetization.
the magnitude of the unbalanced magnetic force is known to be proportional to the amount of the eccentricity. therefore, an efficient way of comparing two bldc motors in terms of magnetic force characteristics is to assume the same amount of eccentricity in the fem calculation and compare the magnetic force profiles [3].
the averages of these forces are calculated for each case in order to compare the effects of the eccentricity ratio e and the different pm magnetization field patterns on the radial magnetic forces simultaneously, as shown in figure 9. the radial magnetic force on the rotor can be expressed as [8]:
fr = (br^2 − bθ^2) / (2μ0) (3)
where br and bθ stand for the radial and tangential components of the magnetic flux density, respectively, and μ0 is the permeability of the air. from (3) it can be seen that the radial magnetic force is proportional to the square of the air gap magnetic flux density. it can be shown that the pm generates the highest air gap flux density for radial magnetization and the lowest air gap flux density for sinusoidal magnetization. therefore, radial magnetization gives the highest unbalanced magnetic force and sinusoidal magnetization offers the lowest unbalanced magnetic force on the rotor for all three different eccentricity ratios.
fig. 9. comparison of averages of radial magnetic forces on the rotor.
v. dynamic eccentricity analysis and results
in this section, dynamic eccentricity has been analyzed in the motor mentioned in section iii, and the unbalanced magnetic forces have been compared for eccentricity ratios of 0.3, 0.6 and 0.9. the calculation method of the radial magnetic force is similar to the one used in the case of static eccentricity explained in section iv. the x and y components and also the amplitudes of the unbalanced magnetic forces on the rotor for parallel magnetization with different eccentricity ratios are shown in figures 10-12.
fig. 10. x components of unbalanced magnetic forces on the rotor due to dynamic eccentricity of the rotor for parallel magnetization.
fig. 11.
y components of unbalanced magnetic forces on the rotor due to dynamic eccentricity of the rotor for parallel magnetization.
fig. 12. unbalanced magnetic forces on the rotor due to dynamic eccentricity of the rotor for parallel magnetization.
as shown in figures 10-11, the x and y components of the unbalanced magnetic forces have negative and positive values in a complete revolution in dynamic eccentricity, which means that these forces revolve with the rotor rotation. the magnitudes of the unbalanced magnetic forces on the rotor increase with the eccentricity ratio. harmonic components of the x component of the unbalanced magnetic force on the rotor with an eccentricity ratio of 0.3 are shown in figure 13. it can be seen that in the dynamic eccentricity analysis, in addition to the first harmonic, the 3rd, 5th, 7th, 9th and 11th harmonic components are greater than the other harmonic components. the reason is explained as follows. in the case of dynamic eccentricity, the part of the rotor which is closer to the stator rotates with the rotor. thus, a rotating radial magnetic force is generated and modulated by the slots. this force can be expressed as [9]:
fr = Σ(i=0..n) fi sin(ilωt + φ) (4)
where i, l and ω are an integer, the slot number and the rotational speed, respectively, and φ is the angle between the stator rotation center and the rotor one. the x and y components of the magnetic force can be expressed as follows:
fx = fr cos(ωt) = Σ(i=0..n) fi sin(ilωt + φ) cos(ωt) = (1/2) Σ(i=0..n) fi [sin((il+1)ωt + φ) + sin((il−1)ωt + φ)]
fy = fr sin(ωt) = Σ(i=0..n) fi sin(ilωt + φ) sin(ωt) = (1/2) Σ(i=0..n) fi [cos((il−1)ωt + φ) − cos((il+1)ωt + φ)] (5)
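the sideband structure derived above, where each slot-modulated term at frequency il splits into il ± 1 components, can be checked numerically with a plain dft. the slot number l = 4, the amplitudes fi and the phase used below are illustrative assumptions, not the motor's values.

```python
import cmath
import math

def harmonic_magnitudes(samples):
    """magnitude of each integer harmonic of a real periodic signal (plain dft)."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * math.pi * m * k / n)
                    for k in range(n))) * 2 / n
            for m in range(n // 2)]

# build the x component: 0.5 * sum_i fi * [sin((il+1)wt + phi) + sin((il-1)wt + phi)].
# l = 4 slots, the amplitudes fi and phase phi = pi/2 are assumed for illustration.
l, f_i, phi = 4, [1.0, 0.5, 0.25], math.pi / 2
n = 256
fx = [0.5 * sum(f * (math.sin((i * l + 1) * w + phi) + math.sin((i * l - 1) * w + phi))
                for i, f in enumerate(f_i))
      for w in (2 * math.pi * k / n for k in range(n))]

mags = harmonic_magnitudes(fx)
present = [m for m, a in enumerate(mags) if a > 1e-6]
print(present)  # the first harmonic (i = 0) plus il +/- 1: [1, 3, 5, 7, 9]
```

with i = 0, 1, 2 and l = 4 the spectrum contains exactly the first harmonic plus the 3rd/5th and 7th/9th sidebands, matching the dominant components reported for the dynamic eccentricity case.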
the above equations show that dynamic eccentricity leads to the first harmonic (i = 0) and il ± 1 harmonic contents in the unbalanced magnetic force waveform. the harmonic analysis of the unbalanced magnetic force is of great importance to the vibration analysis of the motor, because large vibration will be generated when the natural frequencies of the motor coincide with or are close to the frequency of the electromagnetic force or its harmonic frequencies. by comparing figures 10 and 11, it can be deduced that the variation of the y component of the unbalanced magnetic force is similar to that of the x component. therefore, only the x component has been investigated in this paper; the results obtained for the x component of the unbalanced magnetic force can be applied to the y component as well.
fig. 13. harmonics of the x component of unbalanced magnetic forces on the rotor for parallel magnetization.
finally, the amplitudes of the x components of the radial magnetic forces are calculated for each case in order to compare the effects of the eccentricity ratio e and the different pm magnetization field patterns on the radial magnetic forces simultaneously. the results are shown in figure 14.
fig. 14. comparison of amplitudes of the x component of radial magnetic forces on the rotor.
considering (3), the radial magnetic force is proportional to the square of the air gap magnetic flux density. therefore, with dynamic eccentricity, radial magnetization produces the highest unbalanced magnetic force whereas sinusoidal magnetization generates the lowest unbalanced magnetic force on the rotor for all three different eccentricity ratios, as expected and shown in figure 14.
vi.
conclusion
surface mounted pm motors may be less sensitive to rotor eccentricities than induction motors, and would be good candidates for applications where noise and vibration are significant; nevertheless, in the design of permanent magnet motors for high-precision applications it is necessary to analyze the effect of rotor eccentricity in detail. in this paper, static and dynamic eccentricities with eccentricity ratios of 0.3, 0.6 and 0.9 are analyzed in a single phase bldc motor. the simulation results show that the unbalanced magnetic forces acting on the rotor increase with the eccentricity ratio. further, in the case of static eccentricity the radial magnetic forces consist of harmonics which are multiples of the rotor pole number, while in the case of dynamic eccentricity they consist of the first harmonic and the harmonics which are multiples of the slot number plus or minus one. it is clearly shown that radial magnetization generates the highest, and sinusoidal magnetization the lowest, unbalanced magnetic force acting on the rotor in a bldc motor.
references
[1] h. s. chen, m. c. tsai, "effect of rotor eccentricity on electric parameters in a pm brushless motor with parallel winding connections", journal of applied physics, vol. 105, no. 7, pp. 07f121-07f121-3, 2009
[2] u. kim, d. k. lieu, "magnetic field calculation in permanent magnet motors with rotor eccentricity: without slotting effect", ieee transactions on magnetics, vol. 34, no. 4, pp. 2243-2252, 1998
[3] t. yoon, "magnetically induced vibration in a permanent-magnet brushless dc motor with symmetric pole-slot configuration", ieee transactions on magnetics, vol. 41, no. 6, pp. 2173-2179, 2005
[4] s. rajagopalan, j. m. aller, j. a. restrepo, t. g. habetler, r. g.
harley, "analytic-wavelet-ridge-based detection of dynamic eccentricity in brushless direct current (bldc) motors functioning under dynamic operating conditions", ieee transactions on industrial electronics, vol. 54, no. 3, pp. 1410-1419, 2007
[5] s. salon, k. sivasubramaniam, l. t. ergene, "the effect of asymmetry on torque in permanent magnet motors", iemdc 2001, ieee international electric machines and drives conference, cambridge, usa, pp. 208-217, 2001
[6] z. j. liu, j. t. li, m. a. jabbar, "prediction and analysis of magnetic forces in permanent magnet brushless dc motor with rotor eccentricity", journal of applied physics, vol. 99, no. 8, pp. 08r321-08r321-3, 2006
[7] c. l. chiu, y. t. chen, w. s. jhang, "properties of cogging torque, starting torque, and electrical circuits for the single-phase brushless dc motor", ieee transactions on magnetics, vol. 44, no. 10, pp. 2317-2323, 2008
[8] k. t. kim, s. m. hwang, g. y. hwang, t. j. kim, w. b. jeong, c. u. kim, "effect of rotor eccentricity on spindle vibration in magnetically symmetric and asymmetric bldc motors", isie 2001, ieee international symposium on industrial electronics, pusan, south korea, vol. 2, pp. 967-972, 2001
[9] c. i. lee, g. h. jang, "experimental measurement and simulated verification of the unbalanced magnetic force in brushless dc motors", ieee transactions on magnetics, vol. 44, no. 11, pp. 4377-4380, 2008
authors profile
shahin mahdiuon rad received the b.sc. degree in electrical engineering from the university of zanjan, iran, in 2008, and the m.sc. degree in electrical engineering from the university of tabriz, iran, in 2011. she is currently working toward the ph.d. degree in the faculty of electrical engineering, sahand university of technology. her research interests include control of electrical drives and electrical machines.
seyed reza mousavi-aghdam received his b.sc.
degree with first class honors in electrical power engineering from azarbaijan university of tarbiat moallem, tabriz, in 2009, and the m.sc. degree with honors from the university of tabriz in 2011. he is currently working toward the ph.d. degree at the university of tabriz. his current research interests include the design of electrical machines, electric drives and the analysis of special machines.
mohammad reza feyzi received his b.sc. and m.sc. in 1975 from the university of tabriz in iran with honors. he worked in the same university from 1975 to 1993, and started his ph.d. work at the university of adelaide, australia in 1993. soon after his graduation he rejoined the university of tabriz, where he is currently a professor. his research interests are finite element analysis, and the design and simulation of electrical machines and transformers.
mohammad bagher bannae sharifian studied electrical power engineering at the university of tabriz, tabriz, iran. he received the b.sc. and m.sc. degrees in 1989 and 1992 respectively from the university of tabriz. in 1992 he joined the electrical engineering department of the university of tabriz as a lecturer. he received the ph.d. degree in electrical engineering from the same university in 2000, and in the same year rejoined the electrical power department of the faculty of electrical and computer engineering as an assistant professor. he is currently a professor in that department. his research interests are in the areas of design, modeling and analysis of electrical machines, transformers, electric drives, linear electric motors, and electric and hybrid electric vehicle drives.
engineering, technology & applied science research vol. 10, no. 2, 2020, 5547-5553 www.etasr.com alasadi et al.: efficient feature extraction algorithms to develop an arabic speech recognition system
efficient feature extraction algorithms to develop an arabic speech recognition system
abdulmalik a. alasadi, dept. of computer science and it, dr. babasaheb ambedkar marathwada university, aurangabad, india, dba.ora10g@gmail.com
theyazn h. h. adhyani, community college in abqaiq, king faisal university, saudi arabia, taldhyani@kfu.edu.sa
ratnadeep r. deshmukh, dept. of computer science and it, dr. babasaheb ambedkar marathwada university, aurangabad, india, rrdeshmukh.csit@bamu.ac.in
ahmed h. alahmadi, department of computer science, taibah university, saudi arabia, aahmadio@taibahu.edu.sa
ali saleh alshebami, community college in abqaiq, king faisal university, saudi arabia, aalshebami@kfu.edu.sa
abstract—this paper studies three feature extraction methods, mel-frequency cepstral coefficients (mfcc), power-normalized cepstral coefficients (pncc), and the modified group delay function (modgdf), for the development of an automated speech recognition (asr) system in arabic. the support vector machine (svm) algorithm processed the obtained features. these feature extraction algorithms extract speech characteristics, the group delay method computing its features directly from the voice signal. the algorithms were deployed to extract audio features from arabic speakers. pncc provided the best recognition results for arabic speech in comparison with the other methods. simulation results showed that pncc and modgdf were more accurate than mfcc in arabic speech recognition.
keywords—speech recognition; feature extraction; pncc; modgdf; mfcc; arabic speech recognition
i. introduction
speech is the most common and widely used form of communication. much research focuses on developing reliable systems that can understand and accept commands through speech.
nowadays computers are involved in almost every aspect of our life, and as communication between people is mostly vocal, people anticipate the same way of interaction with computers [1]. speech has the capacity to be an important mode of human-computer interaction, and the interest in developing computers that can accept speech as input is growing. the substantial research effort in global speech recognition and the increasing computational power at lower cost could result in more speech recognition applications in the near future [3]. arabic language is the most popular in the arab world, and the arabic alphabet is used in some other languages such as persian, urdu, and malaysian [2]. research in human-computer speech interaction has focused mostly on developing better technical speech recognition systems, and gains in precision and productivity [4]. this research applied three distinct feature extraction methods onto an arabic speech dataset, namely mel-frequency cepstral (mfcc), power-normalized cepstral coefficients (pncc) and modified group delay function (modgdf). the extracted features were classified by a support vector machine (svm). the results of these three feature extracting techniques were compared in order to get the most efficient and accurate output. the feature extraction techniques, having their own properties like modgdf, give additive and high-resolution signal. the additive property adds different functions in one group domain, and high-resolution property is used to sharpen the peaks of a group delay domain [5]. ii. background speech awareness and evaluation have captivated researchers from fletcher's early works [6] and the first voice identification devices [7], to present-day. nevertheless, high precision machine speech recognition can be achieved mostly in quiet settings, as the efficiency of a typical speech recognizer reduces significantly in loud settings [8]. environmental influence and other variables were explored in [9]. 
as technology progresses, speech recognition will be embedded in more devices used in everyday activities, where environmental variables play a major part, such as mobile phone voice recognition applications [10], cars [11], integrated access control and information systems [12], emotion identification systems [13], application monitoring [14], disabled assistance [15], and intelligent technology. in addition to voice, many acoustic applications are also essential in diverse engineering problems [16-22]. a noise reduction method could be deployed to enhance efficiency in real-world noisy settings [23-26]. machine efficiency degrades with noise, channel variance, and spontaneous expressions far more severely than human performance does [27].
corresponding author: theyazn h. h. adhyani
automatic speech recognition (asr) has not surpassed human performance in precision and robustness, but we continue to benefit from understanding the central principles behind the identification of human speech [28]. despite the advancements in auditory processing and popular front-ends for asr devices, only a few elements of noise handling in the auditory periphery are modeled and simulated [29]. for instance, common methods such as mfcc use auditory features like a varying-bandwidth filter bank and compression. coefficients of perceptual linear prediction (plp) focus on perceptual processing by using critical-band resolution curves, equal-loudness scaling, and the cube-root intensity-loudness law applied to linear prediction coefficients (lpc) [30]. synaptic adaptation is an instance of auditory-motivated enhancements of voice representation. standard mfcc or plp coefficients could be substituted by coefficients based on a cochlear model in order to better represent the human auditory periphery.
the proposed model of synaptic adaptation in [31] showed important improvements in the efficiency of speech recognition. the pncc proposed in [32], was based on auditory processing, including new characteristics, using a nonlinearity of power-law, a noisesuppression algorithm relying on asymmetric filtering, and temporal masking. the experimental findings exhibited enhanced precision of acceptance, comparing to mfcc and plp. another strategy for feature removal was based on deep neural networks (dnn). the noise robustness of sound designs relying on dnn was evaluated in [33]. recurrent neural networks (rnn) for cleaning distorted input characteristics were applied in [34]. the use of lstm-rnns was suggested in [35] to manage extremely non-stationary additive noise. for solid voice recognition, an all-inclusive outline of profound teaching was presented in [36]. many researches utilized pncc and mfcc to extract the most significant features from speech signals [37-39]. group delay function (modgdf) was used to extract speech signals, being more efficient than mfcc. iii. method figure 1, shows the developed recognition system for evaluating the identification of arabic speech. fig. 1. proposed speech recognition system audio from arabic speakers was given as input to the system, and three feature extraction techniques, mfcc, pncc and modgdf, were applied to extract significant features of arabic speech. svm algorithm was used for training and classification, and performance measures were employed to evaluate these algorithms. iv. database a speech database was created, populated with utterances from volunteered yemeni students studying at dr. babasaheb ambedkar marathwada university, in aurangabad, india. tables i and ii, include the demographic information of the volunteers and the basic parameters of the recordings. table i. 
demographics of volunteers
  parameter       value
  speaker type    students (bsc, msc, phd)
  gender          35 male, 15 female
  basic language  arabic
  accent          standard and yemeni
  age group       20-35
  country         yemen
  environment     dept. of cs & it
table ii. basic recording parameters
  parameter           value
  sampling rate       16000 hz
  speakers            dependent
  condition of noise  normal
  accent              arabic
  pre-emphasis        1-0.9/(z-1)
  window type         hamming, 25 ms
  window step size    20 ms
a. recording procedure
the database was recorded using high-quality headsets (sennheiser pc360) and the praat software, in a quiet environment. speech samples were recorded in mono mode with a 16000 hz sampling rate. a microphone was placed at a distance of about 3 cm from the volunteer's mouth. table iii displays the hardware and software used during the recording of the speech samples.
table iii. hardware and software details
  hardware                                                        software
  laptop hp elitebook (core i7, 5th gen, 8 gb ram, 500 gb ssd)    windows 10
  headphone: sennheiser pc360                                     praat: 6102_win64
  microphone
b. isolated digits
table iv shows the recorded arabic digits.
c. isolated words
isolated arabic words of the speech corpus were used. table v shows the arabic words related to learning.
d. continuous sentences
table vi shows the continuous sentence text corpus. five utterances were collected for each sentence.
table iv. arabic digits
  digit  pronunciation  arabic writing
  0      safer          صفر
  1      wahed          واحد
  2      ethnan         اثنان
  3      thlathah       ثلاثة
  4      arbaah         أربعة
  5      khamsah        خمسة
  6      settah         ستة
  7      sabaah         سبعة
  8      thamaneyah     ثمانية
  9      tesaah         تسعة
table v. arabic words
  pronunciation  english word  arabic word
  jameaah        university    جامعة
  koleyah        college       كلية
  kesm           department    قسم
  taaleem        education     تعليم
  mauhader       lecture       محاضر
  modares        teacher       مدرس
  maamal         lab           معمل
  madah          course        مادة
table vi.
arabic sentences related to greetings
  english: when does registration begin at the university?   arabic: متى يبدأ التسجيل في الجامعة؟
  english: is there a graduate department?                   arabic: هل يوجد قسم للدراسات العليا؟
  english: what are the admission requirements?              arabic: ما هي شروط القبول؟
  english: is there a university website?                    arabic: هل يوجد موقع الكتروني للجامعة؟
  english: what are the available majors?                    arabic: ما هي التخصصات المتوفرة؟
  english: the university has modern programs.               arabic: الجامعة لديها برامج حديثة
  english: the mission of the university is ambitious.       arabic: رسالة الجامعة طموحة
v. feature extraction algorithms
feature extraction is vital for developing a speech recognition system. its main objective is to extract the most significant features for identifying arabic speakers. three feature extraction algorithms were applied: pncc, modgdf, and mfcc.
a. power normalized cepstral coefficients (pncc)
the pncc feature extraction algorithm for speech recognition is described in [3]. pncc has two components: initial processing, and temporal integration for environmental analysis.
1) initial processing
this stage uses a pre-emphasis filter of the form:

$h(z) = 1 - 0.97 z^{-1}$    (1)

subsequently, a short-time fourier transform (stft) is computed using hamming windows. a dft size of 1024 was used, giving a window length of 25.6 ms with 10 ms between frames. by weighting the magnitude-squared stft outputs, the spectral power in 40 analysis bands was obtained for positive frequencies. the center frequencies are linearly spaced in equivalent rectangular bandwidth (erb) between 200 hz and 8000 hz, using gammatone filters [3].
2) temporal integration for environmental analysis
most speech recognition systems use analysis frames of between 20 and 30 ms in length. it is often found that longer analysis windows deliver better noise modeling and environmental normalization [6], because most background conditions change more slowly than the instantaneous power of speech.
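as an illustrative sketch (not the authors' implementation; the function names and the value of m are assumptions), the pre-emphasis filter of (1) and the medium-time power average used next in pncc could be written as:

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """first-order pre-emphasis h(z) = 1 - alpha*z^-1, i.e. y[n] = x[n] - alpha*x[n-1];
    the first sample is passed through unchanged."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def medium_time_power(P, M=2):
    """running average of short-time power over 2M+1 frames per channel:
    q[m, l] = 1/(2M+1) * sum over m' in [m-M, m+M] of p[m', l].
    edge frames average over the available neighbours only. P: (frames, channels);
    M = 2 (a 5-frame window) is an assumed default."""
    Q = np.empty_like(P, dtype=float)
    for m in range(P.shape[0]):
        lo, hi = max(0, m - M), min(P.shape[0], m + M + 1)
        Q[m] = P[lo:hi].mean(axis=0)
    return Q

x = np.array([1.0, 1.0, 1.0])
print(pre_emphasis(x))            # damps the constant level, keeps onsets

P = np.tile(np.arange(7.0)[:, None], (1, 2))   # a ramp in each of 2 channels
print(medium_time_power(P)[3])    # -> [3. 3.], mean of frames 1..5
```

the averaging step is what gives pncc its longer effective analysis window without changing the underlying 25.6 ms frames.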
in pncc processing, an estimate is made of a quantity referred to as "medium-time power" q[m,l], computed as the running average of p[m,l], the power observed in a single analysis frame:

$q[m,l] = \frac{1}{2M+1}\sum_{m'=m-M}^{m+M} p[m',l]$    (2)

where m is the frame index, l is the channel index, and M sets the length of the averaging window.
b. modified group delay function (modgdf)
this method is discussed in detail in [7-15]. it should be noted that the group delay function is different from the phase spectrum: it is defined as the negative derivative of the phase, and it can be used effectively to extract system parameters when the signal is a minimum-phase signal. this is mainly because, for a minimum-phase signal, the magnitude spectrum and the group delay function have similar shapes. figure 2 shows the process of the modgdf algorithm for extracting speech features. the algorithm is described below.
algorithm: modgdf feature extraction pseudocode
input: speech x(n)
output: modgdf feature vector c(n)
begin
  initialize parameters;
  compute the dft of the speech x(n) as x[k];
  compute the dft of n·x(n) as y[k];
  calculate the group delay function, where r and i denote real and imaginary parts;
  compute the spectrally smoothed spectrum of x[k] and designate it as s[k];
  compute the modified group delay, where s[k] is the smoothed version of x[k] and two new parameters α and γ regulate the dynamic range of the modgdf;
  apply the dct to obtain the modgd features;
  obtain the modgd feature vector (13 coefficients per frame);
end.
fig. 2. feature extraction process of modgdf
c. mel frequency cepstral coefficients (mfcc)
mfcc is the most widely used method in speech technology development, as it mimics characteristics of the human auditory system [16]. moreover, these
coefficients are robust and reliable to variations between speakers and recording conditions. figure 3 shows the processing steps of mfcc feature extraction.
fig. 3. processes in mfcc feature extraction method
pre-emphasis is the first step of mfcc; it boosts the high-frequency energy that is attenuated during sound generation. framing trims the sound signal into narrower segments. windowing is used to avert discontinuities of the signals produced by the framing step. the fast fourier transform (fft) converts each frame from the time to the frequency domain. the filter bank is a set of overlapping band-pass filters, and the final step, the discrete cosine transform (dct), produces the mfcc coefficients [18]. mfcc is computed from the speech signal in three steps:
• compute the fft power spectrum of the speech signal
• apply a mel-spaced filter bank to the power spectrum to get band energies
• compute the dct of the log filter-bank energies to get uncorrelated mfccs
the speech signal is first divided into time frames, each comprising a number of samples. in most systems, overlapping frames are used to smooth the transition from frame to frame. each time frame is then windowed with a hamming window to eliminate discontinuities at the edges [17]. the coefficients w(n) of a hamming window of length N are computed as:

$w(n) = 0.54 - 0.46\cos\left(\frac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1$, and $w(n) = 0$ otherwise,

where N is the total number of samples in the window and n is the current sample index. the mel scale links the perceived pitch of a pure tone to its actual measured frequency. humans discern small changes in pitch better at lower frequencies; integrating this scale makes the features match more closely what humans hear.
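the hamming taper and the mel mappings used in this section can be sketched as follows (illustrative helper functions, not the authors' code; the constants 1125 and 700 are the common mel variant used here):

```python
import numpy as np

def hamming(N):
    """w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)), n = 0..N-1: tapers frame edges
    to suppress the discontinuities introduced by framing."""
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

def hz_to_mel(f):
    """perceptual mel scale: mel = 1125 * ln(1 + f/700)."""
    return 1125.0 * np.log(1.0 + f / 700.0)

def mel_to_hz(mel):
    """inverse mapping: f = 700 * (exp(mel/1125) - 1)."""
    return 700.0 * (np.exp(mel / 1125.0) - 1.0)

print(np.allclose(hamming(400), np.hamming(400)))    # -> True (matches numpy's built-in)
print(round(float(hz_to_mel(1000.0)), 1))            # -> 998.2 (close to the nominal 1000 mel at 1 khz)
print(round(float(mel_to_hz(hz_to_mel(8000.0))), 6)) # round trip recovers 8000.0
```

spacing filter-bank center frequencies uniformly in mel, then mapping them back to hz with the inverse, is what concentrates the filters at low frequencies.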
the formula for converting from frequency to the mel scale is:

$\mathrm{mel}(f) = 1125 \ln\left(1 + \frac{f}{700}\right)$    (3)

while the formula for going back from the mel scale to frequency is:

$f(\mathrm{mel}) = 700\left(\exp\left(\frac{\mathrm{mel}}{1125}\right) - 1\right)$    (4)

vi. classification
svm is principally a binary classifier, but it can be extended to multi-class tasks with two approaches: 1-vs-all, i.e. comparing each class to the rest, and 1-vs-1, i.e. comparing each pair of classes separately [20]. in this study, 1-vs-all was used, consisting of as many binary svms as there are classes. each svm is trained with one class against the rest, and all are consulted during testing. the decision is eventually made based on the distances of the test data from the hyperplanes of all svms.
vii. simulation results
several experiments were conducted on the speech database for classification and recognition, using mfcc, pncc and modgdf for feature extraction. the training procedure used 60% of the data, while 40% was used for testing. the tests were implemented in matlab 2016; screenshots are shown in figures 4 and 5. evaluation and testing were performed using accuracy rate, specificity, sensitivity, precision, and execution time.
fig. 4. layout of the main system
fig. 5. implementation
a. analysis for arabic digits
the feature extraction methods were applied on the digit samples, and the results are shown in table vii.
table vii. svm results on digits
  feature extraction technique  accuracy rate  specificity  sensitivity  precision  execution time (s)
  modgdf                        90.3           94.5         50.5         72.7       16.39
  pncc                          97.5           98.6         87.6         88.7       54.8
  mfcc                          88.3           93.5         41.7         53.7       87.5
figure 6 illustrates the methods' performance.
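the evaluation measures reported here (accuracy, specificity, sensitivity, precision) can be derived per class from a confusion matrix like the ones shown for pncc/svm; a minimal one-vs-rest sketch (our own helper, with an illustrative 3-class matrix, not the paper's data):

```python
import numpy as np

def per_class_metrics(cm, k):
    """one-vs-rest metrics for class k from a square confusion matrix cm,
    where cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = cm[k, k]
    fn = cm[k].sum() - tp           # class-k samples predicted as something else
    fp = cm[:, k].sum() - tp        # other samples predicted as class k
    tn = cm.sum() - tp - fn - fp
    return {
        "accuracy":    (tp + tn) / cm.sum(),
        "sensitivity": tp / (tp + fn),      # recall / true positive rate
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

cm = [[19, 0, 1],
      [ 2, 17, 0],
      [ 1, 2, 17]]   # illustrative 3-class matrix in the style of the tables here
m = per_class_metrics(cm, 0)
print(round(m["sensitivity"], 3), round(m["precision"], 3))  # -> 0.95 0.864
```

averaging these per-class values over all classes gives the single figures reported per feature extraction technique.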
as can be observed, modgdf with svm obtained the best results regarding time cost. pncc and mfcc with svm obtained good results, but their execution time was much higher. it is concluded that modgdf had the lowest time cost, as it reduces execution time complexity. table viii shows the confusion matrix of pncc for the recognition of arabic digits. figure 7 displays a sample of modgdf with svm recognizing an arabic digit ("khamsah").
fig. 6. methods' performance on the recognition of arabic digits
fig. 7. modgdf sample recognizing the arabic digit "khamsah"
table viii. confusion matrix of digits using pncc/svm
  19  0  0  0  0  0  0  0  0  0
   0 19  0  0  0  0  0  0  0  0
   0  2 17  0  0  0  0  0  0  0
   1  0  1 17  0  0  0  0  0  0
   0  1  0  0 18  0  0  0  0  0
   0  1  1  0  0 17  0  0  0  0
   0  0  0  2  0  1 16  0  0  0
   2  0  1  1  0  1  1 13  0  0
   1  0  0  3  0  1  0  1 13  0
   0  0  0  1  0  0  1  0  0 17
b. analysis for arabic words
table ix shows the results on the recognition of arabic words. the results of modgdf with svm are not satisfactory, but its time cost is much lower than that of the other feature extraction methods. pncc with svm performed better, but its time cost turned out to be significantly higher. the results are also shown in figure 8. table x shows the confusion matrix of pncc/svm for the recognition of arabic words. the confusion matrix attests that pncc is more robust in identifying arabic words. figure 9 illustrates the performance of pncc on the recognition of an arabic word ("dirham").
table ix. results on words
  feature extraction technique  accuracy rate  specificity  sensitivity  precision  execution time (s)
  modgdf                        89.3           94.1         46.8         58.6       12.3
  pncc                          95.15          97.3         75.8         79.2       49.5
  mfcc                          88.6           93.6         43.1         51.8       99.5
fig. 8. performance on the recognition of arabic words
fig. 9. sample of pncc with svm recognizing the arabic word "dirham"
table x.
confusion matrix of words using pncc/svm
  19  0  0  0  0  0  0  0  0  0
   2 17  0  0  0  0  0  0  0  0
   2  2 15  0  0  0  0  0  0  0
   2  1  1 15  0  0  0  0  0  0
   2  0  3  2 12  0  0  0  0  0
   1  1  2  2  0 13  0  0  0  0
   1  3  1  0  0  0 14  0  0  0
   0  0  0  1  2  0  0 16  0  0
   0  1  1  1  0  0  0  0 16  0
   2  0  0  0  0  0  2  2  1 12
c. analysis for arabic sentences
table xi shows the performance results on the recognition of arabic sentences. as can be observed, pncc with svm performed better, but had greater execution time; it again had the highest accuracy and a lower execution time than mfcc. modgdf had the lowest execution time (18.9 s) and an accuracy of 88.2, while mfcc again showed the lowest accuracy. the accuracy of pncc/svm is also confirmed by its confusion matrix for sentences in table xii, which shows that pncc/svm is capable of recognizing sentences with satisfactory results. the results are also shown in figure 10, while figure 11 illustrates the performance of pncc on the recognition of an arabic sentence ("what are the available majors?").
table xi. results on sentences
  feature extraction technique  accuracy rate  specificity  sensitivity  precision  execution time (s)
  modgdf                        88.2           93.5         41.2         45.3       18.9
  pncc                          93.05          96.14        65.26        71.04      70.0
  mfcc                          86.0           92.2         30.0         49.48      125.0
fig. 10. feature extraction performance on arabic sentences
fig. 11. sample of pncc/svm recognizing the arabic sentence "what are the available majors?"
table xii. confusion matrix of sentences using pncc/svm
  19  0  0  0  0  0  0  0  0  0
   2 17  0  0  0  0  0  0  0  0
   1  7 11  0  0  0  0  0  0  0
   0  0  1 18  0  0  0  0  0  0
   1  0  0  8 10  0  0  0  0  0
   0  0  0  0  3 16  0  0  0  0
   0  0  0  0  0  7 12  0  0  0
   0  0  0  1  0  0  1 17  0  0
   0  0  0  0  0  0  0  9 10  0
   6  0  0  0  0  0  0  0  3 10
viii.
conclusion
in this paper, a speech recognition system for the arabic language was presented, evaluating three feature extraction algorithms, namely mfcc, pncc, and modgdf, with an svm used for classification. results showed that pncc was the most efficient, while modgdf had moderate accuracy. both pncc and modgdf achieved greater accuracy with the svm than mfcc: pncc reached a 93-97% accuracy rate, modgdf about 90%, and mfcc about 88%.
references
[1] p. p. shrishrimal, r. r. deshmukh, v. b. waghmare, "indian language speech database: a review", international journal of computer applications, vol. 47, no. 5, pp. 17-21, 2012
[2] s. k. gaikwad, b. w. gawali, p. yannawar, "a review on speech recognition technique", international journal of computer applications, vol. 10, no. 3, pp. 16-24, 2010
[3] c. huang, t. chen, e. chang, "accent issues in large vocabulary continuous speech recognition", international journal of speech technology, vol. 7, no. 2-3, pp. 141-153, 2004
[4] m. a. anasuya, s. k. katti, "speech recognition by machine: a review", international journal of computer science and information security, vol. 6, no. 3, pp. 181-205, 2009
[5] p. l. garvin, p. ladefoged, "speaker identification and message identification in speech recognition", phonetica, vol. 9, no. 4, pp. 193-199, 1963
[6] g. ceidaite, l. telksnys, "analysis of factors influencing accuracy of speech recognition", elektronika ir elektrotechnika, vol. 105, no. 9, pp. 69-72, 2010
[7] z. h. tan, b. lindberg, "speech recognition on mobile devices", in: mobile multimedia processing – wmmp 2008, lecture notes in computer science, vol. 5960, springer, 2010
[8] w. li, k. takeda, f. itakura, "robust in-car speech recognition based on nonlinear multiple regressions", eurasip journal on advances in signal processing, 2007
[9] w. ou, w. gao, z. li, s. zhang, q.
wang, “application of keywords speech recognition in agricultural voice system”, second international conference on computational intelligence and natural computing, wuhan, china, september 13-14, 2010 [10] l. zhu, l. chen, d. zhao, j. zhou, w. zhang, “emotion recognition from chinese speech for smart affective services using a combination of svm and dbn”, sensors, vol. 17, no. 7, 2017 [11] j. e. noriega-linares, j. m. navarro ruiz, “on the application of the raspberry pi as an advanced acoustic sensor network for noise monitoring”, electronics, vol. 5, no. 4, 2016 [12] m. al-rousan, k. assaleh, “a wavelet-and neural network-based voice system for a smart wheelchair control”, journal of the franklin institute, vol. 348, no. 1, pp. 90-100, 2011 [13] i. v. mcloughlin, h. r. sharifzadeh, “speech recognition for smart homes”, in: speech recognition, technologies and applications, intech, 2008 [14] a. glowacz, “diagnostics of rotor damages of three-phase induction motors using acoustic signals and smofs-20-expanded”, archives of acoustics, vol. 41, no. 3, pp. 507-515, 2016 [15] a. glowacz, “fault diagnosis of single-phase induction motor based on acoustic signals”, mechanical systems and signal processing, vol. 117, pp. 65-80, 2019 [16] m. kunicki, a. cichon, “application of a phase resolved partial discharge pattern analysis for acoustic emission method in high voltage insulation systems diagnostics”, archives of acoustics, vol. 43, no. 2, pp. 235-243, 2018 [17] d. mika, j. jozwik, “advanced time-frequency representation in voice signal analysis”, advances in science and technology research journal, vol. 12, no. 1, pp. 251-259, 2018 engineering, technology & applied science research vol. 10, no. 2, 2020, 5547-5553 5553 www.etasr.com alasadi et al.: efficient feature extraction algorithms to develop an arabic speech recognition system [18] l. zou, y. guo, h. liu, l. zhang, t. 
zhao, "a method of abnormal states detection based on adaptive extraction of transformer vibroacoustic signals", energies, vol. 10, no. 12, 2017
[19] h. yang, g. wen, q. hu, y. li, l. dai, "experimental investigation on influence factors of acoustic emission activity in coal failure process", energies, vol. 11, no. 6, article id 1414, 2018
[20] l. mokhtarpour, h. hassanpour, "a self-tuning hybrid active noise control system", journal of the franklin institute, vol. 349, no. 5, pp. 1904-1914, 2012
[21] s. c. lee, j. f. wang, m. h. chen, "threshold-based noise detection and reduction for automatic speech recognition system in human-robot interactions", sensors, vol. 18, no. 7, article id 2068, 2018
[22] s. m. kuo, w. m. peng, "principle and applications of asymmetric crosstalk-resistant adaptive noise canceler", journal of the franklin institute, vol. 337, no. 1, pp. 57-71, 2000
[23] j. w. hung, j. s. lin, p. j. wu, "employing robust principal component analysis for noise-robust speech feature extraction in automatic speech recognition with the structure of a deep neural network", applied system innovation, vol. 1, no. 3, article id 28, 2018
[24] r. p. lippmann, "speech recognition by machines and humans", speech communication, vol. 22, no. 1, pp. 1-15, 1997
[25] j. b. allen, "how do humans process and recognize speech?", ieee transactions on speech and audio processing, vol. 2, no. 4, pp. 567-577, 1994
[26] s. haque, r. togneri, a. zaknich, "perceptual features for automatic speech recognition in noisy environments", speech communication, vol. 51, no. 1, pp. 58-75, 2009
[27] h. hermansky, "perceptual linear predictive (plp) analysis of speech", the journal of the acoustical society of america, vol. 87, no. 4, pp. 1738-1752, 1990
[28] m. holmberg, d. gelbart, w. hemmert, "automatic speech recognition with an adaptation model motivated by auditory processing", ieee transactions on audio, speech, and language processing, vol. 14, no. 1, pp. 43-49, 2005
[29] c. kim, r.
m. stern, “power-normalized cepstral coefficients (pncc) for robust speech recognition”, 2012 ieee international conference on acoustics, speech and signal processing, kyoto, japan, march 25-30, 2012 [30] m. l. seltzer, d. yu, y. wang, “an investigation of deep neural networks for noise robust speech recognition”, 2013 ieee international conference on acoustics, speech and signal processing, vancouver, canada, may 26-31, 2013 [31] a. l. maas, q. v. le, t. m. o'neil, o. vinyals, p. nguyen, a. y. ng, “recurrent neural networks for noise reduction in robust asr”, 13th annual conference of the international speech communication association, portland, usa, september 9-13, 2012 [32] m. wollmer, b. schuller, f. eyben, g. rigoll, “combining long shortterm memory and dynamic bayesian networks for incremental emotionsensitive artificial listening”, ieee journal of selected topics in signal processing, vol. 4, no. 5, pp. 867-881, 2010 [33] z. zhang, j. geiger, j. pohjalainen, a. e. d. mousa, w. jin, b. schuller, “deep learning for environmentally robust speech recognition: an overview of recent developments”, acm transactions on intelligent systems and technology, vol. 9, no. 5, pp. 1-28, 2018 [34] e. principi, s. squartini, f. piazza, “power normalized cepstral coefficients based supervectors and i-vectors for small vocabulary speech recognition”, 2014 international joint conference on neural networks, beijing, china, july 6-11, 2014 [35] e. loweimi, s. m. ahadi, “a new group delay-based feature for robust speech recognition”, 2011 ieee international conference on multimedia and expo, barcelona, spain, july 11-15, 2011 [36] b. kurian, k. t. shanavaz, n. g. kurup, “pncc based speech enhancement and its performance evaluation using snr loss”, 2017 international conference on networks & advances in computational technologies, thiruvanthapuram, india, july 20-22, 2017 [37] t. fux, d. 
engineering, technology & applied science research vol. 3, no. 4, 2013, 467-472 www.etasr.com

design practices in harmonic analysis studies applied to industrial electrical power systems

s. f. mekhamer, faculty of engineering, ain shams university, cairo, egypt, saidfouadmekhamer@yahoo.com
a. y. abdelaziz, faculty of engineering, ain shams university, cairo, egypt, almoatazabdelaziz@hotmail.com
s. m. ismael, electrical engineering division, enppi, cairo, egypt, shriefmohsen@enppi.com

abstract—power system harmonics may cause several problems, such as malfunctions of electrical equipment, premature equipment failures and plant shutdowns. accordingly, the mitigation of these harmonics is considered an important target, especially for industrial applications where any short downtime period may lead to great economic losses.
harmonic analysis studies are necessary to analyze the current and voltage harmonic levels and to check whether these levels comply with the contractual or international standard limits. if the studies reveal that the preset limits are exceeded, a suitable harmonic mitigation technique should be installed. harmonic analysis studies in industrial electrical systems are discussed in many references. however, a comprehensive procedure covering the steps required to perform a harmonic study is rarely found in the literature, even though it is strongly needed by design engineers. this paper provides such a comprehensive procedure in the form of a flowchart, based on industrial research and experience. hence, this paper may be considered a helpful guide for design engineers and consultants of the industrial sector.

keywords-harmonic analysis study; distortion; point of common coupling (pcc); variable frequency drive (vfd); resonance

i. introduction

due to the dramatic increase in the usage of nonlinear loads in industrial applications (mainly variable frequency drives, vfds), power system harmonic problems have gained in significance, representing a big obstacle to the wide application of vfds, although vfds enhance system efficiency and provide great energy savings. power system harmonics cause many harmful effects, including:
- overheating of generators, motors, transformers and power cables, leading to early equipment failures
- failure of capacitor banks
- nuisance tripping of protection relays
- interference with communication systems and sensitive electronic devices
accordingly, the mitigation of power system harmonics is of great importance in industrial electrical systems in order to increase system reliability, enhance operation economics and avoid unwanted equipment failures and process downtimes [1]. nowadays, industrial electrical systems contain a considerable amount of nonlinear loads.
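the 25% screening criterion used later in the paper (section ii and the figure 4 flowchart) can be expressed as a short helper; this is an illustrative sketch only, and the function name and its threshold argument are assumptions, not part of the paper:

```python
def harmonic_study_required(nonlinear_kva, total_kva, threshold=0.25):
    """Return True when the nonlinear portion of the load reaches the
    screening threshold (25% of the total bus/system load, per the
    criterion stated in the paper)."""
    if total_kva <= 0:
        raise ValueError("total load must be positive")
    return nonlinear_kva / total_kva >= threshold

# example: a 10 MVA plant bus carrying 3.5 MVA of VFD load
print(harmonic_study_required(3500, 10000))  # True, a study is required
```

in practice the ratio would be evaluated per bus as well as for the whole system, since the criterion applies to either.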
accordingly, power system studies for industrial plants should contain harmonic analysis studies besides short circuit, load flow and motor starting studies. harmonic analysis studies for industrial systems are discussed in [2], but the authors did not focus on the guidelines of the harmonic study, nor did they introduce the various international standards that set the limits of harmonic distortion. the goals of this paper can be summarized as follows:
1. to highlight the purpose of a harmonic analysis study
2. to highlight some guidelines for harmonic analysis studies
3. to provide a comprehensive description of the procedure required to perform a harmonic study
4. to introduce the international standard limits for harmonic distortion

ii. purpose of a harmonic analysis study

nowadays, the applications of nonlinear loads in industrial plants grow rapidly, and the percentage of these loads may be in the range of 30% to 50% of the total plant load. accordingly, the effects of harmonics within the electrical system and their impact on the electric utility and neighboring plants should be examined to avoid equipment damage and plant shutdowns. the following cases may necessitate performing a harmonic study [3]:
1. during the design stage of a project, if the amount of nonlinear loads exceeds 25% of the total loads on a bus or a system, a harmonic analysis study is required to check compliance with the contractual/international harmonic limits
2. to solve harmonic-related problems such as failure of electrical equipment or malfunction of protective relays
3. if an existing plant is going to be expanded and a significant amount of nonlinear loads is going to be added, a harmonic analysis study is required to verify the plant performance after the addition of these loads
4. if a capacitor bank is installed in an electrical network that contains many nonlinear loads, a harmonic analysis is required to check the possibility of resonance occurrence

iii. guidelines for harmonic analysis studies

a. harmonic sources

all nonlinear loads are also defined as harmonic sources, as clearly shown in figure 1, because they draw non-sinusoidal currents when a sinusoidal voltage is applied. the nonlinear load acts as a source of harmonic currents in the power system, thus causing voltage distortions at the various system buses due to the harmonic voltage drops across the system impedances.

fig. 1. effect of a nonlinear load on the current waveform

to perform a harmonic study, the design engineer must identify the available harmonic sources and the harmonic currents they generate. there are three options available to the design engineer to determine the harmonic currents:
a. to measure the generated harmonics from each harmonic source (a time-consuming option, applicable only to existing plants)
b. to calculate the generated harmonic currents using suitable mathematical analysis (may require extensive manual and time-consuming calculations)
c. to use typical values from the libraries of computer software packages or from the data available from the nonlinear load's manufacturer
practically, options (a) and (c) are the most used and provide reasonable results. the following are the main sources of harmonics in industrial applications [4]:
1) saturable magnetic equipment: various types of saturable magnetic equipment cause harmonic problems, such as:
a. rotating machines: rotating machines like induction motors may act as sources of third harmonic currents when operating in abnormal or overloaded conditions.
b.
ballasts of discharge lamps: discharge lamps like mercury vapor, high-pressure sodium and fluorescent lamps are dominant sources of third harmonic currents.
c. transformer harmonics: transformers create harmonics when they are overexcited. in addition, transformer inrush currents may contain some even harmonics, but their duration is rather limited.
d. generator harmonics: voltage harmonics are created by synchronous generators due to the non-sinusoidal distribution of the flux in the air gap. selection of a suitable coil-span factor (also called pitch factor) can significantly reduce the voltage harmonics from generators.
2) power electronic devices: various power electronic devices cause harmonic problems, such as:
a. variable frequency drives (vfds) used in fans and pumps
b. switched mode power supplies (smps) used in instruments and personal computers
c. high voltage dc transmission stations (hvdc)
d. static var compensators
e. uninterruptible power supply systems (ups)
f. battery charger systems
g. flexible ac transmission systems (facts)
h. ac and dc arc furnaces in steel manufacturing plants

b. resonance

the inductive reactance increases as the frequency increases:

xl = 2·π·f·l    (1)

where xl is the inductive reactance, f the system frequency and l the inductance. the capacitive reactance decreases as the frequency increases:

xc = 1/(2·π·f·c)    (2)

where xc is the capacitive reactance and c the capacitance. due to the opposite characteristics of the inductive and capacitive reactances, there must be a frequency at which xl equals xc. this condition of equal and opposite reactances is called "resonance". most power system elements are inductive. accordingly, the presence of shunt capacitors used for power factor correction or harmonic filtering can increase the probability of resonance occurrence. there are two types of resonance: series resonance and parallel resonance. the harmful effect of series resonance may be the flow of excessive harmonic currents through the network elements. these excessive currents cause nuisance tripping of the protection relays, overheating of cables, motors and transformers, and premature failure of the electrical equipment. the harmful effect of parallel resonance may be the presence of excessive harmonic voltages across the network elements. these excessive harmonic voltages cause dielectric breakdown of the electrical equipment's insulation [3].
1) series resonance: series resonance occurs when an inductor and a capacitor are connected in series and resonate together at a certain resonance frequency. an example of a series resonant circuit is shown in fig. 2. this ac circuit is said to be in resonance when the inductive reactance xl is equal to the capacitive reactance xc.

fig. 2. ac circuit representing an example of series resonance

2) parallel resonance: parallel resonance occurs when an inductor and a capacitor are connected in parallel and resonate together at a certain resonance frequency. there are many forms of parallel resonant circuits. a typical parallel resonant circuit is shown in figure 3. this circuit is said to be in parallel resonance when xl = xc, similar to the series resonance.

fig. 3. ac circuit representing an example of parallel resonance

c. tools for performing a harmonic analysis study

the harmonic analysis study can be performed with any of the following tools:
a. manual calculations, which are limited to small-size networks since they are very complicated and susceptible to errors.
b.
field measurements, which are often used as a verification of the design or as a preliminary diagnosis of a field problem.
c. digital computer simulations, which are nowadays the most convenient and economical method for analyzing system harmonics.

d. power system modeling

in the presence of harmonics, the models of the electrical system elements must be updated to account for the presence of frequencies higher than the power frequency (50 hz or 60 hz). details of the electrical system element models under harmonic distortion can be found in [3].

e. types of analyses performed during the harmonic analysis

there are two main types of analyses that could be performed during a harmonic analysis [2]:
a. current and voltage distortion analysis, in which the individual and total current and voltage harmonic distortions are calculated at the various buses and the results are compared with the relevant contractual limits.
b. impedance versus frequency analysis, in which the system impedance at various buses is plotted against frequency. this analysis is important in predicting system resonances prior to energizing the electrical system. a peak in the impedance plot indicates a parallel resonance, while a valley indicates a series resonance.

iv. steps of performing a harmonic analysis study

if a harmonic analysis study is required due to any of the cases described in section ii, the following steps should be followed:
a. obtain the electrical system one-line diagram and highlight the available nonlinear loads, capacitor banks and long medium voltage cables within the industrial system.
b. highlight the point of common coupling (pcc), which is the point that connects the industrial network with the utility or with a neighboring plant.
c. highlight the in-plant system buses that are expected to be affected by harmonic distortion.
d.
gather the harmonics-related data of all nonlinear loads within the plant.
e. obtain, from the utility company, the relevant data of current and voltage harmonics at the contractual pcc, including the minimum and maximum short circuit fault levels and the permissible limits on voltage and current harmonics, because the allowable harmonic limits vary from country to country.
f. model the electrical network using any of the commercially available software packages, such as the electrical transient analyzer program (etap).
g. perform the harmonic analysis for the electrical network under the various possible operating scenarios.
h. check the individual and total voltage and current distortion levels at the system buses of interest and at the pcc.
i. check the harmonic frequency spectrum, which is a plot of each individual harmonic value, with respect to the fundamental value, versus frequency.
j. if the harmonic distortion results exceed the allowable limits, select an appropriate harmonic mitigation solution and the optimum insertion point for that solution. further details about this point are introduced in section v.
k. re-perform the harmonic analysis study after adding the harmonic mitigation technique to ensure compliance with the contractual/international harmonic limits.
an extensive literature review over the past twenty years shows that there is no single article summarizing the steps required to perform a harmonic analysis study, despite the importance of this procedure for design engineers. the comprehensive flowchart presented in figure 4 provides this novel helpful approach.

v.
selection of the harmonic filter's insertion point

even if the design engineer selects the optimum harmonic mitigation technique for the plant among the harmonic mitigation solutions available in the market [5], the filter insertion point should be studied carefully, as it greatly affects system performance [6]. as shown in figure 5, the possible filter insertion points can be classified into three categories as follows:
a. local harmonic mitigation: the shunt type (passive or active) harmonic filter is directly connected to the nonlinear load terminals. this mode is efficient if the number of nonlinear loads is limited and the power of each nonlinear load is significant compared to the total plant power. circulation of harmonic currents in the electrical network is avoided, thus the harmonic impact on the upstream network elements is minimized.
b. semi-global harmonic mitigation: the shunt type (passive or active) harmonic filter is connected to the input of the lv subdistribution switchboard. accordingly, the filter treats several sets of nonlinear loads. this type of compensation is ideal in the presence of multiple nonlinear loads, each having low rated power. a practical example of this mode is found in commercial buildings, where a harmonic filter may be installed on each floor of the building.
c. global harmonic mitigation: this mode is more concerned with meeting the contractual harmonic limits at the pcc than with reducing the in-plant harmonics. its major drawback is that the harmonic currents are allowed to circulate in the electrical network; thus, the various electrical elements within the plant will be subjected to harmful harmonic impact.

vi. international harmonic standards

the purpose of imposing strict limits on harmonic emissions is to ensure that the current and voltage distortions at the pcc are kept sufficiently low.
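the quantities these standards constrain are distortion indices. assuming the usual definition (root sum of squares of the harmonic magnitudes relative to the fundamental), a minimal sketch of computing total harmonic distortion from a measured spectrum could look like the following; the function and variable names are illustrative, not from the paper:

```python
import math

def total_harmonic_distortion(magnitudes):
    """THD in percent from a spectrum given as {harmonic_order: magnitude}.
    Computed as the RSS of all harmonic magnitudes (orders h >= 2)
    divided by the fundamental (h == 1)."""
    fundamental = magnitudes[1]
    rss = math.sqrt(sum(m * m for h, m in magnitudes.items() if h >= 2))
    return 100.0 * rss / fundamental

# example: 100 A fundamental with 5th and 7th harmonic currents
spectrum = {1: 100.0, 5: 4.0, 7: 3.0}
print(round(total_harmonic_distortion(spectrum), 2))  # 5.0 (%)
```

the same expression applies to voltage spectra, giving the voltage thd values limited in tables ii and iii.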
thus, the other customers connected at the same point are not disturbed. the international standards related to harmonic distortion limits can be classified as follows:
a. standards specifying limits for individual nonlinear equipment
- iec 61000-3-2 [7], which specifies the current harmonic limits for low voltage equipment with an input current of less than 16 a
- iec 61000-3-12 [8], which specifies the current harmonic limits for equipment with an input current between 16 a and 75 a
- iec 61800-3 [9], which specifies the electromagnetic compatibility (emc) requirements of adjustable speed drive systems
as noted, the above standards cover only small-rating, low voltage harmonic loads. in addition, they do not set limits on the overall distribution network.
b. standards specifying limits for electrical networks
- ieee 519-1992 [10]: this document introduces many useful recommended practices for harmonics control in electrical networks. it is widely used in the industrial sector, and many consultants/clients use the limits indicated in it as contractual limits within their specifications.
- iec 61000-3-6 [11]: this specification performs an assessment of the harmonic emission limits for distorting loads in medium voltage and high voltage power systems. up till now, this specification is not widely used in the industrial sector because it is rather new (published in 2008).
- british engineering recommendation g5/4-1 [12]: this document provides helpful engineering recommendations for establishing the allowable limits of harmonic voltage distortion in the united kingdom.

vii. ieee 519-1992 harmonic limits

a. harmonic current distortion limits

harmonic current distortion limits are introduced in ieee 519-1992. a summary of these limits is shown in table i. setting limits on the current harmonic levels protects the utility company and the other utility consumers connected on the same feeder.
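the total-distortion column of table i can be expressed as a simple lookup keyed on the isc/il ratio; the band boundaries and limit values below are those given in the table, while the function name is an illustrative assumption:

```python
def ieee519_thd_current_limit(isc, il):
    """Total current harmonic distortion limit (%) for general
    distribution systems (120 V to 69 kV), selected by the ratio of the
    available short circuit current Isc to the maximum demand load
    current IL at the PCC."""
    ratio = isc / il
    # (exclusive upper bound of the Isc/IL band, THD limit in %)
    bands = [(20, 5.0), (50, 8.0), (100, 12.0), (1000, 15.0)]
    for upper, limit in bands:
        if ratio < upper:
            return limit
    return 20.0  # Isc/IL > 1000

# example: 30 kA available at the PCC, 400 A maximum demand current
print(ieee519_thd_current_limit(30000, 400))  # 12.0 (ratio = 75)
```

the individual per-harmonic-order columns of table i could be added the same way; only the total limit is sketched here.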
where:
isc: maximum short circuit current at the pcc
i1 or il: maximum demand load current (fundamental frequency component) at the pcc

fig. 4. a comprehensive procedure for the steps required to perform a harmonic analysis study. [flowchart summary: for a new plant, the harmonic sources are defined and the ratio of nonlinear loads (kva) to total plant loads (kva) is computed; a harmonic study is required when this ratio is 25% or more. for an existing plant, all required data about the existing harmonic sources are gathered, the harmonic-related problems within the plant are surveyed, the pcc is defined and site harmonic measurements are performed at the pcc and at the buses containing the major nonlinear loads. the study then follows the steps of section iv; if the voltage and current harmonic levels exceed the allowable limits, the optimum technical and economic mitigation technique and its insertion point are selected, the network model is updated accordingly and the harmonic analysis study is re-performed.]

fig. 5. various insertion points for the harmonic filters

it should be noted that all power generation equipment is limited to these values of current distortion, regardless of the actual isc/i1 ratio. the ratio isc/il is the ratio of the short circuit current available at the pcc to the maximum fundamental load current. it is recommended that the load current (il) be calculated over a 15 or 30 min period and then averaged over a 12 month period.

table i. harmonic current distortion limits for general distribution systems (system voltages from 120 v to 69 kv); individual limits apply to odd harmonic orders h, values in % of il

isc/il    | h<11 | 11≤h<17 | 17≤h<23 | 23≤h<35 | h≥35 | thdi (%)
<20       |  4.0 |   2.0   |   1.5   |   0.6   | 0.3  |    5
20-50     |  7.0 |   3.5   |   2.5   |   1.0   | 0.5  |    8
50-100    | 10.0 |   4.5   |   4.0   |   1.5   | 0.7  |   12
100-1000  | 12.0 |   5.5   |   5.0   |   2.0   | 1.0  |   15
>1000     | 15.0 |   7.0   |   6.0   |   2.5   | 1.4  |   20

b.
harmonic voltage distortion limits

ieee 519-1992 defines the allowable voltage harmonic limits at the pcc. table ii summarizes the limits for low voltage systems and table iii the limits for medium and high voltage systems, where:
- special systems: critical applications like hospitals and airports
- dedicated systems: systems that contain only nonlinear loads
it is important to highlight that the limits listed in table iii should be used as system design values for normal operating conditions (lasting more than one hour). for shorter operation periods, during start-ups or unusual transient conditions, these harmonic limits may be exceeded by 50%.

table ii. harmonic voltage distortion limits for low voltage distribution systems (system voltages below 1 kv)

system type                  | allowable voltage thd (%)
special systems              | 3
general distribution systems | 5
dedicated systems            | 10

table iii. harmonic voltage distortion limits for medium and high voltage distribution systems

bus voltage          | individual voltage harmonic distortion (%) | total voltage harmonic distortion (%)
69 kv and below      | 3.0 | 5.0
from 69 kv to 161 kv | 1.5 | 2.5
161 kv and above     | 1.0 | 1.5

viii. conclusions

harmonic analysis studies are necessary to analyze the current and voltage harmonic levels within any industrial electrical system and to check whether these levels comply with the contractual or international standard limits. this paper provided a comprehensive approach for performing a harmonic study, presented in the form of a flowchart. in addition, it presented the current and voltage harmonic limits used in industrial systems.

references
[1] m. z. el-sadek, power system harmonics, 2nd edition, mukhtar press, egypt, 2007
[2] r. g. ellis, "harmonic analysis of industrial power systems", ieee transactions on industry applications, vol. 32, no. 2, pp.
417-421, 1996
[3] ieee std 399-1997, recommended practice for industrial and commercial power systems analysis, ansi/ieee, 1997
[4] j. p. nelson, "a better understanding of harmonic distortions in the petrochemical industry", ieee transactions on industry applications, vol. 40, no. 1, pp. 220-231, 2004
[5] s. f. mekhamer, a. y. abdelaziz, s. m. ismael, "technical comparison of harmonic mitigation techniques for industrial electrical power systems", mepcon 2012, fifteenth international middle east power systems conference, paper id: 214, alexandria, egypt, 2012
[6] e. bettega, j. n. fiorina, cahier technique no. 183: active harmonic conditioners and unity power factor rectifiers, schneider electric, 1st edition, 1999
[7] iec std 61000-3-2, electromagnetic compatibility (emc)-part 3-2: limits for harmonic current emissions (equipment input current ≤16 a per phase), iec, 2009
[8] iec std 61000-3-12, electromagnetic compatibility (emc)-part 3-12: limits for harmonic currents produced by equipment connected to public low-voltage systems with input current >16 a and ≤75 a per phase, iec, 2011
[9] iec std 61800-3, adjustable speed electrical power drive systems-part 3: emc requirements and specific test methods, iec, 2004
[10] ieee std 519, recommended practice and requirements for harmonics control in electrical power systems, ansi/ieee, 1992
[11] iec std 61000-3-6, electromagnetic compatibility (emc)-part 3-6: assessment of emission limits for the connection of distorting installations to mv, hv and ehv power systems, iec, 2008
[12] british engineering recommendation g5/4-1, planning levels for harmonic voltage distortion and the connection of nonlinear equipment to transmission systems and distribution networks in the united kingdom, 2001

engineering, technology & applied science research vol. 9, no. 2, 2019, 4019-4026 www.etasr.com

green scenarios for power generation in vietnam by 2030

vu h. m. nguyen, faculty of electrical and electronics engineering, ho chi minh city university of technology and education, ho chi minh city, vietnam, vunhm.ncs@hcmute.edu.vn
cuong v. vo, faculty of electrical and electronic engineering, ho chi minh city university of technology and education, ho chi minh city, vietnam, cuongvv@hcmute.edu.vn
luan d. l. nguyen, department of urban engineering, ho chi minh city university of architecture, ho chi minh city, vietnam, luan.nguyenleduy@uah.edu.vn
binh t. t. phan, faculty of electrical and electronics engineering, ho chi minh city university of technology, ho chi minh city, vietnam, thanhbinh055@yahoo.com

abstract—energy for future sustainable economic development is considered a crucial issue in vietnam. this article aims to investigate green scenarios for power generation in vietnam by 2030. four scenarios, named business as usual (bau), low green (lg), high green (hg) and crisis, have been proposed for power generation in vietnam with projection to 2030. three key factors have been selected for these scenarios: (1) future fuel prices, (2) reduction of load demand caused by the penetration of led technology and rooftop photovoltaic (pv) systems, and (3) the introduction of power generation from renewable sources. the least costly structure of the power generation system has been found. the co2 emission reduction of hg in comparison to the bau scenario and its effect on generation cost reduction are computed. results show that bau is the worst scenario in terms of co2 emissions because of the higher proportion of power generation from coal and fossil fuels. the lg and hg scenarios show positive impacts both on co2 emissions and on cost reduction. hg is defined as the greenest scenario by its maximum potential for co2 emission reduction (~146.92 mt co2) in 2030.
additionally, selling the mitigated co2 can make the green scenarios more competitive with bau and crisis in terms of cost. two ranges of generation cost (4.3-5.5 and 6.0-7.7 us$cent/kwh) have been calculated and released in correspondence with the low and high future fuel price scenarios. using led lamps and increasing the installed capacity of rooftop pvs may help reduce the electric load demand. this, along with the high contribution of renewable sources, will make the hg scenario more attractive in both environmental and economic aspects if the crisis scenario comes. the generation costs of all scenarios shall become cheap enough to promote economic development in vietnam by 2030.

keywords-green; scenario; least cost; optimum power generation; vietnam

i. introduction

energy for sustainable development is one of the most crucial issues globally. in long-term energy planning, there are many uncertainty factors. in order to address the issue, green energy scenarios of iea, bp and china have been built, projected 20-30 years ahead [1-3]. electricity usually takes a high share, about 25% to 45%, of total energy consumption. finding the optimum structure for a power system in a competitive market, with the many constraints of transmission lines and the rules of the market, is a much-researched topic [4-8]. vietnam has been considered one of the most dynamic emerging countries, with approximately 7% annual gross domestic product (gdp) growth in the last 20 years. that development has led to an increase of up to 15% in the yearly nationwide electric power demand [9]. this is currently a big issue for vietnam, related to the lack of primary fuel supplies. another obstacle for power generation in vietnam in meeting the mentioned rapid economic development is environmental pollution [10]. summarizing, supplying electric power to meet the development of the economy is one of the most urgent issues of vietnam.
Vietnam Electricity Corporation (EVN) is a state-owned enterprise responsible for the whole country. EVN is required to cooperate with other relevant energy institutes and departments to prepare and release multiscale development plans for the national power system, among which the periodic master plan, projecting development for the next 15 to 20 years, is the most important outcome. The most recent relevant release is Decision No. 428/QD-TTg of March 18th, 2016 [9]. Three development scenarios for the Vietnam power system by 2030 were proposed there, namely the base, high ratio of renewable energy, and high load demand scenarios. However, considerable doubt regarding the historical input data, mathematical functions, and load demand forecasting results has been identified in those three scenarios. That vagueness makes the forecasting results of [9] unverifiable. The shifts in power consumption awareness and generation sources have created three new factors with a strong and direct effect on the power system: (1) the rapid growth of LED lamp technology, (2) the withdrawal of nuclear power from the national electric power structure (since June 2016), and (3) the release of Decision No. 11/QD-TTg of April 14th, 2017 [11]. The above factors have made the scenarios in [9] inappropriate. New scenarios with updated forecasting objectives by 2030 must be studied and introduced to supplement [9]. The purpose of this paper is to propose two state-of-the-art concepts of green power generation scenarios for the Vietnam power system by 2030.

Corresponding author: Cuong V. Vo
The new concepts are strongly based on: the uncertainty of future fuel prices, the penetration of LED technology and the increasing installed capacity of rooftop PV systems, and the differing proportions of renewable energy exploited in the Vietnam power system. A least-cost objective function and its constraints, corresponding to each scenario, will be established. A software package named LINDO (Linear, Interactive, and Discrete Optimizer) is employed to determine the optimum structure of the Vietnam power generation system with projection to 2030. The scope of the study is the power generation system in Vietnam only; the model does not yet place it within a whole power system with transmission-line constraints and the rules of a competitive market.

II. Method

The scenarios are proposed based on uncertainty factors which strongly affect the sources (input) and load demands (output) of the power system. An objective function in terms of least cost, together with its constraints, is employed to determine the optimum structure of the Vietnam power generation system.

A. Power Generation Scenarios

Three variable key factors have been selected for creating the power generation scenarios: future fuel price, reduction of load demand caused by the penetration of LED technology and rooftop PV systems, and the introduction of power generation from renewable sources. Table I presents two scenarios of Vietnam's fuel prices by 2030. Proposals for the prices of the two common fuels used for power generation in Vietnam (coal and gas) are computed. The indicators show big differences between the low and high scenarios: the two models, computed by two prestigious institutes, differ by approximately 100% [11, 12].

Table I.
Fuel Price Scenarios

Fuel           Scenario    2020   2025   2030
Coal ($/ton)   High [12]   93.5   98.3   103.3
               Low [11]    41.8   44.4   48.2
Gas ($/MBTU)   High [12]   9.2    10.9   11.6
               Low [11]    4.9    5.5    5.7

Since the price of LED lamps has dropped dramatically, they are considered the best choice for replacing conventional lamps in existing and new constructions. The strong penetration of LED lamps will lead to a significant reduction of power load demand. Assumptions on lighting load share and LED penetration by 2030 are presented in Table II. Decision No. 11/QD-TTg [11] promoted a wave of residential rooftop PV installations in the southern regions of Vietnam. Taking effect on June 1st, 2017, it was the first cornerstone of Vietnam's regulatory framework to encourage the development of solar energy utilization, which is considered one of the most effective ways to reduce load demand. Assumptions on the rooftop PV systems are shown in Table III. Electric load demand scenarios resulting from the penetration of LED lamp technology are shown in Table IV. The high values are forecasted, while the low values are reduced by the penetration of LED technology, from 1.2% to 6.2% (demand) and 1.5% to 8.2% (Pmax), respectively.

Table II. Assumptions of Lighting Load and LED

Year                           2020   2025   2030
Lighting consumption (%) [9]   23     21     19
Capacity factor of lighting    0.75 (all years)
Total lighting Pmax (%) [9]    30.7   28.0   25.3
Penetration of LED (%)         10     30     65
Energy reduction by LED (%)    50 (all years)

Table III. Assumptions of Rooftop PV Systems

       Residential            Penetration of rooftop PVs (%)
Year   consumption [9] (%)    Low     High
2020   34.75                  2       2
2025   31.36                  10      15
2030   28.63                  20      30

Table IV.
Electric Load Demand Scenarios

       Demand (TWh)                       Pmax (GW)
Year   High [13]  Low      Red. (%)      High [14]  Low     Red. (%)
2020   230.20     227.55   1.2           40.33      39.71   1.5
2025   349.95     338.93   3.2           60.84      58.28   4.2
2030   511.27     479.70   6.2           87.56      80.35   8.2

Considering the combined development of rooftop PV systems and the penetration of LEDs, an option of deeply low load demand is proposed. A concept of cumulative reduction is presented in Table V. Note that rooftop PVs contribute to the reduction of TWh only, while Pmax is not affected, because Pmax normally occurs in the evening (at around 7:00 PM).

Table V. Cumulative Electric Load Demand Reduction by LED and Rooftop PV Systems

Year   Low demand (TWh)   Red. (%)   Deeply low (TWh)   Red. (%)
2020   225.97             1.8        225.97             1.8
2025   328.30             6.2        322.98             7.7
2030   452.23             11.5       438.50             14.2

Table VI indicates the maximum GWh and MW of power generation from renewable sources by 2030 [9]. However, Decision No. 2068/QD-TTg, issued on November 25th, 2015 [15], stipulated another option in which the exploitable renewable source values are much higher than the ones presented in Table VI (see Table VII). Note that the high biomass values are assumed to reach only 70% of those in [15]. Combining the above factors generates various scenarios. Four different scenarios are proposed
and shown in Table VIII: (1) Business as Usual (BAU) resembles what happened during the last 5 years, with low fuel price, high load demand, and low sharing of renewable energy generation; (2) Low Green (LG) represents the case of low fuel price, low load demand, and high sharing of renewable energy; (3) High Green (HG) covers the conditions of high fuel price, deeply low load demand, and high renewable energy; and (4) the Crisis scenario is the case of high fuel price, low load demand, and low renewable energy.

Table VI. Scenarios of Low Renewable Energy [9]

Generation    Unit   2020   2025   2030
Mini hydro    TWh    11.1   12.4   17.7
              GW     3.8    5.2    6.8
Biomass low   TWh    2.7    4.8    12.6
              GW     0.5    0.9    2.4
Wind low      TWh    2.1    4.0    12.0
              GW     0.8    2.0    6.0
Solar low     TWh    8.8    12.8   18.9
              GW     6.1    8.0    12.0

Table VII. Scenarios of High Renewable Energy [15]

Generation                  Unit   2020   2025   2030
Biomass high (70% of [15])  TWh    8.0    19.2   36.0
                            GW     1.5    3.6    6.9
Wind high                   TWh    2.7    6.0    15.4
                            GW     1.0    3.0    7.7
Solar high                  TWh    8.8    23.8   34.3
                            GW     6.1    15.0   21.8

Table VIII. Proposed Scenarios

Scenario   Fuel price   Load demand   Renewable energy
BAU        Low          High          Low
LG         Low          Low           High
HG         High         Deeply low    High
Crisis     High         Low           Low

Vietnam's economy has been performing well during the last 30 years. This boosts the electric load demand, and therefore sustainable development solutions for energy supply must be considered. However, Vietnam's elasticity of electricity demand has remained at a high value of more than 1.7 [9], due to the ineffective implementation of energy efficiency policies and frameworks, while the exploitation of renewable energy has been neglected. Hence, energy scenarios have to be built comprehensively, aiming to promote the exploitation of renewable energy and to reduce the electric load demand. These are the essential conditions for reforming toward a green economy in Vietnam.
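The factor combinations in Table VIII lend themselves to a simple lookup structure. The sketch below is illustrative only; the dictionary layout and the `is_green` helper are our own, not part of the paper's LINDO model.

```python
# Hypothetical encoding of the four scenarios in Table VIII.
# Keys and values mirror the table; the structure itself is illustrative.
SCENARIOS = {
    "BAU":    {"fuel_price": "low",  "load_demand": "high",       "renewables": "low"},
    "LG":     {"fuel_price": "low",  "load_demand": "low",        "renewables": "high"},
    "HG":     {"fuel_price": "high", "load_demand": "deeply_low", "renewables": "high"},
    "Crisis": {"fuel_price": "high", "load_demand": "low",        "renewables": "low"},
}

def is_green(name: str) -> bool:
    """A scenario counts as 'green' here if it assumes a high renewable share."""
    return SCENARIOS[name]["renewables"] == "high"
```

Encoding the scenarios this way makes it straightforward to sweep all four cases through the same optimization routine later.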
When fuel prices increase, load demand must be reduced and renewable energy must be considered more seriously. This is a great opportunity for developing renewable energy in Vietnam.

B. Objective Function

The objective of the optimal generation structure is to minimize the power generation cost in 2020, 2025, and 2030:

O = \sum_{y,g,q,t} w_y \cdot ce_{g,y} \cdot x_{g,q,t,y} \rightarrow \min    (1)

where g represents the type of generation (hydro, coal, gas, mini-hydro, biomass, wind, photovoltaic, import), q is the load pattern identified by numbers 1 to 8 (see Table IX), t is the time of day (from 1:00 to 24:00), y is the considered year (2020, 2025, 2030), ce_{g,y} is the generation cost of power plant g at year y, x_{g,q,t,y} is the least-cost generation power of power plant g corresponding to load pattern q at time t of year y, and w_y is the net present value coefficient, calculated by (2):

w_y = \left( \frac{1+\varepsilon}{1+r} \right)^{y-2014}    (2)

where r is the interest rate, estimated at 8% per year, and ε is the inflation rate, estimated at 4% per year. The generation cost ce_{g,y} [US$/kWh] of power plant g at year y is calculated by:

ce_{g,y} = \frac{f_{g,y} + a_{g,y} + mo_{g,y}}{q_{g,y}}    (3)

where f_{g,y} is the fuel price, a_{g,y} is the yearly investment depreciation, mo_{g,y} is the operation and maintenance cost, and q_{g,y} is the power production of power plant g at year y [kWh]. The annual investment depreciation a_{g,y} [US$/year] is calculated by:

a_{g,y} = \frac{r_0 (1+r_0)^n}{(1+r_0)^n - 1} \times i_{g,y} \times c_{g,y} \times 10^3    (4)

where r_0 is the ODA interest rate, estimated at 3.8%/year, n is the lifetime of power plant g [years], i_{g,y} is the investment cost per unit of power plant g at year y [US$/kW], and c_{g,y} is the installed capacity of power plant g at year y [MW].

C.
Constraints

The constraints of the suggested objective function are: load demand, upper limit of generation power, reserve power capacity, limitation of generation power variation between two consecutive hours, and capacity factor.

1) Load Demand

To meet load demand, the total generation power must equal the load power demand:

\sum_g x_{g,q,t,y} = p_{q,t,y}    (5)

where p_{q,t,y} is the load power demand for pattern q at time t of year y.

2) Maximum Generation Power

The generation power of power plant g, corresponding to load pattern q at time t of year y, must be lower than the maximum generation power of that power plant:

x_{g,q,t,y} \leq x_{g,q,t_{\max,q},y}    (6)

where t_{\max,q} is the time at which power plant g operates at maximum power, corresponding to load pattern q. The generation power at time t_{\max,q} for load pattern q must be lower than the total installed capacity of power plant g in year y:

x_{g,q,t_{\max,q},y} \leq c_{g,y}    (7)

The power production of power plant g in year y must be lower than the upper capacity limit of that power plant:

q_{g,y} \leq q_{\max,g,y}    (8)

where q_{\max,g,y} is the maximum generation production of power plant g in year y.

3) Maximum Installed Capacity

The maximum installed capacity of power generation is determined by the upper limit of the exploitable input primary energies and by the funding sources available for constructing new power facilities in given years. The installed capacity of power plant g in year y must be lower than the maximum installed capacity of that plant at the same time:
c_{g,y} \leq c_{\max,g,y}    (9)

4) Reserve Power Capacity

To assure the reliability of the power system, the total installed capacity of generation facilities in year y must be higher than the maximum power demand including reserve power:

(1+\alpha_y) \cdot p_{\max,y} \leq \sum_g c_{g,y}    (10)

where p_{\max,y} is the maximum power demand in year y, and α_y is the reserve limitation in year y. Reserve power capacity is closely related to the loss of load expectation (LOLE), which is chosen as the indicator of power system reliability in this study. Values of α_y do not include the installed capacity of renewable energy sources, i.e. biomass, wind, and solar generation.

5) Capacity Factor

In each power load pattern, the diurnal power generation of power plant g must be less than its capacity factor multiplied by the theoretical power generation production:

\sum_t x_{g,q,t,y} \leq 24 \cdot l_{g,q} \cdot c_{g,y}    (11)

where l_{g,q} is the capacity factor of power plant g corresponding to load pattern q.

6) Limitation of Generation Power Between Two Consecutive Hours

The relation between the probability of changing load power demand and the generation capacity of power plant g is expressed as:

(1-\rho_g) \cdot x_{g,q,t-1,y} \leq x_{g,q,t,y} \leq (1+\rho_g) \cdot x_{g,q,t-1,y}    (12)

where ρ_g is the limitation of generation power variation between two consecutive hours of power plant g.

III. Input Data Collection

A. Load Pattern

Instead of using historical hourly power load data, 8 load patterns, representing 8 typical groups of load profiles in a year, are employed to minimize calculation time (Table IX [16]).

Table IX. Load Patterns of Vietnam Power System [16]

No.   Pattern
1     Tet holidays
2     W: 1, 2
3     W: 3, 4, 5
4     W: 6, 7, 8
5     W: 9, 10, 11, 12
6     S & N: 1, 2
7     S & N: 3, 4, 5, 6, 7, 8, 9
8     S & N: 10, 11, 12
(W: working day, S: Sunday, N: national holiday)

B. Maximum Installed Capacity

Maximum installed capacities of hydro, mini-hydro, coal, and gas generation are officially cited from [8] and [15] (Table X).
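The cost coefficients (2) and (4), and the least-cost selection of (1) under the demand balance (5) and the capacity caps (6)-(7), can be sketched in Python. This is a hedged illustration, not the paper's method: the single-hour merit-order function below is only a special case of the full LP that the paper solves with LINDO, and the plant data are made up, not the paper's inputs.

```python
def npv_coefficient(year, r=0.08, eps=0.04, base_year=2014):
    """Net present value coefficient w_y of eq. (2)."""
    return ((1 + eps) / (1 + r)) ** (year - base_year)

def annual_depreciation(i_usd_per_kw, c_mw, n_years, r0=0.038):
    """Annual investment depreciation a_{g,y} of eq. (4), in US$/year.
    The capital recovery factor spreads the investment over the lifetime;
    the factor 1e3 converts the MW capacity to kW."""
    crf = r0 * (1 + r0) ** n_years / ((1 + r0) ** n_years - 1)
    return crf * i_usd_per_kw * c_mw * 1e3

def merit_order_dispatch(plants, demand_mw):
    """Single-hour special case of (1) with (5)-(7): fill demand from the
    cheapest plant upward, respecting each plant's capacity limit.
    plants: list of (name, cost_per_mwh, capacity_mw)."""
    dispatch, remaining = {}, demand_mw
    for name, cost, cap in sorted(plants, key=lambda p: p[1]):
        if remaining <= 0:
            break
        x = min(cap, remaining)
        dispatch[name] = x
        remaining -= x
    if remaining > 0:
        raise ValueError("demand exceeds total installed capacity")
    return dispatch

# Illustrative data only (US$/MWh, MW), not the paper's inputs:
plants = [("hydro", 5, 300), ("coal", 40, 500), ("gas", 60, 400)]
print(merit_order_dispatch(plants, 600))   # cheapest plants are filled first
print(npv_coefficient(2030))               # < 1: discounts 2030 costs to 2014
```

The full model additionally couples hours through the ramp constraint (12) and years through the capacity limits, which is why a general LP solver is needed in practice.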
The maximum installed capacities of biomass, wind, and PV are presented in Tables VI and VII.

Table X. Maximum Installed Capacity (GW) [8, 15]

Power generation   2020    2025    2030
Hydro              18.16   18.63   21.22
Coal               26.71   47.47   65.89
Gas                9.47    17.55   23.23
Mini-hydro         3.80    5.20    6.80

C. Reserved Capacity

Table XI presents the reserve margin and installed capacity of the Vietnam power system by 2030. It is worth noting that the reserved capacity does not include renewable energy sources.

Table XI. Reserve Margin and Installed Capacity [9]

       LOLE target   Reserve      Demand Pmax (GW)    Installed capacity (GW)
Year   (h/y)         margin (%)   High     Low        High      Low
2020   24            25           40.33    39.71      50.41     49.64
2025   24            20           60.84    58.28      73.01     69.94
2030   24            20           87.56    80.35      105.07    96.42

D. Capacity Factor

Some power resources depend on climate and other natural conditions, which makes their capacity factors condition-dependent. For example, the capacity factor of a solar power plant depends on the variation of solar radiation, the capacity factor of a wind farm may be affected by changes of wind speed, and the capacity factor of a hydro power plant varies with the water flow. Therefore, the exact capacity factor of each type of energy source must be verified before being taken into account.

1) Solar Power Plant

To identify the capacity factor of a solar power plant, three important constraints must be accounted for:

• Sunny time: useful solar radiation can only be collected from 6:00 to 18:00. At other times there may be weak radiation in some regions, but it cannot be used.
• Radiation intensity: solar radiation intensity varies continuously throughout the day.
• Geographical location: the regions with stable radiation duration are the central, highland, and southern regions of Vietnam [17].
Therefore, it is recommended that solar power plants be constructed in those regions. Based on the solar radiation values of the central and southern provinces presented in [17], the capacity factor of a solar power plant can be calculated. The solar radiation of the central coast area and the capacity factors of PV power plants are presented in Table XII.

2) Wind Power Plant

At a height of 60 m above sea level, wind energy can only be obtained effectively in Ca Mau, mountainous provinces of the highlands, and some provinces of the central coast area [18]. The biggest wind farm in Southeast Asia, the Tuy Phong wind farm, is located in Binh Thuan province. However, because of the unstable wind speed, the generation power of Tuy Phong varies strongly. Based on forecasted wind speeds [18], the capacity factor of a wind power plant located in Vietnam can be computed. Table XII presents the capacity factors corresponding to the monthly variation of wind speed in central coastal Vietnam.

3) Hydro and Mini-Hydro Power Plants

As mentioned above, the generation capacity of hydro and mini-hydro plants strongly depends on water sources and rainfall. Based on historical data on climate conditions, annual average rainfall, and rainfall predictions, along with the assumption that there will be no unusual changes in the local natural conditions, the capacity factor of a hydro power plant can be calculated. Regarding mini-hydro power plants, the current national regulations on dam design stipulate a very small height limit, so the generation power of a mini-hydro power plant is closely tied to local rainfall. Capacity factors of hydro and mini-hydro power plants are presented in Table XIII.

4) Capacity Factors of Other Power Plants

In theory, coal, gas, and biomass power plants can operate throughout the year. However, they need periodic maintenance.
They normally stop operating for at least 30 to 45 days per year for maintenance or accidental shutdown. Therefore, in this research, the capacity factors of coal, gas, and biomass generation are proposed to be 0.8, as shown in Table XIII.

E. Limitation of Generation Power Variation Between Two Consecutive Hours of a Power Plant

For solar and wind power plants, the generation ability between two consecutive hours depends entirely on local natural conditions, i.e. wind speed changes or variations of solar radiation at different times of the day.

Table XII. Solar Radiation and Wind Speed in the Central Coast Area, and PV and Wind Generation Capacity Factors [17, 18]

        Solar radiation   Solar CF     Wind speed   Wind CF
Month   (kWh/m².day)      (per unit)   (m/s)        (per unit)
1       3.6               0.32         8.0          0.75
2       4.8               0.43         7.0          0.56
3       5.2               0.47         5.8          0.34
4       5.6               0.50         4.2          0.10
5       5.2               0.47         5.0          0.19
6       5.2               0.47         5.7          0.32
7       5.2               0.47         6.5          0.41
8       5.2               0.47         6.5          0.41
9       4.4               0.40         5.5          0.29
10      4.4               0.40         4.3          0.10
11      4.0               0.36         6.7          0.44
12      3.6               0.32         8.0          0.75

Table XIII. Generation Capacity Factors

Month   Load pattern no.   Hydro   Mini-hydro   Wind   PV     Coal, gas, biomass
1       1, 2, 6            0.51    0.20         0.75   0.32   0.8
2       1, 2, 6            0.48    0.20         0.56   0.43   0.8
3       3, 7               0.48    0.25         0.34   0.47   0.8
4       3, 7               0.82    0.72         0.10   0.50   0.8
5       3, 7               0.92    0.60         0.19   0.47   0.8
6       4, 7               0.76    0.78         0.32   0.47   0.8
7       4, 7               0.76    0.90         0.41   0.47   0.8
8       4, 7               0.92    1.00         0.41   0.47   0.8
9       5, 7, 8            0.82    0.75         0.29   0.40   0.8
10      5, 7, 8            0.58    0.60         0.10   0.40   0.8
11      5, 7, 8            0.58    0.27         0.44   0.36   0.8
12      5, 7, 8            0.58    0.21         0.75   0.32   0.8

1) Wind Power Plant

The generation capability between two consecutive hours depends on the changes of wind speed at the location where the wind turbine is constructed. However, the difference in wind speed between two consecutive hours is very small, at 1-2 m/s (approximately 15%).
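Two quick computations on the data above may be helpful: the annual mean capacity factors implied by the monthly values in Table XII, and a feasibility check of the hour-to-hour limit that enters the model as constraint (12). The monthly values are copied from Table XII; the helper names are our own.

```python
# Monthly capacity factors from Table XII (central coast area).
pv_cf   = [0.32, 0.43, 0.47, 0.50, 0.47, 0.47, 0.47, 0.47, 0.40, 0.40, 0.36, 0.32]
wind_cf = [0.75, 0.56, 0.34, 0.10, 0.19, 0.32, 0.41, 0.41, 0.29, 0.10, 0.44, 0.75]

annual_pv   = sum(pv_cf) / len(pv_cf)      # unweighted annual mean, ~0.42
annual_wind = sum(wind_cf) / len(wind_cf)  # unweighted annual mean, ~0.39

def ramp_ok(x_prev, x_now, rho):
    """Constraint (12): the hour-t output must lie within
    (1 - rho) .. (1 + rho) times the previous hour's output."""
    return (1 - rho) * x_prev <= x_now <= (1 + rho) * x_prev

# With the ~15% traceability ratio of wind and PV generation:
assert ramp_ok(100.0, 112.0, 0.15)       # within +/-15%, feasible
assert not ramp_ok(100.0, 120.0, 0.15)   # a 20% jump violates (12)
```

The annual means are only an unweighted aggregate; the model itself uses the month-resolved values of Table XIII.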
2) Solar Power Plant

As mentioned, two factors drive the change of generation power between two consecutive hours of a solar power plant: the variation of solar radiation intensity during the day and the duration of sunny time per day. However, changes of solar radiation between two consecutive hours are still small, around 15%, as in the wind power case. Table XIV presents the limitation of generation power variation between two consecutive hours for the different types of power plants, which may be called the load traceability of power generation.

Table XIV. Load Traceability Ratio of Power Generation (%)

Hydro   Coal   Gas   Biomass   Wind   PV
10      20     60    20        15     15

F. CO2 Emissions and Prices

1) CO2 Emission Factor

The CO2 emission factors of the different energy sources in Vietnam are shown in Table XV. The emission factors of wind and solar energy are cited from [20, 21].

Table XV. CO2 Emission Factors in Vietnam

Unit          Coal    Gas   Biomass   Hydro   PV   Wind
[g-CO2/kWh]   1,473   464   20        11      40   11.7

2) CO2 Prices

Although the Clean Development Mechanism (CDM) issued under the Kyoto Protocol (1997) has not been extended since 2012, the international CO2 trading market is still operating. Selling prices of CO2 in the global market are chosen at 18, 20, and 25 US$/short t-CO2 for the years 2020, 2025, and 2030, respectively. These are the lowest prices according to [22].

G. Levelized Cost

In Vietnam, renewable power plants will be offered a fixed selling tariff (Table XVI). Reference values for investment, power factor, lifetime, fuel consumption per unit of production, operation and maintenance cost, and fuel price are included in Table XVII. They are categorized as levelized cost. The values are cited from [9].

Table XVI.
Electricity Prices Purchased by EVN

Unit            Biomass [23]   Wind [24]   PV [11]   Import [9]
[US$cent/kWh]   7.4            7.8         9.35      6.02

Table XVII. Levelized Cost of Conventional Power Plants [9]

                                    Gas                    Coal                   Hydro
Indicator           Unit            2020   2025   2030     2020   2025   2030     2020   2025   2030
Investment          US$/kW          1,224  1,660  1,660    1,400  1,850  1,850    1,500  1,500  1,500
Energy consumption  kcal/kWhe       2,457  1,870  1,870    2,098  1,720  1,720    -      -      -
Lifetime            yr              25                     30                     50
Fixed O&M           US$/kW.yr       24.5   28     28       42     43.5   43.5     5      5      5
Variable O&M        US$/MWhe        0.88   1.37   1.37     0.15   3      3        2      2      2
Interest rate       %               3.8 (WB-IDA SUF, all plants)
Fuel price, low     US$/MBTU        4.88   5.46   5.69     1.6    1.7    1.9      -      -      -
Fuel price, high    US$/MBTU        9.16   10.9   11.6     3.6    3.8    4.0      -      -      -
Heat value (LHV)    1000 kcal/kg    9.8    8.5    8.5      6.5    6.5    6.5      -      -      -

IV. Results

When the input parameters are inserted into the LINDO software, optimal generation scenarios of installed capacity and power generation production by power source for the years 2020, 2025, and 2030 are generated as final results. The following values are then calculated: CO2 emission capacity, electricity selling prices in the case of not selling mitigated CO2, and electricity selling prices including the revenue share from mitigated-CO2 trading.

A. Installed Capacity

The optimum installed capacity of power generation is presented in Figure 1.

Fig. 1. Optimum installed capacity

Hydro installed capacity is around 18.1 GW, 18.6 GW, and 21.2 GW in 2020, 2025, and 2030 respectively, its share decreasing from 32% to 16.8%. It reaches its upper installation limit and does not change across the scenarios. Coal generation capacity, on the other hand, increases dramatically, from around 15.8-17 GW in 2020 to 24.6-29.3 GW in 2025 and 38.9-49.9 GW in 2030, its percentage of total capacity changing from about 27.8% to 40.6%. Gas generation capacity increases over the years but does not change much across the different scenarios, at around 9.5 GW in 2020, 15.6 GW in 2025, and 23.2 GW in 2030. As a percentage of the total, it varies in the range of 16.6% to 20.3%.
The other generation types all reach their upper installation limits. All scenarios have a reserve capacity of more than 20%, not including renewable sources.

B. Power Generation Production

Figure 2 illustrates the computed results of the optimum electricity generation. Hydro generation reaches its upper limit, from 66.3 to 68.6 TWh between 2020 and 2030, its share decreasing from 35.8% to 13.9%. Coal generation increases dramatically to meet the increasing load demand, taking a very high share, from 41.5% to 65.8%. Gas generation covers 19% to 26.3% of the total, depending on the year and scenario. The other generation types all reach their upper limits.

C. CO2 Emission Capacity

The CO2 emissions of the power generation system are shown in Figure 3. The BAU scenario has the highest emissions: 188.9 Mt-CO2, 341.8 Mt-CO2, and 516.9 Mt-CO2 in 2020, 2025, and 2030 respectively. The lowest emissions result from the HG scenario and are reduced by 5.9% (2020), 20.4% (2025), and 28.4% (2030) compared to BAU. This is caused by the considerable increase of renewable sources and the very low load demand. Details of the emission reductions are presented in Table XVIII.

Fig. 2. Optimum power generation production

Fig. 3. CO2 emissions

Table XVIII. CO2 Reduction Compared to the BAU Scenario (Mt-CO2)

Scenario   2020    2025    2030
LG         11.14   62.04   127.74
HG         11.14   69.60   146.92
Crisis     6.05    30.80   2.38

D. Generation Cost

Generation costs for the two cases of not selling CO2 emission reductions and of selling mitigated CO2 are shown in Figure 4. Two ranges of generation cost, 4.3-5.5 and 6.0-7.7 US$cent/kWh, have been reached, corresponding to the low and high future fuel price scenarios. It is demonstrated that selling mitigated CO2 helps reduce the generation cost.
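The emission totals in Figure 3 follow from weighting each source's production by its factor in Table XV; 1 TWh generated at 1 g-CO2/kWh corresponds to 0.001 Mt-CO2. The sketch below uses the Table XV factors with an illustrative generation mix, not the scenario results.

```python
# Emission factors from Table XV, in g-CO2/kWh.
EF = {"coal": 1473, "gas": 464, "biomass": 20, "hydro": 11, "pv": 40, "wind": 11.7}

def emissions_mt(generation_twh):
    """Total CO2 in Mt for a {source: TWh} generation mix.
    1 TWh = 1e9 kWh, and 1e9 g = 1e-3 Mt, so Mt = factor * TWh * 1e-3."""
    return sum(EF[src] * twh * 1e-3 for src, twh in generation_twh.items())

# Illustrative mix (TWh), not taken from the paper's scenarios:
mix = {"coal": 100, "gas": 50, "hydro": 66}
print(f"{emissions_mt(mix):.1f} Mt-CO2")
```

The dominance of the coal factor (1,473 g-CO2/kWh) in this sum is why the coal-heavy BAU scenario has by far the highest emissions.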
The maximum cost reduction is recorded in the HG scenario in 2030, at 10% relative to the case of not selling mitigated CO2. The maximum revenue from selling CO2 emission reductions is 2.64 billion US$, in the HG scenario in 2030. This brings the generation costs of the HG and Crisis scenarios to nearly the same level in 2030. Selling mitigated CO2 also makes the generation cost of the LG scenario in 2030 lower than that of BAU. The BAU scenario has the same cost in both the selling and non-selling cases, as it has no emission reduction.

Fig. 4. Electricity generation cost

V. Conclusion

Four projection scenarios of power generation in Vietnam by 2030 have been proposed. Three variable key factors were selected for creating the power generation scenarios: future fuel price, reduction of load demand caused by the penetration of LED technology and rooftop PV systems, and the introduction of power generation from renewable sources. The scenarios, named BAU, LG, HG, and Crisis, are used to find the optimum structure of the power generation system in terms of installed capacity and power generation production. The CO2 emission reduction compared to the BAU scenario, and its effect on reducing generation cost, are calculated with the aim of conveying the magnitude of the impact of the green scenarios on the power generation system. BAU is the worst scenario in terms of CO2 emissions because of the highest proportion of generation from coal and fossil fuels. It also leads to poor energy security, because thermal generation increases its share to about 62.6%, 72.5%, and 75.1% of the total in 2020, 2025, and 2030, respectively. The LG and HG scenarios show positive impacts on CO2 emissions and generation cost reduction. The HG scenario is the greenest one, with renewable energy sources contributing 11.2% (2020), 15.5% (2025), and 19.4% (2030) of total generation. The maximum emission reduction is about 146.92 Mt-CO2, in the HG scenario in 2030.
Additionally, selling mitigated CO2 can make the green scenarios more competitive with BAU and Crisis in terms of cost. Two ranges of generation cost (4.3-5.5 and 6.0-7.7 US$cent/kWh) have been reached, corresponding to the low and high fuel price scenarios. These generation costs are low enough to support future sustainable economic development in Vietnam. Replacing conventional electric lamps with LED lamps and increasing the installed capacity of rooftop PVs may help reduce electric load demand. Increasing the contribution of renewable generation will make the HG scenario more attractive in both environmental and economic aspects should the Crisis scenario occur. Generation costs of all scenarios shall become cheap enough to promote economic development in Vietnam to 2030.

References

[1] IEA, World Energy Outlook 2017, IEA, 2017
[2] BP, BP Energy Outlook 2035, BP, 2015
[3] G. Fan, N. Stern, O. Edenhofer, S. Xu, K. Eklund, F. Ackerman, L. Li, K. Halding, The Economics of Climate Change in China: Towards a Low-Carbon Economy, Stockholm Environment Institute, 2011
[4] M. R. Salehizadeh, A. Rahimi-Kian, K. Hausken, "A leader-follower game on congestion management in power systems", in: Game Theoretic Analysis of Congestion, Safety and Security, pp. 81-112, Springer, 2015
[5] M. R. Salehizadeh, S. Soltaniyan, "Application of fuzzy Q-learning for electricity market modeling by considering renewable power penetration", Renewable and Sustainable Energy Reviews, Vol. 56, pp. 1172-1181, 2016
[6] M. R. Salehizadeh, A. Rahimi-Kian, M. Oloomi-Buygi, "A multi-attribute congestion-driven approach for evaluation of power generation plants", International Transactions on Electrical Energy Systems, Vol. 25, No. 3, pp. 482-497, 2015
[7] M. R. Salehizadeh, A. Rahimi-Kian, M.
Oloomi-Buygi, "Security-based multi-objective congestion management for emission reduction in power system", International Journal of Electrical Power & Energy Systems, Vol. 65, pp. 124-135, 2015
[8] W. Wangjiraniran, B. Eua-Arporn, "Assessment of renewable energy penetration on power development plan in Thailand", Journal of Power and Energy Systems, Vol. 5, No. 3, pp. 209-217, 2011
[9] N. T. Dung, Decision 428/QD-TTg, March 18, 2016 of the Prime Minister of Vietnam, Revisions to the National Power Development Plan from 2011 to 2020 with Visions Extended to 2030, 2016
[10] H. T. Nguyen, "Main drivers of carbon dioxide emissions in Vietnam trajectory 2000-2011: An input-output structural decomposition analysis", Journal of Sustainable Development, Vol. 11, No. 4, pp. 129-147, 2018
[11] N. X. Phuc, Decision 11/QD-TTg, April 14, 2017 of the Prime Minister of Vietnam, Support Mechanisms for the Development of Solar Power Projects in Vietnam, 2017
[12] Department of Energy & Climate Change UK, DECC Fossil Fuel Price Projections, 2013
[13] V. H. M. Nguyen, C. V. Vo, K. T. P. Nguyen, B. T. T. Phan, "Forecast on 2030 Vietnam electricity consumption", Engineering, Technology & Applied Science Research, Vol. 8, No. 3, pp. 2869-2874, 2018
[14] V. H. M. Nguyen, C. V. Vo, B. T. T. Phan, "Peak load forecasting for Vietnam national power system to 2030", Journal of Science & Technology of Technical Universities - Hanoi University of Science and Technology, No. 123, pp. 7-13, 2017
[15] N. T. Dung, Decision 2068/QD-TTg, November 25, 2015 of the Prime Minister of Vietnam, The Development Strategy of Renewable Energy of Vietnam by 2030 with a Vision to 2050, 2015
[16] V. H. M. Nguyen, A. N. Nguyen, C. V. Vo, B. T. T. Phan, "Forecasting Vietnam's electric load profile to 2030", Journal of Technical Education Science, Vol. 49, pp. 51-57, 2018
[17] J. Polo, S. Martinez, C. M. Fernandez-Peruchena, A. Navarro, J. M. Vindel, M. Gaston, L. R. Santigosa, E. Soria, M. V. Guisado, A.
Bernados, I. Pagola, M. Olano, Maps of Solar Resource and Potential in Viet Nam, Ministry of Industry and Trade of the Socialist Republic of Vietnam, 2015
[18] AWS Truepower, Wind Resource Atlas of Vietnam, 2011
[19] V. V. Cuong, "CO2 life cycle emission factors of power generation in Vietnam", Journal of Science & Technology of Technical Universities - Hanoi University of Science and Technology, Vol. 79, pp. 102-107, 2010
[20] NREL, Life Cycle Greenhouse Gas Emissions from Solar Photovoltaics, NREL/FS-6A20-56487, NREL, 2012
[21] R. C. Thomson, G. P. Harrison, Life Cycle Costs and Carbon Emissions of Wind Power, University of Edinburgh, 2015
[22] Synapse Energy Economics, Carbon Dioxide Price Forecast, Synapse Energy Economics Inc., 2015
[23] Ministry of Industry and Trade, Decision 942/QD-BCT, March 11, 2016 of the Minister of Industry & Trade, Vietnam, Avoided Cost Applying for Biomass Power Projects in 2016, 2016
[24] N. X. Phuc, Decision 39/2018/QD-TTg, September 10, 2018 of the Prime Minister of Vietnam, Amending Several Articles of Decision No. 37/2011/QD-TTg dated June 29, 2011 of the Prime Minister on Provision of Assistance in Development of Wind Power Projects in Vietnam, 2018

Engineering, Technology & Applied Science Research Vol. 3, No.
3, 2013, 424-428 424 www.etasr.com kardjilova et al.: influence of temperature on energetic and rheological characteristics… influence of temperature on energetic and rheological characteristics of plantohyd bio lubricants – a study in the laboratory krassimira kardjilova department of physics technical university of varna varna, bulgaria kardjilova@yahoo.com vlasta vozarova department of physics slovak university of agriculture of nitra nitra, slovak republic vlasta.vozarova@uniak.sk mihal valah department of physics slovak university of agriculture of nitra nitra, slovak republic michal.valach@uniag.sk abstract — this article presents the results of measuring the calorific value and the rheological characteristics of plantohyd bio lubricants. measurements were conducted under laboratory conditions with an ika c5000 calorimeter and a dv-3p anton paar digital viscometer. results are presented graphically. it is shown that the physical interpretation of the energy-value results and the dependence of rheological properties on temperature can be used to assess the quality of lubricants. keywords: plantohyd; calorific value; rheological characteristics; quality of lubricants. i. introduction biofuels and bio lubricants are widely entering the market, especially the liquid fuel market. biofuels and bio lubricants are compatible with existing engines and vehicles, but are produced from organic materials (waste agricultural products, sunflower, rapeseed oil, etc.). they are therefore far more environment-friendly, while being capable of producing energy values similar to those of ordinary fuels. plantohyd lubricants are products based on synthetic esters, providing an alternative to petroleum-based hydraulic oils (e.g. [1]). the main advantages of these lubricants are: • excellent compostability (90% in 14 days); • excellent resistance to aging and oxidation; • good viscosity–temperature dependence.
plantohyd hydraulic lubricants are suitable for all mobile and stationary hydraulic equipment in vehicles and industry. their use is recommended especially when there is a danger of leakage of hydraulic and lubricating oils into soil, groundwater or surface water. to maintain their production properties, they must be used independently and not in combination with petroleum-based oils. they can be applied in a wide temperature range, from -35 °c to +90 °c, covered by the different plantohyd types. there is little information about the energetic values of different lubricants. the change of other characteristics of lubricants with temperature and pressure during use and with storage conditions has also received limited investigation. measurements were conducted in the scientific laboratory of the department of physics of the slovak university of agriculture in nitra. we measured the specific heat of combustion of three plantohyd samples and the dependence of the rheological characteristics – dynamic viscosity η, kinematic viscosity ν, fluidity φ and density ρ – on temperature t. measurement methods of thermophysical properties and their theoretical principles can be found in [2-5]. ii. theoretical base basic physical processes that can be used to measure the influence of temperature on the physical properties of lubricants are the hot wire method [2] and thermal analysis methods [6]. among the most common methods of thermal analysis are thermogravimetric analysis, differential thermal analysis and differential scanning calorimetry. the main quantity studied in thermal analysis methods is the change in enthalpy Δh [3, 7]. a. specific enthalpy (calorific value of fuels) the amount of energy generated when a unit mass of fuel is burned completely is known as the calorific value of the fuel. enthalpy is a measure of the energy content in a thermodynamic (td) system. for each balance state of the td system, specific enthalpy has a specific value.
the value of enthalpy and its change depend on the initial and final state of the td system, rather than on the intermediate states [4]. specific enthalpy is the enthalpy per unit mass, [h] = kj/kg. the change of specific enthalpy determines the heat released from the system in a fixed process. the energy released during the combustion of fuel is transferred to the products of combustion, which leads to a change in their enthalpy. the enthalpy of combustion is defined as the difference between the enthalpy of the products and the enthalpy of the reagents when complete combustion occurs at a given temperature and pressure [4]. in combustion, the calorimetric technique is adopted to determine the enthalpy of the combustion products that result from a unit of fuel. enthalpy-energy diagrams, with graphical interpolation for intermediate values, can be used to determine the enthalpy of combustion products at different temperatures and excess air in the construction of thermal balances and the related thermal calculations of combustion. in many cases, however, the enthalpy of combustion, which can be experimentally determined, is used for energetic analysis when enthalpy-of-formation data are not directly accessible. fast and precise measurement of the specific enthalpy (calorific value of fuels) can be acquired with a c5000 calorimeter. b. rheological characteristics rheology is a branch of physics which deals with the ways in which materials and fluids deform in response to applied forces or stress [8]. newton formulated the law for an ideal fluid: the shear stress between layers is proportional to the velocity gradient, ∂u/∂y, in a direction perpendicular to the layers, in other words, to the relative movement of the layers:
τ = η·∂u/∂y (1) here, the constant η is the coefficient of viscosity, or dynamic viscosity. many fluids, such as water, satisfy newton's criterion and are known as newtonian fluids. non-newtonian fluids exhibit a more complex relationship between shear stress and velocity gradient. in some cases, the ratio to inertial resistance is calculated using the fluid density ρ. the kinematic viscosity is determined by: ν = η/ρ (2) the reciprocal value of viscosity determines the fluidity: φ = 1/η (3) it is known that dynamic viscosity changes with temperature, which means that this impact will also occur in kinematic viscosity and fluidity. theoretical and experimental studies show that this dependence is exponentially or linearly decreasing for η and ν and linearly or exponentially increasing for φ [8, 9]. the dependencies are determined by the kind of substance and the manner of processing and storage. this means that many different fluids are required in order to investigate the dependency of rheological properties on temperature. iii. model and method of measurement three samples of plantohyd (15s, 46s and 40n) were examined. measurements were made under laboratory conditions at air temperature t = 21 °c, atmospheric pressure and normal humidity ω = 56%. all calorific values were measured twice and all rheological values three times. the presented results are average values. a. measurement of energy values measurements were performed with an ika calorimeter system c5000 control (c5040 calwin) [3, 7]. the selected operating condition was adiabatic. calibration of the apparatus was conducted before the measurements in order to achieve a high degree of precision and accuracy. the masses of the samples were measured with a digital balance accurate to 0.0001 g and the samples were then placed in a calorimetric bomb, in which the combustion process takes place under certain conditions.
the specific heat of combustion is calculated based on the data obtained for the mass of the samples, the heat capacity of the calorimeter and the temperature rise of the water in the calorimeter. b. measurement of rheological values measurements were made with a dv-3p anton paar digital viscometer, which is a rotational viscometer [8]. it operates on the principle of measuring the torque needed to rotate a cylinder or spindle immersed in the sample against the resistance of the fluid. viscosity is calculated from the measured values. the combination of spindles (r2, r3) and speeds allows an optimal choice for measuring viscosity. measurements of viscosity should be conducted in laminar flow, which requires a specific settling time (in our case 2 minutes). there should be no air bubbles in the sample. it should have a uniform texture and should be free of mechanical impurities. the temperature must be constant throughout the volume of the sample. only under these conditions can the measured value of viscosity be considered correct. the density of the samples was measured using the pycnometric method and a calibrated pycnometer with a volume of 50 ml. the dynamic viscosity of the three samples was measured at different temperatures. for the same temperature range, we measured the densities of the samples and calculated kinematic viscosity and fluidity. iv. results the energetic values of the three plantohyd samples are shown in table i. the kinematic viscosity and fluidity were calculated from the measured dynamic viscosity and density of the three samples. graphs were constructed using the grafer software, and the form of the functional dependence and the coefficient of determination were calculated. the results are presented in tables ii to viii and in figures 1 to 4.
table i. measured energetic values of three plantohyd samples

type            m (g)     h (j/g)   h_avg (mj/kg)
plantohyd 15s   0.2363    37511
                0.5100    37345     37.428
plantohyd 46s   0.2667    39521
                0.5554    39457     39.483
plantohyd 40n   0.3334    39466
                0.5963    39378     39.417

table ii. measured values of ρ, η, ν, φ – plantohyd 15s

t (°c)   ρ (kg/m³)   η (mpa·s)   ν (10⁻⁶ m²/s)   φ (pa⁻¹·s⁻¹)
-10      939.974     79.3        84.41           12.61
-5       937.897     75.8        80.87           13.19
0        934.776     62.4        66.77           16.03
5        931.656     57.1        61.26           17.51
10       928.537     51.2        55.17           19.53
20       922.298     45.9        49.79           21.79
30       913.980     35.5        38.84           28.17
40       908.781     24.0        26.41           41.67
50       902.542     21.7        23.99           46.08

table iii. measured values of ρ, η, ν, φ – plantohyd 46s

t (°c)   ρ (kg/m³)   η (mpa·s)   ν (10⁻⁶ m²/s)   φ (pa⁻¹·s⁻¹)
-10      927.497     216.4       233.30          4.62
-5       926.457     166.3       179.50          6.01
0        925.417     122.5       132.37          8.16
5        924.378     102.2       110.55          9.78
10       922.298     84.6        91.70           11.81
20       913.980     61.2        66.95           16.34
30       907.741     50.9        56.05           19.65
40       900.462     43.2        48.01           23.15
50       896.303     37.6        41.95           26.59

table iv. measured values of ρ, η, ν, φ – plantohyd 40n

t (°c)   ρ (kg/m³)   η (mpa·s)   ν (10⁻⁶ m²/s)   φ (pa⁻¹·s⁻¹)
-10      934.776     233.7       239.30          4.28
-5       937.696     174.7       186.30          5.37
0        930.616     132.3       142.16          7.56
5        926.457     93.7        101.09          10.67
10       922.298     69.8        75.73           14.33
20       918.139     54.9        59.79           18.21
30       906.701     47.5        52.39           21.05
40       904.621     39.3        43.45           25.45
50       897.343     36.2        40.38           27.62

table v. functional dependence and coefficient of determination for density ρ = f(t)

type   equation              r²
15s    y = -0.642x + 934.4   0.997
46s    y = -0.566x + 924.8   0.974
40n    y = -0.675x + 930.3   0.976

table vi.
functional dependence and coefficient of determination for dynamic viscosity η = f(t)

type   equation                                  r²
15s    y = -0.000x³ + 0.015x² - 1.369x + 64.87   0.986
46s    y = -0.001x³ + 0.181x² - 6.447x + 129.4   0.997
40n    y = -0.002x³ + 0.236x² - 7.761x + 129.8   0.997

table vii. functional dependence and coefficient of determination for kinematic viscosity ν = f(t)

type   equation                                      r²
15s    y = -1e-10x³ + 2e-08x² - 1e-06x + 7e-05       0.985
46s    y = -2e-09x³ + 2e-07x² - 7e-06x + 0.000       0.997
40n    y = -2e-09x³ + 2e-07x² - 8e-06x + 0.000       0.997

table viii. functional dependence and coefficient of determination for fluidity φ = f(t)

type   equation               r²
15s    y = 15.35e^(0.022x)    0.983
46s    y = 0.374x + 8.184     0.998
40n    y = 0.411x + 8.547     0.984

fig. 1. dependence of density on temperature. fig. 2. dependence of dynamic viscosity on temperature. fig. 3. dependence of kinematic viscosity on temperature. fig. 4. dependence of fluidity on temperature. a. analysis of results the results for the energetic values show that they are very close for plantohyd 46s and 40n and lower for 15s. the constructed graphs of the dependence of density on temperature show a linearly decreasing function for all three samples. the dependence is: ρ = a - b·t (4) where a and b are constants that depend on the type of substance. the values of the densities fall in the following ranges: • plantohyd 15s: from 939.97 kg/m³ at -10 °c to 902.54 kg/m³ at +50 °c. • plantohyd 46s: from 927.50 kg/m³ at -10 °c to 896.30 kg/m³ at +50 °c. • plantohyd 40n: from 934.78 kg/m³ at -10 °c to 897.34 kg/m³ at +50 °c.
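the linear density fit of table v can be cross-checked against the raw data of table ii with a short least-squares sketch (an illustration added here, assuming numpy is available; it is not the grafer procedure used by the authors):

```python
import numpy as np

# density of plantohyd 15s vs. temperature (table ii)
t = np.array([-10, -5, 0, 5, 10, 20, 30, 40, 50], dtype=float)   # °C
rho = np.array([939.974, 937.897, 934.776, 931.656, 928.537,
                922.298, 913.980, 908.781, 902.542])              # kg/m³

# least-squares straight line rho = a - b*t, cf. eq. (4)
slope, intercept = np.polyfit(t, rho, 1)
print(f"rho ≈ {intercept:.1f} {slope:+.3f}·t")
# table v lists y = -0.642x + 934.4; the fit agrees up to rounding
```

the same check applies to the 46s and 40n rows of tables iii and iv against table v.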
it can be seen from the graphs that plantohyd 46s and 40n have slightly different densities above +40 °c, and that plantohyd 15s has the highest density values. our results for the plantohyd densities are comparable to the results given in reference books (for plantohyd 46s: 921 kg/m³, for plantohyd 40n: 922 kg/m³; data for plantohyd 15s were not found, but for plantohyd 32s the density is 921 kg/m³). the results for η show that the dependence on t is neither exponential nor linear. it rather follows a cubic decrease. the dependence is: η = c - d·t + i·t² - f·t³ (5) where c, d, f and i are constants that depend on the type of substance. the values of dynamic viscosity for plantohyd 46s and 40n are close: • plantohyd 46s: 216.4 mpa·s at -10 °c to 37.6 mpa·s at +50 °c. • plantohyd 40n: 233.7 mpa·s at -10 °c to 36.2 mpa·s at +50 °c. significantly lower values of dynamic viscosity are observed for plantohyd 15s: 79.3 mpa·s at -10 °c to 21.7 mpa·s at +50 °c. similar results can be seen for the dependence of kinematic viscosity on temperature. the decreasing dependence is cubic: ν = g - h·t + m·t² - n·t³ (6) where g, h, m and n are constants that depend on the type of substance. the values for 46s and 40n are relatively close, and those for 15s are smaller. the results are as follows: • plantohyd 46s: 233.3·10⁻⁶ m²/s at -10 °c and 41.9·10⁻⁶ m²/s at +50 °c. • plantohyd 40n: 239.3·10⁻⁶ m²/s at -10 °c and 40.4·10⁻⁶ m²/s at +50 °c. • plantohyd 15s: 84.4·10⁻⁶ m²/s at -10 °c and 23.9·10⁻⁶ m²/s at +50 °c. a comparison of the results can be made with the reference values of kinematic viscosity for those lubricants at 40 °c: • plantohyd 46s: 48.8·10⁻⁶ m²/s – our calculated value is 48.01·10⁻⁶ m²/s. • plantohyd 40n: 42·10⁻⁶ m²/s – our calculated value is 43.4·10⁻⁶ m²/s. • plantohyd 15s: no reference value for kinematic viscosity was found.
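the calculated values quoted above follow directly from (2) and (3); a minimal sketch (added for illustration, not part of the original measurement procedure) recomputes ν and φ for plantohyd 46s at +40 °c from the table iii row:

```python
# cross-check of eqs. (2) and (3) for plantohyd 46s at +40 °C (table iii)
eta = 43.2e-3       # dynamic viscosity, Pa·s (43.2 mPa·s)
rho = 900.462       # density, kg/m³

nu = eta / rho      # kinematic viscosity, eq. (2), m²/s
phi = 1.0 / eta     # fluidity, eq. (3), Pa⁻¹·s⁻¹

print(f"nu  = {nu * 1e6:.2f}·10⁻⁶ m²/s")   # table iii lists 48.01
print(f"phi = {phi:.2f} Pa⁻¹·s⁻¹")         # table iii lists 23.15
```

the small difference in the last digit of ν against the tabulated value comes from rounding in the tabulated η and ρ.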
the dependence of fluidity on temperature is a linearly increasing function for plantohyd 46s and plantohyd 40n: φ = k + l·t (7) the dependence for plantohyd 15s is: φ = k·exp(l·t) (8) where k and l are constants that depend on the type of substance. these functions do not differ significantly at temperatures from -10 °c to +10 °c. the obtained values for fluidity are: • plantohyd 46s: from 4.62 pa⁻¹·s⁻¹ at -10 °c to 26.59 pa⁻¹·s⁻¹ at +50 °c. • plantohyd 40n: from 4.28 pa⁻¹·s⁻¹ at -10 °c to 27.62 pa⁻¹·s⁻¹ at +50 °c. the dependence of fluidity on temperature for plantohyd 15s is an exponentially increasing function. the obtained values for fluidity are from 12.61 pa⁻¹·s⁻¹ at -10 °c to 46.08 pa⁻¹·s⁻¹ at +50 °c. v. conclusion lubricants are classified by their two main characteristics: viscosity, expressed as the kinematic viscosity, and operational level. the physical interpretation of the results for the energy values and the dependence of the rheological properties on temperature can be used to assess the quality of lubricants. using bio lubricants helps to lubricate the engine, providing longer engine and segment life. the suitability for use of a bio lubricant is estimated with the help of certain criteria. since the function of a fuel or lubricant depends on its viscosity and density, a change in these properties is indicative of a change in quality. the viscosity values show the ability to withstand loads and to protect against wear and corrosion. the results presented in this paper can be useful for manufacturers of similar synthetic oils and could be used by the automotive and mechanical engineering industries, which use similar lubricants.
further, the data obtained can be used in technological processes as well as for studying the physical properties of bio lubricants, which would allow the development of new technologies using bioenergetic conversion and bio lubricants with better performance. references [1] fuchs europe schmierstoffe gmbh, “plantohyd s: environmentally-friendly, synthetic ester-based hydraulic and lubricating fluid (product information)”, pi 4-1274, pm 4/03.08, 2008 [2] j. krempasky, fyzika, alfa, 1982 [3] v. vozarova, “study of processes and properties of materials by method of thermal analysis”, research and teaching of physics in the context of university education international conference, nitra, slovakia, pp. 141-144, june, 2007 [4] j. moran, n. shapiro, fundamentals of engineering thermodynamics, john wiley & sons, 1991 [5] p. haines, thermal methods of analysis: principles, applications and problems, blackie academic and professional, 1995 [6] a. blazek, termicka analiza, stnl, 1972 [7] m. valach, l. hires, “examination of viscosity and specific heat of combustion of biofuels”, physics–research–applications–education international conference, nitra, slovakia, pp. 138-140, october, 2011 [8] m. bozikova, p. hlavac, selected physical properties of agricultural and food products, slovak university of agriculture, nitra, slovakia, 2010 [9] p. hlavac, “the rheologic properties of plum jam”, ptep journal of processing and energy in agriculture, vol. 11, no. 3, pp. 106-108, 2007 engineering, technology & applied science research vol. 10, no. 2, 2020, 5406-5411 5406 www.etasr.com espino & bellotindos: a system dynamics modeling and computer-based simulation in forecasting … a system dynamics modeling and computer-based simulation in forecasting long-term sufficiency: a philippine chicken meat sector case study ma. theresa m.
espino engineering graduate program school of engineering university of san carlos cebu city, philippines and department of industrial engineering ateneo de davao university davao city, philippines mtmespino@addu.edu.ph luzvisminda m. bellotindos engineering graduate program school of engineering university of san carlos cebu city, philippines and center for research in energy systems and technologies university of san carlos cebu city, philippines lmbellotindos@usc.ed abstract—as the human population continues to grow, the global growth of the livestock sector will continue to rise as well. in the philippines, the demand for chicken meat is projected to triple by 2050. in this study, the increasing consumption and the long-term sufficiency were evaluated with the use of the system dynamics concept. with system modeling and computer-based simulation techniques, the available data on the chicken meat supply chain were processed considering that the factors behave dynamically. the simulated model facilitated the forecasting of key variables, which may drop sufficiency from 87% in 2015 to 60% by 2050 if no proper actions take place in the areas of production and consumption. as a whole, this study developed and demonstrated preliminary system dynamics-based and computer-based approaches in order to understand the chicken meat sector. this showed that a dynamic systems-based paradigm shift in food and agricultural systems analysis can help address operational and strategic issues regarding food security. keywords-forecasting; system dynamics modeling; computer-based simulation; philippine chicken meat; sufficiency i. introduction as the human population continues to grow by approximately 1.1% annually, the growth of the livestock sector will continue to rise as well. the demand for meat is projected to grow by 70% in 2050 with 2005 as baseline. the poultry sector, which is basically chicken meat (cm), has the highest growth among meats at 121% [1].
it is a challenge for the poultry sector to satisfy the demand and to align with the mandate of the sustainable development goals (sdg). the sector has to comply primarily with food security, as expressed in sdg #2 (zero hunger) [2]. one of the tools in achieving this is addressing food self-sufficiency, which is the ability to meet consumption needs, especially for staple food items such as meat, from domestic production rather than by importing. in addressing food security or food sufficiency, reliable forecasting is the foundation of all warning systems, giving decision-makers enough time to plan and respond to red-flag warnings. forecasting has to have a high degree of reliability to avoid false alarms. traditional forecasting techniques such as linear regression and multiple regression are used to examine the relationship between independent variable(s) and a dependent variable [3-6]. they are acceptable for simple forecasting requirements. however, the outcomes are sometimes compromised, especially when the data are incomplete and correlation might be mistaken for causation. to consider causation, the system dynamics (sd) approach is more advantageous. sd is based on the synthesis of various concepts, including operating theory, system theory, control theory, information feedback theory, decision-making theory, mechanical systems theory, and computer science. sd can model a system and analyze its behavior given the desired parameters over time [7]. in contrast to traditional forecasting, sd models are more flexible in predicting complex phenomena due to their dynamic behavior [7, 8]. sd models are also more reliable, since they can estimate the sensitivity of the results to variables and can employ stronger scenarios reflecting the impact of significant changes in strategies or interventions. the process of establishing the model may seem complex at the initial model build-up stage.
however, rigor and results can be more significant, especially in forecasting variables in dynamic systems. sd studies have been done in various industries [8, 9] and across agricultural and natural resources [10, 11]. in food supply chain systems, sd has already been applied to rice production [12-14] and food supply [15, 16], which makes it appropriate for the poultry sector. though these studies were done on food agricultural systems, they were not anchored to specific timelines of long-term plans and sustainability goals of the respective countries. ii. philippine chicken meat sector in the philippines, chicken is the most progressive animal enterprise. in 2017, the total chicken meat volume production in the philippines was 1,344.3 thousand metric tons (tmt). corresponding author: m. t. m. espino. the industry volume has been growing at 3.6% per annum during the last five years. a total inventory of 140.20 million birds for meat production has been recorded [17]. in terms of scale, the industry is characterized by “backyard” and “commercial” farms/installations. as described by the sikap/strive foundation, rural families run the typical backyard farms, which comprise around 100 birds of native or improved breeds, raised primarily for the families' own consumption. on the other hand, commercial farms have at least 1,000 broilers, or combinations of at least 100 broilers and 100 layers [18]. the per capita consumption was 12.7 kg in 2017, a 47% rise over a decade. the increase in local consumption is reported to be attributed to changes in lifestyle, income and urbanization [19].
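the quoted 47% rise in per capita consumption over a decade implies an average annual growth rate that can be recovered directly; a side calculation added here for illustration:

```python
# implied annual growth of per capita chicken consumption:
# 12.7 kg in 2017 after a 47% rise over the preceding decade
pc_2017 = 12.7                      # kg per capita in 2017
rise = 0.47                         # 47% rise over 10 years

pc_2007 = pc_2017 / (1 + rise)      # implied level a decade earlier
cagr = (1 + rise) ** (1 / 10) - 1   # compound annual growth rate
print(f"2007 level ≈ {pc_2007:.1f} kg, implied growth ≈ {cagr * 100:.1f}%/yr")
```

this back-of-the-envelope rate (just under 4%/yr) is the kind of per-capita-consumption flow the sd model later treats as a growth-rate input.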
at present, the government tracks sufficiency expressed in the self-sufficiency ratio (ssr), or the equivalent self-sufficiency level (ssl), an indicator of the adequacy of local food production to satisfy the demand of the population. for chicken meat, the ssl at the national level was 92.82% in 2013 but declined to 84.67% in 2016, the lowest over a decade. the drop of 8.8% showed the widening gap between production and consumption. moreover, this indicated that around 15% of the demand in 2016 was not satisfied by local production but was fulfilled through importation. however, the ssl improved back to 96.1% in 2017 with a big improvement in local production [20]. total supply is augmented through importations, which registered 55.49 tmt in 2017. importation had been increasing during 2013-2016 but declined in 2017, a manifestation that local production improved [21]. part of the total production was also exported. the lowest level of exports was in 2017, at 355 metric tons, continuing the downtrend since 2015 [17]. in terms of cost of production, the highest cost driver in both production systems is feeds, accounting for approximately 65-70% of total live broiler production cost [22]. soybean meal (sbm) is the major source of protein in feeds production. at present, local soybean production is limited and serves only 8% of the country's requirements [23]. on the other hand, most feed formulations use maize as the primary source of energy. based on consolidated data in the global livestock environmental assessment model (gleam), the baseline mix for broilers is 71% maize, 27% sbm, and 2% other additives. based on gathered field data, the average feed across stages comprises 51% maize, 25% sbm, 9% rice bran, 5% fish meal, 4% copra meal, 2% vegetable oil, and 4% other additives. for backyard production, the baseline feed is 40% swill, 17.7% grass, 6% cassava, 4.4% maize, and the rest comes from agricultural by-products [24].
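the 2017 figures quoted above reproduce the reported ssl when plugged into an fao-style self-sufficiency ratio; the exact formula used by the statistics office is an assumption here, but the numbers line up:

```python
# self-sufficiency ratio from the 2017 figures cited in the text;
# an FAO-style formula is assumed:
#   SSR = production / (production + imports - exports) * 100
production_tmt = 1344.3   # local chicken meat production, thousand metric tons
imports_tmt = 55.49       # importation in 2017
exports_tmt = 0.355       # exportation in 2017 (355 metric tons)

ssr = production_tmt / (production_tmt + imports_tmt - exports_tmt) * 100
print(f"SSR 2017 ≈ {ssr:.1f}%")   # close to the 96.1% SSL reported for 2017
```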
however, field data showed that there is no comparable mix, since different backyard operations have different feedings based on local availability, practices, and capacity. the imported feed ingredients have more impact on broiler production. in terms of feeding, the average feed consumption is 2-2.5 kg per bird over a 30-day average rearing period, while the amount is 3-3.9 kg per bird for backyard chickens over a period of 90 days. another challenge in improving local production is technology. changes in production practices have increased productivity and capacity. for chicken production, this is represented by the broiler inventory and the native breed inventory, which describe the method and scale of production. based on field and gleam data, the average mortality rate is between 2-5% for broiler production, with a marketable average live weight of 1.6-1.2 kg and a carcass weight of 1.2-1.0 kg over 30 days. on the other hand, the native breed inventory characterizes the typical backyard farms. a backyard chicken would achieve a live weight of 1.3-0.9 kg and a carcass weight of 1-0.70 kg over a period of 90 days [18]. an extensive review of the related literature in the philippines showed that no concrete studies have been done regarding meat production and consumption from a long-term sufficiency or sustainability perspective. this paper aims to answer the questions “how do we evaluate the long-term sufficiency?” and “what are the factors that affect the long-term sufficiency of the philippine chicken meat sector?”. with those in mind, the key objectives are: to establish a dynamic model of the impact of increasing consumption on local production, to simulate consumption and production projections up to 2050, and to evaluate the forecasts and their implications for sufficiency.
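the stock-and-flow logic at the heart of such a dynamic model can be illustrated with a minimal euler-integrated sketch; the growth rates below are hypothetical placeholders, not the calibrated values of the study's model:

```python
# minimal stock-and-flow sketch of the sufficiency dynamic:
# two stocks (production, consumption) each grown by a constant fractional
# inflow, integrated year by year (Euler steps).
# growth rates here are hypothetical placeholders, NOT calibrated values.
def simulate(prod0, cons0, prod_growth, cons_growth, years):
    prod, cons = prod0, cons0
    trajectory = []
    for _ in range(years):
        prod += prod * prod_growth           # inflow to the production stock
        cons += cons * cons_growth           # inflow to the consumption stock
        trajectory.append(100 * prod / cons)  # sufficiency level, %
    return trajectory

# consumption growing faster than production steadily erodes sufficiency
ssl = simulate(prod0=87.0, cons0=100.0,
               prod_growth=0.020, cons_growth=0.032, years=35)
print(f"year 1: {ssl[0]:.1f}%  ->  year 35: {ssl[-1]:.1f}%")
```

tools like stella automate exactly this kind of integration over the full set of interlinked stocks, flows, and auxiliary variables.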
the results of this paper can evaluate the model in terms of long-term sufficiency, sustainability, and stability to satisfy future local consumption in the philippine poultry sector. this can be used as a reference for planning, identifying opportunities, and conducting relevant research to improve chicken meat production in view of achieving sdg 2 by 2050, the timeframe to which the sdgs and the paris agreement refer. the model can help agricultural decision-makers and government agencies to come up with better programs to promote the local agricultural economy and to provide better policies and programs to address issues related to food security. iii. methodology a. materials statistical data were taken from the philippine statistics office, the official web-based system for food and agricultural statistical information in the philippines [17], and from the global meat consumption per country by the oecd [1]. production-related data were taken from the related literature and from actual fieldwork and experiments conducted by the authors. regarding software, vensim, a simulation software for causal loop diagrams (cld) [25], and stella (systems thinking for education and research), for stock and flow diagrams (sfd) [26], were utilized. b. methods the sd model in this study was primarily anchored on the methodology presented in [7], which includes a three-step process: (a) formulation of the dynamic hypothesis, (b) implementation of the simulation, and (c) testing and validation. this was further expounded with specific steps, as presented in figure 1, which serves as the framework of this study. this study is limited to and focused on the forecasting of the related variables and their implications by 2050. no policies or proposed action plans will be integrated into the model to demonstrate other functionalities.
to ensure a high level of accuracy of the model, validation was done with the mean absolute percentage error (mape) [27, 28], a simple yet reliable statistical measure of forecast accuracy and model error, as demonstrated in [29-31]. forecasting with a mape of less than 10% is considered highly accurate, 10-20% good, 20-50% reasonable, while forecasting with a mape greater than 50% is considered inaccurate [32]. validating the model was an iterative stage until the desired mape was achieved. fig. 1. framework of system modeling and simulation in forecasting long-term sufficiency iv. results and discussion a. dynamic hypothesis the analysis was centered on the factors that influence local production and consumption. production is basically driven by the technology or processes applied in the two methods of production, broiler and backyard, as quantified in their respective inventories. on the other hand, consumption is driven by the population and the per capita consumption. the dynamic hypothesis is illustrated in the cld and further broken down into the sfd. the timeframe considered is up to 2050, aligned to the timeframe of the sdgs and the paris agreement. 1) causal loop diagram (cld) the cld is the visual representation of how the variables in a system interact and interrelate. auxiliary variables were included in the cld; these were simply calculations based on the stocks and flows, with some discrete-event and agent-based modeling capabilities. to complete the cld, the polarity links among the factors were rationalized as either positive (+) or negative (-). this signified the influence of one variable on the other relevant variables. in figure 2 the cld of chicken meat sufficiency is shown.
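the mape validation criterion described above can be sketched as follows; the accuracy bands are those of [32], while the actual/forecast values are placeholders, not the study's series:

```python
# MAPE (mean absolute percentage error) with the accuracy bands of [32];
# the actual/forecast values below are placeholders, not the study's data
def mape(actual, forecast):
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def accuracy_band(m):
    if m < 10: return "highly accurate"
    if m < 20: return "good"
    if m < 50: return "reasonable"
    return "inaccurate"

actual   = [100.0, 110.0, 121.0, 133.0]
forecast = [ 95.0, 112.0, 118.0, 140.0]
m = mape(actual, forecast)
print(f"mape = {m:.1f}% -> {accuracy_band(m)}")
```

by this criterion, the study's reported 8.5% (consumption) and 2.8% (production) both fall in the "highly accurate" band.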
the diagram was anchored on the self-sufficiency level (ssl), the state when local production satisfies local consumption. on the demand side, local consumption is driven by population and per capita chicken consumption, while on the supply side, local production is driven by both broiler and backyard production, which are influenced by their respective yields and inventories. surplus is included in the diagram as the determinant for exportation should there be excesses and for importation should there be deficits in the supply chain.

2) stock and flow diagram (sfd)

after the cld, model building was done for the sfd. to ensure that the stocks and flows were defined accordingly, units were checked and equations were specified. there were 5 stocks identified, namely population, chicken per capita consumption, chicken meat (cm), broiler inventory, and backyard chicken inventory, as shown in figure 3. the core stock in the system is cm, which is influenced by its inflow converter cm supply and its outflow converter cm utilization. a projection on population growth is available. for the other 3 growth rates, projections were based on 20-year historical performance data (1998-2017). included in the sfd are auxiliary variables on soybean meal (sbm) usage and maize usage, which are relevant in evaluating the whole context of sufficiency given that they are imported production inputs.

fig. 2. causal loop diagram of chicken meat production

b. simulation

to proceed with forecasting, the 4 relevant growth rate data were used as inputs in stella. with the equations set in the software, the values of the various variables were generated from the equations in the sfd indicated in figure 4.

c. validation

to validate the model, the actual consumption and production from 2008-2017 were compared to their equivalent simulated data, as shown in table i. the mape is 8.5% for consumption and 2.8% for production, indicating that the whole model can be considered acceptable with a high level of accuracy.

d.
forecasting

with the validated model, forecasts can then be generated from the sfd model. figure 5 presents the supply utilization accounts summary of actual and projected variables from 2010 to 2050. it indicates that the sufficiency level will drop to 60% by 2050 if no interventions take place.

e. implications to long-term sufficiency

the model showed that consumption will almost triple. it can also be seen that there is no room for exportation, since local production will be needed to serve the country's own requirements. moreover, 40% of the requirements will be satisfied from chicken meat importations. this can be a red flag, knowing that the chicken price has been rapidly increasing in the global market.

fig. 3. stock flow diagram of chicken meat production

fig. 4. simulation equations of the stock and flow diagram

on the production side, broiler production is the driver, currently contributing 75% of the volume, and it will continue to rise to 84% by 2050, as shown in figure 6. in figure 7, broiler inventory versus backyard chicken inventory is shown. it can be seen that by 2025, broiler will outpace backyard production. further analysis of broiler chicken production and feed usage may be necessary, since the country cannot claim absolute sufficiency given that the main feed ingredients are imported, especially sbm and maize. with the projections, the importations of these two ingredients are expected to rise exponentially, as shown in figure 8. this can be another red flag for instability in terms of resource management, operational efficiency, and cost control on imported inputs, especially for broiler chicken.

fig. 5.
supply utilization accounts (2010-2050)

year          2010    2015    2020    2025    2030    2035    2040    2045    2050
production    1,041   1,278   1,316   1,473   1,630   1,787   1,944   2,100   2,257
imports       98.0    190.5   206.7   382.9   583.8   804.0   1,036   1,275   1,515
exports       5.5     3.7
consumption   1,134   1,465   1,523   1,856   2,214   2,591   2,981   3,376   3,772
% ssl         92%     87%     86%     79%     74%     69%     65%     62%     60%
(quantities in thousand metric tons)

it should be noted that the same feed raw materials are used for food and biodiesel, further increasing the food-feed-fuel competition in terms of utilization [33]. to resolve this, the government should already be looking for alternative and locally sourced feed inputs. on the other hand, the backyard chicken volume contribution will drop from 25% to 16%, given its slow growth. perhaps the industry must focus on identifying the areas for improvement in its prevailing practices, methods, and systems and address them to increase productivity. it is also worth noting that the backyard chicken inventory is declining, which shows that backyard production needs attention, with its low productivity in both count and yield.

table i.
validation of simulation model

year    local cm consumption (actual / simulated / error)    local cm production (actual / simulated / error)
2008    775.0 / 772.5 / 2.5          986.6 / 981.8 / 4.8
2009    798.5 / 812.0 / 13.5         1,001.7 / 1,014.1 / 12.4
2010    864.9 / 852.9 / 12.1         1,041.9 / 1,045.8 / 3.9
2011    921.8 / 895.2 / 26.6         1,089.0 / 1,077.5 / 11.5
2012    1,013.5 / 939.6 / 73.9       1,139.2 / 1,110.0 / 29.2
2013    1,058.9 / 986.1 / 72.8       1,197.4 / 1,143.2 / 54.2
2014    1,143.6 / 1,034.9 / 108.7    1,210.3 / 1,177.1 / 33.2
2015    1,211.0 / 1,086.0 / 125.0    1,278.8 / 1,211.8 / 67.1
2016    1,299.9 / 1,139.6 / 160.3    1,289.4 / 1,247.2 / 42.2
2017    1,498.7 / 1,195.2 / 303.5    1,344.3 / 1,282.8 / 61.5
mape    8.5%                         2.8%
* in thousand metric tons (tmt)

the increasing consumption is driven by population and per capita consumption. interventions are more feasible on the per capita growth, since it can be influenced by changes in eating patterns and lifestyle. in developed economies, awareness of sustainability issues in meat consumption is increasing, such as reduction through the practice of meatless days [34] and the consumption of plant-based meat substitutes and alternative protein sources [35]. perhaps the industry can also embark on such awareness efforts, so that consumers can have the chance to make sustainable choices. with the model in place, decision-makers can use the above implications in integrating proper interventions. other actions in the form of technology improvements and consumer awareness can be conceptualized and tested in the model, which can project the relevant variables and check the level of sufficiency at specific timelines. hence, in the context of planning, the impact of short-term programs and long-term strategies up to 2050 can be evaluated in addressing food security.

fig. 6. broiler and backyard chicken production (2020-2050)

fig. 7. broiler and backyard chicken inventory (2020-2050)

fig. 8. sbm and maize consumption (2020-2050)

v. conclusion

long-term sufficiency in this study was evaluated by system modeling and computer-based simulation forecasting.
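as a cross-check of the reported sufficiency trajectory, the % ssl row of the supply utilization accounts (figure 5) follows directly from its production and consumption rows, since ssl is the share of local consumption met by local production; a small illustrative sketch:

```python
# production and consumption rows of figure 5, in thousand metric tons
production  = [1041, 1278, 1316, 1473, 1630, 1787, 1944, 2100, 2257]
consumption = [1134, 1465, 1523, 1856, 2214, 2591, 2981, 3376, 3772]

def ssl_percent(prod, cons):
    """self-sufficiency level: percent of local consumption
    covered by local production, rounded to whole percent."""
    return [round(100.0 * p / c) for p, c in zip(prod, cons)]

ssl = ssl_percent(production, consumption)
# -> [92, 87, 86, 79, 74, 69, 65, 62, 60], matching the % ssl row
```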
the simulated model projected a sufficiency drop to 60% by 2050 if no interventions take place in production and consumption. with the relevant variables in place in the model, this can be a sound basis for evaluating future scenarios and incorporating possible solutions. in conclusion, the developed dynamic model could be used to facilitate informed decision-making. initially, the process of establishing the model may seem complex in comparison with traditional forecasting techniques. however, the rigor and the results can be more significant, especially for dynamic systems in the food and agriculture sectors. this study can encourage a paradigm shift in understanding dynamic factors and give decision-makers a better way of planning and addressing long-term sufficiency and food security concerns.

acknowledgment

the authors are grateful for the funding support given by engineering research and development for technology (erdt). due appreciation is also given to benjamin rubin for assisting in data processing.

references

[1] oecd, “meat consumption”, available at: http://www.oecd-ilibrary.org
[2] un, “sustainable development goals”, available at: http://www.un.org/sustainabledevelopment
[3] r. kurt, s. karayilmazlar, y. cabuk, “important non-wood forest products in turkey: an econometric analysis”, engineering, technology & applied science research, vol. 6, no. 6, pp. 1245–1248, 2016
[4] v. h. m. nguyen, k. t. p. nguyen, c. v. vo, b. t. t. phan, “forecast on 2030 vietnam electricity consumption”, engineering, technology & applied science research, vol. 8, no. 3, pp. 2869–2874, 2018
[5] b. v. b. prabhu, m. dakshayini, “performance analysis of the regression and time series predictive models using parallel implementation for agricultural data”, procedia computer science, vol. 132, pp. 198–207, 2018
[6] k. m. m. e. dash, o. m. o. ramadan, w. m. m. a. youssef, “duration prediction models for construction projects in middle east”, engineering, technology & applied science research, vol. 9, no. 2, pp. 3924–3932, 2019
[7] j. d. sterman, “system dynamics modeling: tools for learning in a complex world”, california management review, vol. 43, no. 4, pp. 8–25, 2002
[8] m. k. saraji, a. m. sharifabadi, “application of system dynamics in forecasting: a systematic review”, international journal of management, accounting and economics, vol. 4, no. 12, pp. 1192–1206, 2017
[9] w. srijariya, a. riewpaiboon, u. chaikledkaew, “system dynamic modeling: an alternative method for budgeting”, value in health, vol. 11, pp. s115–s123, 2008
[10] b. l. turner, h. m. menendez, r. gates, l. o. tedeschi, a. s. atzori, “system dynamics modeling for agricultural and natural resource management issues: review of some past cases and forecasting future roles”, resources, vol. 5, no. 4, article id 40, 2016
[11] j. p. walters, d. w. archer, g. f. sassenrath, j. r. hendrickson, j. d. hanson, j. m. halloran, p. vadas, v. j.
alarcon, “exploring agricultural production systems and their fundamental components with system dynamics modelling”, ecological modelling, vol. 333, pp. 51–65, 2016
[12] e. suryani, r. a. hendrawan, t. mulyono, l. p. dewi, “system dynamics model to support rice production and distribution for food security”, jurnal teknologi, vol. 68, no. 3, pp. 45–51, 2014
[13] a. sachan, b. s. sahay, d. sharma, “developing indian grain supply chain cost model: a system dynamics approach”, international journal of productivity and performance management, vol. 54, no. 3, pp. 187–205, 2005
[14] f. h. a. rahim, n. n. hawari, n. z. abidin, “supply and demand of rice in malaysia: a system dynamics approach”, international journal of supply chain management, vol. 6, no. 4, pp. 234–240, 2017
[15] c. sampedro, f. pizzitutti, d. quiroga, s. j. walsh, c. f. mena, “food supply system dynamics in the galapagos islands: agriculture, livestock and imports”, renewable agriculture and food systems, pp. 1–15, 2018
[16] n. tsolakis, j. s. srai, “a system dynamics approach to food security through smallholder farming in the uk”, chemical engineering transactions, vol. 57, pp. 2023–2028, 2017
[17] psa, “countrystat philippines”, available at: http://openstat.psa.gov.ph
[18] j. a. sison, feed use estimation: data, methodology and gaps – the case of the philippines, amis, 2014
[19] nielsen, “what’s in our food and on our mind, ingredient and dining-out”, global ingredient and out-of-home dining trends report, 2016
[20] psa, agricultural indicators system: food sufficiency and security, philippine statistics authority, 2019
[21] psa, chicken industry performance report, philippine statistics authority, 2017
[22] w. a. dozier, m. t. kidd, a. corzo, “dietary amino acid responses of broiler chickens”, journal of applied poultry research, vol. 17, no. 1, pp.
157–167, 2008
[23] indexmundi, “philippines soybean meal imports by year”, available at: https://www.indexmundi.com/agriculture/?country=ph&commodity=soybean-meal&graph=imports
[24] fao, “gleam resources”, available at: http://www.fao.org/gleam/resources/en
[25] vensim, available at: https://vensim.com/vensim-software
[26] stella, available at: https://www.iseesystems.com/resources/help/v1-2
[27] u. khair, h. fahmi, s. a. hakim, r. rahim, “forecasting error calculation with mean absolute deviation and mean absolute percentage error”, journal of physics: conference series, vol. 930, no. 1, article id 012002, 2017
[28] r. j. hyndman, a. b. koehler, “another look at measures of forecast accuracy”, international journal of forecasting, vol. 22, no. 4, pp. 679–688, 2006
[29] n. p. barbosa, e. s. christo, k. a. costa, “demand forecasting for production planning in a food company”, arpn journal of engineering and applied sciences, vol. 10, no. 16, pp. 7137–7141, 2015
[30] y. chen, q. i. wang, s. fay, “the role of marketing in social media: how online consumer reviews evolve”, journal of interactive marketing, vol. 25, no. 2, pp. 85–94, 2011
[31] j. l. r. renteria, t. d. e. huerta, f. s. t. pacheco, j. l. g. perez, r. l. dorantes, “an electrical energy consumption monitoring and forecasting system”, engineering, technology & applied science research, vol. 6, no. 5, pp. 1130–1132, 2016
[32] m. gilliland, the business forecasting deal: exposing myths, eliminating bad practices, providing practical solutions, john wiley & sons, 2010
[33] h. p. s. makkar, “animal nutrition in a 360-degree view and a framework for future r&d work: towards sustainable livestock production”, animal production science, vol. 56, no. 10, pp. 1561–1568, 2016
[34] j. d. boer, h. schosler, h. aiking, ““meatless days” or “less but better”? exploring strategies to adapt western meat consumption to health and sustainability challenges”, appetite, vol. 76, pp. 120–128, 2014
[35] c. apostolidis, f. mcleay, “should we stop meating like this?
reducing meat consumption through substitution”, food policy, vol. 65, pp. 74–89, 2016

engineering, technology & applied science research vol. 10, no. 4, 2020, 5953-5957 www.etasr.com quoc: accurate magnetic shell approximations with magnetostatic finite element formulations by …

accurate magnetic shell approximations with magnetostatic finite element formulations by a subdomain approach

vuong dang quoc
department of electrical and electronic equipment, school of electrical engineering, hanoi university of science and technology, hanoi, vietnam
vuong.dangquoc@hust.edu.vn

abstract—this paper presents a subdomain approach with h-conformal magnetostatic finite element formulations for treating the errors of the magnetic shell approximation, which replaces volume thin regions by surfaces with interface conditions. these approximations neglect the curvature effects in the vicinity of corners and edges. the process from the surface-to-volume correction problem is presented as a sequence of several subdomains, which can be composed into the full domain, including inductors and thin magnetic regions. each step of the process is performed separately on its own subdomain and submesh instead of solving the problem in the full domain. this reduces the matrix size and the computation time.

keywords-magnetostatic finite element formulation; magnetic scalar potential; magnetic field; magnetic shell; subproblem approach

i. introduction

the local fields in magnetic shells are approximated by a priori 1-d analytical distributions across the shell thicknesses [1, 2]. this means that the interior of volume thin regions is not meshed and is represented by surfaces with impedance-type interface conditions (ics) linked to the inner analytical distributions. this neglects the edges and corners of magnetic shells, with errors increasing with thickness.
in order to overcome this disadvantage, the sub-problem method (spm) for the magnetodynamic problem with dual formulations has been proposed for one-way coupling [3-10]. in this development, a subdomain technique based on the spm is extended to the h-conformal magnetostatic finite element formulation in order to improve the local fields (magnetic scalar potential, magnetic flux density and magnetic field) appearing around the edges and corners of magnetic shells. the idea of the method is to perform the subdomain solution in three steps (figure 1):

• step 1: a lower subdomain containing the stranded inductors is first considered on a simplified mesh without any magnetic shells.
• step 2: a shell with a very coarse mesh that does not contain the stranded inductors anymore is then added.
• step 3: a volume correction replacing the magnetic shell finite element (fe) by an actual thin region is introduced to improve the shell inaccuracies.

fig. 1. division of a full domain into three steps.

the relation between the steps is constrained by volume sources (vss), expressing changes of the material properties, or surface sources (sss), expressing changes of ics. in each step, the problem is independently solved in an individual sub-mesh and its surrounding, without depending on the other meshes, which allows distinct mesh refinements. the method is applied to a practical problem.

ii. magnetostatic problems

a canonical magnetostatic problem q, presented at step q, is solved in a domain $\Omega_q$, with boundary $\partial\Omega_q = \Gamma_q = \Gamma_{h,q} \cup \Gamma_{b,q}$. maxwell's equations, the constitutive laws and the boundary conditions (bcs) of the problem q give [3-11]:

$\mathrm{curl}\, h_q = j_q, \quad \mathrm{div}\, b_q = 0$ (1a-b)

$b_q = \mu_q h_q + b_{s,q}$ (2)

corresponding author: vuong dang quoc
$n \cdot b_q|_{\Gamma_{b,q}} = 0, \quad [n \cdot b_q]_{\gamma_q} = b_{f,q}$ (3a-b)

where $h_q$ is the magnetic field (a/m), $b_q$ is the magnetic flux density (t), $j_q$ is the electric current density (a/m²), $\mu_q$ is the magnetic permeability (h/m) and $n$ is the unit normal exterior to $\Omega_q$. the source field $b_{s,q}$ in (2) is a vs that accounts for volume changes of permeability from the current problem q to the next problem p, i.e.:

$b_{s,p} = (\mu_p - \mu_q)\, h_q$ (4)

the notation $[\cdot]_{\gamma_q} = |_{\gamma_q^+} - |_{\gamma_q^-}$ is the discontinuity of a quantity across the negative and positive sides of any interface $\gamma_q$ in $\Omega_q$. the field $b_{f,q}$ is a ss between subdomains [3-10]. in addition, the magnetic field $h_q$ in (1a) is split in two parts $h_{s,q}$ and $h_{r,q}$, i.e. $h_q = h_{s,q} + h_{r,q}$, where $h_{r,q}$ is the reaction field and $h_{s,q}$ is a source magnetic field due to the imposed current density $j_{s,q}$ ($\mathrm{curl}\, h_{s,q} = j_{s,q}$).

iii. sequence of fe weak formulations

a. weak formulation for the inductor model, step 1 (sp q)

the magnetostatic weak formulation ($h_s$ - $\varphi$) for step 1 (sp q) is obtained via the magnetic gauss law (1b), i.e. [1, 2]:

$-(\mu_q h_{s,q}, \mathrm{grad}\,\varphi_q')_{\Omega_q} + (\mu_q\, \mathrm{grad}\,\varphi_q, \mathrm{grad}\,\varphi_q')_{\Omega_q} + \langle n \cdot b_q, \varphi_q' \rangle_{\Gamma_{b,q}\setminus\gamma_q} + \langle -[n \cdot b_q]_{\gamma_q}, \varphi_q' \rangle_{\gamma_q} = 0, \quad \forall \varphi_q' \in F_q^0(\Omega_q)$ (5)

where $F_q^0(\Omega_q)$ is a function space defined on $\Omega_q$ including the basis functions for $\varphi_q$ as well as for the test function $\varphi_q'$. the notations $(\cdot,\cdot)_{\Omega_q}$ and $\langle\cdot,\cdot\rangle_{\Gamma_q}$ are respectively the volume integral in $\Omega_q$ and the surface integral on $\Gamma_q$ of the product of their field arguments. the surface term $\langle n \cdot b_q, \varphi_q' \rangle_{\Gamma_{b,q}\setminus\gamma_q}$ in (5) is considered as a natural bc of type (3a), usually zero.

b. weak formulation for the magnetic shell model, step 2 (sp p)

the shell model (sp p) is defined via the last term in (5).
the weak form of sp p is [1, 2]:

$(\mu_p\, \mathrm{grad}\,\varphi_p, \mathrm{grad}\,\varphi_p')_{\Omega_p} + \langle n \cdot b_p, \varphi_p' \rangle_{\Gamma_{b,p}\setminus\gamma_p} + \langle [n \cdot b_p]_{\gamma_p}, \varphi_p' \rangle_{\gamma_p} = 0, \quad \forall \varphi_p' \in F_p^0(\Omega_p)$ (6)

the trace discontinuity term $\langle [n \cdot b_p]_{\gamma_p}, \varphi_p' \rangle_{\gamma_p}$ in (6) is given as [4]:

$\langle [n \cdot b_p]_{\gamma_p}, \varphi_p' \rangle_{\gamma_p} = \langle [n \cdot b_p]_{\gamma_p}, \varphi_{c,p}' \rangle_{\gamma_p} + \langle n \cdot b_p|_{\gamma_p^+}, \varphi_{d,p}' \rangle_{\gamma_p^+}$ (7)

where the test function is split into continuous and discontinuous parts, $\varphi_p' = \varphi_{c,p}' + \varphi_{d,p}'$. the term $\langle [n \cdot b_p]_{\gamma_p}, \varphi_{c,p}' \rangle_{\gamma_p}$ in (7) is obtained from the shell approximation of thickness $d_p$ [1, 2]:

$\langle [n \cdot b_p]_{\gamma_p}, \varphi_{c,p}' \rangle_{\gamma_p} = -\langle \mu_p d_p h_{s,p}, \mathrm{grad}\,\varphi_p' \rangle_{\gamma_p} + \langle \mu_p d_p\, \mathrm{grad}\,\varphi_p, \mathrm{grad}\,\varphi_p' \rangle_{\gamma_p}$ (8)

the remaining term $\langle n \cdot b_p|_{\gamma_p^+}, \varphi_{d,p}' \rangle_{\gamma_p^+}$ in (7) is weakly expressed via the surface source integral term, i.e.:

$\langle n \cdot b_p|_{\gamma_p^+}, \varphi_{d,p}' \rangle_{\gamma_p^+} = -\langle n \cdot b_q|_{\gamma_p^+}, \varphi_{d,p}' \rangle_{\gamma_p^+} = -(\mu_q\, \mathrm{grad}\,\varphi_q, \mathrm{grad}\,\varphi_{d,p}')_{\Omega_p^+} + (\mu_q h_{s,q}, \mathrm{grad}\,\varphi_{d,p}')_{\Omega_p^+} = -b_{f,q}$ (9)

the volume integrals in (9) are limited to a single layer of fes on the positive side $\Omega_p^+$ touching $\gamma_p^+$ [4-10]. by substituting (8) and (9) into (6), the weak form of sp p is rewritten as:

$(\mu_p\, \mathrm{grad}\,\varphi_p, \mathrm{grad}\,\varphi_p')_{\Omega_p} - \langle \mu_p d_p h_{s,p}, \mathrm{grad}\,\varphi_p' \rangle_{\gamma_p} + \langle \mu_p d_p\, \mathrm{grad}\,\varphi_p, \mathrm{grad}\,\varphi_p' \rangle_{\gamma_p} - (\mu_q\, \mathrm{grad}\,\varphi_q, \mathrm{grad}\,\varphi_{d,p}')_{\Omega_p^+} + (\mu_q h_{s,q}, \mathrm{grad}\,\varphi_{d,p}')_{\Omega_p^+} = 0, \quad \forall \varphi_p' \in F_p^0(\Omega_p)$ (10)

at the discrete level, the source fields $\varphi_q$ and $h_{s,q}$, initially given in the mesh of sp q, have to be transferred to the mesh of sp p via a projection method [15-17].

c. weak formulation for the volume correction, step 3 (sp k)

the weak form of sp k is finally established via a vs given by (2):

$(\mu_k\, \mathrm{grad}\,\varphi_k, \mathrm{grad}\,\varphi_k')_{\Omega_k} - ((\mu_k - \mu_p)\, \mathrm{grad}\,\varphi_p, \mathrm{grad}\,\varphi_k')_{\Omega_k} + ((\mu_k - \mu_p)\, h_{s,p}, \mathrm{grad}\,\varphi_k')_{\Omega_k} + \langle n \cdot b_k, \varphi_k' \rangle_{\Gamma_{b,k}\setminus\gamma_k} + \langle [n \cdot b_k]_{\gamma_k}, \varphi_k' \rangle_{\gamma_k} = 0, \quad \forall \varphi_k' \in F_k^0(\Omega_k)$ (11)

where the second and third terms together form the volume source $((\mu_k - \mu_p)(h_{s,p} - \mathrm{grad}\,\varphi_p), \mathrm{grad}\,\varphi_k')_{\Omega_k}$ defined by (2) and (4), and the trace discontinuity term is obtained from the previous step:

$\langle [n \cdot b_k]_{\gamma_k}, \varphi_k' \rangle_{\gamma_k} = -\langle [n \cdot b_p]_{\gamma_k}, \varphi_k' \rangle_{\gamma_k}$ (12)

d. transformation of solutions between sub-meshes

as presented above, the source fields $\varphi_q$ and $h_{s,q}$ obtained from the meshes of the previous subproblems (e.g. sp q) are transferred to the mesh of sp p, i.e.
[15-17]:

$(h_{s,p}^{proj}, h')_{\Omega_{s,p}} = (h_{s,q}, h')_{\Omega_{s,p}}, \quad \forall h' \in F_p^1(\Omega_{s,p})$ (13)

where $F_p^1(\Omega_{s,p})$ is a curl-conform function space for the projected source $h_{s,p}^{proj}$ (the projection of $h_{s,q}$ on the mesh of sp p) and the test function $h'$ defined on $\Omega_{s,p}$. in the same way, the gradient of the magnetic scalar potential $\varphi_q$ can be projected from the mesh of sp q to the mesh of sp p, i.e. [9]:

$(\mathrm{grad}\,\varphi_{q,p}^{proj}, \mathrm{grad}\,\varphi')_{\Omega_{s,p}} = (\mathrm{grad}\,\varphi_q, \mathrm{grad}\,\varphi')_{\Omega_{s,p}}, \quad \forall \varphi' \in F_p^0(\Omega_{s,p})$ (14)

where $F_p^0(\Omega_{s,p})$ is the grad-conform function space for the projected source $\varphi_{q,p}^{proj}$ (the projection of $\varphi_q$ on the mesh of sp p) and the test function $\varphi'$ defined on $\Omega_{s,p}$.

iv. application test

the practical test is a shielding problem. it consists of a plate located in the middle of two stranded inductors carrying a magnetomotive force of 1000 ampere-turns (figure 2). the magnetic shields (screen up and screen down) cover the plate and the stranded inductors, with relative permeabilities $\mu_{r,plate} = 1$ and $\mu_{r,screen} = 200$. the test is performed in the 2-d case. following the sequence above, the test at hand is performed in three steps. the solutions for the magnetic scalar potential $\varphi$ of each subdomain are illustrated in figure 3. an initial problem sp q, including the stranded inductors alone, is solved in a subdomain without the shielding plate and the screens up and down (figure 3(a), $\varphi_q$).

fig. 2. geometry of the 2-d shielding problem (d=3÷7.5mm, lpl=2m, ls=2m+2d, hs=0.4m, hy=0.14m, cdx=0.8m, cdy=0.01m, cy=0.2m, cx=0.05m).

fig. 3. distribution of magnetic scalar potentials for (a) the stranded inductors alone, sp q ($\varphi_q$), (b) addition of the ts solution, sp p ($\varphi_p$), and (c) the volume correction, sp k ($\varphi_k$), for thickness d=5mm.

the shell approximation sp p, which does not include the stranded inductors anymore, is then added (figure 3(b), $\varphi_p$).
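the mesh-to-mesh transfer of (13)-(14) is a galerkin (l2) projection. as a toy 1-d analogue (not the actual getdp implementation), the sketch below projects a source field onto a piecewise-constant target "mesh": with such a basis the mass matrix is diagonal, so each projected coefficient reduces to the cell average of the source field.

```python
def l2_project_piecewise_constant(f, cells, quad_points=64):
    """galerkin (l2) projection of the function f onto piecewise-constant
    basis functions over the target cells [(a, b), ...]. with this basis
    the mass matrix is diagonal, so each coefficient is the cell average,
    evaluated here with a midpoint quadrature rule."""
    coeffs = []
    for a, b in cells:
        h = (b - a) / quad_points
        # midpoint rule for the integral of f over [a, b]
        integral = sum(f(a + (i + 0.5) * h) * h for i in range(quad_points))
        coeffs.append(integral / (b - a))
    return coeffs

# project f(x) = x onto two coarse cells of the target mesh
coarse = l2_project_piecewise_constant(lambda x: x, [(0.0, 0.5), (0.5, 1.0)])
# -> approximately [0.25, 0.75]
```

the real projections (13)-(14) use curl-conform and grad-conform bases, for which the mass matrix is no longer diagonal and a linear system must be solved, but the weak-equality principle is the same.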
the volume improvements covering the shielding plate and the screens up and down are finally introduced to overcome the shell approximations [1, 2], for d=5mm, $\mu_{r,plate} = 1$ and $\mu_{r,screen} = 200$ (figure 3(c), $\varphi_k$). in a similar way, the distribution of the magnetic flux density for each subdomain obtained in each step is shown in detail in figure 4. the sequence from step 1 (sp q) → step 2 (sp p) → step 3 (sp k) is pointed out from top to bottom, for thickness d=5mm.

fig. 4. distribution of magnetic flux densities $b = \mu (h_s - \mathrm{grad}\,\varphi)$ for (a) the stranded inductors alone, sp q, (b) addition of the shell model, sp p, and (c) the volume correction, sp k, for thickness d=5mm.

the significant errors on the magnetic flux densities of the shell approximation solution (sp p) along the plate are corrected by the volume correction (sp k), as indicated in figure 5. the error reaches approximately 35% near the middle of the plate, for d=5mm ($\mu_{r,plate} = 1$ and $\mu_{r,screen} = 200$). the volume solution is then checked to be similar to the reference solution computed with the traditional finite element method (fem) [12-14].

fig. 5. magnetic flux density of the shell solution, the volume correction and the reference solution along the plate, for d=5mm.

fig. 6. ts inaccuracy on the magnetic flux density along the shielding plate (a) before making a correction and (b) after the correction, for different values of d.

fig. 7. relative (improvement) correction of the magnetic flux density along the screen up for different values of d.

the relative inaccuracy on the magnetic flux densities before making corrections is presented in figure 6 for various thicknesses. the error can reach 90% at the end regions of the plate for d=7.5mm, and 75% for the smaller thickness d=3mm ($\mu_{r,plate} = 1$ and $\mu_{r,screen} = 200$).
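the inaccuracy percentages quoted in this section compare the shell or corrected solutions against the reference fem solution; a trivial helper of the kind one might use (illustrative only, not code from the paper):

```python
def relative_error_percent(approx, ref):
    """relative difference of an approximate field value with
    respect to a nonzero reference value, in percent."""
    return 100.0 * abs(approx - ref) / abs(ref)

# e.g. a shell-approximation value 35% above the reference
err = relative_error_percent(1.35, 1.0)  # ≈ 35.0
```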
the errors after the accurate local improvement with the volume correction sp k are less than 15% for d=7.5mm near the plate end, and 10% for d=3mm. it is worth noting that the error is less than 1% in the middle of the plate for both cases. the relative improvement (correction) of the ts magnetic flux along the screen up is presented in figure 7 for different screen thicknesses. it can reach up to 47% near the edge of the screen up for d=7.5mm, and it reduces to about 40% for d=5mm and 30% for d=3mm, with $\mu_{r,plate} = 1$ and $\mu_{r,screen} = 200$.

v. discussion and conclusion

in this research, a subdomain technique for coupling thin magnetic shells has been successfully developed with h-conformal magnetostatic finite element formulations for improving the errors on the magnetic scalar potential, the magnetic flux density, and the magnetic flux around the edges and corners appearing from the magnetic shell approximation [1, 2]. the obtained results of the method were found to be quite similar to the reference solution computed with the traditional fem [12-14]. the proposed technique has been successfully carried out with a three-step sequence. in the future, it could be extended to the case of multilayer ts with different characteristics. the source code of the method has been extended from the subproblem method code that was developed by the author with the help of patrick dular and christophe geuzaine at the department of electrical engineering and computer science, university of liege, belgium. it runs in the background of getdp and gmsh (http://getdp.info and http://gmsh.info) as open source code.

references

[1] c. geuzaine, p. dular, and w. legros, “dual formulations for the modeling of thin conducting magnetic shells,” compel - the international journal for computation and mathematics in electrical and electronic engineering, vol. 18, no. 3, pp. 385–398, jan. 1999, doi: 10.1108/03321649910274946.
[2] c. geuzaine, p. dular, and w.
legros, “dual formulations for the modeling of thin electromagnetic shells using edge elements,” ieee transactions on magnetics, vol. 36, no. 4, pp. 799–803, jul. 2000, doi: 10.1109/20.877566.
[3] p. dular and r. v. sabariego, “a perturbation method for computing field distortions due to conductive regions with h-conform magnetodynamic finite element formulations,” ieee transactions on magnetics, vol. 43, no. 4, pp. 1293–1296, apr. 2007, doi: 10.1109/tmag.2007.892401.
[4] p. dular, r. v. sabariego, c. geuzaine, m. v. ferreira da luz, p. kuo-peng, and l. krähenbühl, “finite element magnetic models via a coupling of subproblems of lower dimensions,” ieee transactions on magnetics, vol. 46, no. 8, pp. 2827–2830, aug. 2010, doi: 10.1109/tmag.2010.2044028.
[5] p. dular, v. q. dang, r. v. sabariego, l. krähenbühl, and c. geuzaine, “correction of thin shell finite element magnetic models via a subproblem method,” ieee transactions on magnetics, vol. 47, no. 5, pp. 1158–1161, may 2011, doi: 10.1109/tmag.2010.2076794.
[6] v. q. dang, p. dular, r. v. sabariego, l. krähenbühl, and c. geuzaine, “subproblem approach for thin shell dual finite element
formulations,” ieee transactions on magnetics, vol. 48, no. 2, pp. 407–410, feb. 2012, doi: 10.1109/tmag.2011.2176925.
[7] d. q. vuong, “modeling of magnetic fields and eddy current losses in electromagnetic screens by a subproblem method,” tnu journal of science and technology, vol. 194, no. 1, pp. 7–12, 2019.
[8] v. d. quoc and c. geuzaine, “using edge elements for modeling of 3-d magnetodynamic problem via a subproblem method,” science and technology development journal, vol. 23, pp. 439–445, feb. 2020, doi: 10.32508/stdj.v23i1.1718.
[9] d. q. vuong and n. d. quang, “coupling of local and global quantities by a subproblem finite element method – application to thin region models,” advances in science, technology and engineering systems journal (astesj), vol. 4, no. 2, pp. 40–44, 2019, doi: 10.25046/aj040206.
[10] p. dular, r. v. sabariego, m. v. ferreira da luz, p. kuo-peng, and l. krähenbühl, “perturbation finite element method for magnetic model refinement of air gaps and leakage fluxes,” ieee transactions on magnetics, vol. 45, no. 3, pp. 1400–1403, mar. 2009, doi: 10.1109/tmag.2009.2012643.
[11] v. d. quoc, “robust correction procedure for accurate thin shell models via a perturbation technique,” engineering, technology & applied science research, vol. 10, no. 3, pp. 5832–5836, jun. 2020.
[12] s. koroglu, p. sergeant, r. v. sabariego, v. q. dang, and m. d. wulf, “influence of contact resistance on shielding efficiency of shielding gutters for high-voltage cables,” iet electric power applications, vol. 5, no. 9, pp. 715–720, nov. 2011, doi: 10.1049/iet-epa.2011.0081.
[13] k. abubakri and h. veladi, “investigation of the behavior of steel shear walls using finite elements analysis,” engineering, technology & applied science research, vol. 6, no. 5, pp. 1155–1157, oct. 2016.
[14] g.
meunier, the finite element method for electromagnetic modeling. new york, ny, usa: john wiley & sons, ltd, 2010.
[15] c. geuzaine, b. meys, f. henrotte, p. dular, and w. legros, “a galerkin projection method for mixed finite elements,” ieee transactions on magnetics, vol. 35, no. 3, pp. 1438–1441, may 1999, doi: 10.1109/20.767236.
[16] p. e. farrell and j. r. maddison, “conservative interpolation between volume meshes by local galerkin projection,” computer methods in applied mechanics and engineering, vol. 200, no. 1, pp. 89–100, jan. 2011, doi: 10.1016/j.cma.2010.07.015.
[17] g. parent, p. dular, f. piriou, and a. abakar, “accurate projection method of source quantities in coupled finite-element problems,” ieee transactions on magnetics, vol. 45, no. 3, pp. 1132–1135, mar. 2009, doi: 10.1109/tmag.2009.2012652.

author’s profile

vuong dang quoc received his phd degree in 2013 from the faculty of applied sciences at the university of liege in belgium. after that he came back to the hanoi university of science and technology in september 2013, where he is currently working as the director of the training center of electrical engineering, school of electrical engineering, hanoi university of science and technology. dr. vuong dang quoc’s research domain encompasses the modeling of electromagnetic systems by a coupling of the subproblem method with application to thin shell models.

engineering, technology & applied science research vol. 2, no. 1, 2012, 155-161 www.etasr.com ateekh-ur-rehman and usmani: field engineers’ scheduling at oil rigs: a case study

field engineers’ scheduling at oil rigs: a case study

dr. ateekh-ur-rehman
department of industrial engineering, king saud university, riyadh, saudi arabia
arehman@ksu.edu.sa

yusuf usmani
department of industrial engineering, king saud university, riyadh, saudi arabia
yusmani@ksu.edu.sa

abstract— oil exploration and production operations face a number of challenges.
professional planners have to design solutions for various practical problems. however, the time consumed is often very extensive because of the large number of possible solutions, and the matter of choosing the best solution remains. the present paper investigates a problem related to one of the leading companies in the energy and chemical manufacturing sector of the oil and gas industry. the company's field engineers are expensive and valuable assets; therefore, an optimized roster is rather important. in the present paper, the objective is to design a field engineers' schedule which is both feasible and satisfies the various demands of the rigs, at minimum operational cost to the company. an efficient and quick optimization technique is presented to schedule the shifts of field engineers. keywords— field engineers (fe); oil rigs; scheduling; uneven demand i. introduction scheduling is the allocation of resources over time to perform a collection of tasks. a workforce schedule that ensures appropriate service and production levels is a key management function and has high practical importance. among the terms used, predictive scheduling describes the design of a schedule in advance, whereas reactive scheduling describes the adaptation of the schedule according to actual events [1]. however, due to the exponential size of the scheduling problem, it is extremely difficult to find good solutions to these highly constrained and complex problems [2]. further, providing the right people at the right time at the right cost, whilst achieving a high level of employee satisfaction, is another critical problem [2]. personnel scheduling, or rostering, is the process of constructing work timetables for the staff so that an organization can satisfy the demand for its goods or services [2-3]. the origin of staff scheduling and rostering can be traced back to edie's work on traffic delays at toll booths [4].
since then, staff scheduling and rostering methods have been applied to transportation systems such as airlines and railways, health care systems, emergency services such as the police, ambulance and fire brigade, call centers, and many other service organizations such as hotels, restaurants and retail stores. extensive model and algorithm development has therefore been carried out in the literature on crew scheduling and rostering in transportation systems, nurse scheduling in health care systems, and tour scheduling for various service systems. a focused review of applications of both personnel and vehicle scheduling can be found in [5], where scheduling objectives, constraints, and methodologies are surveyed for each application area. personnel scheduling has been a subject of investigation over the past 30 years, with a survey in every decade. an application of such problems is presented in the following sections. the present paper is concerned with the [r, n] days-off scheduling problem, where for a given cycle of n periods each field engineer is assigned a work-stretch of r consecutive periods and a break of n-r consecutive periods. the focus is to address the issue of uneven demand for field engineers at the oil rigs. the primary objective of the days-off scheduling problem is to minimize the workforce size, i.e., the total number of field engineers assigned. the paper is organized into seven sections. section ii presents a literature survey on days-off scheduling approaches. section iii presents the problem in detail. the procedures for determining the minimum workforce size under uneven demand and assigning workers (field engineers) to days-off patterns are presented in sections iv and v with an example. section vi presents the case application of the model in an oil rig company. finally, the last section concludes with the discussion. ii. literature survey scheduling problems and their treatments are very diverse.
the problem of designing a staffing schedule or roster (sometimes known as a tour) subject to a particular set of constraints was solved by williams [6]. early examples of the use of linear programming in scheduling problems were given by baker and magazine [7] and bartholdi et al. [8]. the solution of a problem with some similarities was also given by townsend [9]; an aspect of this problem was the existence of several different duties which had to be distributed fairly amongst crews, while the rules governing the pattern of days on and days off were simpler. in aircrew scheduling, as described by ryan [10], with rosters used as inputs, one aspect of the technique was allocating rosters to staff; the problem solved in his paper was the finding of a feasible schedule. chu [11] proposed goal programming models for an integrated problem of crew duties assignment for baggage services section staff. easton and rossin [12] used a heuristic approach to find improvements in the set of feasible schedules. the problem considered by hung [13-14] had the added complication of a non-homogeneous labor force (one kind of worker can replace another, but not vice versa). bechtold and brusco [15] exemplify the approach of finding the few schedules, out of the numerous feasible ones, which maximize certain desirable criteria. hojati and patil [16] considered the scheduling of heterogeneous part-time employees of service organizations. carrasco [17] described how a simple procedure, combining random and greedy strategies with heuristics, was successfully applied in assigning guard shifts to the physicians in a department. workforce scheduling problems are traditionally classified into three types, i.e. shift scheduling, days-off scheduling, and roster scheduling.
nanda and browne [18] provided a thorough survey of the literature on these three types. narasimhan [19] reflected on multiple worker types, giving each worker two days off per week. emmons and burns [20] considered a workforce composed of n worker types, but assumed a constant employee demand for all days of the week. the days-off scheduling model proposed by hung [21] was based on two assumptions: first, that x workers are required on weekdays and y workers on weekends, and second, that each worker must have a out of b weekends off. alfares [22] presented a single-shift optimum solution technique for 3-day workweeks. similarly, alfares [23] extended the expression for the minimum workforce size and included it as a constraint in the linear programming formulation. backtracking techniques were used by musliu et al. [24] to obtain cyclic schedules (cyclic assignments of shifts to employees) that are optimal for weekends off, long weekends off, and also in terms of the regularity of weekends off. some practical general scheduling applications can be found in the work of pinedo and chao [25], blazewicz [26], and pinedo [27]. the notion of skill is well known in the field of personnel scheduling [28]. néron [29] considered the resource-constrained scheduling problem where resources are staff members that have one or more skills. cai and li [30] considered the problem of scheduling staff with mixed skills. these papers tend to emphasize problems in which the same number of periods is worked each cycle, with some attention given to the important practical point of the periods worked each cycle being contiguous. iii. problem definition the problem on hand is concerned with one of the leading companies in the energy and chemical manufacturing sector of the oil and gas industry. the company's field engineers are typically among its most expensive as well as most valuable assets.
therefore, scheduling needs to focus on how to allocate field engineers to satisfy the forecasted requirement of field engineers on duty to cover the workload. questions that also need answering are: what is the best roster assigning field engineers to shifts, and which of them should cover a vacant shift? the objective is to solve the (r, n) days on-off assignment problem, considering a ten-week cycle. here, for a given cycle of 70 consecutive days, each field engineer is assigned one work stretch of 42 consecutive workdays (i.e. a break of 28 consecutive days off). the main objective is to reduce cost by optimizing the field engineers' schedule, i.e. the total number of field engineers assigned. in order to reduce the assignment cost, it is required to minimize the number of active days-off patterns. in the present paper, an added goal is to assign field engineers to different rigs according to the job requirement on each particular oil rig. it is also evident from the literature survey that there is a need to address issues related to the mostly uneven demand for field engineers at the oil rigs. under such circumstances, field engineers are assigned to different shift types, each involving a different pattern of "on" and "off" work periods, in such a way that the number of field engineers who are "on" in each period is sufficient to meet the demand in that period. the objective is to minimize the total cost of the shifts; if the cost of a field engineer on a shift is the same for all shift types, this amounts to minimizing the amount by which the capacity provided by the schedule exceeds the demand. the tool used is the microsoft excel solver. thus, focus is given to scheduling field engineers to satisfy the uneven demands of different oil rigs. the formulation of the even and uneven demand scheduling problem, the details of the solution and the analysis are presented in sections iv and v. iv.
field engineers' shifts scheduling formulation in practice, when there is uneven demand for field engineers at oil rigs, the field engineers are scheduled to different shifts, each involving a different pattern of "on" and "off" work periods, in such a way that the number of field engineers who are "on" in each period is sufficient to meet the demand in that period. consider the [r, n] days-off scheduling problem, where for a given cycle of n periods each field engineer is assigned a work-stretch of r consecutive periods. there are n schedule patterns; each field engineer is assigned to exactly one shift pattern, so that he/she has n-r consecutive periods off. for scheduling over m periods, let:

b_i = number of field engineers required on the oil rigs during period i
x_j = number of field engineers assigned to shift pattern j
e_i = number of field engineers called on emergency in period i
c_j = cost of a field engineer on normal duty assigned to shift pattern j
c_ei = cost of a field engineer called on emergency in period i

let the constraint matrix be $a = [a_{ij}]$, where row i corresponds to a period and column j to a shift pattern, so that $a_{ij} = 1$ if period i is an "on" period in shift pattern j, and $a_{ij} = 0$ otherwise. the problem can be formulated as:

$$\min z = \sum_{j=1}^{n} c_j x_j + \sum_{i=1}^{m} c_{ei} e_i$$

subject to

$$\sum_{j=1}^{n} a_{ij} x_j + e_i \geq b_i, \quad i = 1, 2, \dots, m$$

$$\sum_{j=1}^{n} a_{ij} x_j + e_i \leq \sum_{j=1}^{n} x_j, \quad i = 1, 2, \dots, m$$

$$x_j, e_i \geq 0 \text{ and integer}, \quad i = 1, 2, \dots, m, \quad j = 1, 2, \dots, n$$

the objective function minimizes the field engineer scheduling cost, including the cost of calling field engineers on emergency if required. one should have enough field engineers to operate the rigs in each period: the first constraint set ensures that a sufficient number of field engineers is provided to meet or exceed the minimum demand of field engineers at the oil rigs in each period. further, a field engineer should be called on emergency duty only when he is in an off period: the second constraint set limits the availability of field engineers on an emergency basis in every period. the third constraint set places non-negativity and integrality restrictions on the decision variables. the application of the above formulation is illustrated with an example hereunder. a. computational illustration consider a cycle of 10 weeks where field engineers are scheduled on 6 consecutive weeks on shift and 4 consecutive weeks off shift. excel solver is used to find a schedule that uses the fewest field engineers and meets all weekly demands of field engineers at the oil rigs. a cycle consists of 10 periods; in other words, there are 10 schedule patterns. for the computational illustration presented in table i, there are ten different schedule patterns (s1 to s10), each with 4 consecutive weeks off (represented in column b), over ten weeks (w1 to w10). as presented in table i, cells c2:c11 in column c are set equal to zero at the start; excel solver refers to these cells as the changing cells. these changing cells hold the number of field engineers required to meet demand.
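the constraint matrix and the two constraint sets above can be expressed directly in code. the following is a minimal sketch, assuming the cyclic pattern convention of table i (pattern j, 0-based, takes the n-r = 4 consecutive weeks starting at week j off); the `feasible` helper and its test values are illustrative, not part of the paper:

```python
# Build the constraint matrix a[i][j] for the [r, n] days-off problem
# (n = 10 weekly periods, r = 6 weeks on, n - r = 4 consecutive weeks off;
# assumption: pattern j takes weeks j .. j+3 off, cyclically, as in table i).
N, R = 10, 6

def build_a(n=N, r=R):
    a = [[1] * n for _ in range(n)]      # rows i = periods, columns j = patterns
    for j in range(n):
        for k in range(n - r):
            a[(j + k) % n][j] = 0        # off period -> a_ij = 0
    return a

def feasible(a, x, e, b):
    """Check both constraint sets of the formulation for candidate x, e."""
    n = len(x)
    total = sum(x)                       # whole workforce
    for i in range(len(b)):
        on = sum(a[i][j] * x[j] for j in range(n))
        if on + e[i] < b[i]:             # demand constraint violated
            return False
        if on + e[i] > total:            # emergencies must come from off-duty staff
            return False
    return True

a = build_a()
# every pattern has exactly r "on" periods, and every period is "on" in r patterns
assert all(sum(a[i][j] for i in range(N)) == R for j in range(N))
print(feasible(a, [2] * 10, [0] * 10, [12] * 10))   # True: coverage 12 each week
```

with 2 engineers on every pattern, each week is covered by 6 patterns, giving exactly 12 engineers on duty, which meets a flat demand of 12.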
as the objective is to minimize cost (the value in cell c15), it is calculated by multiplying the total demand covered over the 10-week period by the pay per field engineer per week. excel solver refers to this as the "target cell", and it corresponds to the objective function defined above. as presented in table i, in cells d12 to m12 the total field engineers allocated in a period are calculated by multiplying and adding the number of engineers in cells c2:c11 with cells d2:d11, giving the result in cell d12 (d12 = c2*d2 + … + c11*d11); the results of cells e12 through m12 are calculated in the same manner. the oil rig demands in these periods are entered in cells d13 to m13. using excel solver, the number of field engineers required to meet demand is optimized so as to minimize the cost for the ten periods, presented in cell c15 of table i. excel solver acknowledges that a solution was found that appears to be optimal, and the obtained results are presented in table i. from the results one can see the number of employees allocated to each schedule pattern.

table i. scheduling of field engineers (fe) for different shift patterns using excel solver (column b: weeks off; column c: fe; columns d-m: weeks w1-w10, 1 = on, 0 = off)

s1  | w1,w2,w3,w4  | 2 | 0 0 0 0 1 1 1 1 1 1
s2  | w2,w3,w4,w5  | 3 | 1 0 0 0 0 1 1 1 1 1
s3  | w3,w4,w5,w6  | 2 | 1 1 0 0 0 0 1 1 1 1
s4  | w4,w5,w6,w7  | 1 | 1 1 1 0 0 0 0 1 1 1
s5  | w5,w6,w7,w8  | 3 | 1 1 1 1 0 0 0 0 1 1
s6  | w6,w7,w8,w9  | 1 | 1 1 1 1 1 0 0 0 0 1
s7  | w7,w8,w9,w10 | 2 | 1 1 1 1 1 1 0 0 0 0
s8  | w1,w8,w9,w10 | 1 | 0 1 1 1 1 1 1 0 0 0
s9  | w1,w2,w9,w10 | 6 | 0 0 1 1 1 1 1 1 0 0
s10 | w1,w2,w3,w10 | 0 | 0 0 0 1 1 1 1 1 1 0
total fe scheduled (21)   | 12 10 14 13 12 14 14 14 11 12
weekly total demand of fe | 12 10 14 12 12 14 14 14 10 12
cost/fe/week: $3500; cost for 10 periods: $441000
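the solver solution of table i can be checked arithmetically. a small sketch, reading the weeks-off patterns and fe counts straight from the table (0-based week indices):

```python
# Verify the table i solution: weekly coverage, demand satisfaction, and cost.
# weeks_off[j] = 0-based weeks that pattern s(j+1) takes off; fe[j] from column c.
weeks_off = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6), (4, 5, 6, 7),
             (5, 6, 7, 8), (6, 7, 8, 9), (7, 8, 9, 0), (8, 9, 0, 1), (9, 0, 1, 2)]
fe = [2, 3, 2, 1, 3, 1, 2, 1, 6, 0]                    # engineers per pattern
demand = [12, 10, 14, 12, 12, 14, 14, 14, 10, 12]      # weekly demand row

# an engineer on pattern j covers week w whenever w is not among its off weeks
coverage = [sum(n for off, n in zip(weeks_off, fe) if w not in off)
            for w in range(10)]
cost = 3500 * sum(coverage)                            # $3500 per fe per week worked

print(coverage)        # [12, 10, 14, 13, 12, 14, 14, 14, 11, 12]
print(sum(fe), cost)   # 21 engineers, 441000
assert all(c >= d for c, d in zip(coverage, demand))   # demand met every week
```

the coverage row and the $441,000 total reproduce the last rows of table i, with slack of one engineer in weeks 4 and 9.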
the solution presented will not by itself satisfy the oil company's requirement to plan work shifts that approximately match the service requests. there is also the issue of how the solution affects the utilization of personnel time, group morale, and the time required to perform customer service. the subsequent section presents field engineers' scheduling using a heuristic model. v. a heuristic approach any oil company wants to plan work shifts that approximately match the requests for the service. it is also concerned about how the schedules affect the utilization of personnel time, group morale, and the time required to perform the required service. in the present section, the same scenario as in section iv, which relates 6-weeks-on and 4-weeks-off shift schedules to weekly numbers of available field engineers, is considered. a heuristic model is used to find a schedule that uses the smallest number of field engineers and meets all oil rigs' demands. the requirements of the model can be formulated in the following question: what is the number of required field engineers, and in what ways could the amount of slack in the work shift schedules be reduced? thus, the model uses a "work shift heuristic procedure" to develop shift schedules for field engineers. the heuristic rule is stated as: choose the block of consecutive periods with the least total number of field engineers required as the off block; in the case of ties, arbitrarily select one block and continue. this heuristic was originally developed by baker and magazine [7]. for the problem on hand, as presented in section iii, the company needs a field engineers' schedule that provides six weeks on duty and four weeks off and minimizes the amount of total slack capacity. for simplicity, let's consider one cycle of 10 weeks, where each week stands for seven working days.
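a minimal sketch of this work-shift heuristic follows, assuming a cyclic 10-week horizon, a 4-week consecutive off block chosen where remaining demand is lowest, and ties broken by the lowest starting week (the tie-breaking rule is our assumption, so individual schedules may differ from table ii even though the workforce size matches):

```python
# Work-shift heuristic sketch (after the rule attributed to Baker and Magazine).
# Assumptions: cyclic cycle of n weeks, r weeks on / n-r consecutive weeks off,
# off block = the n-r consecutive weeks with lowest total remaining demand,
# ties broken by the lowest starting week index.

def schedule_engineers(demand, r=6):
    n = len(demand)
    off_len = n - r
    remaining = list(demand)
    schedules = []                       # each entry: set of "on" week indices
    while any(d > 0 for d in remaining):
        # total remaining demand over each cyclic off window
        window = [sum(remaining[(s + k) % n] for k in range(off_len))
                  for s in range(n)]
        start = window.index(min(window))            # first minimal window
        off = {(start + k) % n for k in range(off_len)}
        on = set(range(n)) - off                     # the r working weeks
        schedules.append(on)
        for w in on:
            remaining[w] -= 1                        # may go negative (slack)
    return schedules, remaining

demand = [5, 5, 7, 6, 5, 6, 7, 6, 5, 6]              # weekly demand of table ii
schedules, remaining = schedule_engineers(demand)
coverage = [sum(1 for s in schedules if w in s) for w in range(10)]
slack = sum(c - d for c, d in zip(coverage, demand))
print(len(schedules), slack)                         # 10 engineers, total slack 2
```

with this tie-breaking, the sketch needs 10 field engineers and a total slack of 2 engineer-weeks, matching the totals of table ii.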
the numbers of field engineers required in each week are presented in table ii. the subsequent subsection presents the steps followed to illustrate the above-mentioned heuristic approach using an example. a. steps

step 1: find the block of four consecutive periods (weeks) with the lowest total requirements, excluding the periods of maximum requirement; these become the off weeks. periods 3 and 7 contain the maximum requirement (7), and periods 1, 2, 9 and 10 have the lowest total requirements. therefore, field engineer 1 is scheduled to work from period 3 to period 8 without a break, as presented in table ii.

step 2: if a tie occurs, choose one of the tied blocks or ask the field engineer to make a choice, and continue.

table ii. field engineers (fe) schedule using the heuristic approach

week                              | 1   2   3   4   5   6   7   8   9   10
total demand of field engineers   | 5   5   7   6   5   6   7   6   5   6
schedule of field engineer 1      | off off on  on  on  on  on  on  off off
net demand after 1st iteration    | 5   5   6   5   4   5   6   5   5   6
schedule of field engineer 2      | on  on  off off off off on  on  on  on
net demand after 2nd iteration    | 4   4   6   5   4   5   5   4   4   5
schedule of field engineer 3      | off off on  on  on  on  on  on  off off
net demand after 3rd iteration    | 4   3   5   4   3   4   4   3   4   5
schedule of field engineer 4      | on  on  on  on  off off off off on  on
net demand after 4th iteration    | 3   2   4   3   3   4   4   3   3   4
schedule of field engineer 5      | off off off off on  on  on  on  on  on
net demand after 5th iteration    | 3   2   4   3   2   3   3   2   2   3
schedule of field engineer 6      | off off on  on  on  on  on  on  off off
net demand after 6th iteration    | 3   2   3   2   1   2   2   1   2   3
schedule of field engineer 7      | on  on  on  on  off off off off on  on
net demand after 7th iteration    | 2   1   2   1   1   2   2   1   1   2
schedule of field engineer 8      | on  off off off off on  on  on  on  on
net demand after 8th iteration    | 1   1   2   1   1   1   1   0   0   1
schedule of field engineer 9      | on  on  on  on  on  off off off off on
net demand after 9th iteration    | 0   0   1   0   0   1   1   0   0   0
schedule of field engineer 10     | off on  on  on  on  on  on  off off off
total fe scheduled, c             | 5   5   7   7   6   6   7   6   5   6   (total 60)
total demand of fe, d             | 5   5   7   6   5   6   7   6   5   6   (total 58)
slack, c-d                        | 0   0   0   1   1   0   0   0   0   0   (total 2)

step 3: subtract the requirements satisfied by field engineer 1 from the net requirements for each period (week) the field engineer is to work, and repeat step 1. in continuation with the above step, the block of weeks 3 to 6 now has the lowest total remaining requirements; therefore, field engineer 2 takes weeks 3 to 6 off and is scheduled to work weeks 7 to 10 and 1 to 2, as presented in table ii.

step 4: repeat steps 1 through 3 until all the requirements have been satisfied. after field engineers 1, 2, and 3 have reduced the requirements, the block with the lowest requirements changes, and field engineer 4 is scheduled for periods 1 to 4, 9 and 10 (with weeks 5 to 8 off). steps 1 to 3 are repeated until the schedule for each individual field engineer is planned such that all demands are met. the details are presented in table ii. the application of the above methods and the discussion are presented in the following section. vi. field engineers' scheduling in an oil company: a case study the above-presented model of uneven demand has been applied successfully at an oil company. the ultimate objective was to optimize the number of field engineers required to meet the demand while minimizing the total cost. the relevant data were collected for 6 months, from january 2009 to june 2009. the number of field engineers available for the oil rigs was twenty, the number of jobs "running" during this period was four to seven, and two field engineers were required on each job. the schedule for each field engineer is 42 days on shift / 28 days off shift (a continuous 6 weeks on and 4 weeks off).
the details of the available data and the scheduling cost without using any optimization technique are presented in table iii. without an optimization tool, the field engineers were allotted across the shifts in an ad hoc manner. because of this approach, during periods of uneven demand the company had to call field engineers (particularly those on an off shift) back on duty on emergency call, with higher bonuses. field engineers were called on emergency duty twenty times during periods of high demand, whereas there were some instances where demand was too low and the company was holding field engineers on the rigs without work, an idle cost incurred by the company. as presented in table iii, the total cost incurred by the company without following an optimization technique for field engineer scheduling is found to be $1,169,000. after application of the proposed models, it is observed that there is no need to call field engineers on emergency duty. the results obtained after the application of the proposed model are presented in table iv: the total cost incurred by the company when following the optimization technique for field engineer scheduling is found to be $945,000. table iii. scheduling result and total cost before applying scheduling optimization techniques (january to june 2009)
week | no. of jobs | fe required | fe on shift | fe off shift | total fe | called on emergency | sitting idle
1  | 4 | 8  | 8  | 12 | 20 |   |
2  | 6 | 12 | 12 | 8  | 20 |   |
3  | 6 | 12 | 11 | 9  | 20 | 1 |
4  | 7 | 14 | 12 | 8  | 20 | 2 |
5  | 5 | 10 | 12 | 8  | 20 |   | 2
6  | 7 | 14 | 12 | 7  | 19 | 2 |
7  | 5 | 10 | 9  | 10 | 19 | 1 |
8  | 4 | 8  | 10 | 10 | 20 |   | 2
9  | 6 | 12 | 12 | 8  | 20 |   |
10 | 6 | 12 | 10 | 10 | 20 | 2 |
11 | 7 | 14 | 12 | 8  | 20 | 2 |
12 | 6 | 12 | 11 | 9  | 20 | 1 |
13 | 6 | 12 | 10 | 10 | 20 | 2 |
14 | 5 | 10 | 10 | 10 | 20 |   |
15 | 7 | 14 | 12 | 8  | 20 | 2 |
16 | 6 | 12 | 12 | 8  | 20 |   |
17 | 6 | 12 | 11 | 8  | 19 | 1 |
18 | 7 | 14 | 12 | 8  | 20 | 2 |
19 | 7 | 14 | 14 | 6  | 20 |   |
20 | 7 | 14 | 12 | 8  | 20 | 2 |
21 | 5 | 10 | 12 | 8  | 20 |   | 2
22 | 6 | 12 | 12 | 8  | 20 |   |
total (22 weeks) | 131 | 262 | 248 | 189 | 437 | 20 | 6

a. number of days per shift: 7
b. cost per field engineer per day (if on duty): $500
c. extra cost per day if field engineer called on emergency duty: $2000
d. total number of field engineers on normal shift: 248
e. total number of field engineers called on emergency duty: 20
f. total number of field engineers sitting idle on normal duty: 6
g. total cost of field engineers on normal duties = (a x b x d): $868000
h. total extra cost of field engineers called on emergency duty = (a x c x e): $280000
i. total cost for idle field engineers = (a x b x f): $21000
j. total cost for period = (g + h + i): $1169000

table iv. scheduling result and total cost after applying scheduling optimization techniques (january to june 2009)
week | no. of jobs | fe required | fe on shift | fe off shift | total fe | called on emergency | sitting idle
1  | 4 | 8  | 8  | 12 | 20 |   |
2  | 6 | 12 | 12 | 8  | 20 |   |
3  | 6 | 12 | 12 | 8  | 20 |   |
4  | 7 | 14 | 14 | 6  | 20 |   |
5  | 5 | 10 | 10 | 10 | 20 |   |
6  | 7 | 14 | 14 | 5  | 19 |   |
7  | 5 | 10 | 11 | 8  | 19 |   | 1
8  | 4 | 8  | 8  | 12 | 20 |   |
9  | 6 | 12 | 12 | 8  | 20 |   |
10 | 6 | 12 | 12 | 8  | 20 |   |
11 | 7 | 14 | 14 | 6  | 20 |   |
12 | 6 | 12 | 12 | 8  | 20 |   |
13 | 6 | 12 | 12 | 8  | 20 |   |
14 | 5 | 10 | 11 | 9  | 20 |   | 1
15 | 7 | 14 | 14 | 6  | 20 |   |
16 | 6 | 12 | 13 | 7  | 20 |   | 1
17 | 6 | 12 | 12 | 7  | 19 |   |
18 | 7 | 14 | 14 | 6  | 20 |   |
19 | 7 | 14 | 14 | 6  | 20 |   |
20 | 7 | 14 | 14 | 6  | 20 |   |
21 | 5 | 10 | 11 | 9  | 20 |   | 1
22 | 6 | 12 | 12 | 8  | 20 |   |
total (22 weeks) | 131 | 262 | 266 | 171 | 437 | 0 | 4

a. number of days per shift: 7
b. cost per field engineer per day (if on duty): $500
c. extra cost per day if field engineer called on emergency duty: $2000
d. total number of field engineers on normal shift: 266
e. total number of field engineers called on emergency duty: 0
f. total number of field engineers sitting idle on normal duty: 4
g. total cost of field engineers on normal duties = (a x b x d): $931000
h. total extra cost of field engineers called on emergency duty = (a x c x e): $0
i. total cost for idle field engineers = (a x b x f): $14000
j. total cost for period = (g + h + i): $945000

vii. conclusion and discussion the present paper demonstrated the [r, n] days-off scheduling problem, focusing on the issue of uneven demand for field engineers at oil rigs. the primary objective of the days-off scheduling problem was to minimize the workforce size, i.e., the total number of field engineers assigned, in order to reduce the transportation costs. however, a model that does not consider demands which may be cancelled or amended unpredictably at the last moment for unavoidable reasons faces limitations; as a result, the schedule of field engineers can be subject to last-minute changes.
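the cost comparison between tables iii and iv can be reproduced arithmetically. a minimal sketch, using the cost items (a, b, c) and the before/after totals (d, e, f) from the tables:

```python
# Reproduce the cost items of tables iii and iv.
DAYS_PER_SHIFT = 7        # item a
COST_PER_DAY = 500        # item b, $ per field engineer per day on duty
EMERGENCY_EXTRA = 2000    # item c, extra $ per day on emergency call

def period_cost(normal, emergency, idle):
    g = DAYS_PER_SHIFT * COST_PER_DAY * normal        # normal duties
    h = DAYS_PER_SHIFT * EMERGENCY_EXTRA * emergency  # emergency call-outs
    i = DAYS_PER_SHIFT * COST_PER_DAY * idle          # idle engineers
    return g + h + i

before = period_cost(normal=248, emergency=20, idle=6)   # table iii
after = period_cost(normal=266, emergency=0, idle=4)     # table iv
print(before, after, before - after)   # 1169000 945000 224000
```

the optimized schedule saves $224,000 over the six-month period, even though slightly more engineer-shifts are worked, because the expensive emergency call-outs are eliminated.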
however, it is possible to minimize the number of field engineers required on site, which results in subsequent cost savings. when field engineers are called for emergency work before their days off end, extra transportation and payroll costs occur. all of these can be optimized by using the approach suggested in this paper. the company can benefit not only in terms of savings but also by providing its field engineers a better quality of life, leading to increased retention. in general, the company's planners may modify provisional work assignments and review business objectives at any time, since the working environment itself (weather, traffic conditions) is unpredictable. owing to the limitations of the study, the present paper includes a limited set of variables regarding field engineers' scheduling. as future work, the above model can include added variables, such as stochastic demand for field engineers at the oil rigs and working conditions. acknowledgment the authors would like to thank mr. vinod agrahari of the client organization for introducing them to this problem and for his help, especially in obtaining data. references [1] s. f. smith, "knowledge-based production management: approaches, results and prospects", production planning & control, vol. 3, no. 4, pp. 350-380, 1992 [2] a. t. ernst, h. jiang, m. krishnamoorthy, d. sier, "staff scheduling and rostering: a review of applications, methods and models", european journal of operational research, vol. 153, pp. 3-27, 2004 [3] j. van den berg, d. panton, "personnel shift assignment: existence conditions and network models", networks, vol. 24, pp. 385-394, 1994 [4] l. edie, "traffic delays at toll booths", journal of the operations research society of america, vol. 2, no. 2, pp. 107-138, 1954 [5] s. aggarwal, "a focused review of scheduling in services", european journal of operational research, vol. 9, no. 2, pp. 114-121, 1982 [6] h. p.
williams, model building in mathematical programming, john wiley and sons, 1993 [7] k. r. baker, m. magazine, "workforce scheduling with cyclic demands and day-off constraints", management science, vol. 24, no. 2, pp. 161-170, 1977 [8] j. j. bartholdi, j. b. orlin, h. d. ratliff, "cyclic scheduling via integer programs with circular ones", operations research, vol. 28, no. 5, pp. 1074-1085, 1980 [9] w. townsend, "an approach to bus-crew roster design in london regional transport", journal of the operational research society, vol. 39, no. 6, pp. 543-550, 1988 [10] d. m. ryan, "the solution of massive generalized set partitioning problems in aircrew rostering", journal of the operational research society, vol. 43, no. 5, pp. 459-467, 1992 [11] s. c. k. chu, "generating, scheduling and rostering of shift crew duties: application at the hong kong international airport", european journal of operational research, vol. 177, no. 3, pp. 1764-1778, 2007 [12] f. f. easton, d. f. rossin, "equivalent alternate solutions for the tour scheduling problem", decision sciences, vol. 22, pp. 985-1007, 1991 [13] r. hung, "single-shift off-day scheduling of a hierarchical workforce with variable demands", european journal of operational research, vol. 78, no. 1, pp. 49-57, 1994 [14] r. hung, "multiple-shift workforce scheduling under the 3-4 workweek with different weekday and weekend labour requirements", management science, vol. 40, no. 2, pp. 280-284, 1994 [15] s. e. bechtold, m. j. brusco, "working set generation methods for labour tour scheduling", european journal of operational research, vol. 74, no. 3, pp. 540-551, 1994 [16] m. hojati, a. s.
patil , “an integer linear programming based heuristic for scheduling heterogeneous, part-time service employees”, european journal of operational research, vol. 209, no. 1, pp. 37-50, 2011 [17] r. c. carrasco, “long-term staff scheduling with regular temporal distribution”, computer methods and programs in biomedicine, vol. 100, no. 2, pp. 191–199, 2010 [18] r. nanda, j. browne, introduction to employee scheduling, van nostrand reinhold, new york, 1992 [19] n. narasimhan, “an algorithm for single shift scheduling of hierarchical workforce”, european journal of operational research, vol. 96, pp. 113-121, 1996 [20] h. emmons, r. n. burns, “off-day scheduling with hierarchical worker categories”, operations research, vol. 39, no. 3, pp. 484-495, 1991 [21] r. hung, “single-shift workforce scheduling model under a compressed workweek”, omega, vol. 19, pp. 494-497, 1991 [22] h. k. alfares, “optimum compressed workweek scheduling”, proceedings of the 22nd international conference on computers & industrial engineering, cairo, pp. 13-16, 1997 [23] h. k. alfares, “an efficient two-phase algorithm for cyclic days-off scheduling”, computers & operations research, vol. 25, no. 11, pp. 913-923, 1998 [24] n. musliu, j. gaertner, w. slany, “efficient generation of rotating workforce schedules”, discrete applied mathematics, vol. 118, no. 1– 2, pp. 85–98, 2002 [25] m. pinedo, x. chao, operations scheduling with applications in manufacturing and services, mcgraw-hill, computer science series, 1999 [26] j. blazewicz, k. ecker, e. pesch, g. schmidt, j. weglarz, scheduling computer and manufacturing processes, springer, new york, 2001 [27] m. pinedo, scheduling. theory, algorithms and systems, prentice hall, 2002 [28] “staff scheduling and rostering: theory and applications. part i”, annals of operations research, speciall issue, vol. 127, no. 1-4, 2004 [29] e. 
néron, "lower bounds for the multi-skill project scheduling problem", 8th international workshop on project management and scheduling, valencia, spain, 2002 [30] x. cai, k. n. li, "a genetic algorithm for scheduling staff of mixed skills under multi-criteria", european journal of operational research, vol. 125, no. 2, pp. 359-369, 2000

engineering, technology & applied science research vol. 8, no. 3, 2018, 2892-2896 2892 www.etasr.com iqbal et al.: effect of maximum aggregate size on the bond strength of reinforcements in concrete

effect of maximum aggregate size on the bond strength of reinforcements in concrete

shahid iqbal, department of civil engineering, cecos university of it and emerging sciences, peshawar, pakistan, shahid.iqbalmce@gmail.com
naqeeb ullah, department of civil engineering, cecos university of it and emerging sciences, peshawar, pakistan, naqibmarwat@hotmail.com
ahsan ali, department of civil engineering, quaid-e-awam university college of engineering, science & technology, larkana, pakistan, ahsanone@gmail.com

abstract- the bond between reinforcements and concrete is the only mechanism that transfers tensile stresses from concrete to reinforcements. several factors, including chemical adhesion, roughness of the reinforcement interface and bar bearing, affect the bond strength of reinforcements with concrete. this work was carried out considering another varying factor, the maximum aggregate size. four concrete mixes with similar compressive strengths but different maximum aggregate sizes of 25.4mm, 19.05mm, 12.7mm and 9.53mm were used with the same bar size of 16mm. compressive strength, splitting tensile strength and bond strength were studied for each concrete mix. test results depict a slight increase in compressive and splitting tensile strength with decreasing maximum aggregate size.
the bond strength remained at the same level with decreasing maximum aggregate size, except at the maximum aggregate size of 9.53mm, where there was a drop in bond strength despite better compressive and splitting tensile strengths. the aci-318 and fib 2010 code equations for bond strength calculation work well only when the maximum aggregate size is 12.7mm or above. therefore, maximum aggregate size is critical for bond strength when smaller size aggregates are used. keywords-concrete; aggregate size; pullout test; bond strength

i. introduction the role of the bond of reinforcement in concrete is of great importance. when the concrete member is loaded, tensile stresses are transferred from concrete to steel through the bond. therefore, proper bonding between reinforcing steel and concrete is essential to ensure the safety of concrete. when the concrete member cracks, tensile stresses are resisted by the reinforcement and reinforcement slip occurs, which is resisted by friction and reinforcement bearing, producing bond stresses [1]. the bond behavior of steel reinforcements with concrete is an important aspect which affects the performance of reinforced concrete [2-6]. there are three main components of the bond between reinforcements and concrete: friction, chemical adhesion and mechanical interlocking of the deformations in steel bars. the factors affecting bond strength include the strength and cover of concrete around the reinforcement, the geometry and yielding strength of the reinforcement, the embedded length, and the type of aggregates and admixtures used [1, 7-10]. an increase of up to 20% in pullout load is reported with an increase of concrete cover depth from 40mm to 70mm [9]. studying the effect of concrete cover on bond behavior, it is reported that the bond strength increases but the rate of increase decreases with increasing concrete cover [10]. thus, when concrete cover increases, the initial increase in bond strength is more pronounced than that from further increases in cover.
authors in [11] investigated the bond behavior of reinforcements in lightweight self-compacting concrete and reported 30% lower bond strength for all lightweight scc compared to normal weight concrete. corrosion also plays an important role in bond performance: a decrease in bond strength is reported when corroded reinforcements are used [12]. for this purpose, protective layers against corrosion may be used; it is reported that galvanization of reinforcements can improve their resistance against carbonation and can extend the life of reinforced concrete structures [13]. investigating the effect of bar size on the bond strength of reinforcements in concrete, it is reported that the bond strength of smaller bars (10mm diameter) is 21% higher than that of larger bars (20mm) [1]. the size of aggregate may be another bond-influencing factor. to the best of our knowledge, there is limited literature available on the effect of aggregate size on bond strength. therefore, this study was conducted to investigate the effect of aggregate size on the bond behavior of reinforcements in concrete. bond strength is calculated by dividing the maximum pullout load by the surface area of the reinforcement bar, as given by (1):

μ = p / (π d l)    (1)

where μ is the bond strength, p is the maximum pullout load, and l and d are the embedded length and diameter of the bar used.

ii. materials and methodology a. materials used normal weight fine aggregates and crushed coarse aggregates, supplied by a local material supplier, were used. four different coarse aggregates having different maximum aggregate sizes, i.e. 9.5mm, 12.5mm, 19mm, and 25mm, were used (figure 1). the maximum sizes of the aggregates were selected on the basis of astm standard sieves.
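as a numerical illustration of (1), the sketch below back-computes the pullout load implied by a given bond strength; the 150mm embedded length is only an assumed value for illustration (the actual embedded length is not stated here):

```python
import math

def bond_strength(p_newton, d_mm, l_mm):
    # bond strength (mpa) per (1): maximum pullout load divided by the
    # embedded bar surface area, pi * d * l  (n / mm^2 = mpa)
    return p_newton / (math.pi * d_mm * l_mm)

# hypothetical check: a 16 mm bar with an assumed 150 mm embedded length
# reaching mix-25's reported bond strength of 7.76 mpa corresponds to a
# pullout load p of roughly 58.5 kn
p = 7.76 * math.pi * 16 * 150
assert abs(bond_strength(p, 16, 150) - 7.76) < 1e-9
```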
cem-i 42.5n cement, supplied by a local cement manufacturer (cherat cement), was used for all the concrete mixes.

fig. 1. coarse aggregates

b. experimental program fresh and hardened concrete properties were investigated for each mix. workability and density were investigated on the fresh concrete, while compressive strength, splitting tensile strength and bond strength were studied on the hardened concrete. workability and density were found using the [14] and [15] standards. six cylinder specimens, 100mm in diameter and 200mm in height, were cast from each concrete mix and cured in a water tank as defined in [16]. three cylinders of each mix were tested for compressive strength as per [17] at 28 days of concrete age, by application of a constant loading rate of 0.25mpa/sec. three cylinder specimens of each concrete were tested for splitting tensile strength as per [18] at the concrete age of 28 days by load application at a constant rate of 1mpa/min. three cube samples were cast, embedded with 16mm diameter bars, to test the bond strength as per [19]. the size of the cube was kept at 150mmx150mmx150mm, which is the most common cube size in concrete tests. these samples were cured in a water tank for 28 days and tested in a pullout test with the pullout load applied at a constant rate of 0.1kn/sec. the pullout test is a widely practiced and easy to perform test used to investigate bond strength. it has been used by different researchers to study the factors affecting bond strength, e.g. concrete compressive strength, size and geometry of reinforcement bars, and active and passive confinement [20, 21]. the pull-out sample and testing assembly are shown in figure 2.

iii. results and discussion a. concrete mix design trial mixes were conducted to finalize the concrete mix with a maximum aggregate size of 25mm and a target 28-day compressive strength of 20mpa.
from previous studies [22-24] and trial compressive strength tests of each mix, it was observed that the compressive strength of concrete increases as the maximum size of the coarse aggregates decreases. this study aims at investigating the effect of aggregate size on bond strength while keeping all other parameters constant. therefore, to achieve similar compressive strengths and compensate for this variation of compressive strength due to the change in maximum aggregate size, the cement quantity was reduced by 2% for every consecutive reduction of the coarse aggregate maximum size, with an increase in w/c ratio of 0.005. mix compositions for all the mixes are summarized in table i. the concrete mixes containing maximum aggregate sizes of 25mm, 19mm, 12.5mm and 9.5mm were designated as mix-25, mix-19, mix-12.5 and mix-9.5 respectively.

fig. 2. pull-out specimen and testing arrangement

table i. concrete mix composition
concrete type | cement (kg/m³) | coarse aggregate (kg/m³) | fine aggregate (kg/m³) | water (kg/m³) | w/c ratio
mix-25 | 400 | 800 | 880 | 192 | 0.48
mix-19 | 392 | 800 | 880 | 190 | 0.485
mix-12.5 | 384 | 800 | 880 | 188 | 0.49
mix-9.5 | 376 | 800 | 880 | 186 | 0.495

b. fresh concrete properties the test results for fresh concrete properties are presented in table ii. they indicate a decrease in slump with decreasing maximum aggregate size, while the concrete density remains essentially unchanged. the relation is graphically represented in figure 3.

table ii. fresh concrete properties
concrete type | maximum coarse aggregate size (mm) | slump (mm) | concrete density (kg/m³)
mix-25 | 25 | 89 | 2421
mix-19 | 19 | 83 | 2417
mix-12.5 | 12.5 | 78 | 2412
mix-9.5 | 9.5 | 65 | 2408

c. hardened concrete properties the test results for hardened concrete properties of all the concrete types are presented in table iii. instances of the compressive strength, splitting tensile strength and pull-out tests conducted in the lab are shown in figure 4.
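the stated adjustment rule (cement reduced by 2% of the base 400 kg/m³ per aggregate-size step, w/c increased by 0.005 per step) reproduces the table i quantities; a small sketch:

```python
# sketch of the table i mix progression from the stated rule; water
# content follows from the cement content and the w/c ratio
def mix_design(base_cement=400.0, base_wc=0.48, steps=4):
    rows = []
    for i in range(steps):
        cement = base_cement - 0.02 * base_cement * i  # -2% of base per step
        wc = base_wc + 0.005 * i                       # +0.005 per step
        water = round(cement * wc)                     # kg/m3
        rows.append((int(cement), water, round(wc, 3)))
    return rows

rows = mix_design()  # matches table i: cement 400..376, water 192..186
```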
1) compressive strength the test results for the compressive strength of all concrete mixes are summarized in table iii. although the cement content was intentionally decreased with decreasing maximum aggregate size, to keep the compressive strengths of all concretes at the same level and thus study only the effect of maximum aggregate size on bond strength, there is still an indication of increasing compressive strength as the maximum aggregate size reduces. the relation is graphically shown in figure 5.

fig. 3. fresh concrete properties variation

table iii. hardened concrete properties
concrete type | maximum coarse aggregate size (mm) | compressive strength (mpa) | splitting tensile strength (mpa) | bond strength (mpa)
mix-25 | 25 | 21.22 | 2.76 | 7.76
mix-19 | 19 | 21.82 | 2.78 | 7.93
mix-12.5 | 12.5 | 21.95 | 2.97 | 7.88
mix-9.5 | 9.5 | 22.02 | 3.11 | 7.15

fig. 4. hardened concrete tests

fig. 5. compressive strength variation

2) splitting tensile strength the test results for all concrete types are presented in table iii. similar to compressive strength, there is an increasing trend in splitting tensile strength with decreasing maximum aggregate size, despite the decrease in cement content. this trend is graphically presented in figure 6.

fig. 6. splitting tensile strength variation

3) bond strength the main objective of this study was to investigate the effect of maximum aggregate size on the bond strength of reinforcements with concrete. results for bond strength, calculated using (1), are summarized in table iii. results indicate no major change in bond strength with decrease in maximum aggregate size.
however, surprisingly, there is a drop in bond strength when the maximum aggregate size of 9.5mm was used, despite this mix possessing the highest compressive and splitting tensile strengths among all the concretes used. the reason may be the decrease in interlocking provided by small aggregates to the pulled-out bar. the failure pattern in all pullout specimens was splitting of the concrete cover, which is caused by the wedging effect of the bar deformations, as shown in figure 7.

fig. 7. pullout samples after bond failure

the experimental bond strength results were compared with those calculated from different equations in the literature. four different equations were considered, (2)-(5), taken from [25-28] respectively (the full expressions are given in the cited references).

table iv. experimental and calculated bond strengths (mpa)
concrete type | experimental | (2) | (3) | (4) | (5)
mix-25 | 7.76 | 7.87 | 7.83 | 7.34 | 7.25
mix-19 | 7.93 | 7.93 | 7.88 | 7.45 | 7.33
mix-12.5 | 7.88 | 7.94 | 7.89 | 7.47 | 7.35
mix-9.5 | 7.15 | 7.94 | 7.90 | 7.48 | 7.36

results indicate that there is no major impact of aggregate size on the bond strength of reinforcements in concrete when higher maximum aggregate sizes are used. equations (2) and (3) (from [25, 26]) are extremely good in predicting bond strength values, while (4) and (5) (from [27, 28]) give slightly conservative results.
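the percentage variations reported in table v follow directly from the table iv values, as variation = (calculated − experimental) / experimental × 100; for example, for equation (2):

```python
# recompute the table v percentage variations for equation (2) from the
# table iv bond strengths (mpa)
experimental = {"mix-25": 7.76, "mix-19": 7.93, "mix-12.5": 7.88, "mix-9.5": 7.15}
eq2 = {"mix-25": 7.87, "mix-19": 7.93, "mix-12.5": 7.94, "mix-9.5": 7.94}

variation = {m: round((eq2[m] - e) / e * 100, 2) for m, e in experimental.items()}
# e.g. variation["mix-25"] is 1.42 (%), variation["mix-9.5"] is about 11 (%)
```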
however, when the maximum aggregate size reduces below 10mm, there is a drop in the experimental bond strength results despite better compressive and tensile strengths, which may be due to the lower resistance provided by smaller size aggregates to pulled-out bars. this drop is not reflected in the equations used for bond strength calculations. as all the specimens failed by splitting, this phenomenon may have been more pronounced if the concrete cover was increased to initiate pure pullout instead of specimen splitting. in that case, bigger size aggregates may induce more resistance to bar pullout by interlocking compared to smaller size aggregates. the comparison of all values is shown in figure 8. the percentage variations of the equation values were calculated with respect to the experimental results and are summarized in table v. equations (2) and (3) fit really well with the experimental values, with less than 2% variation, when maximum aggregate sizes of 12.5mm and above are used. however, when the maximum aggregate size of 9.5mm was used, the variation was above 10%. the variations of (4) and (5) with respect to the experimental values were in the range of ±5-7% for all values.

fig. 8. bond strength variation and comparison

table v. percentage variations
concrete type | (2) | (3) | (4) | (5)
mix-25 | 1.42% | 1% | -5.4% | -6.6%
mix-19 | 0 | -0.6% | -6% | -7.6%
mix-12.5 | 0.76% | 0 | -5.2% | -6.7%
mix-9.5 | 11% | 10.5% | 4.6% | 3%

iv. conclusions the following conclusions have been drawn based on the experimental study:
• the workability of concrete decreases with decrease in maximum aggregate size used.
• with increase in maximum aggregate size used in concrete, the compressive strength and splitting tensile strength decrease.
• when larger maximum coarse aggregate sizes are used in concrete, there is no notable variation in bond strength, but bond strength reduces when a maximum aggregate size of less than 10mm is used.
this may be due to the lower interlocking provided to the pulled-out bar by smaller aggregates. this effect may be more pronounced if splitting of the samples is avoided by increasing the concrete cover.
• the equations taken from [25, 26] for bond strength calculation work extremely well for concretes with higher maximum aggregate sizes, but do not work well when lower maximum aggregate sizes (<10mm) are used. an aggregate size effect may be introduced in these equations for lower maximum aggregate size concretes to better reflect the actual bond strength.

references
[1] a. ali, s. iqbal, k. holschemacher, t. a. bier, “bond of reinforcement with normal-weight fiber reinforced concrete”, periodica polytechnica civil engineering, vol. 61, no. 1, pp. 128-134, 2017
[2] fib, “bond of reinforcement in concrete: state-of-art report”, bulletin no. 10, fib, 2000
[3] s. hong, s. k. park, “uniaxial bond stress-slip relationship of reinforcing bars in concrete”, advances in materials science and engineering, vol. 2012, article id 328570, 2012
[4] c. jiang, y. f. wu, g. wu, “plastic hinge length of frp-confined square rc columns”, journal of composites for construction, vol. 18, no. 4, 2014
[5] d. s. gu, y. f. wu, g. wu, z. s. wu, “plastic hinge analysis of frp confined circular concrete columns”, construction and building materials, vol. 27, no. 1, pp. 223-233, 2012
[6] d. guan, c. jiang, z. guo, h. ge, “development and seismic behavior of precast concrete beam-to-column connections”, journal of earthquake engineering, vol. 22, no. 2, pp. 234-256, 2016
[7] e. fehling, p. lorenz, t. leutbecher, “experimental investigations on anchorage of rebars in uhpc”, proceedings of hipermat 2012, 3rd international symposium on uhpc and nanotechnology for high performance construction materials, pp. 533-540, 2012
[8] a. f. bingol, r. gul, “residual bond strength between steel bars and concrete after elevated temperatures”, fire safety journal, vol. 44, no. 6, pp.
854–859, 2009
[9] h. s. arel, s. yazici, “concrete-reinforcement bond in different concrete classes”, construction and building materials, vol. 36, pp. 78–83, 2012
[10] b. bai, h. k. choi, c. s. choi, “bond stress between conventional reinforcement and steel fiber reinforced reactive powder concrete”, construction and building materials, vol. 112, pp. 825–835, 2016
[11] m. i. kaffetzakis, c. g. papanicolaou, “bond behaviour of reinforcement in lightweight aggregate self-compacting concrete”, construction and building materials, vol. 113, pp. 641–652, 2016
[12] x. fu, d. d. l. chung, “effect of corrosion on the bond between concrete and steel rebar”, cement and concrete research, vol. 27, no. 12, pp. 1811–1815, 1997
[13] p. pokorny, p. tej, m. kouril, “evaluation of impact of corrosion of hot-dip galvanized reinforcement on bond strength with concrete – a review”, construction and building materials, vol. 132, pp.
271–289, 2017
[14] astm, “c143/c143m-15a: standard test method for slump of hydraulic-cement concrete”, in: annual book of astm standards, volume 04.02 concrete and aggregates, astm international, 2012
[15] astm, “c138/c138m-17a: standard test method for density (unit weight), yield and air content (gravimetric) of concrete”, in: annual book of astm standards, volume 04.02 concrete and aggregates, astm international, 2012
[16] astm, “c192/c192m-16a: standard practice for making and curing concrete test specimens in the laboratory”, in: annual book of astm standards, volume 04.02 concrete and aggregates, astm international, 2012
[17] astm, “c39/c39m-18: standard test method for compressive strength of cylindrical concrete specimens”, in: annual book of astm standards, volume 04.02 concrete and aggregates, astm international, 2012
[18] astm, “c496/c496m-17: standard test method for splitting tensile strength of cylindrical concrete specimens”, in: annual book of astm standards, volume 04.02 concrete and aggregates, astm international, 2012
[19] rilem, “technical recommendations for the testing and use of construction materials”, taylor & francis, 1994
[20] r. eligehausen, e. p. popov, v. v. bertero, local bond stress–slip relationships of deformed bars under generalized excitations, report no. usb/eerc 83/23, earthquake engineering research center, university of california, berkeley, california, 1983
[21] b. s. hamad, “bond strength improvement of reinforcing bars with specially designed rib geometries”, aci structural journal, vol. 92, no. 1, pp. 3–13, 1995
[22] m. yaqub, i. bukhari, “effect of size of coarse aggregate on compressive strength of high strength concrete”, 31st conference on our world in concrete and structures, singapore, august 16-17, 2006
[23] n. a. a. hamid, n. f. abas, “a study on effects of size coarse aggregate in concrete strength”, jurnal teknologi, vol. 75, no. 5, pp. 51–55, 2015
[24] a.
neville, “aggregate bond and modulus of elasticity of concrete”, materials journal, vol. 94, no. 1, pp. 71–74, 1997
[25] aci committee 408, “bond and development of straight reinforcing bars in tension (aci 408r-03)”, american concrete institute, 2003
[26] fib, model code 2010, first complete draft, vol. 1, 2010
[27] c. o. orangun, j. o. jirsa, j. e. breen, “a reevaluation of test data on development length and splices”, aci journal, vol. 74, no. 3, pp. 114–122, 1977
[28] m. r. esfahani, b. v. rangan, “bond between normal strength and high-strength concrete (hsc) and reinforcing bars in splices in beams”, aci structural journal, vol. 95, no. 3, pp. 272–280, 1998

engineering, technology & applied science research vol. 9, no. 3, 2019, 4261-4264 4261 www.etasr.com saada et al.: application of stochastic analysis, modeling and simulation (sams) to selected … application of stochastic analysis, modeling and simulation (sams) to selected hydrologic data in the middle east

nidhal saada, civil engineering department, al-ahliyya amman university, amman, jordan, n.saada@ammanu.edu.jo
mustafa abdullah, civil engineering department, al-ahliyya amman university, amman, jordan, mrashied@ammanu.edu.jo
arwa hamaideh, water, environment and energy center, university of jordan, amman, jordan, arwa.efb@gmail.com
ali abu-romman, civil engineering department, al-ahliyya amman university, amman, jordan, a.aburuman@ammanu.edu.jo

abstract—water resources in the middle east are very scarce and the management of these resources is a challenge. in this paper, the use of the stochastic analysis, modeling, and simulation (sams) software package on selected hydrologic data in the middle east (namely jordan and saudi arabia) is explored. modeling and simulation experiments were conducted to test the capabilities of sams for stochastic modeling and simulation in the middle east region.
the hydrologic data used in this study consist of historic observed rainfall data of different lengths at various sites in jordan and saudi arabia. the models used in this study include autoregressive moving average (arma) models, periodic autoregressive moving average (parma) models, multi-site contemporaneous autoregressive moving average (carma) models, and temporal disaggregation models. results indicate that sams can be used as a tool for stochastic modeling and simulation of hydrologic data in jordan and saudi arabia. it is important for managers and decision makers of water resources in these countries to be able to use sophisticated tools such as sams while deciding water management policies. keywords-stochastic analysis; modeling; simulation; hydrologic data

i. introduction the middle east region suffers from water resource scarcity. the situation is getting worse due to climate change, conflicts, wars, and economic and political instability. as a result, water resource management is a priority for the wellbeing of countries in the region, and the use of sophisticated tools for better management of water resources is vital. sams is a software package that deals with stochastic analysis, modeling, and simulation of hydrologic time series, and runs under the windows operating system. the package is user-friendly and consists of many menu and option windows which enable the user to choose among the different available options. the current version of sams is sams 2007. sams capabilities can be classified into three categories: analysis of historic data, model fitting and parameter estimation, and synthetic data generation. the data analysis features of sams consist of data plotting, checking the normality of the data, data transformation, and data statistical characteristics. sams has the capability of analyzing single-site and multisite annual and seasonal data. the second application of sams is model fitting.
it includes parameter estimation and model testing for alternative univariate and multivariate annual and monthly stochastic models. these include arma, parma, multisite arma, and disaggregation models [1]. the third main application of sams is data generation, undertaken based on the fitted models mentioned above. the statistical characteristics of the generated data are presented in graphical or tabular form along with the historical statistics of the data used to fit the models. in this study, we explore the use of sams as a modeling and simulation tool in the middle east. for that purpose, selected hydrologic data from jordan and saudi arabia were used. providing water resource managers in the region with powerful modeling and simulation tools is vital for better management of water resources in the region. (corresponding author: nidhal saada)

ii. methodology a. data used the data used in this study consist of the historic monthly and annual rainfall data for two stations in saudi arabia (surat obeida and malaki) and the standardized precipitation index (spi) data for five stations in jordan (table i). the data from surat obeida covered a period of 30 years, from 1981 to 2010, while the malaki record covered 27 years (1967-1993). the historic monthly rainfall data for the five stations in jordan were used to calculate the spi [2] by using files from the national drought mitigation center.

b. models used 1) arma model the arma(p,q) model may be written as [3]:

z_t = Σ_{i=1..p} φ_i z_{t-i} + ε_t − Σ_{i=1..q} θ_i ε_{t-i}    (1)

where z_t represents the standardized process for year t; it has mean=0 and variance σ_z², and is normally distributed. ε_t is the uncorrelated noise term with mean=0 and variance σ_ε², and is also normally distributed. φ_1,...,φ_p are the autoregressive parameters and θ_1,...,θ_q are the moving average parameters. for example, for p=q=1, the arma(1,1) model becomes:

z_t = φ_1 z_{t-1} + ε_t − θ_1 ε_{t-1}    (2)

table i. data used
station name | data type | period | length (y) | location
malaki | annual rainfall | 1967-1993 | 27 | asir, s.a.
surat obeida | monthly rainfall | 1981-2010 | 30 | asir, s.a.
kufr sawm | spi | 1983-2013 | 31 | irbid, jordan
ras munif | spi | 1983-2013 | 31 | irbid, jordan
jarash | spi | 1983-2013 | 31 | jarash, jordan
swileh | spi | 1983-2013 | 31 | swileh, jordan
amman airport | spi | 1983-2013 | 31 | amman, jordan
s.a.: saudi arabia

2) parma model the parma(p,q) model may be written as [4]:

z_{ν,τ} = Σ_{i=1..p} φ_{i,τ} z_{ν,τ-i} + ε_{ν,τ} − Σ_{i=1..q} θ_{i,τ} ε_{ν,τ-i}    (3)

where z_{ν,τ} represents the standardized process for year ν and season τ; it has mean=0 and variance σ_z²(τ), and is normally distributed. ε_{ν,τ} is the uncorrelated noise term with mean=0 and variance σ_ε²(τ), and is also normally distributed. φ_{1,τ},...,φ_{p,τ} are the seasonal autoregressive parameters and θ_{1,τ},...,θ_{q,τ} are the seasonal moving average parameters. specifically, for p=q=1, the parma(1,1) model becomes:

z_{ν,τ} = φ_{1,τ} z_{ν,τ-1} + ε_{ν,τ} − θ_{1,τ} ε_{ν,τ-1}    (4)

3) carma model the carma model can be decoupled into component univariate models, thus making parameter estimation much easier than for full multivariate models [4]. the carma(p,q) model can be described as:

Z_t = Σ_{i=1..p} Φ_i Z_{t-i} + E_t − Σ_{i=1..q} Θ_i E_{t-i}    (5)

where Z_t is a column vector for year t in which each element represents the process (spi-12 at each site in this case). each element is normally distributed with mean=0 and variance σ_z². Φ_i are the diagonal autoregressive parameter matrices, Θ_i are the diagonal moving average matrices, and E_t is a vector of residuals of the process at time t; the residuals are uncorrelated in time but correlated in space. equation (5) can be decoupled and written for each site i as:

z_t(i) = Σ_{j=1..p} φ_j(i) z_{t-j}(i) + e_t(i) − Σ_{j=1..q} θ_j(i) e_{t-j}(i)    (6)

equation (6) represents the univariate arma(p,q) model for site i.
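the arma(1,1) recursion in (2) is straightforward to simulate directly; a minimal sketch (the φ1 and θ1 values reuse the malaki arma(1,1) estimates given in table ii, with unit noise variance assumed here for a standardized series):

```python
import random

def arma11(n, phi1, theta1, sigma_eps, seed=0):
    # generate n values of (2): z_t = phi1*z_(t-1) + eps_t - theta1*eps_(t-1)
    rng = random.Random(seed)
    z, eps_prev, out = 0.0, 0.0, []
    for _ in range(n):
        eps = rng.gauss(0.0, sigma_eps)
        z = phi1 * z + eps - theta1 * eps_prev
        out.append(z)
        eps_prev = eps
    return out

# a 27-value synthetic series (historical length of the malaki record)
series = arma11(27, phi1=0.742, theta1=0.319, sigma_eps=1.0)
```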
for p=q=1, the arma(1,1) model at each site i can then be described as:

z_t(i) = φ_1(i) z_{t-1}(i) + e_t(i) − θ_1(i) e_{t-1}(i)    (7)

the residuals can be expressed as:

E_t = B W_t    (8)

where W_t is a vector of random residuals that are uncorrelated in time and in space, and B is a parameter matrix. it can be shown that the covariance matrix G of the residuals E_t can be expressed as [4]:

G = B Bᵀ    (9)

where Bᵀ is the transpose of matrix B. as such, the carma model implies that the cross-correlations between sites are preserved through the residuals [4]. notice also that the variances of the residuals (σ_e²) at each site are the diagonal elements of the G matrix for each corresponding site [4].

4) disaggregation model the general lane temporal disaggregation model for a number of sites n can be expressed as [5]:

Y_{ν,τ} = A X_ν + B ε_{ν,τ} + C Y_{ν,τ-1}    (10)

where Y_{ν,τ} is an n×1 column vector representing the seasonal series, n is the number of sites (n=1 in our case), X_ν is an n×1 column vector representing the annual data series, and ε_{ν,τ} is an n×1 vector of uncorrelated, normally distributed noise. the model parameters A, B, and C can be estimated using the method of moments [5].

c. parameter estimation sams has two methods for parameter estimation: the method of moments (mom) and the least squares method (ls). authors in [1] provided more details about these methods and their calculation by sams. figure 1 shows a screenshot of the sams parameter estimation of an arma(2,1) model.

fig. 1. sams parameter estimation for an arma(2,1) model.

sams also provides the user with the ability to estimate the parma, carma, and disaggregation models.

d. stochastic simulation sams allows the user to run stochastic simulation experiments.
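a minimal sketch of such a simulation experiment, using the simpler ar(1) special case with illustrative parameters (φ1 reuses the table ii arma(1,0) estimate; unit noise variance is an assumption): generate 100 synthetic samples of historical length and compare their average standard deviation with the model's stationary value:

```python
import random
import statistics

def ar1(n, phi1, sigma_eps, rng):
    # ar(1) generator: z_t = phi1*z_(t-1) + eps_t
    z, out = 0.0, []
    for _ in range(n):
        z = phi1 * z + rng.gauss(0.0, sigma_eps)
        out.append(z)
    return out

rng = random.Random(42)
phi1, sigma_eps, n = 0.514, 1.0, 27
samples = [ar1(n, phi1, sigma_eps, rng) for _ in range(100)]

avg_sd = statistics.mean(statistics.stdev(s) for s in samples)
theoretical_sd = sigma_eps / (1 - phi1 ** 2) ** 0.5  # stationary sd of ar(1)
# avg_sd should fall close to theoretical_sd, as in sams's comparison tables
```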
once the model parameters are estimated, the user can generate synthetic data using the model. the user can specify the number of samples to generate and the length of each sample, and sams will then generate the required data. figure 2 shows a screenshot of the sams generation of data from an arma model. the average statistics calculated from these generated series can then be compared with the historical data statistics. figure 3 shows such a comparison of the basic statistics (mean, standard deviation, etc.) for an arma(2,1) model. additionally, sams provides a statistical comparison for important drought-related statistics such as the longest drought, deficit and surplus statistics, range, and hurst coefficient, as shown in figure 4. sams also provides a statistical comparison for a number of other important statistics such as the correlation structure of the data.

fig. 2. sams data generation window

fig. 3. comparison between historic and generated basic statistics for an arma(2,1) model

figure 5 shows a comparison of the serial correlation of the arma(2,1) model for malaki, saudi arabia. the ability of a model to preserve these statistics is important for water resources managers and decision makers. sams gives managers the ability to try different models in a simple and easy manner.

fig. 4. comparison between historic and generated drought and surplus related statistics for an arma(2,1) model.

fig. 5. comparison between historic and generated serial correlation for an arma(2,1) model.

iii. results and discussion sams was used to fit several arma models to the annual rainfall data at malaki, saudi arabia. table ii shows the parameter estimates for the arma(1,0), arma(1,1), and arma(2,1) models. the results shown are the mom parameter estimates.

table ii. sams parameter estimation of arma models*
model | parameters
arma(1,0) | autoregressive: φ1=0.514; variance of residuals: σ_ε²=14108.7
arma(1,1) | autoregressive: φ1=0.742; moving average: θ1=0.319; variance of residuals: σ_ε²=13711.1
arma(2,1) | autoregressive: φ1=−0.008, φ2=−0.386; moving average: θ1=−0.455; variance of residuals: σ_ε²=13660.5
* for the annual rainfall at malaki, saudi arabia

simulations were conducted for malaki, saudi arabia by generating synthetic time series from the arma models mentioned above [6]. in each experiment, 100 samples, each with length equal to the historical length of the series at malaki, were generated from the arma models [6]. statistical comparison of historic and generated data revealed that the models were capable of preserving the statistics of the historic data, such as the mean, standard deviation and serial correlation structure [6]. sams was also used for modeling and simulation of parma models for the monthly rainfall data of surat obeida, saudi arabia [7]. similarly, the temporal disaggregation model was also used for modeling and simulation purposes for surat obeida [7]. results indicate that both the parma and the disaggregation model were capable of preserving the seasonal statistics of the data [7]. however, the disaggregation model was superior to the parma model in terms of preserving the underlying annual correlation structure of the data [7]. a multisite carma(1,1) model was applied to the spi data for the five stations in jordan [8]. table iii shows the estimated autoregressive and moving average parameters and table iv shows the estimated variance-covariance matrix of the residuals.
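relation (9), g = b·bᵀ, can be checked directly against a two-station subset of the table iv residual covariances (kufr sawm and ras munif); b is obtained here by a hand-rolled 2×2 cholesky factorization:

```python
import math
import random

# residual covariance for two of the jordan stations (table iv)
g = [[0.148, 0.134],
     [0.134, 0.162]]

# lower-triangular cholesky factor b with g = b * b^T (2x2 case)
b11 = math.sqrt(g[0][0])
b21 = g[1][0] / b11
b22 = math.sqrt(g[1][1] - b21 ** 2)

# spatially correlated residuals e_t = b * w_t from independent noise w_t
rng = random.Random(1)
w = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
e = [b11 * w[0], b21 * w[0] + b22 * w[1]]

# check that b reproduces g, i.e. relation (9)
assert abs(b11 * b11 - g[0][0]) < 1e-12
assert abs(b21 * b11 - g[1][0]) < 1e-12
assert abs(b21 ** 2 + b22 ** 2 - g[1][1]) < 1e-12
```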
simulation experiments conducted using the carma(1,1) model reveal that the model performed well in preserving the historical statistics of the observed data at each station. furthermore, the model was able to preserve the spatial cross-correlation structure of the stations studied [8]. table v shows the historical and generated lag-0 cross correlations. based on the above results, sams can provide the user with a very powerful tool for sophisticated modeling and simulation of hydrologic data. this is very important for a water resource manager's estimation, prediction and forecasting efforts toward better management of water resources.

table iii. fitted carma(1,1) model parameters for spi data in jordan
station | autoregressive parameter (φ1) | moving average parameter (θ1)
kufr sawm | 0.922 | -0.008
ras munif | 0.910 | -0.031
jarash | 0.917 | 0.090
swileh | 0.938 | 0.082
amman airport | 0.895 | -0.006

table iv. residuals' covariance matrix of the fitted carma(1,1) model, spi data, jordan
station | kufr sawm | ras munif | jarash | swileh | amman airport
kufr sawm | 0.148 | 0.134 | 0.134 | 0.112 | 0.128
ras munif | 0.134 | 0.162 | 0.151 | 0.123 | 0.146
jarash | 0.134 | 0.151 | 0.188 | 0.124 | 0.169
swileh | 0.112 | 0.123 | 0.124 | 0.141 | 0.147
amman airport | 0.128 | 0.146 | 0.170 | 0.147 | 0.196

table v. historical and generated lag-0 correlations of the spi-12 data (generated values in parentheses)
station | kufr sawm | ras munif | jarash | swileh
kufr sawm | 1.0 (1.0) | | |
ras munif | 0.87 (0.86) | 1.0 (1.0) | |
jarash | 0.80 (0.77) | 0.86 (0.85) | 1.0 (1.0) |
swileh | 0.77 (0.72) | 0.80 (0.77) | 0.75 (0.71) | 1.0 (1.0)
amman airport | 0.75 (0.70) | 0.82 (0.81) | 0.88 (0.87) | 0.85 (0.83)

iv. conclusion sams is a software tool that can be used for stochastic modeling and simulation of hydrologic data. in this study, several stochastic models (arma, parma, carma, and disaggregation models) were fitted to hydrologic data in jordan and saudi arabia. simulation experiments were conducted.
synthetic data were generated from the different fitted models by sams. sams provides the user the ability to compare the statistics of the generated data with those of the historic data. sams proved to be a powerful and valuable tool that can be used by water resource managers in the middle east and should help them in making better decisions in the management of the valuable and scarce water resources of that region.

references

[1] o. g. b. sveinsson, j. d. salas, w. l. lane, d. k. frevert, stochastic analysis, modeling, and simulation (sams), version 2007, user's manual, colorado state university, 2007
[2] t. b. mckee, n. j. doesken, j. kleist, "the relationship of drought frequency and duration to time scales", eighth conference on applied climatology, anaheim, usa, january 17–22, 1993
[3] j. d. salas, n. saada, c. h. chung, stochastic modeling and simulation of the nile river system monthly flows, hydrologic science and engineering program, department of civil engineering, colorado state university, 1995
[4] j. d. salas, n. saada, c. h. chung, w. l. lane, d. k. frevert, stochastic analysis, modeling, and simulation (sams), version 2000, user's manual, colorado state university, 2000
[5] w. l. lane, d. k. frevert, applied stochastic techniques (personal computer version): user manual, bureau of reclamation, us department of interior, 1990
[6] n. saada, "simulation of long term characteristics of annual rainfall in selected areas in saudi arabia", computational water, energy, and environmental engineering, vol. 4, no. 2, pp. 18-24, 2015
[7] n. saada, "time series modeling of monthly rainfall in arid areas: case study for saudi arabia", american journal of environmental sciences, vol. 10, no. 3, pp. 277-282, 2014
[8] n. saada, a. abu-romman, "multi-site modeling and simulation of the standardized precipitation index (spi) in jordan", journal of hydrology: regional studies, vol. 14, pp. 83-91, 2017

engineering, technology & applied science research vol. 8, no.
4, 2018, 3108-3112 3108 www.etasr.com ahmad et al.: effect of shear flow on crystallization of syndiotactic polypropylene/clay composites

effect of shear flow on crystallization of syndiotactic polypropylene/clay composites

naveed ahmad, department of chemical and material engineering, college of engineering, northern border university, arar, kingdom of saudi arabia
elsayed fouad, department of chemical and material engineering, college of engineering, northern border university, arar, kingdom of saudi arabia
farooq ahmad, department of chemical and material engineering, college of engineering, northern border university, arar, kingdom of saudi arabia

abstract-the high sensitivity of crystallization to shear flow has been a subject of great research interest during the last several years. a set of syndiotactic polypropylene/clay composite samples was used to examine the effect of shear flow on crystallization kinetics. this phenomenon alters both processing and final material properties. in the present work, the effects of clay content and shear flow on the rate of flow induced crystallization were investigated using a rheological technique. small amplitude oscillatory shear experiments were performed using an advanced rheometric expansion system (ares). the crystallization rate was found to be altered by both shear and clay content in the polymer composites.

keywords-shear flow; flow induced crystallization; syndiotactic polypropylene/clay composites; induction time; deborah number; crystallization kinetics

i. introduction

the enhancement in the rate of polymer crystallization due to the application of flow is known as flow induced crystallization. in other words, flow induced crystallization can be defined as the process in which the rate of polymer crystallization is accelerated by the action of flow [1]. this phenomenon alters both processing and final material properties. the physics behind flow induced crystallization is simple.
when a polymer is subjected to a flow, the polymer chains are oriented and stretched. this results in a decrease in entropy, or equivalently an increase in free energy [2, 3]. this increase in free energy acts as a driving force and thus accelerates the polymer crystallization process by accelerating the rate of nucleation. in general, the process of crystallization occurs in two steps. in the first step, the formation of (stable) nuclei occurs, while in the second step the subsequent growth of crystallites occurs. the flow affects the first step of crystallization (the nucleation stage) [1]. the mechanism of flow induced crystallization has been explained very well in [2, 3]. the process can be described as the stretching of long chains to form fibrous crystals. during the stretching process, the chains are distorted from their most probable conformation and hence a decrease in the conformational entropy occurs. if the deformation maintains this lower conformational entropy state, then less conformational entropy needs to be sacrificed in transforming to the crystalline state. the decrease in total entropy allows crystallization to occur at temperatures higher than those at which it would take place under quiescent conditions. normally, the formation of such fibrous morphology is accompanied by the formation of an epitaxial layer over and around the inner fiber, giving rise to the so-called shish-kebab morphology [3]. a critical review shows that the outer, kebab-like regions are essentially folded chain regions comprised of chains which do not crystallize during the orientation process [2-5], while in the inner shish region, the formation of folded chain discs occurs due to nucleation events taking place on the surface of extended chains. in the light of the above discussion, the enhancement of the crystallization rate by shear flow is due to the enhancement of the nucleation rate. nucleation kinetics has been addressed in numerous works [5].
according to [7], the isothermal nucleation rate is expressed as:

n ∝ Δg·exp[−K/(k·t·(Δg)²)]    (1)

where n is the rate of nucleation, k is boltzmann's constant, t is the absolute temperature, Δg = g_l − g_s is the volumetric free energy difference between the liquid and crystalline phases, and K is a constant containing geometrical and energetic factors of the nucleus. it is generally accepted that the shear flow contributes to the free energy difference appearing in (1):

Δg = Δg_q + Δg_f    (2)

where Δg_q and Δg_f refer to the free energy contributions under quiescent and shear flow conditions, respectively. in order to investigate the influence of flow on crystallization, a characteristic time for the crystallization is measured. this is usually called the induction time: the time required for the steady state of nucleation to be reached. induction time and nucleation rate are nearly inversely proportional. induction times can also be measured by detecting the sharp upturn in the viscosity vs. time curve under constant shear rate [9]. the ratio of the induction times under flow and quiescent conditions can be defined as:

t̃ = t_f/t_q = [1/(1 + Δg_f/Δg_q)]·exp[(K/(k·t))·(1/(Δg_q + Δg_f)² − 1/Δg_q²)]    (3)

where the subscripts q and f refer to quiescent and flow conditions respectively. the dimensionless induction time t̃ is 1 under quiescent conditions, while it is less than 1 when shear flow is applied. for steady shear flow:

Δg_f = 3·ν_e·k·t·Γ(de)    (4)

where de is the deborah number, i.e. the product of the shear rate and the polymer reptation time, ν_e is the entanglement density and Γ is a dimensionless free energy, which is a function of de. in order to evaluate t̃ for a given polymer under isothermal flow conditions several material properties are needed. the quiescent free energy requires the knowledge of the thermodynamic melting temperature (tm) and the latent heat of fusion (Δh0).
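the dimensionless induction time of (3) can be exercised numerically. a minimal sketch with made-up, dimensionless inputs (the free energy values and the ratio K/kT are illustrative assumptions, not parameters fitted to spp), checking the two limits stated in the text — t̃ = 1 at quiescence and t̃ < 1 under flow:

```python
import math

def dimensionless_induction_time(dGq, dGf, K_over_kT):
    """eq. (3): t~ = t_f/t_q = [1/(1 + dGf/dGq)] *
    exp[(K/kT) * (1/(dGq + dGf)**2 - 1/dGq**2)].
    all quantities here are dimensionless (illustrative values only)."""
    prefactor = 1.0 / (1.0 + dGf / dGq)
    exponent = K_over_kT * (1.0 / (dGq + dGf) ** 2 - 1.0 / dGq ** 2)
    return prefactor * math.exp(exponent)

# quiescent limit: no flow contribution, so t~ must equal 1
t_quiescent = dimensionless_induction_time(dGq=1.0, dGf=0.0, K_over_kT=0.1)

# with a flow contribution dGf > 0 the ratio drops below 1,
# i.e. shear shortens the induction time and accelerates nucleation
t_flow = dimensionless_induction_time(dGq=1.0, dGf=0.5, K_over_kT=0.1)
```

a real evaluation would obtain Δg_f from (4) via ν_e and Γ(de), and Δg_q from the quiescent free energy expression involving tm and Δh0.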
Δg_q = Δh0·(1 − t/tm)    (5)

values of the quiescent crystallization constant (k) and exponent (n) are required for calculating the quiescent induction time. besides these, values of rheological parameters of the polymer melt such as the reptation time (td), the entanglement density (ν_e) and the molecular weight between entanglements (me) are also required. in the case of polymer melts, the ability of the shear flow to produce conformational and morphological changes with respect to the equilibrium, isotropic state results from the coupling between the shear flow intensity and the relaxation behavior of the chain. according to the theory of reptation [6], chain segment conformation or orientation changes occur only when the characteristic flow time (γ̇⁻¹) is smaller than the reptation or disengagement time td. in other words, chain stretching is possible only when γ̇·td > 1, i.e. de > 1. molecular structure factors like the molecular weight, the molecular weight distribution (polydispersity) and the tacticity are the important structural properties in quantitatively determining the flow induced crystallization rate [8-10]. in the case of monodisperse polymers, longer polymer chains will be more oriented than shorter ones under the same flow conditions, as high molecular weight chains have longer relaxation times. the same applies to polydisperse polymers: the presence of a long tail of high molecular weight chains should enhance the flow induced nucleation rate. fiber pulling experiments on a long series of isotactic polypropylenes of different molecular weight were conducted [10]. it was found that the overall crystallization kinetics increased exponentially upon increasing the polymer's molecular weight at constant shear rate [10, 11]. the authors of [11] conducted rheological flow induced crystallization experiments on isotactic polypropylene samples of different molecular weight and molecular weight distribution. they found an increase in the rate of crystallization with increasing molecular weight of the samples at constant shear rate.
furthermore, they found that after a combined thermo-mechanical treatment, which mainly caused degradation of the high molecular weight tail, the effect of the shear rate on the crystallization rate was strongly reduced. in [14], the authors obtained similar results by investigating the process using the differential scanning calorimetry (dsc) technique. they performed experiments on both linear and branched chain polypropylene. long chain branched polypropylene showed accelerated crystallization kinetics in comparison with polypropylene of low branching level. the crystallization of long chain branched polypropylene was found to be more sensitive to shear flow than that of linear polypropylene during the induction period at low shear rates, which indicates that the longer relaxation time of the polymer chains played an important role in the nucleation of polypropylene under shear flow fields. in a nutshell, an increase in molecular weight will produce faster crystallization under given flow conditions. the authors of [12] studied melt blended nanocomposites of pp/talc. the nanocomposites were processed using an internal mixer, and an elongational rheometer was used to generate well controlled extensional flow conditions. samples were then characterized by waxs to reveal and quantify the orientation of the fillers and of the polypropylene crystalline phase. the crystalline orientation of polypropylene was found to be strongly affected by the addition of talc under extensional flow and by talc orientation. more recently, isotactic polypropylene (ipp) based single-polymer composites (spcs) were prepared by introducing ipp fibers into a molten or supercooled homogeneous ipp matrix [13]. the influence of the fiber introduction temperature (ti) on the resultant morphology of transcrystallinity (tc) and on the mechanical properties of spcs was investigated via polarized optical microscopy (pom) and a universal tensile test machine.
the effects of interfacial crystallization on mechanical properties were also studied. the tensile strength of spcs was observed to increase at first, reach a maximum value at ti=160°c, and then decrease with further increase of ti. wide-angle x-ray diffraction (waxd), scanning electron microscopy (sem) and pom were employed to understand the mechanical enhancement mechanism. it was found that the enhanced tensile strength of spcs was strongly dependent on the synergistic effects of tc, the high orientation degree of the ipp fibers and the good adhesion between the ipp fibers and the matrix. in the present work, we studied the effect of clay loading and shear flow on the rate of crystallization of spp/clay composites using a rheological technique.

ii. experimental work

a. materials

samples of spp/clay composites with different clay contents were used for the flow induced crystallization study. the characteristic properties of the polymers are reported in table i. all the samples were synthesized in our chemistry department using a solution mixing technique [15].

table i. list of all samples [15]

sample number   sample name   clay content   degree of syndiotacticity <%rrr>
1               spp-1         10%
2               spp-2         7.5%
3               spp-3         5%
4               spp           0              60

b. methodology

the effects of shear flow and clay content on the rate of crystallization (flow induced crystallization) were investigated by a rheological technique using the ares rheometer. before starting the flow induced crystallization experiments, the stability and the range of shear stress were explored for each sample. after confirming the stability and determining the range of shear stress, the effect of shear flow on crystallization was examined at two different temperatures, above and below the melting point.
in the case of spp-1, shear flow was applied at 145°c and 125°c, while in the case of spp-3 the shear flow was applied at 120°c and 105°c. the procedure of the rheological flow induced crystallization experiments is explained below:

1. annealing of the polymer sample was carried out by a time sweep test at 220°c for 20 minutes in order to clean the sample.
2. the polymer sample was cooled from 220°c to a temperature above the melting temperature by a temperature ramp test at a constant cooling rate of 40°c/min, a frequency of 1 rad/s and a strain of 1%.
3. different shear rates ranging from 0.01 to 0.25 s-1 were applied at a specific temperature.
4. after the application of the shear rate, a temperature ramp test (crystallization test) was started within a time of 13 seconds.
5. in another set of experiments, the shear rate was applied at a temperature below the melting point. different shear rates within the range 0.01 to 0.25 s-1 were applied, at 125°c for spp-1 and at 105°c for spp-3, within the induction time for different periods of time (shear flow times).

both sets of experiments were carried out for all samples.

iii. results and discussion

annealing of the samples by a time sweep test at 220°c and 1 rad/s was carried out for 20 minutes in order to investigate the stability and clean the samples completely from spherulites and nuclei. the time sweep test for spp-1 is shown in figure 1. the thermal stability of all samples was examined at 220°c and at a frequency of 1 rad/s. all the samples were found to be stable. after confirming the thermal stability, the stability of the samples was examined for different shear rates. all the samples were found stable in the range of 0.01 to 0.25 s-1 of shear rates, as shown in figure 2 for spp-1. the crystallization behavior under quiescent conditions and at different shear rates was explored using the temperature ramp test from 220 to 125°c for spp-1. different shear rates were applied at 125°c after cooling from 220°c.
in the case of the quiescent condition, no shear rate was applied at 125°c. in both the quiescent and the flow induced crystallization cases, the crystallization behavior was observed by cooling from 220°c to 125°c. changes in the moduli dictate the process of crystallization: the jump in the elastic modulus after the incubation and induction time is considered as the actual crystallization process. an enhancement in the rate of crystallization was observed upon the application of different shear rates; in other words, the induction time is not the same in all cases. these findings are exhibited graphically in figure 3.

fig. 1. time sweep test for spp-1 at 220°c.

fig. 2. plot between time and shear stress for spp-1.

in order to investigate the effect of shear flow on the crystallization process, further experiments were conducted on spp-1 and spp-3. in these experiments, different shear flows were applied within the induction time at temperatures below and above the melting point after cooling from 220°c for all samples. each shear flow was applied for different periods of time ranging from 50 to 800 seconds depending upon the induction time of the sample. a significant effect of the shear flows on the crystallization kinetics was found. the characteristic deborah number was calculated from the relaxation time and the shear flows. the relaxation time was calculated by different methods. in all cases the deborah number was found to be greater than one (de>1), which verifies our experimental finding that the applied shear flow is able to orient the polymer chains.
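the induction-time readout described above — the sharp jump of the elastic modulus g′ after an initial plateau — can be automated. a minimal sketch on synthetic data; the plateau-plus-10% threshold criterion is an assumption for illustration, not the authors' stated criterion:

```python
import numpy as np

def induction_time(t, g_prime, rel_rise=0.10):
    """return the first time at which g' exceeds its initial plateau
    (median of the first 10% of points) by rel_rise, or None if it never does."""
    n_base = max(1, len(g_prime) // 10)
    baseline = np.median(g_prime[:n_base])
    above = np.nonzero(g_prime > baseline * (1.0 + rel_rise))[0]
    return float(t[above[0]]) if above.size else None

# synthetic g'(t): flat plateau of 1000 pa, then a sharp upturn at t = 300 s
t = np.linspace(0.0, 600.0, 601)
g = 1000.0 + np.where(t > 300.0, 50.0 * (t - 300.0), 0.0)
t_ind = induction_time(t, g)
```

on noisy experimental curves the threshold and baseline window would need tuning, but the idea — locate the upturn relative to the plateau — is the same one used to compare induction times under quiescent and sheared conditions.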
shear flow can be increased to the extent of making the deborah number greater. in the present experimental work this attempt was made, but the sample was found to come out from the rheometer plates, which confirmed that in the present experimental setup higher shear flows cannot be applied. the deborah number was found to increase with increase in the shear rate, and in all cases it was found greater than one. the same trend was found in the case of the shear flow times: the deborah number was found to increase with increase in shear flow time. the relationship between shear flow times and deborah number is shown in figure 5. the induction time was found to decrease with increasing clay contents, i.e. an increase in clay contents reduces the time of crystal formation. the relationship between clay contents and induction time is shown in figure 6.

fig. 3. crystallization under quiescent conditions and at different shear rates for spp-1.

fig. 4. crystallization of spp-2 at a shear rate of 0.07 s-1 for different periods of time at 80°c.

iv. conclusion

crystallization measurements performed on spp/clay composites at different shear rates show that shear flow in the range of 0.01 to 0.25 s-1 influences the crystallization behavior. the deborah number calculated for all the samples at different shear rates was found greater than one, which verified our findings of the effect of shear flow on the crystallization behavior of spp/clay composites in the given range of shear rates. an effect of clay contents on the crystallization kinetics was also found: increase in clay contents increases the crystallization kinetics. the rate of crystallization was found to increase with increase in clay contents, which is why the induction time decreases with increase in clay contents.

fig. 5. relationship between shear flow time and deborah number.

fig. 6. relationship between clay contents and induction time.

acknowledgement

the authors wish to acknowledge the support of this research study by grant no. 6857-eng-2016-1-6-f from the deanship of scientific research in northern border university, arar, kingdom of saudi arabia.

references

[1] h. janeschitz-kriegl, e. ratajski, m. stadlbauer, "flow as an effective promotor of nucleation in polymer melts: a quantitative evaluation", rheologica acta, vol. 42, no. 4, pp. 355–364, 2003
[2] r. h. somani, l. yang, b. s. hsiao, t. sun, n. v. pogodina, a. lustiger, "shear-induced molecular orientation and crystallization in isotactic polypropylene: effects of the deformation rate and strain", macromolecules, vol. 38, no. 4, pp. 1244–1255, 2005
[3] r. h. somani, l. yang, l. zhu, b. s. hsiao, "flow-induced shish-kebab precursor structures in entangled polymer melts", polymer, vol. 46, no. 20, pp. 8587–8623, 2005
[4] e. w. fischer, m. stamm, m. dettenmair, "organization of the macromolecules in the condensed phase", faraday discussions of the royal society of chemistry, vol. 68, 1980
[5] j. stejny, j. dlugosz, a. keller, "electron microscope diffraction characterization of the fibrous structure of poly(sulphur nitride) crystals", journal of materials science, vol. 14, no. 6, pp. 1291–1300, 1979
[6] s. t. milner, t. c. b. mcleish, "reptation and contour-length fluctuations in melts of linear polymers", physical review letters, vol. 81, no. 3, pp. 725–728, 1998
[7] j. i. lauritzen jr, j. d. hoffman, "formation of polymer crystals with folded chains from dilute solution", the journal of chemical physics, vol. 31, no. 6, pp. 1680–1681, 1959
[8] s. acierno, n. grizzuti, h. h. winter, "effects of molecular weight on the isothermal crystallization of poly(1-butene)", macromolecules, vol. 35, no. 13, pp. 5043–5048, 2002
[9] s. acierno, n. grizzuti, "flow-induced crystallization of polymer: theory and experiments", international journal of material forming, vol. 1, pp. 583–586, 2008
[10] c. duplay, b. monasse, j. m. haudin, j. l. costa, "shear-induced crystallization of polypropylene: influence of molecular weight", journal of materials science, vol. 35, no. 24, pp. 6093–6103, 2000
[11] s. vleeshouwers, h. e. h. meijer, "a rheological study of shear induced crystallization", rheologica acta, vol. 35, no. 5, pp. 391–399, 1996
[12] m. khalil, p. hebraud, a. mcheik, h. mortada, h. lakiss, t.
hamieh, "elongational flow-induced crystallization in polypropylene/talc nanocomposites", physics procedia, vol. 55, pp. 259–264, 2014
[13] l. zhang, y. qin, g. zheng, k. dai, c. liu, x. yan, j. guo, c. shen, z. guo, "interfacial crystallization and mechanical property of isotactic polypropylene based single-polymer composites", polymer (uk), vol. 90, pp. 18–25, 2016
[14] f. hernandez sanchez, l. f. del castillo, r. vera-graziano, "isothermal crystallization kinetics of polypropylene by differential scanning calorimetry, i. experimental conditions", journal of applied polymer science, vol. 92, pp. 970-978, 2004

engineering, technology & applied science research vol. 9, no. 4, 2019, 4474-4479 4474 www.etasr.com mahessar et al.: flash flood climatology in the lower region of southern sindh

flash flood climatology in the lower region of southern sindh

ali asghar mahessar, sindh barrages improvement project, sindh irrigation department, pakistan, amahessar@yahoo.com
abdul latif qureshi, uspcas-w, mehran university of engineering and technology, pakistan, alqureshi.uspcasw@faulty.muet.edu.pk
insaf ali siming, department of english, quest, nawabshah, pakistan, insaf.siming@quest.edu.pk
shafi muhammad kori, uspcas-w, mehran university of engineering and technology, jamshoro, pakistan, alqureshi.uspcasw@faulty.muet.edu
ghulam hussain dars, uspcas-w, mehran university of engineering and technology, jamshoro, pakistan, ghdars.uspcasw@faculty.muet.edu.pk
madeheea channa, uspcas-w, mehran university of engineering and technology, jamshoro, pakistan, madhachana.uspcasw@admin.muet.edu.pk
abdul nasir laghari, department of energy and environment engineering, quest, nawabshah, pakistan, a.n.laghari@quest.edu.pk

abstract-climate change impact is felt at a global scale. one of its results is the abnormal rain occurrence during the monsoon season.
in recent years, visible changes due to unusual weather events were observed in pakistan's hydrological cycle, in the form of an intensification of the hydrological cycle with changing precipitation events such as floods and prolonged droughts. hence, abnormal rainfall occurred in regions of the southern and northern parts of sindh, in the form of torrential river floods (2010), flash floods (2011-2012), unpredictable rainstorms, etc., causing loss of lives, damaging infrastructure, crops and structures, and displacing inhabitants. in 2011, heavy cumulative precipitation was recorded in the southern sindh districts and the coastal belt of badin, and the lbod and kotri surface drainage systems reached their extreme levels. another example of erratic rain occurred from september 8 to september 13, 2011, and produced an extraordinary discharge of about 14000 cusecs against the designed discharge of 4600 cusecs in the lbod and kotri surface drainage systems, overtopping drains at several locations and wreaking havoc in the whole southern part of sindh.

keywords-climatology; rainfall; flood; lbod watershed area; drain carrying capacity; damages

i. introduction

climate variables such as rain, wind and temperature can sometimes reach abnormal values and create natural disasters. these are termed extreme events. estimating changes in global precipitation is a more complex and challenging task than estimating changes in temperature. extreme precipitation events often occur in populated areas, causing disasters and affecting the population. changes in extreme events such as torrential rainfall, cyclones, irregular temperatures, heat waves, prolonged droughts and floods are mainly caused by climate change [1, 2], and pakistan is one of the countries most vulnerable to climate change [3]. global climate change experts have identified pakistan as part of a zone that faces extreme changes in weather.
climate change and environmental crises in vulnerable areas of pakistan indicate that 40% of the population is highly vulnerable to natural disasters [4]. developing countries, including pakistan, will be affected by serious droughts, floods, increasing temperatures, and life-threatening events [5]. climate change risk is understood as the uncertainty related to the influence of climate change on specific areas of concern [6]. scientifically estimating the possible impacts of future climate change is the precondition for conducting adaptation activities; at an early stage, growth scenarios are regularly used to project future climate impacts [7]. temperature will rise by 0.9 to 3.5°c by 2100, causing fluctuations in the intensity, frequency and timing of rainfall, frequent occurrence of hot days/nights and changed effects of biotic factors [8]. high temperatures, reduced rainfall occurrence, and an increase in the frequency of extreme climatic events are expected in the future climate of the tropics [5, 9]. the change in the weather disturbs life and cropping patterns. this expected deviation in climate has effects on the food chain and other segments through spatial and temporal scales [5, 10]. the frequency and severity of floods in some areas during the monsoon have serious impacts [12]. cyclone formation and irregularities in the rainfall pattern are related consequences of global warming. global warming is closely linked to the seasonal atmospheric flow during the monsoon season, with a varying degree of uncertainty. irregular flooding is a serious repercussion of the monsoon season that damages infrastructure and human life and causes serious financial losses [11, 13].

corresponding author: abdul nasir laghari

the worst flood
disaster in the 80-year history of pakistan occurred in july 2010, following heavy monsoon rains [14]. the summer rainfall concentrates from july to september and is generally produced by a monsoon current formed over the bay of bengal, reaching pakistan across india. another mechanism of summer monsoon rainfall is the flow of moisture from the arabian sea in the southwest, which is activated in the case of a persisting depression. both phenomena strengthen the precipitation process and produce high intensity rainfall in a short time [15, 16]. coastal areas may suffer from increased tropical storm frequency and strength in the near future, while over 50,000 people may be displaced from pakistan's coastal deltas [17]. the southern part of sindh has been affected by strong rains, cyclones and high tides, causing damages to life, property and crops and causing waterlogging and soil salinity. the kotri surface drainage and left bank outfall drain (lbod) systems [3] were designed and constructed primarily to provide effective drainage facilities addressing the issues of waterlogging, salinity and generated rain runoff. this system fulfilled its purpose and provided timely release of rainwater from its watershed area, but a severe submergence of lands of badin district occurred during the erratic rainstorms of 1994 and 2003, which ranged from 200 to 304mm respectively. the generated rain runoff exceeded the carrying capacity of the lbod and kotri surface drainage systems, causing long term massive area submergence, damaging infrastructure and crops, and increasing the waterlogging problem. moreover, the components of the lbod system were overtopped during the heavy rain of 2006. from the historical rainfall data of 39 years of meteorological stations, table i shows the maximum rainfall in 24 hours in the watershed area of the lbod and kotri surface drainage systems.
when the rainfall intensity is low there is no problem of safe disposal, but with high intensities that exceed the designed carrying capacity of the system, extensive areas are submerged. the overall discharge passing through the outfall drains depends upon the weighted average rainfall for all the meteorological stations representing the entire catchment area. after the construction of the kotri and lbod surface drainage systems, rainfall intensities of more than 100mm within 24h at one or more stations were recorded in 1994, 1999, 2003 and 2006 and caused severe damages [18]. the rain runoff during the monsoon in 2011 generated a cumulative discharge of about 14000 cusecs while the designed carrying capacity of the lbod system was only 4600 cusecs, resulting in damages to the drainage system at several places and causing flooding of huge areas.

ii. research study area

the present study covered the watershed area of the kotri surface drainage and lbod system, which was designed to control the problems of waterlogging and salinity and the rain runoff generated from benazirabad, sanghar and mirpurkhas districts and the badin area on the left bank of the indus river, with the objective of improving the cultivable area of the rohri and nara canals. the drainage system lies between 24°10' and 26°40' n and 68°09' and 69°26' e [19]. waterlogged and salinized lands were to be reclaimed by the construction of the drainage system [20]; however, the performance of the system was found to be unsatisfactory during the monsoon rains of 2003 and 2006, causing the submergence of vast parts of badin. moreover, the situation was exacerbated during the highest ever rainfall intensity, recorded from 29 august 2011 to 13 september 2011, as shown in figure 1. table i.
Maximum Rainfall in Sindh Districts (mm)

Year   Hyderabad   Badin    Shaheed Benazirabad   Chhor
1968     7.40        4.60        3.60              17.50
1969     3.00        0.00       13.20               4.10
1970    67.60      117.60       59.70              61.00
1971    18.00       42.20        8.60              51.30
1972    12.70       29.70      256.50              21.80
1973    21.80       35.60       17.30              32.80
1974    12.70       13.60        0.50               5.00
1975    29.10       61.00       52.00              40.00
1976    57.60       64.70       32.80              58.40
1977    47.40       60.80       22.00              88.60
1978   106.60       43.50       73.00              87.20
1979    41.60      241.00       64.00              29.00
1980    41.00       27.00       40.90              11.00
1981    17.70      138.00       51.20              42.20
1982    26.30       75.60       42.60              35.30
1983   101.70       67.40       48.20             119.90
1984   110.30      111.00       48.00              83.60
1985    30.70       55.70       35.90              97.60
1986    69.90       61.10       99.00              48.30
1987    14.20        0.00        0.00              23.70
1988    60.40      124.70       13.30              88.60
1989    79.20       95.40       27.50              86.60
1990    58.00      159.30       87.50             214.60
1991     6.50       36.00       26.00              20.00
1992   104.30       80.00       97.00             117.00
1993    27.00      119.00       29.20             123.40
1994    76.70      176.50      143.00              81.30
1995    31.20       88.00       72.20              60.00
1996     6.30       12.00        1.20              69.50
1997    12.00       31.00       30.00              36.30
1998    19.40       45.20       22.00              52.30
1999    36.10      113.50        8.00             107.20
2000    25.80       46.80       22.00              32.90
2001    40.50       26.10       16.80              70.40
2002     4.00       26.00        2.00               2.30
2003    71.00      150.40       61.00             137.20
2004    85.60       73.00       11.80              57.80
2005    15.40       23.00       26.30              32.10
2006    46.00       60.00       46.00             141.20

Fig. 1. Intensity of rainfall, 2011

III. Methodology

A walk-through survey was conducted during the 2011 flood, which damaged standing crops, communication infrastructure, irrigation and drainage systems, villages, towns and cities in the lower part of southern Sindh, and also changed the lake ecosystem. Two to three feet of standing rainwater was observed on major roads and in lowlands in various areas of Sindh.
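The screening mentioned in the text — years in which one or more stations recorded more than 100 mm of rainfall within 24 hours — can be checked directly against Table I. A minimal sketch over an excerpt of the table (values copied from Table I; the function name is ours, not from the paper):

```python
# Screen Table I (excerpt) for years in which any station recorded
# more than 100 mm of rainfall within 24 hours.
table_excerpt = {
    1994: {"Hyderabad": 76.70, "Badin": 176.50, "Shaheed Benazirabad": 143.00, "Chhor": 81.30},
    1999: {"Hyderabad": 36.10, "Badin": 113.50, "Shaheed Benazirabad": 8.00, "Chhor": 107.20},
    2003: {"Hyderabad": 71.00, "Badin": 150.40, "Shaheed Benazirabad": 61.00, "Chhor": 137.20},
    2005: {"Hyderabad": 15.40, "Badin": 23.00, "Shaheed Benazirabad": 26.30, "Chhor": 32.10},
    2006: {"Hyderabad": 46.00, "Badin": 60.00, "Shaheed Benazirabad": 46.00, "Chhor": 141.20},
}

def years_exceeding(table, threshold_mm=100.0):
    """Return years in which at least one station exceeded the threshold."""
    return sorted(y for y, stations in table.items() if max(stations.values()) > threshold_mm)

print(years_exceeding(table_excerpt))  # -> [1994, 1999, 2003, 2006]
```

For this excerpt the result matches the post-construction damage years listed in the text, while 2005 falls below the threshold.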
Rainfall data for the years 2011 and 2012, recorded at various rain gauge stations by the irrigation and meteorological departments, were collected for computing rainfall statistics. The collected rainfall data were analyzed in order to prepare a present and future strategy and to customize flood management plans for controlling flood risk.

IV. Results and Discussion

Most flash floods were caused by heavy rainfall. An understanding of flash flood climatology helps to develop tools to identify the risk of flooding and the likely timescale involved [21]. Even though abnormal rains and cyclones are nowadays common phenomena, they remain major causes of damage. Heavy rains hit the whole of Pakistan in 2010, the southern part of Sindh in 2011 and the northern part of Sindh in 2012, but the most severe rainfall occurred in 1992, 1994, 2003 and 2006 in the same region of southern Sindh. Figure 2 shows the rainfall occurring in Badin, Benazirabad and Tharparkar (Chhor) during the events of July and August 1994.

Fig. 2. Rainfall in LBOD watershed area during the monsoon of 1994

The August rains generated heavy runoff because the watershed area was already saturated and filled with water during July. The rain flood damaged crops, irrigation and drainage systems and road infrastructure, and cost many human lives. Figure 3 shows the rainfall in Badin, Chhor and Benazirabad for the same period of 2003. The rainfall varied from 20 to 155 mm, with the highest intensity in Badin. It rained with severe intensity in the lower region of southern Sindh during 2003 and with low intensity in the upper region of northern Sindh. The first event of heavy rain occurred on 25-29 July and the second, in the lower region, on 26 August 2003. The generated rain runoff was higher than the designed carrying capacity of the lower drainage system, so rainwater stagnated for months.
This stagnant rainwater damaged agricultural lands and crops, communication infrastructure, and irrigation and drainage networks. Figure 4 shows the rainfall variation during the monsoon of 2006. Unusual rains occurred in the Badin, Benazirabad and Chhor areas from 25 to 30 July. The rain flood intensity was higher than the carrying capacity of the lower drainage system, which flooded the entire lower region and coastal belt. Crops and communication systems were damaged in the affected area. The phenomenon of abnormal rainfall is often observed in the coastal area of southern Sindh, so the locals have become frequent victims of rain-related disasters. Figure 5 reveals that the heavy rainfall of 2010 occurred in the catchment area of the Indus River from the glaciers to the Arabian Sea, causing many breaches in the protective banks of the Indus River and damage to communication infrastructure, buildings and crops, with losses of human life in the provinces of KPK, Punjab and Sindh, but not Baluchistan.

Fig. 3. Rainfall in LBOD watershed area during the monsoon of 2003
Fig. 4. Rainfall in LBOD watershed area during the monsoon of 2006
Fig. 5. Rainfall in LBOD watershed area during the monsoon of 2010

However, rains in the lower region on the left side of the Indus in Sindh province occurred with low intensity, with 100 mm in Badin and 70 mm in Benazirabad (Figure 5). Hence, no loss of life or property occurred in 2010 in the lower region of Sindh. Figure 6 shows that heavy rainfall occurred in Mithi, Badin, Benazirabad and Chhor in August-September 2011 in the watershed area of the left bank command area of the Sukkur and Kotri barrages.

Fig. 6.
Monsoon rainfall in LBOD catchments, August-September 2011

The recorded maximum rainfall was 300 mm, occurred within 24 to 48 hours, and was 8 to 10 times higher than the designed capacity of the LBOD and Kotri surface drainage system. The main events occurred on August 11 and 29 and continued with intervals until September 13, when the highest rainfall in the history of Sindh was recorded, showing the unpredictable rainfall pattern of this area due to climate change. Figure 7 exhibits the relation between normal and abnormal (2011) rainfall in Pakistan, KPK, Punjab, Balochistan and Sindh. The highest rainfall deviation among the provinces of Pakistan was found in Sindh. The rain floods of 2011 were unique in intensity, spread and simultaneous recurrence in the lower region of Sindh province.

Fig. 7. Percentage of deviation of the 2011 monsoon (July-August)

Figure 8 shows the recorded erratic rainfall in Badin, Mirpurkhas, Mithi, Umerkot, Benazirabad and Hyderabad (600 mm, 820 mm, 1100 mm, 500 mm and 300 mm respectively).

Fig. 8. Highest rainfall recorded in the districts of southern Sindh in August-September 2011

The intensity of the rainfall was so high that it damaged not only almost all the catchment area of the LBOD system, but also other parts of Sindh. The rainfall in the lower region during the July-August 2011 interval is shown in Figure 9. The rainwater flooded lower region districts like Badin, Mithi, Nawabshah (Benazirabad) and Mirpurkhas with the highest intensity, while upper region districts such as Jacobabad, Larkana, Sukkur, Rohri, Dadu and Moin-jo-Daro received low-intensity or even less rain. As a result, the rainstorm in the lower region affected all structural and non-structural activities.

Fig. 9.
Rainfall in Sindh districts during July-August 2011

In the northern part of Sindh there are only limited surface drainage facilities for agricultural runoff and waterlogged areas, and the existing surface drainage has no carrying capacity to drain out abnormal rains. The heavy rain flood of 2011 damaged crops, villages, towns, cities and road infrastructure. Climatological flash flood events were recorded in 1992, 1994, 2003, 2006, 2010 and 2011 in southern and northern Sindh, but events of high intensity were more frequent in the coastal part of southern Sindh.

V. Flood Impact in the Lower Sindh Region

Heavy rainfalls caused disasters in the lower region of Sindh, and huge amounts of rainwater accumulated on the lands and in the villages and towns situated in the southern part of Sindh. The local people made relief cuts in irrigation channels to save mature crops and households as a temporary, short-term strategy. Several breaches occurred in the banks of main and branch canals. Major breaches occurred in the Fuleli canal, Akram Wah and Nara canal system, and several breaches also occurred in the main and branch drains of the LBOD system. Almost all the drainage system was damaged and overtopped due to the overwhelming rainfall intensity. The designed capacity of the spinal drain was 4,600 cusecs while the cumulative discharge was about 14,000 cusecs. The official land utilization statistics of Sindh show that the whole cotton area, 50% of the rice and fodder crops, and a smaller percentage of the sugarcane area were damaged. The statistics were worked out through the use of satellite remote sensing technology. The affected cotton area is 45.9 thousand ha and the production loss was estimated at around 0.34 million bales.
The damaged rice area is estimated at 32.4 thousand ha and the rice production loss at 99.9 thousand tons. The sugarcane crop was affected only to a minor degree. The damages were estimated by SUPARCO through satellite remote sensing. The cultivated area in Sindh is 4.89 million ha. The net sown area over a year is 2.81 million ha and the fallow area over a year is 2.08 million ha, which is almost a ratio of 60% cropped area to 40% fallow area. However, in view of our assessments it is assumed that the cropped area is 70% and the fallow area 30% [22].

VI. Conclusions

The monsoon rains of 2011 were the most intense event in the history of Pakistan. The heavy rainfall flash flooded the whole area of lower Sindh. Relief cuts were made in the irrigation and drainage network to save crops and households, but this worsened the situation: the water entered the spinal system, which was forced to deal with a discharge of about 14,000 cusecs against the designed 4,600 cusecs. This resulted in overtopping of the Mirpurkhas main drain, the spinal drain and the Dhoro Puran outfall drain at various places, particularly where the constructed drains intersect the Dhoro Puran natural waterway. The major reason for this disaster was the small carrying capacity of the LBOD system, but it must also be pointed out that the drainage system was choked and the natural waterways were blocked by altered agricultural lands and by the construction of roads with an inadequate size and number of culverts and aqueducts over the drains, consequently overtopping the drainage and irrigation networks. The Sindh Irrigation and Drainage Authority (SIDA) and the Area Water Boards (AWBs) have as goals to introduce reforms in the water sector and to restore equitable and reliable water delivery by the irrigation system.
Their functions are the operation, maintenance and rehabilitation of canals, distributaries and minor networks; the maintenance, rehabilitation and monitoring of the main drainage system; the construction, operation and maintenance of the outfall drains; receiving effluent drainage water from the AWBs and conveying it to the sea; maintaining the flood protection infrastructure along the Indus River; and, overall, acting as the prime agent of change, advising AWBs and Farmer Organizations (FOs). It is concluded that, to enhance the capacity of the LBOD and Kotri surface drainage system for the safe disposal of extraordinary rainfall from its watershed area, the best use should be made of the existing facilities and natural waterways, based upon realistic hydro-meteorological analyses for draining out rainwater. Facilities, monitoring tools, weather radars and digital rain gauge stations must be provided in order to forecast heavy disasters more reliably and to carry out risk management and preparedness plans in southern Sindh more successfully.

Acknowledgments

The authors are thankful to the Sindh Irrigation and Drainage Authority and the Left Canals Area Water Board for providing the data and facilities for this research.

References

[1] M. A. Melieres, C. Marechal, Climate Change: Past, Present, and Future, John Wiley & Sons, 2015
[2] A. A. Mahessar, A. L. Qureshi, G. H. Dars, M. A. Solangi, "Climate change impacts on vulnerable Guddu and Sukkur barrages in Indus River, Sindh", Sindh University Research Journal (Science Series), Vol. 49, No. 1, pp. 137-142, 2017
[3] Q. U. Z. Chaudhry, Climate Change Profile of Pakistan, Asian Development Bank, 2017
[4] Oxfam, Climate Change in Pakistan: Stakeholder Mapping and Power Analysis, Oxfam, 2009
[5] J. J. McCarthy, O. F. Canziani, N. A. Leary, D. J. Dokken, K. S. White, Climate Change 2001: Impacts, Adaptation and Vulnerability, Cambridge University Press, 2001
[6] M. V.
Aalst, Managing Climate Risk: Integrating Adaptation into World Bank Group Operations, World Bank Group: Global Environment Facility Program, 2006
[7] Y. Xu, X. Huang, Y. Zhang, W. Lin, E. Lin, "Statistical analyses of climate change scenarios over China in the 21st century", Advances in Climate Change Research, Vol. 2, No. 1, pp. 50-53, 2006
[8] J. S. Dukes, J. Pontius, D. Orwig, J. R. Garnas, V. L. Rodgers, N. Brazee, B. Cooke, K. A. Theoharides, E. E. Stange, R. Harrington, J. Ehrenfeld, J. Gurevitch, M. Lerdau, K. Stinson, R. Wick, M. Ayres, "Responses of insect pests, pathogens, and invasive plant species to climate change in the forests of northeastern North America: what can we predict?", Canadian Journal of Forest Research, Vol. 39, No. 2, pp. 231-248, 2009
[9] T. Mitchell, T. Tanner, Adapting to Climate Change: Challenges and Opportunities for the Development Community, Institute of Development Studies, 2006
[10] A. Dinar, R. Hassan, R. Mendelsohn, J. K. A. Benhin, Climate Change and Agriculture in Africa: Impact Assessment and Adaptation Strategies, Routledge, 2012
[11] J. A. Dixon, D. P. Gibbon, A. Gulliver, Farming Systems and Poverty: Improving Farmers' Livelihoods in a Changing World, Food & Agriculture Organization of the United Nations, 2001
[12] J. Briscoe, U. Qamar, Pakistan's Water Economy: Running Dry, Oxford University Press, 2006
[13] R. Agnihotri, K. Dutta, W. Soon, "Temporal derivative of total solar irradiance and anomalous Indian summer monsoon: an empirical evidence for a sun–climate connection", Journal of Atmospheric and Solar-Terrestrial Physics, Vol. 73, No. 13, pp. 1980-1987, 2011
[14] Y. Y. Loo, L. Billa, A. Singh, "Effect of climate change on seasonal monsoon in Asia and its impact on the variability of monsoon rainfall in Southeast Asia", Geoscience Frontiers, Vol. 6, No. 6, pp. 817-823, 2015
[15] A. A. Mahessar, A. L. Qureshi, A.
Baloch, "Flood forecasting for the super flood 2010 in Sukkur-Kotri reach of Indus River", International Water Technology Journal, Vol. 3, No. 4, pp. 255-262, 2015
[16] S. Das, S. V. Singh, E. N. Rajagopal, R. Gall, "Mesoscale modeling for mountain weather forecasting over the Himalayas", Bulletin of the American Meteorological Society, Vol. 84, No. 9, pp. 1237-1244, 2003
[17] G. Rasul, Z. Sixiong, Z. Qingcun, "A diagnostic study of record heavy rain in twin cities Islamabad-Rawalpindi", Advances in Atmospheric Sciences, Vol. 21, No. 6, pp. 976-988, 2004
[18] M. Murshed, "Does improvement in trade openness facilitate renewable energy transition? Evidence from selected South Asian economies", South Asia Economic Journal, Vol. 19, No. 2, pp. 151-170, 2018
[19] A. A. Mahessar, K. C. Mukwana, A. Qureshi, M. Ehsan, H. Leghari, A. L. Manganhar, "Assessment of water quality of LBOD system and environmental concerns", Quaid-e-Awam University Research Journal of Engineering, Science & Technology, Vol. 15, No. 1, pp. 32-39, 2016
[20] A. A. Mahessar, A. L. Qureshi, A. N. Laghari, S. Qureshi, S. F. Shah, F. A. Shaikh, "Impact of Hairdin, Miro Khan and Shahdad Kot drainage on Hamal Dhand, Sindh", Engineering, Technology & Applied Science Research, Vol. 8, No. 6, pp. 3652-3656, 2018
[21] C. A. Gandarillas, M. Saleh Soomro, A. Nazir, A. A. Dasti Baloch, A. Turangzai, A. Javed, B. A. Shahid, D. Veselinovic, F. Samo, G. R. Keerio, J. E. Priest, K. Ansari, K. H. Soofi, M. Silver, M. S. Samo, N. Ibrahim, R. Tabassum, R. Renfro, R. Wilkins, S. U. Zafar, S. Ali Soomro, S. M. Akhtar, S. A. H. Jagirdar, T. K. Baloch, U. G. Dar, Z. Saeed, Z. Habib, Z.
Mangrio, Regional Master Plan for the Left Bank of Indus, Delta and Coastal, Sindh Irrigation and Drainage Authority, 2013
[22] SUPARCO, Pakistan Rain/Flood 2011: Report on Flash Floods, Breaches in Canals and Damage to Infrastructure & Agriculture Sectors in Sindh Province, SUPARCO, 2011

Engineering, Technology & Applied Science Research Vol. 8, No. 4, 2018, 3177-3183 www.etasr.com Chavan & Kulkarni: Event Based Clustering Localized Energy Efficient Ant Colony Optimization…

Event Based Clustering Localized Energy Efficient Ant Colony Optimization for Performance Enhancement of Wireless Sensor Network

Shankar D. Chavan, Department of E&TC, Sinhgad College of Engineering, Pune, Maharashtra, India, sdchavan27@rediffmail.com
Dr. Anju V. Kulkarni, D. Y. Patil Institute of Technology, Pune, Maharashtra, India, anju_k64@yahoo.co.in

Abstract—The main challenge of a wireless sensor network (WSN) in disaster situations is to discover efficient routing, to improve quality of service (QoS) and to reduce energy consumption. Location awareness of nodes is also useful or even necessary: without knowing the position of the sensor nodes, the collected data is insignificant. Ant colony optimization (ACO) is a unique form of optimization method, which is highly suitable for adaptive routing and guaranteed packet delivery. The primary drawbacks of ACO are data flooding, huge overhead of control messages and long convergence time. These drawbacks are overcome by considering the location information of the sensor nodes. An event-based clustering localized energy efficient ant colony optimization (EBC_LEE-ACO) algorithm is proposed to enhance the performance of the WSN. The main focus of the proposed algorithm is to improve QoS and minimize the network energy consumption by cluster formation and by selecting the optimal path based on the biologically inspired routing (ACO) and the location information of nodes. In clustering, data is aggregated and sent to the sink (base station) through cluster heads (CH), which reduces overheads. EBC_LEE-ACO is a scalable and energy efficient reactive routing algorithm which improves the QoS and lifetime and minimizes the energy consumption of the WSN as compared to other routing algorithms like AODV, ACO and ACO using RSSI. The proposed algorithm reduces energy consumption by approximately 7%, in addition to improvement in throughput and packet delivery ratio and a decrease in packet drop, as observed in comparison with other algorithms, i.e. the autonomous localization based eligible energetic path with ant colony optimization (ALEEP with ACO). Use of the IEEE 802.11 standard in the proposed work increased packet drop.

Keywords—ACO; ALEEP; AODV; CH; clustering; EBC; LEE; QoS; RSSI; WSN

I. Introduction

A WSN consists of a number of sensor nodes. These nodes are small in size and have low power capacity. These functional nodes are deployed in a particular area for data gathering purposes. The sensor nodes (SN) pass the data to the sink node using multiple hops, as shown in Figure 1. The requirement in disaster situations is that the sensor nodes perform the communication without failure for a long duration. A traditional routing algorithm does not consider the location information and thus cannot be used in disaster areas where stable communication is important. Selection of the routing algorithm is one of the major problems of sensor networks to be solved [1-3]. The main goals of WSN routing are to improve QoS and network lifetime and to reduce connectivity failure.

Fig. 1. WSN architecture

Therefore, an event based clustering localized energy efficient ant colony optimization (EBC_LEE-ACO) algorithm is proposed. The main focus of the proposed algorithm is to enhance QoS, to minimize network energy consumption by cluster formation and to select the optimal path based on the biologically inspired routing (ACO) and the location information of nodes.

II. Related Work

A. Classical Routing Protocols

Classical routing protocols are proactive, reactive and hybrid [4, 5]. The routing protocol has an impact on the energy consumption behavior of the nodes in a network. Since mobile nodes in the WSN are supplied with a limited energy battery, energy efficiency is a major problem that influences the overall network performance. Taking energy into consideration, reactive protocols are more efficient than proactive protocols due to less control overhead. Both proactive and reactive protocols are unaware of energy metrics and hence cause lowering of the battery energy of the nodes over the most heavily used routes in the network.

B. Localization Based Routing Protocols

Location information is needed by most routing protocols for sensor networks to calculate the distance between two particular nodes, so that energy consumption can be calculated. According to the dependency on range measurements, the existing localization schemes can be categorized into two major categories: range-based approaches and range-free approaches. Range-based and range-free schemes are further divided into anchor-based and anchor-free schemes. The anchor-free schemes do not assume that the node positions are known at first.
On the other hand, the anchor-based schemes need some nodes that are aware of their positions (anchor nodes) to provide geographic information that lets unidentified nodes localize themselves [6].

C. Location Based Bio-Inspired Routing Algorithms

Various location-based routing algorithms have been proposed; nevertheless, they share a relative shortcoming: they either do not guarantee finding a way to the destination, or they locate a path which is much longer than the shortest one. The position based ant colony (POSANT) routing algorithm is an ant colony based routing algorithm that uses data about the position of nodes to improve the efficiency of the ant algorithm. Contrary to other position based algorithms, this algorithm does not fail when the network contains nodes with different transmission ranges. POSANT is a multipath routing algorithm that uses GPS to obtain position information, which adds to the cost of the nodes and makes it unsuitable for indoor networks [7]. The location based ant colony optimization (LOBANT) algorithm considers distance in its routing metrics through the received signal strength indicator (RSSI), but energy aware metrics are not taken into account [8]. The autonomous localization based eligible energetic path with ant colony optimization (ALEEP with ACO) algorithm was developed in [9] by combining the advantages of the best existing protocols; the authors used the location of the nodes, adaptive transmission power (ATP) and energy aware metrics to increase the efficiency of routing. After studying the related work, we propose an event based clustering localized energy efficient ant colony optimization (EBC_LEE-ACO) routing algorithm that combines the advantages of ACO, RSSI and clustering.

III. Proposed Scheme

A. Problem Definition

The main challenge of a WSN is to discover efficient routing, as the sensor nodes are not static and change their position randomly. Limited battery life is another issue.
A disaster situation is one more challenge for a WSN: the communications in the network may fail, leading to excessive packet drop, and can hang the network. To solve these problems, the EBC_LEE-ACO algorithm is proposed, which is ACO based on geographical location with a clustering approach.

B. Objectives

• To find and reconstruct the optimal path for routing in disaster situations smoothly and quickly.
• To reconstruct communication links in case of link failure.
• To reduce network energy consumption by selecting the least distance from source to destination node with localized and clustering approaches.
• To improve network QoS.
• To verify whether the proposed routing algorithm is more efficient than other routing algorithms like AODV, ACO and ACO using RSSI, and than the present ALEEP with ACO routing algorithm.

C. Methodology

To achieve the objectives we considered the following implementation steps:
• Design the EBC_LEE-ACO routing algorithm by combining the advantages of ACO, RSSI and clustering.
• Simulate the network considering a variable number of nodes and variable node mobility.
• Use the simulation data to analyze performance parameters like throughput, packet delivery ratio, packet drop and consumed energy.
• Compare the above network parameters between the proposed algorithm and AODV, ACO, ACO using RSSI and the present ALEEP with ACO algorithm.
• Use Network Simulator 2 (NS2) for the simulation.

IV. Current Methods

A. Ad-hoc On-demand Distance Vector (AODV) Protocol

As the name suggests, AODV is an on demand routing protocol: it determines a route to a destination only when a node wants to send a packet to that destination [10]. The essential objectives of the algorithm are:
• To broadcast discovery packets only when necessary, using the RREQ message.
• To perform local connectivity management, neighborhood identification and general topology maintenance using hello messages.
• To spread information about changes in link availability to the neighboring nodes that will probably require it, using the RERR message.

AODV operation is divided into two phases, route discovery and route maintenance.

B. Ant Colony Optimization (ACO) Algorithm

ACO is a bio-inspired meta-heuristic algorithm introduced in [11, 12]. The main idea is to use ants as an inspirational source, because they follow self-organizing principles which allow highly coordinated behavior. Ants have collective learning intelligence: each ant communicates, learns and cooperates non-verbally with the others through pheromones. Different kinds of ant algorithms can be inspired by different ant behaviors, e.g. foraging, labor division, brood sorting and cooperative transport [13, 14]. The basic principle of ACO is the ability of ants to discover the shortest path between food sources and the anthill. In the beginning, the route the ants find may not be the shortest path, but with the passage of time more and more ants move cooperatively and the trail of their path becomes shorter and shorter until they reach the shortest path. There are three phases of the ant based algorithm, namely route discovery, route maintenance and route failure handling.

C. Ant Colony Optimization Using the Received Signal Strength Indicator (ACO using RSSI) Algorithm

Location aware ACO routing is a high performance routing approach for WSN design [15]. The main reason to seek location awareness in ACO routing is the dynamic network topology, which causes frequent link breakups and makes the source node spend most of its time in route setup and route maintenance.
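The pheromone mechanism described above — next hops chosen with probability proportional to trail strength times a heuristic desirability, with trails evaporating and being reinforced along good paths — can be written compactly. This is a generic ACO sketch, not the paper's implementation; the ALPHA/BETA weights, evaporation and deposit constants, and node labels are illustrative assumptions:

```python
import random

# Generic ACO next-hop selection and pheromone update (illustrative sketch).
# pheromone[i][j] is the trail strength on link i->j; eta[i][j] is a heuristic
# desirability such as 1/distance. ALPHA and BETA weight trail vs. heuristic.
ALPHA, BETA, EVAPORATION, DEPOSIT = 1.0, 2.0, 0.1, 1.0

def choose_next_hop(node, neighbors, pheromone, eta, rng=random.random):
    """Pick a neighbor with probability proportional to tau^ALPHA * eta^BETA."""
    weights = [(pheromone[node][n] ** ALPHA) * (eta[node][n] ** BETA) for n in neighbors]
    total = sum(weights)
    r, acc = rng() * total, 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]

def reinforce(path, path_cost, pheromone):
    """Evaporate all trails, then deposit pheromone along the ant's path."""
    for i in pheromone:
        for j in pheromone[i]:
            pheromone[i][j] *= (1.0 - EVAPORATION)
    for i, j in zip(path, path[1:]):
        pheromone[i][j] += DEPOSIT / path_cost
```

Repeated rounds of `choose_next_hop` followed by `reinforce` on the completed paths shift the probability mass toward shorter routes, which is the convergence behavior described in the text.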
With location awareness in ACO, each node has a general idea about the network topology and its neighbors, so that it can choose the neighbor nearest to the destination [15, 16]. This routing algorithm is based on ACO and uses location as a parameter to enhance its efficiency: from the RSSI value, every node can determine the distance between nodes. In ACO using RSSI, a route is searched only when there is a collection of information packets to be sent from a source node (S) to a destination node (D); thus it is a reactive routing algorithm. Sending of the information packets begins after a route from the source to the destination node is built up; before that, only forward ants and backward ants are exchanged. To limit the time needed to discover a route while keeping the quantity of generated ants as small as possible under the circumstances, data about the position of the nodes is utilized as a heuristic value [7, 8]. When there is a packet to be sent, the source starts a route discovery phase. At first, a route request (RREQ) is broadcast to all the nodes from S to D. When D receives the first RREQ message, it answers with a route reply (RREP) message to S. On receiving the RREP message, a node extracts the RSSI value from it and calculates the location of the neighboring nodes and, in turn, the location of the destination. The routing table is updated with the distance information between the nodes utilizing the RSSI value. Every sensor node in the WSN has a memory block in which the leftover energy, the location data of the node, its neighbors and the base station are stored. Route establishment using distance is described in [8, 20].

V. Proposed Methods

A. EBC_LEE-ACO Algorithm

The previous algorithms have some limitations. For proper monitoring, dense and large WSNs will be used for different types of applications, and there is a high probability of redundant data being recorded by neighboring nodes during an event.
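The RSSI-to-distance step relies, in typical implementations, on the log-distance path-loss model RSSI(d) = RSSI(d0) − 10·n·log10(d/d0). The paper does not state its radio parameters, so the reference power and path-loss exponent below are assumed values for illustration:

```python
# Estimate inter-node distance from an RSSI reading using the log-distance
# path-loss model. The parameters below are assumed; in practice they are
# calibrated per radio and per environment.
RSSI_AT_1M = -40.0   # measured power at the reference distance d0 = 1 m (dBm)
PATH_LOSS_EXP = 2.0  # path-loss exponent n (2.0 corresponds to free space)

def rssi_to_distance(rssi_dbm, rssi_d0=RSSI_AT_1M, n=PATH_LOSS_EXP, d0=1.0):
    """Invert RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) to solve for d."""
    return d0 * 10.0 ** ((rssi_d0 - rssi_dbm) / (10.0 * n))

print(rssi_to_distance(-40.0))  # at the reference power -> 1.0 m
print(rssi_to_distance(-60.0))  # 20 dB weaker, n = 2 -> 10.0 m
```

Each node can apply such a conversion to RREP readings to fill the distance entries of its routing table, as described in the text.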
As many nodes might sense the same event, each of them would establish a route separately. Routing algorithms based on ant colonies are considered to achieve a high packet delivery percentage, but their drawback is the overhead of the control messages required for discovering the route; cluster heads use only forward route discovery control messages. A further limitation is the dynamic topology of the network, which constrains the available bandwidth along with the energy budget. To overcome these problems a clustering technique is used, as the clustering approach has the advantages of spatial reuse of resources to increase system capacity, data aggregation, reduced energy consumption, etc. [17-19]. In the present research work an event based clustering localized energy efficient ant colony optimization (EBC_LEE-ACO) algorithm is proposed. The main focus of the proposed EBC_LEE-ACO algorithm is to minimize the energy consumption of the network by cluster formation and to select the optimal path based on the biologically inspired routing (ACO) and the location information of nodes.

B. Three Phases (Steps) of the Proposed Algorithm

1) Hop Tree Formation Phase

In a WSN, data transmission takes place in a multi-hop fashion where each node forwards its data to a neighbor node nearer to the sink. A node does not have a full picture of the network; it has knowledge only of its neighboring nodes and of the hop level at which it sits in the hop tree [20]. In this phase, the distance metric used is the hop count (i.e. the number of nodes from A to B). The distances between the sink and the different nodes are calculated. The algorithm is initiated by the sink node broadcasting a hop configuration message (HCM) to its neighboring nodes with a hop value. This hop value is incremented every time the message is retransmitted and is stored in the nodes' routing tables. This process continues until all the nodes are configured with a hop value within the tree. The HCM has two parts, ID and HopToTree (HTT).
id is the node identifier whereas htt is the distance in hops. in this approach the sink node broadcasts hcms having an htt value of 1 and hop count 0. the receiving nodes forward the messages to their neighboring nodes. initially all nodes set the value of htt to infinity. on receiving an hcm, each node compares the value of htt in the hcm with the value of htt that it already stores. if the received htt value is smaller, the node updates its internally stored values with the value of the id field as well as the value of the htt variable of the hcm, and broadcasts the hcm with the new values. if the condition is not met, which means that the node already knows a shorter distance, the node discards the message. the step described above is repeated until the whole network is configured. initially there is no recognized route and the htt variable stores the smallest distance to the sink. when the first event is triggered the variable still stores the smallest distance, but a new route is established. after the event, the variable stores the lower of the two values: the distance to the sink or the distance to the closest already recognized route. the hop tree formation phase is shown in figure 2.
2) cluster formation and cluster head (ch) selection phase
when an event is detected by one or more nodes, the cluster formation and cluster head selection algorithm starts. a set of nodes that have detected an event forms a cluster. once clusters are formed, the next process is to select the cluster head within each cluster. the main process for forming the cluster is the selection of a leader node, called the cluster head (ch), by using the cluster configuration message (ccm).
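phases 1 and 2 can be condensed into a sketch: the hcm flooding amounts to a breadth-first search from the sink, and ch selection ranks the candidate nodes by hop count and by the ratio of current to initial energy, with ties broken by the smaller id. the paper gives these criteria but no exact weighting, so the adjacency-list topology and the lexicographic ordering below are illustrative assumptions:

```python
from collections import deque

def build_hop_tree(neighbors, sink):
    """flood hop configuration messages (hcm) from the sink.
    neighbors maps each node id to a list of neighbor ids. every node
    starts with htt = infinity and keeps only the smallest hop value it
    hears; an hcm carrying a shorter distance is stored and rebroadcast,
    while longer ones are discarded."""
    htt = {node: float("inf") for node in neighbors}
    htt[sink] = 0
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for nbr in neighbors[node]:
            if htt[node] + 1 < htt[nbr]:   # shorter distance: update
                htt[nbr] = htt[node] + 1
                queue.append(nbr)          # rebroadcast the hcm
    return htt

def select_cluster_head(candidates):
    """pick the cluster head (ch) among the nodes that sensed the event.
    candidates: list of dicts with keys 'id', 'hop_count',
    'current_energy' and 'initial_energy'. nodes closer to the sink
    (smaller hop count) win first, then nodes with a higher ratio of
    current to initial energy; a remaining tie goes to the smaller id."""
    best = min(
        candidates,
        key=lambda n: (n["hop_count"],
                       -(n["current_energy"] / n["initial_energy"]),
                       n["id"]),
    )
    return best["id"]
```

in this sketch the hop tree is recomputed centrally for clarity; in the distributed protocol each node stores only its own htt value and updates it on receipt of an hcm.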
ccm has four attributes (type, id, htt, state), where id is the identifier of the node that started the ccm, and htt and state store the hoptotree and state values of the node.
fig. 2. flooding of the hcm message from the sink to the last node of the network, where nodes are deployed randomly.
if this is the first event, all sensing nodes are eligible and the cluster head is the one which is closest to either the sink node or to the node which is closest to an established route. ch selection is based on parameters such as hop count, the energy ratio between the current energy and the initial energy, and the count of times that the node has been selected as ch. in the case of a tie, the node with the smaller id wins over the others. alternatively, the energy level can be used to select the ch in the case of a tie. after the end of the process, one of the nodes in the group will be selected as the ch. the other nodes will be the members, which also are a part of the same event. the ch collects all the information gathered by the member nodes and sends it to the sink. the main benefit of this algorithm is that the whole set of information gathered by the various nodes sensing the same event is accumulated or aggregated at a single node, the ch. in addition, the collected information is highly correlated, therefore a large amount of redundant data gets cleaned and the number of control messages for building the routing tree is reduced, while efficient aggregation and reliable transmission of data are maximized. the cluster formation and ch selection phase of the ebc_lee-aco algorithm is shown in figure 3.
fig. 3. nodes that detected the same event form a cluster and select a ch.
3) route discovery and selection phase
the final step involves route establishment through route discovery and optimal path selection from s (in our case the coordinator) to d (the sink). actual sending and receiving of data is done once the route is established from source to destination. before that, only forward and backward ants are exchanged for the route discovery process. a route is searched when packets are required to be sent from a source to a destination. the main parameter for this algorithm is the location information of the nodes, which is used to discover the route. when the source node wants to send data to a destination node, the source node initiates a route discovery process which involves a broadcast message sent from the source node to its neighboring nodes at 1 hop distance. with that message, each node checks whether it contains an entry for the destination node in its routing table. if there is no entry, the node initiates a rreq message to its neighboring nodes at 1 hop distance. the process continues until the rreq message reaches the destination node d. if an entry for d is found in any of the neighboring nodes, a reply message (rrep) is generated by d and is sent to s by following back the path that was followed to reach d. the route reply packet (rrep) contains the rssi value. when a node receives the rrep message, the rssi value is extracted, which helps in determining the location of the node, and the routing table is updated. for route selection this algorithm divides the ant agents into two sections: forward ants (fants) and backward ants (bants). the main reason to divide the ant agents in two sections is to take advantage of the information collected by the other section, i.e. bants benefiting from the information collected by fants. based on this principle, fants do not create or update routing information, but only collect the information, and the route creation process is done by bants. the process starts with the ch broadcasting a fant agent towards the sink or base station to form the route for the event dissemination. the s initiates route establishment by sending fants to two possible paths from the table whose first two neighbor nodes are closer to s. the algorithm focuses on keeping the number of generated ants as low as possible, and the pheromone trail shows the edge weight of the link between the nodes. these agents communicate with each other as follows:
1. the s (coordinator) initiates route establishment by launching fants to the destination at regular time intervals.
2. fants discover the route to d based on which node is nearer to s by analyzing the routing table. this process is repeated until an ant reaches d.
3. the fant creates a stack, pushing in trip times for every node.
4. upon receiving a fant, each of the nodes compares its remaining energy with the threshold energy.
5. when the destination node or sink receives the first fant, it initiates a timer called fet (forward expiration timer). during this time the d node accepts all fants. with the received fants, d will calculate the optimal path using the transition rule.
6. once d is reached, the fant is converted into a bant. the bant takes the stack and follows it.
7. the bant follows the stack entries and traces the reverse path from the sink to the source.
8. pheromone edge updates depend upon the residual energy of the node and its location.
9. once the optimal path is found, d backs up all possible paths in case the path fails.
engineering, technology & applied science research vol. 8, no. 4, 2018, 3177-3183 www.etasr.com chavan & kulkarni: event based clustering localized energy efficient ant colony optimization
vi. practical analysis
this section presents a practical analysis of network performance metrics like throughput, packet delivery ratio, packet drop, consumed energy and node mobility.
a. network scenarios and simulation parameters
scenarios and simulation parameters are shown in table i.
table i. scenarios and simulation parameters
network parameters: routing protocol / algorithm: aodv, aco, aco using rssi, ebc_lee-aco; traffic pattern: cbr (constant bit rate); network size: 1000 × 1000 (x × y); mac protocol: 802.11; initial energy: 200 j (for each node); simulation time: 30 s; simulation platform: ns-allinone-2.32.
node variables: number of nodes: 10/30/60/100; node speed: 3 m/s.
variable mobility: number of nodes: 50; maximum speed: 1/2/3/4/5 m/s.
b. results and analysis
1) results on varying the number of nodes
it is observed from figures 4 to 7 that for an increasing number of nodes and constant mobility, throughput and packet delivery ratio decrease while consumed energy and the number of dropped packets increase because: the probability of success in accessing the channel decreases, and as the hop count increases, congestion and delay increase and collisions and transmission errors increase.
fig. 4. throughput versus number of nodes. fig. 5. pdr versus number of nodes. fig. 6. packets drop versus number of nodes. fig. 7.
energy versus number of nodes.
2) results of varying node mobility
figures 8 to 11 show that for increasing mobility with a constant number of nodes, throughput, packet delivery ratio and consumed energy decrease and packet drop increases, because the probability of path breakage increases and the construction of a new path takes time. the results show that the performance parameters of the network are improved by the use of the proposed ebc_lee-aco algorithm in comparison with the aodv, aco and aco using rssi algorithms due to the following characteristics of the ebc_lee-aco algorithm: no back propagation, multipath routing, no packet flooding, shortest distance routing, ideal nodes, reduced overhead, data aggregation, and no redundant data transmission.
fig. 8. throughput versus node mobility. fig. 9. energy versus number of nodes.
vii. conclusions
in this work, the ebc_lee-aco routing algorithm is implemented and the network performance, varying the number of nodes and the node mobility, is analyzed. this algorithm is extensively compared to other algorithms like aodv, aco and aco-rssi by considering network metric parameters like throughput, packet delivery ratio, packet drop and consumed energy. simulation results show that the ebc_lee-aco algorithm outperformed the other algorithms in disaster situations. aco achieves better performance compared to aodv, as aco allows rerouting to another link in the case of an existing link failure (no back propagation). the aco using rssi routing algorithm improves the routing by minimizing the flooding of routing packets because it has the location information of nearby nodes.
fig. 10.
pdr versus node mobility. fig. 11. packet drop versus node mobility. fig. 12. energy versus node mobility.
the ebc_lee-aco algorithm has achieved better performance due to the clustering technique and the location information of nodes. through clustering, data is aggregated and sent to the sink through the ch, which reduces overheads. also, the location information of nodes is useful to send data to the shortest-distance node in less time. the proposed algorithm reduces energy consumption by approximately 7%. an improvement in throughput and packet delivery ratio and an increase in packet drop have been observed in comparison with an existing network routing algorithm, i.e. autonomous localization based eligible energetic path with ant colony optimization (aleep with aco) [9]. the use of the ieee 802.11 standard increased packet drop. hence, our ebc_lee-aco algorithm is useful for improving qos and reducing the energy consumption of the wsn. it is most suitable for information monitoring in disaster situations. the extension of the proposed algorithm to varying network areas as well as an increasing number of nodes and mobility will be considered in future work.
references
[1] i. f. akyildiz, w. su, y. sankarasubramaniam, e. cayirci, “wireless sensor networks: a survey”, computer networks, vol. 38, no. 4, pp. 393–422, 2002
[2] j. yick, b. mukherjee, d. ghosal, “wireless sensor network survey”, computer networks, vol. 52, no. 12, pp. 2292–2330, 2008
[3] s. k. gupta, p. sinha, “overview of wireless sensor network: a survey”, international journal of advanced research in computer and communication engineering, vol. 3, no. 1, pp. 5201-5207, 2014
[4] a. k. gupta, h. sadawarti, a. k. verma, “review of various routing protocols for manets”, international journal of information and electronics engineering, vol. 1, no. 3, pp.
251-259, 2011 [5] h. s. a. hamatta, n. i. zanoon, r. m. al-tarawneh, “comparative review for routing protocols in mobile ad-hoc networks”, international journal of ad hoc, sensor & ubiquitous computing, vol. 7, no. 2, pp. 13-31, 2016 [6] a. mesmoudi, m. feham, n. labraoui, “wireless sensor networks localization algorithms: a comprehensive survey”, international journal of computer networks & communications, vol. 5, no. 6, pp. 45-64, 2013 [7] s. kamali, j. opatrny, “a position based ant colony routing algorithm for mobile ad-hoc networks”, third international conference on wireless and mobile communications (icwmc'07), guadeloupe, france, march 4-9, 2007 [8] r. vallikannu, s. e. jubin, “a location based aco routing algorithm for mobile ad hoc networks using rssi”, ieee international conference on communication and signal processing, chennai, india, april, 3-5, 2013 [9] r. vallikannu, a. george, s. k. srivatsa, “autonomous localization based energy saving mechanism in indoor manets using aco”, journal of discrete algorithms, vol. 33, pp. 19–30, 2015 [10] p. k. maurya, g. sharma, v. sahu, a. roberts, m. srivastava, “an overview of aodv routing protocol”, international journal of modern engineering research, vol. 2, no. 3, pp. 728-732, 2012 [11] m. dorigo, c. blum, “ant colony optimization theory: a survey”, theoretical computer science, vol. 344, no. 2-3, pp. 243–278, 2005 [12] c. blum, “ant colony optimization: introduction and recent trends”, physics of life reviews, vol. 2, no. 4, pp. 353–373, 2005 [13] s. binitha, s. s. sathya, “a survey of bio inspired optimization algorithms”, international journal of soft computing and engineering, vol. 2, no. 2, pp. 137-151, 2012 [14] o. deepa, a. senthilkumar, “swarm intelligence from natural to artificial systems: ant colony optimization”, international journal on applications of graph theory in wireless ad hoc networks and sensor networks, vol. 8, no.1, pp. 9-17, 2016 [15] x. wang, q. li, n. xiong, y. 
pan, “ant colony optimization-based location-aware routing for wireless sensor networks”, in: lecture notes in computer science, vol. 5258, pp. 109-120, springer, 2008
[16] c. dominguez-medina, n. cruz-cortes, “energy-efficient and location-aware ant colony based routing algorithms for wireless sensor networks”, 13th annual conference on genetic and evolutionary computation, dublin, ireland, pp. 117-124, july 12-16, 2011
[17] s. k. popat, m. emmanuel, “review and comparative study of clustering techniques”, international journal of computer science and information technologies, vol. 5, no. 1, pp. 805-812, 2014
[18] s. mahajan, p. k. dhiman, “clustering in wireless sensor networks: a review”, international journal of advanced research in computer science, vol. 7, no. 3, pp. 198-201, 2016
[19] s. k. gupta, n. jain, p. sinha, “clustering protocols in wireless sensor networks: a survey”, international journal of applied information systems, vol. 5, no. 2, pp. 41-50, 2013
[20] l. a. villas, a. boukerche, h. s. ramos, h. a. b. f. de oliveira, r. b. de araujo, a. a. f. loureiro, “drina: a lightweight and reliable routing approach for in-network aggregation in wireless sensor networks”, ieee transactions on computers, vol. 62, no. 4, pp. 676-689, 2013
engineering, technology & applied science research vol. 10, no. 6, 2020, 6418-6421 www.etasr.com phan: contractor's attitude towards risk and risk management in construction in two western provinces of vietnam
contractor's attitude towards risk and risk management in construction in two western provinces of vietnam
van tien phan, department of civil engineering, vinh university, vinh city, vietnam, vantienkxd@vinhuni.edu.vn
abstract—risk management is an important task in construction management that helps the contractor to actively identify, evaluate, control, and minimize negative impacts of risks on the project, thereby ensuring its effectiveness.
people involved in the construction industry need to be well equipped with information and knowledge to manage risks adequately and systematically. the purpose of this research is to explore the attitude towards risk and risk management in construction projects of the vietnamese construction industry, with emphasis on the perspective of contractors. the research data are collected through a questionnaire associated with in-depth semi-structured interviews. the results indicate that the perception of risk within the vietnamese construction industry includes both threats and opportunities. the majority of professionals in the industry have a risk-neutral approach, contrary to previous research. the importance of implementing effective risk management is shared, especially in the planning and production phases, while risk identification was perceived to be the most important of the four core processes.
keywords-contractor; risk; risk management; construction
i. introduction
risk management in construction is designed to plan, monitor, and control those measures needed to prevent exposure to risk. to do this, it is necessary to identify the hazard, assess the extent of the risk, provide measures to control the risk and manage any residual risks. the integration of an effective risk management is considered essential for the project's success. construction projects are described as tremendously complex and uncertainty might arise from various sources. risk management is therefore becoming an extensive component of project management in civil engineering in a pursuit to efficiently deal with unexpected risks and uncertainty. however, the management of the adverse effects of risk and uncertainty in construction projects is rated ineffective, resulting in delays and a failure to meet quality and cost targets [1]. the aim of applying an efficient risk management procedure is to facilitate risk-neutral decisions, resulting in superior performance.
in order to obtain more information about the risk and uncertainty of a construction project, different methods need to be applied systematically [2]. although the application of various such techniques will not remove all risks, it ensures that the risks are assessed and managed in a manner allowing the overall objectives of the project to be achieved [3]. risk management in construction is identifying, analyzing, and taking steps to reduce or eliminate the exposures to loss faced by an organization or individual. the practice utilizes many tools and techniques, including insurance, to manage a wide variety of risks. this allows the project to be prepared for unavoidable issues with increased transparency [4]. this process is repeated continuously throughout the entire project life cycle due to the constant possibility of emerging risks. risk and uncertainty must be identified, assessed and responded to from the earliest possible phases so that they can be efficiently dealt with when they arise [3]. the benefits of the process are a clearer understanding of the specific risks associated with the project, decisions supported by detailed analysis, and a build-up of historical data that can be used to assist future risk management procedures. however, many contractors have still not realized the importance of applying risk management techniques as an integral part of the delivery of a project [1]. inefficient risk management has many causes, including the lack of formalized procedures, the discontinuity in different phases of the construction project, and the inadequate integration of knowledge management and interaction between processes and parties [1]. on the part of the contractors, the responsibility of dealing with risks involves deciding if the risks should be reduced, avoided, transferred, or retained [6]. the contractor needs to understand the importance of risk and risk management capabilities to achieve effective risk implementation [7].
in this research, risk management in the vietnamese construction industry has been investigated, with focus being given on the attitude of the contractors towards risk and risk management techniques. the contractors participating in the survey were selected from small and large companies. the research is limited to vietnamese construction organizations in two main provinces of vietnam, vinh long and can tho. corresponding author: van tien phan.
ii. research methodology
the research was carried out using a mixed method, in which both quantitative and qualitative data collection techniques and analytical procedures were used [8]. therefore, this form of survey integrates two types of data, and the core assumption of this approach is that a combination of qualitative and quantitative methods leads to a better understanding of the problem [10]. mixed approach studies use multiple approaches in answering research questions, and are not restricted or limited [11]. the first research steps include a thorough understanding of the field of study and the formulation of research questions. the next step involves preparing a set of semi-structured interview questions and building a questionnaire based on the theoretical framework. these steps are needed in order to achieve the purpose of this study, which is to understand the perceptions, knowledge, and practical implementations of the risk management process in construction in vietnam. the questionnaire was designed to identify attitudes, knowledge, and risk management application in the vietnamese construction industry. a survey strategy is associated with inference methodology and tends to be used for exploratory and descriptive research. it allows the potential gathering of large amounts of data from a large population [8].
the purpose of descriptive research is to obtain an accurate representation of the person or situation, thus describing the characteristics of the phenomenon under investigation. a questionnaire is one of the most commonly used data collection techniques in surveys. each individual is provided with a questionnaire and is asked to answer the same set of questions, allowing a way to collect answers from a large sample before analysis [8]. a successful questionnaire should be short and simple [12]. simplified questions should be arranged in a logical sequence that moves from easier to more difficult. the questions provided in the survey may be open-ended, allowing unlimited answers, dichotomous, in which the answer is limited to a pair of alternatives such as yes or no, rating questions, and finally multiple-choice questions in which the respondents are asked to choose the most suitable option. the goal was to get an overall representation of the industry related to risk management. the questionnaire consisted of 29 questions divided into three parts. the first part was designed to collect background information and reveal respondents' perceptions and attitudes towards risk management. the purpose of the second part was to explore the ways of managing and transferring knowledge in the respective companies, and the third part covered practical risk management and implementation methods in the industry. an invitation was emailed to 336 vietnamese contractors and a total of 43 responses were received, a response rate of approximately 13%. the average response rate for external surveys is around 10-15%. about 70% of the respondents had more than 15 years of experience within the construction industry, and the majority (88%) were contractors, 24% were developers and 2.38% were consultants. the company sizes were equally represented: approximately 48% had more than 1000 employees while 52% had less than 1000 employees. iii.
results
the results of this study show the respondents' perception of risks and how they view risk management in terms of importance. the respondents covered a variety of occupations in the vietnamese construction industry, representing the overall picture of risk management perception and implementation, with their majority having more than 10 years of experience, which enhances their credibility. contractors were 78% of the participants, developers (clients) 19%, and consultants 13%. interviews were conducted only with contractors. an equal distribution among company sizes was attained in the data collection, as stated above. a difference of opinion related to company size will only be mentioned when a significant differentiation can be observed between them; otherwise an overall picture of the industry will be presented due to similar answers to the questions. the results indicate that the attitude among vietnamese contractors regarding risk is a combination of both opportunity and threat. this contradicts the results presented in [5], where the construction industry is predominantly risk averse. the overwhelming majority of the respondents in both the questionnaire and the interviews described themselves as being risk-neutral rather than risk-averse or risk-seekers, which coincides with previous studies. hence, their attitudes and perception of risk are in line with their risk approach profile as risk-neutral decision makers. one of the interviewees viewed risk as overall negative consequences depending on the type of risk, although he stated that an opportunity might be found when dealing with financial risks. however, the majority of respondents described risks as a mix of threat and opportunity, since risks might lead to exploring other ways of managing hazardous situations which may be more prosperous.
a. questionnaire
the related part of the questionnaire consisted of the following questions:
1) how do you perceive risk within the construction industry?
the answers show that the majority of respondents' attitudes toward risk were a combination of threats and opportunities, as shown in figure 1. only two respondents perceived risk solely as a threat, while two more respondents perceived risk as something positive, that is only an opportunity. however, about 90% (30 people) of the respondents perceived the risks in the construction industry as a combination of both positive and negative associations.
2) what is your attitude in relation to risk?
about 5% of respondents feel they have a risk-seeking personality while 13% say they do not like risks. the majority, around 82%, have a risk-neutral approach and can balance between avoiding and seeking risks. therefore, a correlation between their cognitive traits and their attitudes is observable.
3) which stage/phase do you consider most important in risk management?
respondents varied quite a lot when asked at what stage they considered the implementation of risk management to be the most important. the reason for this is probably the professional diversity among respondents, because developers and consultants probably value the completion phase more, while construction and site managers consider planning and production risks a higher priority. however, the findings indicate that the majority consider the planning stage as the most important stage to implement risk management. this is followed by production, then the conceptual stage and finally by the completion and finishing stages, as shown in figure 2.
fig. 1. respondents' risk perception (left) and respondents' attitude in relation to risk (right).
fig. 2. answers to "which stage do you consider most important for risk management?".
4) which risk management process is most important?
risk identification was perceived by the respondents as the most important risk management process, as shown in figure 3. the risk assessment, risk response, and risk monitoring phases are considered rather equally important, with small variances in opinion.
fig. 3. answers to "which risk management process is most important?".
the questionnaire revealed that the respondents perceived risk management to be most important during the planning and production rather than the conceptual and completion phases. the result illustrates that every phase is considered highly important, since the planning and production phases did not exceed the other phases significantly. this outcome is probably caused by the fact that the participants came from various professions within the industry, in which different risks are considered. furthermore, out of the four core processes in risk management, the respondents viewed risk identification as the most important, whereas assessment, response and monitoring were rather equally significant. this parallels the claim in [7] that the identification process might be viewed as the most crucial step.
b. interviews
the perception of risk management as the adoption of efficient processes of managing risks was shared among all the interviewees. risk management is crucial in order to achieve project objectives. the projects in the construction industry are filled with risks and uncertainty, thus it is essential to have an effective risk management process in place. the concept described in the literature facilitates the ability to maximize opportunities and simultaneously reduce threats. however, no universal standard or method can be observed among the respondents in the interviews, with the implementation of risk management varying in practice. nonetheless, four stages were considered as the core process within the construction industry, even though there are many methodologies used for risk management [1].
the findings from the interviews revealed that even though the respondents did not have previous knowledge regarding the theoretical models and processes, they still had analogous organizational processes. hence, they were indirectly practicing risk management similar to the concepts described in the literature. the respondents described a general identification process in order to shed light on various risks, followed by an assessment of the risks and a prioritization depending on impact and probability, which was determined by experience and discussion. this process is iterative and continuous throughout their projects, which is the essential information loop principle used when describing the use of an effective risk management implementation [2]. in terms of work environment risks, it was stated by one respondent that they made sure to manage a high-impact risk scenario straightaway, while low risks were allowed to be fixed during a longer time period. all respondents emphasized the importance of discovering risks as early as possible.
iv. conclusions
by preparing a set of semi-structured interview questions and by building a questionnaire based on the theoretical framework, the perceptions, knowledge, and practical implementations of the risk management process in the construction industry in vietnam have been investigated. the questionnaire was designed to identify attitudes, knowledge, and risk management application in the vietnamese construction industry. the contractors participating in the questionnaire were selected from small and large companies in vinh long and can tho provinces. the perception of risk within the vietnamese construction industry includes awareness of threats and opportunities. the majority of the professionals in the industry have a risk-neutral approach, a result that comes in contrast with the findings in [5].
the importance of implementing effective risk management is shared, especially in the planning and production phases, while risk identification was perceived to be the most important of the four core processes.

engineering, technology & applied science research vol. 10, no. 6, 2020, 6418-6421 www.etasr.com phan: contractor's attitude towards risk and risk management in construction in two western …

references
[1] n. j. smith, t. merna, and p. jobling, managing risk: in construction projects, 2nd ed. hoboken, nj, usa: wiley-blackwell, 2009.
[2] g. m. winch, managing construction projects, 2nd ed. chichester, uk: wiley-blackwell, 2009.
[3] k. potts, construction cost management: learning from case studies. london, uk: routledge, 2014.
[4] m. schieg, "risk management in construction project management," journal of business economics and management, vol. 7, no. 2, pp. 77-83, jan. 2006, https://doi.org/10.1080/16111699.2006.9636126.
[5] a. s. akintoye and m. j. macleod, "risk analysis and management in construction," international journal of project management, vol. 15, no. 1, pp. 31-38, feb. 1997, https://doi.org/10.1016/s0263-7863(96)00035-x.
[6] j. liu, b. li, b. lin, and v. nguyen, "key issues and challenges of risk management and insurance in china's construction industry: an empirical study," industrial management & data systems, vol. 107, no. 3, pp. 382-396, jan. 2007, https://doi.org/10.1108/02635570710734280.
[7] m. n. k. saunders, a. thornhill, and p. lewis, research methods for business students, 5th ed. new york, ny, usa: pearson, 2009.
[8] a. f. serpella, x. ferrada, r. howard, and l. rubio, "risk management in construction projects: a knowledge-based approach," procedia - social and behavioral sciences, vol. 119, pp. 653-662, mar. 2014, https://doi.org/10.1016/j.sbspro.2014.03.073.
[9] r. b. johnson and a. j. onwuegbuzie, "mixed methods research: a research paradigm whose time has come," educational researcher, vol. 33, no.
7, 2004, https://doi.org/10.3102/0013189x033007014.
[10] p. t. nguyen and p. c. nguyen, "risk management in engineering and construction: a case study in design-build projects in vietnam," engineering, technology & applied science research, vol. 10, no. 1, pp. 5237-5241, feb. 2020, https://doi.org/10.48084/etasr.3286.
[11] m. s. shahbaz, a. g. kazi, b. othman, m. javaid, k. hussain, and r. z. r. m. rasi, "identification, assessment and mitigation of environment side risks for malaysian manufacturing," engineering, technology & applied science research, vol. 9, no. 1, pp. 3852-3858, feb. 2019, https://doi.org/10.48084/etasr.2529.
[12] a. chenarani and e. a. druzhinin, "a quantitative measure for evaluating project uncertainty under variation and risk effects," engineering, technology & applied science research, vol. 7, no. 5, pp. 2083-2088, oct. 2017, https://doi.org/10.48084/etasr.1530.

etasr engineering, technology & applied science research vol. 3, no. 4, 2013, 479-482 www.etasr.com monteiro et al.: socio-environmental impacts associated with burning alternative fuels in clinker kilns

socio-environmental impacts associated with burning alternative fuels in clinker kilns

luciane p. c. monteiro, escola de engenharia, universidade federal fluminense, niterói, rio de janeiro, brazil, luciane@predialnet.com.br
fernando b. mainier, escola de engenharia, universidade federal fluminense, niterói, rio de janeiro, brazil, fmainier@uol.com.br
renata j. mainier, prog. pós-graduação eng. civil, universidade federal fluminense, niterói, rio de janeiro, brazil, renatajogaibmainier@msn.com

abstract— the pollutants found in emissions from cement plants depend on the processes used and the operation of the clinker kilns. another crucial aspect concerns the characteristics of the raw materials and fuels.
the intensive use of fuels in the rotary kilns of cement plants, together with the increasing fuel diversification, which includes fuels derived from coal and oil, a multitude of industrial wastes, and biomass such as charcoal and agricultural waste (sugarcane bagasse, rice husk), is increasing the possibilities of combinations or mixtures of different fuels, known as blends. thus, there are socio-environmental impacts associated with the burning of alternative fuels in clinker kilns. in view of the growing trend of entrepreneurs seeking a destination for the waste produced in their units, and of cement plant owners wanting to reduce production costs by burning waste that is cheaper than conventional fuels, it is necessary to warn that a minimum level of environmental care should accompany these decisions. it is necessary to monitor the emission points of cement kilns and the wider area influenced by the plant, in order to improve environmental quality. laboratory studies of burning vulcanised rubber contaminated with arsenic simulate the burning of used tyres in cement clinker kilns, producing so2 and as2o3.

keywords- cement plants; arsenic; tyres; clinker kilns

i. introduction

co-incineration of waste in industrial clinker kilns is a practice that dates back to the oil crisis. it is currently viewed as a coordinated action between waste-generating industries and cement plants, contextualised more in the environmental sphere and less in the energy sphere, and it is considered by waste generators, with the approval of environmental agencies, as a final solution for the disposal of their industrial waste. it should also be noted that in the manufacture of cement, as in any other large industrial activity, risks are associated with the scale of operations, i.e., they depend on the quantities of waste handled, transported, prepared, fed and incinerated, and on the degree of dangerousness of such materials.
therefore, the scale of the enterprise determines the extent of risk exposure for workers, the surrounding population and the environment. the most commonly used cement is composed of 96% clinker and 4% gypsum by mass. the clinker is produced by the thermal treatment in rotary kilns, at elevated temperatures, of a rocky material usually containing 80% calcium carbonate (caco3), 15% silicon dioxide (sio2), 3% aluminium oxide (al2o3) and minor amounts of other constituents, such as iron and sulphur. these materials are found in limestone deposits often located near the site of the clinker kiln. the raw material is mixed and finely ground before being subjected to a heating process that leads to the production of clinker [1, 2]. the minerals employed as raw materials for cement might be associated with a number of secondary minerals, which might contain other contaminants in the form of complex salts or oxides. other dangerous contaminants to be highlighted include lead, copper, zinc, thallium, cadmium, chromium, nickel and arsenic. additionally, the burning of chlorinated compounds, fluoride and sulphide minerals might lead to problems of equipment corrosion and serious socio-environmental problems.

ii. risks and conditions for co-processing industrial waste

for the daily production of 3600 tons of clinker, the main component of cement, a large-capacity oil-fired rotary kiln is required, which consumes about 300 tons of fuel, the equivalent of ten tanker truck loads. in brazil, there are 487 operating cement plants, and among those, 30 have environmental licences for co-processing waste, consuming the equivalent of 39.48% of the total energy consumption of the country [3, 4]. around 1979-1981, the price of fuel oil tripled because of the national dependence on imported oil. a scheme was then introduced under which the supply of fuel oil to industries should not exceed the consumption practiced in 1979.
thus, incentives and subsidies were introduced for some alternative sources of fuel and heat energy, through the signing of protocols for the use of domestic coal in the steel, cement, paper and cellulose industries. in september 1979, the cement industry signed the "protocol for the reduction and replacement of fuel oil consumption in the cement industry", pledging to achieve, by the end of 1984, the total substitution of the fuel oil consumed by cement plants in domestic production and to adopt energy conservation measures at the plant level. by 1985, the cement industry had already replaced about 95% of its fuel oil consumption [3, 5]. according to ferrari [6], the basic assumptions for the use of waste in clinker kilns are:
• not all types of waste may be used in cement clinker kilns, owing to environmental restrictions (legislation) and to the manufacturing process of clinker
• a residue may be checked for use as a partial replacement for fuel and/or as a partial replacement for raw materials
• a residue considered as a fuel must provide heat to the process
• a residue considered as a partial replacement for raw material should contain calcium, silicon, aluminium and iron as major components; mineralising materials and/or fluxes are included in this case
• assessment of physico-chemical characteristics: certain contaminants from the waste should be limited in amount relative to the feeding rate of waste to the furnace
• the maximum rate of feeding waste to the clinker kiln is established by material balances, based on previous tests

a. principal characteristics of the clinker kiln

the clinker kiln should be adequate for the incineration of diverse industrial waste [6].
for example:
• the kiln must be able to operate at the high temperatures needed for the destruction of some hazardous organic wastes. the material in the kiln for the production of cement clinker must reach temperatures of 1400-1500 °c, and the heating of the material requires a flame temperature of 3200 °c. the residence time at temperatures above 1100 °c is 6 to 10 seconds.
• the gases in the furnace system must be turbulent, with a reynolds number greater than 100,000, a condition that is highly favourable for the combustion and destruction of waste.
• the clinker kiln has a basic environment that neutralises acid gases, by the very nature of its raw materials.
• the complete elimination of waste is expected, because the ash produced by the incineration of residues is incorporated in the mass of the clinker produced.
• the waste stream is stopped following any mishap in relation to normal conditions of operation.

the operation of clinker kilns is necessarily a function of the basic relationship between the amounts of cao (calcium oxide) and sio2. the cao/sio2 ratio must be greater than one, i.e., cao must be present in greater quantity than sio2, characterising the basicity of the furnace system [6].

b. operations in a cement plant licensed for the burning of industrial waste

the cement manufacturing operation using alternative waste fuels is based on two steps.

1) step 1: preparation and conditioning of waste fuel

this step involves the following operations: receiving the waste, temporary storage, classification, segregation, mixing of different sources to form a blend with an acceptable calorific value and, finally, packaging in standard.
2) step 2: co-processing

this stage involves the following operations: an overflow system; raising and feeding of packaged waste; a system for the storage, mixing and transport of solid waste; a system for the storage, pumping, transport and injection of viscous waste mixtures; and a system for the storage, pumping, transport and injection of liquid waste mixtures.

iii. monitoring the operation of a cement unit in relation to environmental contamination

the environmental monitoring of a cement unit should be done by the industry, and its records should be made available to the environmental control agencies, based on the following guidelines. there should be periodic and non-periodic monitoring of parameters according to the composition of the waste that is fed to the furnace, at the sampling points critical to control, such as:
• sampling and analysis of the flue gases
• sampling and analysis of the clinker produced
• reporting and documentation, as required by the controlling environmental agency
• monitoring of air quality in the vicinity of the plant
• control of the final specification of the cement produced when using industrial waste in the clinker furnace

continuous monitoring is done by analysing and recording one or more parameters whenever the facility is in operation. in the furnace fuel, particulate material, sox, nox, co and co2 are monitored. periodic monitoring is performed by analysing and recording particulate materials, sox, nox, fluoride, chlorine, metals, cyanides, pops (persistent organic pollutants) and vocs (volatile organic compounds). one has to consider the inclusion of a label stating that the cement being used was produced from the burning/co-incineration of waste, as well as an identification of the main types of contaminants expected in the product [7-9].

iv. risks related to workers' health, public health and the environment

risks in the field of cement production are present in the form of occupational accidents: the introduction of industrial waste of any nature could cause immediate, acute toxicity to workers following a break in the reliability of the process, irrespective of whether or not alternative fuels are used in the clinker furnace. as with other forms of occupational risk, cement units might impart a progressive chemical contamination to their workers that manifests over time. another risk related to cement production concerns environmental contaminants transferred, for example by climatic factors (wind and/or rain), to surrounding neighbourhoods and industrial units. there are risks related to the use of waste in cement manufacture, from its place of origin to the final cement production, involving all who participate in this operation. the characteristics of the technological process and the physico-chemical and toxicological properties of the raw materials and inputs employed in cement production mean that cement plants pose risks to workers' health, public health and the environment. these are mainly associated with exposure to the material in powder form that permeates the entire chain of production, and with emissions of polluting substances which, occurring continuously, even in small concentrations, characterise a chronic risk [1]. therefore, the entire cycle of the cement manufacturing process constitutes risk: the mining and processing of lime, the grinding and homogenisation of raw materials, the manufacture of clinker, and the grinding and dispatch of the cement.
throughout this process there are emissions of particulate matter (consisting of the raw materials, clinker and cement), salt vapours, metals and gases formed in the combustion process, and other emissions generated elsewhere in the plant. the risk of further dissemination remains when the cement end product is used. table i presents some metals found in solid waste and their consequences to human health.

table i. metals vs risks
contaminant | solid waste | disease
cadmium | solder, tobacco, batteries and cells | lung and prostate cancer, kidney injury
chromium | industrial dyes, enamels, paints, steel and nickel alloys | asthma (bronchitis), cancer
nickel | nickel-cadmium batteries, nickel electrowinning, castings | breast and lung cancer, sinus cancer

v. laboratory testing: burning of vulcanised rubber containing arsenic

the incineration of industrial residues in rotary kilns has been discussed across the world because of the environmental problems caused to the atmosphere and to the quality of the produced cement. this affects the sustainability of the cement industry, because it has to guarantee its raw materials and fuels as well as obey environmental legislation. used tyres containing toxic contaminants are an example of this kind of fuel [11]. the experimental phase of this study aimed to simulate the burning of scrap tyres in cement clinker kilns, which generates gaseous environmental contaminants such as arsenious oxide (as2o3) and sulphur dioxide (so2). the experimental method consisted of the preparation of rubber (styrene-butadiene) coupons vulcanised with high-purity sulphur and contaminated with arsenic and sulphur. about 100 mg of finely grated coupons was placed in a porcelain crucible and subjected to firing at 1000 °c in a furnace with a controlled heating rate (10 °c/min) and an atmosphere of helium and oxygen at a flow rate of 30 cm3/min, as shown in figure 1.

fig. 1.
equipment used for burning of rubber shavings: 1 - voltage regulator, 2 - nebuliser, 3 - temperature indicator, 4 - thermocouple, 5 - heating mantle, 6 - bottle containing sodium hydroxide solution

the combustion gases were collected in a flask containing a 2 M sodium hydroxide solution. the results of burning the rubber contaminated with sulphur and arsenic are shown in table ii.

table ii. burning of the rubber contaminated with sulphur and arsenic (gas burner outlet)
sample | % sulphur in rubber | % arsenic in rubber | so2 (mg/l) | as2o3 (mg/l)
1 | 11.13 | -- | 1510 | --
2 | 14.45 | -- | 17450 | --
3 | 11.13 | 4.46 | 1600 | 158
4 | 14.45 | 4.46 | 1570 | 164
5 | 14.45 | 16.65 | 17600 | 740

vi. conclusions

based on the study, the following conclusions are considered critical:
• the majority of workers in the cement industry, as well as the residents in the neighbourhood of these factories, are unaware of the origins and contents of the waste fuels burned in the clinker ovens
• usually, waste chemicals from various segments are mixed with sawdust to form a mixture based on economic value and with a calorific value viable for burning
• there are no effective controls on the burning process and equipment used by the cement factories, nor on the toxic gaseous contaminants generated during the burning process
• laboratory analyses carried out on burning vulcanised rubber coupons contaminated with arsenic, simulating the burning of used tyres, generated significant levels of arsenic oxide (as2o3)
• the burning of scrap tyres in clinker kilns is not a recommended alternative for the disposal of this environmentally troublesome waste, especially because it can generate undesirable contaminants in the environment
• negligent and irresponsible sales of industrial waste with undeclared toxic contaminants for incineration in
industrial furnaces may occur
• because of the dangers related to the contamination of cement produced with the use of industrial waste, such use should be avoided as much as possible in the absence of additional controls

references
[1] m. achternbosch, k. r. bräutigam, n. hartlieb, c. kupsch, u. richers, p. stemmermann, m. gleis, heavy metals in cement and concrete resulting from the co-incineration of wastes in cement kilns with regard to the legitimacy of waste utilization, forschungszentrum karlsruhe gmbh, karlsruhe, 2003.
[2] r. kikuchi, r. gerardo, "more than a decade of conflict between hazardous waste management and public resistance: a case study of nimby syndrome in souselas (portugal)", journal of hazardous materials, vol. 172, no. 2-3, pp. 1681-1685, 2009.
[3] l. p. c. monteiro, avaliação do impacto ambiental associado à queima de resíduos industriais em fornos de clínquer: visão sob o prisma da educação ambiental, tese de doutorado, universidade federal fluminense, outubro 2007.
[4] c. y. kawabata, h. savastano junior, j. souza-coutinho, "rice husk derived waste materials as partial cement replacement in lightweight concrete", ciência e agrotecnologia, vol. 36, no. 5, pp. 567-578, 2012.
[5] j. g. silva, há emissões acrescidas na co-incineração de resíduos industriais perigosos em cimenteiras, universidade de coimbra, associação nacional de conservação da natureza, quercus, 2002.
[6] r. ferrari, "co-processamento de resíduos industriais em fornos de clínquer", companhia de cimento itambé, balsa nova, 2002.
[7] p. shih, j. chang, h. lu, l. chiang, "reuse of heavy metal-containing sludges in cement production", cement and concrete research, vol. 35, no. 11, pp. 2110-2115, 2005.
[8] d. lemarchand, "cement kiln incineration associated to pre-treatment, a viable waste management solution", congresso brasileiro de cimento, anais, são paulo, 1999.
[9] f. bagnoli, a. bianchi, a. ceccarini, r. fuoco, s.
giannarelli, "trace metals and organic pollutants in treated and untreated residues from urban solid waste incinerators", microchemical journal, vol. 79, no. 1-2, pp. 291-297, 2005.
[10] f. b. mainier, b. p. salvini, l. p. c. monteiro, r. j. mainier, "recycling of tires in brazil: a lucrative business or an imported problem", international journal of engineering and applied sciences, vol. 2, no. 3, pp. 19-28, 2013.

engineering, technology & applied science research vol. 9, no. 6, 2019, 5037-5040 www.etasr.com hafeez et al.: a survey on the adaption of cms in pakistani universities

a survey on the adaption of cms in pakistani universities

abdul hafeez, department of computer science, smi university, karachi, pakistan
samreen javed, department of computer science, smi university, karachi, pakistan
anus bin murtaza, department of computer science, muhammad ali jinnah university, karachi, pakistan
abdul aziz, department of computer science, national university of computer and emerging sciences, karachi, pakistan
syed muhammad hassan, department of computer science, smi university, karachi, pakistan
imtiaz hussain, department of computer science, smi university, karachi, pakistan

abstract—the use of campus management systems (cmss) has increased dramatically in pakistani universities, and a huge amount of money has been invested in their development and deployment. a cms provides an integrated platform for managing academic activities, controlling process flows and providing online access to related information. it improves the efficiency and effectiveness of universities and eventually improves the quality of teaching. however, it is very important to consider the attitudes and perceptions of faculty members and students towards the adoption of a cms, because they may affect the acceptance of this technology.
the aim of this study is to investigate and highlight the user satisfaction level regarding cms quality according to the iso/iec 9126 standard. this work uses a cross-sectional design as the primary research method, and data from 105 students and faculty members were collected at a pakistani public sector university using a questionnaire survey. the responses illustrate that the system functionality is good, reliable, usable, and efficient. however, improvements are necessary in some areas, such as understandability, learnability, operability, and attractiveness.

keywords-software quality; software functionality; cms

i. introduction

information technology applications play a significant role in any organization or company [1]. they control robots and help in healthcare and commerce [2-4], and in recent years they have also reached higher education under the name of cms [5], which basically are enterprise resource planning (erp) programs for institutions [6, 7]. a cms improves the efficiency and effectiveness of the overall organization and ultimately improves the quality of teaching and learning. it shows a holistic picture of university processes [7]. in developing countries, public and private sector universities face competition in the higher education market and want to use information and communication technology (ict) as an edge over other institutions [8]. in the ict landscape, student-oriented processes are important because they reduce operational cost, since students independently manage their academic activities, such as enrollment, or acquire information such as attendance and grades. through the integration of the cms and the web, these services are available at any time. the importance of the cms has also been recognized by accreditation institutions such as the higher education commission (hec) of pakistan, which initiated the hec-funded deployment of cms in different public sector universities [9].
various universities in pakistan are implementing cms to handle their academic processes. assessment and evaluation are conducted to improve the efficiency and effectiveness of any system. there are different kinds of evaluation: (1) formative, which is performed during development, and (2) summative, which is performed after the completion of the system and focuses on the system's effectiveness and on whether it meets the original requirements. this work evaluates the cms on four characteristics: functionality, reliability, usability, and efficiency. these characteristics are part of the iso/iec 9126-1 standard [10, 11], which defines six characteristics of software quality; the remaining two, portability and maintainability, are not directly related to the end-users (students and faculty members). only cms features related to students and faculty, such as student records, gradebook, student self-service, and faculty self-service, were considered in the current survey.

a. functionality

the functionality of the system concerns the complete list of functions that it should provide. table i shows the sub-characteristics of functionality and the respective questions asked to the end-users.

b. reliability

reliability concerns the probability of error-free operation of the software and its ability to maintain its service delivery under specified circumstances. table ii shows the sub-characteristics of reliability and the respective questions asked to the end-users.

corresponding author: abdul hafeez (ahkhan@smiu.edu.pk)

table i. functionality sub-characteristics
sub-characteristic | question
suitability | can the cms do the required tasks?
interoperability | does the cms interact with another system?
security | does the cms prevent unauthorized access?
accuracy | does the cms give results as expected?

table ii. reliability sub-characteristics
sub-characteristic | question
maturity | has the maximum number of errors in the cms been removed over time?
fault tolerance | is the cms able to handle errors?
recoverability | after a failure, can the cms continue working and recover lost data?

c. usability

usability concerns the software system's ease of use and whether it is learnable. in usability, we focus on the user interface and on how easy the software system interface is to operate and to learn, and how attractive it is to the end-users. table iii shows the sub-characteristics of usability and the respective questions asked to the end-users.

table iii. usability sub-characteristics
sub-characteristic | question
understandability | does the user apprehend how to easily use the system?
learnability | can the end-user learn the system easily?
operability | can the user use the system without much effort?
attractiveness | does the system have a good look and feel?

d. efficiency

efficiency concerns the utilization of resources, such as memory, disk space, processor and network, when performing the required operations. table iv shows the sub-characteristics of efficiency and the respective questions asked to the end-users.

table iv. efficiency sub-characteristics
sub-characteristic | question
time behavior | how rapidly does the system respond and answer queries?
resource utilization | does the system efficiently utilize system resources?

ii. methodology

a. research method

there are various methods for evaluating software quality, and selecting the correct one depends on complexity and functionality. this study used the qualitative method for data collection in order to understand the participants' attitudes and experience.
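the quality model evaluated in this survey (tables i-iv) can be captured as a small lookup structure; the following python sketch is purely illustrative — the dict layout and function name are ours, while the characteristic and sub-characteristic names come from the tables:

```python
# Quality model used in the survey, transcribed from tables i-iv.
# ISO/IEC 9126-1 defines the characteristic/sub-characteristic names;
# the dict representation itself is only an illustration.
QUALITY_MODEL = {
    "functionality": ["suitability", "interoperability", "security", "accuracy"],
    "reliability":   ["maturity", "fault tolerance", "recoverability"],
    "usability":     ["understandability", "learnability", "operability", "attractiveness"],
    "efficiency":    ["time behavior", "resource utilization"],
}

def subcharacteristics(characteristic: str) -> list[str]:
    """Return the surveyed sub-characteristics of a quality characteristic."""
    return QUALITY_MODEL[characteristic]

# portability and maintainability are deliberately absent: the study
# excludes them as not directly visible to students and faculty.
total_subcharacteristics = sum(len(v) for v in QUALITY_MODEL.values())  # 13 survey items
```

such a mapping makes it easy to group the questionnaire items per characteristic when the responses are aggregated later in the analysis.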
it was used in order to evaluate the ideas, experience, and beliefs of the participants regarding the quality of the cms.

b. participants

in the survey, data were collected from n=105 students and faculty members. in the sample, 74% of the participants were male and 24% were female, while 20% were faculty members and 80% were students, of which 59% were in their 4th year, 34% in their 3rd year and the rest in their 2nd year. 1st-year students were not selected due to their little experience with the cms.

c. data collection tools

in this study, the evaluation form included two sections. the first section included demographic information, such as gender, age, and department, and the second section consisted of the survey questions. the results showed that this method of assessment is effective in pointing out various software quality issues in the cms.

d. data analysis

the data were analyzed with spss to find how well the cms realized the four considered characteristics of iso/iec 9126-1. the feedback on all sub-characteristics was combined, and the basic mean was obtained.

e. validity and reliability

in this study, to ensure reliability, the data were collected according to ethical research practice. to ensure the utility of the study, faculty and students of a pakistani university who had experience with the cms were selected.

fig. 1. percentage distribution of cms product quality characteristics feedback

table v. descriptive statistics
quality characteristic | n | min | max | mean | sd
functionality | 105 | 27 | 160 | 84 | 50.3
reliability | 105 | 35 | 98 | 63 | 27.1
usability | 105 | 50 | 126 | 84 | 32.5
efficiency | 105 | 17 | 62 | 42 | 17.4

f.
results and discussion

in this section, the data collected from the questionnaire survey are presented and analyzed statistically. the questionnaire was distributed among faculty members and students who were familiar with the cms. in addition, interviews with some faculty members and students were conducted in order to validate the gaps in the questionnaire. figure 1 shows the detailed feedback regarding the quality characteristics of the cms.

four items were combined to determine the functionality factor, and the results are shown in table v. the minimum and maximum scores were 27 and 160, respectively. as can be seen from table v, the overall mean is 84, so it can be concluded that the faculty and students were satisfied with the functionality of the cms. table vi and figure 2 show the detailed feedback on the functionality of the cms. as can be seen in table vi, around 51.5% of the participants' responses fall in the range of agree and strongly agree, which shows satisfaction with the functionality of the studied system. it is worth mentioning that 23.8% of the participants were neutral in their response on the functionality of the system, while 25.1% were unhappy with the functionality of the cms and gave disagree or strongly disagree feedback.

fig. 2. percentage distribution of cms functionality characteristics feedback

fig. 3. percentage distribution of cms reliability characteristics feedback

the reliability of the system was examined and the results are shown in table v. the minimum and maximum scores are 35 and 98, respectively, and the mean is 63. it can be concluded that the faculty and students were not very satisfied with the system's reliability. as can be seen from the results in table vi and figure 3, 42.98% of the participants' responses fall in the range of agree and strongly agree, which shows that the participants are moderately satisfied with the system's reliability.
It is worth mentioning that 23.14% of the participants were neutral in their response and 33% considered the CMS unreliable, giving disagree or strongly disagree feedback. The impact of the system's controls on the freedom of use was examined and the results are shown in Table V. The minimum and maximum scores were 50 and 126 and the mean was 84, which is near the minimum. As can be seen from the results in Table VI and Figure 4, only 37% of the participants' responses fall in the range of agree and strongly agree, which shows that the users were unsatisfied with the usability features. It is worth mentioning that 13% of the participants were neutral in their response on that feature, while 49.2% considered that the CMS is not easy to use and does not provide easy learning, giving disagree or strongly disagree feedback. Finally, the capability of the system to provide efficient usage was examined and the results are shown in Table V. The minimum and maximum scores were 17 and 62 and the mean was 42, from which it can be concluded that the participants were moderately satisfied with the efficiency of the CMS. Table VI and Figure 5 present the details of how the system's efficiency responds to the user. As can be seen from the results in Table VI, 45.98% of the participants' responses fall in the range of agree and strongly agree, which shows that the participants are moderately satisfied with the system's efficiency, 25.23% of the participants were neutral in their response, and only 29% considered the CMS not efficient, giving disagree or strongly disagree feedback.

Fig. 4. Percentage distribution of CMS usability characteristics feedback.

Fig. 5.
Percentage distribution of CMS efficiency characteristics feedback.

TABLE VI. FREQUENCY STATISTICS

Sub-characteristic | Question | Strongly agree (n, %) | Agree (n, %) | Maybe (n, %) | Disagree (n, %) | Strongly disagree (n, %)

Functionality
Suitability | Can the CMS do the required tasks? | 17, 16.19 | 47, 44.76 | 18, 17.1 | 17, 16.19 | 6, 5.71
Security | Does the CMS prevent unauthorized access? | 15, 14.29 | 57, 54.29 | 15, 14.3 | 13, 12.38 | 5, 4.76
Accuracy | Does the CMS give results as expected? | 14, 13.33 | 36, 34.29 | 25, 23.8 | 21, 20 | 8, 7.62
Interoperability | Does the CMS interact with another system? | 8, 7.62 | 20, 19.05 | 42, 40 | 27, 25.71 | 8, 7.62

Reliability
Maturity | Has the maximum number of errors in the CMS been removed over time? | 19, 18.1 | 45, 42.86 | 13, 12.38 | 22, 20.95 | 6, 5.71
Recoverability | After failure, can the CMS continue working and recover lost data? | 9, 8.57 | 30, 28.57 | 38, 36.19 | 22, 20.95 | 6, 5.71
Fault tolerance | Is the CMS able to handle errors? | 7, 6.67 | 23, 21.9 | 24, 22.86 | 27, 25.71 | 23, 21.9

Usability
Understandability | Does the user apprehend how to use the system easily? | 8, 7.62 | 32, 30.48 | 17, 16.19 | 28, 26.67 | 20, 19.1
Learnability | Can the end user learn the system easily? | 12, 11.4 | 29, 27.62 | 14, 13.33 | 34, 32.38 | 15, 14.3
Operability | Can the user use the system without much effort? | 15, 14.3 | 26, 24.76 | 14, 13.33 | 31, 29.52 | 18, 17.1
Attractiveness | Does the system have a good look and feel? | 11, 10.5 | 20, 19.05 | 11, 10.48 | 33, 31.43 | 28, 26.7

Efficiency
Time behavior | How rapidly does the system respond to queries? | 18, 17.1 | 28, 26.67 | 24, 22.86 | 24, 22.86 | 10, 9.52
Resource utilization | Does the system efficiently utilize system resources? | 15, 14.3 | 34, 32.38 | 29, 27.62 | 20, 19.05 | 7, 6.67

III. Conclusion and Recommendations
This paper presented an evaluation of a CMS in four dimensions: functionality, reliability, usability, and efficiency. These characteristics are a major part of ISO/IEC 9126-1. From the evaluation, it was observed that there were no issues related to the functionality of the system regarding suitability, security, accuracy, and interoperability, with 51.5% of the participants' responses falling in the range of agree and strongly agree. In addition, the results show that the participants were moderately satisfied with the reliability of the system regarding maturity, recoverability, and fault tolerance, with around 43% of the participants agreeing or strongly agreeing with the reliability features of the CMS. The participants were not very satisfied with the usability features of the CMS, and only 37% of them agreed or strongly agreed with the effectiveness of usability features such as understandability, learnability, operability, and attractiveness. Finally, it was observed that the participants were also moderately satisfied with the efficiency, and around 43% of them agreed or strongly agreed with the questions regarding the efficiency features. In conclusion, most participants are satisfied with the CMS quality characteristics, but it would be better to enhance its usability features.

Engineering, Technology & Applied Science Research, Vol. 9, No. 4, 2019, 4377-4383
www.etasr.com
Eli-Chukwu: Applications of Artificial Intelligence in Agriculture

Applications of Artificial Intelligence in Agriculture: A Review

Ngozi Clara Eli-Chukwu
Department of Electrical & Electronics Engineering, Alex Ekwueme Federal University, Ndufu Alike, Ebonyi, Nigeria
ngozieli@gmail.com

Abstract—The application of artificial intelligence (AI) has recently become evident in the agricultural sector. The sector faces numerous challenges in maximizing its yield, including improper soil treatment, disease and pest infestation, big data requirements, low output, and a knowledge gap between farmers and technology. The main advantages of AI in agriculture are its flexibility, high performance, accuracy, and cost-effectiveness. This paper presents a review of the applications of AI in soil management, crop management, weed management, and disease management. A special focus is laid on the strengths and limitations of each application and on ways of utilizing expert systems for higher productivity.

Keywords—artificial intelligence; agriculture; soil management; crop management; disease management; weed management; yield

I. Introduction
Agriculture is the bedrock of the sustainability of any economy [1]. It plays a key part in long-term economic growth and structural transformation [2-4], though this may vary by country [5]. In the past, agricultural activities were limited to food and crop production [6], but in the last two decades they have evolved to include the processing, production, marketing, and distribution of crops and livestock products.
Currently, agricultural activities serve as a basic source of livelihood, improving GDP [7], generating national trade, reducing unemployment, providing raw materials for production in other industries, and developing the economy overall [8-10]. With the geometric rise of the global population, it becomes imperative to review agricultural practices with the aim of proffering innovative approaches to sustaining and improving agricultural activities. The introduction of AI to agriculture will be enabled by other technological advances, including big data analytics, robotics, the Internet of Things, the availability of cheap sensors and cameras, drone technology, and even wide-scale internet coverage on geographically dispersed fields. By analyzing soil management data sources such as temperature, weather, soil analysis, moisture, and historic crop performance, AI systems can provide predictive insights into which crop to plant in a given year and when the optimal dates to sow and harvest are in a specific area, thus improving crop yields and decreasing the use of water, fertilizers, and pesticides. Via the application of AI technologies, the impact on natural ecosystems can be reduced and worker safety may increase, which in turn will keep food prices down and help food production keep pace with the increasing population.

II. Consideration Overview
Farming entails a great deal of choices and uncertainties. From season to season the weather varies, the prices of farming materials fluctuate, soil degrades, crops are not viable, weeds suffocate crops, pests damage crops, and the climate changes. Farmers must cope with these uncertainties. Although agricultural practice is broad, this research considers soil, crops, diseases, and weeds as major contributors to agricultural production. It is therefore paramount to review the application of AI to agriculture with respect to soil, crop, disease, and pest management.
• Soil is a critical part of successful agriculture and the original source of the nutrients used to grow crops. Soil is the basis of all production systems in agriculture, forestry, and fishery. It stores water, nutrients, and proteins in order to make them available for proper crop growth and development.
• Crop production plays a crucial role in Nigeria's economy, providing food, raw materials, and employment. In modern times, marketing, processing, distribution, and after-sales service are also accepted as parts of crop production. In places where real income per capita is low, emphasis is laid on crop production and other primary industries, since increased crop production output and productivity tend to contribute substantially to the overall economic development of a country. It is hence appropriate to place greater emphasis on further crop production development.
• As agriculture struggles to support the rapidly growing population, plant diseases reduce the quantity and quality of crop production. Agricultural losses due to post-harvest diseases can be disastrous.
• Weeds constitute one of the major threats to all agricultural activities. Weeds reduce farm and forest productivity, invade crops, smother pastures, and in some cases harm livestock. They aggressively compete with crops for water, nutrients, and sunlight, resulting in reduced crop yield and poor crop quality.

Corresponding author: Ngozi Clara Eli-Chukwu

III. Soil Management
Soil management is an integral part of agricultural activities. A sound knowledge of the various soil types and conditions will enhance crop yield and conserve soil resources. Soil management is the use of operations, practices, and treatments to improve soil performance.
Urban soils may contain pollutants which can be investigated with a traditional soil survey approach [11]. The application of compost and manure improves soil porosity and aggregation; better aggregation indicates the addition of organic materials, which play an important role in preventing soil crust formation. It is possible to adopt alternative tillage systems to prevent soil physical degradation, and the application of organic materials is essential to improving soil quality [12]. The production of vegetables and other edible crops is often significantly affected by several soil-borne pathogens that require control through soil management [13]. Sensitivity to soil degradation is implicit in the assessment of the sustainability of land management practices, with the recognition that soils vary in their ability to resist change and recover [14]. A summary of AI soil management techniques is shown in Table I.

Management-oriented modeling (MOM) minimizes nitrate leaching: it consists of a generator of plausible management alternatives, a simulator that evaluates each alternative, and an evaluator that determines which alternative meets the user-weighted multiple criteria. MOM uses "hill-climbing" as a strategic search method and "best-first" as a tactical search method to find the shortest path from start nodes to goals [15]. Knowledge engineering for constructing the soil risk characterization decision support system (SRC-DSS) involves three stages: knowledge acquisition, conceptual design, and system implementation [16]. An artificial neural network (ANN) model predicts soil texture (sand, clay, and silt contents) based on attributes obtained from existing coarse-resolution soil maps combined with hydrographic parameters derived from a digital elevation model (DEM) [21]. The dynamics of soil moisture are characterized and estimated by a remote sensing device embedded in a higher-order neural network (HONN) [22].
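The paper does not reproduce MOM's search code. Purely to illustrate the best-first idea cited above from [15], a generic greedy best-first search over a hypothetical toy state space (the node names and the `depth` heuristic are invented, standing in for MOM's management alternatives) might look like:

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node whose
    heuristic h suggests it is closest to the goal. Returns a path or None."""
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Hypothetical toy state space: edges between abstract management states.
graph = {"start": ["a", "b"], "a": ["goal"], "b": ["a"], "goal": []}
depth = {"start": 2, "a": 1, "b": 2, "goal": 0}  # guessed distance-to-goal heuristic

path = best_first("start", "goal", lambda n: graph[n], lambda n: depth[n])
print(path)  # -> ['start', 'a', 'goal']
```

A hill-climbing outer loop, as described for MOM, would repeatedly replace the current alternative with a better-scoring neighbor; the best-first routine above is the tactical path-finding piece.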
IV. Crop Management
The crop management techniques are summarized in Table II. Crop management starts with sowing and continues with monitoring growth, harvesting, and crop storage and distribution; it can be summarized as the activities that improve the growth and yield of agricultural products. An in-depth understanding of the classes of crops, according to their timing and the soil type in which they thrive, will certainly increase crop yield. Precision crop management (PCM) is an agricultural management system designed to target crop and soil inputs according to field requirements in order to optimize profitability and protect the environment. PCM has been hampered by a lack of timely, distributed information on crop and soil conditions [26]. Farmers must combine various crop management strategies to cope with water deficits resulting from soil, weather, or limited irrigation. Flexible crop management systems based on decision rules should be preferred, and the timing, intensity, and predictability of drought are important features for choosing among cropping alternatives [27].

TABLE I. AI IN SOIL MANAGEMENT SUMMARY

Application | Technique | Strength | Limitation
[15] | MOM | Minimizes nitrate leaching, maximizes production. | Takes time; limited only to nitrogen.
[16] | Fuzzy logic (SRC-DSS) | Can classify soil according to associated risks. | Needs big data; only a few cases were studied.
[17] | DSS | Reduces erosion and sedimentary yield. | Requires big data for training.
[18] | ANN | Can predict soil enzyme activity; accurately predicts and classifies soil structure. | Measures only a few soil enzymes; focuses more on classification than on improving soil performance.
[19] | ANN | Can predict monthly mean soil temperature. | Considers only temperature as a factor for soil performance.
[20] | ANN | Predicts soil texture. | Requires big data for training; restricted in its areas of implementation.
[21] | ANN | Able to predict soil moisture. | The prediction will fail with time, as weather conditions are hardly predictable.
[22] | ANN | Successfully reports soil texture. | Does not improve soil texture or offer a solution for bad soil texture.
[23] | ANN | Cost-effective, saves time, has 92% accuracy. | Requires big data.
[24] | ANN | Can estimate soil nutrients after erosion. | Its estimate is restricted to NH4 only.

A proper understanding of weather patterns helps in the decision-making process and results in high-quality crop yields [28]. PROLOG utilizes weather data, machinery capacities, labor availability, and information on permissible and prioritized operators, tractors, and implements to evaluate the operational behavior of a farm system. It also estimates crop production, gross revenue, and net profit for individual fields and for the whole farm [30]. A crop prediction methodology predicts a suitable crop by sensing various soil and atmospheric parameters, such as soil type, pH, nitrogen, phosphate, potassium, organic carbon, calcium, magnesium, sulfur, manganese, copper, iron, depth, temperature, rainfall, and humidity [31]. Demeter is a computer-controlled speed-rowing machine equipped with a pair of video cameras and a global positioning sensor for navigation. It is capable of planning harvesting operations for an entire field and then executing its plan by cutting crop rows, turning to cut successive rows, repositioning itself in the field, and detecting unexpected obstacles [32]. The use of AI in harvesting cucumbers comprises the individual hardware and software components of the robot, including the autonomous vehicle, the manipulator, the end-effector, the two computer vision systems for detection and 3D imaging of the fruit and the environment, and, finally, a control scheme that generates collision-free motions for the manipulator during harvesting [33].
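To make the ANN-based predictors discussed above concrete, here is a minimal one-hidden-layer network trained by gradient descent. It is a generic sketch, not the actual model of [31] or [38]; the features (standing in for soil/weather parameters such as pH, nitrogen, rainfall, temperature) and the "yield" target are synthetic:

```python
import numpy as np

# Minimal one-hidden-layer regression network, in the spirit of the ANN
# yield predictors reviewed above. All data here are synthetic/illustrative.
rng = np.random.default_rng(0)

# 40 samples of 4 made-up soil/weather features with a linear-ish fake yield.
X = rng.uniform(0, 1, size=(40, 4))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * X[:, 3])[:, None]

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)      # hidden layer
    return h, h @ W2 + b2         # linear output: predicted yield

losses, lr = [], 0.1
for _ in range(300):
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error loss.
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")  # loss should decrease
```

Real systems of this kind differ mainly in scale and inputs (field-specific rainfall, sensed soil chemistry), not in this basic training loop.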
Field-specific rainfall data and weather variables can be used for each location. Adjusting ANN parameters affects the accuracy of rice yield predictions; smaller data sets require fewer hidden nodes and lower learning rates in model optimization [38].

TABLE II. AI IN CROP MANAGEMENT SUMMARY

Application | Technique | Strength | Limitation
[29] | CALEX | Can formulate scheduling guidelines for crop management activities. | Takes time.
[30] | PROLOG | Removes less-used farm tools from the farm. | Location-specific.
[31] | ANN | Predicts crop yield. | Captures only weather as a factor for crop yield.
[32] | Robotics (Demeter) | Can harvest up to 40 hectares of crop. | Expensive; uses a lot of fuel.
[33] | Robotics | Has an 80% success rate in harvesting crops. | Slow picking speed and accuracy.
[34] | ANN | Above 90% success rate in detecting crop nutrition disorders. | Only a small number of symptoms were considered.
[35] | Fuzzy cognitive map | Predicts cotton yield and improves crop decision management. | Relatively slow.
[36] | ANN | Can predict the response of crops to soil moisture and salinity. | Considers only soil temperature and texture as factors.
[37] | ANN and fuzzy logic | Reduces insects that attack crops. | Unable to differentiate between crop and weed.
[38] | ANN | Can accurately predict rice yield. | Time-consuming; limited to a particular climate.

V. Disease Management
To achieve an optimal yield in the agricultural harvest, disease control is necessary. Plant and animal diseases are a major limiting factor on the increase of yield. Several factors play a role in the incubation of the diseases that attack plants and animals, including genetics, soil type, rain, dry weather, wind, and temperature. Due to these factors and the unsteady nature of some disease-causing influences, managing their effects is a big challenge, especially in large-scale farming. Table III lists the AI applications in disease management available in the literature.
To effectively control diseases and minimize losses, a farmer should adopt an integrated disease control and management model that includes physical, chemical, and biological measures [39]. Achieving this is time-consuming and not at all cost-effective [40], hence the need for an AI approach to disease control and management. An explanation block (EB) gives a clear view of the logic followed by the kernel of the expert system [42]. A novel approach of rule promotion based on fuzzy logic is used for drawing intelligent inferences for crop disease management, and a text-to-speech (TTS) converter provides a talking user interface, yielding a highly effective interactive web interface for live interactions [45]. A rule-based, forward-chaining inference engine has been used for the development of a system that helps detect diseases and provide treatment suggestions [46].

TABLE III. AI IN DISEASE MANAGEMENT SUMMARY

Application | Technique | Strength | Limitation
[42] | Computer vision system (CVS), genetic algorithm (GA), ANN | Works at high speed; can multitask. | Dimension-based detection, which may affect good species.
[42] | Rule-based expert system, database (DB) | Accurate results in the tested environment. | Inefficacy of the DB when implemented at large scale.
[43] | Fuzzy logic (FL), web GIS | Cost-effective, eco-friendly. | Inefficiency due to scattered distribution; takes time to locate and disperse data; the location of the data is determined by a mobile browser.
[44] | Web-based intelligent disease diagnosis system (WIDDS) using FL | Good accuracy; responds swiftly to the nature of crop diseases. | Limited usage, as it requires internet service; its potency cannot be ascertained, as only 4 seed crops were considered.
[45] | FL and TTS converter | Resolves plant pathological problems quickly. | Requires high-speed internet; uses a voice service as its multimedia interface.
[46] | Rule-based expert system for disease detection | Faster treatment, as diseases are diagnosed faster; cost-effective due to its preventive approach. | Time-consuming; needs constant monitoring to check whether pests have built immunity to the preventive measure.
[47] | ANN, GIS | 95% accuracy. | Internet-based; some rural farmers will not have access.
[48] | FuzzyXpest | Provides pest information for farmers; supported by internet services; high precision in forecasts. | Internet-dependent.
[49] | Web-based expert system | High performance. | Internet- and web-based.
[50] | ANN | Has a prediction rate above 90%. | The ANN does not kill infections or reduce their effects.

VI. Weed Management
Weeds consistently reduce the farmers' expected profit and yield [51]. A report confirms a 50% reduction in yield for dried beans and corn crops if weed infestations are not controlled [51]. There is about a 48% loss in wheat yield due to weed competition [52, 53], and these losses may at times rise up to 60% [54]. A study on the impact of weeds on soybean showed an 8%-55% reduction in yield [55], while a study on yield losses in sesame crops puts them at about 50%-75% [56]. The fluctuation in yield losses may be attributed to the length of exposure of the crops to the weeds [57, 58] and to the spatial heterogeneity of weeds [59]. Beyond these, weeds have both positive and negative effects on the ecosystem. According to a Weed Science Society of America (WSSA) report, some species of weeds worsen flooding during hurricanes, some can pave their way during rampant fires, some cause irreparable liver damage if consumed, and many muscle out plants or crops by competing for water, nutrients, and sunlight. Some weeds are poisonous and cause allergic reactions or may even threaten public health.
Table IV lists a summary of AI uses in weed management.

TABLE IV. AI IN WEED MANAGEMENT SUMMARY

Application | Technique | Strength | Limitation
[61] | ANN, GA | High performance; reduces trial and error. | Requires big data.
[62] | Optimization using invasive weed optimization (IWO), ANN | Cost-effective, enhanced performance. | Adaptation challenges with new data.
[63] | Mechanical weed control; robotics; sensor machine learning | Saves time and removes resistant weeds. | Expensive; constant use of heavy machinery will reduce soil productivity.
[64] | UAV, GA | Can quickly and efficiently monitor weeds. | Has little or no control over weeds; expensive.
[65] | SALOMA expert system for evaluation, prediction, and weed management | High adaptation rate and prediction level. | Requires big data and usage expertise.
[66] | Support vector machine (SVM), ANN | Quickly detects stress in crops, prompting timely site-specific remedies. | Detects only low levels of nitrogen.
[67] | Digital image analysis (DIA), GPS | Has above 60% accuracy and success rate. | Its success was achieved after 4 years, so it is really time-consuming.
[68] | UAV | High rate of weed detection within a short period of time. | Really expensive and requires vast human expertise.
[69] | Learning vector quantization (LVQ), ANN | High weed recognition rate with short processing time. | The method of data input used affected the AI's performance.

Intensive management with herbicides has been deployed over the past decades to reduce the effect of weeds on crops. However, even with this management pattern, crop losses due to weeds in western Canada field crops were predicted to exceed $500 million annually [60], hence the need for a more expert weed management technique to compensate for this loss [51]. A system can utilize unmanned aerial vehicle (UAV) imagery to divide the image, compute the vegetation indexes and convert them to binary, detect crop rows, optimize parameters, and learn a classification model.
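The first steps of that UAV pipeline can be sketched with the excess-green index (ExG = 2g - r - b), a common vegetation index, followed by binarization. This is an illustrative sketch, not the method of [64]: the "image" is random noise standing in for an orthomosaic patch, and the mean-based threshold is a stand-in for a tuned (e.g. Otsu) threshold:

```python
import numpy as np

# Sketch of UAV weed-mapping preprocessing: vegetation index + binarization.
# A real pipeline would load UAV imagery and continue with crop-row
# detection and classification; here the input is a random RGB patch.
rng = np.random.default_rng(1)
img = rng.uniform(0, 1, size=(64, 64, 3))   # fake RGB orthomosaic patch

# Normalize channels so r + g + b = 1 per pixel, then compute ExG = 2g - r - b.
s = img.sum(axis=2, keepdims=True) + 1e-8
norm = img / s
r, g, b = norm[..., 0], norm[..., 1], norm[..., 2]
exg = 2 * g - r - b

# Binarize: pixels above the mean index are marked as vegetation.
mask = exg > exg.mean()
print(f"vegetation fraction: {mask.mean():.2f}")
```

On real imagery, the resulting binary mask is what the row-detection step operates on, separating in-row (crop) from between-row (weed) vegetation pixels.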
Since crops are usually organized in rows, the use of a crop-row detection algorithm helps to properly separate weed and crop pixels, which is a common difficulty given the spectral similarity of the two [64]. Weed control in sugar beet, maize, winter wheat, and winter barley can be performed by applying online weed detection using digital image analysis of images taken by a UAV (drone), computer-based decision making, and global positioning system (GPS)-controlled patch spraying [67]. The drone in [68] travelled at a speed of 1.2 km/h, with execution times of 58.10 ms and 37.44 ms to deliver the tomato and weed locations, respectively, to the spray controller.

VII. Curtailing Challenges of AI in Agriculture
Expert systems are tools for agricultural management, since they can provide site-specific, integrated, and interpreted advice. However, the development of expert systems for agriculture is fairly recent, and the use of these systems in commercial agriculture is rare to date [70]. Although AI has brought some remarkable improvements to the agricultural sector, it still has a below-average impact on agricultural activities when compared to its potential and its impact in other sectors. More still needs to be done to improve agricultural activities using AI, as there are many limitations to its implementation.

A. Limitation 1: Response Time and Accuracy
A major attribute of an intelligent or expert system is its ability to execute tasks accurately in a very short time. Most systems fall short in response time, accuracy, or both. A system's delay affects a user's selection of task strategy. Strategy selection is hypothesized to be based on a cost function combining two factors: (1) the effort required to synchronize input with system availability, and (2) the accuracy level afforded. People, seeking to minimize effort and maximize accuracy, choose among three strategies: automatic performance, pacing, and monitoring [71].

B.
Limitation 2: Big Data Requirements
The strength of an intelligent agent is also measured by the volume of its input data. A real-time AI system needs to monitor an immense volume of data; the system must filter out much of the incoming data while remaining responsive to important or unexpected events [72]. In-depth knowledge of the system's task is required from a field expert, and only very relevant data should be used, improving the system's speed and accuracy. The development of an agricultural expert system requires the combined efforts of specialists from many fields of agriculture, and it must be developed with the cooperation of the growers who will use it [70].

C. Limitation 3: Method of Implementation
The beauty of any expert system lies in its execution methodology. Since it uses big data, the method of look-up and training should be properly defined for speed and accuracy.

D. Limitation 4: High Data Cost
Most AI systems are internet-based, which in turn reduces or restricts their usage, particularly in remote or rural areas. The government can support farmers by designing a web-service-enabling device with a lower tariff that works uniquely with AI systems for farmers. A form of "how to use" orientation (training and re-training) would also really help farmers adapt to the use of AI on the farm.

E. Limitation 5: Flexibility
Flexibility is a strong attribute of any sound AI system. Much progress has been made in applying AI techniques to particular isolated tasks, but the important theme at the leading edge of AI-based robotics technology seems to be the interfacing of the subsystems into an integrated environment. This requires flexibility of the subsystems themselves [73].
It should also have expansive capabilities to accommodate more user data from the field expert.

VIII. The Future of AI in Agriculture
The global population is expected to reach more than nine billion by 2050, which will require an increase in agricultural production of 70% in order to fulfil demand. Only about 10% of this increased production may come from unused land, and the rest must be fulfilled by intensification of current production. In this context, the use of the latest technological solutions to make farming more efficient remains a great necessity. Present strategies to intensify agricultural production require high energy inputs, and the market demands high-quality food [74]. Robotics and autonomous systems (RAS) are set to transform global industries. These technologies will have a great impact on large sectors of the economy with relatively low productivity, such as agro-food (food production from the farm to the retail shelf). The UK agro-food chain generates over £108bn p.a., with 3.7m employees in a truly international industry yielding £20bn of exports in 2016 [75].

References
[1] M. A. Kekane, "Indian agriculture-status, importance and role in Indian economy", International Journal of Agriculture and Food Science Technology, Vol. 4, No. 4, pp. 343-346, 2013
[2] B. F. Johnston, P. Kilby, Agriculture and Structural Transformation: Economic Strategies in Late-Developing Countries, Oxford University Press, 1975
[3] S. Kuznets, "Modern economic growth: findings and reflections", American Economic Association, Vol. 63, No. 3, pp. 247-258, 1973
[4] M. Syrquin, "Patterns on structural change", in: Handbook of Development Economics, Vol. 1, Elsevier, 1988
[5] R. Dekle, G. Vandenbroucke, "A quantitative analysis of China's structural transformation", Journal of Economic Dynamics and Control, Vol. 36, No. 1, pp. 119-135, 2012
[6] M. Fan, J. Shen, L. Yuan, R. Jiang, X. Chen, W. J. Davies, F.
zhang, “improving crop productivity and resource use efficiency to ensure food security and environmental quality in china”, journal of experimental botany, vol. 63, no. 1, pp. 13-24, 2012 [7] o. oyakhilomen, r. g. zibah, “agricultural production and economic growth in nigeria: implication for rural poverty alleviation”, quarterly journal of international agriculture, vol. 53, no. 3, pp. 207-223, 2014 [8] t. o. awokuse, “does agriculture really matter for economic growth in developing countries?”, the american agricultural economics association annual meeting, milwaukee, newark, usa, july 28, 2009 [9] o. badiene, sustaining and accelerating africa’s agricultural growth recovery in the context of changing global food prices, ifpri policy brief 9, 2008 [10] s. block, c. timmer, agriculture and economic growth: conceptual issues and the kenyan experience, harvard institute for international development, 1994 [11] c. r. d. kimpe, j. l. morel, “urban soil management: a growing concern”, soil science, vol. 165, no. 1, pp. 31-40, 2000 [12] m. pagliai, n. vignozzi, s. pellegrini, “soil structure and the effect of management practices”, soil and tillage research, vol. 79, no. 2, pp. 131-143, 2004 [13] g. s. abawi, t. l. widmer, “impact of soil health management practices on soilborne pathogens, nematodes and root diseases of vegetable crops”, applied soil ecology, vol. 15, no. 1, pp. 37-47, 2000 [14] j. k. syers, managing soil for long-term productivity, the royal society, 1997 [15] m. li, r. yost, “management-oriented modelling: optimizing nitrogen management with artificial intelligence”, agricultural systems, vol. 65, no. 1, pp. 1-27, 2000 [16] e. m. lopez, m. garcia, m. schuhmacher, j. l. domingo, “a fuzzy expert system for soil characterization”, environment international, vol. 34, no. 7, pp. 950-958, 2008 [17] h. montas, c. a. madramootoo, “a decision support system for soil conservation planning”, computers and electronics in agriculture, vol. 7, no. 3, pp. 
187-202, 1992 [18] s. tajik, s. ayoubi, f. nourbakhsh, “prediction of soil enzymes activity by digital terrain analysis: comparing artificial neural network and multiple linear regression models”, environmental engineering science, vol. 29, no. 8, pp. 798-806, 2012 [19] e. r. levine, d. s. kimes, v. g. sigillito, “classifying soil structure using neural networks”, ecological modelling, vol. 92, no. 1, pp. 101-108, 1996 [20] m. bilgili, “the use of artificial neural network for forecasting the monthly mean soil temperature in adana, turkey”, turkish journal of agriculture and forestry, vol. 35, no. 1, pp. 83-93, 2011 [21] z. zhao, t. l. chow, h. w. rees, q. yang, z. xing, f. r. meng, “predict soil texture distributions using an artificial neural network model”, computers and electronics in agriculture, vol. 65, no. 1, pp. 36-48, 2009 [22] a. elshorbagy, k. parasuraman, “on the relevance of using artificial neural networks for estimating soil moisture content”, journal of hydrology, vol. 362, no. 1-2, pp. 1-18, 2008 [23] d. h. chang, s. islam, “estimation of soil physical properties using remote sensing and artificial neural network”, remote sensing of environment, vol. 74, no. 3, pp. 534-544, 2000 [24] t. behrens, h. forster, t. scholten, u. steinrucken, e. d. spies, m. goldschmitt, “digital soil mapping using artificial neural networks”, journal of plant nutrition and soil science, vol. 168, no. 1, pp. 21-33, 2005 [25] m. kim, j. e. gilley, “artificial neural network estimation of soil erosion and nutrient concentrations in runoff from land application areas”, computers and electronics in agriculture, vol. 64, no. 2, pp. 268-275, 2008 [26] m. s. moran, y. inoue, e. m. barnes, “opportunities and limitations for image-based remote sensing in precision crop management”, remote sensing of environment, vol. 61, no. 3, pp. 319-346, 1997 [27] p. debaeke, a. aboudrare, “adaptation of crop management to water-limited environments”, european journal of agronomy, vol. 21, no.
4, pp. 433-446, 2004 [28] c. aubry, f. papy, a. capillon, “modelling decision-making processes for annual crop management”, agricultural systems, vol. 56, no. 1, pp. 45-65, 1998 [29] r. e. plant, “an artificial intelligence based method for scheduling crop management actions”, agricultural systems, vol. 31, no. 1, pp. 127-155, 1989 [30] h. lal, j. w. jones, r. m. peart, w. d. shoup, “farmsys-a whole-farm machinery management decision support system”, agricultural systems, vol. 38, no. 3, pp. 257-273, 1992 [31] s. s. snehal, s. v. sandeep, “agricultural crop yield prediction using artificial neural network approach”, international journal of innovative research in electrical, electronics, instrumentation and control engineering, vol. 2, no. 1, pp. 683-686, 2014 [32] t. pilarski, m. happold, h. pangels, m. ollis, k. fitzpatrick, a. stentz, the demeter system for automated harvesting, springer, 2002 [33] e. j. v. henten, j. hemming, b. a. j. v. tuijl, j. g. kornet, j. meuleman, j. bontsema, e. a. v. os, an autonomous robot for harvesting cucumbers in greenhouses, springer, 2002 [34] h. song, y. he, “crop nutrition diagnosis expert system based on artificial neural networks”, 3rd international conference on information technology and applications, sydney, australia, july 4–7, 2005 [35] e. i. papageorgiou, a. t. markinos, t. a. gemtos, “fuzzy cognitive map based approach for predicting crop production as a basis for decision support system in precision agriculture application”, applied soft computing, vol. 11, no. 4, pp. 3643-3657, 2011 [36] x. dai, z. huo, h. wang, “simulation of response of crop yield to soil moisture and salinity with artificial neural network”, field crops research, vol. 121, no. 3, pp. 441-449, 2011 [37] c. c. yang, s. o. prasher, j. a. landry, h. s.
ramaswamy, “development of herbicide application map using artificial neural network and fuzzy logic”, agricultural systems, vol. 76, no. 2, pp. 561-574, 2003 [38] b. ji, y. sun, s. yang, j. wan, “artificial neural networks for rice yield prediction in mountainous regions”, journal of agricultural science, vol. 145, no. 3, pp. 249-261, 2007 [39] bea, value added by industry as a percentage of gross domestic product, available at: https://apps.bea.gov/itable/itable.cfm?reqid=51&step=1#reqid=51&step=51&isuri=1&5114=a&5102=5, 2018 [40] weed science society of america, facts about weeds, available at: http://wssa.net/wp-content/uploads/wssa-fact-sheetfinal.pdf [41] j. fang, c. zhang, s. wang, “application of genetic algorithm (ga) trained artificial neural network to identify tomatoes with physiological diseases”, international conference on computer and computing technologies in agriculture, wuyishan, china, august 18-20, 2007 [42] k. balleda, d. satyanvesh, n. v. s. s. p. sampath, k. t. n. varma, p. k. baruah, “agpest: an efficient rule-based expert system to prevent pest diseases of rice & wheat crops”, 8th international conference on intelligent systems and control, coimbatore, india, january 10–11, 2014 [43] j. jesus, t. panagopoulos, a. neves, “fuzzy logic and geographic information systems for pest control in olive culture”, 4th iasme/wseas international conference on energy, environment, ecosystems & sustainable development, algarve, portugal, june 11–13, 2008 [44] s. kolhe, r. kamal, h. s. saini, g. k. gupta, “a web-based intelligent disease-diagnosis system using a new fuzzy-logic based approach for drawing the interferences in crops”, computers and electronics in agriculture, vol. 76, no. 1, pp. 16-27, 2011 [45] s. kolhe, r. kamal, h. s. saini, g. k. gupta, “an intelligent multimedia interface for fuzzy-logic based inference in crops”, expert systems with applications, vol. 38, no. 12, pp. 14592-14601, 2011 [46] m. y. munirah, m. rozlini, y. m.
siti, “an expert system development: its application on diagnosing oyster mushroom diseases”, 13th international conference on control, automation and systems, gwangju, south korea, october 20-23, 2013 [47] g. liu, x. yang, y. ge, y. miao, “an artificial neural network–based expert system for fruit tree disease and insect pest diagnosis”, international conference on networking, sensing and control, lauderdale, usa, april 23–25, 2006 [48] f. siraj, n. arbaiy, “integrated pest management system using fuzzy expert system”, knowledge management international conference & exhibition, kuala lumpur, malaysia, june 6–8, 2006 [49] p. virparia, “a web based fuzzy expert system for insect pest management in groundnut crop ‘prajna’”, journal of pure & applied sciences, vol. 15, pp. 36-41, 2007 [50] x. wang, m. zhang, j. zhu, s. geng, “spectral prediction of phytophthora infestans infection on tomatoes using artificial neural network”, international journal of remote sensing, vol. 29, no. 6, pp. 1693-1706, 2006 [51] k. n. harker, “survey of yield losses due to weeds in central alberta”, canadian journal of plant science, vol. 81, no. 2, pp. 339–342, 2001 [52] m. khan, n. haq, “wheat crop yield loss assessment due to weeds”, national agricultural research centre, vol. 18, no. 4, pp. 449–453, 2002 [53] s. fahad, s. hussain, b. s. chauhan, s. saud, c. wu, s. hassan, m. tanveer, a. jan, j. huang, “weed growth and crop yield loss in wheat as influenced by row spacing and weed emergence times”, crop protection, vol. 71, pp. 101–108, 2015 [54] a. n. rao, s. p. wani, j. k. ladha, weed management research in india-an analysis of the past and outlook for future, icar, 2014 [55] a. datta, h. ullah, n. tursun, t. pornprom, s. z. knezevic, b. s. chauhan, “managing weeds using crop competition in soybean [glycine max (l.) merr.]”, crop protection, vol. 95, pp. 60–68, 2017 [56] t.
mruthul, chemical weed management in sesame (sesamum indicum l.), msc thesis, college of agriculture, raichur, university of agricultural sciences, 2015 [57] c. j. swanton, r. nkoa, r. e. blackshaw, “experimental methods for crop-weed competition studies”, weed science society of america, vol. 63, no. 1, pp. 2–11, 2015 [58] p. jha, v. kumar, r. k. godara, b. s. chauhan, “weed management using crop competition in the united states: a review”, crop protection, vol. 95, pp. 31–37, 2017 [59] p. milberg, e. hallgren, “yield loss due to weeds in cereals and its large-scale variability in sweden”, field crops research, vol. 86, no. 2–3, pp. 199–209, 2004 [60] c. j. swanton, k. n. harker, r. l. anderson, “crop losses due to weeds in canada”, weed technology, vol. 7, no. 2, pp. 537–542, 1993 [61] a. m. tobal, s. a. mokhtar, “weeds identification using evolutionary artificial intelligence algorithm”, journal of computer science, vol. 10, no. 8, pp. 1355-1361, 2014 [62] p. moallem, n. razmjooy, “a multi-layer perceptron neural network trained by invasive weed optimization for potato color image segmentation”, trends in applied sciences research, vol. 7, no. 6, pp. 445-455, 2012 [63] m. brazeau, “fighting weeds: can we reduce, or even eliminate, herbicides by utilizing robotics and ai”, available at: https://geneticliteracyproject.org/2018/12/12/fighting-weeds-can-wereduce-or-even-eliminate-herbicide-use-through-robotics-and-ai/, 2018 [64] m. p. ortiz, p. a. gutierrez, j. m. pena, j. t. sanchez, f. l. granados, c. h. martinez, “machine learning paradigms for weed mapping via unmanned aerial vehicles”, symposium series on computational intelligence, athens, greece, december 6–9, 2016 [65] l. stigliani, c. resina, “seloma: expert system for weed management in herbicide-intensive crops”, weed technology, vol. 7, no. 3, pp. 550-559, 1993 [66] y. karimi, s. o. prasher, r. m. patel, s. h.
kim, “application of support vector machine technology for weed and nitrogen stress detection in corn”, computers and electronics in agriculture, vol. 51, no. 1-2, pp. 99-109, 2006 [67] r. gerhards, s. christensen, “real-time weed detection, decision-making and patch-spraying in maize, sugarbeet, winter wheat and winter barley”, wiley online library, vol. 43, no. 6, pp. 385-392, 2003 [68] f. l. granados, “weed detection for site-specific weed management: mapping and real-time approaches”, weed research, vol. 51, no. 1, pp. 1-11, 2011 [69] c. c. yang, s. o. prasher, j. laundry, h. s. ramaswamy, “development of neural networks for weed recognition in corn fields”, american society of agricultural and biological engineers, vol. 45, no. 3, pp. 859-864, 2002 [70] e. g. rajotte, t. bowser, j. w. travis, r. m. crassweller, w. musser, d. laughland, c. sachs, “implementation and adoption of an agricultural expert system: the penn state apple orchard consultant”, in: international symposium on computer modelling in fruit research and orchard management, ishs, 1992 [71] s. l. teal, a. i. rudnicky, “a performance model of system delay and user strategy selection”, conference on human factors in computing systems, california, usa, may 3-7, 1992 [72] r. washington, b. h. roth, “input data management in real-time ai system”, 11th international joint conference on artificial intelligence, michigan, usa, august 20-25, 1989 [73] p. mowforth, i. bratko, ai and robotics: flexibility and integration, cambridge university press, 1987 [74] d. g. panpatte, artificial intelligence in agriculture: an emerging era of research, anand agricultural university, 2018 [75] t. duckett, s. pearson, s. blackmore, b.
grieve, agricultural robotics: the future of robotic agriculture, uk-ras, 2018
ETASR Engineering, Technology & Applied Science Research Vol. 2, No. 2, 2012, 209-215 209 www.etasr.com Yu and Wang: A Novel Three Dimension Autonomous Chaotic System with a Quadratic …
A Novel Three Dimension Autonomous Chaotic System with a Quadratic Exponential Nonlinear Term
Fei Yu, College of Information Science and Engineering, Hunan University, Changsha, China, yufeiyfyf@yahoo.com.cn
Chunhua Wang, College of Information Science and Engineering, Hunan University, Changsha, China, wch1227164@sina.com
Abstract—A novel three dimension autonomous (3D) chaotic system with a quadratic exponential nonlinear term and a quadratic cross-product term is described in this paper. The basic dynamical properties of the new attractor are studied. The forming mechanism of its compound structure, obtained by merging together two simple attractors after performing one mirror operation, is investigated by detailed numerical as well as theoretical analysis. Finally, the exponential operation circuit and its temperature-compensation circuit, which make the new system more applicable from a practical engineering perspective, are investigated.
Keywords—3D chaotic system; exponential nonlinear term; exponential operation circuit; temperature-compensation circuit
I. Introduction
Since Lorenz found the first chaotic attractor in a system of three first-order autonomous ordinary differential equations (ODEs) while studying atmospheric convection in 1963 [1], many new three dimension (3D) chaotic attractors have been proposed in the last three decades, such as the Rossler system [2], the Chen system [3], the Lü system [4], the Liu system [5], and the generalized Lorenz system family [6]. The complicated dynamic properties of these chaotic systems are all obtained from quadratic cross-product nonlinear terms on the right-hand side of the ODEs.
Recently, Wei and Yang [7] revealed a 3D autonomous chaotic attractor with a nonlinear term in the form of an exponential function on the right-hand side of the ODEs,
ẋ = ay − ax, ẏ = −by + mxz, ż = n − e^(xy),
where the existence of singularly degenerate heteroclinic cycles for a suitable choice of the parameters was investigated. Chaotic attractors with a quadratic exponential nonlinear term in three ODEs have never been found so far. In this paper, a novel chaotic attractor is proposed. It is a 3D autonomous system which relies on a quadratic exponential nonlinear term and a quadratic cross-product term to introduce the nonlinearity necessary for folding trajectories. According to detailed numerical as well as theoretical analysis, the attractor obtained from the new system is also a two-scroll attractor exhibiting complex chaotic dynamics [5]. The chaotic attractor is similar to the Lorenz chaotic attractor, but not equivalent to it in topological structure. The nonlinear dynamic properties of this system are studied by means of nonlinear dynamics theory, numerical simulation, Lyapunov exponents, Poincaré mapping, fractal dimension, continuous spectrum and bifurcation diagrams. The compound structure of the two-scroll attractor, obtained by merging together two simple attractors after performing one mirror operation, is explored here. Finally, the exponential operation circuit with temperature compensation for the new chaotic system is designed.
II. Novel 3D Autonomous Chaotic System
The novel 3D autonomous chaotic system is expressed as follows:
ẋ = a(y − x), ẏ = bx − cxz, ż = e^(xy) − dz, (1)
where a, b, c, d are all constant coefficients with a, b, c, d > 0, and x, y, z are the state variables. There are six terms on the right-hand side, but the dynamics rely mainly on two quadratic nonlinearities: the quadratic exponential term e^(xy) and the quadratic cross-product term xz.
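As a quick numerical sanity check, system (1) can be integrated with an off-the-shelf ODE solver. The following is a minimal sketch (the parameter values and initial state are the ones used in the paper's simulations):

```python
import numpy as np
from scipy.integrate import solve_ivp

# System (1): dx/dt = a(y - x), dy/dt = b*x - c*x*z, dz/dt = exp(x*y) - d*z
a, b, c, d = 10.0, 40.0, 2.0, 2.5

def rhs(t, s):
    x, y, z = s
    return [a * (y - x), b * x - c * x * z, np.exp(x * y) - d * z]

sol = solve_ivp(rhs, (0.0, 10.0), [2.2, 2.4, 28.0], rtol=1e-9, atol=1e-9)

# The trajectory should remain bounded on the two-scroll attractor;
# plotting sol.y[0] against sol.y[2] gives a figure-1-style phase portrait.
print(sol.success, float(np.max(np.abs(sol.y))))
```

Tight tolerances matter here: because of the e^(xy) term, a loosely integrated trajectory can drift off the attractor and overflow numerically.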
System (1) can generate a new chaotic attractor for the parameters a = 10, b = 40, c = 2, d = 2.5 with the initial conditions [2.2, 2.4, 28]^T. The chaotic attractor is displayed in Figure 1. The new attractor exhibits interesting, complex and abundant chaotic dynamical behavior, similar to the Lorenz chaotic attractor but different from that of the Lorenz system or any existing system.
Fig. 1. Phase portraits of the new attractor.
III. Basic Properties of the New System
A. Equilibria
Let
a(y − x) = 0, bx − cxz = 0, e^(xy) − dz = 0. (2)
If db > c and c ≠ 0, the system has two equilibrium points, described respectively by
E+ = (√(ln(db/c)), √(ln(db/c)), b/c), E− = (−√(ln(db/c)), −√(ln(db/c)), b/c).
When a = 10, b = 40, c = 2, d = 2.5, solving these nonlinear algebraic equations gives E+ = (1.978, 1.978, 20) and E− = (−1.978, −1.978, 20).
Linearizing system (1) at the equilibrium point E+, the Jacobian matrix is defined as
J+ = [ −a a 0 ; b − cz 0 −cx ; y·e^(xy) x·e^(xy) −d ] = [ −10 10 0 ; 0 0 −3.956 ; 30.278 30.278 −2.5 ].
To obtain its eigenvalues we let |λI − J+| = 0. The eigenvalues corresponding to the equilibrium point E+ are λ1 = −14.192, λ2 = 0.846 + 12.965i and λ3 = 0.846 − 12.965i. Here λ1 is a negative real number, while λ2 and λ3 form a pair of complex conjugate eigenvalues with positive real parts. The equilibrium point E+ is a saddle-focus point, and system (1) is unstable at this equilibrium point. Likewise, for the equilibrium point E−, the Jacobian matrix equals
J− = [ −a a 0 ; b − cz 0 −cx ; y·e^(xy) x·e^(xy) −d ] = [ −10 10 0 ; 0 0 3.956 ; −30.278 −30.278 −2.5 ].
In the same way we let |λI − J−| = 0.
The eigenvalues corresponding to the equilibrium point E− are λ1 = −14.192, λ2 = 0.846 + 12.965i and λ3 = 0.846 − 12.965i. Apparently λ1 is a negative real number, while λ2 and λ3 form a complex conjugate pair with positive real parts. The equilibrium point E− is also a saddle-focus point, and system (1) is unstable at this equilibrium point. By this brief analysis, both equilibrium points of the nonlinear system are saddle focus-nodes.
B. Symmetry and Invariance
It is easy to see the invariance of the system under the coordinate transformation (x, y, z) → (−x, −y, z), i.e., the system has rotational symmetry around the z-axis [7].
C. Dissipativity
The sum of the three Lyapunov exponents equals the divergence of the vector field:
∑_{i=1}^{3} LE_i = ∇V = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = f = −(a + d), (3)
where LE_i (i = 1, 2, 3) denote the three Lyapunov exponents of the system. Note that f = −(a + d) = −12.5 is negative, so the system is dissipative, with the exponential contraction rate
dV/dt = e^f = e^(−12.5). (4)
From (4), it can be seen that a volume element V_0 is contracted by the flow into a volume element V_0·e^(−12.5t) in time t. This means that each volume containing the system trajectory shrinks to zero as t → ∞ at an exponential rate of −12.5. Therefore, all system orbits are ultimately confined to a specific subset having zero volume, and the asymptotic motion settles onto an attractor [8].
D. Lyapunov Exponents and Fractional Dimension
The Lyapunov exponents generally refer to the average exponential rates of divergence or convergence of nearby trajectories in the phase space. If there is at least one positive Lyapunov exponent, the system can be defined to be chaotic.
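The divergence in (3), which also fixes the sum of the Lyapunov exponents, can be checked symbolically. A minimal SymPy sketch:

```python
import sympy as sp

x, y, z, a, b, c, d = sp.symbols('x y z a b c d')

# Right-hand side of system (1)
F = [a * (y - x), b * x - c * x * z, sp.exp(x * y) - d * z]

# Divergence of the vector field: sum of the diagonal Jacobian entries
div = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

print(sp.simplify(div))  # -a - d: state-independent, so the flow is uniformly dissipative
```

Because the result is independent of x, y, z, the contraction rate −(a + d) holds everywhere in state space, not just near the equilibria.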
According to the detailed numerical as well as theoretical analysis and (3), the Lyapunov exponents are found to be L1 = 1.459, L2 = 0 and L3 = −13.959. Therefore, the Lyapunov dimension of this system is
D_L = j + (∑_{i=1}^{j} L_i)/|L_{j+1}| = 2 + (L1 + L2)/|L3| = 2 + 1.459/13.959 = 2.104. (5)
Equation (5) means that system (1) is really a dissipative system, and the Lyapunov dimension of the system is fractional. Having a strange attractor and a positive Lyapunov exponent, it is obvious that the system is really a 3D chaotic system.
E. Spectrum Map, Time Domain and Poincaré Maps
The spectrum of system (1) exhibits a continuous broadband feature, as shown in Figure 2. Figure 3 shows that the evolution of the chaotic trajectories is very sensitive to initial conditions [9]. The initial values of the system are set to [2.2, 2.399, 28]^T for the solid line and [2.2, 2.4, 28]^T for the dashed line.
Fig. 2. Spectrum of log x.
Fig. 3. Sensitivity of system (1).
The Poincaré maps are shown in Figure 4. From Figure 4(a)-(b), it can be seen that the Poincaré maps consist of virtually symmetrical branches and a number of nearly symmetrical twigs. It is also found from Figure 4(c)-(d) that the section of the attractor looks like a set of circles.
F. The Influence of System Parameters
From the above analysis, it is visible that the stability of the system equilibria changes along with the system parameters, and the system will accordingly be in different states. Using numerical simulation, the effect of changing the system parameters on the system behavior is analyzed below. We let a increase while the other parameters are fixed. Figures 5 and 6 show the bifurcation diagram and the Lyapunov exponent spectrum versus increasing a.
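The positive largest Lyapunov exponent reported above can be cross-checked numerically. A rough two-trajectory (Benettin-style) estimate is sketched below for the parameter set a = 10, b = 40, c = 2, d = 2.5; this is an illustrative method, not necessarily the algorithm used by the authors:

```python
import numpy as np

A, B, C, D = 10.0, 40.0, 2.0, 2.5

def rhs(s):
    x, y, z = s
    return np.array([A * (y - x), B * x - C * x * z, np.exp(x * y) - D * z])

def rk4(s, h):
    """One fixed-step fourth-order Runge-Kutta step."""
    k1 = rhs(s); k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2); k4 = rhs(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_le(h=1e-3, n_steps=50_000, d0=1e-8):
    """Benettin two-trajectory estimate of the largest Lyapunov exponent."""
    s = np.array([2.2, 2.4, 28.0])
    for _ in range(10_000):              # discard the transient
        s = rk4(s, h)
    p = s + np.array([d0, 0.0, 0.0])     # tiny perturbation
    acc = 0.0
    for _ in range(n_steps):
        s, p = rk4(s, h), rk4(p, h)
        dist = np.linalg.norm(p - s)
        acc += np.log(dist / d0)
        p = s + (p - s) * (d0 / dist)    # renormalise the separation
    return acc / (n_steps * h)

le = largest_le()
print(le)   # positive in the chaotic regime
```

A positive estimate on the same order as the reported L1 = 1.459 confirms the chaotic character; longer runs and smaller steps tighten the figure.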
While a increases, the system undergoes some representative dynamical routes, such as stable fixed points, chaos, quasi-periodic loops and period-doubling bifurcations, which are summarized as follows:
• 0 < a ≤ 1.35: system (1) is stable, as shown in Figure 7(a).
• 1.35 < a ≤ 12: system (1) is chaotic, and there are several periodic windows in the chaotic band, as shown in Figure 7(b)-(c).
• 12 < a ≤ 22: system (1) exhibits quasi-periodic loops, also called a reverse period-doubling bifurcation window, as shown in Figure 7(d).
• 22 < a ≤ 28.9: system (1) is chaotic, and there are several periodic windows in the chaotic band, as shown in Figure 7(e).
• 28.9 < a ≤ 40: there is again a reverse period-doubling bifurcation window, as shown in Figure 7(f).
G. Forming Mechanism of the New Chaotic Attractor Structure
Compound structures of the new system (1) may be obtained by merging together two simple attractors after performing one mirror operation. Such an operation can be revealed through the use of a controlled system of the form
ẋ = a(y − x), ẏ = bx − cxz + ky + u, ż = e^(xy) − dz, (6)
where u is a control parameter whose value can be changed within a certain range. Here, we still select the initial values of the system as [2.2, 2.4, 28]^T. When u = 3, the attractor evolves into a partial but still bounded attractor; the corresponding strange attractor is shown in Figure 8(a). When u = 5, the attractor evolves into the single right-scroll attractor, only one half of the original chaotic attractor; the corresponding strange attractor is shown in Figure 8(b). Then we select u to be a negative value.
When u = −3, the corresponding strange attractor, shown in Figure 8(c), again evolves into a partial but still bounded attractor. When u = −5, shown in Figure 8(d), the attractor evolves into the single left-scroll attractor, only one half of the original chaotic attractor.
Fig. 4. Poincaré maps in the planes (a) x = 0, (b) y = 0, (c) z = 15, (d) z = 10.
Fig. 5. Bifurcation diagram of system (1) with 0 < a < 40.
Fig. 6. Lyapunov exponent spectrum of system (1) with 0 < a < 40.
IV. Exponential Operation Circuit
The exponential operation circuit is studied in this section. First, the exponential volt-ampere characteristic of the semiconductor pn junction is used to realize the exponential operation. Then, in order to suppress temperature drift, temperature-compensation techniques are applied to the exponential operation circuit.
Fig. 7. Phase portraits of system (1) in the y−z plane with (b, c, d) = (40, 2, 2.5) at initial values [2.2, 2.4, 28]^T: (a) a = 1, (b) a = 3, (c) a = 10, (d) a = 15, (e) a = 24, (f) a = 35.
Fig. 8. Phase portraits of system (6) in the y−z plane at (a) u = 3, (b) u = 5, (c) u = −3, (d) u = −5.
A. Basic Exponential Operation Circuit
In actual applications, the collector of a bipolar junction transistor (BJT) is usually connected to its base to form a diode configuration. The exponential operation circuit is shown in Figure 9. From the relationship between i_C and u_BE we get
i_C = α·i_E = α·i_ES (e^(u_BE/U_T) − 1), (7)
where α is the current amplification factor with α ≈ 1, and i_ES and U_T are transistor parameters. When u_BE ≫ U_T, (7) is simplified to
i_C ≈ i_ES e^(u_BE/U_T), (8)
and then
u_O = −i_C R_1 = −i_ES R_1 e^(u_BE/U_T) = −i_ES R_1 e^(u_I/U_T). (9)
Fig. 9. Exponential operation circuit.
B. Exponential Operation Circuit with Temperature Compensation
As is well known, i_ES and U_T are functions of the temperature and very sensitive to it, so improving the temperature stability of the exponential operation circuit is an important problem that needs to be solved in actual applications. Two methods are applied here: first, a matched pair of BJTs is adopted to eliminate the effect of i_ES; then a thermistor is used to compensate the influence of U_T. Figure 10 illustrates the exponential operation circuit with temperature compensation. The voltage at point A can be expressed as
u_A = u_BE1 − u_BE2 = U_T ln(u_O/(i_ES1 R_5)) − U_T ln(u_ref/(i_ES2 R_5)) = U_T ln(u_O/u_ref), (10)
where the parameters of the two BJTs are equal (i_ES1 = i_ES2) and u_ref is a reference voltage. When the value of R_3 is small, the output voltage u_O can be expressed as
u_O = (R_5/R_1) u_ref e^(−u_I R_3/((R_3 + R_4) U_T)), (11)
where R_3 is a positive-temperature-coefficient thermistor used to compensate the temperature variations of U_T.
V. Conclusion
This paper presented a novel 3D autonomous chaotic system with a quadratic exponential nonlinear term. Some basic properties of the system have been investigated. In addition, the forming mechanisms of the compound structures of the new chaotic attractor have been studied and explored. Finally, the exponential operation circuit and its temperature-compensation circuit for the chaotic system were designed.
Although the abundant and complex dynamical behaviors of the new 3D autonomous system have been discussed in detail in this paper, further analyses such as control, synchronization and secure communication of the system will be taken into consideration in future work. Therefore, further research into the system remains important and insightful.
Fig. 10. Exponential operation circuit with temperature compensation.
References
[1] E. N. Lorenz, "Deterministic non-periodic flow", J. Atmos. Sci., Vol. 20, No. 1, pp. 130-141, 1963
[2] O. E. Rossler, "An equation for continuous chaos", Phys. Lett. A, Vol. 57, No. 5, pp. 397-399, 1976
[3] G. Chen, T. Ueta, "Yet another chaotic attractor", Internat. J. Bifur. Chaos, Vol. 9, No. 7, pp. 1465-1466, 1999
[4] J. Lü, G. Chen, "A new chaotic attractor coined", Internat. J. Bifur. Chaos, Vol. 12, No. 3, pp. 659-662, 2002
[5] C. Liu, T. Liu, L. Liu, K. Liu, "A new chaotic attractor", Chaos, Solitons & Fractals, Vol. 22, No. 5, pp. 1031-1038, 2004
[6] S. Celikovsky, G. Chen, "On the generalized Lorenz canonical form", Chaos, Solitons & Fractals, Vol. 26, No. 5, pp. 1271-1276, 2005
[7] Z. Wei, Q. Yang, "Dynamical analysis of a new autonomous 3-D chaotic system only with stable equilibria", Nonlinear Anal.: RWA, Vol. 12, No. 1, pp. 106-118, 2011
[8] W. Zhou, Y. Xu, H. Lu, L. Pan, "On dynamics analysis of a new chaotic attractor", Phys. Lett. A, Vol. 372, No. 36, pp. 5773-5777, 2008
[9] D. Sara, R. M. Hamid, "A novel three-dimensional autonomous chaotic system generating two, three and four-scroll attractors", Phys. Lett. A, Vol. 373, No. 40, pp. 3637-3642, 2009
Engineering, Technology & Applied Science Research Vol. 11, No. 4, 2021, 7399-7404 7399 www.etasr.com Mugheri & Keerio: An Optimal Fuzzy Logic-Based PI Controller for the Speed Control of an Induction …
An Optimal Fuzzy Logic-Based PI Controller for the Speed Control of an Induction Motor Using the V/f Method
Noor Hussain Mugheri, Department of Electrical Engineering, Quaid-e-Awam University of Engineering Science and Technology, Nawabshah, Sindh, Pakistan, noorhussain@quest.edu.pk
Muhammad Usman Keerio, Department of Electrical Engineering, Quaid-e-Awam University of Engineering Science and Technology, Nawabshah, Sindh, Pakistan, usmankeerio@quest.edu.pk
Abstract—The induction motor (IM) is popular because of its low price, high efficiency, and low maintenance cost. A comparative analysis of IM speed controllers using voltage/frequency (V/f) control, or scalar control (SC), is presented in this paper. SC is commonly used due to its ease of implementation, simplicity, and low cost. To decrease the difficulty and cost of hardware implementation, this paper proposes an optimal fuzzy proportional-integral (fuzzy-PI) controller. Firstly, the speed control of the IM using the V/f technique is discussed. Then, speed control of the IM using a conventional PI controller is performed. Finally, a simplified-rules fuzzy-PI controller is developed in MATLAB/Simulink and its performance is compared with that of open-loop SC and the traditional PI controller. The performance of the simplified-rules fuzzy-PI controller is superior to that of an open-loop constant V/f control and a conventional PI controller.
Keywords—induction motor; constant V/f control; PI controller; optimal fuzzy PI controller
I. Introduction
Its low cost, simple structure, reliability, and good robustness have made the induction motor (IM) a most attractive choice for industrial applications [1, 2]. To change the speed of an IM, the frequency and voltage can be varied. In voltage/frequency (V/f) control, only their magnitudes are controlled.
v/f control is easy to implement, requires a small number of components, and can be employed in several applications such as variable speed pumps, fans, blowers, etc. [3]. normally, a proportional integral (pi) controller is employed for im speed control in most applications [4, 5]. however, the conventional pi controller has some disadvantages: its accuracy depends on the mathematical model, the system suffers from non-linearity, and it is very sensitive to parameter and temperature variations and to load disturbances [6, 7]. a fuzzy logic-based controller (flc) can overcome these disadvantages [8-10]. the flc does not need a model of the plant, can handle non-linearity, is less sensitive to load disturbances, uses human-logic linguistic rules, and is robust [11-13]. authors in [14] suggested a 3-phase im speed control technique using constant v/f control. a simplified fuzzy rule-based flc can be easily implemented in hardware and performs better than the standard 25-rule flc [15]. d. asija proposed a standard 25-rule fuzzy-pi controller for im speed control using sc and concluded that the fuzzy-pi controller outperforms the traditional pi controller [16]. b. n. kar and k. b. mohanty presented a standard 49-rule flc for im speed control by indirect vector control (ivc) and concluded that, compared with the conventional pi controller, the proposed flc performs better with regard to change in load, settling time, and overshoot [17]. authors in [18] proposed a standard 49-rule flc-based im speed control by ivc and concluded that the flc performs better than the traditional pi controller with regard to load disturbances and changing reference speed. authors in [19] proposed and implemented speed control of a 3-phase im using v/f control. ii. proposed framework a simplified-rule fuzzy-pi controller using the v/f control technique is presented in this paper.
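The conventional PI speed controller that serves as the baseline above can be written in discrete form as follows. This is an illustrative Python sketch, not the authors' Simulink model; the gains are placeholders (the paper tunes them by the Ziegler-Nichols method), and the anti-windup clamp is an added assumption.

```python
# Discrete PI controller sketch: u = kp*e + ki*integral(e), with output
# clamping and a simple anti-windup (the integral is not accumulated
# while the output is saturated).

class PIController:
    def __init__(self, kp: float, ki: float, dt: float,
                 u_min: float, u_max: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, ref: float, meas: float) -> float:
        e = ref - meas                    # speed error (e.g. rpm)
        self.integral += e * self.dt
        u = self.kp * e + self.ki * self.integral
        if u > self.u_max:                # clamp and undo windup
            self.integral -= e * self.dt
            u = self.u_max
        elif u < self.u_min:
            self.integral -= e * self.dt
            u = self.u_min
        return u

# one control step for a 25 rpm speed error (placeholder gains)
pi = PIController(kp=0.5, ki=2.0, dt=1e-3, u_min=-60.0, u_max=60.0)
u = pi.update(ref=1300.0, meas=1275.0)
```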
various researchers have proposed flc-based im speed control using standard 49 fuzzy rules, standard 25 rules, standard 9 rules, and simplified fuzzy rules [20-25]. in this paper, the standard 9 rules are simplified into 5 rules for the first time, and the simplified-rule fuzzy-pi controller is developed and implemented for the speed control of a 3-phase im using constant v/f control. the rule base is selected using the trial and error technique. the proposed simplified fuzzy-pi controller reduces complexity and computational load, is easy to implement, and offers better performance. figure 1 exhibits the diagram of the proposed framework. the dc supply voltage is converted into variable voltage and variable frequency by a metal oxide semiconductor field effect transistor (mosfet) inverter, driven by a space vector pulse width modulation (svpwm) generator which produces the firing pulses for the inverter. the optimal fuzzy-pi controller gets feedback signals from the im. the method of speed control used here is the popular constant v/f control. corresponding author: noor hussain mugheri fig. 1. the proposed framework. iii. constant v/f control with the recent developments in power electronics, variable voltage variable frequency im drives are increasingly employed in a variety of industrial applications. the circuit diagram of v/f control is shown in figure 2. speed control of the im is achieved using a 3-phase inverter. the frequency and applied voltage must be varied together to maintain constant air-gap flux and to avoid saturation of the im. the stator voltage and frequency are changed at the same time to keep the v/f ratio constant. fig. 2. constant v/f control. iv.
the fuzzy logic controller the flc is an intelligent controller whose operation is very similar to human reasoning. it accepts crisp inputs, performs calculations, and then gives an output value [26]. the structural diagram of the flc is shown in figure 3. there are four steps in the implementation of an flc: fuzzification, inference engine, rule base, and de-fuzzification. the flc inputs are characterized by triangular membership functions (mfs) and 3 triangular mfs are used for the output control. there are 9 rules, out of which 5 are executed by the fuzzy inference system (fis). the triangular mfs for the flc system are shown in figure 4. the mfs for the two inputs and the single output are: positive (p), zero (z), and negative (n). fig. 3. the flc structural block diagram. fig. 4. the triangular membership functions: (a) error (e), (b) change of error (∆e), (c) output control (∆u). each membership function is defined on the normalized universe [-1, 1]. table i shows the simplified rule base for the flc. for this application the mamdani-type fis is chosen. the fuzzy logic process is based on if-then rules; for example, if the error is n and the change of error is p, then the output control is z. table i. the simplified fuzzy rules for the flc
∆u | ∆e = n | ∆e = z | ∆e = p
e = n | n | n | z
e = z | n | z | p
e = p | z | p | p
simplified rules: 1. if e is n and ∆e is z then ∆u is n 2. if e is z and ∆e is p then ∆u is p 3. if e is z and ∆e is z then ∆u is z 4. if e is z and ∆e is n then ∆u is n 5. if e is p and ∆e is n then ∆u is z
fig. 5. the simulink diagram of the open loop v/f control-based im speed control. fig. 6. the open loop im speed control: (a) rotor speed, (b) electromagnetic torque. v. results and discussion figure 5 depicts the simulink diagram of the open loop im speed control using the svpwm technique. a 3-phase im is fed from an inverter connected to a dc voltage source. the mosfet inverter is modeled by a universal block and the im by an asynchronous machine block. a constant load torque of 11.9nm is applied to the im shaft. a speed set point in rpm is applied to the v/hz block. the initial reference speed is 1725rpm and the final speed value is 1300rpm. the induction motor only reaches a speed of 1275rpm, i.e. the open loop system does not reach the required final speed of 1300rpm. the simulation results for open loop scalar control are shown in figure 6. the simulink diagram of the traditional pi controller-based im closed loop speed control using the svpwm technique is shown in figure 7. the ziegler-nichols method is used in this paper for tuning the pi controller. to study the performance of the 3-phase im, two performance indices are used, i.e. rotor speed and electromagnetic torque. in order to obtain the actual im speed, feedback is used and it is compared to the im reference speed. the difference of the two produces the error signal, which is processed by the conventional pi controller in order to reduce it. the same reference step speed and load torque are used for the traditional pi controller. the simulink diagram of the simplified fuzzy-pi controller using sc is given in figure 9. the error signal and the change of error are the inputs of the intelligent controller. the fuzzy-pi controller produces the controlled output signal and provides it to the v/f control.
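The simplified five-rule Mamdani inference of Table I can be sketched as follows. This is an illustrative Python version, not the paper's MATLAB/Simulink implementation; the membership function breakpoints are assumptions based on the symmetric triangular layout of Figure 4 over the normalized universe [-1, 1].

```python
# Sketch of the simplified 5-rule Mamdani fuzzy inference (Table I):
# inputs are the normalized error e and change of error de, output is
# the control increment du, using min inference, max aggregation, and
# centroid defuzzification.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# N, Z, P membership functions covering [-1, 1] (assumed breakpoints)
MF = {
    'N': lambda x: 1.0 if x <= -1.0 else tri(x, -2.0, -1.0, 0.0),
    'Z': lambda x: tri(x, -1.0, 0.0, 1.0),
    'P': lambda x: 1.0 if x >= 1.0 else tri(x, 0.0, 1.0, 2.0),
}

# the five rules of Table I: (e label, de label) -> du label
RULES = [('N', 'Z', 'N'), ('Z', 'P', 'P'), ('Z', 'Z', 'Z'),
         ('Z', 'N', 'N'), ('P', 'N', 'Z')]

def fuzzy_du(e, de, steps=201):
    """Mamdani min inference with centroid defuzzification."""
    num = den = 0.0
    for i in range(steps):
        u = -1.0 + 2.0 * i / (steps - 1)     # point in the output universe
        mu = 0.0
        for le, lde, ldu in RULES:
            w = min(MF[le](e), MF[lde](de))  # rule firing strength
            mu = max(mu, min(w, MF[ldu](u))) # clip and aggregate
        num += u * mu
        den += mu
    return num / den if den else 0.0
```

With zero error and zero change of error only rule 3 fires and the output is zero; a negative error drives the output negative, as the rule table prescribes.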
fig. 7. the simulink model of the conventional pi controller-based im speed control. fig. 8. the simulation results for the conventional pi controller-based im speed control: (a) rotor speed, (b) electromagnetic torque. here, the frequency-varying device is the svpwm generator, which produces the firing pulses for the inverter, and hence the frequency of the supply is changed together with the voltage applied to the im. using the svpwm method, the mosfet inverter converts the dc supply voltage into variable ac. from figure 8, it is clear that the conventional pi controller reaches the final speed of 1300rpm. from figure 10, it is clear that the optimal fuzzy-pi controller also achieves the final speed of 1300rpm, but with less settling time and without overshoot compared with the conventional pi controller, and a better electromagnetic torque response is also obtained. compared with the existing literature, the proposed simplified fuzzy-pi controller uses the least number of fuzzy rules. the proposed intelligent controller employs a trial and error approach to minimize the fuzzy rules while achieving better im speed control performance. figure 11 depicts the speed response of the im with a step speed reference for both controllers and constant v/f control. the performance analysis of both controllers and of the system without any controller at a constant reference speed is given in table ii. table ii.
performance analysis
controller | settling time (sec) | overshoot (%) | load (nm)
conventional pi controller | 1.440 | 0.307 | 11.9
optimal fuzzy-pi controller | 0.980 | 0 | 11.9
without controller | does not settle | --- | 11.9
fig. 9. the simulink diagram of the optimal fuzzy-pi controller-based im speed control. fig. 10. the simulation results for the optimal fuzzy-pi controller-based im speed control: (a) rotor speed, (b) electromagnetic torque. vi. conclusion to reduce the computational load and the need for memory space, and to enable easier hardware implementation, an optimal fuzzy-pi controller using sc is successfully developed and simulated in this paper. the proposed optimal fuzzy-pi controller uses a minimum number of fuzzy rules. the simulation results show that the performance and speed response of the simplified-rule fuzzy-pi controller surpass those of the open loop constant v/f control and the traditional pi controller. the optimal fuzzy-pi controller fully eradicates the overshoot of the conventional pi controller and its settling time is much smaller than that of the traditional pi controller. fig. 11. speed response of the 3-phase im with a step speed reference with an initial value of 1725rpm and a final speed reference of 1300rpm at 0.1s and 11.9nm load, with the optimal fuzzy-pi controller, the conventional pi controller, and without controller. acknowledgement this research was funded by the quaid-e-awam university of engineering, science and technology, nawabshah, sindh, pakistan.
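The settling time and percentage overshoot figures of the kind reported in Table II can be extracted from a simulated speed trace as sketched below. The response used here is a synthetic first-order curve for illustration only, not the paper's Simulink output, and the 2% settling band is an assumption.

```python
import math

# Sketch: compute settling time (2% band) and percent overshoot from a
# sampled step response. t and y are equal-length lists of time (s) and
# speed (rpm) samples.

def step_metrics(t, y, y_final, band=0.02):
    """Return (settling_time, overshoot_percent) for a step response."""
    overshoot = max(0.0, (max(y) - y_final) / abs(y_final) * 100.0)
    settling = None                       # None if the trace never settles
    for i in range(len(y)):
        if all(abs(v - y_final) <= band * abs(y_final) for v in y[i:]):
            settling = t[i]
            break
    return settling, overshoot

# synthetic first-order approach to 1300 rpm (illustrative data only)
t = [k * 0.01 for k in range(300)]
y = [1300.0 * (1.0 - math.exp(-v / 0.2)) for v in t]
ts, ov = step_metrics(t, y, 1300.0)
```

For this synthetic trace the overshoot is zero and the response enters the 2% band at roughly four time constants, matching the analytical expectation for a first-order system.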
appendix
im electrical parameters: power = 3hp, voltage = 220v, poles = 4, frequency = 60hz, rs = 0.435ω, rr = 0.815ω, ls = 2*2.0mh, lr = 2.0mh, lm = 69.31mh, j = 0.089kg.m2
references
[1] a. bhowate, m. aware, and s. sharma, "predictive torque control with online weighting factor computation technique to improve performance of induction motor drive in low speed region," ieee access, vol. 7, pp. 42309–42321, 2019, https://doi.org/10.1109/access.2019.2908289.
[2] a. abdel menaem, m. elgamal, a.-h. abdel-aty, e. e. mahmoud, z. chen, and m. a. hassan, "a proposed ann-based acceleration control scheme for soft starting induction motor," ieee access, vol. 9, pp. 4253–4265, 2021, https://doi.org/10.1109/access.2020.3046848.
[3] o. e.-s. mohammed youssef, "a new open-loop volts/hertz control method for induction motors," in 2018 twentieth international middle east power systems conference (mepcon), cairo, egypt, dec. 2018, pp. 266–270, https://doi.org/10.1109/mepcon.2018.8635102.
[4] g. a. olarinoye, c. akinropo, g. j. atuman, and z. m. abdullahi, "speed control of a three phase induction motor using a pi controller," in 2019 2nd international conference of the ieee nigeria computer chapter (nigeriacomputconf), zaria, nigeria, oct. 2019, https://doi.org/10.1109/nigeriacomputconf45974.2019.8949624.
[5] a. w. nasir, i. kasireddy, and a. k. singh, "real time speed control of a dc motor based on its integer and non-integer models using pwm signal," engineering, technology & applied science research, vol. 7, no. 5, pp. 1974–1979, oct. 2017, https://doi.org/10.48084/etasr.1292.
[6] s. gdaim, a. mtibaa, and m. f. mimouni, "design and experimental implementation of dtc of an induction machine based on fuzzy logic control on fpga," ieee transactions on fuzzy systems, vol. 23, no. 3, pp. 644–655, jun. 2015, https://doi.org/10.1109/tfuzz.2014.2321612.
[7] k. s. belkhir, "simple implementation of a fuzzy logic speed controller for a pmdc motor with a low cost arduino mega," engineering, technology & applied science research, vol. 10, no. 2, pp. 5419–5422, apr. 2020, https://doi.org/10.48084/etasr.3340.
[8] m. a. hannan, j. a. ali, p. j. ker, a. mohamed, m. s. h. lipu, and a. hussain, "switching techniques and intelligent controllers for induction motor drive: issues and recommendations," ieee access, vol. 6, pp. 47489–47510, 2018, https://doi.org/10.1109/access.2018.2867214.
[9] s. mikkili and a. k. panda, "shaf for mitigation of current harmonics using p-q method with pi and fuzzy controllers," engineering, technology & applied science research, vol. 1, no. 4, pp. 98–104, aug. 2011, https://doi.org/10.48084/etasr.44.
[10] z. ibrahim and e. levi, "a comparative analysis of fuzzy logic and pi speed control in high-performance ac drives using experimental approach," ieee transactions on industry applications, vol. 38, no. 5, pp. 1210–1218, sep. 2002, https://doi.org/10.1109/tia.2002.802993.
[11] j. liu, y. gao, s. geng, and l. wu, "nonlinear control of variable speed wind turbines via fuzzy techniques," ieee access, vol. 5, pp. 27–34, 2017, https://doi.org/10.1109/access.2016.2599542.
[12] y. raziyev, r. garifulin, a. shintemirov, and t. d. do, "development of a power assist lifting device with a fuzzy pid speed regulator," ieee access, vol. 7, pp. 30724–30731, 2019, https://doi.org/10.1109/access.2019.2903234.
[13] m. a. hannan, z. abd. ghani, md. m. hoque, p. j. ker, a. hussain, and a. mohamed, "fuzzy logic inverter controller in photovoltaic applications: issues and recommendations," ieee access, vol. 7, pp. 24934–24955, 2019, https://doi.org/10.1109/access.2019.2899610.
[14] r. bharti, m. kumar, and b. m. prasad, "v/f control of three phase induction motor," in 2019 international conference on vision towards emerging trends in communication and networking (vitecon), vellore, india, mar. 2019, https://doi.org/10.1109/vitecon.2019.8899420.
[15] m. h. n. talib, z. ibrahim, n. abd. rahim, a. s. abu hasim, and h. zainuddin, "performance improvement of induction motor drive using simplified flc method," in 2014 16th international power electronics and motion control conference and exposition, antalya, turkey, sep. 2014, pp. 707–712, https://doi.org/10.1109/epepemc.2014.6980580.
[16] d. asija, "speed control of induction motor using fuzzy-pi controller," in 2010 2nd international conference on mechanical and electronics engineering, kyoto, japan, aug. 2010, vol. 2, pp. v2-460–v2-463, https://doi.org/10.1109/icmee.2010.5558463.
[17] b. n. kar, k. b. mohanty, and m. singh, "indirect vector control of induction motor using fuzzy logic controller," in 2011 10th international conference on environment and electrical engineering, rome, italy, may 2011, https://doi.org/10.1109/eeeic.2011.5874782.
[18] b. sahu, k. b. mohanty, and s. pati, "a comparative study on fuzzy and pi speed controllers for field-oriented induction motor drive," in 2010 modern electric power systems, wroclaw, poland, sep. 2010, https://doi.org/10.1109/iecr.2010.5720134.
[19] j. m. peña and e. v. díaz, "implementation of v/f scalar control for speed regulation of a three-phase induction motor," in 2016 ieee andescon, arequipa, peru, oct. 2016, https://doi.org/10.1109/andescon.2016.7836196.
[20] m. a. magzoub, n. b. saad, and r. b. ibrahim, "an intelligent speed controller for indirect field-oriented controlled induction motor drives," in 2013 ieee conference on clean energy and technology (ceat), langkawi, malaysia, nov. 2013, pp. 327–331, https://doi.org/10.1109/ceat.2013.6775650.
[21] s. saahithi and r. p. mandi, "technological advances of speed control of induction motor with pi and fuzzy logic controllers," international journal of recent technology and engineering, vol. 8, no. 6s, pp. 41–45, mar. 2020, https://doi.org/10.35940/ijrte.f1009.0386s20.
[22] m. a. mannan, a. islam, m. n. uddin, m. k. hassan, t. murata, and j. tamura, "fuzzy-logic based speed control of induction motor considering core loss into account," intelligent control and automation, vol. 3, no. 3, pp. 229–235, aug. 2012, https://doi.org/10.4236/ica.2012.33026.
[23] y. k. sahu, k. quraishi, s. rajwade, and p. choudhary, "comparative analysis of pi fuzzy logic controller based induction motor drive," in 2016 international conference on electrical power and energy systems (icepes), bhopal, india, dec. 2016, pp. 210–214, https://doi.org/10.1109/icepes.2016.7915932.
[24] n. farah et al., "a novel self-tuning fuzzy logic controller based induction motor drive system: an experimental approach," ieee access, vol. 7, pp. 68172–68184, 2019, https://doi.org/10.1109/access.2019.2916087.
[25] q. a. tarbosh et al., "review and investigation of simplified rules fuzzy logic speed controller of high performance induction motor drives," ieee access, vol. 8, pp. 49377–49394, 2020, https://doi.org/10.1109/access.2020.2977115.
[26] e. h. mamdani and s. assilian, "an experiment in linguistic synthesis with a fuzzy logic controller," international journal of man-machine studies, vol. 7, no. 1, pp. 1–13, jan. 1975, https://doi.org/10.1016/s0020-7373(75)80002-2.
engineering, technology & applied science research vol. 10, no. 2, 2020, 5524-5527 www.etasr.com nadareishvili et al.: investigation of the visible light-sensitive zno photocatalytic thin films investigation of the visible light-sensitive zno photocatalytic thin films m.
nadareishvili andronikashvili institute of physics ivane javakhishvili tbilisi state university, tbilisi, georgia malkhaz.nadareishvili@tsu.ge g. mamniashvili andronikashvili institute of physics ivane javakhishvili tbilisi state university, tbilisi, georgia mgrigor@rocketmail.com d. jishiashvili v. chavchanidze institute of cybernetics georgian technical university and condensed matter physics department andronikashvili institute of physics ivane javakhishvili tbilisi state university, tbilisi, georgia d_jishiashvili@gtu.ge g. abramishvili condensed matter physics department andronikashvili institute of physics ivane javakhishvili tbilisi state university, tbilisi, georgia gocha00007@mail.ru c. ramana university of texas at el paso el paso, usa rvchintalapalle@utep.edu j. ramsden university of buckingham united kingdom jeremy.ramsden@buckingham.ac.uk abstract—zno photocatalytic thin films deposited on a glass substrate were obtained by the chemical spraying technique, and they are active in the visible light spectrum. optical studies have shown that zno thin films doped by nickel impurities absorb visible light at wavelengths from 400nm to 600nm, and this absorption rate increases with the increase of the concentration of nickel impurities. at high concentration (5%), the absorption of light in the visible area is reduced, but after heat treatment at 600°c the light absorption in these samples improves, which allows us to conclude that the observed effect is caused by a violation of the homogeneity of the distribution of the nickel impurities and the creation of agglomerates. decoration of the zno thin film surfaces by silver clusters improves the light absorption, as happens with nanopowders, but in the case of thin films this effect is much smaller.
experiments with methylene blue demonstrate significant photocatalytic activity in the visible area of sun irradiation for the zno thin films containing nickel impurities, obtained by the chemical spraying technique. keywords—thin films; zno; impurities; photocatalysis; ecology i. introduction photocatalysis is the activation of reduction-oxidation (redox) reactions due to the influence of light. the reaction is enabled by specific substances, called photocatalysts. photocatalysis can be used for the decomposition of water into hydrogen and oxygen and for the degradation of harmful substances in water and air under sun exposure [1]. as a result of light irradiation, a semiconductor photocatalyst particle generates electron-hole pairs that can reach the surface of the particle, enter into redox reactions with environmental molecules and cause their decomposition; e.g. water decomposes into oxygen and hydrogen. hydrogen obtained from water can be used as an environmentally friendly fuel, with the final product of its combustion being water again. photocatalysis can also decompose organic molecules, including bacteria, in the environment, resulting in carbon dioxide, oxygen, and water. this is the main reason for the ongoing research into photocatalysis usage in hydrogen energy, ecology, and medicine [2-7]. the low efficiency of the reaction is the main challenge that prevents the wide practical usage of photocatalysis. this is caused by two reasons: (i) the low quantum yield brought about by the recombination of electrons and holes, and (ii) the low level of visible light usage in photocatalysis (visible light carries ten times more energy in solar irradiation than the uv share), caused by the large width of the energy gap of stable photocatalysts. to improve the quantum yield, small clusters of different materials, so-called co-catalysts, are coated on the surface of the photocatalyst particles; these capture electrons and holes, and thus reduce the recombination [8].
introduction of various impurities into photocatalysts, which decreases the width of the energy gap, is the main method of increasing the usage of visible light in the process of photocatalysis [9, 10]. a photocatalytic reaction occurs on the surface of the photocatalyst. therefore, photocatalysts are generally used in the form of powders to increase the surface area. however, photocatalytic thin films are frequently needed as well, especially when the photocatalysts are used for environmental purposes, since these substances are often required in continuous flow systems, and it is very difficult to separate photocatalytic nanoparticles from a suspension. thin photocatalytic films are also necessary to obtain self-cleaning surfaces, e.g. to create lamps that do not get covered with soot in automobile tunnels, smart window glasses, etc. corresponding author: m. nadareishvili currently, research is underway to create new photocatalytic thin films of various substances that are active in the visible region of solar radiation and therefore have improved efficiency. authors in [11] prepared pure cds and cu-doped cds thin films using the spray pyrolysis technique, with 2%, 4% and 6% cu-contents used for doping. reflectance and transmission measurements were performed in the spectral range of 200–1100nm to extract the variation of the optical properties upon copper doping. authors in [12] developed successful strategies for combining the versatility of mechanochemical synthesis with rf-sputtering for the controllable deposition of bivo4 thin films. photocatalytic activity experiments were performed for the degradation of rhodamine 6g (rh6g) dyes under visible light irradiation.
authors in [13] investigated the long-term stability and photocatalytic activity of cu2znsns4/tio2 thin film heterostructures under simulated solar radiation, using phenol and imidacloprid as testing pollutants. our group conducted studies on the sensitization of thin photocatalytic films to visible light similar to those described above. zno photocatalytic thin films doped by nickel impurities and obtained by chemical spraying were studied. this choice was due to the fact that, despite its relatively low photocatalytic efficiency, zno possesses a number of unique properties which make it one of the strongest candidates for industrial use, provided it is sensitized to visible light and therefore has increased efficiency. these properties are: non-toxicity, a direct band gap at room temperature (3.37ev), decent optical properties, chemical resistance to photoreactions, low cost, etc. ii. experimental part zno thin films were deposited onto glass substrates at 460°c by spray pyrolysis [14]. undoped zno thin films were prepared using a zinc acetate precursor (c4h6o4zn·2h2o) dissolved in 2-propanol to obtain a starting solution with a concentration of 0.1mol/l. nickel-doped zno thin films were prepared by adding a compound source of nickel chloride hexahydrate (nicl2·6h2o, 99.9% purity) to the precursor solution while maintaining the acidity level, for [ni]/[zn] atomic percentages of 1, 2, 3, 4, and 5 wt%. these samples are presented in figure 1. fig. 1. zno:ni thin film samples deposited on glass zno:ni thin films coated by silver clusters were obtained by a novel technology developed in the andronikashvili institute of physics at ivane javakhishvili tbilisi state university, which consists of depositing metallic clusters on the surfaces of the nanoparticles of fine powders [15, 16]. the technology was modified for ag nanocluster deposition on the surface of zno:ni thin films.
this inexpensive electroless technology proceeds at low temperatures (50-60°c) and hence it does not change either the coated material or the material of the clusters itself. a solution of the following composition was prepared for this purpose: agno3 – 0.7g/l, nh4oh – 7ml/l, naoh – 0.8g/l. kna was added right before the experiment started. the kna amount and the deposition time determine the size of the coated nanoclusters. scanning electron microscope (sem) and energy dispersive spectroscopy (eds) investigations were conducted on a sem tescan vega3 xmu equipped with an energy dispersive spectrometer (oxford instruments, aztecone) (figure 2). fig. 2. sem vega3 equipped with eds of oxford instruments iii. results and discussions light absorption in the visible area increases upon the introduction of nickel impurities in zno thin films. after this effect was observed, detailed studies were started: zno thin films of uniform thickness containing different amounts of ni impurities were deposited on glass substrates, and their structure and optical spectra were studied afterwards. the concentration of ni impurities varied from 1% to 5% with a 1% step. the sem and eds investigation results of these samples are shown in figures 3 and 4 respectively. the sem image of the zno:ni thin film (figure 3) shows that the surfaces of the films are quite smooth. the eds image of the zno:ni thin film with 3% ni content (figure 4) points out the existence of other impurities apart from ni in the zno sample. fig. 3. sem image of the zno:ni thin film with 3% ni content, deposited on the glass fig. 4.
eds spectrum of the zno:ni thin film with 3% ni content, deposited on the glass the zno:ni film sample may contain a part of these impurities, while the rest may be in the glass substrate. to clarify this, a similar eds spectrum of the glass without the zno:ni thin film was obtained, which showed that only ca and k additional impurities were present in the zno film itself. figure 5 shows the experimental results of the optical investigations. it has to be noted that there are no results below 400nm, because the zno thin films were deposited on glass, and hence the optical properties of the films in this area could not be studied due to the strong absorption of ultraviolet rays by the glass. the experiments have shown that with the introduction of ni impurities the absorption of light by zno in the visible area, in the wavelength range of 400nm to 600nm, increases with the increase of the ni concentration from 1% to 4%. when the concentration of impurities increases further, e.g. at the nickel concentration of 5%, the light absorption decreases in this wavelength interval. figure 5 also shows that ni plays the main role in increasing the absorption of light in the visible area, while the other impurities present in the zno film have considerably less influence in the existing amounts. fig. 5. dependence of the optical spectrum of zno thin films on the concentration of nickel impurities. curve 1: zno with 1% content of ni impurities, curve 2: zno with 2% ni, curve 3: zno with 3% ni, curve 4: zno containing 4% ni, curve 5: zno containing 5% ni, and curve 6: zno without nickel impurities figure 5 is quite dense, so to make the trend more visible, the dependence of the light absorption on the concentration of impurities at the specific wavelength of 450nm has been plotted for the nickel-containing zinc oxide films.
figure 6 (continuous line) shows this dependence graph, which clearly indicates that as the nickel concentration increases up to 4% the absorption rate increases too, but at 5% it decreases sharply (continuous line and dot a). it was suggested that the reduction in light absorption at high concentrations of impurities (shown in figure 6) was caused by the inhomogeneous distribution of these impurities and by the creation of their agglomerates. in order to test this assumption, heat treatment was carried out on a sample containing 5% ni in vacuum at 600°c for 1 hour, with the goal of removing the aforementioned agglomerates. figure 6 (dash line and dot b) shows the absorption value of this sample at a wavelength of 450nm after the thermal treatment in vacuum. as is clear from figure 6, the absorbance at 450nm increased sharply after the vacuum thermal treatment at 600°c, confirming our assumption. fig. 6. dependence of the light absorption of zinc oxides with nickel impurities on the nickel concentration at 450nm wavelength, before heat treatment (continuous line and dot a) and after heat treatment (dash line and dot b) as mentioned above, one of the main methods of increasing the quantum yield of photocatalyst nanopowders is to place different nano-sized clusters on the surface of their particles, which capture photo-induced electrons and holes and reduce their recombination. to this end, silver clusters were placed on the surface of zno thin films with ni impurities and their optical properties were investigated. figure 7 shows the results of the corresponding investigations: curve 1 corresponds to the absorption spectrum of the zno thin film containing 3% of nickel impurities before decorating its surface with silver clusters, and curve 2 to the spectrum after decorating its surface. fig. 7. change of the absorption spectrum of the zno thin film by decorating its surface with silver clusters.
curve 1: the black circles show the absorption spectrum before the zno thin film decoration. curve 2: the black triangles show the absorption spectrum after the zno thin film was decorated with silver clusters.

there is an obvious increase in the light absorption after the clusters are applied to the surface, but the effect is much smaller than in the case of decorating nanopowders [17]. we suggest that the reason is that the surface area is much larger in the case of nanopowders than in the case of thin films. figure 8 shows the results of the investigation of the photocatalytic activity of the zno thin film with nickel impurities in the visible area of sunlight. a methylene blue solution was used to evaluate the photocatalytic activity: by measuring the change of the methylene blue absorption at the 665nm wavelength, its degradation was monitored. two glass vessels with 5ml of methylene blue were used for the experiment. it was spectrophotometrically established that the glass vessels hardly allowed ultraviolet rays with wavelengths below 380nm to pass through. one vessel contained only a solution of methylene blue while the other contained the same solution together with a thin layer of zno containing 4% nickel impurities deposited on a glass substrate. the surface area of the zno:ni thin film with 4% ni content was approximately 1cm². both vessels were placed under the summer sunlight at 30°C for six hours. after each hour of solar irradiation the methylene blue solution from the glass vessels was placed in the spectrophotometer cuvette and the concentration of methylene blue in the solution was determined using the absorption peak value at the 665nm wavelength.
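the concentration readout described above (a pre-calibrated spectrophotometer, the 665nm peak, and a calibration graph) can be sketched as a linear calibration that is fitted once and then inverted. all numerical values below are illustrative assumptions, not the measured data of this work.

```python
import numpy as np

# hypothetical calibration: absorbance at 665nm for known methylene blue
# concentrations (mg/l); these numbers are made up for illustration
known_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])      # mg/l
known_abs = np.array([0.00, 0.31, 0.62, 0.93, 1.24])  # peak absorbance

# linear least-squares fit a = k*c + b (beer-lambert behaviour assumed)
k, b = np.polyfit(known_conc, known_abs, 1)

def concentration_from_absorbance(a):
    """invert the calibration line: concentration from the 665nm peak."""
    return (a - b) / k

# hypothetical absorbance readings after each hour of solar irradiation;
# a falling series corresponds to methylene blue degradation
hourly_abs = [1.24, 1.10, 0.95, 0.80]
hourly_conc = [concentration_from_absorbance(a) for a in hourly_abs]
```

the only modeling choice here is that absorbance is linear in concentration over the working range, which is what a plotted calibration graph implies.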
the spectrophotometer was pre-calibrated and the dependence of the methylene blue concentration on the absorption peak value was plotted. using this graph, the concentration of methylene blue was determined from the measured absorption. the experiments showed that the concentration of methylene blue remained almost unchanged in the vessel without the zno thin layer, whereas it gradually decreased in the vessel containing the zno thin layer. figure 8 illustrates the results.

fig. 8. the graph of methylene blue degradation by the action of visible light. curve 1: without the thin film of zno in the vessel. curve 2: the vessel contained a thin film of zno with 4% nickel impurities.

iv. conclusion

photocatalytic zno thin films active in the visible light were obtained by the spray pyrolysis technique when nickel chloride hexahydrate (nicl2·6h2o) was added to the precursor solution. optical investigations have shown that ni impurities enhance the absorption of light by zno thin films in the visible area from 400nm to 600nm wavelength and that this absorption increases with the concentration of ni impurities. it has also been established that the formation of agglomerates of the impurities at high concentrations reduces the light absorption. the light absorption increases when the surfaces of zno thin films with ni impurities are decorated with silver clusters, but the effect is much smaller than in the case of nanopowders. experiments on the determination of the photocatalytic activity using methylene blue have shown that zno thin films with nickel impurities exhibit photocatalytic activity in the visible area of sunlight.

acknowledgment

this work was supported by the shota rustaveli national science foundation of georgia (srnsfg) grant number stcu-2017-24 and the science and technology center in ukraine (stcu) grant n 7095. the authors are grateful to the professors of tunis el manar university a. mhamdi, k. boubaker and m.
amlouk for kindly providing the research samples.

references

[1] m. kaneko, i. okura, photocatalysis science and technology, springer, 2002
[2] v. kumaravel, s. mathew, j. bartlett, s. c. pillai, “photocatalytic hydrogen production using metal doped tio2: a review of recent advances”, applied catalysis b: environmental, vol. 244, pp. 1021-1064, 2019
[3] w. wang, g. huang, j. c. yu, p. k. wong, “advances in photocatalytic disinfection of bacteria: development of photocatalysts and mechanisms”, journal of environmental sciences, vol. 34, pp. 232-247, 2015
[4] h. a. maddah, “numerical analysis for the oxidation of phenol with tio2 in wastewater photocatalytic reactors”, engineering, technology & applied science research, vol. 8, no. 5, pp. 3463-3469, 2018
[5] m. mahshidnia, a. jafarian, “forecasting wastewater treatment results with an anfis intelligent system”, engineering, technology & applied science research, vol. 6, no. 5, pp. 1175-1181, 2016
[6] z. y. ilerisoy, y. takva, “nanotechnological developments in structural design: load-bearing materials”, engineering, technology & applied science research, vol. 7, no. 5, pp. 1900-1903, 2017
[7] m. kalbacova, j. m. macak, f. schmidt-stein, c. t. mierke, p. schmuki, “tio2 nanotubes: photocatalyst for cancer cell killing”, physica status solidi (rrl) - rapid research letters, vol. 2, no. 4, pp. 194-196, 2008
[8] g. sadanandam, l. zhang, m. s. scurrell, “enhanced photocatalytic hydrogen formation over fe-loaded tio2 and g-c3n4 composites from mixed glycerol and water by solar irradiation”, journal of renewable and sustainable energy, vol. 10, pp. 034703-034708, 2018
[9] s. higashimoto, “titanium-dioxide-based visible-light-sensitive photocatalysis: mechanistic insight and applications”, catalysts, vol. 9, no. 2, pp. 201-209, 2019
[10] s. kaufhold, l. petermann, d. sorsche, s. rau, “‘trojan horse’ effect in photocatalysis: how anionic silver impurities influence apparent catalytic activity”, chemistry - a european journal, vol.
23, no. 10, pp. 2271-2274, 2017
[11] a. a. aboud, a. mukherjee, n. revaprasadu, a. n. mohamed, “the effect of cu-doping on cds thin films deposited by the spray pyrolysis technique”, journal of materials research and technology, vol. 8, no. 2, pp. 2021-2030, 2019
[12] r. venkatesan, s. velumani, k. ordon, m. makovska-janusik, g. corbel, a. kassiba, “nanostructured bismuth vanadate (bivo4) thin films for efficient visible light photocatalysis”, materials chemistry and physics, vol. 205, pp. 325-333, 2018
[13] c. bogatu, m. covei, d. perniu, i. tismanar, a. duta, “stability of the cu2znsns4/tio2 photocatalytic thin films active under visible light irradiation”, catalysis today, vol. 328, pp. 79-84, 2019
[14] a. mhamdi, a. boukhachem, m. madani, h. lachheb, k. boubaker, a. amlouk, m. amlouk, “study of vanadium doping effects on structural, opto-thermal and optical properties of sprayed zno semiconductor layers”, optik, vol. 124, pp. 3764-3770, 2013
[15] t. khoperia, g. mamniashvili, m. nadareishvili, t. zedginidze, “competitive nanotechnology for deposition of films and fabrication of powder-like particles”, ecs transactions, vol. 35, pp. 17-30, 2011
[16] t. khoperia, t. zedginidze, k. kvavadze, m. nadareishvili, “development of competitive nanotechnologies for solution of challenges in photocatalysis, electronics and composites fabrication”, 212th ecs meeting, washington dc, usa, october 7-12, 2007
[17] d. japaridze, d. daraselia, e. chikvaidze, t. gogoladze, m. nadareishvili, t. gegechkori, t. zedginidze, t. petriashvili, g. mamniashvili, a. shengelaya, “magnetic properties and photocatalytic activity of the tio2 micropowders and nanopowders coated by ni nanoclusters”, journal of superconductivity and novel magnetism, vol. 32, no. 10, pp. 3211-3216, 2019

engineering, technology & applied science research vol. 10, no.
2, 2020, 5534-5537 www.etasr.com bheel et al.: effect of sugarcane bagasse ash and lime stone fines on the mechanical properties of …

effect of sugarcane bagasse ash and lime stone fines on the mechanical properties of concrete

naraindas bheel, department of civil technology, h.c.s.t hyderabad, pakistan, naraindas04@gmail.com
abdul samad memon, department of civil technology, h.c.s.t hyderabad, pakistan, samad.memon105@gmail.com
imdad ali khaskheli, department of civil technology, h.c.s.t hyderabad, pakistan, imdadali961@gmail.com
noor muhammad talpur, department of civil technology, h.c.s.t hyderabad, pakistan, mirnoor2018@gmail.com
sher muhammad talpur, department of civil technology, h.c.s.t hyderabad, pakistan, shermohd168@gmail.com
muhammad awais khanzada, department of civil technology, h.c.s.t hyderabad, pakistan, awais.khanzada145@gmail.com

abstract—cement production releases huge amounts of carbon dioxide, having a significant impact on the environment, while also having huge energy consumption demands. in addition, the disposal and recovery of natural concrete components can lead to environmental degradation. the use of waste in concrete not only reduces cement production, but also reduces energy consumption. the aim of this study is to evaluate the properties of fresh and hardened concrete when cement is partially replaced with sugarcane bagasse ash (scba) and limestone fines (lsf). in this investigation the cement was replaced with scba and lsf by 0% (0% scba + 0% lsf), 5% (2.5% scba + 2.5% lsf), 10% (5% scba + 5% lsf), 15% (7.5% scba + 7.5% lsf) and 20% (10% scba + 10% lsf) by weight of cement. a total of 60 concrete specimens were made with a mix proportion of 1:1.5:3 and a 0.56 water-cement ratio. cube specimens were tested for compressive strength and cylindrical specimens for splitting tensile strength at 7 and 28 days respectively.
the optimum result showed that the crushing strength and split tensile strength increased by 10.33% and 10.10% respectively when using 5% scba + 5% lsf as a substitute for cement in concrete after the 28th day. the slump value of concrete declined as the content of scba and lsf increased.

keywords-limestone fines; sugarcane bagasse ash; cement replacement; enhance strength and reduce environmental issues

i. introduction

concrete is a man-made construction material, which is most commonly used for the construction of various civil engineering structures [1-3]. ordinary portland cement (opc) concrete is used in numerous structural applications and is favorable for normal construction projects. however, due to some of its limitations, certain requirements have been difficult to satisfy, especially in terms of strength and durability of complex structures. the need for the development of high-strength and high-performance concrete has extensively increased in order to meet the requirements of advanced and complex structures [4]. the development of high-strength concrete (hsc) requires a large amount of cement, and cement production is considered the most energy-intensive component of concrete production [5]. co2 emissions during the production of cement are an environmental concern. it is a well-known fact that approximately one ton of co2 is released into the environment for each ton of opc cement produced. moreover, cement manufacturing is responsible for 5% to 7% of co2 emissions from industrial sources [6]. without compromising the performance of concrete structures, the use of portland cement needs to be reduced in order to reduce the co2 emissions related to cement production, while the sustainability of construction needs to be taken into consideration [7-9].
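the scale of the co2 argument can be made tangible with a small calculation. the factor of ~1kg of co2 per kg of opc follows the "one ton of co2 per ton of cement" figure stated above; the cement content of 350kg per m³ of concrete is only an assumed example value, not taken from this study.

```python
# rough estimate of co2 avoided per cubic meter of concrete when a
# fraction of the cement is replaced by scba + lsf.
# co2_per_kg = 1.0 reflects the ~1 ton co2 / ton opc figure in the text;
# the 350 kg/m3 cement content used below is a hypothetical example.
def co2_saving_per_m3(cement_kg_per_m3, replacement_fraction, co2_per_kg=1.0):
    """co2 avoided (kg per m3 of concrete) for a given replacement level."""
    return cement_kg_per_m3 * replacement_fraction * co2_per_kg

saving_20pct = co2_saving_per_m3(350.0, 0.20)  # 10% scba + 10% lsf mix
```

this is a first-order estimate: it credits the replaced cement in full and ignores any emissions from processing the scba and lsf themselves.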
partial substitution of cement by a combination of cement replacing materials (crms) is advantageous not only from the economic point of view but also for the resulting mechanical and microstructural characteristics [10]. the use of crms in concrete has gained popularity, with emphasis on increasing the service life of concrete structures [11]. many crms are commercially available and can be used in concrete. some of the most common materials are sugarcane bagasse ash (scba) [12, 13], limestone fines (lsf), rice husk ash (rha) [14-16], silica fume (sf), etc. [17, 18]. in this experimental investigation, the combined influence of scba and lsf used as cement replacement materials in cement concrete was determined. scba is a sugar mill by-product obtained after burning bagasse, which in turn originates from the sugar extraction from sugarcane. it has been tested for volcanic ash (pozzolanic) properties and improvements have been found in mortar and concrete in certain proportions, such as in crushing strength, durability, and water resistance [19]. lsf were collected from hyderabad and can be used either as a cementitious material or as fine aggregates in concrete mixes [20-22]. there are several studies conducted on the strength development of concrete containing scba and lsf. authors in [23] determined the influence of scba on hardened concrete. concrete samples were prepared with a 1:2:4 mix ratio and were tested for compressive and split tensile strength at 28 days. the test results showed that the crushing and indirect tensile strength were enhanced by 7.90% and 14% respectively at 10% scba. authors in [24] studied the effects of lsf content on concrete's compressive strength and durability.
they reported that increasing the amount of lsf in concrete enhances strength and decreases permeability. lsf concrete with a 0.40 w/b ratio performed better than 0.50 and 0.60 w/b ratio lsf concrete regarding strength development, and the porosity and pore size of concrete were significantly decreased after 28 days. authors in [25] observed that the crushing and bending strength and the permeability-related properties were improved by using lsf in concrete. in the available literature there is a limited number of studies on the individual and combined effects of scba and lsf as cement replacing materials in concrete. several types of mineral admixtures are used in concrete, but their effects on concrete properties in binary and ternary blends have not been investigated in satisfying depth. the main aim of this paper is to investigate the combined effect of scba and lsf with cement on fresh and hardened concrete, since the compressive strength of concrete is an important parameter and all other properties of concrete are judged on the basis of it. in addition, a statistical assessment of the compressive strength of concrete using rsm has been performed in order to investigate the effectiveness of each material on the basis of its compressive strength.

ii. research methodology

this research study aimed to determine the fresh, physical and hardened properties of concrete by using 0% (0% scba + 0% lsf), 5% (2.5% scba + 2.5% lsf), 10% (5% scba + 5% lsf), 15% (7.5% scba + 7.5% lsf) and 20% (10% scba + 10% lsf) cement replacement in concrete. a total of 60 concrete samples of 1:1.5:3 mix proportions were prepared (30 cylinders and 30 cubes) with a 0.56 water/cement ratio and were cured for 7 and 28 days. table i.
concrete mixes

id | scba + lsf (%) | f.a & c.a (%) | cement (%) | water-cement ratio
01 | 0% + 0%        | 100           | 100        | 0.56
02 | 2.5% + 2.5%    | 100           | 95         | 0.56
03 | 5% + 5%        | 100           | 90         | 0.56
04 | 7.5% + 7.5%    | 100           | 85         | 0.56
05 | 10% + 10%      | 100           | 80         | 0.56
mix ratio 1:1.5:3

the variables cement (binder), coarse aggregates, fine aggregates, and water were considered. scba and lsf were used as crms and the concrete samples were tested on a utm. in this study, concrete cubes (100mm×100mm×100mm) were cast and tested for compressive strength. similarly, cylinders (200mm×100mm) were tested for splitting tensile strength. moreover, the concrete specimens were tested for water absorption and concrete density after 28 days. three concrete samples were cast for each ratio, and the mean of the samples was considered as the final result. the study was conducted in a laboratory of the department of civil technology, h.c.s.t hyderabad, sindh, pakistan [26, 27].

iii. materials used

a. cement
in this investigational study, opc was utilized with 33% normal consistency, and initial and final setting times of 46min and 160min respectively.

b. fine and coarse aggregates
hill sand that passed through #4 sieves was used as fine aggregates and crushed stones of 20mm size were used as coarse aggregates. these aggregates were collected locally in the region of hyderabad.

c. limestone fines (lsf)
the lsf were collected from hyderabad. after collection, they were sieved through #300 sieves to obtain a fine powder, which was then utilized as cementitious material in the concrete mixes.

d. sugarcane bagasse ash (scba)
scba was collected from matiari sugar mill. after collection, it was dried in the atmosphere and the dried ash was sieved through #300 sieves to obtain the desired ash, which was utilized as cement replacement in concrete.

e. water
drinking water was used.

iv. results and discussion

a. slump test
the slump test was studied based on slump losses using a standard slump cone in accordance with astm c 143-05.
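the binder proportions of the table i mixes follow a simple rule: the total replacement percentage is shared equally between scba and lsf. a minimal sketch of that split, with mix ids taken from the table:

```python
# binder split of the table i mixes: scba and lsf each take half of the
# total replacement percentage; the remainder stays cement.
# this is only a sketch of the proportions, not the full batch design.
def binder_split(replacement_pct):
    """return (cement %, scba %, lsf %) of binder mass."""
    half = replacement_pct / 2.0
    return 100.0 - replacement_pct, half, half

mixes = {mix_id: binder_split(r) for mix_id, r in
         [("01", 0), ("02", 5), ("03", 10), ("04", 15), ("05", 20)]}
```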
at 0% scba and 0% lsf the maximum slump of fresh concrete is 65mm, and at 10% scba and 10% lsf the minimum slump is 29mm. the slump value reduced as the amount of scba and lsf increased, as shown in figure 1.

fig. 1. slump test results

b. density of concrete
the concrete samples were used to analyze the density of concrete with the addition of several ratios of scba and lsf by weight of cement on the 28th day. the maximum value of 2380kg/m³ was noted at 0% scba + 0% lsf and the minimum value of 2290kg/m³ at 10% scba + 10% lsf after 28 days. the density of concrete reduced as the scba and lsf content increased, as shown in figure 2.

fig. 2. density of concrete

c. water absorption
the concrete samples were used to analyze the water absorption of concrete with the addition of several ratios of scba and lsf by weight of cement on the 28th day. it was maximum (4.30%) at 10% scba + 10% lsf and minimum (2.50%) at 0% scba + 0% lsf after 28 days. the water absorption of concrete increased with increasing content of scba and lsf, as shown in figure 3.

fig. 3. water absorption of concrete

d. compressive strength
the cubical samples were tested for the compressive strength of concrete. the optimum crushing strength increased by 8.96% and 10.33% at 5% scba + 5% lsf and decreased by 9.80% and 6.40% at 10% scba + 10% lsf after 7 days and 28 days respectively, as shown in figure 4.

e. split tensile strength
the cylindrical samples were tested for the split tensile strength of concrete with several percentages of lsf and scba. the optimum indirect tensile strength increased by 8.20% and 10.10% at 5% scba + 5% lsf and reduced by 9.84% and 4.04% at 10% scba + 10% lsf after 7 and 28 days respectively, as displayed in figure 5.

fig. 4.
compressive strength of concrete

fig. 5. split tensile strength of concrete

v. conclusions

the basic aim of this study was the utilization of scba and lsf as cement replacements in concrete and the determination of their effect on the fresh and hardened concrete properties. from this research study, the following conclusions can be drawn:
• at 0% scba and 0% lsf the concrete slump is maximum (65mm), and at 10% scba and 10% lsf the slump of fresh concrete is minimum (29mm). moreover, the slump value reduced as the amount of scba and lsf increased.
• the density of concrete was maximum (2380kg/m³) at 0% scba + 0% lsf and minimum (2290kg/m³) at 10% scba + 10% lsf after 28 days. the density of concrete reduced as the scba and lsf content increased.
• the water absorption of concrete was maximum (4.30%) at 10% scba + 10% lsf and minimum (2.50%) at 0% scba + 0% lsf after 28 days. the water absorption of concrete increased as the amount of scba and lsf increased.
• the optimum crushing strength increased by 8.96% and 10.33% at 5% scba + 5% lsf and decreased by 9.80% and 6.40% at 10% scba + 10% lsf after 7 and 28 days respectively.
• the optimum indirect tensile strength increased by 8.20% and 10.10% at 5% scba + 5% lsf and reduced by 9.84% and 4.04% at 10% scba + 10% lsf after 7 and 28 days respectively.

references

[1] m. uysal, v. akyuncu, “durability performance of concrete incorporating class f and class c fly ashes”, construction and building materials, vol. 34, pp. 170-178, 2012
[2] n. bheel, r. a. abbasi, s. sohu, s. a. abbasi, a. w. abro, z. h. shaikh, “effect of tile powder used as a cementitious material on the mechanical properties of concrete”, engineering, technology & applied science research, vol. 9, no. 5, pp. 4596-4599, 2019
[3] n. d. bheel, f. a. memon, s. l.
meghwar, a. w. abro, i. a. shar, “millet husk ash as environmental friendly material in cement concrete”, 5th international conference on energy, environment and sustainable development, jamshoro, pakistan, november 14-16, 2018
[4] s. w. m. supit, f. u. a. shaikh, “durability properties of high volume fly ash concrete containing nano-silica”, materials and structures, vol. 48, pp. 2431-2445, 2015
[5] n. bheel, k. a. kalhoro, t. a. memon, z. u. z. lashari, m. a. soomro, u. a. memon, “use of marble powder and tile powder as cementitious materials in concrete”, engineering, technology & applied science research, vol. 10, no. 2, pp. 5448-5451, 2020
[6] n. bheel, m. a. jokhio, j. a. abbasi, h. b. lashari, m. i. qureshi, a. s. qureshi, “rice husk ash and fly ash effects on the mechanical properties of concrete”, engineering, technology & applied science research, vol. 10, no. 2, pp. 5402-5405, 2020
[7] l. j. hanle, k. r. jayaraman, j. s. smith, “co2 emissions profile of the us cement industry”, washington dc: environmental protection agency, available at: https://www3.epa.gov/ttnchie1/conference/ei13/ghg/hanle.pdf, 2004
[8] f. ma, a. sha, p. yang, y. huang, “the greenhouse gas emission from portland cement concrete pavement construction in china”, international journal of environmental research and public health, vol. 13, no. 7, article id 632, 2016
[9] p. c. aitcin, high performance concrete, crc press, 1998
[10] k. a. gruber, t. ramlochan, a. boddy, r. d. hooton, m. d. a. thomas, “increasing concrete durability with high-reactivity metakaolin”, cement and concrete composites, vol. 23, no. 6, pp. 479-484, 2001
[11] r. kumar, a. y. b. m. yaseen, n. shafiq, a. jalal, “influence of metakaolin, fly ash and nano silica on mechanical and durability properties of concrete”, key engineering materials, vol. 744, pp. 8-14, 2017
[12] n. d. bheel, s. k. meghwar, r. a. abbasi, i. a. ghunio, z. h.
shaikh, “use of sugarcane bagasse ash as cement replacement materials in concrete”, international conference on sustainable development in civil engineering, jamshoro, pakistan, december 5-7, 2019
[13] a. a. dayo, a. kumar, a. raja, n. bheel, a. w. abro, z. h. shaikh, “effect of sugarcane bagasse ash as fine aggregates on the flexural strength of concrete”, international conference on sustainable development in civil engineering, jamshoro, pakistan, december 5-7, 2019
[14] n. bheel, a. w. abro, i. a. shar, a. a. dayo, s. shaikh, z. h. shaikh, “use of rice husk ash as cementitious material in concrete”, engineering, technology & applied science research, vol. 9, no. 3, pp. 4209-4212, 2019
[15] n. bheel, s. l. meghwar, s. a. abbasi, l. c. marwari, j. a. mugeri, r. a. abbasi, “effect of rice husk ash and water-cement ratio on strength of concrete”, civil engineering journal, vol. 4, no. 10, pp. 2373-2382, 2018
[16] n. bheel, s. l. meghwar, s. sohu, a. r. khoso, a. kumar, z. h. shaikh, “experimental study on recycled concrete aggregates with rice husk ash as partial cement replacement”, civil engineering journal, vol. 4, no. 10, pp. 2305-2314, 2018
[17] s. ghosal, s. c. moulik, “use of rice husk ash as partial replacement with cement in concrete: a review”, international journal of engineering research, vol. 4, no. 9, pp. 506-509, 2015
[18] z. h. shaikh, a. kumar, m. a. kerio, n. bheel, a. a. dayo, a. w. abro, “investigation on selected properties of concrete blended with maize cob ash”, 10th international civil engineering conference, karachi, pakistan, february 23-24, 2019
[19] l. g. li, a. k. h. kwan, “adding limestone fines as cementitious paste replacement to improve tensile strength, stiffness and durability of concrete”, cement and concrete composites, vol. 60, pp. 17-24, 2015
[20] s. kenai, b. menadi, a. attar, j.
khatib, “effect of crushed limestone fines on strength of mortar and durability of concrete”, international conference on construction and building technology, kuala lumpur, malaysia, june 16-20, 2008
[21] i. a. shar, f. a. memon, n. bheel, z. h. shaikh, a. a. dayo, “use of wheat straw ash as cement replacement material in the concrete”, international conference on sustainable development in civil engineering, jamshoro, pakistan, december 5-7, 2019
[22] a. m. diab, i. a. mohamed, a. a. aliabdo, “impact of organic carbon on hardened properties and durability of limestone cement concrete”, construction and building materials, vol. 102, pp. 688-698, 2016
[23] a. a. dayo, a. kumar, a. raja, n. bheel, z. h. shaikh, “use of sugarcane bagasse ash as a fine aggregate in cement concrete”, engineering science and technology international research journal, vol. 3, no. 3, pp. 8-11, 2019
[24] j. j. chen, a. k. h. kwan, y. jiang, “adding limestone fines as cement paste replacement to reduce water permeability and sorptivity of concrete”, construction and building materials, vol. 56, pp. 87-93, 2014
[25] c. aquino, m. inoue, h. miura, m. mizuta, t. okamoto, “the effects of limestone aggregate on concrete properties”, construction and building materials, vol. 24, no. 12, pp. 2363-2368, 2010
[26] n. d. bheel, s. a. abbasi, s. l. meghwar, f. a. shaikh, “effect of human hair as fibers in cement concrete”, international conference on sustainable development in civil engineering, jamshoro, pakistan, november 23-25, 2017
[27] a. n. khan, n. d. bheel, m. ahmed, r. a. abbasi, s. sohu, “use of styrene butadiene rubber (sbr) polymer in cement concrete”, indian journal of science and technology, vol. 13, no. 5, pp. 606-616, 2020

engineering, technology & applied science research vol. 9, no.
4, 2019, 4548-4553 www.etasr.com dung & phuong: short-term electric load forecasting using standardized load profile (slp) and …

short-term electric load forecasting using standardized load profile (slp) and support vector regression (svr)

nguyen tuan dung, planning department, evnhcmc power company, ho chi minh, vietnam, dp1526@gmail.com
nguyen thanh phuong, institute of engineering, hutech university of technology, ho chi minh, vietnam, nt.phuong@hutech.edu.vn

abstract—short-term load forecasting (stlf) plays an important role in business strategy building and in ensuring reliability and safe operation for any electrical system. there are many different methods used for short-term forecasts, including regression models, time series, neural networks, expert systems, fuzzy logic, machine learning, and statistical algorithms. the practical requirement is to minimize forecast errors, avoid wastages, prevent shortages, and limit risks in the electricity market. this paper proposes a method of stlf by constructing a standardized load profile (slp) based on past electrical load data and utilizing the support vector regression (svr) machine learning algorithm to improve the accuracy of short-term forecasting algorithms.

keywords-short-term load forecast; regression model; standardized load profile; support vector regression

i. introduction

load forecasting in electrical systems is a topic that has been studied extensively. there are two main approaches in this area: traditional statistical methods modeling the relationship between the load and load-affecting factors (such as time series, regression analysis, etc.) and machine learning methods (a branch of artificial intelligence). statistical methods assume the load data follow a pattern and try to forecast the value of future loads using different time series analysis techniques. intelligent systems are derived from mathematical expressions of human behavior and experience.
especially since the early 1990s, neural networks have been considered one of the most commonly used techniques in the field of electrical load forecasting, because they assume that there is a nonlinear function relating historical values and some external variables to future values [1]. the approximation ability of neural networks has made their applications popular. in recent years, an intelligent calculation method involving support vector machines (svm) has been widely used in the field of load forecasting. authors in [2] used the support vector regression (svr) technique to solve the electrical load prediction problem (forecasting the maximum daily load for the next 31 days). this was a competition organized by eunite (european network on intelligent technologies for smart adaptive systems). the provided information included the demand data of the past two years, the daily temperature of the past four years and local holiday events. the data were divided into 2 parts: one part used for training (about 80%-90%) and the rest used for algorithm testing (about 10%-20%). the set of training inputs included data of the previous day, previous hour, previous week, and the average of the previous week. since then, there have been several studies exploring the different techniques used for optimizing svr to perform load forecasting [3-10]. the main reason for using svm in load forecasting is that it can easily model the load curve, the relationship between the load and the dynamics of changing load demand. however, there are some problems encountered when the above algorithms are applied to real situations:
• climate conditions always play an important role in load forecasting, since they capture the relationship between climate and load demand. when load forecasting is done for the post-test period, it is very difficult to forecast the weather and climate values used as the input of the algorithm, and these values are often not available.
• electrical load samples include hidden elements, which tend to be similar to the previous load model. however, this leads to a false forecast for the following days if the day pattern differs from the previous day or if an impacting event occurs. therefore, the use of a dataset whose training inputs include data of the previous day, the previous hour, the previous week, and the average of the previous week carries many risks if the load models are not identical.
• if the forecast time frame is greater than the past data frame (more than 7 days), there will be a lack of input to run the algorithm.
• in addition, for asian countries (such as vietnam) that use the lunar calendar, there are difficult and unpredictable issues such as the lunar new year (usually in late january or early february). there is a deviation between the solar calendar and the lunar calendar (the load models are not identical), which often leads to large errors in the forecast results of the algorithm for this period.

this paper proposes a solution: building a standardized load profile (slp) based on the historical load dataset as a training dataset. this input dataset is combined with the svr algorithm to improve the accuracy of short-term forecast results, solve the problem of deviation between the solar and the lunar calendar, and overcome the input data frame limitation. the slp will be built for all 365 days and 8,760 hourly cycles in a year. the slp will be an important dataset during the training, testing and forecasting process. it will standardize load models by hours, days, seasons, and special day types (including lunar dates). therefore, the slp will contribute to solving the above-mentioned difficulties and to improving the quality of electrical load forecasting.

ii.
II. METHODOLOGY
Observing the load profiles of February in Ho Chi Minh City over the years (Figure 1), we can see a huge fluctuation in the chart shape from year to year. As a result, using historical data to forecast this period of time is extremely complicated.
Fig. 1. The load profiles of February over the years: (a) 2016, (b) 2017, (c) 2018
In fact, the algorithms used for forecasting in Vietnam have to go through an intermediate stage in which the months are converted into regular months (without holidays and the Lunar New Year). Afterwards, the forecast result is converted back, or the result is accepted with a large error. This is a common problem in software provided by foreign vendors.
A. Standardized Load Profiles (SLP)
Observing the load profiles of the days of the week and some special holidays of the year in Ho Chi Minh City (Figure 2), we see that the difference between weekdays (Tuesday to Friday) is small: they have the same load chart. The load profiles on Monday differ from normal days between 0:00 and 9:00, due to the demand carried forward from Sunday. The load profiles on Saturday change, but not much compared to normal days; mainly, the load demand decreases in the evening due to the start of the weekend. The load profiles on Sunday are completely different from normal days (the demand for electricity is low).
Fig. 2. Typical load profiles on some days in a year
Observing the load charts of New Year and the Lunar New Year, we see a complete difference: the graphs are almost flat and the load demand is quite low, because these are holidays. On Lunar New Year in particular the load demand is the lowest, because this is the longest holiday of the year (6 to 9 days).
SLPs are built by taking the capacity collected in each 60-minute period and dividing it by the maximum capacity. We need to build an SLP for each of the 365 days of the year. Some typical SLPs are shown in Figure 3.
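The normalization just described can be sketched in a few lines. The paper's experiments are in MATLAB, but the idea is language-independent; below is a Python sketch with invented hourly values, assuming (our reading of the text) that each day's profile is normalized by that day's own peak:

```python
import numpy as np

# Sketch of the SLP construction described above: a day's 24 hourly
# capacity readings are divided by the day's maximum capacity, so the
# profile peaks at 1.0. The load values are invented for illustration.
hourly_load = np.array([
    310, 295, 288, 284, 290, 315, 360, 420, 470, 495, 510, 520,
    505, 500, 498, 502, 515, 540, 560, 548, 510, 450, 390, 340,
], dtype=float)  # MW, one day of 24 60-minute cycles

slp = hourly_load / hourly_load.max()  # dimensionless profile in (0, 1]
```

Stacking such profiles for all 365 days, tagged with day type and lunar dates, yields the SLP dataset described above.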
Based on the SLP of each cycle of the past dataset, we can build the SLP dataset for future forecast periods. This should be accurate down to each cycle, each type of day (weekdays, working days, holidays, etc.), each week, and each month. The SLP is therefore a distinctive feature and an important input parameter of the SVR (NN) training process used to rebuild the load curves, from which we can also estimate data lost or not recorded during the measurement process.
B. Support Vector Regression (SVR)
A feature of SVR is that it provides a sparse solution: to build the regression function, we do not need to use all the data points in the training set. The points that contribute to the construction of the regression function are called support vectors. The prediction for a new data point depends only on the support vectors [5, 6].
Fig. 3. SLP of some days in a year: (a) Sunday, (b) Lunar New Year, (c) Saturday, and (d) a normal day
The regression function has the form:
y = f(x) = w^T φ(x) + b    (1)
Thus, the goal of SVR training is to find w and b [7-10] for the training set {(x1, t1), (x2, t2), …, (xN, tN)} ⊂ R^n × R. For a simple regression problem, to find w and b we minimize the regularized error function:
(1/2) Σ_{n=1}^{N} {y_n − t_n}^2 + (λ/2) ‖w‖^2    (2)
where λ is a regularization constant. To obtain a sparse solution, we replace the above error function with the ε-insensitive error function. The characteristic of this error function is that if the absolute value of the difference between the predicted value y(x) and the target value is less than ε (with ε > 0), the error is considered zero.
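A minimal Python sketch of the ε-insensitive error just described (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

# E_eps(r) = 0 when |r| < eps, and |r| - eps otherwise: deviations inside
# the eps-tube cost nothing, larger ones are penalized linearly.
def eps_insensitive(y_pred, target, eps):
    r = np.abs(np.asarray(y_pred, dtype=float) - np.asarray(target, dtype=float))
    return np.where(r < eps, 0.0, r - eps)

errors = eps_insensitive([1.0, 2.5], [1.2, 1.0], eps=0.5)  # -> [0.0, 1.0]
```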
We must now minimize the regularized error function:
C Σ_{n=1}^{N} E_ε(y(x_n) − t_n) + (1/2) ‖w‖^2    (3)
with y_n = y(x_n) = w^T φ(x_n) + b, where C is a regularization constant playing the role of λ, except that it multiplies the error term instead of ‖w‖^2. To allow some points to lie outside the ε-tube, we add slack variables. For each data point x_n we need two slack variables ξ_n ≥ 0 and ξ̂_n ≥ 0: ξ_n > 0 corresponds to a point with t_n > y(x_n) + ε (outside and above the tube), and ξ̂_n > 0 to a point with t_n < y(x_n) − ε (outside and below the tube).
Fig. 4. Illustration of the slack variables ξ_n
The condition for a target point to lie inside the tube is y_n − ε ≤ t_n ≤ y_n + ε, with y_n = y(x_n). Using the slack variables, we allow target points outside the tube (corresponding to slack variables > 0), so the conditions become:
t_n ≤ y(x_n) + ε + ξ_n
t_n ≥ y(x_n) − ε − ξ̂_n
Thus, the error function for SVR is:
C Σ_{n=1}^{N} (ξ_n + ξ̂_n) + (1/2) ‖w‖^2
Our goal is to minimize this error function subject to the constraints ξ_n ≥ 0, ξ̂_n ≥ 0 and the two inequalities above. Using the Lagrange function and the Karush-Kuhn-Tucker conditions, we obtain the equivalent dual optimization problem: maximize
−(1/2) Σ_{n=1}^{N} Σ_{m=1}^{N} (a_n − â_n)(a_m − â_m) k(x_n, x_m) − ε Σ_{n=1}^{N} (a_n + â_n) + Σ_{n=1}^{N} (a_n − â_n) t_n    (4)
where k is the kernel function, k(x, x') = φ(x)^T φ(x'), subject to the constraints:
0 ≤ a_n ≤ C
0 ≤ â_n ≤ C
Σ_{n=1}^{N} (a_n − â_n) = 0    (5)
From here, we obtain the regression function of SVR:
y(x) = Σ_{n=1}^{N} (a_n − â_n) k(x, x_n) + b    (6)
Thus, for an SVR using the ε-insensitive error function and the Gaussian kernel, we obtain three parameters: the regularization coefficient C, the parameter γ of the Gaussian kernel function, and the width of the tube ε [7].
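Once the dual coefficients are known, evaluating the regression function (6) is straightforward. The sketch below uses a Gaussian kernel with made-up support vectors and dual coefficients (chosen so that the differences sum to zero, per the equality constraint in (5)); a real model would obtain them from the dual optimization (4)-(5):

```python
import numpy as np

def gaussian_kernel(x, xn, gamma):
    # k(x, x') = exp(-gamma * ||x - x'||^2)
    return np.exp(-gamma * np.sum((np.asarray(x, dtype=float)
                                   - np.asarray(xn, dtype=float)) ** 2))

def svr_predict(x, support_vectors, dual_coef, b, gamma):
    # y(x) = sum_n (a_n - a_hat_n) k(x, x_n) + b   -- equation (6)
    return sum(c * gaussian_kernel(x, xn, gamma)
               for c, xn in zip(dual_coef, support_vectors)) + b

support_vectors = [[0.0], [2.0]]     # invented support vectors
dual_coef = [0.8, -0.8]              # (a_n - a_hat_n); they sum to zero per (5)
y = svr_predict([0.5], support_vectors, dual_coef, b=0.1, gamma=1.0)
```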
These parameters affect the forecast accuracy of the model and need to be selected carefully.
• If C is too large, priority is given to the training error, which leads to a complex model that easily overfits. If C is too small, priority is given to the simplicity of the model, which leads to a model that is too simple and reduces forecast accuracy.
• The meaning of ε is similar. If it is too large, there are fewer support vectors, making the model too simple. If ε is too small, there are many support vectors, leading to a complex model that is more likely to overfit.
• The γ parameter reflects the correlation between the support vectors and also affects the forecast accuracy of the model.
C. Research Models
The flowchart of the SLP-SVR forecasting algorithm is given in Figure 5.
Fig. 5. Flowchart of the SLP-SVR forecasting algorithm
Processed historical data (power consumption, capacity and temperature recorded in 24 cycles of 60 minutes each), together with the SLP, are fed into modules that build regression functions under the SVR and neural network (NN) algorithms. We then use this dataset to check and evaluate the error of the regression functions, and choose the regression function with the smallest error as the regression function for the next forecast phase. The SLP dataset of the 24 cycles of the expected period (including holidays, etc.) and the forecasted temperature of the 24 cycles of the corresponding period form the input of the selected regression function, which exports forecast results in 24 cycles for a period of 7-30 days.
III. RESULTS AND DISCUSSION
A. Input Data
The article uses data from January 1st, 2015 to November 17th, 2018 from EVNHCMC to run the test models. After preprocessing, the dataset is divided into two parts, a training set and a testing set, in which the testing set is the last 30 days of the dataset.
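The model-selection step in the Figure 5 flowchart above (build several regression functions, score each on held-out data, keep the one with the smallest error) can be sketched as follows. The candidate "models" are stand-in lambdas rather than real SVR/NN fits, and the scoring uses a plain mean absolute error for brevity:

```python
# Score each candidate regression function on held-out data and keep the
# best one, mirroring the "choose the smallest error" step of the flowchart.
def mean_abs_error(model, xs, ts):
    return sum(abs(model(x) - t) for x, t in zip(xs, ts)) / len(xs)

candidates = {
    "svr_like": lambda x: 2.0 * x + 1.0,   # illustrative stand-in models
    "nn_like":  lambda x: 2.1 * x + 0.5,
}
x_test = [1.0, 2.0, 3.0]
t_test = [3.0, 5.0, 7.0]                    # ground truth here is 2x + 1

best_name = min(candidates,
                key=lambda name: mean_abs_error(candidates[name], x_test, t_test))
```

In the paper's pipeline the candidates are the fitted SVR, NN and random forest regression functions, and the winner is reused for the next forecast phase.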
Alternatively, the dataset is divided into phases to test the forecast results over different time periods. The input data for the training algorithms include: capacity (Pmax/Pmin) in 60-minute cycles, temperature (max/min) in 60-minute cycles, standardized load profiles of the 24 hours of the day, and a list of holidays and the Lunar New Year in the forecast year. A useful measurement parameter is the mean absolute percentage error (MAPE), which is used to evaluate the error of the models:
MAPE = (100/n) Σ_t |y_t − y_t^f| / y_t    (7)
where y_t is the actual value and y_t^f the forecast value. The algorithms are programmed in MATLAB and the results are exported to Excel files for data exploitation.
B. SVR Models
It is necessary to correctly select the input parameters of the SVR models, such as the regularization coefficient C, the tube width ε, and the kernel function. The algorithm uses the same input dataset for all models. Some typical proposed SVR model parameters are shown in Table I.
TABLE I. SVR MODEL PARAMETERS
Model | C | ε | Kernel function
SVR 1 | 93.42 | 32.5 | Polynomial
SVR 2 | 500.32 | 0.01 | Gaussian
SVR 3 | 1 | 50.03 | Linear
SVR 4 | 100 | 0.01 | Linear
C. RFR Models
A set of regression trees is used, each with a different set of rules, to perform a nonlinear regression. The algorithm builds a total of 20 trees, with a minimum leaf size of 20. The number of leaves is kept smaller than or equal to the size of the tree to control overfitting and obtain high performance [13, 14]. The algorithm uses the same input dataset as the other models.
D. Neural Network Models
We used feedforward neural network models with the input variables and training dataset mentioned above. A one-hidden-layer network architecture with a layer size of 10 and sigmoid activation function was used. At the same time, a conventional neural network with a 3-hidden-layer architecture was used,
in which the first hidden layer has 10 nodes, the second 8, and the third 5 nodes.
E. Results and Analysis
1) Regression models test
We ran the forecasts for February 2018 (the month of the Lunar New Year) to assess the error of the models. The models took as inputs the data of the previous day, previous hour, previous week, and the previous week's average. Processed historical data (power consumption, capacity, temperature recorded in 24 one-hour cycles), together with the SLP, were fed into modules that build regression functions under the SVR, neural network and random forest algorithms.
Fig. 6. Regression models test
TABLE II. CHECKING ERRORS OF REGRESSION MODELS RESULTS
Date | Ytr | Yts1 | Yts2 | Yts3 | Yts4 | Ytnn | Ytfeed | Ytrf
1/23/18 | 9.71 | 4.05 | 5.02 | 6.35 | 4.19 | 6.09 | 4.55 | 2.91
1/24/18 | 8.30 | 3.65 | 2.61 | 7.00 | 4.25 | 0.65 | 4.76 | 4.19
1/25/18 | 7.17 | 4.35 | 3.57 | 7.42 | 4.21 | 4.58 | 5.84 | 4.63
1/26/18 | 7.10 | 6.20 | 6.77 | 7.48 | 6.39 | 6.58 | 5.82 | 6.44
1/27/18 | 9.22 | 1.37 | 0.44 | 3.27 | 1.33 | 0.56 | 1.91 | 1.06
1/28/18 | 9.68 | 2.16 | 3.28 | 7.12 | 0.32 | 25.51 | 5.89 | 3.93
1/29/18 | 9.15 | 5.30 | 6.17 | 6.92 | 4.91 | 5.71 | 5.96 | 5.67
We chose the regression function with the smallest error to be used for the next forecast phase; the Yts4 model was selected as the forecasting model.
2) Forecast results for February 2018
Considering the model forecast results for February, we see a big difference between forecast and reality (Figure 7). The reason is that we used the historical data of January (7-30 days before the forecasting date) as the input for the training model.
3) Results of testing SVR models
The results are shown in Figure 8 and Table III.
4) Results of testing machine learning models
The results are shown in Figure 9 and Table IV.
Fig. 7. Forecast results for the next 30 days
Fig. 8. SVR models test
TABLE III.
RESULTS OF CHECKING ERRORS OF SVR MODELS
Date | Yts1 | Yts2 | Yts3 | Yts4
1/23/18 | 1.15 | 0.64 | 2.22 | 3.87
1/24/18 | 1.70 | 2.12 | 2.95 | 6.19
1/25/18 | 3.03 | 3.30 | 3.38 | 6.68
1/26/18 | 1.35 | 1.04 | 1.76 | 2.76
1/27/18 | 6.77 | 4.56 | 6.42 | 1.56
1/28/18 | 4.18 | 5.09 | 1.81 | 0.76
1/29/18 | 0.24 | 0.12 | 2.69 | 2.14
MAPE | 2.63 | 2.41 | 3.03 | 3.42
Fig. 9. Machine learning models test
TABLE IV. CHECKING ERRORS OF MACHINE LEARNING MODELS RESULTS
Date | Ytnn | Ytfeed | Ytrf
1/23/18 | 1.25 | 1.61 | 1.70
1/24/18 | 2.14 | 2.90 | 3.36
1/25/18 | 0.99 | 5.55 | 3.89
1/26/18 | 3.16 | 1.84 | 2.26
1/27/18 | 4.81 | 1.56 | 1.92
1/28/18 | 7.51 | 5.85 | 4.68
1/29/18 | 4.41 | 2.05 | 0.43
MAPE | 3.47 | 3.05 | 2.60
5) Results of testing regression models
The results are shown in Figure 10 and Table V.
Fig. 10. Regression test models
TABLE V. RESULTS OF TEST MODELS CHECKING ERRORS
Date | Ytr | Yts1 | Yts2 | Yts3 | Yts4 | Ytnn | Ytfeed | Ytrf
1/23/18 | 9.71 | 1.15 | 0.64 | 2.22 | 3.87 | 1.25 | 1.61 | 1.70
1/24/18 | 8.30 | 1.70 | 2.12 | 2.95 | 6.19 | 2.14 | 2.90 | 3.36
1/25/18 | 7.17 | 3.03 | 3.30 | 3.38 | 6.68 | 0.99 | 5.55 | 3.89
1/26/18 | 7.10 | 1.35 | 1.04 | 1.76 | 2.76 | 3.16 | 1.84 | 2.26
1/27/18 | 9.22 | 6.77 | 4.56 | 6.42 | 1.56 | 4.81 | 1.56 | 1.92
1/28/18 | 9.68 | 4.18 | 5.09 | 1.81 | 0.76 | 7.51 | 5.85 | 4.68
1/29/18 | 9.15 | 0.24 | 0.12 | 2.69 | 2.14 | 4.41 | 2.05 | 0.43
MAPE | 8.62 | 2.63 | 2.41 | 3.03 | 3.42 | 3.47 | 3.05 | 2.60
We choose the regression function with the smallest error as the regression function for the next forecast phase; the Yts2 model is selected as the forecasting model.
6) Forecast results for February 2018
The results are shown in Figure 11, where a definite improvement is observed.
IV.
CONCLUSION
Examining the experimental results on the testing datasets (load datasets of the previous day, previous week, previous month, and the SLP dataset), we saw that the results of the SLP-SVR models are close to the actual values of February 2018, while the results of the old model show quite a large deviation. Thus, using the SLP as the input dataset for the forecasting regression modules is effective and gives forecasting results with low error. It solves the problem of the deviation between the solar and the lunar dates, especially in the months of the Lunar New Year.
Fig. 11. Forecast results for the next 30 days
REFERENCES
[1] M. H. M. R. Shyamali Dilhani, C. Jeenanunt, "Daily electric load forecasting: case of Thailand", 7th International Conference on Information Communication Technology for Embedded Systems, Bangkok, Thailand, March 20-22, 2016
[2] J. Huo, T. Shi, J. Chang, "Comparison of random forest and SVM for electrical short-term load forecast with different data sources", 7th IEEE International Conference on Software Engineering and Service Science, Beijing, China, March 23, 2017
[3] L. C. P. Velasco, C. R. Villezas, P. N. C. Phalang, J. A. A. Dagaang, "Next day electric load forecasting using artificial neural networks", Cebu City, Philippines, December 9-12, 2015
[4] D. Willingham, "Electricity load forecasting for the Australian market case study", available at https://ww2.mathworks.cn/matlabcentral/fileexchange/31877-electricity-load-forecasting-for-the-australianmarket-case-study?s_tid=fx_rc1_behav, 2016
[5] N. T. Dung, T. T. Ha, N. T. Phuong, "Comparative study of short-term electric load forecasting: case study EVNHCMC", 4th International Conference on Green Technology and Sustainable Development, Ho Chi Minh City, Vietnam, November 23-24, 2018
[6] E. Ceperic, V. Ceperic, A.
Baric, "A strategy for short-term load forecasting by support vector regression machines", IEEE Transactions on Power Systems, Vol. 28, No. 4, pp. 4356-4364, 2013
[7] V. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995
[8] S. Gunn, Support Vector Machines for Classification and Regression, Technical Report, University of Southampton, 1995
[9] V. Cherkassky, Y. Ma, "Selection of meta-parameters for support vector regression", International Conference on Artificial Neural Networks, Madrid, Spain, August 28-30, 2002
[10] D. Basak, S. Pal, D. C. Patranabis, "Support vector regression", Neural Information Processing – Letters and Reviews, Vol. 11, No. 10, pp. 203-224, 2007
[11] A. J. Smola, B. Scholkopf, "A tutorial on support vector regression", Statistics and Computing, Vol. 14, No. 3, pp. 199-222, 2004
[12] Understanding Support Vector Machine Regression, available at: https://www.mathworks.com/help/stats/understanding-support-vector-machine-regression.html
[13] L. Breiman, "Random forests", Machine Learning, Vol. 45, No. 1, pp. 5-32, 2001
[14] L. Breiman, J. H. Friedman, R. A. Olshen, C. J. Stone, Classification and Regression Trees, Chapman & Hall, 1984
Engineering, Technology & Applied Science Research Vol. 8, No. 3, 2018, 2907-2913, www.etasr.com, Abbassi et al.: A Numerical-Analytical Hybrid Approach for the Identification of SDM Solar Cell …
A Numerical-Analytical Hybrid Approach for the Identification of SDM Solar Cell Unknown Parameters
Rabeh Abbassi, College of Engineering, University of Hail, Saudi Arabia and University of Tunis, ENSIT, LaTICE Laboratory, Tunisia, r_abbassi@yahoo.fr
Attia Boudjemline, College of Engineering, University of Hail, Saudi Arabia, a_boudjemline@hotmail.com
Abdelkader Abbassi, Dept.
of Electrical Engineering, University of Tunis, ENSIT, LISIER Laboratory, Tunisia, abd_abbassi@yahoo.com
Ahmed Torchani, College of Engineering, University of Hail, Saudi Arabia and University of Tunis, ENSIT, LISIER Laboratory, Tunisia, tochahm@yahoo.fr
Hatem Gasmi, College of Engineering, University of Hail, Saudi Arabia and University of Tunis El Manar, ENIT, Tunisia, gasmi_hatem@yahoo.fr
Tawfik Guesmi, College of Engineering, University of Hail, Saudi Arabia and University of Sfax, ENIS, Tunisia, tawfiq.guesmi@gmail.com
Abstract—Appropriate modeling and accurate parameter identification of solar cells are crucial in the optimization of photovoltaic (PV) systems. The single-diode model (SDM), consisting of an ideal current source, an ideal diode, a shunt resistor and a series resistor, is widely used to simulate the behavior of PV cells/panels. In this article, a hybrid approach for the identification of solar cell SDM parameters is presented. This approach uses the inverse of the slope of the I-V curve under short-circuit and open-circuit conditions and combines numerical and analytical solutions. Indeed, knowing that numerical methods require appropriate initial values, the main idea of the proposed approach is to provide these values by analytical methods. The comparison of the obtained results with experimental ones, based on the manufacturer's datasheet, proves that the resulting algorithm requires less information from the manufacturer and significantly improves the parameter identification accuracy.
Keywords—solar energy; PV cell parameters; I-V and P-V characteristics; single-diode model
I. INTRODUCTION
Fossil fuels, i.e. oil, natural gas and coal, are the main sources of today's energy, whose demand is increasing at an alarming rate [1, 2]. The processes of extracting these materials and converting them to energy generate air and water pollution and land degradation and, consequently, cause harm to the health and well-being of humans and animals [3].
In addition, fossil fuels are non-renewable, as they exist in finite amounts. It is quite obvious that, at the present high rate of exploitation, they will eventually become depleted or too expensive to extract [4]. So we face two enormous challenges: a decrease in energy resources coupled with an increase in damage to the environment [5, 6]. To mitigate these problems, efforts are being made worldwide by governments, national and international agencies, companies and research institutions to find other sources of energy that are renewable, sustainable, environment-friendly and inexpensive [7-10]. In this respect, solar energy, in all its forms, has been identified as the most promising, because it is plentiful, renewable, available in almost every country, and harnessing it causes very little damage to the environment [10]. The most common way to exploit solar energy is via the photovoltaic (PV) effect, whereby the energy of the sunlight photons impinging on certain materials is converted to electricity [11]. Photovoltaic panels are intended to work outdoors and, as such, are exposed to varying environmental conditions, e.g. temperature and amount of solar insolation. In order to predict the performance of PV systems, designers need to know the electrical parameters of the PV cells/panels measured under all sorts of conditions. Unfortunately, PV manufacturers only provide some of the parameters, at a single operating condition referred to as the standard test condition or STC [12]. To address this problem, designers and researchers resort to modeling the PV cells/panels in order to determine the vital intrinsic parameters. Although the main equations derived from these models are basically the same, numerous methods and techniques with varying degrees of complexity have been developed and applied with great success to estimate the parameters.
Indeed, many review papers have been published to summarize the salient features of these methods and/or to conduct comparisons between some of them [13-20].
II. PV MODULE BASED ON SINGLE-DIODE MODEL
A. PV Cell Model
As mentioned above, it is of significant importance to know the performance of PV panels under different environmental conditions, e.g. solar irradiance and ambient temperature, before they are deployed in the field, in order to maximize their output power [11, 21, 22]. In this respect, and due to the lack of information in datasheets, the intrinsic parameters that determine this performance need to be estimated. Since it is not practically possible to test all manufactured solar cells/panels under all types of conditions, engineers resort to simulations, which save materials, time and labor [23-25]. A PV cell is basically a diode, a p-n junction made of two dissimilar semiconducting materials, whose top surface is exposed to sunlight. The photon energy is subsequently converted to electrical energy. As a result, the most commonly used equivalent electric circuit that describes the operation of a PV cell is the single-diode model shown in Figure 1(a) [26], where the components are assumed ideal. In this model, I is the current delivered to the load and V is the voltage across the load. The current source represents the photoelectric current, Iph, caused by the photons impinging on the solar cell. This current is assumed proportional to the solar irradiance and also depends on the cell's temperature [11]. The diode in parallel with the current source represents the recombination current in the quasi-neutral regions [27]. The current source serves to forward-bias the diode.
The single-diode model has been shown to yield, to a high degree of accuracy, the electrical parameters that characterize PV cells. It is a good compromise between precision and simplicity [29]. Furthermore, this model has proved to be a very useful tool in optimizing the design of an entire PV system, including solar panels and control power electronics, and hence in generating the maximum power possible. Note that, over the years, other equivalent electrical circuit models with different degrees of complexity and accuracy, such as double- and triple-diode ones, have been proposed, depending on the desired outcomes: accuracy, time taken to simulate, etc. [11, 26, 27]. By applying Kirchhoff's current law to the equivalent circuit, which is a lossless model, the current at the terminals of the solar cell is expressed as:
I = Iph − Id = Iph − Io[exp(V/(η·Vt)) − 1] = Iph − Io[exp(q·V/(η·k·T)) − 1]    (1)
where:
Io: diode reverse-bias saturation (dark) current (A)
η: diode ideality factor
Vt: thermal voltage (V)
k: Boltzmann's constant (1.381×10⁻²³ J/K)
q: charge of an electron (1.602×10⁻¹⁹ C)
T: cell temperature in Kelvin (K)
Equation (1) is the well-known Shockley diode equation. The diode reverse saturation current, Io, originates from minority carriers (e.g. electrons in the p-region) that recombine in the depletion region; this current limits the current in the reverse-bias operation mode [28]. The ideality factor, η, also known as the quality factor or emission coefficient, depends on the fabrication process and semiconductor material. It is typically between 1 and 2, depending on the dominant recombination mechanism [14]. The thermal voltage, Vt = kT/q, describes the voltage produced within the p-n junction due to temperature; at room temperature (300 K), Vt ≈ 26 mV. A solar cell is characterized by its current-voltage (I-V) and power-voltage (P-V) curves.
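As a quick numerical illustration of (1), a Python sketch with invented parameter values (not taken from the paper's panels):

```python
import math

# Ideal single-diode cell current from (1):
#   i = i_ph - i_o * (exp(v / (eta * v_t)) - 1)
K = 1.381e-23   # Boltzmann's constant, J/K
Q = 1.602e-19   # electron charge, C

def cell_current(v, i_ph, i_o, eta, t):
    v_t = K * t / Q              # thermal voltage, ~26 mV at 300 K
    return i_ph - i_o * (math.exp(v / (eta * v_t)) - 1.0)

# At v = 0 the exponential term vanishes, so the current equals i_ph.
i_sc_like = cell_current(0.0, i_ph=8.0, i_o=1e-9, eta=1.2, t=300.0)
```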
In order to generate accurate curves, one has to take into account the different losses associated with a cell/panel. Examples of the origins of these losses are intrinsic material defects, manufacturing flaws, and the contacts with the loads. Consequently, a shunt resistor and a series resistor are added, as in Figure 1(b), to give a more realistic electrical model.
Fig. 1. Equivalent circuit of the (a) ideal and (b) practical SDM PV cell model
The small series resistor Rs represents the conductivity of the materials, the thickness of the various layers, and the ohmic contacts between metal and semiconductor [27]. The large shunt resistor Rsh represents the leakage current across the p-n junction when the diode is reverse-biased and usually originates from cell manufacturing defects [28]. Taking the two resistors into account, (1) becomes:
I = Iph − Id − Ish    (2)
with:
Id = Io[exp(Vd/(η·Vt)) − 1]    (3)
Vd = V + I·Rs    (4)
Ish = (V + I·Rs)/Rsh    (5)
By substituting (3), (4), and (5) into (2), we obtain:
I = Iph − Io[exp((V + I·Rs)/(η·Vt)) − 1] − (V + I·Rs)/Rsh    (6)
Equation (6) describes the current-voltage relationship of the PV cell. It is the main equation used to represent the single-diode electrical model of Figure 1(b) and also to plot the I-V and P-V curves. It is clear from (6) that, to calculate I as a function of V, the values of five parameters, namely Iph, Io, Rs, Rsh and η, need to be known. For most existing panels, if not all of them, these parameters are not supplied by the manufacturers and hence need to be estimated.
B. Proposed Method for Parameter Extraction
To determine the five independent parameters, five independent equations are required.
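Note that I appears on both sides of (6), so even computing the current for a given voltage requires a numerical root-finder; the paper later mentions bisection and Newton-Raphson for exactly this kind of equation. Below is a bisection sketch with invented parameter values, assuming 0 ≤ v < voc so the physical root lies in [0, iph]:

```python
import math

K, Q = 1.381e-23, 1.602e-19  # Boltzmann's constant, electron charge

def residual(i, v, i_ph, i_o, eta, t, r_s, r_sh):
    # f(i) = i_ph - i_o*(exp((v + i*r_s)/(eta*v_t)) - 1) - (v + i*r_s)/r_sh - i
    # Equation (6) rearranged to f(i) = 0; f is decreasing in i.
    v_t = K * t / Q
    return (i_ph
            - i_o * (math.exp((v + i * r_s) / (eta * v_t)) - 1.0)
            - (v + i * r_s) / r_sh
            - i)

def solve_current(v, i_ph, i_o, eta, t, r_s, r_sh, tol=1e-10):
    lo, hi = 0.0, i_ph           # bracket: f(0) > 0 > f(i_ph) for 0 <= v < voc
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid, v, i_ph, i_o, eta, t, r_s, r_sh) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For example, `solve_current(0.4, 8.0, 1e-9, 1.2, 300.0, 0.005, 100.0)` returns the cell current a little below 8 A at a cell voltage of 0.4 V (all values illustrative).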
It is customary to use data extracted from the PV panel datasheet and sometimes from experiments performed on the cell/panel. In particular, three values are used: the short-circuit current Isc, the open-circuit voltage Voc and the maximum generated power Pmax, measured at the standard test conditions (STC), where the panel's ambient temperature is Tc = 25°C (298 K), the irradiance is 1000 W/m² and the air mass is AM1.5 [11]. The values resulting from the solution of the five equations are then used in subsequent simulations, which force the modeled I-V and P-V curves to pass through these three main points. In what follows, the asterisk (*) denotes values measured at STC.
At open circuit, I = 0 and V = Voc*, and (6) becomes:
Iph* = Io*[exp(Voc*/(η*·Vt*)) − 1] + Voc*/Rsh*    (7)
At short circuit, V = 0 and I = Isc*, and (6) becomes:
Isc* − Iph* + Io*[exp(Isc*·Rs*/(η*·Vt*)) − 1] + Isc*·Rs*/Rsh* = 0    (8)
Substituting (7) into (8) results in:
Isc* − Io*[exp(Voc*/(η*·Vt*)) − exp(Isc*·Rs*/(η*·Vt*))] − (Voc* − Isc*·Rs*)/Rsh* = 0    (9)
At the maximum power point Pmax*, I = Imp* and V = Vmp*, and (6) leads to:
Iph* − Io*[exp((Vmp* + Imp*·Rs*)/(η*·Vt*)) − 1] − (Vmp* + Imp*·Rs*)/Rsh* − Imp* = 0    (10)
By substituting (7) into (10), the latter becomes:
Io*[exp(Voc*/(η*·Vt*)) − exp((Vmp* + Imp*·Rs*)/(η*·Vt*))] + (Voc* − Vmp* − Imp*·Rs*)/Rsh* − Imp* = 0    (11)
The power delivered to the load is P = V·I and its derivative with respect to the voltage V is:
dP/dV = I + V·(dI/dV)    (12)
At the maximum power point Pmp*, dP/dV = 0 and (12) becomes:
Imp* + Vmp*·(dI/dV) = 0    (13)
The derivative of (6) gives:
dI/dV = −(Io/(η·Vt))·[1 + Rs·(dI/dV)]·exp((V + I·Rs)/(η·Vt)) − (1/Rsh)·[1 + Rs·(dI/dV)]    (14)
At STC, and using (13) and (14), equation (15) is derived:
Imp*/Vmp* − (Io*/(η*·Vt*))·(1 − Rs*·Imp*/Vmp*)·exp((Vmp* + Imp*·Rs*)/(η*·Vt*)) − (1 − Rs*·Imp*/Vmp*)/Rsh* = 0    (15)
The result is a set of four nonlinear equations, (7), (9), (11) and (15), with five unknown variables: Iph*, Io*, Rs*, Rsh* and η*. Consequently, a fifth equation is needed. The slope of the I-V curve at short circuit, i.e. at I = Isc* and V = 0, is approximately equal to the negative inverse of the shunt resistance Rsho [15, 29]. In other words:
dI/dV = −1/Rsho at I = Isc*, V = 0    (16)
With these values, (14) can be rewritten as:
1/(Rsho − Rs*) = (Io*/(η*·Vt*))·exp(Isc*·Rs*/(η*·Vt*)) + 1/Rsh*    (17)
In principle, Rsho can be extracted from the I-V curve, but since datasheets do not provide numerical values for the data used to plot this curve, researchers resort to graphical methods by means of a digitizer. However, inaccurate values can result, which in turn affect the other extracted parameters. By assuming (Io*/(η*·Vt*))·exp(Isc*·Rs*/(η*·Vt*)) ≪ 1/Rsh* and Rsho ≫ Rs*, equation (17) leads to the approximation Rsho ≈ Rsh* [21]. Replacing Rsho by Rsh* in (17) leads to:
1/(Rsh* − Rs*) = (Io*/(η*·Vt*))·exp(Isc*·Rs*/(η*·Vt*)) + 1/Rsh*    (18)
Finally, (7), (9), (11), (15) and (18) constitute a set of five nonlinear equations with five unknown variables: Iph*, Io*, Rs*, Rsh* and η*. Because these equations are implicit and nonlinear, finding analytical solutions is not a trivial task [24-30]. Approximations and simplifications are always used to arrive at simple analytical solutions [25]. However, such a set of transcendental equations is usually solved using numerical methods. These include curve-fitting methods, such as the Levenberg-Marquardt (LM) algorithm [29], and root-finding methods, such as the bisection and Newton-Raphson methods [11, 21, 29]. To reach convergence, the numerical methods require a good approximation of the starting values of the five parameters.
To achieve this, analytical solutions are used. As a result, a variety of methods and techniques aiming at solving these nonlinear equations have been devised, tested and published in the scientific literature over a few decades [12-22]. Once these parameters are determined, usually at STC, other equations are used to estimate the response of the PV cell/panel at other operating conditions, i.e. other temperatures and solar irradiances. Subsequently, (6) is used to generate the I-V curves for these operating conditions [11, 23, 26].
C. Approximate Analytical Solutions
From (9), the following expression for Io* is derived:
Io* = [Isc*·(Rs* + Rsh*)/Rsh* − Voc*/Rsh*] / [exp(Voc*/(η*·Vt*)) − exp(Isc*·Rs*/(η*·Vt*))]    (19)
The following acceptable assumptions and approximations are made. From experimental work, Rsh* ≫ Rs*, resulting in 1 + Rs*/Rsh* ≈ 1. In addition, Isc* ≫ Voc*/Rsh* and exp(Voc*/(η*·Vt*)) ≫ exp(Isc*·Rs*/(η*·Vt*)) [31]. Equation (19) then becomes:
Io* = Isc*·exp(−Voc*/(η*·Vt*))    (20)
The shunt resistor Rsh has a high value and is sometimes assumed infinite when modeling a PV cell. Furthermore, in short-circuit mode the diode is reverse-biased and hence its current can be neglected. Consequently, the following equality is valid in all cases:
Iph* = Isc*    (21)
From the I-V and P-V curves, one can notice that Voc* − Vmp* is small; with Rsh* being large, the assumption (Voc* − Vmp*)/Rsh* ≈ 0 is valid.
hence, by substituting (20) and (21) in (11), we obtain (22):

imp* = isc*·[ 1 − exp((vmp* − voc* + imp*·rs*)/(η*·vt*)) ]        (22)

since rsh* is high and rsh* ≫ rs*, the terms involving 1/rsh* will be negligible and (15) becomes:

imp* = [ (vmp* − imp*·rs*)/(η*·vt*) ]·isc*·exp((vmp* − voc* + imp*·rs*)/(η*·vt*))        (23)

taking the previous assumptions into consideration, (18) is transformed as:

rs* − (rsh*)²·(isc*/(η*·vt*))·exp((isc*·rs* − voc*)/(η*·vt*)) = 0        (24)

equation (22) can be rewritten as:

ln( (isc* − imp*)/isc* ) = (vmp* − voc* + imp*·rs*)/(η*·vt*)        (25)

by eliminating the term exp((vmp* − voc* + imp*·rs*)/(η*·vt*)) from (22) and (23) and using (25), the following expressions for rs* and η* are obtained [21]:

rs* = vmp*/imp* − (2·vmp* − voc*) / [ imp* + (isc* − imp*)·ln(1 − imp*/isc*) ]        (26)

η* = (2·vmp* − voc*) / { vt*·[ ln(1 − imp*/isc*) + imp*/(isc* − imp*) ] }        (27)

equation (24) leads to:

rsh* = sqrt( rs*·η*·vt*/isc* )·exp( (voc* − isc*·rs*)/(2·η*·vt*) )        (28)

once rs* and η* are computed using (26) and (27), their values will be used in (28) to calculate rsh*. the calculated values of rs*, rsh* and η* are in turn used to calculate io* and iph* from (19) and (7), respectively. finally, and with the assumptions made, (7), (19), (26), (27) and (28) constitute the approximated analytical solutions for the five parameters of the single-diode model for the pv cell/panel [32]. the estimated values of these parameters will be used as the starting values to solve numerically the implicit and non-linear equations (7), (8), (11), (15) and (17) by the matlab ‘fsolve’ function based on the l.m. algorithm.

d. estimation of sdm parameters at operating conditions

so far, the five sdm parameters were estimated at stc.
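the closed-form starting values for the five parameters can be sketched in a few lines of python. one assumption of this sketch: the thermal voltage vt* is taken as the module-level value ns·k·t/q; the function name is hypothetical. the input values are the stp250s-20/wd stc data from table i:

```python
import math

K = 1.380649e-23       # boltzmann constant, J/K
Q = 1.602176634e-19    # elementary charge, C

def approx_sdm_params(isc, voc, imp, vmp, ns, t=298.15):
    """Closed-form starting values for the five SDM parameters from
    datasheet values, following the approximate analytical solutions.
    vt is taken as the module thermal voltage ns*k*t/q (an assumption)."""
    vt = ns * K * t / Q
    l = math.log(1.0 - imp / isc)
    rs = vmp / imp - (2 * vmp - voc) / (imp + (isc - imp) * l)        # series resistance
    eta = (2 * vmp - voc) / (vt * (l + imp / (isc - imp)))            # ideality factor
    rsh = math.sqrt(rs * eta * vt / isc) * math.exp(
        (voc - isc * rs) / (2 * eta * vt))                            # shunt resistance
    io = isc * math.exp(-voc / (eta * vt))                            # saturation current
    iph = isc                                                         # photocurrent
    return {"iph": iph, "io": io, "rs": rs, "rsh": rsh, "eta": eta}

# datasheet values of the stp250s-20/wd module at stc (table i)
p = approx_sdm_params(isc=8.63, voc=37.4, imp=8.15, vmp=30.7, ns=60)
print(round(p["rs"], 3), round(p["eta"], 3))  # 0.218 1.105
```

these values are only starting points; the numerical solver then refines them against the full implicit system.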
however, in day-to-day applications, the conditions, especially the temperature and irradiance, are different from stc. moreover, the cell/panel parameters depend strongly on these conditions. therefore, it is crucial to evaluate the performance of the cell/panel at these real conditions. according to [12], the reverse saturation current io, the photocurrent iph and the shunt resistance rsh are expressed as follows:

io = io*·(t/t*)^3·exp[ (1/k)·(eg*/t* − eg/t) ]        (29)

iph = (g/g*)·[ iph* + ki·(t − t*) ]        (30)

rsh = rsh*·(g*/g)        (31)

where t, g, eg and rsh are the temperature, irradiance, material band gap energy and shunt resistance at the operating condition, respectively, whereas t*, g*, eg* and rsh* are the corresponding parameters at stc. the parameter ki is the temperature coefficient of the short-circuit current and k is boltzmann's constant.

iii. results and discussions

in this section, the extraction of the parameters of the proposed model is carried out. the results of the proposed optimization algorithm are compared with the experimental results. the comparison is implemented under different conditions of irradiation and temperature. the studied method was applied to the suntech power ‘stp250s-20/wd’ monocrystalline silicon solar module and the multicrystalline tallmax module ‘tsm-pd14’. the five parameters of the model were estimated in accordance with the main steps presented previously. theoretical (or simulated) i-v and p-v curves derived from the developed approach were compared to experimental ones provided by the manufacturers, for different environmental conditions. the specifications of the ‘stp250s-20/wd’ and ‘tsm-pd14’ panels are depicted in table i. figures 2 and 3 show the i-v and p-v characteristics of the ‘stp250s-20/wd’ solar module at fixed module
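the translation rules (29)-(31) map the stc parameters to any operating point. in the sketch below, the silicon band-gap value (≈1.12 ev expressed in joules) and the neglect of the band gap's own temperature dependence are assumptions, and the function name is hypothetical:

```python
import math

K = 1.380649e-23  # boltzmann constant, J/K

def translate_params(io_ref, iph_ref, rsh_ref, t, g, ki,
                     t_ref=298.15, g_ref=1000.0,
                     eg=1.794e-19, eg_ref=1.794e-19):
    """Translate io, iph and rsh from STC to an operating point
    (t in kelvin, g in W/m^2) per (29)-(31). The silicon band gap
    eg ≈ 1.12 eV in joules is an assumption of this sketch."""
    io = io_ref * (t / t_ref) ** 3 * math.exp((eg_ref / t_ref - eg / t) / K)  # (29)
    iph = (g / g_ref) * (iph_ref + ki * (t - t_ref))                          # (30)
    rsh = rsh_ref * (g_ref / g)                                               # (31)
    return io, iph, rsh

# illustrative stc values: io* = 2.5e-9 a, iph* = 8.63 a, rsh* = 7000 ohm
io_op, iph_op, rsh_op = translate_params(2.5e-9, 8.63, 7000.0,
                                         t=323.15, g=800.0, ki=0.004)
print(round(iph_op, 3))  # 6.984
```

note how the photocurrent scales essentially linearly with irradiance while the saturation current grows steeply with temperature, which is what drives the voc drop discussed below.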
temperature (t=25°c) and under different irradiance levels (200, 400, 600, 800 and 1000 w.m-2). the curves were overlaid on top of those obtained using the proposed method to highlight any discrepancies. the model curves match well with the experimental data except for a negligible gap registered at high solar radiation. similarly, figures 4 and 5 depict the experimental i-v and p-v curves of the multicrystalline tallmax solar module ‘tsm-pd14’. again, one can see that the simulated model curves match well with the experimental ones. figures 2, 3, 4 and 5 show the variation of isc (which is none other than iph) with the illumination. moreover, the increase in solar radiation causes a slight rise in the open-circuit voltage in addition to an increase in the maximum power generated by the panel. these results prove that the calculated data are extremely close to the experimental ones.

fig. 2. single-diode model and manufacturer datasheet p-v curves for stp250s-20/wd at different irradiance levels (200, 400, 600, 800 and 1000 w.m-2) and fixed module temperature (t=25°c).

fig. 3. single-diode model and manufacturer datasheet i-v curves for stp250s-20/wd at different irradiance levels (200, 400, 600, 800 and 1000 w.m-2) and fixed module temperature (t=25°c).

fig. 4. comparison of single-diode model and manufacturer datasheet p-v curve for tsm-pd14 at different irradiance levels (200, 400, 600, 800 and 1000 w.m-2) and fixed module temperature (t=25°c).

fig. 5. comparison of single-diode model and manufacturer datasheet i-v curve for tsm-pd14 at different irradiance levels (200, 400, 600, 800 and 1000 w.m-2) and fixed module temperature (t=25°c).

table i. datasheet of suntech-power ‘stp250s-20/wd’ and tallmax module ‘tsm-pd14’ pv modules at standard test condition (stc) and nominal operating cell temperature (noct).
parameter | stp250s-20/wd (stc / noct) | tsm-pd14 (stc / noct)
pmp (w) | 250 / 183 | 325 / 242
vmp (v) | 30.7 / 27.9 | 37.2 / 34.5
imp (a) | 8.15 / 6.55 | 8.76 / 7.02
voc (v) | 37.4 / 34.4 | 45.9 / 42.6
isc (a) | 8.63 / 6.96 | 9.25 / 7.47
ki | 0.05%/°c | 0.05%/°c
kv | 0.34%/°c | 0.32%/°c
ns | 60 | 72

after investigating the performance of the proposed approach under the effects of solar radiation, the effects of the temperature, which is a very important parameter, will also be evaluated. in order to predict a pv module's i-v and p-v characteristic curves at different temperatures, for which data or i-v curves are not available, the temperature coefficients (the temperature coefficient of voc (kv) and the temperature coefficient of isc (ki)) were used. by using the sandia national laboratory database, the parameters isc(t), voc(t), imp(t), and vmp(t) at different temperatures can be determined. then, the obtained values enable the estimation of those of η(t), io(t), and iph(t). subsequently, the i-v and p-v curves at 0°c, 25°c, 50°c, and 75°c were generated for the panels ‘stp250s-20/wd’ and ‘tsm-pd14’ in figures 6-9.

fig. 6. single-diode model and manufacturer datasheet p-v curves for stp250s-20/wd at different module temperatures (10, 20, 30 and 50°c) and fixed irradiance level (1000 w.m-2).

fig. 7. single-diode model and manufacturer datasheet i-v curves for stp250s-20/wd at different module temperatures (10, 20, 30 and 50°c) and fixed irradiance level (1000 w.m-2).

fig. 8. single-diode model and manufacturer datasheet i-v curves for tsm-pd14 at different module temperatures (10, 20, 30 and 50°c) and fixed irradiance level (1000 w.m-2).

fig. 9. single-diode model and manufacturer datasheet p-v curves for tsm-pd14 at different module temperatures (10, 20, 30 and 50°c) and fixed irradiance level (1000 w.m-2).
while increasing the temperature leads to a negligible increase in the photocurrent iph due to better light absorption, a noticeable decrease in the open-circuit voltage is observed. this is accompanied by a large reduction in the maximum power pmax, which translates into a decrease in the available power. the superposition of the i-v and p-v curves obtained using the estimated parameters (of the single-diode model adopted in the work presented here) on top of those generated using the experimental data shows very good agreement, albeit with negligible differences.

iv. conclusions

in this paper, a hybrid numerical-analytical approach was developed and programmed in the matlab environment. its capability to estimate the unknown electrical parameters of pv modules using the single-diode model (sdm) was validated by the experimental i-v and p-v data extracted from the manufacturer's datasheet. two pv modules, namely, ‘stp250s-20/wd’ and ‘tsm-pd14’, made with different manufacturing techniques were utilized for validation. it can be concluded that the extracted characteristics nearly coincide with the experimental ones. consequently, the obtained good fitting indicates the feasibility and high precision of the proposed method.

references

[1] f. t. hamzehkolaei, n. amjady, “a techno-economic assessment for replacement of conventional fossil fuel based technologies in animal farms with biogas fueled chp units”, renewable energy, vol. 118, pp. 602-614, 2018 [2] s. saad, l. zellouma, “fuzzy logic controller for three-level shunt active filter compensating harmonics and reactive power”, electric power systems research, vol. 79, no. 10, pp. 1337-1341, 2009 [3] a. nikolaev, p. konidari, “development and assessment of renewable energy policy scenarios by 2030 for bulgaria”, renewable energy, vol. 111, pp. 792-802, 2017 [4] m. a. destek, a.
aslan, “renewable and non-renewable energy consumption and economic growth in emerging economies: evidence from bootstrap panel causality”, renewable energy, vol. 111, pp. 757-763, 2017 [5] r. abbassi, s. marrouchi, m. ben hessine, s. chebbi, h. jouini, “voltage control strategy of an electrical network by the integration of the upfc compensator”, international review on modelling and simulation, vol. 5, no. 1, pp. 380–384, 2012 [6] m. bhattacharya, s. a. churchill, s. r. paramati, “the dynamic impact of renewable energy and institutions on economic output and co2 emissions across regions”, renewable energy, vol. 111, pp. 157-167, 2017 [7] r. abbassi, s. saidi, m. hammami, s. chebbi, “analysis of renewable energy power systems: reliability and flexibility during unbalanced network fault”, in: handbook of research on advanced intelligent control engineering and automation, pp. 651–686, igi global, 2015 [8] k. hori, t. matsui, t. hasuike, k. fukui, t. machimura, “development and application of the renewable energy regional optimization utility tool for environmental sustainability: reroutes”, renewable energy, vol. 93, pp. 548-561, 2016 [9] r. abbassi, s. chebbi, “energy management strategy for a grid-connected wind-solar hybrid system with battery storage: policy for optimizing conventional energy generation”, international review of electrical engineering, vol. 7, pp. 3979-3990, 2012 [10] a. abbassi, m. a. dami, m. jemli, “a statistical approach for hybrid energy storage system sizing based on capacity distributions in an autonomous pv/wind power generation system”, renewable energy, vol. 103, pp. 81-93, 2017 [11] a. abbassi, r. gammoudi, m. a. dami, o. hasnaoui, m. jemli, “an improved single-diode model parameters extraction at different operating conditions with a view to modeling a photovoltaic generator: a comparative study”, solar energy, vol. 155, pp. 478–489, 2017 [12] w. de soto, s. a. klein, w. a.
beckman, “improvement and validation of a model for photovoltaic array performance”, solar energy, vol. 80, no. 1, pp. 78-88, 2006 [13] d. t. cotfas, p. a. cotfas, s. kaplanis, “methods to determine the dc parameters of solar cells: a critical review”, renewable and sustainable energy reviews, vol. 28, pp. 588-596, 2013 [14] g. ciulla, v. lo brano, v. di dio, g. cipriani, “a comparison of different one-diode models for the representation of i–v characteristic of a pv cell”, renewable and sustainable energy reviews, vol. 32, pp. 684-696, 2014 [15] d. jena, v. v. ramana, “modeling of photovoltaic system for uniform and non-uniform irradiance: a critical review”, renewable and sustainable energy reviews, vol. 52, pp. 400-417, 2015 [16] a. m. humada, m. hojabri, s. mekhilef, h. m. hamada, “solar cell parameters extraction based on single and double-diode models: a review”, renewable and sustainable energy reviews, vol. 28, pp. 494-509, 2016 [17] m. u. siddiqui, a. f. m. arif, a. m. bilton, s. dubowsky, m. elshafei, “an improved electric circuit model for photovoltaic modules based on sensitivity analysis”, solar energy, vol. 90, pp. 29-42, 2013 [18] m. b. h. rhouma, a. gastli, l. b. brahim, f. touati, m. benammar, “a simple method for extracting the parameters of the pv cell single-diode model”, renewable energy, vol. 113, pp. 885-894, 2017 [19] m. bashahu, a. habyarimana, “review and test of methods for determination of the solar cell series resistance”, renewable energy, vol. 6, no. 2, pp. 129-138, 1995 [20] w. gong, z. cai, “parameter extraction of solar cell models using repaired adaptive differential evolution”, solar energy, vol. 94, pp. 209-220, 2013 [21] m. hejri, h. mokhtari, m. r. azizian, l.
söder, “an analytical-numerical approach for parameter determination of a five-parameter single-diode model of photovoltaic cells and modules”, international journal of sustainable energy, vol. 35, no. 4, pp. 396-410, 2016 [22] p. a. kumari, p. geethanjali, “parameter estimation for photovoltaic system under normal and partial shading conditions: a survey”, renewable and sustainable energy reviews, vol. 84, pp. 1-11, 2018 [23] n. barth, r. jovanovic, s. ahzi, m. a. khaleel, “pv panel single and double diode models: optimization of the parameters and temperature dependence”, solar energy materials and solar cells, vol. 148, pp. 87-98, 2016 [24] s. bana, r. p. saini, “identification of unknown parameters of a single diode photovoltaic model using particle swarm optimization with binary constraints”, renewable energy, vol. 101, pp. 1299-1310, 2017 [25] f. j. toledo, j. m. blanes, “analytical and quasi-explicit four arbitrary point method for extraction of solar cell single-diode model parameters”, renewable energy, vol. 92, pp. 346-356, 2016 [26] k. ishaque, z. salam, s. mekhilef, a. shamsudin, “parameter extraction of solar photovoltaic modules using penalty-based differential evolution”, applied energy, vol. 99, pp. 297-308, 2012 [27] a. luque, s. hegedus, handbook of photovoltaic science and engineering, john wiley & sons, ltd, 2005 [28] h. wirth, photovoltaic modules technology and reliability, walter de gruyter gmbh & co kg, 2016 [29] g. petrone, c. a. ramos-paja, g. spagnuolo, photovoltaic sources modeling, wiley & sons, 2017 [30] m. chegaar, z. ouennoughi, f. guechi, “extracting dc parameters of solar cells under illumination”, vacuum, vol. 75, no. 4, pp. 367-372, 2004 [31] j. c. h. phang, d. s. h. chan, j. r. phillips, “accurate analytical method for the extraction of solar cell model parameters”, electronics letters, vol. 20, no. 10, pp. 406-408, 1984 [32] m. kumar, a.
kumar, “an efficient parameters extraction technique of photovoltaic models for performance assessment”, solar energy, vol. 158, pp. 192–206, 2017

engineering, technology & applied science research vol. 13, no. 4, 2023, 11210-11215 11210 www.etasr.com pervan et al.: mechanical stability analysis of the external unilateral fixation device due to the …

mechanical stability analysis of the external unilateral fixation device due to the impact of axial pressure

nedim pervan, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, pervan@mef.unsa.ba (corresponding author)
elmedin mesic, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, mesic@mef.unsa.ba
adis muminovic, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, adis.muminovic@mef.unsa.ba
enis muratovic, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, muratovic@mef.unsa.ba
muamer delic, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, delic@mef.unsa.ba
vahidin hadziabdic, university of sarajevo, faculty of mechanical engineering, bosnia and herzegovina, hadziabdic@mef.unsa.ba
lejla redzepagic-vrazalica, university of sarajevo, faculty of dentistry with clinics, bosnia and herzegovina, lejlaredzepagic@yahoo.com

received: 27 march 2023 | accepted: 7 march 2023
licensed under a cc-by 4.0 license | copyright (c) by the authors | doi: https://doi.org/10.48084/etasr.5888

abstract

this study performed a mechanical stability analysis for the impact of axial pressure on an ultra x external unilateral fixation device applied to a tibia with an open fracture. the real construction of the fixation device was used to create a 3d geometric model and a finite element method (fem) model, which were used to perform structural analysis in the catia v5 (computer aided three-dimensional interactive application) cad/cae system.
specific stresses and displacements were observed at points of interest using structural analysis. the focus was on the relative displacements of the proximal and distal bone segments in the fracture zone. these displacements were used to calculate the stiffnesses of the bone in the fracture zone and the fixation device itself. the results obtained provide the necessary information regarding the stability of the ultra x fixation device.

keywords-external unilateral fixation device; specific stresses; relative displacements; stiffness; stability

i. introduction

in recent years, there has been a considerable improvement in external fixation devices in terms of their construction variants, which have been experimentally investigated to provide information on their characteristics and advantages in terms of stability, stiffness, mechanical properties, and patient comfort during treatment. using software for 3d modeling and fem analysis to perform mechanical stability analysis is not a substitute for an experimental examination but is exclusively a tool for data comparison and validation. the experimental investigation of fixation devices is mainly based on biomechanical properties, along with the influence of specific parameters on the stability of the device [1]. the results of these investigations are reflected in certain values, such as von mises stresses, displacements, angular strains, and fixation device stiffnesses, and most of these studies provide results of the application of the fixation device [2-3]. in recent years, the most popular treatment for tibia fractures has been the use of intermediary fasteners [4].
in [5], the application of an external fixation device and intermediary fasteners was considered in an open tibia fracture, taking into account the treatment time and other possible complications, such as the size, severity, etc. in [6], the stiffness of the fixation device was defined concerning the location of the fracture and the number of fasteners and pins. in [7], an analysis of the stiffness of the hoffmann unilateral and uniplanar fixation device was presented along with its relation to the number of fasteners, trusses, and couplings. the stiffness of the device is determined by the loads that simulate normal walking conditions. in [8], a comparative study was conducted on two external fixation devices: the original hoffmann and the ao tubular device with four different construction solutions. in [9], the mechanical properties of the external pinless fixation device were experimentally investigated, comparing its results with the ao tubular and ultra x devices. this study concluded that the ao tubular devices are far superior in comparison with the other two solutions. in [10], the ilizarov fixation device was investigated experimentally. furthermore, many studies have analyzed the mechanical stability of structures [11-13]. this study aimed to investigate the mechanical properties of the ultra x external unilateral fixation device, applied to a tibia with an open fracture under the impact of axial pressure. the construction parameters taken into account were the stiffness of the device, the values of the maximum von mises stresses, and the displacements at specific points.

ii. development of the cad/fem model

the adjustable fixation device should be light, stiff, easy to implement, etc. such devices should be part of the first response at accident sites so that basic stabilization could be performed before transporting patients.
figure 1 shows the howmedica ultra x external fixation device, which is one of the first modular fixation devices and was used in the first gulf war in 1991. the components of the device are mostly made of metals, alloys, and plastics such as polymers and carbon fibers. the ultra x fixation device has a truss made of austenitic stainless steel x2crnimo17-12-2, while the couplings and the small and large spheres are made of polymeric materials, which are characterized by lower strength, young's modulus, density, and specific weight, and by very good forming properties. the upper and lower parts of the coupling and the fastening head are made of special polyvinyl chloride (pvc), which has high hardness and good mechanical properties. the small and large spheres are made of polybutylene (pb), which has processing properties similar to pvc: it can be manufactured with any method of thermoforming, which allows great freedom during the shape-forming process. table i shows the mechanical properties of the unilateral ultra x fixation device [14].

fig. 1. the ultra x fixation device.

table i. mechanical properties of the unilateral ultra x fixation device

component | standard abbreviation (en) | modulus of elasticity e (gpa) | poisson's coefficient υ | density ρ (kg/m3) | yield strength σv (mpa)
truss | x2crnimo17-12-2 | 230 | 0.29 | 8000 | 620
sphere | >pb< | 2.9 | 0.4 | 1290 | -
coupling | >pvc< | 3.3 | 0.38 | 1380 | 0.2
fastener | x5crni18-10 | 193 | 0.29 | 7900 | 205
half-pin | x2crnimo18-14-3 | 196.4 | 0.3 | 8000 | 800

the cad/fem model of the ultra x device was developed using catia v5 software. device components were defined and modeled in the part design environment and subsequently assembled in the assembly design environment. the general structural analysis module was used in the next step of model creation to complete the fem model and define the material for each component. the material of the bone fragments was assumed to be orthotropic with properties defined according to table ii [15-16].
after the materials were defined, the next step in fem processing was the discretization of the model and the definition of the finite element type. linear (te4) and parabolic (te10) elements were used for the model. the te4 elements were used for the spheres, while the rest of the components were discretized with the te10 elements. after the discretization was complete, it was necessary to define the constraints between the components of the device. the constraints used were the following: fastened connections between half-pins and bone segments, as shown in figure 2(a), and contact connections between other components, as shown in figure 2(b). in addition to defining the necessary constraints, it was also mandatory to define supports, as shown in figure 3.

table ii. bone model mechanical properties

property | value
longitudinal modulus of elasticity | 22900 mpa
tangential modulus of elasticity | 10500 mpa
normal modulus of elasticity | 14200 mpa
poisson's coefficient in the xy plane | 0.29
poisson's coefficient in the xz plane | 0.19
poisson's coefficient in the yz plane | 0.31
shear modulus in the xy plane | 6480 mpa
shear modulus in the xz plane | 6000 mpa
shear modulus in the yz plane | 3700 mpa
density | 1850 kg/m3

the last step in creating the fem model was to define the axial load, which is applied as a surface load on the top surface of the upper bone segment. the upper bone segment is constrained so it can only move in the z-axis direction, i.e. the direction of the applied force. the lower bone segment is supported by a spherical joint (ball joint) through a virtual part (smooth virtual part). the spherical joint allows rotation around a predefined point (handle node), and all translations are restricted, as shown in figure 4.
the axial load was set to 200 n, according to orthopedic recommendations and [17-18].

fig. 2. defining connection constraints: (a) fastened, (b) contact.

fig. 3. fixation device model with constraints and supports.

fig. 4. fixation device fem model with the applied load.

iii. determination of stress, displacement, and stiffness

during structural analysis, specific points were monitored to obtain values of principal and von mises stresses generated on the fixation device. the intensity of the equivalent one-axis stress, i.e. the von mises stress, often used in mechanics, is defined as [19-20]:

σvm = σe = sqrt(3·j2) = sqrt{ (1/2)·[ (σ1 − σ2)² + (σ2 − σ3)² + (σ3 − σ1)² ] }        (1)

apart from stresses, displacement values at the same points were monitored so that the stiffness of the device can be defined as the ratio of the load and displacements. the device stiffness under the impact of axial load can be defined as [19, 21]:

cp = fp/δp        (2)

where fp is the axial force (n) and δp is the axial displacement of the bone segments at the fracture zone (mm). the fixation device stiffness is an important parameter, but it doesn't give direct information about displacements at the fracture zone, so the fracture stiffness needs to be defined. this was achieved by determining the displacements in the x, y, and z directions of the pair of adjacent points on the planes of the proximal and distal bone segments at the fracture zone. for these points, the resultant vector of relative displacement rmax has the highest value. accordingly, the total fracture stiffness is defined as the ratio of the load and the resultant relative displacement of the observed pair of points [22-23]:

cpp = fp/rmax = fp / sqrt( rd(x)² + rd(y)² + rd(z)² )        (3)
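equation (1) reduces to a one-liner; a quick sanity check is that uniaxial tension (σ1, 0, 0) returns σ1, and that a pure-shear principal state (τ, −τ, 0) returns τ·sqrt(3). the function name is hypothetical:

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent (von Mises) stress from the three principal stresses,
    sigma_vm = sqrt(0.5*[(s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2])."""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

print(von_mises(100.0, 0.0, 0.0))  # uniaxial tension: 100.0
```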
the relative displacements rd(x), rd(y), rd(z) of the observed points are defined as [24-25]:

rd(x) = dp(x) − dd(x)
rd(y) = dp(y) − dd(y)        (4)
rd(z) = dp(z) − dd(z)

where rd(x), rd(y), and rd(z) are the relative displacements for the points of the bone segments (mm), dp(x), dp(y), and dp(z) are the displacements of the proximal bone segment (mm), and dd(x), dd(y), and dd(z) are the displacements of the distal bone segment in the x, y, and z directions (mm).

iv. results

figure 5 shows the displacement vectors due to the maximum axial load, where the course, direction, and intensity of the vectors for the analyzed points can easily be noticed. table iii shows the components of the displacement vectors and the displacement values for the maximum axial load of 200 n. the stiffness of the construction is determined using (2), based on the axial displacement in the z-axis direction (straight surface at the top of the proximal bone), while the fracture stiffness requires displacements at the proximal and distal bone segments at the fracture zone in the x, y, and z directions. this is done by observing which pair of points will result in the highest displacements, as shown in figure 5 (detail a).

fig. 5. displacement vectors for the specific points under the impact of maximum axial load.

table iii. displacement and stiffness values

zone | x (mm) | y (mm) | z (mm)
load zone (proximal segment) | 0 | 0 | -4.54
fracture zone, proximal segment dp | 3.593 | 0.616 | -4.76
fracture zone, distal segment dd | 3.744 | 0.701 | 0.246

fracture stiffness cpp = 39.89 n/mm; construction stiffness cp = 44.05 n/mm.

figure 6 shows the von mises distribution.
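equations (2)-(4) can be checked numerically against the values reported in table iii (axial load 200 n). the helper names below are hypothetical, and the axial displacement is passed as a magnitude:

```python
import math

def construction_stiffness(fp, delta_p):
    """Axial construction stiffness cp = fp / delta_p, per (2).
    fp in N, delta_p in mm (magnitude of the axial displacement)."""
    return fp / delta_p

def fracture_stiffness(fp, dp, dd):
    """Fracture stiffness cpp = fp / rmax, per (3)-(4), where dp and dd
    are (x, y, z) displacement tuples of the proximal and distal points."""
    rd = [p - d for p, d in zip(dp, dd)]          # relative displacements (4)
    rmax = math.sqrt(sum(c * c for c in rd))      # resultant vector magnitude
    return fp / rmax

# values from table iii (axial load 200 n)
cp = construction_stiffness(200.0, 4.54)
cpp = fracture_stiffness(200.0, (3.593, 0.616, -4.76), (3.744, 0.701, 0.246))
print(round(cp, 2), round(cpp, 2))  # 44.05 39.93, close to the reported 44.05 and 39.89 n/mm
```

the small gap between 39.93 and the reported 39.89 n/mm is consistent with rounding of the tabulated displacements.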
the truss is an important component of the fixation device that needs to be considered in the structural analysis. it is loaded in eccentric compression (simultaneous bending and compression), and the maximum von mises stress in the truss amounts to σvm = 318.62 mpa, which is also the global maximum.

fig. 6. von mises stress distribution.

due to the axial pressure, the bone is bent around the y-axis. this induces the location of the maximum stress at the bone circumference, i.e. at the location of the peripheral points along the x-axis, as shown in figure 7.

fig. 7. von mises stress distribution for the fixation device truss.

the intensities and the directions of the principal stresses were monitored for the 10 most critical zones of the device construction, as shown in figure 8. table iv summarizes the values of the principal and von mises stresses for these points.

fig. 8. principal stresses at the critical zones of the construction.

table iv. stress values due to axial pressure load

point | pm+: σ1 | σ2 | σ3 | pm-: σ1 | σ2 | σ3 | σvm (pm+) | σvm (pm-)
1 | 198.0 | -5.13 | -14.4 | -8.04 | -11.2 | -202 | 208.1 | 196.1
2 | 193.7 | 4.859 | 4.38 | -3.35 | -3.79 | -175 | 191.1 | 174.8
3 | 203.4 | 11.83 | 7.18 | -9.99 | -14.2 | -218 | 196.5 | 208.4
4 | 126.5 | -6.46 | -11.5 | -3.22 | -4.32 | -185 | 135.6 | 183.7
5 | 205.6 | 10.64 | 7.23 | -8.24 | -13.8 | -225 | 199.3 | 230.9
6 | 170.4 | 3.84 | 3.39 | 11.7 | 6.66 | -145 | 169.2 | 145.5
7 | 218.8 | 13.95 | 9.70 | -10.8 | -12.3 | -202 | 209.2 | 206.3
8 | 142.0 | 13.9 | 0.89 | -3.02 | -4.10 | -163 | 147.2 | 162.5
9 | 315.4 | 1.07 | 2.20 | 0.42 | -1.82 | -322 | 318.8 | 317.3
10 | 302.9 | 1.41 | 0.51 | 6.36 | 5.89 | -296 | 302.3 | 298.7

v. discussion

the maximum displacement in the device construction due to the impact of the axial load is located at the end of the second schanz fastener and is 6.06 mm, as shown in figure 5.
the maximum displacements for the fracture zone are located at the edges of the proximal and distal bone segments, the values of which are given in table iii. when comparing these results with those of [26-27], it can be noticed that the ultra x external fixation device has significantly higher displacements (80-100% higher) than other such devices for the same case of applied load. the displacements in the load zone are used as the basis for the stiffness of the fixation device, which was 44.05 n/mm. if this value is compared with other studies [23-24], it can be observed that the stiffness of the ultra x device is much lower (3-5 times) than that of other devices under the same load conditions. similarly, the displacements in the fracture zone are used to calculate the fracture stiffness, which is 39.89 n/mm and is again 3-5 times lower than the values found in [26-27]. the results of the structural analysis show that the most critical zone of the fixation device construction is the middle of the truss, where it establishes contact with the large spheres. this zone is a load transfer zone, where the axial force is transmitted from the couplings to the truss through the schanz fasteners. the highest principal stresses, regarding the whole construction, were σ1 = 315.41 mpa (global maximum) for the positive values and σ3 = -322.65 mpa (global minimum) for the negative values. both extreme values are located in the truss, have similar magnitudes, and are in correspondence with the results obtained in other studies [26-27]. it is also important to note that the stresses in the fixation device construction satisfy the maximum permissible stress of the device material.

vi. conclusion

this study conducted a stability analysis on the ultra x external fixation device due to the impact of axial loads, developing a fem model.
this model was used to observe the movements and displacements of the fracture and to establish a connection between these phenomena and the stiffnesses of the fracture zone and the device itself. the analysis of the obtained results showed relatively large displacements compared to other studies for the same load conditions. this can be justified by the fact that the truss of an ultra x device has a smaller cross-section, i.e. a smaller moment of inertia. lower displacement values could be expected for hollow circular cross-sections of greater diameter, i.e. with a moment of inertia increased by moving the material away from the element's own axis. this leads to the conclusion that the mechanical stability of the ultra x fixation device is insufficient for application to fractures of the lower extremities. however, the ultra x device is recommended for traumas of the upper extremities due to its good properties, such as ease of implementation and small dimensions. in this case, the weaker mechanical properties will not be a problem, as the upper extremities are subjected to significantly smaller loads.

references

[1] t. n. gardner, m. evans, and j. kenwright, "a biomechanical study on five unilateral external fracture fixation devices," clinical biomechanics, vol. 12, no. 2, pp. 87–96, mar. 1997, https://doi.org/10.1016/s0268-0033(96)00051-4.
[2] c. s. roberts, j. c. dodds, k. perry, d. beck, d. seligson, and m. j. voor, "hybrid external fixation of the proximal tibia: strategies to improve frame stability," journal of orthopaedic trauma, vol. 17, no. 6, jul. 2003, art. no. 415.
[3] c. lenarz, g. bledsoe, and j. t. watson, "circular external fixation frames with divergent half pins: a pilot biomechanical study," clinical orthopaedics and related research, vol. 466, no. 12, pp. 2933–2939, dec. 2008, https://doi.org/10.1007/s11999-008-0492-0.
[4] r. g. checketts and c. f. young, "(iii) external fixation of diaphyseal fractures of the tibia," current orthopaedics, vol. 17, no. 3, pp. 176–189, jun. 2003, https://doi.org/10.1016/s0268-0890(03)00068-9.
[5] a. hutchinson, a. frampton, and r. bhattacharya, "operative fixation for complex tibial fractures," the annals of the royal college of surgeons of england, vol. 94, no. 1, pp. 34–38, mar. 2012, https://doi.org/10.1308/003588412x13171221498668.
[6] l. yang, m. saleh, and s. nayagam, "the effects of different wire and screw combinations on the stiffness of a hybrid external fixator," proceedings of the institution of mechanical engineers, part h: journal of engineering in medicine, vol. 214, no. 6, pp. 669–676, jun. 2000, https://doi.org/10.1243/0954411001535697.
[7] j. vossoughi, y. youm, m. bosse, a. r. burgess, and a. poka, "structural stiffness of the hoffmann simple anterior tibial external fixation frame," annals of biomedical engineering, vol. 17, no. 2, pp. 127–141, mar. 1989, https://doi.org/10.1007/bf02368023.
[8] t. k. moroz, j. b. finlay, c. h. rorabeck, and r. b. bourne, "stability of the original hoffmann and ao tubular external fixation devices," medical and biological engineering and computing, vol. 26, no. 3, pp. 271–276, may 1988, https://doi.org/10.1007/bf02447080.
[9] a. r. remiger, "5. mechanical properties of the pinless external fixator on human tibiae," injury, vol. 23, pp. s28–s43, jan. 1992, https://doi.org/10.1016/0020-1383(92)90005-d.
[10] b. fleming, d. paley, t. kristiansen, and m. pope, "a biomechanical analysis of the ilizarov external fixator," clinical orthopaedics and related research (1976-2007), vol. 241, apr. 1989.
[11] s. f. fakhouri, m. m. shimano, c. a. araujo, h. l. defino, and a. c.
shimano, "photoelastic analysis of the vertebral fixation system using different screws," engineering, technology & applied science research, vol. 2, no. 2, pp. 190–195, apr. 2012, https://doi.org/10.48084/etasr.144.
[12] d. benarbia, m. benguediab, and s. benguediab, "two-dimensional analysis of cracks propagation in structures of concrete," engineering, technology & applied science research, vol. 3, no. 3, pp. 429–432, jun. 2013, https://doi.org/10.48084/etasr.300.
[13] b. achour, d. ouinas, m. touahmia, and m. boukendakdji, "buckling of hybrid composite carbon/epoxy/aluminum plates with cutouts," engineering, technology & applied science research, vol. 8, no. 1, pp. 2393–2398, feb. 2018, https://doi.org/10.48084/etasr.1224.
[14] m. j. bosse, c. holmes, j. vossoughi, and d. alter, "comparison of the howmedica and synthes military external fixation frames," journal of orthopaedic trauma, vol. 8, no. 2, pp. 119–126, apr. 1994.
[15] r. n. dehankar and a. m. langde, "finite element approach used on the human tibia: a study on spiral fractures," journal of long-term effects of medical implants, vol. 19, no. 4, pp. 313–321, 2009, https://doi.org/10.1615/jlongtermeffmedimplants.v19.i4.80.
[16] n. pervan, a. j. muminović, e. mešić, m. delić, and e. muratović, "analysis of mechanical stability for external fixation device in the case of anterior-posterior bending," advances in science and technology. research journal, vol. 16, no. 3, pp. 136–142, jul. 2022, https://doi.org/10.12913/22998624/146857.
[17] a. a. shetty, u. hansen, k. d. james, and s. djozic, "biomechanical test results of the sarafix external fixator," imperial college, london, uk, 2004.
[18] e. mesic, n. pervan, n. repcic, and a. muminovic, "research of influential constructional parameters on the stability of the fixator sarafix," in annals of daaam for 2012 & proceedings of the 23rd international daaam symposium, vienna, austria, 2012.
[19] n. pervan and e.
mešić, "stress analysis of external fixator based on stainless steel and composite material," international journal of mechanical engineering and technology, vol. 8, no. 1, pp. 189–199, jan. 2017.
[20] e. mešić, n. pervan, a. j. muminović, a. muminović, and m. čolić, "development of knowledge-based engineering system for structural size optimization of external fixation device," applied sciences, vol. 11, no. 22, art. no. 10775, jan. 2021, https://doi.org/10.3390/app112210775.
[21] e. mešić, a. muminović, m. čolić, m. petrović, and n. pervan, "development and experimental verification of a generative cad/fem model of an external fixation device," tehnički glasnik, vol. 14, no. 1, pp. 1–6, mar. 2020, https://doi.org/10.31803/tg20191112161707.
[22] n. pervan, e. mešić, a. muminović, m. čolić, and m. petrović, "structural size optimization of an external fixation device," advances in science and technology. research journal, vol. 14, no. 2, pp. 233–240, jun. 2020, https://doi.org/10.12913/22998624/116870.
[23] m. elmedin, a. vahid, p. nedim, and r. nedžad, "finite element analysis and experimental testing of stiffness of the sarafix external fixator," procedia engineering, vol. 100, pp. 1598–1607, jan. 2015, https://doi.org/10.1016/j.proeng.2015.01.533.
[24] e. mešić, v. a. avdić, and n. pervan, "numerical and experimental stress analysis of an external," folia medica facultatis medicinae universitatis saraeviensis, vol. 50, no. 1, 2015.
[25] e. mešić, v. avdić, n. pervan, and a. muminović, "a new proposal on analysis of the interfragmentary displacements in the fracture gap," tem journal, vol. 4, no. 3, pp. 270–275, 2015.
[26] n. pervan et al., "biomechanical performance analysis of the monolateral external fixation devices with steel and composite material frames under the impact of axial load," applied sciences, vol. 12, no. 2, jan. 2022, art. no. 722, https://doi.org/10.3390/app12020722.
[27] n. pervan, e. mešić, a. j. muminović, m. delić, and e.
muratović, "stiffness analysis of the external fixation system at axial pressure load," advances in science and technology. research journal, vol. 16, no. 3, pp. 226–233, jul. 2022, https://doi.org/10.12913/22998624/149599.

engineering, technology & applied science research vol. 8, no. 5, 2018, 3332-3337 3332 www.etasr.com khoa & tung: modeling for development of simulation tool: impact of tcsc on apparent impedance …

modeling for development of simulation tool: impact of tcsc on apparent impedance seen by distance relay

ngo minh khoa, faculty of engineering and technology, quy nhon university, quy nhon, binh dinh, vietnam, ngominhkhoa@qnu.edu.vn
doan duc tung, faculty of engineering and technology, quy nhon university, quy nhon, binh dinh, vietnam, doanductung@qnu.edu.vn

abstract—the impact of thyristor controlled series capacitor (tcsc) on distance protection relays in transmission lines is analyzed in this paper. voltage and current data are measured and collected at the relay locations to calculate the apparent impedance seen by distance protection relays in the different operating modes of the tcsc connected to the line. short-circuit faults occurring at different locations on the power transmission line are considered in order to evaluate the impact of the tcsc on the fault location estimated by the distance protection relay. matlab/simulink is used to model the power transmission line with two sources at the two ends. voltage source, transmission line, tcsc, voltage and current measurement, and discrete fourier transform (dft) blocks are integrated into the model. simulation results show the impact of the tcsc on the distance protection relay and determine the apparent impedance and fault location in the line.

keywords-apparent impedance; distance relay; firing angle; tcsc; transmission line.

i.
introduction

flexible alternating current transmission systems (facts) based on power electronics have been developed to improve the performance of weak alternating current (ac) systems and to make long-distance ac transmission feasible [1, 2]. series compensation with tcsc has various applications in power system control, such as readjustment of power flow, transient stability control, power oscillation damping, and sub-synchronous resonance mitigation, because of its continuously varying reactance capability [3, 4]. the presence of a tcsc in the fault loop affects both the steady-state and transient components of the voltage and current. moreover, the variable capacitance or inductance of a tcsc can lead to subsynchronous oscillations, cause distance protection overreach, and influence the apparent impedance seen by distance protection relays [5, 6]. hence, a model for the development of a simulation tool that analyzes the impact of tcsc on distance protection relays in power transmission lines is necessary. the authors in [7] proposed a fault direction estimation technique for a transmission line with a tcsc: a compensated line imposes problems on directional relaying schemes due to reactance modulation, current and voltage inversion issues, and tcsc control action, all of which must be accounted for when estimating the fault direction. the authors in [8] considered the tcsc as a dynamical device that responds to disturbances according to its own control strategy, and concluded that a tcsc affects not only the protection of its own line but also that of adjacent lines. in [9], the impact of tcsc on the performance of conventional communication-aided distance protection schemes was analyzed, and new schemes were proposed for mitigating the impact of tcsc, using the information available at the substation to inhibit relay malfunctions. the authors in [10] presented a new protective scheme for transmission lines compensated by tcsc.
the scheme employed the averages of voltage and current, and a new criterion was introduced to discriminate between forward and reverse faults. the authors in [11] proposed a new high-speed mho distance protection scheme for single line-to-ground faults on a tcsc line, in which the fault voltage and current at the relay point and the firing angle from the tcsc substation were taken as the inputs of the mho relay. the objective of this paper is to model and analyze the impact of tcsc on the performance of distance protection relays under normal operation and different single-phase to ground fault conditions. therefore, to address the aforementioned issues, this paper develops a simple power system model including a transmission line with a tcsc, and single-phase to ground faults are investigated. according to the simulation results, the apparent impedance seen by the relay is calculated to analyze the impact of the tcsc on the distance protection of the line.

ii. operation principles of tcsc

tcsc is used in power systems to dynamically control the reactance of a transmission line in order to provide sufficient load compensation [12]. the benefits of tcsc lie in its ability to control the amount of compensation of a transmission line and to operate in different modes. these traits are very desirable since loads are constantly changing and cannot always be predicted. tcsc designs operate in the same way as fixed series compensation, but provide variable control of the reactance absorbed by the capacitor device. the control scheme of a tcsc [13] is shown in figure 1.

fig. 1. control scheme of tcsc.

the change of tcsc impedance is achieved by varying the thyristor-controlled inductive reactance of inductors connected in parallel to the capacitor.
the magnitude of the inductive reactance is determined by the firing angle α, which allows the amplitude of the current through the reactor to be controlled continuously from its maximum value down to zero. switching of the thyristor firing angle can change the controlled inductive reactance of the choke from a minimum value to, theoretically, an infinite value. the tcsc equivalent reactance is a function of its capacitive and inductive reactance parameters and the firing angle [14, 15]:

X_{TCSC}(\alpha) = X_C - C_1\left[2(\pi-\alpha) + \sin 2(\pi-\alpha)\right] + C_2\cos^2(\pi-\alpha)\left[\varpi\tan\left(\varpi(\pi-\alpha)\right) - \tan(\pi-\alpha)\right]   (1)

where:

C_1 = \frac{X_C + X_{LC}}{\pi}   (2)

C_2 = \frac{4X_{LC}^2}{\pi X_L}   (3)

X_{LC} = \frac{X_C X_L}{X_C - X_L}   (4)

\varpi = \sqrt{X_C / X_L}   (5)

appropriate values for the capacitance and inductance of a tcsc device are based on the net reactance of the transmission line and the expected future power demands. the selection of the capacitance and inductance values of the tcsc can be summarized in the following steps:

step 1: select the degree of compensation (k).

step 2: calculate the capacitive reactance (X_C) and the capacitance of the tcsc from (6) and (7), respectively:

X_C = k \cdot X_{TL}   (6)

where X_{TL} is the total reactance of the transmission line. the capacitance of the tcsc is then given by:

C = \frac{1}{2\pi f X_C}   (7)

where f is the fundamental frequency.

step 3: the choice of the inductance value depends on the length of the operating region required for the inductive and capacitive modes; it is decided by the factor \varpi given in (5), which shifts the position of the resonance region. finally, the inductance of the tcsc is given by:

L = \frac{X_L}{2\pi f}   (8)

the tcsc capacitive and inductive reactance values should be chosen carefully in order to ensure that just one resonant point is present in the range of 90° to 180°. figure 2 shows the tcsc fundamental-frequency reactance as a function of the firing angle.

fig. 2. tcsc fundamental frequency reactance characteristic curve.

tcsc operates in different modes depending on when the thyristors for the inductive branch are triggered.
the modes of operation are [16]:

blocking mode: the thyristor valve is always off, opening the inductive branch and effectively causing the tcsc to operate as fixed series compensation.

bypass mode: the thyristor valve is always on, causing the tcsc to operate as a capacitor and inductor in parallel, reducing the current through the tcsc.

capacitive boost mode: the forward-voltage thyristor valve is triggered slightly before the capacitor voltage crosses zero to allow current to flow through the inductive branch, adding to the capacitive current. this effectively increases the observed capacitance of the tcsc without requiring a larger capacitor within the tcsc.

(as figure 2 indicates, the tcsc operates in the inductive region for 90° ≤ α ≤ α_{Lmax}, passes through a resonance region between α_{Lmax} and α_{Cmax}, and operates in the capacitive region for α_{Cmax} ≤ α ≤ 180°.)

the presence of a tcsc with its reactance (X_{TCSC}) has a direct influence on the total impedance of the protected line (Z_{ij}): it affects the reactance X_{ij} but has no influence on the resistance R_{ij}. the new setting zones (zone 1, zone 2, and zone 3) for a protected transmission line with a tcsc connected at midline are:

Z_1 = 0.8\left[R_{ij} + j\left(X_{ij} - X_{TCSC}\right)\right]   (9)

Z_2 = \left[R_{ij} + j\left(X_{ij} - X_{TCSC}\right)\right] + 0.2\left(R_{jk} + jX_{jk}\right)   (10)

Z_3 = \left[R_{ij} + j\left(X_{ij} - X_{TCSC}\right)\right] + 1.2\left(R_{jk} + jX_{jk}\right)   (11)

where Z_1, Z_2, Z_3 are the settings of zones 1, 2 and 3, respectively, R_{ij}, X_{ij} are the resistance and reactance of the protected line ij, and R_{jk}, X_{jk} are the resistance and reactance of the line jk that follows the line ij.

iii. modeling of the impact of tcsc on the distance relay

a. studied system description

a 3-phase, 500 kv, 400 km long transmission line, as shown in figure 3, is investigated in this section. the transmission line has a tcsc at the sending end of the line.
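equations (1)-(8) can be exercised numerically. the sketch below (an illustration, not the authors' code) plugs in the tcsc parameters from the appendix (c = 0.136 mf, l = 10 mh, f = 50 hz) and the 400 km line's positive-sequence reactance, and recovers the compensation degrees quoted in the next section: about 20% at α = 180° and roughly 69-70% at α = 150° (the small residual difference from the quoted 69.53% is consistent with parameter rounding). note that this form of (1) returns the capacitive reactance as a positive number; figure 4 plots it with a negative sign.

```python
import math

f = 50.0                      # fundamental frequency, hz
C = 0.136e-3                  # tcsc capacitance, f (appendix)
L = 10e-3                     # tcsc inductance, h (appendix)
Xc = 1 / (2 * math.pi * f * C)       # capacitive reactance, eq. (7) inverted
Xl = 2 * math.pi * f * L             # inductive reactance, eq. (8) inverted
Xlc = Xc * Xl / (Xc - Xl)            # eq. (4)
C1 = (Xc + Xlc) / math.pi            # eq. (2)
C2 = 4 * Xlc ** 2 / (math.pi * Xl)   # eq. (3)
w = math.sqrt(Xc / Xl)               # eq. (5)

def x_tcsc(alpha_deg):
    """tcsc fundamental-frequency reactance, eq. (1); alpha in degrees."""
    b = math.pi - math.radians(alpha_deg)   # the angle (pi - alpha)
    return Xc - C1 * (2 * b + math.sin(2 * b)) + C2 * math.cos(b) ** 2 * (
        w * math.tan(w * b) - math.tan(b))

# degree of compensation relative to the 400 km line (0.9337 mh/km)
Xtl = 2 * math.pi * f * 0.9337e-3 * 400
for a in (180, 150):
    print(a, round(100 * x_tcsc(a) / Xtl, 1))  # ~20% at 180°, ~69% at 150°
```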
the transmission line and tcsc parameters are given in the appendix. the inductance and capacitance of the tcsc are determined using the previous equations and are also given in the appendix. by changing the firing angle from 150° to 180°, the tcsc capacitive reactance calculated using (1) is shown by negative values in figure 4. the tcsc provides 20% compensation at a firing angle of 180° (minimum) and 69.53% compensation at 150° (maximum).

fig. 3. the transmission line with tcsc.

fig. 4. tcsc capacitive reactance based on the firing angle.

the studied power system has been simulated using matlab/simulink. voltage and current data are collected at a sampling frequency of 1.0 khz. samples of the voltage and current signals are used to determine the phasors, which are used to calculate the apparent impedance seen by the relay and to determine whether a fault lies in the distance relay's zone of protection. the dft is the tool used for phasor estimation of the voltage and current signals. the algorithm for a single-phase to ground (ph-g) fault is shown in figure 5 and can be explained by the following steps:

step 1: set the system conditions, including the parameters of the sources and the transmission line.

step 2: set a single-phase to ground fault on the line.

step 3: acquire voltage and current from the voltage transformers (vts) and current transformers (cts).

step 4: compute the voltage V_{ph} and current I_{ph} phasor components using the dft.

step 5: compute the zero-sequence current phasor component I_0.

step 6: compute the apparent impedance seen by the distance relay as:

Z = \frac{V_{ph}}{I_{ph} + kI_0}   (12)

where V_{ph}, I_{ph} are the faulted-phase voltage and current, respectively, I_0 is the zero-sequence current component, and k = (Z_0 - Z_1)/Z_1, where Z_0, Z_1 are the zero-sequence and positive-sequence line impedances per kilometer.

step 7: update the fault position and resistance and return to step 2.

step 8: finally, determine the resistance and reactance R, X.

fig. 5.
the flow diagram for tripping characteristics.

b. simulation results and discussion

in order to verify the correctness of the modeling of the impact of tcsc on the apparent impedance seen by the distance protection relay, the system described in the previous section was modeled in matlab/simulink as shown in figure 6. in this model, the three-phase source blocks, source a and source b, implement balanced three-phase voltage sources with internal r-l impedance. the two voltage sources are connected in y with a grounded-neutral connection. the transmission line block implements a balanced three-phase transmission line model with parameters lumped in a pi section. the line parameters r, l, and c are specified as positive- and zero-sequence parameters that take into account the inductive and capacitive couplings between the three phase conductors, as well as the ground parameters. the line is divided into two segments (segment 1 and segment 2) because the ph-g fault is located at the mid-point of the line. the fault location can be changed by setting the lengths of the two segments. it is assumed that the fault occurs at different positions on the line at 0.4 s within a total simulation time of 2 seconds.
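steps 3-6 of the relay algorithm can be sketched in a few lines (an illustration, not the paper's simulink model): a full-cycle dft extracts the 50 hz phasors from waveforms sampled at 1 khz (20 samples per cycle), and (12) is then applied with k = (z_0 - z_1)/z_1 built from the per-kilometer line impedances given in the appendix. for a synthetic bolted a-g fault at an assumed distance d, the computed impedance recovers d·z_1, i.e. the relay "sees" the fault at its true distance.

```python
import cmath, math

f, fs = 50.0, 1000.0          # system frequency and sampling frequency (hz)
N = int(fs / f)               # 20 samples per fundamental cycle

def dft_phasor(samples):
    """full-cycle dft estimate of the fundamental phasor (rms), step 4."""
    acc = sum(s * cmath.exp(-2j * math.pi * n / N) for n, s in enumerate(samples))
    return (2 / N) * acc / math.sqrt(2)

def waveform(phasor):
    """one cycle of time-domain samples for an rms phasor."""
    a, ph = abs(phasor) * math.sqrt(2), cmath.phase(phasor)
    return [a * math.cos(2 * math.pi * f * n / fs + ph) for n in range(N)]

# per-km line impedances from the appendix
z1 = complex(0.01273, 2 * math.pi * f * 0.9337e-3)
z0 = complex(0.3864, 2 * math.pi * f * 4.1264e-3)
k = (z0 - z1) / z1            # residual compensation factor in (12)

# synthetic bolted a-g fault at an assumed distance d (illustration only):
# with pure fault current, Ia = 3*I0 and Va = d*(z1*Ia + (z0 - z1)*I0)
d = 200.0
I0 = cmath.rect(0.8, -0.6)    # arbitrary zero-sequence phasor
Ia = 3 * I0
Va = d * (z1 * Ia + (z0 - z1) * I0)

# steps 4-6: phasors from the sampled waveforms, then eq. (12)
Vp = dft_phasor(waveform(Va))
Ip = dft_phasor(waveform(Ia))
I0p = dft_phasor(waveform(I0))
Z = Vp / (Ip + k * I0p)
print(Z)                      # recovers d * z1, the fault's true distance impedance
```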
moreover, the fault resistance is also changed from 0 ω to 50 ω in order to evaluate the impact of the tcsc on relay a. the tcsc is modeled using a series rlc branch block, whose reactance parameters are changed according to the equations presented in section ii. the firing angle is set to change the reactance of the tcsc and thereby simulate the changing apparent impedance seen by the distance relay. in this study, the tcsc firing angles of 180°, 155°, and 150° are used to measure the impact of the tcsc on the distance relay tripping characteristics.

fig. 6. simulated system.

a ph-g fault beginning at 0.4 seconds is established at 100% of the line length and the fault resistance is set to zero (rf = 0 ω). four hypotheses (without tcsc in the line, and with tcsc at firing angles of 180°, 155°, and 150°) are simulated. with a total time of 2 seconds, the simulation results of this case, including the phase current, phase voltage, zero-sequence current, and apparent impedance seen by relay a, are shown in figure 7. the voltage and current measurement data are acquired by the vts and cts and sampled at a specific period. the magnitude of the faulted-phase current at relay a is shown in figure 7(a). at the fault inception (0.4 seconds), there is a transient period in the current magnitudes, after which they settle at a new steady state. because of the firing angle, the faulted-phase current magnitudes take different values, as shown in figure 7(a). among these, the current magnitude with tcsc at a firing angle of 150° is the highest and the current magnitude without tcsc is the lowest, because the firing angle changes the reactance of the tcsc. the magnitude of the faulted-phase voltage at relay a is shown in figure 7(b). after the fault starts, the phase voltage magnitude decreases to a new value.
however, the voltage magnitude with tcsc at a firing angle of 150° is the lowest and the voltage magnitude without tcsc is the highest. the zero-sequence current is shown in figure 7(c). before the fault occurs it is zero, because the system is almost balanced. the ph-g fault is applied at 0.4 seconds and the zero-sequence current increases. this component is also used to calculate the apparent impedance seen by relay a, which is shown in figure 7(d). the apparent impedance seen by relay a depends on the firing angle of the tcsc. the impedance in the presence of tcsc at a firing angle of 150° is the lowest and the impedance without tcsc is the highest.

fig. 7. simulation results for a solid fault at 100% of the line length: (a) phase current, (b) phase voltage, (c) zero-sequence current, (d) apparent impedance.

in order to show how the apparent impedance seen by relay a changes with the fault location, the fault is assumed to occur at different locations on the line by varying the lengths of the two line segments. the locations range from 0 to 100% of the line length.
in this situation, the apparent impedance seen by relay a is shown in figure 8. it is clear that the apparent impedance without tcsc increases linearly with the fault location along the line. with tcsc, the apparent impedance seen by relay a changes nonlinearly with the fault location: for firing angles of 180°, 155°, and 150°, an impedance resonance point occurs between the reactance of the tcsc and the impedance of the line, and this point is the lowest impedance in the characteristic, as shown in figure 8. therefore, the firing angle of the tcsc has an influence on the apparent impedance seen by relay a.

fig. 8. apparent impedance according to fault location.

in this work, high-resistance faults are also considered in order to investigate the influence of the tcsc on the apparent impedance seen by distance relay a. the fault resistances are varied in the range of 0 to 50 ω. because of the fault resistance, the apparent impedance changes as shown in figure 9, which plots the cases without tcsc and with tcsc at firing angles of 180°, 155°, and 150°. this shows that the model can be used to comprehensively analyze the impact of the tcsc and the fault resistance on the apparent impedance seen by relay a.

fig. 9. apparent impedance according to fault resistance.

the simulation results shown in figure 10 combine fault location and fault resistance. the fault locations range from 0 to 100% of the line length and the fault resistances are set to 0, 10, 20, and 30 ω. all faults simulated in this case are ph-g faults occurring at 0.2 seconds. the apparent impedance (z) seen by relay a is separated into resistance (r) and reactance (x) and plotted in the impedance plane, in which the x-axis is the resistance and the y-axis is the reactance. in addition, the mho characteristic of zone 1 is also plotted in order to identify whether the faults lie inside or outside zone 1.
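the inside/outside decision for zone 1 reduces to a point-in-circle test: a mho characteristic with reach z_set is the circle through the origin with diameter z_set, so a measured impedance z lies inside iff |z - z_set/2| ≤ |z_set/2|. the sketch below (hypothetical numbers, not the paper's relay settings) applies this with the zone-1 reach of (9) and the appendix line parameters, using an assumed capacitive x_tcsc of 40 ω; it shows the converse mal-operation (overreach) to the underreach case highlighted by the paper, and either way the zone-1 setting must track x_tcsc as in (9).

```python
import math

f = 50.0
# total impedance of the protected 400 km line (appendix parameters)
Rij = 0.01273 * 400
Xij = 2 * math.pi * f * 0.9337e-3 * 400

def zone1_reach(x_tcsc):
    """zone-1 mho reach per (9); x_tcsc = 0 without compensation (ohm)."""
    return 0.8 * complex(Rij, Xij - x_tcsc)

def inside_mho(z, z_set):
    """a mho circle passes through the origin with diameter z_set:
    z is inside iff |z - z_set/2| <= |z_set/2|."""
    return abs(z - z_set / 2) <= abs(z_set / 2)

x_tcsc = 40.0  # assumed capacitive tcsc reactance, ohm (hypothetical value)

# bolted fault at 90% of the line, i.e. beyond the 80% zone-1 reach;
# the capacitive tcsc subtracts x_tcsc from the reactance the relay sees
z_seen = complex(0.9 * Rij, 0.9 * Xij - x_tcsc)

print(inside_mho(z_seen, zone1_reach(0.0)))     # True: unmodified zone 1 overreaches
print(inside_mho(z_seen, zone1_reach(x_tcsc)))  # False: zone modified per (9) excludes it
```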
fig. 10. mho characteristic of the distance relay according to the firing angle: (a) without tcsc, (b) tcsc α = 180°, (c) tcsc α = 155°, (d) tcsc α = 150°.

the results in figure 10 are for the cases without tcsc and with tcsc at firing angles of 180°, 155°, and 150°. the simulation results show that the firing angle of the tcsc has an impact on distance relay a. this can make the relay unable to operate correctly when a fault occurs, because the impedance point falls outside zone 1. therefore, zone 1 is modified according to the firing angle of the tcsc to identify the fault correctly (figures 10(b)-(d)).

iv. conclusion

a model for the development of a simulation tool for the impact of tcsc on the apparent impedance seen by distance protection relays has been proposed in this study. the algorithm for determining the capacitance and inductance parameters of the tcsc has been developed comprehensively, and a method for calculating the resistance and reactance seen by the distance relay has been proposed. a 3-phase, 500 kv transmission line in the presence of a tcsc, which affects the relay settings under fault conditions, has been modeled and simulated in matlab/simulink. in the model, ph-g faults are applied on the line at different locations in order to evaluate the impact of the tcsc on distance relays. the simulation results show that the firing angle of the tcsc has an impact on the apparent impedance seen by distance protection relays. therefore, it is necessary for the distance relay to adjust its mho characteristic settings and adapt to the system conditions according to the firing angle of the tcsc.
appendix

the parameters of each source are:
- voltage at both sources: 500 kv line-line rms, 50 hz
- thévenin resistance and inductance of source a: r = 0.8929 ω, l = 16.58 mh
- thévenin resistance and inductance of source b: r = 0.8929 ω, l = 16.58 mh
- phase angle between the two ends: 10°

the parameters of the line are:
- length of line: 400 km
- positive-sequence resistance: r1 = 0.01273 ω/km
- positive-sequence inductance: l1 = 0.9337 mh/km
- positive-sequence capacitance: c1 = 12.74 nf/km
- zero-sequence resistance: r0 = 0.3864 ω/km
- zero-sequence inductance: l0 = 4.1264 mh/km
- zero-sequence capacitance: c0 = 7.751 nf/km

the parameters of the tcsc are:
- capacitance: c = 0.136 mf
- inductance: l = 10 mh

acknowledgment

this work was supported by quy nhon university, project code number t2018.569.18.

references

[1] e. acha, c. r. fuerte-esquivel, h. ambriz-perez, c. angeles-camacho, facts modelling and simulation in power networks, wiley, 2004
[2] s. jamhoria, l. srivastava, “applications of thyristor controlled series compensator in power system: an overview”, 2014 international conference on power signals control and computations (epscicon), thrissur, india, january 6-11, 2014
[3] s. biswas, p. k. nayak, “state-of-the-art on the protection of facts compensated high-voltage transmission lines: a review”, high voltage, vol. 3, no. 1, pp. 21-30, 2018
[4] m. zellagui, a. chaghi, “impact of tcsc on measured impedance by mho distance relay on 400 kv algerian transmission line in presence of phase to earth fault”, journal of electrical systems, vol. 8, no. 3, pp. 273-291, 2012
[5] m. zellagui, a. chaghi, “impact of apparent reactance injected by tcsr on distance relay in presence phase to earth fault”, power engineering and electrical engineering, vol. 11, no. 3, pp. 156-168, 2013
[6] e. reyes-archundia, j. l. guardado, e. l. moreno-goytia, j. a. gutierrez-gnecchi, f.
martinez-cardenas, “fault detection and localization in transmission lines with a static synchronous series compensator”, advances in electrical and computer engineering, vol. 15, no. 3, pp. 17-22, 2015
[7] p. jena, a. k. pradhan, “directional relaying in the presence of a thyristor-controlled series capacitor”, ieee transactions on power delivery, vol. 28, no. 2, pp. 628-636, 2013
[8] m. khederzadeh, t. s. sidhu, “impact of tcsc on the protection of transmission lines”, ieee transactions on power delivery, vol. 21, no. 1, pp. 80-87, 2006
[9] t. s. sidhu, m. khederzadeh, “tcsc impact on communication-aided distance-protection schemes and its mitigation”, iee proceedings - generation, transmission and distribution, vol. 152, no. 5, pp. 714-728, 2005
[10] s. m. hashemi, m. t. hagh, h. seyedi, “high-speed relaying scheme for protection of transmission lines in the presence of thyristor-controlled series capacitor”, iet generation, transmission & distribution, vol. 8, no. 12, pp. 2083-209, 2014
[11] a. maori, m. tripathy, h. o. gupta, “an advance compensated mho relay for protection of tcsc transmission line”, 6th ieee power india international conference (piicon), delhi, india, december 5-7, 2014
[12] k. k. sen, m. l. sen, introduction to facts controllers: theory, modeling and applications, new jersey: john wiley & sons, 2009
[13] s. bruno, g. d. carne, m. l. scala, “transmission grid control through tcsc dynamic series compensation”, ieee transactions on power systems, vol. 31, no. 4, pp. 3202-3211, 2016
[14] ieee standard 1534, ieee recommended practice for specifying thyristor controlled series capacitor, new york: ieee power and energy society, 2009
[15] s. meikandasivam, r. k. nema, s. k. jain, “selection of tcsc parameters: capacitor and inductor”, india international conference on power electronics, new delhi, india, january 28-30, 2011
[16] b. h. li, q. h. wu, d. r. turner, x.
zhou, “modelling of tcsc dynamics for control and analysis of power system stability”, international journal of electrical power & energy systems, vol. 22, no. 1, pp. 43-49, 2000

etasr engineering, technology & applied science research vol. 3, no. 2, 2013, 396-401 396 www.etasr.com khalifeh et al.: isolation of crude oil from polluted waters using biosurfactants pseudomonas bacteria

isolation of crude oil from polluted waters using biosurfactants pseudomonas bacteria: assessment of bacteria concentration effects

a. khalifeh, islamic azad university science and research university, tehran, iran, anis.khalifeh@yahoo.com
b. roozbehani, research center of petroleum university of technology, abadan, iran, b.roozbehani@gmail.com
a. m. moradi, islamic azad university science and research university, tehran, iran, dr.oil@gmail.com
s. imani moqadam, research center of petroleum university of technology, abadan, iran, saeedeh.imani@gmail.com
m. mirdrikvand, research center of petroleum university of technology, abadan, iran, mirdrikvand@gmail.com

abstract—biological decomposition techniques and the isolation of environmental pollutants using biosurfactant-producing bacteria are effective methods of environmental protection. surfactants are amphiphilic compounds produced by local microorganisms that are able to reduce surface and interfacial tensions. as a result, they increase the solubility, biological activity, and environmental decomposition of organic compounds. this study analyzes the effects of biosurfactants on crude oil recovery and its isolation using a pseudomonas sea bacteria species. preparation of biosurfactants was done in glass flasks under laboratory conditions. experiments were carried out to obtain the best concentration of biosurfactants for isolating oil from water and destroying oil-in-water or water-in-oil emulsions in two ph ranges and four saline solutions of different concentrations.
the most effective results were gained when a concentration of 0.1% biosurfactants was applied.

keywords-environmental decomposition; biological separation; biosurfactant; pseudomonas; concentration

i. introduction

studies on optimum methodologies for the elimination of oil and refinery products from sea ecosystems are growing rapidly worldwide. among the applied physical and chemical methods, biological degradation is, from an environmental viewpoint, the most beneficial and economical one for the elimination of oil pollutants [1]. fortunately, a great amount of oil is degraded by sea microorganisms, and the population of these microorganisms is considerably greater in polluted areas. as a result, preparing suitable conditions can accelerate such activities for oil recovery and separation. the microorganisms suitable for biological degradation can be obtained from polluted areas or cultured in the lab under proper conditions [2]. chromohalobacters, bacteria that absorb oil pollutants, are advantageous since the materials released in their reactions are usually stable in salty solutions. owing to the fact that most oil resources are located in salty areas, these biosurfactants are effective and efficient for oil elimination. biosurfactants have long been applied in the revival and cleaning of polluted waters and in hydrocarbon recovery. they can be applied as emulsifiers, foaming agents, wetting agents, and cleaners in the oil and petrochemical industry, in environmental management systems and in the mining industry [3]. chemical theories of biosurfactants have been applied in the oil industry for oil recovery and, more importantly, for increasing the efficiency of the recovery process [4-5]. biosurfactants have attracted great attention in environmental processes such as the cleaning of polluted waters and soil, owing to effects such as dispersion and environmentally friendly attributes, namely low toxicity and biodegradability [6].
in this study, biosurfactants were produced from local bacteria and their influence on decreasing the pollutants' concentration and on their absorption from the surroundings was analyzed. in addition, the elimination of oil pollution from the area was accomplished as an environmentally friendly method; it was done so that no oil residue remained after the cleaning. the oil was also recovered and used afterwards. this paper analyzes the effects of biosurfactants on biological degradation. the purpose of this study is to assess the influence of biosurfactants on the environmental analysis of crude oil, using native bacteria separated from the mahshahr exporting port, in a pilot design and under laboratory conditions.

ii. materials and methods

a. sampling

samples were taken from mahshahr exporting port located at khur-musa near the persian gulf in khuzestan province, iran. they were kept at a temperature near that of the incubator until the bacteria were isolated. salinity, ph and water temperature in the sample area were measured before and after sampling.

b. substances and bacterial culture

the basic medium of the crude oil environmental analysis tests includes the materials illustrated in table i, along with their concentrations in the bacterial culture medium.

table i. materials and their concentration in bacterial culture medium

material         concentration
na2hpo4          2.2 g/l
kh2po4           1.4 g/l
mgso4.7h2o       0.6 g/l
feso4.7h2o       0.02 g/l
nacl             10.0 g/l
cacl2            0.08 g/l
yeast extract    0.02 g/l
nano3            1.0 g/l
glucose          2% v/v

glucose was added to the culture plates as a carbohydrate source, and nano3 and yeast extract as nitrogen sources. an amount of 1.5 milliliters of crude oil was eventually added as the only hydrocarbon source.
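for bench work, the per-litre concentrations of table i scale linearly to any working volume (the flasks below use 100 ml). a minimal sketch; the function name and rounding are illustrative, not from the paper:

```python
# per-litre concentrations from table i (g/l)
medium_g_per_l = {
    "Na2HPO4": 2.2, "KH2PO4": 1.4, "MgSO4.7H2O": 0.6,
    "FeSO4.7H2O": 0.02, "NaCl": 10.0, "CaCl2": 0.08,
    "yeast extract": 0.02, "NaNO3": 1.0,
}

def masses_for_volume(conc_g_per_l, volume_ml):
    """mass of each salt (g) needed for a given working volume."""
    return {salt: round(c * volume_ml / 1000.0, 4)
            for salt, c in conc_g_per_l.items()}

masses = masses_for_volume(medium_g_per_l, 100.0)
print(masses["NaCl"])        # 1.0 g of nacl for a 100 ml flask
```

glucose (2% v/v) is a volume fraction and is dosed separately, not through this mass calculation.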
all experiments were carried out in 250 ml flasks. a volume of 100 milliliters of base culture medium, including 2% v/v carbohydrate source, 1 ml of hydrocarbon source and 0.02 g/l of yeast extract, was poured into erlenmeyer flasks. an amount of 10^5 to 10^6 bacterial cells per ml was inoculated into the erlenmeyer flasks. the erlenmeyer flasks were left for a week in an incubator at 120 revolutions per minute (rpm) and a temperature of 32 °c. daily sampling was performed to determine turbidity. besides, ph variations were measured every day and adjusted to the initial value.

iii. designing the experiment in laboratory scale

in this part of the study, the experiment was designed and performed in laboratory scale in order to assess the amount of oil absorption by biosurfactants. ph variations and salinity effects were analyzed.

a. ph

in order to measure the effect of ph variations on crude oil environmental degradation, an amount of 10^5 to 10^6 pseudomonas bacteria per milliliter was injected into two 250 ml erlenmeyer flasks containing 100 ml of base culture medium including hydrocarbon and carbohydrate nutrients. the phs of the media were adjusted to 8 and 8.5 respectively; the flasks were then kept at a temperature of 32 °c and a rotation of 120 rpm in the incubator. ph variations were recorded every day and then adjusted to the set point. figure 1 illustrates these variation effects.

b. salinity effect

eight 250 ml erlenmeyer flasks, each containing 100 ml of base bacterial culture medium, with salinities of 0.25%, 0.5%, 1%, and 2% were prepared. the phs of the solutions were adjusted to 8-8.5; the test bacteria were then injected into the media. the erlenmeyer flasks were kept in the incubator at a temperature of 32 °c and a rotation rate of 120 rpm for a week. to assess the turbidity of the solutions, sampling was done on 6 successive days. the phs were also recorded daily and then adjusted to the initial value [6].

iv. assessing the results

a.
bacteria cell growth and biosurfactant production

the results of studies on pseudomonas bacterial cell growth showed crude oil consumption as a source of carbon and energy. the results obtained after 5 days were similar to previous observations during the production of these bacteria via these species [7]. the higher growth of most pseudomonas bacteria may be caused by the more effective biodegradation features of pseudomonas compared to other species like bacillus. this conclusion follows from comparing the nutrients, culture medium conditions and salinity in other chemical cultivation studies [4]. figure 2 illustrates the cell growth analysis using turbidity assessment at different salt concentrations.

b. analyzing diverse biosurfactant concentrations in crude oil segregation

the bacterial culture medium in this research exhibits large cell growth and more separation activity at a biosurfactant concentration of 0.1%, in agreement with [6]. no considerable increase in separation activity occurred at higher concentrations. biosurfactants were identified even at very low concentrations [8]. the maximum amount of bacterial degradation and oil separation is 89%, obtained with a biosurfactant concentration of 0.1%. in this study, the biosurfactants produced by these species were used in the degradability test and oil segregation. the biosurfactant effect analysis was done at concentrations of 0.25%, 0.5%, 0.15%, and 0.1% at various salinities. the best results were obtained at a 0.1% concentration of biosurfactants, ph = 8.5 and a salinity of 1.0%. the results and comparisons are illustrated in figure 3. the total pollution amount considered is 1000 ppm and the figures show the results in proportion to the whole pollution.

c.
analysis of separation and oil degradability in laboratory scale

performing the test stages in laboratory scale, in glass flasks, illustrated that maximum degradation and separation occur when sources of phosphorus and nitrogen are used in the bacterial culture medium for biosurfactant production [9]. the analysis of all parameters indicates that the presence of biosurfactants is the most important factor that destroys oil-in-water emulsion drops.

fig. 1. daily variations of ph in different salinities
fig. 2. cell growth analysis using turbidity assessment in different concentrations of salt
fig. 3. the amount of residue oil in the area per total pollution amount (1000 ppm)

v. conclusion

biosurfactants were applied to clean the polluted areas via the common strategy of being grown in bacterial culture plates and then added continuously to pollution sources. the application of biosurfactants in oil recovery is one of the most important methods of recovering a substantial amount of oil residue. the addition of biosurfactants increases the degradation potential and oil segregation.
results illustrated that bacterial cell production via biosurfactants was much more effective than chemical surfactants. biosurfactants were more advantageous due to their lower toxicity, native acceptance and biodegradability.

acknowledgment
special thanks to abadan refinery, petroleum university of technology and imam khomeini hospital for their unending friendly help.

references
[1] g. h. ebrahimipur, j. fuladi, a. ferdosi, “environmental factors’ effect assessment on crude oil separation via extreme haluphil oil eater bacteria producing biosurfactant pars q2 and gravimeter of oil cuts consumed by these bacteria in optimum conditions”, natural science journal, vol. 13, pp. 59-70, 2005 (in persian)
[2] g. h. ebrahimipur, j. fuladi, a. ferdosi, “separation and specification of rk extreme haluphil oil eater bacteria bearing biosurfactant and assessment of salt concentration amount on crude oil separation via this “suyeh””, natural science journal, vol. 12, pp. 9-20, 2004 (in persian)
[3] a. singh, j. d. van hamme, o. p. ward, “surfactants in microbiology and biotechnology: part 2. application aspects”, biotechnology advances, vol. 25, no. 1, pp. 99-121, 2007
[4] m. p. plociniczak, g. a. plaza, z. piotrowska-seget, s. s. cameotra, “environmental application of biosurfactants: recent advances”, international journal of molecular sciences, vol. 12, no. 1, pp. 633-654, 2011
[5] n. k. bordoloi, b. k. onwar, “microbial surfactant-enhanced mineral oil recovery under laboratory condition”, colloids and surfaces, vol. 63, no. 1, pp. 73-82, 2008
[6] r. thavasi, s. jayalakshmi, i. m. banat, “effect of biosurfactant and fertilizer on biodegradation of crude oil by marine isolates of bacillus megaterium, corynebacterium kutscheri and pseudomonas aeruginosa”, bioresource technology, vol. 102, no. 2, pp. 772-778, 2011
[7] r. thavasi, s. jayalakshmi, t. balasubramanian, i. m.
banat, “production and characterization of a glycolipid biosurfactant from bacillus megaterium using economically cheaper sources”, world journal of microbiology and biotechnology, vol. 24, no. 7, pp. 917-925, 2008
[8] z. a. raza, z. m. khalid, i. m. banat, “characterization of rhamnolipids produced by a pseudomonas aeruginosa mutant strain grown on waste oils”, journal of environmental science and health, vol. 44, no. 13, pp. 1367-1373, 2009
[9] b. kumari, s. n. singh, d. p. singh, “characterization of two biosurfactant producing strains in crude oil degradation”, process biochemistry, vol. 47, no. 12, pp. 2463-2471, 2012

engineering, technology & applied science research vol. 9, no. 5, 2019, 4673-4678 4673 www.etasr.com nechadi: adaptive fuzzy type-2 synergetic control based on bat optimization for multi-machine …

adaptive fuzzy type-2 synergetic control based on bat optimization for multi-machine power system stabilizers

emira nechadi, ferhat abbas setif 1 university, setif, algeria, emira.nechadi@univ-setif.dz

abstract—a new adaptive fuzzy type-2 fast terminal synergetic multi-machine power system stabilizer is proposed in this study, based on the bat algorithm. the time spent to reach the equilibrium point, from any initial state, is guaranteed to be finite. the adaptive fuzzy type-2 design is applied to estimate the unknown functions of a multi-machine power system. the parameters of the fast terminal synergetic control are optimized using the bat metaheuristic method. in order to test the robustness of the proposed stabilizer, three load conditions of the multi-machine power system are studied. a comparison of the proposed adaptive fuzzy type-2 synergetic power system stabilizer with the bat conventional approach is presented, indicating improved performance. the control system stability is assessed by the second theorem of lyapunov and is proven to be asymptotically stable.
keywords-adaptive fuzzy type-2 design; fast terminal synergetic control; bat algorithm; lyapunov stability; power system stabilizer

i. introduction

a power system must remain stable and capable of withstanding a wide range of disturbances, in order to provide secure and reliable services. in a power system, the active power depends on the phase angle between the sending- and receiving-end voltages, whereas the reactive power depends on the voltage magnitudes. a dynamic model of the system can be described by the relationships between active and reactive powers and the bus voltage and frequency [1]. in a stable power system, when synchronous generators are subjected to a disturbance, they either return quickly to their original state or reach a new stable operating point. disturbances cause mechanical oscillations, which must be damped [2]. power systems are complex nonlinear systems that often exhibit low frequency oscillations, due to insufficient damping caused by adverse operating conditions, which can lead the underlying machine to lose synchronism [3]. power system stabilizers (pss) are designed to suppress these oscillations and improve overall stability by applying supplementary control through the excitation controller (avr) [4]. conventional pss, consisting of cascade-connected lead-lag compensators derived from a linearized model of the power system around a certain operating point, have long been used to damp oscillations, regardless of the varying loading conditions or disturbances. however, pss control strategies based on linear models often fail to provide satisfactory results over a wide range of operating conditions [5]. the authors in [4, 5] presented a comprehensive approach for tuning the conventional pss parameters and their effect on the dynamic performance of the power system. however, a pss designed to damp one single oscillation mode can produce adverse effects in other modes. several pss design techniques have been reported [6, 7].
pole placement or eigenvalue methods are used in [8-11]. classical optimization techniques have failed to provide optimum pss parameters [12]. heuristic techniques, such as genetic algorithms (ga), have already been applied to pss design [13]. a particle swarm optimization (pso) algorithm was used in [14] to optimize pss parameters. optimization using the bat algorithm has been investigated in [15, 16], and optimization of pss parameters based on the bat algorithm has also been reported [17-19]. recently, a new synergistic control scheme, which combines control theory with heuristic optimization and computational intelligence methods, has emerged [20-22]. this study proposes an adaptive fuzzy type-2 fast terminal synergetic power system stabilizer. the fast terminal synergetic control parameters are determined using the bat optimization method, as shown in [23, 24]. an adaptive fuzzy type-2 design is used to approximate the unknown functions in the multi-machine power system model.

ii. fast terminal synergetic control

in this section, the fast terminal synergetic (ftsyn) controller is developed for the following nonlinear single-input/single-output (siso) system:

\dot{x}_1 = x_2, \qquad \dot{x}_2 = f(x,t) + g(x,t)u \qquad (1)

where x = [x_1 \; x_2]^t \in r^2 is the state vector, while f(x,t) and g(x,t) are unknown functions. in order to obtain terminal convergence of the state variables, the following macro-variable is defined as a function of the state variables:

\psi = \dot{x}_1 + \alpha x_1 + \beta x_1^{\lambda} \qquad (2)

corresponding author: emira nechadi

where α and β are positive constants. with a proper choice of λ, α, β and given an initial state x_1(0) \neq 0, the dynamics of the macro-variable will reach the equilibrium point in finite time.
the exact time t_s to reach zero is determined by:

t_s = \frac{1}{\alpha(1-\lambda)} \ln \frac{\alpha\, x_1(0)^{1-\lambda} + \beta}{\beta} \qquad (3)

and the equilibrium point at 0 is a terminal attractor. introducing the typical constraint (4), the selected macro-variable is forced to evolve in a desired manner, despite uncertainties and/or disturbances:

t_s \dot{\psi} + \psi = 0, \qquad t_s > 0 \qquad (4)

where t_s is a parameter to be chosen, determining the rate of convergence to the attractor; it can be made arbitrarily small, considering only eventual control constraints. using (2) and (4), the macro-variable derivative is given as:

\dot{x}_2 + \alpha x_2 + \lambda\beta x_1^{\lambda-1} x_2 = -\frac{1}{t_s}\psi \qquad (5)

the fast terminal synergetic control is:

u = -g(x,t)^{-1}\left[ f(x,t) + \alpha x_2 + \lambda\beta x_1^{\lambda-1} x_2 + \frac{1}{t_s}\psi \right] \qquad (6)

to prove the stability of the fast terminal synergetic control, consider the following candidate lyapunov function:

v = \frac{1}{2}\psi^t \psi \qquad (7)

therefore:

\dot{v} = \psi^t \dot{\psi} \qquad (8)
\dot{v} = \psi\left( f(x,t) + g(x,t)u + \alpha x_2 + \lambda\beta x_1^{\lambda-1} x_2 \right) \qquad (9)
\dot{v} = -\frac{1}{t_s}\psi^2 \qquad (10)

then:

\dot{v} \le 0 \qquad (11)

iii. design of adaptive fuzzy type-2 synergetic control

control law (6) ensures system stabilization and robustness, but it cannot be directly implemented, since the functions f(x,t) and g(x,t) are not known. this can be overcome by approximating the functions with two interval type-2 fuzzy adaptive systems. a fuzzy system that uses type-2 fuzzy sets and/or fuzzy logic and inference is called a type-2 fuzzy system [25]. based on the universal approximation theorem, the unknown functions f(x,t) and g(x,t) can be approximated by:

\hat{f}(x,\theta_f) = \theta_f^t \xi(x) \qquad (12)
\hat{g}(x,\theta_g) = \theta_g^t \xi(x) \qquad (13)

where \theta = [\theta_1, \theta_2, \dots, \theta_m] is the parameter vector and \xi = [\xi_1, \xi_2, \dots, \xi_m]^t is the vector of fuzzy basis functions (fbf), such that:

\theta_f^t \xi = \frac{1}{2}\,[\xi_r^t \;\; \xi_l^t]\,[\theta_{fr} \;\; \theta_{fl}]^t \qquad (14)
\theta_g^t \xi = \frac{1}{2}\,[\xi_r^t \;\; \xi_l^t]\,[\theta_{gr} \;\; \theta_{gl}]^t \qquad (15)

where \xi_l = [\xi_{1l}, \xi_{2l}, \dots, \xi_{ml}]^t, \xi_r = [\xi_{1r}, \xi_{2r}, \dots, \xi_{mr}]^t, \theta_r = [\theta_{1r}, \theta_{2r}, \dots, \theta_{mr}], and \theta_l = [\theta_{1l}, \theta_{2l}, \dots, \theta_{ml}].
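as a sanity check on the derivation, the sketch below simulates control law (6) on a toy double integrator with known f(x,t) = -x2 and g(x,t) = 1 (illustrative choices, not the paper's power system model), using the parameter values the paper later reports from the bat search; the macro-variable then decays exactly as constraint (4) demands:

```python
# toy plant for illustration: x1' = x2, x2' = f + g*u, with f(x,t) = -x2
# and g(x,t) = 1 assumed known here (in the paper they are unknown and
# approximated by type-2 fuzzy systems)
alpha, beta, lam, ts = 240.0, 140.0, 3, 0.1   # values reported from the bat search

def f(x1, x2):
    return -x2

def psi(x1, x2):
    # macro-variable (2): psi = x1' + alpha*x1 + beta*x1**lambda
    return x2 + alpha * x1 + beta * x1 ** lam

def u_ftsyn(x1, x2):
    # control law (6): enforces ts*psi' + psi = 0, i.e. constraint (4)
    return -(f(x1, x2) + alpha * x2 + lam * beta * x1 ** (lam - 1) * x2
             + psi(x1, x2) / ts)

x1, x2, dt = 1.0, 0.0, 1e-4
for _ in range(20000):                 # 2 s of simulated time, forward euler
    dx2 = f(x1, x2) + u_ftsyn(x1, x2)  # g = 1
    x1, x2 = x1 + dt * x2, x2 + dt * dx2

print(abs(psi(x1, x2)) < 1e-3, abs(x1) < 1e-3)  # True True: attractor reached
```

because the law cancels the plant dynamics exactly when f and g are known, psi follows psi(t) = psi(0)·exp(-t/ts); the adaptive fuzzy design of section iii is what removes the need for that knowledge.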
this yields the minimum approximation error:

\varepsilon = \delta_f + \delta_g u \qquad (16)

where:

\delta_f = f(x) - \xi^t(x)\theta_f^* \qquad (17)
\delta_g = g(x) - \xi^t(x)\theta_g^* \qquad (18)

and \theta_f^*, \theta_g^* are the optimal approximation parameters. letting:

\tilde{\theta}_f = \theta_f - \theta_f^* \qquad (19)
\tilde{\theta}_g = \theta_g - \theta_g^* \qquad (20)

the following control law:

u = -\hat{g}(x,\theta_g)^{-1}\left[ \hat{f}(x,\theta_f) + \alpha x_2 + \lambda\beta x_1^{\lambda-1} x_2 + \frac{1}{t_s}\psi \right] \qquad (21)

under the adaptation laws:

\dot{\theta}_f = \gamma_1 \psi\, \xi(x) - \gamma_1 \theta_f \qquad (22)
\dot{\theta}_g = \gamma_2 \psi\, \xi(x)\, u - \gamma_2 \theta_g \qquad (23)

ensures the stability of the nonlinear system (1). the lyapunov function is chosen as:

v = \frac{1}{2}\psi^2 + \frac{1}{2\gamma_1}\tilde{\theta}_f^t\tilde{\theta}_f + \frac{1}{2\gamma_2}\tilde{\theta}_g^t\tilde{\theta}_g \qquad (24)

therefore:

\dot{v} = \psi\left( -\tilde{\theta}_f^t\xi(x) - \tilde{\theta}_g^t\xi(x)u + \varepsilon - \frac{1}{t_s}\psi \right) + \frac{1}{\gamma_1}\tilde{\theta}_f^t\dot{\theta}_f + \frac{1}{\gamma_2}\tilde{\theta}_g^t\dot{\theta}_g \qquad (25)

using (22) and (23):

\dot{v} = -\frac{1}{t_s}\psi^2 - \tilde{\theta}_f^t\theta_f - \tilde{\theta}_g^t\theta_g + \varepsilon\psi \qquad (26)

and given that the following inequalities hold:

-\tilde{\theta}_f^t\theta_f \le -\frac{1}{2}\|\tilde{\theta}_f\|^2 + \frac{1}{2}\|\theta_f^*\|^2 \qquad (27)
-\tilde{\theta}_g^t\theta_g \le -\frac{1}{2}\|\tilde{\theta}_g\|^2 + \frac{1}{2}\|\theta_g^*\|^2 \qquad (28)

\dot{v} can be written as:

\dot{v} \le -\frac{1}{t_s}\psi^2 - \frac{1}{2}\|\tilde{\theta}_f\|^2 - \frac{1}{2}\|\tilde{\theta}_g\|^2 + \frac{1}{2}\|\theta_f^*\|^2 + \frac{1}{2}\|\theta_g^*\|^2 + \varepsilon\psi \qquad (29)

using:

\varepsilon\psi = \frac{1}{2t_s}\psi^2 + \frac{t_s}{2}\varepsilon^2 - \frac{1}{2t_s}\left(\psi - t_s\varepsilon\right)^2 \qquad (30)

then:

\dot{v} \le -\frac{1}{2t_s}\psi^2 - \frac{1}{2}\|\tilde{\theta}_f\|^2 - \frac{1}{2}\|\tilde{\theta}_g\|^2 + \frac{1}{2}\|\theta_f^*\|^2 + \frac{1}{2}\|\theta_g^*\|^2 + \frac{t_s}{2}\varepsilon^2 \qquad (31)

and with:

\alpha = \min\left( \frac{1}{t_s}, \gamma_1, \gamma_2 \right), \qquad \mu = \frac{1}{2}\|\theta_f^*\|^2 + \frac{1}{2}\|\theta_g^*\|^2 \qquad (32)

finally:

\dot{v} \le -\alpha v + \mu + \frac{t_s}{2}\varepsilon^2 \qquad (33)

integrating (33) from 0 to t yields:

v(t) \le -\alpha \int_0^t v(\tau)\,d\tau + \frac{t_s}{2}\int_0^t \varepsilon^2(\tau)\,d\tau + \mu t + v(0) \qquad (34)

the terms \alpha \int_0^t v(\tau)\,d\tau and \frac{t_s}{2}\int_0^t \varepsilon^2(\tau)\,d\tau are bounded. it can be concluded that ψ and \dot{\psi} are bounded (\psi \in l_\infty and \dot{\psi} \in l_\infty). since ε, \tilde{\theta}_f and \tilde{\theta}_g are bounded, v(t) is also bounded, guaranteeing the stability of the closed-loop system. to optimize the synergetic parameters t_s, α, β and λ, a fuzzy synergetic approach using the bat algorithm is employed.
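the bat search referred to above can be sketched in a few lines. this is a minimal, assumption-laden illustration: the fitness below is a stand-in (squared relative distance to the optimum the paper reports, ts = 0.1, α = 240, β = 140, λ = 3), whereas the real fitness would be a time-domain performance index of the closed-loop power system; the search box matches the parameter ranges stated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def bat_search(cost, lo, hi, n_bats=20, n_iter=300,
               fmin=0.0, fmax=2.0, loudness=0.9, pulse_rate=0.5):
    """minimal bat-algorithm sketch after yang [23]: frequency-tuned
    velocities, a local random walk around the best bat, and
    loudness-gated acceptance (simplified: constant loudness/pulse rate)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = lo + (hi - lo) * rng.random((n_bats, lo.size))
    v = np.zeros_like(x)
    fit = np.array([cost(xi) for xi in x])
    best = x[fit.argmin()].copy()
    for _ in range(n_iter):
        freq = fmin + (fmax - fmin) * rng.random(n_bats)
        v += (x - best) * freq[:, None]           # frequency-tuned velocity
        cand = np.clip(x + v, lo, hi)
        walk = rng.random(n_bats) > pulse_rate    # some bats walk near the best
        cand[walk] = np.clip(best + 0.01 * (hi - lo)
                             * rng.normal(size=(int(walk.sum()), lo.size)), lo, hi)
        cfit = np.array([cost(c) for c in cand])
        accept = (cfit < fit) & (rng.random(n_bats) < loudness)
        x[accept], fit[accept] = cand[accept], cfit[accept]
        if cfit.min() < cost(best):
            best = cand[cfit.argmin()].copy()
    return best

target = np.array([0.1, 240.0, 140.0, 3.0])       # ts, alpha, beta, lambda
cost = lambda p: float(np.sum(((np.asarray(p) - target) / target) ** 2))
best = bat_search(cost, lo=[0.05, 100.0, 50.0, 1.0], hi=[0.2, 300.0, 150.0, 10.0])
print(cost(best))
```

with a genuine closed-loop fitness, each `cost` evaluation would run a simulation of the stabilized system; the search structure stays the same.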
the typical ranges of the optimized parameters are taken as [0.05, 0.2] for t_s, [100, 300] for α, [50, 150] for β and [1, 10] for λ.

iv. bat optimization algorithm

the bat algorithm is a relatively new meta-heuristic optimization method [23]. the algorithm exploits the so-called echolocation of bats. bats use sonar echoes to detect and avoid obstacles. they navigate by emitting high-frequency sound waves and detecting the time delay of the reflected waves. from the detected time delay, bats know how far away they are from the prey or the obstacle [25]. the algorithm, with the use of random walks (one solution is selected among the current best solutions and then a random walk is applied to generate a new solution for each bat), is presented in detail in [23, 24]. applying the bat optimization algorithm, the optimized values t_s = 0.1000, α = 240, β = 140 and λ = 3 are found.

v. power system model

the nonlinear power system model considered in this paper represents a synchronous machine connected to an infinite bus via a double-circuit transmission line. a nonlinear representation of the power system, considered during a transient period after a major disturbance has occurred, is given by [18, 19]:

\Delta\dot{\omega} = \frac{1}{m}\Delta p, \qquad \Delta\dot{p} = f(x) + g(x)u \qquad (35)

where Δω is the speed deviation, \Delta p = p_m - p_e the accelerating power, m the inertia coefficient, u \in r the input, f(x) and g(x) are nonlinear functions and g(x) \neq 0 in the controllability region. the block diagram of a conventional lead-lag power system stabilizer is shown in figure 1. t_w is the washout time constant, t_{1i}-t_{4i} are the pss time constants and k_i is the pss parameter of generator i. the optimal parameters of the conventional pss are obtained by the bat method and are listed in table ii [19].

fig. 1. conventional power system stabilizer

vi.
simulation results

to prove the robustness and effectiveness of the proposed optimal fuzzy synergetic pss, simulations were carried out under different operating conditions of the multi-machine power system. to demonstrate the stability enhancement achieved with the proposed stabilizer, a three-phase fault is applied at bus 7 of the multi-machine power system, with a duration of 60 ms before its clearance. seven fuzzy sets were used for each variable of the proposed pss. the fuzzy sets for \Delta p \in [-2.5730, 0.55] and q \in [-0.1086, 1.8143] are defined according to the membership functions shown in figures 2 and 3 respectively. three disturbance scenarios were considered in the simulation, in order to test the robustness of the proposed control scheme. table i describes these three cases. in each case, the proposed stabilizer is compared with a bat cpss and an aft2 synpss.

table i. cases of loading conditions for the system (pu)

generator           g1        g2        g3
light case     p    0.9649    1.00      0.45
               q    0.223    -0.1933   -0.2668
normal case    p    1.7164    1.630     0.85
               q    0.6205    0.0665   -0.1086
heavy case     p    3.5730    2.20      1.35
               q    1.8143    0.7127    0.4313

fig. 2. fuzzy sets for speed deviation
fig. 3. fuzzy sets for accelerating power

table ii. bat conventional power system stabilizer parameters

           k         t1        t3
batpss1    46.6588   0.4153    0.2698
batpss2    8.4751    0.4756    0.1642
batpss3    4.2331    0.2513    0.1853

the three-machine test system, used to examine the inter-area oscillation control problem, is shown in figure 4 for the light load case, in figure 5 for the nominal load case and in figure 6 for the heavy load case.

fig. 4. light load scenario
fig. 5. normal load scenario
fig. 6. heavy load scenario

the proposed stabilizer keeps the generators synchronized. three studies were performed to investigate the effect of the proposed bat aft2 synpss. the results are compared with a bat cpss and an aft2 synpss. the fault is cleared and the proposed stabilizer helps the system reach a stable operating point very quickly.

vii. conclusion

in this paper, a bat algorithm was used in combination with an adaptive fuzzy type-2 terminal synergetic design (bat aft2) to build a pss for a multi-machine power system. the obtained results show that the proposed stabilizer is very effective and can cope with different loading conditions. the proposed stabilizer rapidly damps oscillations that would eventually lead to a loss of synchronism. the simulation results demonstrate the superior performance of the proposed pss over the bat conventional power system stabilizer (bat cpss) and the adaptive fuzzy type-2 synergetic power system stabilizer (aft2 synpss).

references
[1] f. p. demello, “concepts of synchronous machine stability as affected by excitation control”, ieee transactions on power apparatus and systems, vol. 88, no. 4, pp. 316-329, 1969
[2] k. tang, g. k. venayagamoorthy, “damping inter-area oscillations using virtual generator based power system stabilizer”, electric power systems research, vol. 129, pp. 126-141, 2015
[3] e. v. larsen, d. a. swann, “applying power system stabilizers part ii: performance objectives and tuning concepts”, ieee transactions on power apparatus and systems, vol. 100, no. 6, pp. 3025-3033, 1981
[4] p. kundur, n. j. balu, m. g. lauby, power system stability and control, mcgraw-hill, 1994
[5] p. m. anderson, a. a. fouad, h. happ, power system control and stability, ieee, 1979
[6] a. ghosh, g. ledwich, o. p. malik, g. s.
automating the classification of field leakage current waveforms

d. pylarinos (dept of ece, university of patras, patras, greece, dpylarinos@yahoo.com), k. siderakis (dept of ee, tei of crete, heraklio, greece), e. pyrgioti (dept of ece, university of patras, patras, greece), e. thalassinakis (assistant director, p.p.c., heraklion, greece), i. vitellas (director, p.p.c., athens, greece)

etasr - engineering, technology & applied science research, vol. 1, no. 1, 2011, 8-12, www.etasr.com

abstract—leakage current monitoring is widely employed to investigate the performance of high voltage insulators and the development of surface activity. field measurements offer an exact view of the experienced activity and the insulators' performance, which are strongly correlated to local conditions. the required long term monitoring, however, results in the accumulation of vast amounts of data. therefore, an identification system for the classification of field leakage current waveforms rises as a necessity. in this paper, 500 leakage current waveforms recorded on a composite post insulator installed at a 150 kv high voltage substation suffering from intense marine pollution are investigated. the insulator was monitored for a period of 13 months. an identification system is designed based on the considered data, employing fourier analysis, wavelet multiresolution analysis and a neural network. results show the large impact of noise in field measurements and the effectiveness of the discussed system on the considered data set. keywords-insulator; leakage current; field; neural network; wavelet; pattern recognition; std_mra i.
introduction outdoor insulation is an important part of transmission and distribution systems, since a single insulator failure may cause an excessive outage of the power system. during operation, electric, mechanical, thermal and chemical stresses apply to outdoor insulators. one of the most influential mechanisms, however, is the pollution phenomenon. the basic stages of the phenomenon, as described in [1, 2], are as follows: the first step is the accumulation of contaminants on the insulators' surface. in the case of hydrophilic insulation (e.g. porcelain), the presence of a wetting mechanism (e.g. rain, fog, humidity) transforms the contaminant layer into a conductive film and the flow of leakage current (lc) on the surface is permitted. initially, this current is resistive and sinusoidal, but as activity advances a distorted sinusoidal current is recorded. the surface heats and dries up unevenly and areas of higher resistance, called dry bands, are formed. the voltage distribution along the insulator is altered. increased stress along the dry bands is observed and dry band arcs appear, which, under favorable conditions, may propagate and ultimately lead to a complete flashover of the insulator. the presence of the arc in the current path is indicated by the onset time delay of the lc waveform in every half-cycle, which causes a knee-like shape. polymer insulators and coatings are used to prevent film formation, and therefore suppress activity, due to their hydrophobicity. however, such materials experience cycles of hydrophobicity loss and recovery [3-7]. the phenomenon is highly correlated with environmental and surface conditions (temperature, wind, location etc.) [1-8]. therefore, only field measurements can offer an exact view of the experienced activity and the insulators' performance. it should be noted that during a hydrophobicity loss period, the waveform shapes recorded on hydrophobic insulators are similar to those recorded on hydrophilic ones [8].
the main issue regarding field leakage current monitoring, however, is that activity is rapid, rather rare and cannot be safely predicted. therefore, continuous long term field monitoring is required. long term monitoring, combined with the necessary high sampling rate, results in the accumulation of vast amounts of data. further, field conditions exaggerate the noise factor and therefore a percentage of the gathered data may be incoherent [9]. in this paper, a data set of 500 lc waveforms recorded on an insulator located in the field during a period of 13 months is investigated. an identification system capable of identifying four different types of waveforms is designed based on the considered waveforms. the identification system employs fourier analysis in order to identify noise generated waveforms, wavelet analysis (specifically the std_mra technique) in order to extract patterns from activity portraying waveforms, and a neural network to automate the identification process. ii. measurements setup the waveforms investigated in this paper have been recorded on a 150 kv post composite insulator located in the linoperamata 150 kv high voltage transmission substation of the greek network. the monitoring period was 13 months. the linoperamata substation is located next to the coast and suffers from intense marine pollution. the greek public power corporation (p.p.c.) has issued a large project to cope with the problem, and as a part of that project several insulators and coatings have been, or still are, monitored and investigated. some of the published results can be found in [8-11]. a schematic representation of the measuring apparatus employed to monitor leakage current is shown in fig. 1. figure 1.
a schematic representation of the lc measuring apparatus. the measurement of leakage current is acquired by inserting in the lc path a collection ring and a hall sensor. the acquired data are transmitted to a central data acquisition system (daq) and sampling is performed at a rate of 2 khz. a user-defined time window is set (e.g. 24 hours) and the daq records one waveform for each time window (e.g. one waveform per day). the waveform that is recorded is the one portraying the highest peak value. various time windows have been applied during the 13 months of monitoring. each waveform has a length of 480 ms, which with a 2 khz sampling rate corresponds to 960 data points. the daq is periodically connected to a laptop in order for the data to be retrieved. the matlab software has been employed for further processing of the retrieved data and for the design and evaluation of the identification system. iii. wavelet analysis and the std_mra technique wavelets are a mathematical tool for signal analysis. extended wavelet theory can be found in [12, 13]. wavelet analysis allows simultaneous time and frequency analysis of signals. a wavelet function is an oscillatory function, with an average value of zero and a band-pass like spectrum. the basic concept in wavelet analysis is to select an appropriate wavelet function ψ (the mother wavelet) and then perform the analysis of a signal using translated (shifted) and scaled (dilated) versions of the mother wavelet. the continuous wavelet transform is given by (1), where a represents the scale, b represents the position, and \psi^{*} represents the complex conjugate of \psi:

f_{a,b} = \int_{-\infty}^{\infty} f(t)\,\psi^{*}_{a,b}(t)\,dt = \frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} f(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt \quad (1)

in case a digitized signal and discrete values of a and b are used, then the discrete wavelet transform is given by (2), where a = s_0^{\,j}, b = ak = k\,s_0^{\,j} and j, k \in \mathbb{Z}.
\mathrm{dwt}f(j,k) = \frac{1}{\sqrt{s_0^{\,j}}}\int_{-\infty}^{\infty} f(t)\,\psi^{*}\!\left(\frac{t - k\,s_0^{\,j}}{s_0^{\,j}}\right)dt \quad (2)

multiresolution analysis (mra) is a wavelet based filtering algorithm, which was created as a theoretical basis to represent signals that decompose in finer and finer detail [12, 13]. the main idea is to use wavelet analysis to decompose the original signal in two parts: the approximation, which contains the low-frequency part of the signal, and the details, which contain the high-frequency part. the first stage of decomposition will give the first level approximation (a1), which if decomposed will give the second level approximation (a2), and so on. detail analysis is performed with a contracted, high frequency version of the mother wavelet, while approximation analysis is performed with a dilated, low frequency version of the same wavelet. an example of mra performed on a lc waveform is shown in fig. 2. in this paper, the std_mra technique is used in order to extract patterns from lc waveforms. each lc waveform is decomposed into six levels using mra and the standard deviation (std) of the details (d1, d2, ..., d6) extracted in each level of the mra is calculated. the normalized six-point vector, called the std_mra vector, is then used as a pattern for the corresponding waveform. the std_mra vector is normalized because similar lc waveform shapes can portray various amplitudes. the mathematical expression of the standard deviation \sigma for an n-point vector x is given in (3), where \bar{x} is given in (4). considering that the shape of the mother wavelet should be similar to the shape of the signal, the daubechies 4 wavelet is chosen as the mother wavelet. the form of the approximation and details during the mra is directly linked to the shape of the mother wavelet, which means that decomposition will produce daub4-like wavelets, as shown in fig. 2. the frequency band of the approximation and details for each decomposition level is shown in table i.
\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}} \quad (3)

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad (4)

figure 2. six level mra analysis of a lc waveform. a1-a6 show the approximations and d1-d6 show the details through levels 1-6.

table i. frequency bands for different mra levels
decomposition level | approximation | details
1 | 0~500 hz | 500~1000 hz
2 | 0~250 hz | 250~500 hz
3 | 0~125 hz | 125~250 hz
4 | 0~62.5 hz | 62.5~125 hz
5 | 0~31.25 hz | 31.25~62.5 hz
6 | 0~15.625 hz | 15.625~31.25 hz

iv. activity portraying waveforms and extracted patterns three different categories for activity portraying waveforms were set after the investigation of the considered data set. sinusoidal and distorted sinusoidal currents are described as type a. dry band arcs that are sustained for a limited number of half-cycles are described as type b, and excessive arcs that are sustained throughout the whole waveform are described as type c. an example of each type and the corresponding pattern derived from std_mra is portrayed in figs. 3, 4 and 5 respectively. v. the artificial neural network artificial neural networks (ann) are highly parallel, adaptive learning systems that can learn a task by generalizing from case studies of the task. if a problem can be posed as a problem of mapping outputs to inputs, then an ann can be used as a black box that learns the mapping from examples of known cases of correlated inputs-outputs. the selection and the design of the ann were done considering the attributes described in [14-16] related to simplicity, speed and efficiency. among the various forms of ann architectures, the multilayer feed forward network with the back propagation learning algorithm was chosen. this architecture (also known as the multilayer perceptron architecture) is suitable for recognizing patterns that don't evolve with time.
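the std_mra pattern extraction described in section iii can be sketched in code. the paper performs the mra in matlab with a daubechies 4 mother wavelet; the dependency-light sketch below is my own assumption in two respects — it uses the simpler haar filter pair instead of daub4, and it normalizes the vector to its maximum (the paper does not state its normalization) — so it illustrates the idea rather than reproducing the exact patterns:

```python
import numpy as np

def haar_mra_details(signal, levels=6):
    """decompose a signal into `levels` detail bands with a haar filter bank.

    each stage splits the current approximation into a half-rate
    approximation (low-pass) and a detail (high-pass), as in mallat's mra.
    """
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        n = len(x) // 2 * 2                                  # drop a trailing odd sample
        pairs = x[:n].reshape(-1, 2)
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # low-pass half
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-pass half
        details.append(detail)
        x = approx
    return details

def std_mra_vector(signal, levels=6):
    """std of the details d1..d6, normalized (here: to unit maximum)."""
    stds = np.array([np.std(d, ddof=1) for d in haar_mra_details(signal, levels)])
    return stds / stds.max()

# example: a 50 hz sinusoid sampled at 2 khz for 480 ms (960 points),
# matching the waveform length used in the paper
t = np.arange(960) / 2000.0
wave = np.sin(2 * np.pi * 50 * t)
print(np.round(std_mra_vector(wave), 3))
```

for a pure 50 hz tone most of the detail energy falls in the band containing 50 hz (level 5, 31.25~62.5 hz, per table i), so the pattern peaks there.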
figure 3. a type a waveform and its std_mra pattern. figure 4. a type b waveform and its std_mra pattern. figure 5. a type c waveform and its std_mra pattern. in order to identify categories that are located in the same area but are not linearly separated (such as the patterns extracted in this study), one hidden layer is sufficient. the number of inputs is six (equal to the elements of the pattern vector) and the ann must identify 3 categories, therefore three output neurons are sufficient. each type is correlated to a three-element output vector easily separable from the others: type a to [1 0 0]', type b to [0 1 0]' and type c to [0 0 1]'. in order to minimize the risk of "trapping" the algorithm around a local minimum, the number of neurons per layer should decrease from the input layer to the output layer. hence, five neurons are selected for the hidden layer. the hyperbolic tangent function is chosen for the hidden layer for its speed and efficiency. the log-sigmoid function is chosen for the output layer in order to compress the outputs into the [0,1] domain. the learning algorithm used is levenberg-marquardt due to its speed in the case of medium-sized anns. the train set consists of 4 type a waveforms, 3 type b waveforms and 6 type c waveforms. a schematic representation of the ann is illustrated in fig. 6. vi. noise generated waveforms noise generated waveforms can be attributed to various field related conditions (cable faults, equipment faults, operation of circuit breakers, switching of heavy loads etc.) [9]. noise generated waveforms vary in form, and some typical examples are shown in fig. 7. those waveforms will lead to erroneous results if fed to the neural network.
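the 6-input, 5-hidden-neuron, 3-output network of section v can be sketched as a forward pass. the weights below are random placeholders, not trained values (levenberg-marquardt training is not reproduced here), so the printed class is meaningless — the sketch only shows the layer shapes and activation functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# layer sizes from section v: 6 inputs, 5 hidden neurons, 3 outputs.
# placeholder weights; in the paper these are learned with
# levenberg-marquardt back propagation.
W1 = rng.normal(size=(5, 6))
b1 = rng.normal(size=5)
W2 = rng.normal(size=(3, 5))
b2 = rng.normal(size=3)

def forward(pattern):
    """map a 6-element std_mra pattern to 3 class scores in (0, 1)."""
    h = np.tanh(W1 @ pattern + b1)                  # hidden layer: hyperbolic tangent
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))     # output layer: log-sigmoid

scores = forward(np.array([0.1, 0.2, 0.9, 1.0, 0.4, 0.1]))
label = "abc"[int(np.argmax(scores))]   # nearest of [1 0 0]', [0 1 0]', [0 0 1]'
print(scores, label)
```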
therefore, it is highly desirable that they are identified and discarded at the early stages of the identification system, using deterministic criteria. the voltage frequency in the greek system is 50 hz and therefore the fundamental frequency of every leakage current waveform should be 50 hz. this criterion can be used to discard waveforms such as the first three in fig. 7. however, waveforms similar to the last one in fig. 7 can exhibit a 50 hz fundamental. an example is shown in fig. 8. an amplitude criterion could be applied in order to discard such waveforms. however, a noise generated spike can be superimposed on such waveforms, as shown in the first waveform of fig. 7, thus allowing them to exceed any threshold. therefore, a simple low pass filter with a cut-off frequency of 200 hz is employed, in order to remove spikes while maintaining the main part of the waveform, and then an amplitude criterion is applied. figure 6. a schematic representation of the ann. figure 7. noise generated waveforms. figure 8. two noise generated waveforms and their frequency content. vii. the identification system a block diagram of the identification system is shown in fig. 9. initially, the frequency content of each lc waveform is calculated using the fourier transform. if the fundamental frequency of the waveform differs from 50 hz, then the waveform is attributed to noise. if the waveform exhibits a 50 hz fundamental, then it passes through the low-pass filter and the amplitude of the filtered waveform is calculated. if the amplitude of the filtered waveform is found smaller than 1 ma, then the waveform is attributed to noise. otherwise, std_mra is performed on the original waveform (not the filtered one). the extracted pattern (the std_mra vector) is then fed to the artificial neural network, which identifies the waveform type. figure 9. block diagram of the identification system
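the noise screening stages of section vii (50 hz fundamental check, 200 hz low-pass, 1 ma amplitude criterion) can be sketched as follows. the fft-bin low-pass, the ±1 hz frequency tolerance and the function names are my own assumptions — the paper does not state its filter design:

```python
import numpy as np

FS = 2000.0          # sampling rate (hz), as in the measurement setup
N = 960              # samples per waveform (480 ms)

def fundamental_hz(wave):
    """frequency of the strongest non-dc fft component."""
    spec = np.abs(np.fft.rfft(wave))
    spec[0] = 0.0                      # ignore the dc term
    return np.fft.rfftfreq(len(wave), d=1.0 / FS)[int(np.argmax(spec))]

def lowpass_200hz(wave):
    """crude low-pass: zero all fft bins above 200 hz (illustrative only)."""
    spec = np.fft.rfft(wave)
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / FS)
    spec[freqs > 200.0] = 0.0
    return np.fft.irfft(spec, n=len(wave))

def is_noise(wave, amp_threshold_a=1e-3):
    """screening: non-50 hz fundamental, or filtered peak below 1 ma."""
    if abs(fundamental_hz(wave) - 50.0) > 1.0:   # tolerance is an assumption
        return True
    return np.max(np.abs(lowpass_200hz(wave))) < amp_threshold_a

t = np.arange(N) / FS
lc = 5e-3 * np.sin(2 * np.pi * 50 * t)   # 5 ma, 50 hz: activity
print(is_noise(lc))                       # → False
```

note that with n = 960 and fs = 2 khz the bin spacing is 2000/960 ≈ 2.083 hz, so 50 hz falls exactly on bin 24; a waveform with a 300 hz fundamental, or a 50 hz waveform below 1 ma after filtering, would be flagged as noise.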
viii. results and discussion the identification system was able to successfully identify all 500 waveforms and the results are shown in table ii. the results show the significant impact of noise in field leakage current waveform monitoring. further, it is shown that the discussed identification system can successfully recognize and further categorize activity portraying waveforms.

table ii. number of waveforms per type
waveform type | number of waveforms
noise | 460
type a | 9
type b | 7
type c | 24
sum | 500

however, it should be mentioned that the design of the discussed identification system is based upon the considered data set, which is relatively small. further investigation of field waveforms is required. nevertheless, the results show that the std_mra technique combined with neural networks can be applied in order to identify different types of field leakage current waveforms, although it is highly probable that further investigation may result in the modification of the system and possibly in the addition of new categories. ix. conclusion leakage current monitoring is widely employed in order to investigate surface activity on high voltage insulators and to evaluate their performance, which are both strongly correlated to local conditions. field monitoring can offer an exact view of the insulators' performance and the experienced activity. however, the necessary long term monitoring results in the accumulation of vast amounts of data and the implementation of an identification system rises as a necessity. in this paper, 500 waveforms recorded over a 13 month period on a 150 kv post composite insulator located at a 150 kv high voltage substation suffering from intense marine pollution are investigated. an identification system is designed, capable of identifying four basic types of waveforms, including noise generated waveforms.
results show that noise is significantly exaggerated in the field. in addition, it is shown that wavelet analysis, and especially the std_mra technique, combined with neural networks can be successfully employed to automate the classification of field leakage current waveforms. references [1] cigre wg 33-04, "the measurement of site pollution severity and its application to insulator dimensioning for a.c. systems", electra, no. 64, pp. 101-116, 1979 [2] cigre wg 33-04, tf 01, a review of current knowledge: polluted insulators, cigre publications, 1998 [3] h. hillborg, u. w. gedde, "hydrophobicity changes in silicone rubbers", ieee trans. dielectr. electr. insul., vol. 6, no. 5, pp. 703-717, 1999 [4] z. jia, h. gao, z. guan, l. wang, j. yang, "study on hydrophobicity transfer of rtv coatings based on a modification of absorption and cohesion theory", ieee trans. dielectr. electr. insul., vol. 13, no. 6, pp. 1317-1324, 2006 [5] d. a. swift, c. spellman, a. haddad, "hydrophobicity transfer from silicone rubber to adhering pollutants and its effect on insulator performance", ieee trans. dielectr. electr. insul., vol. 13, no. 4, pp. 820-829, 2006 [6] s. kumagai, "hydrophobicity transfer of rtv silicone rubber aged in single and multiple environmental stresses and the behaviour of lmw silicone fluid", ieee trans. power deliv., vol. 18, no. 2, pp. 506-516, 2003 [7] n. yoshimura, s. kumagai, s. nishimura, "electrical and environmental aging of silicone rubber used in outdoor insulation", ieee trans. dielectr. electr. insul., vol. 6, no. 5, pp. 632-650, 1999 [8] k. siderakis, d. agoris, "performance of rtv silicone rubber coatings installed in coastal systems", electr. power syst. res., vol. 78, no. 2, pp. 248-254, 2008 [9] d. pylarinos, k. siderakis, e. pyrgioti, e. thalassinakis, i. vitellas, "impact of noise related waveforms on long term field leakage current measurements", ieee trans. dielectr. electr. insul., vol. 18, no. 1, 2011 [10] k. siderakis, d. agoris, s.
gubanski, "salt fog evaluation of rtv sir coatings with different fillers", ieee trans. power deliv., vol. 23, no. 4, pp. 2270-2277, 2008 [11] k. siderakis, d. agoris, j. stefanakis, e. thalassinakis, "influence of the profile on the performance of porcelain insulators installed in coastal high voltage networks in the case of condensation wetting", iee proceedings - science, measurement and technology, vol. 153, no. 4, pp. 158-163, 2006 [12] s. g. mallat, "a theory for multiresolution signal decomposition: the wavelet representation", ieee trans. pattern analysis and machine intelligence, vol. 11, pp. 674-693, 1989 [13] s. mallat, a wavelet tour of signal processing, academic press, 1999 [14] s. haykin, neural networks: a comprehensive foundation, prentice hall, india, 1999 [15] e. dermatas, pattern recognition, university of patras academic press, department of electrical and computer engineering, 1997 [16] c. m. bishop, neural networks for pattern recognition, oxford university press, 1995 authors profile dionisios pylarinos was born in athens in 1981. he received a diploma degree in electrical and computer engineering from the university of patras in 2007. presently he is with the high voltage laboratory of the department of electrical and computer engineering at the university of patras. he has worked as a scientific consultant for ppc. his research interests include outdoor insulation, electrical discharges, signal processing and pattern recognition. kiriakos siderakis was born in heraklion in 1976. he received a diploma degree in electrical and computer engineering in 2000 and the ph.d. degree in 2006 from the university of patras. presently, he is an application professor at the department of electrical engineering at the technological educational institute of crete. his research interests include outdoor insulation, electrical discharges, high voltage measurements and high voltage equipment diagnostics and reliability.
eleftheria pyrgioti was born in 1958 in greece. she received her diploma degree in electrical engineering from patras university in 1981 and the ph.d. degree from the same university in 1991. she is an assistant professor at the department of electrical and computer engineering at the university of patras. her research activity is directed to high voltage, lightning protection, insulation coordination and distributed generation. emmanuel thalassinakis received the diploma in electrical and mechanical engineering and also the ph.d. degree from the national technical university of athens. after working for the ministry of the environment, in 1991 he joined the public power corporation (p.p.c.) where he is now assistant director of the islands network operations department. isidoros vitellas was born in 1954 in greece. he has a diploma in electrical engineering and the ph.d. degree in the same field. he is currently director of the islands network operations department in p.p.c. (public power corporation) athens, greece. engineering, technology & applied science research vol. 8, no. 
3, 2018, 3041-3043, www.etasr.com, sohu et al.: flexural performance of concrete reinforced by plastic fibers

flexural performance of concrete reinforced by plastic fibers

muhammad tahir lakhiar (faculty of civil and environmental engineering, universiti tun hussein onn malaysia, parit raja, johor, malaysia, mtl.eng17@gmail.com), samiullah sohu (faculty of civil and environmental engineering, universiti tun hussein onn malaysia, parit raja, johor, malaysia, sohoosamiullah@gmail.com), imtiaz ali bhatti (faculty of civil and environmental engineering, universiti tun hussein onn malaysia, parit raja, johor, malaysia, engrimtiaz290@gmail.com), nadeem-ul-kareem bhatti (department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, knadeem_b@yahoo.com), suhail ahmed abbasi (department of civil engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, abbasi.suhail2009@gmail.com), muhammad tarique (department of civil engineering, mehran university of engineering and technology, jamshoro, sindh, pakistan, mtarique181@gmail.com)

abstract—for sustainable construction development, waste materials are recycled or reused. many researchers have tried to create an innovative green concrete utilizing waste materials. the aim of this research is to contribute to and promote the use of plastic waste in concrete. the concrete's flexural performance and workability were investigated using different percentages (0%, 0.2%, 0.4%, 0.6%, 0.8% and 1%) of plastic fibers in concrete. in this study, m15 grade concrete beams were cast and cured for 7 and 28 days to analyze the flexural performance and workability. the outcomes demonstrated that the workability was slightly reduced by utilizing plastic fibers, whereas the flexural strength improved by 16.5% at 0.6% addition of plastic fibers in concrete. keywords-flexural strength; workability; plastic fibers; green concrete i.
introduction concrete is a material widely utilized in the construction industry due to its many benefits. it is feasible, durable and economical compared to other building materials [1-3]. the flexural strength of concrete is low compared to its compressive strength because concrete is brittle in nature [4]. to enhance the flexural strength, steel fibers are mostly used as reinforcement [5]. steel manufacturing contributes to carbon dioxide (co2) emissions in the atmosphere, which contribute to global warming [6], and therefore the need for sustainable green concrete rises day by day. concrete made from waste, which is more eco-friendly, is known as green concrete. in other words, green concrete is concrete in which waste materials are utilized in order to save natural resources and thus decrease environmental pollution. in this type of concrete, waste material is used to replace at least one of its ingredients. production procedures, life cycle sustainability, the quantity of cement and whether the production process affects the environment are the key factors adopted to categorize whether a concrete is green or not [7, 8]. the main purpose of developing green concrete is to minimize co2 emissions, which cause environmental pollution, and to re-use waste materials, which create disposal problems. ii. materials and methodology a. materials for this experimental study, m15 grade concrete (1:2:4 ratio) was utilized with the water-cement ratio constant at 0.55. the material properties of the fine and coarse aggregates and the cement are shown in tables i-iii. plastic fibers (figure 1) were utilized from 0% to 1% with an increment of 0.2%. ordinary portland cement was utilized for this research. b. methodology m15 grade concrete was cast incorporating 0%-1% of plastic fibers with an increment of 0.2%. two concrete beam types were cast.
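as background on the flexural testing used in this study, astm c78 third-point loading computes the modulus of rupture (for fracture within the middle third of the span) as r = p·l/(b·d²). a minimal sketch, where the 600 mm support span and the 23.6 kn failure load are made-up illustration values — only the 150 mm x 200 mm beam cross-section comes from this study:

```python
def modulus_of_rupture(p_n, span_mm, b_mm, d_mm):
    """astm c78 third-point loading, fracture in the middle third:
    r = p * l / (b * d^2), with p in newtons and lengths in mm,
    giving the strength in mpa (n/mm^2)."""
    return p_n * span_mm / (b_mm * d_mm ** 2)

# illustrative numbers: 150 mm x 200 mm section (as in this study);
# the 600 mm span and 23.6 kn failure load are assumptions for the example.
r = modulus_of_rupture(23_600, 600, 150, 200)
print(round(r, 2), "mpa")   # → 2.36 mpa
```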
one was a conventional concrete beam which had 0% plastic fibers (pf) and the other was a plastic fiber concrete beam (pfc-b) which contained plastic fibers. fifty-four beams were cast; three beams were tested for each proportion at each curing age. the beams, of size 150mm x 200mm x 1500mm, were tested for flexural strength after water curing regimes of 7, 14 and 28 days. mixing was carried out utilizing a rotary mixer. the workability of the concrete mix was examined by the standard slump test, using a standard slump cone and the procedures of astm c143. all specimens were extracted from the molds after 24 hours and cured for the required testing age. the concrete flexural strength was tested using the three-point flexural loading test, following the overall procedure described in the astm c78/78m-18 standard.

table i. material properties of fine aggregate
test | result
water absorption | 1.1%
specific gravity | 2.6
fineness modulus | 2.96
color | light brown

table ii. material properties of coarse aggregate
test | result
water absorption | 1.19%
specific gravity | 2.68
fineness modulus | 6.15

table iii. material properties of cement (o.p.c.)
test | result
consistency | 31%
specific gravity | 3.14
fineness modulus | 1.18

figure 1. plastic fibers

iii. results and discussion a. workability of concrete the slump test (astm c143) [9] was followed to get the workability of the concrete. the slump flow results are shown in table iv. the results show that the slump flow slightly decreased when incorporating plastic fibers in the concrete.

table iv. slump flow of concrete mix
mixture | slump value (mm)
conventional concrete beam | 32.37
pfc-b (0.2% pf) | 30.34
pfc-b (0.4% pf) | 29.50
pfc-b (0.6% pf) | 28.24
pfc-b (0.8% pf) | 26.32
pfc-b (1% pf) | 25.65

b. flexural strength of beams the flexural strength test was performed according to the astm c78/78m-18 standard [10]. the outcomes of all concrete mixes are shown in table v and figure 2. the outcomes demonstrated that the beam flexural performance increased rapidly by utilizing plastic fibers up to 0.8% and then decreased slightly. after 7 days of water curing, the flexural strength of pfc-b increased up to 11% and decreased by 1.5% when 1% of pf was utilized. after 14 days of water curing, the flexural strength of pfc-b increased up to 15.23% and decreased by 1.42% when 1% of pf was utilized. after 28 days of water curing, the flexural strength of pfc-b increased up to 16.52% and decreased by 2.54% when 1% of pf was utilized.

table v. flexural strength of concrete
mixture | 7 days (mpa) | 14 days (mpa) | 28 days (mpa)
conventional concrete beam | 1.96 | 2.10 | 2.36
pfc-b (0.2% pf) | 2.08 | 2.26 | 2.57
pfc-b (0.4% pf) | 2.13 | 2.32 | 2.68
pfc-b (0.6% pf) | 2.18 | 2.42 | 2.75
pfc-b (0.8% pf) | 2.15 | 2.35 | 2.65
pfc-b (1% pf) | 1.93 | 2.07 | 2.30

figure 2. flexural strength outcomes of all mixes

iv. conclusion the workability of concrete was slightly reduced when plastic fibers were added, because of the fibers' resistance to flow. the flexural strength increased up to 16.5% compared to the control sample. the optimum percentage of fibers was found to be 0.6%, and it enhanced the flexural performance of concrete in the context of all other mixes.

references [1] m. t. lakhiar, n. mohamad, m. a. b. shaikh, a. a. jhatial, a. a. vighio, a. a. abdul samad, "effect of river indus sand on concrete tensile strength", engineering, technology & applied science research, vol. 8, no. 2, pp. 2796-2798, 2018 [2] z. li, advanced concrete technology, john wiley and sons inc., 2011 [3] a. m. neville, properties of concrete, prentice hall, 2011 [4] r. nagalakshmi, "experimental study on strength characteristics on m25 concrete with partial replacement of cement with fly ash and coarse aggregate with coconut shell", international journal of scientific & engineering research, vol. 4, no. 1, 2013 [5] e. mello, c. ribellato, e. mohamedelhassan, "improving concrete properties with fibers addition", international journal of civil and environmental engineering, vol. 8, no. 3, pp. 249-254, 2014 [6] y. wang, q. wang, y. hang, z. zhao, s. ge, "co2 emission abatement cost and its decomposition: a directional distance function approach", journal of cleaner production, vol. 115, pp. 205-215, 2018 [7] a. baikerikar, "a review on green concrete", journal of emerging technologies and innovative research, vol. 1, no. 6, pp. 472-474, 2014 [8] k. h. obla, "what is green concrete?", the indian concrete journal, vol. 24, pp. 26-28, 2009 [9] astm international, astm c143/c143m-15, standard test method for slump of hydraulic-cement concrete, astm international, west conshohocken, pa, 2015 [10] astm international, astm c78/78m-18, standard test method for flexural strength of concrete (using simple beam with third-point loading), astm international, west conshohocken, pa, 2018

engineering, technology & applied science research vol. 10, no.
4, 2020, 6052-6056 6052 www.etasr.com islam et al.: optimized controller design for an islanded microgrid using non-dominated sorting sine … optimized controller design for an islanded microgrid using non-dominated sorting sine cosine algorithm (nssca) quazi nafees ul islam electrical and electronic engineering department islamic university of technology gazipur, bangladesh quazinafees@iut-dhaka.edu saad mohammad abdullah electrical and electronic engineering department islamic university of technology gazipur, bangladesh saadabdullah@iut-dhaka.edu md. arif hossain electrical and electronic engineering department islamic university of technology gazipur, bangladesh arifhossain@iut-dhaka.edu abstract—in order to cope with the increasing energy demand, microgrids emerged as a potential solution which allows the designer a lot of flexibility. the optimization of the controller parameters of a microgrid ensures a stable and environment friendly operation. non-dominated sorting sine cosine algorithm (nssca) is a hybrid of sine cosine algorithm and non-dominated sorting technique. this algorithm is applied to optimize the control parameters of a microgrid which incorporates both static and dynamic load. the obtained results are compared with the results of the established non-dominated sorting genetic algorithm-ii (nsga-ii) in order to justify the proposal of the nssca. the average time needed to converge in nssca is 7.617s whereas nsga-ii requires an average of 10.660s. moreover, the required number of iterations for nssca is 2 which is significantly less in comparison to the 12 iterations in nsga-ii. keywords-multi-objective; nsga-ii; nssca; dynamic load; static load; spss; i. introduction renewable energy sources are often integrated in microgrids because they are environmentally friendly and are considered an answer to the fossil fuel scarcity. 
However, due to their unpredictable nature, renewable energy sources often hamper stability and may cause large frequency and voltage deviations in a microgrid [1]. The control parameters play a significant role in the smooth and efficient operation of a microgrid. If there is any type of disturbance in the system, the selection of proper controller parameters and their tuning to optimized values ensures stable system operation [2]. Thus, research now focuses on optimizing the controller parameters, load sharing, cost, etc. of a microgrid with a view to enhancing its stability, efficiency, and cost effectiveness [3, 4]. In this respect, various optimization algorithms are often adopted because they can often identify the global optimum and also have a better convergence probability [5, 6].

In single objective optimization (SOO), the aim is generally to search for the best design or decision, which is expected to be the global solution of the optimization problem. In multiple objective optimization (MOO), there may be one or more solutions that are best (global minimum or maximum) with respect to all objectives [7]. MOO renders greater flexibility to designers than SOO when selecting the most suitable result [8]: instead of presenting a single solution, MOO provides a set of solutions known as the Pareto front, where none of the solutions dominates the others, so the designer can choose any solution depending on the requirements.

Various works have been conducted on optimizing the controller parameters of a microgrid. In [9], the artificial fish swarm algorithm was used to optimize only the droop controller gains for controlling the frequency deviation in a microgrid operating in islanded mode, but other controller parameters were not optimized. Authors in [10] used the MOO NSGA-II to optimize the controller parameters but lacked a comparative analysis with existing works.
In this regard, the present study proposes a new MOO algorithm in which the sine cosine algorithm (SCA) is combined with the non-dominated sorting technique to form the hybridized Non-dominated Sorting Sine Cosine Algorithm (NSSCA). SCA is basically an SOO algorithm, first introduced in 2015 [11]. The incorporation of the non-dominated sorting technique transforms this SOO into an MOO. The proposed NSSCA is used to obtain globally optimal control parameters for an islanded microgrid consisting of both static and dynamic load. The main focus of this study is to obtain better dynamic performance during load variation by applying NSSCA. In order to establish the efficacy of the designed NSSCA, the results are compared with those of the established Non-dominated Sorting Genetic Algorithm (NSGA-II) [12].

Corresponding author: Quazi Nafees Ul Islam

II. MICROGRID MODEL

In this study, an islanded microgrid as shown in Figure 1 is considered, composed of two distributed generation (DG) units, where one unit has a static (R-L) load installed and an induction motor on the other unit serves as the dynamic load. The complete microgrid model used in this study is adopted from [2, 13]. In microgrid modeling, the inverter, loads, and network design are the three main parts. Figure 2 shows the block diagram of an inverter connected to the microgrid along with its associated controllers. Among the three controller units, the power controller determines the frequency and magnitude of the output voltage reference for the voltage controller, the voltage controller determines the reference for the inductor output currents using a proportional-integral (PI) regulator after comparing the actual and reference voltage values, and finally the current controller supplies switching signals to the inverter.
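The cascaded voltage and current loops described above are built from PI regulators. Below is a minimal discrete-time PI sketch; the class name, gains, and time step are illustrative and not taken from the paper:

```python
class PIRegulator:
    """Minimal discrete-time PI regulator (illustrative gains, not from the paper)."""

    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, reference: float, measured: float) -> float:
        # Proportional-integral law: u = Kp*e + Ki * integral(e dt)
        error = reference - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# An outer voltage-loop PI producing a current reference for the inner loop.
voltage_loop = PIRegulator(kp=0.05, ki=390.0, dt=1e-4)
i_ref = voltage_loop.step(reference=1.0, measured=0.95)
```

In the paper's scheme one such regulator pair exists per controller per inverter, which is where the eight tunable gains in Section III come from.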
There are thirteen states for each individual inverter unit, i.e. twenty-six states in total for the two inverters, two states for the static load model, two states for the line network, and five states for the induction motor. The complete microgrid model used in this study is developed by incorporating the state-space models of the individual inverters, the static load, the line network, and the induction motor. The composite state vector is shown in (1), where Δx_inv1 and Δx_inv2 collect the inverter states, Δi_lineDQ and Δi_loadDQ the line and load current states, and Δx_IM the induction motor states:

Δx_mg = [Δx_inv1  Δx_inv2  Δi_lineDQ  Δi_loadDQ  Δx_IM]^T    (1)

Fig. 1. Two DGs with static and dynamic load.
Fig. 2. Block diagram of an inverter connected to the microgrid.

III. PROBLEM FORMULATION

Stable operation of a microgrid in islanded mode is an important aspect that needs to be ensured in order to acquire proper system output. The presence of both static and dynamic load in the microgrid creates challenges to its stable operation. Moreover, the controller parameters play a vital role in system stability. In this system, PI regulators are used to vary the gains of both the voltage and current controllers. These controller gains need to be fine-tuned within proper limits for stable operation of the microgrid.

A. Objective Function

The microgrid model used in this study has eight controller gain parameters, as there are two inverters for the two DGs and each inverter has separate voltage and current controller units. Each voltage and current controller has a separate PI regulator to control the controller gains. Here, K_pv1, K_iv1 and K_pv2, K_iv2 represent the PI gains of the voltage controllers of inverter-1 and inverter-2 respectively. Similarly, K_pc1, K_ic1 and K_pc2, K_ic2 represent the PI gains of the current controllers of inverter-1 and inverter-2 respectively. Eigenvalue analysis provides information on the damping characteristics of a system, which plays an important role in system stability.
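As a toy illustration of how damping information is read off eigenvalues, the sketch below extracts the real part σ, the imaginary part ω, and the damping ratio ζ = −σ/√(σ² + ω²) for a hand-solvable 2×2 state matrix; the matrix entries are illustrative, not taken from the paper's 35-state model:

```python
import cmath

def eig2x2(a11: float, a12: float, a21: float, a22: float):
    """Eigenvalues of a 2x2 state matrix via the characteristic polynomial."""
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def damping_ratio(lam: complex) -> float:
    """zeta = -sigma / sqrt(sigma^2 + omega^2) for eigenvalue sigma + j*omega.

    Assumes lam != 0; abs(lam) equals sqrt(sigma**2 + omega**2).
    """
    return -lam.real / abs(lam)

# Toy second-order system with eigenvalues -1 +/- 2j (stable, underdamped).
l1, l2 = eig2x2(-1.0, 2.0, -2.0, -1.0)
```

A more negative σ and a larger ζ both indicate better damping, which is exactly the trade-off the two objective functions below encode.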
If the eigenvalues of the states of the microgrid model move away from the imaginary axis into the left half of the s-plane, their real parts become more negative, the damping performance of the system improves, and system stability is ensured. The main objective of this study is to optimize the above-mentioned controller gains to obtain stable performance. The objective functions are given in (2) and (3), where σ and ζ denote the real part of the eigenvalues and the damping ratio respectively:

Minimize f1 = σ_desired − min(σ_i),  i = 1, …, N    (2)
Minimize f2 = ζ_desired − min(ζ_i),  i = 1, …, N    (3)

N represents the total number of states, which is 35 for this study. For each of these states the σ and ζ values are evaluated so that both objectives are satisfied. σ_desired and ζ_desired specify the limits of the objective functions [14]. These two objective functions are contradictory in nature, which can be understood from (4), where ω represents the frequency of the states. From (4) it can be seen that the two quantities cannot be improved independently: a change in σ at fixed ω changes ζ as well, and vice versa.

ζ_i = −σ_i / √(σ_i² + ω_i²)    (4)

The constraint for this study is given in (5), where the controller gains are limited to a desired boundary obtained by performing root locus analysis in order to obtain a stable microgrid system with improved damping performance:

0 ≤ K_pv(1,2), K_iv(1,2), K_pc(1,2), K_ic(1,2) ≤ 500    (5)

B. Proposed Solution

NSSCA, a novel hybridized optimization algorithm, is proposed in this study. The optimization technique is presented with the help of the flow chart in Figure 3. The detailed steps of the algorithm are given below.

Step 1: Initialize the system parameters and generate the initial population. The total number of iterations is also defined in this step.
Step 2: The fitness of the solutions is evaluated by the objective functions f1 and f2.
Step 3: Non-dominated sorting of the initial generation of the population is carried out on the basis of fitness value.
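The non-dominated sorting in Step 3 can be sketched as a naive O(n²) Pareto-front peeling over the two minimization objectives; function names are illustrative, and this is not the faster bookkeeping scheme used inside NSGA-II:

```python
def dominates(a, b) -> bool:
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization). Assumes distinct points."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group (f1, f2) tuples into Pareto fronts, best front first."""
    remaining = list(points)
    fronts = []
    while remaining:
        # A point is in the current front if nothing left dominates it.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Four candidate solutions scored on (f1, f2); (4, 4) is dominated by (2, 2).
fronts = non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)])
```

The first front returned is the Pareto front from which the designer picks a compromise solution.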
Step 4: In this step, the crowding distance and ranking of the population are computed.
Step 5: The positions of all solution sets are updated by (6) and (7), following [11]:

X_i^(t+1) = X_i^t + r1 × sin(r2) × |r3 P_i^t − X_i^t|,  r4 < 0.5    (6)
X_i^(t+1) = X_i^t + r1 × cos(r2) × |r3 P_i^t − X_i^t|,  r4 ≥ 0.5    (7)

where X_i^t is the current solution after the t-th iteration along the i-th dimension, and P_i^t is the destination solution point after the t-th iteration along the i-th dimension. Here, r1 indicates the direction along which the solution will move, i.e. whether the solution will confine its movement to the space between the solution and the destination point or traverse beyond it. r1 is updated using (8), where a is a constant and T is the maximum number of iterations:

r1 = a − t (a / T)    (8)

r2 is a random number in [0, 2π] which determines the distance the solution moves towards or away from the destination, and r3 is a random weight defining the effect of the destination in the distance calculation: if r3 > 1 the effect of the destination is emphasized, and the opposite holds if r3 < 1 [11]. r4 is a random number in [0, 1] which determines whether (6) or (7) is used to update the position.

Fig. 3. Flowchart of the NSSCA.

Step 6: In order to update the positions of the solutions, r1, r2, r3 and r4 are updated so as to reach the best destination point and determine the best solution.
Step 7: The new set of solutions is merged with the initial set, and then non-dominated sorting, crowding distance calculation, and ranking of the merged set of solutions are applied until the maximum number of iterations is reached.
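The position update of (6)-(8) can be sketched as follows; the function name, dimension count, and the sampling range [0, 2] for r3 are assumptions consistent with the original SCA paper [11], not details stated here:

```python
import math
import random

def sca_update(x, dest, t, T, a=2.0, rng=random):
    """One SCA position update following Eqs. (6)-(8).

    x    : current solution (list of floats, one entry per dimension)
    dest : destination (best-so-far) point P^t
    t, T : current iteration and maximum number of iterations
    a    : constant controlling the linear decay of r1 (Eq. (8))
    """
    r1 = a - t * (a / T)                     # Eq. (8): shrinks exploration over time
    new = []
    for xi, di in zip(x, dest):
        r2 = rng.uniform(0.0, 2.0 * math.pi)  # movement distance/phase
        r3 = rng.uniform(0.0, 2.0)            # random weight on the destination
        r4 = rng.random()                     # branch selector between (6) and (7)
        step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2)) * abs(r3 * di - xi)
        new.append(xi + step)                 # Eqs. (6)/(7)
    return new

random.seed(0)
# Two controller gains moving toward a destination inside the [0, 500] box.
x_next = sca_update([100.0, 200.0], dest=[250.0, 250.0], t=10, T=100)
```

Note that at t = T the decay in (8) drives r1 to zero, so the population stops moving, which is the intended convergence behavior.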
Step 8: The best possible position of the solution is determined after the final iteration, and that position indicates the best solution.

IV. RESULTS AND DISCUSSION

A. Eigenvalue Analysis

The ability of the proposed NSSCA algorithm to stabilize the system was examined through eigenvalue analysis. The eigenvalues of the different states obtained before and after optimizing the controller parameters using NSSCA are shown in Table I. From the table, it can be observed that for some of the states the real parts of the eigenvalues are positive before optimization, which indicates their location in the right half of the s-plane and thus introduces instability to the system. After optimizing the controller parameters using NSSCA, the eigenvalues of these states become negative, i.e. they shift from the right to the left half of the s-plane, making the system more stable.

TABLE I. EIGENVALUE ANALYSIS

Index | Eigenvalue before optimization | Eigenvalue after optimization
1  | 2909410 + 12209388i       | -2909410 + 12209388i
2  | 2909410 - 12209388i       | -2909410 - 12209388i
3  | -3261123 + 8309127i       | -3261123 + 8309127i
4  | -3261123 - 8309127i       | -3261123 - 8309127i
5  | -55.194 + 45315i          | -19853 + 389577i
6  | -55.194 - 45315i          | -19853 - 389577i
7  | -352.417 + 44476.812i     | -19823 + 389227i
8  | -352.417 - 44476.812i     | -19823 - 389227i
9  | -4107.969 + 31524.405i    | -29070 + 318066i
10 | -4107.969 - 31524.405i    | -29070 - 318066i
11 | -5629.972 + 29764.098i    | -29078 + 318482i
12 | -5629.972 - 29764.098i    | -29078 - 318482i
13 | -8720.541 + 8365.113i     | -8277.936 + 20647.906i
14 | -8720.541 - 8365.113i     | -8277.936 - 20647.906i
15 | -6328.127 + 8624.675i     | -12555.159 + 18762.199i
16 | -6328.127 - 8624.675i     | -12555.159 - 18762.199i
17 | -1291.429 + 0i            | -2209.282 + 504.685i
18 | 213.426 + 784.754i        | -2209.282 - 504.685i
19 | 213.426 - 784.754i        | -25.460 + 198.321i
20 | -81.284 + 376.280i        | -25.460 - 198.321i
21 | -81.284 - 376.280i        | -157.538 + 0i
22 | -162.677 + 0i             | -1.092 + 55.904i
23 | -70.227 + 1.578i          | -1.092 - 55.904i
24 | -70.227 - 1.578i          | -70.694 + 0i
25 | -67.497 + 0i              | -67.571 + 1.586i
26 | 22.647 + 0i               | -67.571 - 1.586i
27 | 1.315 + 0i                | -2.187 + 0i
28 | -2.394 + 0i               | -0.369 + 0.044i
29 | -2.393 + 0i               | -0.369 - 0.044i
30 | -0.018 + 0.045i           | -1.686 + 0.009i
31 | -0.018 - 0.045i           | -1.686 - 0.009i
32 | -0.021 + 0i               | -1.686 + 0i
33 | -0.329 + 0i               | -0.560 + 0i
34 | -0.200 + 0i               | -0.701 + 0i
35 | -0.202 + 0i               | -0.701 + 0i

C. Time Domain Simulation Analysis

In this section, the comparison between NSGA-II and NSSCA is presented based on the overshoot obtained from the step responses of the inductor current (d-q), output voltage (d-q), real power, and reactive power for both DG-1 and DG-2, as shown in Figure 4. Considering the d-axis component of the inductor current of DG-1, shown in Figure 4(a), the percentage overshoot in the case of NSSCA is much lower than for NSGA-II. When the d-axis component of the inductor current is considered for DG-2, NSGA-II results in 25% overshoot compared to zero overshoot in the case of NSSCA. For both DG-1 and DG-2, when the step response of the q-axis component of the inductor current is considered, no overshoot is caused by either NSGA-II or NSSCA. Considering the step responses of the output voltages (d-q) of both DG-1 and DG-2, shown in Figures 4(e)-(h), it can be observed that NSSCA causes less overshoot than NSGA-II, with the only exception being the q-axis component of the output voltage of DG-2. For both DGs, the step responses of the real and reactive power indicate that both algorithms cause zero overshoot, as depicted in Figures 4(i)-(l). In the light of the above discussion, it can be concluded that NSSCA provides a better step response than NSGA-II.

Fig. 4.
Step response.

D. Statistical Tests

In order to justify the uniqueness of each algorithm, an independent-samples t-test was performed to compare the equality of means using the SPSS [15] statistical analysis software. The t-test was performed with respect to the total number of iterations required to complete the optimization process, the total execution time of the optimization process, and the total summation of the real parts of the eigenvalues. While running the t-test, SPSS also generated the results of the F-test, which indicate whether the data samples from the two grouping variables (i.e. NSGA-II and NSSCA in this case) possess equal variances or not. For the F-test, the null hypothesis assumes that the data samples from the two groups have equal variances and the alternative hypothesis assumes that they have unequal variances. The null hypothesis can only be rejected when the significance factor (p-value) of the F-test is less than 0.05. From the results summarized in Table II, it can be observed that the p-value of the F-test is greater than 0.05 only in the case of the total summation of the real parts of the eigenvalues. Thus, the data sets of NSGA-II and NSSCA possess unequal variances in terms of total number of iterations and total execution time. The corresponding t-test results are also summarized in Table II. In the case of the t-test, the null hypothesis assumes that the means of the two data sets are equal and the alternative hypothesis assumes that they are not. If the significance factor (p-value) of the t-test is less than 0.05, the null hypothesis can be rejected.
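The "equal variances not assumed" t-test reported by SPSS is Welch's test, and it can be approximately reproduced from the group statistics of Table III alone. The per-group sample size of 30 is an assumption inferred from the ratio of standard deviation to standard error, not a figure stated in the paper:

```python
import math

def welch_t(mean1: float, se1: float, mean2: float, se2: float,
            n1: int, n2: int):
    """Welch's t statistic and degrees of freedom from group summary
    statistics (SPSS's 'equal variances not assumed' case).

    se1, se2 are the standard errors of the means (sd / sqrt(n))."""
    v1, v2 = se1 ** 2, se2 ** 2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Total-iterations row of Table III: NSGA-II (mean 12, SE 1.637),
# NSSCA (mean 2, SE 0.112); 30 runs per algorithm is an assumption.
t_stat, dof = welch_t(12.0, 1.637, 2.0, 0.112, 30, 30)
```

With the rounded table values this gives t ≈ 6.09 and df ≈ 29.3, close to the t = 6.177 and df = 29.273 reported in Table II; the small gap comes from rounding in the published summary statistics.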
From Table II it can be observed that there is a significant difference between the two algorithms with respect to the total number of iterations and the execution time, as in both cases the p-value of the t-test is less than 0.05, whereas with respect to the total summation of eigenvalues the null hypothesis cannot be rejected, as the p-value of the t-test is greater than 0.05. However, from the data presented in Table III, the total summation of the eigenvalues in the case of NSSCA is slightly more negative than for NSGA-II, which indicates that NSSCA ensures slightly better stability of the system. From the above analysis, it can be concluded that each algorithm possesses unique characteristics. Considering the mean values of the total number of iterations, total execution time, and total summation of eigenvalues of both algorithms, NSSCA was found to exhibit significantly better performance.

TABLE II. COMPARISON BETWEEN NSGA-II AND NSSCA BASED ON THE F-TEST AND T-TEST RESULTS

Parameter | F-test: F | F-test: Sig. | Mean diff. | t | df | Sig. (2-tailed)
Total iterations | 60.035 | 0.000 | 10.133 | 6.177 | 29.273 | 0.000
Total summation of eigenvalues (real) | 2.477 | 0.121 | 30941 | 1.356 | 58 | 0.180
Execution time | 38.545 | 0.000 | 3.043 | 2.155 | 33.202 | 0.039

TABLE III. GROUP STATISTICAL DATA OF NSGA-II AND NSSCA

Parameter | Algorithm | Mean | Standard deviation | Standard error mean
Total number of iterations | NSGA-II | 12 | 8.965 | 1.637
Total number of iterations | NSSCA | 2 | 0.615 | 0.112
Total summation of eigenvalues (real) | NSGA-II | -12709275 | 68444 | 12496
Total summation of eigenvalues (real) | NSSCA | -12740216 | 104560 | 19090
Execution time (s) | NSGA-II | 10.660 | 7.468 | 1.363
Execution time (s) | NSSCA | 7.617 | 2.015 | 0.368

V.
CONCLUSION

In this study, the non-dominated sorting technique was merged with the sine cosine algorithm (SCA) to develop a multi-objective optimization algorithm named NSSCA. This algorithm was applied to optimize the controller gains of a two-bus microgrid model. The microgrid model was developed considering a static load on one of the buses and an induction motor as a dynamic load on the other. The performance of NSSCA in optimizing the controller gains was compared with NSGA-II by applying NSGA-II to the same two-bus system. From the comparative study in terms of eigenvalue analysis, time domain analysis, and statistical tests, it was observed that NSSCA performs better in stabilizing the system by optimizing the controller gains. The computations done by NSSCA were significantly faster than NSGA-II in terms of both the required number of iterations and the execution time. Thus, NSSCA can be considered a prospective algorithm for optimizing the controller gains of a microgrid model.

REFERENCES

[1] A. M. Howlader et al., "A minimal order observer based frequency control strategy for an integrated wind-battery-diesel power system", Energy, Vol. 46, No. 1, pp. 168-178, 2012, doi: 10.1016/j.energy.2012.08.039
[2] N. Pogaku, M. Prodanovic, T. C. Green, "Modeling, analysis and testing of autonomous operation of an inverter-based microgrid", IEEE Transactions on Power Electronics, Vol. 22, No. 2, pp. 613-625, 2007, doi: 10.1109/TPEL.2006.890003
[3] L. S. Coelho, V. C. Mariani, "Combining of chaotic differential evolution and quadratic programming for economic dispatch optimization with valve-point effect", IEEE Transactions on Power Systems, Vol. 21, No. 2, pp. 989-996, 2006, doi: 10.1109/TPWRS.2006.873410
[4] R. Eslami, S. H. H. Sadeghi, H. A. Abyaneh, "A probabilistic approach for the evaluation of fault detection schemes in microgrids", Engineering, Technology & Applied Science Research, Vol. 7, No. 5, pp. 1967-1973, 2017
[5] S. Sinha, S. S. Chandel, "Review of recent trends in optimization techniques for solar photovoltaic-wind based hybrid energy systems", Renewable and Sustainable Energy Reviews, Vol. 50, pp. 755-769, 2015, doi: 10.1016/j.rser.2015.05.040
[6] E. E. Miandoab, F. S. Gharehchopogh, "A novel hybrid algorithm for software cost estimation based on cuckoo optimization and k-nearest neighbors algorithms", Engineering, Technology & Applied Science Research, Vol. 6, No. 3, pp. 1018-1022, 2016
[7] K. Prabakar, F. Li, B. Xiao, "Controller hardware-in-loop testbed setup for multi-objective optimization based tuning of inverter controller parameters in a microgrid setting", 2016 Clemson University Power Systems Conference (PSC), Clemson, SC, USA, 2016, doi: 10.1109/PSC.2016.7462824
[8] M. Fadaee, M. A. M. Radzi, "Multi-objective optimization of a stand-alone hybrid renewable energy system by using evolutionary algorithms: a review", Renewable and Sustainable Energy Reviews, Vol. 16, No. 5, pp. 3364-3369, 2012, doi: 10.1016/j.rser.2012.02.071
[9] A. Ibrahim, Y. Jibril, Y. S. Haruna, "Determination of optimal droop controller parameters for an islanded microgrid system using artificial fish swarm algorithm (AFSA)", International Journal of Scientific & Engineering Research, Vol. 8, No. 3, pp. 959-965, 2017
[10] R. Wang et al., "Optimized operation and control of microgrid based on multi-objective genetic algorithm", 2018 International Conference on Power System Technology (POWERCON), Guangzhou, China, 2018, pp. 1539-1544, doi: 10.1109/POWERCON.2018.8601845
[11] S. Mirjalili, "SCA: a sine cosine algorithm for solving optimization problems", Knowledge-Based Systems, Vol. 96, pp. 120-133, 2016, doi: 10.1016/j.knosys.2015.12.022
[12] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II", IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, pp.
182-197, 2002, doi: 10.1109/4235.996017
[13] A. Kahrobaeian, Y. A.-R. I. Mohamed, "Analysis and mitigation of low-frequency instabilities in autonomous medium-voltage converter-based microgrids with dynamic loads", IEEE Transactions on Industrial Electronics, Vol. 61, No. 4, pp. 1643-1658, 2014, doi: 10.1109/TIE.2013.2264790
[14] S. R. Mudaliyar, S. S. Sahoo, "Comparison of different eigenvalue based multi-objective functions for robust design of power system stabilizers", International Journal of Electrical and Electronic Engineering & Telecommunications, Vol. 1, No. 2, 2015
[15] SPSS Inc., SPSS for Windows, Version 16.0, Chicago: SPSS Inc., 2007

Engineering, Technology & Applied Science Research, Vol. 9, No. 2, 2019, pp. 3998-4001

Experimental Study on the Impact of Weather Conditions on Wide Code Division Multiple Access Signals in Nigeria

Ngozi Clara Eli-Chukwu, Department of Electrical & Electronics Engineering, Alex Ekwueme Federal University, Ndufu-Alike, Ebonyi, Nigeria (ngozieli@gmail.com)
G. N. Onoh, Electrical & Electronics Engineering, Enugu State University of Science and Technology, Enugu, Nigeria (onohgn@gmail.com)

Abstract—In cellular network activities, before a site is integrated it is expected that each cell of the site meets the Nigerian Communication Commission (NCC) standard of ≥98% for both service accessibility and call completion rate, which in turn implies ≤2% for both blocked call rate (BCR) and dropped call rate (DCR). It is suggested that weather conditions have a very strong negative effect on the performance of a wideband code division multiple access (WCDMA) network, as they can attenuate the signal or change its polarization. In this paper, we study the impact of weather conditions on a WCDMA network in Nigeria.
To achieve this, network samples (log files) were collected weekly during drive tests in Enugu State, Nigeria, over a period of five years covering both rainy and dry seasons, from which blocked and dropped calls were extracted. Results show that during adverse weather conditions, BCR and DCR rise above 8% and 4% respectively. Although there is only a slight relationship between the seasonal conditions, the dry season has a better blocked call rate of 8.76% compared with 12.89% in the rainy season. Calls tend to drop more during the dry season. From the outcome of the experiment, a model was developed for predicting unknown network call statistics variables.

Keywords—blocked call; cellular; dropped call; dry season; rainy season; weather; WCDMA

I. INTRODUCTION

In cellular networks, optimization is a constant activity. One of the major reasons for constant optimization is the weather. Weather conditions often affect major key performance indicators (KPIs), such as accessibility and retainability, used by operators and subscribers to assess network performance. Under various weather conditions, the WCDMA cellular network is expected to maintain its performance and meet the Nigerian Communication Commission (NCC) threshold of ≤2% for both BCR and DCR. This paper studies and compares the impacts of weather conditions on a WCDMA network.

GSM signal strength varies with respect to weather parameters. Tropospheric delays related to humidity, pressure and temperature affect the strength of the transmitted signal [1]. Distance estimation based on the received signal strength of a wireless radio is susceptible to radio propagation conditions, particularly during periods of precipitation [2]. Movement of the mobile station (MS) affects the signal received from the base transceiver station (BTS), and signal reception degrades when the channel is exposed to rain.
Rain attenuation acts as colored noise affecting signal quality, mainly in 3G network signal transmission; rainfall can be grouped into drizzle, stratiform, moderate, heavy convective rain and storms, each of which has its own effect on signal transmission [3]. Many wireless sensor networks operating outdoors are exposed to changing weather conditions, which may cause severe degradation in system performance; it is therefore essential to explore the factors affecting radio link quality in order to mitigate their impact [4]. There is a relationship between atmospheric conditions and speech quality [5]. Since cellular GSM networks are among the most commonly used communication technologies today, the quality of speech in these networks is a topic of great significance. Many advances and approaches have been introduced in the field of speech quality during the last decade, most of them focusing on IP networks, where speech quality is influenced by every network node through which the communication passes. There is a bond between speech quality in GSM networks and weather conditions, and a stronger bond between rain density and speech quality [6]. The most classical approach to determining rain attenuation at radio-wave frequencies has been to theoretically determine the specific attenuation. At frequencies over 10GHz, rain and precipitation can strongly influence attenuation. The effect of atmospheric attenuation between source and destination over wireless communication is of major concern, and proper site surveys and control methods are required so that performance can be increased [7]. Rainfall is a natural phenomenon whose temporal and geographical distribution varies widely. Wireless communications suffer losses in network quality during rainfall, which can affect regional communication for a while.
Growing concerns about climate change also encourage the study of the effects of natural phenomena like rainfall on other measurable parameters [8]. The link availability of outdoor radio systems is often affected by atmospheric conditions such as rain and snow. The effects of rainfall on wireless transmissions are particularly noticeable at high data rates: as the rate of rainfall increases, more disruption is caused to the outdoor link [9]. Some dependencies between weather conditions and receive level were studied in [10]. There is a relationship between refractivity due to rainfall and the propagation of the GSM radio signal, as greater refractivity means lower signal quality and vice versa [11]. GSM technology is the most widely utilized communication standard, and it is now reaching its bandwidth limitations, especially in big cities and densely populated areas. Under such circumstances, even a minor weather change can be a decisive factor causing changes in the quality of service [12]. Attenuation in tropical regions is underestimated by existing prediction methods based on experimental data from temperate climates [13]. Sunny and rainy weather conditions have first-order main effects on user equipment [14]. The significance here is that various forms of precipitation such as rain, snow, cloud and fog absorb and scatter electromagnetic energy, leading to attenuation of signal strength. Harmattan precipitation intensity may be so great that visibility at ground level is reduced to less than a hundred meters, while inflicting significant attenuation [15]. Certain combinations of the constituents of weather can cause radio signals to be heard hundreds of miles beyond the ordinary range of radio communications.

Corresponding author: Ngozi Clara Eli-Chukwu
Tropical weather has significant effects on radio signals, with the highest correlation values per factor being 0.70756 for solar radiation, 0.6285 for humidity, 0.4344 for wind speed, 0.3850 for rain rate, and 0.3339 for temperature [16]. Terrestrial and earth-space links operating at bands higher than 10GHz inevitably suffer severe signal degradation due to rain fade, particularly in the tropics [17]. This study focuses on ascertaining and comparing the impact of various weather conditions on a WCDMA network.

II. EXPERIMENTAL SETUP

A. Method

The drive test method was used to characterize the network [18]. The experimental setup used the Testing Equipment for Mobile System (TEMS) v13.0 software installed on a laptop, a TEMS mobile phone, a GPS receiver and a power inverter. Voice calls of 120s were made on the MTNN network by the mobile phone. The test covered the Enugu metropolis in Enugu State, Nigeria, for a period of 5 years, covering both rainy and dry seasons.

B. Network Characterization Parameters

Although the experimental technique reports many parameters, the parameters of interest are BCR and DCR. BCR expresses the rate at which the user equipment (UE) is unable to access the network when a call is attempted:

%BCR = (N_bc × 100) / N_sc    (1)

where N_bc is the number of blocked calls and N_sc the number of successful calls. DCR is the rate at which established calls end abruptly without the knowledge of either the call originator or the terminator:

%DCR = (N_dc × 100) / N_sc    (2)

where N_dc is the number of dropped calls and N_sc is the number of successful calls.

C. Network Characterization Results

Table I shows the call statistics and radio environment results for the five-year period.

TABLE I. CALL STATISTICS KPI RESULTS FROM NETWORK CHARACTERIZATION
Year | Week | Rainy BCR | Rainy DCR | Dry/Harmattan BCR | Dry/Harmattan DCR
2015 | Wk1  | 7.14  | 3.66  | 7.43  | 2.06
2015 | Wk2  | 3.19  | 3.12  | 2.1   | 2.27
2015 | Wk3  | 1.01  | 2.6   | 1.01  | 3.07
2015 | Wk4  | 3.04  | 1.96  | 2.23  | 2.98
2015 | Wk5  | 10.25 | 3.9   | 3.75  | 2.43
2015 | Wk6  | 5.54  | 2.88  | 3.29  | 2.7
2015 | Wk7  | 5.1   | 2.11  | 3.02  | 1.97
2015 | Wk8  | 3.87  | 3.04  | 1.98  | 2.45
2016 | Wk9  | 3.47  | 4.1   | 2.56  | 5.7
2016 | Wk10 | 5     | 3.72  | 9.41  | 5.32
2016 | Wk11 | 16.75 | 8.62  | 2.19  | 3.14
2016 | Wk12 | 8.96  | 5.46  | 4.98  | 9.42
2016 | Wk13 | 12.85 | 5.47  | 3.46  | 6.81
2016 | Wk14 | 17.19 | 8.18  | 2.27  | 5.12
2016 | Wk15 | 24.35 | 11.3  | 11.21 | 7.07
2016 | Wk16 | 22.13 | 2.63  | 13.75 | 7.73
2017 | Wk17 | 23.35 | 7.47  | 13.81 | 9.94
2017 | Wk18 | 10.43 | 4.37  | 8.21  | 5.59
2017 | Wk19 | 14.29 | 4.7   | 13    | 4.84
2017 | Wk20 | 21.55 | 4.05  | 31.87 | 8.77
2017 | Wk21 | 29.53 | 3.91  | 3.34  | 2.52
2017 | Wk22 | 30    | 5.84  | 8.98  | 5.83
2017 | Wk23 | 35.22 | 7.5   | 12.39 | 8.38
2017 | Wk24 | 30.15 | 5.7   | 12.44 | 7.37
2016 | Wk25 | 2.46  | 4.65  | 16.53 | 5.45
2016 | Wk26 | 1.69  | 3.36  | 4.76  | 4.27
2016 | Wk27 | 2.91  | 1.72  | 1.27  | 1.28
2016 | Wk28 | 29    | 4     | 5.06  | 2.67
2016 | Wk29 | 24.63 | 5.91  | 31.76 | 7.47
2016 | Wk30 | 23.08 | 4.39  | 31.3  | 4.5
2016 | Wk31 | 6.59  | 7.27  | 7.73  | 8.98
2016 | Wk32 | 3.01  | 2.35  | 4.88  | 2.65
2017 | Wk33 | 5.05  | 5.7   | 6.93  | 5.53
2017 | Wk34 | 10.11 | 3.45  | 9.91  | 3.73
2017 | Wk35 | 16.92 | 5.47  | 9.04  | 12.31
2017 | Wk36 | 17.54 | 5.35  | 10.11 | 5.75
2017 | Wk37 | 7.07  | 6.65  | 12.37 | 6.67
2017 | Wk38 | 5.91  | 4.05  | 8.74  | 5.71
2017 | Wk39 | 8.38  | 5.2   | 4.88  | 3.82
2017 | Wk40 | 7.11  | 6.11  | 6.34  | 4.07

III. RESULTS AND ANALYSIS

A. Blocked Call Rate Analysis

First, a correlation test is performed on the BCR results for both rainy and dry seasons to ascertain the nature of the relationship existing between them. The correlation coefficient r_s ranges from -1 to 1 [19]:

r_s = 1 − (6 Σ d_i²) / (n(n² − 1))    (3)

where d_i is the difference between the ranks of corresponding values and n is the number of paired observations. The result (r_s = 0.453) shows a slight positive relationship between the BCR of the rainy and dry seasons, such that a rise in rainy season BCR accompanies a rise in dry season BCR: μ_BCR(rainy) = 12.8955, μ_BCR(dry) = 8.7573. A statistical hypothesis testing approach is used to ascertain whether there is a difference in population means between the BCR during the rainy and dry seasons.
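The rank-correlation formula in (3) assumes untied data. A pure-Python sketch, run on toy numbers rather than the Table I measurements:

```python
def spearman_no_ties(x, y) -> float:
    """Spearman rank correlation r_s = 1 - 6*sum(d_i^2) / (n*(n^2 - 1)),
    valid when neither sample contains tied values."""
    n = len(x)

    def ranks(values):
        # Rank 1 for the smallest value, rank n for the largest.
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy weekly figures (illustrative, not from Table I).
rs = spearman_no_ties([35, 23, 47, 17, 10], [30, 33, 45, 23, 8])
```

Applied to the 40 weekly BCR pairs of Table I, this procedure is what yields the paper's reported r_s = 0.453.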
Hypothesis:
H0: μ_R − μ_D = 0
H1: μ_R − μ_D ≠ 0   (4)

Decision rule: accept H0 if t_cal ≤ t_(α, n_R + n_D − 2); otherwise reject.

Test statistic:
t_cal = (x̄_R − x̄_D) / sqrt(s_R²/n_R + s_D²/n_D)   (5)

t_cal = 2.796 > t_(α, n_R + n_D − 2) = 1.664   (6)

Conclusion: there is a statistically significant difference between the blocked call rates for the rainy and dry seasons. The graph in Figure 1 shows that the rate of blocked calls during the rainy season is higher than during the dry/harmattan season; this is most obvious in weeks 21, 23 and 25.

Fig. 1. Rainy and dry season BCRs

B. Dropped Call Rate Analysis
Applying (3)-(5) to the DCR we get r_s = 0.524, a slight positive correlation between the DCRs of the rainy and dry seasons: μ_DCR(rainy) = 4.798 and μ_DCR(dry) = 5.2085, with t_cal = −1.129 < t_(α, n_R + n_D − 2) = 1.664. Conclusion: there is no statistically significant difference between the DCRs of the rainy and dry seasons. The graph in Figure 2 shows a statistically equal rate of dropped calls during the rainy and dry/harmattan seasons. Throughout the test period, the average marginal difference (AMD) of the DCR satisfied −1 ≤ AMD_DCR ≤ 1.

Fig. 2. Rainy and dry season DCRs

C. Comparative KPI Analysis
The Friedman test is a non-parametric hypothesis test for repeated-measures analysis of variance, used when the same parameter has been measured under different conditions on the same subjects [20].

Hypothesis:
H0: μ_RBCR = μ_DBCR = μ_RDCR = μ_DDCR
H1: at least two of the means differ   (6)

Decision rule: accept H0 if χ²_cal < χ²_crit; reject otherwise [20].

χ²_cal = (12 / (nk(k + 1))) Σ_j R_j² − 3n(k + 1)   (7)

χ²_cal = 0.05 < χ²_crit

Conclusion: there is no statistically significant difference between the BCR and DCR for the rainy and dry seasons.

D. Multiple Linear Regression
Multiple linear regression is a statistical approach that models the relationship between two or more explanatory variables and a response variable by fitting a linear equation to the observed data [20].
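The two test statistics used in this section, the two-sample statistic of (5) and the Friedman statistic of (7), can be sketched directly from their formulas (a minimal sketch; the no-ties ranking is an assumption):

```python
import math
import statistics

def t_cal(x, y):
    """Two-sample test statistic as in (5): difference of sample means
    over the standard error of the difference."""
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(
        statistics.variance(x) / len(x) + statistics.variance(y) / len(y))

def friedman_chi2(samples):
    """Friedman statistic as in (7): 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1),
    where R_j is the rank total of treatment j over the n subjects (no ties assumed)."""
    n, k = len(samples), len(samples[0])
    rank_totals = [0] * k
    for row in samples:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_totals[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_totals) - 3.0 * n * (k + 1)
```

When every subject ranks the treatments identically, the Friedman statistic reaches its maximum n(k − 1); when the rank totals are equal, it is zero.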
All the values of the independent variables X are associated with the values of the dependent variable Y. The model is expressed as:

Y_i = β_0 + β_1 X_i1 + … + β_k X_ik + ε_i   (8)

Y_i = 2023.437 − 0.084X_1 + 0.756X_2 + 0.415X_3 + 0.555X_4

where Y_i = years, X_1 = rainy BCR, X_2 = dry BCR, X_3 = rainy DCR and X_4 = dry DCR. This linear regression model can be used for prediction.

IV. CONCLUSION
In cellular network activities in Nigeria, before site integration each cell of the site is expected to meet the NCC standard of ≥98% for service accessibility and call completion rate, which in turn implies ≤2% for the BCR and DCR; these thresholds are expected to hold throughout seasonal changes. This paper points out the impact of varying weather conditions on WCDMA network performance. The results showed that during adverse weather conditions the BCR and DCR rise above 8% and 4% respectively. Although the relationship between the seasons is slight, the dry season shows a better mean BCR of 8.76% than the rainy season's 12.89%, while calls tend to drop more during the dry season. A regression model was developed for predicting unknown network call statistics. Optimization actions that protect service accessibility and retainability against the effects of weather conditions should be considered in future studies.

REFERENCES
[1] T. S. Dalip, V. Kumar, "Effect of environmental parameters on GSM and GPS", Indian Journal of Science and Technology, Vol. 7, No. 8, pp. 1183-1188, 2014
[2] S. H. Fang, Y. S. Yang, "The impact of weather condition on radio-based distance estimation: a case study in GSM network with mobile measurement", IEEE Transactions on Vehicular Technology, Vol. 65, No. 8, pp. 6444-6453, 2015
[3] M. S. Yadnav, I. W.
Sudiartha, "Simulation of broadcast level signal mobile station 3G network rain condition", Asia Pacific Conference on Multimedia and Broadcasting, Bali, Indonesia, November 17-19, 2016
[4] J. Luomal, I. Hakala, "Effects of temperature and humidity on radio signal strength in outdoor wireless sensor networks", Computer Science and Information Systems, Vol. 5, pp. 1247-1255, 2015
[5] M. Voznak, J. Rozhon, "Influence of atmospheric parameters on speech quality in GSM/UMTS", International Journal of Mathematical Models and Methods in Applied Sciences, Vol. 6, pp. 575-582, 2012
[6] J. Rozhon, P. Blaha, M. Voznak, J. Skapa, "The weather impact on speech quality in GSM network", in: Computer Networks, Communications in Computer and Information Science, Vol. 291, Springer, 2014
[7] M. C. Kestwal, S. Joshi, L. S. Garia, "Prediction of rain attenuation and impact of rain in wave propagation at microwave frequency for tropical region", International Journal of Microwave Science and Technology, Vol. 2014, Article ID 958498, 2014
[8] S. Sabu, S. Renjmol, D. Abhiram, B. Premlet, "Effect of rainfall on cellular signal strength: a study on the variation of RSSI at user end of smartphone during rainfall", 2017 IEEE Region 10 Symposium, Cochin, India, July 14-16, 2017
[9] B. Fong, P. B. Rapaiic, G. Y. Hong, "Effects of rain attenuation on wireless transmission of frame relay traffic", 8th International Conference on Communication Systems, Singapore, November 28, 2002
[10] J. Skapa, M. Dvorsky, L. Michalek, R. Sebesta, P. Blaha, "K-mean clustering and correlation analysis in recognition of weather impact on radio signal", 35th International Conference on Telecommunications and Signal Processing, Prague, Czech Republic, July 3-4, 2012
[11] L. Ali, I. Alam, A. A. S. Syed, M.
Yaqoob, "Various meteorological parameters effect on GSM radio signal propagation for a moderate area", 2017 International Conference on Frontiers of Information Technology, Islamabad, Pakistan, December 18-20, 2017
[12] P. Blaha, J. Rozhon, M. Voznak, J. Skapa, "Correlation between speech quality and weather", in: Soft Computing Models in Industrial and Environmental Applications, Advances in Intelligent Systems and Computing, Vol. 188, Springer, 2014
[13] L. A. R. da Silva Mello, E. Costa, R. S. L. de Souza, "Rain attenuation measurements at 15 and 18 GHz", Electronics Letters, Vol. 38, No. 4, pp. 197-198, 2002
[14] C. Li, X. Luo, C. Zhang, X. Wang, "Sunny, rainy, and cloudy with a chance of mobile promotion effectiveness", Marketing Science, Vol. 36, No. 5, pp. 762-779, 2017
[15] D. D. Dajah, N. Parfait, "A consideration of propagation loss models for GSM during harmattan in N'Djamena (Chad)", International Journal of Computing and ICT Research, Vol. 4, No. 1, pp. 43-48, 2010
[16] N. H. Sabri, R. Umar, M. M. Shafie, S. N. A. S. Zafar, R. Mat, A. Sabri, Z. A. Ibrahim, "Correlation analysis of tropical rainforest climate effect on radio signal strength at KUSZA Observatory, Terengganu", Advanced Science Letters, Vol. 23, No. 2, pp. 1268-1271, 2015
[17] A. F. Ismail, M. R. Islam, J. Din, A. R. Tharek, N. L. I. Jamaludin, "Investigation of rain fading on a 26 GHz link in tropical climate", 6th International Conference on Telecommunication Systems, Services, and Applications, Bali, Indonesia, October 2011
[18] N. C. Eli-Chukwu, G. N. Onoh, "Improving service accessibility (CSSR) in GSM network using an intelligent agent-based approach", International Journal of Computer Engineering in Research Trends, Vol. 4, No. 11, pp. 478-486, 2017
[19] P. K. Sahu, S. R. Pal, A. K. Das, Estimation and Inferential Statistics, Springer, 2015
[20] M. R. Spiegel, L. J.
Stephens, Theory and Problems of Statistics, McGraw-Hill, 1999

Engineering, Technology & Applied Science Research Vol. 7, No. 2, 2017, 1478-1481 www.etasr.com Shukla et al.: A Modified Approach of OPTICS Algorithm for Data Streams

A Modified Approach of OPTICS Algorithm for Data Streams

M. Shukla, Department of Computer Engineering, Marwadi Education Foundation & R. K. University, Rajkot, India, madhu.ce@gmail.com
Y. P. Kosta, Department of Computer Engineering, Marwadi Education Foundation, Rajkot, India, ypkosta@gmail.com
M. Jayswal, Department of Computer Engineering, Marwadi Education Foundation, Rajkot, India, mjmeghnesh@gmail.com

Abstract—Data are continuously evolving from a huge variety of applications, in huge volume and size. They are fast changing and temporally ordered, and thus data mining has become a field of major interest. A mining technique such as clustering is implemented in order to process data streams and generate sets of similar objects as individual groups. Outliers generated in this process are noisy data points that show abnormal behavior compared to normal data points. In order to obtain clusters of pure quality, outliers should be efficiently discovered and discarded. In this paper, a concept of pruning is applied on the stream OPTICS algorithm along with the identification of real outliers, which reduces memory consumption and increases the speed of identifying potential clusters.

Keywords—two phase; cluster quality; clustering technique; pruning; time and space complexity; threshold value

I. INTRODUCTION
Traditional data mining methods are not that successful in the case of huge data streams, as off-line mining is not applicable. There are some requirements for clustering algorithms: processing of data points must be fast, and identification of outliers must be clear and precise [1]. Data uncertainty is an added issue, as is the need to treat different data types differently.
Arbitrary cluster shapes make it hard to distinguish the accurate shape of each cluster [2]. Many different applications, such as network traffic analysis, sensor networks and internet traffic, produce stream data [20]. Random sampling, sliding windows, histograms, multi-resolution methods, sketches and randomized algorithms are some basic data sampling techniques for mining data streams [13]. Classification of stream data is possible with algorithms such as the Hoeffding tree, the concept-adaptive very fast decision tree (CVFDT), the very fast decision tree (VFDT), and the classifier ensemble approach. Accuracy, efficiency, compactness, separateness, purity, space limitation and cluster validity are important issues for cluster quality. Different types of clustering methods, such as partitioning, hierarchical, model-based, density-based, grid-based, constraint-based and evolutionary methods, are used for clustering stream data. Various algorithms have been developed for clustering data streams. Micro-clustering algorithms for data streams include the DenStream, stream OPTICS and HDDStream algorithms [19]; density grid-based algorithms include the D-Stream, MR-Stream and DenGris algorithms [19]. Table I gives a basic mapping of several existing algorithms with a description of their advantages and disadvantages. Clustering is a key task in data mining, and data streams pose various additional challenges to it, such as one-pass clustering, limited time and limited memory. Along with this, finding clusters with arbitrary shapes is very necessary in data stream applications. The density-based clustering method is of significant importance in clustering data streams, as it has the tendency to discover arbitrary-shape clusters as well as outliers.
In density-based clustering, a cluster is defined as an area of higher density than the remaining data set. Clustering algorithms require very tedious calculations for detecting outliers. Handling noisy data, limited time and memory, evolving data, and high-dimensional data must also be considered. An outlier is defined as a data point which shows abnormal behavior with respect to the system, and it is application dependent. In stream data mining the data points are huge in number, so during clustering a few data points that do not belong to any cluster, or do not take part in clustering due to their distance from the clusters, will be termed outliers; such data points need to be removed. As data generation is continuous and fast, a structure should be established to handle the mining process, and clustering provides a solution to such issues in stream data mining.

II. PROPOSED ARCHITECTURE
In this paper a modification is applied to the stream OPTICS algorithm by applying a pruning method and setting threshold cut-off values for the data dynamically. OPTICS, based on the ordering of points, is an extension of the most basic density algorithm (DBSCAN). Its concept is to continuously grow a given cluster as long as the density in the neighborhood exceeds some threshold: for each data point within a cluster, the neighborhood of a given radius has to contain at least a minimum number of points. One of the important advantages of this method is that it can find clusters of arbitrary shapes and can be used to filter out noise. It considers clusters as dense regions of objects in the data space, separated by low-density regions.
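The density criterion just described, that every point of a cluster must have at least a minimum number of points within a given radius, can be sketched as a naive check (function and parameter names are illustrative; counting the point itself in its own neighborhood, as DBSCAN does, is assumed):

```python
import math

def is_core_point(p, points, eps, min_pts):
    """True when the eps-neighborhood of p (including p itself)
    contains at least min_pts points."""
    return sum(1 for q in points if math.dist(p, q) <= eps) >= min_pts
```

A point in a dense region passes the check, while an isolated point fails it and is a candidate outlier.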
For interactive and automatic cluster analysis, this algorithm determines an augmented cluster ordering. The clusters are ordered to obtain basic clustering information and deliver the intrinsic clustering structure. The core distance of an object is the smallest radius from the core that makes it a core object. The reachability distance of an object with respect to a second object is the greater of the core distance of the second object and the Euclidean distance between the two objects. The algorithm creates an ordering of the objects in a database and stores the suitable reachability distance and core distance for each object. The major drawback of the OPTICS method is that the reachability distance between two objects is undefined if there is no core object. The same basic scheme is applied in the stream OPTICS algorithm, modified here with an iterative treatment of the threshold value and with the concept of pruning, in order to optimize time complexity.

TABLE I. APPROACH, ADVANTAGES, AND DISADVANTAGES OF EXISTING ALGORITHMS

STREAM [1] (partitioning). Advantage: low space and time complexity. Disadvantage: no flexibility of computing the clusters at user-defined time periods.
CluStream [2] (partitioning & hierarchical). Advantage: provides the flexibility of computing the clusters at user-defined time periods. Disadvantage: does not support concept drift or give clusters of arbitrary shapes.
HPStream [3] (partitioning & hierarchical). Advantage: high-dimensional projected clustering of data streams. Disadvantage: the average number of projected dimensions and the number of clusters require detailed domain knowledge.
DenStream [4] (density-based). Advantage: gives arbitrary-shape clusters. Disadvantage: deleting and merging of the micro-clusters does not allow release of any memory space.
D-Stream [5] (density and grid-based). Advantage: supports density decaying and monitors evolving behavior for real-time data streams.
Disadvantage: does not support handling of multi-dimensional data.
E-Stream [6] (hierarchical). Advantage: higher performance than other algorithms. Disadvantage: user complexity of setting a high number of parameters.
DBSCAN [7] (density-based). Advantage: can analyze clusters for large datasets. Disadvantage: inputting the parameter settings is very difficult.
Stream-OPTICS [8] (density-based). Advantage: plotting of cluster structure based upon time. Disadvantage: cluster extraction is not a supervised technique.
MR-Stream [9] (density-based). Advantage: improves the performance of clustering. Disadvantage: lacking with high-dimensional data.
HDD-Stream [10] (density-based). Advantage: high-dimensional data are clustered. Disadvantage: takes more time in searching the neighbor clusters.
DenGris [11] (density-based). Advantage: using a sliding window model, the distribution of the most recent records is captured precisely. Disadvantage: no evaluation to show its effectiveness compared with other state-of-the-art algorithms.
SOMKE [12] (density-based). Advantage: handles non-stationary data efficiently and effectively. Disadvantage: cannot handle unbalanced data.
LeaDen-Stream [13] (density and grid-based). Advantage: supports density decaying and monitors evolving behavior for real-time data streams. Disadvantage: does not support handling of multi-dimensional data.
POD-Clus [14] (model-based). Advantage: supports concept evolution and data fading. Disadvantage: computation and updating of pairwise distances takes a lot of time in data streams.
BIRCH [15] (hierarchical). Advantage: overcomes the inability to undo what was done in a previous step. Disadvantage: does not perform well if the clusters are not spherical.
SPE-Cluster [16] (partitioning-based). Advantage: solves the problem of specifying the number of clusters. Disadvantage: cannot discover arbitrary-shape clusters.
COBWEB [17] (model-based). Advantage: identifies outliers effectively. Disadvantage: does not provide a compact representation.

Data stream input in the form of small chunks is called data chunks. A windowing concept is used because stream data are huge: data are fitted into a window frame and then passed ahead.
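The core distance and reachability distance defined earlier can be sketched directly from their definitions (a naive implementation; excluding the point itself from its neighborhood is an assumption, and names are illustrative):

```python
import math

def core_distance(o, points, eps, min_pts):
    """Distance to the min_pts-th nearest neighbor of o within eps,
    or None (undefined) when o is not a core object."""
    dists = sorted(math.dist(o, q) for q in points if q != o)
    within = [d for d in dists if d <= eps]
    if len(within) < min_pts:
        return None
    return within[min_pts - 1]

def reachability_distance(p, o, points, eps, min_pts):
    """max(core_distance(o), dist(p, o)); undefined when o is not a core object,
    which is the drawback noted in the text."""
    cd = core_distance(o, points, eps, min_pts)
    if cd is None:
        return None
    return max(cd, math.dist(p, o))
```

Returning None for a non-core object mirrors the "undefined reachability distance" drawback mentioned above.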
Different parameters, like window size, threshold value, and radius, are set by the user. After that, a one-way online process is used in which data chunks are fitted into the window and the clustering process is applied. In the online phase, micro-clustering is performed using the basic DBSCAN algorithm: the centroid is selected with its nearest data points, i.e. the cluster mean value of the objects, and the output is the data points arranged in k clusters of n objects, clustered based on distance. These are then input to the offline phase with parameters like core distance, epsilon, a fixed value of min points, and the generating distance. The proposed scheme is depicted in Figure 1. In the offline phase, the stream OPTICS algorithm is used for clustering: macro-clustering takes place and the data points form clusters of good quality. Two-phase work is required due to the nature of data streams. Points which have not yet been part of any cluster are distinguished by giving them a weight; on every new iteration this value is incremented, to make sure the point really belongs to the real outliers. By applying a threshold value, points termed real outliers are detected. The nodes or data points which are outliers are then pruned off, which reduces the memory consumption and the time taken for the generation of the potential clusters, and improves the purity of the potential clusters. For each threshold value set, the whole dataset is checked, with the prime motive of maintaining the quality of the clusters. The algorithm is broken down into steps in Figure 2.

III. RESULTS & DISCUSSION
Various parameters, like cluster purity, number of clusters, SSQ, threshold, memory and time consumption, are evaluated.
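The outlier-weighting step described above can be sketched as follows: each point that stays unclustered gains weight on every iteration, and once its weight reaches the threshold it is pruned as a real outlier. The data structure and threshold handling below are assumptions for illustration, not the authors' exact code:

```python
def update_and_prune(weights, unclustered_ids, threshold):
    """Increment the weight of every point left out of the clusters in this
    iteration, then prune the points whose weight reached the threshold.
    weights: dict point_id -> weight. Returns the set of pruned point ids."""
    for pid in unclustered_ids:
        weights[pid] = weights.get(pid, 0) + 1
    pruned = {pid for pid, w in weights.items() if w >= threshold}
    for pid in pruned:
        del weights[pid]   # free the memory held by confirmed outliers
    return pruned
```

Calling this once per window iteration keeps only points that repeatedly fail to join a cluster, which is what reduces memory use and speeds up the generation of potential clusters.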
These evaluated parameters are then used for comparison and performance evaluation. NetBeans was used for the simulation studies in our research work. The forest cover type dataset and the sensor dataset are used for evaluation, and the algorithms are computed for 50000 and 100000 data records respectively. Different threshold values were tested and an optimum value was chosen in each case (3 for the forest cover type dataset and 14 for the sensor dataset). Tables II to V sum up the results.

Fig. 1. The proposed scheme
Fig. 2. The proposed algorithm

TABLE II. OVERALL RESULTS FOR THE FOREST COVER DATASET AT VARIOUS THRESHOLD VALUES, BEFORE PRUNING

Threshold | Max time (sec) | Max memory (MB) | Clusters | SSE | Noise points | Purity
1 | 44427 | 171.46 | 22 | 1.31 | 19 | 93.02
2 | 44327 | 163.09 | 23 | 1.21 | 19 | 92.13
3 | 44162 | 138.99 | 21 | 1.46 | 12 | 91.29
4 | 43966 | 128.56 | 25 | 1.35 | 15 | 91.35
5 | 44636 | 137.21 | 24 | 1.42 | 10 | 93.82
6 | 44172 | 138.18 | 21 | 1.38 | 17 | 92.95

TABLE III. OVERALL RESULTS FOR THE FOREST COVER DATASET AT VARIOUS THRESHOLD VALUES, AFTER PRUNING

Threshold | Max time (sec) | Max memory (MB) | Clusters | SSE | Noise points | Purity
1 | 43783 | 145.46 | 16 | 1.15 | 23 | 95.53
2 | 43643 | 132.09 | 16 | 1.02 | 22 | 95.68
3 | 43580 | 110.99 | 13 | 1.02 | 17 | 95.87
4 | 43480 | 100.56 | 14 | 1.00 | 21 | 95.17
5 | 43939 | 109.21 | 14 | 1.06 | 16 | 95.42
6 | 43172 | 111.18 | 9  | 1.18 | 20 | 95.34

TABLE IV. OVERALL RESULTS FOR THE SENSOR DATASET AT VARIOUS THRESHOLD VALUES, BEFORE PRUNING

Threshold | Max time (sec) | Max memory (MB) | Clusters | SSE | Noise points | Purity
2  | 15499 | 53.06 | 21 | 1.27 | 17 | 92.33
10 | 15370 | 57.06 | 23 | 1.47 | 20 | 93.80
13 | 15434 | 52.06 | 25 | 1.33 | 13 | 91.08
14 | 15498 | 58.06 | 24 | 1.09 | 16 | 93.61
15 | 15156 | 54.06 | 25 | 1.29 | 12 | 93.33
2  | 15499 | 53.06 | 21 | 1.27 | 17 | 92.33

TABLE V.
OVERALL RESULTS FOR THE SENSOR DATASET AT VARIOUS THRESHOLD VALUES, AFTER PRUNING

Threshold | Max time (sec) | Max memory (MB) | Clusters | SSE | Noise points | Purity
2  | 14938 | 27.06 | 17 | 1.02 | 19 | 95.74
10 | 14797 | 26.21 | 9  | 1.09 | 24 | 94.94
13 | 14704 | 27.12 | 7  | 1.07 | 19 | 95.52
14 | 14938 | 27.06 | 8  | 1.02 | 19 | 96.29
15 | 14578 | 27.06 | 6  | 1.08 | 16 | 95.31
16 | 14704 | 26.08 | 6  | 1.07 | 15 | 94.96

IV. CONCLUSION
Handling data streams shows increased complexity due to their constant, huge and potentially infinite nature. Working with data streams challenges memory, space, time, the handling of changes, speed, and the multiple sources of data generation. Thus, the algorithms used for offline data mining and management may prove insufficient in such applications, and variations may be required. Such a variation of the OPTICS algorithm is proposed in this paper. Simulations were performed, the results are discussed, and an overall improvement is documented.

REFERENCES
[1] L. O'Callaghan, N. Mishra, A. Meyerson, S. Guha, R. Motwani, "Streaming-data algorithms for high-quality clustering", 18th International Conference on Data Engineering, pp. 685-694, February 26-March 1, 2002
[2] C. C. Aggarwal, J. Han, J. Wang, P. S. Yu, "A framework for clustering evolving data streams", International Conference on Very Large Data Bases, Vol. 29, pp. 81-92, 2003
[3] C. C. Aggarwal, J. Han, J. Wang, P. S. Yu, "A framework for projected clustering of high dimensional data streams", 30th International Conference on Very Large Data Bases, Vol. 30, pp. 852-863, 2004
[4] F. Cao, M. Ester, W. Qian, A. Zhou, "Density-based clustering over an evolving data stream with noise", SIAM International Conference on Data Mining (SDM), Vol. 6, pp. 328-339, 2006
[5] L. Li-Xiong, H. Hai, G.
Yun-Fei, C. Fu-Cai, "RDenStream: a clustering algorithm over an evolving data stream", International Conference on Information Engineering and Computer Science, pp. 1-4, December 19-20, 2009
[6] K. Udommanetanakit, T. Rakthanmanon, K. Waiyamai, "E-Stream: evolution-based technique for stream clustering", Lecture Notes in Computer Science, Vol. 4632, pp. 606-616, 2007
[7] C. Dharni, M. Bansal, "An improvement of DBSCAN algorithm to analyze cluster for large datasets", IEEE International Conference on MOOC Innovation and Technology in Education (MITE), pp. 42-46, 2013
[8] M. Ankerst, M. M. Breunig, H. Kriegel, J. Sander, "OPTICS: ordering points to identify the clustering structure", ACM SIGMOD, Vol. 28, No. 2, pp. 49-60, 1999
[9] L. Wan, W. K. Ng, X. H. Dang, P. S. Yu, K. Zhang, "Density-based clustering of data streams at multiple resolutions", ACM Transactions on Knowledge Discovery from Data (TKDD), Vol. 3, No. 3, pp. 1-28, 2009
[10] I. Ntoutsi, A. Zimek, T. Palpanas, P. Kröger, H. Kriegel, "Density-based projected clustering over high dimensional data streams", SIAM International Conference on Data Mining, pp. 987-998, 2012
[11] A. Amini, T. Y. Wah, "DENGRIS-Stream: a density-grid based clustering algorithm for evolving data streams over sliding window", International Conference on Data Mining and Computer Engineering, pp. 206-211, 2012
[12] Y. Cao, H. He, H. Man, "SOMKE: kernel density estimation over data streams by sequences of self-organizing maps", IEEE Transactions on Neural Networks and Learning Systems, Vol. 23, No. 8, pp. 1254-1268, 2012
[13] A. Amini, T. Y. Wah, "LeaDen-Stream: a leader density-based clustering algorithm over evolving data stream", Journal of Computer and Communications, Vol. 1, No. 5, pp. 26-31, 2013
[14] P. P. Rodrigues, J. Gama, J. P. Pedroso, "ODAC: hierarchical clustering of time series data streams", IEEE Transactions on Knowledge and Data Engineering, Vol. 20, No. 5, pp.
615-627, 2008
[15] T. Zhang, R. Ramakrishnan, M. Livny, "BIRCH: an efficient data clustering method for very large databases", ACM SIGMOD Record, Vol. 25, No. 2, pp. 103-114, 1996
[16] E. Keogh, S. Chu, D. Hart, M. Pazzani, "An online algorithm for segmenting time series", International Conference on Data Mining, pp. 289-296, 2001
[17] Kavita, P. Bedi, "Clustering of categorized text data using COBWEB algorithm", International Journal of Computer Science and Information Technology Research, Vol. 3, No. 3, pp. 249-254, 2015
[18] M. Khalilian, N. Mustapha, "Data stream clustering: challenges and issues", International Multi-Conference of Engineers and Computer Scientists, Vol. 1, Hong Kong, March 17-19, 2010
[19] A. Amini, T. Y. Wah, H. Saboohi, "On density-based data streams clustering algorithms: a survey", Journal of Computer Science and Technology, Vol. 29, No. 1, pp. 116-141, 2014
[20] M. Shukla, Y. P. Kosta, P. Chauhan, "Analysis and evaluation of outlier detection algorithms in data streams", IEEE International Conference on Computer, Communication and Control (IC4), pp. 1-8, September 10-12, 2015
[21] P. Chauhan, M. Shukla, "A review on outlier detection techniques on data stream by using different approaches of k-means algorithm", IEEE International Conference on Advances in Computer Engineering and Applications (ICACEA), pp. 580-585, 2015
[22] J. Han, M. Kamber, Data Mining: Concepts and Techniques, Second Edition, Elsevier, 2001

Engineering, Technology & Applied Science Research Vol. 8, No. 2, 2018, 2834-2838 www.etasr.com Ibrahim: Accuracy of Bit Error Probability for W-CDMA System Using Code Tree

Accuracy of Bit Error Probability for W-CDMA System Using Code Tree

Anwar Hassan Ibrahim, Department of Electrical Engineering, Qassim University, College of Engineering, Buraydah, Saudi Arabia, dr.anwar@qec.edu.sa

Abstract—W-CDMA is the radio access technology used in 3G cellular systems.
A code tree allocation scheme is one of the most explored channelization techniques, used to improve system performance and capacity through adjustable data rates. This work investigates the accuracy of the bit error probability of a W-CDMA system using code tree orthogonal variable spreading factor (OVSF) codes, compared to pseudo-noise (PN) codes, under various noise conditions such as additive white Gaussian noise (AWGN) and random noise (RN). Results are obtained both theoretically and by computer simulation; the simulation covers a simple model representation of the W-CDMA system. It was concluded that the system performs better using OVSF codes than PN codes under the different noisy channels.

Keywords—W-CDMA; OVSF code; PN code; AWGN; RN; bit error probability (BEP)

I. INTRODUCTION
Data spreading in W-CDMA systems is done by the application of a signal-independent code. The code choice affects system performance: the longer the code, the higher the processing gain, which enables the system to admit more users; on the other hand, a larger processing gain implies the use of more bandwidth in W-CDMA [1]. A good W-CDMA planning model for communications development reduces the complexity of solutions by integrating data and web services into the same channel using code division multiple access (CDMA). 3G wireless standards use W-CDMA to meet high data rate and variable rate requirements. The proposed scenario initially attempts to assign request codes to the system, and then tries to allocate them to user access. In order to achieve high bit error rate accuracy [2, 3], different data must be used by different user connections with variable operation rates and OVSF codes. In terms of implementation, it is better to take different spreading factors from the same branch of the tree to avoid chip-level buffering [4]. The more the channel is used, the more noise is produced [5].
Furthermore, the scenario studies the efficiency of the OVSF code tree in a W-CDMA system with two different data streams under noise, compared to a PN code. W-CDMA is a flexible system, supporting variable data rates and services. The flexibility of using OVSF as the channelization code increases the ability to support variable data rates for each transceiver and simplifies the hardware [6].

II. OVSF CODE TREE

A. Orthogonality
The tree-structured code method assigns codes to users with different data rates orthogonally. OVSF codes have different lengths on different levels and different spreading factors, related to the information rate multiplied over the entire code word [7]. Two codes are orthogonal when their inner product is zero. For example, the codes (1, 1, 1, 1) and (1, 1, -1, -1) are orthogonal, since:

(1×1) + (1×1) + (1×(-1)) + (1×(-1)) = 0

B. Code Tree Definition
OVSF codes were introduced for 3G communication systems. Spectrum spreading is attained by mapping each bit (1 or -1) onto an allocated code sequence. Figure 1 shows the tree structure [8]. OVSF codes support data rates that are powers of two times the lowest basic rate: the potential rates are Rb, 2Rb, 4Rb, 8Rb, etc., with Rb meaning "bit rate", and the gap becomes greater as the rate grows. In particular cases a user may be over-served by a greater rate [6].

Fig. 1. OVSF code structure:
C1(1) = [1]
C2(1) = [C1(1), C1(1)] = [1, 1]
C2(2) = [C1(1), -C1(1)] = [1, -1]
C4(1) = [C2(1), C2(1)] = [1, 1, 1, 1]
C4(2) = [C2(1), -C2(1)] = [1, 1, -1, -1]
C4(3) = [C2(2), C2(2)] = [1, -1, 1, -1]
C4(4) = [C2(2), -C2(2)] = [1, -1, -1, 1]
...
CN(2k-1) = [CN/2(k), CN/2(k)]
CN(2k) = [CN/2(k), -CN/2(k)]
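The recursive construction in Figure 1, where each code spawns a repeated copy and a negated copy, can be sketched as follows (function names are illustrative):

```python
def ovsf_codes(sf):
    """All OVSF codes of spreading factor sf (a power of two), built by
    C_2N(2k-1) = [C_N(k), C_N(k)] and C_2N(2k) = [C_N(k), -C_N(k)]."""
    codes = [[1]]
    while len(codes[0]) < sf:
        nxt = []
        for c in codes:
            nxt.append(c + c)                # repeated copy
            nxt.append(c + [-x for x in c])  # negated copy
        codes = nxt
    return codes

def inner_product(a, b):
    """Zero inner product means the two codes are orthogonal."""
    return sum(x * y for x, y in zip(a, b))
```

For sf = 4 this reproduces C4(1) through C4(4) from Figure 1, and every pair of distinct codes at the same level has inner product zero, including the worked example (1, 1, 1, 1) · (1, 1, -1, -1) = 0.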
C. Code Tree Algorithm
Different spreading factors mean different code lengths. The general idea is to enable the merging of different messages with alternative spreading factors while retaining their orthogonality; the code dimensions need to be orthogonal. The analysis below shows the workability of the algorithm. The root of the tree is (1). For each level there are two possible sub-levels, represented as top and bottom sub-divisions. The top sub-division is constructed by repeating the root of the sub-division twice, so the top sub-division of (1) is (1, 1), while the bottom sub-division is assembled by appending the inverse of (1), giving (1, -1). At each level, all the codes are the rows of a Hadamard matrix with the elements mapped to polar form. The type of code tree depends on the code chosen through the design in Figure 1. If SF = 8, the other levels cannot be used [9].

III. PN-CODE ALGORITHM
A pseudo-noise (PN) code used for direct-sequence spreading consists of N_DS elements, named chips. These chips take 2 values: either -1/1 or 0/1; the bit representations are used unless specified otherwise. Each data symbol is combined with a single complete PN code, so the direct-sequence length is identical to the code length. PN sequences are periodic structures with noise-like behavior [10]. Such a code is generated using shift registers, modulo-2 adders implemented by XOR gates, and feedback loops. Figure 2 shows the scenario for generating a PN code.

Fig. 2. Shift register for PN code generation

The maximum length of a PN sequence is defined by the size of the register and the structure of the feedback system. An n-bit register can produce up to 2^n different combinations of zeros and ones.
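The shift-register generation just described can be sketched as a Fibonacci LFSR producing one full period of 2^n − 1 chips (the tap positions below are an illustrative known maximal-length choice for n = 3, not taken from the paper):

```python
def lfsr_sequence(taps, seed):
    """Generate one period (2^n - 1 chips) of a Fibonacci LFSR.
    taps: 1-based register stages XORed into the feedback; seed: initial bits."""
    state = list(seed)
    n = len(state)
    out = []
    for _ in range(2 ** n - 1):      # maximum-length period, excluding the all-zero state
        out.append(state[-1])        # output taken from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]       # modulo-2 adder (XOR gate)
        state = [fb] + state[:-1]    # shift right; feedback enters stage 1
    return out
```

With taps (3, 2) and seed (1, 0, 0), one period of 7 chips is produced; as expected of an m-sequence, it is balanced with 2^(n-1) = 4 ones and 3 zeros.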
Since the feedback network performs only linear operations, if all the flip-flop inputs are zero, the output of the feedback network is also zero. Consequently, the all-zero state would yield zero output for all subsequent clock cycles, so it is excluded from the sequence. Hence, the maximum length of any PN sequence is 2^n - 1, and sequences of that length are called maximum-length sequences or m-sequences; these are the preferred choice. Feedback configurations for m-sequences are tabulated and are available through MATLAB functions. The signal is multiplied by a PN code: a sequence of chips valued -1 and 1 (polar sequence) or 0 and 1 (non-polar sequence) with noise-like properties. To generate the PN code properly, at least one shift register must be clocked sufficiently fast. For a shift register of length n, the period NDS of the code is:

NDS = 2^n - 1 (1)

This code determines the frequency spectrum that the produced signal will occupy, and it regulates and controls the spreading arrangement of the system.

IV. PROPOSED SYSTEM
The system model is illustrated in Figure 3.

Fig. 3. Illustration of the proposed system model

This section details the simulation methodology developed to evaluate the performance of the OVSF code tree. The simulation results demonstrate the code performance for several configurations in different channel environments. The system consists of two channels whose users feed their data into a diversity receiver [11]. In the receiver, the transmitted data are recovered and checked for errors [12]. The simulation investigates a scenario with an undesirable channel condition, aiming to examine the resilience of the OVSF code tree to this condition and to calculate the probability of error.

V.
BIT ERROR PROBABILITY
An important application of spread-spectrum systems is multiple-access infrastructure, in which several users must access the same channel [13]. The probability-of-error performance at the receiver is presented here. The antenna element separation and the operating-environment parameters (such as random noise (RN) and additive white Gaussian noise (AWGN) generation) can be assessed directly over the overall space-path diversity span. Each spreading waveform is assigned to one equivalent bit vector. Consequently, each independent message bit to be transferred on the n-th signaling interval is allocated to a defined transmit antenna. The data of the k-th user transmitted by transmit antenna k on the n-th signaling interval is multiplied by the spreading waveform wk(t), and the composite channel gain between transmit antenna k and receive antenna j on the n-th signaling interval is denoted aj,k(n). Different users are allocated distinct sets of spreading waveforms [14]:

wk(t) = ck(t - nT), k = 1, 2, ..., K (2)

with cross-correlation cmx = 0 when m differs from x. How these decision variables are combined depends on the spreading technique used by the transmitter. With uj,k(n) the k-th matched-filter output on receive antenna j and signaling interval n, the decision variables are given by:

uj,k(n) = SUM_x sqrt(Eb) bx(n) aj,x(n) rk,x + nj,k(n) (3)

where the k = x term carries the desired data and the k different from x terms constitute the multiple-access interference. In addition, we make the usual assumption that power control ensures all users' transmissions reach the user of interest with the same power. Under these conditions, it can be shown that the receiver bit error probability can be approximated as [15]:
Pe ~ Q(sqrt(SNR)) (4)

which in the single-user case reduces to

Pe = Q(sqrt(2Eb/No)) (5)

and, for the multiuser case,

SNR = [(K - 1)/(3N) + No/(2Eb)]^(-1) (6)

in which K is the number of users and N is the number of chips per bit (the processing gain). The main aim of this paper is to assess the performance of the OVSF code tree and the PN code under these channel environments. Assuming the same power control is used by all users, the receiver bit error probability can be calculated approximately in a simple system. The processing gain characterizes the number of chips contained in one data bit; a higher processing gain requires a larger spreading factor. The orthogonal variable spreading factor code tree does not have the best spreading behavior, and its spreading depends on the user data rate. The PN sequences, by contrast, need a larger spreading factor, since their power spectral density is concentrated on a small number of discrete frequencies.

VI. RESULTS
Figures 4-5 show the accuracy of the bit error probability. Figure 4 shows Pe versus Eb/No for a constant processing gain and a varying number of users. An error floor is approached in every case shown: for example, if 8 users are active and a Pe of 10^-2 is desired, it cannot be achieved no matter what Eb/No is used. This is one of the drawbacks of W-CDMA using either the code tree or the PN code. The average error in a W-CDMA system using the OVSF code tree is somewhat lower than with the PN code. It is also found that the more users and the larger the processing gain, the more accurate the approximation. A further advantage of the OVSF code tree is that its variable data rates allow a user to enter the system as if the channel were in good condition. Figure 4 shows Pe versus Eb/No for the W-CDMA system using the PN code, with 4 users and a processing gain of 4.
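The approximation in (4)-(6) is straightforward to evaluate numerically. A sketch (the function names are ours), using Q(x) = erfc(x / sqrt(2)) / 2:

```python
# Numerical sketch of the Gaussian multiple-access approximation (4)-(6):
# Pe ~ Q(sqrt(SNR)), SNR = [(K-1)/(3N) + No/(2Eb)]^-1,
# K users, N chips per bit (processing gain).

import math

def q_function(x):
    """Gaussian tail probability Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bep_approx(k_users, n_chips, ebno_db):
    """Approximate receiver bit error probability for K users, gain N, Eb/No in dB."""
    ebno = 10 ** (ebno_db / 10.0)
    snr = 1.0 / ((k_users - 1) / (3.0 * n_chips) + 1.0 / (2.0 * ebno))
    return q_function(math.sqrt(snr))
```

With K = 8 and N = 4, the interference term (K - 1)/(3N) = 7/12 bounds the SNR below 12/7 no matter how large Eb/No gets, so Pe cannot fall to 10^-2, consistent with the observation about 8 active users above.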
Figure 5 shows Pe versus Eb/No for a W-CDMA system using the orthogonal variable spreading factor code under distributed users (the number of users changes while the processing gain is constant).

Fig. 4. BEP for the system using the PN code
Fig. 5. User distribution for a given BEP

Figures 6-9 give a detailed comparison between the original transmitted and received data for both types of noise (AWGN and RN) for OVSF and PN. As a result of adding AWGN to the W-CDMA system with the OVSF code, a 25% error in the transmitted data is detected at the receiver, as shown in Figure 6. In Figure 7, RN is added to the W-CDMA system with the OVSF code tree and an 18.75% error occurs. Figure 8 shows a comparison between the original transmitted and received data when adding RN to the W-CDMA system with the PN code; in that case a 31.25% error in the transmitted data was detected at the receiver. Adding AWGN to the W-CDMA system with the PN code, a 37.5% error in the transmitted data was detected at the receiver, as shown in Figure 9. Table I summarizes the data errors occurring at the receiver without filtering, for the different codes and noise types.

TABLE I. ERROR OCCURRENCE AT THE RECEIVER
Type of code | Type of noise | Error
OVSF | AWGN | 25%
OVSF | RN | 18.75%
PN | AWGN | 37.5%
PN | RN | 31.25%

Fig. 6. Data transmitted vs. data received for OVSF and AWGN
Fig. 7. Data transmitted vs. data received for OVSF and RN
Fig. 8. Data transmitted vs. data received for PN and RN
Fig. 9. Data transmitted vs. data received for PN and AWGN

VII. CONCLUSION
The results show that with the orthogonal variable spreading factor, the errors obtained by applying AWGN and RN were 25% and 18.75% respectively, indicating that the RN effect is smaller than the AWGN effect.
With the PN code, the errors obtained by applying AWGN and RN were 37.5% and 31.25% respectively, confirming that the RN effect is smaller than the AWGN effect in this system as well. In conclusion, the lowest noise-channel effect in the system is achieved by applying RN with the OVSF tree. The low cross-correlation values between the codes ease the detection of a data message.

REFERENCES
[1] P. Singh, G. Soni, "Performance analysis of WCDMA link using QPSK & QAM modulation schemes based on vector signal transceiver 5644R & LabVIEW 2012", International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), Kumaracoil, India, December 16-17, 2016
[2] M. Matthe, N. Michailow, I. Gaspar, G. Fettweis, "Influence of pulse shaping on bit error rate performance and out of band radiation of generalized frequency division multiplexing", International Conference on Communications Workshops, Sydney, NSW, Australia, pp. 43-48, June 10-14, 2014
[3] N. Michailow, S. Krone, M. Lentmaier, G. Fettweis, "Bit error rate performance of generalized frequency division multiplexing", IEEE Vehicular Technology Conference (VTC Fall), Quebec, Canada, September 3-6, 2012
[4] P. Bahl, A. Farago, I. Chlamtac, "Resource assignment for integrated service in wireless ATM networks", International Journal of Communication Systems, Vol. 11, No. 1, pp. 29-41, 1998
[5] M. F. Alsharekh, M. Islam, A. H. Ibrahim, R. Khan, S. Habib, "Bit error rate performance of RFID signal in SDR communication", Journal of Applied Sciences, Vol. 16, No. 4, pp. 161-166, 2016
[6] T. Minn, K.-Y. Siu, "Dynamic assignment of orthogonal variable spreading factor codes in W-CDMA", IEEE Journal on Selected Areas in Communications, Vol. 18, No. 8, pp. 1429-1440, 2000
[7] N. Cardona, A. Navarro, "W-CDMA capacity analysis using GIS based planning tools and Matlab simulation", First International Conference on (Conf. Publ. No.
471) 3G Mobile Communication Technologies, London, UK, March 27-29, pp. 230-234, 2000
[8] C. W. Wu, R. Q. Huang, "OVSF code management schemes on ad hoc networks", IEEE International Conference on Communications, Paris, France, Vol. 7, pp. 4152-4156, June 20-24, 2004
[9] R. G. Winch, Telecommunication Transmission Systems: Microwave, Fiber Optic, Mobile Cellular Radio, Data, and Digital Multiplexing, McGraw-Hill, Inc., NY, USA, 1993
[10] F. Liu, A.-J. Chen, C.-B. Xiang, H.-J. Song, "The intelligent monitoring method based on spectral correlation pseudo WCDMA", 4th International Conference on Computer Science and Network Technology, Harbin, China, pp. 1294-1298, December 19-20, 2015
[11] R. E. Ziemer, W. H. Tranter, Principles of Communications: Systems, Modulation and Noise, John Wiley & Sons, 2014
[12] C. D'Amours, A. O. Dahmane, "Bit error rate performance of a MIMO-CDMA system employing parity-bit-selected spreading in frequency nonselective Rayleigh fading", International Journal of Antennas and Propagation, Vol. 2011, Article ID 516929, 2011
[13] M. Shen, G. Li, H. Liu, "Effect of traffic channel configuration on the orthogonal frequency division multiple access downlink performance", IEEE Transactions on Wireless Communications, Vol. 4, No. 4, pp. 1901-1913, 2005
[14] Z. Deng, Y. Liu, J. Liu, X. Chen, A. Argyriou, Z. Xu, S. Ci, "Cross-network and cross-layer optimized video streaming over LTE and WCDMA downlink", IEEE Symposium on Computers and Communication (ISCC), Messina, Italy, pp. 868-873, June 27-30, 2016
[15] Y.-C. Tseng, C.-M. Chao, "Code placement strategies for wideband CDMA OVSF code tree management", IEEE Transactions on Mobile Computing, Vol. 1, No. 4, pp.
293-302, 2002

Engineering, Technology & Applied Science Research, Vol. 9, No. 5, 2019, 4724-4728, www.etasr.com. Al-Shammari & Darwish: In-Depth Sampling Study of Characteristics of Vehicle Crashes in Saudi Arabia

In-Depth Sampling Study of Characteristics of Vehicle Crashes in Saudi Arabia

Naif Khalaf Al-Shammari, Mechanical Engineering Department, University of Hail, Hail, Saudi Arabia, naif.alshammari@uoh.edu.sa
Saied Mohamed Hassan Darwish, Industrial Engineering Department, King Saud University, Riyadh, Saudi Arabia, darwish@ksu.edu.sa

Abstract—It is imperative for any traffic-safety-enhancing effort to collate and analyze detailed data about crashes. This article describes a study that investigated all aspects related to motor vehicle crashes resulting in human injuries or deaths in Riyadh. The database consisted of 295 collisions involving 331 vehicles, 596 casualties (car passengers and pedestrians) and 2,454 injuries with Abbreviated Injury Scale (AIS) >= 1. Results show that only 15.1% of all vehicle occupants were wearing seatbelts at the time of the collision, which is reflected in most injuries occurring to the upper parts of the body and the spine, and in a high incidence rate of 0.22 fatalities per crash. The average age of the victims was 33 years, with three quarters of them being males. Results also show that human actions, such as reckless driving, speeding and sudden lane deviations, were the causes of most collisions. It is concluded that, in order to improve traffic safety conditions in Riyadh and in the whole country, a change in the driving culture of all road users is needed. This can only come with improved awareness of the risks involved among road users, better law enforcement and other engineering and hi-tech countermeasures such as smart red lights.

Keywords—motor vehicle crashes; Riyadh; casualties; spinal injuries; driving behavior; injury prevention

I.
INTRODUCTION AND BACKGROUND
The Kingdom of Saudi Arabia (KSA) is located in southwestern Asia and is the largest country in the Middle East. It occupies about four-fifths of the Arabian Peninsula, with a total area of 2,250,000 km2 [1] and an estimated population of 34.14 million by the end of 2019 [2]. The oil boom experienced in Saudi Arabia over the past seventy years resulted in a sharp increase in living standards and massive urban development. This inevitably led to a significant increase in the asphalted road network, from a few thousand kilometres in the mid-twentieth century to currently more than 71,500 km. The number of motor vehicles has also increased, from 144,000 cars in 1970 to almost 19 million, with approximately 800,000 vehicles being imported every year [2, 4]. This increase in living standards has also resulted in a large increase in the number of traffic crashes and, subsequently, in a tragic jump in the number of deaths due to these crashes. It is estimated that road crash deaths account for 6.53% of the total deaths in Saudi Arabia [3]. Also, about one quarter of all cases transported by Saudi Red Crescent Society (SRCS) ambulances over the last 20 years were due to road crashes [8]. The official Saudi annual average mortality rate from road crashes for the period 2010-2018 is estimated at 19.25 per 100,000 population [2, 7], which is among the highest in the world [5]. Moreover, Saudi Arabia's fatality rate ranked by motorization level (vehicles/population) is found to be 11.23, approximately three times higher than the average of developed countries [9, 20]. Furthermore, an estimated loss of between 2.2% and 4.7% of the national income due to traffic crashes has been suggested for Saudi Arabia [10, 11].
This deteriorating situation poses a serious threat and has a serious negative effect on economic growth, especially as the majority of losses are among the younger generation, given that the average age of drivers in Saudi Arabia is 26.7 years [12, 13]. The problem of road crashes has attracted significant research in the last few decades in most developed countries, resulting in a reduction of the size of the problem [21-23]. However, one of the most serious problems facing traffic safety improvement efforts in most developing countries, such as Saudi Arabia, is the lack of epidemiological studies that analyze the extent and gravity of the problem [5]. Conducting in-depth prospective or retrospective studies involving extensive data collection is also essential for developing safety regulations and programs that aim to reduce road crashes and minimize the resulting human and economic losses [13, 14, 20]. This study comes to fill some of these gaps: it aims to perform an in-depth retrospective analysis of crashes that led to human losses or injuries in Riyadh over a period of 18 months.

II. METHODOLOGY
This study was carried out in the city of Riyadh, the capital of Saudi Arabia, and the surrounding region (known as the Greater Riyadh area). Riyadh is a large metropolis with an area of 1,782 km2 and a population of approximately 8 million people, around a quarter of the total population of Saudi Arabia [2]. Over the last 10 years, the city of Riyadh recorded an average annual population growth rate of 3.53%. Riyadh has a modern road network system, but its traffic management system is not fully developed. Out of 957,125 new vehicles registered in Saudi Arabia in 2016, 129,513 were in the Greater Riyadh area. It is estimated that there are about seven million trips daily in the city, over 85 percent of them by private car. Bus trips represent only two percent of the total, whilst goods movements make up the remainder [4, 16]. The study
population consisted of motor vehicle collisions in the Greater Riyadh area that resulted in an injury of AIS >= 1 to one or more vehicle occupant(s) and/or pedestrian(s) between September 2017 and February 2019 (18 months). (Corresponding author: Naif Khalaf Al-Shammari.) The data collection procedure entailed performing an in-depth technical examination of vehicles, extracting related information from police records and collecting detailed injury information from medical archives. Each vehicle was inspected and photographed extensively, both internally and externally. Where possible, the damage profile of the vehicle was measured so that severity indicators known as collision delta-V and/or equivalent test speed (ETS) could be used [17]. The rollover speed and the impact speed on pedestrians were reconstructed from the physical evidence present at the scene. For pedestrian crashes, the procedure developed by NHTSA [18] and CCIS [14, 15] was followed. Impact direction was classified based on the standardized Collision Deformation Classification (CDC) code as recommended by SAE Practice J224b [6]. This method uses the principal direction of force (PDOF) of the impact. Directions were front (PDOF = 1, 11 and 12), side (PDOF = 2, 3, 4, 8, 9 and 10) and rear (PDOF = 5, 6 and 7). Vehicle body types were classified into two categories: the first included passenger cars, light trucks and vans, and the second included sport utility vehicles (SUVs). Vehicles were also classified according to curb weight (small for less than 1,089 kg, mid-size for 1,090-1,587 kg, and large for more than 1,588 kg). Information including age, gender, restraint use and seating position was obtained for each attended casualty.
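The two classification rules just described (clock-position PDOF to impact direction, and curb weight to size class) amount to simple lookups; a sketch, where the helper names and the treatment of boundary weights are ours:

```python
# Sketch of the study's classification rules; illustrative helper functions.

def impact_direction(pdof):
    """Map a clock-position PDOF (1-12) to the CDC-based impact direction."""
    if pdof in (1, 11, 12):
        return "front"
    if pdof in (2, 3, 4, 8, 9, 10):
        return "side"
    if pdof in (5, 6, 7):
        return "rear"
    raise ValueError("PDOF must be a clock position 1-12")

def size_class(curb_weight_kg):
    """Small < 1,089 kg, mid-size 1,090-1,587 kg, large > 1,588 kg (paper's bins)."""
    if curb_weight_kg < 1089:
        return "small"
    if curb_weight_kg <= 1587:
        return "mid-size"
    return "large"
```

For instance, a PDOF of 11 o'clock is counted as a frontal impact, and a 1,200 kg car falls in the mid-size bin.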
The use of airbags and seatbelts was primarily determined through crash-scene vehicle inspection, police records and interviews. Details of the injuries were obtained from medical and emergency department records. Injury severity was classified according to AIS rev. 2005 [19]. Descriptive statistics, bar charts, the Pearson chi-squared test, one-way analysis of variance and association analysis (cross-tabulation) were used, where applicable, to present and assess the data. Mann-Whitney U and median tests were used for comparisons of means. SPSS version 23 was used for this purpose. The significance level was set at 0.05.

III. RESULTS AND ANALYSIS
In Saudi Arabia, the number of road accidents and the resulting human and financial losses are still enormous. Official statistics show that 22,545 people were killed and 69,018 were injured as a result of the 533,380 accidents that happened on Saudi roads during the last three years [7]. The average fatalities and injuries per accident were 0.22 and 0.68 respectively, meaning that one person dies and three people get injured for every four accidents. Table I presents recent statistics of accidents, fatalities and injuries from road traffic accidents in Saudi Arabia.

TABLE I. VITAL STATISTICS OF RTAS IN KSA DURING 2017-2019
Year | Non-fatal accidents | Fatal accidents | Serious injuries | Fatalities
2017 | 511,649 | 38,120 | 21,731 | 9,031
2018 | 438,068 | 33,199 | 22,420 | 7,489
2019 | 327,597 | 30,217 | 24,867 | 6,025
Total | 1,277,314 | 101,536 | 69,018 | 22,545

Over the study period, a total of 295 collisions involving 331 motor vehicles qualified for inclusion in the current study. In those 295 collisions there were 596 casualties, including 568 vehicle passengers and 28 pedestrians, who sustained between them a total of 2,454 injuries (AIS >= 1).

A. Collision Characteristics
Figure 1 presents the types of collisions in this study.
Out of those 295 collisions, 61.2% were collisions between vehicles, 14.54% involved colliding with fixed objects, 10.9% were motor vehicle rollovers, 10.9% involved motor vehicles hitting pedestrians and 2.9% involved colliding with camels. The maximum number of accidents was recorded on Thursdays and the minimum on Sundays, which may be attributed to the increase in activities during the weekend and the sudden decrease afterwards. Figure 2 shows the severity of accidents for the various impacts. Police records classified 6% of the crashes as slight, 53.5% as serious and 40.5% as fatal. Cross-tabulating crash severity over impact reveals that almost all fatal and serious crashes had either a frontal or a lateral impact; an association was found between the severity of the accident and the type of impact. The majority of accidents happened within the carriageway, and the site of the accident does not affect the type of impact. Almost 61% of the accidents were recorded on highways in urban areas. There is no statistically significant relationship between impact direction and the area of the accident. Most frontal impacts (62%) and side impacts (73%) occurred on straight roads and slope layouts, and there was a statistically significant relationship between the type of impact and road layout. The distribution of weather conditions at the time of the crashes is presented in Table II.

Fig. 1. Distribution of collisions by type

TABLE II. DISTRIBUTION OF CRASHES BY WEATHER CONDITION
Weather condition | Percentage
Fine, no high winds | 68.8
Dust | 12.7
Raining, no high winds | 11.7
Raining and high winds | 4.4
Fine and high winds | 1.8
Fog or mist | 0.6

The majority of accidents occurred in fine weather conditions (69%). The weather conditions had a significant effect on the types of impacts. Most severe crashes occurred on lit roads or during daytime. Pedestrian crossing facilities are crucial, as some of the areas
are reserved for parking, gardening, etc., and some areas should not be accessible to pedestrians. More than half of the pedestrian collisions occurred on the nearside of drivers.

Fig. 2. Impact direction vs. severity of the accident

B. Vehicle Characteristics
There were 331 vehicles involved in the accidents considered in this study. The type of vehicle can play an important role in causing accidents and in their consequences. Passenger cars formed the vast majority of the vehicles in traffic crashes (87%). The direction or location of impact plays an important role in the severity and type of injury. Figure 3 shows the distribution of impact types for crashes between vehicles: frontal impact represents 62.6%, side impact 18.9%, top/bottom impact 10.9% and rear-end impact 7.6%.

Fig. 3. Direction of impact for vehicle crashes

Most of the vehicles were small cars (55%), followed by medium-size cars (41%). The statistical analysis showed an association between the type of impact and vehicle size, as presented in Figure 4. Hatchbacks were the motor vehicles with the most traffic crash-induced injuries (60%). Table III shows the major causes attributed to the vehicle crashes in KSA, categorized by driver, vehicle and road. Driver behavior was responsible for most of these accidents (93.80%), followed by technical defects (5.30%) and road works (0.90%). The major driver-related causes of accidents were found to be careless driving (24%), speeding (21.6%) and sudden deviation (16.6%), together accounting for 61% of the car accidents in this study. Of the sampled vehicles, unacceptable conditions of tires, brakes and lights accounted for 43%, 36% and 8% of all vehicle faults respectively.
Analysis of the tire-related collisions showed that aging defects, unacceptable tread depth and illegal tires were the most common types of defective tires, contributing 37%, 21% and 14% of tire-related collisions respectively.

Fig. 4. Impact direction vs. type of vehicle

TABLE III. DISTRIBUTION OF CRASH CAUSES
Crash cause | Percentage
Driver's violations (93.80%):
  Over speed | 21.4
  Running a traffic light | 9.4
  Wrong overtaking | 3.4
  Wrong U-turn | 1.9
  Wrong parking | 1.5
  Alcohol | 4.4
  Falling asleep | 0.6
  Exhaustion | 1.1
  Careless driving | 24.6
  Driving on the wrong side of the road | 1.7
  Stop sign crossing | 1.9
  Disobeying priority rules | 3.4
  Sudden deviation | 16.6
  Tailgating | 0.4
  Reckless driving | 1.5
Vehicular faults (5.30%):
  Bad tires | 2.28
  Brake defects | 1.90
  Light faults | 1.12
Road obstacles (0.90%)
Total | 100%

C. Casualty Characteristics
As stated earlier, the 295 crashes included in the current study resulted in 596 casualties. Casualty medical details were collected from hospital records, emergency medical service providers and trauma centers. Three quarters of those injured in traffic crashes were males. The average age of the victims was 33.2 (st. dev. = 17.3) years. People aged 15-44 years sustained the bulk of traffic crash-related injuries (66%). In this study, 568 occupants were included. Figure 5 shows that the majority of injured occupants were drivers (54.2%), followed by front-seat passengers (20.1%). Only 15% of vehicle occupants who sustained an injury were restrained; restraint use was found to significantly reduce injury severity. Out of the 596 casualties included in the current study, 57.7% were admitted to hospital, 6.7% were treated as outpatients, 24.9% died upon arrival at hospital and 10.7% died before admission (Figure 6).
Of those admitted to hospital, 71.6% had an AIS level of 2, 13.2% of 3 and 15.2% of 4+.

Fig. 5. Position of vehicle occupants during the crash
Fig. 6. Place of death for victims in the study

D. Injury Characteristics
Most of the injury details of the 596 casualties were collected from the medical records of the main hospitals in Riyadh where the casualties were admitted, and the severity of each injury was assessed using the AIS scale. In total, there were 2,454 injuries (AIS >= 1) among the 596 casualties. Table IV shows a summary of the severity of the injuries for each type of collision. As can be noted, 25.2% of injuries due to collisions with fixed objects and 23.1% of injuries due to collisions with other vehicles led to injuries of AIS >= 4. This reflects the greater risk of injury upon hitting a fixed object or a vehicle compared to hitting a pedestrian (16.2%), a rollover (13.1%) or hitting a camel (13.1%). There were a total of 588 head injuries recorded. A Pearson chi-squared test revealed a significant association between head injury severity and impact direction (p = 0.001). Most severe head injuries (AIS >= 3), mainly believed to be injuries to the cervical spine, occurred in frontal and lateral impacts. Most head injuries in frontal impacts occurred due to head contact with the windscreen and frame, while in lateral impacts they occurred due to contact with the side rail above the window. It was also noticed that injuries to the cervical spine are mainly non-contact injuries, occurring due to high forces transmitted to the spine. A total of 87 face injuries were recorded in the crashes analyzed in this study. As the vast majority of face injuries had AIS <= 2, the Pearson chi-squared test gave a non-significant relationship between face injuries and AIS value (p > 0.05). There were also 487 thorax injuries recorded.
A Pearson chi-squared test revealed a significant relationship between thorax injury severity and impact direction (p < 0.05), with most thorax injuries occurring in frontal and lateral impacts. There were 400 abdomen injuries. The relationship between abdomen injury severity and impact direction was examined; the Pearson chi-squared test revealed no significant relationship (p > 0.05), and more than 80% of abdomen injuries had AIS <= 3. In total, there were 329 recorded injuries to the limbs. Cross-tabulation of limb injury severity and impact type was found to be significant (p < 0.001), with lateral impact crashes resulting, generally speaking, in higher AIS levels than other crash types.

TABLE IV. DISTRIBUTION OF SEVERITY OF INJURIES ACCORDING TO COLLISION TYPE
Impact | AIS 1 | AIS 2 | AIS 3 | AIS 4 | AIS 5 | AIS 6
Rollover (n=239) | 36.8% | 37.2% | 12.9% | 9.2% | 3.3% | 0.6%
Vehicle (n=1597) | 28.5% | 28.6% | 19.8% | 11.6% | 6.8% | 4.7%
Fixed object (n=393) | 31.6% | 24.4% | 17.8% | 12.4% | 9.1% | 4.7%
Pedestrian (n=179) | 36.9% | 37.4% | 9.5% | 8.3% | 4.4% | 3.5%
Camel (n=4) | 69.6% | 17.4% | 2.2% | 4.3% | 4.3% | 2.2%

IV. DISCUSSION AND CONCLUSIONS
The high losses due to road traffic crashes in Saudi Arabia highlighted earlier can be reduced by introducing intervention measures. Such measures have helped to decrease the rates of road crashes in motorized countries such as the UK, Australia, Sweden and the USA [5]. In this study, detailed information on 295 collisions that occurred in the city of Riyadh and resulted in an injury of AIS >= 1 was analyzed. The results show that most of the casualties were young, 75% of them were males, and 54% were drivers. This can be attributed to the fact that this group of drivers accepts greater risk than others, which is reflected in their lower rate of seatbelt use, higher rate of reckless driving and greater proneness to disobey traffic rules [10, 12].
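The association analyses reported in this section are Pearson chi-squared tests on cross-tabulations. A self-contained sketch of the statistic follows; the contingency counts used in the example are illustrative only, not the study's data.

```python
# Sketch of the Pearson chi-squared statistic for a contingency table,
# as used for the impact-direction vs. injury-severity cross-tabulations.

def pearson_chi2(table):
    """Pearson chi-squared statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand  # independence model
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative table: rows = impact direction (front, side, rear),
# columns = injury severity class (these counts are made up for the example).
table = [[40, 30, 10],
         [20, 25, 15],
         [15, 5, 2]]
stat = pearson_chi2(table)
```

The statistic is zero exactly when every observed count matches the count expected under independence; the p-values quoted in the text would then come from the chi-squared distribution with (rows - 1) * (columns - 1) degrees of freedom.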
Over half of the injuries sustained are head and cervical injuries, and a significant number of those involve cord damage. Thoracic and lumbar spinal injury is predominately fracture without associated cord damage [9, 20]. At all levels of injury severity, car-to-car impacts accounted for about 61% of the collisions in the study. It is interesting to note that a third of the injuries (34%) were caused by collisions with vehicles other than cars [15]. This suggests that such injuries may not be adequately addressed by the European or US test standards, since the mass and height of the test barrier in either test cannot compare to the mass and height of a standard European/US heavy goods vehicle [17, 20]. Regarding crash characteristics, the majority of crashes leading to deaths or injuries in Riyadh were frontal impacts, recorded on highways, in urban areas, in the presence of road light or sunlight, on straight roads, and in fine weather conditions. These findings can be further justified when the reported causes of these crashes are considered [21-23]. Moreover, only 15.1% of vehicle occupants were reported in police records to be restrained among the 568 casualties. This explains the high rates of upper-body (mainly head and thorax) injuries reported in the hospital records of the same casualties. Even injuries to the spine reportedly occurred as a result of head contact, during which a translational force is transmitted along the spine from the head [13, 20]. Keeping in mind the good road network that exists in the Greater Riyadh area, it is therefore concluded that human behavior is the main contributor to these crashes.
Although it was clear that human behavior is responsible for the majority of crashes, assessing crashed-vehicle characteristics revealed that some technical problems also contribute to the high road crash rate. Improper maintenance of tires and brakes, as well as underinflated, illegal, or ageing tires, were the most commonly recorded vehicle-related crash causes, which is in line with previous research studies. This study demonstrated several important aspects that should be addressed in order to improve the state of safety in Saudi Arabia. A detailed database would give policy makers strong indicators of the state of road safety and of how the related issues are emerging. It is also recommended that future research work incorporate crash analysis and reconstruction to better understand how crashes happen in Saudi Arabia and what can be done to minimize their number and severity.

ACKNOWLEDGMENT

This study was supported by the King Abdulaziz City for Science and Technology General Directorate of Research Grants Program (grant number AT-34-220).

REFERENCES

[1] Ministry of Culture and Information, The Kingdom of Saudi Arabia: A Welfare State, Cultural Affairs Releases, Ministry of Culture and Information, 2008
[2] GASTAT, Statistical Yearbook for the Years 1391-1439H (1970-2018), General Authority for Statistics (GASTAT), 2018
[3] A. S. Al-Ghamdi, Road Traffic Accidents in Saudi Arabia: Causes, Effects, and Solutions, Riyadh: King Abdulaziz City for Science and Technology, 1999 (in Arabic)
[4] Ministry of Transport, Transportation Statistics: Annual Publications of Transportation Statistics for the Years 1391-1439H (1970-2018), Ministry of Transport, Saudi Arabia, 1970-2018
[5] WHO, Global Status Report on Road Safety 2015, World Health Organization, 2015
[6] SAE International, Collision Deformation Classification J224_201702, SAE, 2017
[7] Ministry of Interior, Traffic Statistics: Annual Publications of Road Accident Statistics for the Years 1391-1437H (1970-2016), Ministry of Interior, General Traffic Department, 2016
[8] SRCS, First Aid Statistics: Annual Publications of Saudi Red Crescent Statistics for the Years 1391-1439H (1970-2018), Red Crescent, Ministry of Health, Saudi Arabia, 2018
[9] N. K. Al-Shammari, "Typical cases of crash reconstruction and injury causation in Saudi Arabia", National Traffic Safety Conference 2019 (Present and Future), Riyadh, Saudi Arabia, March 11-13, 2019
[10] H. A. Mohamed, "Estimation of socio-economic cost of road accidents in Saudi Arabia: willingness-to-pay approach (WTP)", Advances in Management & Applied Economics, Vol. 5, No. 3, pp. 43-61, 2015
[11] ADA, The Economic Cost of Road Traffic Accidents in the Kingdom of Saudi Arabia, The High Commission for the Development of Arriyadh (ADA), Riyadh, 2010
[12] K. M. Sarma, R. N. Carey, A. A. Kervick, Y. Bimpeh, "Psychological factors associated with indices of risky, reckless and cautious driving in a national sample of drivers in the Republic of Ireland", Accident Analysis & Prevention, Vol. 50, pp. 1226-1235, 2013
[13] S. Bendak, "Compliance with seat belt enforcement law in Saudi Arabia", International Journal of Injury Control and Safety Promotion, Vol. 14, No. 1, pp. 45-48, 2007
[14] A. M. Hassan, R. Guo, C. E. N. Sturgess, Y. Hu, "Road traffic accident data collection and analysis for road safety research", 4th International Conference on Traffic Safety, Changsha, China, 2005
[15] M. Mackay, A. M. Hassan, "Age and gender effects on injury outcome for restrained occupants in frontal crashes: a comparison of UK and US data bases", Annual Proceedings of the Association for the Advancement of Automotive Medicine, pp. 75-91, AAAM, 2000
[16] ADA, Background of the Riyadh Public Transport Network, The High Commission for the Development of Arriyadh (ADA), 2016
[17] N. Johnson, Assessment of Crash Energy-Based Side Impact Reconstruction Accuracy, MSc Thesis, Virginia Polytechnic Institute and State University, 2011
[18] NASS, Crashworthiness Data System, National Accident Sampling System, July 2016
[19] AAAM, The Abbreviated Injury Scale, 2015 Revision, Association for the Advancement of Automotive Medicine, 2015
[20] N. K. Al-Shammari, Motor Vehicle Spinal Injuries: Simulation and Crash Investigation, LAP Lambert Academic Publishing, 2012
[21] M. Touahmia, "Identification of risk factors influencing road traffic accidents", Engineering, Technology & Applied Science Research, Vol. 8, No. 1, pp. 2417-2421, 2018
[22] A. Detho, S. R. Samo, K. C. Mukwana, K. A. Samo, A. A. Siyal, "Evaluation of road traffic accidents (RTAs) on Hyderabad-Karachi M-9 motorway section", Engineering, Technology & Applied Science Research, Vol. 8, No. 3, pp. 2875-2878, 2018
[23] A. Detho, S. R. Samo, K. C. Mukwana, I. A. Memon, U. A. Rajput, "Proposed remedies to prevent road traffic accidents (RTAs) on highways in Pakistan", Engineering, Technology & Applied Science Research, Vol. 8, No. 5, pp. 3366-3368, 2018

AUTHORS PROFILE

Dr. Naif K. Al-Shammari holds a BSc and an MSc in Mechanical Engineering from King Saud University, Saudi Arabia, and a PhD in Biomechanics from the University of Birmingham, UK. In 2005 he joined the Birmingham Automotive Safety Centre (BASC).
His current research interests focus on crash injury in real-world accidents, including the biomechanics of impacts, vehicle collision performance, and the epidemiology of traffic crashes. He has held the post of principal crash analyst and worked as a researcher in the Co-operative Crash Injury Study (CCIS) team at the University of Birmingham. He has published more than 17 papers and technical reports. His major fields of specialization are computer modeling, robotics, and artificial intelligence. Prof. Saied Darwish received his PhD in Industrial Engineering from the University of Birmingham, UK, in 1987. He has a distinctly interdisciplinary background and has been working in the areas of computer-aided design and finite element analysis. He has published more than 70 research papers in international journals and over 100 in various conferences. He holds 3 patents in various countries, including the US, the European Union, China, Singapore, and Japan, while another 6 applications are in various stages of processing. Prof. Darwish has supervised 5 PhD students and ~70 BSc/MSc students for their thesis work, and at present eleven students are working with him for their PhD degrees. He is an active member of more than 15 international societies of mechanical and industrial engineers around the world.

Engineering, Technology & Applied Science Research Vol. 9, No. 6, 2019, 4996-5000
www.etasr.com
Al-Zahrani: On the Statistical Distribution of Packets Service Time in Cellular Access Networks

On the Statistical Distribution of Packets Service Time in Cellular Access Networks

Ali Y. Al-Zahrani
Department of Electrical and Electronic Engineering, University of Jeddah, Jeddah, Saudi Arabia
ayalzahrani1@uj.edu.sa

Abstract—A cellular communication system is divided into two main parts: the core network and the radio access network.
This research is concerned with the radio access network part, which consists of multiple cells, each served by a centrally located base station. Furthermore, the users in each cell are considered to be uniformly distributed inside the cell. In the downlink context, the users' packets usually arrive at the base station via fiber optics and are then relayed to the users via radio waves of certain frequency/ies. The speed of delivering users' packets varies, depending on the users' locations. In this paper, the actual distribution of the service time over different users whose locations are uniformly distributed in a cell served by one base station is found analytically. Simulation results are presented to validate the derived model.

Keywords—wireless network; cumulative distribution function; probability density function; packet service time; resource allocations

I. Introduction

Deploying a wireless system can be quite costly. Therefore, vendors and operators run extensive simulations before deploying any system to make sure that it will work properly as anticipated, so that the investment pays off. A key issue for drawing useful insights out of these numerical experiments (i.e., simulations) is the accuracy of the models that represent the different parts of the system. In the cellular radio access network (RAN), where each cell is served by one central base station (BS), users are usually assumed to be distributed within the cell according to a uniform distribution. In addition, the users' packets arrive at the base station via fiber optic channels and are relayed to the users via wireless radio channels [1, 2]. From a queuing theory perspective, when packets arrive at a service facility according to a Poisson process (i.e., the packet inter-arrival time is an exponential random variable) and are serviced in a time that is exponentially distributed, the resulting queue is denoted M/M/1.
M stands for "memoryless", which is a property of the exponential distribution, and 1 indicates the existence of only one server in the facility. This model is the simplest among the different queue models [3]. The BS may be viewed as the service facility, which serves the incoming packets by relaying them to their respective users. This paper aims to show that the resulting queue at the BS cannot be M/M/1, as some researchers assume [4, 5]. Since the backhaul of the BS is connected to a single reliable fiber optic channel, the inter-arrival time of the packets at the BS may be modeled as an exponential random variable [6]. On the other hand, the time required to send packets is a random variable that cannot be modeled as exponentially distributed, because these packets belong to many users with different channels to the BS; hence, the packets are delivered at completely different data rates. The main contributions of this paper are:

• The statistical distribution of the service time required to deliver packets from the BS to the users is analytically derived.
• A function for generating service time samples, which is a very useful tool for simulation purposes, is provided.
• The statistical distribution of the service rate under the described system setup is also derived.
• The impact of the derived service time distribution is highlighted from the queuing theory perspective.

II. System Description

In a typical cell of radius R, as shown in Figure 1, users are distributed within the cell according to a uniform distribution. Hence, users generally experience different path loss due to their different distances d from the BS. Furthermore, since the BS backhaul is a reliable fiber optic link, the users' packets arrive at the cell BS according to a Poisson process with an average rate of λ packets per second.
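The paper's core argument, that mixing users with different data rates yields a non-exponential service time even under Poisson arrivals, can be checked with a toy two-rate example (hypothetical rates, not taken from the paper): an exponential distribution has a squared coefficient of variation of exactly 1, while an equal mixture of two deterministic per-user service times L/r1 and L/r2 does not.

```python
# Two user classes, each equally likely, with different link rates (bits/s).
# The numbers below are illustrative only.
L = 1500 * 8               # packet size in bits
r1, r2 = 1e6, 1e5          # fast and slow user data rates
t1, t2 = L / r1, L / r2    # per-class deterministic service times

mean = 0.5 * t1 + 0.5 * t2
var = 0.5 * t1**2 + 0.5 * t2**2 - mean**2
scv = var / mean**2        # squared coefficient of variation

# An exponential service time would give scv == 1; this mixture gives
# scv = ((t2 - t1) / (t1 + t2))**2, which differs from 1.
```

The more the per-user rates spread out, the further the service-time distribution departs from exponential, which is exactly why the M/M/1 assumption fails here.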
For simplicity, we assume that the considered BS has one frequency channel f_c, which is allocated/shared among the users' packets over the time domain according to the first-in-first-out (FIFO) policy. Considering the BS as the router which routes the arriving packets to their respective users, the service time required to send each packet depends on the channel gain from the BS to the packet's destination. The channel gain g of a user at distance d (m) from the BS is

    g = c \left( \frac{d_0}{d} \right)^\alpha

where c is a constant, c = G_a \left( \frac{\lambda_c}{4\pi d_0} \right)^2, with G_a the overall antenna gain, \lambda_c the signal wavelength, d_0 the reference distance, and \alpha the path-loss exponent. Finally, we assume the size of each packet is L bits.

Corresponding author: Ali Y. Al-Zahrani

Fig. 1. A radio access network.

III. System Analysis

Since the mobile users are uniformly distributed within the cell, the distance D from the base station to any randomly selected user is a random variable (RV) whose cumulative distribution function (CDF) can be found as follows. Considering only the users within the cell, the probability that a typical user lies within an area of radius d such that d_0 < d < R is given by:

    P[d_0 < D \le d] = \int_0^{2\pi} \int_{d_0}^{d} \frac{r \, dr \, d\theta}{\pi (R^2 - d_0^2)} = \int_{d_0}^{d} \frac{2\pi r \, dr}{\pi (R^2 - d_0^2)}

Solving the above yields the CDF of the distance from the BS to a randomly selected user:

    F_D(d) = P[D \le d] = \begin{cases} 0 & \text{for } d \le d_0 \\ \frac{d^2 - d_0^2}{R^2 - d_0^2} & \text{for } d_0 \le d \le R \\ 1 & \text{for } d \ge R \end{cases}    (1)

where d is any typical distance and R is the radius of the cell.
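Equation (1) can be verified numerically: a point uniform over the annulus d_0 < d < R is obtained by inverse-transform sampling of F_D, i.e. d = sqrt(d_0² + u(R² − d_0²)) for u ~ U(0,1). A minimal sketch, with d_0 and R chosen to match the paper's Table I:

```python
import math
import random

d0, R = 20.0, 500.0   # reference distance and cell radius (m), as in Table I

def cdf_distance(d):
    """F_D(d) from (1): CDF of the BS-user distance for uniformly placed users."""
    if d <= d0:
        return 0.0
    if d >= R:
        return 1.0
    return (d * d - d0 * d0) / (R * R - d0 * d0)

def sample_distance(rng):
    """Inverse-transform sample: solve F_D(d) = u for d."""
    u = rng.random()
    return math.sqrt(d0 * d0 + u * (R * R - d0 * d0))

rng = random.Random(1)
samples = [sample_distance(rng) for _ in range(50_000)]
# The empirical CDF at d = 250 m should be close to the analytic value
emp = sum(s <= 250.0 for s in samples) / len(samples)
```

Note that sampling the radius uniformly (rather than the area) would over-represent users near the BS; the square root in `sample_distance` is what makes the placement uniform over the annulus.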
Thus, the average channel gain G from the base station to any randomly selected user is a random variable defined as a function of the distance D:

    G = c \left( \frac{d_0}{D} \right)^\alpha

Therefore, the CDF of the channel gain can be found as [9]:

    F_G(g) = P[G \le g] = P\!\left[ c \left( \frac{d_0}{D} \right)^\alpha \le g \right] = P\!\left[ D \ge d_0 \left( \frac{c}{g} \right)^{1/\alpha} \right] = 1 - P\!\left[ D \le d_0 \left( \frac{c}{g} \right)^{1/\alpha} \right] = 1 - F_D\!\left( d_0 \left( \frac{c}{g} \right)^{1/\alpha} \right)

Substituting (1) into the last equation yields:

    F_G(g) = 1 - \begin{cases} 0 & \text{for } d_0 (c/g)^{1/\alpha} \le d_0 \\ \frac{d_0^2 (c/g)^{2/\alpha} - d_0^2}{R^2 - d_0^2} & \text{for } d_0 \le d_0 (c/g)^{1/\alpha} \le R \\ 1 & \text{for } d_0 (c/g)^{1/\alpha} \ge R \end{cases}    (2)

After rearranging (2), we obtain the CDF of the channel gain of a randomly selected user as follows:

    F_G(g) = \begin{cases} 0 & \text{for } g \le c (d_0/R)^\alpha \\ 1 - \frac{d_0^2 (c^{2/\alpha} - g^{2/\alpha})}{g^{2/\alpha} (R^2 - d_0^2)} & \text{for } c (d_0/R)^\alpha \le g \le c \\ 1 & \text{for } g \ge c \end{cases}    (3)

Assuming the system applies a capacity-achieving code, the throughput of a given user whose channel gain is g will be

    R_u = B \log_2\!\left( 1 + \frac{P g}{B N_0} \right)

where P is a constant transmit power, B is the channel bandwidth, and N_0 is the single-sided noise power spectral density. Furthermore, the service time required to transmit one packet of L bits over one frequency band from the BS to a randomly selected user is a random variable T = L / R_u. The CDF of the service time, F_T(t), is derived as follows:

    F_T(t) = P[T \le t] = P\!\left[ \frac{L}{B \log_2(1 + P G / (B N_0))} \le t \right] = P\!\left[ \frac{L}{B t} \le \log_2\!\left( 1 + \frac{P G}{B N_0} \right) \right] = P\!\left[ G \ge \left( 2^{\frac{L}{B t}} - 1 \right) \frac{B N_0}{P} \right] = 1 - F_G\!\left( \left( 2^{\frac{L}{B t}} - 1 \right) \frac{B N_0}{P} \right)    (4)

Substituting (3) into (4) yields the CDF of the service time over one frequency channel, as shown below:

    F_T(t) = \begin{cases} 0 & \text{for } t \le \frac{L}{B \log_2(1 + \frac{P c}{B N_0})} \\ \beta \, \frac{c^{2/\alpha} - h(t)}{h(t)} & \text{for } \frac{L}{B \log_2(1 + \frac{P c}{B N_0})} \le t \le \frac{L}{B \log_2(1 + \frac{P c (d_0/R)^\alpha}{B N_0})} \\ 1 & \text{for } t \ge \frac{L}{B \log_2(1 + \frac{P c (d_0/R)^\alpha}{B N_0})} \end{cases}    (5)

where \beta = \frac{d_0^2}{R^2 - d_0^2} and h(t) = \left[ \left( 2^{\frac{L}{B t}} - 1 \right) \frac{B N_0}{P} \right]^{2/\alpha}.

A.
Generating Service Time Samples

For simulation purposes, it is sometimes required to generate samples of the service time. The inverse-transform technique can be used for this. Since F_T(t) \in [0, 1] and is a monotone non-decreasing function, N samples of the service time can be generated by \{ T_i = F_T^{-1}(u_i) \}_{i=1}^{N}, where F_T^{-1} is the inverse of the CDF in (5) and \{ u_i \}_{i=1}^{N} are randomly generated samples that follow the uniform distribution over the interval [0, 1]. Below, we explicitly find F_T^{-1}. Setting

    u = \beta \, \frac{c^{2/\alpha} - h(T)}{h(T)}

and rearranging step by step:

    h(T) \left( \frac{u}{\beta} + 1 \right) = c^{2/\alpha}
    h(T) = \frac{c^{2/\alpha} \beta}{u + \beta}
    \left( 2^{\frac{L}{B T}} - 1 \right)^{2/\alpha} \left( \frac{B N_0}{P} \right)^{2/\alpha} = c^{2/\alpha} \frac{\beta}{u + \beta}
    2^{\frac{L}{B T}} - 1 = \frac{P c}{B N_0} \left( \frac{\beta}{u + \beta} \right)^{\alpha/2}

Solving for T yields:

    T = F_T^{-1}(u) = \frac{L}{B \log_2\!\left( 1 + \frac{P c}{B N_0} \left( \frac{\beta}{u + \beta} \right)^{\alpha/2} \right)}    (6)

Thus, (6) is a direct simulation tool for generating samples of the service time required to send a packet of L bits from the BS to a randomly selected user.

B. Service Time Probability Density Function (PDF)

From (5), the PDF of the service time can be found:

    f_T(t) = \frac{dF_T(t)}{dt}    (7)

    f_T(t) = \begin{cases} -\beta c^{2/\alpha} \frac{h'(t)}{h^2(t)} & \text{for } t_a \le t \le t_b \\ 0 & \text{otherwise} \end{cases}    (8)

where t_a = \frac{L}{B \log_2(1 + \frac{P c}{B N_0})} and t_b = \frac{L}{B \log_2(1 + \frac{P c (d_0/R)^\alpha}{B N_0})} are the limits of the service time where the PDF is non-zero, and

    h'(t) = \frac{dh(t)}{dt} = -\left( \frac{B N_0}{P} \right)^{2/\alpha} \frac{2 L \ln 2}{\alpha B} \cdot \frac{2^{\frac{L}{B t}} \left( 2^{\frac{L}{B t}} - 1 \right)^{\frac{2 - \alpha}{\alpha}}}{t^2}

After some mathematical manipulation, the PDF of the service time can be rewritten as shown below:

    f_T(t) = \begin{cases} \kappa \, \frac{1}{t^2} \left( 2^{\frac{2\nu}{t}} - 2^{-\frac{\alpha\nu}{t}} \right)^{-\frac{2+\alpha}{\alpha}} & \text{for } t_a \le t \le t_b \\ 0 & \text{otherwise} \end{cases}    (9)

where \kappa = \beta c^{2/\alpha} \left( \frac{P}{B N_0} \right)^{2/\alpha} \frac{2 L \ln 2}{\alpha B} and \nu = \frac{L}{(2+\alpha) B}. Note that both \kappa and \nu are constants. Then, the first two moments of the service time are given by [9]:

    E[T] = \kappa \int_{t_a}^{t_b} \frac{1}{t} \left( 2^{\frac{2\nu}{t}} - 2^{-\frac{\alpha\nu}{t}} \right)^{-\frac{2+\alpha}{\alpha}} dt    (10)

    E[T^2] = \kappa \int_{t_a}^{t_b} \left( 2^{\frac{2\nu}{t}} - 2^{-\frac{\alpha\nu}{t}} \right)^{-\frac{2+\alpha}{\alpha}} dt    (11)

C. Density Function of the Service Rate

On certain occasions, we are interested in the service rate rather than the service time. The service rate S is a random variable representing the number of packets transmitted in one second.
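Before turning to the service rate, the service-time CDF (5) and its inverse (6) can be checked numerically for self-consistency: F_T should equal 0 and 1 at the support limits t_a and t_b, and F_T(F_T^{-1}(u)) should return u. A sketch (not the author's code) using representative parameter values in the spirit of Table I, with the constant c computed from the stated formula assuming G_a = 1:

```python
import math

# Representative parameters (roughly those of Table I; assumptions, not the paper's code)
alpha, d0, R = 3.52, 20.0, 500.0
B, L, P = 30e3, 1500.0, 1e-3             # Hz, bits, W (0 dBm)
N0 = 10 ** (-174 / 10) * 1e-3            # -174 dBm/Hz converted to W/Hz
lam_c = 3e8 / 2.5e9                      # wavelength at 2.5 GHz (m)
c = (lam_c / (4 * math.pi * d0)) ** 2    # gain constant with G_a = 1 (0 dB)
beta = d0**2 / (R**2 - d0**2)

def h(t):
    return ((2 ** (L / (B * t)) - 1) * B * N0 / P) ** (2 / alpha)

def cdf_T(t):
    """F_T(t) from (5), clipped to [0, 1]."""
    return min(1.0, max(0.0, beta * (c ** (2 / alpha) - h(t)) / h(t)))

def inv_cdf_T(u):
    """F_T^{-1}(u) from (6)."""
    snr = P * c / (B * N0) * (beta / (u + beta)) ** (alpha / 2)
    return L / (B * math.log2(1 + snr))

# Support limits t_a (best channel, g = c) and t_b (worst channel, g = c(d0/R)^alpha)
t_a = L / (B * math.log2(1 + P * c / (B * N0)))
t_b = L / (B * math.log2(1 + P * c * (d0 / R) ** alpha / (B * N0)))
```

The round trip F_T(F_T^{-1}(u)) = u holds exactly in the algebra, so any mismatch here would indicate a transcription error in (5) or (6).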
In this subsection, we introduce the CDF and PDF of the service rate. Clearly, the relation between service time and service rate is T = 1/S. Therefore, the CDF of the service rate can be found as:

    F_S(s) = P[S \le s] = P\!\left[ \frac{1}{T} \le s \right] = 1 - P\!\left[ T \le \frac{1}{s} \right] = 1 - F_T\!\left( \frac{1}{s} \right)

By substituting (5) into the above equation, we obtain the CDF of the service rate over one frequency channel as follows:

    F_S(s) = \begin{cases} 0 & \text{for } s \le s_{\min} \\ \frac{(\beta + 1)\,\varphi(s) - \beta c^{2/\alpha}}{\varphi(s)} & \text{for } s_{\min} \le s \le s_{\max} \\ 1 & \text{for } s \ge s_{\max} \end{cases}    (12)

where \varphi(s) = h(1/s), s_{\min} = \frac{B}{L} \log_2\!\left( 1 + \frac{P c (d_0/R)^\alpha}{B N_0} \right), and s_{\max} = \frac{B}{L} \log_2\!\left( 1 + \frac{P c}{B N_0} \right). Furthermore, the PDF of the service rate is given by:

    f_S(s) = \begin{cases} \kappa \left( 2^{2\nu s} - 2^{-\alpha\nu s} \right)^{-\frac{2+\alpha}{\alpha}} & \text{for } s_{\min} \le s \le s_{\max} \\ 0 & \text{otherwise} \end{cases}    (13)

The average service rate can be found by [9]:

    \bar{S} = \int_{s_{\min}}^{s_{\max}} s \, \kappa \left( 2^{2\nu s} - 2^{-\alpha\nu s} \right)^{-\frac{2+\alpha}{\alpha}} ds

D. The Impact of the Derived Service Rate

The BS usually has a number of orthogonal frequency channels through which the transmitters within the BS can transmit the packets to the respective users. When all radio frequencies are assigned to only one transmitter, which transmits all packets in a FIFO fashion, then, according to queuing theory [3], this kind of queue is called an M/G/1 queue. M indicates a Markovian arrival model, in which the packet inter-arrival time follows an exponential distribution; G indicates a general service model, where the packet service time follows an arbitrary distribution such as the one shown in (5); and "1" indicates that only one transmitter is in service. When \lambda < \bar{S}, the utilization of the BS transmitter is \rho = \lambda / \bar{S}, which quantifies the proportion of time the BS transmitter is busy in the long run.
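Since (13) is obtained by differentiating (12), the two can be cross-checked numerically: the closed-form density should match a central-difference derivative of F_S(s) = 1 − F_T(1/s). A sketch under the same representative parameter values as before (assumptions in the spirit of Table I, not the author's code):

```python
import math

# Representative parameters (roughly those of Table I; illustrative assumptions)
alpha, d0, R = 3.52, 20.0, 500.0
B, L, P = 30e3, 1500.0, 1e-3
N0 = 10 ** (-174 / 10) * 1e-3            # -174 dBm/Hz in W/Hz
c = (3e8 / 2.5e9 / (4 * math.pi * d0)) ** 2   # gain constant with G_a = 1
beta = d0**2 / (R**2 - d0**2)
kappa = (beta * c ** (2 / alpha) * (P / (B * N0)) ** (2 / alpha)
         * 2 * L * math.log(2) / (alpha * B))
nu = L / ((2 + alpha) * B)

def cdf_S(s):
    """F_S(s) = 1 - F_T(1/s), with F_T taken from (5)."""
    t = 1 / s
    h = ((2 ** (L / (B * t)) - 1) * B * N0 / P) ** (2 / alpha)
    return 1 - beta * (c ** (2 / alpha) - h) / h

def pdf_S(s):
    """Closed-form density from (13)."""
    return kappa * (2 ** (2 * nu * s) - 2 ** (-alpha * nu * s)) ** (-(2 + alpha) / alpha)

s = 250.0                # a rate inside the support (packets/s)
eps = 1e-3
numeric = (cdf_S(s + eps) - cdf_S(s - eps)) / (2 * eps)
# numeric should agree with pdf_S(s) to high precision
```

Agreement here confirms that the constants κ and ν written above are consistent with the derivative of (12).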
Moreover, the system will be stable since \rho < 1, and the steady-state parameters of this M/G/1 queue will be as follows [12]:

• Packet average delay:

    W = \frac{1}{\bar{S}} + \frac{\lambda \left( 1/\bar{S}^2 + \sigma_T^2 \right)}{2 (1 - \rho)}    (14)

where \sigma_T^2 is the variance of the packet service time T.

• Average number of packets at the BS:

    N = \rho + \frac{\rho^2 \left( 1 + \sigma_T^2 \bar{S}^2 \right)}{2 (1 - \rho)}    (15)

• Probability of zero packets at the BS: P_0 = 1 - \rho

If, however, the service model were not general (G) but Markovian (M), as is often erroneously assumed, then the steady-state parameters would be [12]:

• Packet average delay: W = \frac{1}{\bar{S} - \lambda}
• Average number of packets at the BS: N = \frac{\rho}{1 - \rho}
• Probability of n packets at the BS: P_n = \rho^n (1 - \rho)

These quantities, which are based on loose assumptions, are misleading, as they are completely different from the actual parameters shown in (14) and (15). Note that the true packet average delay in (14) is directly proportional to the variance of the service time, which indicates that the packet delay increases as the variation in service time increases. Thus, as the users become more scattered in the cell, their service times, which depend on the distance from the BS, vary more. One way to reduce \sigma_T^2 is to divide the set of all radio frequencies into n subsets assigned to n transmitters serving n packets simultaneously. In addition, each transmitter should serve packets belonging to users at approximately the same distance from the BS (i.e., a group of users at distance d ± δ from the BS).

IV. Numerical Experiment

In this section, the results of the computer simulation are shown and discussed. The simulation was conducted in accordance with the system setup explained above. The simulation parameters are shown in Table I. The simulated network architecture is shown in Figure 2, where the BS (red star) is located at the cell center and all users (black dots) are uniformly distributed in the cell. MATLAB version R2013a was used.
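The M/G/1 versus M/M/1 contrast above can be made concrete: the M/G/1 delay (14) grows with the service-time variance, while the M/M/1 formula ignores it. The sketch below uses hypothetical values of λ, S̄, and σ_T² (not derived from the paper); as a sanity check, when the service time happens to be exponential (σ_T² = 1/S̄²) the M/G/1 delay collapses to the M/M/1 value 1/(S̄ − λ):

```python
def mg1_delay(lam, s_bar, var_t):
    """Average packet delay W from (14) for an M/G/1 queue
    (Pollaczek-Khinchine mean-value formula)."""
    rho = lam / s_bar
    assert rho < 1, "queue must be stable"
    return 1 / s_bar + lam * (1 / s_bar**2 + var_t) / (2 * (1 - rho))

def mm1_delay(lam, s_bar):
    """Average packet delay for an M/M/1 queue."""
    return 1 / (s_bar - lam)

lam, s_bar = 80.0, 100.0                       # packets/s (illustrative)
w_exp = mg1_delay(lam, s_bar, 1 / s_bar**2)    # exponential-service special case
w_mm1 = mm1_delay(lam, s_bar)                  # should coincide with w_exp
w_var = mg1_delay(lam, s_bar, 4 / s_bar**2)    # more variable service: larger delay
```

With the same arrival rate and mean service rate, doubling the service-time standard deviation more than doubles the queueing component of the delay, which is the paper's point about scattered users.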
The simulation started by randomly choosing the location of each user (i.e., its distance from the base station and its angle with respect to the central point of the cell). Then, the channel of each user was estimated by calculating the signal path loss and small-scale fading. Knowing the user channel, the data rate can be estimated, and hence the time required to deliver each packet. The objective was to measure the required time for delivering each packet, then draw the CDF of these samples using the MATLAB command cdfplot, and finally compare the result with the analytical CDF found in (5).

TABLE I. SIMULATION PARAMETERS

Parameter                         Value   Unit
System frequency                  2.5     GHz
Channel bandwidth                 30      kHz
Path-loss exponent (α)            3.52    -
Reference distance (d_0)          20      m
Cell radius                       500     m
Transmit power (P)                0       dBm
Noise power spectral density      -174    dBm/Hz
Overall antenna gain              0       dB
Packet size (L)                   1500    bits

Fig. 2. A simulated cell with 500 m radius.

Figure 3 shows different graphs of the analytical and simulated CDFs of the time required to relay the packets to the users. The analytical CDF is a simple graph of F_T(t) described by (5), while the simulated CDF is based on the data drawn from the simulation. As depicted in the figure, the simulated CDF fits the analytical CDF. In addition, as the number of users increases, the fit becomes more precise, simply because the analytical CDF is based on the limit (i.e., the number of users tending to infinity). This result clearly demonstrates the accuracy of the statistical model of the packet service time proposed in (5). Furthermore, Figure 4 shows a comparison between the analytical PDF portrayed by (9) and the histogram of the service time results drawn from the simulation. The analytical PDF clearly matches the result of the simulation. This shows that the derived statistical model is realistic and closely matches the data-based results.
Fig. 3. Comparison between analytical and simulated CDFs: (a) 100 users/cell, (b) 500 users/cell, (c) 1000 users/cell, (d) 2000 users/cell.

Fig. 4. Comparison between the analytical PDF and the histogram of the packet service time.

V. Conclusion

In this paper, the correct statistical model of the time taken by the BS to relay a user's packet was derived. We showed that it is completely different from the exponential distribution, as it is sometimes assumed to be. Furthermore, we showed how this result can be handy and beneficial in numerical experiments involving the cellular system. Finally, we derived direct closed forms of the CDF and PDF of the service rate measured in packets per second. These results may be beneficial in developing new methods for radio resource management aimed at reducing the waiting time at the BS.

REFERENCES

[1] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, Second Edition, 2002
[2] A. Goldsmith, Wireless Communications, Cambridge University Press, First Edition, 2005
[3] D. P. Bertsekas, R. G. Gallager, Data Networks, Prentice-Hall, 1992
[4] K. Son, H. Kim, Y. Yi, B. Krishnamachari, "Base station operation and user association mechanisms for energy-delay tradeoffs in green cellular networks", IEEE Journal on Selected Areas in Communications, Vol. 29, pp. 1525-1536, 2011
[5] H. Kim, G. de Veciana, X. Yang, M. Venkatachalam, "α-optimal user association and cell load balancing in wireless networks", IEEE/ACM Transactions on Networking, Vol. 20, No. 1, pp. 177-190, 2012
[6] D. Niyato, E. Hossain, "Adaptive fair subcarrier/rate allocation in multirate OFDMA networks: radio link level queuing performance analysis", IEEE Transactions on Vehicular Technology, Vol. 55, No. 6, pp. 1897-1907, 2006
[7] A. Y.
Al-Zahrani, "Modelling and QoS-achieving solution in full-duplex cellular systems", International Journal of Computer Networks & Communications, Vol. 10, pp. 117-135, 2018
[8] S. Buyukcorak, G. K. Kurt, O. Cengaver, "A probabilistic framework for estimating call holding time distributions", IEEE Transactions on Vehicular Technology, Vol. 63, No. 2, pp. 811-821, 2014
[9] A. Leon-Garcia, Probability and Random Processes for Electrical Engineering, Addison-Wesley, 1994
[10] M. Azhar, A. Shabbir, "5G networks: challenges and techniques for energy efficiency", Engineering, Technology & Applied Science Research, Vol. 8, No. 2, pp. 2864-2868, 2018
[11] L. Scalia, K. K. T. Biermann, C. Choi, W. Kellerer, "Power-efficient mobile backhaul design for CoMP support in future wireless access systems", 2011 IEEE Conference on Computer Communications Workshops, Shanghai, China, April 10-15, 2011
[12] J. Banks, J. S. Carson II, B. L. Nelson, D. M. Nicol, Discrete-Event System Simulation, Prentice Hall, Fourth Edition, 2005

AUTHORS PROFILE

Ali Y. Al-Zahrani received his BSc in Electrical Engineering (with honors) from King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Saudi Arabia, in 2002, and his MSc and PhD degrees in Electrical and Computer Engineering from Carleton University, Ottawa, ON, Canada, in 2010 and 2015 respectively. He is currently an Assistant Professor at the Department of Electrical and Computer Engineering, University of Jeddah, Saudi Arabia. From 2002 to 2007 he worked as an electrical engineer at the Saudi Basic Industries Corporation (SABIC). His research interests include radio resource allocation and interference management in wireless communication systems, DSP, and massive MIMO.

Engineering, Technology & Applied Science Research Vol. 10, No.
5, 2020, 6165-6171
www.etasr.com
Abdulkareem: Identification of Oil-Gas Two Phase Flow in a Vertical Pipe Using Advanced Measurement Techniques

Identification of Oil-Gas Two Phase Flow in a Vertical Pipe Using Advanced Measurement Techniques

Lokman A. Abdulkareem
Department of Petroleum Engineering, College of Engineering, University of Zakho, Duhok, Iraq
lokman.abdulkareem@uoz.edu.krd

Abstract—The characteristics of flow configuration in pipes are very important in the oil industry due to their role in governing equipment design. In vertical risers, many flow configurations can be observed, such as bubbly, slug, churn, and annular flow. In this project, two tomographic techniques were applied simultaneously to the flow in a vertical riser: the electrical capacitance tomography (ECT) technique and the capacitance wire mesh sensor (WMS) technique. The employed pipe diameter was 50 mm, and the studied superficial velocities were 0.06-3.0 m/s for gas and 0.06-0.4 m/s for oil. Several techniques were used to analyze the output data of the two tomography techniques, such as time series of cross-sectional averaged void fraction, probability density function (PDF), image reconstruction, and liquid hold-up profile. The averaged void fractions were calculated from the output signals of the two measurement techniques and plotted as functions of the superficial gas velocity. The flow patterns were identified from the PDF of the averaged void fraction. In addition, it was found that both tomographic techniques are reliable in identifying the flow regimes in pipes.

Keywords—void fraction; electrical capacitance tomography; wire mesh sensor; two phase flow

I. Introduction

Multiphase flow of gas-liquid mixtures in vertical pipes occurs in industrial equipment and applications such as the petroleum industry, boilers, chemical plants, heat transfer equipment with phase change, nuclear reactor technology, and geothermal energy production.
Two phase flow is a challenging subject due to the complexity of the forms in which the fluids exist inside the pipes [1]. The prediction of characteristics such as liquid hold-up and gas void fraction during two phase gas/liquid flow in pipes is of particular interest to the nuclear, petroleum, and chemical industries. In some petroleum industry units, obtaining accurate inflow performance relations is difficult due to the multiphase behavior [2]. The most important parameters in multiphase flow are the flow regimes and the void fraction [3]. Therefore, the understanding of any process in multiphase flows depends on the identification of the flow regimes. As a result, identification of flow regimes is an excellent starting point for developing techniques that predict gas and liquid hold-up, mass and heat transfer, and finally pressure drop. Therefore, the main aim of this study is the characterization of two-phase flow in vertical pipes.

II. Experimental Arrangement

The study of multiphase gas-liquid flow requires accurate measurements of each phase's velocity and phase fraction. The experimental facilities and equipment setup used in this study are described in [3] and shown in Figure 1. The measurement technique that was installed and used on the vertical facility is the novel capacitance wire mesh sensor (WMS) [4]. This device was designed at the research center of Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Germany.

Fig. 1. Block diagram of the facility.

Corresponding author: Lokman A. Abdulkareem

III. Probability Density Function

The probability density function (PDF) of the void fraction time series has been used to classify the various flow patterns observed in our previously performed experiments [5].
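The regime classification described here amounts to histogramming the void-fraction time series and counting its peaks: one peak at low void fraction suggests bubbly flow, while twin peaks suggest slug flow. A minimal sketch on a synthetic signal (the series below is fabricated for illustration, not experimental data):

```python
def void_fraction_pdf(series, bins=20):
    """Normalized histogram (discrete PDF) of a void-fraction series in [0, 1]."""
    counts = [0] * bins
    for x in series:
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / len(series) for c in counts]

def count_peaks(pdf, floor=0.02):
    """Count local maxima above a small floor:
    1 peak suggests bubbly-like flow, 2 peaks suggest slug-like flow."""
    peaks = 0
    for i in range(len(pdf)):
        left = pdf[i - 1] if i > 0 else 0.0
        right = pdf[i + 1] if i < len(pdf) - 1 else 0.0
        if pdf[i] > floor and pdf[i] > left and pdf[i] >= right:
            peaks += 1
    return peaks

# Synthetic slug-like series: alternating liquid slug (~0.1) and Taylor bubble (~0.8)
series = [0.10, 0.12, 0.11, 0.80, 0.82, 0.79] * 50
pdf = void_fraction_pdf(series)
```

Real signals are noisier, so a practical classifier would smooth the histogram (or use kernel density estimation) before peak counting, but the twin-peak signature of slug flow survives either way.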
Typical examples for bubbly, slug, and churn patterns are given in Figure 2. These data correspond to a liquid superficial velocity of 0.06 m/s and gas superficial velocities of 0.06-5.33 m/s, respectively. The slug flow regime is characterized by twin peaks corresponding to the liquid slug and the Taylor bubble. A single peak at a low void fraction is typical of bubbly flow. The PDFs for oil flow rates of u_ls = 0.06 and 0.4 m/s and gas superficial velocities of 0.06-5.33 m/s are illustrated in three-dimensional graphs in Figure 3.

Fig. 2. Probability density function versus gas superficial velocity.

Fig. 3. The PDFs from: (a) electrical capacitance tomography and (b) wire mesh sensor.

The PDF at the lowest gas flow rate shows a single peak at a low void fraction, typical of bubbly or homogeneous flow. As the gas velocity increases, this first peak moves to higher void fractions. In addition, a second peak at a much higher void fraction begins to grow. This second peak is typical of slug flow and represents the emergence of Taylor bubbles. For slug flow, the results from the electrical capacitance tomography sensor show a clearer peak than those of the wire-mesh sensor. The axial gap between the electrodes of the wire-mesh sensor is 2 mm; therefore, the wire-mesh sensor measures an almost instantaneous void fraction at a plane, whilst the electrical capacitance tomography probes measure the average void fraction over the 13 zones.

IV. Mean Void Fraction

The mean void fraction from the two probes for vertical flow is plotted against the superficial gas velocity in Figure 4. The measurements were conducted at a liquid velocity of 0.06 m/s and different gas rates. As can be seen from Figure 4, at low gas rates there is good agreement between the two curves. However, after the gas superficial velocity reached 1.4 m/s, the mean void fraction curve of the ECT probe moved higher than the WMS curve.

Fig. 4. Mean void fraction of the two probes vs. superficial gas velocity.

V.
Void Fraction Profile

The wire-mesh sensor measuring system provides time- and cross-sectionally resolved information about the spatial distribution of the phases. This information can be used to obtain many parameters, such as space- and time-averaged void fractions, radial profiles of the time-averaged void fraction, and the cross-sectional averaged time series of the void fraction in the pipe. Time-averaged radial gas void fraction profiles were examined for various gas and liquid superficial velocity values. Figure 5 shows the curve shapes for oil-air flow. All peak values are located at the pipe center and increase with increasing gas superficial velocity. These curve shapes develop through the appearance of relatively large gaseous structures located in the pipe center. This is expected, as the flow rates fall in the center-peak region [6].

VI. Time Series of Cross-Sectional Averaged Void Fraction

Time series analysis is a calculation method used as an adjunct to identify and analyze the types of flow pattern inside a vertical pipe. The WMS provides time- and cross-sectionally resolved information about the spatial distribution of the phases. The time series of the cross-sectional average void fraction allows the detection of essential features of the flow [7]. Effectively, the time series data show the variation of the void fraction with time for each performed run, detecting the time at which each bubble passes through the sensors; this is transferred to the output device, which in turn records the time taken by each bubble.

Fig. 5. Radial, time-averaged void fraction profiles for different gas superficial velocities at (a) 0.06 m/s and (b) 0.4 m/s liquid superficial velocity.
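The radial profiles of Figure 5 are obtained by time-averaging each pixel of the wire-mesh frames and then binning pixels by their distance from the pipe axis. A sketch on synthetic frames (a hypothetical 8×8 grid with a fabricated center-peaked pattern, not the sensor's actual resolution or data):

```python
import math

def radial_profile(frames, nbins=4):
    """Time-average each pixel over the frames, then average the pixels
    into radial bins measured from the grid center."""
    n = len(frames[0])
    avg = [[sum(f[i][j] for f in frames) / len(frames) for j in range(n)]
           for i in range(n)]                      # time-averaged pixel values
    ctr = (n - 1) / 2
    rmax = math.hypot(ctr, ctr) + 1e-9             # farthest pixel distance
    sums, counts = [0.0] * nbins, [0] * nbins
    for i in range(n):
        for j in range(n):
            r = math.hypot(i - ctr, j - ctr)
            b = min(int(r / rmax * nbins), nbins - 1)
            sums[b] += avg[i][j]
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Synthetic frame: void fraction 0.9 in the 2x2 core, 0.1 elsewhere (center peak)
n = 8
frame = [[0.9 if 3 <= i <= 4 and 3 <= j <= 4 else 0.1 for j in range(n)]
         for i in range(n)]
profile = radial_profile([frame, frame])
# profile decreases from the pipe center outward, as in Figure 5
```

For real WMS data one would also mask pixels outside the circular pipe cross-section before binning; the synthetic grid above skips that step for brevity.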
examples of time series of cross-sectional average void fraction obtained with the wms technique are shown in figure 6, which illustrates the differences in bubble size and in the amount of gas and liquid inside the pipe. however, the time series method by itself is not sufficient for identifying the flow pattern type. other parameters are needed, such as the pdf, frequency, and mean void fraction; with these parameters, the detection of the flow pattern type is more accurate. the flow pattern types in a vertical pipe are bubbly flow, cap bubble flow, slug flow, and churn flow. fig. 6. time series of cross-sectional average void fraction for (a) bubbly flow, (b) cap bubble flow, (c) slug flow, and (d) churn flow. vii. image reconstruction electrical capacitance tomography (ect) produces images of the permittivity distribution of the two-fluid mixture inside the pipe. the permittivity distribution is displayed as a series of normalized pixels on a 32×32 square pixel grid, using a graduated blue/green/red color scale to indicate the normalized pixel permittivity [8]. pixels corresponding to the lower permittivity material used in the calibration of the sensor have the value 0 and are displayed in blue, whereas pixels corresponding to the higher permittivity material have the value 1 and are displayed in red. the normalized permittivity distribution corresponds to the fractional concentration distribution of the higher permittivity material. the method used for image reconstruction is linear back projection (lbp), which is based on a set of forward and inverse transforms. several slice images have been reconstructed from the captured data. figure 7 shows examples of reconstructed tomographic images for different superficial oil and gas velocities. the images are in agreement with the visual observations.
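the lbp step just described can be sketched as a normalized back projection through a sensitivity matrix. this is a minimal sketch under stated assumptions: the sensitivity matrix, measurement count, and the synthetic "gas region" are hypothetical, and the normalization (calibrated low/high capacitances, pixel values clipped to [0, 1]) follows the description in the text rather than any specific ect toolkit api.

```python
import numpy as np

def lbp_reconstruct(cap, cap_low, cap_high, sensitivity):
    """linear back projection (lbp) sketch for ect.
    `cap` raw inter-electrode capacitances (n_meas,), `cap_low`/`cap_high`
    the calibration measurements with the low/high permittivity fluid,
    `sensitivity` a (n_meas, n_pixels) sensitivity matrix. returns
    normalized pixel permittivities in [0, 1]."""
    lam = (cap - cap_low) / (cap_high - cap_low)                  # normalized capacitances
    g = (sensitivity.T @ lam) / (sensitivity.T @ np.ones_like(lam))  # weighted back projection
    return np.clip(g, 0.0, 1.0)

# toy demonstration on a hypothetical 32x32 pixel grid with 66 measurements
rng = np.random.default_rng(2)
n_meas, n_pix = 66, 32 * 32
S = rng.random((n_meas, n_pix))
g_true = np.zeros(n_pix)
g_true[:200] = 1.0                          # hypothetical high-permittivity region
cap_low, cap_high = S @ np.zeros(n_pix), S @ np.ones(n_pix)
cap = S @ g_true                            # linearized forward model
img = lbp_reconstruct(cap, cap_low, cap_high, S)
print("pixels in [0, 1]:", bool(img.min() >= 0.0 and img.max() <= 1.0))
```

lbp is deliberately simple: it smears each normalized measurement back over the pixels it is sensitive to, which is fast but blurry compared with iterative reconstruction schemes.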
the images in figure 8 show the transition through several flow regimes processed with both visualization techniques. fig. 7. tomographic images of the flow for different liquid and gas velocities. fig. 8. visualization of (a) electrical capacitance tomography and (b) wire-mesh sensor data. it has been experimentally observed that bubbles start forming at low flow rates. slug flow gradually occurs as the superficial velocity increases. with further increase in velocity, the pattern progresses into churn flow with moderate to high turbulence. preliminary results obtained using this technique have proved the ect system to be a powerful tool for exploring transient multiphase phenomena in gas–liquid flows [9]. viii. three-dimensional plots two- and three-dimensional plots of surface contours of the permittivity of the material inside the pipe have been generated by the ect system. the plot3d software allows capacitance data captured by the ptl ect32 software to be displayed as a set of multiple 2d image frames and viewed in 2d or 3d. the 3d plotting extends the visualization of the flow structure into the third dimension, axially along the pipe, by projecting the structures axially, based on their velocity, as they pass through the sensor planes. figure 9 shows an example of 3d data captured by the ect sensor. fig. 9. 3d plot of liquid concentration. the wire mesh sensors (wmss) employed in this study are capable of resolving taylor bubble shapes by creating a 3d reconstruction of the flow, as shown in figure 10. the deformation of a taylor bubble is also related to the stresses generated by its translational motion.
as a result, as the mixture velocity increases, the taylor bubble is observed to break up completely when the gas superficial velocity reaches 1.4m/s. this phase interaction mechanism might be the reason why, as reported in [10], liquid structures (wisps) inside the gas core of the taylor bubble have been found to exist in the churn flow regime. fig. 10. taylor bubble shapes in 3d. ix. liquid hold up profile ect is a system which processes capacitance data captured by a twin-plane ect sensor to produce velocity and flow profiles and overall flow data for a mixture of two dielectric materials. it generates instantaneous concentration (hold up) profiles of the flow inside the ect sensor at two axial measurement locations (planes) for each frame of capacitance data. these concentration profiles can be calculated for a relatively small number of zones in the flow cross-section. figure 11 shows the average concentration (expressed as a percentage of the full zone, in the nominal range 0-100%) for the selected zone at each of the two measurement planes as a function of time. the green trace corresponds to the average zone concentration at plane 1 and the red trace to that at plane 2. cross-correlation techniques can then be used to derive the instantaneous velocity in each zone from the concentration profiles at the two measurement planes, and the overall flow profile can be calculated from the concentration and velocity profiles. fig. 11. time variation of liquid hold up in four marked zones from the two probes. x. structure velocity structure velocities have been calculated from the cross correlation of the two void fraction signals from the two planes of the ect sensor: the transit time between the two planes was measured, and the distance between the planes was divided by this transit time. figure 12 shows the variation of structure velocity against mixture velocity for all conditions of vertical flow.
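the cross-correlation procedure just described can be sketched as follows. this is an illustrative sketch with synthetic signals: the 1 khz frame rate, the 89 mm plane spacing, and the 25-sample delay are assumed numbers, not the paper's measurement parameters.

```python
import numpy as np

def structure_velocity(sig1, sig2, plane_spacing, dt):
    """structure velocity by cross-correlating the void fraction signals
    from two axially separated sensor planes: the lag maximizing the
    cross-correlation gives the transit time between the planes."""
    a = sig1 - sig1.mean()
    b = sig2 - sig2.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # in samples; positive if sig2 lags sig1
    return plane_spacing / (lag * dt)

# synthetic check: delay the plane-1 signal by 25 samples to emulate plane 2
rng = np.random.default_rng(3)
dt, spacing = 1e-3, 0.089                   # 1 khz frames, 89 mm spacing (assumed)
s1 = rng.random(4000)
s2 = np.roll(s1, 25) + 0.01 * rng.random(4000)
v = structure_velocity(s1, s2, spacing, dt)
print("structure velocity:", round(v, 2), "m/s")  # expect 0.089 / 0.025 = 3.56
```

the peak of the cross-correlation recovers the imposed 25-sample transit time exactly, giving 0.089 m / 0.025 s = 3.56 m/s.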
in addition, the experimental data have been compared with the nicklin equation; the dashed curve in the figure is the correlation proposed in [11]. it shows that increasing the mixture velocity leads to an increase in structure velocity. however, at a liquid rate of 0.06m/s and a gas rate of 1.9m/s, the structure velocity starts to decrease. the reason for this is the change in flow pattern from slug to churn. in addition, the experimental structure velocity is higher than the nicklin curve. fig. 12. structure velocity vs. mixture velocity of gas, for different liquid flow rates. xi. frequency analysis frequency analysis is a very important technique for the prediction of slug flow characteristics. the accurate design of separation equipment for two-phase flow depends on a reliable prediction of the slug frequency. the frequency content of the data can be found by power spectral density (psd) analysis. the psd shows how the power of a signal or time series is distributed over frequency. mathematically, according to the wiener-khinchin theorem, the psd is obtained by applying the fast fourier transform (fft) to the autocovariance function of the time series signal. the autocovariance function of a signal $x(t)$ is given by: $R_{xx}(\tau = \Delta t) = \frac{1}{N}\sum_{t=1}^{N}\left[x(t) - \bar{x}\right]\left[x(t + \Delta t) - \bar{x}\right]$ (1) where $\bar{x} = \frac{1}{N}\sum_{t=1}^{N} x(t)$. the psd is then obtained from: $P_{xx}(f) = \frac{1}{N}\left|\sum_{\tau=0}^{N-1} R_{xx}(\tau)\exp(i 2\pi f \tau)\right|^{2}$ (2) the autocorrelations for one run are shown in figure 13. as can be seen, in the autocovariance function and fft there is a single dominant peak at about 1.2hz. figure 14 shows the effect of gas flow rate on frequency. as can be seen, there are break points at the bubbly-to-slug and slug-to-churn transitions.
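the wiener-khinchin route of (1)-(2) can be sketched numerically. this is a minimal sketch on synthetic data: the 100 hz sampling rate and the 1.2 hz slug-like oscillation are assumed for illustration (1.2 hz matches the dominant peak quoted for figure 13, but the signal here is artificial).

```python
import numpy as np

def dominant_frequency(x, fs):
    """dominant frequency from the psd of a void fraction time series,
    obtained (per the wiener-khinchin route in the text) as the fft of
    the autocovariance of the mean-removed signal."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    acov = np.correlate(x, x, mode="full")[n - 1:] / n   # one-sided autocovariance
    psd = np.abs(np.fft.rfft(acov))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(psd[1:]) + 1]                 # skip the zero-frequency bin

# synthetic slug-like oscillation at 1.2 hz sampled at 100 hz for 60 s
fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = (0.4 + 0.2 * np.sin(2 * np.pi * 1.2 * t)
     + 0.05 * np.random.default_rng(4).standard_normal(t.size))
print("dominant frequency:", round(dominant_frequency(x, fs), 2), "hz")
```

with 60 s of data the frequency resolution is fs/n ≈ 0.017 hz, so the 1.2 hz peak is resolved cleanly despite the added noise.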
the frequency increases slightly with increasing gas flow rate until the velocity reaches about 0.5m/s, and then starts to decrease due to the change in flow regime from bubbly to slug flow, continuing to decrease slightly within the slug flow region at higher gas flow rates. xii. discussion this section concentrates on the discussion of the results obtained in the previous sections. the results of the present study are also compared with those obtained in [13]. fig. 13. (a) psd vs. frequency, (b) auto correlation vs. time delay, (c) time series of void fraction for usg=0.4m/s and usl=0.27m/s. fig. 14. frequency against superficial velocity of gas, for different liquid flow rates. a. average void fraction analysis figure 15 shows the variation of average void fraction with gas superficial velocity. the behavior in the present work is compared with the results obtained in [13] with silicone oil in pipes of the same diameter. it can be seen that the results follow a comparable trend. minor differences can be explained by the difference in electrode configurations, which brings about a time delay. b. probability density function the pdfs obtained from the void fraction time series suggested that the flow was bubbly, slug, or churn. the results observed in the present study have been compared with those obtained in [13]. three velocities were used for comparison (figures 16-18). it can be clearly seen that the pdfs obtained in both studies are of similar character. fig. 15. result comparison between the current study and [13]. fig. 16. current study and [13] comparison of pdfs at 0.003m/s. fig. 17. current study and [13] comparison of pdfs at 0.001m/s. fig. 18. current study and [13] comparison of pdfs at 0.02m/s. c.
dominant frequency analysis the variation of the dominant frequency with gas superficial velocity was compared to the pattern obtained in [13]. the comparison is shown in figure 19. the trend obtained in the present study was found to be significantly different from that obtained in [13]: here the frequency was found to fluctuate with superficial gas velocity, unlike in [13]. this fluctuation in the frequency value is mainly due to disturbances caused by noise. fig. 19. comparison of dominant frequencies. d. structure velocity the structure velocity was found to vary linearly with the gas superficial velocity, as shown in figure 20, implying that the flow inside the tube is slug flow. the bubble velocities obtained in the present study were compared with the ones obtained in [13] and their magnitudes were seen to be very similar. this suggests that the bubble behaviors of both liquids can be considered similar. fig. 20. comparison of structure velocity. xiii. conclusion two advanced instrumentation systems were used to examine and investigate the characteristics of two-phase flow in vertical pipes. experiments were performed on a mixture of gas and oil, revealing good agreement in the flow pattern behavior with similar studies. the ect has the capability to measure velocity non-intrusively and is able to show detailed void fraction and velocity profile information for flows. the 3d shape of the bubbles was reconstructed from wms and ect data. the structure velocity shows similar trends when compared with [11]. pdfs, as in [5], have been used to identify the flow patterns. acknowledgment the author would like to thank the multiphase flow research center in the department of petroleum engineering, college of engineering, university of zakho for funding and supporting this research project.
references [1] b. j. azzopardi, gas-liquid flows. new york, ny, usa: begell house, 2006. [2] m. h. chachar, s. a. jokhio, a. h. tunio, and h. a. qureshi, “establishing ipr in gas-condensate reservoir: an alternative approach,” engineering, technology & applied science research, vol. 9, no. 6, pp. 5011–5015, dec. 2019. [3] l. a. abdulkareem, b. j. azzopardi, s. thiele, a. hunt, and m. j. da silva, “interrogation of gas/oil flow in a vertical pipe using two tomographic techniques,” presented at the asme 2009 28th international conference on ocean, offshore and arctic engineering, feb. 2010, pp. 559–566, doi: 10.1115/omae2009-79840. [4] v. a. musa, l. a. abdulkareem, and o. m. ali, “experimental study of the two-phase flow patterns of air-water mixture at vertical bend inlet and outlet,” engineering, technology & applied science research, vol. 9, no. 5, pp. 4649–4653, oct. 2019. [5] m. j. da silva, s. thiele, l. abdulkareem, b. j. azzopardi, and u. hampel, “high-resolution gas–oil two-phase flow visualization with a capacitance wire-mesh sensor,” flow measurement and instrumentation, vol. 21, no. 3, pp. 191–197, sep. 2010, doi: 10.1016/j.flowmeasinst.2009.12.003. [6] g. costigan and p. b. whalley, “slug flow regime identification from dynamic void fraction measurements in vertical air-water flows,” international journal of multiphase flow, vol. 23, no. 2, pp. 263–282, apr. 1997, doi: 10.1016/s0301-9322(96)00050-x. [7] a. ohnuki and h. akimoto, “experimental study on transition of flow pattern and phase distribution in upward air-water two-phase flow along a large vertical pipe,” international journal of multiphase flow, vol. 26, no. 3, pp. 367–386, 2000. [8] m. abdulkadir, d. zhao, s. sharaf, l. abdulkareem, i. s. lowndes, and b. j.
azzopardi, “interrogating the effect of 90° bends on air–silicone oil flows using advanced instrumentation,” chemical engineering science, vol. 66, no. 11, pp. 2453–2467, jun. 2011, doi: 10.1016/j.ces.2011.03.006. [9] m. byars, “developments in electrical capacitance tomography,” in proceedings of the 2nd world congress on industrial process tomography, pp. 542–549. [10] w. warsito and l.-s. fan, “measurement of real-time flow structures in gas–liquid and gas–liquid–solid flow systems using electrical capacitance tomography (ect),” chemical engineering science, vol. 56, no. 21, pp. 6455–6462, nov. 2001, doi: 10.1016/s0009-2509(01)00234-2. [11] b. azzopardi, v. hernandez perez, r. kaji, m. j. da silva, m. beyer, and u. hampel, “wire mesh sensor studies in a vertical pipe,” in proceedings of the 5th international conference on transport phenomena in multiphase systems, heat 2008, bialystok, poland, 2008. [12] e. q. bashforth, j. b. p. fraser, h. p. hutchison, and r. m. nedderman, “two-phase flow in a vertical tube,” chemical engineering science, vol. 18, no. 1, pp. 41–46, jan. 1963, doi: 10.1016/0009-2509(63)80004-4. [13] m. m. mustafa, “investigation of gas-liquid flow in vertical pipes by using wire-mesh sensor & measuring the viscosity flow of fluids using bs/u/m viscometers,” ph.d. dissertation, university of zakho, zakho, iraq, 2013. engineering, technology & applied science research vol. 9, no. 1, 2019, 3726-3733 3726 www.etasr.com benmoussa et al.: a multi-criteria decision making approach for enhancing university accreditation …
a multi-criteria decision making approach for enhancing university accreditation process nezha benmoussa signals, distributed systems and artificial intelligence laboratory, enset mohammedia, university of hassan ii, mohammedia, morocco nbnezhabenmoussa@gmail.com abir elyamami signals, distributed systems and artificial intelligence laboratory, enset mohammedia, university of hassan ii, mohammedia, morocco abir.elyamami@gmail.com khalifa mansouri signals, distributed systems and artificial intelligence laboratory, enset mohammedia, university of hassan ii, mohammedia, morocco khmansouri@gmail.com mohammed qbadou signals, distributed systems and artificial intelligence laboratory, enset mohammedia, university of hassan ii, mohammedia, morocco qbmedn7@gmail.com elhoussein illoussamen signals, distributed systems and artificial intelligence laboratory, enset mohammedia, university of hassan ii, mohammedia, morocco illous@hotmail.com abstract—this paper is an attempt to provide an accreditation training process model based on the criteria established by the national agency for evaluation and quality assurance of higher education and training. the aim is to minimize rejections or revisions of the training record. the main feature of our contribution is the use of multi-criteria decision making (mcdm) approaches for calculating the suitability of proposed courses. therefore, our contribution is concretized by the analysis of various multi-criteria decision support methods, the modeling of the general accreditation process of university courses, the development of a risk management matrix concerning the launch of new courses, and the application of topsis (technique for order of preference by similarity to ideal solution) on a sample of courses according to internal and external criteria collected during interviews in moroccan universities.
result analysis shows that the proposed model allows a better prioritization of training and thus avoids the abrupt closure of courses because of a lack of material or human resources. keywords—decision support; mcdm; accreditation; risk management matrix; topsis i. introduction decision-making generates very important industrial and economic issues affecting the management and competitiveness of organizations. prioritizing and optimizing all actions are two key factors of effective decision-making. universities face the challenge of resource and investment optimization to avoid any negative impact on the management of training projects and their performance. they are always looking for innovation so that they may be able to meet the needs of the market and to encourage scientific research. all interviewed managers agree that taking an effective decision in advance requires a preliminary study of any educational proposal. multi-criteria decision making (mcdm) methods are increasingly used in various fields like natural resource management, environment, and spatial planning, making it possible to rely on science while taking managerial decisions and to drive decision-making processes in organized systems [1]. whether strategic, global, operational or local, decision making is generally made to manage organizations consistently: quantitatively (number of products or services offered) and qualitatively (development of standards, establishment of a charter). therefore, making a decision requires different alternatives that must be evaluated according to one or more criteria in order to determine the optimum [2]. these alternatives and indicators help enormously in decision-making and contribute greatly to the evolution of the future steps to take.
decision support is a scientific approach to decision-making problems that arise in any socio-economic context, in which the two main factors are the decision-maker who governs the decision-making process, and the analyst in charge of the study who intervenes on at least one of three important levels, namely the modeling of the decision problem, the design or adaptation of a procedure for exploiting the model, and the elaboration of a prescription from the solution(s) [3]. formerly, decision support was designed to find the solution to a given problem. today, it offers answers in the form of recommendations to the decision-makers of a decision-making process and allows them to make better choices [4]. operational research (or) offers a variety of decision support tools, and the most complex decisions and resource optimizations are made possible through a number of algorithmic approaches that iteratively build a solution, descent heuristics that seek a global optimum from a given solution, and metaheuristics that break down objectives to ease decision-making. these purely algorithmic resolution methods in the decision-making domain converge efficiently and quickly to a solution or choice [5]. indeed, or is a discipline of scientific methods that help in making a decision. it refers to notions that map our contemporary semantic and legal territory to the image of big data, e-reputation, or predictive algorithms [6]. corresponding author: n. benmoussa multicriteria decision support is a major area of study of or involving several schools of thought, mainly american [7]. these are mathematical methods for choosing the best or optimal solution among a whole set of solutions.
in this paper, we contribute by analyzing various mcdm methods, modeling the general process of accrediting academic training, developing a risk management matrix for the launch of new courses, and applying topsis on a sample of courses according to internal and external criteria collected during interviews, for an effective decision-making process within moroccan universities. ii. mcdm methods multicriteria decision aid (mcda) was created in the 1970s. it has aroused interest through its innovative approach, starting with single-criterion analysis and followed by the weighting of criteria and varied aggregation procedures for decision problems [8]. most conventional mcdm methods use parameters derived from the decision maker's preferences. these parameters are often used for weight calculation, quantifying the importance of each criterion in the multicriteria decision process. this domain is broken down into two subdomains: • madm (multi-attribute decision making) for selecting the best alternative in a predetermined set of alternatives. • modm (multi-objective decision making) concerning the selection of the best action in a continuous or discrete decision space. multi-objective optimization is a branch of modm [9]. the authors in [10] specify that in a multi-criteria decision support process, the main objective is not to find a solution, but to build or create a tool considered useful in the decision-making process. since then, mcdm methods have been used more and more, including maxmin, maxmax, saw, ahp, topsis, smart and electre [11], in order to make a choice, to classify, or to sort for effective decision making. the mcdm process goes through 6 steps, as shown in figure 1. fig. 1. general steps of mcdm methods. the test steps are important because they give an accurate assessment of the indicators. if an indicator's quality does not conform to the goals, it must be redefined and the test must be repeated.
below we present some decision support methods, with a concise description specifying their functioning and their limits. topsis will be detailed further since it constitutes the implementation tool for the data of our case study “accreditation and training management”. the analyzed methods' advantages and limitations are shown in table i. a. ahp (analytic hierarchy process) ahp is a semi-quantitative method that was developed in [12]; its computer version, the “expert choice” software, was introduced in the us in 1985. it is based on the comparison of pairs of options and criteria, structured in a logical coherence: • classes, criteria and hierarchical weights. • sub-criteria and ranks by priority. this involves comparing pairs of elements of each hierarchical level against an element of the higher hierarchical level. this step makes it possible to build comparison matrices. the values of these matrices are obtained by the transformation of judgments into numerical values according to the saaty scale [13], while respecting the principle of reciprocity: $P_c(E_A, E_B) = \frac{1}{P_c(E_B, E_A)}$ (1) b. smart (simple multiple attribute rating technique) smart is similar to ahp. it has been developed since 1971 as a hierarchical structure created to assist in defining a problem and in organizing criteria. the difference between a value tree and a hierarchy in ahp is that the value tree has a true tree structure, allowing one attribute or sub-criterion to be connected to only one higher-level criterion. its main steps are: • put the criteria in decreasing order of importance. • determine the weight of each criterion. • normalize the relative importance coefficients between 0 and 1: sum the importance coefficients and divide each weight by this sum. • measure the location of each action on each criterion ($u_j(a_i)$). actions are evaluated on a scale ranging from 0 (plausible minimum) to 100 (plausible maximum).
• determine the value of each action using the following weighted sum: $U(a_i) = \sum_{j=1}^{n} \pi_j u_j(a_i)$, i = 1, 2, …, m (2) • classify the actions in decreasing order of $U(a_i)$. c. topsis topsis was presented in [14] and developed later in [15, 16]. it is worth noting that it corresponds to the hellwig taxonomic method of ordering objects [17]. the main advantages of this method are that it is a simple, rational, comprehensible concept, with an intuitive and clear logic that represents the rationale of human choice. in this method, two reference alternatives are hypothesized: the ideal solution, which has the best values for all attributes, and the negative ideal solution, which has the worst attribute values. the topsis method prioritizes alternatives based on their geometric distance from the positive-ideal and negative-ideal solutions. it reduces the need for pairwise comparisons, and capacity limitations do not significantly dominate the process. therefore, it is appropriate for cases with a large number of criteria and alternatives, especially when objective or quantitative data are available [18]. topsis breaks the decision down into different stages: 1) formation of the decision matrix the criterion outcomes of the decision alternatives are collected in a decision matrix. the matrix rows represent decision alternatives and the matrix columns represent criteria. the value found at the intersection of a row and a column represents a criterion outcome: a measured or predicted performance of a decision alternative on a criterion.
$X = \begin{bmatrix} x_{11} & \cdots & x_{1j} & \cdots & x_{1n} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{ij} & \cdots & x_{in} \\ \vdots & & \vdots & & \vdots \\ x_{m1} & \cdots & x_{mj} & \cdots & x_{mn} \end{bmatrix}_{m \times n}$ (3) where $x_{ij}$ is the performance rating of alternative i with respect to criterion j, $A_i$ is the ith alternative (rows $A_1, \ldots, A_m$) and $C_j$ is the jth criterion (columns $C_1, \ldots, C_n$). 2) formation of the weight matrix different importance weights for the various criteria may be assigned by the decision maker independently or by the entropy method. these importance weights form the weight matrix: $W = [w_1 \ldots w_j \ldots w_n]$ (4) 3) normalization of performance ratings the units and dimensions of the performance ratings differ between criteria. for comparison, these performance ratings are converted into dimensionless units by normalization using the following equations: $\bar{x}_{ij} = \frac{x_{ij}}{\max_i(x_{ij})}$ (5) for benefit criteria j, and $\bar{x}_{ij} = \frac{\min_i(x_{ij})}{x_{ij}}$ (6) for non-benefit criteria j. finally, the normalized decision matrix is formed: $\bar{X} = [\bar{x}_{ij}]_{m \times n}$ (7) 4) determination of the positive ideal and negative ideal solutions $A^+ = (a_1^+, a_2^+, \ldots, a_n^+)$, $a_j^+ = \max_{1 \le i \le m}(\bar{x}_{ij})$, j = 1, 2, …, n (8) $A^- = (a_1^-, a_2^-, \ldots, a_n^-)$, $a_j^- = \min_{1 \le i \le m}(\bar{x}_{ij})$, j = 1, 2, …, n (9) 5) calculation of the separation measures using the n-dimensional euclidean distance, the separation of each alternative from the positive ideal solution is given as: $D_i^+ = \sqrt{\sum_{j=1}^{n} W_j (a_j^+ - \bar{x}_{ij})^2}$ (10) similarly, the separation from the negative ideal solution is given as: $D_i^- = \sqrt{\sum_{j=1}^{n} W_j (a_j^- - \bar{x}_{ij})^2}$ (11) 6) calculation of the closeness ratio for each alternative, the ratio $R_i$ is calculated as: $R_i = \frac{D_i^-}{D_i^+ + D_i^-}$, i = 1, 2, …, m (12) 7) ranking of the alternatives in decreasing order of the ratio $R_i$ (the alternative with the highest $R_i$ is closest to the ideal solution). d. electre i & ii electre i (elimination and choice translating reality) was developed in 1968 and electre ii in 1971 [19]. both versions are based on the notions of concordance and discordance. electre is a non-compensatory method of multicriteria decision support.
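the topsis stages (3)-(12) described above can be sketched compactly in numpy. this is an illustrative sketch only: the sample data (three hypothetical candidate courses scored on staff availability, market demand, and running cost) and the weights are invented for the example, not taken from the paper's interview data.

```python
import numpy as np

def topsis(X, weights, benefit):
    """topsis ranking following the stages above: max/min normalization,
    weighting, ideal and negative-ideal solutions, euclidean separations,
    and the closeness ratio R_i (higher = closer to the ideal)."""
    X = np.asarray(X, float)
    W = np.asarray(weights, float) / np.sum(weights)
    # (5)-(6): x/max for benefit criteria, min/x for cost criteria
    norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    a_pos, a_neg = norm.max(axis=0), norm.min(axis=0)   # (8)-(9)
    d_pos = np.sqrt((W * (a_pos - norm) ** 2).sum(axis=1))  # (10)
    d_neg = np.sqrt((W * (a_neg - norm) ** 2).sum(axis=1))  # (11)
    return d_neg / (d_pos + d_neg)                          # (12)

# hypothetical course sample: rows = candidate courses, columns = criteria
# (staff availability, market demand, running cost); cost criterion last
X = [[8, 7, 3],
     [6, 9, 5],
     [9, 6, 8]]
ratios = topsis(X, weights=[0.4, 0.4, 0.2], benefit=np.array([True, True, False]))
print("closeness ratios:", np.round(ratios, 3))
print("ranking (best first):", np.argsort(ratios)[::-1])
```

with these invented scores, the first course combines good staffing with the lowest running cost and therefore comes out with the highest closeness ratio.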
in reality, the decision maker is often undecided, and his preferences evolve because the decision is the result of a process of micro-decisions. the optimum can only be achieved if three conditions are met: • the different strategies (projects) proposed to the decision maker are distinct. • the strategies are stable over time. • comparability is transitive. e. electre iii electre iii is based on fuzzy logic and a constructive approach which classifies actions. it favors [20]: • dialogue between the different actors in the decision-making process. • weighting of criteria by factors expressing preferences on resource management strategies. • consideration of uncertainty in the evaluation of actions through pseudo-criteria. f. electre iv this method assumes that all pseudo-criteria are of equal importance. it involves two outranking relations, like electre ii, but only one set of veto thresholds, and the notion of concordance is translated into a notion of a majority of criteria in the absence of any weighting. table i. mcdm methods: advantages and limitations ahp: a popular method that has been subjected to criticism regarding the explosion of the number of pairwise comparisons in the case of a complex problem, the reversal of rank (order of priority of the actions) in the case of addition or deletion of alternatives, and the introduction of biases by the association of a numerical scale with the semantic scale, which is restrictive. currently, ahp is subject to several extensions, such as the consideration of uncertainty (stochastic ahp) and fuzziness (fuzzy ahp) in the expression of judgments.
smart: similar to ahp and easy to use, but requires a priori articulation of preferences and evaluation of actions on a single (cardinal) scale. it is compensatory and has been implemented in criterium decision plus 3.0 and decide right for automatic management. topsis: an easier method to apply and responsive to the decision maker's wishes. however, the attributes must be cardinal in nature and preferences are fixed a priori. on the other hand, if all the actions are bad, the method proposes the best of these bad actions. electre i: it formalizes well the process of human reasoning, but has the disadvantage of using quantitative weights for the importance of the different criteria. despite the contribution of this method, the decision maker still faces the difficult task of providing these weights. electre ii: it replaces the classic outranking relation with two new relations, namely strong outranking and weak outranking. electre iii: it introduces the notion of pseudo-criteria, which replace the classical criteria. the pseudo-criteria are modeled by functions whose expression is close to the membership functions known in the field of fuzzy logic. electre iv: distinguished by its ability to dispense with the weights associated with each criterion. however, this benefit is tempered by the need to determine a “credibility degree” associated with each outranking relation used. gra: uses a specific concept of information. it defines situations with no information as black, and those with perfect information as white. neither of these idealized situations ever occurs in real world problems; situations between these extremes are described as being grey, hazy or fuzzy. promethee-gaia: unlike the outranking relation constructed by the electre method, which is purely binary, the relation constructed by promethee is a valued outranking relation: one action outranks another with a numerical preference intensity.
in 1989, gaia provided a descriptive complement to the promethee rankings. using a graphical representation of the multicriteria problem, the decision maker can easily understand which choices are possible and which trade-offs are required to make a good decision. promethee-gaia requires less parameterization while remaining just as efficient. it makes it possible to stay closer to the real decision problem, to describe it better, and to carry out sensitivity analyses.
g. grey relational analysis (gra)
also called deng's grey incidence analysis model [21], gra uses a specific concept of information. it defines situations with no information as black, and those with perfect information as white. however, neither of these idealized situations ever occurs in real-world problems. in fact, situations between these extremes are described as being grey or fuzzy [22]. the scope of the grey system involves agriculture, ecology, economics, meteorology, medicine, history, geography, industry, earthquakes, geology, hydrology, irrigation, strategy, military affairs, sport, traffic, management, materials science, environment, biological protection, and the judicial system.
h. promethee
preference ranking organization method for enrichment of evaluations (promethee) is part of the family of outranking methods and allows two particular mathematical treatments: partial ranking (promethee i) and complete ranking (promethee ii). with their descriptive complement, geometrical analysis for interactive aid, they are better known under the names promethee and gaia [23]. they are multi-criteria decision-support methods that belong to the family of outranking methods initiated by the electre methods. the promethee and gaia methods offer a prescriptive and descriptive approach to the analysis of discrete multicriteria problems covering several areas.
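the black/grey/white idea of the gra description in section g can be made concrete with deng's grey relational grade: each alternative series is scored by its closeness to an ideal reference series. the sketch below is illustrative only; the series values are hypothetical assumptions, and zeta = 0.5 is the customary distinguishing coefficient.

```python
# minimal sketch of deng's grey relational grade; the series below are
# hypothetical normalized scores, not data from the paper.

def grey_relational_grades(reference, series, zeta=0.5):
    """grey relational grade of each comparison series against the reference."""
    # absolute deviation of every series from the reference at every point
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    flat = [d for row in deltas for d in row]
    # "or 1.0" guards the degenerate case where every series equals the reference
    d_min, d_max = min(flat), max(flat) or 1.0
    grades = []
    for row in deltas:
        # grey relational coefficient per point, averaged into a grade
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# the "white" reference is perfect information (all scores equal to 1.0)
ref = [1.0, 1.0, 1.0]
alternatives = [[0.9, 0.8, 1.0], [0.5, 0.6, 0.4]]
grades = grey_relational_grades(ref, alternatives)  # closer series get higher grades
```

a series identical to the reference would receive a grade of 1.0; the further a series drifts into the "grey" zone, the lower its grade.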
in fact, 217 scientific articles from 100 journals mention their fields of application in environmental management, hydrology and water management, commercial and financial management, chemistry, logistics and transport, manufacturing and assembly, energy management, and other topics such as medicine, agriculture, education, design, government, and sport [24, 25].
iii. research methodology
the proposed approach concerns the field of higher education in general and the moroccan universities in particular. it meets the obligation to set up indicators before launching a new course. indeed, the objective of this study is to contribute to the optimization of resources and especially to the evaluation of existing training, as well as to the decision-making concerning the courses to be accredited. for the understanding of the accreditation process, we modeled the demands management standard and developed the corresponding risk management matrix. in addition, we studied university specifications and conducted interviews with university officials on the basis of internal and external criteria that are essential for the training. these data will be presented and commented upon. for the evaluation of our proposal, we applied topsis on a sample of courses.
a. model of accreditation of innovative university courses
compliance with the standards and criteria stipulated by the national agency for evaluation and quality assurance of higher education and scientific research (aneaq) is essential for the acceptance of the training proposed by any institution. the following model presents the general process for the processing of accreditation requests for training and the main conditions to be taken into consideration in order to avoid refusals or major revisions: at least 1 teacher per higher grade, the appropriate hourly volume in theory and practice, as well as internships and partnerships to prepare profiles for the professional world, as shown in figure 2.
b.
risk management matrix
the interviewees unanimously expressed the usefulness of the designed risk management matrix, which specifies not only the main risks to be considered, but also the responsibility and the actions to be taken to make the decision effective. they completed it and judged that these risks are to be minimized or even avoided and must be taken into consideration before any training proposal (table ii).
fig. 2. processing of training accreditation applications
table ii. risk management matrix (priority rating = probability rating × severity rating)
risk | probability rating | severity rating | priority rating | responsibility | action to be taken
lack of classrooms | 3/5 | 4/5 | 12/25 | university | construction enlargement
lack of human resources | 4/5 | 4/5 | 16/25 | establishment | continuing education session
lack of equipment | 4/5 | 5/5 | 20/25 | establishment/university | acquisition
the risk management matrix in table ii clearly shows that teaching equipment and human resources are the most critical risks, with respective priority ratings of 20/25 and 16/25, while the risk of a lack of rooms can be addressed by adequate automated planning or an expansion of the establishment. thus, the managers must establish a policy of continuous training of human resources according to the evolution of the market, and an adequate budgetary strategy for the educational equipment necessary for each department. to make the study relevant, we have defined, in addition to these indicators, internal and external criteria based on the program specifications and the interviewee responses.
c.
internal and external criteria
table iii provides the main internal criteria of moroccan university courses. the main objective of the departments is the adequacy of the proposals with the university strategy, which aims to reach 100% in terms of innovation and development in order to meet the expectations of the market and encourage scientific research. table iii illustrates the results of the interviews based on the specifications and the actual estimate for each of the criteria. we were able to determine 7 internal and 3 external criteria. the internal ones are part of the school's strategy and are the key to the success of the existing and future training. the external ones are the conditions to be respected for alignment with the standards put in place concerning requests for the accreditation of new training. these criteria will allow university officials to detect, qualitatively and quantitatively, the defects, and to report on the improvement of the situation and good planning. together with the ratings in the risk management matrix and the standards in the accreditation processing model, the vision will be clear in identifying priority areas for improvement and avoiding inefficient decision-making. the external criteria are shown in table iv and complement the internal ones in order to prepare competent profiles in line with the socioeconomic environment and in respect of the aneaq standards. they concern especially the rank of human resources, the hourly volume, and the content, which must be both theoretical and practical. table iv shows that partnership is essential and must be developed for all courses in order to gain knowledge of the market, ease integration into active life, and provide internships. it is therefore necessary to maximize the agreements and partnerships with companies and administrations.
internal and external criteria, partnership and internships considerably influence the decision to be taken and constitute the key to evaluating existing training cards and launching new training. they will be implemented via topsis in order to conclude with recommendations of interest for our universities.
table iii. main internal criteria
criteria: motivation (course innovation, market needs), human resources (team experience), technical resources (class rooms, workshops, equipment)
bdcc | 95% | 95% | 50% | 100% | 65% | 30% | 40%
glsid | 90% | 80% | 65% | 90% | 65% | 40% | 45%
mli | 85% | 80% | 65% | 90% | 65% | 35% | 40%
gmsi | 75% | 85% | 70% | 90% | 70% | 50% | 45%
sid | 70% | 80% | 75% | 75% | 65% | 40% | 40%
gecsi | 80% | 85% | 65% | 90% | 70% | 45% | 45%
gmasi | 75% | 80% | 70% | 90% | 70% | 50% | 45%
table iv. external criteria
alignment with standards (grade | vh | content) | partnership | traineeship
2 pes | 100% | t/p | yes | yes
3 pa | 100% | t/p | yes | yes
2 pa | 100% | t/p | yes | yes
1 pa | 100% | t/p | not yet | yes
4 pa | 100% | t/p | not yet | yes
3 pa | 100% | t/p | not yet | yes
1 pa | 100% | t/p | yes | yes
1 pa | 100% | t/p | not yet | yes
d. topsis implementation
after the identification of the alternatives and criteria, the results of our interviews will be implemented under the topsis mcdm method in order to identify the importance coefficients and prioritize the best decision. we will then calculate the weighted scores for each of the formations according to the standardized criteria and values in order to prioritize and optimize the different formations.
1) formation of the decision matrix
the choice of alternatives and criteria is the most important step in decision making. indeed, selecting the key indicators is a basis that will allow university officials to prioritize and optimize actions for better management. the alternative and criteria codes are presented in figure 3.
2) formation of the weight matrix
table v shows the selected alternatives and the scoring of each alternative on the different criteria. this dataset is used as the decision matrix, from which the normalized decision matrix is calculated (table vi).
fig. 3. codification of internal and external criteria
table v. weight matrix
aij | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c9 | c10
a1 | 95 | 95 | 50 | 90 | 65 | 30 | 40 | 43 | 1 | 1
a2 | 95 | 100 | 65 | 100 | 80 | 80 | 45 | 43 | 1 | 1
a3 | 85 | 80 | 65 | 90 | 65 | 35 | 40 | 38 | 1 | 1
a4 | 75 | 85 | 70 | 90 | 70 | 50 | 45 | 38 | 0 | 1
a5 | 70 | 80 | 75 | 75 | 65 | 40 | 40 | 43 | 0 | 1
a6 | 80 | 85 | 65 | 80 | 70 | 45 | 45 | 26 | 0 | 1
a7 | 75 | 80 | 70 | 90 | 70 | 50 | 45 | 38 | 1 | 1
3) determination of the positive and negative ideal solutions
the positive and negative ideal solutions, a+ and a−, are defined from the normalized decision matrix (table vii).
4) calculation of the separation measures
the separation distance of each competitive alternative is calculated in table viii.
5) ratio calculation
the relative closeness of each alternative to the topsis ideal solution is measured in table ix.
6) classification of the actions
the alternatives are ranked in decreasing order (table x).
table vi. performance rating rij
a1 | 0.43 | 0.41 | 0.29 | 0.39 | 0.35 | 0.23 | 0.35 | 0.42 | 0.50 | 0.38
a2 | 0.43 | 0.44 | 0.37 | 0.43 | 0.44 | 0.61 | 0.40 | 0.42 | 0.50 | 0.38
a3 | 0.39 | 0.35 | 0.37 | 0.39 | 0.35 | 0.27 | 0.35 | 0.37 | 0.50 | 0.38
a4 | 0.34 | 0.37 | 0.40 | 0.39 | 0.38 | 0.38 | 0.40 | 0.37 | 0.00 | 0.38
a5 | 0.32 | 0.35 | 0.43 | 0.32 | 0.35 | 0.31 | 0.35 | 0.42 | 0.00 | 0.38
a6 | 0.37 | 0.37 | 0.37 | 0.34 | 0.38 | 0.34 | 0.40 | 0.25 | 0.00 | 0.38
a7 | 0.34 | 0.35 | 0.40 | 0.39 | 0.38 | 0.38 | 0.40 | 0.37 | 0.50 | 0.38
table vii. positive and negative ideal solutions
a+ (vmax) | 0.43 | 0.44 | 0.43 | 0.43 | 0.44 | 0.61 | 0.40 | 0.42 | 0.50 | 0.38
a− (vmin) | 0.32 | 0.35 | 0.29 | 0.32 | 0.35 | 0.23 | 0.35 | 0.25 | 0.00 | 0.38
table viii. separation measures
course code | smin | smax
a1 | 0.546676 | 0.420843
a2 | 0.686478 | 0.057166
a3 | 0.53037 | 0.379255
a4 | 0.240516 | 0.568416
a5 | 0.231595 | 0.61971
a6 | 0.161789 | 0.609396
a7 | 0.554413 | 0.276441
table ix.
ratio values
course code | coefficient | ranking
a1 | 0.923127 | 1
a2 | 0.667281 | 2
a3 | 0.583065 | 3
a4 | 0.565028 | 4
a5 | 0.297325 | 5
a6 | 0.272047 | 6
a7 | 0.209792 | 7
table x. ranking values
course | coefficient | ranking
glsid | 0.923127 | 1
gmasi | 0.667281 | 2
mli | 0.583065 | 3
bdcc | 0.565028 | 4
gmsi | 0.297325 | 5
sid | 0.272047 | 6
gecsi | 0.209792 | 7
iv. discussion
compensatory methods such as topsis allow trade-offs between criteria, where a poor result in one criterion can be offset by a good result in another. this provides a more realistic form of modeling than non-compensatory methods. using topsis, the collected results address our problem, which aims to innovate in the field of university training, whether initial or continuous, and above all to prioritize and optimize actions in order to remedy the difficulties encountered before, during, and after the launch of a new course. figure 4 illustrates the degree of prioritization of the different courses according to the studied criteria.
fig. 4. prioritization of the courses (glsid 92.31%, gmasi 66.73%, mli 58.31%, bdcc 56.50%, gmsi 29.73%, sid 27.20%, gecsi 20.98%)
the result of the proposed method shows that the first 4 courses are favored by several factors that can be summarized as:
• relevant course modules
• interesting potential of human resources
• strong demand for these specialties on the market
however, for all courses, we must focus on the constraints of material and teacher specialties. in addition, some modules may be enriched, deleted or replaced. the results show that the engineering, industry and big data courses rank first in relation to the others because all criteria are fulfilled.
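the six topsis steps above can be sketched end-to-end in a few lines. this is a minimal illustration, not the paper's implementation: it assumes equal criterion weights and treats all ten criteria as benefit criteria, so the numbers it produces will not reproduce tables vi-x exactly.

```python
import math

def topsis(matrix, weights=None):
    """rank alternatives with topsis: vector-normalize, weight, locate the
    positive/negative ideal solutions, measure separations, and return the
    relative closeness of each alternative."""
    n_crit = len(matrix[0])
    weights = weights or [1.0 / n_crit] * n_crit  # assumed equal weights
    # 1-2) normalized, then weighted, decision matrix
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0 for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # 3) positive and negative ideal solutions (all criteria assumed benefit-type)
    a_pos = [max(row[j] for row in v) for j in range(n_crit)]
    a_neg = [min(row[j] for row in v) for j in range(n_crit)]
    # 4) euclidean separation of each alternative from the two ideals
    s_pos = [math.dist(row, a_pos) for row in v]
    s_neg = [math.dist(row, a_neg) for row in v]
    # 5) relative closeness to the ideal solution
    return [sn / (sp + sn) if sp + sn else 0.0 for sp, sn in zip(s_pos, s_neg)]

# decision matrix from table v (alternatives a1..a7, criteria c1..c10)
A = [
    [95, 95, 50, 90, 65, 30, 40, 43, 1, 1],
    [95, 100, 65, 100, 80, 80, 45, 43, 1, 1],
    [85, 80, 65, 90, 65, 35, 40, 38, 1, 1],
    [75, 85, 70, 90, 70, 50, 45, 38, 0, 1],
    [70, 80, 75, 75, 65, 40, 40, 43, 0, 1],
    [80, 85, 65, 80, 70, 45, 45, 26, 0, 1],
    [75, 80, 70, 90, 70, 50, 45, 38, 1, 1],
]
closeness = topsis(A)
# 6) alternatives ranked in decreasing order of closeness
ranking = sorted(range(len(A)), key=lambda i: -closeness[i])
```

with equal weights this ranks a2 first, since it attains the best score on nearly every criterion; any interview-derived weighting would shift the coefficient values.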
this explains why, before launching a new course, we have to focus on the studied indicators and evaluate the limits of the human and material resources which are at the origin of the success of any course. in addition, we must prioritize these courses because they are innovative and meet current and immediate market expectations. admittedly, they require more teaching resources and considerable experience in the field, but the situation can be improved by setting up performance tools such as balanced scorecards. our method allowed the evaluation of possible alternative solutions for the continuous improvement of the performance of the studied formations. the proposed framework can help universities identify the strengths and weaknesses of their human and material resources. it also makes it easier for decision-makers to plan future strategies to develop educational courses and identify the best practices for each course while respecting current strategies, the job market, and the technological evolution.
v. conclusion
the main goal of this study was to provide a decision-making tool for the effective management of the initial and continuing training map. we focused on the case of multicriteria decision-making in the system of applications for the accreditation of innovative training, in order to prioritize decisions in the face of the incessant changes in the labor market. it should be noted that the interaction between the different stakeholders (decision makers, department heads and teachers) is always present. in this context, we opted for mcdm methods in general and topsis in particular, which uses precise parameters to calculate the performance of each studied element. using this tool, we were able to quantify the internal and external criteria identified from specifications and interviews. these indicators were determined and can be expanded in the future according to the academic needs and standards.
they enabled decision-makers to target the best alternative and also to correct those that are interesting but have lower results. in conclusion, the multicriteria aspect and the proposed approach, combining the analysis of internal/external criteria via the topsis tool, made it possible to select and prioritize the actions to be taken for an effective decision. these coefficients will help university officials in determining the value of the different alternatives in order to focus human and material resources on the emergencies to be addressed and the innovative trainings to be planned. as for future work, this same approach can be completed by a calculation of the weights of each of the announced criteria, or taken over with fuzzy topsis or promethee gaia for more detail, especially the management of uncertainties and the sensitivity of situations through a prescriptive and descriptive approach, and with scorecards for helping in decision-making.
references
[1] m. diaby, f. valognes, a. clement-demange, "utilisation d'une methode multicritere d'aide a la decision pour le choix des clones d'hevea a planter en afrique", biotechnologie, agronomie, societe et environnement, vol. 14, no. 2, pp. 299-309, 2010 (in french)
[2] d. vanderpooten, aide multicritere a la decision: concepts, methodes et perspectives, ens cachan, 2008 (in french)
[3] s. b. mena, "introduction aux methodes multicriteres d'aide a la decision", biotechnologie, agronomie, societe et environnement, vol. 4, no. 2, pp. 83-93, 2000 (in french)
[4] j. ninin, l. mazeau, "la recherche operationnelle: de quelques enjeux juridiques des mecanismes d'aide a la decision", lex electronica, vol. 22, pp. 57-79, 2017 (in french)
[5] d. c. porumbel, "algorithmes heuristiques et techniques d'apprentissage: applications au probleme de coloration de graphe", phd thesis, universite d'angers, 2009 (in french)
[6] b.
roy, regard historique sur la place de la recherche operationnelle et de l'aide a la decision en france, universite paris-dauphine, 2006 (in french)
[7] t. l. saaty, "decision making with the analytic hierarchy process", international journal of services sciences, vol. 1, no. 1, pp. 83-98, 2008
[8] b. roy, h. aissi, robustesse en aide multicritere a la decision, universite paris-dauphine, 2008 (in french)
[9] a. appriou, "methodologie de la gestion intelligente des senseurs", traitement du signal, vol. 22, no. 4, pp. 305-306, 2005 (in french)
[10] b. roy, d. bouyssou, aide multicritere a la decision: methodes et cas, economica, 1993 (in french)
[11] m. hanine, o. boutkhoum, a. tikniouine, t. agouti, "application of an integrated multi-criteria decision making ahp-topsis methodology for etl software selection", springerplus, vol. 5, pp. 1-17, 2016
[12] t. l. saaty, l. g. vargas, models, methods, concepts & applications of the analytic hierarchy process, springer, 2001
[13] f. tscheikner-gratl, p. egger, w. rauch, m. kleidorfer, "comparison of multi-criteria decision support methods for integrated rehabilitation prioritization", water, vol. 9, no. 2, pp. 1-28, 2017
[14] d. ozturk, f. batuk, "technique for order preference by similarity to ideal solution (topsis) for spatial decision problems", isprs 4th international workshop, trento, italy, march 2-4, 2011
[15] a. mardani, a. jusoh, k. nor, z. khalifah, n. zakwan, alireza, "multiple criteria decision-making techniques and their applications", economic research-ekonomska istrazivanja, vol. 28, no. 1, pp. 516-571, 2015
[16] t. jolanta, z. edmundas, t. zenonas, p. vainiunas, "multi-criteria complex for profitability analysis of construction projects", economics and management, vol. 16, pp. 969-973, 2011
[17] z. edmundas, m. abbas, t. zenonas, j. ahmad, n. khalil, "development of topsis method to solve complicated decision-making problems", international journal of information technology & decision making, vol. 15, no.
3, pp. 645-682, 2016
[18] g. o. odu, o. e. charles-owaba, "review of multi-criteria optimization methods – theory and applications", iosr journal of engineering, vol. 3, no. 10, pp. 1-14, 2013
[19] d. ayadi, optimisation multicritere de la fiabilite: application du modele de goal programming avec les fonctions de satisfactions dans l'industrie de traitement de gaz, phd thesis, universite d'angers, 2010 (in french)
[20] k. solecka, "electre iii method in assessment of variants of integrated urban public transport system in cracow", transport problems: an international scientific journal, vol. 9, no. 4, pp. 83-96, 2014
[21] l. sifeng, c. hua, c. ying, y. yingjie, "advance in grey incidence analysis modelling", 2011 ieee international conference on systems, man, and cybernetics, anchorage, usa, october 9-12, 2011
[22] f. y. ma, "analysis of energy efficiency operational indicator of bulk carrier operational data using gray relation method", journal of oceanography and marine science, vol. 5, no. 4, pp. 30-36, 2014
[23] s. greco, m. ehrgott, j. r. figueira, multiple criteria decision analysis: state of the art surveys, springer, 2005
[24] s. c. deshmukh, "preference ranking organization method of enrichment evaluation (promethee)", international journal of engineering science invention, vol. 2, no. 11, pp. 28-34, 2013
[25] m. zare, c. pahl, h. rahnama, m. nilashi, a. mardani, o. ibrahim, h. ahmadi, "multi-criteria decision making approach in e-learning: a systematic review and classification", applied soft computing, vol. 45, pp. 108-128, 2016
engineering, technology & applied science research vol. 10, no.
2, 2020, 5419-5422
simple implementation of a fuzzy logic speed controller for a pmdc motor with a low cost arduino mega
kamel salim belkhir
department of electrical engineering, faculty of technology, university ferhat abbas setif 1, setif, algeria
ksbelkhir@univ-setif.dz
abstract—control of the permanent magnet direct current (pmdc) motor is a common practice, hence the importance of the implementation of a pmdc motor speed controller. the results of a fuzzy logic speed controller for the pmdc motor rely on an appropriate rule base. as the dimension of the rule base increases, its complexity rises, which affects computation time and memory requirements. a fuzzy logic controller (flc) can be carried out by a low-cost arduino mega, which has a small flash memory and a maximum clock speed of 16mhz. the controller is realized with three fuzzy variables, each divided into three membership functions. the results of the flc are satisfactory, revealing superior transient and steady-state performance. in addition, the controller is robust to speed mode variations.
keywords-fuzzy logic; pmdc motor; arduino mega
i. introduction
permanent magnet direct current (pmdc) motors have long been commonly used in the industrial control area, due to their high performance and the fact that the torque is directly proportional to the field flux, which means that the speed can be adjusted by the terminal voltage [1]. in classical pid controllers, proportional, derivative, and integral control actions are applied together. they are simple in construction and are appropriate for controlling processes with well-defined mathematical models. generally, the exact model is not available, so the conventional pid controller may not be the best choice [2-5]. to overcome this problem, the flc is proposed, as it does not rely on a mathematical model [6].
it can be successfully applied to control nonlinear systems using basic engineering logic [7-8]. with the rapid development of microprocessors and semiconductor materials, many classical and intelligent control techniques can be applied to control the speed of a dc motor in order to achieve high performance. a minor overshoot and a quicker response of the motor speed signal were reported for fuzzy pid control compared to classic pid control [9]. a neuro-fuzzy controller performed better than a pid controller under different loads [10]. other studies compared h2 and h∞ control methods for a dc motor [11], applied an adaptive robust control method to a dc motor [12], and used hybrid control methods for the speed control of a dc motor [13]. these techniques offer a good instrument for the control of nonlinear systems that are hard to model. when choosing flc parameters, several problems occur. among the proposed solutions, adaptive controllers are able to adjust themselves to reach optimum performance [14-17]. however, as the number of rules and membership functions for fuzzy logic increases, calculation speed and memory storage become serious problems [18-19], so excessive memory and high-speed hardware become necessary for the implementation. fuzzy logic controllers require more processing power to work in real time as the number of inputs/outputs of the controller increases. in this case, conventional microprocessors are not adequate for most real-time applications. in [20], in order to control the system, a quanser q8 data acquisition card along with high performance computers were used for sending analog signals and reading encoder signals. in [21], the data exchange with the pc was provided by a ni usb-6812 daq card, which sent the signals that it received from the feedback tacho generator and control signals to the dc motor control module. in both cases, vast resources were used for the control of a dc motor.
in this context, the main contribution of the current work is the implementation of the proposed flc on a conventional and low-cost arduino mega board. the controller uses three fuzzy variables, each divided into three membership functions.
ii. control strategy
the pmdc motor simulink program given in figure 1 was implemented using the following structures. the inputs of the flc are the speed error e, obtained by subtracting the measured pmdc motor speed ω from the reference speed ω*, and its change de. e(t) and de(t) for each sampling time t are given as:
e(t) = ω*(t) − ω(t)    (1)
de(t) = e(t) − e(t − 1)    (2)
the output dv is the change in the armature voltage v:
dv(t) = v(t) − v(t − 1)    (3)
corresponding author: kamel salim belkhir
fig. 1. control speed strategy
the linguistic terms used to represent the inputs and output are defined by three variables: negative (n), zero (z) and positive (p). triangular and gaussian membership functions are used. according to the rated speed of the pmdc motor, the universe of discourse of the speed error is between -3000rpm and 3000rpm. the supported rules are shown in table i.
table i. final rules
e \ de | n | z | p
n | n | p | p
z | n | z | p
p | n | n | p
after adjustments, the final membership functions are obtained (figure 2). the center of gravity of the membership functions of dv is used as the defuzzification method.
fig. 2. input and output membership functions: (a) error speed, (b) change in error speed, (c) change in armature voltage
iii. hardware implementation
the hardware block diagram is shown in figure 3 and the hardware implementation in figure 4.
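the structure of the controller, with three linguistic terms per variable, the 3×3 rule table, and centroid defuzzification, can be sketched as follows. this is an illustrative reconstruction, not the code deployed on the arduino: the triangular breakpoints, the output term centers, and the orientation of the printed rule table (rows taken as e, columns as de) are assumptions, since the tuned membership shapes of figure 2 are not reproduced in the text.

```python
# illustrative flc sketch: three terms (n, z, p) for e, de and dv,
# a 3x3 rule table, and center-of-gravity defuzzification.

def tri(x, a, b, c):
    """triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def terms(x, span):
    """degrees of the n, z, p terms over a symmetric universe [-span, span]."""
    return {
        "n": tri(x, -2.0 * span, -span, 0.0),
        "z": tri(x, -span, 0.0, span),
        "p": tri(x, 0.0, span, 2.0 * span),
    }

# rule table: output term for dv for each (e term, de term) pair
RULES = {
    ("n", "n"): "n", ("n", "z"): "p", ("n", "p"): "p",
    ("z", "n"): "n", ("z", "z"): "z", ("z", "p"): "p",
    ("p", "n"): "n", ("p", "z"): "n", ("p", "p"): "p",
}
DV_CENTERS = {"n": -1.0, "z": 0.0, "p": 1.0}  # assumed normalized output centers

def flc(e, de, e_span=3000.0, de_span=3000.0):
    """change dv in armature voltage from the speed error e and its change de."""
    mu_e, mu_de = terms(e, e_span), terms(de, de_span)
    num = den = 0.0
    for (te, tde), tdv in RULES.items():
        w = min(mu_e[te], mu_de[tde])   # mamdani min as the rule firing strength
        num += w * DV_CENTERS[tdv]      # singleton output centers approximate
        den += w                        # the center-of-gravity defuzzification
    return num / den if den else 0.0
```

the deployed controller would scale dv to the pwm duty cycle and accumulate it into the armature voltage per (3); with only nine rules the lookup is cheap enough for a 16mhz microcontroller.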
a matlab program was used to transfer the flc algorithm to the arduino mega card (figure 5), which received the feedback signal from the tachometer and sent a pwm signal to the pmdc motor via a dc-dc converter module. the motor specifications are listed in table ii. the pmdc motor was tested at no-load, 30% load, and 95% load.
fig. 3. hardware block diagram
fig. 4. hardware implementation
fig. 5. arduino mega
table ii. pmdc motor parameters
rated power | 3.8w
rated voltage | 20v
rated speed | 3000rpm
iv. experimental results
an experimental test was set up to demonstrate the performance of the flc. the output voltage ranged from 0 to 3v to control the motor speed from 0 to 3000rpm.
fig. 6. operation at various speeds and loads: (a) 1000rpm, 0%, (b) 2000rpm, 0%, (c) 2600rpm, 0%, (d) 1000rpm, 30%, (e) 1800rpm, 95%
the aim was to make the pmdc motor operate at a constant speed under various loads. in order to test whether the system could remain at a constant speed under no-load, the motor was operated at 1000rpm, 2000rpm, and 2600rpm (figures 6(a), 6(b) and 6(c) respectively). it was observed that the oscillations were low. it was also shown that the motor achieves the desired speed in less than 5s with an overshoot of less than 30%. as a next step, the motor was operated at 1000rpm under a 30% load applied at 24s (figure 6(d)) and at 1800rpm under a 95% load applied at 22s (figure 6(e)). in the latter case, even though the motor was almost fully loaded, the flc kept the speed constant with no oscillations. the pmdc motor was regulated by the controller and still rotated at the desired speed. the fuzzy controller was capable of realizing intelligent control at each speed. explicitly, the speed overshoot and the swings were small.
v.
conclusion
in this paper, a low-cost flc was designed to control the speed of a pmdc motor. the design greatly reduces the necessary hardware to a simple conventional arduino mega, and the program used was rather simple. in contrast with some distinct fuzzy controllers with several rules and membership functions running on computer systems, a simple flc using a small number of rules and a simple implementation program was able to control the speed of the pmdc motor. the controller shows good performance in tracking the reference speed and in reducing the steady-state error.
references
[1] n. matsui, "sensorless pm brushless dc motor drives", ieee transactions on industrial electronics, vol. 43, no. 2, pp. 300-308, 1996
[2] a. w. nasir, i. kasireddy, a. k. singh, "real time speed control of a dc motor based on its integer and non-integer models using pwm signal", engineering, technology & applied science research, vol. 7, no. 5, pp. 1980-1986, 2017
[3] m. ndje, j. m. nyobe yome, a. t. boum, l. bitjoka, j. c. kamgang, "dynamic matrix control and tuning parameters analysis for a dc motor system control", engineering, technology & applied science research, vol. 8, no. 5, pp. 3416-3420, 2018
[4] l. a. zadeh, "outline of a new approach to the analysis of complex systems and decision processes", ieee transactions on systems, man, and cybernetics, vol. smc-3, no. 1, pp. 28-44, 1973
[5] e. gowthaman, c. d. balaji, "self tuned pid based speed control of pmdc drive", 2013 international multi-conference on automation, computing, communication, control and compressed sensing, kottayam, india, march 22-23, 2013
[6] r. kushwah, s. wadhwani, "speed control of separately excited dc motor using fuzzy logic controller", international journal of engineering trends and technology, vol. 4, no. 6, pp. 2518-2523, 2013
[7] a. h. o.
ahmed, "optimal speed control for direct current motors using linear quadratic regulator", journal of science and technology, vol. 13, no. 3, pp. 32-38, 2012
[8] d. driankov, h. hellendoorn, m. reinfrank, an introduction to fuzzy control, springer-verlag, 1993
[9] z. z. liu, f. l. luo, m. h. rashid, "speed nonlinear control of dc motor drive with field weakening", ieee transactions on industry applications, vol. 39, no. 2, pp. 417-423, 2003
[10] s. v. s. r. pavankumar, s. krishnaveni, y. b. venugopal, y. s. kishore babu, "a neuro-fuzzy based speed control of separately excited dc motor", international conference on computational intelligence and communication networks, bhopal, india, november 26-28, 2010
[11] y. shi, j. huang, b. yu, "robust tracking control of networked control systems: application to a networked dc motor", ieee transactions on industrial electronics, vol. 60, no. 12, pp. 5864-5874, 2013
[12] z. li, j. chen, g. zhang, m. g. gan, "adaptive robust control for dc motors with input saturation", iet control theory & applications, vol. 5, no. 16, pp. 1895-1905, 2011
[13] s. h. kim, k. ishiyama, "hybrid speed control of a dc motor for magnetic wireless manipulation based on low power consumption: application to a magnetic wireless blood pump", ieee transactions on magnetics, vol. 50, no. 4, article id 5000307, 2014
[14] a. fereidouni, m. a. s. masoum, m. moghbel, "a new adaptive configuration of pid type fuzzy logic controller", isa transactions, vol. 56, pp. 222-240, 2015
[15] h. acikgoz, "speed control of dc motor using interval type-2 fuzzy logic controller", international journal of intelligent systems and applications in engineering, vol. 6, no. 3, pp. 197-202, 2018
[16] a. ramya, m. balaji, v.
kamaraj, “adaptive mf tuned fuzzy logic speed controller for bldc motor drive using ann and pso technique”, iet the journal of engineering, vol. 2019, no. 17, pp. 3947–3950, 2019 [17] d. k. panicker, m. r. mol, “hybrid pi-fuzzy controller for brushless dc motor speed control”, iosr journal of electrical and electronics engineering, vol. 8, no. 6, pp. 33-43, 2013 [18] l. t. ngo, d. d. nguyen, l. t. pham, c. m. luong, “speed up of interval type 2 fuzzy logic systems based on gpu for robot navigation”, advances in fuzzy systems vol. 2012, article id 698062, 2012 [19] d. k. chaturvedi, r. umrao, o. p. malik, “adaptive polar fuzzy logic based load frequency controller”, international journal of electrical power & energy systems, vol. 66, pp. 154-159, 2015 [20] a. avcu, a. f. bozkurt, k. erkan, i. kurt, s. sezer, “comparison of ipd and fuzzy logic velocity control in two degree of freedom dc motor system”, international journal of engineering science and application vol. 2, no. 1, pp. 1-7, 2018 [21] i. kandilli, “real-time speed controlling of a dc motor using fuzzy logic controller”, pamukkale university journal of engineering sciences, vol. 23, no. 5, pp. 543-549, 2017 engineering, technology & applied science research vol. 8, no. 
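The controller summarized above relies on a small rule base and simple membership functions. As an illustration only, a zero-order Sugeno-style sketch of such a controller is given below; the membership breakpoints, the three-rule table, and the duty-step values are assumptions made for this sketch, not the actual design reported in the paper (which runs on an Arduino Mega):

```python
# Minimal fuzzy speed-controller sketch (zero-order Sugeno style).
# The breakpoints, rule table, and output singletons are illustrative
# assumptions, not the exact design used in the paper.

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_error(e):
    """Membership degrees for negative/zero/positive speed error (rpm)."""
    return {
        "neg": tri(e, -400.0, -200.0, 0.0) + (1.0 if e <= -400.0 else 0.0),
        "zero": tri(e, -200.0, 0.0, 200.0),
        "pos": tri(e, 0.0, 200.0, 400.0) + (1.0 if e >= 400.0 else 0.0),
    }

# Rule table: each fired rule votes for a change in PWM duty (a singleton).
DUTY_DELTA = {"neg": -20.0, "zero": 0.0, "pos": +20.0}

def flc_step(reference, measured, duty):
    """One control step: weighted average of rule outputs, duty clamped 0..255."""
    mu = fuzzify_error(reference - measured)
    total = sum(mu.values())
    delta = sum(mu[k] * DUTY_DELTA[k] for k in mu) / total if total else 0.0
    return min(255.0, max(0.0, duty + delta))
```

Calling `flc_step` once per sampling period with the tachometer reading closes the loop; the small rule count is what keeps such a controller feasible on a simple microcontroller.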
Engineering, Technology & Applied Science Research, Vol. 8, No. 4, 2018, 3287-3293
www.etasr.com — Benmoussa et al.: Web Information System for the Governance of University Research

Web Information System for the Governance of University Research

Khaoula Benmoussa, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Majida Laaziri, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Samira Khoulji, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Kerkeb Mohamed Larbi, Information System Engineering Research Group, Faculty of Sciences, Abdelmalek Essaadi University, Tetouan, Morocco

Abstract—Technology development has proved crucial in analyzing and processing the volume of scientific information generated today. Governments are developing scientific and technical information systems that, beyond a database, are a real tool for supporting research management and decision-making in the field of science and technology policy. For the development of higher education in Morocco, the ministry has focused on projects for the management and development of university research. For this purpose, Abdelmalek Essaadi University developed an application dedicated to the management of a collaborative extranet, called SIMArech (Moroccan Information System of Scientific Research), in order to support, organize and structure all academic activities. It enables all university stakeholders to use a digital workspace specific to their roles, to access and share information, and to interact and engage in national scientific research. This article presents an overview of research management systems and the design and development of SIMArech, which is designed as a tool for monitoring research conducted by a university or other institutions.
Keywords—SIMArech platform; web information system; management of research; university scientific research

I. Introduction
Governments want high-level universities because the modern economy is based on scientific research and highly skilled human capital. Each university must have a clear and evidence-based understanding of the institution's research performance in relation to its objectives and mission [1]. Since research is a central function, the university must evaluate its research performance, which will help the decision making about which research areas to support or build [2]. It will also help university leaders to understand the institution's position in relation to global and national standards of research production. For good piloting of scientific research, an information system becomes a mandatory condition to better manage investments in science, to evaluate the performance of research, and to establish a solid policy for the development of scientific research [3]. To this end, Moroccan universities have decided to adopt the digital application called SIMArech, set up by Abdelmalek Essaadi University, which aims to develop an information system initiative whose ultimate goal is to support researchers and enhance their scientific activities. The incorporation of SIMArech into Moroccan universities has contributed to a significant increase in the number of active researchers, the quantity of research projects in progress, the external and internal funds obtained, and the number of publications in indexed journals. SIMArech can be defined as a set of people, processes and equipment designed, built, operated and maintained to collect, record, process, store, retrieve and display information about the activities and results produced by researchers in their development centers or in collaboration with other national or international institutions.

II.
Global Research Management Systems Overview
Global research management has benefited greatly from the development of research management systems that reflect the research results generated in universities, research centers, organizations, institutions, etc. The development or improvement of a research information system requires a comparative study of similar existing systems, as well as taking into account remarks from researchers and users of those systems.

A. Research Management Systems in Latin America
Table I shows an overview, the advantages and the disadvantages of the respective systems (SGI, SICYTAR).

B. Research Management Systems in Europe
Table II does the same for the European research management systems (SICA, GRAAL, IRIS).

Table I. Latin American research management systems

SGI (Chile)
- Overview: developed in 2001 at the University of Talca to support the academic activities of its researchers [4].
- Advantages: includes a range of information services for the exclusive use of specific users (university researchers); aims to have a set of indicators and related statistics of the available research capabilities; keeps users informed of the results of academic research; supports knowledge exchange at national and international level; allows visitors to the SGI platform to view the researchers' CVs [4-6].
- Cons: limited to the management of university research, i.e. to research projects and indexed publications [4].

SICYTAR (Argentina)
- Overview: established in 2002 in order to facilitate and unify access to information on scientists, technicians and their jobs [11].
- Advantages: keeps a unified CVar register constantly updated [9]; produces, through the InfoSICYTAR service of the platform, detailed statistical information to develop indicators and evaluate science and technology policies, for exclusive use by the institutions of the national system of science, technology and innovation [10]; provides the BUSCAR tool for networking among researchers as well as the government and commercial sectors, accessible to all visitors [7].
- Cons: there is no identifier to distinguish the user of the system; it is accessible to everyone, so a researcher or anyone else from any country can create an account and provide false information, which makes the collected statistical information questionable [8].

Table II. European research management systems

SICA (Spain)
- Overview: SICA was born as a research project in 2001, through a cooperation agreement between the Ministry of Education and Culture of the Government of Andalusia and the University of Granada, to meet the specific needs of operational management organizations in promoting research and technological development in Spain [13, 15].
- Advantages: assists management in general, and those responsible for science policy in particular, in decision-making; provides updated information on individual researcher programs; offers flexible mechanisms for management and maintenance on an ongoing basis; establishes an authorized knowledge base with standardized criteria for the evaluation and quality of the results of scientific activity; encourages the transfer of results between different types of information [13, 15].
- Cons: accessible to everyone; all people (researchers or not) have the opportunity to create an account and enrich the SICA database with information which may be wrong [12, 13]; the evaluation of research by this system is imprecise.

GRAAL (France)
- Overview: GRAAL is a software program launched in 2000 by the Interuniversity Computing Center of Grenoble. Its management was then entrusted to a group of scientific interest (GIS) bringing together the public partners concerned [16].
- Advantages: presents and manages in a coherent way the research units within the university, including the personnel and their scientific activities; manages the monitoring of financial means and international activities within the university [16].
- Cons: limited to the research management of university research units [16].

IRIS (Italy)
- Overview: adopted in 2015 by the University of L'Aquila in Italy, IRIS is a data-management platform for research activities, designed to meet the needs of academic and research institutions. It is now installed in seven establishments outside Italy [17].
- Advantages: IRIS aims to collect, manage and store the results of the scientific research of the university; professors, researchers, research fellows, specialization and PhD students, as well as administrative staff members, can access the platform using their credentials to catalog their published research; IRIS aims to track the results of research and to improve the visibility of the university's production; it has five different modules integrated through standard protocols and interfaces, one of the main components being DSpace-CRIS, an open source solution that can also be used as a standalone system [17, 19].
- Cons: does not allow managing and evaluating the research activities of each unit independently of the others [20].

C. The Globally Recognized Research Information Management System (RIMS)
The RIMS project is an initiative funded by DST and powered by InfoEd. The authors in [21, 22] described RIMS as a potential new category of services for libraries. RIMS is an integrated online system for managing grants and the administration of ethics in research projects and research activities. It provides researchers, administrative and executive staff with a single point of reference for these aspects of research projects [22, 24].
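The comparison in Tables I and II can also be held as a small data structure so that the systems can be filtered programmatically. The entries below paraphrase the tables; the field names and the helper function are an illustrative sketch, not part of the paper:

```python
# Systems summarized from Tables I and II. The "cons" strings paraphrase
# the tables; the filtering helper is an illustrative sketch only.
SYSTEMS = [
    {"name": "SGI", "region": "Chile", "year": 2001, "open_registration": False,
     "cons": "limited to university research projects and indexed publications"},
    {"name": "SICYTAR", "region": "Argentina", "year": 2002, "open_registration": True,
     "cons": "no identifier distinguishes users, so statistics are questionable"},
    {"name": "SICA", "region": "Spain", "year": 2001, "open_registration": True,
     "cons": "anyone can create an account; evaluation is imprecise"},
    {"name": "GRAAL", "region": "France", "year": 2000, "open_registration": False,
     "cons": "limited to research management of university research units"},
    {"name": "IRIS", "region": "Italy", "year": 2015, "open_registration": False,
     "cons": "cannot manage and evaluate each unit's activities independently"},
]

def with_controlled_access(systems):
    """Names of systems that do not allow unrestricted account creation."""
    return [s["name"] for s in systems if not s["open_registration"]]
```

Open self-registration is the recurring weakness the paper identifies (SICYTAR, SICA), which is one of the points SIMArech's role-specific accounts are meant to address.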
The software is used by more than 600 academic, medical and scientific institutions around the world [21, 22, 24, 25].

III. SIMArech
The Moroccan information system of scientific research (SIMArech) aims to collect, manage and store research activity (research, professional, educational and international activities) at the national level. Its modular nature and the flexibility of its data model facilitate the processing, organization and transmission of information in accordance with international standards. The development team has worked to develop a scalable and customizable application that aligns with local, national and international research policies, and recognizes the autonomy of universities in terms of governance of scientific research [25]. SIMArech has taken into consideration the advantages and disadvantages of the previously presented systems.

A. General
SIMArech was created at Abdelmalek Essaadi University in 2008 to help develop the research potential by highlighting the scientific production and know-how of researchers. SIMArech can also be used as an assessment tool and can support decision making. Initially, the system was structured to include a range of services for the exclusive use of specific users based on their profile, with respect to the programs, projects, events and products resulting from the research activities developed. At the strategic level, the system aims to have a set of indicators and related statistics of the available research capabilities. SIMArech has been designed to meet the specific needs of universities and Moroccan management bodies; it aims to present in a coherent way the institutions and the research units of the university, including the personnel and their scientific activities (Figure 1), and the monitoring of financial resources and international activities. SIMArech provides a description of the existing situation and a needs study for better optimization of human and material resources. It also allows a quantitative national evaluation of research that would provide objective criteria for self-evaluation and external peer review. It is configurable and allows for a transcendent role assignment: teacher-researcher, responsible of a research structure, educational institution, and university administrator. Its reporting system makes it possible to print documents, graphs and statistics on various specific actions: indexed publications, communications, projects, patents, etc. [27].

Fig. 1. Generality of SIMArech

1) Administrative processes
SIMArech is supporting research by providing and maintaining innovative and integrated systems that simplify administrative processes by (Figure 2):
- Reducing the administrative burden.
- Streamlining and automating the administration of research.
- Providing authorized individuals and systems with timely access to information about proposals, rewards and compliance.
- Improving the efficiency of the administration to meet the demands of a growing workload with limited resources.
- Making the research steering process timelier, easier to understand, and improving the efficiency of the surrounding processes.
- Making the process more transparent by reporting research status, information sharing, and easy integration of data between university management systems.

Fig. 2. Administrative processes

2) University processes
SIMArech has been designed to meet the specific needs of universities and government by allowing to (Figure 3):
- Promote and increase visibility, exchange and communication between scientists and researchers at regional, national or international level.
- Organize and maintain unified scientific and technological registries to produce accurate, reliable and up-to-date statistics in real time.
- Assist in the overall management of researchers in the institutions, research units and their scientific results.
- Consistently present research units and their environment.
- Provide up-to-date information on individual research programs.
- Support national and international knowledge exchange to enrich the cultural and professional experience of researchers.
- Ensure interoperability between national science and technology databases.
- Offer the opportunity to conduct and participate in institutional appeals, including project funding, by using the information in the CV and preventing the researcher from resubmitting his CV in call forms and databases.

Fig. 3. University processes

B. Object-Oriented Methodology (OOM)
OOM is a methodology for object-oriented (OO) development with a graphical notation to represent OO concepts. With this methodology, a computer system can be developed on a component basis that enables the efficient reuse of existing components and facilitates the sharing of its components with other systems. With the adoption of OOM, higher productivity, lower maintenance costs and better quality can be achieved. In other words, object modeling is based on the identification of the objects in a system and their interrelationships; once this is done, the system coding can be completed. There are many OOMs used to develop OO systems [28-30]. In this work, the Object Modeling Technique (OMT) is used, because OMT [29] is one of the most used OO design methodologies in the world. It includes the following steps:

1) System analysis
System analysis is the first phase of development in the case of OMT, as in any other system development model. It is a concise and precise abstraction of what the desired system must do, not how it will be done, and it should not contain any implementation details; the model must apply to the domain concepts and not to implementation concepts. In this phase, the developer interacts with the system user to know the user's needs and analyzes the system to understand its operation. In system development, analysis is the process of studying and defining the problem to be solved [31, 32].

2) System design
The designer makes high-level decisions on the overall architecture. System design involves developing an OO model of a software system to implement the identified requirements; the objects in an OO design are related to the solution of the problem being solved [33]. In the system design, the target system is organized into various subsystems based on both the analysis structure and the proposed architecture. If analysis means defining the problem, design is the process of defining the solution: it is a question of defining the ways in which the system satisfies the requirements identified during the analysis.

3) Object design
Object design is the process of defining the components, interfaces, objects, classes, attributes and operations that meet the requirements. The designer creates a design model based on the analysis model but containing implementation details. The object of object design is the data structures and algorithms needed to implement each cycle [32]. In large systems, design usually occurs at two scales: the architectural design, i.e. the definition of the components from which the system is composed, and the design of components, i.e. defining the classes and interfaces within a component. An important concept in OO development is that objects are grouped into classes.

4) Implementation with object-oriented programming (OOP)
The class objects and relationships developed in the object design are ultimately translated into a particular programming language, database, or hardware implementation [32].

5) Testing
The completed part of the system is tested from a functional point of view [34]. In other words, testing is continuously interwoven into the development process. This not only locates faults early, it also makes subsequent phases less likely to create new faults based on existing ones.

C. Object-Oriented Analysis and Design (OOAD) with the Unified Modeling Language (UML)
The perspective of OOAD was to avoid the problem that systems tend to become very difficult to maintain as the system develops and requirements change. The results of the OOAD phases are captured using the formal syntax of a modeling language, producing an unambiguous model of the system to be implemented [34, 35]. The Unified Modeling Language (UML) was developed to simplify and consolidate the large number of object-oriented development methods, and the Object Management Group (OMG) adopted it as a standard. UML is suitable for modeling, ranging from enterprise computing to distributed web applications and real-time embedded systems [36].
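OMT's object-design step ends with classes being translated into a programming language. A minimal sketch of what such a translation could look like for a research-management domain is shown below; the class names, attributes and the one-to-many association are hypothetical illustrations, not SIMArech's actual object model:

```python
# Illustrative translation of an object model into code (OMT step 4).
# All classes and attributes here are hypothetical examples, not the
# actual SIMArech model.
from dataclasses import dataclass, field

@dataclass
class Publication:
    title: str
    year: int

@dataclass
class Researcher:
    name: str
    publications: list = field(default_factory=list)

    def add_publication(self, pub: Publication) -> None:
        self.publications.append(pub)

@dataclass
class ResearchStructure:
    """A research unit aggregating researchers (a one-to-many association)."""
    name: str
    members: list = field(default_factory=list)

    def publication_count(self) -> int:
        return sum(len(r.publications) for r in self.members)
```

The aggregation method mirrors the kind of unit-level statistics the platform's reporting system produces (e.g. counting indexed publications per structure).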
D. System Architecture Design
The architecture of an information system defines the system in terms of components and interactions among components, from the viewpoint of specific aspects of the system, and based on specific structuring principles [36, 37]. This architecture provides an overview of the design of the features and the development of SIMArech. The structure of the platform is organized into three layers that communicate with each other (Figure 4): a data management layer (the server with a database system and search engine), a layer for interfacing (the framework and API used in system development), and a layer for services (the SIMArech platform).

Fig. 4. SIMArech layer architecture

The advantages of using Symfony [39] for the development of SIMArech are:
- The Symfony framework provides consistency in the code and saves time, since several developers manage the SIMArech 3.0 platform at the same time.
- Thanks to the Symfony framework, we can hand over the updated project to the universities, which allows us to produce or improve versions of SIMArech 3.x.
- It guarantees the separation of the logic of the views by using MVC models [38]. This is a practice that helps keep the code clean while facilitating changes.

E. Results
The SIMArech platform has four spaces: teacher-researcher, responsible of a research structure, dean, and university president, each with different access rights. The spaces of the SIMArech platform for each actor and their functional structure are the following.

1) Teacher-researcher space
SIMArech offers an account for each teacher-researcher of the Moroccan universities (Figure 5), which allows entering personal information and scientific productions, and communicating with other teacher-researchers. Some characteristics of the teacher-researcher space are:
- Better visibility of each teacher-researcher at university, institution and research structure level.
- SIMArech encourages the teacher-researcher to share his scientific activities.
- It encourages managing and sharing scientific products.
- The teacher-researcher participates in the enrichment of the system base.
- It promotes interactions with teacher-researchers and research structures.
- The teacher-researcher participates in the evaluation of research activities within a research structure.
- It establishes exchanges of information, documentation and scientific cooperation with national and foreign institutes, centers and similar research bodies.
- The teacher-researcher follows and develops the activities of the research unit he belongs to.

Fig. 5. Teacher-researcher space [40]

2) Space of the responsible of a research structure
SIMArech offers an account for the head of a research structure within an institution (Figure 6), who is defined as a pilot and validates the research work of his structure, taking an active part in all the research work. He thus occupies a mixed role between his responsibilities for the structure and his work as a researcher. SIMArech allows the responsible of a research structure to:

a) Define research priorities
- Propose and set up the annual and multi-year research work, in connection with the studies, research and development department.
- Analyze the costs, budgets and schedules necessary for the successful achievement of the research objectives.
- Identify possible improvements in research processes to optimize structure performance, and translate improvement goals into research programs.
- Write and/or validate and print the files needed to obtain funding.
- Exchange information regularly with researchers.

b) Manage internal and external communication on the works of the structure
- Validate the different scientific publications of researchers.
- Participate in scientific events: congresses, symposia, round tables, think tanks etc.
- Respond to internal and external solicitations on the work of the structure: balance sheets with the institution and the university, meetings with other services etc.
- Animate the partnership policy: links with units, teams, private or public laboratories, and with organizations promoting research.

c) Coach teams/researchers
- Validate with them the successes and encountered obstacles, and arbitrate on the solutions to put in place.
- Individually and collectively assess the performance of the unit's team and ensure the development of the skills of researchers.
- Follow the budget and schedules.

Fig. 6. Space for the responsible of a research structure [40]

3) Dean's space
SIMArech offers an account for all the deans of establishments (Figure 7). The dean is defined as a pilot, validates the research of all the research structures of his establishment, and takes an active part in the research works. He occupies a mixed role between his institutional responsibilities and his work as a researcher. SIMArech allows the dean:
- To provide general communication and information exchange between an institution and the university, and more specifically the communication of the work and decisions of the research units.
- To validate the need for means/resources.
- To administer the institution's daily budget.
- To prepare budget forecasts.
- To download the annual report of institution activities.

Fig. 7. Dean's space [40]

4) Administrator space
SIMArech offers an account for each president of a university (Figure 8), who is defined as the university's CEO; he ensures the operational management of the university, supervises institutions, research units and teacher-researchers, manages and validates the research works of all the institutions of the university, and takes an active part in the research work. The CEO occupies a mixed role between the presidency of his university and his work as a researcher. The space allows the CEO to meet the needs of institutions, and to create, manage and evaluate the research structures of the university. SIMArech allows the CEO:
- To check, verify, validate and accredit information.
- To ensure the proper functioning of existing units and research and experimental centers, and to propose the creation or the suppression of a unit, in accordance with the university's scientific policy.
- To publicize and publish the teaching and research results.
- To develop the dissemination of all information and documents to promote the reputation of the university nationally and internationally.
- To manage the income and the expenses of the university.

Fig. 8. Administrator's space [40]

IV. Conclusion
Research management in Morocco has largely benefited from the development of SIMArech, which reflects the results of the research generated in Moroccan universities and research centers. SIMArech is an excellent collaborative tool that keeps track of all activities over time, in order, on one hand, to release dashboards and statistics reflecting the situation at a given moment and, on the other, to characterize the trends of developments and directions taken over time, to evaluate the research performance and monitoring of academic research units, and to build a dynamic vision of a university's research activities. However, it was above all the interest and efforts of the managers, researchers and evaluators who, with their daily work, made SIMArech a powerful tool, a reference in the current knowledge system and, in the end, a pioneer experience in Morocco.

Acknowledgment
This work was supported in part by the Ministry of Higher Education, Scientific Research and Management Training to support national projects in the management of scientific research.

References
[1] G. I. Petrova, V. M. Smokotin, A. A. Kornienko, I. A. Ershova, N. A. Kachalov, "Knowledge Management as a Strategy for the Administration of Education in the Research University", Procedia Social and Behavioral Sciences, Vol. 166, pp. 451-455, 2015
[2] A. F. J. van Raan, "Performance-Related Differences of Bibliometric Statistical Properties of Research Groups: Cumulative Advantages and Hierarchically Layered Networks", Journal of the American Society for Information Science and Technology, Vol. 57, No. 14, pp. 1919-1935, 2006
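The spaces described in this section amount to a role-permission mapping, in which every role keeps the basic researcher rights because each actor also works as a researcher (a "mixed role"). The permission names below are assumptions made for the sketch, not SIMArech's actual access model:

```python
# Illustrative role -> permission mapping for the four SIMArech spaces.
# Permission names are assumed for this sketch, not taken from the platform.
PERMISSIONS = {
    "teacher_researcher": {"edit_own_profile", "submit_publication"},
    "structure_head": {"edit_own_profile", "submit_publication",
                       "validate_publication", "manage_structure_budget"},
    "dean": {"edit_own_profile", "submit_publication",
             "validate_structure_work", "manage_institution_budget"},
    "university_president": {"edit_own_profile", "submit_publication",
                             "accredit_units", "manage_university_budget"},
}

def can(role: str, action: str) -> bool:
    """True if the given role is granted the given action."""
    return action in PERMISSIONS.get(role, set())
```

Note that every role includes the researcher permissions, reflecting the mixed roles, while validation and budget rights widen with the scope of responsibility (structure, institution, university).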
[3] N. Barts, Infocentre Recherche: Système d'Information, Outil d'Aide au Pilotage de la Recherche d'un Établissement de Recherche, PhD Thesis, Université Paul Cézanne-Aix-Marseille III, 2008 (in French)
[4] I. F. Palomo, C. G. Veloso, R. F. Schmal, "Sistema de Gestión de la Investigación en la Universidad de Talca, Chile", Información Tecnológica, Vol. 18, No. 1, pp. 97-106, 2007 (in Spanish)
[5] Universidad de Talca, Informe Gestion Universidad de Talca, 2014 (in Spanish)
[6] J. E. P. Latour, Implementacion de un Sistema Informatico de Gestion de la Investigación Tecnologica en el I.S.T.P. Carlos Salazar Romero del Distrito de Nuevo Chimbote, Universidad Catolica Los Angeles Chimbote, 2014 (in Spanish)
[7] Buscador SICYTAR, available at: http://sicytar.mincyt.gob.ar/buscar/#/ (in Spanish)
[8] SICYTAR Identificación de Usuario, available at: http://cvar.sicytar.mincyt.gob.ar/auth/index.jsp (in Spanish)
[9] CVar, available at: http://www.mincyt.gob.ar/accion/cvar-6467 (in Spanish)
[10] Nuevo Sitio del Sistema de Informacion de Ciencia y Tecnologia Argentino, Ministerio de Ciencia, Tecnología e Innovación Productiva, available at: http://www.mincyt.gob.ar/noticias/nuevo-sitio-del-sistema-de-informacion-de-ciencia-y-tecnologia-argentino-11530 (in Spanish)
[11] Sistema de Informacion de Ciencia y Tecnologia Argentino – EcuRed, available at: https://www.ecured.cu/Sistema_de_Información_de_Ciencia_y_Tecnología_Argentino (in Spanish)
[12] Guia de Usuario de SICA, available at: https://sica2.cica.es/help/index.html?anexo_iii.htm (in Spanish)
[13] F. M. Solis Cabrera, "El Sistema de Informacion Cientifica de Andalucia, una Experiencia Pionera en España", Madri+d, No. 22, pp. 12-18, 2008 (in Spanish)
[14] Sistema de Informacion Cientifica de Andalucia, available at: https://sica2.cica.es/# (in Spanish)
[15] P. Desfray, G. Raymond, "Models for Phase C: Information System Architecture", in: Modeling Enterprise Architecture with TOGAF, Morgan Kaufmann, 2009
[16] AMUE, "Gestion de la Recherche, Application des Activités Laboratoires (GRAAL)", 2007 (in French)
[17] A. Bollini, M. Mennielli, S. Mornati, D. T. Palmer, "IRIS: Supporting & Managing the Research Life-Cycle", Universal Journal of Educational Research, Vol. 4, No. 4, pp. 738-743, 2016
[18] V. Zwass, Management Information Systems, Wm. C. Brown, 1992
[19] CINECA Interuniversity Consortium, IRIS, CINECA, 2015
[20] Universita degli Studi di Messina, IRIS (Institutional Research Information System), available at: http://www.unime.it/it/ricerca/iris-institutional-research-information-system
[21] L. Dempsey, "Research Information Management Systems – A New Service Category", available at: http://orweblog.oclc.org/research-information-management-systems-a-new-service-category/, 2014
[22] ACRL, "Keeping Up With... Research Information Management Systems", available at: http://www.ala.org/acrl/publications/keeping_up_with/rims, 2016
[23] M. Hasan, N. Maarop, G. N. Samy, H. I. Baharum, W. Z. Abidin, N. H. Hassan, "Developing a Success Model of Research Information Management System for Research Affiliated Institutions", International Conference on Research and Innovation in Information Systems (ICRIIS), Langkawi, Malaysia, 2017
[24] M. Mangena, "Launch of Research Information Management System Project", available at: http://www.polity.org.za/article/sa-mangena-launch-of-research-information-management-system-project-26022008-2008-02-26, 2008
[25] University of Newcastle, Australia, "About RIMS", available at: https://www.newcastle.edu.au/research-and-innovation/resources/research-systems/research-information-management-system-rims/about-rims
[26] K. Benmoussa, M. Laaziri, S. Khoulji, M. Kerkeb, "SIMAReCh 3: A New Application for the Governance of Scientific Research", Transactions on Machine Learning and Artificial Intelligence, Vol. 5, No. 4, 2017
[27] K. Benmoussa, M. Laaziri, S. Khoulji, M. Kerkeb, "Comparative Study of Governance Information Systems for Scientific Research", Transactions on Machine Learning and Artificial Intelligence, Vol. 5, No. 4, pp. 768-775, 2017
[28] D. Coleman, Object-Oriented Development: The Fusion Method, Prentice Hall, 1994
[29] M. J. Chonoles, T. Quatrani, Succeeding with the Booch and OMT Methods: A Practical Approach, Addison-Wesley, 1996
[30] Software Development Group, "Yourdon Systems Method (YSM)", in: Software Systems Design Methods, pp. 85-110, Liverpool John Moores University, 1999
[31] K. W. Derr, Applying OMT: A Practical Step-by-Step Guide to Using the Object Modeling Technique, SIGS, 1995
[32] B. A. Haugh, M. C. Frame, K. A. Jordan, Object-Oriented Development Process for Department of Defense Information Systems, Institute for Defense Analyses, 1995
[33] J. D. McGregor, T. D. Korson, "Integrated Object-Oriented Testing and Development Processes", Communications of the ACM, Vol. 37, No. 9, pp. 59-77, 1994
[34] P. Beynon-Davies, "Entity Models to Object Models: Object-Oriented Analysis and Database Design", Information and Software Technology, Vol. 34, No. 4, pp. 255-262, 1992
[35] H. R. Hiremath, M. J. Skibniewski, "Object-Oriented Modeling of Construction Processes by Unified Modeling Language", Automation in Construction, Vol. 13, No. 4, pp. 447-468, 2004
[36] G. Booch, J. Rumbaugh, I. Jacobson, The Unified Modeling Language User Guide, Addison-Wesley, 1998
[37] M. Kolp, J. Mylopoulos, "Architectural Styles for Information Systems: An Organizational Perspective", 13th International Conference on Advanced Information Systems Engineering (CAiSE'01), Interlaken, Switzerland, 2001
[38] P. Desfray, G. Raymond, "Models for Phase B", in: Modeling Enterprise Architecture with TOGAF, Morgan Kaufmann, 2009
[39] Symfony, Symfony 4.0 Documentation, available at: https://symfony.com/doc/current/index.html
[40] SIMAReCh 3, available at: http://simarech.uae.ac.ma/

Engineering, Technology & Applied Science Research, Vol. 9, No. 4, 2019, 4433-4439
www.etasr.com — Karkush et al.: Magnetic Field Influence on the Properties of Water Treated by Reverse Osmosis

Magnetic Field Influence on the Properties of Water Treated by Reverse Osmosis

Mahdi O. Karkush, Civil Engineering Department, University of Baghdad, Baghdad, Iraq, mahdi_karkush@coeng.uobaghdad.edu.iq
Mahmoud D. Ahmed, Civil Engineering Department, University of Baghdad, Baghdad, Iraq, mahmoud_baghdad@yahoo.com
Salem M. A. Al-Ani, Civil Engineering Department, University of Baghdad, Baghdad, Iraq, salimalani39@yahoo.com

Abstract—The current study focuses on reviewing the rapidly growing use of magnetic water in different science fields and on measuring the influence of several intensities of magnetization on the chemical and electrical properties of tap water treated by reverse osmosis. This work includes water circulation for 24 h in magnetic fields of intensities 500, 1000, 1500, and 2000 G.
the magnetization of water increases the concentration of some ions in the water, such as mg, k, na, cl, and sio2, and decreases ca and so3. the main application of magnetic water is the improvement of the geotechnical properties of soft and swelling soils through the precipitation of calcite in the pores, which increases the bonding between soil particles and the strength of the soil.

keywords-magnetic field; water; chemical properties; electrical properties; reverse osmosis

i. introduction

water is a polar molecule in a v-shaped arrangement of dipoles. magnetic water is defined as raw or treated water passed through a magnetic field of various intensities for various circulation periods. water properties are magnetically sensitive, and they change when the water is subjected to a magnetic field. the changes can have a negative or positive impact depending on the use. the water meniscus is not homogeneous and depends on the applied pressure and temperature. magnetization can affect the two forces that control the water structure, the chemical hydrogen bond and van der waals forces: magnetization can break down the water structure, reduce the linkage angle, and increase solubility [1, 2]. it has been found that a magnetic field changes the size of the water clusters, which in turn changes the physical properties of water [3, 4]. increasing the intensity of the magnetic field decreases the surface tension of the tested water samples, which reduces the capillary rise of water [5]. the application of a magnetic field to wastewater has improved its physical and biological performance in terms of solid-liquid separation, through the aggregation of colloidal particles and the improvement of bacterial activity [6]. the compressive strength of concrete produced using magnetic water is increased by 10%-23% compared to a concrete mixture prepared with plain tap water [7]. the circulation of raw salt water in a magnetic field of 2000g intensity increased the k, mg, na, and al cations and decreased the anions (cl and so3).
the results of the treatment of irrigation water by a magnetic field showed beneficial effects on seed germination, plant growth and development, and crop yield [8]. authors in [9] investigated the effects of magnetization on the ability of magnetite (fe3o4) nanoparticles, synthesized by chemical co-precipitation, to remove metal ions from water. magnetite nanoparticles are promising adsorbents and exhibit remarkable reusability for the removal of metal ions from water and wastewater. the effect of magnetic field dependent viscosity on the free convection heat transfer of a nanofluid in an enclosure, considering brownian motion, was studied in [10], where the bottom wall had a constant heating flux. the results showed that the nusselt number is an ascending function of the rayleigh number and the nanoparticle volume fraction, but a descending function of the viscosity parameter and the hartmann number. a significant decrease in salinity, measured in terms of electrical conductivity (ec), na, and cl content, was observed in soil irrigated with saline water treated by magnetization. in contrast, the effects on mg2+ and hco3¯ were non-significant. the scarcity or high cost of potable water has pushed farmers to use saline water for irrigation, but the saline water needs to be treated before use. the test results showed that salt contents in soil increased with increasing depth in the column of soil treated with magnetic water, as the salts moved deeper during the treatment process [11]. the top layer of soil is very important in agriculture [12]. the influence of the magnetization of water on the rate of calcite precipitation and formation on the membranes of the reverse osmosis process was studied in [13]. tests were conducted on tap water using a spiral wound module with reverse osmosis membranes. the results did not show any effect of magnetization on the precipitation and formation of calcite.
the present study focuses on reviewing the applications of magnetized water in different science fields, especially in water treatment and in the improvement of the chemical and geotechnical properties of soil. also, the influence of magnetic fields of several intensities on the chemical and electrical properties of water treated by reverse osmosis and ozone has been studied. this water can be used to improve the geotechnical properties of swelling soils.

ii. magnetic water applications

magnetized water has many applications in different science fields and in industry, especially in green technology. (corresponding author: mahdi o. karkush) the circulation of water in a magnetic field can change some of its properties. these changes may be useful in industries associated with water properties like ph, surface tension, electrical resistivity, viscosity, and calcite formation inhibition. magnetized water has many applications in green technology, e.g. in the remediation of contaminated soil and water. also, magnetized water can be used as an injection fluid in oil recovery [14].

a. water purification

the treatment of water can be classified according to its source: domestic, natural, and wastewater. according to the quality of the water, a specific plan of action is adopted for reuse, treatment, or disposal. the treatment of water depends on the type and specifications of the effluents. the available techniques of water purification are adsorption, catalytic processes, biotechnology, membrane treatment, ionizing radiation, and magnetization processes. the high gradient magnetic separator (hgms) is a technique commonly used in the separation of particles [15-18].
the application of a magnetic field across a column of water produces a magnetic gradient along the column, attracts magnetized particles to the surface, and helps trap these particles; the collection of particles thus depends on the magnetic gradient, the particle size, and possibly their shape.

b. wastewater treatment

there are many chemical, physical, and biological techniques used for wastewater treatment. the quantity of wastewater is mainly related to the size of the population and the level of development. rapid industrial development and a growing population produce different types of wastewater, which require different types of technology for reuse or treatment. the magnetic field has been used in the treatment of wastewater for several purposes, such as the removal of colors, heavy metals, suspended solids and turbidity, organic compounds, and toxic chemicals [19-21]. however, more research is required in this field.

c. formation of calcium carbonate (caco3)

the formation of caco3 has attracted the attention of many researchers because of its wide range of applications in engineering processes, e.g. as a cementing agent, adsorbent material, and brightener filler [22-26]. the sedimentation of caco3 causes damage and operational problems such as the blocking of pipes, clogging of membranes, and efficiency decay in heaters. several methods have been used to prevent the precipitation of caco3 (scaling), such as water decarbonization through electrochemical processes and the addition of acid and chemical inhibitors. since chemical treatment may be harmful to public health, physical techniques have been developed to avoid the use of chemicals. one of these techniques is the magnetic treatment of hard water [26, 27].
studies of the precipitation of different types of crystals under a magnetic field showed that caco3 in the form of calcite is the most thermodynamically stable crystal at standard temperature and pressure, and it forms thick layers that are difficult to remove mechanically [28-30]. the improvement and remediation of soil is considered one of the important fields in geotechnical engineering. the application of a magnetic field to the water circulated through weak, swelling, or contaminated soils may help to build bonding between soil particles through the precipitation of calcite in the soil pores. these bonds depend mainly on the quantity of calcite precipitated in the pores and the ability of the calcite to absorb the contaminants from the soil.

d. synthesis of phas from biomass sludge

the application of a magnetic field to bacteria cultivations enhanced their growth [31-33], depending on the gradient of the magnetic field and the type of the existing microorganisms. many studies have investigated the effects of magnetization on the growth of microorganisms, yet the synthesis of polyhydroxyalkanoates (phas) under a magnetic field has not been investigated in detail. also, carbon waste such as activated sludge can be used to reduce the cost of processing [34-37]. using acetate at a concentration of more than 200cmmol/l can prevent cell growth and pha formation [38]. therefore, magnetization treatment can be used to enhance the production of pha under unfavorable conditions.

e. magnetic water treatment in agriculture

the magnetization of water changes its chemical and physical properties, and these changes affect the soil-water-plant system. the irrigation of soil with magnetic water significantly increases the available alkalines such as na, k, and mg. the magnetic susceptibility of the nutrients determines their behavior under the magnetic field. generally, the molecules in nonmagnetic water are in a loose state, but they cluster together due to the attraction forces provided by magnetization.
these forces may help pollutants, especially toxic ones, to move inside the water molecule clusters. also, large water molecule clusters or toxic molecules can clog the membrane when they pass through a membrane cell [39-41]. the magnetization of water prevents toxic agents from entering its structure, therefore magnetic water is considered a bio-friendly fluid. using magnetic water in agriculture will help increase crop yield and benefit the health of the biomass. using magnetic water will also help conserve fresh water supplies for the expected water crisis [39, 40, 42].

iii. materials and methods

pure water, as a dipolar and associative liquid, can alter its intermolecular bonds under the application of a magnetic field and transform to a metastable state [43]. the magnetic field affects both the physical and chemical processes of crystallization and dissolution of water molecules [44]. there are two main types of magnetic field effects: the direct effect, which acts on the biochemical reactions, and the indirect effect, which acts on the surroundings [45]. for the first type, the concern might be possible genetic influences on living organisms, while the second type may have secondary effects such as temperature, pressure, or mechanical stirring. in the present study, the water container was made from plastic fiber (acrylic) material. the used water pump had the following properties: 25w power, 18m head, 1000l/h flow rate, and ac 220v, 50hz supply. the provided magnetic fields were 500, 1000, 1500, and 2000g, applied on a plastic tube of 12mm diameter. all these parts were connected together in a basin containing 10l of water treated by reverse osmosis (ro), as shown in figure 1.
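as a rough consistency check on the setup described above, the following sketch (the variable names are hypothetical; the numbers are taken from the text) estimates how many times the 10l of water pass through the magnet during 24h of circulation, assuming ideal mixing:

```python
# parameters of the magnetization setup as described in the text
# (variable names are hypothetical; "passes" assumes ideal mixing)
setup = {
    "pump_power_w": 25,
    "pump_head_m": 18,
    "flow_rate_l_per_h": 1000,
    "supply": "ac 220 v, 50 hz",
    "tube_diameter_mm": 12,
    "water_volume_l": 10,
    "field_intensities_g": [500, 1000, 1500, 2000],
    "circulation_h": 24,
}

# average number of times each water volume crosses the magnetic field
passes = (setup["flow_rate_l_per_h"] * setup["circulation_h"]
          / setup["water_volume_l"])
print(passes)  # 2400.0
```

under these assumptions each water element crosses the field a few thousand times within the first day.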
the water magnetization device was supplied by a local factory. water treated with ro was the reference for comparison. the procedure of water magnetization can be described simply: 10l of the reference water are put in the plastic box, the box is supplied with a submersible pump, and the magnetization equipment is fixed on top of the box. the submersible pump is connected to the magnetization equipment by the 12mm tube. the circulation of water in the magnetic field was continued for 5 days, but most of the water's chemical and electrical properties remained approximately constant after 24h. the practical use of magnetic water treatment is based on certain changes in its physical and chemical properties. intensification and stabilization of small initial changes in properties can occur with the help of intermediate mechanisms that amplify these changes many times.

fig. 1. schematic diagram of the magnetic system (water tank, submersible pump, power supply, regulator, ph meter, magnets, tubes, direction of flow).

table i. water analysis methods

parameter               | symbol   | specification
ph value                | ph       | astm d1293
electrical conductivity | ec       | astm d1125
total alkaline          | alkaline | astm d1067
total dissolved salts   | tds      | astm d5907
silicon dioxide         | sio2     | astm d859
chloride content        | cl¯      | astm d512
sulfate                 | so4      | astm d516
magnesium, calcium      | mg, ca   | astm d511
sodium                  | na       | astm d4191
potassium               | k        | astm d4192

in most cases, such intensification is inherent to heterogeneous systems and their phase transitions. for example, the slightest stimulation of crystal formation can cause avalanche-like irreversible bulk crystallization, with all its process consequences. a slight decrease in the hydration degree of solid particles can, under certain conditions, lead to their mass coagulation, a significant improvement in filtration, etc. the tested chemical and electrical properties of the water are listed in table i.

iv.
results and discussion

previous studies have demonstrated that magnetic water treatment influences the molecular and physicochemical properties of water and thereby alters its quality. the effects of magnetic treatment on irrigation water include increasing the number of crystallization centers and altering the free gas content [33]. the factors affecting the magnetization process are the flow rate, the circulation time, the magnetic field intensity, a carbonate water hardness of more than 50mg/l, and the concentration of hydrogen ions in water at ph values >7.2. to determine the influence of magnetic field intensity on the properties of water, several intensities ranging from 500g to 2000g, produced by common lab devices, were applied to water treated by ro. the circulation of water in the magnetic field continued for 5 days, but almost all properties remained constant after 24h. experimental studies have shown that magnetic treatment can increase the number of crystallization centers and modulate the free gas content of the solution [46]. magnetic treatment of water plays an important role in different procedures influencing the crystallization process, such as association, dissociation, and nucleation rates [29, 33, 47].

a. effects of magnetization on ph

pure water is considered neutral when it has a ph value of 7 at room temperature (25°c), where the amounts of h+ ions and oh¯ ions are equal. water becomes more volatile as a result of magnetic processing due to the weakening of the hydrogen bonds between its molecules [48]. the magnetic process can change the ph of water [49]. a decrease in ph is caused by the formation of calcite nuclei resulting from the liberation of h+ ions:

co2 + h2o ⇌ h2co3 (1)
h2co3 ⇌ h+ + hco3¯ (2)
ca2+ + hco3¯ → caco3 + h+ (3)

fig. 2. ph value vs magnetic field intensity

figure 2 shows the variation of ph under magnetic fields of different intensities.
when water passed through the magnetic field, the ph value increased by 6% to 34.34% as the magnetic field intensity increased from 500g to 2000g under a constant flow rate of 1000l/h (figure 2 values: ph = 6.4, 6.8, 7.5, 7.9, and 8.6 at 0, 500, 1000, 1500, and 2000g, respectively). for flow rates higher than 2160l/h, the ph value was stable under various magnetic fields [50]. according to the results, the ph value increases with increasing magnetic field intensity, which means absorption of h+ ions and an increasing number of oh¯ ions in the water. these findings were confirmed by previous studies, although with smaller increases of 0.53% to 5.6% [51-53]; this difference is mainly due to the quality of the used water and the physicochemical parameters.

b. effects of magnetization on alkalines

the alkaline metals are basic when dissolved in water, with ph values greater than 7.0. generally, water forms more carbonate without magnetization, which accelerates the precipitation of calcite. the magnetic field inhibits the precipitation of bicarbonates and the formation of calcite, but magnetization increases the precipitation of na, k, and mg. mostly, the alkaline metals increase with increasing magnetic field intensity. also, the magnetic field induces faster proton transfer from hydrogen carbonate to water due to the inversion spin of protons in the diamagnetic field of salts. figure 3 shows the variation of alkaline concentration with different intensities of magnetic field, where the alkaline concentration increased by 238% to 450% when the magnetic field increased from 500g to 2000g. the test results demonstrated a significant increase in the concentration of alkaline minerals with increasing magnetic field intensity. the alkalines may also be affected by the flow velocity.
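the percentage increases quoted for ph and alkaline content can be checked against the values read off figures 2 and 3; the snippet below (a quick sketch with a hypothetical helper name) recomputes them relative to the untreated reference water:

```python
# series digitized from figures 2 and 3 at 0, 500, 1000, 1500, and 2000 g
ph = [6.4, 6.8, 7.5, 7.9, 8.6]
alkaline_mg_l = [16, 54, 70, 81, 88]

def pct_increase(series):
    """percentage increase of each treated value over the 0 g reference."""
    base = series[0]
    return [round((v - base) / base * 100) for v in series[1:]]

print(pct_increase(ph))             # [6, 17, 23, 34] -- the paper reports 6% to 34.34%
print(pct_increase(alkaline_mg_l))  # [238, 338, 406, 450] -- reported as 238% to 450%
```

the rounded endpoints agree with the ranges stated in the text.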
the intensity of the magnetic field did not affect the alkaline content at the highest flow velocity, but had significant effects on the alkaline content at low flow velocity [54].

fig. 3. concentration of alkalines vs magnetic field intensity

figure 4 shows the variation of the concentrations of the alkaline minerals magnesium (mg), calcium (ca), sodium (na), and potassium (k) with different magnetic field intensities. the concentrations of mg and k in water increased significantly with increased magnetic field intensity. na does not occur freely in nature and is prepared chemically from its compounds. na salts are highly soluble in water. the concentration of na increased significantly when the magnetic field intensity increased. the ph value increases with increasing magnetic field intensity while the concentration of ca ions decreases. in other words, the high precipitation of caco3 causes a significant drop in the calcium content. the magnetic field inhibits the growth of crystal particles [54], and the concentration of ca decreases with increasing magnetic field intensity.

fig. 4. concentration of mg, na, ca, and k vs magnetic field intensity

c. effects of magnetization on chloride

chloride (cl¯) is an anion formed when the cl element gains an electron or when a compound such as hydrogen chloride is dissolved in water or other polar solvents. chloride salts are very soluble in water. figure 5 shows the variation of cl¯ concentration with different magnetic field intensities. the concentration of cl¯ increased by 20, 45, 75, and 100% for magnetic fields of 500, 1000, 1500, and 2000g intensity, respectively.

fig. 5. concentration of cl¯ vs magnetic field intensity

d. effects of magnetization on silicon dioxide

silicon dioxide (sio2) is a chemical compound extensively found in quartz, sand, and living organisms. this compound is not very reactive because the polarity of its molecule is zero.
silica is a major compound of sandy soils and has many uses in the chemical, electronic, and pharmaceutical industries.

fig. 6. concentration of sio2 vs magnetic field intensity

(figure data: alkaline = 16, 54, 70, 81, and 88 mg/l; cl¯ = 20, 24, 29, 35, and 41 mg/l; sio2 = 2.1, 2.7, 3.3, 3.7, and 4.1% at 0, 500, 1000, 1500, and 2000g, respectively)

figure 6 shows the variation of sio2 concentration with different intensities of magnetic field, where the sio2 concentration increased by 28, 57, 76, and 95% when the reference water passed through magnetic fields of 500, 1000, 1500, and 2000g intensity, respectively.

e. effects of magnetization on electrical conductivity (ec)

ec is the reciprocal of electrical resistivity. it represents the material's ability to conduct electric current. figure 7 shows the change in ec for different magnetic field intensities at a flow rate of 1000l/h, where the ec increased from 56 to 264µs/cm with increasing intensity of the magnetic field from 0 to 2000g. the flow rate does not have a significant effect on the ec [54]. the ec depends on the ion content; it is observed that the decreasing ca content with increasing magnetic field intensity causes an increase in the ec [54].

fig. 7. ec vs magnetic field intensity

f. effects of magnetization on total dissolved solids (tds)

tds is a measure of the dissolved inorganic and organic substances present in a liquid in molecular, ionized, or micro-granular suspended form.
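the chloride and silica percentages and the conductivity-resistivity relation above can be sketched the same way (hypothetical names; series digitized from figures 5 and 6, ec values from figure 7):

```python
# chloride and silica series at 0, 500, 1000, 1500, and 2000 g
cl_mg_l = [20, 24, 29, 35, 41]
sio2_pct = [2.1, 2.7, 3.3, 3.7, 4.1]

def pct_increase(series, ndigits=None):
    """percentage increase of each treated value over the 0 g reference."""
    base = series[0]
    return [round((v - base) / base * 100, ndigits) for v in series[1:]]

print(pct_increase(cl_mg_l))      # [20, 45, 75, 105] -- close to the reported 20-100%
print(pct_increase(sio2_pct, 1))  # [28.6, 57.1, 76.2, 95.2] -- reported as 28, 57, 76, 95%

# ec is the reciprocal of resistivity: rho (ohm*cm) = 1 / sigma (s/cm)
def resistivity_ohm_cm(ec_us_per_cm):
    return 1.0 / (ec_us_per_cm * 1e-6)

print(round(resistivity_ohm_cm(56)))   # reference water, 56 us/cm
print(round(resistivity_ohm_cm(264)))  # 2000 g, 264 us/cm
```

the recomputed chloride value at 2000g (105%) is slightly above the 100% quoted in the text, which is consistent with the reported figures being rounded.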
specific types of tds mainly include calcium, magnesium, potassium, sodium, bicarbonates, chlorides, iron, lead, and sulfates. the common sources of dissolved solids in water are the weathering of rocks and the erosion of the earth's surface. many minerals are soluble in water, so high contents accumulate over time through the constantly recurring processes of precipitation and evaporation. groundwater usually has higher tds contents than surface water, due to the longer duration of contact with the underlying rocks and sediments. figure 8 shows the variation of tds content with different magnetic field intensities; the tds content in magnetized water increases with increasing magnetic field intensity.

fig. 8. concentration of tds vs magnetic field intensity

g. effects of magnetization on sulfate

sulfate (so4) is a polyatomic anion found in water. sulfate compounds originate from the oxidation of sulfide ores, the presence of shales, and industrial wastewater. sulfate is considered one of the most common dissolved salts in rainwater. figure 9 shows the variation of so4 concentration with different magnetic field intensities, where the circulation of water in a magnetic field decreases the so4 content with increasing magnetic field intensity.

fig. 9. concentration of so4 vs magnetic field intensity

v. conclusion

the results of the present study showed that the circulation of water in a magnetic field increases the ph, which indicates increasing water alkalinity. the alkaline content increased from 16mg/l for the reference water to 88mg/l for water treated with a magnetic field of 2000g intensity. also, the magnetic treatment reduces the calcium mineral nucleation and the sulfate content. the results of this research are focused on the influences of magnetic fields of varying intensity on the chemical and electrical properties of water treated by reverse osmosis.
the experiments changed the content of ions in water as follows:
• after the magnetic treatment, ph, ec, and tds increased with increasing magnetic field intensity.
• some positive and negative ions, such as mg, k, na, cl, alkaline, and sio2, increased.
• some positive and negative ions, such as ca and so4, decreased.
• the strength of soil could be improved by this method through calcite precipitation, without adding chemical additives to the soil.
the amount of sulfate decreased in the magnetic field, which is useful for protecting concrete from deterioration, but the magnetization increases the content of chloride, which attacks the reinforcing steel of foundations and causes corrosion.

(figure data: ec = 56, 145, 197, 226, and 264 µs/cm; tds = 38, 109, 128, 140, and 155 mg/dl; so4 = 48, 44, 39, 33, and 25% at 0, 500, 1000, 1500, and 2000g, respectively)

references
[1] y. wang, h. wei, z. li, "effect of magnetic field on the physical properties of water", results in physics, vol. 8, pp. 262-267, 2018
[2] d. r. ambashta, m. sillanpaa, "water purification using magnetic assistance: a review", journal of hazardous materials, vol. 180, no. 1-3, pp. 38-49, 2010
[3] m. iwasaka, s. ueno, "structure of water molecules under 14 t magnetic field", journal of applied physics, vol. 83, no. 11, pp. 6459-6461, 1998
[4] s. h. lee, m. takeda, k. nishigaki, "gas–liquid interface deformation of flowing water in gradient magnetic field: influence of flow velocity and nacl concentration", japanese journal of applied physics, vol. 42, no. 4, pp. 1828-1833, 2003
[5] y. i. cho, s. h.
lee, “reduction of the surface tension of water due to physical water treatment for fouling control in heat exchangers”, international communications in heat and mass transfer, vol. 32, no. 1-2, pp. 1-9, 2005 [6] n. s. zaidi, j. sohaili, k. muda, m. sillanpaa, “magnetic field application and its potential in water and wastewater treatment systems”, separation & purification reviews, vol. 43, no. 3, pp. 206-240, 2014 [7] b. s. k. reddy, v. g. ghorpade, h. s. rao, “influence of magnetic water on strength properties of concrete”, indian journal of science and technology, vol. 7, no. 1, pp. 14–18, 2014 [8] h. al najm, effect of ιrrigation water salinity and magnetization and moisture depletion in some physical properties of soil growth and yield of potatoes, phd thesis, university of anbar, 2014 [9] s. rajput, c. u. pittman jr., d. mohan, “magnetic magnetite (fe3o4) nanoparticle synthesis and applications for lead (pb 2+ ) and chromium (cr 6+ ) removal from water”, journal of colloid and interface science, vol. 468, pp. 334-346, 2016 [10] m. sheikholeslami, m. m. rashidi, t. hayat, d. d. ganji, “free convection of magnetic nanofluid considering mfd viscosity effect”, journal of molecular liquids, vol. 218, pp. 393–399, 2016 [11] m. hachicha, b. kahlaoui, n. khamassi, e. misle, o. jouzdan, “effect of electromagnetic treatment of saline water on soil and crops”, journal of the saudi society of agricultural sciences, vol. 17, no. 2, pp. 154162, 2018 [12] v. zlotopolsk, “the impact of magnetic water treatment on salt distribution in a large unsaturated soil column”, international soil and water conservation research, vol. 5, no. 4, pp. 253-257, 2017 [13] a. andrianov, e. orlov, “the assessment of magnetic water treatment on formation calcium scale on reverse osmosis membranes”, matec web of conferences, vol. 178, no. 2, article id 09001, 2018 [14] e. esmaeilnezhad, h. j. choi, m. schaffie, m. gholizadeh, m. 
ranjbar, “characteristics and applications of magnetized water as a green technology”, journal of cleaner production, vol. 161, pp. 908-921, 2017 [15] j. svoboda, “a realistic description of the process of high-gradient magnetic separation”, minerals engineering, vol. 14, no. 11, pp. 1493– 1503, 2001 [16] a. ditsch, s. lindenmann, p. e. laibinis, d. i. c. wang, t. a. hatton, “high-gradient magnetic separation of magnetic nanoclusters”, industrial & engineering chemistry research, vol. 44, no. 17, pp. 6824-6836, 2005 [17] h. okada, k. mitsuhashi, t. ohara, e. r. whitby, h. wada, “computational fluid dynamics simulation of high gradient magnetic separation”, separation science and technology, vol. 40, no. 7, pp. 1567-1584, 2005 [18] m. sarikaya, t. abbasov, m. erdemoglu, “some aspects of magnetic filtration theory for removal of fine particles from aqueous suspensions”, journal of dispersion science and technology, vol. 27, no. 2, pp. 193198, 2006 [19] l. wang, j. li, y. wang, l. zhao, “preparation of nanocrystalline fe3xlaxo4 ferrite and their adsorption capability for congo red”, journal of hazardous materials, vol. 196, pp. 342–349, 2011 [20] s. liu, f. yang, f. meng, h. chen, z. gong, “enhanced anammox consortium activity for nitrogen removal: impacts of static magnetic field”, journal of biotechnology, vol. 138, no. 3-4, pp. 96-102, 2008 [21] a. tomska, l. wolny, “enhancement of biological wastewater treatment by magnetic field exposure”, desalination, vol. 222, no. 1-3, pp. 368373, 2008 [22] b. r. heywood, s. rajam, s. mann, “oriented crystallization of caco3 under compressed monolayers. part 2.-morphology, structure and growth of immature crystals”, journal of the chemical of society, faraday transactions, vol. 87, no. 5, pp. 735-743, 1991 [23] s. r. dickinson, k. m. mcgrath, “aqueous precipitation of calcium carbonate modified by hydroxyl-containing compounds”, crystal growth & design, vol. 4, no. 6, pp. 1411-1418, 2004 [24] j. s. park, j. h. yang, d. h. 
kim, d. h. lee, “degradability of expanded starch/pva blends prepared using calcium carbonate as the expanding inhibitor”, journal of applied polymer science, vol. 93, no. 2, pp. 911-919, 2004 [25] c. y. tai, c. k. wu, m. c. chang, “effects of magnetic field on the crystallization of caco3 using permanent magnets”, chemical engineering science, vol. 63, no. 23, pp. 5606-5612, 2008 [26] f. alim, m. m. tlili, m. b. amor, g. maurin, c. gabrielli, “effect of magnetic water treatment on calcium carbonate precipitation: influence of the pipe material”, chemical engineering and processing: process intensification, vol. 48, no. 8, pp. 1327-1332, 2009 [27] j. bogatin, n. p. bondarenko, e. z. gak, e. e. rokhinson, i. p. ananyev, “magnetic treatment of irrigation water: experimental results and application conditions”, environmental science and technology, vol. 33, no. 8, pp. 1280-1285, 1999 [28] j. d. donaldson, “magnetic treatment of fluids-preventing scale”, finishing, vol. 12, no. 1, 1988 [29] y. wang, j. babchin, l. t. chernyi, r. s. chow, r. p. sawatzky, “rapid onset of calcium carbonate crystallization under the influence of a magnetic field”, water reseasch, vol. 31, no. 2, pp. 346-350, 1997 [30] k. higashitani, k. okuhara, s. hatade, “effects of magnetic fields on stability of non-magnetic ultrafine colloidal particles”, journal of colloid and interface science, vol. 152, no. 1, pp. 125-131, 1992 [31] j. jung, b. sanji, s. godbole, s. sofer, “biodegradation of phenol: a comparative study with and without applying magnetic fields”, journal of chemical technology and biotechnology, vol. 56, no. 1, pp. 73-76, 1993 [32] t. utsunomiya, y. i. yamane, m. watanabe, k. sasaki, “stimulation of prophyrin production by application of an external magnetic field to a photosynthetic bacterium, rhodobacter sphaeroides”, journal of bioscience and bioengineering, vol. 95, no. 4, pp. 401-404, 2003 [33] z. y. li, s. y. guo, l. li, m. y. 
cai, “effects of electromagnetic field on the batch cultivation and nutritional composition of spirulina platensis in an air-lift photobioreactor”, bioresource technology, vol. 98, no. 3, pp. 700-705, 2007 [34] h. salehizadeh, m. c. v. loosdrecht, “production of polyhydroxyalkanoates by mixed culture: recent trends and biotechnological importance”, biotechnology advances, vol. 22, no. 3, pp. 261-279, 2004 [35] m. s. kumar, s. n. mudliar, m. k. r. konduri, t. chakrabarti, “production of biodegradable plastics from activated sludge generated from a food processing industrial wastewater treatment plant”, bioresource technology, vol. 95, no. 3, pp. 327-330, 2004 [36] d. dionisi, g. carucci, m. p. papini, c. riccardi, m. majone, f. carrasco, “olive oil mill effluents as a feedstock for production of biodegradable polymers”, water research, vol. 39, no. 10, pp. 20762084, 2005 [37] s. bengtsson, a. werker, m. christensson, t. welander, “production of polyhydroxyalkanoates by activated sludge treating a paper mill wastewater”, bioresource technology, vol. 99, no. 3, pp. 509-516, 2008 [38] j. yu, j. wang, “metabolic flux modeling of detoxification of acetic acid by ralstonia eutropha at slightly alkaline ph levels”, biotechnology and bioengineering, vol. 73, no. 6, pp. 458-464, 2001 [39] l. pandolfo, r. colale, g. paiaro, “magnetic field and tap water”, la chimica e l'industria, vol. 69, no. 11, pp. 88-89, 1987 [40] d. l. watt, c. rosenfelder, c. d. sutton, “the effect of oral irrigation with a magnetic water treatment device on plaque and calculus”, journal of clinical periodontology, vol. 20, no. 5, pp. 314-317, 1993 [41] v. hogan, s. e. mason, s. a. campbell, f. c. walsh, “the use of magnetic fields in the prevention of scaling”, uk corrosion and eurocorr 94, bournemouth, uk, october 31-november 3, 1994 engineering, technology & applied science research vol. 9, no. 
4, 2019, 4433-4439 4439 www.etasr.com karkush et al.: magnetic field influence on the properties of water treated by reverse osmosis [42] g. paiaro, l. pandolfo, “magnetic treatment of water and scaling deposit”, annali di chimica, vol. 84, no. 5-6, pp. 271-274, 1994 [43] v. k. golovleva, g. e. dunaevskii, t. l. levdikova, y. s. sarkisov, y. i. tsyganok, “study of the influence of magnetic fields on the properties of polar liquids”, russian physics journal, vol. 43, no. 12, pp. 10091012, 2000 [44] o. mosin, i. ignatov, “magnetohydrodynamic cell for magnetic water treatment”, nanotechnology research and practice, vol. 6, no. 2, pp. 81-92, 2015 [45] j. nakagawa, n. hirota, k. kitazawa, m. shoda, “magnetic field enhancement of water vaporization”, journal of applied physics, vol. 86, no. 5, pp. 2923-2925, 1999 [46] m. yamashita, c. duffield, w. a. tiller, “direct current magnetic field and electromagnetic field effects on the ph and oxidation-reduction potential equilibration rates of water. 1. purified water”, langmuir, vol. 19, no. 17, pp. 6851-6856, 2003 [47] v. kozic, l. c. lipus, “magnetic water treatment for a less tenacious scale”, journal of chemical information and computer sciences, vol. 43, no. 6, pp. 1815-1819, 2003 [48] y. z. guo, d. c. yin, h. l. cao, j. y. shi, c. zhang, y. m. liu, h. h. huang, y. liu, y. wang, w. h. guo, a. r. qian, p. shang, “evaporation rate of water as a function of a magnetic field and field gradient”, international journal of molecular sciences, vol. 13, no. 12, pp. 16916-16928, 2012 [49] j. s. baker, s. j. judd, “magnetic amelioration of scale formation”, water research, vol. 30, no. 2, pp. 247-260, 1995 [50] h. b. amor, a. elaoud, m. hozayn, “does magnetic field change water ph?”, asian research journal of agriculture, vol. 8, no.1, pp. 1-7, 2018 [51] f. alimi, anti-scale treatment of hard water by magnetic processes, phd thesis, national institute of applied science and technology, tunisia, 2008 [52] a. elaoud, n. turki, h. b. 
Engineering, Technology & Applied Science Research, Vol. 9, No. 2, 2019, pp. 3871-3880
www.etasr.com

Predicting Injury Severity of Angle Crashes Involving Two Vehicles at Unsignalized Intersections Using Artificial Neural Networks

Stephen A. Arhin
Howard University Transportation Research and Data Center, Washington, DC, USA

Adam Gatiba
Howard University Transportation Research and Data Center, Washington, DC, USA

Abstract—In 2015, about 20% of the 52,231 fatal crashes that occurred in the United States occurred at unsignalized intersections. The economic cost of these fatalities has been estimated to be in the millions of dollars. In order to mitigate the occurrence of these crashes, it is necessary to investigate their predictability based on the pertinent factors and circumstances that might have contributed to their occurrence. This study focuses on the development of models to predict the injury severity of angle crashes at unsignalized intersections using artificial neural networks (ANNs). The models were developed based on 3,307 crashes that occurred from 2008 to 2015. Twenty-five different ANN models were developed. The most accurate model predicted the severity of an injury sustained in a crash with an accuracy of 85.62%.
This model has 3 hidden layers with 5, 10, and 5 neurons, respectively. The activation functions in the hidden and output layers are the rectified linear unit (ReLU) function and the sigmoid function, respectively.

Keywords—crashes; unsignalized intersection; artificial neural network; injury severity

I. Introduction

Even though intersections constitute a relatively low proportion of the facilities of transportation systems, a significant number of crashes occur at these locations, especially in urban areas. In California, for instance, an annual average of 1.5 crashes occurs at unsignalized intersections in rural locations, compared to an average of 2.5 crashes per year in urban locations [1]. Data from the World Health Organization (WHO) reveal that 1.25 million people die annually worldwide in road crashes. The economic cost of these deaths is estimated at approximately $260 billion per year [2]. In the United States, a total of 37,456 fatalities in road-related crashes were reported in 2016 [3]. Though most of these crashes occurred on road segments, a significant number occurred at or near intersections. Of the 52,231 fatal crashes in the United States in 2015, approximately 4.4% (2,298) occurred at stop-controlled intersections, while 7.5% (3,917) occurred at intersections controlled by traffic signals. Intersections without any type of traffic control device recorded the highest number of fatal crashes (4,227) [4]. Several studies have investigated the causes of these crashes. These causes are either driver-induced, or arise from road geometry, road defects, vehicle defects, and atmospheric or weather conditions. Various countermeasures have been proposed and/or implemented to reduce the occurrence of crashes at intersections, in some instances successfully.
In order to effectively reduce the frequency and mitigate the severity of intersection-related crashes, it is necessary to explore the predictability of these crashes based on the pertinent factors and circumstances that might have contributed to their occurrence. Several studies have resulted in the development of mathematical models that predict crashes on roadways in general and, in a few instances, at unsignalized intersections in particular. These mathematical models include linear regression and machine learning methods. Given the varying characteristics of intersections, it is necessary to develop models that are focused and specific to a particular set of conditions. This study therefore focuses on the development of models to predict the severity of right-angle crashes involving two vehicles at unsignalized intersections in urban centers using ANNs.

II. Literature Review

A. Contributory Factors for Intersection-Related Crashes

Many factors determine the degree of injury sustained by people involved in crashes at unsignalized intersections; however, only certain factors have been shown to be statistically significant predictors. Authors in [5] assessed the degree of injury sustained by drivers involved in angle collisions in relation to the fault status of the drivers. The results showed that drivers who were not at fault tended to sustain more severe injuries than those who were at fault. It was further determined that injury severity was affected by factors including time of year, speed limit, age, gender, restraint/helmet use, and alcohol/drug use. Authors in [6] concluded that road surface condition (wet or dry) is a significant predictor of injury severity. Additionally, female drivers are more likely to sustain severe injuries than male drivers, and crashes in urban areas were determined to result in less serious injuries than crashes in rural areas [6].
Also, traffic volume on the major road is a significant predictor of crashes at unsignalized intersections [7].

Corresponding author: Stephen A. Arhin (saarhin@howard.edu)

The geometric characteristics and features of unsignalized intersections have also been found to be potential explanatory variables in crash prediction models. Authors in [8] predicted the frequency of accidents at unsignalized intersections in urban areas using negative binomial models. It was concluded that, besides traffic exposure functions such as traffic flow, which usually significantly predict crashes, intersection geometrics, absence of street lighting, and dedicated left-turn lanes are positively correlated with accident frequency at intersections. Typical geometric characteristics included the number of lanes on the major road, lane width, and the presence of a median on the intersecting roads. The study further revealed that T-intersections with yield control had a much lower accident potential than those with stop control.

B. Crash Prediction Models

Several modeling techniques have been employed to predict crashes at intersections.

1) Linear Regression Models

Linear regression modeling is an approach to establish a relationship between a scalar response, also called the dependent variable, and other explanatory (or independent) variables. Model parameters are estimated using a data set of values of the response and explanatory variables. The model is usually fitted to the observed data set using the least squares approach. Linear regression models take the form:

y_i = β_0 + β_1·x_i1 + β_2·x_i2 + … + β_p·x_ip + ε_i    (1)

where y_i is the i-th observation of the dependent variable, β_0, β_1, …, β_p are parameters to be estimated, x_i1, x_i2, …, x_ip are the predictor variables of the i-th observation, and ε_i is the error term.
The error term is an independent and normally distributed random variable with a mean of zero and a variance greater than zero. Linear regression modeling has been applied in several studies to establish relationships between the frequency of injury crashes and other traffic characteristics. Authors in [9] investigated the relationship between the number of injury or property-damage-only (PDO) crashes occurring annually at intersections and traffic and environmental factors. The crash records (from 1984 to 1987) of 2,488 intersections in California were sampled. The linear regression analysis employed in this study was conducted at two levels. At the first level, a simple linear regression model was developed with injury/PDO crashes per year as the response variable and traffic intensity, expressed in millions of vehicles entering the intersection per year from all approaches, as the predictor variable. In the second model, additional information such as design, traffic control, proportion of cross-street traffic, and environmental features were included as predictor variables. The results of the analysis showed that the accuracy of the model improved as more predictor variables were added. Though linear regression models are easy to use and interpret, it has been shown that they are not ideal for crash prediction. Crashes are usually sporadic and random in nature and hence are not best fitted by linear relationships. Also, the assumption that the error term is normally distributed is not accurate for crash counts, which are discrete and non-negative. Further, some factors have been found to correlate strongly with each other, introducing multicollinearity and thereby invalidating such linear models [10]. To overcome the shortcomings of linear regression models, generalized linear models (GLMs) have been used to model crashes at intersections.
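As a minimal sketch of the least-squares fit for a linear model of the form in (1), the following uses hypothetical data and coefficient values (not the paper's crash data set):

```python
import numpy as np

# Illustrative least-squares fit: y_i = b0 + b1*x_i1 + b2*x_i2 + e_i.
# The data and "true" coefficients are assumed for demonstration only.
rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 2))                  # two explanatory variables
beta_true = np.array([1.5, -0.8])            # assumed coefficients beta_1, beta_2
y = 2.0 + X @ beta_true + rng.normal(scale=0.1, size=n)  # intercept beta_0 = 2.0

A = np.column_stack([np.ones(n), X])         # design matrix with intercept column
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta_hat, 2))                 # estimates close to [2.0, 1.5, -0.8]
```

With enough observations and small noise, the estimated coefficients recover the assumed values closely.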
GLMs are a flexible generalization of ordinary linear regression that can accommodate non-normally distributed error terms. The most common forms of generalized linear models used in crash prediction are the negative binomial (NB) model and the ordered probit model (OPM).

2) Negative Binomial Model

NB models are a generalization of Poisson regression. Unlike Poisson models, in which the variance of the distribution of the response variable is equal to its mean, in NB models the variance differs from the mean. NB models have been found to be suitable for crash prediction due to the nature of the dependent variables in such analyses: the response of interest is usually the number of crashes at a specific location. Such responses are non-negative integers and generally follow the NB distribution, given by the following Poisson-gamma form:

Pr(Y = y_i | u_i, α) = [Γ(y_i + 1/α) / (Γ(1/α)·Γ(y_i + 1))] · (1/(1 + α·u_i))^(1/α) · (α·u_i/(1 + α·u_i))^(y_i)    (2)

where u_i is the mean of the dependent variable Y (typically specified as a function of the predictor variables x_i through parameters β to be estimated), α is the heterogeneity parameter, and y_i is the i-th observed count.

Authors in [11] investigated the relationship between crash frequencies and factors such as traffic conditions, geometric and operational characteristics of roadways, and weather conditions, using data on crashes that occurred from 2004 to 2010 on a motorway in Auckland. The NB regression model developed had a goodness of fit ρ² of 0.119. Additionally, several individual predictors such as length of road segment, AADT, number of lanes, and shoulder width were found to be significant.

3) Ordered Probit Models

The ordered probit model (OPM) is used to develop models that have an ordered response. This modeling approach employs the probit link function. The latent continuous metric underlying the observed ordinal responses is partitioned into a series of regions corresponding to the ordinal categories.
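The Poisson-gamma probability in (2) can be checked numerically. A minimal sketch, with hypothetical parameter values (a mean of 3 crashes and heterogeneity 0.5), writing r = 1/α so that 1/(1 + αu) = r/(r + u):

```python
from math import lgamma, log, exp

def nb_pmf(y, u, alpha):
    """Poisson-gamma (NB) probability of count y, as in (2).
    u is the mean, alpha the heterogeneity parameter; r = 1/alpha."""
    r = 1.0 / alpha
    log_p = (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
             + r * log(r / (r + u)) + y * log(u / (r + u)))
    return exp(log_p)

# Hypothetical values, for illustration only.
probs = [nb_pmf(y, u=3.0, alpha=0.5) for y in range(200)]
print(round(sum(probs), 6))                               # sums to ~1
print(round(sum(y * p for y, p in enumerate(probs)), 3))  # mean ~u = 3
```

Working in log space with `lgamma` avoids overflow of the gamma function for large counts.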
Generally, the probability of obtaining a particular outcome is given by:

Pr(y_i = j | x_i) = exp(τ_j − x_i·β)/[1 + exp(τ_j − x_i·β)] − exp(τ_(j−1) − x_i·β)/[1 + exp(τ_(j−1) − x_i·β)]    (3)

where y_i is an observable ordinal variable, x_i is a vector of exogenous variables, β is a vector of unknown parameters to be estimated, and τ_j is the threshold associated with the j-th ordinal partition interval, the thresholds being assumed to be in ascending order.

The OPM has been applied in the development of several crash prediction models that seek to predict injury severity from several factors. Authors in [12] developed an OPM relating the severity of crashes experienced at freeway exits. Crash data for 326 locations in Florida were sampled. The results indicated that the factors significantly influencing crash severity included mainline lane number, length of ramp, difference in speed limits between mainline and ramp, light condition, weather condition, surrounding land type, alcohol/drug involvement, road surface condition, and crash type. The model had a goodness of fit of 0.019 and a chi-squared goodness-of-fit value of 95.63.

4) Empirical Bayes Refinement of the GLM

Crash estimates made with GLMs are susceptible to regression to the mean: a randomly large number of accidents during one period is normally followed by a reduced number of accidents during a similar after-period, even if no countermeasure has been implemented. GLMs do not account for this effect. Hence, to improve the accuracy of the predictions made with GLMs, the empirical Bayes (EB) method is usually applied. The EB method compensates for the regression-to-the-mean bias by pulling the crash count towards the mean.
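A minimal numeric sketch of this EB correction follows; the weight, GLM prediction, and observed count are illustrative values only (in practice the weight is derived from the dispersion of the fitted model):

```python
def eb_estimate(mu_glm, observed, weight):
    """Empirical Bayes correction: pull the observed crash count toward
    the GLM prediction. 'weight' in [0, 1] is an assumed illustrative
    value here, not one derived from a fitted model."""
    return weight * mu_glm + (1.0 - weight) * observed

# Hypothetical numbers: the GLM predicts 4.2 crashes/yr, 7 were observed.
e = eb_estimate(mu_glm=4.2, observed=7, weight=0.6)
print(round(e, 2))  # 5.32, between the prediction and the observation
```

The corrected value always lies between the two inputs, which is exactly the "pulling toward the mean" behavior described above.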
Thus, prior data (observed crash counts) are combined with the crash frequency predicted by the GLM to calculate a corrected value, which is expected to lie somewhere between the observed crash frequency and the predicted frequency from the GLM. This is expressed as:

E = weight·μ + (1 − weight)·(observed crash frequency)    (4)

where E is the corrected value and μ is the average number of crashes determined from the GLM [13].

5) Artificial Neural Networks (ANNs)

ANNs are mathematical models inspired by the biological neural networks of the human brain. ANNs are used in engineering to perform complex tasks such as pattern recognition, forecasting, data compression, and classification. The effectiveness of an ANN is based on its ability to approximate both linear and nonlinear functions to a required degree of accuracy using a learning algorithm, and to build "piece-wise" approximations of the functions [14]. Classification or forecasting with ANNs involves a training and learning procedure in which historical data (a set of input data with known outputs) are presented to the network; large amounts of such data are usually required for training. The network goes through a learning process by constructing a mapping from inputs to outputs, with the weights assigned to the connections adjusted at each iteration. The method by which the weights and bias levels of a network are updated is determined by the learning rule used; thus, the learning rule helps a neural network learn from the existing conditions and improve its performance. There are several learning rules for training neural networks, notably the Hebbian, perceptron (error-correction), delta, correlation, and outstar learning rules [15]. The most common network architecture, however, is the multilayer perceptron (MLP), which basically consists of three layers: an input layer, a hidden layer, and an output layer.
The MLP is a feed-forward network in which information flows from the input layer through the hidden layer(s) to the output layer to produce the outcome. These layers have interconnected nodes (neurons). The interconnections are assigned weights (representing information flow) which are computed using mathematical functions. The outputs for specific inputs are obtained by adjusting the weights so as to minimize the error between the produced and desired outputs via error back-propagation. The MLP is known as a universal approximator because of its ability to approximate continuous functions on a compact set of real numbers with few assumptions. Activation functions, also called transfer functions, are an essential component of ANNs. They introduce non-linearity into the network: each neuron calculates the weighted sum of its inputs, adds a bias, and the activation function then decides whether the neuron should be activated or not. The three most common types of activation functions used in ANNs are the sigmoid, the hyperbolic tangent, and the rectified linear unit [16]. Authors in [17] utilized ANNs to develop a model relating crash severity on urban highways to traffic variables such as traffic volume and flow speed, human factors, and road, vehicle, and weather conditions. The study showed that an MLP with feed-forward back-propagation provided the best results compared to other learning methods. A network architecture with 2 hidden layers of 17 and 7 neurons, respectively, was determined to be the best; mean square errors (MSE) within the acceptable range of 3% to 4% and correlation coefficients of 86% to 87% were achieved.

III. Methodology

A. Study Area

This study is based on data obtained in the District of Columbia (DC).
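The three activation functions named in the literature review (sigmoid, hyperbolic tangent, rectified linear unit) can be written out directly; a minimal sketch:

```python
import math

# The three common activation functions, applied to a scalar pre-activation v.
def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def tanh(v):
    return math.tanh(v)

def relu(v):  # rectified linear unit
    return max(0.0, v)

for f in (sigmoid, tanh, relu):
    print(f.__name__, [round(f(v), 3) for v in (-2.0, 0.0, 2.0)])
```

The sigmoid maps to (0, 1) and suits a binary output neuron; tanh maps to (−1, 1); ReLU passes positive pre-activations through unchanged and zeros out the rest.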
The capital of the USA, Washington, DC, is divided into four quadrants: Northwest (NW), Northeast (NE), Southeast (SE), and Southwest (SW), which are further divided into eight wards. As of July 2018, the population of DC was about 702,455, with a growth rate of approximately 1.41% [18]. The city is highly urbanized and is ranked the sixth most congested city in the United States, with each driver spending an average of 63 hours in traffic annually [19]. It has a land area of 68.34 mi² and a total of 1,503 miles of roadway comprising local roads, collector roads, minor arterials, principal arterials, freeways, and interstates [20]. Also, the city has about 7,700 intersections, of which 1,450 are signalized [21]. The American Society of Civil Engineers' 2017 Infrastructure Report Card reported that about 95% of the roads in DC are in poor condition [22].

B. The Crash Database System

Crash prediction models are data dependent, and as a result the accuracy of the developed models depends largely on the quality of the available crash data. To ensure that a reliable model is developed, this research utilized traffic crash data from the District Department of Transportation's (DDOT's) crash database, the Traffic Accident Reporting and Analysis System version 2.0 (TARAS2). The District of Columbia Metropolitan Police Department (MPD) records traffic crash information at the scene of crashes electronically on a Police Department Form Number 10 (PD-10) crash reporting form. The crash data are then downloaded through secure servers from MPD into DDOT's database, processed, and made available in TARAS2, which is an Oracle-based application. TARAS2 contains data fields that can be broadly categorized under vehicle characteristics, environmental conditions, roadway characteristics, and traffic exposure characteristics, as well as crash location, date, time, crash type, crash severity, and information on the persons involved.

C. Data Extraction and Encoding

Crash data covering the years 2008-2015 were queried and extracted from TARAS2. The data were then filtered to obtain angle crashes involving two vehicles at unsignalized intersections. Further, the extracted data were cleaned by identifying and removing duplicate and incomplete crash records and irrelevant data fields. In all, 3,307 data points were extracted and used for analysis. The extracted data set contained the following fields: accident complaint number, main street name, side street name, year of accident, month of accident, time of accident, day of week, quadrant of accident occurrence, type of collision, road surface condition, street lighting condition, lighting condition, weather condition, traffic condition, traffic control type, drivers' age, drivers' gender, contributing circumstances, and injury severity. Only numerical data can be analyzed by ANNs; hence, qualitative data must be converted to quantitative data, with both input and output data encoded into either real or integer values. Binary (0/1) encoding has been determined to yield better results, since it minimizes the loss function values with respect to the models' parameters. The loss value determines how well the model fits the data set: the lower the loss function value, the better the fit. Table II presents the variables and coding scheme used in this study.

D. Types of Collision

The crash types considered in this study are angle collisions. Three types of angle collisions are specified: right-angle, right-turn, and left-turn collisions.
• Right-angle collision: occurs when the side of one vehicle is impacted by the front of another vehicle traveling in a direction at a right angle to that of the former vehicle (Figure 1).
• Right-turn collision: occurs when a vehicle turning right at an intersection is impacted by a vehicle from the other intersecting road (Figure 2).
• Left-turn collision: occurs when a vehicle turning left at an intersection is impacted by a vehicle from the oncoming traffic (Figure 3).

E. Injury Severity

The outcome variable describes the degree of injury severity sustained by the persons involved in a crash. The crash database specifies five degrees of injury severity: no injury, complaint, non-disabling injury, disabling injury, and fatal. Due to the insignificant percentage of fatal and disabling-injury crashes in the data set, all complaint, injury, and fatal crashes were categorized as injury crashes. Table I shows the levels used in the analysis.

Fig. 1. Right-angle collision
Fig. 2. Right-turn collision
Fig. 3. Left-turn collision

TABLE I. LEVELS OF INJURY SEVERITY

Injury severity         Level
No injury               Non-injury
Complaint               Injury
Non-disabling injury    Injury
Disabling injury        Injury
Fatal                   Injury

F. Data Standardization

To achieve accurate predictions from machine learning models, it is necessary that the variables used in developing the models are on the same scale; most optimization algorithms that minimize the loss function also converge faster when variables are of the same scale. The method of scaling used on this data set is standardization: the raw scores (of the encoded data) are converted to standard scores by subtracting the mean of each variable from the raw score of each observation and then dividing the difference by the standard deviation of the variable.
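This column-wise standardization can be sketched as follows; the encoded records here are hypothetical stand-ins for the crash data set:

```python
import numpy as np

# Standardize each variable (column) to zero mean and unit variance:
# z = (x - mean) / std, applied over an (observations x variables) array.
X = np.array([[0.0, 1.0, 25.0],
              [1.0, 0.0, 40.0],
              [1.0, 1.0, 33.0],
              [0.0, 0.0, 52.0]])     # hypothetical encoded records
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.round(Z.mean(axis=0), 6))   # each column now has mean ~0
print(np.round(Z.std(axis=0), 6))    # ... and standard deviation ~1
```

After this transformation, every variable contributes on the same scale regardless of its original units.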
By doing so, the variables are transformed to have a mean of zero and unit variance. The standardized value z of each score of each variable is given by (5):

z = (x − x̄)/σ    (5)

where x̄ is the mean of the variable, x is the encoded score of each observation of the variable, and σ is its standard deviation.

TABLE II. VARIABLE ENCODING

Day of crash (1-present, 0-otherwise): x1 Monday; x2 Tuesday; x3 Wednesday; x4 Thursday; x5 Friday; x6 Saturday; x7 Sunday
Time of day (1-present, 0-otherwise): x8 a.m. peak (06:00-10:00); x9 off peak (10:00-15:00); x10 p.m. peak (15:00-19:00); x11 evening (19:00-00:00); x12 night (00:00-06:00)
Quadrant (1-present, 0-otherwise): x13 NW; x14 SW; x15 NE; x16 SE; x17 BN
Type of collision (1-present, 0-otherwise): x18 right angle; x19 left turn; x20 right turn
Road surface condition (1-present, 0-otherwise): x21 wet; x22 dry
Street lighting (1-present, 0-otherwise): x23 light off; x24 light on; x25 none
Lighting condition (1-present, 0-otherwise): x26 dark; x27 dark lighted; x28 daylight
Weather condition (1-present, 0-otherwise): x29 clear; x30 rain; x31 snow
Traffic condition: x32 (0-low, 1-medium, 2-high)
Traffic control type (1-present, 0-otherwise): x33 stop; x34 yield; x35 none
Contributing circumstances of driver 1 (1-present, 0-otherwise): x36 no violation; x37 alcohol/drug use; x38 speeding; x39 stop/yield sign violation; x40 improper maneuvering
Contributing circumstances of driver 2 (1-present, 0-otherwise): x42 no violation; x43 alcohol/drug use; x44 speeding; x46 improper maneuvering; x47 distraction
Driver age: x48 age of driver 1; x49 age of driver 2
Driver gender (0-female, 1-male): x50 gender of driver 1; x51 gender of driver 2
Outcome: y1 injury severity (0-no injury, 1-injury)

G. Development of Models

The process of classification by an ANN is an iterative process of weight adjustment based on information flow that mimics the functioning of neurons in the human brain. The steps below describe how the models for crash injury severity classification were developed:
• Selection of network architecture.
• Training of the neural network.
• Testing and evaluation of the model.

1) Selection of Network Architecture

The network architecture was first set up.
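For concreteness, the 44-5-10-5-1 architecture reported in the abstract can be set up as one weight matrix per pair of consecutive layers. This is only a sketch; the paper does not specify its initialization scheme, so small random weights are assumed:

```python
import numpy as np

# Layer sizes: 44 inputs, three hidden layers (5, 10, 5), one output neuron.
rng = np.random.default_rng(0)
layer_sizes = [44, 5, 10, 5, 1]
# One randomly initialized weight matrix per connection between layers.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
print([w.shape for w in weights])  # [(44, 5), (5, 10), (10, 5), (5, 1)]
```

Each matrix maps the outputs of one layer to the pre-activations of the next, so the shapes chain together: (44, 5), (5, 10), (10, 5), (5, 1).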
a multi-layer perceptron (mlp) feedforward ann was adopted to develop the classification models. an mlp consists of at least three layers: an input layer, hidden layer(s) and an output layer. each layer consists of nodes, or neurons, and the neurons of each layer are interconnected with those of the succeeding layer. also, the neurons of the hidden and output layers are embedded with nonlinear activation functions. the mlp ann architecture used in this research consists of an input layer with 44 neurons (one neuron for each input variable xi in table ii) and an output layer with 1 neuron, which is the target or dependent variable y. the number of hidden layers and neurons was varied over several iterations until the configuration that produced the best model was obtained. figure 4 shows the mlp ann architecture used in developing the model.
fig. 4. mlp ann
2) training of the neural network
training of the neural network by backward propagation was carried out in the following sequence:
• presentation of the training dataset to the network: the training dataset was imported into the network to commence training. the vector of independent variables was fed into each input neuron connected to the neurons of the first hidden layer. the training process was initialized by randomly selecting weights for all interconnections between the neurons of the input and hidden layers.
• forward computation: forward propagation was then implemented by multiplying the weights with the values of the input neurons; the sum products are stored in the corresponding neurons of the hidden layer. the weighted sums are subsequently transferred into an activation function and, based on the output of the function, the neuron is either activated or not.
mathematically, this can be expressed as:

v_j^l(n) = Σ_i w_ji^l(n) y_i^(l-1)(n) (6)
y_j^l(n) = φ(v_j^l(n)) (7)

where v_j^l is the weighted sum in the j-th neuron of the l-th hidden layer, w_ji^l is the weight coefficient of the j-th neuron of layer l that is fed from the i-th neuron in layer l-1, y_i^(l-1) is the output of the i-th neuron in the previous layer l-1, y_j^l is the output of the j-th neuron in layer l, and φ is the activation function, which is a rectilinear unit (relu) function in the hidden layers and a sigmoid function in the output layer. hence, for the last layer (the output layer) l = L,

y_j^L(n) = o(n) (8)

where o(n) is the network output at the n-th iteration.
• computation of error: the error of the j-th neuron at the n-th iteration is then computed as

e_j(n) = d_j(n) - o(n) (9)

where d_j is the target output.
• backward computation: the weights in the network are adjusted based on a local gradient δ, which is a function of the error e and is computed as

δ_j^L(n) = e_j(n) φ′(v_j^L(n)) (10a)

for neuron j in the output layer L, and

δ_j^l(n) = φ′(v_j^l(n)) Σ_k δ_k^(l+1)(n) w_kj^(l+1)(n) (10b)

for neuron j in hidden layer l, where k indexes the succeeding neurons in layer l+1 and φ′(·) is the derivative of the function φ(·). the weights in the network are then adjusted by the relation

w_ji^l(n+1) = w_ji^l(n) + α[Δw_ji^l(n-1)] + η δ_j^l(n) y_i^(l-1)(n) (11)

where η is the learning-rate parameter and α is the momentum constant.
• iteration: the procedures in the three previous steps are repeated for batches of 3 observations per iteration until the stopping criterion of 100 epochs is met. figure 5 illustrates the training process.
3) model testing and evaluation
after training the network for the required number of epochs (100), the model was tested using the test dataset. the accuracy of the model was evaluated by the confusion matrix. the number of hidden layers and neurons in the network architecture was varied and the training process was repeated.
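a minimal pure-python sketch of the forward computation in (6)-(8), with relu hidden units and a sigmoid output neuron as stated above; the weights and input are illustrative values, not the paper's fitted ones, and bias terms are omitted as in (6):

```python
import math

def relu(v):
    return max(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, hidden_w, output_w):
    # v_j = sum_i w_ji * y_i  (6);  y_j = phi(v_j)  (7)
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in hidden_w]
    # the sigmoid output neuron gives the network output o(n)  (8)
    return sigmoid(sum(w * h for w, h in zip(output_w, hidden)))

x = [1.0, 0.0, 1.0]                              # toy input vector
hidden_w = [[0.5, -0.2, 0.1], [-0.3, 0.8, 0.4]]  # one hidden layer, 2 neurons
output_w = [0.7, -0.5]
y_hat = forward(x, hidden_w, output_w)           # value in (0, 1)
```

the backward step (9)-(11) would then adjust hidden_w and output_w from the error d - y_hat.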
this iterative process was done until the model with the best performance was achieved.
fig. 5. ann training process
4) model evaluation
the performance of each model was assessed using the test dataset. the results were then evaluated using the data generated by a confusion matrix (cm). a cm contains information about the actual and predicted classifications made by a classification system. each row of the cm represents the instances of an actual class and each column represents the instances of a predicted class. table iii shows the confusion matrix for a two-class classifier.

table iii. confusion matrix
                  predicted negative    predicted positive
actual negative   true negative (tn)    false positive (fp)
actual positive   false negative (fn)   true positive (tp)

the entries of the cm are defined as follows: true positive (tp) instances are positive and correctly classified as positive, true negative (tn) instances are negative and correctly classified as negative, false positive (fp) instances are negative but wrongly classified as positive, and false negative (fn) instances are positive but wrongly classified as negative. based on the cm, the following measures were computed to evaluate the developed models:
• accuracy (ac): the proportion of the total number of predictions that were correctly classified:
ac = (tn+tp)/(tn+fp+fn+tp) (12)
• error rate (er): the rate at which predictions are misclassified:
er = 1 - ac (13)
• sensitivity (s): the proportion of positive cases that were correctly identified:
s = tp/(fn+tp) (14)
• precision (p): the proportion of the predicted positive cases that were correct:
p = tp/(fp+tp) (15)
• f-measure (f): a measure of the accuracy of the test model computed using s and p. the value of f ranges from 0 to 1, where 1 indicates an excellent model and 0 a bad model. the f-measure is calculated as:
f = 2(s·p)/(s+p) (16)
h. analysis software
the classification models of all three machine learning techniques were developed using the high-level general-purpose programming language python. specifically, the anaconda python distribution was used. this is an open source distribution with standard and robust libraries for data processing, analysis and machine learning applications. the numpy and pandas libraries were imported to facilitate data preprocessing, and the tensorflow and keras libraries were imported to develop the ann models. in addition, the descriptive statistics of the data were obtained using the ibm statistical package for the social sciences (spss).
iv. results
a. descriptive statistics
tables iv and v present the descriptive statistics of the dataset. the frequencies of the categorical variables are presented in table iv, while table v presents the mean and standard deviation of the continuous variable, age. it can be observed from table iv that the highest number of crashes (1,252) occurred during the off-peak period, from 10:00 a.m. to 3:00 p.m., while the least number of crashes (176) occurred at night, between 12:00 a.m. and 6:00 a.m. most of the crashes occurred on tuesdays, wednesdays and thursdays, while sundays recorded the least number of crashes. the northwest quadrant of washington d.c. recorded the highest number of crashes (1,167). right-angle collision was the most frequently occurring crash type. most of the crashes occurred under daylight, clear weather and light traffic conditions.
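as an implementation note, the evaluation measures defined in (12)-(16) reduce to a few lines of python; the confusion-matrix counts below are illustrative, not results from any of the paper's models:

```python
# accuracy, error rate, sensitivity, precision and f-measure, as in
# (12)-(16), computed from confusion-matrix counts (illustrative values).
def cm_measures(tn, fp, fn, tp):
    ac = (tn + tp) / (tn + fp + fn + tp)   # accuracy (12)
    er = 1 - ac                            # error rate (13)
    s = tp / (fn + tp)                     # sensitivity (14)
    p = tp / (fp + tp)                     # precision (15)
    f = 2 * s * p / (s + p)               # f-measure (16)
    return ac, er, s, p, f

ac, er, s, p, f = cm_measures(tn=50, fp=10, fn=5, tp=35)  # hypothetical counts
```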
though most crashes were the result of no violation on the part of one or both drivers, distracted driving and stop/yield sign violation were also reported as comparatively high contributing circumstances. among the drivers involved, 3,936 were male and 2,678 were female. of the 3,307 recorded crashes, 1,274 resulted in injury. it is observed that the rate of injury crashes was highest during the night (41.24%), on fridays (41%), and in the northeast quadrant (40.44%). the injury rate was also highest for right turn collisions (40.69%), intersections without street lights (39.52%), rainy weather (50.57%), and light traffic conditions (54.78%). intersections controlled by yield signs also recorded the highest rate (70.59%) of injury crashes. this is complemented by the fact that the highest rates of injury crashes resulted from at least one driver's failure to comply with a stop/yield sign: the contributing circumstance with the highest injury rate (69.94%) is stop/yield sign violation. crashes in which at least one driver was female recorded the highest rate of injury crashes. a correlation analysis was conducted to investigate the relation between age and injury severity. the results are presented in table vi. the spearman's rho of -0.52 was found to be statistically significant (p=0.03). this implies that the severity of a crash increased with decreasing age of the drivers involved.

table iv. crash frequencies (per level: crashes, injury, non-injury, injury rate %)
1 period of day: a.m. peak 730, 296, 435 (40.49); off peak 1,252, 466, 785 (37.25); p.m. peak 776, 298, 478 (38.4); evening 373, 142, 230 (38.17); night 176, 73, 104 (41.24)
2 day of week: monday 265, 102, 163 (38.49); tuesday 566, 228, 338 (40.28); wednesday 957, 371, 586 (38.77); thursday 657, 243, 414 (36.99); friday 400, 160, 240 (40); saturday 261, 90, 170 (34.62); sunday 201, 80, 122 (39.6)
3 quadrant: northwest 1,167, 442, 725 (37.87); northeast 858, 347, 511 (40.44); southwest 226, 76, 150 (33.62); southeast 984, 382, 602 (38.82); boundary 72, 27, 45 (39.13)
4 type of collision: right angle 1,338, 530, 808 (39.61); left turn 1,217, 438, 779 (39.61); right turn 752, 306, 446 (40.69)
5 street lighting condition: lights off 2,503, 967, 1,536 (38.63); lights on 680, 258, 422 (37.94); none 124, 49, 75 (39.52)
6 lighting condition: dark 757, 15, 727 (2.02); dark (lighted) 581, 193, 388 (33.22); daylight 1,967, 1,063, 906 (53.99)
7 weather condition: clear 2,350, 921, 1,429 (39.19); rain 609, 308, 301 (50.57); snow 348, 45, 303 (12.93)
8 traffic condition: light 2,178, 1,193, 985 (54.78); medium 808, 71, 737 (8.79); heavy 321, 71, 737 (8.79)
9 traffic control type: stop sign 2,504, 1,066, 1,450 (42.37); yield sign 604, 132, 55 (70.59); none 187, 76, 528 (12.58)
10 gender of driver 1: male 1,621, 419, 1,202 (25.85); female 1,686, 855, 831 (50.71)
11 gender of driver 2: male 2,315, 1,026, 1,289 (44.32); female 992, 248, 744 (25)
12 contri. circum. of driver 1: no violation 1,700, 869, 831 (51.12); alcohol 159, 0, 159 (0); distracted 682, 122, 560 (17.89); speed 430, 134, 296 (31.16); stop/yield sign violation 310, 148, 162 (47.74); improper maneuver 24, 2, 22 (8.33)
13 contri. circum. of driver 2: no violation 1,041, 7, 764 (0.91); alcohol 160, 0, 160 (0); distracted 996, 408, 588 (40.96); speed 276, 7, 269 (2.54); stop/yield sign violation 672, 470, 202 (69.94); improper maneuver 161, 112, 49 (69.57)
14 injury severity: all crashes 3,307, 1,274, 2,033 (38.52)

table v. driver age statistics
drivers' age: mean 42.56, standard deviation 15.73, min. 14, max. 86

table vi.
age-injury severity correlation analysis
age of driver: test statistic (spearman's rho) -0.52, p-value 0.03

b. spatial distribution of crashes
this section presents the results of the spatial analysis of the crashes using the arcgis pro software program. the spatial analysis performed included the spatial distribution of crashes based on injury severity and a kernel density analysis of injury crashes. the spatial distribution and density of crashes are shown in figures 6 and 7, respectively. figure 6 shows that most of the crashes were located in the nw quadrant, which covers the downtown and central business district of washington dc. figure 7 shows that the higher densities of injury crashes are in the same region of washington dc.
fig. 6. spatial distribution of crashes [source: arcgis pro]
fig. 7. kernel density of injury crashes [source: arcgis pro]
c. results of classification of crashes
twenty-five distinct ann models were developed using the training dataset. each model was trained with batches of 3 observations per iteration until the stopping criterion of 100 epochs was met. the performance of each model was then evaluated using the test dataset (which constitutes 25% of the total dataset). the performance of the models after training and testing is presented in tables vii and viii, respectively. the tables show the number of models explored and the structure of the neural network. the performance measures (accuracy, error rate, sensitivity, precision and f-measure) of each model were computed and are also presented.

table vii. results of training (ann)
model | hidden layers | neurons per layer | ac | er | s | p | f
1 | 1 | 20 | 0.9181 | 0.0819 | 0.8995 | 0.8892 | 0.8943
2 | 1 | 15 | 0.9032 | 0.0968 | 0.9162 | 0.8454 | 0.8794
3 | 1 | 5 | 0.8649 | 0.1351 | 0.8366 | 0.8170 | 0.8267
4 | 1 | 3 | 0.8573 | 0.1427 | 0.8461 | 0.7961 | 0.8203
5 | 2 | 25-20 | 0.9585 | 0.0415 | 0.9455 | 0.9465 | 0.9460
6 | 2 | 20-25 | 0.9472 | 0.0528 | 0.9435 | 0.9213 | 0.9322
7 | 2 | 20-15 | 0.9512 | 0.0488 | 0.9874 | 0.8964 | 0.9397
8 | 2 | 15-20 | 0.9258 | 0.0742 | 0.9539 | 0.8668 | 0.9083
9 | 2 | 10-15 | 0.9157 | 0.0843 | 0.9529 | 0.8473 | 0.8970
10 | 2 | 15-10 | 0.9302 | 0.0698 | 0.9445 | 0.8826 | 0.9125
11 | 2 | 5-10 | 0.8722 | 0.1278 | 0.8785 | 0.8067 | 0.8411
12 | 2 | 10-5 | 0.9060 | 0.0940 | 0.8953 | 0.8654 | 0.8801
13 | 2 | 6-3 | 0.8685 | 0.1315 | 0.8628 | 0.8086 | 0.8349
14 | 2 | 3-6 | 0.8597 | 0.1403 | 0.8440 | 0.8020 | 0.8224
15 | 2 | 2-2 | 0.8427 | 0.1573 | 0.8304 | 0.7767 | 0.8026
16 | 3 | 30-20-25 | 0.9516 | 0.0484 | 0.9832 | 0.9170 | 0.9490
17 | 3 | 25-30-20 | 0.9689 | 0.0311 | 0.9204 | 0.9565 | 0.9381
18 | 3 | 20-15-20 | 0.9402 | 0.0598 | 0.9916 | 0.8926 | 0.9395
19 | 3 | 15-20-15 | 0.9404 | 0.0596 | 0.9644 | 0.8916 | 0.9266
20 | 3 | 15-10-15 | 0.9293 | 0.0707 | 0.9738 | 0.8692 | 0.9185
21 | 3 | 10-15-10 | 0.9310 | 0.0690 | 0.8995 | 0.8677 | 0.8833
22 | 3 | 5-10-5 | 0.9115 | 0.0885 | 0.8859 | 0.8270 | 0.8554
23 | 3 | 10-5-10 | 0.9102 | 0.0898 | 0.9414 | 0.8293 | 0.8818
24 | 3 | 6-4-2 | 0.9159 | 0.0841 | 0.9058 | 0.8374 | 0.8702
25 | 3 | 6-2-6 | 0.9237 | 0.0763 | 0.9058 | 0.8547 | 0.8795

the accuracy, sensitivity, precision and f-measure (f) performance measures range from 0 to 1, with values closer to 1 indicating better performance and values closer to 0 indicating worse performance. in contrast, models with error rates (er) closer to 0 are better than models with error rates closer to 1. the results in table vii show that, after training, the accuracy of the models ranged from 84.27% to 96.89%. model 17 produced the best classification accuracy (96.89%) with a corresponding error rate of 3.11%, while model 15 produced the worst accuracy (84.27%) with a corresponding error rate of 15.73%. model 7 had the highest sensitivity (s) measure, while model 15 had the least sensitivity measure.
with regard to the precision measure, model 17 was the most precise (p) model with a precision of 0.9565, while model 15 was the least precise. model 16 recorded the highest f-measure of 0.9490, while the lowest f-measure was recorded by model 15. the variation of the performance measures across models is shown in figure 8. table viii presents the results of the evaluation of the trained models using the test dataset. the results show that the accuracy (after testing) of the models ranged from 76.54% to 85.62%. model 22 produced the best classification accuracy (85.62%) with a corresponding error rate of 14.38%, while model 6 produced the worst accuracy. model 14 had the highest sensitivity measure, while model 16 had the least sensitivity measure. with regard to the precision measure, model 15 was the most precise model with a precision of 0.7850, while model 18 was the least precise model with a precision of 0.6882. model 15 recorded the highest f-measure of 0.7875, while the lowest f-measure was recorded by model 6. the variation of the performance measures across models is shown in figure 9.
fig. 8. variation of performance measures for the training dataset using ann
table viii. results of testing (ann)
model | hidden layers | neurons per layer | ac | er | s | p | f
1 | 1 | 20 | 0.8005 | 0.1995 | 0.7900 | 0.7200 | 0.7534
2 | 1 | 15 | 0.8114 | 0.1886 | 0.7492 | 0.7587 | 0.7539
3 | 1 | 5 | 0.7896 | 0.2104 | 0.7524 | 0.7164 | 0.7339
4 | 1 | 3 | 0.8295 | 0.1705 | 0.7806 | 0.7781 | 0.7793
5 | 2 | 25-20 | 0.7872 | 0.2128 | 0.7210 | 0.7256 | 0.7233
6 | 2 | 20-25 | 0.7654 | 0.2346 | 0.7116 | 0.6900 | 0.7006
7 | 2 | 20-15 | 0.7836 | 0.2164 | 0.7304 | 0.7147 | 0.7225
8 | 2 | 15-20 | 0.7787 | 0.2213 | 0.7179 | 0.7112 | 0.7145
9 | 2 | 10-15 | 0.7944 | 0.2056 | 0.7586 | 0.7224 | 0.7401
10 | 2 | 15-10 | 0.7715 | 0.2285 | 0.7429 | 0.6890 | 0.7149
11 | 2 | 5-10 | 0.8198 | 0.1802 | 0.7680 | 0.7656 | 0.7668
12 | 2 | 10-5 | 0.7993 | 0.2007 | 0.7524 | 0.7339 | 0.7430
13 | 2 | 6-3 | 0.8174 | 0.1826 | 0.7774 | 0.7561 | 0.7666
14 | 2 | 3-6 | 0.8114 | 0.1886 | 0.8276 | 0.7233 | 0.7719
15 | 2 | 2-2 | 0.8356 | 0.1644 | 0.7900 | 0.7850 | 0.7875
16 | 3 | 30-20-25 | 0.8440 | 0.1560 | 0.6865 | 0.7252 | 0.7053
17 | 3 | 25-30-20 | 0.8256 | 0.1744 | 0.7398 | 0.7024 | 0.7206
18 | 3 | 20-15-20 | 0.8100 | 0.1900 | 0.8025 | 0.6882 | 0.7410
19 | 3 | 15-20-15 | 0.8300 | 0.1700 | 0.7837 | 0.7163 | 0.7485
20 | 3 | 15-10-15 | 0.8511 | 0.1489 | 0.7367 | 0.7460 | 0.7413
21 | 3 | 10-15-10 | 0.8532 | 0.1468 | 0.7712 | 0.7546 | 0.7628
22 | 3 | 5-10-5 | 0.8562 | 0.1438 | 0.7586 | 0.7586 | 0.7586
23 | 3 | 10-5-10 | 0.8457 | 0.1543 | 0.7524 | 0.7385 | 0.7453
24 | 3 | 6-4-2 | 0.8406 | 0.1594 | 0.8119 | 0.7379 | 0.7731
25 | 3 | 6-2-6 | 0.8340 | 0.1660 | 0.7868 | 0.7233 | 0.7538

fig. 9. variation of performance measures for the testing dataset using ann
v. discussion
the study sought to develop classification models to predict the injury severity of angle crashes involving two vehicles at unsignalized intersections using anns. a total of 3,307 reported crashes from 2008 to 2015 were extracted from a crash database and used in the analysis. of these, 1,272 resulted in injury and/or fatality, while the remaining 2,035 were non-injury crashes. the spatial distribution of the crashes showed that the downtown area of washington dc experienced the highest frequency of crashes. also, most of the crashes occurred during off-peak periods and under light traffic conditions. right-angle collisions were the most frequent collision type.
the combination of driver contributing circumstances that resulted in the highest injury rate was a stop/yield sign violation by one driver and no violation on the part of the other driver. the accuracy of the classification models developed using ann generally tends to increase as the number of hidden layers increases, and the models with higher accuracies were attained with three hidden layers. model 22 was the most accurate (85.62%) for predicting the injury severity of angle crashes at unsignalized intersections. this model has 3 hidden layers with 5, 10, and 5 neurons, respectively. the activation function in the hidden layers is the rectilinear unit function and the activation function in the output layer is the sigmoid function. the confusion matrix of this model is presented in table ix. we can see that 51.5% of the crashes were correctly classified as non-injury crashes, while 10.3% were wrongly classified as injury crashes. similarly, 29% of the crashes were correctly classified as injury crashes, while 9.2% were wrongly classified as non-injury crashes. the f-measure is a combined measure of both precision and sensitivity. the f-measures of the ann models generally ranged between 0.7 and 0.8, and the higher values of the f-measure were achieved with two hidden layers. models 15 and 22 are the most accurate ann models for predicting the injury severity of angle crashes at unsignalized intersections.

table ix. confusion matrix of model 22
                  predicted negative   predicted positive
actual negative   431                  77
actual positive   77                   242

vi. conclusion and recommendation
in conclusion, the most accurate ann model for predicting the severity of an injury sustained in a crash is a model with 3 hidden layers of 5, 10, and 5 neurons. the activation functions in the hidden and output layers are the rectilinear unit function and the sigmoid function, respectively. this research explored the ann machine learning technique. future research can explore other techniques such as decision trees, k-nearest neighbors and linear discriminants.
also, other types of crashes can be explored at unsignalized intersections. further, these analyses could be extended to signalized intersections.

engineering, technology & applied science research vol. 9, no. 6, 2019, 5062-5065 5062 www.etasr.com memon et al.: controlling the defects of paint shop using seven quality control tools in an …

controlling the defects of paint shop using seven quality control tools in an automotive factory
imdad ali memon, department of mechanical engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, engineerimdad@yahoo.com
ahmed ali, department of electrical engineering, sukkur iba university, sukkur, pakistan, ahmed.shah@iba-suk.edu.pk
munawar ayaz memon, department of electrical engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, engr.mam@gmail.com
umair ahmed rajput, department of mechanical engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, engr.umair@quest.edu.pk
saeed ahmed khan abro, department of electrical engineering, sukkur iba university, sukkur, pakistan, saeed.abro@iba-suk.edu.pk
ahsan ali memon, department of mechanical engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan, 15el06@quest.edu.pk

abstract—seven quality control (7qc) tools are used for reducing defects during manufacturing. these tools are highly effective in productivity and quality improvement.
in this case study, the 7qc tools were applied in an automotive factory in order to reduce paint shop defects. within four months the production line was inspected, defects were categorized and the 7qc tools were successfully applied, reducing the overall defect rate by 70%. although every tool was important, the cause and effect diagram was responsible for finding the root causes of the defects.
keywords-defects reduction; productivity improvement; paint shop; 7qc tools
i. introduction
the seven quality control (7qc) tools are highly useful for improving productivity and resolving problems in quality, operational processes and delivery [1, 2]. the 7qc tools are applied for improving the performance of production processes and solving problems at any stage [3, 4]. solving problems with the 7qc tools reduces cost, and they cannot be replaced by more complex decision-making support systems [5]. the level of defects or problems in a product is associated with the process conditions: if the process is within its control limits the product is acceptable, while when the process is out of control the product has demerits resulting in rejection, rework or scrap [6]. these problem-solving tools directly benefit customers too, as quality products entail a vast reduction of defects, and this process reduces cost. improving product quality is very important for any company and its endurance in the market. the 7qc tools can be used in the production processing line, ensuring the reduction of defects while suggesting improvements [7]. statistical process control (spc) tools formerly held the position of the 7qc tools. these tools have played an important role in the reduction of variations in many industries [8]. many studies focused on these tools, which were found very effective regarding problem solving in many industries, and a comparative study was conducted between them and the new 7qc tools [8].
it has been reported that there are more than two spc techniques, namely the ishikawa (cause and effect) diagram and the spc control charts, which were also applied in the automotive industry. the study in [9] focused on the defects in shocker seals in the automotive industry: the rejection level was reduced from 9.1% to 5% and a 95% process capability was achieved. in [10], the control chart alone was applied to automotive components to monitor process capability; it was reported that the defect level was reduced using a data acquisition system, an automated inspection method was adopted, and the offline spc method was converted into an online method. in many companies, more complex quality tools were used, but they were not highly effective and were not able to examine defects at a proper level [11]. in today's challenging environment, every organization should apply proper productivity improvement tools. this paper presents a case study of the application of the 7qc tools in an automotive factory. initially, the factory partially used some of the tools without getting fruitful results. a goal was set to apply all the 7qc tools and understand their implementation mechanism, focusing on the identification and reduction of the defects occurring in the paint shop. an inspection point of the paint shop was used in order to collect, assess and analyze data.
corresponding author: imdad ali memon
ii. related work
the 7qc tools are applicable to any kind of industry regardless of size and capacity [11, 12]. these techniques (flow chart, pareto diagram, check sheet, control chart, histogram,
hence, these tools reduce the loss and give more profit [13]. in [7], the 7qc tools were used for reducing the rejection of shift fork. the total rejection was reduced from 16.66% to 0.65%, saving rs 303.000 per year. in [14] the cost impact of the application of real time spc in hardwood sawmills was investigated. in [15], some of the basic statistical tools were selected, such as pareto diagram, control chart and histogram, studying their impact on the overall process performance, cost and product quality. in [16], the importance of the 7qc tools was examined, showing a continuous quality improvement process in an automotive industry, while in [17] those tools were used for productivity improvement. many companies adopted spc tools and continued the implementation of process control in manufacturing industries. these techniques fulfill the customer requirements for highquality products at low price. manufacturing firms use spc to analyze their impact, isolate problems, monitor the outcome and process parameters for achieving quality goals [18, 19]. moreover, the spc process can enhance problem solving and performance [9, 20]. paint is a dispersion of pigments. it is used as filler in a fluid vehicle. the fluid vehicle includes a liquid binder that was solidified during cure. it has the capability to serve as a liquid carrier, viscosity reducing aid, and also provide healthy application distinctive [21]. drying oil, volatile thinner, and paint have been developed by mixing, grinding, thinning, straining operations [22] iii. materials and methods data collection started from the assessment of the automotive factory, through frequent visits. after visual inspection, defects were categorized in four types: dust, floatation, scratch, and improper paint. then, defects having a direct impact on the car body were focused. therefore, a check sheet was developed for finding the root cause of paint defects, applying the cause and effect diagram technique. 
this technique is very useful for the reduction of body section defects [23, 24]. data were collected through the check sheet for four months, using the same pattern. these data can be simulated, as reported in [25].
iv. results and discussion
a. application of the 7qc tools
• flow chart (qc tool 1): it was developed for collecting data from the paint shop. the defects observed at the paint shop are of four types: scratch, dust, improper paint and floatation.
• check sheets (qc tool 2): after identifying the defects, the next step is data collection. the check sheets were developed for collecting defect data for further analysis. they were designed around attribute data, providing simple checkmarks with which inspectors marked defect occurrences, and listed the different types of defects in the paint shop. data were collected from november 2015 to february 2016.
• histograms (qc tool 3): histograms were created using excel. the histograms display variation levels and process capability. the occurrence frequency of each defect was displayed in a monthly histogram from november 2015 to february 2016, as shown in figure 1.
fig. 1. histogram for paint shop defects
• pareto diagrams (qc tool 4): a pareto diagram shows the defects in descending order of occurrence, and also monitors the cumulative frequency of defects on its secondary axis. the red trend line shows the cumulative defect percentage. defects occurring at a high level should be given priority. the pareto diagrams are displayed in figure 2.
fig. 2. pareto charts of paint shop defects for (a) nov-15, (b) dec-15, (c) jan-16 and (d) feb-16
• cause and effect diagram (qc tool 5): the cause and effect diagram helps in discovering the root causes of a problem. this tool can point out the defects and their reasons. the preliminary data collected through check sheets showed a high frequency of defects in november and december 2015.
cause and effect diagrams were developed and applied in the paint shop. using check sheets, data were collected for january 2016 and february 2016, showing a considerable reduction in paint defects, due to control of the root causes pointed out in figure 3. • scatter diagram (qc tool 6): this tool was used for the study and evaluation of the impact of each parameter on another. figure 4 shows the scatter diagram of the paint shop, displaying time in weeks against the number of defective bodies. in the first eight weeks, the defect rate was too high, while after finding the root causes of defects through the cause and effect diagram, the occurrence rate of defective bodies was reduced. fig. 3. cause and effect diagram for the paint shop: (a) improper paint, (b) floatation, (c) scratch, and (d) dust fig. 4. scatter diagram • p-chart (qc tool 7): the p-chart was based on the number of paint defect occurrences, using the binomial distribution. the p-chart indicates whether the process is running in statistical control or not. it also points out the changes occurring in the defective items when process measurement takes place. b. overall defect reduction figure 5 shows the np control chart for the paint shop, where the blue line shows the defect rate, the green line the upper control limit, the red line the center and the purple line the lower control limit. nine weeks after the implementation of the 7qc tools, the defects were drastically reduced. figure 6 shows the impact on the defect rate before and after the implementation of the 7qc tools in the paint shop. 122 defects occurred during the first month and 155 during the second.
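the np control chart limits mentioned above follow the binomial model: the center line is n·p̄ and the control limits sit three standard deviations away. a sketch with an illustrative sample size and defect fraction (not the study's values):

```python
import math

# np control chart limits under the binomial model: center line n*p_bar,
# control limits three standard deviations away, with the lower limit
# clamped at zero since a defect count cannot be negative.
def np_chart_limits(n, p_bar):
    center = n * p_bar
    sigma = math.sqrt(n * p_bar * (1.0 - p_bar))
    lcl = max(0.0, center - 3.0 * sigma)
    ucl = center + 3.0 * sigma
    return lcl, center, ucl

# illustrative weekly sample of 200 bodies with a 10% defective fraction
lcl, cl, ucl = np_chart_limits(n=200, p_bar=0.10)
print(lcl, cl, ucl)
```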
after using the 7qc tools the defects were reduced to 83 during the third month and to 47 during the fourth. fig. 5. np control chart fig. 6. reduction of defects after the application of the 7qc tools v. conclusion this research investigated the application of the 7qc tools for the reduction of defects in an automotive factory. an initial flow chart was developed and check sheets were designed for data collection at inspection points. it was observed that the highest defect frequencies were seen in november and december 2015. after using the cause and effect diagram, the defects were reduced substantially. during the fourth month (february 2016) total defects were reduced by 70% compared to the second month, from 155 to 47. every tool played an important role in the defect reduction, but the cause and effect diagram was very useful for finding the causes and their effects. the main contribution of this study is to highlight all the possible defects or errors affecting production in manufacturing industries. references [1] j. c. benneyan, “the design, selection, and performance of statistical control charts for healthcare process improvement”, international journal of six sigma and competitive advantage, vol. 4, no. 3, pp. 209-239, 2008 [2] p. kuendee, “application of 7 quality control (7 qc) tools for quality management: a case study of a liquid chemical warehousing”, 4th international conference on industrial engineering and applications, nagoya, japan, april 21-23, 2017 [3] h. hailu, h. tabuchi, h. ezawa, k. jilcha, “reduction of excessive trimming and reject leather by integration of 7 qc tools and qc story formula: the case report of sheba leather plc”, industrial engineering & management, vol. 6, no. 3, 2017 [4] b.
neyestani, “seven basic tools of quality control: the appropriate techniques for solving quality problems in the organizations”, ssrn, available at: https://zenodo.org/record/400832#.xdpbdjmzaul, 2017 [5] n. visveshwar, v. vishal, v. venkatesh, r. v. samsingh, p. karthik, “application of quality tools in a plastic based production industry to achieve the continuous improvement cycle”, calitatea, vol. 18, no. 157, pp. 61-64, 2017 [6] p. s. parmar, t. n. desai, “reduction of rework cost in manufacturing industry using statistical process control techniques: a case study”, industrial engineering journal, vol. 10, no. 6, pp. 40-46, 2017 [7] a. jaware, k. bhandare, g. sonawane, s. bhagat, r. ralebhat, “reduction of machining rejection of shift fork by using seven quality tools”, international journal of engineering and technology, vol. 5, no. 4, pp. 4323-4334, 2018 [8] s. m. ahmed, r. t. aoieong, s. l. tang, d. x. zheng, “a comparison of quality management systems in the construction industries of hong kong and the usa”, international journal of quality & reliability management, vol. 22, no. 2, pp. 149-161, 2005 [9] d. r. prajapati, “implementation of spc techniques in automotive industry: a case study”, international journal of emerging technology and advanced engineering, vol. 2, no. 3, pp. 227-241, 2012 [10] t. v. u. k. kumar, “spc tools in automobile component to analyze inspection process”, vol. 2, no. 1, pp. 624-630, 2013 [11] c. fotopoulos, e. psomas, “the use of quality management tools and techniques in iso 9001:2000 certified companies: the greek case”, international journal of productivity and performance management, vol. 58, no. 6, pp. 564-580, 2009 [12] r. h. fouad, a. mukattash, “statistical process control tools: a practical guide for jordanian industrial organizations”, vol. 4, no. 6, pp. 693-700, 2010 [13] r. srinivasu, g. s. reddy, s. r.
rikkula, “utility of quality control tools and statistical process control to improve the productivity and quality in an industry”, international journal of reviews in computing, vol. 5, pp. 15-20, 2011 [14] t. m. young, b. h. bond, j. wiedenbeck, “implementation of a real-time statistical process control system in hardwood sawmills”, forest products journal, vol. 57, no. 9, pp. 54-62, 2007 [15] n. afzaal, a. aftab, s. khan, m. najamuddin, “to analyze the use of statistical tools for cost effectiveness and quality of products”, iosr journal of humanities and social science, vol. 20, no. 1, pp. 47-57, 2015 [16] g. paliska, d. pavletic, m. sokovic, “quality tools: systematic use in process industry”, journal of achievements in materials and manufacturing engineering, vol. 25, no. 1, pp. 79-82, 2007 [17] m. sokovic, j. jovanovic, j. krivokapic, a. vujovic, “basic quality tools in continuous improvement process”, journal of mechanical engineering, vol. 55, no. 5, pp. 333-341, 2009 [18] g. patidar, d. d. s. verma, “implementation of statistical process control in small scale industries: a review”, international journal of technologies and engineering, vol. 2, no. 7, pp. 121-124, 2015 [19] v. parkash, d. kumar, r. rajoria, “statistical process control”, international journal of research in engineering and technology, vol. 2, no. 8, pp. 70-72, 2013 [20] p. s. parmar, v. a. deshpande, “implementation of statistical process control techniques in industry: a review”, journal of emerging technologies and innovative research, vol. 1, no. 6, pp. 583-587, 2014 [21] j. v. koleske, “mechanical properties of solid coatings”, in: encyclopedia of analytical chemistry: applications, theory and instrumentation, john wiley and sons, 2006 [22] a. e. bryson, the control of quality in the manufacture of paint, phd thesis, massachusetts institute of technology, 1950 [23] p. bhangale, r. dhake, g.
gambhire, “reduction in defects of car body panel using 7qc tools approach”, national conference on modelling, optimization and control, pune, india, march 4-6, 2015 [24] i. a. memon, q. b. jamali, a. s. jamali, m. k. abbasi, n. a. jamali, z. h. jamali, “defect reduction with the use of seven quality control tools for productivity improvement at an automobile company”, engineering, technology and applied science research, vol. 9, no. 2, pp. 4044-4047, 2019 [25] m. l. chew hernandez, l. viveros rosas, r. f. retes mantilla, g. espinosa martínez, v. velazquez romero, “supply chain cooperation by agreed reduction of behavior variability: a simulation-based study”, engineering, technology and applied science research, vol. 7, no. 2, pp. 1546-1551, 2016 engineering, technology & applied science research vol. 9, no. 4, 2019, 4342-4348 4342 www.etasr.com le & vu: performance evaluation of a generator differential protection function for a numerical relay performance evaluation of a generator differential protection function for a numerical relay kim hung le the university of danang university of science and technology da nang, vietnam lekimhung@dut.udn.vn phan huan vu central power corporation central electrical testing company limited da nang, vietnam vuphanhuan@gmail.com abstract-this paper describes the advantages and disadvantages of a generator differential protection relay system which uses double slope characteristics of areva p343, abb reg670, sel300g and ge g60. a buon tua srah hydropower plant in vietnam was selected as an example for the relay setting calculations of these characteristics. the performance of the introduced relay model was tested at various fault conditions in matlab/simulink.
the results show that the relay performs accurately, providing reliable differential protection against internal faults while keeping the generator stable on all external faults and in normal conditions. the simulation simplifies the process of selecting the relay and protection system. this can improve the quality of the protection system design early, thereby reducing the number of errors found later in operation. keywords-generator; differential protection function; slope 1; slope 2; matlab/simulink i. introduction the synchronous generator is the most important element of a power system. generator faults are considered serious since they may cause severe and costly damage to insulation, windings, and stator core. the large short circuit currents cause large mechanical forces, which can damage other components in the power plant, such as the turbine and the generator-turbine shaft, or even initiate explosion and fire. in addition, if the generator is tripped in connection with an external short circuit, there is an increased risk of power system collapse. to limit the damage from stator winding short circuits and abnormal operating conditions, generators need to be protected as much as possible by a proper protection system [1]. nowadays, there is a variety of numerical protective relays on the market which include many functions in one unit, and provide metering, communication, and generator protection. these protective relays help to simplify the protection implementation in circuit design and setting calculations. although there is quite an agreement among protection engineers as to what constitutes the necessary protection and how to provide it, there are still many differences of opinion in certain areas.
as protection system complexity increases with ieds connected to the hydropower plant, evaluating protection relay effectiveness requires well-designed algorithms that can allow or deny the arranged tripping of generator, field circuit, and neutral breakers (if used) through a lockout relay, to enable fault isolation. among these functions, the generator differential protection function (f87g) is one of the most critical protection applications. it is mainly employed to protect the stator windings of the generator against earth faults and phase-to-phase faults. it is also of great importance that the f87g does not trip for external faults when the large fault current is fed from the generator. the need to evaluate the f87g with individual different characteristics has been well known to generator protection engineers. the techniques, methods and practices to provide this coordination are well established but scattered in various textbooks, papers, and manufacturers' user manuals [2]. the author of [1] evaluated the performance of the f87g of the sel 300g generator protection relay, which was employed to protect a particular low-resistance-grounded 555mva generator represented in a real-time simulation model in the rscad generic software. authors in [3] considered the f87g and simulated this function using simulink. authors in [4] described the effects of damage in secondary circuits and its influence on the misoperation of the micom p633 differential protection of the unit generator-transformer in simulink. authors in [5] conducted a performance evaluation of the f87g by using a dynamic model in the atp/emtp software for a large steam turbine synchronous generator. authors in [6] used the atp/emtp package to simulate and generate fault data which were processed in matlab to implement relay logic for detecting internal faults in the stator windings. authors in [7] used an anfis algorithm in simulink to design an f87g.
the technical manuals of the abb reg670, schneider p343, sel300g and ge g60 protection relays are in [9, 11, 14, 15] respectively. the works on detailed settings guidance [10, 12, 13, 16] allow protective relaying engineers to have a clear understanding of which methods are available on each protection relay, what input parameters are required for each method and the expected results of each. in practice, the worst condition of unbalanced secondary currents is realized when the current transformer (ct) in the faulted circuit is completely saturated and none of the other cts suffers a reduction in ratio. this is a universal differential protection problem, leading to unwanted trips of the generator [8]. besides, there is a possibility that someone with unauthorized access might infiltrate the relay and reconfigure incorrect settings, instructing it to release a false trip signal without the existence of a fault. when these types of misoperation risks go undetected, it is very easy for operators to mistakenly believe that their relay protection is secure. hence, the question that operators need to ask is: “how confident am i that my relay protection is reliable and secure?”. therefore, the purpose of this paper is to provide a single document that can be used to calculate relay setting parameters for multiple vendors, answering the most frequently asked questions about the f87g, considering the buon tua srah hydropower plant in vietnam. in addition, a matlab/simulink model of the four generator differential characteristics applied to the plant will be checked while simulating fault cases. it will equip the reader with the knowledge to choose the most suitable vendor for his or her project. (corresponding author: phan huan vu) ii.
generator differential protection function this section shows how an f87g characteristic is constructed and how it works. selection rules for setting parameters are discussed. as an example, figure 1 illustrates the schematic diagram for the implementation of the main generator protection of the buon tua srah hydropower plant. line-side cts and neutral-side cts are located at the two ends of the wye-connected stator winding. the f87g relies on measurements of the currents of the protected generator in order to calculate differential and biased currents which are then utilized to make tripping decisions. on low-impedance grounded machines, this scheme can detect phase-to-phase, phase-to-ground, and three-phase faults. equations (1) and (2) show the mathematical definitions of the differential and biased currents, respectively, which are employed by various vendors such as schneider, abb, sel, and ge. fig. 1. differential protection relay connection with a generator. the differential current: idiff = |i1 + i2| (1). the biased current: ibias = (|i1| + |i2|)/2 (2). based on these values of ibias and idiff, the trip/restrain characteristics applied by the relay vendors have a three-step shape (one pickup level and two slopes), as in figure 2. the differential current pickup setting (is1, idmin, o87p, pickup) should avoid the maximum unbalance current under normal load conditions, which is mainly caused by ct error: the normal current error (kct_err) must be less than 5% of the operating current for the 5p20 type ct, and it should be multiplied by a reliability coefficient krel that is normally equal to 1.5. this setting can be set as low as 5% of the rated generator current, to provide protection for as much of the winding as possible [10]. fig. 2. differential protection characteristics. the slope1 setting (k1, slopesection2, slp1, and slope1) is set to ensure sensitivity to internal faults at normal operating current levels.
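equations (1) and (2) can be checked numerically by treating the two ct secondary currents as complex phasors; a minimal sketch:

```python
# Differential and biased (restraint) currents per (1) and (2):
# i_diff = |I1 + I2|, i_bias = (|I1| + |I2|) / 2, with I1 and I2 the
# line-side and neutral-side CT secondary current phasors (complex).
def diff_bias(i1, i2):
    i_diff = abs(i1 + i2)
    i_bias = (abs(i1) + abs(i2)) / 2.0
    return i_diff, i_bias

# through-load: equal magnitude, opposite phase -> i_diff ~ 0
print(diff_bias(1.0 + 0j, -1.0 + 0j))  # (0.0, 1.0)
# internal fault: both currents feed into the zone in phase
print(diff_bias(2.0 + 0j, 1.0 + 0j))   # (3.0, 1.5)
```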
the criterion for setting this slope is to allow for the maximum expected ct mismatch error when operating at the maximum permitted current. in this case, it combines the kct_err with the relay pickup accuracy (krelay_err), which can be obtained from the instruction manual of the relay, and an operating margin (kmargin) of 5% is typically provided in order to increase the security of the differential protection scheme [13]. the purpose of the slope2 setting (k2, slopesection3, slp2, slope2) is to increase the security of the differential protection scheme during heavy through-fault conditions, which can result in severe saturation of a ct. when a ct saturates, it can no longer faithfully reproduce the primary current with a scale factor on the secondary side. as a result, a very high differential current may be obtained under this fault condition. the slope2 setting is typically set higher than the slope1 setting. relay setting calculation is an important task for a power plant before operation. the parameters in this calculation are used for the settings of the relay protection equipment of the power plant. choosing the slope of a differential relay has been more art than science. manufacturers' guidelines tend to be qualitative or empirical in nature, based on the manufacturer's experience and knowledge of the design [9], as shown in the above f87g trip/restrain characteristics. detailed adjustable settings can be calculated by using the generator parameters of table i [10]. (figure 2 shows the four characteristics: (a) areva p343, with settings is1, k1, is2, k2; (b) abb reg670, with idmin, endsection1, endsection2, slopesection2, slopesection3 and zones 1-3; (c) sel 300g, with o87p, u87p, slp1, slp2 and irs1 on the iop-irt plane; (d) ge g60, with pickup, slope1, slope2, break1 and break2.) a. schneider p343 [11, 12]. is1 is calculated to avoid the maximum unbalance under normal load, and k1 should be 0% to assure sensitivity at normal operating current: is1 = krel × 2kerr.ct × ig.n/ct = 1.5 × (2 × 0.05) × 2116.5/2500 = 0.127, so 0.2a is proper. the is2 should be the same as the ct rating current or 120% of the generator's nominal current. the ct rating current is 2500a, so is2 = 2500a, with a secondary value of is2 = 1a. the k2 setting normally avoids the maximum external unbalance current under the maximum through-fault near the protection zone. this maximum unbalance current can be calculated according to: iunb.max = krel × kap × kcc × kerr.ct × i(3)max (3), where the external three-phase short circuit current is: i(3)max = ig.n/(x''d × ct) = 2116.5/(0.2093 × 2500) = 4.045. the ct type factor kcc should be 0.5 when the same ct type is used on each side, or 1 when different ct types are used; here 0.5 is selected. the non-periodic factor is kap = 1.5~2.0; here 2.0 is selected. so: iunb.max = 1.5 × 2 × 0.5 × 0.1 × 4.045 = 0.6068. according to the equation k2 = (iunb.max − is1)/(i(3)max − is2) = (0.6068 − 0.2)/(4.045 − 1) = 0.1336, we suggest selecting k2 = 0.2. the f87g operates when idiff exceeds is1 plus the percentage of ibias defined by the slope settings (k1, k2). it can be calculated using the following: idiff > k1 × ibias + is1 where ibias ≤ is2; idiff > k2 × ibias − (k2 − k1) × is2 + is1 where ibias > is2. table i. parameters of buon tua srah hydropower plant: rated capacity 50.6 mva; nominal current ig.n 2116.5a; rated voltage 13.8kv; frequency 50hz; pt ratio at the generator terminal (13.8/√3)kv / (0.11/√3)kv / (0.11/3)kv; line ct ratio 2500/1; neutral ct ratio 2500/1; rated secondary current 0.8466a; synchronous reactance xd 99.28%; transient reactance x'd 28.27%; subtransient reactance x''d 20.93%; synchronous reactance xq 62.88%; negative-sequence reactance x2 23.60%; static negative-sequence current i2∞ 250a; transient negative-sequence capability i2²t 40s; krelay_err: sel300g 2%, g60 1%, p343 5%, reg670 2%. b.
abb reg670 [9, 10]. the pick-up value (idmin) is: idmin = krel × 2kerr.ct × ig.n/ct = 1.5 × (2 × 0.05) × 2116.5/2500 = 0.127; we set idmin = 0.2·ig.n. in section 1 the risk of false differential current is very low, so endsection1 is set to the experience value: endsection1 = 0.5 × ig.n/ct = 0.5 × 2116.5/2500 = 0.4233a. slopesection2 is proposed to be set to 30%. breakpoint 2 is set to the experience value: endsection2 = 3 × ig.n/ct = 3 × 2116.5/2500 = 2.5398. slopesection3 = 80%; it is supposed to cope with false differential currents related to current transformer saturation. the f87g operates when idiff exceeds the threshold idmin plus a percentage of ibias: idiff > idmin + slopesection2 × (ibias − endsection1) when endsection1 ≤ ibias ≤ endsection2; idiff > idmin + slopesection2 × (endsection2 − endsection1) + slopesection3 × (ibias − endsection2) when ibias ≥ endsection2. c. sel300g [13, 14]. the slope1 setting can be calculated as: slp1 = 2 × kct_err + krelay_err + kmargin (4); slp1 = 2 × 5% + 2% + 5% = 17%, so 20% is proper. the o87p setting is calculated using the guidance o87p = 0.5 × slp1 = 0.1. the slp2 setting is fixed to 100%, and the turning point between slope1 and slope2, defined by the value irs1, is fixed to 3.0 per unit. the purpose of the unrestrained differential element pickup setting (u87p) is to detect the very high differential current that clearly indicates a fault inside the differential protection zone. the u87p setting is set to 10 per unit as recommended by the relay manufacturer. the criteria for internal and external faults can be seen from the differential characteristic and are described below: idiff > o87p where ibias ≤ o87p/slp1; idiff > slp1 × ibias when o87p/slp1 < ibias < irs1; idiff > slp2 × ibias + (slp1 − slp2) × irs1 when ibias ≥ irs1. d. ge multilin g60 [15, 16]. the pickup setting is: pickup = krel × kerr × ig.n/ct = 1.5 × 0.02 × 2116.5/2500 = 0.025398, so 0.1a is proper. slope1 is set at 15%, starting from 0.04 (rc). the break1 setting should be greater than the maximum overload expected for the machine: break1 = 1.05 × ig.n/ct = 1.05 × 2116.5/2500 = 0.88893, so it is set at 1.2. slope2 is set at 80%. the break2 setting is set at 3 or 4; it provides security from misoperation for the maximum fault and the resulting maximum ct error condition. the criteria for internal and external faults can be seen from the differential characteristic and are described below: idiff > pickup where ibias ≤ pickup/slope1; idiff > slope1 × ibias when pickup/slope1 < ibias < break1; idiff ≥ c0 + c1 × ibias + c2 × ibias² + c3 × ibias³ when break1 < ibias < break2, where the coefficients c0, c1, c2, c3 define a smooth transition between the two slopes; idiff > slope2 × ibias when ibias ≥ break2. all setting parameter results are calculated based on the best manufacturer practices, adjusted to compensate for ct ratio error and mismatch via a dual-slope characteristic, as typically shown in figure 7. review: the numerical algorithm of the f87g is very similar to motor differential protection. it is also principally simpler than that of power transformer differential protection: no phase shifts and no transformation ratios, typical for power transformers, must be numerically allowed for. a suggested scheme for instantaneous and sensitive protection against generator internal faults is presented in [17]. the variable slope percentage differential relay is a widely used form of differential relaying for generator protection. in this type of relay, the percentage slope characteristic may vary from about 10% at low values of through current up to 100% or more at high values of through current.
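the dual-slope trip logic shared by the four characteristics above (a pickup level, slope1 up to a breakpoint, and a steeper slope2 beyond it) can be sketched as follows, using the p343-style settings derived in this section (is1 = 0.2, k1 = 0, is2 = 1, k2 = 0.2, secondary per-unit values):

```python
# Dual-slope percentage differential check: below the breakpoint is2 the
# threshold is k1*ibias + is1; beyond it the steeper slope k2 applies,
# offset so the characteristic stays continuous at the breakpoint.
def dual_slope_trips(i_diff, i_bias, is1=0.2, k1=0.0, is2=1.0, k2=0.2):
    if i_bias <= is2:
        threshold = k1 * i_bias + is1
    else:
        threshold = k2 * i_bias - (k2 - k1) * is2 + is1
    return i_diff > threshold

print(dual_slope_trips(0.05, 0.85))  # normal load -> restrain (False)
print(dual_slope_trips(8.0, 4.5))    # heavy internal fault -> trip (True)
```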
the p343 is more sensitive than the other relays, with k1 = 0% during light internal faults, and relatively low sensitivity (k2 = 20%) during heavy external faults. iii. power system under study the matlab/simulink tool is useful for the basic understanding of power system protection, particularly for new engineers. it helps them to model the f87g system behavior under normal and fault conditions. in this section, the f87g performance is tested on the 13.8kv, 50.6mva synchronous generator, connected to the 220kv grid through a 51mva, d11yn step-up transformer as shown in figure 3. a. the generator the generator is represented by its impedance and an ac source, required for the system supply. it is located at the point where one side can easily be grounded. a resistor of 0.5ω, which exists in the actual grounded generator neutral, is not represented. this assumption does not have a significant influence on the study. fig. 3. power system model b. current transformer models the cts at both ends of the stator windings are 2500/1a, 5p20, 30va units, on which the excitation test, winding resistance and current-ratio measurements were performed automatically with the vanguard ezct-2000 test set at central electrical testing company limited. all of the ezct's test leads can be connected to the ct output terminals, eliminating the need for lead switching during testing. the test voltage output is automatically raised and lowered by the ezct without any operator intervention. once the test is completed, test results can be printed and excitation curves can be plotted on the built-in 4.5-inch wide thermal printer, as in figure 4. fig. 4. v-i curve of ct 2500/1a, knee type per the iec 10/50 standard [18]: vpk = 1401.44v, ipk = 0.0124a, ct ratio: 2485.885/1a, error: 0.5646%, ex v = 99.6v, ex i = 0.02a, phase angle: 0.120°, ct pole: in phase, winding res: 13.94ω.
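the ratio error quoted in the fig. 4 test data can be reproduced from the nameplate and measured ratios:

```python
# Cross-check of the quoted ezct-2000 ratio test: comparing the measured
# ratio 2485.885/1 against the nameplate 2500/1 reproduces the ~0.56%
# ratio error reported in the fig. 4 data.
def ratio_error_percent(nameplate, measured):
    return 100.0 * (nameplate - measured) / nameplate

print(round(ratio_error_percent(2500.0, 2485.885), 4))  # -> 0.5646
```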
the proposed mathematical current transformer model is shown in figure 5 and is based on the ct saturation theory and calculator presented by the ieee power system relaying and control committee (psrc) and the practical testing results of the ct obtained with the vanguard ezct-2000 test set above (see [19] for more details). fig. 5. a mathematical current transformer model c. differential protection relay model by performing the essential computations given in section ii, we got the four models of the f87g relay (p343, reg670, sel300g, g60), which are connected on the ct secondary side at both ends of the generator (il, in). these signals were then processed using a recursive discrete fourier transform algorithm and combined with the setting parameters sent to an s-function block. this block has been developed for detecting generator stator winding internal faults. if the output from the s-function block is equal to zero, there is no fault in the stator winding of the generator; otherwise, the stator winding of the generator has a fault. the inside of the simulink model of the f87g block is shown in figure 6. fig. 6. differential protection relay p343 block d. three phase fault block the three-phase fault blocks f1 and f2 generate fault types with fault resistance varying from 1ω to 35ω, inside and outside the protected zone, respectively. iv. simulation results after building the proposed model, it is ready to analyze the operation of the f87g under the three cases below. in the first case, the normal condition shows the phase current waveforms captured at both terminals, in = il = 2200a, ct error ≈ 0%, idiff ≈ 0a, and no relay has generated a trip signal.
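the discrete fourier transform step mentioned above extracts the fundamental-frequency phasor from the sampled ct currents before the differential and restraint currents are formed; a minimal full-cycle (non-recursive) dft sketch, assuming n samples per 50 hz cycle:

```python
import cmath
import math

# Full-cycle DFT phasor estimate of the fundamental component: correlate
# one cycle of samples with the fundamental exponential and scale by 2/N.
# (The relay model uses a recursive variant; this is the plain form.)
def dft_phasor(samples):
    n = len(samples)
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return (2.0 / n) * acc  # complex phasor of the fundamental

# sanity check: a unit-amplitude cosine should yield a phasor of ~1+0j
n = 32
wave = [math.cos(2 * math.pi * k / n) for k in range(n)]
print(dft_phasor(wave))
```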
the trajectory of the operating point seen by the relay lies in the restrain zone (figure 7). fig. 7. current waves, ct error, trip signals and trajectory of the operating point for the normal condition in the second case, an internal fault occurs at 0.16s on the terminal of the phase 'abg' stator winding as shown in figure 8. the phase currents captured at both terminals are in = 2ka, il = 1ka and remain in the same phase, ct error ≈ 0%, idiff ≈ 8a. according to the change in current waveforms during the fault, the trajectory of the operating point moves quickly into the trip zone, which results in tripping at 0.178s (reg670), 0.1832s (sel300g and g60), and 0.1835s (p343). in the third case, a three-phase fault occurs at 0.1s on the terminal of the synchronous generator, external to the stator winding. if the cts had no error, the currents at both ends of the stator windings would remain at the same value and opposite phase, and idiff ≈ 0a. unfortunately, during fault conditions, as shown in figure 9, cts do not always perform ideally, since core saturation may cause a breakdown of the ratio (in = 20ka, il = 1.8ka). such core saturation, here from 0.13s to 0.15s, is the result of a dc transient in the primary fault current and the total burden impedance zb = 150ω, and may be aggravated by the residual flux left in the core by a previous fault. the trajectory of the operating point moves into the trip zone of the p343, sel300g, and reg670. after that, it comes back to the restrain zone at 0.23s. therefore, in order to provide additional security against maloperation during this event, the relay incorporates saturation detection logic. when saturation is detected, the element will make an additional check on the angle between the neutral and line current.
if this angle indicates an internal fault, then tripping is permitted. in this case, the generator does not trip because the differential relay does not respond to the fault, since the fault happens outside the protected zone. fig. 8. current waves, ct error, trip signals and trajectory of the operating point for an internal fault condition (abg fault) review: the simulation results show that under the no-fault condition and external faults the relay does not trip, while under an internal fault condition the trip signal is issued. in other words, the characteristics are very sensitive to internal faults and insensitive to ct error currents during severe external faults. v. conclusions in order to verify that the deployed generator protection scheme works as designed after field installation, this paper discussed the example of the buon tua srah hydropower plant. the paper provides calculations of the appropriate pickup threshold, slope and breakpoint settings for an f87g function available in today's modern multifunctional generator protection ieds such as the p343, reg670, sel300g, and g60. it also presents important aspects of generator protection system analysis at different conditions, which were simulated on a synchronous machine stator winding in matlab/simulink. this software shows the two cts' current waves, and the differential and restraint current trajectories on the relay characteristics. it tells the user how far the differential locus intrudes into the trip zone. according to the obtained results, it has been shown that the ieds operate safely and reliably. fig. 9. current waves, ct error, trip signals and trajectory of the operating point for an external fault condition (abc fault) acknowledgment the authors would like to thank the central electrical testing company limited, vietnam for allowing the use of the test records of current transformers and the setting calculations of hydropower plants used in this study. references [1] y. t.
huang, investigating the performance of generator protection relays using a real-time simulator, msc thesis, university of kwazulu-natal, 2013 [2] d. reimert, protection relaying for power generation systems, taylor & francis group, 2006 [3] m. v. sudhakar, l. k. sahu, “simulation of generator protection using matlab”, ieee international conference on smart technologies and management for computing, communication, controls, energy and materials, chennai, india, august 2-4, 2017 [4] k. kadriu, g. kabashi, l. ahma, “misoperation of the differential protection during the dynamic processes of faults in the secondary protection circuit. differential protection modeling with matlab software and fault simulation”, 5th wseas international conference on power systems and electromagnetic compatibility, corfu, greece, august 23-25, 2005 [5] w. yousef, m. a. elsadd, a. y. abdelaziz, m. a. badr, “performance evaluation of generator-transformer unit overall differential protection in large power plant”, international journal on power engineering and energy, vol. 9, no. 3, pp. 869-877, 2018 [6] n. w. kinhekar, s. daingade, a. kinhekar, “current differential protection of alternator stator winding”, international conference on power systems transients, kyoto, japan, june 3-6, 2009 [7] r. mohemmed, a. cakir, “modeling and simulation of differential relay for stator winding generator protection by using anfis algorithm”, international journal of scientific & engineering research, vol. 7, no. 12, pp. 1668-1673, 2016 engineering, technology & applied science research vol. 9, no. 
4, 2019, 4342-4348 4348 www.etasr.com le & vu: performance evaluation of a generator differential protection function for a numerical relay [8] ieee, c37.102 guide for ac generator protection, ieee, 2006 [9] abb, generator protection reg670 application manual, abb, 2012 [10] abb engineering (shanghai) ltd, #1, #2 g-t unit protection setting calculations for buon tua srah hydro power plant, abb engineering, 2009 [11] schneider electric, micom p343 generator protection relay-technical manual, schneider electric, 2011 [12] ecidi-alstom consortium, cscs system protection parameter calculation, vnss4-c3-9-001 /a, ecidi-alstom consortium, 2010 [13] xj electricity co., generator relay protection setting calculation instruction, xj electricity co., 2010 [14] sel, sel-300g multifunction generator relay instruction manual, sel, 2018 [15] ge digital energy, g60 generator protection system-instruction manual, ge digital energy, 2015 [16] ge grid automation, application book-ct requirements for ge multilin relays, ge grid automation, 2016 [17] i. brncic, z. gajic, s. roxenborg, adaptive differential protection for generators and shunt reactors, abb power technologies ab, 2007 [18] central electrical testing company limited, “the test record of current transformer 2500/1a, 5p20 30va, 04/17/2014”, 2014 [19] k. h. le, p. h. vu, “testing and evaluation of factors affecting the current transformer saturation”, journal of science and technology, thai nguyen university, vol. 189, no. 13, pp. 129-134, 2018 (in vietnamese) microsoft word 10-3-2759_s1_etasr_v9_n4_pp4367-4370 engineering, technology & applied science research vol. 9, no. 4, 2019, 4367-4370 4367 www.etasr.com alghamdi: creep resistance of polyethylene-based nanocomposites creep resistance of polyethylene-based nanocomposites abdulaziz s. 
alghamdi
mechanical engineering department, college of engineering, university of hail, hail, saudi arabia
asbg945@hotmail.com

abstract—the purpose of this work is to investigate the effects of the addition of carbon black (cb), carbon nanotubes (cnts) and nanoclay sheets on the creep behavior of polyethylene-based nanocomposites synthesized with an in-house processing method. a blend of 75 wt.% uhmwpe and 25 wt.% hdpe, abbreviated to u75h25, was used as the hybrid pe matrix to accommodate the nanofillers. 0.5 wt.% of cb, cnts or clay nanosheets was embedded separately into the blend matrix in order to improve the creep resistance. scanning electron microscopy (sem) and transmission electron microscopy (tem) showed that the nanofillers were homogeneously dispersed in the u75h25. the addition of just 0.5 wt.% nanoclay resulted in a significant increase in the creep resistance of the polyethylene blend. conversely, the addition of cb or cnts caused a reduction in the creep resistance. the embedding of cnts into the matrix resulted in creep behavior close to that of pure uhmwpe. the burger's model was employed to understand the effect of the nanoparticle addition on the creep mechanism.

keywords-uhmwpe; hdpe; polymer; creep; nanocomposite; polyethylene

i. introduction

polyethylene (pe) is the most widely used thermoplastic because of its outstanding properties, such as low moisture absorption, chemical resistance, high toughness and ease of processing [1]. it was found that the incorporation of various nanofillers can lead to a significant improvement in the properties of polyethylene composites, which can then be used in many applications such as packaging, electrical and thermal energy storage, automotive, and biomedical applications [1-6]. recently, new polyethylene nanocomposites have been developed with the use of various processing methods and different types and amounts of reinforcements [7-16].
these nanocomposites can be a cost-effective alternative to high-cost advanced composites and can be widely used in various industrial applications [1]. however, achieving uniform dispersion of the nanoparticles is still an important scientific and technological challenge in nanocomposite fabrication. poor dispersion of the nanofillers, weak interaction between the filler and the matrix, and agglomeration can lead to a reduction of the mechanical properties [9]. in [10], it was found that the embedding of mwcnt and nanoclay into the polyethylene matrix significantly increased the hardness, elastic modulus and indentation resistance of polyethylene-based nanocomposites. in this work, different types of nanofillers with different geometric shapes were used in order to improve the creep resistance of the uhmwpe/hdpe blend. the filler fractions were kept at a low percentage to minimize the effect of agglomeration, especially for cb and cnts. the creep response was analyzed using the burger's model.

ii. experimental work

a. materials

the materials tested in this study were uhmwpe/hdpe blended polymers with various nanofillers. nascent uhmwpe powders (sabic® uhmwpe3548) with an average molecular weight of 3×10⁶ g/mol were purchased from sabic. hdpe powders (exxonmobil™ hdpe hma014) were purchased from ico ltd. carbon black (cb) powder with the commercial product name black pearls® 4040 (bp4040) and an average particle diameter of 28nm was provided by cabot corporation. natural hectorite nanoclay was supplied by elementis specialties. multi-wall nanotubes (mwnt) with diameters in the range of 5nm to 50nm were provided by nanocyl. butylated hydroxytoluene and tris(nonylphenyl) phosphite, supplied by sigma-aldrich, were used as primary and secondary antioxidants, to maintain the long-term thermal stability and the melt processing stability, respectively.

b. processing

an in-house pre-mix technology was used to incorporate the nanofillers into the uhmwpe and hdpe powders.
a twin-screw extruder was then used to blend the uhmwpe and hdpe powders pre-mixed with 0.5wt.% of cb, carbon nanotubes (cnts) or nanoclay to form nano-filled uhmwpe/hdpe composites. a blend of 75wt.% uhmwpe and 25wt.% hdpe, abbreviated to u75h25, was used as the hybrid pe matrix to accommodate the nanofillers. during processing, the mixing temperature was controlled in five zones from the feeding port to the die; the processing parameters are shown in table i. compression moulding was used to mould the nanocomposite materials. the raw material was placed into a mould (100mm×100mm×1.65mm) and then heated to 190ºc, which is higher than the melting point of the composite (approximately 135ºc). various mould pressures (154, 232, 309, and 386mpa) were studied in order to optimize material properties such as hardness and crystallinity. various holding times at maximum pressure (10, 15 and 30min) were also used to identify the most appropriate moulding parameters. the optimal moulding pressure and holding time were 309mpa and 15min respectively, which resulted in the highest values of hardness and crystallinity. after compression moulding, the mould was cooled to room temperature with the use of water. then, the specimens were cut from the plaques into a dumbbell shape using a die punch cutter with the following dimensions: 75mm overall length, 25mm length of the narrow parallel-sided portion, 12.5mm width at the ends, 4mm width of the narrow portion and 1.65mm thickness.

table i. processing method parameters
extruder speed (rpm): 190
processing temperature (ºc): zone 1: 220, zone 2: 250, zone 3: 260, zone 4: 270, die: 280
cooling: water

corresponding author: abdulaziz s. alghamdi

c. mechanical testing and characterization
in order to characterize the nanofiller dispersion and the microstructure of the u75h25 nanocomposites, several experimental techniques were used. these included differential scanning calorimetry (dsc), scanning electron microscopy (sem) and transmission electron microscopy (tem). dsc (ta instruments; shimadzu dsc60) was used to analyze the effect of the different compression moulding parameters and nanofiller types on the crystallinity of the blend and nanocomposites. the specimens, with an average mass of 5±0.2mg, were sealed in aluminium pans and heated from 20ºc to 180ºc at a rate of 10ºc per minute. the mass fraction degree of crystallinity was then determined by comparing the heat of fusion with that of fully crystalline polyethylene at the equilibrium melting point (290kj/kg) [17]. the surface morphology was investigated using a leo 440 sem from leo electron microscopy ltd and a philips xl30 esem-feg from fei company. the dispersion of the nanofillers was studied after fracturing the samples in liquid nitrogen and then coating them with platinum. a jeol 2000fx tem from jeol ltd. was used to analyze the dispersion of the nanofillers in the blend matrix. tensile creep tests were carried out using an instron 3366 tensile testing machine (instron corporation) at room temperature (22±2ºc).

d. burger's model

creep modeling and analysis is important for determining the time response, which leads to an understanding of the chain dynamics. the burger's model, which is a combination of kelvin-voigt and maxwell elements, is the most used model to describe the linear viscoelastic behavior of composites. the total strain as a function of time can be obtained from (1) [18]:

ε(t) = σ/em + (σ/ek)(1 − e^(−t/τ)) + (σ/ηm)t    (1)

where σ is the applied stress, em and ηm are the elastic and viscous components of the maxwell unit, ek and ηk are the elastic and viscous components of the kelvin unit, and τ=ηk/ek is the retardation time taken to produce 63.2% of the total deformation in the kelvin unit.

iii. results and discussion

a.
nanofillers dispersion

figure 1 shows the sem images of the microstructure of the u75h25/nanofiller composites. it can be seen that cb, nanoclay and cnts are dispersed homogeneously in the u75h25 matrix. however, small agglomerations of the cb nanofillers can be observed, which can lead to a reduction of the load carrying capacity between cb and the polymer matrix. these cb agglomerations have also been observed in the tem image, as seen in figure 2(a). moreover, figure 1(c) shows good dispersion of the cnts in the polyethylene matrix. a single clay nanosheet and a single cnt can be seen in figures 2(b) and 2(c), respectively. this indicates a uniform distribution of both clay nanosheets and cnts in the blend matrix.

fig. 1. sem images of the microstructure of (a) u75h25-0.5wt.% cb, (b) u75h25-0.5wt.% clay, and (c) u75h25-0.5wt.% cnts

fig. 2. tem images of the microstructure of (a) u75h25-0.5wt.% cb, (b) u75h25-0.5wt.% clay, and (c) u75h25-0.5wt.% cnts

b. thermal analysis

table ii presents the dsc results for u75h25 and its nanocomposites. it can be seen that the addition of 0.5wt.% cb nanoparticles has no effect on crystallinity, whereas crystallinity increases significantly with the addition of 0.5wt.% clay nanosheets. the incorporation of 0.5wt.% cnts resulted in a slight reduction in the crystallinity value. these changes in the crystallinity values can be attributed to the effect of the nanofiller shapes and the interaction between the nanofillers and the polyethylene matrix.

table ii. thermal properties of polyethylene-based nanocomposites
material: crystallinity (%)
uhmwpe: 53.3
u75h25: 55.3
u75h25-0.5wt.% cb: 55.2
u75h25-0.5wt.% clay: 69
u75h25-0.5wt.% cnt: 50

c.
tensile creep results

figure 3 shows the effect of blending hdpe on the creep resistance of uhmwpe, and the effects of nanofiller addition on the creep resistance of the u75h25 blend. it can be seen that blending 25wt.% of hdpe with 75wt.% uhmwpe resulted in an increase in the creep resistance by 32%. this can be attributed to the influence of the hdpe chains and spherulite properties on the chain mobility during creep. the viscoelastic behavior of semi-crystalline polymers such as uhmwpe and hdpe is a combination of the mobility of the crystalline and amorphous phases, and changes in these microstructures can lead to significant variations in the polymer properties. the addition of the cb and cnt nanofillers resulted in a reduction in the creep resistance of the u75h25 blend. this can be attributed to the agglomeration of the nanofillers, which reduces the surface-to-volume ratio, the agglomerates acting as defects in the microstructure. polyethylene is a nonpolar polymer; therefore, the interaction between the nanofillers and the polyethylene matrix is rather weak. this can affect the effectiveness of the load transfer between the matrix and the nanofiller, which then affects the mechanical properties. however, the addition of 2d plate-like nanofillers showed a significant improvement in the creep resistance of the polyethylene blend. the addition of only 0.5wt.% of clay nanosheets resulted in a 22% increase in the creep resistance. this can be attributed to the good dispersion, the interaction between the nanoclay and the polyethylene matrix, the increase in crystallinity, and the plate-like shape of the nanoclay.

fig. 3. creep strain curves of uhmwpe, the u75h25 blend, and the nanocomposites, together with the burger's model fits.

d. constitutive modeling

as shown in figure 3, the fitted curves are in satisfactory agreement with the experimental data. the data were fitted to the burger's model, all parameters being obtained by minimizing the sum of the squared differences between the actual and calculated strains, using the solver in excel.
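the fitting procedure described above (minimizing the sum of squared differences between measured and calculated strains) can be sketched in python instead of the excel solver. this is an illustrative sketch: the applied stress of 5 mpa and the time window are assumed values not given in the text, and the synthetic "measurements" are generated from the uhmwpe parameters of table iii rather than taken from the actual test data. it exploits the fact that, for a fixed retardation time τ, the burger's model is linear in the remaining compliances, so a grid search over τ combined with ordinary least squares recovers all four parameters.

```python
import numpy as np

def burgers_strain(t, sigma, e_m, eta_m, e_k, eta_k):
    """total creep strain of the burger model (maxwell + kelvin-voigt in series)."""
    tau = eta_k / e_k  # retardation time of the kelvin unit
    return (sigma / e_m
            + (sigma / e_k) * (1.0 - np.exp(-t / tau))
            + (sigma / eta_m) * t)

def fit_burgers(t, strain, sigma, tau_grid):
    """least-squares fit of the burger model. for each candidate tau the strain
    is linear in (1/e_m, 1/e_k, 1/eta_m), so one lstsq solve per tau suffices;
    the tau with the smallest squared residual wins."""
    best = None
    for tau in tau_grid:
        basis = np.column_stack([np.full_like(t, float(sigma)),
                                 sigma * (1.0 - np.exp(-t / tau)),
                                 sigma * t])
        coef, *_ = np.linalg.lstsq(basis, strain, rcond=None)
        resid = float(np.sum((basis @ coef - strain) ** 2))
        if best is None or resid < best[0]:
            e_m, e_k, eta_m = 1.0 / coef[0], 1.0 / coef[1], 1.0 / coef[2]
            best = (resid, {"e_m": e_m, "eta_m": eta_m, "e_k": e_k,
                            "tau": tau, "eta_k": tau * e_k})
    return best[1]

# synthetic creep curve using the uhmwpe row of table iii (applied stress assumed 5 mpa)
t = np.linspace(1.0, 600.0, 200)
eps = burgers_strain(t, 5.0, 445.0, 381e3, 561.0, 31472.1)
params = fit_burgers(t, eps, 5.0, tau_grid=np.arange(30.0, 80.0, 0.1))
print(round(params["tau"], 1))  # 56.1
```

on noiseless synthetic data the grid search recovers the table iii parameters essentially exactly; on real data a finer τ grid or a nonlinear solver would play the role of the excel solver.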
table ii shows the berger’s model parameters that indicate an increasing in the values with blending 25wt.% hdpe with 75wt.% uhmwpe. further increasing can be observed with the addition of plate-like clay nanosheets. the elasticity em and the stiffness of the amorphous phase ek of the blend have increased by 32% by the addition of nanoclay. the parameter ηm represents the irrecoverable creep strain, which also engineering, technology & applied science research vol. 9, no. 4, 2019, 4367-4370 4370 www.etasr.com alghamdi: creep resistance of polyethylene-based nanocomposites increased with the addition of nanoclay. this indicates that a reduction in the dashpot flow can occur, which leads to a reduction in the permanent deformation. however, retardation time, where τ is the delayed response to the applied stress for the u75h25-0.5wt.% clay, is less than the retardation time for the blend and the uhmwpe. conversely, the addition of cb and cnts nanofillers to the blend matrix shows a reduction in elasticity, stiffness and in the irrecoverable creep strain. table iii. burger’s model parameters material em ηm (x10 3 ek τ ηk (mpa) mpa.s) (mpa) (s) (mpa.s) uhmwpe 445 381 561 56.1 31472.1 u75h25 617 605 789 53.4 42132.6 u75h25-0.5wt.% cb 558.7 480 724 52.3 37865.2 u75h24-0.5wt.% clay 907 629 1020 37.7 38454 u75h25-0.5wt.% cnt 497 445 626 53.6 33553.6 iv. conclusion the main findings in this work are summarized as follows: • blending 25wt.% of hdpe with 75wt.% uhmwpe resulted in a significant increase in the creep resistance. • the addition of low weight fraction of plate-like nanoclay leads to further improvement in the creep resistance of the u75h25 blend. • the embedding of cb and cnts into the blend matrix resulted in a reduction in the creep resistance, which can be attributed to the weak interaction between the filler and the polyethylene matrix. 
moreover, the agglomeration of these types of nanofillers can reduce the surface to volume ratio, which can significantly affect the load transfer between the matrix and the filler. • elasticity, stiffness and the irrecoverable creep strain have increased with the addition of plate-like nanoclay. references [1] p. n. khanam, m. a. a. maadeed, “processing and characterization of polyethylene-based composites”, advanced manufacturing: polymer & composites science, vol. 1, no. 2, pp. 63-79, 2015 [2] a. sari, “form-stable paraffin/high density polyethylene composites as solid–liquid phase change material for thermal energy storage: preparation and thermal properties”, energy conversion and management, vol. 45, no. 13–14, pp. 2033–2042, 2004 [3] k. m. manu, s. soni, v. r. k. murthy, m. t. sebastian, “ba(zn1/3ta2/3)o3 ceramics reinforced high density polyethylene for microwave applications”, journal of materials science: materials in electronics, vol. 24, no. 6, pp. 2098–2105, 2013 [4] t. k. dey, m. tripathi, “thermal properties of silicon powder filled high-density polyethylene composites”, thermochimica acta, vol. 502, no. 1–2, pp. 35–42, 2010 [5] l. fang, y. leng, p. gao, “processing of hydroxyapatite reinforced ultrahigh molecular weight polyethylene for biomedical applications”, biomaterials, vol. 26, no. 17, pp. 3471–3478, 2005 [6] q. zhang, s. rastogi, d. chen, d. lippits, p. j. lemstra, “low percolation threshold in single-walled carbon nanotube/high density polyethylene composites prepared by melt processing technique”, carbon, vol. 44, no. 4, pp. 778–785, 2006 [7] a. s. alghamdi, i. a. ashcroft, m. song, d. cai, “morphology and strain rate effects on heat generation during the plastic deformation of polyethylene/carbon black nanocomposites”, polymer testing, vol. 32, no. 6, pp. 1105–1113, 2013 [8] a. s. alghamdi, i. a. ashcroft, m. song, d. 
cai, “nanoparticle type effects on heat generation during the plastic deformation of polyethylene nanocomposites”, polymer testing, vol. 32, no. 8, pp. 1502–1510, 2013 [9] a. s. alghamdi, i. a. ashcroft, m. o. song, “creep resistance of novel polyethylene/carbon black nanocomposites”, international journal of materials science and engineering, vol. 2, no. 1, pp. 1-5, 2014 [10] a. s. alghamdi, i. a. ashcroft, m. o. song, “high temperature effects on the nanoindentation behaviour of polyethylene-based nanocomposites”, international journal of computational methods and experimental measurements, vol. 3, no. 2, pp. 79–88, 2015 [11] c. v. gorwade, a. s. alghamdi, i. a. ashcroft, v. v. silberschmidt, m. o. song, “finite element analysis of the high strain rate testing of polymeric materials”, journal of physics: conference series, vol. 382, articleid 012043, 2012 [12] a. s. alghamdi, “nanoparticle type effects on the scratch resistance of polyethylene-based nanocomposites”, international journal of advanced and applied sciences, vol. 4, no. 4, pp. 1-6, 2017 [13] d. r. paul, l. m. robeson, “polymer nanotechnology: nanocomposites”, polymer, vol. 49, no. 15, pp. 3187-3204, 2008 [14] m. rahmat, p. hubert, “carbon nanotube-polymer interactions in nanocomposites: a review”, composites science and technology, vol. 72, no. 1, pp. 72-84, 2011 [15] x. jiang, l. t. drzal, “multifunctional high-density polyethylene nanocomposites produced by incorporation of exfoliated graphene nanoplatelets 2: crystallization, thermal and electrical properties”, polymer composites, vol. 33, no.4, pp. 636-642, 2012 [16] a. s. alghamdi, “synthesis and mechanical characterisation of high density polyethylene/graphene nanocomposites”, engineering, technology & applied science research, vol. 8, no. 2, pp. 2814-2817, 2018 [17] s. humbert, o. lame, g. vigier, “polyethylene yielding behaviour: what is behind the correlation between yield stress and crystallinity?”, polymer, vol. 50, no. 15, pp. 
3755-3761, 2009 [18] w. n. findley, j. s. lai, k. onaran, creep and relaxation of nonlinear viscoelastic materials, dover publications, 1989

engineering, technology & applied science research vol. 10, no. 4, 2020, 5974-5978 5974 www.etasr.com nguyen et al.: nonlinear inelastic analysis of 2d steel frames

nonlinear inelastic analysis of 2d steel frames: an improvement of the plastic hinge method

phu-cuong nguyen, faculty of civil engineering, ho chi minh city open university, ho chi minh city, vietnam, cuong.pn@ou.edu.vn
binh le-van, faculty of civil engineering, ho chi minh city open university, ho chi minh city, vietnam, binh.lv@ou.edu.vn
son dong tam vo thanh, faculty of civil engineering, ho chi minh city open university, ho chi minh city, vietnam, son.dtvt@ou.edu.vn

abstract—in this study, a new method for the nonlinear analysis of 2d steel frames, obtained by improving the conventional plastic hinge method, is presented. the beam-column element is established and formulated in detail using a fiber plastic hinge approach. residual stresses of i-shaped sections are assigned through fibers at the two element ends. gradual yielding along the member length caused by residual stresses and axial force is accounted for by the tangent elastic modulus concept. the p-δ effect is captured by stability functions, whereas the p-∆ effect is estimated by the geometric stiffness matrix. a nonlinear algorithm is established for solving the nonlinear problems. the present study predicts the strength and behavior of 2d steel frames as efficiently and accurately as the plastic zone method.

keywords-fiber plastic hinge; nonlinear algorithm; residual stress; stability functions; steel frames

i. introduction

nowadays, the direct design of steel frames is permitted by modern design codes. a direct design accounts for the effects of geometric nonlinearity, material inelasticity, imperfections, residual stresses, etc. directly and simultaneously in advanced analysis.
there are usually two methods for advanced analysis: plastic hinge methods [1-15] and plastic zone methods [3, 16-25]. authors in [14, 15, 22, 23, 26] developed a spring element for accounting for the stiffness of beam-to-column connections in the nonlinear behavior analysis of steel frames with flexible connections. recently, authors in [27-29] investigated the behavior of steel frames including the effects of connections. the author in [30] investigated the effect of the iranian standard no. 2800 on the elastic and inelastic behavior of dual steel systems using the nonlinear pushover analysis of the commercial software sap2000. up to now, in spite of the developments in computer science and technology, plastic zone methods are still expensive for the daily engineering design of steel frames. plastic hinge methods are simpler, computationally efficient, and acceptably accurate, so they are suitable for practical design. plastic hinge methods were studied widely from 1980 to 2000 [1-8]. authors in [1, 3] used hermite interpolation functions to predict the displacements of beam-column elements. plastic hinges were assumed to be concentrated at the two element ends. for considering geometric nonlinearity, the beam-column elements were divided into many short elements. residual stresses and imperfections were not accounted for in the direct analysis. authors in [6, 7] used a fifth-order interpolation function for considering the second-order effects of beam-column elements. the plasticity of the cross-sections at the two element ends is modeled by two springs using the section assemblage concept. the tangent stiffness matrix of the structural system is established by integrating the stiffness of the beam-column elements and the stiffness of the springs. authors in [10] derived a finite element formulation for a beam-column element with an arbitrarily located plastic hinge along the element length.
authors in [9] proposed a second-order inelastic large-deflection analysis method using only one element per member, including three plastic hinges per member. in 2014, liu et al. [13] also proposed an arbitrarily-located plastic hinge element for the direct analysis of planar steel frames. their method was developed directly from an initially out-of-straight element. king et al. [5] proposed a second-order inelastic analysis method for steel frames. this method employed stability functions for predicting the second-order effects accurately. gradual yielding at the plastic hinges was accounted for using the lrfd interaction equations. gradual yielding along the member length caused by residual stresses and axial force was calculated by the crc tangent modulus concept. one element per member was used for modeling. in 2002, ziemian and mcguire [8] improved the results of the plastic hinge method using a modified tangent modulus formulation. the results were nearly identical to those of the sophisticated plastic zone method in some examples, but the method should be verified further with different problems. the plastic hinge method of [5] is effective and saves computational time because it uses only one element per member. however, residual stresses at the two plastic hinges are not considered in the analysis, and in some problems the results have a significant error when compared with the 'exact' solutions of plastic zone methods. this study develops a new plastic hinge method which can capture the nonlinear behavior of steel frames accurately. the proposed method employs the fiber discretization of the cross-section. stability functions and the geometric stiffness matrix are utilized for accounting for the second-order effects. a nonlinear static algorithm implementing the generalized displacement method is established for solving the nonlinear problems. through several examples, the present method is shown to be reliable and straightforward for tracing the nonlinear behavior of 2d steel frames.
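two of the ingredients summarized above — the crc tangent modulus and the stability-function-based hinge stiffness — can be sketched as follows. this is an illustrative sketch under stated assumptions: the stability functions are only cited from the literature, so their standard closed forms for a compressive axial force are used here (for p → 0 they reduce to the familiar slope-deflection coefficients 4 and 2), the function names are hypothetical, and zero or tensile axial force is handled crudely.

```python
import math

def crc_tangent_modulus(p, p_y, e):
    """crc tangent modulus concept: elastic up to half the squash load p_y,
    then a parabolic degradation that mimics the gradual yielding caused by
    residual stresses under axial force."""
    ratio = abs(p) / p_y
    if ratio <= 0.5:
        return e
    return 4.0 * ratio * (1.0 - ratio) * e  # continuous at ratio = 0.5

def stability_functions(p, ei, l):
    """standard closed-form stability functions for a compressive axial force
    p on an element of flexural rigidity ei and length l (sketch only)."""
    if p <= 0.0:
        return 4.0, 2.0  # zero/tensile load handled crudely in this sketch
    kl = l * math.sqrt(p / ei)
    den = 2.0 - 2.0 * math.cos(kl) - kl * math.sin(kl)
    s1 = (kl * math.sin(kl) - kl * kl * math.cos(kl)) / den
    s2 = (kl * kl - kl * math.sin(kl)) / den
    return s1, s2

def hinge_stiffness(p, e_t, i, a, l, eta_i, eta_j):
    """3x3 incremental force-displacement matrix of a fiber plastic hinge
    element: eta = 1 means an elastic hinge, eta -> 0 a fully yielded one."""
    s1, s2 = stability_functions(p, e_t * i, l)
    c = e_t * i / l
    k_ii = eta_i * (s1 - (s2 * s2 / s1) * (1.0 - eta_j))
    k_jj = eta_j * (s1 - (s2 * s2 / s1) * (1.0 - eta_i))
    k_ij = eta_i * eta_j * s2
    return [[c * a / i, 0.0, 0.0],
            [0.0, c * k_ii, c * k_ij],
            [0.0, c * k_ij, c * k_jj]]
```

with eta_i = eta_j = 1 and a vanishing axial force the moment block reduces to the familiar elastic (ei/l)·[4, 2; 2, 4] matrix, which is a convenient sanity check for any implementation.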
corresponding author: phu-cuong nguyen

ii. behavior of the beam-column element

a. p-δ effect

the stability functions studied in [31] are utilized for predicting the p-δ effect. with one element per member, they economize computational resources and analysis time. the force-displacement relationship of a 2d beam-column element, in incremental form, can be written as:

\begin{Bmatrix} \Delta P \\ \Delta M_i \\ \Delta M_j \end{Bmatrix} = \frac{EI}{L} \begin{bmatrix} A/I & 0 & 0 \\ 0 & S_1 & S_2 \\ 0 & S_2 & S_1 \end{bmatrix} \begin{Bmatrix} \Delta\delta \\ \Delta\theta_i \\ \Delta\theta_j \end{Bmatrix}    (1)

where Δp, Δmi, and Δmj are the incremental axial force and end moments, Δδ, Δθi, and Δθj are the incremental axial displacement and end rotations, a is the sectional area, i is the moment of inertia about the z axis, l is the element length, e is the young's modulus of the steel, and s1 and s2 are the stability functions.

b. fiber plastic hinge

figure 1 illustrates the fiber plastic hinge method. in this method, the stress and strain of the fibers are monitored at the two element ends i and j. the force-displacement relation of a 2d element considering both the p-δ effect and plasticity can be formulated as in (2):

\begin{Bmatrix} \Delta P \\ \Delta M_i \\ \Delta M_j \end{Bmatrix} = \frac{E_t I}{L} \begin{bmatrix} A/I & 0 & 0 \\ 0 & \eta_i \left( S_1 - \frac{S_2^2}{S_1}(1 - \eta_j) \right) & \eta_i \eta_j S_2 \\ 0 & \eta_i \eta_j S_2 & \eta_j \left( S_1 - \frac{S_2^2}{S_1}(1 - \eta_i) \right) \end{bmatrix} \begin{Bmatrix} \Delta\delta \\ \Delta\theta_i \\ \Delta\theta_j \end{Bmatrix}    (2)

where ηi and ηj are scalar parameters accounting for the gradual yielding of the fiber hinges.

fig. 1. fiber plastic hinge method.
they are estimated as:

η_i = Σ_{k=1}^{n} e_tk^{(i)} (a_k y_k² + i_k) / (e_t i)    (3)

η_j = Σ_{k=1}^{n} e_tk^{(j)} (a_k y_k² + i_k) / (e_t i)    (4)

where n is the number of fibers in the sections at i and j, e_tk^{(i)} and e_tk^{(j)} are the tangent moduli of the kth fiber at ends i and j, a_k is the area of the kth fiber, i_k is its moment of inertia, y_k is its centroid coordinate, shown in figure 1, and e_t is the tangent modulus of the element.

c. residual stresses

following [5], the crc tangent modulus e_t is applied in order to consider the effect of residual stresses along the member length. this effect is similar to the spread of plasticity along the length due to axial force, and is formulated as:

e_t = e for p ≤ 0.5p_y    (5)

e_t = 4(p/p_y)(1 − p/p_y)e for p > 0.5p_y    (6)

where p_y is the squash load. the eccs residual stress pattern [2] is assigned as an initial condition to the fibers, as shown in figure 2.

d. fiber state

the cross-sections at the element ends are divided into n fibers for considering the gradual yielding of the two fiber plastic hinges at i and j, as shown in figure 1. the fibers are monitored and their state (stress, strain) is updated. if a fiber has yielded, its elastic modulus is set to zero. the incremental axial strain Δε and curvature Δχ of the cross-section, and the section forces, are written as:

sectional force vector: {Δn  Δm}ᵀ    (7)

sectional deformation vector: {Δε  Δχ}ᵀ    (8)

the sectional deformation vector is estimated as:

\begin{Bmatrix} \Delta\varepsilon \\ \Delta\chi \end{Bmatrix} = \begin{bmatrix} \sum_{k=1}^{n} E_k A_k & -\sum_{k=1}^{n} E_k A_k y_k \\ -\sum_{k=1}^{n} E_k A_k y_k & \sum_{k=1}^{n} \left( E_k A_k y_k^2 + E_k I_k \right) \end{bmatrix}^{-1} \begin{Bmatrix} \Delta N \\ \Delta M \end{Bmatrix}    (9)

fig. 2. eccs residual stress pattern.

e.
p-∆ effect

the elemental tangent stiffness matrix is written as:

[k_t]_{5×5} = [t]ᵀ [k_e]_{3×3} [t] + [k_g]_{5×5}    (10)

where the transformation matrix [t]_{3×5} of the element is formulated as:

[T]_{3\times 5} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1/L & -1/L \\ 0 & 0 & 1 & 1/L & -1/L \end{bmatrix}    (11)

and [k_g] is the geometric stiffness matrix:

[k_g]_{5\times 5} = \begin{bmatrix} 0 & 0 & 0 & \frac{M_i + M_j}{L^2} & -\frac{M_i + M_j}{L^2} \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \frac{M_i + M_j}{L^2} & 0 & 0 & \frac{P}{L} & -\frac{P}{L} \\ -\frac{M_i + M_j}{L^2} & 0 & 0 & -\frac{P}{L} & \frac{P}{L} \end{bmatrix}    (12)

iii. nonlinear solution

the nonlinear solution algorithm of yang and shieh [32] is adopted to solve the structural system. yang and shieh's method is one of the most efficient and stable numerical methods, and it can easily handle problems with several critical points. the equilibrium equation of the steel frame is:

[K_j^{i-1}] \{\Delta d_j^i\} = \lambda_j^i \{\hat{P}\} + \{R_j^{i-1}\}    (13)

where [k_j^{i−1}] is the tangent stiffness matrix, {Δd_j^i} is the displacement increment vector, {p̂} is the reference load vector, {r_j^{i−1}} is the residual force vector, and λ_j^i is the load coefficient.

iv. examples and discussion

a. column flexural buckling

figure 3 illustrates a simply supported steel column under axial compression. the young's modulus is e=200000mpa and the poisson's ratio of the steel is v=0.3. a small horizontal load of 0.004p is applied at mid-height of the column to represent a geometric imperfection.

fig. 3. simply supported column (w8×31 section, two 2.5m segments, lateral disturbing load 0.004p).

fig. 4. load-deflection curves of the column.

the load-deflection curves of the column captured by the present study and abaqus are plotted in figure 4. with one element per column, the present method can precisely predict the behavior and strength of the column, whereas abaqus overestimates the strength by 10.6%. abaqus needs more than five elements to obtain this accuracy. this example illustrates the accuracy of the present study in predicting the second-order effect.

b.
Portal Steel Frame
Vogel [3] proposed the portal steel frame as a benchmark for second-order inelastic methods, and Nguyen and Kim [25] analyzed this frame with a plastic zone method. The frame configuration is described in Figure 5. The elastic modulus is E = 205000 MPa and the yield stress of the steel is σ_y = 235 MPa. The cross-sections are HEA340 (beam) and HEB300 (columns). The authors in [3, 25] used 50 elements per column and 40 elements for the beam, while the present program uses one element per beam-column member, with the I-shaped cross-sections meshed into 24 fibers for the flanges and 18 fibers for the web. The load-deflection curve of the present study closely matches Vogel's result, as plotted in Figure 6; Nguyen and Kim's result is lower than Vogel's, with a −2.05% error relative to Vogel's result. The collapse load coefficients of the different methods are listed in Table I: the present study deviates by less than 0.8% from Vogel's result. Analyzing this problem on a computer with an Intel Core i7-7500 (4 CPUs, 2.70 GHz) and 16 GB RAM takes only 15 s, which shows the accuracy and computational speed of the proposed method.

[Fig. 3 data: W8×31 column, 2 × 2.5 m segments; axial load P with lateral load 0.004P.]

Fig. 5. Portal steel frame.

Table I. Collapse load coefficient for the portal frame
Method | Collapse load coefficient | Error (%)
[3] | 1.022 | –
[25] | 1.001 | −2.05
Present study | 1.014 | −0.78

Fig. 6. Load-deflection curves of the portal frame.

C. Six-Story Steel Frame
The six-story steel frame plotted in Figure 7 was first analyzed by Vogel [3], who used both plastic zone and plastic hinge methods. Chan and Chui [7] used the refined plastic hinge method, and Nguyen and Kim used a fiber beam-column method in [22] and the plastic zone method in [25].
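As a pure-Python illustration (not the authors' implementation), the CRC tangent modulus of (5)-(6) and the element-level assembly of (10)-(12) can be sketched as follows. The basic 3×3 stiffness `ke` and all numeric values are assumed inputs for demonstration.

```python
# Sketch of the CRC tangent modulus, (5)-(6), and the element tangent
# stiffness assembly, (10)-(12). Symbols: P = axial force, Py = squash
# load, L = element length, Mi, Mj = end moments. The 3x3 basic
# stiffness ke is assumed to be supplied by the element formulation.

def crc_tangent_modulus(E, P, Py):
    """Et = E for P <= 0.5*Py, else Et = 4*(P/Py)*E*(1 - P/Py)."""
    if P <= 0.5 * Py:
        return E
    return 4.0 * (P / Py) * E * (1.0 - P / Py)

def transformation_matrix(L):
    """[T] 3x5 of (11), mapping local dof to natural deformations."""
    return [[1, 0, 0, 0,      0],
            [0, 1, 0, 1 / L, -1 / L],
            [0, 0, 1, 1 / L, -1 / L]]

def geometric_stiffness(P, Mi, Mj, L):
    """[kg] 5x5 of (12): P-Delta terms P/L and (Mi+Mj)/L^2."""
    a = (Mi + Mj) / L**2
    b = P / L
    kg = [[0.0] * 5 for _ in range(5)]
    kg[0][3], kg[0][4] = a, -a
    kg[3][0], kg[4][0] = a, -a
    kg[3][3], kg[3][4] = b, -b
    kg[4][3], kg[4][4] = -b, b
    return kg

def element_tangent_stiffness(ke, P, Mi, Mj, L):
    """[kt] = [T]^T [ke] [T] + [kg], per (10)."""
    T = transformation_matrix(L)
    keT = [[sum(ke[r][k] * T[k][c] for k in range(3)) for c in range(5)]
           for r in range(3)]          # [ke][T], 3x5
    kt = [[sum(T[k][r] * keT[k][c] for k in range(3)) for c in range(5)]
          for r in range(5)]           # [T]^T ([ke][T]), 5x5
    kg = geometric_stiffness(P, Mi, Mj, L)
    return [[kt[r][c] + kg[r][c] for c in range(5)] for r in range(5)]
```

Note that for a symmetric `ke` the assembled `kt` stays symmetric, as expected of a tangent stiffness matrix.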
All columns are inclined at an out-of-plumb angle of ψ = 1/450. The steel properties are E = 205000 MPa and σ_y = 235 MPa. In the present study, five elements per beam and one element per column were used for modeling. Figure 8 and Table II show the results predicted by the various methods. For this frame, the predicted load-deflection curves of the various methods differ little, and the curve of the present study is almost identical to that of Vogel's plastic zone method. The critical strength predicted by the proposed method (1.116) has less than 0.45% error compared with Vogel's plastic zone result (1.111). Analyzing this problem on the same Intel computer takes only 53 s, showing that the proposed program is accurate and efficient in predicting the nonlinear behavior and strength of 2D steel frames.

Fig. 7. Six-story steel frame.
Fig. 8. Load-deflection curves of the six-story frame.

Table II. Collapse load coefficient for the six-story frame
Method | Collapse load coefficient | Error (%)
[3] | 1.111 | –
[25] | 1.100 | −0.99
Present study | 1.116 | +0.45

V. Conclusion
A second-order inelastic analysis program for 2D steel frames based on the finite element method has been developed successfully. The effects of P-δ, P-Δ, material inelasticity, residual stresses, and imperfections have been accounted for in the nonlinear analysis using the generalized displacement control method. The proposed method is simple, accurate, and efficient in predicting the strength and behavior of steel frames.
The proposed method can be integrated into commercial software for daily engineering design using advanced analysis.

[Fig. 5 data: 4.0 m × 5.0 m portal frame; column loads 2800 kN; lateral load 35 kN; HEB300 columns, HEA340 beam; ψ = 1/400.]
[Fig. 7 data: two 6.0 m bays, six 3.75 m stories (22.5 m); floor loads 49.1 kN/m, roof load 31.7 kN/m; lateral loads F1 = 10.23 kN, F2 = 22.44 kN; IPE beams, HEB columns; ψ = 1/450.]
[Figs. 6 and 8 axes: load coefficient vs. horizontal deflection (mm); curves: Vogel, Nguyen and Kim, present study.]

Acknowledgment
The authors gratefully acknowledge the financial support granted by the Scientific Research Fund of the Ministry of Education and Training (MOET), Vietnam (No. B2019-MBS-01). The authors would also like to thank Ho Chi Minh City Open University and colleagues for supporting this project.

References
[1] J. G. Orbison, "Nonlinear static analysis of three-dimensional steel frames," Ph.D. dissertation, Cornell University, Ithaca, NY, USA, 1982.
[2] ECCS, Ultimate Limit State Calculations of Sway Frames with Rigid Joints. Brussels, Belgium: ECCS General Secretariat, 1984.
[3] U. Vogel, "Calibrating frames," Stahlbau, vol. 10, pp. 295-301, Oct. 1985.
[4] S.-H. Hsieh and G. G. Deierlein, "Nonlinear analysis of three-dimensional steel frames with semi-rigid connections," Computers & Structures, vol. 41, no. 5, pp. 995-1009, Jan. 1991, doi: 10.1016/0045-7949(91)90293-U.
[5] W. S. King, D. W. White, and W. F.
Chen, "Second-order inelastic analysis methods for steel-frame design," Journal of Structural Engineering, vol. 118, no. 2, pp. 408-428, Feb. 1992, doi: 10.1061/(ASCE)0733-9445(1992)118:2(408).
[6] S.-L. Chan and P. P.-T. Chui, "A generalized design-based elastoplastic analysis of steel frames by section assemblage concept," Engineering Structures, vol. 19, no. 8, pp. 628-636, Aug. 1997, doi: 10.1016/S0141-0296(96)00138-1.
[7] S. L. Chan and P. P. T. Chui, Non-Linear Static and Cyclic Analysis of Steel Frames with Semi-Rigid Connections. Amsterdam, Netherlands: Elsevier Science, 2000.
[8] R. D. Ziemian and W. McGuire, "Modified tangent modulus approach, a contribution to plastic hinge analysis," Journal of Structural Engineering, vol. 128, no. 10, pp. 1301-1307, Oct. 2002, doi: 10.1061/(ASCE)0733-9445(2002)128:10(1301).
[9] S. L. Chan and Z. H. Zhou, "Elastoplastic and large deflection analysis of steel frames by one element per member. II: Three hinges along member," Journal of Structural Engineering, vol. 130, no. 4, pp. 545-553, Apr. 2004, doi: 10.1061/(ASCE)0733-9445(2004)130:4(545).
[10] Z. H. Zhou and S. L. Chan, "Elastoplastic and large deflection analysis of steel frames by one element per member. I: One hinge along member," Journal of Structural Engineering, vol. 130, no. 4, pp. 538-544, Apr. 2004, doi: 10.1061/(ASCE)0733-9445(2004)130:4(538).
[11] H. Van Long and N. Dang Hung, "Second-order plastic-hinge analysis of 3-D steel frames including strain hardening effects," Engineering Structures, vol. 30, no. 12, pp. 3505-3512, Dec. 2008, doi: 10.1016/j.engstruct.2008.05.013.
[12] C. Ngo-Huu, P.-C. Nguyen, and S.-E. Kim, "Second-order plastic-hinge analysis of space semi-rigid steel frames," Thin-Walled Structures, vol. 60, pp. 98-104, Nov. 2012, doi: 10.1016/j.tws.2012.06.019.
[13] S.-W. Liu, Y.-P. Liu, and S.-L. Chan, "Direct analysis by an arbitrarily-located-plastic-hinge element — part 1: planar analysis," Journal of Constructional Steel Research, vol.
103, pp. 303-315, Dec. 2014, doi: 10.1016/j.jcsr.2014.07.009.
[14] P.-C. Nguyen and S.-E. Kim, "Nonlinear inelastic time-history analysis of three-dimensional semi-rigid steel frames," Journal of Constructional Steel Research, vol. 101, pp. 192-206, Oct. 2014, doi: 10.1016/j.jcsr.2014.05.009.
[15] P.-C. Nguyen and S.-E. Kim, "Investigating effects of various base restraints on the nonlinear inelastic static and seismic responses of steel frames," International Journal of Non-Linear Mechanics, vol. 89, pp. 151-167, Mar. 2017, doi: 10.1016/j.ijnonlinmec.2016.12.011.
[16] C. M. Foley and S. Vinnakota, "Inelastic analysis of partially restrained unbraced steel frames," Engineering Structures, vol. 19, no. 11, pp. 891-902, Nov. 1997, doi: 10.1016/S0141-0296(97)00175-2.
[17] L. H. Teh and M. J. Clarke, "Plastic-zone analysis of 3D steel frames using beam elements," Journal of Structural Engineering, vol. 125, no. 11, pp. 1328-1337, Nov. 1999, doi: 10.1061/(ASCE)0733-9445(1999)125:11(1328).
[18] X.-M. Jiang, H. Chen, and J. Y. R. Liew, "Spread-of-plasticity analysis of three-dimensional steel frames," Journal of Constructional Steel Research, vol. 58, no. 2, pp. 193-212, Feb. 2002, doi: 10.1016/S0143-974X(01)00041-4.
[19] C. G. Chiorean, "A computer method for nonlinear inelastic analysis of 3D semi-rigid steel frameworks," Engineering Structures, vol. 31, no. 12, pp. 3016-3033, Dec. 2009, doi: 10.1016/j.engstruct.2009.08.003.
[20] P.-C. Nguyen, N. T. N. Doan, C. Ngo-Huu, and S.-E. Kim, "Nonlinear inelastic response history analysis of steel frame structures using plastic-zone method," Thin-Walled Structures, vol. 85, pp. 220-233, Dec. 2014, doi: 10.1016/j.tws.2014.09.002.
[21] P.-C. Nguyen and S.-E. Kim, "Distributed plasticity approach for time-history analysis of steel frames including nonlinear connections," Journal of Constructional Steel Research, vol. 100, pp. 36-49, Sep. 2014, doi: 10.1016/j.jcsr.2014.04.012.
[22] P.-C. Nguyen and S.-E.
Kim, "An advanced analysis method for three-dimensional steel frames with semi-rigid connections," Finite Elements in Analysis and Design, vol. 80, pp. 23-32, Mar. 2014, doi: 10.1016/j.finel.2013.11.004.
[23] P.-C. Nguyen and S.-E. Kim, "Second-order spread-of-plasticity approach for nonlinear time-history analysis of space semi-rigid steel frames," Finite Elements in Analysis and Design, vol. 105, pp. 1-15, Nov. 2015, doi: 10.1016/j.finel.2015.06.006.
[24] A. Saritas and A. Koseoglu, "Distributed inelasticity planar frame element with localized semi-rigid connections for nonlinear analysis of steel structures," International Journal of Mechanical Sciences, vol. 96-97, pp. 216-231, Jun. 2015, doi: 10.1016/j.ijmecsci.2015.04.005.
[25] P.-C. Nguyen and S.-E. Kim, "Advanced analysis for planar steel frames with semi-rigid connections using plastic-zone method," Steel and Composite Structures, vol. 21, no. 5, pp. 1121-1144, Jan. 2016, doi: 10.12989/scs.2016.21.5.1121.
[26] P.-C. Nguyen and S.-E. Kim, "Nonlinear elastic dynamic analysis of space steel frames with semi-rigid connections," Journal of Constructional Steel Research, vol. 84, pp. 72-81, May 2013, doi: 10.1016/j.jcsr.2013.02.004.
[27] N. L. Tran and T. H. Nguyen, "Reliability assessment of steel plane frame's buckling strength considering semi-rigid connections," Engineering, Technology & Applied Science Research, vol. 10, no. 1, pp. 5099-5103, Feb. 2020.
[28] N. W. Bishay-Girges, "Improved steel beam-column connections in industrial structures," Engineering, Technology & Applied Science Research, vol. 10, no. 1, pp. 5126-5131, Feb. 2020.
[29] N. Konkong, "An investigation on the ultimate strength of cold-formed steel bolted connections," Engineering, Technology & Applied Science Research, vol. 7, no. 4, pp. 1826-1832, Aug. 2017.
[30] H. Veladi and H. Najafi, "Effect of Standard No.
2800 rules for moment resisting frames on the elastic and inelastic behavior of dual steel systems," Engineering, Technology & Applied Science Research, vol. 7, no. 6, pp. 2139-2146, Dec. 2017.
[31] W.-F. Chen, Structural Stability: Theory and Implementation. Englewood Cliffs, NJ, USA: Prentice Hall, 1987.
[32] Y.-B. Yang and M.-S. Shieh, "Solution method for nonlinear problems with multiple critical points," AIAA Journal, vol. 28, no. 12, pp. 2110-2116, 1990, doi: 10.2514/3.10529.

Engineering, Technology & Applied Science Research Vol. 13, No. 3, 2023, 10765-10768 (10765) www.etasr.com — Doan Van: Application of Advanced Deep Convolutional Neural Networks for the Recognition of Road …

Application of Advanced Deep Convolutional Neural Networks for the Recognition of Road Surface Anomalies

Dong Doan Van
Science and Technology Application for Sustainable Development Research Group, Ho Chi Minh City University of Transport, Vietnam
dongdv@ut.edu.vn (corresponding author)

Received: 28 March 2023 | Revised: 8 April 2023 | Accepted: 14 April 2023
Licensed under a CC-BY 4.0 license | Copyright (c) by the authors | DOI: https://doi.org/10.48084/etasr.5890

Abstract
The detection of road surface anomalies is a crucial task for modern traffic monitoring systems. In this paper, we used the YOLOv8 network, a state-of-the-art convolutional neural network architecture for real-time object recognition, to automatically identify potholes, cracks, and patches on the road surface. We created a custom dataset of 1044 road surface images in Vietnam, each of which was annotated with pavement anomalies, and the YOLOv8 network was trained on this dataset. The results show that the model achieved an accuracy of 0.56 mAP at a threshold of 0.5, indicating its potential for practical application.

Keywords-road surface anomalies; convolutional neural networks; digital image processing; transportation

I.
Introduction
Identifying anomalies on road surfaces, such as potholes, cracks, and bumps, is an important factor in enabling road maintenance, providing a better driving experience, and reducing the risk of accidents (collisions, falls, etc.) [1-5]. Promptly analyzing data related to street condition can support better decisions about transportation spending [4]. Currently, anomalies on the road surface are repaired when they are reported by citizens or when a major incident occurs; a real-time system that automatically detects the various anomalies on urban and national roads does not exist. Systems for identifying road anomalies can be divided into three categories: vision-based, sensor-based, and 3D reconstruction methods [6]. The sensor-based method mostly uses sensor data to identify road anomalies. The authors in [5] compared the Decision Tree (DT) and Support Vector Machine (SVM) algorithms for classifying road abnormalities using data measured from acceleration and gyro sensors. The authors in [7] used inertial sensor datasets collected in different contexts (e.g. dirt roads, cobblestones, and asphalt roads) to detect and classify road surface abnormalities; based on the reported results, their Convolutional Neural Network (CNN) model achieved the best performance with an accuracy of 93.17%. The authors in [8] developed a hybrid method combining threshold-based signal processing techniques and machine learning algorithms to form a near real-time road anomaly detection system. On the other hand, the technique that uses 3D reconstruction to estimate the shape of road anomalies and evaluate their volume through stereo-vision technology is considered the most precise of the three. However, this method is more costly than the other approaches and has difficulty identifying potholes that are filled with water or dirt.
For instance, the authors in [9] developed a pixel-level road surface anomaly detection approach based on stereo vision and deep learning. Specifically, a vehicle-mounted photography system was used to capture both parallel and oblique photos to generate a 3D pavement point-cloud model. Stereo-vision technology was employed in the 3D reconstruction phase to process the input images. Point-cloud calibration relied on a PCA algorithm, and various orthoimages, including color, depth, and overlapped images, were generated during the 3D data-processing phase. To identify pavement cracks and potholes in the orthoimages, a modified U-Net deep-learning technology was utilized for segmentation. Their approach achieved significant results: 0.9632 precision, 0.9552 recall, and 0.9592 F1 score. The vision-based method uses images to identify the presence of abnormalities through image processing algorithms. The advantage of this method is that it does not require direct access to the location of the abnormalities on the road, making it easy to detect multiple objects at the same time through traffic monitoring cameras or cameras on mobile devices. For example, the authors in [10] proposed a real-time automatic pavement crack and pothole recognition system using a mobile device; the proposed system achieved only 0.7 precision, recall, accuracy, and F1 score. Recently, deep learning techniques have gained widespread application in diverse fields [11-14]. These methods have also been employed in the identification of road surface anomalies, leveraging their strengths such as accurate detection and the ability to handle intricate data. For example, the authors in [1] discussed a deep learning algorithm for detecting potholes on road surfaces; the algorithm employed a CNN with 9 layers.
However, that method is not suitable for real-time applications because it cannot be used for online video processing. The authors in [2] developed a system based on the YOLOv2 [15] network to detect potholes on roads; however, their system can only run offline, cannot be used in real-time applications, and its accuracy only reached 82.5%. The authors in [16] proposed a lightweight CNN model based on a modified MobileNetV2 [17] that can operate on edge devices and perform pixel-wise crack detection on streets. The common drawback of the image processing methods mentioned above is that they cannot meet real-time operational criteria. Therefore, in this article, we propose a method that uses YOLOv8, an advanced CNN model capable of real-time object detection with high accuracy. The proposed method can detect abnormalities on the road surface such as potholes, cracks, and road patches. The technical contributions of this paper can be summarized as:
 To the best of our knowledge, this paper is the first to apply the state-of-the-art YOLOv8 architecture to road anomaly detection.
 The proposed method can detect in real time various types of anomalies, such as potholes, cracks, and patches.
 The empirical results suggest that the proposed method can be applied in practical settings with suitable modifications.

II. Road Anomalies Dataset
We constructed a road surface dataset consisting of 1044 images collected from random roads in Vietnam. Figure 1 shows some examples from the dataset. The data were taken at different times and under different weather conditions, resulting in a wide range of lighting and shadow conditions; this is the biggest challenge for image-based road anomaly detection methods. The dataset was divided into 967 images for training the network and 77 images for model evaluation.
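A train/evaluation split like the one described above (967 training and 77 evaluation images out of 1044) can be sketched as follows; the file names are hypothetical, not taken from the paper's dataset.

```python
# Illustrative sketch of a one-time random train/evaluation split.
import random

def split_dataset(image_names, n_train, seed=0):
    """Shuffle once with a fixed seed, then take the first n_train
    images for training and the remainder for evaluation."""
    rng = random.Random(seed)
    shuffled = image_names[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical file names standing in for the 1044 collected images.
images = [f"road_{i:04d}.jpg" for i in range(1044)]
train, evaluation = split_dataset(images, n_train=967)
```

Fixing the seed keeps the split reproducible across training runs, so evaluation images never leak into training.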
Each image was labeled with three kinds of anomalies, namely potholes, cracks, and road patches. The instance distribution is presented in Figure 2.

III. Methodology
A. YOLOv8 Network
Released at the beginning of 2023, YOLOv8 is the latest generation of the YOLO family and is currently among the most efficient models in tasks such as classification, detection, and segmentation of objects [18]. In object detection tasks, YOLOv8 achieves superior results and faster processing times than other models thanks to its combination of optimization techniques and improvements. Specifically, the YOLOv8m model achieved a 50.2% mAP score on the COCO dataset, which is higher than its predecessors, while requiring fewer parameters [18].

Fig. 1. Examples from the dataset.
Fig. 2. Distribution of road anomalies in the dataset.

YOLOv8 utilizes advanced techniques in the object detection field, such as a decoupled head and anchor-free detection. Novel ideas, such as a mosaic stopping strategy that skips mosaic augmentation for the last 10 epochs, were also introduced. The modifications compared to YOLOv5 [18, 19] are:
 The C3 module was replaced with the C2f module.
 The first 6×6 conv was replaced with a 3×3 conv in the backbone.
 The first 1×1 conv was replaced with a 3×3 conv in the bottleneck.
 A decoupled head was used and the objectness branch was deleted. This technique separates the classification and regression tasks into two separate subnetworks, each with its own set of parameters [11, 20].

Fig. 3. Evaluation results of the proposed model.
Fig. 4. Visualization of the results in a real-time application.

B. Experimental Environment
The proposed YOLOv8 model was trained for 200 iterations with a batch size of 8.
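A training run like the one described here can be sketched with the Ultralytics YOLOv8 Python API [18]. This is a hedged illustration, not the paper's code: the dataset YAML file name is an assumption, the epoch count mirrors the "200 iterations" in the text, and the 1280-pixel input size follows the experimental setup.

```python
# Hedged sketch of the training configuration described in this section.
TRAIN_CFG = {
    "data": "road_anomalies.yaml",  # hypothetical dataset config file
    "epochs": 200,                  # "200 iterations" in the text
    "batch": 8,
    "imgsz": 1280,                  # enlarged input for small objects
}

def train_yolov8(weights="yolov8m.pt"):
    """Launch training; requires `pip install ultralytics`."""
    from ultralytics import YOLO  # imported lazily, heavy dependency
    model = YOLO(weights)         # medium variant benchmarked in [18]
    return model.train(**TRAIN_CFG)
```

Calling `train_yolov8()` downloads the pretrained weights and fine-tunes them on the custom dataset described by the YAML file.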
Due to the relatively small size of the objects of interest, the input image size was increased from 640×640 to 1280×1280. All experiments were run on a computer with the following configuration:
 GPU: NVIDIA RTX 3050, 4 GB VRAM
 CPU: AMD Ryzen 5 5600H, 3.3 GHz
 16 GB RAM

IV. Results and Discussion
The performance of the road anomaly detection model is shown in Figure 3. Specifically, the model's precision reached 84%, and the mean average precision was 56.8% at a confidence threshold of 0.5. The recall criterion, which indicates the ability to detect all objects present in the image, reached a 60% score. The visualized results on the evaluation set can be observed in Figure 4. The results show that the model is capable of detecting anomalies such as potholes, cracks, and patches on the road with high accuracy.

V. Conclusion
In this paper, the utilization of the advanced YOLOv8 convolutional neural network architecture to address the issue of detecting road anomalies was discussed. The study shows that the proposed model's performance is promising, with an mAP of 0.56 at a threshold of 0.5, suggesting that it can be applied in practical settings with suitable modifications. We intend to gather additional road data and further fine-tune the model to achieve better results in the future.

References
[1] V. Pereira, S. Tamura, S. Hayamizu, and H. Fukai, "A deep learning-based approach for road pothole detection in Timor Leste," in 2018 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), Singapore, Jul. 2018, pp. 279-284, https://doi.org/10.1109/SOLI.2018.8476795.
[2] K. E. An, S. W. Lee, S.-K. Ryu, and D.
Seo, "Detecting a pothole using deep convolutional neural network models for an adaptive shock observing in a vehicle driving," in 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, Jan. 2018, https://doi.org/10.1109/ICCE.2018.8326142.
[3] J. M. Celaya-Padilla et al., "Speed bump detection using accelerometric features: A genetic algorithm approach," Sensors, vol. 18, no. 2, Feb. 2018, Art. no. 443, https://doi.org/10.3390/s18020443.
[4] F. Seraj, B. J. van der Zwaag, A. Dilo, T. Luarasi, and P. Havinga, "RoADS: A road pavement monitoring system for anomaly detection using smart phones," in Big Data Analytics in the Social and Ubiquitous Context, 2016, pp. 128-146, https://doi.org/10.1007/978-3-319-29009-6_7.
[5] A. Basavaraju, J. Du, F. Zhou, and J. Ji, "A machine learning approach to road surface anomaly assessment using smartphone sensors," IEEE Sensors Journal, vol. 20, no. 5, pp. 2635-2647, Mar. 2020, https://doi.org/10.1109/JSEN.2019.2952857.
[6] Y.-M. Kim, Y.-G. Kim, S.-Y. Son, S.-Y. Lim, B.-Y. Choi, and D.-H. Choi, "Review of recent automated pothole-detection methods," Applied Sciences, vol. 12, no. 11, Jan. 2022, Art. no. 5320, https://doi.org/10.3390/app12115320.
[7] J. Menegazzo and A. von Wangenheim, "Road surface type classification based on inertial sensors and machine learning," Computing, vol. 103, no. 10, pp. 2143-2170, Oct. 2021, https://doi.org/10.1007/s00607-021-00914-0.
[8] S. Sattar, S. Li, and M. Chapman, "Developing a near real-time road surface anomaly detection approach for road surface monitoring," Measurement, vol. 185, Nov. 2021, Art. no. 109990, https://doi.org/10.1016/j.measurement.2021.109990.
[9] J. Guan, X. Yang, L. Ding, X. Cheng, V. C. S. Lee, and C. Jin, "Automated pixel-level pavement distress detection based on stereo vision and deep learning," Automation in Construction, vol. 129, Sep. 2021, Art. no. 103788, https://doi.org/10.1016/j.autcon.2021.103788.
[10] A. Tedeschi and F.
Benedetto, "A real-time automatic pavement crack and pothole recognition system for mobile Android-based devices," Advanced Engineering Informatics, vol. 32, pp. 11-25, Apr. 2017, https://doi.org/10.1016/j.aei.2016.12.004.
[11] H. D. Quy, N. N. Son, and H. P. H. Anh, "DeYOLOv3: An optimal mass detector for advanced breast cancer diagnostics," in Computational Intelligence Methods for Green Technology and Sustainable Development, 2023, pp. 325-335, https://doi.org/10.1007/978-3-031-19694-2_29.
[12] V. T. H. Tuyet, N. T. Binh, and D. T. Tin, "Improving the curvelet saliency and deep convolutional neural networks for diabetic retinopathy classification in fundus images," Engineering, Technology & Applied Science Research, vol. 12, no. 1, pp. 8204-8209, Feb. 2022, https://doi.org/10.48084/etasr.4679.
[13] D. Patil and S. Jadhav, "Road segmentation in high-resolution images using deep residual networks," Engineering, Technology & Applied Science Research, vol. 12, no. 6, pp. 9654-9660, Dec. 2022, https://doi.org/10.48084/etasr.5247.
[14] N. C. Kundur and P. B. Mallikarjuna, "Deep convolutional neural network architecture for plant seedling classification," Engineering, Technology & Applied Science Research, vol. 12, no. 6, pp. 9464-9470, Dec. 2022, https://doi.org/10.48084/etasr.5282.
[15] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 6517-6525, https://doi.org/10.1109/CVPR.2017.690.
[16] G. Doğan and B. Ergen, "A new mobile convolutional neural network-based approach for pixel-wise road surface crack detection," Measurement, vol. 195, May 2022, Art. no. 111119, https://doi.org/10.1016/j.measurement.2022.111119.
[17] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks." arXiv, Mar. 21, 2019, https://doi.org/10.48550/arXiv.1801.04381.
[18] G. Jocher, A. Chaurasia, and J.
Qiu, "YOLO by Ultralytics." Jan. 2023, [Online]. Available: https://github.com/ultralytics/ultralytics.
[19] G. Jocher et al., "ultralytics/yolov5: v7.0 - YOLOv5 SOTA realtime instance segmentation." Zenodo, Aug. 22, 2022, https://doi.org/10.5281/zenodo.7347926.
[20] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO series in 2021." arXiv, Aug. 05, 2021, https://doi.org/10.48550/arXiv.2107.08430.

Engineering, Technology & Applied Science Research Vol. 6, No. 6, 2016, 1274-1279 (1274) www.etasr.com — Zeghib and Chaker: Efficiency of a Solar Hydronic Space Heating System under the Algerian Climate

Efficiency of a Solar Hydronic Space Heating System under the Algerian Climate

Ilhem Zeghib
Energy Physics Laboratory, Department of Physics, Brothers Mentouri University, Constantine, Algeria
imita75@yahoo.fr

Abla Chaker
Energy Physics Laboratory, Department of Physics, Brothers Mentouri University, Constantine, Algeria
chakamine@yahoo.fr

Abstract
Hydronic heating systems supplied by renewable energy sources are one of the main solutions for substituting fossil fuel and natural gas consumption. This paper presents the modeling and analysis of a solar hydronic heating system in an existing single-family house built in the 1990s and heated by low-temperature radiators. Simulation has been used to study the potential of this system under the climatic conditions of Algeria. For this purpose, simulation models of the thermal behavior of each component of the system were developed in order to evaluate its economic performance. The system is compared with a conventional high-temperature boiler system.
The results indicate that single-family houses can be heated with solar hydronic heating while providing an acceptable level of thermal comfort, with a room temperature of 22°C. According to the analysis, solar energy covers only 20.8% of the total energy consumption of a single-family house. Furthermore, the thermal performance of the conventional heating system can be improved by up to 15%.

Keywords-solar collector; low temperature heating; solar heating; indoor temperature; efficiency

I. Introduction
In Algeria, energy consumption in the building sector accounts for almost 40% of total final energy use [1], with heating and hot water being responsible for almost 60% of that [1]. This high consumption has led to opting for new low-energy buildings and retrofitting old ones, and to measures such as additional insulation, tight building envelopes, and energy-saving equipment. All these measures help reduce the seasonal space heating load and provide an opportunity to use low-temperature heating systems, which usually work with a maximum supply water temperature of 45°C. As a result, a new generation of hydronic heating systems operating at low temperatures and employing renewable energy sources, such as geothermal sources and solar energy, has emerged. In [2], the authors show that renewable sources of heat can be integrated into a district heating system without problems and can contribute to a fossil-free heating sector. In [3], the author studied the technical and economic potential of using solar energy in the Finnish district heating system; the operation of a district heating system connected to a solar collector field was simulated with the TRNSYS (Transient System Simulation Tool) software, and the results show that solar collectors could provide 10% of the yearly heat production. In an existing single-family house, very significant savings can be made by replacing or supplementing an old system with a solar heating system.
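The energy bookkeeping behind figures like the 20.8% solar coverage quoted above can be sketched as follows; the seasonal load values below are illustrative assumptions, not measured data from the study.

```python
# Minimal sketch of the solar-fraction bookkeeping: the solar fraction
# is the share of the total heating demand met by the collectors, the
# remainder being supplied by the auxiliary gas boiler.

def solar_fraction(q_solar, q_total):
    """Fraction of the total heating demand supplied by solar energy."""
    return q_solar / q_total

q_total = 12000.0   # kWh/season, hypothetical single-family heating load
q_solar = 2496.0    # kWh/season, hypothetical collector contribution
f = solar_fraction(q_solar, q_total)   # share covered by solar
q_aux = q_total - q_solar              # energy left for the gas boiler
```

With these illustrative numbers the solar fraction works out to 0.208, i.e. the 20.8% coverage figure; everything else must come from the auxiliary heater.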
The objective is to meet the heating needs using as little auxiliary energy as possible and to make the best use of solar energy. For this purpose, integration of the solar thermal system should be relatively simple: all that needs to be done is to replace the boiler of a traditional heating circuit with a storage tank heated by a set of solar panels, and the existing conventional radiators with oversized (low-temperature) radiators. In this paper, the dynamic behavior of the solar hydronic space heating system is represented by a mathematical model corresponding to an energy balance of each element of the solar system: collector, tank, and radiators.

II. System Description
A. Heating System
A schematic diagram of the solar hydronic space heating system used in the present study is shown in Figure 1. The solar space heating system consists of flat-plate collectors, a hot water storage tank, the piping, the controllers, and the auxiliary heating system [4]. The load distribution system consists of low-temperature radiators and two pumps that transfer the energy to the storage and to the load. The circulating water from the collector transmits its heat to the storage tank water, where it is stored in the form of sensible heat until it can be used. An auxiliary heater (gas boiler) is connected to the storage tank to supplement solar heating when needed to meet the temperature requirement of the load. In our study, the heat production system is the combination of solar collectors and a gas boiler. When the water temperature in the storage tank is high enough for space heating, the solar collectors act as the only heat source and deliver hot water directly to the low-temperature radiators. Accordingly, the auxiliary heater operates only when the temperature of the water in the tank is lower than 45°C, which reduces the boiler energy consumption.
6, 2016, 1274-1279 1275 www.etasr.com zeghib and chaker: efficiency of a solar hydronic space heating system under the algerian climate

fig. 1. schematic diagram of the solar low-temperature heating system

b. hydronic distribution system

the components of a solar water distribution system are similar to those of a conventional heating system. hydronic systems use pipes and circulation pumps to distribute heated water throughout the house. a heat distribution grid transfers heat from the tank to the low-temperature radiators. the low-temperature hydronic heating concept can be extended to existing houses by replacing the existing radiators with low-temperature radiators and changing the design operating conditions from 70/60/20°c to 50/40/20°c [3]. in our study, the length and height of the radiators are kept the same but their depth is changed. this means that while all the original radiators were type 21, the low-temperature radiators are type 33 [5]. as such, replacing the radiators is very easy, because the new radiators can be connected to the existing piping system without any changes.

iii. modeling system components

the simulation procedure involves casting mathematical models for each system component and then combining these models consecutively to accomplish the complete simulation. the thermal models compute the exit temperature of each component (collector, storage tank, low-temperature radiator), and the system performance is estimated every minute.

a. thermal analysis

1) solar collector

two flat-plate collectors, each with a 1.5 m² gross surface area, were used for the collection of solar energy. the collectors were installed facing south and inclined at an angle of 45° from the horizontal for maximum gain during winter. the mathematical model describes the flat-plate solar collector system considering the transient properties of its different zones.
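as a simple illustration of the collector-level bookkeeping, the collector efficiency over a period is the useful energy extracted by the fluid divided by the solar energy incident on the collector area. this is a sketch of that standard ratio, not the paper's fortran code; the sampled values are invented, and only the 3 m² total area (two 1.5 m² panels) comes from the text:

```python
# collector efficiency over a period: useful energy extracted by the
# fluid divided by the solar energy striking the collector.
# the per-minute samples below are invented for illustration.

def collector_efficiency(q_useful_w, irradiance_w_m2, area_m2, dt_s=60.0):
    """Integrate both 1-minute time series and take the energy ratio."""
    e_useful = sum(q_useful_w) * dt_s                    # J extracted
    e_incident = area_m2 * sum(irradiance_w_m2) * dt_s   # J incident
    return e_useful / e_incident

# 3 m² of collectors (two 1.5 m² panels, as in the paper), toy data
q_u = [900.0, 1100.0, 1000.0]     # W extracted by the fluid, per minute
g_t = [700.0, 800.0, 750.0]       # W/m² incident irradiance, per minute
eta = collector_efficiency(q_u, g_t, area_m2=3.0)
assert 0.0 < eta < 1.0
```

real collector data would of course span whole days at 1-minute resolution, as in the paper's 96 h simulation interval.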
in the proposed model, the analyzed control volume of the flat-plate solar collector contains one tube and is divided into five nodes (glass cover, air gap, absorber, fluid and insulation) perpendicular to the liquid flow direction. the collector efficiency is defined as the ratio of the usable heat energy extracted from the collector by the heat transfer fluid during any time period to the solar energy striking the cover during that same period [6]:

\eta_c = \frac{\int_{t_1}^{t_2} Q_u \, dt}{A_c \int_{t_1}^{t_2} I \, dt} \quad (1)

2) water storage tank

a stratified storage tank was used in this simulation. the mathematical model for heat transfer in the storage tank is based on the one-dimensional transient heat transport equation, by convection and conduction, along the prevailing flow direction of the tank. in multi-node modeling, the tank is divided into n nodes or sections, with an energy balance written for each node. the energy equation takes into account the energy gained from the collector, the energy lost to the surroundings, and the energy utilized by the load. this results in n differential equations that are solved simultaneously to obtain the temperature of each node [7]. in our study the tank is divided into 150 nodes. the thermal efficiency of the tank is [9]:

\eta_s = \frac{Q_{dis}}{Q_s} \quad (2)

where Q_{dis}, the heat transferred to the distribution system, is defined by:

Q_{dis} = Q_s - Q_{loss,s} \quad (3)

the energy stored in the tank is related to the mass and to the difference between the initial and final temperature of the water in the storage tank, and can be expressed as [10]:

Q_s = \frac{1}{n} \sum_{i=1}^{n} m_{s,i} \, c_p \, (T_{s,i} - T_{s,i,1}) \quad (4)

the heat loss from the tank is given by [11]:

Q_{loss,s} = \frac{1}{n} \sum_{i=1}^{n} U_{s,i} \, A_{s,i} \, (T_{s,i} - T_a) \quad (5)

3) auxiliary heating system

the auxiliary heating system used in conjunction with the solar heating system is of the plain/traditional type. although the auxiliary boiler can be controlled in different ways, normally the auxiliary heater, placed in series, is used to raise the temperature of the water from storage only when the water temperature in the storage tank is too low to meet the heating requirement of the house; the desired temperature is 45°c. the auxiliary efficiency was calculated as:

\eta_{au} = \frac{Q_{au}}{Q_{bur}} = \frac{\dot{m}_s \, c_p \, (T_{so} - T_{s,1})}{\dot{m}_{gaz} \, PCI_{gaz}} \quad (6)

where Q_{au} is the useful energy transmitted by the auxiliary heater to the hot water and Q_{bur} is the energy used by the burner.

4) pipe heating

the task of the distribution system is to connect the various components of the heating system. the pipes that distribute the heating medium lose some of its heat to the surroundings. this heat loss causes undesired cooling of the medium in the pipes; it can be acceptable if the pipes are placed within a heated space. in general, however, this form of heat transfer is undesirable, since it cannot be regulated and may not be required most of the time. the efficiency of the pipe heating is:

\eta_{pi} = \begin{cases} 1 - Q_{loss,p}/Q_{dis}, & T_{s,1} > 45^{\circ}C \\ 1 - Q_{loss,p}/Q_{au}, & T_{s,1} < 45^{\circ}C \end{cases} \quad (7)

the heat emission of the pipes is calculated as follows [12]:

Q_{loss,p} = y_c \, l_c \, (T_{mf} - T_n) \quad (8)

where y_c is the linear thermal transmittance, l_c is the total length of the pipes, T_{mf} is the pipes' inner temperature and T_n is the interior temperature.

5) low-temperature radiators

the level of the water temperature supplied to the heat emitters in buildings plays a major role in primary energy consumption. the main principle of a low-temperature heating system is to provide the same thermal comfort as a medium-temperature heating system while using a lower supply temperature. the radiators for low-temperature systems are physically and technically the same as traditional panel radiators; the only key factor that changes is the sizing.
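the multi-node tank balance described above can be sketched with a simple explicit-euler update. this is our illustrative discretization, not the paper's fortran code: the loss and conduction coefficients are assumed values, and the collector and load flows are omitted so that only the wall-loss and inter-node exchange terms remain.

```python
# explicit-euler update of a simplified multi-node storage tank:
# each node exchanges heat with its neighbours by conduction and loses
# heat to the ambient through its wall (a U*A*(T - Ta) term per node).
# parameter values are illustrative, not the paper's.

def tank_step(T, dt, ua_node, mcp_node, k_cond, T_amb):
    n = len(T)
    T_new = T[:]
    for i in range(n):
        q = -ua_node * (T[i] - T_amb)        # wall loss of node i, W
        if i > 0:
            q += k_cond * (T[i - 1] - T[i])  # exchange with node above
        if i < n - 1:
            q += k_cond * (T[i + 1] - T[i])  # exchange with node below
        T_new[i] = T[i] + dt * q / mcp_node  # dT = q*dt / (m*cp)
    return T_new

# 150 nodes as in the paper, initially stratified 55 °C (top) to 35 °C (bottom)
T = [55.0 - 20.0 * i / 149 for i in range(150)]
for _ in range(60):                          # one hour of 1-minute steps
    T = tank_step(T, dt=60.0, ua_node=0.05, mcp_node=8000.0,
                  k_cond=0.5, T_amb=20.0)
# losses and mixing pull every node towards the ambient temperature
assert max(T) < 55.0 and min(T) > 20.0
```

a full model would add the collector charge flow and the load draw to each node's balance, giving the n coupled differential equations mentioned in the text.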
normally, manufacturers' data sheets quote the radiator output for a given temperature difference (water to air). if a radiator is required to run at a lower temperature than normal, its size must be increased to compensate for this temperature difference. the water return temperature of the radiators, based on the actual heat emission and the inlet water temperature, is calculated by [13]:

C_r \frac{dT_{rs}}{dt} = \dot{m}_r \, c_p \, (T_{re} - T_{rs}) - U_r A_r \, (T_{rs} - T_b) \quad (9)

the emission efficiency is given by:

\eta_{re} = \frac{Q_{re}}{Q_p} \quad (10)

6) energy demand in the house

the energy balance of a house is characterized by energy losses and gains. energy losses occur through transmission and ventilation, and these losses can be fully or partly compensated by energy gains. different sources of energy gains can be utilized, such as internal gains caused by appliances and users, as well as solar gains through openings. in this study, the energy need of the house during heating is estimated using the equations of the thermal regulation for algerian buildings. the useful heat need depends on the thermal qualities of the envelope (thermal resistances) and on the ventilation losses. the energy need for heating is given by [14]:

Q_h = Q_{los} - n \, Q_g \quad (11)

where Q_{los} is the total heat transfer in heating mode, Q_g is the total heat gain during heating, and n is the dimensionless gain utilization factor. the house is modeled as a multi-zone model (each room is modeled as an individual zone). the rooms are heated to a uniform indoor temperature at all times. each room can be modeled as a single heat capacity element. a differential equation is then written relating the heat flow into the room to the time derivative of the indoor temperature and the room heat capacity. the indoor temperature of the room is calculated by:

\frac{dT_b}{dt} = \frac{1}{C_b} Q_{net} = \frac{1}{C_b} (Q_{sup,b} - Q_{los,b}) \quad (12)

b.
energy performance analysis

the energy performance indices evaluated in this study include: energy collected, energy delivered and supply pipe losses, solar fraction, collector efficiency and system efficiency.

1) solar fraction

the solar fraction is the amount of energy provided by the solar technology divided by the total energy required. it can be calculated from the equation below [15]:

SF = \frac{Q_s}{Q_s + Q_{aux}} \quad (13)

where Q_s is the solar energy produced and Q_{aux} the auxiliary heating requirement.

2) system efficiency

the efficiency of heating systems is increasingly important because of the increasing need to save energy. the major characteristic parameter for estimating the efficiency of hydronic heating is the heat loss factor. the main parts of a heating system, generation, storage, distribution and emission, all have some losses. the distribution losses caused by pipes in unheated areas are counted as non-recoverable losses, while losses in heated rooms contribute as recoverable losses. emission losses consist of heat loss due to non-uniform temperature distribution, heat loss due to the heat emitter position, and heat loss due to indoor temperature control. the total system efficiency is calculated by [16]:

\eta_{sys} = \eta_{inst} \cdot \eta_{au} \cdot \eta_{pi} \cdot \eta_{re} \quad (14)

iv. results and discussion

the thermal performance of a solar space heating system is usually estimated by computer simulation, taking into consideration the local climatic conditions and the energy load. the simulation input parameters include the climatic conditions of adrar (27.10 n, 0.17 e, altitude: 279 m) in algeria. the weather data were obtained from in-field measurements using a weather station; the measured radiation and ambient temperature, at 1 min intervals over a 96 h optimization interval, are shown in figure 2.
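the solar fraction and the product-of-stages system efficiency defined above can be illustrated with a few lines of python. this is a sketch: the stage efficiency values are invented, and only the 20.8% annual solar fraction echoes the paper's result:

```python
# solar fraction SF = Q_s / (Q_s + Q_aux), and an overall efficiency
# formed as the product of the stage efficiencies (storage, auxiliary,
# distribution, emission). the numeric values are illustrative.

def solar_fraction(q_solar: float, q_aux: float) -> float:
    return q_solar / (q_solar + q_aux)

def system_efficiency(*stage_efficiencies: float) -> float:
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

sf = solar_fraction(q_solar=2080.0, q_aux=7920.0)
assert abs(sf - 0.208) < 1e-9     # 20.8 %, the paper's annual figure

eta = system_efficiency(0.95, 0.90, 0.92, 0.95)  # storage, aux, pipes, emission
assert eta < min(0.95, 0.90, 0.92, 0.95)         # a chain is weaker than its weakest stage
```

the last assertion captures why every stage matters: since each factor is below 1, the overall efficiency is always lower than the worst single stage.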
a calculation code was developed in fortran for the resolution of the thermal balance equations of the different components of the solar space heating system. the program determines every minute the temperature of the collector, the storage tank and the radiators, and the energy needed for heating. for the hydronic system, we determine the pipe heat losses and the global efficiency of the solar space heating system. the system was analyzed and optimized over a period of four days, from the 16th to the 19th of february.

fig. 2. variation of solar radiation

in this study, the water heated by the solar collectors is stored in a tank for a week without use. the goal is to increase the tank temperature to 55°c, after which the water from the tank is used to feed the space heating radiators. figure 3 shows the variation of the outlet water temperature of the tank and the inlet and outlet temperatures of the radiators. when the water temperature at the outlet of the tank is less than 45°c, the water goes to the auxiliary heater to be heated before being sent to the radiators; this occurs when the solar radiation is not sufficient. if the storage water temperature is over 45°c, the radiators are heated directly by hot water from the storage tank.

fig. 3. water temperature of the radiators and the storage tank

in figure 4, the energy needed for heating the house is shown for 4 days. it is noticed that the heating energy profile closely follows the ambient temperature profile. in general, the heating energy and the ambient temperature are inversely related: the higher the heating energy, the lower the ambient temperature, and vice versa. it may be noted that during the night the heating needs are greater in the living room than in the two bedrooms, with a maximum value of 1.2 kw. there is a slight difference in the heating needs of the three rooms between 14.00 h and 10.00 h, which is obviously caused by the effect of the solar gains on the internal conditions, which differ according to the orientation of each room.

fig. 4.
variation of energy need for heating during the 4 days

from figure 5, it is clearly observed that the daily solar fraction is maximal when the storage temperature is higher than 45°c and the radiators are heated by hot water from the storage tank. when the temperature is below 45°c, the auxiliary heater is activated, and the daily solar fraction starts to decrease with decreasing storage temperature. as can be observed from figure 6, the monthly solar fraction decreases from november to january and then increases until march. during november and march, the available solar energy is relatively low, but this coincides with low heating requirements. during december and january, the solar fraction decreases and, at the same time, the contribution of the solar thermal system to the supply of energy is reduced. moreover, it was estimated that the annual solar fraction for this system is about 20.8%. figure 7 illustrates the variation of the indoor temperatures in the rooms of the house (living room, room 1 and room 2) for four days. it should be noted that the internal temperature profile closely follows the ambient temperature profile: the temperature is maximum in the middle of the day and minimum during the night, when the heating needs are greatest. the temperatures vary between 17°c and 23°c for the three rooms, for a minimum ambient temperature of 7°c. there is a slight temperature difference between the living room and the two bedrooms between 10.00 h and 14.00 h; this is due to the fact that the main facade of the living room faces south while the two bedrooms are oriented to the north.

fig. 5. variation of daily solar fraction and tank temperature

fig. 6. variation of monthly solar fraction, annual energy needs and solar energy produced

fig. 7.
room air temperature during 4 days

figure 8 shows a comparison of the indoor temperature of room 1 between solar heating and conventional heating. it is observed that the temperatures are similar (a difference of 0.5°c) and range between 16 and 23°c. we can deduce that solar heating systems can provide an ideal temperature inside the house. figure 9 shows a comparison of the total system efficiency between solar heating and conventional heating. we note that the performance of the solar heating system is higher than that of the conventional heating system by 15%. this seems logical: when the heating system runs hotter, the heat losses from the production units, distribution pipes and radiators are larger, and their performance therefore decreases. it should be noted that the system efficiency profile closely follows the indoor room temperature profile: the system efficiency is maximum in the middle of the day and minimum during the night.

fig. 8. comparison of indoor room temperature between conventional and solar heating

fig. 9. comparison of space heating efficiency between conventional and solar heating

the potential savings offered by solar thermal systems are difficult to calculate exactly and depend on a large range of factors. these include the initial system cost (depending on size, quality of parts and installation) and the energy source being replaced (coal, gas). solar heating systems usually cost more to purchase and install than conventional water heating systems. in this study the total cost of the solar heating system was 300000 da. this figure includes the cost of installation and all parts (solar collectors, controller, pipes, hot water tank). however, the regulators and the piping distribution system are not included in the cost calculation, as they are part of the conventional system. the annual saving offered by this system in the first year of operation is about 3661 kwh/year, corresponding to 366 m³ of natural gas.
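a rough, undiscounted payback estimate can be formed from the figures quoted in the text (300,000 da system cost, 3,661 kwh/year saved, and the quoted gas price of 4.191 da/kwh). this is our own back-of-the-envelope sketch: it ignores maintenance, discounting and price evolution, and the text is not explicit about whether 4.191 da/kwh is the domestic or international reference price.

```python
# simple (undiscounted) payback estimate from the figures quoted in the
# text: 300,000 DA system cost, 3,661 kWh/year of avoided gas energy,
# and a gas price of 4.191 DA/kWh. ignores maintenance, discounting
# and price evolution, so it is only a rough illustration.

system_cost_da = 300_000.0
annual_saving_kwh = 3661.0
gas_price_da_per_kwh = 4.191

annual_saving_da = annual_saving_kwh * gas_price_da_per_kwh
payback_years = system_cost_da / annual_saving_da

assert round(annual_saving_da) == 15343   # about 15,343 DA per year
assert 19 < payback_years < 20            # roughly two decades
```

a payback of nearly twenty years is consistent with the paper's conclusion that the expected pay-back periods tend to be very long at algerian gas prices.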
we can deduce that solar space heating in algeria offers modest savings, because the per-unit cost of gas is fairly cheap and the prices for natural gas exported by algeria are 20 times lower than the international prices (equal to 4.191 da/kwh; 1 euro = 122.43 da).

v. conclusion

system modeling and computer simulations were employed in order to investigate the potential of using a solar hydronic space heating system under the algerian climate. the developed mathematical analysis was utilized to study the performance of the proposed system. the results showed that this system provides, on average, about 20.8% of a house's heating energy. furthermore, the thermal performance of the conventional heating system can be improved by up to 15%. moreover, the system was capable of sustaining a stable and comfortable indoor temperature of about 22°c in the rooms. the results indicate that, considering the cost of solar heating systems and today's fuel prices, a hydronic heating system that integrates solar heating is not yet cost-effective under algerian circumstances, especially as the expected pay-back periods tend to be very long. this result suggests that this technology is better suited to countries with higher gas prices.

references

[1] k. imessad, r. kharchi, "experimental study of a combined solar system for floor heating", renewable energy, vol. 18, no. 3, pp. 399-405, 2015
[2] m. brand, s. svendsen, "renewable-based low-temperature district heating for existing buildings in various stages of refurbishment", energy, vol. 62, pp. 311-319, 2013
[3] m. leskinen, "the use of solar energy in a district heating system in finland: case study of six district heating plants", new and renewable technologies for sustainable development, pp. 313-324, 2002
[4] m. bojic, s.
kalogirou, k. petronijevic, "simulation of a solar domestic water heating system using a time marching mode", renewable energy, vol. 24, pp. 441-452, 2002
[5] www.chappee.com, 2015
[6] y. kim, k. thu, "thermal analysis and performance optimization of a solar hot water plant with economic evaluation", solar energy, vol. 86, pp. 1378-1389, 2012
[7] z. f. li, k. sumathy, "performance study of a partitioned thermally stratified storage tank in a solar powered absorption air conditioning system", applied thermal engineering, vol. 22, pp. 1207-1216, 2002
[8] i. zeghib, a. chaker, "modeling and simulation of a solar thermal system for domestic space heating using radiators low temperature", international journal of renewable energy research, vol. 1, no. 5, pp. 266-276, 2015
[9] c. j. porras-prieto, f. r. mazarron, "influence of required tank water temperature on the energy performance and water withdrawal potential of a solar water heating system equipped with a heat pipe evacuated tube collector", solar energy, vol. 110, pp. 365-377, 2014
[10] m. arslan, "thermal performance of a vertical solar hot water storage tank with a mantle heat exchanger depending on the discharging operation parameters", solar energy, vol. 116, pp. 184-204, 2015
[11] h. wang, c. qi, "performance study of underground thermal storage in a solar-ground coupled heat pump system for residential buildings", energy and buildings, vol. 40, pp. 1278-1286, 2008
[12] s. ntsaluba, b. zhu, "optimal flow control of a forced circulation solar water heating system with energy storage units and connecting pipes", renewable energy, vol. 89, pp. 108-124, 2016
[13] l. haiyan, p. valdimarsson lamberto, "district heating modelling and simulation", 34th workshop on geothermal reservoir engineering, 2009
[14] l. tronchin, k. fabbri, "energy performance building evaluation in mediterranean countries: comparison between software simulations and operating rating simulation", energy and buildings, vol. 42, pp.
1862-1877, 2010
[15] e. kazanavicius, a. mikuckas, "the heat balance model of residential house", information technology and control, vol. 35, pp. 391-396, 2006
[16] m. maivel, j. kurnitski, "low temperature radiator heating distribution and emission efficiency in residential buildings", energy and buildings, vol. 69, pp. 224-236, 2014

engineering, technology & applied science research vol. 9, no. 3, 2019, 4154-4158 4154 www.etasr.com pamplona & alves: mitigating air delay: an analysis of the collaborative trajectory options program

mitigating air delay: an analysis of the collaborative trajectory options program

daniel alberto pamplona, air transportation department, aeronautics institute of technology, sao jose dos campos, brazil, pamplonadefesa@gmail.com

claudio jorge pinto alves, air transportation department, aeronautics institute of technology, sao jose dos campos, brazil, claudioj@ita.br

abstract—congestion is a problem at major airports around the world. airports, especially high-traffic ones, tend to be the bottleneck in the air traffic control system. the problem that arises for the airspace planner is how to mitigate air congestion and its consequent delay, which causes increased costs for airlines and discomfort for passengers. most congestion problems are fixed on the day of operations, in a tactical manner, using operational enhancement measures. the collaborative trajectory options program (ctop) aims to improve air traffic management (atm) by considering national airspace system (nas) users' business goals, the particularities faced by each flight, and airspace restrictions, making this process more flexible and financially stable for those involved. in ctop, airlines share their route preferences with the air control authority, combining delay and reroute. when a ctop is created, each airline might decide its strategy without knowledge of other airlines' flights.
current solutions to this problem are based on greedy methods and game theory, and there is room for improvement. this paper examines ctop and identifies important strategic changes to atm adopting this philosophy, particularly in brazil. keywords-ctop; collaborative trajectory options; air traffic management; atm

i. introduction

air delay is an existing problem in most airports around the world, bringing higher costs to the airlines and discomfort to the passengers. this type of inefficiency brings economic consequences for all stakeholders involved in the airline business. the authors in [1] reported that, due to airspace inefficiencies and capacity bottlenecks, flights in europe are delayed 10 minutes on average per flight. it is estimated that, on average, the consumer benefits per flight due to airspace modernization will amount to €32, with higher levels for business passengers. for the next 20 years, the projected demand in the civil aviation market varies from 32,600 new aircraft (freighter and passenger) [2] to 38,050 airplanes [3]. single-aisle airplanes are expected to command the largest share of the new deliveries, with an estimated need of 26,730 airplanes [2]. the total fleet is expected to double in 15 years. in this context, some cities are expected to concentrate the air demand with long-haul and regional traffic, creating global hubs. air traffic growth is concentrated in a few cities: in latin america, since 2007, 45% of the traffic growth is accounted for by just 10 airports. these airports are not just transport hub exchanges; they arise as the cornerstones of new urban and economic global centers [4]. delay is one of the consequences of this flight concentration and is a constant problem in most big airports. in 2014, the average delay per delayed (add) flight in europe was 26 minutes. in 2013, 7.9% of all flights in brazil were delayed more than 30 minutes, and 3.1% were delayed more than 60 minutes.
in 2010, 24% of all flights in europe and 18% of all flights in the usa were delayed more than 15 minutes [5, 6]. more than half of the delays are caused by airline factors such as technical problems, baggage delays, and passenger-related problems. the second largest portion (22%) is due to air traffic flow control management (atfcm) problems. the third largest portion is related to airport problems (16%) and the last portion is related to the weather (9%) [6]. due to capacity constraints, there is a growing necessity for changes in the air traffic system to accommodate the increasing traffic demand. the fundamental shift in the atm paradigm will be from clearance-based air traffic control (atc) to trajectory-based atc operations. this new type of trajectory will include new constraints, for example the target time of arrival (tta), that will improve predictability and, as a consequence, facilitate the air traffic controllers' work. there are differences in the capacity constraints between the usa and europe: for the usa, the major capacity constraints are found at the major airports and in the terminal airspace around them, while in europe the en route airspace presents capacity constraints [7]. according to [8] there are four performance objectives: (a) airspace design for more capacity, with an increase of 73% in 2020 compared to the 2005 panorama and, in the long term, three times more airspace capacity; (b) a three-fold improvement in air safety by 2020 and a ten-fold increase in the longer term; (c) a decrease of 10% in the environmental impact per flight due to atm; (d) a decrease of 50% in atm costs per flight. as [9] reminds us, atm is first and foremost about safety.

ii. a cooperative environment between airlines and air traffic authorities

in 2003, during the 11th air navigation conference, it was agreed among icao members that it was necessary to evolve towards a more collaborative environment.
key to this philosophy is the notion of global information utilization, management and interchange. (corresponding author: daniel alberto pamplona) this new philosophy aims to evolve towards a holistic, cooperative and collaborative decision-making environment. despite the differences between the members, the actions are balanced to achieve equity and access. the following members comprise the atm community: (a) airport community, (b) airspace providers, (c) airspace users, (d) atm service providers, (e) atm support industry, (f) international civil aviation organization (icao), (g) regulatory authorities, and (h) states. in this context of collaboration, collaborative air traffic management (catm) arises [10].

iii. collaborative air traffic management

collaborative atm (catm) is an attempt to accommodate aircraft operator preferences to the maximum extent possible, with restrictions imposed only when an actual operational need exists. catm tries to adjust the atc system to meet real-time demands. the main objective is to give the aircraft operator the opportunity to participate in the decisions, rather than the atc authority arbitrarily defining the restrictions. this means that all airspace operators can work together and collaborate on the decision making [11]. the first implementation of catm is collaborative decision-making (cdm).

a. collaborative decision-making

cdm began in the us in 1993, when the faa and major airspace users started a cooperative environment. before 1993, the faa used flight schedules published in the official airline guide (oag) to forecast preliminary air traffic demand prior to the operators' route requests. the milestone of cdm was when the industry agreed to share its information, providing real-time, day-of-operations schedules [12].
the notion was that both the service provider (the faa) and the system users (the airlines) could benefit from cooperation [13]. cdm was officially launched in the us in 1995, when the faa and the industry group defined roles and responsibilities, and the foundation for a collaborative air traffic management system was laid [12]. the cdm process is structured in three layers: (a) at the base, common situational awareness: all parties must know the constraints, with a shared view of the constraints in the system; (b) above it, distributed planning: all parties must be able to react to the constraints in a manner where decisions are made at the most appropriate point; (c) at the top, analytical capability: all parties must measure what happened in order to improve the system. this structure is the pillar of the collaborative paradigm [13]. in europe, cdm was implemented in the early 2000s as airport cdm (a-cdm), because virtually all european airports have slot controls and scheduled operations generally stay within airport capacities [14, 15]. today, cdm is well developed in both europe and the usa [16]. allied with this collaborative environment, air traffic flow management (atfm) programs were created to reduce the scale and cost of disruptions during times of adverse weather and heavy traffic demand [15].

b. air traffic flow management (atfm)

atfm is a function of atm established with the objective of contributing to a safe, orderly and expeditious flow of traffic while minimizing delays. the purpose of atfm is to balance air traffic demand with airspace and/or airport capacity, to ensure the most efficient use of the airspace system [17].
to achieve these objectives of optimum traffic flow, the measures include, but are not limited to: (a) allocating and updating departure slots, (b) allocating and updating arrival slots, (c) allocating and updating en route slots, (d) re-routing of traffic, (e) alternate flight profiles, (f) minutes-in-trail assignments, (g) miles-in-trail assignments, (h) airborne holding, and (i) ground holding [17]. atfm programs developed to handle problems in the en route airspace have been quite successful in mitigating the cost of disruptions, although their success has been limited by inflexibility in incorporating flight operators' specific needs and in adapting to changing weather and traffic conditions [18]. recently, the nextgen and sesar programs have been pushing for a shift in the atc method towards trajectory-based operations (tbo). linked to this, the faa has recently implemented a new atfm program, ctop [18, 19].

iv. trajectory-based operations

a trajectory can be defined as the four-dimensional (4d) flight path of an aircraft through space and time. the tbo concept means a move from clearance-based atc to a trajectory-based system of atm. in this new concept, the aircraft will be assigned flexible and negotiated trajectories and the atc will have to manage those routes, with the air traffic controllers acting as strategic traffic flow coordinators. this will allow maximum utilization of the available airspace and provide advanced navigational capabilities for aircraft flying, for example, rnp trajectories [11]. operating in this new concept will require: (a) aircraft able to transmit and receive aircraft and navigational data in a precise manner, (b) new surveillance equipment, (c) improved aircraft avionics capabilities, (d) advanced automation systems, and (e) automated conflict probes.
enabling tbo requires interactive and integrated decisions and control actions spanning each time horizon, including capacity management, flow contingency management and trajectory management. a critical requirement is that the air navigation service provider (ansp) enables stakeholder access to, and common awareness of, the air traffic system capacity and constraints, in the present and in the future (predicted) situation. ctop, through the trajectory options set (tos), provides an initial foundation for tbo [20]. the collaborative trajectory options program (ctop) is one of the atm initiatives and is associated with the idea of a constrained area. inside ctop there is the trajectory options set (tos), a set of trajectories chosen by the airlines in a constrained area, and the four-dimensional trajectory (4dt), the flight path of an aircraft through space (three dimensions) and time (one dimension) [21].

a. collaborative trajectory options program

ctop relates to the idea of a constrained area. ctop is one of many new traffic management initiatives being developed within catmt and is part of the nextgen and sesar initiatives. ctop is a method of managing demand through constrained airspace. in ctop, customers are allowed to communicate their preferences in a tos, choosing between route and delay [19]. ctop is used anywhere there is a constraint in the air traffic system. the most common constraints are weather and air traffic volume. the ctop program provides greater flexibility to the airspace planner in managing capacity, by allowing ground delays and re-routes to be considered together.
according to [18], ctop has similarities with the previous en route atfm programs, with the difference that it considers flight operators' submitted en route resource preferences.

b. trajectory options set

a tos will allow the airlines to manage a flight by telling the atc the route and delay options that they are willing to accept. the tos may contain multiple trajectory options, with a different route, altitude or speed per trajectory. the difference between a flight plan and a tos is that the flight plan contains a single request with a defined route, altitude and speed, whereas a tos may contain multiple trajectory options, each with its own route, altitude or speed [19]. in the current air traffic control system (atcs), the pilot determines through a flight plan the flight's objective (destination airport) and how to reach it, deciding which route is best, the proposed altitude, the cruising airspeed, the time of departure, and the climb and descent profiles. to control an airplane in flight, the air traffic controller can question the pilot on whether the parameters requested in the flight plan are being maintained, or can determine the aircraft's flight profile by interpreting the flight track, azimuth and altitude information displayed on the radar scope [11]. in the current atc configuration, the system aims to satisfy each pilot's request for a specific route or altitude. it may be necessary to apply procedural restrictions to ensure positive aircraft separation. the constant use of airspace restrictions results in increased fuel use, increased flight times, loss of flexibility and, occasionally, reduced traffic flow. on the other hand, great care must also be taken not to overload the air traffic controller. the routine imposition of procedural restrictions reduces the controller's workload, and consequently decreases the potential loss of separation between aircraft, but it also decreases the number of planes flying in an area.
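the difference between a single flight plan and a tos can be sketched as a small data structure (field names are illustrative assumptions, not the official faa message format):

```python
from dataclasses import dataclass

@dataclass
class TrajectoryOption:
    # illustrative fields only; not the official tos message format
    route: str
    altitude_ft: int
    speed_kt: int
    rtc_min: float  # relative trajectory cost, in minutes of delay

# a flight plan is a single request; a tos is a list of ranked options
flight_plan = TrajectoryOption("A-B-C", 35000, 450, 0.0)
tos = [
    TrajectoryOption("A-B-C", 35000, 450, 0.0),   # preferred trajectory
    TrajectoryOption("A-D-C", 31000, 440, 25.0),  # acceptable alternative
]
```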
these procedural restrictions tend to keep an aircraft at inefficient altitudes. since the constrained aspect is the controller's capacity to coordinate clearances and predict separation conflicts, and not airspace saturation, an automated process would reduce the need for rigid procedural restrictions on system capacity. in this respect, manual air traffic control procedures need to be improved with computer-based decision support systems for the atc to become more efficient and capable. aircraft separation is nowadays human-dependent, maintained by air traffic controllers who use radar screens to visualize aircraft flight paths, make subjective judgements about future aircraft positions and potential conflicts, and mentally develop alternate flight paths [11]. the operators must state their preferences among different flight options, expressed in terms of a relative trajectory cost (rtc). each option will be evaluated based on the customer's preference expressed by the rtc [19]. the rtc of a flight option is an expression of the number of minutes of delay that would have to be imposed upon the operator's most preferred trajectory option before some other flight option becomes a desirable alternative. upon submission of a tos, the ctop and cacr algorithms assign routes and/or ground delay to flights by attempting to provide the operator a minimum adjusted cost. the minimum adjusted cost is the sum of the delay assigned to a flight plus its rtc, while ensuring that traffic through the program flow constrained area (fca) is limited to a specified capacity [20]. the fundamental principle behind ctop and tos is the four-dimensional trajectory.

c. four-dimension trajectory

the four-dimension trajectory (4dt) is the pillar of the new atm, whereby time-based operations progress to trajectory-based operations and, in the long term, to performance-based operations. a 4dt is defined as a precise description of an aircraft path in space and time.
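the minimum adjusted cost rule described above can be sketched as follows (a simplified reading of [19, 20]; the capacity rationing inside the fca is omitted):

```python
# adjusted cost = assigned ground delay + relative trajectory cost (rtc);
# the program tries to give each flight the option with the minimum
# adjusted cost (fca capacity rationing is omitted in this sketch).

def best_option(options):
    """options: list of (name, assigned_delay_min, rtc_min)."""
    return min(options, key=lambda o: o[1] + o[2])

options = [
    ("preferred route through the fca", 60.0, 0.0),  # long ground delay
    ("reroute around the fca", 0.0, 35.0),           # rtc of 35 minutes
]
choice = best_option(options)
```

here the reroute wins (adjusted cost 35 vs. 60 minutes): the operator declared, via the rtc, that 35 minutes of delay is the break-even point for leaving its preferred trajectory.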
waypoints are used to represent specific steps along the path, which is earth-referenced with a proper latitude and longitude [7]. what distinguishes a 4dt is that the path contains an altitude description for each waypoint and indications of the time at which the trajectory will be executed. some waypoints in the 4dt path may be associated with a controlled time of arrival (cta) or required time of arrival (rta). a cta may be assigned a target time of arrival (tta), and the aircraft must meet this tta requirement within a specified tolerance. the cta represents a time window for the aircraft to pass through a specific waypoint. it is normally used to regulate traffic flows entering congested en route/arrival/departure airspace. the main idea is to establish a sequence of spatial and temporal windows; this sequence represents the milestones to meet during flight execution [22]. to achieve the desired rta, the aircraft's speed must be adjusted and regulated along the trajectory to arrive at a specific waypoint at a specified time, improving the predictability of the aircraft's flight path. the problem is that the time of arrival over a fixed point is not a function of the aircraft's airspeed alone, but depends upon the winds and temperatures that the aircraft will encounter on its route [23]. in europe, in the sesar program, the 4dt is often called the reference business trajectory (rbt). the term reference is used because once a trajectory is chosen, it becomes the reference trajectory which the airspace user agrees to fly and all the service providers agree to facilitate with their respective services. this name difference basically reflects the european consortium's wish for a more collaborative environment, where trajectories are agreed among all the atm stakeholders, for example atc, airports, airlines, military and general aviation. this 4dt will be executed gate-to-gate by the aircraft [7].
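the dependence of the time of arrival on winds, noted above, can be sketched numerically: given leg distances and along-track wind components, the elapsed time follows from the ground speed, and the airspeed needed to meet an rta can be found by bisection (a simplified kinematic illustration, not an fms algorithm):

```python
def elapsed_time_h(leg_nm, tas_kt, wind_kt):
    """elapsed time over successive legs; wind_kt > 0 means tailwind."""
    return sum(d / (tas_kt + w) for d, w in zip(leg_nm, wind_kt))

def tas_for_rta(leg_nm, wind_kt, rta_h, lo_kt=300.0, hi_kt=600.0):
    """bisect the constant true airspeed that meets the rta."""
    for _ in range(60):
        mid = 0.5 * (lo_kt + hi_kt)
        if elapsed_time_h(leg_nm, mid, wind_kt) > rta_h:
            lo_kt = mid   # arriving late: fly faster
        else:
            hi_kt = mid
    return hi_kt
```

with the same rta, a headwind leg forces a higher airspeed than a calm one, which is exactly why wind and temperature forecasts drive the accuracy of 4dt time constraints.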
sesar's and nextgen's core concept is to structure atm around aircraft tbo. to achieve this milestone, the aircraft must fly the 4dt with accuracy and reliability and pass information accurately and quickly via data link; huge improvements are also demanded in surveillance capabilities, in automation and decision support tool capabilities, and in computer/equipment processing power and speed [9]. the 4d concept is consistent with the icao aviation system block upgrade (asbu) and with the icao global air navigation plan and global air traffic management operational concept. some authors divide 4dt into two phases: initial 4dt (i4dt) and full 4dt (4dt). the objective of an i4dt is to optimize the arrival phase of a flight at an airport. to achieve this goal, the airborne and ground trajectories must be synchronized around a common unique reference designated by a 2d point or metering fix (mf) and a time constraint. the trajectory negotiation process begins when the aircraft is about 200nm or 40 minutes from its destination. the negotiation is made via a data link between the atc and the aircraft and includes the standard terminal arrival route (star) and approach procedures applicable to the metering fix. the final 4dt is a lateral route with altitude, speed and time constraints over waypoints in the trajectory [24].
for the implementation of the i4dt function onboard, the following avionics systems are necessary: (a) cockpit display systems: they must display relevant data related to the engagement and monitoring of the 4dt; (b) flight management system: its onboard computed predictions must be consistent with the system performance requirements; (c) communication system: it must be able to manage the ads-c and controller-pilot data link communication (cpdlc) applications. an information management platform is necessary to allow this entire collaborative environment. the system wide information management (swim) platform will provide the infrastructure and services necessary to deliver network-enabled information access to a multitude of atm system users. the system must integrate with a variety of legacy sub-systems over many years. swim is described as a framework enabling authorized applications and services to reliably and securely share information. swim will allow the necessary trajectory exchange functions, permitting system-coordinated 4dt plans [23].

v. future scope

figure 1 shows the most important metrics for result comparison of the 4dts in the key performance areas.

fig. 1. metrics for 4dt comparison

to achieve the tbo environment, the following technologies are considered necessary: (a) advanced flight management system (fms) capabilities: 4dt can only exist with accurate cta capabilities; the key factors that impact system accuracy are wind and temperature data. (b) data communication: the voice communication channel between atc and cockpit will not be sufficient to handle the amount of traffic; it will be necessary to introduce data communication, which will decrease the controller's workload. one of the key aspects is the balance between the new airspace capacity and the controller task load [7]. the atm system depends critically on the rate at which controllers can process aircraft through airspace sectors [9].
(c) ads-b: this technology will replace radar as the surveillance instrument. (d) air traffic control decision support tools: there is a need to implement decision support tools (dst) for air traffic controllers. dst will be necessary to keep air traffic controllers at acceptable levels of workload. dst will have to handle the trajectories predicted for the system, will allow 4dts to be shared and negotiated, and will keep the traffic separated. they will also have conflict detection and resolution capabilities. ctop, which started in march 2014, is a new concept for traffic flow management (tfm). some of its characteristics could be brought to brazil and implemented as a future program. the application of preferential routes to ifr flights has the objective of optimizing the use of the airspace and allowing better flight planning. it also intends to make better use of aircraft rnav navigation systems to maintain air traffic flow and its high safety standards. these preferential routes could be used as initial routes in the tos, since they are the most advantageous for the air traffic service (ats) provider and the airline companies: for the airlines they represent the fastest routes, and thus the lowest cost, while for the ats provider they mean that flights will follow routes contained in sectors that can absorb the increase in traffic flow. another important step towards ctop implementation would be the observation of the airspace's characteristics, such as regions that present degraded weather conditions. one possible application is at the rio de janeiro and sao paulo air terminals, as shown in figure 2. this is the region with the largest aircraft movement in brazil. due to its geographical proximity and air movement growth trend, it is estimated that in the coming decades there will be an increase in flight delay.

fig. 2.
possible implementation in brazil

this could be done through the assessment of meteorological maps and through the experience of air traffic controllers and other workers in the sector. this analysis would allow identifying the most common constrained areas in the brazilian airspace and would be an initial step in creating the alternative routes to be part of the tos. then, fast-time simulation could be used to identify the cost of each trajectory relative to one another. cost parameters such as travelling time and fuel burn could be evaluated, and together they would constitute the relative cost of each trajectory. validating the trajectories with all stakeholders is an important step to ensure that the tos satisfies their needs.

vi. conclusion

in 15 years, the total commercial fleet is expected to double, and some cities are expected to concentrate air demand, with long-haul and regional traffic creating global hubs. delay is one of the consequences of this flight concentration, and due to capacity constraints there is a growing need for changes in the air traffic system to accommodate the increased traffic demand. the purpose of atfm is to balance air traffic demand with airspace and/or airport capacity to ensure the most efficient use of the airspace system. a fundamental change will be the move from clearance-based atc to trajectory-based atc operations and the implementation of ctop. ctop aims to improve air traffic management by considering national airspace system users' business goals, the particularities faced by each flight, and airspace restrictions, making this process more flexible and financially stable.
tbo will be a fundamental pillar in this new operational scenario and will permit greater use of the available airspace. with its increasing demand for air transportation, brazil, especially the rio de janeiro and sao paulo region, is a serious candidate for the implementation of such technologies. this region has the largest aircraft movement in brazil and, because of its geographical proximity and air traffic growth trend, it is estimated that in the coming decades there will be an increase in flight delay. besides airspace saturation, the full operational implementation of the new technologies is also necessary. research is ongoing to embed all the needed capabilities in the sesar and nextgen operational environments.

references

[1] g. burghouwt, r. lieshout, t. boonekamp, v. van spijker, economic benefits of european airspace modernization, seo amsterdam economics, 2016 [2] airbus, global market forecast 2014-2034, 2015 [3] boeing, current market outlook 2015-2034, 2015 [4] v. bamberger, m. blondel, mega-aviation cities’ project, arthur d. little, 2013 [5] brazilian national civil aviation agency (anac), air transport yearbook, 2014 [6] european organisation for the safety of air navigation (eurocontrol), network operations report for 2014, 2015 [7] g. enea, m. porretta, “a comparison of 4d-trajectory operations envisioned for nextgen and sesar, some preliminary findings”, 28th congress of the international council of the aeronautical sciences, brisbane, australia, september 23-28, 2012 [8] sesar consortium, milestone deliverable d3: the atm target concept, 2007 [9] p. brooker, “sesar's atm target concept: keys to success”, available at: https://dspace.lib.cranfield.ac.uk/handle/1826/2941, 2008 [10] international civil aviation organization (icao), doc 9854 global air traffic management operational concept, 2005 [11] m.
nolan, fundamentals of air traffic control, cengage learning, 2011 [12] transportation research board (trb), guidebook for advancing collaborative decision making (cdm) at airports, 2015 [13] m. c. wambsganss, “collaborative decision making in air traffic management”, in: new concepts and methods in air traffic management, pp. 1-15, springer, 2001 [14] m. o. ball, “collaborative decision making: us vs europe”, 2015 nextor workshop on global challenges to improve air navigation performance, asilomar, usa, february 11-13, 2015 [15] european organisation for the safety of air navigation (eurocontrol), airport cdm implementation – the manual for collaborative decision making, 2012 [16] m. o. ball, r. hoffman, a. mukherjee, “ground delay program planning under uncertainty based on the ration-by-distance principle”, transportation science, vol. 44, pp. 1-14, 2010 [17] international civil aviation organization (icao), doc 9971 manual on collaborative air traffic flow management, 2014 [18] a. kim, m. hansen, “some insights into a sequential resource allocation mechanism for en route air traffic management”, transportation research part b: methodological, vol. 79, pp. 1-15, 2015 [19] federal aviation administration (faa), ac 90-15 – collaborative trajectory options program (ctop): document information, 2014 [20] a. a. aslinger, l. martin, w. s. leber, m. a. hopkins, “enabling a modernized nas atm infrastructure in support of trajectory based operations”, 2012 integrated communications, navigation and surveillance conference, herndon, usa, april 24-26, 2012 [21] b. vaaben, j. larsen, “mitigation of airspace congestion impact on airline networks”, journal of air transport management, vol. 47, pp. 54-65, 2015 [22] p. brooker, “a 4d atm trajectory concept integrating gnss and fms?”, the journal of navigation, vol. 67, pp. 617-631, 2014 [23] j. klooster, k. wichman, o.
bleeker, “4d trajectory and time-of-arrival control to enable continuous descent arrivals”, aiaa guidance, navigation and control conference and exhibit, honolulu, usa, august 18-21, 2008 [24] l. h. mutuel, p. neri, e. paricaud, “initial 4d trajectory management concept evaluation”, tenth usa/europe air traffic management research and development seminar, chicago, usa, june 10-13, 2013

etasr engineering, technology & applied science research vol. 3, no. 3, 2013, 452-460 www.etasr.com reffas et al.: analysis of void growth and coalescence in porous polymer materials

analysis of void growth and coalescence in porous polymer materials

sid ahmed reffas, djillali liabes university of sidi bel abbes, algeria, reffas_ahmed@yahoo.fr
mohamed elmeguenni, djillali liabes university of sidi bel abbes, algeria, elmeguennimohamed@yahoo.fr
mohamed benguediab, djillali liabes university of sidi bel abbes, algeria, benguediab_m@yahoo.fr

abstract—the use of polymeric materials in engineering applications is growing all over the world. this growth requires new methodologies of analysis in order to assess the material's capability to withstand complex loads. the use of polyacetal in engineering applications has increased rapidly in the last decade. in order to evaluate the behavior, damage and coalescence of this type of polymer, a numerical method based on damage, which occurs in several stages (nucleation of cavities, their growth, and their coalescence at more advanced stages of deformation), is proposed in this work. particular attention is given to the stress-strain and volumetric strain evolution under different triaxialities and for three initial void shapes. its application to polyacetal validates this approach for technical polymers.
finally, this method allows us to compare the results of basic calculations at different triaxialities and to discuss the possible influence of the initial size and geometrical shape of the porosity on the material failure.

keywords—void growth; coalescence; representative elementary volume (rve); ductile; polyoxymethylene (pom); acetal

i. introduction

in the plastics industry, technical polymers are widely used in engineering components which may experience complex mechanical loadings. understanding their intrinsic mechanical behavior, in order to evaluate the mechanisms of damage and coalescence, is of prime importance for making better choices in the design of all components. over recent years, considerable attention has been focused on the analysis of the plastic deformation of ductile materials and solid polymers. the deformation processes involved in the plastic deformation of ductile materials have been widely investigated by several authors [1-19]; however, most studies conducted on solid polymers are based on the same criteria. phenomenological laws have been proposed by some researchers [20-25], and other studies have been based only on the mechanical behavior of polymers at large deformations [26-36]. research efforts have been devoted to understanding the mechanisms of void growth and coalescence and to developing micro-mechanical models for better describing the ductile fracture of polymers. probably the best-known expansion plasticity model is the one introduced by gurson [20], later modified by tvergaard and needleman [37-39]. the gurson model was derived based on the assumption that the deformation mode of the matrix material surrounding a void is homogeneous. it can therefore predict the material softening behavior due to the nucleation and growth of voids, but has no intrinsic ability to predict the shift from a homogeneous deformation mode to a localized mode by void coalescence.
for our work, the representative elementary volume (rve) method has been chosen, where the stress depends on the deformation, the strain rate and the stress triaxiality effect. this law has been used successfully to characterize the behavior of a great number of polymers with an empirical criterion, such as the critical void volume fraction. in this study, a numerical simulation on the basis of an elementary cell model is presented. firstly, the unit cell model used to predict the response of a material consisting of a periodic assembly of rves is briefly described. secondly, a method developed for the calculation of cells while maintaining a constant triaxiality during loading is described. this method allows us to compare the results of basic calculations for all triaxialities used in this study and to summarize the effects of the various geometrical parameters on void coalescence in the acetal material (polyoxymethylene or pom). finally, the relevant features which should be taken into account in the application of an accurate constitutive rve model are discussed, with particular attention paid to the volumetric strain, the damage and their evolutions for all triaxialities.

ii. material and numerical procedure

the voided matrix material is characterized by a model material of the pom type. the yield stress of the virgin matrix material σ0 is set to 55 mpa. the elastic properties of the model material are taken as e = 2900 mpa and ν = 0.4. the results obtained using standard tensile tests are presented in figure 1. the objective of these tests is to demonstrate the strain-rate effect on the response and fracture of the pom material under large deformation. tests were conducted on an instron machine.
an optical measurement system was used to control the strain rate and measure the local strain in the specimen section. the mechanical tests were performed at three strain rates of 10^-1, 10^-2 and 10^-3 s^-1 at room temperature (23 °c). the pom material exhibits an influence of strain rate on its nonlinear behavior. in order to observe and quantify this effect, an overview of the strain-rate effect on the true stress-strain curves is first analyzed. figure 1 indicates that the response of this material is similar to that of other polymers seen in the literature on viscoelastic evolution. however, the overall response looks like that of ductile materials: the curve shows a proportional limit followed by a maximum at which necking takes place. it is common to term this maximum the yield stress in polymer materials.

fig. 1. experimental true stress-strain curves of pom at 23 °c (10^-1, 10^-2 and 10^-3 s^-1).

after experimental testing, the results show that the behavior of pom has the same evolution as ductile materials. so, as a first approach, a law already used for metals is applied, the objective being to verify the relevance of this law to this type of polymer. in the present work, a rate-independent power-law strain-hardening material is applied. the flow stress of the virgin matrix material is described as:

\sigma_f = \sigma_0 \left( 1 + \varepsilon_p / \varepsilon_0 \right)^n \quad (1)

where σf is the flow stress, εp the equivalent plastic strain, σ0 the yield stress, ε0 = σ0/e the yield strain, and n the plastic strain hardening exponent. three moderate hardening exponents n have been used to verify a good relationship and representation with the experimental curve (n = 0.01, 0.05 and 0.08). for a good representation, two pre-strain cases have also been considered (1.5% and 2.5%). pre-strain here means the permanent strain after unloading. pre-strain induces strain hardening and residual stress in the void model as well as void growth and void shape changes.
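a minimal numerical sketch of the hardening law above, using the material constants stated in this section (σ0 = 55 mpa, e = 2900 mpa) and the exponent n = 0.05 retained later in the study:

```python
SIGMA0 = 55.0       # yield stress of the virgin matrix, mpa
E = 2900.0          # young's modulus, mpa
EPS0 = SIGMA0 / E   # yield strain, eps0 = sigma0 / e

def flow_stress(eps_p, n=0.05, sigma0=SIGMA0, eps0=EPS0):
    """power-law hardening of eq. (1): sigma_f = sigma0*(1 + eps_p/eps0)**n."""
    return sigma0 * (1.0 + eps_p / eps0) ** n

# hardening curve samples for the three exponents considered (0.01, 0.05, 0.08)
curve = [(n, flow_stress(0.2, n=n)) for n in (0.01, 0.05, 0.08)]
```

at zero plastic strain the law returns the yield stress, and a larger exponent n gives a harder response, which is the trend compared against the experimental curve in figure 2(a).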
in order to separate the strain hardening effect from the one due to void shape changes, ellipsoidal and spherical voids with a void volume fraction equivalent to the one at the end of the pre-strain history, together with a homogeneously pre-strained matrix material, have been analyzed. figure 2 compares the virgin material with the three moderate hardening exponents in (a) and the two homogeneously pre-strained matrix materials in (b). it should be noted that in this study n = 0.05 and 1.5% of pre-strain have been chosen. the stress-strain curve of the material shown in figure 2 is obtained by trimming the virgin material curve by the specified pre-strain level. the elastic properties of the pre-strained materials are kept identical to those of the virgin material.

fig. 2. matrix material properties used in the analyses: a) the plastic strain hardening exponent (n); b) the pre-strain variability for the virgin material.

figure 3 shows the quarter unit cell model used in the study. the model has been used previously for various studies on void coalescence behavior [40-46]. the model is axisymmetric, and the stress ratios ρ = σx/σy are kept constant in both the pre-straining analysis and the subsequent analyses. the model was analyzed in a load-controlled manner and the abaqus riks method was applied [47]. nodal constraints were applied such that the left and top boundaries remain vertical and horizontal during the analysis.

fig. 3. a voided unit cell model and the region analyzed numerically: a) a voided unit cell model; b) one quarter of the unit cell model.
for the axisymmetric problem considered, the stress triaxiality can be calculated from the stress ratio α:

\beta = \frac{\sigma_h}{\sigma_{eq}} = \frac{1 + 2\alpha}{3(1 - \alpha)} \quad (2)

where σh is the hydrostatic stress and σeq the von mises equivalent stress. the initial radius and height of the model are denoted by ly0 and lz0, while ry0 and rz0 represent the initial radii of the void. the results are based mainly on the case with an initial void volume fraction of 1%. voids with different initial shapes (spherical, prolate and oblate) but the same initial void volume fraction are also considered (figure 4). the initial and current void aspect ratios are defined as:

s_0 = r_{z0} / r_{y0}, \qquad s = r_z / r_y \quad (3)

fig. 4. three initial void shapes considered in the study: a) spherical void with s = 1; b) prolate void with s = 4; c) oblate void with s = 0.25.

the microscopic strain and cauchy stress tensors inside the matrix are denoted by the small letters ε and σ, whereas the macroscopic strain and stress tensors are denoted by the capital letters e and σ. the overall deformation of the cell model can be calculated from the normal displacements of the outer faces. because of the symmetry of the problem at hand, the macroscopic total logarithmic strain tensor e and cauchy stress tensor σ possess the same principal directions, which are the radial and axial directions. the tensor e is given by:

\mathbf{E} = E_1 \left( \mathbf{e}_y \otimes \mathbf{e}_y + \mathbf{e}_x \otimes \mathbf{e}_x \right) + E_2 \, \mathbf{e}_z \otimes \mathbf{e}_z \quad (4)

the macroscopic axisymmetric radial (e1) and axial (e2) deformations are defined by the following expressions:

E_1 = \ln\left(1 + u_y^a / l_{y0}\right), \qquad E_2 = \ln\left(1 + u_z^a / l_{z0}\right) \quad (5)

for the purpose of specifying the overall plastic deformation of the cell model, we choose the effective strain ee, defined by:

E_e = \frac{2}{3} \left| E_2 - E_1 \right| \quad (6)

as an independent variable for presenting most results.
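the strain measures of eqs. (5)-(6) can be checked with a short sketch; note that our reading of the garbled eq. (6) as ee = (2/3)|e2 - e1| is an assumption, consistent with isochoric axisymmetric straining (e1 = -e2/2 gives ee = |e2|):

```python
from math import log

def macro_strains(uy_a, uz_a, ly0, lz0):
    """eq. (5): logarithmic radial (e1) and axial (e2) macroscopic strains."""
    return log(1.0 + uy_a / ly0), log(1.0 + uz_a / lz0)

def effective_strain(e1, e2):
    """eq. (6) as reconstructed (assumption): ee = (2/3)*|e2 - e1|."""
    return (2.0 / 3.0) * abs(e2 - e1)
```

for an isochoric axial stretch with e1 = -e2/2 this measure returns |e2|, as expected of an effective strain.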
the macroscopic stress tensor σ is given by:

\mathbf{\Sigma} = \Sigma_1 \left( \mathbf{e}_y \otimes \mathbf{e}_y + \mathbf{e}_x \otimes \mathbf{e}_x \right) + \Sigma_2 \, \mathbf{e}_z \otimes \mathbf{e}_z \quad (7)

with the remote true principal stresses σy in both the y and x directions, and σz in the z direction. they are calculated at any instant as the average reaction forces at the cell faces per momentary area:

\Sigma_y = \frac{1}{l_z} \int_0^{l_z} t_y \, \mathrm{d}z \Big|_{y = l_y}, \qquad \Sigma_z = \frac{2}{l_y^2} \int_0^{l_y} y \, t_z \, \mathrm{d}y \Big|_{z = l_z} \quad (8)

where t is the stress vector. the corresponding effective von mises stress σe and hydrostatic stress σh result from:

\Sigma_e = \left| \Sigma_z - \Sigma_y \right|, \qquad \Sigma_h = \frac{1}{3} \left( \Sigma_z + 2 \Sigma_y \right) \quad (9)

and the overall stress triaxiality β of the stress state is defined as the ratio:

\beta = \frac{\Sigma_h}{\Sigma_e} = \frac{\Sigma_z + 2\Sigma_y}{3\left(\Sigma_z - \Sigma_y\right)} = \frac{1 + 2\alpha}{3(1 - \alpha)} \quad (10)

with

\alpha = \Sigma_y / \Sigma_z \quad (11)

the stress triaxiality equals 1/3 (or α = 0) for simple uniaxial tension and 0 (or α = -0.5) for pure shear. figure 5 shows the evolution of the stress ratio α versus the triaxiality β used in the study.

fig. 5. evolution of the stress ratio versus the triaxiality.

iii. method of analysis

inasmuch as the unit cell is subjected to axisymmetric deformation, the analysis of its evolution is performed using a cylindrical coordinate system with an orthonormal frame denoted by (ex, ey, ez) along the ortho-radial, radial and axial axes. in the initial undeformed configuration, the unit cell is a cylinder with diameter 2ly0 and height 2lz0. the voids are assumed to be initially spheroidal with radius ry0 and half-length rz0. the void is oblate if ry0 > rz0, and prolate if ry0 < rz0. the particular case of a spherical void with radius ry0 corresponds to the situation for which ry0 = rz0.
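the triaxiality relation β = (1 + 2α)/(3(1 - α)) and its inverse can be sketched directly; the checks reproduce the two limit cases stated in the text (β = 1/3 at α = 0 and β = 0 at α = -0.5):

```python
def triaxiality(alpha):
    """beta = (1 + 2*alpha) / (3*(1 - alpha)), with alpha = sigma_y/sigma_z."""
    return (1.0 + 2.0 * alpha) / (3.0 * (1.0 - alpha))

def stress_ratio(beta):
    """inverse relation: alpha = (3*beta - 1) / (3*beta + 2)."""
    return (3.0 * beta - 1.0) / (3.0 * beta + 2.0)
```

the inverse is what figure 5 plots: the stress ratio α to impose for each prescribed triaxiality β, e.g. α = 0.4 for β = 1.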
the void volume fraction f is defined as the ratio of the total void volume to the unit cell volume. the void volume fraction can be calculated in two ways: the first is by numerical integration of the points along the surface of the void volume; the second is by using the approximate formula proposed by koplik and needleman [41]. this relationship has been proposed assuming that the matrix is plastically incompressible. the initial cell geometry is completely characterized by the void volume fraction, the void aspect ratio and the cell aspect ratio, whose initial values f0, s0 and λ0 are respectively defined by:

f_0 = \frac{2}{3} \, \frac{r_{y0}^2 \, r_{z0}}{l_{y0}^2 \, l_{z0}} \quad (12)

\lambda_0 = l_{z0} / l_{y0} \quad (13)

as a consequence of the lattice periodicity, all outer planes of the unit cell have to behave as rigid moveable planes in the coordinate directions during the loading process. the faces at y = ly0 and z = lz0 will have uniform normal displacements and their mutual orientations will be maintained. these requirements impose that the unit cell remain a cylinder during the finite-strain deformation process. this feature is attained by assessing the deformation via an imposed homogeneous elongation uz^a of the corner a in the axial direction and monitoring its homogeneous radial displacement uy^a by multipoint constraints. the cylinder is thus characterized in an arbitrary state by:

l_y = l_{y0} + u_y^a, \qquad l_z = l_{z0} + u_z^a \quad (14)

because of these constraints, only one quarter of the unit cell model (0 ≤ y ≤ ly0; 0 ≤ z ≤ lz0) needs to be analyzed, as drawn in figure 3-b. to satisfy the axisymmetric conditions and to ensure the periodicity of the cell arrangement, the boundary conditions of the quadrant in terms of displacements read: uy = 0 along the axis, uz = 0 on the bottom, uy = uy^a on the lateral surface, and uz = uz^a on the top.
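eqs. (12)-(13) fix the initial geometry; the sketch below also inverts (12) to find the void radii that give a prescribed f0 at a prescribed aspect ratio s0 = rz0/ry0 (the inversion is our own convenience, not from the paper):

```python
def void_volume_fraction(ry0, rz0, ly0, lz0):
    """eq. (12): f0 = (2/3) * ry0**2 * rz0 / (ly0**2 * lz0)."""
    return (2.0 / 3.0) * ry0 ** 2 * rz0 / (ly0 ** 2 * lz0)

def radii_for_fraction(f0, s0, ly0, lz0):
    """invert eq. (12) with rz0 = s0 * ry0; returns (ry0, rz0)."""
    ry0 = (1.5 * f0 * ly0 ** 2 * lz0 / s0) ** (1.0 / 3.0)
    return ry0, s0 * ry0

# the three shapes of figure 4 at the same f0 = 1% (unit cell ly0 = lz0 = 1)
shapes = {s0: radii_for_fraction(0.01, s0, 1.0, 1.0) for s0 in (1.0, 4.0, 0.25)}
```

this is how the spherical (s = 1), prolate (s = 4) and oblate (s = 0.25) voids of figure 4 can all share the same initial void volume fraction.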
The actual void volume fraction f is calculated either by numerical integration from the updated coordinates of the nodes at the void-matrix interface during the deformation of the unit cell, or from the following approximate analytic formula proposed in [41]:

\[ f = 1 - \frac{V_0}{V}\left(1 - f_0\right) - \frac{\Delta V^e}{V} \tag{15} \]

with

\[ \Delta V^e = \frac{3\left(1 - 2\nu\right)}{E}\left(1 - f_0\right) V_0\, \sigma_h \tag{16} \]

where the ratio of the current volume V of the unit cell to its initial volume V0 is given by:

\[ \frac{V}{V_0} = \frac{\left(l_{y0} + u_y^a\right)^2\left(l_{z0} + u_z^a\right)}{l_{y0}^2\, l_{z0}} \tag{17} \]

ΔV^e is an approximate correction term for the elastic change in cell volume due to the imposed hydrostatic stress σ_h. It was checked that the two methods yield very close results within the range of our calculations. It should be mentioned, however, that (16) does not hold in the case of a porous matrix. On the other hand, it should be kept in mind that this approximation is used in the transient analysis of the cell model to provide the starting loading, for which, as will be seen later, the stress triaxiality drops quickly and then has to be corrected.

IV. Numerical Results and Discussion

In order to evaluate the proposed transient analysis for axisymmetric cell model simulations, a series of calculations was conducted. Figure 6 shows the mesoscopic radial strain versus equivalent strain curves for an initial void volume fraction of 1% and for all triaxialities proposed in this study, from 0.33 to 3. A global view of these results indicates reasonably good agreement between them; however, the curves differ in the nature of the response, especially at coalescence.
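The porosity update (15)-(17) can be sketched as follows; the exact form of the elastic correction ΔV^e follows my reading of the reconstructed formula, so treat it as an assumption:

```python
def volume_ratio(ly0, lz0, uy_a, uz_a):
    """Current-to-initial cell volume ratio V/V0, following (17)."""
    return ((ly0 + uy_a)**2 * (lz0 + uz_a)) / (ly0**2 * lz0)

def current_porosity(f0, v_ratio, sigma_h, E, nu):
    """Approximate void volume fraction f, following (15)-(16).
    The elastic correction dVe = 3(1-2nu)/E * (1-f0) * V0 * sigma_h
    is an assumed reading of the garbled original formula."""
    dve_over_v0 = 3.0 * (1.0 - 2.0 * nu) / E * (1.0 - f0) * sigma_h
    return 1.0 - (1.0 - f0) / v_ratio - dve_over_v0 / v_ratio
```

Sanity checks: an undeformed, unloaded cell returns f0, and with an incompressible matrix (ν = 0.5) the correction vanishes, so f evolves purely geometrically with V/V0.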
In general, the cell model elongates in the vertical direction and contracts in the radial direction. During plastic deformation and void growth, an approximately linear relation between the two can be observed, indicating a homogeneously deformed state. When the deformation reaches a critical state, a sudden shift from this relatively homogeneous deformation state to a uniaxial straining state can be seen; this shift marks the onset of void coalescence. Computations were carried out for all cell models, whereby values of stress triaxiality from 0.33 to 3 were investigated. The most interesting outcome of the cell models is the overall mesoscopic hardening and failure behavior, expressed in terms of the invariants as equivalent stress versus equivalent strain. Figure 7(a) displays the variation of the equivalent stress as a function of the equivalent strain. The onset of void coalescence corresponds to a marked change in the slope of the curves. The transition is sharpest at low stress triaxiality, where the triaxiality effect is pronounced. It has also been shown that after the onset of void coalescence the falling equivalent stress-strain curves are nearly linear, except at high triaxiality (2 and 3). Figure 7(b) shows the variation of the volumetric strain as a function of the equivalent strain. Void coalescence induces an increase in the void growth rate and a transition in the void shape evolution; after its onset, the volumetric strain growth is significantly larger. The end of the coalescence process in a polymer material usually consists of the failure of the remaining ligament by microcleavage, crystallographic shearing, or with the help of a second population of smaller voids, rather than by volumetric void growth until impingement.

Fig. 6.
Radial strain versus equivalent strain for different stress triaxialities (f0 = 1%, β = 0.33 to 3).

Fig. 7. Cell model results for a spherical void at different stress triaxialities (f0 = 1%, β = 0.33 to 3): (a) equivalent stress (MPa) versus equivalent strain; (b) volumetric strain versus equivalent strain.

In the previous results we note a reduction of the void diameter, depending on the triaxiality, occurring in the course of loading. To study the influence of this additional condition on the mechanical behavior of the cell, we chose three void volume fractions (1%, 5%, and 10%) with different initial shapes (spherical, prolate, and oblate) (Figure 4). Computations were carried out for all cell models, whereby values of stress triaxiality from 0.33 to 3 were investigated. Figure 8 shows the evolution of the mesoscopic equivalent stress versus equivalent strain curves and summarizes the effects of the various geometrical parameters on void coalescence for the three void volume fractions (1%, 5%, and 10%) with different initial shapes (spherical, prolate, and oblate) at all triaxialities. The equivalent stress and strain are chosen here because both are relevant coalescence parameters. All the cells were loaded with a prescribed triaxiality β from 0.33 to 3. Several comments can be made concerning void coalescence. The onset of void coalescence depends strongly on the relative void spacing, as can be discerned by comparing the cells (1% spherical with β = 0.33) and (1% spherical with β = 3), which have identical void shapes and initial porosity.
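The onset of void coalescence discussed here can be detected numerically as the point where the radial strain stops evolving with the equivalent strain, i.e. the shift to uniaxial straining noted earlier. A minimal sketch on tabulated curve data; the tolerance and names are illustrative assumptions, not the paper's procedure:

```python
def coalescence_onset(eq_strain, rad_strain, tol=1e-3):
    """Return the first index where |d(radial)/d(equivalent)| drops below
    tol, i.e. where deformation becomes uniaxial straining; None if no
    such transition is found."""
    for i in range(1, len(eq_strain)):
        d_eq = eq_strain[i] - eq_strain[i - 1]
        d_rad = rad_strain[i] - rad_strain[i - 1]
        if d_eq > 0.0 and abs(d_rad / d_eq) < tol:
            return i
    return None

# synthetic curve: homogeneous contraction, then uniaxial straining
eq = [0.01 * k for k in range(100)]
rad = [-0.3 * e if e < 0.5 else -0.15 for e in eq]
onset = coalescence_onset(eq, rad)  # index near eq_strain = 0.5
```

On real cell-model output the knee is less abrupt, so the tolerance would need calibration against the curves of Figure 6.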
The effect can be seen in another way by comparing the (5% or 10% prolate) and (5% or 10% oblate) cells at all triaxialities, which were chosen with roughly similar void spacing. The cells do not exhibit the same coalescence strain for the same initial void volume fraction and different shapes. The conclusion to be drawn is that heterogeneity in the void distribution inherited from prior working or processing plays a major role in the coalescence of the material. Void spacing is not the only influential parameter, as can be seen by comparing Figure 8-a and Figure 8-e (spherical; 5% and 10% prolate or oblate): in these comparisons, a higher level of triaxiality builds up in the ligament of the cell, accelerating the localization of coalescence. For a similar cell aspect ratio, the (10% prolate) and (10% oblate) cells also show significant differences between the slopes after the onset of void coalescence, due to the differing initial porosity and void shape. The analysis of the stress and strain fields inside several void cells has shown that voids start interacting with each other well before the onset of void coalescence. Figure 9 compares the critical stress and strain obtained at the moment of coalescence as a function of the triaxiality for the different geometrical parameters in all cases. In general, stress triaxiality has a negative effect on the coalescence stress and strain. It is interesting to observe that the void shape also has a strong influence on the reduction of the coalescence stress and strain.

V. Conclusion

In order to summarize the results observed in this paper and to confirm the predictions of the RVE model, the following conclusions can be noted. Void coalescence starts when the peak equivalent stress appears at the outer boundary; this microscopic coalescence criterion is practical and has been applied to determine coalescence.
Moreover, this observation further verifies the plastic limit-load theory for void coalescence: void coalescence occurs when a plastic limit-load state of the void cell model has been reached. The void coalescence behavior is strongly dependent on the stress triaxiality and the initial void shape. For prolate and oblate voids smaller than 5%, the effect on the critical coalescence stress and strain can be neglected. In high stress triaxiality cases, the reduction in coalescence stress and strain due to initial porosities higher than 5% can be very significant; for an initially spherical void with β = 3, the reduction in coalescence strain can be as large as 70%. In comparison, the stress triaxiality effect on the coalescence strain of oblate voids is the largest. The absolute reduction of the coalescence strain appears to depend on the stress triaxiality. Finally, it can be noted that the predictions show promising results; these calculations show that constitutive equations for damage evolution are required, and that the determination of the onset and continuation of coalescence under triaxiality is based on the initial porosity shape.
Fig. 8. Cell model results for three initial void shapes (spherical, prolate, and oblate): (a) β = 0.33; (b) β = 0.6; (c) β = 1; (d) β = 2; (e) β = 3. Each panel plots equivalent stress (MPa) versus equivalent strain for f0 = 1% spherical, f0 = 5% and 10% prolate, and f0 = 5% and 10% oblate.

Fig. 9. Cell model results: a) critical equivalent stress (MPa) versus triaxiality; b) critical equivalent strain versus triaxiality, for f0 = 1% spherical, f0 = 5% and 10% prolate, and f0 = 5% and 10% oblate.

References
[1] A. A. Benzerga, J. Besson, A. Pineau, "Coalescence-controlled anisotropic ductile fracture", Journal of Engineering Materials and Technology, Vol. 121, No. 2, pp.
221-229, 1999
[2] A. A. Benzerga, "Micromechanics of coalescence in ductile fracture", Journal of the Mechanics and Physics of Solids, Vol. 50, No. 6, pp. 1331-1362, 2002
[3] A. A. Benzerga, J. Besson, A. Pineau, "Anisotropic ductile fracture. Part I: experiments", Acta Materialia, Vol. 52, No. 15, pp. 4623-4638, 2004
[4] A. A. Benzerga, J. Besson, A. Pineau, "Anisotropic ductile fracture. Part II: theory", Acta Materialia, Vol. 52, No. 15, pp. 4639-4650, 2004
[5] A. A. Benzerga, D. Surovik, S. M. Keralavarma, "On the path-dependence of the fracture locus in ductile materials – analysis", International Journal of Plasticity, Vol. 37, pp. 157-170, 2012
[6] X. Gao, T. Wang, J. Kim, "On ductile fracture initiation toughness: effects of void volume fraction, void shape and void distribution", International Journal of Solids and Structures, Vol. 42, No. 18-19, pp. 5097-5117, 2005
[7] S. M. Keralavarma, A. A. Benzerga, "A constitutive model for plastically anisotropic solids with non-spherical voids", Journal of the Mechanics and Physics of Solids, Vol. 58, No. 6, pp. 874-901, 2010
[8] S. M. Keralavarma, S. Hoelscher, A. A. Benzerga, "Void growth and coalescence in anisotropic plastic solids", International Journal of Solids and Structures, Vol. 48, No. 11-12, pp. 1696-1710, 2011
[9] A. E. Huespe, A. Needleman, J. Oliver, P. J. Sánchez, "A finite thickness band method for ductile fracture analysis", International Journal of Plasticity, Vol. 25, No. 12, pp. 2349-2365, 2009
[10] A. E. Huespe, A. Needleman, J. Oliver, P. J. Sánchez, "A finite strain, finite band method for modeling ductile fracture", International Journal of Plasticity, Vol. 28, No. 1, pp. 53-69, 2012
[11] Y. Li, D. G. Karr, "Prediction of ductile fracture in tension by bifurcation, localization, and imperfection analyses", International Journal of Plasticity, Vol. 25, No. 6, pp. 1128-1153, 2009
[12] Y. Li, T.
Wierzbicki, "Prediction of plane strain fracture of AHSS sheets with post-initiation softening", International Journal of Solids and Structures, Vol. 47, No. 17, pp. 2316-2327, 2010
[13] H. Li, M. W. Fu, J. Lu, H. Yang, "Ductile fracture: experiments and computations", International Journal of Plasticity, Vol. 27, No. 2, pp. 147-180, 2011
[14] H. Stumpf, J. Makowski, K. Hackl, "Dynamical evolution of fracture process region in ductile materials", International Journal of Plasticity, Vol. 25, No. 5, pp. 995-1010, 2009
[15] A. S. Khan, H. Liu, "A new approach for ductile fracture prediction on Al 2024-T351 alloy", International Journal of Plasticity, Vol. 35, pp. 1-12, 2012
[16] L. Lecarme, C. Tekoglu, T. Pardoen, "Void growth and coalescence in ductile solids with stage III and stage IV strain hardening", International Journal of Plasticity, Vol. 27, No. 8, pp. 1203-1223, 2011
[17] M. Dunand, D. Mohr, "Hybrid experimental–numerical analysis of basic ductile fracture experiments for sheet metals", International Journal of Solids and Structures, Vol. 47, No. 9, pp. 1130-1143, 2010
[18] M. Dunand, D. Mohr, "Optimized butterfly specimen for the fracture testing of sheet materials under combined normal and shear loading", Engineering Fracture Mechanics, Vol. 78, No. 17, pp. 2919-2934, 2011
[19] S. M. Graham, T. Zhang, X. Gao, M. Hayden, "Development of a combined tension–torsion experiment for calibration of ductile fracture models under conditions of low triaxiality", International Journal of Mechanical Sciences, Vol. 54, No. 1, pp. 172-181, 2012
[20] A. L. Gurson, "Continuum theory of ductile rupture by void nucleation and growth: Part I – yield criteria and flow rules for porous ductile media", Journal of Engineering Materials and Technology, Vol. 99, No. 1, pp. 2-15, 1977
[21] S. Yi, W. Duo, "A lower bound approach to the yield loci of porous materials", Acta Mechanica Sinica, Vol. 5, No. 3, pp. 237-243, 1989
[22] A. C. Steenbrink, E. Van der Giessen, P. D.
Wu, "Void growth in glassy polymers", Journal of the Mechanics and Physics of Solids, Vol. 45, No. 3, pp. 405-437, 1997
[23] H.-Y. Jeong, "A new yield function and a hydrostatic stress-controlled void nucleation model for porous solids with pressure-sensitive matrices", International Journal of Solids and Structures, Vol. 39, No. 5, pp. 1385-1403, 2002
[24] P. J. Sánchez, A. E. Huespe, J. Oliver, "On some topics for the numerical simulation of ductile fracture", International Journal of Plasticity, Vol. 24, No. 6, pp. 1008-1038, 2008
[25] L. Cheng, T. F. Guo, "Void interaction and coalescence in polymeric materials", International Journal of Solids and Structures, Vol. 44, No. 6, pp. 1787-1808, 2007
[26] S. G. Bardenhagen, M. G. Stout, G. T. Gray, "Three-dimensional, finite deformation, viscoplastic constitutive models for polymeric materials", Mechanics of Materials, Vol. 25, No. 4, pp. 235-253, 1997
[27] T. A. Tervoort, R. J. M. Smit, W. A. M. Brekelmans, L. E. Govaert, "A constitutive equation for the elasto-viscoplastic deformation of glassy polymers", Mechanics of Time-Dependent Materials, Vol. 1, pp. 269-291, 1997
[28] J. M. Gloaguen, J. M. Lefebvre, "Plastic deformation behaviour of thermoplastic/clay nanocomposites", Polymer, Vol. 42, No. 13, pp. 5841-5847, 2001
[29] F. Zaïri, M. Naït-Abdelaziz, K. Woznica, J. M. Gloaguen, "Constitutive equations for the viscoplastic-damage behaviour of a rubber-modified polymer", European Journal of Mechanics – A/Solids, Vol. 24, No. 1, pp. 169-182, 2005
[30] F. Zaïri, B. Aour, J. M. Gloaguen, M. Naït-Abdelaziz, J. M. Lefebvre, "Numerical modeling of elastic–viscoplastic equal channel angular extrusion process of a polymer", Computational Materials Science, Vol. 38, No. 1, pp. 202-216, 2006
[31] F. Zaïri, M. Naït-Abdelaziz, K.
Woznica, J. M. Gloaguen, "Elasto-viscoplastic constitutive equations for the description of glassy polymers behavior at constant strain rate", Journal of Engineering Materials and Technology, Vol. 129, No. 1, pp. 29-35, 2007
[32] F. Zaïri, M. Naït-Abdelaziz, J. M. Gloaguen, J. M. Lefebvre, "Modelling of the elasto-viscoplastic damage behaviour of glassy polymers", International Journal of Plasticity, Vol. 24, No. 6, pp. 945-965, 2008
[33] F. Zaïri, M. Naït-Abdelaziz, J. M. Gloaguen, J. M. Lefebvre, "A physically-based constitutive model for anisotropic damage in rubber-toughened glassy polymers during finite deformation", International Journal of Plasticity, Vol. 27, No. 1, pp. 25-51, 2011
[34] A. D. Mulliken, M. C. Boyce, "Mechanics of the rate-dependent elastic–plastic deformation of glassy polymers from low to high strain rates", International Journal of Solids and Structures, Vol. 43, No. 5, pp. 1331-1356, 2006
[35] J. Richeton, S. Ahzi, K. S. Vecchio, F. C. Jiang, A. Makradi, "Modeling and validation of the large deformation inelastic response of amorphous polymers over a wide range of temperatures and strain rates", International Journal of Solids and Structures, Vol. 44, No. 24, pp. 7938-7954, 2007
[36] M. Elmeguenni, "Effet de la triaxialité sur le comportement et la rupture du polyéthylène haute densité: approches expérimentales et numériques", Thesis, Université Lille 1, 2010
[37] V. Tvergaard, "Influence of voids on shear band instabilities under plane strain conditions", International Journal of Fracture, Vol. 17, No. 4, pp. 389-407, 1981
[38] V. Tvergaard, "On localization in ductile materials containing spherical voids", International Journal of Fracture, Vol. 18, No. 4, pp. 237-252, 1982
[39] V. Tvergaard, A. Needleman, "Analysis of the cup-cone fracture in a round tensile bar", Acta Metallurgica, Vol. 32, No. 1, pp. 157-169, 1984
[40] Z. L. Zhang, C. Thaulow, J.
Ødegård, "A complete Gurson model based approach for ductile fracture", Engineering Fracture Mechanics, Vol. 67, No. 2, pp. 155-168, 2000
[41] J. Koplik, A. Needleman, "Void growth and coalescence in porous plastic solids", International Journal of Solids and Structures, Vol. 24, No. 8, pp. 835-853, 1988
[42] R. Becker, R. E. Smelser, O. Richmond, E. J. Appleby, "The effect of void shape on void growth and ductility in axisymmetric tension tests", Metallurgical Transactions A, Vol. 20, No. 5, pp. 853-861, 1989
[43] R. C. Lin, D. Steglich, W. Brocks, J. Betten, "Performing RVE calculations under constant stress triaxiality for monotonous and cyclic loading", International Journal for Numerical Methods in Engineering, Vol. 66, No. 8, pp. 1331-1360, 2006
[44] M. Gologanu, J. B. Leblond, G. Perrin, J. Devaux, "Theoretical models for void coalescence in porous ductile solids. II. Coalescence in columns", International Journal of Solids and Structures, Vol. 38, No. 32-33, pp. 5595-5604, 2001
[45] T. Pardoen, J. W. Hutchinson, "An extended model for void growth and coalescence", Journal of the Mechanics and Physics of Solids, Vol. 48, No. 12, pp. 2467-2512, 2000
[46] K. Siruguet, J. B. Leblond, "Effect of void locking by inclusions upon the plastic behavior of porous ductile solids – part II: theoretical modeling and numerical study of void coalescence", International Journal of Plasticity, Vol. 20, No. 2, pp. 255-268, 2004
[47] W. Brocks, D. Z. Sun, A. Hönig, "Verification of the transferability of micromechanical parameters by cell model calculations with visco-plastic materials", Vol. 11, No. 8, pp. 971-989, 1995

Engineering, Technology & Applied Science Research, Vol. 11, No.
4, 2021, 7405-7410 | www.etasr.com | Shamsan & Almuhanna: Intersystem Interference Study Between Medical Capsule Camera Endoscopy and Other Systems

Intersystem Interference Study Between Medical Capsule Camera Endoscopy and Other Systems in Co-Channel and Adjacent Bands

Zaid Ahmed Shamsan, Electrical Engineering Department, College of Engineering, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia, shamsan@ieee.org
Khalid Almuhanna, Electrical Engineering Department, College of Engineering, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia, kalmuhanna@imamu.edu.sa

Abstract—The ultra-high frequency (UHF) band occupies a very vital region of the spectrum and is becoming very congested because many applications use it. The capsule camera (CapCam), an ultra-low power wireless device, is a short range device (SRD) application that utilizes the UHF spectrum for medical endoscopy and is designed to operate in the 430-440 MHz frequency band. This study focuses on the interference between the CapCam and other systems operating at the frequency of 435 MHz and in adjacent bands. Other systems that can operate in this band include non-specific SRDs and radiolocation services such as airborne and ground radar stations. The minimum coupling loss (MCL) method is implemented in this study. The findings show that restricted distances between the CapCam and other services must be maintained when the CapCam is in use, in order to avoid harmful interference from the CapCam, especially in the case of radiolocation services.

Keywords—CapCam; airborne radar; ground radar; interference; MCL method

I. Introduction

Wireless medical capsule endoscopy (WMCE) is a new generation of medical short range device (SRD) applications, characterized by ultra-low power operation over short distances. The capsule camera (CapCam) is the main component of the ultra-low power WMCE application.
CapCam endoscopy is a doctor-recommended procedure that uses a miniature wireless camera to take images of a patient's digestive tract as it passes through it. The camera is placed within a small capsule (approximately the size of a vitamin pill) that the patient swallows; it takes pictures as the capsule passes through the patient's digestive system and transfers them wirelessly to a recorder carried by the patient [1]. The CapCam is a medical diagnostic tool designed to operate in the UHF range, including the 430-440 MHz frequency band [2]. This band is occupied by several services, such as radiolocation services, amateur radio services, non-specific SRDs (NSRDs), land mobile services, and earth exploration-satellite services. Therefore, the possibility of interference between these systems and the CapCam service needs to be investigated, since such interference may degrade system performance. Depending on the properties of the various WMCE systems and the method of treatment, manufacturers set many contraindications. One such contraindication is electromagnetic radiation, represented by the interference of the CapCam with other wireless devices (intersystem interference). Based on previous studies and the manufacturers' recommendations, these contraindications include effects on cardiac pacemakers or other implanted electro-medical devices, and the creation of strong electromagnetic fields near devices such as magnetic resonance imaging (MRI) scanners [3, 4]. More broadly, in this paper the intersystem interference between the CapCam service and other systems is treated on a primary-secondary operating basis, where the CapCam is a secondary service and the other systems are considered the primary services [5].
Comprehensively, when SRDs (as secondary services) operate in shared bands, they are not permitted to cause harmful interference to (primary) radio services; in general, an SRD cannot claim protection from interference caused by radio communication services as defined by the International Telecommunication Union Radio Regulations (ITU-RR) [6]. This means that the CapCam must not cause interference to the other primary services; therefore, this paper studies the effect of the CapCam service on the other services. Both co-channel interference and adjacent channel interference are examined in line-of-sight (LOS) and non-LOS (NLOS) environments.

II. Intersystem Interference Scenarios

This section summarizes the proposed interference scenario between the CapCam and other systems that share the 430-440 MHz frequency band, and describes the services/systems involved in this study.

Corresponding author: Zaid Ahmed Shamsan

A. Interference Scenario

The interference scenario is shown in Figure 1. The CapCam service is assumed to operate in the 430-440 MHz frequency band, sharing it with other services (radiolocation and NSRD) according to Article 5 of the ITU-RR [5] and the European Common Allocations (ECA) [7]. The frequency allocations in the 430-440 MHz band for radiolocation services (airborne and ground radars) and SRDs are shown in Table I. It can be seen that the proposed use of the CapCam application in this band would affect both the radiolocation and NSRD systems. This study investigates this impact in order to coordinate the use of the CapCam with the radiolocation and NSRD systems.

Table I.
The ITU-R spectrum allocation for the CapCam, radiolocation, and NSRD services in the 430-440 MHz band

  Frequency band (MHz) | ITU-RR allocation
  430-433.05           | Radiolocation
  433.05-434.79        | Radiolocation, NSRD
  434.79-440           | Radiolocation

Fig. 1. The interference scenario of the CapCam with other services.

Fig. 2. The scenario of the CapCam and DR as a WMCE application.

It is assumed that the CapCam service (the interferer) is used inside a medical building such as a hospital and causes interference to three distinct outdoor wireless systems (airborne radar, ground radar, and NSRD), termed the interfered services (Figure 1). The following subsections briefly describe these systems as well as the CapCam service.

B. The CapCam Service

This is a relatively new medical SRD application that can perform medical examinations of patients with specific digestive conditions without the bleeding or sedation hazards involved in traditional endoscopy [2]. The crucial part of the new application is a disposable tiny optical imaging camera embedded in a capsule. The CapCam is given to the patient to swallow, and while it moves through the patient's digestive tract it sends images to a receiver (data recorder, DR) outside the patient's body, as shown in Figure 2.

C. Non-Specific SRDs

NSRDs are devices for wireless telegraphy including telemetry, tele-command, alarms, and data transmission. Telemetry services use radio communication for automatically indicating or recording measurements at a distance from the measuring instrument. Tele-command services use radio communication to transmit signals that initiate, modify, or terminate functions of equipment at a distance. Alarms are devices used for alarm systems, including social, security, and safety alarms [8].

D. Radiolocation

Radiolocation services are defined as radiodetermination services for the purpose of position determination.
Radiodetermination is defined as the determination of the position, velocity, and/or other characteristics of an object by means of the propagation properties of radio waves [9, 10]. The airborne radar [11] is a radar carried on an aircraft and used to detect objects moving at very low speeds, whereas the ground radar is used for fixed, mobile, or transportable operations.

III. Interference Calculation Methodology

The method proposed to calculate the intersystem interference between the CapCam and the other systems is the standard minimum coupling loss (MCL) method, which consists of determining the critical minimum propagation loss required to avoid interference. Once the total loss is obtained, it is straightforward to determine a matching minimum separation distance for a given propagation model. The propagation model used for the assessment of the separation distance is the free space model, together with indoor penetration losses, since the CapCam naturally operates from an indoor environment and emits power into the outdoor environment. The free space wave propagation model is given by [12]:

\[ I = \frac{P_t\, G_t\, G_r\, F_b}{L_{ch}} \tag{1} \]

where I denotes the received interference power at the interfered system (radiolocation or NSRD systems), P_t denotes the transmit power of the interferer system (the CapCam service), G_t and G_r are the gains of the transmitter and receiver antennas respectively, F_b denotes the bandwidth correction factor between the CapCam and the interfered system, and L_ch is the channel propagation loss due to the free space (outdoor) environment and the indoor penetration loss. The expression in (1) can be represented on the decibel scale as follows [13]:

\[ I = P_t + G_t + G_r + F_b - L_{ch} \tag{2} \]

where L_ch comprises the free space loss due to the free space environment (L_fs) and the indoor penetration loss (L_p). The L_fs
loss mainly depends on the wavelength of the traveling signal, λ, in either LOS or NLOS environments, and on the distance d between the transmitter and receiver, while L_p is determined by the material used to construct the hospital building. Therefore, the L_ch factor can be defined for LOS and NLOS as follows:

\[ L_{ch} = L_{fs} + L_p \tag{3} \]

When the signal travels in a LOS outdoor environment, the loss L_ch is given by (4) for true urban propagation prediction [14, 15]:

\[ L_{ch}(\mathrm{LOS}) = 32.45 + 20\log_{10}(f\,d) + L_p \tag{4} \]

If the signal travels in an NLOS outdoor environment, the loss L_ch is given by:

\[ L_{ch}(\mathrm{NLOS}) = 32.45 + 20\log_{10}(f) + 35\log_{10}(d) + L_p \tag{5} \]

where the frequency f is in MHz and the distance d is in km. In this study, both interference situations (co-channel and adjacent channel interference) are considered in both propagation environments.

IV. System Parameters

The main parameters of the considered systems (the CapCam, NSRD, airborne radar, and ground radar) are shown in Tables II and III. It is worth mentioning that the radiation from the CapCam may not be highly uniform throughout the whole channel bandwidth except within the co-channel frequency (around 10 MHz). Also, a body loss of 10 dB is assumed to distinguish the power levels inside and outside the body.

Table II. Main parameters of the interferer system (the CapCam)

  Parameters                                 | Value
  Frequency of operation                     | 430-440 MHz
  Single channel RF bandwidth                | 10 MHz
  Maximum ERP of the transmitter             | -30 dBm
  Antenna gain                               | 2.15 dB
  ERP outside patient's body                 | -40 dBm
  Maximum ERP density outside patient's body | -50 dBm/100 kHz
  Activity cycle                             | 8-12 h, single use
  Building penetration loss                  | 10 dB

Table III.
Main parameters of the interfered systems

  Parameters                      | NSRD    | Airborne radar | Ground radar
  Channel bandwidth (MHz)         | 0.250   | 1              | 1
  Rx antenna gain (dB)            | -2.85   | 22             | 38
  Rx interference threshold (dBm) | -110    | -114.9         | -115.9
  Rx protection criteria (dB)     | C/I = 8 | I/N = -6       | I/N = -6
  Height above ground level (m)   | 3       | >9000          | 8

V. Results and Discussion

This section analyzes and discusses the results of the coordination between the CapCam and the considered systems that share the 430-440 MHz frequency band.

A. The CapCam and NSRD Systems

Since the NSRD is set to run in the 433.05-434.79 MHz band, the operating carrier frequency is assumed to be 433.91 MHz. The channel propagation is affected by the penetration loss of 10 dB as well as by the LOS or NLOS environment loss. The interference levels from the CapCam service into the NSRD service versus the separation distance in the co-channel case are shown in Figure 3. The figure illustrates that the minimum distance in the LOS environment is about 322 m, whereas it is 27.6 m in NLOS dense urban areas; at these distances, the received power matches the NSRD receiver sensitivity of -110 dBm/25 kHz. Figure 3 also shows that as the distance increases, the interference level from the CapCam into the NSRD receiver decreases due to propagation, for both LOS and NLOS, with a much steeper decrease in NLOS than in the LOS environment. To allow proper coordination between the CapCam and NSRD systems, the required margin in the co-channel band is depicted in Figure 4: as the distance increases, the required margin decreases linearly on the log scale, reaching 0 dB at the separation distances of 322 m and 27.6 m for LOS and NLOS respectively, because at these distances the received power is equal to the NSRD receiver sensitivity of -110 dBm.
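The MCL procedure above amounts to solving (2) for the channel loss at which the received power just meets the receiver threshold, then inverting (4) or (5) for the distance. A minimal sketch with illustrative parameters; the paper's exact bandwidth-correction (F_b) assumptions are not reproduced here, so the resulting distance is not one of the quoted figures:

```python
import math

def required_loss_db(pt_dbm, gt_db, gr_db, fb_db, threshold_dbm):
    """Minimum channel loss so that the received power in (2)
    does not exceed the receiver threshold."""
    return pt_dbm + gt_db + gr_db + fb_db - threshold_dbm

def lch_los_db(f_mhz, d_km, lp_db):
    """Forward LOS loss model, per (4)."""
    return 32.45 + 20.0 * math.log10(f_mhz * d_km) + lp_db

def mcl_distance_km(loss_db, f_mhz, lp_db, dist_exponent=20.0):
    """Invert (4) (exponent 20) or (5) (exponent 35) for the distance in km."""
    fixed = 32.45 + 20.0 * math.log10(f_mhz) + lp_db
    return 10.0 ** ((loss_db - fixed) / dist_exponent)

# illustrative: ERP-style Pt + Gt = -40 dBm, Gr = -2.85 dB, Fb = 0 dB,
# NSRD threshold -110 dBm, f = 433.91 MHz, Lp = 10 dB
loss = required_loss_db(-40.0, 0.0, -2.85, 0.0, -110.0)  # 67.15 dB
d_km = mcl_distance_km(loss, 433.91, 10.0)
```

A useful self-check of the inversion is the round trip: evaluating the forward model (4) at the returned distance reproduces the required loss.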
Before reaching the aforementioned distances, the interference power is high and can deteriorate the NSRD receiver performance. Therefore, this distance must be kept in consideration when operating the CapCam and NSRD services.

Fig. 3. The interference from the CapCam into the NSRD in the co-channel frequency band.

On the other hand, by shifting the CapCam carrier frequency by 10 MHz, the two systems operate under the adjacent channel scenario, as illustrated in Figures 5 and 6. In Figure 5, the minimum separation distance from the CapCam service is 102 m in LOS areas, while the distance in NLOS dense urban areas is only 15 m, lower than in the LOS environment. These findings are confirmed in Figure 6, which shows the power margin in both areas: the power margin is 0 dB at the above-mentioned distances because the received power is equivalent to the NSRD receiver sensitivity. From the operation of the CapCam and NSRD services it can be concluded that the interference power in NLOS is less than in the LOS environment. This variation occurs because the NLOS environment blocks the interference power from the CapCam from reaching the interfered system's receiver at its full strength, whereas the LOS environment allows the interference signal to travel with no obstacles in its path. Moreover, the minimum distance required to operate the CapCam and NSRD services simultaneously in the same area with no harmful interference using the co-channel frequency band is higher than the distance using the adjacent channel.
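The minimum coordination distance follows by inverting (4)/(5) for the distance at which the propagation loss first consumes the whole interference budget. A sketch (the 120 dB budget below is illustrative only, not the paper's exact link budget):

```python
import math

def min_separation_km(allowed_loss_db: float, f_mhz: float, los: bool) -> float:
    # Solve 32.45 + 20*log10(f) + slope*log10(d) = allowed_loss_db for d,
    # where slope is 20 (LOS, Eq. (4)) or 35 (NLOS, Eq. (5)).
    slope = 20.0 if los else 35.0
    return 10 ** ((allowed_loss_db - 32.45 - 20 * math.log10(f_mhz)) / slope)

budget = 120.0                                          # illustrative MCL budget, dB
d_los = min_separation_km(budget, 433.91, los=True)     # roughly 55 km
d_nlos = min_separation_km(budget, 433.91, los=False)   # roughly 9.9 km
```

With the same budget, the NLOS distance is far smaller than the LOS one, which is the qualitative behavior reported in Figures 3-14.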
This occurs because the power emitted by the CapCam service in the adjacent channel frequency band is 10 dB less than the power emitted in the co-channel band. Thus, the decrease in interference emission power translates into a decrease in the distance over which a harmful interference signal can reach the affected NSRD receiver. This finding is consistent with the findings of [16].

Fig. 4. The required margin for coordination of the CapCam and the NSRD in the co-channel frequency band.

Fig. 5. The interference from the CapCam to the NSRD in the adjacent channel frequency band.

B. The CapCam and Airborne Radar Systems

Here, the affected service is the airborne radar while the interferer is the CapCam service. The airborne radar receiver has an interference threshold of -114.9 dBm. Figures 7 and 8 illustrate the interference power level as the distance between the two systems increases, in the co-channel and adjacent channel frequency bands respectively. Both figures show that the distance required in LOS is higher than that required in NLOS areas. In addition, the minimum distance in the co-channel band is higher than in the adjacent channel band: in the co-channel scenario the distance is 1563 m and 67.5 m, while in the adjacent channel scenario it is 494.5 m and 36 m, for LOS and NLOS respectively. Figures 9 and 10 depict the power margin for the same scenarios; 0 dB is the margin at the above-mentioned distances. At greater distances, the power margin is negative, which means the airborne radar can run with no interference acting on it from the CapCam.

Fig. 6. The required margin for the coordination of the CapCam and the NSRD in the adjacent channel frequency band.

Fig. 7. The interference from the CapCam to the airborne radar in the co-channel frequency band.

Fig. 8. The interference from the CapCam to the airborne radar in the adjacent channel frequency band.
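The co-channel to adjacent-channel distance ratios reported above follow directly from the loss slopes: a drop of Δ dB in emitted power scales the required separation by 10^(-Δ/20) under the 20 log(d) LOS model and by 10^(-Δ/35) under the 35 log(d) NLOS model. A quick check against the NSRD figures:

```python
def distance_scale(delta_db: float, slope: float) -> float:
    # Distance shrink factor when the interfering power drops by delta_db
    # under a slope*log10(d) path-loss model.
    return 10 ** (-delta_db / slope)

los_factor = distance_scale(10.0, 20.0)   # ~0.316
nlos_factor = distance_scale(10.0, 35.0)  # ~0.518

# LOS: 322 m co-channel -> ~102 m adjacent channel, as in Figures 3 and 5.
adjacent_los = 322 * los_factor
```

The NLOS prediction (27.6 m x 0.518, about 14.3 m) is close to, though not exactly, the reported 15 m, since a real NLOS environment adds terms beyond the pure 35 log(d) slope.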
C. The CapCam and Ground Radar Systems

For this scenario, the receiver has an interference threshold of -115.9 dBm, lower than that of the airborne radar. The findings in Figures 11 and 12 present the interference levels versus the distance between the CapCam and ground radar services. The minimum distance at which the two systems can operate together with no harmful interference is 1754 m and 72 m for the co-channel frequency band in the LOS and NLOS environments respectively. Using the adjacent channel band, the minimum distances decrease to 555 m and 39 m in the LOS and NLOS environments respectively. Moreover, Figures 13 and 14 show the required power margin between the received interference from the CapCam service and the maximum interference threshold. At the above-mentioned distances this margin is 0 dB, and at shorter distances the ground radar cannot work properly. For instance, at distances less than 1754 m (co-channel) and 555 m (adjacent channel) the ground radar may not operate properly if the CapCam service operates at the same time.

Fig. 9. The required margin for the coordination of the CapCam and the airborne radar in the co-channel frequency band.

Fig. 10. The required margin for the coordination of the CapCam and the airborne radar in the adjacent channel frequency band.

Table IV summarizes the results of the presented scenarios. It shows that the ground radar service is the most affected among the considered services due to its most stringent (lowest) interference threshold and its higher antenna gain.
Although the deployment density of CapCam services is exceptionally low and the single-use activity period of a CapCam device is short compared with many other SRD applications, the separation distance should still be taken into account. Another important point is that this study assumes the patient treated with the CapCam is separated by only one wall from the considered devices/services. In practical situations there may be many walls or partitions, which contribute additional propagation loss that may allow the CapCam to operate alongside other services without interference.

Fig. 11. The interference from the CapCam to the ground radar in the co-channel frequency band.

Fig. 12. The interference from the CapCam to the ground radar in the adjacent channel frequency band.

Fig. 13. The required margin for the coordination of the CapCam and the ground radar in the co-channel frequency band.

Fig. 14. The required margin for the coordination of the CapCam and the ground radar in the adjacent channel frequency band.

This paper illustrates new findings on the coordination of operating a CapCam system with radiolocation and NSRD services, investigated in order to eliminate possible interference to those services. This study considered the effect of building penetration loss, adjacent channel interference, and long coverage distance, which makes it more technically sound compared to the work in [2]. Ultimately, the findings of this paper aim to reduce the restrictions that must be taken into account for the CapCam service to run harmlessly beside other wireless services.

Table IV.
Minimum physical separation between the CapCam and other systems in LOS and NLOS environments

Channel type | Environment | With NSRD | With airborne radar | With ground radar
Co-channel | LOS | 322 m | 1563 m | 1754 m
Co-channel | NLOS | 27.6 m | 67.5 m | 72 m
Adjacent channel | LOS | 102 m | 494.5 m | 555 m
Adjacent channel | NLOS | 15 m | 36 m | 39 m

VI. Conclusion

This paper presents a study of the medical capsule camera endoscopy (CapCam) and other systems in the 430-440 MHz band using the MCL approach. The study covered LOS and NLOS areas using co-channel and adjacent channel frequency bands. It was found that 1754 m is the maximum of the three systems' required separations, and thus a conservative protection distance, in the case of the co-channel frequency band. The corresponding conservative distance is 555 m in the adjacent channel scenario within a LOS environment. In the case of NLOS, the physical separation distances decrease dramatically. Further studies are recommended to investigate and analyze the effect of different practical situations with different building construction materials, stories, and designs.

Acknowledgment

The authors would like to thank the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) for the financial support of the project under grant No. 18-11-14-001.

References

[1] D. V. Moyer, T. P. Smith, J. G. Erd, D. A. Luebke, and K. L. Tuma, "Ingestible device with propulsion capabilities," US20200405129A1, Dec. 31, 2020.
[2] "Coexistence of wideband ultra-low power wireless medical capsule endoscopy application operating in the frequency band 430-440 MHz," ECC, ECC Report 267, 2017.
[3] D. Bandorski et al., "Contraindications for video capsule endoscopy," World Journal of Gastroenterology, vol. 22, no. 45, pp. 9898-9908, Dec. 2016, https://doi.org/10.3748/wjg.v22.i45.9898.
[4] S. Dubner, Y. Dubner, H. Rubio, and E.
Goldin, "Electromagnetic interference from wireless video-capsule endoscopy on implantable cardioverter-defibrillators," Pacing and Clinical Electrophysiology, vol. 30, no. 4, pp. 472-475, Apr. 2007, https://doi.org/10.1111/j.1540-8159.2007.00695.x.
[5] "Article 5: Frequency allocations," ITU Radio Regulations. https://life.itu.int/radioclub/rr/art05.htm (accessed Jul. 03, 2021).
[6] ERC Recommendation relating to the use of short range devices (SRD). ERC, 2021.
[7] "The European table of frequency allocations and applications in the frequency range 8.3 kHz to 3000 GHz (ECA table)," ERC, ERC Report 25, 2020.
[8] ETSI EN 305 550 V2.1.0 (2017-10): Short Range Devices (SRD); Radio equipment to be used in the 40 GHz to 246 GHz frequency range; Harmonised standard for access to radio spectrum. ETSI, 2017.
[9] "Assessment of the technical feasibility of introducing very narrow channel spacing in some existing plans, in guard bands and center gaps of FWS channel arrangement at 6 GHz and 10 GHz," ECC, ECC Report 215, 2014.
[10] M. Almutiry, "UAV tomographic synthetic aperture radar for landmine detection," Engineering, Technology & Applied Science Research, vol. 10, no. 4, pp. 5933-5939, Aug. 2020, https://doi.org/10.48084/etasr.3611.
[11] G. W. Stimson, Stimson's Introduction to Airborne Radar, 3rd ed. Edison, NJ, USA: SciTech Publishing, 2014.
[12] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall, 2001.
[13] Recommendation ITU-R P.530-17: Propagation data and prediction methods required for the design of terrestrial line-of-sight systems. Geneva, Switzerland: ITU, 2017.
[14] Recommendation ITU-R P.525-4: Calculation of free-space attenuation. Geneva, Switzerland: ITU, 2019.
[15] Recommendation ITU-R P.1411: Propagation data and prediction methods for the planning of short-range outdoor radiocommunication systems and radio local area networks in the frequency range 300 MHz to 100 GHz. Geneva, Switzerland: ITU, 2019.
[16] K. Kimani and M. Njiraine, "Cognitive radio spectrum sensing mechanisms in TV white spaces: A survey," Engineering, Technology & Applied Science Research, vol. 8, no. 6, pp. 3673-3680, Dec. 2018, https://doi.org/10.48084/etasr.2442.

Engineering, Technology & Applied Science Research Vol. 9, No. 6, 2019, 5047-5055 | www.etasr.com | Eyadeh & Al-Ta'ani: Performance Study of Wireless Systems with Switch and Stay Combining Diversity …

Performance Study of Wireless Systems with Switch and Stay Combining Diversity over α-η-µ Fading Channels

Ali A. Eyadeh, Communication Engineering Department, Yarmouk University, Irbid, Jordan, aeyadeh@yu.edu.jo
Mohammad N. Al-Ta'ani, Communication Engineering Department, Yarmouk University, Irbid, Jordan, mohd.taani89@gmail.com

Abstract—In this paper, we consider a switch and stay combining (SSC) diversity scheme operating over the α-η-µ fading channel. New closed-form expressions for the average output SNR (ASNR), the moment generating function (MGF), the outage probability (Pout), and the average symbol error rate (ASER) for M-ary quadrature amplitude modulation (QAM) signaling are derived. The expressions are obtained in terms of the well-known bivariate Fox's H-function (BFHF). It is worth pointing out that the BFHF and the bivariate Meijer's G-function (BMGF) have recently been used extensively in the wireless communications literature to study system performance. The evaluated results are plotted for channel parameters of interest, and the effect of fading severity on the combiner performance is studied.
Moreover, the results are shown to match those previously reported in the literature for other channel models, such as η-µ, as special cases, which confirms the validity of the obtained expressions. Also, insights on the optimal choice of the switching threshold are provided.

Keywords—switch and stay combining (SSC) diversity; α-η-µ fading channel; M-ary QAM; average output SNR (ASNR); moment generating function (MGF); outage probability (Pout)

I. Introduction

Many distributions properly describe the statistics of the mobile radio signal. The long-term signal variation is acknowledged to follow the lognormal distribution, whereas the short-term signal variation is described by various other distributions such as Hoyt, Rayleigh, Rice, Nakagami-m, and Weibull. It is typically accepted that the path strength at any extent is characterized by short-term distributions over a spatial dimension of a few hundred wavelengths, and by the lognormal distribution over areas with larger dimensions. The α-η-µ distribution is a generic fading distribution used to represent the small-scale variation of the fading signal. Channel multipath fading is an important consideration when designing a wireless communication system; therefore, fading mitigation techniques are needed. Diversity combining is an effective technique used to mitigate fading and improve the performance of wireless systems over a fading channel. Various types of diversity combining techniques are used in practice [1], for example selection combining (SC), equal gain combining (EGC), maximal ratio combining (MRC) and switched diversity combining (SDC). Two strategies can be used in SDC: switch and stay combining (SSC), which is considered in this paper, and switch and examine combining (SEC). In an SSC diversity system, the receiver selects a branch until its signal-to-noise ratio (SNR) drops below a predetermined threshold.
When this happens, the combiner switches to the other branch and stays there regardless of whether the SNR of that branch is above or below the predetermined threshold. Several works have analyzed the SSC scheme over fading channels, including [2-14]. In [2-5], the performance of SSC for non-coherent binary frequency shift keying (BFSK) and non-coherent M-ary frequency shift keying (MFSK) over correlated Nakagami-m and Rician fading channels was studied. In [6], the performance of non-coherent MFSK with selection and switched diversity was analyzed over a Hoyt fading channel. The performance of correlated Rician fading channels and correlated Weibull fading channels with SSC diversity was evaluated in [7] and [8] respectively. The performance of dual-branch SSC systems over Nakagami-m, correlated α-µ, correlated η-µ and correlated generalized-K (KG) fading channels was studied and analyzed in [9-14]. Popular fading distributions have been derived assuming a homogeneous diffuse scattering field resulting from randomly distributed point scatterers. The assumption of a homogeneous field is truly an approximation, because the surfaces are spatially correlated, characterizing a non-linear environment. With the aim of exploring this non-homogeneity, two new fading distributions, κ-µ and η-µ, were discussed in [15, 16]; the non-linearity of the propagation medium was addressed more recently in a newly proposed general fading distribution, the α-µ distribution [17]. The α-η-µ distribution is an accepted distribution for a short-term fading model. The probability density function (PDF) of the α-η-µ distribution is expressed in terms of three parameters, α, η and µ, which are associated with the non-linearity of the environment, the scattered-wave power ratio between the in-phase and quadrature components of each cluster of multipath,
Corresponding author: Ali A. Eyadeh
and the number of multipath clusters in the environment, respectively. The α-η-µ model includes, as special cases, other short-term fading distributions, such as the Rayleigh, Nakagami-m, Nakagami-q (Hoyt), Weibull, η-µ and one-sided Gaussian distributions. By setting α=2, it reduces to the η-µ distribution. Furthermore, from the η-µ fading distribution the Nakagami-m model can be obtained in two cases: first for η→1, with the Nakagami parameter m expressed as µ=m/2, and second for η→0, with the parameter m expressed as µ=m. It is well known that the η-µ distribution reduces to the Hoyt distribution when µ=1, with the Hoyt parameter q defined as q=(1-η)/(1+η). From the Hoyt distribution, the one-sided Gaussian is obtained for q→+1 or q→-1 (η→0 or η→∞). In the same way, by equating the in-phase and quadrature component variances, namely by setting η=1, the Rayleigh distribution is derived from Hoyt. Also, the Weibull distribution can be obtained as a special case of the α-η-µ model by setting the parameters µ=1 and η=1. In [18], the α-η-µ and α-κ-µ distributions were discussed. The performance of wireless communication over the α-η-µ fading channel was investigated in [19], where the outage probability, PDF and CDF of the received signal-to-interference ratio were derived. The performance analysis of the α-η-µ fading channel subject to co-channel interference was carried out in [20]. In [21], the performance of digital communication systems operating over α-η-µ fading channels was analyzed and evaluated; specifically, exact closed-form analytical expressions for the MGF, CDF, average channel capacity, and ASEP for different coherent and non-coherent modulation schemes were derived.
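The switch-and-stay rule is easy to simulate. The sketch below uses exponentially distributed branch SNRs (i.e. Rayleigh fading, one of the special cases above) as a stand-in, since sampling the full α-η-µ model is more involved. For a unit-mean exponential branch and threshold γ_T, averaging the SSC output distribution gives a mean output SNR of 1 + γ_T e^(-γ_T), which the simulation should approach:

```python
import math
import random

def ssc_output(snr_a, snr_b, gamma_t):
    """Switch-and-stay combining over two branch-SNR streams.

    Stay on the current branch while its SNR is at or above gamma_t;
    when it drops below, switch to the other branch and use it
    regardless of its own SNR (no second comparison).
    """
    out, branch = [], 0
    for pair in zip(snr_a, snr_b):
        if pair[branch] < gamma_t:
            branch ^= 1            # switch once, then stay
        out.append(pair[branch])
    return out

random.seed(7)
n = 200_000
a = [random.expovariate(1.0) for _ in range(n)]   # unit-mean exponential SNRs
b = [random.expovariate(1.0) for _ in range(n)]
mean_ssc = sum(ssc_output(a, b, gamma_t=1.0)) / n  # ~ 1 + exp(-1) = 1.368
```

The simulated mean exceeds the single-branch mean of 1, illustrating the diversity gain of the scheme without any channel estimation of the inactive branch.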
Switched diversity combining schemes such as SSC are less complex diversity combining schemes, since SSC does not require channel estimation of all branches at the receiver and minimizes the required switching rate among the available diversity branches. Although this diversity combining scheme has already been examined over different fading channels including Rayleigh, Rician, Nakagami, etc., an analysis over fading distributions such as α-η-µ is not available in the literature. In this paper, we derive novel closed-form expressions for the ASNR, outage probability (Pout), MGF, and ASER of the M-ary quadrature amplitude modulation (QAM) scheme for a dual-branch SSC operating over generalized α-η-µ fading channels. The expressions are obtained in terms of the bivariate Fox's H-function (BFHF). It is worth pointing out that the BFHF and the bivariate Meijer's G-function (BMGF) have recently been used extensively in the wireless communications literature when studying system performance. The evaluated results are plotted for channel parameters of interest, and the effect of fading severity on the combiner performance is studied. Our derived expressions are valid for arbitrary values of the fading parameters α, η and µ. Other short-term fading distributions, like Rayleigh, Nakagami-m, Nakagami-q (Hoyt), Weibull, η-µ and one-sided Gaussian, are derived from our results as special cases.

II. System Model and Output Statistics

In this paper, we focus on the performance evaluation of SSC systems over α-η-µ fading channels. In these dual-branch diversity systems the receiver selects a branch until its SNR drops below a predetermined threshold. When this happens, the combiner switches to the other branch and stays there regardless of whether the SNR of that branch is above or below the predetermined threshold.

A. The α-η-µ Distribution

1) Probability density function: We assume that the channel envelope r follows the α-η-µ distribution. The probability density function (PDF) of the channel under consideration is given as [22]:
f_R(r) = (2 α √π μ^(μ+1/2) h^μ / (Γ(μ) H^(μ-1/2))) (r^(α(μ+1/2)-1) / r̂^(α(μ+1/2))) exp(-2μh (r/r̂)^α) I_(μ-1/2)(2μH (r/r̂)^α)    (1)

where Γ(z) = ∫₀^∞ t^(z-1) e^(-t) dt is the gamma function, I_ν(·) is the modified Bessel function of the first kind and arbitrary order ν, α > 0, µ > 0, η > 0, h = (1+η)²/(4η), H = (1-η²)/(4η), and r̂ = (E[r^α])^(1/α) is the α-root mean value of the envelope r. To derive the cumulative distribution function (CDF) of the SNR, we first need the PDF of the SNR. We define the instantaneous SNR γ as [23]:

γ = γ̄ (r/r̂)²    (2)

where γ̄ = r̂² Eb/N0 and Eb/N0 is the energy per bit to noise power spectral density ratio. After performing the random variable transformation using (1) and (2) [24], the PDF of γ is obtained as:

f_γ(γ) = (√π α μ^(μ+1/2) h^μ / (Γ(μ) H^(μ-1/2))) (γ^(α(2μ+1)/4-1) / γ̄^(α(2μ+1)/4)) exp(-2μh (γ/γ̄)^(α/2)) I_(μ-1/2)(2μH (γ/γ̄)^(α/2))    (3)

The unified PDF in (3) can be written in terms of Fox's H-functions, by expressing the exponential and Bessel factors as H-functions, as [21]:

f_γ(γ) = (α μ^(μ+1/2) h^μ / (Γ(μ) H^(μ-1/2))) (γ^(α(2μ+1)/4-1) / γ̄^(α(2μ+1)/4)) H^{1,0}_{0,1}[2μ(h-H)(γ/γ̄)^(α/2) | - ; (0,1)] H^{1,1}_{1,2}[4μH(γ/γ̄)^(α/2) | (1/2,1) ; (μ-1/2,1), (1/2-μ,1)]    (4)

where H^{m,n}_{p,q}[·] denotes Fox's H-function [25].

2) The cumulative distribution function: The CDF of γ, with its corresponding PDF defined in (3), can be derived from the main definition of the CDF, given by [1] as:

F_γ(γ) = ∫₀^γ f_γ(w) dw    (5)

For arbitrary values of µ, the CDF of γ in (5) can be expressed as [18]:

F_γ(γ) = 1 - Y_μ( H/h , √(2hμ) (γ/γ̄)^(α/4) )    (6)

where Y_ν(a; b) = (2^(3/2-ν) √π (1-a²)^ν / (a^(ν-1/2) Γ(ν))) ∫_b^∞ x^(2ν) exp(-x²) I_(ν-1/2)(a x²) dx denotes the Yacoub integral [15]. Hence, the CDF of γ is obtained as:

F_γ(γ) = ((1-(H/h)²)^μ / Γ(2μ+1)) [2μh (γ/γ̄)^(α/2)]^(2μ) Φ₂( μ, μ; 2μ+1; -2μ(h+H)(γ/γ̄)^(α/2), -2μ(h-H)(γ/γ̄)^(α/2) )    (7)

where Φ₂ is the confluent Lauricella function [26].
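The PDF (3) and the Φ₂-based CDF (7) can be cross-checked numerically. The sketch below implements the Bessel series and the Humbert Φ₂ double series directly; these are pure-Python stand-ins, adequate only for moderate arguments and for η < 1 as written:

```python
import math

def bessel_i(nu, z, terms=60):
    # Modified Bessel function of the first kind (series definition).
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def phi2(b1, b2, c, x, y, terms=40):
    # Humbert (confluent Lauricella) Phi_2 double series.
    s = 0.0
    for m in range(terms):
        for n in range(terms):
            s += (math.gamma(b1 + m) * math.gamma(b2 + n) * math.gamma(c)
                  / (math.gamma(b1) * math.gamma(b2) * math.gamma(c + m + n))
                  ) * x ** m * y ** n / (math.factorial(m) * math.factorial(n))
    return s

def snr_pdf(g, gbar, alpha, eta, mu):
    # alpha-eta-mu SNR pdf, Eq. (3).
    h = (1 + eta) ** 2 / (4 * eta)
    bigh = (1 - eta ** 2) / (4 * eta)
    z = (g / gbar) ** (alpha / 2)
    c = (math.sqrt(math.pi) * alpha * mu ** (mu + 0.5) * h ** mu
         / (math.gamma(mu) * bigh ** (mu - 0.5)))
    p = alpha * (2 * mu + 1) / 4
    return (c * g ** (p - 1) / gbar ** p * math.exp(-2 * mu * h * z)
            * bessel_i(mu - 0.5, 2 * mu * bigh * z))

def snr_cdf(g, gbar, alpha, eta, mu):
    # Closed-form CDF, Eq. (7).
    h = (1 + eta) ** 2 / (4 * eta)
    bigh = (1 - eta ** 2) / (4 * eta)
    z = (g / gbar) ** (alpha / 2)
    pref = ((1 - (bigh / h) ** 2) ** mu / math.gamma(2 * mu + 1)
            * (2 * mu * h * z) ** (2 * mu))
    return pref * phi2(mu, mu, 2 * mu + 1,
                       -2 * mu * (h + bigh) * z, -2 * mu * (h - bigh) * z)

def cdf_numeric(g, gbar, alpha, eta, mu, steps=2000):
    # Trapezoidal integration of the pdf, Eq. (5), as a cross-check.
    dx = g / steps
    ys = [snr_pdf(i * dx, gbar, alpha, eta, mu) if i > 0 else 0.0
          for i in range(steps + 1)]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
```

For α=2, η=0.5, µ=1 the pdf reduces to 3(e^(-1.5γ) - e^(-3γ)) for γ̄=1, and both routes give F(1) ≈ 0.6035, which agrees with (7).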
3) The moment generating function (MGF): The MGF for the α-η-µ fading channel is obtained in terms of the BFHF as [21]:

M_γ(s) = (α μ^(μ+1/2) h^μ / (Γ(μ) H^(μ-1/2) (sγ̄)^(α(2μ+1)/4))) H^{0,1:1,0;1,1}_{1,0:0,1;1,2}[ 2μ(h-H)(sγ̄)^(-α/2), 4μH(sγ̄)^(-α/2) | (1-α(2μ+1)/4; α/2, α/2) : - ; (1/2,1) | - : (0,1) ; (μ-1/2,1), (1/2-μ,1) ]    (8)

B. Output Statistics of the SSC System

The PDF, CDF and MGF of the received SNR at the output of a dual-branch SSC system over α-η-µ fading channels are derived in this section.

1) The PDF of the received SNR: Let γ_SSC denote the SNR at the output of the SSC combiner and γ_T the predetermined switching threshold. To derive the PDF of the SSC output SNR, we first express the CDF of the output SNR, F_γSSC(γ), in terms of the CDF of the individual branch SNR, F_γ(γ), as [1]:

F_γSSC(γ) = F_γ(γ_T) F_γ(γ),  γ < γ_T
F_γSSC(γ) = F_γ(γ) - F_γ(γ_T) + F_γ(γ_T) F_γ(γ),  γ ≥ γ_T    (9)

Therefore, the CDF for dual-branch α-η-µ fading channels with SSC diversity is obtained by inserting (7) into (9) as:

F_γSSC(γ) = A F_γ(γ),  γ < γ_T
F_γSSC(γ) = F_γ(γ) - A + A F_γ(γ),  γ ≥ γ_T    (10)

where A = F_γ(γ_T) = ((1-(H/h)²)^μ / Γ(2μ+1)) [2μh (γ_T/γ̄)^(α/2)]^(2μ) Φ₂( μ, μ; 2μ+1; -2μ(h+H)(γ_T/γ̄)^(α/2), -2μ(h-H)(γ_T/γ̄)^(α/2) ).

Differentiating F_γSSC(γ) with respect to γ, we get the PDF of the SNR at the output of the SSC combiner, f_γSSC(γ), in terms of the branch CDF F_γ(γ) and PDF f_γ(γ) [1]:

f_γSSC(γ) = dF_γSSC(γ)/dγ = F_γ(γ_T) f_γ(γ),  γ < γ_T
f_γSSC(γ) = [1 + F_γ(γ_T)] f_γ(γ),  γ ≥ γ_T    (11)

The PDF f_γSSC(γ) for fading channels with SSC diversity is found by inserting (7), evaluated at γ_T, and one of the PDF expressions (3) or (4) into (11):

f_γSSC(γ) = A f_γ(γ),  γ < γ_T
f_γSSC(γ) = (1 + A) f_γ(γ),  γ ≥ γ_T    (12)

where A = F_γ(γ_T) as defined in (10).

2) The MGF of the received SNR: The MGF of an SSC diversity receiver operating over α-η-µ fading channels is obtained using the PDF in (12) as [1]:

M_γSSC(s) = ∫₀^∞ e^(-sγ) f_γSSC(γ) dγ = [1 + F_γ(γ_T)] M_γ(s) - I₁,  I₁ = ∫₀^(γ_T) e^(-sγ) f_γ(γ) dγ    (13)

where M_γ(s) is the MGF of an individual branch under the α-η-µ fading channel, derived in (8).
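Equations (9)-(12) hold for any branch statistics. A sketch with a generic branch CDF follows, checking continuity at γ_T and the improvement over a single branch above the threshold; the exponential branch is a stand-in, not the α-η-µ CDF:

```python
import math

def ssc_cdf(F, g, g_t):
    # Eq. (10): SSC output CDF built from a single-branch CDF F.
    q = F(g_t)
    return q * F(g) if g < g_t else F(g) - q + q * F(g)

# Stand-in branch: unit-mean exponential SNR (Rayleigh fading).
F_exp = lambda g: 1.0 - math.exp(-g)

g_t = 1.0
left = ssc_cdf(F_exp, g_t - 1e-9, g_t)
right = ssc_cdf(F_exp, g_t, g_t)                 # both tend to F(g_t)**2
better = ssc_cdf(F_exp, 3.0, g_t) <= F_exp(3.0)  # SSC never worse above g_t
```

Continuity at γ_T (both one-sided limits equal F(γ_T)²) follows directly from (9), and for γ ≥ γ_T, F_SSC(γ) - F(γ) = F(γ_T)(F(γ) - 1) ≤ 0, i.e. the combiner improves the SNR distribution above the threshold.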
To obtain an expression for M_γSSC(s) in (13), we need to solve the second integral I₁. Expressing the exponential in I₁ as a Fox H-function, e^(-sγ) = H^{1,0}_{0,1}[sγ | - ; (0,1)] [25], and inserting the H-function form of the PDF (4):

I₁ = (α μ^(μ+1/2) h^μ / (Γ(μ) H^(μ-1/2) γ̄^(α(2μ+1)/4))) ∫₀^(γ_T) γ^(α(2μ+1)/4 - 1) H^{1,0}_{0,1}[2μ(h-H)(γ/γ̄)^(α/2) | - ; (0,1)] H^{1,1}_{1,2}[4μH(γ/γ̄)^(α/2) | (1/2,1) ; (μ-1/2,1), (1/2-μ,1)] H^{1,0}_{0,1}[sγ | - ; (0,1)] dγ    (14)

Using the Mellin-Barnes contour-integral definition of the H-function [25], with contour variables w₁, w₂ and w₃ attached to the three H-functions in (14), I₁ becomes a triple contour integral whose only γ-dependence is the inner power-type integral.    (15)

The inner integral in (15) with respect to γ can be solved by the power integration rule:

I₁' = ∫₀^(γ_T) γ^(c-1) dγ = γ_T^c / c,  with c = α(2μ+1)/4 - (α/2)(w₁+w₂) - w₃    (16)

By using the identity Γ(x+1) = x Γ(x) [27], (16) can be written as:

I₁' = γ_T^c Γ(c) / Γ(c+1)    (17)

Substituting (17) into (15) introduces the ratio Γ(c)/Γ(c+1) into the triple Mellin-Barnes integral.    (18)

Using identity (A.1) from [28], the resulting triple contour integral is recognized as a (trivariate) Fox H-function, so that I₁ is found in closed form:

I₁ = (α μ^(μ+1/2) h^μ γ_T^(α(2μ+1)/4) / (Γ(μ) H^(μ-1/2) γ̄^(α(2μ+1)/4))) H^{0,1:1,0;1,1;1,0}_{1,1:0,1;1,2;0,1}[ 2μ(h-H)(γ_T/γ̄)^(α/2), 4μH(γ_T/γ̄)^(α/2), sγ_T | (1-α(2μ+1)/4; α/2, α/2, 1) : - ; (1/2,1) ; - | (-α(2μ+1)/4; α/2, α/2, 1) : (0,1) ; (μ-1/2,1), (1/2-μ,1) ; (0,1) ]    (19)
In conclusion, a closed-form expression for M_γSSC(s) is obtained by combining (8), (13) and (19):

M_γSSC(s) = [1 + F_γ(γ_T)] M_γ(s) - I₁    (20)

with M_γ(s) the single-branch BFHF expression in (8) and I₁ the trivariate Fox H-function expression in (19).

III. Performance Analysis of the SSC System

Many important measures characterizing the performance of communication systems in fading environments, such as the average symbol error probability (ASEP), the ASNR or the Pout, can be determined by averaging appropriate performance functions over the distribution of the effective SNR at the receiver side. In this section a detailed performance analysis, in terms of Pout, ASEP and ASNR, for SSC diversity receivers operating over α-η-µ fading channels is presented.

A. Outage Probability

The outage probability is the probability that the SNR at the output of the SSC falls below a threshold level γ_th; it is found by replacing γ in F_γSSC(γ) with γ_th, as in (9.241) of [1]:

P_out,SSC(γ_th) = Pr[γ_SSC ≤ γ_th] = F_γSSC(γ_th)    (21)

Since SDC is considered an optimal implementation of the switched diversity system, the optimal switching threshold in the minimum-outage sense is γ_T,opt = γ_th, and because the outage probability of a dual-branch SC is P_out,SC(γ_th) = [F_γ(γ_th)]² [1], the outage probability of a dual-branch SSC system with the optimal switching threshold is [1]:

P_out,SSC(γ_th) = P_out,SC(γ_th) = [F_γ(γ_th)]²    (22)

where F_γ(γ_th) is (7) with γ replaced by γ_th. As a result, a closed-form expression for P_out,SSC(γ_th) is obtained by inserting (7), with γ replaced by γ_th, into (22):

P_out,SSC(γ_th) = { ((1-(H/h)²)^μ / Γ(2μ+1)) [2μh (γ_th/γ̄)^(α/2)]^(2μ) Φ₂( μ, μ; 2μ+1; -2μ(h+H)(γ_th/γ̄)^(α/2), -2μ(h-H)(γ_th/γ̄)^(α/2) ) }²    (23)

B.
Average Output SNR

The average SNR at the SSC output, γ̄_SSC, is a useful performance measure serving as an excellent indicator of overall system fidelity. It can be obtained by averaging γ over f_γSSC(γ) [1]:

γ̄_SSC = ∫₀^∞ γ f_γSSC(γ) dγ = [1 + F_γ(γ_T)] γ̄ - I₂,  I₂ = ∫₀^(γ_T) γ f_γ(γ) dγ    (24)

Differentiating (24) with respect to γ_T and setting the result to zero, it can easily be shown that γ̄_SSC is maximized when the switching threshold is set to γ_T,opt = γ̄. To obtain an expression for γ̄_SSC in (24), we need to solve the second integral I₂. Following steps similar to those leading to (19) and using (2.57) of [28], I₂ is found in terms of the BFHF, with arguments 2μ(h-H)(γ_T/γ̄)^(α/2) and 4μH(γ_T/γ̄)^(α/2) and a prefactor proportional to γ_T^(1+α(2μ+1)/4) / γ̄^(α(2μ+1)/4).    (25)

Finally, the closed-form expression for γ̄_SSC follows by substituting (25) into (24):

γ̄_SSC = [1 + F_γ(γ_T)] γ̄ - I₂    (26)

C. Average Symbol Error Probability

In this section, the ASEP for M-ary QAM signaling of a dual-branch SSC operating over a generalized α-η-µ fading channel is derived. For the M-QAM modulation scheme, the average SEP is obtained using the following averaging process [1]:

P̄_s(e) = ∫₀^∞ P_s(e|γ) f_γSSC(γ) dγ    (27)

where P_s(e|γ) is the conditional SEP for square M-QAM signals whose constellation size M is given by M = 2^k with k even. From (8.10) in [1], P_s(e|γ) is given by:

P_s(e|γ) = 2a erfc(√(bγ)) - a² erfc²(√(bγ))    (28)

where erfc(x) = (2/√π) ∫_x^∞ e^(-t²) dt is the complementary error function [1], a = 1 - 1/√M and b = 3 log₂(M) / (2(M-1)).
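The conditional SEP (28) and the averaging in (27) are straightforward to evaluate numerically as a sanity check on the closed forms. A sketch, again using an exponential (Rayleigh) branch SNR pdf as a stand-in for the α-η-µ pdf:

```python
import math

def mqam_sep(g, M):
    # Eq. (28): conditional symbol error probability of square M-QAM,
    # with a = 1 - 1/sqrt(M) and b = 3*log2(M)/(2*(M-1)), g = SNR per bit.
    a = 1.0 - 1.0 / math.sqrt(M)
    b = 3.0 * math.log2(M) / (2.0 * (M - 1))
    e = math.erfc(math.sqrt(b * g))
    return 2.0 * a * e - a * a * e * e

def asep(M, gbar, steps=20000, gmax=200.0):
    # Eq. (27) by trapezoidal averaging over an exponential pdf (stand-in).
    dx = gmax / steps
    total = 0.0
    for i in range(steps + 1):
        g = i * dx
        w = 0.5 if i in (0, steps) else 1.0
        total += w * mqam_sep(g, M) * math.exp(-g / gbar) / gbar
    return total * dx
```

At zero SNR, (28) gives 1 - 1/M, the error floor of randomly guessing among M symbols, and the average error decreases monotonically with the mean SNR.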
Although (28) is obtained for square constellations, it provides a good approximation for general QAM constellations with M = 2^k points, whether in the shape of a square (k even) or a cross (k odd) [1]. As a result, substituting (28) into (27), the average SEP for the M-QAM modulation scheme can be written as:

P̄_s(e) = [1 + F_γ(γ_T)] { I₃ - I₄ } - { I₅ - I₆ }    (29)

where

I₃ = ∫₀^∞ 2a erfc(√(bγ)) f_γ(γ) dγ,
I₄ = ∫₀^∞ a² erfc²(√(bγ)) f_γ(γ) dγ,
I₅ = ∫₀^(γ_T) 2a erfc(√(bγ)) f_γ(γ) dγ,
I₆ = ∫₀^(γ_T) a² erfc²(√(bγ)) f_γ(γ) dγ.

Differentiating (29) with respect to γ_T and setting the result to zero [1], it can be shown that there is a generic expression for γ_T,opt at which the average error rate is minimal. In general, γ_T,opt is a solution of (9.254) in [1], but explicit closed-form solutions are not always obtainable; in that case one must rely on numerical root-finding techniques to find an accurate solution for the optimum threshold. To find the ASEP for the M-QAM modulation scheme with SSC, we need the integrals I₃, I₄, I₅ and I₆ in (29). These quantities are derived in terms of the BFHF; I₃ takes the form of a bivariate Fox H-function with arguments 2μ(h-H)(bγ̄)^(-α/2) and 4μH(bγ̄)^(-α/2).    (30)
The remaining integral I_5 is likewise found in terms of the BFHF (33). Substituting (30)-(33) into (29), a closed-form expression of \bar{P}(e) is obtained (34); like (26), it is written in terms of the bivariate Fox H-function with arguments involving a, b, the fading parameters \alpha, \eta and \mu, the switching threshold \gamma_T, and the average SNR \bar{\gamma}.

IV. Results and Discussion

In this section, the P_out, ASNR and ASEP of a dual-branch SSC system over α-η-µ fading channels are presented through several numerical examples. The results are obtained using (23), (26) and (34). The optimum switching threshold was applied in each example.
To validate our results, we plot in Figures 1, 2 and 6 the ABEP for coherent BPSK and coherent BFSK, the P_out, and the ASNR over the η-µ and Hoyt (Nakagami-q) channels, which are deduced as special cases of our results and compared with [6, 12]. They exactly match the results reported in Figures 5 and 6 in [12], Figure 5 in [6], Figure 1 in [12], Figure 4 in [6], and Figure 2 in [12], respectively, which validates our work. The corresponding results for the one-sided Gaussian and Nakagami-m fading channels are also presented as special cases of α-η-µ fading. Figure 1 presents the ABEP for coherent BPSK and coherent BFSK for some special cases of fading channels. Figure 2 presents the outage probability with and without diversity over α-η-µ fading channels versus the normalized outage threshold γ_th/γ̄ for different values of α, η and µ. For α=2, η=0.5, µ=0.5 and γ_th/γ̄ = 0 dB, the P_out,SSC(γ_th) with diversity decreases (improves) by 41% compared to the outage probability without diversity.

Fig. 1. ABEP for coherent BPSK and coherent BFSK for some special cases of fading channels.

Fig. 2. Outage probability of a dual-branch SSC system over α-η-µ fading channels versus the normalized outage threshold γ_th/γ̄.

Figures 3 and 4 show the effect of the fading parameters on the outage probability P_out,SSC(γ_th) with average SNR γ̄ = 10 dB and γ_th = 5 dB. Increasing α and/or µ improves the system performance. For example, in Figure 3, with η=2 and µ=1.5 fixed, P_out,SSC(γ_th) at α=1.1 is approximately 54% lower than at α=0.7. In Figure 4, with α=1.5 and η=0.9 fixed, P_out,SSC(γ_th) at µ=1.5 is approximately 57% lower than at µ=1. Also, as η increases, P_out,SSC(γ_th) increases. For example, in Figure 3, with α=2 and µ=1.5 fixed, P_out,SSC(γ_th) at η=2.5 is approximately 31% higher than at η=1.5. This effect occurs for higher values of η.
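The quoted improvement percentages depend on the α-η-µ cdf, but the with/without-diversity comparison itself is mechanical. The sketch below uses the standard SSC output cdf from the literature [1] with Rayleigh branches assumed in place of the α-η-µ cdf, purely for illustration:

```python
import math

def cdf(x, gbar):
    """cdf of the per-branch SNR; Rayleigh (exponential) assumed here in
    place of the alpha-eta-mu cdf used in the paper."""
    return 1.0 - math.exp(-x / gbar)

def pout_ssc(g_th, g_T, gbar):
    """Outage probability at the SSC output, from the standard SSC output
    cdf [1]: F(g_T)*F(g_th) for g_th < g_T,
    else F(g_th)*(1 + F(g_T)) - F(g_T)."""
    F_T, F_th = cdf(g_T, gbar), cdf(g_th, gbar)
    return F_T * F_th if g_th < g_T else F_th * (1.0 + F_T) - F_T

gbar = 1.0
g_th = gbar                            # normalized outage threshold of 0 dB
no_div = cdf(g_th, gbar)               # outage without diversity
with_div = pout_ssc(g_th, gbar, gbar)  # SSC with gamma_T = gbar
print(with_div < no_div)               # True: diversity improves the outage
```

Note that at γ_th = γ_T both branches of the cdf agree and the SSC outage reduces to F(γ_th)², mirroring the diversity gain discussed above.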
It is clear from the above discussion that P_out,SSC(γ_th) improves with increasing α and µ and degrades with increasing η, for the following reasons. The parameter µ is related to the number of multipath clusters, so when µ increases the receiver obtains more copies of the same transmitted signal and P_out,SSC(γ_th) improves. The parameter α represents the power exponent of the sum of the multipath components, so as α increases P_out,SSC(γ_th) improves. The fading parameter η represents the correlation coefficient between the in-phase and quadrature components of each multipath cluster; as η increases the correlation coefficient increases, and P_out,SSC(γ_th) degrades.

Fig. 3. Outage probability of a dual-branch SSC system over α-η-µ fading channels versus the parameter α.

Fig. 4. Outage probability of a dual-branch SSC system over α-η-µ fading channels versus the parameter µ.

Figure 5 shows the normalized average output SNR of a dual-branch SSC system over fading channels (γ̄_SSC/γ̄) versus the parameter α for different values of η and µ. Figure 6 shows γ̄_SSC/γ̄ as a function of µ for different values of α and η. Both figures are plotted at γ_T = 0 dB. The results presented in Figure 5 show that as α increases, γ̄_SSC/γ̄ decreases, indicating a reduced diversity gain. For example, in Figure 5, with η=1.5 and µ=0.5 fixed, γ̄_SSC/γ̄ at α=1.4 is approximately 7% lower than at α=0.6. Similar observations hold for the effect of the fading parameter µ in Figure 6: the results indicate that as µ increases, γ̄_SSC/γ̄ is degraded. For instance, with α=1.5 and η=2.2 fixed, γ̄_SSC/γ̄ at µ=1.8 is nearly 7.5% lower than at µ=0.8.
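The normalized average output SNR γ̄_SSC/γ̄ discussed above can also be reproduced by direct simulation of the SSC switching rule. A Monte Carlo sketch, with Rayleigh branches assumed for illustration (the function name, trial count and seed are arbitrary):

```python
import math, random

def simulate_ssc_avg_snr(gbar, g_T, trials=200_000, seed=7):
    """Monte Carlo estimate of the SSC average output SNR: stay on the
    current branch if its SNR >= g_T, otherwise switch and use the other
    branch unconditionally (the classic SSC rule). Rayleigh branches
    (exponentially distributed SNR) assumed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        g = rng.expovariate(1.0 / gbar)      # current branch SNR
        if g < g_T:
            g = rng.expovariate(1.0 / gbar)  # switched branch, used as-is
        total += g
    return total / trials

gbar = 2.0
est = simulate_ssc_avg_snr(gbar, g_T=gbar)
print(est / gbar)   # near 1 + e^-1 ≈ 1.368 for g_T = gbar (Rayleigh case)
```

For the exponential case, γ̄_SSC = γ̄ + γ_T e^{−γ_T/γ̄}, so at γ_T = γ̄ the normalized value is 1 + e^{−1}; the simulated estimate should land close to this.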
In contrast to α and µ, it is evident from Figures 5 and 6 that as the value of η increases, γ̄_SSC/γ̄ increases. For example, in Figure 6, with α=1 and µ=1.5 fixed, γ̄_SSC/γ̄ at η=2.2 is approximately 1.5% higher than at η=1.2. This effect of η occurs for higher values of η.

Fig. 5. Normalized average output SNR of a dual-branch SSC system over α-η-µ fading channels (γ̄_SSC/γ̄) versus the parameter α.

Fig. 6. Normalized average output SNR of a dual-branch SSC system over α-η-µ fading channels (γ̄_SSC/γ̄) versus the parameter µ.

A graphical illustration of the impact of the switching threshold on γ̄_SSC for different values of α, η and µ is depicted in Figure 7, plotted at an average SNR of γ̄ = 10 dB. As expected, the best performance is obtained for γ_T = γ_T,opt = γ̄. Figure 8 shows the average SEP of 16-QAM for a dual-branch SSC system over α-η-µ fading channels versus the average SNR γ̄ for different values of α, η and µ, with switching threshold γ_T = 5 dB. The curves in Figure 8 fall into different sets according to the average SNR value; each combination of the parameters α, η and µ represents a different channel model, which is why each curve within the same set has a different rate of change. As expected, the ASEP performance improves as the input branch SNR γ̄ increases. The ASEP for a single branch of α-η-µ fading (no diversity) also appears in Figure 8. As shown, the system performance is improved under SSC diversity: for example, with α=1.5, η=1.5, µ=2 and γ̄ = 10 dB fixed, the ASEP under SSC diversity is approximately 59% lower than the single-branch ASEP.

Fig. 7. Average output SNR of a dual-branch SSC system over α-η-µ fading channels (γ̄_SSC) versus the switching threshold γ_T.

Fig. 8. ASEP of 16-QAM for a dual-branch SSC system over α-η-µ fading channels versus the average SNR γ̄.

Figures 9 and 10 show the effect of the fading parameters on the ASEP.
The average SNR is γ̄ = 10 dB and γ_T = 5 dB. As α and/or µ increase, the system performance improves. For example, in Figure 9, with η=2 and µ=1 fixed, the ASEP is reduced by about 39% when α increases from 0.6 to 0.95. Figure 10 shows the effect of the parameter µ on the ASEP: for α=0.5 and η=1.1 fixed, the ASEP is reduced by about 33% when µ increases from 0.75 to 1.35. In contrast with α and µ, it is clear that the ASEP improves as η decreases. For instance, in Figure 9, with α=0.6 and µ=1 fixed, the ASEP is reduced by about 10.5% when η decreases from 2.5 to 0.9. Note that this effect occurs for higher values of η.

Fig. 9. ASEP of 16-QAM of a dual-branch SSC system over α-η-µ fading channels versus α.

Fig. 10. ASEP of 16-QAM of a dual-branch SSC system over α-η-µ fading channels versus µ.

Fig. 11. ASEP of a dual-branch SSC system over α-η-µ fading channels versus the average SNR γ̄ for several QAM constellation sizes.

In Figure 11, we plot the ASEP of a dual-branch SSC system over α-η-µ fading channels versus the average SNR γ̄ for several QAM constellation sizes: 4, 8, 16, 32, 64 and 128. Figure 11 is plotted for fixed values of α, η and µ (α=2, η=1.5 and µ=2), and for γ_T = 3 dB. The degradation of the ASEP with increasing constellation size M is evident, while the ASEP improves with increasing average SNR γ̄.

V. Conclusions

In this paper, a dual-branch SSC diversity scheme operating over the α-η-µ fading channel has been examined. New closed-form analytical expressions were derived for the ASNR, MGF, P_out, and ASEP for M-ary QAM signaling. Expressions for the optimum adaptive switching thresholds were also derived. Some of these expressions were obtained in terms of the well-known bivariate Fox H-function (BFHF).
The results are shown to match those previously reported in the literature for other channel models, such as the η-µ model as a special case, which confirms the validity of the obtained expressions. Using numerical examples, we observed that the dual-branch SSC system improves the P_out, ASNR, and ASEP performance for M-ary QAM signaling. The P_out and ASNR of the SSC diversity system improve as α and/or µ increase with η kept constant. The ASNR also improves as η increases with α and µ kept constant. However, increasing α and µ improves the system performance more than increasing η.

References
[1] M. K. Simon, M. S. Alouini, Digital Communication over Fading Channels: A Unified Approach to Performance Analysis, John Wiley & Sons, 2000
[2] A. A. Abu-Dayya, N. C. Beaulieu, "Analysis of switched diversity systems on generalized-fading channels", IEEE Transactions on Communications, Vol. 42, No. 11, pp. 2959-2966, 1994
[3] A. A. Abu-Dayya, N. C. Beaulieu, "Switched diversity on microcellular Rician channels", IEEE Transactions on Vehicular Technology, Vol. 43, No. 4, pp. 970-976, 1994
[4] S. Haghani, N. C. Beaulieu, "Post detection switch-and-stay combining in Nakagami-m fading", IEEE Vehicular Technology Conference, Los Angeles, USA, September 26-29, 2004
[5] S. Haghani, N. C. Beaulieu, "Revised analyses of postdetection switch-and-stay diversity in Rician fading", IEEE Transactions on Communications, Vol. 54, No. 7, pp. 1175-1178, 2006
[6] A. Chandra, C. Bose, M. K. Bose, "Performance of non-coherent MFSK with selection and switched diversity over Hoyt fading channel", Wireless Personal Communications, Vol. 68, No. 2, pp. 379-399, 2013
[7] P. S. Bithas, P. T. Mathiopoulos, "Performance analysis of SSC diversity receivers over correlated Rician fading satellite channels", EURASIP Journal on Wireless Communications and Networking, Vol. 2007, Article ID 25361, 2007
[8] P. S. Bithas, P. T. Mathiopoulos, G. K.
Karagiannidis, "Switched diversity receivers over correlated Weibull fading channels", International Workshop on Satellite and Space Communications, Madrid, Spain, September 14-15, 2006
[9] S. Khatalin, J. P. Fonseka, "Capacity of correlated Nakagami-m fading channels with diversity combining techniques", IEEE Transactions on Vehicular Technology, Vol. 55, No. 1, pp. 142-150, 2006
[10] P. C. Spalevic, S. R. Panic, C. B. Dolicanin, M. C. Stefanovic, A. V. Mosic, "SSC diversity receiver over correlated α-µ fading channels in the presence of cochannel interference", EURASIP Journal on Wireless Communications and Networking, Vol. 2010, Article ID 142392, 2010
[11] S. R. Panic, P. Spalevic, J. Anastasov, M. Stefanovic, M. Petrovic, "On the performance analysis of SIR-based SSC diversity over correlated α-µ fading channels", Computers and Electrical Engineering, Vol. 37, No. 3, pp. 332-338, 2011
[12] S. Khatalin, "On the performance analysis of SSC diversity system over η-µ fading channels", International Journal of Electronics, Vol. 103, No. 6, pp. 960-974, 2016
[13] S. Haghani, H. Dashtestani, "BER of noncoherent MFSK with post detection switch-and-stay combining in TWDP fading", IEEE Vehicular Technology Conference, Quebec, Canada, September 3-6, 2012
[14] B. R. Manoj, P. R. Sahu, "Performance analysis of dual-switch and stay combiner over correlated K_G fading channels", National Conference on Communications, New Delhi, India, February 15-17, 2013
[15] M. D. Yacoub, "The κ-µ distribution and the η-µ distribution", IEEE Antennas and Propagation Magazine, Vol. 49, No. 1, pp. 68-81, 2007
[16] Z. Hussain, A. R. Khan, H. Mehdi, S. M. A. Saleem, "Analysis of D2D communication system over κ-µ shadowed fading channel", Engineering, Technology & Applied Science Research, Vol.
8, No. 5, pp. 3405-3410, 2018
[17] M. D. Yacoub, "The α-µ distribution: a general fading distribution", 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Lisbon, Portugal, September 15-18, 2002
[18] G. Fraidenraich, M. D. Yacoub, "The α-η-µ and α-κ-µ fading distribution", IEEE 9th International Symposium on Spread Spectrum Techniques and Applications, Manaus-Amazon, Brazil, August 28-31, 2006
[19] G. Stamenovic, S. R. Panic, D. Rancic, C. Stefanovic, M. Stefanovic, "Performance analysis of wireless communication system in general fading environment subjected to shadowing and interference", EURASIP Journal on Wireless Communications and Networking, Vol. 124, pp. 1-8, 2014
[20] S. R. Panic, S. Ninkovic, D. Jaksic, S. Jovkovic, B. Milosevic, "Performance analysis of wireless communication system over α-η-µ fading channels in the presence of CCI", INFOTEH-JAHORINA, Vol. 12, pp. 395-398, 2013
[21] O. S. Badarneh, M. S. Aloqlah, "Performance analysis of digital communication systems over α-η-µ fading channels", IEEE Transactions on Vehicular Technology, Vol. 65, No. 10, pp. 7972-7981, 2016
[22] A. K. Papazafeiropoulos, S. A. Kotsopoulos, "The α-λ-µ and α-η-µ small-scale general fading distributions: a unified approach", Wireless Personal Communications, Vol. 57, No. 4, pp. 735-751, 2011
[23] A. M. Magableh, M. M. Matalgah, "Moment generating function of the generalized α-µ distribution with applications", IEEE Communications Letters, Vol. 13, No. 6, pp. 411-413, 2009
[24] P. Z. Peebles, Probability, Random Variables, and Random Signal Principles, 4th Edition, McGraw-Hill, 2000
[25] A. P. Prudnikov, Y. A. Brychkov, O. I. Marichev, Integrals and Series: Special Functions, CRC Press, 1990
[26] A. Erdelyi, Higher Transcendental Functions, McGraw-Hill, 1953
[27] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series and Products, 7th Edition, Academic Press, 2007
[28] A. M. Mathai, R. K. Saxena, H. J.
Haubold, The H-Function: Theory and Applications, Springer, 2010

Engineering, Technology & Applied Science Research Vol. 10, No. 4, 2020, 6109-6115 | www.etasr.com | Mleke & Dida: A Web-Based Monitoring and Evaluation System for Government Projects in Tanzania

A Web-Based Monitoring and Evaluation System for Government Projects in Tanzania: The Case of the Ministry of Health

Mpawe Nicodem Mleke
School of Computational and Communication Sciences and Engineering, The Nelson Mandela African Institution of Science and Technology, Arusha, Tanzania
mlekem@nm-aist.ac.tz

Mussa Ally Dida
School of Computational and Communication Sciences and Engineering, The Nelson Mandela African Institution of Science and Technology, Arusha, Tanzania
mussa.ally@nm-aist.ac.tz

Abstract—Monitoring and evaluation systems are used by organizations and governments to measure, track progress, and evaluate the outcomes of projects. Organizations can improve their performance, effectiveness, and project results by strengthening their monitoring and evaluation systems. Moreover, various studies reveal the need for information and communication technology systems in monitoring and evaluation activities. Despite the advantages of such tools, most organizations do not employ computerized monitoring and evaluation systems due to their cost and limited expertise, while those that have them lack a systematic alert mechanism for project progress. Currently, the Ministry of Health, Community Development, Gender, Elderly, and Children of Tanzania monitors and evaluates its projects manually, facing the risks and consequences of delayed project completion. In this study, the evolutionary prototyping approach was used to develop the proposed system.
This study describes the development of a web-based monitoring and evaluation system that aims to solve the monitoring and evaluation challenges, simplify work, generate quality data, and support timely, successful project implementation. The developed system was tested and evaluated against the users' requirements and was positively accepted for deployment at the Ministry of Health.

Keywords: health projects; monitoring and evaluation system; web-based; Ministry of Health

I. Introduction

Monitoring and evaluation (M&E) systems are used to improve the performance of projects and achieve positive results in project activities. M&E is essential in helping planners, implementers, managers, policymakers, and donors to understand and obtain the information they need to make informed assessments about project processes and operations [1]. Project activities are usually executed over a fixed period of time, with the aim of achieving desired outcomes or specific goals [2]. Project monitoring is essential for giving feedback about project progress to the beneficiaries involved in the project, the implementers, and the donors who fund the project. Furthermore, project evaluation is necessary for making judgments about the activities of the project and informing program decisions. Evaluation determines efficiency, effectiveness, sustainability, and impact, verifies whether projects have met their targets, and helps to identify areas for improvement. Moreover, by sharing project outputs with others, M&E creates knowledge in project management and promotes accountability to donors, stakeholders, and citizens [3]. The Tanzanian government has recently shown significant efforts to improve the lives of its citizens by initiating and implementing different projects and programs in community empowerment and health [4].
In Tanzania, basic health care services are meant to be equitable, qualitative, affordable, accessible, gender-sensitive, and sustainable, and are overseen by the Ministry of Health, Community Development, Gender, Elderly and Children (MoHCDGEC). Projects are initiated at the MoHCDGEC, but there is a lack of M&E systems for tracking their implementation progress in order to improve data quality and performance, reduce paperwork, and achieve good results.

II. Background Information

Currently, M&E of government projects at the MoHCDGEC is done manually, and as a result, risks are encountered due to the lack of timely remedial actions. Non-uniformity is observed in data collection, reporting, and management for different projects sponsored by different donors within the ministry [5]. An electronic system providing accurate and timely M&E information can be one of the solutions to this problem. Among the challenges faced by the National Malaria Control Program (NMCP) was coordinating the collection of M&E information [6]. Moreover, a lack of coordination among partners, agencies, ministries, and information and communication technology departments was highlighted in the Health Sector Strategic Plan III (2009-2015). The strategic plan further showed a lack of M&E activities for epidemics such as HIV/AIDS, tuberculosis (TB), and malaria due to the poor infrastructure and inefficiencies of the healthcare system [7]. Although donor-funded health projects report an increasing demand for electronic M&E systems to reduce workload and improve data quality, data analysis, report accuracy, and data access [7], such systems are not available at the Ministry of Health.

Corresponding author: Mpawe Nicodem Mleke
This study intends to support the MoHCDGEC by developing an electronic M&E system for government projects in order to track progress, report status, and give alerts and warning information. Moreover, the system will simplify the data collection process, generate qualitative data for planning and evaluation, facilitate successful project implementation, and provide feedback mechanisms between stakeholders, employees, and donors.

III. Related Work

M&E assists governments and organizations to extract data from past and ongoing activities, provides reports on projects' progress, and measures whether a project is meeting its objectives or progressing in the right direction. Without M&E, it is impossible to judge whether the project work is going in the right direction and what future efforts might be required [1]. Traditionally, M&E focused on assessing inputs and implementation processes. Today, M&E focuses on assessing the factors that contribute to development output, outcome, partnerships, advocacy, coordination, and policy advice. Project managers are required to apply the information gained from M&E to improve project activities and strategies. Better project decisions lead to greater accountability to stakeholders and help improve project performance. Close partnership with stakeholders creates knowledge sharing, skills, learning, capacity, and project decision planning, provides valuable feedback, and makes a positive contribution to development effectiveness [1]. A project can succeed through good decision making and cooperative relationships [8]. Having project planning and project management teams maintains judgment of ongoing project activities, monitors progress, solves challenges in case of gaps in planned goals, and improves project performance. The performance of a project is influenced by project management capabilities, cost, quality, time, risk management, communication skills, and human resources [9].
A UNESCO report indicates that governments and organizations have systems, simple or sophisticated, for collecting data to measure the outcomes achieved by a project or program, but that their results are poor due to the lack of electronic M&E systems. It further shows that even in the developed world only a few countries implement projects using M&E systems [10]. The authors in [11] proposed an M&E system for organization and employee evaluation for the Ministry of Trade and Industry in Egypt. The system not only provided regular reports but also assisted top management in getting feedback from employees and customers, eventually increasing employee performance. In [12], a web-based tool was used to support the collection and reporting of data for learning, research, and teaching in education. The tool was needed by students, instructors, and researchers for education project design, student progress evaluation, and feedback. This tool could be adopted at the MoHCDGEC to simplify the implementation of health project activities and provide feedback to different stakeholders. The authors in [13] studied web-based construction projects and developed a tool to monitor their performance, supporting the project manager in measuring and managing people, time, client satisfaction, cost, communication, and quality. The purpose of this tool was to reduce the time used for data collection and dissemination and the data incompatibility occurring between different software systems. However, the system had cost implications in ensuring reliable security, preventing downtime, and facilitating constant monitoring. The M&E system developed in [14] allows monitoring and evaluation of road projects and public works for the Philippine government. The information is secured and cannot be deleted or altered except by the administrator, who updates and edits the data. The Sokoine University of Agriculture in Tanzania has developed a web-based M&E system.
The adoption of this electronic system helps in knowing project progress, learning from achievements, and coordinating and managing project activities. Overall, the design of this system helps detect risks that may occur and gives early warnings to the coordination office and the project's team members. The system has additional functionality allowing researchers and project team members to submit their reports electronically [15]. The literature review reveals that most governments and organizations do not employ computerized M&E systems, and those that do lack a systematic early-warning mechanism for project progress. This study aims to develop a web-based M&E system for health projects in order to keep track of implementation progress, report project status, and give appropriate warning alerts, thus contributing to the completion of the projects' goals.

IV. Methods

A. System Development Approach

In this study, data were collected at the MoHCDGEC in the Dodoma and Dar es Salaam regions, where various projects are monitored and evaluated. Interviews, document reviews, and focus group discussions were used to analyze the current M&E practice for government projects. The requirements for the proposed system were analyzed, and the participants agreed to have an electronic M&E system for government projects in Tanzania. The evolutionary prototyping approach was used to develop the proposed system because it allows changes in every phase [16]. It improves the prototype system, reduces software risks, and minimizes rework and critical or serious defects during system testing.

B. Tools and Technologies Used in System Development

1) Hypertext Preprocessor (PHP): PHP is used for database connection and manipulation and carries out website duties such as authentication, password handling, and forum management. It can be embedded into Hypertext Markup Language (HTML) code [17].
In this study, PHP was used to accept data from the client and send them to the relational database management system (RDBMS) for storage. It ensures the security of logged-in users by maintaining the user session across pages. Furthermore, it was used to connect the developed system to the MySQL database.

2) MySQL: MySQL is a back-end RDBMS that handles database commands and instructions. It handles large databases efficiently by employing different programs to support administration [18]. During this study, a MySQL database was developed to help the M&E team and other project members store, retrieve, and manage data. Furthermore, different access privileges and password encryption were used to enhance security through host-based verification.

3) JavaScript: JavaScript is a scripting language that allows client-side data validation before the data are submitted to a database. Client-side validation is important as it saves time, reduces the workload of the server, and allows the server to concentrate on low-level verification and data processing [10]. In this study, apart from validation, JavaScript libraries such as jQuery and Chart.js were employed to improve data presentation in plotting graphs and handling tabular data and the date text field.

4) Hypertext Markup Language (HTML): In the development of this system, HTML [20] was used to display the web pages and other multimedia information shown in a web browser.

5) Apache Web Server: Apache is highly customizable to meet the needs of different environments through modules and extensions. It is cross-platform software that works on both Windows and Unix servers and is reliable, secure, and fast [21].

6) Integrated Development Environment (IDE): The IDE used for this study was NetBeans.
NetBeans is an open-source integrated development environment for application development. It provides wizards, editors, and templates that help create applications in programming languages such as PHP and Java, and it can be installed on operating systems such as Windows, macOS, Solaris, and Linux [22].

C. User Acceptance Testing

User acceptance testing was conducted to validate whether the proposed system met the project's requirements at the MoHCDGEC. Questionnaires were given to twelve respondents, including four M&E staff, two ICT staff, four project members, one project manager, and one accountant, to gather feedback on the validity of the system.

V. Results and Discussion

A. System Requirements

For implementing the proposed system, functional and non-functional requirements were collected; they are shown in Tables I and II respectively.

B. The Architecture of the Proposed System

Based on the study findings, the proposed system was successfully developed. It contains three modules: a project registration module, a project tracking module, and a project status module. This study intended to develop a tool to track and report the status of projects and allow prompt actions to mitigate the encountered challenges and risks. Figure 1 presents the architecture of the proposed system.

Table I. Functional Requirements (system users/actors and their descriptions)

System administrator: Will be able to log in to the system and register all users/members. Is responsible for defining and assigning different privileges to the users of the system, and for maintaining and updating the system. Every user must have a username and password.
Project manager: Will be able to register a new project in the system, including the code number and title/type of the project, the location and timeframe/period of the project, the source of the project funds and all partners/supporters/donor funds, and the project activities.
M&E officers: Add/update indicators for project activities and targets of the project on a quarterly or yearly basis.
The M&E team will review and check the data entered by program members to track the progress of the project, measure performance, and generate quarterly or yearly reports depending on the nature of the project.
Accountant officers: Add/update financial documents for project activities and verify that the accounting figures were correctly captured.
Project members: Program members or project teams will have the privilege to perform their specific tasks of entering data/information on a weekly, monthly, or quarterly basis.
All registered users: Users will be able to view deadline warnings, the status and reports of the projects, and alert information before/after project completion.
Donors/partners: Will be able to provide feedback on the generated reports. Furthermore, they will view various reports from the projects they support. They can see progress reports and request more information from the project manager if needed.

Table II. Non-Functional Requirements (quality factors and their descriptions)

Performance: The system will be required to support many terminals simultaneously without failure and handle multiple users without contradiction or interruption, by using a fast server to handle traffic and providing cross-browser compatibility.
Usability: Users must be satisfied with the usability of the website without any specialized training and be able to complete different tasks without failure.
Reliability: The system should be capable of maintaining its performance.
Security: The system will protect services and information from external attacks using authentication, authorization, and encryption.
Interoperability: The system will be interoperable with existing systems at the ministry. Other systems must be able to fetch data from it, and it should be able to read structured documents such as XML.
Maintainability: Maintenance or any modification to the system will not cause the website to shut down more than once in 24 hours.
• recovery: the system will be able to recover after damage. • flexibility: the system will have the ability to add new notifications/statuses of projects before and after the deadline. engineering, technology & applied science research vol. 10, no. 4, 2020, 6109-6115 6112 www.etasr.com mleke & dida: a web-based monitoring and evaluation system for government projects in tanzania … fig. 1. architecture of the proposed system. c. data flow diagram the data flow diagram is used to show how data flows between actors [23]. in this study, the level 0 (context) diagram shows the flow of information between the developed system and external entities (figure 2). the data flow diagram (level 1) presents the flow of information for all processes involved in each stage and the data stored when each process is completed. figure 3 presents the data flow diagram for the developed system. fig. 2. context diagram. d. the developed pmes the developed web-based tool, named project m&e system (pmes) for government projects, is organized into six sections, namely account, settings, project details, project implementation, reports, and system log. these sections group several functionalities to simplify navigation. the developed web application allows only registered users to access the system. the home or dashboard page shows the status of the projects, including initiated, implemented, completed and delayed projects, together with alert/warning information. the dashboard page (figure 4) helps all system users to understand the progress of each project, minimize potential problems, and solve project challenges on time. this will help the ministry of health to complete projects within the allocated time. all registered system users can view the status of the projects, but not the other system menus. registered users need the administrator's permission to access other menus based on their user roles.
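the role-based menu access described above (every registered user sees the project status, while other menus require admin-granted roles) can be sketched as follows. this is a hypothetical illustration only — the role and menu names are assumptions drawn loosely from the paper, and the actual pmes is a php/mysql web application, not this python snippet:

```python
# hypothetical sketch of pmes-style role-based menu access (names are illustrative)
ROLE_MENUS = {
    "system_administrator": {"account", "settings", "project_details",
                             "project_implementation", "reports", "system_log"},
    "project_manager": {"project_details", "reports"},
    "m&e_officer": {"project_details", "project_implementation", "reports"},
    "accountant": {"project_implementation", "reports"},
    "project_member": {"project_implementation"},
}

def accessible_menus(role):
    """every registered user sees the dashboard; other menus depend on the role."""
    return {"dashboard"} | ROLE_MENUS.get(role, set())

def can_access(role, menu):
    """true if the given role may open the given menu."""
    return menu in accessible_menus(role)
```

with this shape, granting a user a new role is a single dictionary update by the administrator, which mirrors the paper's description of admin-assigned privileges.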
in the pmes, only system administrators can browse and access all system menus. the project details section has various functionalities, including project registration, project sponsors, project members, project activities, and uploaded project reports. the project manager and the m&e team have access to the project details section. to register a new project, the project manager enters the new project details, including the title/type of the project, code number, location, start date, end date, and project activities. moreover, the project manager registers the project sponsor and project members, and fills in the amount of money sponsored for the project. this is done by selecting the members, donors and positions, since they were already registered by the system administrator. lastly, in the project details menu, the project manager and m&e officer can upload various project reports. the reports may include information about project activities at any time they are needed: quarterly, annually, or based on the agreed project format. this allows users to access the reports of various projects and receive feedback from decision makers/partners/sponsors. figure 5 shows the page for entering the project details. the project implementation section enriches the particular project entered in the project section. it houses pmes functionalities including entering field data, setting indicators, entering indicator achievements, setting a budget and, finally, entering activity expenditure. in the system, the m&e officer is responsible for adding or updating the indicators of project activities on a quarterly basis or depending on the nature of the project. as a part of monitoring, the m&e officer is able to set indicators for each project activity, monitor its progress and enter the score of a particular indicator. figure 6 presents the form for uploading an activity execution attachment. fig. 3. data flow diagram.
fig. 4. dashboard. fig. 5. project details page. fig. 6. form for uploading an activity execution attachment. fig. 7. the indicator performance for a project activity. the project reports section contains the reports needed by the various stakeholders, including reports on project details, field data, indicator performance, activity expiry, financial information, and project members. it is important for m&e officials to monitor the progress of all projects. the indicator performance section offers the m&e team the ability to evaluate the success of activities within the projects. it uses both tabular data and bar graphs to demonstrate the performance of an indicator. while the tabular data present the target and achievement of certain indicators, the graph shows the indicator performance as a percentage. figure 7 presents the indicator performance for a project activity. e. user acceptance validation to gather feedback from the system users, the users were trained in the pmes and were given three days to familiarize themselves with it. the users were then registered into the pmes to proceed with using and interacting with the system in their different roles. in the project details menu, the project manager was allowed to log in, reset the account, and register project details. in the project implementation menu, the m&e officer was allowed to log in and add the different project performance indicators. the accountant added the budget and sub-budget of the project activities, and the project member uploaded the attachment for an executed activity along with any needed details or explanation.
the other functionalities in the pmes included viewing the list of projects with all associated details and statuses. furthermore, a questionnaire was distributed to the system evaluators, and their comments, views, perceptions and recommendations about the pmes were collected. the questionnaire results were summarized as mean scores based on a four-point likert scale (4 = strongly agree, 3 = agree, 2 = disagree and 1 = strongly disagree), as shown in table iii. the mean score for each validated feature was above 3.5, which indicates that the majority of the respondents accepted the developed system and that they will be able to continue using it in their projects in order to improve project implementation performance. lastly, the users reported that they would recommend the tool, or extensions of it, to fit other ministry/organization m&e projects. table iii. system's user acceptance validation results (validation feature: mean score): • the pmes satisfies the m&e requirements of health projects: 3.75. • the pmes is easy to access: 3.75. • the interface of the pmes is interactive: 3.83. • the system contents are easy to learn, understand and operate: 3.75. • the pmes will reduce the workload and paperwork in health projects: 3.91. • the pmes will improve health project data handling in a specific time: 3.83. • the pmes will improve the m&e process of the different health projects on time: 3.75. • the pmes will improve report generation: 3.67. • the pmes will be useful and help in accessing health projects at the ministry: 3.91. • i think i will continue using this pmes: 4. • i don't think there is a need for training support to operate this system: 3.91. f. discussion the mohcdgec currently relies on manual or paper-based systems for the m&e of project activities, revealing a number of challenges in their operation such as poor information sharing and underperformance.
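as a quick illustration of how the mean scores in table iii are obtained, the snippet below averages hypothetical 4-point likert responses from the twelve evaluators and applies the 3.5 acceptance threshold mentioned above. the individual response values are invented for illustration — the paper reports only the resulting means:

```python
# hypothetical likert responses from the 12 evaluators for one feature
# (4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree)
responses = [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4]

mean_score = sum(responses) / len(responses)  # arithmetic mean of the 12 scores
accepted = mean_score > 3.5                   # acceptance threshold used in the study

print(round(mean_score, 2), accepted)  # 3.75 True
```

repeating this per questionnaire item yields the per-feature means listed in table iii.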
some previous studies indicate that there is a need for web-based tools to monitor and evaluate project activities, simplify data collection, accommodate information sharing among stakeholders and improve the projects' performance. after the system's development, the results of user acceptance testing indicate that the developed pmes will help reduce the presented challenges. there is a need for the pmes to be adopted at the mohcdgec to minimize manual work and improve cooperation among ministry departments, stakeholders, donors, and partners. moreover, the developed system will improve the quality of data, simplify the process of data collection and improve the progress of the projects of the ministry of health in tanzania. vi. conclusion in this study, a tool was developed that will help the ministry of health to monitor and evaluate various project activities, thus remedying challenges such as delay of data submission during project implementation and data loss, which usually occur in paper-based data collection on a monthly or quarterly basis. the developed system will be useful to different stakeholders, including project managers, project members, decision-makers, policymakers and m&e officers, in tracking the projects' progress in the health domain, as a tool for better and more informed decisions. acknowledgment the authors wish to thank the ministry of health and social welfare. this study was funded by the african development bank. references [1] handbook on monitoring and evaluating for results. new york, ny, usa: evaluation office, united nations development programme, 2002. [2] pmbok guide: a guide to the project management body of knowledge, 6th ed. pa, usa: project management institute, 2017.
[3] integrated monitoring, evaluation, & planning handbook. minnesota, usa: the mcknight foundation, 2017. [4] "tanzania national ehealth strategy 2012 – 2018." mohsw, united republic of tanzania, 2013. [5] "proposal to strengthen the health information system (his)." mohsw, united republic of tanzania, 2010. [6] national malaria strategic plan, 2014–2020. dar es salaam, tanzania: mohsw, 2014. [7] mid-term review of the health sector strategic plan iv 2015–2020: main report. dar es salaam, tanzania: ministry of health, community development, gender, elderly and children, 2019. [8] s. sohu, a. a. jhatial, k. ullah, m. t. lakhiar, and j. shahzaib, "determining the critical success factors for highway construction projects in pakistan," engineering, technology & applied science research, vol. 8, no. 2, pp. 2685–2688, apr. 2018. [9] h. a. sulieman and f. a. alfaraidy, "influences of project management capabilities on the organizational performance of the saudi construction industry," engineering, technology & applied science research, vol. 9, no. 3, pp. 4144–4147, jun. 2019. [10] "designing effective monitoring and evaluation of education systems for 2030: a global synthesis of policies and practices." unesco education sector, 2016. [11] a. n. ahmed and d. a. magdi, "the impact of electronic monitoring and evaluation system on organization performance applied on egyptian international trade point sector ministry of trade & industry in egypt," international journal of latest engineering and management research (ijlemr), vol. 2, no. 5, pp. 1–11, 2017. [12] m. m. moyne, m. herman, k. z. gajos, c. j. walsh, and d. p. holland, "the development and evaluation of deft, a web-based tool for engineering design education," ieee transactions on learning technologies, vol. 11, no. 4, pp. 545–550, oct. 2018, doi: 10.1109/tlt.2018.2810197. [13] s. o. cheung, h. c. h. suen, and k. k. w.
cheung, “ppms: a webbased construction project performance monitoring system,” automation in construction, vol. 13, no. 3, pp. 361–376, may 2004, doi: 10.1016/j.autcon.2003.12.001. [14] j. a. landicho, “a web-based geographical project monitoring and information system for the road and highways,” journal of electrical systems and information technology, vol. 5, no. 2, pp. 252–261, sep. 2018, doi: 10.1016/j.jesit.2016.10.011. [15] c. a. sanga, k. g. fue, n. nicodemus, and f. t. m. kilima, “webbased system for monitoring and evaluation of agricultural projects,” international journal of interdisciplinary studies on information technology and busines, vol. 1, no. 1, pp. 17–43, 2013. [16] n. s. chen and s. y. huang, “applying evolutionary protyping model in developing stream-based lecturing systems,” digital education review, no. 4, pp. 62–75, 2002. [17] l. welling and l. thomson, php and mysql web development, fourth edition. upper saddle river, nj: addison-wesley professional, 2008. [18] l. welling and l. thomson, php and mysql web development, fifth edition. hoboken, nj: addison-wesley professional, 2016. [19] s. suehring, javascript step by step, third edition. sebastopol, california: microsoft press, 2013. [20] f. wempen, microsoft® html5 step by step. sebastopol, california:microsoft press, 2011. [21] r. bowen and c. mcgregor, introduction to the apache web server. 2005. [22] g. wielenga, beginning netbeans ide: for java developers, 1st ed. edition. new york: apress, 2015. [23] l. svobodová and m. černá, “project management model with designed data flow diagram: the case of ict hybrid learning of elderly people in the czech republic,” in computational collective intelligence, cham, 2018, pp. 399–408, doi: 10.1007/978-3-319-984469_37. microsoft word etasr_v11_n4_pp7477-7482 engineering, technology & applied science research vol. 11, no. 
4, 2021, 7477-7482 7477 www.etasr.com daithankar & ruikar: analysis of the wavelet domain filtering approach for super-resolution videos analysis of the wavelet domain filtering approach for video super-resolution mrunmayee v. daithankar electronics engineering department walchand college of engineering sangli, maharashtra, india mrunmayeed30@gmail.com sachin d. ruikar electronics engineering department walchand college of engineering sangli, maharashtra, india ruikarsachin@gmail.com abstract-wavelet domain-centered algorithms in the super-resolution research area give better visual quality and have been explored by different researchers. the visual quality is achieved with increased complexity and cost, as most of the systems embed different pre- and post-processing techniques. the frequency and spatial domain-based methods are the usual approaches for super-resolution, with some benefits and limitations. considering the benefits of wavelet domain processing, this paper deals with a new algorithm that depends on wavelet residues. the methodology opts for wavelet domain filtering and residue extraction to get super-resolved frames with better visuals without embedding other techniques. the avoidance of noisy high-frequency components from low-quality videos and the consideration of edge information in the frames are the main targets of the super-resolution process. this inverse process is carried out with a proper combination of the information present in the low-frequency bands and the residual information in the high-frequency components. the efficient known algorithms always have to sacrifice simplicity to achieve accuracy, but in the proposed algorithm efficiency is achieved with simplicity. the robustness of the algorithm is tested by analyzing different wavelet functions and different noise levels. the proposed algorithm performs well in comparison to other techniques from the same domain.
keywords-observation model; super-resolution; video quality parameters; wavelet residuals; wavelet domain processing i. introduction super resolution (sr) is a leading field of digital signal processing (dsp) with wide applicability in electronic imaging areas such as biomedical, forensics, surveillance, satellite imaging, etc. high-quality images are needed not only for better picturing but also for proper data extraction, and they are not always readily available. an expensive high-resolution (hr) imaging system is restricted by the sensor's capacity, the optics fabrication machinery, memory, and the sensor's transmission bandwidth. physical upgrading has almost reached its limits, so the solution is to develop effective ways to overcome the hardware limitations of the imaging systems. this leads to the sr concept and its related developments. many authors [1-5] explored the sr concept, its recent applications, limitations, and scope for improvements. the concept aims to produce hr frames from successive low-resolution (lr) images or frames by applying dsp techniques in a proper sequence. some of the work related to sr is summarized below. the main contributions of the current paper are: • an introduction of the sr concept with the required basics. • the selection of a proper wavelet function for future work through analysis of the simulation results. • analysis of the proposed work with different noise levels and wavelet functions to check the robustness of the algorithm. • the establishment that combining low-frequency components with wavelet residuals for high-frequency details like edges leads to increased efficiency in comparison with the state-of-the-art techniques. ii. literature review a. super-resolution with the observation model the inverse process of recovering high-quality images or frames from their low-quality versions using digital image processing techniques is known as sr.
the original hr version of the scene gets degraded by different factors like warping (w), blurring (b), aliasing or down-sampling (d), and additional noise (n) introduced by the environment or by imaging devices. this degradation process results in the lr version of the images. the sr issue is an ill-posed problem, i.e. it has no particular solution and no particular mathematical expression. researchers have tried to convert this ill-posed problem into a well-posed one in order to get a solution [1, 2]. even these attempts do not represent the exact issue, but they helped obtain approximate versions and their solutions. for representing the sr concept mathematically, let us assume the following: x is the original hr image and yk the kth lr image/frame, where k is the number of observations. d is the decimation matrix, b is the blurring matrix, wk the warping matrix, and nk the additional noise. so, the mathematical expression of the observation model is represented in (1): yk = dbwkx + nk (1). the generalized observation model is represented in figure 1. the observation model shows how the lr frame arises from the high-quality frame due to the degradation factors. corresponding author: mrunmayee v. daithankar. the sr process is to get back the high-quality frames by restoring the degraded data using dsp techniques. this reverse process is carried out with different algorithms developed for different applications. the techniques for sr are mainly classified by their domain: spatial, frequency, and wavelet [3-6]. each domain has its advantages and disadvantages depending on the application field. the next section summarizes some recent literature regarding sr. fig. 1. general sr observation model. b.
super-resolution process categorization the categorization of the sr process is based on the domain in which the technique is developed. the techniques in the frequency domain treat the frequency content as an image trait. the frequency-domain approach depends on the shifting, aliasing, and band-limitation properties of the signal [3–6]. the very first approach to sr was in the frequency domain [7]. equations relating the hr representation to the observed degraded pictures were formulated by approximating the relative shifts among a series of down-sampled, aliased, noise-free lr images. this approach was extended in [8] by proposing a weighted least-squares solution under the assumption that the distortion and noise characteristics are identical for all lr pictures. the authors in [9] presented a dct-based image quality improvement algorithm. the dct has the benefit of attaining a notable improvement in picture quality even for frequently posed cases. a foremost benefit of the frequency domain-based sr approaches is that they are usually theoretically straightforward and computationally inexpensive. the most elementary way to boost the resolution of a picture is spatial domain interpolation. these methods estimate new pixels either by considering neighboring pixels or by averaging pixel values, and are known as nearest neighbor, bilinear, and bicubic interpolation. more composite interpolation methods are the cubic b-spline interpolation method [10], new edge-directed interpolation (nedi) [11], and edge-guided interpolation (egi) [12]. among the spatial domain-based sr approaches, non-uniform interpolation methods are some of the most intuitive, with moderately low computational complexity. the frequency and spatial domain-based methods thus have their own advantages and disadvantages.
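as a concrete example of the interpolation methods mentioned above, the following self-contained numpy sketch implements plain bilinear interpolation: each output pixel is a distance-weighted average of its four nearest low-resolution neighbors. it is an illustrative implementation, not code from any of the cited works:

```python
import numpy as np

def bilinear_upscale(img, scale=2):
    """bilinear interpolation: each new pixel is a distance-weighted
    average of its four nearest neighbors in the lr grid."""
    img = np.asarray(img, float)
    h, w = img.shape
    out_h, out_w = h * scale, w * scale
    # map each output coordinate back to a (fractional) input coordinate
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # gather the four neighbors and blend them with the fractional weights
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

print(bilinear_upscale([[0, 1], [2, 3]], 2).shape)  # (4, 4)
```

nearest-neighbor interpolation would instead copy the closest lr pixel (cheaper, blockier), and bicubic would blend a 4×4 neighborhood with cubic weights (smoother, costlier) — the trade-off the section describes.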
wavelet transform (wt) based methods give frequency components as well as spatial statistics, which produces results more encouraging than the previous transforms. the same theory was investigated in [13]. the combination of the discrete wt and the gabor wavelet also gives promising results in the super-resolution reconstruction of satellite images, as explored in [14]. the wavelet domain-based sr restoration methodology can examine and manipulate global and local features at coarse and fine scales respectively. among the challenges in sr, one needs to preserve or recover the real edges of objects while suppressing noise, which is commonly hard to achieve simultaneously using frequency-based processes due to the similar response of edges and noise in the frequency range. this leads to embedding wt with edge-preserving algorithms like egi [15]. the combination of the keren algorithm for image registration, dwt, and nedi to improve the edges in the frames was explored in [16]. the appealing properties of wt, for instance compactness, multi-resolution, and locality, are valuable for analyzing real-world signals. wt offers an alternative solution to examine true edges and noise separately. a common assumption of wt-based techniques is that the lr frame is the low-frequency sub-band produced by the wt of the picture [17]. interpolation of the difference between the wavelet-decomposed sub-bands and the actual low-resolution image is the technique given in [18] for the reconstruction of images. the authors in [19] recommended an sr method for degraded frames using the dwt and the stationary wt (swt). yet, these available methods perform only partially across ranges of noise levels, motion planes, wavelets, and the number of frames used. still, researchers are showing interest in wavelet domain processing for better algorithm performance. iii.
the proposed wavelet-domain video sr process enhancing video quality in the frequency and spatial domains is the traditional way, but nowadays wavelet domain processing has become a trend because the benefits of both domains are embedded in one. the proposed algorithm is used basically for the analysis of the effect of the use of wavelets in the super-resolution process. the proposed technique uses the benefits of wavelet domain processing, such as low-frequency and high-frequency separation, with the help of well-known wavelet families. the flow of work for the proposed methodology is divided into two parts: (a) the degradation process and (b) the upgradation process. these processes are presented in figures 2 and 3. a. degradation process naturally, the quality of images or video frames is degraded during the acquisition process due to many factors like noise in the acquisition process or environment, hardware quality, etc. many times, due to limited storage and transmission capacity, the images/videos are compressed, which leads to degradation. dealing with degraded-quality data means inefficient information processing, less effective analysis, and lower quality visuals. the degradation process of the input video frames is shown in figure 2. fig. 2. degradation of input video frames. in the proposed method, the original high-quality video frames are degraded by some factors like down-sampling and the addition of noise, and are referred to as lr frames. the process is shown in figure 3. the degradation process of the proposed method can be expressed mathematically by modifying (1) to: yk = dx + nk (2). the steps followed in the degradation process are given in the form of a descriptive pseudocode in figure 3. fig. 3. frame degradation pseudocode.
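as an illustration of the degradation model in (2) and of the experimental setup described later (512×512 frames down-sampled to 128×128 and corrupted with gaussian noise at a given snr), here is a minimal python/numpy sketch. the decimation-by-slicing and the snr-based noise scaling are assumptions of this sketch, since the paper's own pseudocode is only shown as a figure and its implementation is in matlab:

```python
import numpy as np

def degrade(frame, scale=4, snr_db=30, rng=None):
    """lr frame per (2): yk = d*x + nk (down-sample, then add gaussian noise)."""
    rng = np.random.default_rng(rng)
    lr = frame[::scale, ::scale]  # decimation d (e.g. 512x512 -> 128x128)
    # scale the gaussian noise so the clean/noise power ratio matches snr_db
    signal_power = np.mean(lr.astype(float) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), lr.shape)
    return lr + noise

hr = np.random.default_rng(0).uniform(0, 255, (512, 512))  # stand-in hr frame
lr = degrade(hr, scale=4, snr_db=30, rng=1)
print(lr.shape)  # (128, 128)
```

these degraded frames would then play the role of the lr inputs that the upgradation process tries to restore.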
these lr frames are used as input in the upgradation process. b. upgradation process the upgradation process uses the wavelet transform for frequency-domain filtering of an input image, without any other noise removal technique. if only wavelet domain filtering is applied, it affects the output, and this effect is analyzed for frame or image quality enhancement. the sr area has attracted the attention of researchers, as it enables the use of conventional image acquisition systems as they are, with some post-processing. it reduces the cost of exchanging traditional systems for new technology. the simple and general steps involved in the super-resolution process are shown in figure 4. fig. 4. sr of input lr video frames. the steps involved in the upgradation process are shown in the pseudocode of figure 5. fig. 5. frame upgradation pseudocode. iv. video quality measurement parameters a. peak signal to noise ratio (psnr) the psnr is a measure of the ratio between the maximum signal power and the power of the noise that alters the fidelity of its representation. since most signals have a very wide dynamic range, the psnr is typically expressed on a logarithmic scale. the mean square error (mse) is given by: mse = (1/(mn)) Σi Σj [x(i, j) − y(i, j)]² (3), and psnr = 20 log10(maxx / √mse) (4), where x represents the original image or frame, y denotes the matrix data of the corrupted image or frame, maxx is the maximum possible pixel value, m gives the number of pixel rows of the images, and n signifies the number of columns [1]. b. structural similarity (ssim) parameters like luminance, contrast, and structure are important to check the similarity between the original and super-resolved frames. collecting all these terms together we get the structural similarity (ssim) index [16, 20].
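the mse/psnr definitions in (3) and (4) can be checked numerically with the generic numpy sketch below (an illustration, not the paper's matlab implementation; maxx is taken as 255 for 8-bit images):

```python
import numpy as np

def mse(x, y):
    """mean squared error between reference x and distorted y, per (3)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - y) ** 2)

def psnr(x, y, max_val=255.0):
    """peak signal-to-noise ratio in db, per (4)."""
    return 20 * np.log10(max_val / np.sqrt(mse(x, y)))

x = np.zeros((8, 8))
y = np.full((8, 8), 10.0)  # constant error of 10 -> mse = 100
print(mse(x, y), psnr(x, y))  # 100.0 and 20*log10(255/10) ≈ 28.13 db
```

note that a constant error keeps psnr finite; identical images give mse = 0 and an infinite psnr, which practical implementations special-case.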
if the similarity is measured between images or frames x and y of the same size, then the ssim equation is: ssim(x, y) = [(2μxμy + c1)(2σxy + c2)] / [(μx² + μy² + c1)(σx² + σy² + c2)] (5), where μx and μy are the averages of x and y, σx² and σy² are the variances of x and y, and σxy is the covariance of x and y. c1 and c2 are constants [1]. v. experimental results and discussion the proposed wavelet domain-based sr method was verified on well-known non-copyrighted videos. the reason for considering these videos is that they represent an easy way to assess the functioning of our technique against other state-of-the-art algorithms which have already used these videos as input. the results disclosed here are for the "foreman" video. the video collections were extracted from an open database at xiph.org [23], which contains both non-copyrighted and copyrighted videos (to avoid copyright issues, the authors of the current paper considered only non-copyrighted videos). the frames were separated for further processing. the original high-resolution video was resized to 512×512 pixels. based on the observation model, the input lr frames had dimensions of 128×128 pixels after down-sampling and were further degraded by the addition of different noise levels. the noise used to degrade the frames was gaussian noise with a specific snr. the proposed algorithm was implemented in matlab (r2018b). the results and analysis are divided into three parts, explained below: • comparison of the proposed algorithm with existing techniques. • analysis of the proposed algorithm with diverse noise levels. • analysis of the proposed algorithm with different wavelet functions. a.
comparison of the proposed algorithm with existing techniques the evaluation of the average psnr and ssim values for the proposed method and other techniques is shown in table i. the values in the table demonstrate the superiority of the proposed algorithm. its most important characteristic is its simplicity, considering its efficiency. the algorithms used for comparison involve many pre-processing and hybrid approaches, which makes them more complex than the proposed algorithm. the reason behind this elevated performance is that dwt-based sr algorithms are more effective at regaining the high-frequency details of the degraded frames. the real edges are appropriately maintained and the noise is eliminated by the filtering. for the "foreman" frames, even though the psnr and ssim results attained by the proposed method are greater than the other methods', the performance gain can be boosted by adding other edge-preservation and direct-mapping techniques. figure 6 shows the behavior of the sr methods with respect to the average psnr and ssim values. the most important quality metric, i.e. ssim, shows a substantial increase with the proposed algorithm. table i. comparative analysis of the proposed algorithm with existing methods (avg. psnr / avg. ssim): • nearest: 20.89 / 0.61. • bicubic: 22.02 / 0.7. • nedi [11]: 21.19 / 0.7. • dwtdiff [18]: 18.97 / 0.47. • dwtswt [19]: 19.83 / 0.50. • srdwt [16]: 23.88 / 0.84. • proposed: 24.9 / 0.897. b. analysis of the proposed algorithm with diverse noise levels the robustness of the proposed algorithm was validated with different noise levels, varying from 50db to 25db with a step of 5db. the sampled noisy frames of the "foreman" video are shown in figure 8. the frames were already down-sampled to 1/4th of the original size. the db1 wavelet function was used. fig. 6. comparative analysis of the proposed algorithm with other sr methods. table ii.
results for different noise levels (noise in db: avg. psnr / avg. ssim): • 50: 25.01 / 0.9067. • 45: 25.01 / 0.9064. • 40: 25 / 0.9057. • 35: 24.98 / 0.9036. • 30: 24.9 / 0.8970. • 25: 24.68 / 0.8774. the average psnr and average ssim values after upgrading the quality are shown in table ii. figure 7 shows the graph of the quality metrics at different noise levels. it can be seen that the proposed algorithm gives good results across a variety of noise levels. the psnr values are not affected much, but the ssim values decrease significantly, from which it can be concluded that the addition of noise significantly affects structural similarity. fig. 7. effect of different noise levels on the super-resolution process. c. analysis of the proposed algorithm with different wavelet functions the literature provides different wavelet functions which have been tried with varying results. to check and analyze the variation in the efficiency of the proposed algorithm according to the wavelet function, all other parameters were kept stable and the noise level was set at 30db. in the literature, the daubechies family has been explored in the super-resolution process, but not all of its functions have been used except db2 and db7/9, which have wide applicability. this paper utilized the db1, db2, db7, and db9 wavelet functions
many researchers prefer db2 for decomposition and reconstruction purposes, but in this case db1 provides more efficacy in reconstruction while being simpler than db2.
fig. 8. effect of different wavelet functions on the sr process.
the surveyed recent papers show that there is still scope for efficiency improvement in object detection areas like face [21] and logo [22] recognition. the recognition decision is based on the processes applied to the input data, but low-quality input pictures increase the complications in the final decision making. to avoid such circumstances, quality details in the input data are needed. this renews the interest in the sr area.
vi. conclusion
the current paper explored a simple algorithm for the analysis of the effect of wavelet domain filtering on the visual quality enhancement of low-quality or degraded videos. this technique is based on wavelet residual mapping and interpolation. the robustness of the wavelet filtering approach was investigated for different noise levels and different wavelet functions, and it can be said that the proposed algorithm surpasses the other popular methods of the same domain. simplicity and efficiency are the two main advantages of the proposed algorithm, the purpose behind its development being the reduction of complexity. as the mapping of low-frequency information with wavelet residues gives promising results, embedding the wavelet domain in neural networks to learn this sr mapping will hopefully succeed.
references
[1] m. v. daithankar and s. d. ruikar, "video super resolution: a review," in icdsmla 2019, singapore, 2020, pp. 488–495, https://doi.org/10.1007/978-981-15-1420-3_51.
[2] m. v. daithankar and s. d. ruikar, "video super resolution by neural network: a theoretical aspect," journal of computational and theoretical nanoscience, vol. 17, no. 9–10, pp. 4202–4206, jul. 2020, https://doi.org/10.1166/jctn.2020.9045.
[3] g. pandey and u.
ghanekar, "a compendious study of super-resolution techniques by single image," optik, vol. 166, pp. 147–160, aug. 2018, https://doi.org/10.1016/j.ijleo.2018.03.103.
[4] l. yue, h. shen, j. li, q. yuan, h. zhang, and l. zhang, "image super-resolution: the techniques, applications, and future," signal processing, vol. 128, pp. 389–408, nov. 2016, https://doi.org/10.1016/j.sigpro.2016.05.002.
[5] d. thapa, k. raahemifar, w. r. bobier, and v. lakshminarayanan, "a performance comparison among different super-resolution techniques," computers & electrical engineering, vol. 54, pp. 313–329, aug. 2016, https://doi.org/10.1016/j.compeleceng.2015.09.011.
[6] j. tian and k.-k. ma, "a survey on super-resolution imaging," signal, image and video processing, vol. 5, no. 3, pp. 329–342, sep. 2011, https://doi.org/10.1007/s11760-010-0204-6.
[7] r. y. tsai, "multiframe image restoration and registration," advances in computer vision and image processing, vol. 11, no. 2, pp. 317–339, 1984.
[8] s. p. kim, n. k. bose, and h. m. valenzuela, "recursive reconstruction of high resolution image from noisy undersampled multiframes," ieee transactions on acoustics, speech, and signal processing, vol. 38, no. 6, pp. 1013–1027, jun. 1990, https://doi.org/10.1109/29.56062.
[9] s. rhee and m. g. kang, "dct-based regularized algorithm for high-resolution image reconstruction," in international conference on image processing, kobe, japan, oct. 1999, vol. 3, pp. 184–187, https://doi.org/10.1109/icip.1999.817096.
[10] x. zhang and y. liu, "a computationally efficient super-resolution reconstruction algorithm based on the hybird interpolation," journal of computers, vol. 5, no. 6, pp. 885–892, 2010.
[11] x. li and m. t. orchard, "new edge-directed interpolation," ieee transactions on image processing, vol. 10, no. 10, pp. 1521–1527, oct. 2001, https://doi.org/10.1109/83.951537.
[12] l. zhang and x.
wu, "an edge-guided image interpolation algorithm via directional filtering and data fusion," ieee transactions on image processing, vol. 15, no. 8, pp. 2226–2238, aug. 2006, https://doi.org/10.1109/tip.2006.877407.
[13] h. ji and c. fermuller, "robust wavelet-based super-resolution reconstruction: theory and algorithm," ieee transactions on pattern analysis and machine intelligence, vol. 31, no. 4, pp. 649–660, apr. 2009, https://doi.org/10.1109/tpami.2008.103.
[14] a. muthukrishnan, j. charles rajesh kumar, d. vinod kumar, and m. kanagaraj, "internet of image things-discrete wavelet transform and gabor wavelet transform based image enhancement resolution technique for iot satellite applications," cognitive systems research, vol. 57, pp. 46–53, oct. 2019, https://doi.org/10.1016/j.cogsys.2018.10.010.
[15] s. izadpanahi and h. demirel, "motion based video super resolution using edge directed interpolation and complex wavelet transform," signal processing, vol. 93, no. 7, pp. 2076–2086, jul. 2013, https://doi.org/10.1016/j.sigpro.2013.01.006.
[16] w. witwit, y. zhao, k. jenkins, and s. addepalli, "global motion based video super-resolution reconstruction using discrete wavelet transform," multimedia tools and applications, vol. 77, no. 20, pp. 27641–27660, oct. 2018, https://doi.org/10.1007/s11042-018-5941-5.
[17] a. temizel, "image resolution enhancement using wavelet domain hidden markov tree and coefficient sign estimation," in ieee international conference on image processing, san antonio, tx, usa, oct. 2007, vol. 5, pp. v-381–v-384, https://doi.org/10.1109/icip.2007.4379845.
[18] h. demirel and g. anbarjafari, "discrete wavelet transform-based satellite image resolution enhancement," ieee transactions on geoscience and remote sensing, vol. 49, no. 6, pp. 1997–2004, jun. 2011, https://doi.org/10.1109/tgrs.2010.2100401.
[19] h. demirel and g.
anbarjafari, "image resolution enhancement by using discrete and stationary wavelet decomposition," ieee transactions on image processing, vol. 20, no. 5, pp. 1458–1460, may 2011, https://doi.org/10.1109/tip.2010.2087767.
[20] z. wang, a. c. bovik, h. r. sheikh, and e. p. simoncelli, "image quality assessment: from error visibility to structural similarity," ieee transactions on image processing, vol. 13, no. 4, pp. 600–612, apr. 2004, https://doi.org/10.1109/tip.2003.819861.
[21] y. said, m. barr, and h. e. ahmed, "design of a face recognition system based on convolutional neural network (cnn)," engineering, technology & applied science research, vol. 10, no. 3, pp. 5608–5612, jun. 2020, https://doi.org/10.48084/etasr.3490.
[22] a. alsheikhy, y. said, and m. barr, "logo recognition with the use of deep convolutional neural networks," engineering, technology & applied science research, vol. 10, no. 5, pp. 6191–6194, oct. 2020, https://doi.org/10.48084/etasr.3734.
[23] "xiph.org : derf's test media collection." https://media.xiph.org/video/derf/ (accessed aug. 02, 2021).
authors' profiles
mrunmayee v. daithankar is working as a senior research fellow, having attained the national doctoral fellowship promoted by the all india council of technical education (aicte), delhi. she is carrying out her research in electronics engineering at walchand college of engineering, sangli, maharashtra (india), under the guidance of dr. sachin d. ruikar. her research interests lie in the areas of image/video processing and neural networks. before joining the fellowship, she worked as an assistant professor for three and a half years at sinhgad institute's college of engineering, pandharpur.
she completed her graduation in electronics and telecommunication engineering from sveri's c.o.e., pandharpur, in 2011. in 2014, she received her postgraduate degree in electronics engineering from sinhgad institute's c.o.e., pandharpur, affiliated to solapur university, solapur.
sachin d. ruikar received a graduate degree in electronics and telecommunication from the government engineering college, aurangabad, under the aegis of dr. b. a. m. u. aurangabad, in 1998. he received the postgraduate degree in electronics and telecommunication engineering from government engineering college, pune university, india, in 2002. he completed his ph.d. in electronics under shri guru gobind singh institute of engineering technology, srtmu nanded, in 2013. presently, he is working as an associate professor in electronics engineering at walchand college of engineering, sangli, maharashtra. his research interests include image denoising with wavelet transforms, image fusion, image inpainting and image super-resolution.
engineering, technology & applied science research vol. 9, no. 4, 2019, 4520-4524 www.etasr.com shamsan et al.: micrometer and millimeter wave p-to-p links under dust storm effects in arid climates
micrometer and millimeter wave p-to-p links under dust storm effects in arid climates
zaid a.
shamsan, electrical engineering department, college of engineering, al imam mohammad ibn saud islamic university, riyadh, saudi arabia, zashamsan@imamu.edu.sa
moath alammar, electrical engineering department, college of engineering, al imam mohammad ibn saud islamic university, riyadh, saudi arabia, moath14121@hotmail.com
abdullah alharthy, electrical engineering department, college of engineering, al imam mohammad ibn saud islamic university, riyadh, saudi arabia, abdullah-alharthy@outlook.sa
abdulaziz aldahmash, electrical engineering department, college of engineering, al imam mohammad ibn saud islamic university, riyadh, saudi arabia, aziz.dahmash@gmail.com
khalid a. al-snaie, electrical engineering department, college of engineering, al imam mohammad ibn saud islamic university, riyadh, saudi arabia, kalsnaie@imamu.edu.sa
abdulaziz m. al-hetar, faculty of engineering and information technology, taiz university, taiz, yemen, alhetaraziz@gmail.com
abstract—a dust storm is the main attenuation factor that can disturb the reception of radio signals in arid climate conditions such as those of saudi arabia. this paper presents a study of the effect of dust storms on the received radio frequency power in a homogeneous environment in the city of riyadh. a number of micrometer and millimeter wave links have been considered, along with several sets of measured dust storm data, to investigate the dust storm effects. the results show that a dust storm can critically influence the communication link and that this effect grows as the physical distance between the transmitter and the receiver increases. the negative effect of the dust storm clearly appears at the high-frequency bands allocated for the next communication generation (5g), which imposes finding solutions to mitigate the effects of this phenomenon.
keywords-dust storm; millimeter waves; arid climate; attenuation; receiver sensitivity
i.
introduction
in the fifth generation (5g) wireless networks, which are expected to be introduced around 2020, there will be some changes, including dramatic changes in the design of the different layers of the next generation communication systems. massive multiple input multiple output (mimo) systems, filter bank multi-carrier (fbmc) modulation, relaying technologies, and millimeter-wave (mmwave) communications have been considered as some of the strong candidate techniques for the physical layer design of 5g networks [1]. the frequency spectrum represented by the mmwave bands is most likely to be used in 5g networks, as mmwave can provide a huge amount of spectrum resources. regardless of the high data rate possibly offered by mmwave, quite a few practical difficulties in its use in mobile networks are evident. these difficulties include large path loss, low penetration capability, narrow beam width, fading phenomena due to rain and sand storms, and fading loss due to diffraction [2-5]. rain and snow attenuation are the predominant factors in regions such as america, europe and other continental areas, while sand and dust storms are observed in some arabic countries, such as saudi arabia, in arid parts of australia, and in dry states such as texas and arizona. dust storms occur when two conditions are met: the first is the existence of dry and disjointed soil with no vegetation cover, and the second is high-speed wind. the mechanics of the emergence of a dust storm can be explained as follows: when convection currents are created due to the heating of the earth's surface, the air above the surface becomes warm and rises as convection currents. this causes variations in atmospheric pressure and heat, and leads relatively cold winds to be pushed in to replace the risen air, which in turn makes dust rise and carries soil grains up to a level that is proportional to the wind power and the soil dryness and disintegration.
nowadays, the prediction of dust storms has become easy given the availability of meteorological data [6]. however, a significant concern in this matter is the fading or signal attenuation caused by sand storms. the attenuation factor varies according to the regional meteorological conditions. the authors in [7] treated dust storm attenuation as a uniform distribution of a specific geometric shape. in this paper, we estimate the attenuation caused by a dust storm, especially on p-to-p terrestrial signals, and recommend a solution to avoid power loss due to attenuation over the 5g frequencies. the saudi capital riyadh, located in the center of the saudi desert, experiences continuous sand storms. the statistics for the number of sand storms per year in saudi arabia during the period 2010-2017 show that their number ranges between 83 in 2014 and 212 in 2012 [8]. among the recently observed sand storms, some were so strong that visibility was very low. for example, a huge dust storm engulfed riyadh city and covered most of its parts on the 10th of march 2009 [9]. it stayed for hours after it started, and visibility diminished from kilometers to just a few meters within thirty seconds. in this study, we assume the worst case scenario, in which a uniform distribution of the dust storm is considered in order to be on the safe side of the received signal prediction.
corresponding author: zaid a. shamsan
ii. method and proposed scenario
the wireless point-to-point (p-to-p) link is proposed to be within an urban area in riyadh city.
the transmitter is placed in al-washm at a level of 603.6m above sea level (asl), while the international king khalid airport (ikka), at 623.7m (asl), has been considered as the receiver point, see figure 1.
fig. 1. google map showing the ikka and al-washm sites in riyadh (google llc).
the distance between the two points is 34.750km. the riyadh area is considered an arid area, characterized by dust and sand storms, and thus the signal potentially undergoes the sand storm effect. this scenario is shown in figure 2.
fig. 2. the proposed scenario (dust storm environment).
the p-to-p link has been installed and the system specifications (transmitter and receiver parameters) have been defined with the help of the radio mobile tool [4-5], and the channel propagation is assumed to be free space in order to study the worst case scenario. the power at the receiver is calculated under the influence of a homogeneous dust storm. the received power and the receiver sensitivity are then compared and the results are collected. this method is summarized in the flow chart shown in figure 3.
fig. 3. the flow chart of the proposed method.
the system installation has been carried out by defining the main specifications of the p-to-p system with the help of the radio mobile tool, and the propagation channel loss has been computed using friis' formula. with this formula, we can compute the received power when the following parameters are known: the transmitter power $P_t$, the gains of the transmitter and receiver antennas $G_t$ and $G_r$, respectively, and the channel propagation loss $L$. friis' formula is given by:

$P_r = \frac{P_t G_t G_r}{L}$    (1)

where, in free space, $L$ equals the free space loss $L_{fs}$, which mainly depends on the wavelength $\lambda$ of the travelling signal and the distance $d$ between the transmitter and the receiver. this factor can be defined as:

$L_{fs} = \left(\frac{4\pi d}{\lambda}\right)^{2} = \left(\frac{4\pi d f}{c}\right)^{2}$    (2)

where $c$ is the speed of light and $f$ is the frequency of the p-to-p link signal.
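in decibel form, (1)–(2) reduce to a simple link budget. the sketch below (function names are hypothetical, not the paper's tool) computes the free-space loss and received power:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    """free-space path loss in db: 20*log10(4*pi*d*f/c), i.e. eq. (2) in db form."""
    return 20.0 * math.log10(4.0 * math.pi * dist_m * freq_hz / C)

def received_power_dbm(pt_dbm, gt_db, gr_db, freq_hz, dist_m, extra_loss_db=0.0):
    """friis equation (1) in logarithmic form, with optional extra loss (e.g. dust)."""
    return pt_dbm + gt_db + gr_db - fspl_db(freq_hz, dist_m) - extra_loss_db

# the 3.9 ghz link of this paper over 34.725 km:
print(round(fspl_db(3.9e9, 34725.0), 2))  # close to the 135.0759 db tabulated later
```

the tiny discrepancy against the tabulated value comes from the choice of $c$ (the paper appears to use $c = 3 \times 10^8$ m/s).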
for dust storm effect estimation, when an object (rain drop or dust particle) is illuminated by a wave, some of the incident power is absorbed and another part is scattered. thus, the attenuation due to the dust storm can be explained in terms of the scattering cross-section of a single particle [10, 11]. for a homogeneous storm, the total attenuation (in db) caused by a dust storm over a link of length $L$ is:

$A = A_d \, L$    (3)

this calculation can be carried out with either the rayleigh approximation or the mie solution [12]. since the rayleigh approximation is based on the assumption that $r \ll \lambda$, it is difficult to use for frequencies above 37ghz [13], whereas the mie solution has no such limitation and can predict attenuation in various frequency bands with high reliability, particularly at higher frequencies. therefore, this paper adopts the mie model [14] to estimate the dust attenuation $A_d$ (in db/km) as a function of the signal wavelength $\lambda$, the visibility $V$, the radius $r$ of the dust particles, and the real and imaginary parts $\varepsilon'$ and $\varepsilon''$ of their dielectric constant:

$A_d = \frac{94.3\,\alpha_1\, r}{\lambda V} + \frac{3721.2\,\alpha_2\, r^{3}}{\lambda^{3} V} + \frac{23381\,\alpha_3\, r^{6}}{\lambda^{6} V}$    (4)

where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are constants that depend on $\varepsilon'$ and $\varepsilon''$. the leading coefficient is

$\alpha_1 = \frac{6\varepsilon''}{(\varepsilon'+2)^{2} + (\varepsilon'')^{2}}$    (5)

while $\alpha_2$ and $\alpha_3$ are given by lengthier expressions in $\varepsilon'$ and $\varepsilon''$, (6) and (7) in [14]. visibility $V$ can be expressed in terms of the particle density as:

$V = \frac{5.5 \times 10^{-4}}{N r^{2}}$    (8)

where $N$ is the number of dust particles per cubic meter of air and $r$ is the radius of the dust particles in meters. iii.
system specification and parameters
the key parameters of the point-to-point link systems are listed in table i. various frequencies were used to study the dust storm effects. the receiver sensitivity of each link at a different frequency also has a different value, and the values of the dielectric constant vary with each carrier frequency.

table i. the main parameters of the p-to-p links
  parameters                      link 1 [15]            link 2 [15]            link 3 [4]
  frequency (ghz)                 3.9                    26                     38, 60, 100
  transmitter power (dbm)         32                     18                     43.01
  receiver sensitivity (dbm)      -69                    -77                    -103.02
  transmitter antenna gain (db)   40                     40                     17
  transmitter height (m)          50                     50                     50
  receiver antenna gain (db)      40                     40                     2
  receiver height (m)             30                     30                     30
  waveguide loss (db/m)           0.16                   0.16                   0.16
  physical link separation (km)   34.725                 34.725                 34.725
  dielectric constants            ε' = 4.56, ε'' = 0.25  ε' = 4.56, ε'' = 0.25  ε' = 3.50, ε'' = 1.64

measurements of dust storms have been carried out and the recorded readings are listed in table ii [16]. the measurements were performed in riyadh with the use of passive collectors manufactured according to astm d1739 [17]. each collector/bucket is an open-topped cylinder with vertical sides and a flat bottom, with a minimum diameter of 15cm and a depth of 2-3 times the diameter. the buckets were located at different heights above ground on the receiving tower. the collected dry samples were mixed in a watch-glass and a few drops of distilled water were added in order to sediment the mixture. the resulting slurry was diluted up to approximately 100ml and then boiled at low pressure to remove any excess air bubbles. a system called a sedimentation balance was used to record the weight of the particles that fell in the sedimentation fluid as a function of height and time. finally, stokes' law was used to calculate the particle diameter d.

table ii. dust storm measurement readings
  visibility (km)   particle average diameter (µm)
  0.6               21.25
  1                 15.3
  2                 9.8
  4                 8.2

iv.
results and discussion
this part of the paper analyzes and discusses the received power computed at the ikka receiver antenna from a transmitter in the al-washm zone under the various dust storm conditions measured in riyadh. from the readings in table ii, it can be seen that there are four measurement categories. the first one, with a visibility of 0.6km and a particle radius of 21.25µm, is the worst case, because the signal experiences a larger particle radius and a lower visibility. in the remaining three cases, the particle radius decreases and the visibility increases. table iii shows the corresponding free space loss due to signal propagation between the ikka and al-washm sites. the sand storm attenuation results in table iv indicate that the signal fading is directly proportional to the particle radius and inversely proportional to the visibility: when a dust storm is strong, the visibility is low and the signal fading is high, and vice versa, while a smaller particle radius produces a smaller signal fading.

table iii. free space propagation loss
  link     frequency   lfs (db)
  link 1   3.9 ghz     135.0759
  link 2   26 ghz      151.5541
  link 3   38 ghz      154.8503
  link 3   60 ghz      158.8176
  link 3   100 ghz     161.3164

table iv. dust storm attenuation ad (db/km) of the link paths
  r (µm)   v (km)   link 1 (3.9 ghz)   link 2 (26 ghz)   link 3 (38 ghz)   link 3 (60 ghz)   link 3 (100 ghz)
  21.25    0.6      0.0526             1.1663            3.0939            4.8849            8.1414
  15.3     1        0.0226             0.5035            1.3365            2.1103            3.5171
  9.8      2        0.0073             0.1632            0.4324            0.6759            1.1264
  8.2      4        0.0031             0.0676            0.1788            0.2828            0.4713

the received power versus distance for link 1 is depicted in figure 4, where the power decreases as the distance increases. figure 4 also shows that when the particle radius of the dust increases and the visibility decreases, then attenuation will increase.
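a small helper makes this bookkeeping concrete: it evaluates the leading dielectric factor of the dust model (the standard small-particle expression $6\varepsilon''/[(\varepsilon'+2)^2+(\varepsilon'')^2]$) and scales a specific attenuation from table iv to a total path loss via $A = A_d L$. this is an illustrative sketch, not the authors' code:

```python
def alpha1(eps_real, eps_imag):
    """leading dielectric factor 6*eps'' / ((eps' + 2)^2 + eps''^2)."""
    return 6.0 * eps_imag / ((eps_real + 2.0) ** 2 + eps_imag ** 2)

def total_dust_loss_db(ad_db_per_km, path_km):
    """total attenuation over a homogeneous storm: a = a_d * l."""
    return ad_db_per_km * path_km

# dielectric constants from the parameter table
print(round(alpha1(4.56, 0.25), 4))   # links 1 and 2
print(round(alpha1(3.50, 1.64), 4))   # link 3
# worst-case link 2 specific attenuation (1.1663 db/km) over the 34.725 km path
print(total_dust_loss_db(1.1663, 34.725))
```

note how strongly the dielectric loss term matters: link 3's lossier particles (larger ε'') raise α₁ by almost an order of magnitude relative to links 1 and 2.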
in addition, it can be seen that the received power is always greater than the receiver sensitivity of -69dbm, which means that the dust storm has no effect on the receiver performance, due to the fact that link 1 employs a microwave carrier frequency of 3900mhz. the power level is about -27dbm, i.e. there is a margin of 42db above the receiver sensitivity. on the other hand, if the carrier frequency is increased to 26ghz, as shown in figure 5, there is not much difference in terms of receiver performance, but the power margin decreases to 19db. the relation of low visibility and large dust particle size with the received power starts to become clear in figure 5, in which the worst case leads to a weakened received power. all four cases in links 1 and 2 are good enough to receive the signal power from the transmitter at 34.7km with no critical fading created by the sand storm. in figure 5, the received power for all cases is approximately in the range between -58.3 and -57.19dbm, which is greater than the sensitivity, so the signal quality is good enough for reception.
fig. 4. physical distance versus received power for link 1.
fig. 5. physical distance versus received power for link 2.
regarding link 3, there are another three cases with three different frequencies, 38, 60 and 100ghz, with a fixed receiver sensitivity of -103.02dbm for all cases. using 38ghz, at the ikka receiver site, the power signal will be received under all four dust storm conditions, as shown in figure 6. however, at 13km beyond ikka (d=48km), the receiver will not be able to sense the transmitted signal, especially for the worst dust storm condition with a visibility of 0.6km and a dust particle size of 21.25µm. the situation will be worse when the link uses 60ghz.
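the maximum-usable-distance reasoning in this section can be automated: received power decreases monotonically with distance, so the largest distance at which it still meets the sensitivity can be found by bisection. the sketch below assumes free space plus a uniform dust attenuation over the entire path, a harsher assumption than the measured scenario, so its outputs are illustrative rather than a reproduction of the figures; all names are hypothetical:

```python
import math

C = 3.0e8  # m/s, as apparently used by the paper

def rx_power_dbm(d_km, f_hz, pt_dbm, gt_db, gr_db, ad_db_per_km):
    """friis in db with a uniform dust attenuation ad (db/km) over the whole path."""
    fspl = 20.0 * math.log10(4.0 * math.pi * (d_km * 1e3) * f_hz / C)
    return pt_dbm + gt_db + gr_db - fspl - ad_db_per_km * d_km

def max_range_km(f_hz, pt_dbm, gt_db, gr_db, ad_db_per_km, sens_dbm, hi_km=1000.0):
    """largest distance at which the received power still meets the sensitivity."""
    lo, hi = 1e-6, hi_km
    for _ in range(200):  # bisection on the monotone power margin
        mid = 0.5 * (lo + hi)
        if rx_power_dbm(mid, f_hz, pt_dbm, gt_db, gr_db, ad_db_per_km) >= sens_dbm:
            lo = mid
        else:
            hi = mid
    return lo
```

the same routine directly yields a relay spacing: placing an rf repeater at (or inside) the returned distance keeps every hop above the sensitivity threshold.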
in this case, restrictions on using this frequency should be considered, especially when the visibility is lower than 600m or 1000m. for these two conditions (visibility of 0.6 and 1km), the signal cannot be captured by the receiver, because the power level of the received signal, -107.4 and -103.4dbm respectively, is lower than the receiver sensitivity at ikka, as shown in figure 7. for the other two cases (visibility of 2 and 4km), the signal at ikka will not be affected, because the received power is -101.3 and -100.8dbm for visibility of 2 and 4km, respectively. however, the signal will deteriorate after about 10km beyond ikka (i.e. at a distance greater than 44km from the transmitter). for link 3 with a carrier frequency of 100ghz, as shown in figure 8, the signal will not be received in any visibility case due to the high dust attenuation. the power level at ikka is about -117.2, -109.7, -106.6 and -105.8dbm for visibility of 0.6, 1, 2 and 4km, in the same order. the maximum distance between the transmitter and the receiver with a sensitivity of -103.02dbm is 15.5, 20, 24 and 27km for visibility of 0.6, 1, 2 and 4km, respectively.
fig. 6. physical distance versus received power for link 3 at 38ghz.
fig. 7. physical distance versus received power for link 3 at 60ghz.
fig. 8. physical distance versus received power for link 3 at 100ghz.
therefore, as a solution, the use of an rf relay system every 15.5, 20, 24 and 27km for the abovementioned visibility conditions, in the same order, is suggested. the transmitter power can also be increased during dust storms to compensate for the attenuation. in addition, data rate reduction and the adjustment of modulation and coding techniques are further solutions that can be used to mitigate the problem.
v. conclusion
this paper presented a study of the dust storm effect on communication channel performance in the arid area of riyadh, saudi arabia. measured dust storm data were employed to investigate this factor. it was shown that a severe dust storm may result in radio link interruption due to dust attenuation, especially under high-frequency, long-link, low-visibility and large-particle-size conditions. these situations need technical mitigation schemes such as an rf relay system, power control and/or adaptive modulation.
references
[1] e. basar, "index modulation techniques for 5g wireless networks", ieee communications magazine, vol. 54, no. 7, pp. 168-175, 2016
[2] z. lin, x. du, h. h. chen, b. ai, z. chen, d. wu, "millimeter-wave propagation modeling and measurements for 5g mobile networks", ieee wireless communications, vol. 26, no. 1, pp. 72-77, 2019
[3] z. a. shamsan, a. a. alburaih, f. i. alyahya, s. m. alshalawi, "effects of interference and precipitation on the 21.4–22 ghz downlink direct broadcasting satellite in saudi arabia", 15th ieee student conference on research and development, putrajaya, malaysia, december 13-14, 2017
[4] z. a. shamsan, "38-ghz point-to-point wireless radio link prediction based on propagation and terrain path profile in riyadh", university politehnica of bucharest scientific bulletin, series c-electrical engineering and computer science, vol. 80, no. 1, pp. 121-134, 2018
[5] z. a. shamsan, "clear air and precipitation millimeter-wave point-to-point wireless link prediction based on terrain path profile in semi-arid climate", journal of telecommunication, electronic and computer engineering, vol. 10, no. 2-7, pp. 17-21, 2018
[6] n. middleton, u. kang, "sand and dust storms: impact mitigation", sustainability, vol. 9, no. 6, article id 1053, 2017
[7] k. harb, b. omair, s. a. jauwad, a. a. yami, a. a. a.
yami, a proposed method for dust and sand storms effect on satellite communication networks, kfupm university, 2012
[8] general authority for statistics, frequency of sandstorms per year in saudi arabia during the period 2010-2017, general authority for statistics, 2017
[9] china daily, sandstorm hits riyadh of saudi arabia, available at: http://english.sina.com/world/p/2009/0310/224892.html, 2009
[10] s. i. ghobrial, "the effect of sand storms on microwave propagation", national telecommunication conference, houston, texas, november 30-december 4, 1980
[11] j. goldhirsh, "attenuation and backscatter from a derived two-dimensional duststorm model", ieee transactions on antennas and propagation, vol. 49, no. 12, pp. 1703–1711, 2001
[12] a. ishimaru, "wave propagation and scattering in random media and rough surfaces", proceedings of the ieee, vol. 79, no. 10, pp. 1359-1366, 1991
[13] b. r. vishvakarma, c. s. rai, "limitations of rayleigh scattering in the prediction of millimeter wave attenuation in sand and dust storms", ieee international geoscience and remote sensing symposium, tokyo, japan, august 18-21, 1993
[14] m. r. islam, z. elabdin, o. elshaikh, o. o. khalifa, a. h. m. z. alam, s. khan, a. w. naji, "prediction of signal attenuation due to duststorms using mie scattering", iium engineering journal, vol. 11, no. 1, pp. 71-87, 2010
[15] y. zhang, m. roughan, c. lund, d. l. donoho, "estimating point-to-point and point-to-multipoint traffic matrices: an information-theoretic approach", ieee/acm transactions on networking, vol. 13, no. 5, pp. 947-960, 2005
[16] a. s. ahmed, a. a. ali, m. a. alhaider, "measurement of atmospheric particle size distribution during sand/duststorm in riyadh, saudi arabia", atmospheric environment (1967), vol. 21, no. 12, pp. 2723-2725, 1987
[17] astm, astm d1739-70: standard method for collection and analysis of dustfall, 1982
engineering, technology & applied science research vol. 10, no.
2, 2020, 5441-5447 5441 www.etasr.com sabir et al.: towards a new model to secure iot-based smart home mobile agents using blockchain … towards a new model to secure iot-based smart home mobile agents using blockchain technology badr eddine sabir laboratory of watch for emergent technologies fst, hassan i university settat, morocco b.sabir@uhp.ac.ma omar bouattane laboratory of ssdia enset, university of hassan ii casablanca mohammedia, morocco o.boattane@gmail.com mohamed youssfi laboratory of ssdia enset, university of hassan ii casablanca mohammedia, morocco med@youssfi.net hakim allali laboratory of watch for emergent technologies fst, hassan i university settat, morocco hakim-allali@hotmail.fr abstract—the internet of things (iot) is becoming an indispensable part of the actual internet and continues to extend deeper into the daily lives of people, offering distributed and critical services. mobile agents are widely used in the context of iot and due to the possibility of transmitting their execution status from one device to another in an iot network, they offer many advantages such as reducing network load, encapsulating protocols, exceeding network latency, etc. also, the blockchain technology is growing rapidly allowing for the addition of an approved security layer in many areas. security issues related to mobile agent migration can be resolved with the use of blockchain. this paper aims to demonstrate how blockchain technology can be used to secure mobile agents in the context of the iot using ethereum and a smart contract. the transactions within the blockchain are used to detect the malevolent mobile agents that could infiltrate the iot systems. the proposed model aims to provide a secure migration of mobile agents to ensure security and protect the iot applications against malevolent agents. the case of a smart home with multiple applications is applied to verify the proposed solution. 
The model presented in this paper could be extended to a wider selection of IoT systems beyond the smart home.

Keywords—Internet of Things; smart home; blockchain; Ethereum; smart contract; Solidity; multi-agent systems; mobile agents

I. INTRODUCTION

The IoT is growing exponentially in the area of telecommunications and will be an indispensable part of the future Internet [1-4]. It refers to an approach in which an extensive number of physical objects are interconnected and connected to the Internet [5]. It is a part of pervasive and ubiquitous computing networks offering distributed and transparent services [6]. The IoT enables heterogeneous devices to interconnect in order to support various applications serving users with different requirements [7-8] and is considered a good way to achieve smart cities [9-10]. Mobile agents are widely used in the context of the IoT and, due to the possibility of transmitting their execution state within an IoT network, they offer many advantages such as reducing network load, encapsulating protocols, and overcoming network latency. An agent-oriented infrastructure enables flexible coordination between IoT devices including robots, smartphones, and sensors. Agent-based systems enable cognitive management without constant human intervention [11-12]. Functionalities such as smartness, autonomy, and dynamicity are required for an IoT-based infrastructure and can be offered by the presence of agents [11]. Besides, using agents, processing can be performed closer to the actual data sources to reduce its cost [13]. The use of mobile agents in an IoT network is highly recommended due to these advantages [14]. The authors in [7] showed that mobile agents can be represented by mobile JavaScript code (AgentJS) that can be modified at run time by agents processed by JAM, a modular and portable agent platform, in a protected sandbox environment encapsulating agent processes.
The proposed approach enables agents to migrate between different host platforms, including web browsers, by migrating the program code of the agent together with its state and data in an extended JSON+ format. The authors in [4] presented mobile agents based on web technologies in the context of the IoT. In their approach, agents can move between different devices and, if necessary, can also be cloned to create numerous instances. This model enables the creation of increasingly complex configurations, where device- and context-specific decisions can also be taken. This approach increases the flexibility of system design and of the evolution of the IoT, since new code can add new functionalities and adapt a device to new requirements. Moving code, and especially agents, can also be used to add autonomous intelligence to systems. The authors in [15] propose a distributed software-defined multi-agent architecture for unifying IoT applications. (Corresponding author: Badr Eddine Sabir.) This architecture can tackle the main challenges that the IoT faces, including heterogeneity, interoperability, scalability, flexibility, and security of IoT applications. The authors in [16] present an architecture based on MAS, SOA, and Semantic Web technologies to automate the integration and management of devices in an IoT environment. A prototype system was implemented and tested in a simulated manufacturing environment, where it demonstrated the ability to adapt, incorporating new features and flexibility. In [17], a framework for the IoT was presented which uses mobile agents for information transfer. The proposed approach can dynamically update information such as the availability and usability of services.
It also has speech processing modules to provide solutions using voice-based commands and prompts. Mobile agents are thus widely used in the field of the IoT. Therefore, the questions arise of how to ensure security during the migration of mobile agents and how to protect IoT applications against malevolent agents. The authors in [18] describe four kinds of threat scenarios faced while using mobile agents: Agent Corrupts the Platform (ACP), Platform Corrupts Agent (PCA), Agent Corrupts Agent (ACA), and Other malicious entities (third-party programs) Corrupt Agent (OCA). In this paper, we present an architecture model to secure mobile agents and protect them against different types of threats in the context of the IoT. The proposed model aims to provide secure migration of mobile agents in order to protect IoT applications against malevolent agents using a smart contract [19]. The data in the blockchain are unchangeable once published and can help neutralize any attempt to alter the agent's source code. The blockchain approach provides logging of events in a tamper-proof manner, which allows us to detect any mobile agent that has turned malicious. It can be used to provide high trust in secure transactions in a heterogeneous network [20]. Once such a mobile agent is detected, the security agent isolates and destroys it. A smart home is a very suitable IoT application for verifying the proposed solution, as it involves a variety of devices and parameters to be connected [21]; however, the presented model could be extended to a wider selection of IoT systems.

II. RELATED WORK

In this section, related work is described with regard to the security of mobile agents in a multi-agent environment. The authors in [13] provide a methodology to improve BROSMAP and make it a lightweight protocol that fulfills the needs of multi-agent-based IoT systems in general.
They offer a new ECC-BROSMAP which is equivalent in security to RSA-BROSMAP, implement both protocols, and present a comparative performance study of ECC-BROSMAP against RSA-BROSMAP. The authors in [22] illustrate the use of mobile agent systems in distributed applications in the domain of ambient intelligence. They focus on the ability to improve privacy by hiding information using the agent architecture; the scenario shown clarifies the necessity of considering the particular security requirements of mobile agents. The authors in [23] introduce agent identity into the distribution of the symmetric key to agents newly attached to the platform. During the registration of the agent and its services at the platform, the agent must obtain the key from the key distribution process. The authors proposed a novel idea of fixing the identity of the agent for obtaining its shared key from the key distribution process: every trusted agent in a platform has a small piece of hardware, a password-protected USB dongle, which is configured during the initial environment formation. The authors in [24] propose a security framework that can be effectively used to protect agents from attacks by malicious hosts. The framework is based on restricting the access level of the agent according to the trust level assigned to the current host. Certain methods can only be executed on hosts that are minimally trusted, and methods that cannot be executed on a host are kept encrypted. Data are also selectively accessible according to the trust placed in the host. In traditional security methods, discovering the private key is enough for message decryption, which can be achieved through malicious attacks on the network nodes or by listening to communication links. Accessing the private key can therefore be considered the endpoint of a malicious process.
The authors in [25] propose an approach to improve private key security using two strategies: encrypting the private key with an encryption algorithm (the AES algorithm in their paper) and breaking the encrypted private key into different units. A secure authentication model based on identity-based cryptography (IBC) for multi-agent systems within a single domain, together with a second IBC-based authentication model for multi-agent systems spanning multiple domains, was proposed in [26]. The authors in [27] described the general security requirements for mobile agent systems and existing security measures. In particular, they pointed out some weaknesses in protecting the data carried by mobile agents; to mitigate this issue, they implemented trust and reputation management to provide a secure path for mobile agent data protection. Our work aims to present a simple approach that guarantees the security of the migration of mobile agents in an IoT network while ensuring maximum flexibility and without degrading performance.

III. BACKGROUND

A. Blockchain Technology

The blockchain was first propagated in 2008 through Bitcoin [28], to assure all parties that the payer had the means to satisfy the debt before concluding any transaction [29]. Bitcoin was created with blockchain technology to transfer money, but blockchain is now used in many other areas. A blockchain [28] is a decentralized distributed database that maintains a continuously growing list of data in a public or private peer-to-peer network. Duplicated to all the peer nodes of the network, the blockchain offers a secured system between untrusting collaborators, which everyone on the network can check and interact with but no one can control or alter. This allows the blockchain to be a trustworthy source without the requirement of a third party [30]. A blockchain is a series of blocks; every block has its own cryptographic hash code, the hash of the previous block, and its data [31].
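This hash-linked structure is easy to see in a minimal sketch (a toy illustration, not from the paper; the block fields and helper names below are assumed):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's full contents; since the previous block's hash is part
    # of those contents, altering any earlier block invalidates every later link.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def chain_is_valid(chain: list) -> bool:
    # Each block must reference the hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for d in ["genesis", "tx-1", "tx-2"]:
    append_block(chain, d)

assert chain_is_valid(chain)
chain[1]["data"] = "tampered"     # modify history...
assert not chain_is_valid(chain)  # ...and the chain no longer verifies
```

The final two assertions show the tamper-evidence property discussed in the text: rewriting one block breaks every subsequent hash link.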
As shown in Figure 1, each block in the blockchain is connected to the previous block by containing its hash. As a result, the history of transactions on the blockchain cannot be altered or deleted without completely changing the content of the blockchain [32]. A blockchain network is formed by one or more nodes. A node can be any electronic device (a computer, a telephone, etc.) connected to the Internet with an IP address, and each node has a complete and separate copy of the blockchain. All these nodes connect to form a blockchain network. A transaction is not sent to the whole network directly but rather to a network node, which communicates it to the other network nodes.

Fig. 1. Simplified diagram of a blockchain network.

The blockchain has several characteristics, among which we can cite [33]:

• Decentralization: third parties are not required to verify transactions. Consensus algorithms and cryptographic mechanisms are used to maintain data consistency on blockchain networks.

• Persistency: it is not possible to delete transactions that have already occurred.

• Auditability: each transaction on the blockchain refers to the previous transaction, which makes it easy to verify and track each transaction.

B. Ethereum

Ethereum is a blockchain platform which overcomes some limitations of Bitcoin [19]. It allows users to run distributed applications in a decentralized manner, meaning that applications running on Ethereum are available everywhere and at any time [34-35]. Ethereum has several elements, the most important of which are [36]:

• Account: every account on Ethereum has a 20-byte address and consists of four parts, namely a nonce counter, storage, the ether balance, and the contract code.
• Transaction: a transaction in Ethereum refers to a signed data package that stores messages.

• Technology used: Ethereum uses several technologies including web technology, client/node implementations, and data storage.

• Consensus algorithm: Ethereum has three types of consensus algorithms, namely Proof of Stake (PoS), Proof of Authority (PoA), and Proof of Work (PoW), the most common being PoW. The underlying principle of this consensus algorithm is a complicated mathematical puzzle which consumes a certain amount of power to solve but whose solution is comparatively fast and easy to verify. The process of finding a solution to the puzzle is known as mining, and the nodes executing this process are known as miners. If a miner manages to find the solution (hash), a new block is formed and distributed on the network; if validated, the block is added, extending the chain. The protocols used for generating the hash of every block are cryptographic hash algorithms such as SHA-256, which compute the hash of the current block from metadata including the hash of the previous block. This makes each hash unique, and any attempt to change the content or metadata of a block results in an entirely different hash, creating a divergence in the chain [30].

C. Smart Contract

The smart contract was introduced in 1994 and defined as a computerized transaction protocol that executes the terms of a contract [37]. Translating contractual clauses into code and embedding them into property that can self-enforce them was suggested in [38]. Within the blockchain context, smart contracts are scripts recorded on the blockchain [39]. Since they reside on the chain, they have a unique address, and a smart contract is triggered by addressing a transaction to it. It then executes independently and automatically in a prescribed manner on every node in the network, according to the data included in the triggering transaction [40].
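Returning to the PoW consensus described above: its asymmetry (costly to solve, cheap to verify) can be sketched as follows. This is a toy stand-in using plain SHA-256 and invented function names; Ethereum's actual PoW used the Ethash algorithm, not this scheme:

```python
import hashlib

def mine(block_data: str, prev_hash: str, difficulty: int = 4):
    """Search for a nonce whose block hash has `difficulty` leading zero hex digits.

    Finding the nonce takes many hash evaluations; verifying it takes one.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify(block_data: str, prev_hash: str, nonce: int, difficulty: int = 4) -> bool:
    # A single hash suffices to check the miner's claimed solution.
    digest = hashlib.sha256(f"{prev_hash}{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("tx-batch-1", "0" * 64, difficulty=4)
assert verify("tx-batch-1", "0" * 64, nonce, difficulty=4)
```

With 4 leading zero hex digits the miner performs on the order of 65,000 hashes, while verification is a single hash; real networks tune the difficulty so mining takes minutes even at enormous hash rates.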
The nodes in the network interact with the contract by calling the functions of the contract code once it is deployed on the network. Smart contracts cannot be altered, even by their author, once deployed on the network. Smart contracts on Ethereum are written in a high-level language and compiled for the Ethereum Virtual Machine. The most used programming language is Solidity [41], which we will use to write our own smart contracts.

IV. PROPOSED MODEL

Recent years have witnessed an impressive development of IoT home devices. Home automation is a system controlled by a smart device that can control home appliances such as lights, fans, air conditioners, smart security locks, etc. [42]. Many companies have released innovative products, such as Google Home, Amazon Echo, and Samsung SmartThings, that have made these devices widely available. While these devices have a variety of benefits, they also introduce a new target for potential security threats [43]. Device providers often neglect device security, supposing that devices in the home environment are trustworthy, and since consumers do not have the resources to protect themselves from targeted security attacks, the home network becomes vulnerable to a variety of potential security threats. There is thus a real need for an intelligent and efficient home security model.

A. Components of the Proposed Model

The proposed model, shown in Figure 2, aims to provide migration of mobile agents while ensuring security and protection of the IoT application against malevolent agents, by ensuring their non-repudiation and integrity using blockchain technology.
The main components of our model are:

• Agents: an agent can be attached to a source device to collect information or perform actions on it. An agent can migrate to another device, keeping its state, to perform operations on the destination device.

• Smart device: extremely useful devices that make daily life easier [44], allowing users to configure, access, and control IoT devices through a user-friendly interface.

Fig. 2. Proposed model architecture.

• Whitelist: it contains the hashes of transactions returned by the Ethereum network after the registration of a new agent. An agent identifier (AID) is associated with each transaction hash.

• IoT gateway: a user can access and control the IoT devices from a smart device through the IoT gateway. The gateway is responsible for authenticating and monitoring communication between devices, calling on the security agent in the event of an agent migration.

• Security agent: it mainly performs the registration, in the blockchain network, of the source code hash of agents that wish to migrate to other devices, and the verification of agents after their migration, to guarantee the integrity, authentication, and non-repudiation of agents.

• IoT device: in our architecture, the IoT device is a piece of hardware with a sensor.

• SecurityAgent.sol: a smart contract that provides two functions:

o registerMobileAgent(aid, agentHash): sends a transaction to the blockchain network to register the source code of the agent. The function takes two parameters, the hash of the source code of the agent (agentHash) and the agent identifier (aid), and returns the identifier of the transaction recorded in the blockchain (transactionHash).

o retrieveAgentHash(transactionHash): recovers the hash of an agent's source code from the blockchain network.
The function takes the identifier of the transaction that registered the hash of the agent's source code and returns that hash. This smart contract is developed in Solidity, a contract-oriented language used for writing smart contracts that can be deployed on an Ethereum Virtual Machine. It follows an object-oriented approach and supports features like inheritance and complex data types.

• Solidity compiler: SecurityAgent.sol is what we call a "contract definition". This code is not executed on the Ethereum network as-is, so the contract definition must be compiled with a Solidity compiler, which produces two separate artifacts. The first contains the bytecode, which is deployed on the Ethereum network in the form of a contract instance, in our case using Truffle, a development environment and testing framework that helps compile and deploy contracts on the blockchain automatically and is also used to deploy contracts on a private Ethereum blockchain. The compilation also produces an Application Binary Interface (ABI), which is used to call the functions exposed by the deployed smart contract instance through the web3 library. Ethereum provides developers with an interface to the Ethereum network in the web3.js API, allowing applications to interpret events sent from the Ethereum network and to submit transactions to it [45].

B. Secure Communication Between Agents

The diagram shown in Figure 3 illustrates the migration steps of an agent from a temperature sensor to a smart device in chronological order:

• The mobile agent measures and collects information from the temperature sensor.

• The mobile agent requests the IoT gateway to migrate to the smart device.

• The security agent generates the hash of the source code of the mobile agent.
• The security agent invokes the registerMobileAgent function of the SecurityAgent.sol smart contract, passing as parameters the hash of the mobile agent's source code and the AID of the agent.

• The hash of the agent's source code is registered in the Ethereum blockchain network.

• The result of this operation is an identifier of the transaction validated on the blockchain, in the form of a transaction hash.

• The security agent registers the transaction hash in the whitelist, associating it with the AID of the mobile agent whose source code has been saved in the blockchain.

• The IoT gateway allows the mobile agent to migrate.

• After the migration and the generation of graphs in the smart device, the mobile agent requests to return to the temperature sensor.

• The IoT gateway requests the security agent to verify the integrity and the authentication of the mobile agent after its migration.

• The security agent retrieves the transaction identifier from the whitelist using the AID of the mobile agent.

Fig. 3. Sequence diagram: migration of an agent.

• The security agent invokes the retrieveAgentHash function of the SecurityAgent.sol smart contract, passing the transaction identifier.

• The hash of the source code of the mobile agent stored in the blockchain is returned to the security agent.

• The security agent compares the hash recovered from the blockchain with the hash of the current mobile agent.

• If the hashes are the same, the security agent allows the mobile agent to return to the temperature sensor.

• Otherwise, the security agent destroys the mobile agent.
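The registration and verification steps above can be simulated end to end. In the sketch below, a hypothetical `FakeChain` class stands in for the deployed smart contract, with `register_mobile_agent` and `retrieve_agent_hash` mirroring its two functions and a dictionary playing the role of the whitelist; this is an illustrative model of the flow, not the authors' implementation:

```python
import hashlib
import uuid

class FakeChain:
    """Stand-in for the Ethereum network: maps transaction hashes to stored data.

    In the real model this role is played by the deployed smart contract; here a
    dict simulates its storage so the flow can be followed locally.
    """
    def __init__(self):
        self._store = {}

    def register_mobile_agent(self, aid: str, agent_hash: str) -> str:
        tx_hash = uuid.uuid4().hex  # stand-in for the returned transaction hash
        self._store[tx_hash] = (aid, agent_hash)
        return tx_hash

    def retrieve_agent_hash(self, tx_hash: str) -> str:
        return self._store[tx_hash][1]

def source_hash(source_code: str) -> str:
    return hashlib.sha256(source_code.encode()).hexdigest()

# Before migration: the security agent records the agent's source hash and
# whitelists the resulting transaction hash under the agent's AID.
chain = FakeChain()
whitelist = {}  # AID -> transaction hash
agent_source = "def act(): return read_temperature()"  # illustrative agent code
aid = "agent-42"
whitelist[aid] = chain.register_mobile_agent(aid, source_hash(agent_source))

# After migration: recompute the hash and compare with the recorded one.
def verify_agent(aid: str, returned_source: str) -> bool:
    recorded = chain.retrieve_agent_hash(whitelist[aid])
    return recorded == source_hash(returned_source)

assert verify_agent(aid, agent_source)            # unmodified agent may return
assert not verify_agent(aid, agent_source + "x")  # tampered agent is destroyed
```

The final comparison is the decision point of the sequence diagram: a matching hash lets the agent return to the sensor, a mismatch marks it as malevolent.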
V. CONCLUSION

This paper presented an overview of the current state of research on mobile agent security in a multi-agent environment, along with the benefits of employing mobile agents in IoT systems, such as reducing network load, encapsulating protocols, and overcoming network latency. An architecture using blockchain technology was presented to secure mobile agents and protect them against different types of threats in the context of the IoT, using a smart contract deployed on a private Ethereum network. The smart home use case, with multiple IoT devices using mobile agents, was applied to verify and explain the proposed solution. Although this paper presents the smart home use case, the model could be extended to other types of IoT systems with some modifications. Our future work aims to set up a private Ethereum network using Go Ethereum and then test the implementation of the proposed model; to this end, we are developing a smart home testbed environment based on web technologies.

REFERENCES

[1] T. Alam, M. Benaida, "CICS: Cloud-Internet communication security framework for the internet of smart devices", International Journal of Interactive Mobile Technologies, Vol. 12, No. 6, pp. 74-84, 2018
[2] S. Li, L. D. Xu, S. Zhao, "The internet of things: a survey", Information Systems Frontiers, Vol. 17, No. 2, pp. 243-259, 2015
[3] S. K. Anithaa, S. Arunaa, M. Dheepthika, S. Kalaivani, M. Nagammai, M. Aasha, S. Sivakumari, "The internet of things: a survey", World Scientific News, Vol. 41, pp. 150-158, 2016
[4] M. Weyrich, C. Ebert, "Reference architectures for the internet of things", IEEE Software, Vol. 33, No. 1, pp. 112-116, 2016
[5] L. Jarvenpaa, M. Lintinen, A. L. Mattila, T. Mikkonen, K.
Systa, J. P. Voutilainen, "Mobile agents for the internet of things", 17th International Conference on System Theory, Control and Computing, Sinaia, Romania, October 11-13, 2013
[6] S. Bosse, "Mobile multi-agent systems for the internet-of-things and clouds using the JavaScript agent machine platform and machine learning as a service", 4th International Conference on Future Internet of Things and Cloud, Vienna, Austria, August 22-24, 2016
[7] D. Lake, A. Rayes, M. Morrow, "The internet of things", The Internet Protocol Journal, Vol. 15, No. 3, pp. 10-19, 2012
[8] G. M. Lee, J. Y. Kim, "The internet of things: a problem statement", International Conference on Information and Communication Technology Convergence, Jeju, South Korea, November 17-19, 2010
[9] A. Zanella, N. Bui, A. Castellani, L. Vangelista, M. Zorzi, "Internet of things for smart cities", IEEE Internet of Things Journal, Vol. 1, No. 1, pp. 22-32, 2014
[10] J. Jin, J. Gubbi, S. Marusic, M. Palaniswami, "An information framework for creating a smart city through internet of things", IEEE Internet of Things Journal, Vol. 1, No. 2, pp. 112-121, 2014
[11] G. Fortino, A. Guerrieri, W. Russo, C. Savaglio, "Middlewares for smart objects and smart environments: overview and comparison", in: Internet of Things Based on Smart Objects, pp. 1-27, Springer, 2014
[12] F. Aiello, G. Fortino, A. Guerrieri, R. Gravina, MAPS: A Mobile Agent Platform for WSNs Based on Java Sun SPOTs, University of Calabria, 2009
[13] H. Hasan, T. Salah, D. Shehada, M. J. Zemerly, C. Y. Yeun, M. A. Qutayri, Y. A. Hammadi, "Secure lightweight ECC-based protocol for multi-agent IoT systems", 13th International Conference on Wireless and Mobile Computing, Networking and Communications, Rome, Italy, October 9-11, 2017
[14] H. Yu, Z. Shen, C.
Leung, "From internet of things to internet of agents", International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, Beijing, China, August 20-23, 2013
[15] L. Jarvenpaa, M. Lintinen, A. L. Mattila, T. Mikkonen, K. Systa, J. Voutilainen, "Mobile agents for the internet of things", 17th International Conference on System Theory, Control and Computing, Sinaia, Romania, October 11-13, 2013
[16] R. L. Cagnin, I. R. Guilherme, J. Queiroz, B. Paulo, M. F. O. Neto, "A multi-agent system approach for management of industrial IoT devices in manufacturing processes", 16th International Conference on Industrial Informatics, Porto, Portugal, July 18-20, 2018
[17] P. Verma, M. Gupta, T. Bhattacharya, P. K. Das, "Improving services using mobile agents-based IoT in a smart city", International Conference on Contemporary Computing and Informatics, Mysore, India, November 27-29, 2014
[18] D. Calvaresi, A. Dubovitskaya, J. P. Calbimonte, K. Taveter, M. Schumacher, "Multi-agent systems and blockchain: results from a systematic literature review", in: Lecture Notes in Computer Science, Vol. 10978, pp. 110-126, Springer, 2018
[19] V. Buterin, Ethereum White Paper: A Next-Generation Smart Contract and Decentralized Application Platform, 2014
[20] T. Alam, "IoT-Fog: a communication framework using blockchain in the internet of things", International Journal of Recent Technology and Engineering, Vol. 7, No. 6, pp. 1-5, 2019
[21] V. Tiwari, A. Keskar, N. C. Shivaprakash, "Design of an IoT enabled local network based home monitoring system with a priority scheme", Engineering, Technology & Applied Science Research, Vol. 7, No. 2, pp. 1464-1472, 2017
[22] F. Piette, C. Caval, A. E. F. Seghrouchni, P. Taillibert, C.
Dinont, "A multi-agent system for resource privacy: deployment of ambient applications in smart environments", International Conference on Autonomous Agents & Multiagent Systems, Singapore, May 9-13, 2016
[23] R. Kumaravelu, N. Kasthuri, "Distribution of shared key (secret key) using USB dongle based identity approach for authenticated access in mobile agent security", International Conference on Communication and Computational Intelligence, Erode, India, December 27-29, 2010
[24] P. J. Marques, L. M. Silva, J. G. Silva, "Establishing a secure open environment for using mobile agents in electronic commerce", in: Proceedings of the First and Third International Symposium on Agent Systems Applications, and Mobile Agents, IEEE, 1999
[25] A. Esfandi, A. M. Rahimabadi, "Mobile agent security in multi agent environments using a multi agent-multi key approach", 2nd IEEE International Conference on Computer Science and Information Technology, Beijing, China, August 8-11, 2009
[26] Y. Yu, X. Zheng, M. Zhang, Q. Zhang, "An identity-based authentication model for mobile agent", Fifth International Conference on Information Assurance and Security, Xi'an, China, August 18-20, 2009
[27] G. Geetha, C. Jayakumar, "Implementation of trust and reputation management for free-roaming mobile agent security", IEEE Systems Journal, Vol. 9, No. 2, pp. 556-566, 2015
[28] S. Nakamoto, "Bitcoin: a peer-to-peer electronic cash system", available at: https://bitcoin.org/bitcoin.pdf, 2008
[29] I. Purdon, E. Erturk, "Perspectives of blockchain technology, its relation to the cloud and its potential role in computer science education", Engineering, Technology & Applied Science Research, Vol. 7, No. 6, pp. 2340-2344, 2017
[30] I. Ishita, D. Kulkarni, T. Semwal, S. B. Nair, "On securing mobile agents using blockchain technology", Second International Conference on Advanced Computational and Communication Paradigms, Gangtok, India, February 25-28, 2019
[31] T.
Alam, "Blockchain and its role in the internet of things (IoT)", International Journal of Scientific Research in Computer Science, Engineering and Information Technology, Vol. 5, No. 1, pp. 151-157, 2019
[32] X. Xu, I. Weber, M. Staples, L. Zhu, J. Bosch, L. Bass, C. Pautasso, P. Rimba, "A taxonomy of blockchain-based systems for architecture design", International Conference on Software Architecture, Gothenburg, Sweden, April 3-7, 2017
[33] Z. Zheng, S. Xie, H. Dai, X. Chen, H. Wang, "An overview of blockchain technology: architecture, consensus, and future trends", International Congress on Big Data, Honolulu, USA, June 25-30, 2017
[34] C. Dannen, Introducing Ethereum and Solidity: Foundations of Cryptocurrency and Blockchain Programming for Beginners, Apress, 2017
[35] D. Patel, J. Bothra, V. Patel, "Blockchain exhumed", ISEA Asia Security and Privacy, Surat, India, January 29-February 1, 2017
[36] C. Saraf, S. Sabadra, "Blockchain platforms: a compendium", IEEE International Conference on Innovative Research and Development, Bangkok, Thailand, May 11-12, 2018
[37] D. Tapscott, A. Tapscott, Blockchain Revolution: How the Technology Behind Bitcoin and Other Cryptocurrencies Is Changing the World, Penguin, 2018
[38] N. Szabo, "The idea of smart contracts", available at: https://nakamotoinstitute.org/the-idea-of-smart-contracts, 1997
[39] "Using stored routines (procedures and functions)", in: MySQL Reference Manual, Oracle, 2016
[40] S. J. Pee, J. H. Nang, J. W. Jang, "A simple blockchain-based peer-to-peer water trading system leveraging smart contracts", International Conference on Internet Computing and Internet of Things, Las Vegas, USA, July 27-30, 2018
[41] M. Wohrer, U. Zdun, "Smart contracts: security patterns in the ethereum ecosystem and solidity", International Workshop on Blockchain Oriented Software Engineering, Campobasso, Italy, March 20, 2018
[42] T. Alam, A. A. Salem, A. O. Alsharif, A. M. Alhejaili, "Smart home automation towards the development of smart cities", APTIKOM Journal on Computer Science and Information Technologies, Vol. 3, No. 1, pp. 1-2, 2020
[43] L. Rafferty, F. Iqbal, S. Aleem, Z. Lu, S. C. Huang, P. C. K. Hung, "Intelligent multi-agent collaboration model for smart home IoT security", IEEE International Congress on Internet of Things, San Francisco, USA, July 2-7, 2018
[44] T. Alam, "Middleware implementation in cloud-MANET mobility model for internet of smart devices", International Journal of Computer Science and Network Security, Vol. 17, No. 5, pp. 86-94, 2017
[45] V. P. Ranganthan, R. Dantu, A. Paul, P. Mears, K. Morozov, "A decentralized marketplace application on the ethereum blockchain", 4th International Conference on Collaboration and Internet Computing, Philadelphia, USA, October 18-20, 2018

Engineering, Technology & Applied Science Research Vol. 10, No. 4, 2020, 5933-5939 www.etasr.com

UAV Tomographic Synthetic Aperture Radar for Landmine Detection

Muhannad Almutiry, Electrical Engineering Department, Northern Border University, Arar, Saudi Arabia, muhannad.almutiry@nbu.edu.sa

Abstract—The development of unmanned aerial vehicles (UAVs) and communication systems has contributed to the availability of more UAV applications for military and civilian purposes. Anti-personnel landmines deployed by militia groups in conflict zones are a life threat for civilians and need cautious handling during removal. UAV tomographic synthetic aperture radar (TSAR) can reconstruct three-dimensional images of the investigation domain to prescreen nonmetallic landmines.
a nonmetallic landmine cannot be detected using conventional ground penetrating radars when the scattering field is undetectable due to the dielectric permittivity. in this paper, imaging the underground with tsar for landmine detection is proposed. tsar has the capability of processing the data in discrete mode regardless of the altitude of the uav's radar. a landmine is almost always buried at less than a foot of depth. l-band frequency is used to provide high resolution and to penetrate deep into dry soil. more than one uav is used to scan the investigation space in multistatic mode. the geometric diversity of the multistatic distribution of the sensors provides more information about the buried nonmetallic landmines, certain features, and their location. the data collected from the sensors are aligned with the geolocation data obtained from the uav's system for processing. dynamic flying can be used to predict the electromagnetic response of the scattering field in order to create a dynamic matching filter using the green's function under the first-order born approximation. the air-soil interference is removed as an unwanted reflection from the ground while keeping the signal coming from underground. the born approximation leads to an ill-posed linear system, which is solved with the conjugate gradient algorithm. simulation results are presented to validate the method. keywords-radar; rf tomography; uav; synthetic aperture radar i. introduction there were an estimated one hundred million landmines positioned in many countries by 2009, and their number increases by 2 million annually [1, 2]. these hidden weapons pose a serious threat to the civilians living in these regions. while efforts are being conducted to remove these mines, at the ongoing rate of clearance it will take an estimated 1100 years to eradicate them and clear the unexploded ordnance using conventional detection methods such as ground-penetrating radar (gpr) [3].
tsar exhibits three primary attributes that set it apart from alternative sensors: a wide accumulated angle, a low-frequency band, and a small bandwidth. these attributes facilitate a more effective penetration of the soil and the development of higher-resolution synthetic aperture radar (sar) images, and enable the radar to cover more extensive areas. as such, tsar represents a promising technology for the ongoing efforts to find land mines from a safe distance. moreover, tomography has been effectively employed in many medical and scientific fields. for example, computed tomography (ct) is frequently used to assist diagnosis. the tomographic scanning setup exploits spatial diversity: an x-ray beam is focused and scanned at multiple angles around the object in order to recover its 3-d features [4]. the attenuation that occurs as the beam passes through the object is measured by the detectors (x-ray receivers) facing the x-ray tube. at each point of the circular geometry, the received radiation dose gives an estimate of the attenuation of the object along the transmission direction. according to the projection-slice theorem [5], for a given angular position the fourier transform of the received signal corresponds to a slice of the two-dimensional (2-d) fourier transform of the object. during the ct process, a one-dimensional (1-d) fft is first performed. this is then processed into a 2-d image that displays the distribution of the attenuation in a polar arrangement. after that, the polar format is transformed into a cartesian format via an interpolation step. the final image is developed via a 2-d inverse fft that results in the presentation of the 2-d object attenuation map at every position of interest.
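the projection-slice relation used above can be checked numerically. a minimal numpy sketch, added here for illustration (axis-aligned projection only; a full ct reconstruction also needs the polar-to-cartesian interpolation and 2-d inverse fft described in the text):

```python
import numpy as np

# projection-slice theorem: the 1-d fourier transform of a projection of an
# object equals the central slice of the object's 2-d fourier transform.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))               # toy attenuation map

projection = obj.sum(axis=0)             # integrate along y: projection at angle 0
slice_from_projection = np.fft.fft(projection)

spectrum_2d = np.fft.fft2(obj)
central_slice = spectrum_2d[0, :]        # the ky = 0 line of the 2-d spectrum

print(np.allclose(slice_from_projection, central_slice))   # True
```

repeating this for every projection angle fills the 2-d spectrum in polar coordinates, which is exactly the data layout the interpolation step then resamples onto a cartesian grid.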
several studies [6-8] have demonstrated that it is possible to formulate spotlight sar spectral analysis based on azimuth in a manner similar to the tomographic setup of medical x-ray scanning [9, 10]. the radar tomographic setup is based on linking the radars into one processing network, and it can be expanded to further applications [11, 12]. furthermore, the stripmap mode can be performed using a process similar to tomographic processing based on the scanning trajectory [13]. as a result of the small variations in the angles that are of interest during sar processing, more algorithms have been developed for effective image reconstruction [14]. at a high level, the development of the synthetic aperture that is employed in sar represents a tomographic technique. when x-ray tomography is performed, it is not possible to achieve 3-d imaging (corresponding author: muhannad almutiry). instead, 2-d processing is performed in iterations spanning multiple positions on the object by performing minor-step parallel readings of the receiver and transmitter set. this form of ct can be applied to detect mines because the multistatic tomographic radar can process signals at any elevation beam level, where a discrimination process can be applied to distinguish a landmine from a variety of objects in a selected area that is being prescreened. having the ability to detect and image objects that are buried in the ground is essential in a range of commercial, military, and civilian settings, e.g. for the exploration of natural resources and the detection of tunnels [15]. below-ground imaging techniques that are currently in use have the ability to identify both metallic and nonmetallic artifacts that exhibit higher conductivity than soil [16, 17].
the use of electromagnetic waves to image objects located below the ground represents a non-damaging approach to the detection, surveillance, and imaging of below-ground features and irregularities. below-ground imaging is performed using a process of radio wave transmission and scattering: radio waves are passed through the ground by the transmitter system, which subsequently measures the waves reflected from the targets and determines the effects that the various materials have on the transmitted radio waves. the data reflected from the area of interest are primarily received by passing a surface antenna around a circular framework. pulses are transmitted downward from the airborne uav's radar to the ground, with a data link connection for effective and accurate processing. the main difficulty in this scenario is the capture of the signal caused by reflections from the surface. when the targets are deeply buried, it is easy to separate the surface clutter signal from the target signal using range gating techniques. if the target is very close to the surface, range gating is ineffective because the clutter signal from the surface and the target signal will be received almost at the same time. thus, the problem is to promote the separation of target signals and ground clutter. tsar can operate at ultranarrow bandwidth (unb) or even at a single frequency in pulse form to reduce the attenuation when these pulses penetrate the ground and come into contact with an underground landmine. the signal attenuation is associated with the bandwidth. the radar system employs a supercomputer to undertake real-time digital processing of the collected scattering field data. when covering large environments, the data rate is substantial, so a single receiver system using a single frequency is required for real-time processing employing existing down-link hardware.
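the range-gating argument can be sketched with a toy example; all numbers (uav height, pulse width, sample rate) are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

c = 3e8                                  # speed of light (m/s)
fs = 2e9                                 # sample rate (hz), assumed
t = np.arange(0.0, 200e-9, 1.0 / fs)

def echo(delay, width=2e-9):
    # gaussian pulse envelope arriving at the given two-way delay
    return np.exp(-(((t - delay) / width) ** 2))

surface_delay = 2 * 3.0 / c              # uav assumed 3 m above the ground
# a deeply buried target 1.5 m further (soil propagation speed ignored here)
target_delay = surface_delay + 2 * 1.5 / c

received = 10.0 * echo(surface_delay) + echo(target_delay)  # clutter dominates

# gate: discard every sample earlier than a cut between the two echoes
gate_start = 0.5 * (surface_delay + target_delay)
gated = np.where(t >= gate_start, received, 0.0)

# before gating the surface return dominates; after gating the target does.
# for a shallow target the two echoes overlap and no such cut exists.
print(t[np.argmax(received)] < gate_start, t[np.argmax(gated)] >= target_delay - 1e-9)
```

the shallow-target failure mode in the text corresponds to `target_delay` falling inside the clutter pulse, where no gate can separate the two returns.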
the surface antenna assembles the scattering field data that are reflected and subsequently processed for detection and imaging. a dynamic green's function is used to eliminate the strong surface clutter. in the dynamic green's function technique, the landmine imaging is processed using just a single frequency. the technique reconstructs an image of a below-ground item through the application of tsar and 3-d sar using matched filter calculations based on aviation data that permit circular data collection. ii. land mine detection using electromagnetics land mines are currently detected through the use of metal detectors that evaluate the disruption of a radiated electromagnetic field caused by the presence of underground metallic objects. one of the primary drawbacks of this approach is that any scrap metal activates the alarm. as such, it represents an inefficient approach to the detection of land mines due to the high rate of false alarms [18]. gpr is used to transmit electromagnetic waves that penetrate the ground and sense the underground from the reflections that occur at discontinuities of the dielectric constant. due to its ability to penetrate the ground, ultra-wide band synthetic aperture radar (uwb sar) has emerged as a promising technology that can identify landmines in a large area of land without putting human lives at risk [19, 20]. however, like metal detectors, gpr can trigger false alarms due to the presence of irregularities in the soil, for instance rocks and roots; hidden markov models have been used to distinguish plastic-cased or completely nonmetallic landmines from such background [21-23]. a further issue with gpr is that it is not as effective in the detection of smaller mines in shallow locations, because the soil-surface reflection disguises their response [1, 24].
the x-ray backscattering approach is based on the notion that soil and mines have different attenuation, which enables landmine detection [25], although the x-ray generators employed in this approach are huge and heavy and require high amounts of power to achieve sufficient penetration. as such, this does not represent a portable method [26]. furthermore, as radiation is involved, its acceptance is limited. the millimeter-wave radar (mmwr) approach depends on the concept that the soil has low reflectivity and high emissivity at certain frequencies, while metals have the opposite characteristics [27, 28]. active mmwr employs a source of excitation, while passive mmwr relies purely on the temperature of the environment. as such, while mmwr represents a promising approach for the detection of metallic objects, it is not effective in the detection of plastic artifacts. iii. related work a number of signal processing methods have been put forward for improving landmine gpr system performance, including forward scattering radar classification [29], background subtraction, the clean algorithm [30], kalman filters [31], the likelihood ratio test [32], wavelet packet decomposition [33], and additional two-dimensional filtering [34]. the majority of these methods depend on estimating the background signal through the green's function or calculating a mean value for the unprocessed data collected by gpr and then subtracting the estimated background signal from the received signal. such methods have been widely used for gpr applications but are a compromise at best. an alternative approach that does not need background data has been suggested in [35]: rather than being dependent on background scene data, we can gate out ground reflections, estimate the corresponding parameters, and use this for modeling and subtracting wall contributions from the received data.
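the mean-subtraction idea shared by several of the cited background-estimation methods can be sketched on synthetic data (the b-scan layout and amplitudes below are assumptions for illustration, not data from any cited system):

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_samples = 50, 256

# synthetic b-scan: every trace shares the same ground reflection, while a
# weak target echo appears only under a few scan positions (values assumed)
background = np.zeros(n_samples)
background[40] = 5.0                         # strong air/soil reflection
bscan = np.tile(background, (n_scans, 1))
bscan += 0.01 * rng.standard_normal((n_scans, n_samples))   # sensor noise
bscan[20:25, 120] += 1.0                     # target echo under scans 20-24

# estimate the background as the mean trace over all scans and subtract it
cleaned = bscan - bscan.mean(axis=0)

# the strongest remaining sample now sits at the target, not the surface
row, col = np.unravel_index(np.abs(cleaned).argmax(), cleaned.shape)
print(col)   # 120
```

the compromise the text mentions is visible here too: the target leaks into the mean estimate, so subtraction also removes a fraction of the target energy at every scan position.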
gpr designed for the detection of buried landmines through multiple probes of the ground across the target area has a similar problem caused by clutter reflection, i.e. reflection from the ground's surface. this approach involves the application of matched filters for eliminating ground clutter, which can be estimated at the phase center of the scattering field. while gpr requires special filtering, the two challenges have two substantial differences. firstly, for gpr, clutter signals arise from echoes caused by the air/soil medium differences. this means the strong ground clutter will mask the landmines which are very close to the surface, while those buried deep down will not be shielded by clutter. secondly, as the clutter and landmine scattering signals overlap significantly in the frequency domain, filtering will also attenuate the landmine reflection. for numerous gpr applications, the inverse problem of imaging a whole medium is not practical. the inverse problem will generally be nonlinear even when the forward problem is linear. additionally, the inverse problem is generally ill-conditioned, and an inversion technique has to be applied to regularize the solution. in this case, we employ techniques centered on a matched filter methodology based on the dynamic green's function for the detection of significant changes in patterns within the cluttered environment, e.g. the appearance/disappearance of targets or target motion, with no knowledge of the background medium. iv. methodology the proposed dynamic green's function is based on the rf tomography imaging introduced in [15, 36]. the tsar model incorporates an electromagnetic source mounted on a uav located at position $r_t$ at a given instant.
the instant time can be determined by combining the information from the global positioning system (gps) and the radar signal to determine the exact distance from the ground. there are many methods to support accurate flight-level information, such as lidar. the electromagnetic field radiates toward the ground and constitutes the incident field $E_{inc}$ with respect to the target. the area of interest that needs to be reconstructed in the image is discretized, and more than one uav can scan it. the target is positioned at $r$, about 0.1 m below the strong ground clutter, and re-radiates in a manner comparable to the radiation from the airborne uav transmitter. this scattered field, denoted $E_s$, is recorded by a receiver at $r_r$. $E_{inc}$ and $E_s$ have the same polarization unless the target has the property of changing the polarization of the reflection. additionally, each object is assumed to be isotropic due to the ground environment, and $E_{inc}$ and $E_s$ together form the total electric field $E_{tot}$:
$E_{tot} = E_s + E_{inc}$ (1)
the incident field can be written through the green's function, knowing the aviation and sensor parameters, for given $r_t$, $r_r$, and $r$:
$E_{inc} = j\omega\mu \, G(r, r_t)$ (2)
the green's function $G(r, r_t)$ gives the calculated electromagnetic response at any point in the area of interest with respect to $r_t$, where $k$ is the wavenumber:
$G(r, r_t) = \left( I + \frac{\nabla\nabla}{k^2} \right) \frac{e^{-jk|r - r_t|}}{4\pi |r - r_t|}$ (3)
in order to obtain the dynamic green's function that removes the strong air-soil clutter, the propagation number needs to be updated according to the soil dielectric properties. the ground clutter range above the area of interest, $r_c$, is calculated as:
$r_c = \frac{c}{2\,\mathrm{PRF}}$ (4)
where prf is the pulse repetition frequency and c is the speed of light.
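the basic green's function evaluation can be sketched in scalar form; dropping the dyadic (i + ∇∇/k²) factor of eq. (3) is a simplification made for this sketch only, and the geometry values are the paper's stated uav altitude and burial depth:

```python
import numpy as np

def scalar_green(r_obs, r_src, k):
    # scalar free-space green's function exp(-jk|r - r'|) / (4*pi*|r - r'|);
    # the dyadic (i + grad grad / k^2) factor of eq. (3) is dropped here,
    # a simplifying assumption of this sketch, not the paper's full model
    d = np.linalg.norm(np.asarray(r_obs, float) - np.asarray(r_src, float))
    return np.exp(-1j * k * d) / (4 * np.pi * d)

c = 3e8
f = 2e9                        # the paper's 2 ghz operating frequency
k = 2 * np.pi * f / c          # wavenumber; wavelength c/f = 15 cm

tx = [0.0, 0.0, 2.4]           # uav transmitter 2.4 m above the ground
pixel = [0.0, 0.0, -0.1]       # target pixel 0.1 m underground
g = scalar_green(pixel, tx, k)
print(abs(g))                  # amplitude 1/(4*pi*2.5): geometric spreading
```

the phase of `g` carries the two-way path information that the matched filter exploits; the soil-modified wavenumber of the dynamic green's function would replace `k` for the underground segment of the path.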
the green's function $G_c(r_c, r)$ used to calculate the clutter gives the electromagnetic response at any point in the area of interest with respect to $r_c$:
$G_c(r_c, r) = \left( I + \frac{\nabla\nabla}{k_c^2} \right) \frac{e^{-jk_c |r_c - r|}}{4\pi |r_c - r|}$ (4)
the scattered field can be written in integral form with respect to the object function $O(r)$ and the total electric field $E_{tot}$ as follows:
$E_{sc}(\omega, r_t, r_r) = \iiint G(r_r, r) \, O(r) \, E_{tot}(\omega, r) \, dr$ (5)
it is not possible to solve for the scattered field $E_s$ from (5) because it is incorporated in $E_{tot}$, which makes (5) a nonlinear integral equation, as outlined in (1). the incident field can be calculated using (4), and the first-order born approximation can be applied to linearize. in addition, $O(r)$ is an unknown element within the imaging problem, and it is multiplied by the other unknown, $E_s$, which is incorporated in $E_{tot}$. applying the born approximation to the total electric field $E_{tot}$, i.e. substituting the incident field $E_{inc}$ for it, linearizes the integral equation:
$E_{sc}(\omega, r_t, r_r) = \iiint G(r_r, r) \, O(r) \, E_{inc}(\omega, r) \, dr$ (6)
substituting the incident field with (2) produces:
$E_{sc}(\omega, r_t, r_r) = j\omega\mu \iiint G(r, r_t) \, G(r_r, r) \, O(r) \, dr$ (7)
we now need to update the green's function in the integral with the clutter green's function:
$E_{sc}(\omega, r_t, r_r) = j\omega\mu \iiint G_c(r_c, r) \, G(r, r_t) \, G(r_r, r) \, O(r) \, dr$ (8)
discretizing (8) gives more flexibility for airborne processing, since the green's function is calculated as the scattering field is collected at different aviation levels and trajectories; dividing the area of interest into $P$ pixels:
$E_{sc}(\omega, r_t, r_r) \approx L\{O(r)\} \approx \sum_{p=1}^{P} G_c(r_c, r_p) \, G(r_p, r_t) \, G(r_r, r_p) \, O(r_p)$ (9)
equation (9) can be written in matrix form to provide an effective and rapid digital signal processing step:
$E_{sc} = L \cdot O$ (10)
to obtain the object function $O$, we solve (10) as an inverse problem, an ill-posed and ill-conditioned linear system representing the forward scattering model.
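the discretized forward model of (9)-(10) can be sketched with a scalar green's function stand-in; the clutter factor and the jωμ constant are omitted here for brevity, and the geometry (9 pixels, 12 snapshots) is an assumption of this sketch:

```python
import numpy as np

def green(d, k):
    # scalar stand-in for the dyadic green's functions of eq. (9) (assumption)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

k = 2 * np.pi * 2e9 / 3e8   # wavenumber at the 2 ghz operating frequency

# p = 9 pixels just below the surface, q = 12 multistatic tx/rx snapshots
pixels = np.array([[x, 0.0, -0.1] for x in np.linspace(-0.15, 0.15, 9)])
txs = np.array([[np.cos(a), np.sin(a), 2.4]
                for a in np.linspace(0.0, 2 * np.pi, 12, endpoint=False)])
rxs = np.roll(txs, 1, axis=0)   # offset receivers to form multistatic pairs

# build the matrix of eq. (10): L[q, p] = G(r_p, r_t) * G(r_r, r_p)
L = np.empty((len(txs), len(pixels)), dtype=complex)
for q, (tx, rx) in enumerate(zip(txs, rxs)):
    for p, x in enumerate(pixels):
        L[q, p] = green(np.linalg.norm(x - tx), k) * green(np.linalg.norm(x - rx), k)

O = np.zeros(len(pixels))
O[4] = 1.0                  # object function: a single scatterer at the center
E_sc = L @ O                # forward scattered-field vector, eq. (10)
print(E_sc.shape)           # (12,)
```

each row of `L` is one multistatic snapshot, which is exactly the structure the inversion in the next step has to undo.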
however, (9) can only be used to determine single measurements, as it relates to a particular transmitter location and orientation, receiver location and orientation, and frequency. in the event any of these parameters change, a new measurement needs to be obtained. as such, (9) needs to be modified in response to any variations by compiling a set of $q = 1, \ldots, Q$ airborne uav measurements for each of the $P$ pixels as [8, 9]:
$e_q = D_{q,1} o_1 + D_{q,2} o_2 + \cdots + D_{q,P} o_P$ (11)
each $D$ value in (11) can be calculated by appropriately reorganizing and modifying (10). expanding the equation above to all pixels in the area of interest and all potential measurement patterns $Q$, we have:
$\underbrace{\begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_Q \end{bmatrix}}_{\text{measurement vector (scattered field)}} = \underbrace{\begin{bmatrix} D_{1,1} & D_{1,2} & \cdots & D_{1,P} \\ D_{2,1} & D_{2,2} & \cdots & D_{2,P} \\ \vdots & \vdots & \ddots & \vdots \\ D_{Q,1} & D_{Q,2} & \cdots & D_{Q,P} \end{bmatrix}}_{\text{dynamic green's function (matching filter)}} \cdot \underbrace{\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_P \end{bmatrix}}_{\text{contrast vector (image)}}$ (12)
where $e$ is the measured scattering field data collected at various uav aviation levels, the matched filter $H$ is the calculated green's function response for the transmitter and the receiver at each pixel in the area-of-interest grid, and $w$ is the unknown object function. the inversion to recover the object function is given by:
$w = H^{-1} e$ (13)
the inversion of the ill-posed matrix in (12) can be obtained using the conjugate gradient (cg) algorithm, which is much faster compared to other inversion algorithms such as algebraic reconstruction techniques (art). the inversion algorithm can be run after the prescanning of the area of interest for offline processing.
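a minimal sketch of the cg inversion of (12)-(13) on a toy system, solving the normal equations rather than forming $H^{-1}$ explicitly (the problem sizes and the right-hand side are illustrative assumptions, not the paper's data):

```python
import numpy as np

def cg_normal_equations(H, e, n_iter=60):
    # conjugate gradient on H^H H w = H^H e: recovers the contrast vector of
    # eq. (13) without ever forming H^-1 for the ill-posed system of eq. (12)
    A = H.conj().T @ H
    b = H.conj().T @ e
    w = np.zeros(H.shape[1], dtype=complex)
    r = b.copy()                 # residual for the zero initial guess
    p = r.copy()
    rs = np.vdot(r, r).real
    rs0 = rs
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap).real
        w += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new <= 1e-24 * rs0:   # relative residual small enough: stop
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

# toy problem: q = 40 snapshot measurements, p = 25 pixels, one bright pixel
rng = np.random.default_rng(2)
H = rng.standard_normal((40, 25)) + 1j * rng.standard_normal((40, 25))
w_true = np.zeros(25, dtype=complex)
w_true[12] = 1.0
e = H @ w_true
w_hat = cg_normal_equations(H, e)
print(int(np.abs(w_hat).argmax()))   # 12
```

on real ill-conditioned data, stopping cg early acts as a regularizer, which is one reason it is preferred here over an explicit inverse.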
real-time processing is doable due to the independent measurements in discrete-form analysis, where the pixels of the area of interest are updated at each scan. v. simulation results the simulation of landmine detection using uav tsar was done using feko, a computational em simulation software tool. in this simulation, we first need to check the accuracy of the calculated dynamic green's function, i.e. whether it can really estimate the ground clutter in order to update the matched filter. in figure 1, we calculated the scattering field using the same parameters for the green's function and the feko simulation, which match in the phase and amplitude of the reflected signal. the matching between the calculated and simulated scattering fields indicates an exact update of the dynamic green's function in the system to predict the effect of the clutter on the received signal. the accuracy of the calculated green's function will eliminate the clutter by updating the matched filter. furthermore, obtaining the range distance between the uav and the ground improves the accuracy of the calculated clutter response in the matching filter. we use the simulated radar scattering signal received at l-band with 500 mhz bandwidth to obtain the range distance. the exact range distance of the radar and the measured range distance are shown in figure 2, which can provide the accurate calculation of the clutter response using the green's function. we first placed the radar at 2.4 m and measured the range distance from the received scattering field. then, we increased the aviation level from the ground by 0.4 m to find whether the measured range matches the flying level. as shown in figure 2, the flying level was increased by 0.4 m for each position and the measured radar range matched the exact range. for landmine imaging, we show two scenarios with the same number of sensors at two different flying patterns. the operation frequency of both scenarios was 2 ghz. fig.
1. comparing the accuracy of the calculated scattering field. fig. 2. determining the flying ground level using uav radar. in the first scenario, the scattering field from a constant circular-trajectory flying pattern was simulated as shown in figure 3, while snapshotting the received scattering field at 969 locations around the measurement domain to mimic the airborne uav's data collection. the measurement domain contains three cylinders placed 0.075 m from the center, as shown in figure 4. each cylinder has a diameter of 0.0375 m and a height of 0.05 m. after the data of the single 2 ghz frequency were stored in a scattering field vector at multiple snapshot locations, we calculated the dynamic green's function based on the distance obtained from the radar range at 500 mhz bandwidth at l-band, to be added into the matching filter matrix. since the inversion is ill-posed, the inversion between the scattering field vector and the matching filter matrix was done using the cg algorithm. the reconstructed xy-plane image of the measurement domain is shown in figure 5, in which the pixel size is 0.00375 m. fig. 3. uav collecting data at constant flying altitude. fig. 4. three cylinders in the measurement domain using feko simulation. in the second scenario, we collected snapshot data with an erratic flying pattern, as shown in figure 6. the snapshot data of the scattering field were collected on a circular trajectory from different flying levels at 969 different locations around the measurement domain. we used a multistatic scanning mode to apply different uav scans at the same time. the operation frequency is 2 ghz with a wavelength of 15 cm. the measurement domain contains five cylinders placed in the scene, as shown in figure 7.
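the flying-level (range) measurement used above — estimating the uav-to-ground distance from the received echo with 500 mhz of bandwidth — can be sketched with a matched-filter delay estimate; the pulse shape and sample rate are assumptions of this sketch:

```python
import numpy as np

c = 3e8
B = 500e6                      # ranging bandwidth: resolution c/(2B) = 0.3 m
fs = 4e9                       # sample rate (assumed)
t = np.arange(0.0, 100e-9, 1.0 / fs)

true_range = 2.4                         # first uav altitude in the paper
delay = 2 * true_range / c               # two-way travel time: 16 ns

tx_pulse = np.sinc(B * (t - 10e-9))      # band-limited reference pulse (assumed shape)
rx = np.sinc(B * (t - 10e-9 - delay))    # received echo: the same pulse, delayed

# matched-filter (cross-correlation) estimate of the echo delay
corr = np.correlate(rx, tx_pulse, mode="full")
lag = corr.argmax() - (len(t) - 1)       # samples of delay between rx and tx
measured_range = c * (lag / fs) / 2
print(round(measured_range, 2))          # 2.4
```

the same estimate repeated at each 0.4 m altitude step reproduces the staircase comparison of figure 2, with the 0.3 m range resolution setting the attainable accuracy.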
each cylinder has a diameter of 0.0375 m, and they are spaced equally by 0.075 m on the x-axis and y-axis. the x-y plane in figure 8 is the cross-section of the z-plane at 0, and the pixel size is 0.00375 m. the objects in the measurement domain appear as weak reflections due to the distance between the transmitters and the receivers, the varying uav flying pattern, and the number of snapshot measurements, as shown in figure 8. fig. 5. the reconstructed tsar image of three cylinders in the measurement domain. fig. 6. the erratic flying pattern scenario of the uav. fig. 7. five cylinders in the measurement domain using feko simulation. after collecting the scattering field, the inversion was done using the cg algorithm to obtain the object function from the matching filter matrix, which was calculated for each snapshot position and direction through the green's function. the inversion produced the final reconstructed image, which can be extended into a 3-d model image. fig. 8. the reconstructed tsar image of the measurement domain. vi. conclusion in the middle east, there are more than 50,000 landmines buried randomly along the saudi-yemeni border, posing a fatal threat to civilians. unfortunately, most of the buried mines are nonmetallic, which are challenging to detect in a large area by conventional methods. a large area needs to be prescreened for landmines before it becomes safe to use. the synthetic array antenna is formed by a radar sensor onboard an aircraft to increase the airborne radar aperture length. the image formation algorithm generates the output image based on the collected signals, where the matched filter applies autocorrelation to the input signal.
overall, sar treats the clutter as a processed signal to form an image. an appropriate sar image requires high bandwidth and aperture parameters to obtain the desired resolution. on the other hand, soil losses degrade the penetration depth depending on the frequency and bandwidth for underground sensing applications using sar techniques. in the tomographic sar mode, the data are collected in a multistatic geometry exploiting spectral/spatial diversity. furthermore, the multistatic mode decouples the relation between the frequency forming the image and the radiated frequency. tsar can be formed using an ultranarrow band (unb) at l-band to give more resolution than conventional sar, where unb reduces the signal attenuation for more penetration depth. for the uav's tsar, the measurement vector presents the collected signal in multistatic mode, the matched filter is the electric field response at any flying location and direction, and the contrast vector (the object function) is the pixel weight. the idea is based on exploiting microwave signals to create an image of an object or to prescreen the underground. the operation frequency depends on the application (e.g. more microwave image resolution, less penetration depth), where the imaging is based on tomography techniques. the tomographic sar techniques solve the contrast function of an object or land (the image) as an inverse problem considering the received signal and its response in the medium. for our application, we need less bandwidth to increase the penetration depth and obtain higher resolution. the uav's tsar techniques record the aviation information and link it to the recorded scattering field as discrete data. these methods exploit spectral/spatial diversity; soil losses are always a problem for penetrating signals, but unb gives more degrees of freedom at the radiation power level. for more applications, tomographic sar can be mounted on unmanned ground vehicles (ugvs).
finally, tsar applications can also extend to remote sensing applications. references [1] l. robledo, m. carrasco, and d. mery, “a survey of land mine detection technology,” international journal of remote sensing, vol. 30, no. 9, pp. 2399–2410, may 2009, doi: 10.1080/01431160802549435. [2] k. schreiner, “landmine detection research pushes forward, despite challenges,” ieee intelligent systems, vol. 17, no. 2, pp. 4–7, apr. 2002, doi: 10.1109/mis.2002.999212. [3] p. gao and l. m. collins, “a two-dimensional generalized likelihood ratio test for land mine and small unexploded ordnance detection,” signal processing, vol. 80, no. 8, pp. 1669–1686, aug. 2000, doi: 10.1016/s0165-1684(00)00100-6. [4] r. brooks and g. di chiro, “principles of computer assisted tomography (cat) in radiographic and radioisotopic imaging,” physics in medicine and biology, vol. 21, no. 5, pp. 689–732, sep. 1976, doi: 10.1088/0031-9155/21/5/001. [5] r. m. mersereau and a. v. oppenheim, “digital reconstruction of multidimensional signals from their projections,” proceedings of the ieee, vol. 62, no. 10, pp. 1319–1338, oct. 1974, doi: 10.1109/proc.1974.9625. [6] d. c. munson, j. d. o’brien, and w. k. jenkins, “a tomographic formulation of spotlight-mode synthetic aperture radar,” proceedings of the ieee, vol. 71, no. 8, pp. 917–925, aug. 1983, doi: 10.1109/proc.1983.12698. [7] a. c. kak and m. slaney, principles of computerized tomographic imaging. ieee press, 1988. [8] m. d. desai and w. k. jenkins, “convolution backprojection image reconstruction for spotlight mode synthetic aperture radar,” ieee transactions on image processing, vol. 1, no. 4, pp. 505–517, oct. 1992, doi: 10.1109/83.199920. [9] o. ponce et al., “fully polarimetric high-resolution 3-d imaging with circular sar at l-band,” ieee transactions on geoscience and remote sensing, vol. 52, no. 6, pp. 3074–3090, jun. 2014, doi: 10.1109/tgrs.2013.2269194. [10] l. wei, t. balz, l. zhang, and m.
liao, “a novel fast approach for sar tomography: two-step iterative shrinkage/thresholding,” ieee geoscience and remote sensing letters, vol. 12, no. 6, pp. 1377–1381, jun. 2015, doi: 10.1109/lgrs.2015.2402124. [11] s. bertoldo, c. lucianaz, m. allegretti, o. rorato, a. prato, and g. perona, “an operative x-band mini-radar network to monitor rainfall events with high time and space resolution,” engineering, technology & applied science research, vol. 2, no. 4, pp. 246–250, aug. 2012. [12] s. bertoldo, c. lucianaz, and m. allegretti, “on the use of a 77 ghz automotive radar as a microwave rain gauge,” engineering, technology & applied science research, vol. 8, no. 1, pp. 2356–2360, feb. 2018. [13] c. stringham, “gpu processing for uas-based lfm-cw stripmap sar,” isprs journal of photogrammetry and remote sensing, vol. 80, pp. 1107–1115, dec. 2014, doi: 10.14358/pers.80.12.1107. [14] h. sheng, k. wang, x. liu, and j. li, “a fast raw data simulator for the stripmap sar based on cuda via gpu,” in 2013 ieee international geoscience and remote sensing symposium igarss, melbourne, vic, australia, jul. 2013, pp. 915–918, doi: 10.1109/igarss.2013.6721309. [15] l. lo monte, d. erricolo, f. soldovieri, and m. c. wicks, “radio frequency tomography for tunnel detection,” ieee transactions on geoscience and remote sensing, vol. 48, no. 3, pp. 1128–1137, mar. 2010, doi: 10.1109/tgrs.2009.2029341. [16] d. w. paglieroni, d. h. chambers, j. e. mast, s. w. bond, and n. reginald beer, “imaging modes for ground penetrating radar and their relation to detection performance,” ieee journal of selected topics in applied earth observations and remote sensing, vol. 8, no. 3, pp. 1132–1144, mar. 2015, doi: 10.1109/jstars.2014.2357718. [17] m. sato, “principles of mine detection by ground-penetrating radar,” in k.
furuta and j. ishikawa, eds., anti-personnel landmine detection for humanitarian demining: the current situation and future direction for japanese research and development. london, uk: springer-verlag, 2009. [18] j. dula, a. zare, d. ho, and p. gader, “landmine classification using possibilistic k-nearest neighbors with wideband electromagnetic induction data,” in detection and sensing of mines, explosive objects, and obscured targets xviii, jun. 2013, vol. 8709, p. 87091f, doi: 10.1117/12.2016490. [19] l. carin, n. geng, m. mcclure, j. sichina, and l. nguyen, “ultra-wide-band synthetic-aperture radar for mine-field detection,” ieee antennas and propagation magazine, vol. 41, no. 1, pp. 18–33, feb. 1999, doi: 10.1109/74.755021. [20] j. andrieu et al., “land mine detection with an ultra-wideband sar system,” presented at the detection and remediation technologies for mines and minelike targets vii, orlando, fl, usa, aug. 2002, vol. 4742, pp. 237–247, doi: 10.1117/12.479094. [21] p. d. gader, m. mystkowski, and y. zhao, “landmine detection with ground penetrating radar using hidden markov models,” ieee transactions on geoscience and remote sensing, vol. 39, no. 6, pp. 1231–1244, jun. 2001, doi: 10.1109/36.927446. [22] a. manandhar, p. a. torrione, l. m. collins, and k. d. morton, “multiple-instance hidden markov model for gpr-based landmine detection,” ieee transactions on geoscience and remote sensing, vol. 53, no. 4, pp. 1737–1745, apr. 2015, doi: 10.1109/tgrs.2014.2346954. [23] o. missaoui, h. frigui, and p. gader, “land-mine detection with ground-penetrating radar using multistream discrete hidden markov models,” ieee transactions on geoscience and remote sensing, vol. 49, no. 6, pp. 2080–2099, jun. 2011, doi: 10.1109/tgrs.2010.2090886. [24] j. macdonald and j. r. lockwood, alternatives for landmine detection. santa monica, ca, usa: rand, 2003. [25] j. van den heuvel and f.
fiore, “simulation study of x-ray backscatter imaging of pressure-plate improvised explosive devices,” in spie defense, security, and sensing, baltimore, maryland, usa, apr. 2012. [26] h. kasban, o. zahran, s. m. elaraby, m. el-kordy, and f. e. abd elsamie, “a comparative study of landmine detection techniques,” sensing and imaging: an international journal, vol. 11, no. 3, pp. 89– 112, sep. 2010, doi: 10.1007/s11220-010-0054-x. [27] l. yujiri, s. w. fornaca, b. i. hauss, m. shoucri, and s. talmadge, “detection of metal and plastic mines using passive millimeter waves,” presented at the detection and remediation technologies for mines and minelike targets, orlando, fl, united states, apr. 1996, vol. 2765, doi: 10.1117/12.241235. [28] h. ozturk et al., “millimeter-wave detection of landmines,” in spie defence, security, and sensing, baltimore, maryland, united states, may 2013, doi: 10.1117/12.2018026. [29] m. e. a. kanona, m. g. hamza, a. g. abdalla, and m. k. hassan, “a review of ground target detection and classification techniques in forward scattering radars,” engineering, technology & applied science research, vol. 8, no. 3, pp. 3018–3022, jun. 2018. [30] e. karpat, “clean technique to classify and detect objects in subsurface imaging,” international journal of antennas and propagation, vol. 2012, dec. 2012, doi: 10.1155/2012/917248, art no. 1005000. [31] d. carevic, “kalman filter-based approach to target detection and targetbackground separtion in ground-penetrating radar data,” presented at the detection and remediation technologies for mines and minelike targets iv, orlando, fl, usa, apr. 1999. [32] g. nadim, “clutter reduction and detection of landmine objects in ground penetrating radar data using likelihood method,” in 3rd international symposium on communications, control and signal processing, st julians, malta, mar. 2008, pp. 98–106, doi: 10.1109/isccsp.2008.4537200. [33] d. 
Engineering, Technology & Applied Science Research Vol. 8, No. 3, 2018, 2958-2962
www.etasr.com
Tunio et al.: Performance and Emission Analysis of a Diesel Engine Using Linseed Biodiesel Blends

Performance and Emission Analysis of a Diesel Engine Using Linseed Biodiesel Blends

M. M. Tunio, Department of Energy and Environment Engineering, Quaid-e-Awam University of Engineering, Science and Technology, Nawabshah, Pakistan, mureed.tunio@gmail.com
M. R. Luhur, Department of Mechanical Engineering, Quaid-e-Awam University of Engineering, Science and Technology, Nawabshah, Pakistan, luhur@quest.edu.pk
Z. M. Ali, Department of Chemical Engineering, Mehran University of Engineering and Technology, Jamshoro, Pakistan, zeenat.ali@faculty.muet.edu.pk
U. Daher, Department of Chemistry, Govt. Degree (Boys) College, Sakrand, Pakistan, oooga82@gmail.com

Abstract—The core objective of this study is to examine the suitability of linseed for biodiesel production.
The performance of an engine at different proportions of linseed blends with petro-diesel and the emission rates were investigated. Initially, linseed biodiesel was produced through the transesterification process, and then it was mixed with petro-diesel fuel (D100) at volumetric ratios of 10% (LB10), 20% (LB20), and 30% (LB30). The properties of linseed biodiesel and its blends were investigated and compared with petro-diesel properties with reference to ASTM standards. It was observed that the fuel properties of the produced biodiesel are within ASTM permissible limits. The specific fuel consumption (SFC) of the LB10 blend was found to be lower than that of LB20 and LB30. The SFC of D100 is slightly lower than that of all the blends. The brake thermal efficiency (BTE) of LB30 is greater than that of pure diesel D100 at maximum load, and greater than that of LB10 and LB20. The heat dissipation rate in all linseed blends was found to be less than that of D100. Carbon monoxide, carbon dioxide and NOx emissions of the linseed blends are mostly lower in comparison with D100's. Among all blends, LB10 was found to be the most suitable alternative fuel for diesel engines and can be blended with petro-diesel without engine modifications. It can be concluded that the cultivation and production of linseed in Pakistan is very promising; therefore, it is recommended that the proper exploitation and use of linseed for energy production be encouraged through the pertinent agencies of Pakistan.

Keywords–linseed oil; transesterification; diesel-biodiesel blends; engine performance; emission analysis

I. INTRODUCTION

The demand for energy sources is increasing day by day due to population growth, urbanization and industrialization. Fossil fuels are conventional energy sources and have been used for power production for a long time. However, they are finite sources of energy and cannot be replenished once consumed. Environmental consequences are also major drawbacks of fossil fuel consumption.
Therefore, it is inevitable to explore alternative energy sources, which must be environmentally friendly, to fulfill the growing energy demand [1]. Out of all agricultural sources, linseed is preferable because of its reasonable availability and easy accessibility, especially in the Sindh and Punjab provinces of Pakistan [2]. Pakistan is an energy deficient country, as demand is larger than production capacity. Energy shortage and frequent load shedding have created a chaotic situation in every corner of the country [3]. The government of Pakistan is encouraging and promoting research and efforts to utilize renewable energy sources to supplement fossil fuels. Pakistan has a good potential of edible and non-edible crops for biodiesel (bio-energy) production. Among all non-edible feedstocks, linseed is recognized to be one of the most suitable sources for biodiesel production as it is an oil-seed bearing plant [4]. Linseed oil is a non-edible vegetable oil and is considered a potential alternative fuel for compression ignition engines. It is a sulfur-free, non-aromatic, non-toxic, and oxygenated oil. Moreover, the Pakistani government is eager to introduce blended petroleum (with biodiesel) at a national level in order to meet the increasing energy demand. It was decided that 5% by volume of diesel would be blended with biodiesel up to 2015 and that the ratio would gradually increase up to 10% by 2025 [5]. However, this has not been achieved yet due to the delay in reaching large scale biodiesel production, low foreign investment and the lack of infrastructure facilities. Linseed (Linum usitatissimum L.), locally known as "alsi", is an annual winter plant grown for fiber and oil. It is a herbaceous annual-type plant that is cultivated in 59 countries for its fiber as well as its oil [6]. Linseed contains oil at 35-45% by weight and is high in unsaturated constituents [7]. In Pakistan, linseed is cultivated on marginal and sub-marginal lands under irrigated conditions.
Linseed is cultivated in the Punjab and Sindh provinces on 762 and 2929 hectares of land respectively, and its whole-country cultivation was around 3691 hectares during 2014-15. The quantity of linseed production was 2622 tons during 2014. Its yield was 758kg per hectare in Sindh and 697kg per hectare in Punjab, while the national average yield during 2014-15 was 710kg/ha [8]. Various chemical properties and characteristics of the linseed seeds, the extracted oil and the biodiesel produced decide its suitability for the replacement of petro-diesel in internal combustion engines and other industrial applications. This study aimed to produce linseed biodiesel using indigenous linseed seeds and to examine the performance of a diesel engine using its different blends with petro-diesel, along with the exhaust emission rates. The study includes physico-chemical properties of the produced biodiesel, such as density, flash point, kinematic viscosity, cetane number, pour point, and calorific value. The performance parameters, like brake thermal efficiency (BTE), fuel consumption, and heat carried by the coolant, were evaluated at varying loads. Regarding exhaust gas emissions, nitrogen dioxide (NO2), nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), PM2.5 and PM10 were also analyzed. The emission of particulate matter is known as a major contributor to global warming as well as a critical air pollutant [9]. The objective of this study was to draw attention to the synthesis of linseed biodiesel, which is an excellent alternative to petroleum diesel, especially in Pakistan.

II. MATERIALS AND METHODS

The experimental work produced biodiesel using refined linseed oil as raw material. It also investigated the performance and emission characteristics of linseed methyl ester using different blends and compared them with petro-diesel. The linseeds were purchased from the local markets of Hyderabad, Sindh, Pakistan. Linseed oil can be obtained by different methods like solvent extraction, enzymatic extraction and the screw press method. The screw press method is preferable as it gives higher yields, so it was adopted [10, 11]. The crude linseed oil was extracted with the help of the screw press mechanical expeller of the biofuel laboratory, Department of Energy and Environment Engineering, QUEST, Nawabshah. At first, the linseed seeds were cleaned in order to remove the impurities, foreign particles and adhered agrochemical sprays remaining on the outer seed surface. The extracted oil was collected in air-sealed glass bottles which had been sterilized, washed with double distilled water and oven dried. The oil's free fatty acid level was reduced through esterification.

A. Transesterification Process

Linseed biodiesel was produced through transesterification. It was produced using a five-hole lid reactor equipped with a mechanical stirrer, a temperature controller and a thermometer with cork reflux condenser. Various experiments were made at different molar ratios to obtain the maximum biodiesel yield. The mixture was agitated with the help of the mechanical stirrer at a speed of 600rpm under 55°C for 30 minutes. At the end of the transesterification, the crude methyl ester was transferred into a separating funnel for 24 hours. The crude methyl ester was then washed with water and dried to remove its moisture content and unwanted reagents.

B. Blend Preparation, Engine Performance and Exhaust Emissions

The blends of varying ratios of linseed biodiesel and petro-diesel were prepared at room temperature in the biofuel laboratory. The linseed biodiesel was blended with petro-diesel at volumetric ratios of 10% (LB10), 20% (LB20) and 30% (LB30), as shown in Figure 1. The different blends of linseed biodiesel were tested on a slow speed diesel engine (model DWE-6/10-JS-DV). Its specifications are given in Table I [12].

Fig. 1. Linseed biodiesel blends

TABLE I. DIESEL ENGINE SPECIFICATIONS [12]

Parameter | Specification
Type | Horizontal
Model | DWE-6/10-JS-DV
Number of cylinders | 1
Bore | 80mm
Stroke | 95mm
Compression ratio | 23:1
Starting method | Manual
Output | 8.5 PS
Maximum load | 3kW
Rotational speed | 2200rpm
Cooling type | Water cooled
Dynamometer | Eddy current electro brake
Flow meter | Float type

The engine comprises several systems, such as the lubricating system, fuel supply system and water cooling system, and several sensors which are attached to measuring devices in an integrated manner [12]. The load on the engine was increased gradually (step: 0.75kW) from zero to the maximum of 3kW. The increase of load on the engine decreases its rpm from 2200 to 1200. The power output (torque) was measured by an eddy current electric dynamometer. Fuel consumption was measured on a volumetric basis [12, 13]. Performance parameters like speed, torque, brake specific fuel consumption (BSFC), lubrication oil temperature, water inlet and outlet temperatures, suction and exhaust pressures, exhaust gas temperature and thermal efficiency were examined. Moreover, a Testo 350 XL gas analyzer was used for the analysis of various flue gas emissions like carbon monoxide (CO), carbon dioxide (CO2), and nitrogen oxides (NOx) [14]. PM2.5 and PM10 were determined by a particulate meter, Aerocet model No. 531S.

III. RESULTS AND DISCUSSION

A. Fuel Properties and Engine Performance

The results of the properties of the different blends of the produced linseed biodiesel are shown in Table II. The produced biodiesel was found to have lower viscosity and density, and higher cetane number and flash point. In blend LB10, the maximum cetane number was found to be 53.5, whereas in LB20 it was 52.5, which is more than the ASTM standard. The flash point was discovered to be higher in LB20 and LB30 than in petro-diesel. Sulfur was found to be lower in all blends than in 100% diesel. The calorific values were within permissible limits. Generally, the fuel properties of the LB10 blend were found better in terms of lower kinematic viscosity, higher cetane number and even elevated calorific value. LB10 was found to be a more suitable and feasible blend compared to LB20 and LB30. All blend samples were found better than petro-diesel. The results were similar to the ones reported in [15, 16]. Engine performance, or brake power output, was examined by load variation.

TABLE II. FUEL PROPERTIES OF LINSEED BIODIESEL AND BLENDS

Quality parameters | Allowable limits | Diesel 100% | LB 100% | LB 10% | LB 20% | LB 30%
Density at 15°C (kg/lit) | 0.88 | 0.8401 | 0.8809 | 0.8945 | 0.8509 | 0.8561
Kinematic viscosity at 40°C (mm2/sec) | 1.9–6.0 | 3.06 | 4.17 | 3.57 | 3.67 | 3.81
Sulfur (%wt) | 0.05 max | 0.735 | 0.0093 | 0.125 | 0.113 | 0.1007
Flash point (°C) | 130 min | 74 | 172 | 74 | 78 | 81
Acid number (mgKOH/gm) | 0.80 max | 0.249 | 1.22 | 1.48 | 1.59 | 1.64
Pour point (°C) | -15 to +5 | 0 | -3 | -9 | -9 | -6
Cetane number | 47 min | 52 | 45 | 53.5 | 52.5 | 50.5
Calorific value (MJ/kg) | 37.5–42.80 | 44.2 | 42.85 | 41.8 | 41.5 | 40.8

Fig. 2. Brake power versus brake specific fuel consumption

The variation in specific fuel consumption (SFC) versus brake power for the different fuel blends is shown in Figure 2. In general, at 0.75kW power, SFC was found to be maximum, and it decreased with the increase of brake power. However, SFC in LB10 was found to be less compared to LB20 and LB30. At the maximum brake power, D100 performed well with respect to specific fuel consumption.

Figure 3 shows the deviation of BTE versus brake power output for the different blends of linseed biodiesel. BTE was found to increase with the increase of brake power. The BTE of LB30 was found greater than that of pure diesel D100, LB10 and LB20 at maximum load. The LB30 blend could be practically applied in internal combustion (IC) engines due to its higher thermal efficiency. Figure 4 shows the variation of heat carried out by the coolant (HC) versus brake power for the different blends. The heat dissipation rate was found to be less in all linseed blends than in D100. At zero load, LB10 recorded less than LB20 and LB30, whereas at full load LB30 had less than LB20 and LB10. This reveals that the engine can run smoothly, without overheating, with the use of linseed biodiesel blends. Lubrication oil consumption could also be less; therefore, it would be economical for the operation of IC engines.

Fig. 3. Brake power versus brake thermal efficiency for the test fuels

Fig. 4. Brake power versus heat carried out by coolant for the test fuels

B. Exhaust Emissions

Table III shows the exhaust emissions, such as CO, CO2, NOx, NO, NO2, PM2.5 and PM10, by load. In this study, the exhaust emissions of the biodiesel blends were compared with diesel's. Among vehicular fuels, petro-diesel produces slightly larger quantities of particulate matter, which consists of carbonaceous material [17]. As a result, by the use of alternate biofuels the environmental impacts can be reduced. Using diesel, PM2.5 was recorded at 0.09mg/m3, whereas in LB10, LB20 and LB30 the emissions were 0.002, 0.040 and 0.010mg/m3 respectively. Similarly, PM10 in diesel was observed at 2.157mg/m3, while in LB10, LB20 and LB30 it was 1.356, 1.308 and 0.732mg/m3 respectively. Generally, biodiesel's PM emissions are lower than diesel's. The PM formation process mainly occurs in fuel-rich zones and at high temperatures. Since biodiesel contains more oxygen than diesel, PM emission is thereby decreased [18]. CO formation is governed by a mixture of temperature and unburned flue gases, which together control the rate of fuel decomposition and oxidation [19]. All biodiesel blends provide lower CO emissions compared to petro-diesel. The LB10 blend gave slightly lower emissions than LB20 and LB30. Similar results were also reported in [19, 20]. The percentage of carbon dioxide (CO2) emission was found to increase with the increase of the biodiesel ratio at full load condition, but remained lower than that of petro-diesel. Nitrogen oxides (NOx) are an exhaust emission of diesel engines. They can create health hazards when inhaled and can cause many diseases, like tuberculosis, severe headache, respiratory problems, lung cancer, nausea, skin cancer etc. [21]. The NOx emissions of all blends were found to be lower than those of petro-diesel. Among all blends, the LB20 emission was much lower than those of LB10 and LB30. This reveals that linseed biodiesel in blend form is feasible for NOx reduction. Similarly, the other pollutant results, like NO2, NO and CO2, remained lower than those of 100% diesel.

TABLE III. COMPARISON OF EXHAUST EMISSIONS (VALUES AT 25 / 50 / 75 / 100% ENGINE LOAD)

Parameter | Diesel | LB10 | LB20 | LB30
PM2.5 (mg/m3) | 0.14 / 0.16 / 0.66 / 0.60 | 0.02 / 0.04 / 0.012 / 0.002 | 0.02 / 0.03 / 0.005 / 0.04 | 0.005 / 0.029 / 0.056 / 0.010
PM10 (mg/m3) | 0.87 / 0.84 / 1.52 / 2.15 | 0.52 / 0.73 / 0.31 / 1.35 | 0.42 / 0.58 / 0.74 / 1.30 | 0.36 / 0.39 / 0.42 / 0.73
NO2 (ppm) | 9.30 / 8.10 / 8.88 / 8.98 | 8.50 / 3.10 / 1.20 / 0.05 | 6.70 / 1.50 / 0.70 / 0.40 | 1.80 / 3.90 / 0.20 / 0.15
NOx (ppm) | 98.0 / 79.0 / 37.0 / 77.0 | 48.0 / 59.0 / 65.0 / 30.0 | 57 / 66.0 / 63.0 / 11.0 | 46.0 / 88.0 / 45.0 / 15.0
CO (ppm) | 425 / 480 / 510 / 530 | 182 / 188 / 269 / 365 | 161 / 200 / 201 / 328 | 210 / 235 / 264 / 372
NO (ppm) | 65.0 / 73.0 / 70.0 / 68.0 | 40.0 / 55.0 / 64.0 / 29.0 | 51.0 / 65.0 / 63.0 / 60.0 | 45.0 / 55.0 / 65.0 / 60.0
CO2 (%) | 4.2 / 3.4 / 3.5 / 4.82 | 1.69 / 2.17 / 2.65 / 2.93 | 1.70 / 2.34 / 2.73 / 3.44 | 1.59 / 2.36 / 3.00 / 3.68

IV.
CONCLUSIONS

Linseed biodiesel was produced through the transesterification process using indigenous linseeds. The produced biodiesel was blended with petro-diesel fuel (D100) at volumetric ratios of 10% (LB10), 20% (LB20), and 30% (LB30). The fuel properties of the produced biodiesel were found to be within ASTM permissible limits. The specific fuel consumption of the LB10 blend was found to be less than that of LB20 and LB30. The BTE of LB30 is greater than that of pure diesel D100 at maximum load. The heat dissipation rate was found to be less in all linseed blends than in D100. This reveals that diesel engines can run smoothly, without overheating, with the use of linseed blends. Both particulate matter results, PM2.5 and PM10, were investigated, and it was found that PM emission is drastically less than petro-diesel's at all loads. The CO2 and NOx emissions of the linseed blends were found to be lower when compared to petro-diesel fuel. Among all blends, LB10 was found to be the most suitable alternative fuel for diesel engines and can be blended with petro-diesel without engine modifications. It was also observed that the cultivation and production of linseed in Pakistan is very promising; therefore, it is recommended that the proper exploitation and use of linseed for energy production be encouraged by the relevant agencies of Pakistan.

REFERENCES
[1] S. Kumar, A. Pal, A. Baghel, "An experimental analysis of biodiesel production from linseed oil", International Journal of Engineering Technology, Management and Applied Sciences, Vol. 3, No. 2, pp. 133-140, 2015
[2] S. Ali, M. A. Cheema, M. A. Wahid, A. Sattar, M. F. Saleem, "Comparative production potential of linola and linseed under different nitrogen levels", Crop & Environment, Vol. 2, pp. 33-36, 2011
[3] M. Asif, "Sustainable energy options for Pakistan", Renewable and Sustainable Energy Reviews, Vol. 13, No. 4, pp. 903-909, 2009
[4] S. A. R. Kazmi, A. H. Solangi, S. N. A. Zaidi, Jatropha curcas L.
Cultivation Experience in Karachi Pakistan, joint study preliminary report of Pakistan Agricultural Research Council and Pakistan State Oil, 2003
[5] A. B. Awan, Z. A. Khan, "Recent progress in renewable energy–remedy of energy crisis in Pakistan", Renewable and Sustainable Energy Reviews, Vol. 33, pp. 236-253, 2014
[6] Food and Agriculture Organization of the United Nations, Food and Agriculture Organization Corporate Statistical Database, available at: http://www.fao.org/faostat/en/#home, accessed: 03/02/2018
[7] F. Ullah, A. Bano, S. Ali, "Optimization of protocol for biodiesel production of linseed (Linum usitatissimum L.) oil", Polish Journal of Chemical Technology, Vol. 15, No. 1, pp. 74-7, 2013
[8] M. Amjad, Oilseed Crops of Pakistan, Pakistan Agricultural Research Council, Islamabad, 2014
[9] C.-J. Ruan, W.-H. Xing, J. A. T. da Silva, "Potential of five plants growing on unproductive agricultural lands as biodiesel resources", Renewable Energy, Vol. 41, pp. 191-199, 2012
[10] P. Beerens, "Screw-pressing of jatropha seeds for fuelling purposes in less developed countries", MSc Thesis, Eindhoven University of Technology, Ministerio de Ambiente y Energía, 2007
[11] M. M. Tunio, S. R. Samo, Z. M. Ali, K. Chand, "Comprehensive study of jatropha (Jatropha curcas) biodiesel production and its prospectus in Pakistan", Sindh University Research Journal (Science Series), Vol. 48, No. 1, pp. 209-212, 2016
[12] Tokyo Meter Co, Operational Manual of Single Cylinder Slow Speed Diesel Engine Research and Test Bed, Model: DWE-6/10-JS-DV, 2002
[13] K. Singh, M. Y. Sheikh, Y. B. Mathur, "Performance study of a VCR diesel engine fueled with diesel and low concentration blend of linseed oil biodiesel", International Journal of Emerging Technology and Advanced Engineering, Vol. 4, No. 4, pp. 295-299, 2014
[14] Testo, Testo 350 M/XL 454 Instruction Manual, 2002
[15] S. R. Samo, M. M.
Tunio, "Production and characterization of biodiesel from indigenous linseed herb", International Journal of Current Trends in Engineering & Research, Vol. 2, No. 8, pp. 91-97, 2016
[16] A. M. Ashraful, H. H. Masjuki, M. A. Kalam, I. R. Fattah, S. Imtenan, S. A. Shahir, H. M. Mobarak, "Production and comparison of fuel properties, engine performance, and emission characteristics of biodiesel from various non-edible vegetable oils: a review", Energy Conversion and Management, Vol. 80, pp. 202-228, 2014
[17] R. Sattanathan, "Production of biodiesel from castor oil with its performance and emission test", International Journal of Science and Research, Vol. 4, No. 1, pp. 273-279, 2015
[18] M. H. Shojaeefard, M. M. Etgahni, F. Meisami, A. Barari, "Experimental investigation on performance and exhaust emissions of castor oil biodiesel from a diesel engine", Environmental Technology, Vol. 34, No. 13-14, pp. 2019-2026, 2013
[19] S. K. Mahla, A. Birdi, "Performance and emission characteristics of different blends of linseed methyl ester on diesel engine", International Journal on Emerging Technologies, Vol. 3, No. 1, pp. 55-59, 2012
[20] M. Ozcanli, H. Serin, O. Y. Saribiyik, K. Aydin, S. Serin, "Performance and emission studies of castor bean (Ricinus communis) oil biodiesel and its blends with diesel fuel", Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, Vol. 34, No. 19, pp. 1808-1814, 2012
[21] M. M. Krishna, K. V. Krishna, "Experimental investigations of comparative performance and exhaust emissions of linseed biodiesel fuelled DI diesel engine with low grade LHR combustion chamber", International Journal of Advanced Scientific and Technical Research, Vol. 5, No. 4, pp. 180-197, 2014

Engineering, Technology & Applied Science Research Vol. 8, No.
3, 2018, 2897-2900
www.etasr.com
Lucas and Huebner: Numerical Simulation of Single-Phase and Two-Phase Flows in Separator Vessels …

Numerical Simulation of Single-Phase and Two-Phase Flows in Separator Vessels with Inclined Half-Pipe Inlet Device Applied in Reciprocating Compressors

Fagner Patrício Lucas, Department of Mechanical Engineering, University of Minas Gerais, Belo Horizonte, Minas Gerais, Brazil, engfagner@gmail.com
Rudolf Huebner, Department of Mechanical Engineering, University of Minas Gerais, Belo Horizonte, Minas Gerais, Brazil, rudolf@ufmg.br

Abstract—This paper aims to apply computational fluid dynamics (CFD) to simulate air flow and air flow with water droplets, as a reasonable hypothesis for real flows, in order to evaluate a vertical separator vessel with an inclined half-pipe inlet device (slope inlet). This type was compared to a separator vessel without an inlet device (straight inlet). The results demonstrated a different performance for the two types in terms of air distribution and liquid removal efficiency.

Keywords–inlet device; separator vessel; computational fluid dynamics; reciprocating compressor; single and two-phase flow

I. INTRODUCTION

The reciprocating compressor is widely used in industry, being an important machine to compress all gas types. However, liquid fraction ingestion is one of the main causes of unavailability problems due to the "liquid hammer effect" that quickly increases the loads on the piston, piston rod, connection rod, crosshead, crosshead pin and other parts. As a result, it can lead to their mechanical failure. According to [1], liquid can enter the compressor cylinder due to impurities from other systems, gas condensation in the suction piping, or the handling of low boiling point gas or wet gas during the compression process.
This context motivated the API-618 code (Reciprocating Compressors for Petroleum, Chemical and Gas Industry Services) to recommend the use of separator vessels, in the suction of the first stage and between stages, for removing 99% of droplets of 10µm or larger, since the dispersed flow (or mist flow) is the most typical flow pattern present in a compressor unit. Therefore, separator vessels have two important devices in order to capture all droplets through the gravitational deposition and inertial impaction mechanisms. The first, called the inlet device, minimizes droplet shearing, improves the downstream gas velocity distribution and, thus, maximizes the liquid removal efficiency, mainly in the gravitational deposition area [2]. The second, known as the mist eliminator (or demister), removes the droplets in three steps: inertial impaction, coalescence and detaching of the droplets from the surface of the wire due to the gravitational force [3]. This paper numerically investigated a vertical separator vessel, with an inclined half-pipe inlet device and a wire mesh mist eliminator, through single-phase and two-phase simulations, and the results were compared to those of a vertical separator vessel without an inlet device and with the same design of wire mesh mist eliminator.

II. MATERIALS AND METHODS

A. Governing Equations

The following equations were used in the mathematical model of the numerical simulation [4]. The mass equation can be described as:

∂ρ/∂t + ∇·(ρu) = 0    (1)

where ρ (kg/m3) is the fluid density, u (m/s) is the fluid velocity and t is time. The momentum equation is:

∂(ρu)/∂t + ∇·(ρuu) = −∇p + ∇·τ + f    (2)

where τ (N/m²) is the viscous stress tensor, f (N) is the air-water droplets interaction force and p (Pa) is the air pressure.

B. Souders-Brown Equation

This equation is the most common method for sizing separator vessels and can be defined by the force balance applied on a droplet in an upward flow in a fluid field, as described in Figure 1 [5]:

vmax = K·√((ρl − ρair)/ρair)    (3)

where K (m/s) is the separation factor (or Souders-Brown velocity), ρl and ρair (kg/m³) are the water and air densities respectively, and vmax is the maximum air velocity. The maximum air velocity from (3) can be used to define the internal diameter of the separator vessel for the proposed air flow. The other dimensions (length and nozzles) were defined by practical methods from reciprocating compressor manufacturers.

Fig. 1. The forces acting on a droplet in an upward flow

C. CFD Modeling

The commercial CFD package ANSYS® CFX 15 was used in the present study for solving the governing equations, and the geometry was made in computer-aided design (CAD) software. A PC with a four-core, 3.4GHz processor and 8GB RAM was used. The typical run times were around 7h. The generated mesh has unstructured tetrahedral grids with 1,115,883 nodes, and an inflation layer was considered close to the surface of the fluid volume to capture the details of the flow. The gas phase was taken to be air, with ρair=1.07kg/m³, Qair=0.07m³/s and Tair=25ºC. The liquid phase was assumed to be water and, thus, mwater=2.78e-3kg/s was considered. 1,000 water droplets were divided into five diameters: 10µm, 50µm, 100µm, 150µm and 200µm. For the air and water droplet flow, the restitution coefficient was 0.15 for perpendicular collision and 0.30 for parallel collision. The values were defined based on the Weber number according to [6]. Figure 2 shows the separator vessel used in the CFD simulation for the present work.

Fig. 2. The separator vessel sized for the CFD simulation

The study concentrated on the open space between "Plane 1" and "Plane 2" (Figure 2).
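The Souders-Brown sizing step above can be sketched numerically. The following is a minimal illustration, not the authors' code: the value K = 0.107 m/s is an assumed separation factor chosen only for demonstration (the paper does not report one), while the air density and flow rate are taken from the CFD setup.

```python
import math

def souders_brown_vmax(k: float, rho_l: float, rho_g: float) -> float:
    """Maximum allowable gas velocity (m/s), Equation (3)."""
    return k * math.sqrt((rho_l - rho_g) / rho_g)

def vessel_internal_diameter(q_gas: float, v_max: float) -> float:
    """Internal diameter (m) so the superficial gas velocity equals v_max."""
    area = q_gas / v_max              # required cross-sectional area (m^2)
    return math.sqrt(4.0 * area / math.pi)

# rho_air and q_air from the paper's CFD setup; K and rho_water are assumed.
rho_air, rho_water, q_air = 1.07, 998.0, 0.07
v_max = souders_brown_vmax(0.107, rho_water, rho_air)
d_int = vessel_internal_diameter(q_air, v_max)
print(round(v_max, 2), round(d_int, 3))
```

In practice K itself depends on the demister type and operating pressure, which is why the remaining vessel dimensions are left to manufacturers' practical methods.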
thus, the wire mesh mist eliminator was included in the modeling as a porous body with a resistance factor that leads to a pressure drop according to the hazen-dupuit-darcy equation [7]: Δp/h = (μ/κ)·u + c·ρ·u² (4) where Δp (pa) is the pressure drop, h (m) is the mist eliminator thickness, μ is the air dynamic viscosity, u is the superficial air velocity, ρ is the air density, and κ and c are coefficients obtained experimentally. figure 3(a) shows the computational domain for a separator vessel without an inlet device and figure 3(b) shows the computational domain for the same separator vessel, but with the inclined half-pipe inlet device. fig. 3. computational domain for a separator vessel: (a) without inlet device and (b) with inclined half-pipe inlet device. the dimensions of the vessel are described in table i and the boundary conditions used in the cfd simulation are described in table ii.
table i. dimensions of the model
dimension (figure 3) | value | unit
internal diameter | 400 | mm
“plane 1” to “plane 2” | 887.32 | mm
“plane 1” to “enmf i” | 700 | mm
“plane 1” to “enmf ii” | 712.32 | mm
table ii. boundary conditions
boundary | position | boundary condition
inlet | cross section through the inlet nozzle | uniform velocity profile, turbulence model (k-ε)
outlet | cross section of the separator vessel, some space above the packing bed (plane 2 of figure 3) | free outlet
water sump | liquid surface considered flat (plane 1 of figure 3) | no shear
wall | vessel wall and nozzle wall | adiabatic for mass and energy
porous body | plane enmf i or enmf ii | pressure drop model
iii. simulation results and discussion a. effect of the inlet device on uniformity of air flow the profile of air velocity was numerically determined for both types of separator vessels.
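the porous-body pressure drop model of (4) can be illustrated with a short python function, written here in the common darcy-forchheimer form (viscous term plus inertial term). the coefficient values below are placeholders for illustration only, not the experimentally obtained values from [7]:

```python
def demister_pressure_drop(h, mu, rho, u, kappa, c):
    """Pressure drop (Pa) across a wire mesh demister of thickness h (m),
    modeled as a porous body: a viscous (Darcy) term mu/kappa * u plus an
    inertial (Forchheimer) term c * rho * u^2, integrated over thickness h.
    kappa is a permeability-like coefficient and c a form-loss coefficient;
    both values here are illustrative assumptions."""
    return h * (mu / kappa * u + c * rho * u ** 2)

# Placeholder coefficients; air viscosity ~1.8e-5 Pa.s, density from the paper.
dp = demister_pressure_drop(h=0.1, mu=1.8e-5, rho=1.07, u=3.0,
                            kappa=1.0e-7, c=120.0)
print(round(dp, 1))
```

note the quadratic term dominates at separator operating velocities, so doubling the superficial velocity more than doubles the pressure drop across the mesh.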
the first type is the vessel without the inlet device, also called the straight inlet, and the second type is the vessel with the inclined half-pipe inlet device, also called the slope inlet. in this step of the simulation, the support ring, used to assemble the wire mesh mist eliminator in the vessel, was not considered. thus, the vessel section has a 400mm internal diameter, and the plane “enmf i” (figure 3) is the section used to evaluate the air distribution. figure 4 shows the air vertical velocity for the two types of separator vessels. fig. 4. distribution of the air vertical velocity in the section “enmf i” placed 10.32mm below the wire mesh mist eliminator for: (a) the straight inlet and (b) the slope inlet. it is clear that both types presented a concentrated air flow along the wall, which created a non-uniform air flow. however, it is necessary to quantify this distribution by the variation coefficient, widely used in the chemical process industries to evaluate structured packings, unstructured packings and distributors, with the following equation [8-10]: c_v = [Σ_{i=1}^{n} (a_i/a_t)·((u_i − ū)/ū)²]^0.5 (5) where c_v (dimensionless) is the variation coefficient, n is the number of cells, a_i (m²) is the area of cell i, a_t (m²) is the total area of the transversal section, u_i (m/s) is the air velocity in cell i and ū is the average air velocity: ū = (1/a_t)·Σ_{i=1}^{n} a_i·u_i (6) the obtained variation coefficients were 2.67 and 2.34 for the straight and slope inlet respectively. thus, the results showed that the vessel with the inclined half-pipe inlet device (slope inlet) allowed a slightly better air distribution compared to the straight inlet. however, both types presented a high air velocity in some areas, above the limit of 3.25m/s given by (3). this condition is undesirable for phase separation. b.
effect of the support ring on uniformity of air flow figure 5 presents the air vertical velocities for the separator vessels considering the support rings of the wire mesh mist eliminators. it can be observed that the maximum velocities decreased, but the straight inlet obtained a lower value compared to the slope inlet. the variation coefficients were 0.34 and 0.81 for the straight and slope inlet, respectively. thus, it is clear that the support rings influenced the air distribution, mainly for the straight inlet, due to the deviation of the air flow along the wall. the air velocity in the straight inlet remained below the limit of 3.25m/s. therefore, this configuration presented better results for the two parameters: air distribution and a good condition for phase separation. fig. 5. distribution of the air vertical velocity in the section “enmf ii” placed 1.0mm below the wire mesh mist eliminator for: (a) the straight inlet and (b) the slope inlet. c. effect of the inlet device on liquid removal efficiency the path lines of water droplets for the two kinds of inlets were determined by cfd analysis and the results are shown in figure 6. as observed, the slope inlet removed almost all water droplets above 10µm due to their coalescence at the bottom of the vessel. table iii shows that a lower number of droplets escaped from the vessel with the slope inlet. it is important to explain that the number of droplets that escaped, described in table iii, represents the phase separation in the sections “a” and “b” in figure 2. in real conditions, the remaining droplets will be removed by the wire mesh mist eliminator. fig. 6. the path lines of the water droplets for: (a) the straight inlet and (b) the slope inlet. table iii.
numbers of water droplets that escaped from the separator vessels
droplet diameter | straight inlet | slope inlet
10 µm | 121 | 152
50 µm | 48 | 0
100 µm | 17 | 1
150 µm | 8 | 0
200 µm | 2 | 0
total | 196 | 153
d. effect of the inlet device on liquid removal efficiency the slope inlet presented a better efficiency compared to the straight inlet (table iv).
table iv. liquid removal efficiency of the separator vessels
water mass flow | straight inlet | slope inlet
input (kg/s) | 2.78e-3 | 2.78e-3
output (kg/s) | 5.45e-4 | 4.25e-4
removed (kg/s) | 2.23e-3 | 2.35e-3
efficiency (%) | 80.38 | 84.69
e. effect of the support ring on liquid removal efficiency the liquid removal efficiency of the slope inlet increased after the inclusion of the mist eliminator support ring (table v).
table v. liquid removal efficiency of the separator vessels with support rings
water mass flow | straight inlet | slope inlet
input (kg/s) | 2.78e-3 | 2.78e-3
output (kg/s) | 2.67e-4 | 1.92e-4
removed (kg/s) | 2.51e-3 | 2.59e-3
efficiency (%) | 90.39 | 93.09
iv. conclusions in this study, cfd simulation was employed to simulate the air flow through separator vessels with a straight inlet and a slope inlet (i.e. with an inclined half-pipe inlet device). the results showed that the uniformity of the air flow and the liquid removal efficiency in a separator vessel were affected by the inlet device and the support ring of the mist eliminator. the slope inlet improved the liquid removal efficiency in the air gravity separation section. on the other hand, the straight inlet had a better air distribution, with a suitable vertical velocity for phase separation. in this type of vessel, the internal diameter may be minimized, since the air velocity (1.64m/s) stayed below the limit of 3.25m/s. the obtained results showed that computational fluid dynamics is an important approach to evaluate the performance of separator vessels. references [1] b. g. s.
prasad, “effect of liquid on a reciprocating compressor”, journal of energy resources technology, vol. 124, no. 3, pp. 187-190, 2002 [2] m. bothamley, “gas/liquid separators: quantifying separation performance part 1”, society of petroleum engineers, vol. 2, no. 4, 2013 [3] h. t. el-dessouky, i. m. alatiqi, h. m. ettouney, n. s. al-deffeeri, “performance of wire mesh mist eliminator”, chemical engineering and processing: process intensification, vol. 39, no. 2, pp. 129-139, 2000 [4] f. m. white, viscous fluid flow, mcgraw-hill, inc., 1991 [5] m. souders, g. g. brown, “design of fractionating columns i. entrainment and capacity”, industrial & engineering chemistry, vol. 26, no. 1, pp. 98-103, 1934 [6] b. p. v. d. wal, static and dynamic wetting of porous teflon® surfaces, department of polymer chemistry, university of groningen, 2006 [7] t. helsør, h. svendsen, “experimental characterization of pressure drop in dry demisters at low and elevated pressures”, chemical engineering research and design, vol. 85, no. 3, pp. 377-385, 2007 [8] s. r. darakchiev, “gas flow maldistribution in columns packed with holpack packing”, bulgarian chemical communications, vol. 42, no. 4, pp. 323-326, 2010 [9] z. olujic, “comparison of gas distribution properties of conventional and high capacity structured packings”, chinese journal of chemical engineering, vol. 19, no. 5, pp. 726-732, 2011 [10] t. petrova, n. v. bancheva, s. darakchiev, r. popov, “quantitative estimates of gas maldistribution and methods for their localization in absorption columns”, clean technologies and environmental policy, vol. 16, no. 7, pp. 1381-1392, 2014 engineering, technology & applied science research vol. 8, no.
3, 2018, 3079-3083 3079 www.etasr.com halepoto et al.: analysis of retransmission policies for parallel data transmission analysis of retransmission policies for parallel data transmission imtiaz ali halepoto department of computer systems engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan halepoto@quest.edu.pk intesab hussain sadhayo department of telecommunication engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan intesab@quest.edu.pk muhammad sulleman memon department of computer systems engineering, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan sulleman@quest.edu.pk adnan manzoor department of information technology, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan adnan@quest.edu.pk shahid bhatti department of information technology, quaid-e-awam university of engineering, science & technology, nawabshah, pakistan shahidmsit12@gmail.com abstract—stream control transmission protocol (sctp) is a transport layer protocol, which is efficient, reliable and connection-oriented compared to the transmission control protocol (tcp) and the user datagram protocol (udp). additionally, sctp has more innovative features, like multihoming, multistreaming and unordered delivery. with multihoming, sctp establishes multiple paths between a sender and a receiver. however, it only uses the primary path for data transmission and the secondary path (or paths) for fault tolerance. the concurrent multipath transfer extension of sctp (cmt-sctp) allows a sender to transmit data in parallel over multiple paths, which increases the overall transmission throughput. parallel data transmission is beneficial for higher data rates. parallel transmission is also useful in services such as video streaming, where, if one connection suffers from errors, the transmission continues on the alternate links.
with parallel transmission, unordered data packet arrival is very common at the receiver. the receiver has to wait until the missing data packets arrive, causing performance degradation while using cmt-sctp. in order to reduce the transmission delay at the receiver, cmt-sctp uses intelligent retransmission policies to immediately retransmit the missing packets. the retransmission policies used by cmt-sctp are rtx-ssthresh, rtx-lossrate and rtx-cwnd. the main objective of this paper is the performance analysis of these retransmission policies. this paper evaluates rtx-ssthresh, rtx-lossrate and rtx-cwnd. simulations are performed on network simulator 2. in the simulations, with various scenarios and parameters, it is observed that rtx-lossrate is a suitable policy. keywords-cmt; cmt-sctp; retransmission policies; sctp; parallel transmission i. introduction the most commonly used protocols of the transport layer of the osi model are udp and tcp. udp is a connectionless and unreliable transport layer protocol. udp sends data in the form of short messages, called datagrams, over a network. udp is a connectionless protocol, which means that there is no need to establish a connection between the sender and the receiver. in terms of reliability, tcp is the most widely used protocol today. a recently introduced transport layer protocol is sctp, which is reliable and connection-oriented. sctp is very similar to tcp and udp in terms of operations. it transmits multiple streams of data simultaneously between two endpoints that have established a connection. sctp is more efficient and more powerful in its design when compared to its peer protocols. the cmt-sctp extension enables a sender to simultaneously transmit data over various paths, which increases the overall transmission throughput. the competitor protocol to cmt-sctp is multipath tcp, which is also in the design phase.
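the reordering problem described above, where packets from a fast path wait for packets delayed on a slow path, can be illustrated with a toy in-order delivery buffer. this is a conceptual sketch, not the cmt-sctp implementation:

```python
def deliver_in_order(arrivals):
    """Simulate a receiver that must deliver sequence numbers in order.
    arrivals: packet sequence numbers in the order they reach the receiver.
    Returns the list of delivery batches; a batch is emitted only once the
    next expected sequence number has arrived, so packets after a gap sit
    in the buffer (head-of-line blocking)."""
    expected, buffered, batches = 0, set(), []
    for seq in arrivals:
        buffered.add(seq)
        batch = []
        while expected in buffered:      # drain every now-contiguous packet
            buffered.remove(expected)
            batch.append(expected)
            expected += 1
        if batch:
            batches.append(batch)
    return batches

# Packet 1 travels the slow path: packets 2 and 3 must wait in the buffer
# until 1 arrives, then all three are delivered at once.
print(deliver_in_order([0, 2, 3, 1]))  # [[0], [1, 2, 3]]
```

the faster the missing packet is retransmitted, the shorter the blocked batch, which is exactly what the retransmission policies aim for.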
parallel transmission is very common in the development of mobile applications such as video streaming, online gaming, e-commerce, collaborative scientific projects and voip, which require faster data transmission and higher download rates. parallel transmission through cmt-sctp uses more than one physical interface; for example, the parallel transmission of data between two mobile phones uses two interfaces, one transmitting over 4g and one over wifi. sending data through two interfaces increases throughput compared to sending data through one interface. the use of more interfaces also increases the internet availability of a mobile phone. cmt-sctp is in the development process, and some issues such as transmission errors and recovery are challenging. parallel transmission using cmt-sctp causes unordered data packet arrival at the receiver. since the data travels in parallel through different paths, the data along a fast path may reach the receiver earlier than the data sent along a slow path. to solve this problem, an immediate retransmission of the missing data is mandatory for smooth transmission. in order to reduce the transmission delay at the receiver, cmt-sctp utilizes retransmission policies to quickly retransmit the missing data packets to the receiver. the main work of this paper is to evaluate the retransmission policies for performance analysis. there are five traditional retransmission policies of a cmt-sctp sender. rtx-same and rtx-asap are very simple and have been superseded by the three newer policies. the obtained results show that rtx-lossrate is better in terms of performance than rtx-cwnd and rtx-ssthresh. for the simulation setup a realistic network scenario is proposed.
in the scenario, paths with similar characteristics and paths with dissimilar characteristics are both tested. an example of dissimilar paths is parallel transmission using a mobile phone via 4g and wifi. to extend the simulation to more realistic networks, a loss rate is added to the paths to reflect the scenario of transmission errors. this research suggests the suitable retransmission policy as well as a base of knowledge to design a new retransmission policy for cmt-sctp. ii. retransmission policies the retransmission policies used by cmt-sctp are:
• rtx-same: retransmissions are sent to the same destination as the original transmission. a single path is used to send data packets to the same destination until that particular node is proved dead.
• rtx-asap: the sender retransmits missing data packets to any destination ip address whose congestion window (cwnd) has space available. if multiple destinations provide cwnd space, the sender selects one randomly.
• rtx-cwnd: the sender retransmits the data packets to the ip address with the largest cwnd, choosing randomly if the cwnds are equal.
• rtx-ssthresh: the sender retransmits data packets to the destination with the largest slow start threshold. in the case of destinations with equal slow start threshold values, the ip address is chosen randomly.
• rtx-lossrate: the sender retransmits data to the node with the lowest path loss rate. random selection is done in the case of nodes with equal loss rates.
iii. simulation model simulations have been carried out using network simulator 2 (ns-2), in which the cmt-sctp extension is available. the proposed scenario consists of one sender and one receiver. two paths are used between the sender and the receiver. in order to create these two paths, the sender and the receiver are both configured as multihomed hosts. each host is equipped with two interface cards.
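the three newer policies differ only in which per-destination metric they rank and whether they maximize or minimize it; this can be captured in a few lines. the sketch below uses hypothetical field names ('cwnd', 'ssthresh', 'loss_rate'), not the ns-2 api:

```python
import random

def pick_retransmission_destination(destinations, policy):
    """Choose the destination for a retransmission under one of the three
    newer CMT-SCTP policies. destinations: list of dicts with hypothetical
    keys 'cwnd', 'ssthresh' and 'loss_rate'; ties are broken randomly, as
    the policy descriptions require."""
    if policy == "rtx-cwnd":
        best = max(d["cwnd"] for d in destinations)
        candidates = [d for d in destinations if d["cwnd"] == best]
    elif policy == "rtx-ssthresh":
        best = max(d["ssthresh"] for d in destinations)
        candidates = [d for d in destinations if d["ssthresh"] == best]
    elif policy == "rtx-lossrate":
        best = min(d["loss_rate"] for d in destinations)
        candidates = [d for d in destinations if d["loss_rate"] == best]
    else:
        raise ValueError("unknown policy: " + policy)
    return random.choice(candidates)

paths = [{"name": "path1", "cwnd": 8, "ssthresh": 16, "loss_rate": 0.10},
         {"name": "path2", "cwnd": 4, "ssthresh": 16, "loss_rate": 0.00}]
print(pick_retransmission_destination(paths, "rtx-cwnd")["name"])      # path1
print(pick_retransmission_destination(paths, "rtx-lossrate")["name"])  # path2
```

note how the two policies disagree in this example: a lossy path can still hold the largest congestion window, which is the situation the loss-rate scenarios below probe.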
when the simulation begins, the sender transmits the data in parallel along the two paths. the first step is setting up the ns-2 working parameters according to the proposed scenario. ns-2 has many default (fixed) parameters and many changeable parameters that are configured according to the proposed topology. in this research work, two scenarios are proposed, as depicted in table i. the first one includes paths with no packet loss, the second includes paths with packet loss. four experiments are executed with the three cmt-sctp retransmission policies. the simulation time is 300 seconds for each policy. the policies used are rtx-ssthresh, rtx-cwnd, and rtx-lossrate. for each policy the simulation is repeated 20 times and the average results are plotted. the protocol used is cmt-sctp. throughput is used as the evaluation parameter. the remaining parameters, like the size of the congestion window, the buffer size at sender and receiver and the value of the slow start threshold, are set to the default settings of ns-2.
table i. simulation parameters
scenario without loss rate:
1: path 1: 2-20mbps, 1ms; path 2: 20mbps, 1ms
2: path 1: 22-40mbps, 1ms; path 2: 20mbps, 1ms
3: path 1: 5-50mbps, 200ms; path 2: 1mbps, 200ms
4: path 1: 1mbps, 10-100ms; path 2: 1mbps, 1ms
scenario with 10% loss rate:
1: path 1: 2-20mbps, 1ms; path 2: 20mbps, 1ms
2: path 1: 1-100mbps, 1ms; path 2: 1-100mbps, 1ms
3: path 1: 1mbps, 10-100ms; path 2: 1mbps, 1ms
4: path 1: 1mbps, 1-100ms; path 2: 1mbps, 1ms
iv. results & discussion a. experiments without loss rate the plots in figure 1 show the performance of the different retransmission policies executed without applying any kind of packet loss on the paths. four scenarios are proposed, with different bandwidth and delay values, as shown in table i. in the first scenario (figure 1(a)) the policies are evaluated based on the bandwidth and the propagation delay changes.
the bandwidth on path 2, the secondary path, is kept at a fixed value, while the bandwidth on path 1, the primary path, varies from 2 to 20mbps. the three policies behave very similarly. their starting performance is rather low from 2mbps to 16mbps, and then the throughput suddenly increases, which is caused by the sudden increase in the congestion window. figure 1(b) shows that there is a relationship between the bandwidth and the transmission throughput. in this experiment, a large bandwidth value is used. in experiment 2 the retransmission rate of rtx-ssthresh and rtx-cwnd is very similar. however, due to the bandwidth availability (large bandwidth), the retransmission rate of rtx-lossrate is greater than the rates achieved with the other two policies. an experiment on a large bandwidth with longer delay values is performed in scenario 3. it is observed that when the delay is longer the impact of the retransmission policy is smaller, because the retransmission timer expires sooner no matter which retransmission policy is applied. the same is observed in experiment 4, where a small bandwidth value is used and the delay varies from 10ms to 100ms on path 1. the experimentation with no loss rate shows that the bandwidth plays a key role in increasing the transmission throughput. the rtx-lossrate policy improves the throughput with the increase in bandwidth. on the other hand, propagation delay is a less significant parameter for the choice of retransmission policy. moreover, the proposed scenario is very simple. fig. 1. scenario of two paths without loss rate: (a) experiment 1: bandwidth variation on path 1, (b) experiment 2: large bandwidth, (c) experiment 3: delay variation on path 1, (d) experiment 4: small bandwidth with longer delay. b. experiments with loss rate in reality, there are chances of data loss due to transmission errors. so, in this scenario the experiments are repeated with the addition of a programmed loss rate on path 1. the loss rate is configured to be 10%. in experiment 1, the bandwidth and delay on path 2 are kept at 20mbps and 1ms, whereas the bandwidth on path 1 changes from 2mbps to 20mbps and the delay remains constant at 1ms. when the bandwidth is between 2mbps and 8mbps there is no noticeable difference between the policies, as shown in figure 2(a). fig. 2. scenario of two paths with added loss rate of 10% on path 1: (a) experiment 1: bandwidth variation on path 1, (b) experiment 2: bandwidth variation on both paths, (c) experiment 3: bandwidth variation on both paths with longer delay, (d) experiment 4: bandwidth variation on both paths with larger bandwidth. when the bandwidth is larger than 8mbps, rtx-lossrate reaches the throughput of 2.68mbps. the given experiment is slightly modified in experiment 2, where the bandwidth on path 1 increases from 22mbps to 40mbps (figure 2(b)). in this experiment rtx-ssthresh produces the lowest throughput. in the same experiment, with rtx-cwnd, due to the large bandwidth the sender continuously increases the congestion window, but due to the limitation in the receiver buffer size the throughput reaches at most 0.84mbps. the throughput of rtx-lossrate is greater than that of the other two policies.
experiment 3 (figure 2(c)) is an example of two paths with similar characteristics, i.e. the same bandwidth and delay. the bandwidth on paths 1 and 2 varies from 1mbps to 100mbps. a longer delay of 100ms is used. the trend in terms of throughput for the retransmission policies is the same. initially, with the increase in bandwidth, the transmission rate also increases, because at this point there are fewer chances of retransmissions and higher chances of successful transmissions due to the congestion window availability. in experiment 4 a small bandwidth with a 1ms delay is used (figure 2(d)). the results of experiment 4 are very similar to those of experiment 3. this shows that with similar path characteristics the performance of all the retransmission policies remains similar. however, in the experiments on paths with different characteristics, rtx-lossrate is the preferable policy for data transmission. v. related work authors in [1] evaluated five retransmission policies of cmt-sctp on a simple scenario where a sender transmits data to a receiver simultaneously through two paths. according to their observations, rtx-lossrate, rtx-cwnd and rtx-ssthresh are the best policies. they also suggested that future work should include a new policy that takes the loss rate into account [2, 4, 5, 11]. the concept of an integrated policy is proposed in [3, 10]. the work in [6, 7, 17] highlighted the role of the retransmission policy and suggested improvements in path selection while transmitting and retransmitting the data. many authors evaluated sctp and cmt-sctp [8, 9, 12] and highlighted that a good policy also reduces the buffer-blocking problem. the retransmission policies are also used for the evaluation of the cmt-sctp extensions [13], along with other parameters. authors in [14] related the importance of retransmission policies to the problem of handoff in wireless mobile networks.
authors in [15, 16] analyzed different failure scenarios of sctp and their impact on the use of retransmission policies. the issue of retransmission or efficient transmission may also be addressed by the design of routing protocols [18]. vi. conclusion the trend of parallel transmission of data in order to obtain greater throughput is increasing. parallel transmission is also useful in the development of mobile applications for emergency services, where a device is connected to more than one network. one of the promising protocols for parallel transmission is cmt-sctp. transmission errors are common, particularly in parallel transmission, where the immediate solution is the retransmission of data. for that, cmt-sctp uses retransmission policies such as rtx-lossrate, rtx-cwnd and rtx-ssthresh. the current research compared the aforementioned retransmission policies in realistic network conditions where changes in bandwidth and delay are applied. moreover, a programmed loss rate is introduced in the paths in the simulations. the results of the simulations suggest that for similar path characteristics all of the retransmission policies behave the same. however, on the paths with added random data loss, rtx-lossrate improves the throughput compared to rtx-cwnd and rtx-ssthresh. the research may be extended to the evaluation of cmt-sctp against other parallel transmission protocols such as multipath tcp. references [1] j. r. iyengar, p. d. amer, r. stewart, “retransmission policies for concurrent multipath transfer using sctp multihoming”, 12th ieee international conference on networks, singapore, vol. 2, pp. 713-719, ieee, 2004 [2] j. r. iyengar, p. d. amer, r. stewart, “receive buffer blocking in concurrent multipath transfer”, ieee global telecommunications conference (globecom'05), st. louis, usa, vol. 1, p. 6, ieee, 2005 [3] a. l. caro, p. d. amer, r. r.
stewart, “retransmission schemes for end-to-end failover with transport layer multihoming”, ieee global telecommunications conference (globecom'04), vol. 3, pp. 1341-1347, ieee, 2004 [4] j. liu, h. zou, j. dou, y. gao, “reducing receive buffer blocking in concurrent multipath transfer”, 4th ieee international conference on circuits and systems for communications, shanghai, china, pp. 367-371, ieee, 2008 [5] p. natarajan, n. ekiz, p. d. amer, r. stewart, “concurrent multipath transfer during path failure”, computer communications, vol. 32, no. 15, pp. 1577-1587, 2009 [6] i. a. halepoto, f. c. lau, z. niu, “concurrent multipath transfer under delay-based dissimilarity using sctp”, ieee second international conference on computing technology and information management (icctim), pp. 180-185, ieee, 2015 [7] i. a. halepoto, scheduling and flow control in cmt-sctp, hku theses online (hkuto), 2014 [8] p. natarajan, j. r. iyengar, p. d. amer, r. stewart, “concurrent multipath transfer using transport layer multihoming: performance under network failures”, military communications conference (milcom 2006), washington, dc, usa, pp. 1-7, ieee, 2006 [9] j. r. iyengar, p. d. amer, r. stewart, “concurrent multipath transfer using transport layer multihoming: performance under varying bandwidth proportions”, ieee military communications conference (milcom 2004), monterey, usa, vol. 1, pp. 238-244, ieee, 2004 [10] h. shen, c. wang, w. ma, d. zhang, “research of the retransmission policy based on compound parameters in sctp-cmt”, 2nd international conference on information technology and electronic commerce (icitec), dalian, china, pp. 25-28, ieee, 2014 [11] a. l. caro jr, p. d. amer, r. r. stewart, “retransmission policies for multihomed transport protocols”, computer communications, vol. 29, no. 10, pp. 1798-1810, 2006 [12] t. yang, l. pan, l. jian, h. hongcheng, w.
jun, “reducing receive buffer blocking in cmt based on sctp using retransmission policy”, ieee 3rd international conference on communication software and networks (iccsn), xi'an, china, pp. 122-125, ieee, 2011 [13] y. cao, c. xu, j. guan, “a record-based retransmission policy on sctp's concurrent multipath transfer”, 2011 international conference on advanced intelligence and awareness internet (aiai 2011), shenzhen, china, pp. 67-71, ieee, 2011 [14] f. siddiqui, s. zeadally, “sctp multihoming support for handoffs across heterogeneous networks”, 4th annual communication networks and services research conference (cnsr 2006), moncton, nb, canada, ieee, 2006 [15] a. l. caro jr, j. r. iyengar, p. d. amer, g. j. heinz, r. r. stewart, “using sctp multihoming for fault tolerance and load balancing”, acm sigcomm computer communication review, vol. 32, no. 3, p. 23, 2002 [16] a. l. caro jr, p. d. amer, j. r. iyengar, r. r. stewart, “retransmission policies with transport layer multihoming”, 11th ieee international conference on networks, sydney, australia, pp. 255-260, ieee, 2003 [17] i. a. halepoto, f. c. m. lau, z. niu, “scheduling over dissimilar paths using cmt-sctp”, seventh international conference on ubiquitous and future networks (icufn), sapporo, japan, pp. 535-540, ieee, 2015 [18] n. h. bhangwar, i. a. halepoto, s. khokhar, a. a. laghari, “on routing protocols for high performance”, studies in informatics and control, vol. 26, no. 4, pp. 441-448, 2017 engineering, technology & applied science research vol. 9, no.
4, 2019, 4538-4542 4538 www.etasr.com malkanthi & perera: particle packing application for improvement in the properties of compressed … particle packing application for improvement in the properties of compressed stabilized earth blocks with reduced clay and silt s. n. malkanthi department of civil engineering university of ruhuna sri lanka snmalkanthi@cee.ruh.ac.lk a. a. d. a. j. perera department of civil engineering university of moratuwa sri lanka asoka@uom.lk abstract—soil as a building material has been used in different forms such as mud, adobe, rammed earth and bricks. the present study focuses on producing compressed stabilized earth blocks (csebs), giving attention to the particle size distribution in the soil mixture. the literature established that compressive strength significantly depends on the clay and silt content, with 25% of clay and silt producing optimum results, while no attention has been given to the proportions of other, larger particles. soil grading refers to the combination of different-size particles in a soil mixture. the correct selection of sizes in the correct proportions may improve cseb properties. this paper explains the application of particle packing technology for the improvement of cseb properties. the theoretical concepts provide a continuous particle size distribution, and the soil used for the experiments also has a continuous particle size distribution. the soil used in the experiments was subjected to washing to reduce its clay and silt content. separated clay and silt and large particles of different sizes were added to the mixture to match the particle size distribution to the optimization curves, as explained in particle packing theories. the experimental results show that cseb properties can be significantly improved by modifying the particle size distribution to fit the suggested optimization curves.
according to the results, the compressive strength improved by more than 50% with different amounts of cement stabilization. significant improvements in the dry densities and water absorption ratios of blocks were observed with this particle size modification. keywords-cement stabilized earth blocks; soil washing; particle packing; optimization curves; compressive strength i. introduction earthen materials have been used in civil engineering construction worldwide in different forms, such as mud, adobe, rammed earth and bricks. csebs are earthen materials made of soil that are stabilized with different additives, such as cement, fly ash, and lime. csebs have been investigated as a building material for their advantageous properties. compressed earth blocks represent a cost-effective and environmentally friendly alternative building material to traditional masonry elements [1]. in practice, csebs are made of earth stabilized with up to 10% cement and pressed either using a hand-operated press or a hydraulically operated, machine-driven press [2]. csebs are available as bricks, blocks, interlocking blocks and hollow blocks. earthen constructions have many advantages, such as thermal comfort, local employment creation and minimal impact on the environment [3]. the use of earthen constructions is not limited to developing countries; even in developed countries, such as australia, approximately 20% of the new building market is occupied by earth-based construction projects [4]. the clay and silt content, cement percentage, and soil grading used for cseb production influence the properties of csebs [5]. soil grading refers to a combination of different-size particles in a soil mixture. the selection of the correct sizes in the correct proportions may improve cseb properties. the correct sizes and proportions can be better explained with the theory of particle packing.
with this background, the aim of this paper is to explain the application of particle packing technology for the improvement of cseb properties. to achieve this aim, the following objectives were considered: • the properties of csebs made with different soils and different particle size distributions were tested. • the soil grade was modified to fit the optimization curve for cseb production, and the improvements in block properties with different amounts of cement stabilization were assessed. ii. literature review a compressed earth block (ceb), also known as a pressed earth block or a compressed soil block, is a building material made primarily from soil compressed at high pressure to form blocks. a mechanical press is used to form blocks out of an appropriate mix of fairly dry inorganic subsoil, non-expansive clay and aggregates. if the blocks are stabilized with a chemical binder such as portland cement, they are called csebs [6]. there are different methods of producing building walls using soil, such as with csebs, wattle and daub materials, rammed earth walls, and cob, or in the recent past, mud blocks [7]. compared to major masonry units such as burnt bricks and cement blocks, csebs have considerable advantages related to environmental effects. author in [8] mentions low energy consumption and the use of recyclable material. authors in [3] explained the advantages of using cseb in developing countries, as the bricks do not need plastering because they have a finish that is the same as that for wire-cut bricks, hence significantly saving in cost. the major challenges with cseb have been researched by many studies considering strength and durability. corresponding author: s.n. malkanthi
the amounts of clay and silt are the main factors that act both positively and negatively on the properties of csebs. a. strength of csebs compressive strength has become a fundamental and universally accepted unit of measurement to specify the quality of masonry units. authors in [9] showed the relationship between the clay and silt content and the compressive strength for different soil-cement ratios. in the test results of [9], the compressive strength displayed an increasing tendency with decreasing clay content and, as expected, it was high for high cement contents. however, other researchers used a minimum clay content limited to 15% [2, 10]. based on their experiments, the compressive strength has a tendency to increase with decreasing fines content for different amounts of cement. author in [9] concluded that high compressive strength can be achieved when the plasticity index is low and with 10% cement. however, he tested blocks made with soil having a minimum clay content of 15%. it was also shown in [5] that a compressive strength of more than 10 n/mm² can be achieved, even with a low plasticity index, when the clay content is between 10% and 15%. the mechanical properties of soil blocks with fiber reinforcement for two different soil types have been investigated in [11]. both soils had more than 40% clay and silt content. for these soil types, the maximum compressive strength achieved was limited to 3 n/mm², even with fiber reinforcement. authors in [12] reported compressive strengths of 1.2, 1.9 and 2.4 n/mm² with 5%, 8% and 10% cement, respectively, when the soil plasticity index is 13.4. authors in [13] reported compressive strength results of 2.8 and 1.2 n/mm² for csebs with 7.5% cement and soil having a plasticity index of 12.6 and 14.4, respectively. however, they did not consider any durability issues.
in general, csebs with a minimum clay content of 15% have been tested in many studies, but the amount of larger particles has not been considered, which is the main focus of this paper. b. application of particle packing technology for csebs soil properties are the dominant factor determining cseb properties, and different research groups have recommended different soil particle combinations, ingredients, etc. particle packing technology is an important aspect of concrete technology for selecting appropriate sizes and shapes of aggregates. the purpose of this section is to review particle packing technology and its application in different areas, and to match it to csebs. particle packing technology considers optimizing the right sizes and amounts of various particles to increase particle density [14]. additionally, the packing of aggregates for concrete is the degree of how well the solid particles of the aggregates are packed in terms of packing density [15]. the packing density is defined as the ratio of the solid volume of the aggregate particles to the bulk occupied volume. at first, large particles fill the container, leaving large voids, and smaller particles are added to reduce the voids. then, tiny particles are added to further reduce the voids and increase the density [16]. when well-graded soil is used for csebs, it increases the strength of the soil blocks. smaller particles should be selected to fill the voids between large particles to increase packing density. the concept of particle packing optimization has been used in the field of concrete technology, such as in high-performance concrete [17] and interlocking paving block development [18]. authors in [17, 18] focused on an ideal grading curve that represents the grading with the greatest density. these ideal curves help to optimize mixture proportions, since it is easy to modify the total particle size distribution by adjusting the ingredient proportions.
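the idea of matching a mixture's grading to an ideal curve can be sketched numerically. the sketch below uses a fuller-type target curve p(d) = (d/dmax)^q and measures how far a given grading is from it; the sieve sizes, passing fractions and q value are illustrative assumptions, not the study's data.

```python
# Sketch of the ideal-grading ("optimization curve") idea: the target
# cumulative passing at each sieve follows p(d) = (d / d_max) ** q
# (a Fuller-type curve). Sieve sizes and q are illustrative assumptions.

def target_passing(d, d_max=12.0, q=0.5):
    """Cumulative fraction passing sieve size d (mm) on a Fuller-type curve."""
    return (d / d_max) ** q

def grading_deviation(sieves_mm, passing_fractions, d_max=12.0, q=0.5):
    """Mean absolute deviation of a measured grading from the target curve."""
    devs = [abs(p - target_passing(d, d_max, q))
            for d, p in zip(sieves_mm, passing_fractions)]
    return sum(devs) / len(devs)

sieves = [0.075, 2.0, 6.0, 12.0]      # sieve sizes (mm), illustrative
measured = [0.05, 0.47, 0.70, 1.00]   # cumulative passing, illustrative
print(round(grading_deviation(sieves, measured), 3))
```

a small deviation indicates a grading close to the maximum-density curve; in the study this comparison is done graphically against the optimization curves.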
most studies on particle packing use one of the following particle optimization methods: • optimization curves: groups of particles with a specific particle size distribution are combined in a way that the total particle size distribution of the mixture is closest to an optimum curve. the following are such optimization curves [14, 19]:

p(d) = (d / dmax)^q    (1)

where p(d) = cumulative size distribution function, d = considered particle diameter (m), dmax = maximum particle diameter in the mixture (m), and q = parameter (0.33-0.5) which adjusts the curve for fineness or coarseness. authors in [19] utilized (1) with q=0.5, as per [14]. others have suggested values of q in the range of 0.33-0.5. equation (1) was modified with the adjustment factor q=0.37 for optimum packing [20]:

p(d) = (d^q − dmin^q) / (dmax^q − dmin^q)    (2)

where dmin is the minimum particle diameter in the mixture. author in [21] proposed a maximum density line that provides a guide to blend aggregates and obtain maximum density. authors in [14] showed that when the packing density is high, high compressive strength can be achieved. • particle packing models: these are analytical models that calculate the overall packing density of a mixture based on the geometry of the combined particle groups. these models are discrete, hence they consider the definite sizes of the different particles. • discrete element models: these models simulate the virtual particle structure from a given size distribution. considering the nature of the above-mentioned three optimization methods, an optimization curve is used to compare the soils used in this study. iii. research methodology as the main consideration of this paper, focus was given to particle packing concepts. first, different soil types were checked for clay and silt content. specifically, a wet sieve analysis test was performed in accordance with astm c117 [22] to determine the clay and silt percentage of the tested soil samples. csebs were cast from all the tested soil types. based on the research scope, one soil type was selected. the selected soil was washed in order to reduce clay and silt content. in this study, the washed soil contained 5% clay and silt. this washed soil was used to produce csebs, and its grading was modified to match the particle packing concept by adding large-size particles that were separated from the same soil earlier. then, previously separated clay and silt (fines) were added to create fine particle percentages of 5%, 7.5% and 10%. for each fines content, 4%, 6%, 8% and 10% cement content was used as stabilizer. for each combination, 10 blocks were cast, resulting in a total of 120 blocks. blocks of 150mm×150mm×150mm were prepared using a commercially available cement sand block-making machine. both vibration and compaction were applied for block casting. the vibration time was regulated based on the preliminary test conducted. cast blocks were cured using wet gunny bags and sprinkling water for 7 and 28 days. the cast blocks were tested to determine their dry and wet compressive strengths, dry density and water absorption, as per sls 1382 (part 2) [23]. each soil block was placed carefully in the testing machine below the center of the upper bearing block, and load was added until failure. using the load at failure, the compressive strength could be determined. figure 1 shows the testing procedure and cast blocks. fig. 1. cast blocks and compressive test procedure the dry density of the blocks was determined after keeping the blocks in the oven for more than 24h at 105°c. each specimen was oven-dried to a constant mass, weighed and measured to determine its dry density.
ρ = (m / v) × 10⁶    (3)

where m is the oven-dry mass of the specimen (g) and v its volume (mm³), giving the dry density in kg/m³. to determine the water absorption of the blocks, the oven-dried test specimens were immersed in water for 24h, and the increase in the mass of each oven-dried test specimen was calculated and expressed as a percentage of the specimen's initial dry mass:

water absorption (%) = (wet mass − dry mass) / dry mass × 100    (4)

iv. testing the particle packing applicability initially, five soil types designated as s1, s2, s3, s4 and s5 were selected for preliminary testing. the s1 soil is industrially washed soil. the s3 and s5 soils are naturally available lateritic soils. the s2 and s4 soils were derived by washing of the s3 and s5 soils, respectively, to reduce clay and silt content. table i denotes the soil grading distribution for each soil. figure 2 shows a comparison with optimization curves based on the theoretical grading curves explained above. all the theoretical curves were considered within the particle size region of 0.075mm to 12mm.

table i. soil grading distribution (% in each particle size range, mm)
soil type   0-0.075   0.075-2.0   2.0-6.0   6.0-12.0
s1          5         82          9         3
s2          23        24          23        30
s3          35        25          15        25
s4          19        28          20        33
s5          40        30          10        25

fig. 2. comparison of the particle distribution of the used soil with the theoretical distribution

these five types of soil with the same cement content added (6%) were used for block casting, and the cast blocks were tested after 28 days for wet and dry compressive strength, block density and water absorption. considering the presented soil types and optimization curves, the particle size distributions of soil types s2 and s3 are closer to the optimization curves. a. cseb properties vs optimization curves the five selected soil types were used for cseb manufacturing. table ii gives the tested properties of the cast csebs. among the studied soil types, blocks made with soil types s2 and s3 have comparatively high dry and wet compressive strength.
however, the other properties of the blocks do not have the highest values but are within the acceptable range specified by [23-25]. these standards define compressive strength values under three grades: grade 1 (strength above 6.0 mpa), grade 2 (4.0-6.0 mpa) and grade 3 (2.8-4.0 mpa). the minimum density is 1750 kg/m³ and the maximum water absorption is 15%. blocks made with the s1 soil type have high density but low strength. therefore, blocks with s1 were tested with different cement contents, and the results are shown in table iii. the s1 soil has 5% fines after washing of the originally available soil. figure 3 shows the graphical representation of these results.

table ii. block properties (6% cement)
soil type   fines %   28-day dry (mpa)   28-day wet (mpa)   dry density (kg/m³)   water absorption (%)
s1          5         1.06               0.55               1956                  9.5
s2          19        3.01               1.55               1854                  12.9
s3          23        2.95               0.82               1778                  19
s4          33        1.98               0.69               1713                  18.5
s5          40        0.99               0.25               1481                  31

table iii. block properties with varying cement content
cement %   7-day dry (mpa)   28-day dry (mpa)   28-day wet (mpa)   dry density (kg/m³)   water absorption (%)
4          0.56              0.71               0.47               1820                  10.0
6          0.85              1.06               0.55               1956                  9.5
8          1.95              2.26               1.27               1956                  10.2
10         3.96              4.49               3.42               1940                  8.75

fig. 3. compressive strength of soil blocks with 5% fines, 20% quarry dust, and s1 soil

although the soil gradation does not match the available optimization curves, the strength can still be improved with cement addition. however, many past studies highlighted that more than 10% cement is not economical. therefore, this study concerns soil grading near the optimization curve at low cement content. thus, the available s1 soil type was modified by adding larger particles to match the power line for cseb production, and those blocks were tested.
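the grading-modification step described above (blending separated larger particles into the washed soil until the mixture approaches the optimization curve) can be sketched as a simple search over the added coarse fraction. all gradings and the fuller-type target below are illustrative placeholders, not the measured s1 data.

```python
# Sketch of grading modification: find the fraction of separated coarse
# material to blend into a washed soil so the combined cumulative grading
# best matches a Fuller-type target p(d) = (d / d_max) ** q.
# All gradings and the target parameters are illustrative assumptions.

def blend_passing(base, coarse, x):
    """Cumulative passing of a mix with fraction x of added coarse material."""
    return [(1 - x) * b + x * c for b, c in zip(base, coarse)]

def best_coarse_fraction(sieves, base, coarse, d_max=12.0, q=0.5, steps=100):
    """Grid-search the coarse fraction minimizing deviation from the target."""
    def deviation(passing):
        return sum(abs(p - (d / d_max) ** q) for d, p in zip(sieves, passing))
    return min((i / steps for i in range(steps + 1)),
               key=lambda x: deviation(blend_passing(base, coarse, x)))

sieves = [0.075, 2.0, 6.0, 12.0]    # sieve sizes (mm), illustrative
washed = [0.05, 0.90, 0.95, 1.00]   # fine-heavy washed soil (illustrative)
coarse = [0.00, 0.05, 0.50, 1.00]   # separated large particles (illustrative)
x = best_coarse_fraction(sieves, washed, coarse)
print(f"add {x:.2f} coarse fraction")
```

in the study the matching was done against the plotted optimization curves rather than by such a numerical search, but the principle is the same: add coarse material until the combined distribution sits on the target curve.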
the final soil grading is shown in figure 4 with the corresponding comparison to the optimization curves. this modified soil was used to cast blocks with varying cement content. additionally, the influence of adding quarry dust was tested with 20% quarry dust and 0% quarry dust. figure 5 shows the 28-day dry compressive strength for blocks made with modified s1 soil and 0% and 20% quarry dust for varying cement and fines content. we see that the maximum compressive strength for all fines contents can be achieved with 10% cement content. also, the use of quarry dust in the mixture does not have a significant influence on strength. the wet compressive strength, dry density and water absorption results for the tested blocks are shown in table iv. the water absorption ratio clearly shows a notable improvement when optimizing particle packing. the dry density values also show that all the blocks made with upgraded soil arrangements achieve values of more than 1800 kg/m³ (the sls 1382 minimum value is 1750 kg/m³).

fig. 4. particle size distribution of the modified soil
fig. 5. 28-day compressive strength results of soil blocks with modified soil

table iv. blocks' dry density and water absorption
soil type                                 clay and silt %   cement %   28-day wet compressive strength (mpa)   dry density (kg/m³)   water absorption (%)
s1, washed to 5% fines, 20% quarry dust   5.0               4          0.47                                    1820                  10.0
                                                            6          0.55                                    1956                  9.5
                                                            8          1.27                                    1956                  10.2
                                                            10         3.42                                    1940                  8.75
s1* and 20% quarry dust                   5.0               4          1.17                                    2009                  8.43
                                                            6          1.26                                    2009                  9.2
                                                            8          2.75                                    2009                  7.7
                                                            10         5.26                                    2023                  5.1
s1* and 0% quarry dust                    5.0               4          0.88                                    1911                  9.3
                                                            6          2.73                                    2009                  7.1
                                                            8          4.54                                    2009                  7.3
                                                            10         5.65                                    2018                  7.1
s1*                                       7.5               4          0.7                                     1890                  11.3
                                                            6          1.19                                    1961                  10.1
                                                            8          5.11                                    2009                  7.5
                                                            10         7.69                                    2055                  6.8
s1*                                       10.0              6          3.2                                     1917                  8.7
                                                            8          3.8                                     1865                  8.7
                                                            10         5.9                                     1893                  8.5
s1*: s1 modified to match the power line by larger particle addition

v. conclusion compressed stabilized earth blocks (csebs) have been considered a key researched masonry unit over the past few decades.
many researchers have concluded that the compressive strength increases with decreasing clay and silt content. however, most researchers focused on the clay and silt content only, attempting to reduce it by adding different soil, sand, etc. further, studies of the influence of other, larger particle sizes have not been extensively performed. this study focused on rearranging the particle distribution of the soil to match the optimization curves while reducing the clay and silt content by soil washing. csebs produced with this rearranged soil showed improvements in their block properties. for this study, the soil was rearranged for three different clay and silt contents: 5%, 7.5% and 10%. the results show that high compressive strength can be achieved with 7.5% clay and silt content and 8% and 10% cement contents. most of the compressive strengths are acceptable for grade 1 blocks, as per sls 1382. the dry density and water absorption ratio also satisfied the limits specified in sls 1382. this study mainly considered the strength characteristics of csebs. many studies have been conducted on the durability of csebs with comparatively high clay and silt contents. nevertheless, improvements are needed to enhance the durability of cseb walls. the clay and silt content constitutes the main barrier to achieving the expected durability performance. therefore, this research will be extended to test the durability issues of csebs with low clay and silt contents. acknowledgment this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. the authors would like to acknowledge the support given by mr. d.m.n.l. dissanayaka, technical officer of the structural testing laboratory, mr. h.t.r.m.
thanthirige, technical officer of the building materials laboratory, and mr. t.p.d.g.i. yohan, technical officer of the structural dynamics and health monitoring laboratory, university of moratuwa, sri lanka. references [1] j. d. sitton, b. a. story, “estimating soil classification via quantitative and qualitative field testing for use in constructing compressed earth blocks”, procedia engineering, vol. 145, pp. 860-867, 2016 [2] k. heathcote, “compressive strength of cement stabilized pressed earth blocks”, building research and information, vol. 19, no. 2, pp. 101-105, 1991 [3] c. jayasinghe, a. perera, s. west, “the application of hand moulded stabilised earth blocks for rural houses in sri lanka”, international earth building conference, sydney, australia, january 19-21, 2005 [4] m. segetin, k. jayaraman, x. xu, “harakeke reinforcement of soil-cement building materials: manufacturability and properties”, building and environment, vol. 42, no. 8, pp. 3066-3079, 2006 [5] b. v. v. reddy, m. s. latha, “influence of soil grading on the characteristics of cement stabilised soil compacts”, materials and structures, vol. 47, no. 10, pp. 1633-1645, 2013 [6] https://en.wikipedia.org/wiki/compressed_earth_block [7] c. udawatta, r. azoor, r. halwatura, “manufacturing framework and cost optimization for building mud concrete blocks (mcb)”, 16th conference of the science council of asia: mobilization of modern technologies for sustainable development in asia, colombo, sri lanka, may 30-june 1, 2016 [8] d. j. harris, “a quantitative approach to the assessment of the environmental impact of building materials”, building and environment, vol. 34, no. 6, pp. 751-758, 1999 [9] p. j. walker, “strength, durability and shrinkage characteristics of cement stabilised soil blocks”, cement & concrete composites, vol. 17, no. 4, pp. 301-310, 1995 [10] a. perera, c. jayasinghe, “strength characteristics and structural design methods for compressed stabilized block walls”, international masonry society, vol. 16, pp. 34-38, 2003 [11] h. danso, d. b. martinson, m. ali, j. b. williams, “physical, mechanical and durability properties of soil building blocks reinforced with natural fibres”, construction and building materials, vol. 101, no. 1, pp. 797-809, 2015 [12] c. m. chan, l. p. low, “development of a strength prediction model for “green” compressed stabilised earthbricks”, journal of sustainable development, vol. 3, no. 3, pp. 140-150, 2010 [13] b. s. waziri, z. a. lawan, mustapha, m. a. mala, “properties of compressed stabilized earth blocks (cseb) for low cost housing construction: a preliminary investigation”, international journal of sustainable construction engineering & technology, vol. 4, no. 2, pp. 39-46, 2013 [14] s. fennis, j. c. walraven, “using particle packing technology for sustainable concrete mixture design”, heron, vol. 57, no. 2, pp. 73-101, 2010 [15] m. n. mangulkar, s. s. jamkar, “review of particle packing theories used for concrete mix proportioning”, international journal of scientific and engineering research, vol. 4, no. 5, pp. 143-148, 2013 [16] s. v. kumar, m. santhanam, “particle packing theories and their application in concrete mixture proportioning: a review”, indian concrete journal, vol. 77, no. 9, pp. 1324-1331, 2003 [17] v. wong, k. w. cha, a. k. h. kwan, “applying theories of particle packing and rheology to concrete for sustainable development”, organization, technology and management in construction: an international journal, vol. 5, no. 2, pp. 844-852, 2013 [18] h. a. c. k. hettiarachchi, w. k. mampearachchi, “validity of aggregate packing models in mixture design of interlocking concrete block pavers (icbp)”, road materials and pavement design, vol. 20, no. 2, pp. 462-474, 2017 [19] w. b. fuller, s. e. thompson, “the laws of proportioning concrete”, transactions of the american society of civil engineers, vol. 59, pp. 67-143, 1907 [20] j. e. funk, d. r. dinger, coal grinding and particle size distribution studies for coal-water slurries at high solids content, empire state electric energy research corporation, 1980 [21] t. c. powers, the properties of fresh concrete, wiley, 1968 [22] astm, astm c117 (2009): standard test method for materials finer than 75-µm (no. 200) sieve in mineral aggregates by washing, astm, 2009 [23] sri lanka standard institution, sls 1382 (2010): specification for compressed stabilized earth blocks: part 2 test methods, sri lanka standard institution, 2010 [24] sri lanka standard institution, sls 1382 (2010): specification for compressed stabilized earth blocks: part 1 requirements, sri lanka standard institution, 2010 [25] sri lanka standard institution, sls 1382 (2010): specification for compressed stabilized earth blocks: part 3 guidelines on production, design and construction, sri lanka standard institution, 2010 engineering, technology & applied science research vol. 10, no. 4, 2020, 5889-5895 5889 www.etasr.com soomro et al.: simulation-based analysis of a dynamic voltage restorer under different voltage sags … simulation-based analysis of a dynamic voltage restorer under different voltage sags with the utilization of a pi controller abdul hameed soomro electrical engineering department quaid-e-awam university of engineering science & technology campus larkano, pakistan hameedsoomro13@yahoo.com abdul sattar larik electrical engineering department mehran university of engineering
& technology jamshoro, pakistan sattar.larik@faculty.muet.edu.pk mukhtiar ahmed mahar electrical engineering department mehran university of engineering & technology jamshoro, pakistan mukhtiar.mahar@faculty.muet.edu.pk anwar ali sahito electrical engineering department mehran university of engineering & technology jamshoro, pakistan anwar.sahito@faculty.muet.edu.pk izhar ahmed sohu university of tun hussein onn malaysia izharahmedsohu@gmail.com abstract-power quality problems are becoming a major issue. every utility company consumer desires to receive a steady-state voltage, i.e. a sinusoidal waveform of constant frequency as generated at power stations, but the influence of disturbances in the shape of sags and swells, interruptions, transients and harmonic distortions affects power quality, resulting in loss of data, damaged equipment, and increased costs. the most powerful voltage disturbance is the sag voltage. in this paper, a dynamic voltage restorer (dvr) is proposed for sag voltage compensation. it is cost-effective and protects critical loads well from balanced or unbalanced sag voltage. a control strategy (a pi controller) is adopted with the dvr topology, and the performance of such a device with the proposed controller is analyzed through simulation in matlab/simulink. three types of faults, which are available in the matlab/simulink package, are utilized for obtaining the sag voltage. the specific range of the total harmonic distortion percentage is also discussed. after the result validation of the dvr topology in matlab/simulink, it has been seen that the proposed topology is able to compensate the sag voltage of any type of fault and reduce the unbalancing and voltage distortions of the grid. keywords-dynamic voltage restorer (dvr); voltage source inverter (vsi); faults; power quality i. introduction globally, power quality problems are becoming a major issue. the distribution system network supplies voltage to consumers for utilization [1, 2].
every consumer desires to receive a steady-state voltage, i.e. a sinusoidal waveform of constant frequency as generated at power stations, but disturbances produced on the distribution system [3] result in distorted waveforms, which hamper the operation of electrical and electronic equipment in industries, risking production loss, restarting expenses, and breakdowns [4]. voltage disturbances include sag and swell voltage, unbalanced voltage, voltage fluctuations, small and large interruptions, harmonic distortions, and transients [5]. the most powerful disturbance is the sag voltage: it is defined as a decrease in voltage from 10% to 90% for durations up to one minute [6]. mostly, voltage sags are observed during the start of an induction motor, the energization of a transformer, system faults, or nonlinear loads [7-9]. the problem of sag voltage can be avoided through the application of voltage compensation devices such as a dynamic voltage restorer (dvr) [10, 11]. ii. dynamic voltage restorer the dvr is a commonly used device for sag voltage compensation. the dvr is connected in series to the sensitive loads and adds the needed voltage when necessary. dvr voltage sag compensation is a cost-effective method applicable to small and large loads up to 45mva or even larger [9]. the dvr is mainly composed of components such as the voltage source inverter (vsi), a voltage injection device, a filter, an energy storage device, and a controlling device [12, 13]. figure 1 shows the proposed dvr topology. corresponding author: abdul hameed soomro fig. 1. proposed dvr iii. dvr components a. energy storage unit through this unit, which is commonly used in dvr topology, low and high values of voltage are compensated and the efficiency of the dvr increases.
usually, batteries, superconducting magnetic and supercapacitor energy storages are utilized as energy storage units [12-14]. b. voltage source inverter this is the most valuable component in the dvr topology. it supplies the necessary voltage to the load for compensation. the vsi is composed of power electronic components and has the capability of changing the direct current (dc) supplied by the energy storage unit into a sinusoidal alternating current (ac) with the desired amplitude, frequency and phase angle. the vsi is switched on through a dc voltage supply of small input impedance, and its output voltage is independent of the load current [9]. c. filter the dvr is a non-linear device because of its semiconductor components, which results in distortion in the output voltage waveforms. to avoid the problem of distorted waveforms, a filter is employed to deliver distortion-free voltage to the load [9, 12, 15]. d. voltage injection device a voltage injection device such as a power transformer is employed in the dvr topology and connected with each phase individually. its main purpose is to add the needed voltage to the load. the performance and reliability of the voltage injection transformer depend on the selection of its mva rating, impedance and turns ratio [9, 14]. e. by-pass switch after the occurrence of a fault on the power system, excessive current flows through the dvr, so it is essential to give the current another path. this can be achieved through the by-pass switch. f. controlling device the controller has an imperative role in any system. in the dvr, the controller acts as an observer of the bus-bar voltage. if a voltage sag is detected, then it closes the by-pass switch and the required voltage is added to the load. iv. dvr operating modes the dvr operates in three modes [9, 16], which are presented below. a.
standby mode the dvr is not supplying voltage to the load during this mode, but due to the transformer reactance it may inject some voltage for voltage drop compensation [9]. b. protection mode faults on the power system result in the flow of heavy currents which can damage the dvr [9], so protection of the dvr by using protective devices such as breakers is necessary [17, 18]. c. injection mode when a voltage sag is sensed, the dvr comes into operation very quickly [15] and the required voltage value is added to the load. in this way, the voltage sag is compensated [9]. v. voltage injection techniques there are four voltage injection techniques employed in dvr topology [19]: a. pre-sag technique this technique is superior because the voltage difference before and after the sag is added to the load, but it needs an energy storage unit of large capacity because of the uncontrolled injected active power. b. phase advance technique in this technique, the dvr's use of real power is trimmed down through a reduction of the power angle. c. voltage tolerance with minimum energy technique in this technique, the load voltage can be retained within the tolerance area of small voltage magnitude variation. d. in-phase voltage injection technique in this technique, a non-variable value of load voltage is achieved despite the different phase angles of the pre-sag and load voltages, due to the in-phase association of the added voltage with the supply voltage. vi. proposed methodology a. voltage sag calculation the source voltage and source reactance (es and xs) are shown in the equivalent circuit in figure 2, and loads l1 and l2 are supplied through two feeders named f1 and f2. under normal operating conditions, the supply current i and the pre-sag voltage at the common coupling point are related by (1):

i = vpcc / (xf1 + z1) + vpcc / (xf2 + z2)    (1)

where z1 and z2 are the load impedances, xf1 and xf2 the feeder reactance magnitudes, and i the supply current.
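the sag mechanism described in this section (a fault on one feeder depressing the common-coupling voltage through the drop across the source reactance) can be sketched numerically. the per-unit impedances below are assumed placeholder values, not the paper's circuit parameters.

```python
# Illustrative sketch (assumed values, not the paper's circuit): a fault on
# feeder F1 pulls the point-of-common-coupling (PCC) voltage down through
# the source reactance, which is the sag seen by loads on feeder F2.

Es = 1.0 + 0j        # source voltage, per unit
Zs = 0.05j           # source reactance, per unit (assumed)
Zf1 = 0.02j          # faulted-feeder reactance up to the fault point (assumed)
Zload = 1.0 + 0.2j   # healthy-feeder load impedance (assumed)

# pre-fault: PCC voltage is the divider between the source reactance and load
v_pre = abs(Es * Zload / (Zs + Zload))

# during a bolted fault at the end of F1, the PCC sees the small fault-path
# impedance in parallel with the load, which depresses the PCC voltage
Zpar = (Zf1 * Zload) / (Zf1 + Zload)
v_sag = abs(Es * Zpar / (Zs + Zpar))

print(f"pre-fault {v_pre:.3f} pu, during fault {v_sag:.3f} pu")
```

with these illustrative numbers the pcc voltage drops from roughly 0.99 pu to below 0.3 pu during the fault, which is the sag depth the dvr would have to compensate by series injection.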
Engineering, Technology & Applied Science Research Vol. 10, No. 4, 2020, 5889-5895
www.etasr.com  Soomro et al.: Simulation-Based Analysis of a Dynamic Voltage Restorer Under Different Voltage Sags …

When abnormal conditions occur (a fault on feeder F_1), large load and supply currents flow. Equations (2)-(3) give the supply voltage and the fault current at the point of common coupling:

    V_pcc,sag = E_s − j X_s I_fault    (2)
    I_fault = E_s / (j(X_s + X_f1))    (3)

The voltage across F_2 is decreased because of the large voltage drop, also called the sag voltage, produced across the source reactance X_s. Figure 3 shows the equivalent circuit in which the DVR is connected and adds the needed voltage V_DVR to the critical load to compensate the sag voltage.

Fig. 2. Sag voltage calculation.
Fig. 3. Voltage injection through the DVR.
Fig. 4. Proposed one-line diagram.

Figure 4 shows the one-line diagram of the 11 kV, 50 Hz power system. A three-winding power transformer (star-grounded/delta/delta) supplies two transmission lines, which feed two distribution systems; their voltages are stepped down to 0.4 kV through distribution transformers connected in star and delta. The faulty feeder, where different faults will occur at point X, is connected through Bus-A, and the neighboring feeder, where the sensitive loads are connected, is represented by Bus-B. The DVR performance is validated by applying different types of faults with a fault duration of 130 ms.

B. Voltage Total Harmonic Distortion
Power quality (system output voltage) is commonly characterized through the total harmonic distortion (THD). A voltage THD of 5% or less is acceptable when the voltage magnitude is up to 69 kV, and the limit for a single-frequency voltage harmonic is 3% [21].
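The sag mechanism of (1)-(3) can be sketched numerically. The per-unit impedance values below are illustrative assumptions, not the paper's system data.

```python
# Illustrative sketch of the sag calculation in (1)-(3).
# All impedance values are hypothetical, chosen only to show the mechanism.
Es = 1.0              # source voltage, per unit
Xs = 0.05j            # source reactance
Xf1 = 0.10j           # reactance of the faulted feeder F1
Z_load = 1.0 + 0.2j   # load impedance seen through the healthy feeder

# Normal operation: supply current and pre-sag PCC voltage, as in (1)
I = Es / (Z_load + Xs)
V_presag = Es - I * Xs

# Bolted fault at the end of F1: the fault current of (3) produces a large
# drop across Xs, so the PCC voltage sags to the divider value of (2)
I_fault = Es / (Xs + Xf1)
V_sag = Es - I_fault * Xs         # equivalently I_fault * Xf1

print(abs(V_presag), abs(V_sag))  # sag magnitude is Xf1/(Xs + Xf1) = 2/3 pu
```

With these numbers the PCC voltage drops from roughly 0.99 pu to 0.67 pu, i.e. a one-third sag of the kind the DVR is asked to compensate.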
Equations (4)-(5) give the voltage THD calculated at the point of common coupling, where the harmonic number is symbolized by h and the phase order by p, so p = a, b, c [22, 23]:

    THD_V = (THD_Va + THD_Vb + THD_Vc) / 3    (4)
    THD_Vp = sqrt( Σ_{h≥2} V_{h,p}² ) / V_{1,p} × 100    (5)

C. Voltage Sag Indices
The quality of the sag voltage and of the recovered voltage is quantified through voltage sag indices, which are very sensitive to voltage disturbances and give a precise measure of system performance. The indices below are also discussed in the simulation results.

1) Detroit Edison Sag Score (SS)
For contracts between utilities and customers, the first utilized index was the Detroit Edison sag score, defined as [24]:

    SS = 1 − (V_a + V_b + V_c) / 3    (6)

where V_a, V_b, and V_c are the per-unit phase voltages. An SS% close to 0 after sag compensation indicates that the recovered voltage is good.

2) Voltage Sag Lost Energy Index (VSLEI)
This index gives the energy W lost when a short-duration voltage reduction is applied to the load, as shown in (7):

    W = Σ_p Σ_i T_{i,p} (1 − V_{i,p} / V_nom)^3.14    (7)

where p ∈ {a, b, c}, V_{i,p} is the phase voltage during the occurrence of the sag, and T_{i,p} is the sag duration (ms) for each phase [25]. The calculation is carried out for the three phases and the lost energy is summed over the phases individually.

3) Phase Voltage Unbalance Rate (PVUR)
PVUR [25] is given by:

    PVUR = (ΔV_max / V_avg) × 100    (8)

where ΔV_max is the maximum deviation of a phase voltage from the average phase voltage V_avg.

VII. DVR Control Algorithm
In this study, a PI controller is proposed for the DVR, and the Park (dq0) transformation method is used to generate the reference voltage, as shown in (9), for the series voltage source inverter, in order to track and maintain the load-side voltage at its nominal value. ω represents the angular frequency (rad/s).
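A minimal sketch of the abc-to-dq transformation used for reference generation, assuming the amplitude-invariant form with the zero-sequence component dropped as described in this section:

```python
import math

def abc_to_dq(va, vb, vc, wt):
    """Amplitude-invariant Park transform; wt is the PLL angle in radians.
    The zero-sequence component is neglected, as in the controller above."""
    k = 2.0 / 3.0
    vd = k * (va * math.sin(wt)
              + vb * math.sin(wt - 2 * math.pi / 3)
              + vc * math.sin(wt + 2 * math.pi / 3))
    vq = k * (va * math.cos(wt)
              + vb * math.cos(wt - 2 * math.pi / 3)
              + vc * math.cos(wt + 2 * math.pi / 3))
    return vd, vq

# A balanced 1 pu three-phase set maps to a constant (vd, vq) vector,
# which is what lets the PI controller compare against a DC reference.
w = 2 * math.pi * 50   # 50 Hz system, as in the paper
for t in (0.0, 0.005, 0.010):
    va = math.sin(w * t)
    vb = math.sin(w * t - 2 * math.pi / 3)
    vc = math.sin(w * t + 2 * math.pi / 3)
    print(abc_to_dq(va, vb, vc, w * t))   # approximately (1.0, 0.0) at every instant
```

A sag shows up as vd dropping below its reference, which is exactly the error signal the PI loop acts on.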
Through this method, the three-phase voltages are easily controlled, because the three phases are transformed into two voltage components (V_d and V_q), while the zero-sequence component of phases a, b, c is neglected. As shown in the control block diagram in Figure 5, the distortion-free reference wave is generated through a phase-locked loop (PLL) circuit [26].

    [V_d]         [ 2 sin(ωt)   2 sin(ωt − 2π/3)   2 sin(ωt + 2π/3) ] [V_a]
    [V_q] = (1/3) [ 2 cos(ωt)   2 cos(ωt − 2π/3)   2 cos(ωt + 2π/3) ] [V_b]    (9)
    [V_0]         [     1               1                  1        ] [V_c]

Fig. 5. DVR controller scheme.

The DVR controller observes the load voltage on the bus-bar and compares the load voltage components with the reference dq components. When the voltage is reduced, an error signal between the measured and the reference voltage activates the controller, and the controller output adds the needed voltage to the load. In order to obtain a distortion-free output voltage, a filter must be added; in this study an LC low-pass filter is proposed for the suppression of harmonic distortion, and a switching frequency of 5.5 kHz is considered. The pulse width modulation (PWM) technique is used in the VSI to produce three 50 Hz phases at the load terminals. IGBTs are selected as the switching devices of the VSI.

VIII. Simulation Results
The controller's performance under harmonic distortions and non-linear load is analyzed by applying three conditions in MATLAB simulation scenarios. The applied mathematical background is available in [26]. The three fault conditions applied to the system to evaluate the performance of the DVR are presented in Table I and are discussed below.
Table I. Fault conditions

    Condition       Fault type
    Condition I     Three-phase to ground fault
    Condition II    Double line to ground fault
    Condition III   Single line to ground fault

• Fault condition I: the system experiences a type A voltage sag. The sag magnitude is equal in the three phases (balanced voltage sag).
• Fault condition II: the system experiences a type E, F, or G voltage sag, depending on the transformer connection between the fault point and the bus. If the measured bus is behind a D/Yg transformer, the observed sag is of type F.
• Fault condition III: the system experiences a type B, C, or D voltage sag, depending on the magnitude and direction of the healthy phases. When a change in direction is observed with no change in magnitude, the system is under a type C voltage sag.

Table II. DVR system parameters

    S.No   Parameter                                  Value
    1      Line resistance                            1.0 Ω
    2      Line inductance                            5.8 mH
    3      Line frequency                             50 Hz
    4      Phase voltage of the load                  220 V
    5      Per-phase active power of the load         100 W
    6      Per-phase inductive power of the load      0.3 kVAr
    7      Per-phase capacitive power of the load     0.6 kVAr
    8      Turns ratio of injection transformer       1:1
    9      DC voltage                                 200 V
    10     Filter inductance                          70 mH
    11     Filter resistance                          0.2 Ω
    12     Filter capacitance                         5.0 µF

Fig. 6. Voltage vs. time in the faulty feeder at fault condition I.
Fig. 7. Uncompensated voltage vs. time at fault condition I.
Fig. 8. Voltage compensation vs. time at fault condition I.
Fig. 9. Voltage vs. time of the faulty feeder at fault condition II.

For observing voltage sags, each fault condition is applied to the system for 130 ms. Table III shows the outcome when the voltage sag is not compensated, and Table IV the outcome after compensation by the DVR. The resulting THD_V% and PVUR% comply with [25].
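The indices reported in Tables III and IV, defined in (4)-(8), can be sketched as small helpers. The per-unit voltages and harmonic magnitudes in the examples are hypothetical, not the simulated values.

```python
import math

def sag_score(va, vb, vc):
    """Detroit Edison sag score of (6), with per-unit phase voltages."""
    return 1.0 - (va + vb + vc) / 3.0

def pvur(va, vb, vc):
    """Phase voltage unbalance rate of (8): largest deviation from the
    average phase voltage, as a percentage of that average."""
    vavg = (va + vb + vc) / 3.0
    return max(abs(v - vavg) for v in (va, vb, vc)) / vavg * 100.0

def thd(v1, harmonics):
    """Per-phase voltage THD of (5): RMS of the harmonics over the
    fundamental magnitude v1, in percent."""
    return math.sqrt(sum(vh ** 2 for vh in harmonics)) / v1 * 100.0

print(sag_score(0.5, 0.5, 0.5))   # balanced 50% sag -> SS = 0.5
print(pvur(1.0, 1.0, 1.0))        # balanced voltages -> 0% unbalance
print(thd(230.0, [6.9, 4.6]))     # hypothetical harmonics -> about 3.6%
```

A well-compensated sag should drive SS toward 0 and PVUR toward 0%, which is the trend Tables III and IV exhibit.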
Thus, the proposed topology is suitable for sag voltage compensation under the three conditions (sag of 50% in condition I, 24.2% in condition II, and 1.85% in condition III). Figures 6-14 show the faulted, uncompensated, and compensated voltage waveforms under the three fault types.

Table III. System outcome after voltage sag occurrence

    Parameter    Condition I   Condition II   Condition III
    THD_Va%      4.20          3.20           0.94
    THD_Vb%      8.10          8.15           0.98
    THD_Vc%      8.61          2.90           0.44
    THD_V%       6.95          4.80           0.80
    PVUR%        8.94          16.32          1.48
    SS%          50.01         24.23          1.87
    VSLEI (W)    47.48         17.20          0.02

Table IV. System outcome after voltage sag compensation

    Parameter    Condition I   Condition II   Condition III
    THD_Va%      0.48          0.99           0.57
    THD_Vb%      1.79          1.7            0.56
    THD_Vc%      1.70          1.10           0.010
    THD_V%       1.29          1.25           0.38
    PVUR%        0.50          0.70           0.02
    SS%          1.99          0.69           0.23
    VSLEI (W)    0.0020        0.000320       0.0000020

Fig. 10. Uncompensated voltage vs. time at fault condition II.
Fig. 11. Voltage compensation vs. time at fault condition II.
Fig. 12. Voltage vs. time of the faulty feeder at fault condition III.

Figure 15 shows the voltage THD_V% as a function of the energy storage capacity: the smallest THD_V% is achieved with a 200 V DC storage voltage. The variation of SS% with the energy storage capacity is shown in Figure 16; again, 200 V DC is a suitable choice, attaining the lowest SS%. Figure 17 shows the PVUR% after compensation as a function of the storage capacity; 130 V DC is the smallest voltage able to meet IEEE Standard 112-1991, with a phase unbalance rate of less than 2%.

Fig. 13. Uncompensated voltage vs. time at fault condition III.
Fig. 14. Compensated load voltage vs. time at fault condition III.
The DVR is connected to the linear load.

Fig. 15. Variation of THD_V% versus energy storage capacity.
Fig. 16. Variation of SS% versus energy storage capacity.
Fig. 17. Variation of PVUR% versus energy storage capacity.

Table V. System outcome after voltage sag occurrence with linear and non-linear loads

    Parameter    Condition I   Condition II   Condition III
    THD_Va%      5.49          3.97           4.39
    THD_Vb%      7.49          9.68           5.67
    THD_Vc%      9.60          3.35           5.58
    THD_V%       7.52          5.67           5.19
    PVUR%        7.90          14.58          1.24
    SS%          50.63         30.99          12.00
    VSLEI (W)    47.73         18.57          0.57

Table VI. System outcome after voltage sag compensation with linear and non-linear loads

    Parameter    Condition I   Condition II   Condition III
    THD_Va%      0.83          2.39           1.17
    THD_Vb%      1.83          3.02           1.07
    THD_Vc%      1.96          1.30           1.63
    THD_V%       1.53          2.23           1.29
    PVUR%        0.51          1.35           0.13
    SS%          2.03          2.20           0.31
    VSLEI (W)    0.001         0.0069         0.000

IX. Conclusion
Voltage sag is a critical problem affecting distribution networks, causing data loss, equipment damage, production loss, and increased cost. In this paper, a dynamic voltage restorer is proposed as a cost-effective solution for voltage sag compensation that effectively protects critical loads from balanced or unbalanced voltage sags. A PI control strategy is adopted within the DVR topology, and the performance of the device with the proposed controller is analyzed by simulations in MATLAB/Simulink. From the simulation outcomes of the DVR under the three fault conditions, it is concluded that the proposed DVR scheme is effective for voltage sag compensation. The robustness of the proposed PI controller was examined under linear and non-linear loads, and it is concluded that the PI controller is most suitable for linear loads or for loads with a small level of voltage distortion.

References
[1] s. hazarika, s. s. roy, r.
baishya, and s. dey, “application of dynamic voltage restorer in electrical distribution system for voltage sag compensation,” international journal of engineering science, vol. 2, pp. 30-38, 2013. [2] f. jandan, s. khokhar, z. memon, and s. shah, “wavelet based simulation and analysis of single and multiple power quality disturbances,” engineering, technology & applied science research, vol. 9, no. 2, pp. 3909-3914, apr. 2019. [3] j. m. lozano, j. m. ramirez, and r. e. correa, “a novel dynamic voltage restorer based on matrix converters,” in 2010 modern electric power systems, sept. 20-22, 2010. [4] r. pal and s. gupta, “topologies and control strategies implicated in dynamic voltage restorer (dvr) for power quality improvement,” iranian journal of science and technology, transactions of electrical engineering, vol. 44, no. 2, pp. 581–603, jun. 2020, doi: 10.1007/s40998-019-00287-3. [5] d. patel, a. k. goswami, and s. k. singh, “voltage sag mitigation in an indian distribution system using dynamic voltage restorer,” international journal of electrical power & energy systems, vol. 71, pp. 231–241, oct. 2015, doi: 10.1016/j.ijepes.2015.03.001. [6] m. ramasamy and s. thangavel, “photovoltaic based dynamic voltage restorer with outage handling capability using pi controller,” energy procedia, vol. 12, pp. 560–569, jan. 2011, doi: 10.1016/j.egypro.2011.10.076. [7] a. chauhan and p. thakur, power quality issues and their impact on the performance of industrial machines, anchor academic publishing, 2016. [8] “ieee recommended practice for monitoring electric power quality,” in ieee std 1159-2019 (revision of ieee std 1159-2009), pp. 1-98, 13 aug. 2019, doi: 10.1109/ieeestd.2019.8796486. [9] v. k. ramachandaramurthy, a. arulampalam, c. fitzer, c. zhan, m. barnes and n. jenkins, “supervisory control of dynamic voltage restorers,” in iee proceedings generation, transmission and distribution, vol. 151, no. 4, pp. 509-516, 11 july 2004, doi: 10.1049/ipgtd:20040506. [10] e. 
babaei, m. f. kangarlu, and m. sabahi, “compensation of voltage disturbances in distribution systems using single-phase dynamic voltage restorer,” electric power systems research, vol. 80, pp. 1413-1420, 2010. [11] m. faisal, m. s. alam, m. i. m. arafat, m. m. rahman and s. m. g. mostafa, “pi controller and park's transformation based control of dynamic voltage restorer for voltage sag minimization,” 2014 9th international forum on strategic technology (ifost), cox's bazar, 2014, pp. 276-279, doi: 10.1109/ifost.2014.6991121. [12] p. t. nguyen and t. k. saha, “dynamic voltage restorer against balanced and unbalanced voltage sags: modelling and simulation”, ieee power engineering society general meeting, 2004, denver, co, usa, 2004, pp. 639-644, vol. 1, doi: 10.1109/pes.2004.1372883. [13] j. g. nielsen and f. blaabjerg, “a detailed comparison of system topologies for dynamic voltage restorers,” in ieee transactions on industry applications, vol. 41, no. 5, pp. 1272-1280, sept.-oct. 2005, doi: 10.1109/tia.2005.855045. [14] m. b. kim, g. w. moon, and m. j. youn, “synchronous pi decoupling control scheme for dynamic voltage restorer against a voltage sag in the power system,” 2004 ieee 35th annual power electronics specialists conference (ieee cat. no.04ch37551), aachen, germany, 2004, pp. 1046-1051, vol. 2, doi: 10.1109/pesc.2004.1355565. [15] j. g. nielsen, m. newman, h. nielsen, and f. blaabjerg, “control and testing of a dynamic voltage restorer (dvr) at medium voltage level,” ieee transactions on power electronics, vol. 19, no. 3, pp. 806–813, may 2004, doi: 10.1109/tpel.2004.826504. [16] c. fitzer, m. barnes, and p. green, “voltage sag detection technique for a dynamic voltage restorer,” in conference record of the 2002 ieee industry applications conference. 37th ias annual meeting (cat. no.02ch37344), oct. 2002, vol. 2, pp. 917–924 vol.2, doi: 10.1109/ias.2002.1042668. [17] l. a. moran, i. pastorini, j. dixon, and r. 
wallace, “a fault protection scheme for series active power filters,” ieee transactions on power electronics, vol. 14, no. 5, pp. 928–938, sep. 1999, doi: 10.1109/63.788498. [18] m. j. newman and d. g. holmes, “an integrated approach for the protection of series injection inverters,” ieee transactions on industry applications, vol. 38, no. 3, pp. 679–687, may 2002, doi: 10.1109/tia.2002.1003417. [19] i. y. chung, d. j. won, s. y. park, s. i. moon, and j. k. park, “the dc link energy control method in dynamic voltage restorer system,” international journal of electrical power & energy systems, vol. 25, no. 7, pp. 525–531, sep. 2003, doi: 10.1016/s0142-0615(02)00179-5. [20] d. francis and t. thomas, “mitigation of voltage sag and swell using dynamic voltage restorer,” in 2014 annual international conference on emerging research areas: magnetics, machines and drives (aicera/icmmd), jul. 2014, pp. 1–6, doi: 10.1109/aicera.2014.6908218. [21] “ieee recommended practice and requirements for harmonic control in electric power systems,” ieee std 519-2014 (revision of ieee std 519-1992), pp. 1–29, jun. 2014, doi: 10.1109/ieeestd.2014.6826459. [22] s. h. e. a. aleem, a. f. zobaa, and a. c. m. sung, “on the economical design of multiple-arm passive harmonic filters,” in 2012 47th international universities power engineering conference (upec), sep. 2012, pp. 1–6, doi: 10.1109/upec.2012.6398664. [23] m. balci, s. abdel aleem, a. zobaa, and s. sakar, “an algorithm for optimal sizing of the capacitor banks under nonsinusoidal and unbalanced conditions,” recent advances in electrical & electronic engineering, vol. 7, pp. 116–122, dec. 2014, doi: 10.2174/2352096507666140925202729. [24] r. a. barr, v. j. gosbell, and i. mcmichael, “a new saifi based voltage sag index,” in 2008 13th international conference on harmonics and quality of power, oct. 2008, pp. 1–5, doi: 10.1109/ichqp.2008.4668806.
[25] “ieee standard test procedure for polyphase induction motors and generators,” ieee std 112-1991, 1991, doi: 10.1109/ieeestd.1991.114383. [26] s. h. e. a. aleem, a. m. saeed, a. m. ibrahim, and e. e. a. el-zahab, “power quality improvement and sag voltage correction by dynamic voltage restorer,” international review of automatic control (ireaco), vol. 7, no. 4, pp. 386-393, jul. 2014, doi: 10.15866/ireaco.v7i4.2160.

Engineering, Technology & Applied Science Research Vol. 9, No. 2, 2019, 4053-4056
www.etasr.com  Al-Khazaal et al.: Study on the Removal of Thiosulfate from Wastewater by Catalytic Oxidation

Study on the Removal of Thiosulfate from Wastewater by Catalytic Oxidation

Abdulaal Z. Al-Khazaal
Chemical Engineering and Materials Engineering Department, Northern Border University, Arar, Saudi Arabia
abdulaal.alkhazaal@nbu.edu.sa

Farooq Ahmad
Chemical Engineering and Materials Engineering Department, Northern Border University, Arar, Saudi Arabia
farooq.amin@nbu.edu.sa

Naveed Ahmad
Chemical Engineering and Materials Engineering Department, Northern Border University, Arar, Saudi Arabia
naveed.ahmad@nbu.edu.sa

Abstract—Wastewater streaming from industrial plants, including petroleum refineries, chemical plants, pulp and paper plants, mining operations, electroplating operations, and food processing plants, can contain offensive substances such as cyanides, sulfides, sulfites, thiosulfates, mercaptans, and disulfides that tend to increase the chemical oxygen demand (COD) of the streams. In the present work, the removal of thiosulfate from wastewater by catalytic oxidation using aluminum oxide as a catalyst was studied. Four main factors were considered, namely the initial thiosulfate concentration, the hydrogen peroxide concentration, the amount of catalyst, and the operating temperature.
The analysis of thiosulfate and sulfate was carried out using a UV-visible spectrophotometer. An empirical rate equation was developed.

Keywords—thiosulfate oxidation; kinetic model; catalytic oxidation; wastewater treatment

I. Introduction
Sulfur can be found in a variety of oxidation states, of which −2 (sulfide and reduced organic sulfur), 0 (elemental sulfur), and +6 (sulfate) are the most significant in nature. Thiosulfate is often produced by the incomplete oxidation of sulfides (pyrite oxidation) or the partial reduction of sulfate. The reactions that lead to thiosulfate formation are [1]:

    8H2S + 4O2 → S8 + 8H2O    (1)
    H2S + 3/2 O2 → 2H+ + SO3^2−    (2)
    SO3^2− + 1/8 S8 → S2O3^2−    (3)
    H2S + 2O2 → 2H+ + SO4^2−    (4)
    2S^2− + 3/2 O2 → S2O3^2−    (5)
    SO4^2− + 5H+ → 1/2 S2O3^2− + 5/2 H2O    (6)

Thiosulfates are stable in neutral or alkaline solutions but not in acidic solutions, where they decompose to sulfite and sulfur, the sulfite being dehydrated to sulfur dioxide:

    S2O3^2−(aq) + 2H+(aq) → SO2(g) + S(s) + H2O    (7)

This phenomenon can cause a rise of the water pH and an increasing enrichment of S^2− and S^0 in the pore water, finally causing plant death and inhibition of nitrification by sulfide toxicity, while thiosulfates are a very aggressive species with respect to metal corrosion [2]. Based on [3], the concentration of thiosulfate in refinery wastewater is about 174 ppm, while for other wastewaters it is about 2050 ppm. According to [4], the traditional method of treating thiosulfate-containing wastes is oxidation to sulfate, which can be accomplished either chemically, using oxidizing agents (e.g. peroxide) and oxidation catalysts (e.g. complexed copper(II)), or biologically, using aerobic processes such as activated sludge systems. Several aerobic and anaerobic microorganisms have the ability to utilize thiosulfate and other sulfur species as sources of energy.
The most important aerobic microorganisms are sulfur chemolithoautotrophic bacteria (sulfur autotrophs), which use various reduced forms of sulfur as energy sources (electron donors) and oxygen as the electron acceptor. The final product of the complete dissimilatory oxidation of these compounds is sulfate. In [5], the authors removed sulfide from wastewater by oxidizing it to sulfate using hydrogen peroxide in the presence of an iron oxide catalyst. They synthesized the iron oxide catalyst using the sol-gel technique and characterized it with different analytical techniques, including scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FT-IR), thermal gravimetric analysis (TGA), and energy dispersive X-ray spectroscopy (EDX). Their results were further explained by a kinetic study. The authors in [6] studied the treatment of thiosulfate-containing wastewater by photo-oxidation: they treated the wastewater by aeration in the presence of UV light, found that the treatment process is thereby accelerated, and explained their results by carrying out a kinetic study of the process. The authors in [7] explored the treatment of sulfidic wastewater by aeration in the presence of ultrasonic vibration and found that the oxidation of sulfide was faster; the kinetics of the treatment process was also studied. The author in [8] studied the oxidation of thiosulfate in the presence of ultrasonic vibration at different initial thiosulfate concentrations, different ultrasonic vibrations, and different hydrogen peroxide dosages, and the results were further explained by studying the kinetics of the treatment process.

Corresponding author: Abdulaal Z. Al-Khazaal
II. Experimental Work

A. Materials
Solutions with different thiosulfate concentrations were prepared synthetically. Sodium thiosulfate of 60% purity was used to prepare the test solutions, with distilled water as the solvent. The changes in thiosulfate and sulfate concentrations were examined with a DR-5000 UV-visible spectrophotometer; the analysis of sulfate was carried out using barium chloride as a reagent [5-8].

B. Catalyst Preparation
Aluminum oxide (Al2O3) catalyst was prepared from aluminum chloride and ammonium hydroxide by the sol-gel technique and was used as the catalyst for the treatment of thiosulfate-containing wastewater. The prepared aluminum oxide was dried in an oven for 24 h before use [5].

C. Methodology
Oxidation of thiosulfate was carried out in a jacketed glass reactor. The experiments were divided into four groups. First, the process was explored at different initial thiosulfate concentrations (700 ppm, 1400 ppm, 2100 ppm, and 2800 ppm). Second, the process was carried out at different hydrogen peroxide loadings (0.8, 1.6, 2.4, and 3.2 molar). Third, the treatment process was explored at different catalyst loadings (0.25 g, 0.50 g, 1.0 g, and 1.5 g). Finally, the effect of temperature was explored by repeating the process at 35 °C, 45 °C, and 55 °C.

III. Results and Discussion

A. Effect of Initial Thiosulfate Concentration
The process was carried out at four different initial thiosulfate concentrations. The catalyst and H2O2 amounts were kept constant throughout the experiments at 0.25 g and 0.8 molar, respectively, and the experiments were conducted at room temperature (25 °C). Figures 1 and 2 show that the rate of sulfate formation and/or thiosulfate consumption increases as the initial concentration of thiosulfate increases.
An increase in the initial thiosulfate concentration increases the amount of thiosulfate ions available to react and form sulfate ions, so the formation of sulfate ions increases. This finding shows that the initial thiosulfate concentration is one of the main components of the rate law of this advanced oxidation process.

B. Effect of Hydrogen Peroxide Concentration
The treatment process was also explored at different hydrogen peroxide loadings; the results are shown graphically in Figures 3 and 4. The sulfate formation and/or thiosulfate consumption increases as the concentration of H2O2 increases from 0.8 to 1.6 molar, while the formation of sulfate ions is lower when the H2O2 concentration increases further to 2.4 and 3.2 molar. These results show that the optimum H2O2 concentration in this experiment is 1.6 molar, at which most of the thiosulfate ions were consumed. The reason is that H2O2 mainly functions as the agent producing the hydroxyl radicals that react in the thiosulfate oxidation process. Furthermore, the slope of the graph, which indicates the rate of sulfate formation, increases quickly over the first three readings for all H2O2 concentrations and then declines and becomes constant.

Fig. 1. Thiosulfate concentration versus time for different initial thiosulfate concentrations. Catalyst loading: 0.25 g, H2O2: 0.8 M.
Fig. 2. Concentration of sulfate formed versus time for different initial thiosulfate concentrations. Catalyst loading: 0.25 g, H2O2: 0.8 M.
Fig. 3. Concentration of sulfate formed versus time for different H2O2 molar concentrations. Catalyst loading: 0.25 g, initial thiosulfate concentration: 1400 ppm.

C. Effect of Catalyst Loading
The effect of catalyst loading was investigated. Figures 5 and 6 show the treatment process for catalyst loadings ranging from 0.25 to 1.5 g.
The sulfate formation and/or thiosulfate consumption was found to increase with increasing amount of catalyst, reaching maximum values of 1320 mg/L and 630 mg/L respectively, and then became constant. This may be due to the fact that, once most of the thiosulfate ions have reacted, the addition of larger quantities of catalyst has no further effect on the reaction.

Fig. 4. Concentration of thiosulfate versus time for different H2O2 molar concentrations. Catalyst loading: 0.25 g, initial thiosulfate concentration: 1400 ppm.
Fig. 5. Concentration of sulfate formed versus time for different catalyst amounts. H2O2: 0.8 M, initial thiosulfate concentration: 1400 ppm.
Fig. 6. Concentration of thiosulfate versus time for different catalyst amounts. H2O2: 0.8 M, initial thiosulfate concentration: 1400 ppm.

D. Effect of Temperature
The reaction temperature is important to the efficiency of the reaction. Figures 7 and 8 describe the influence of temperature on the treatment process. When the temperature increases from 35 °C to 45 °C, the sulfate formation shows an increment only over the first three readings; afterwards it starts to decrease at 35 °C and remains constant at 45 °C. However, when the temperature is further increased to 55 °C, the sulfate formation and/or thiosulfate consumption is suppressed. From these results it can be concluded that the optimum operating temperature for this H2O2 advanced oxidation process with the Al2O3 catalyst is between 25 °C and 45 °C.

Fig. 7. Concentration of sulfate formed versus time at various temperatures. H2O2: 0.8 M, initial thiosulfate concentration: 1400 ppm, catalyst loading: 0.25 g.
Fig. 8. Concentration of thiosulfate versus time at various temperatures.
H2O2: 0.8 M, initial thiosulfate concentration: 1400 ppm, catalyst loading: 0.25 g.

E. Kinetic Study
An empirical rate equation was developed, and the orders with respect to each reactant and for the overall reaction were determined. The reactants were hydrogen peroxide (H2O2) and the thiosulfate ion (S2O3^2−). When a catalyst is used, the reaction rate may be stated on a catalyst weight basis. Thus, the rate of the reaction becomes:

    rate = k [S2O3^2−]^α [H2O2]^β [Al2O3]^γ    (8)
    k = A e^(−Ea/RT)    (9)

where k is the reaction rate constant and α, β, and γ are the reaction orders with respect to S2O3^2−, H2O2, and Al2O3, respectively. From (8)-(9), the rate is proportional to [S2O3^2−]^α, [H2O2]^β, and [Al2O3]^γ. Writing the partial rates r_α, r_β, and r_γ for each factor:

    r_α = [S2O3^2−]^α    (10)
    ln r_α = α ln [S2O3^2−]    (11)
    r_β = [H2O2]^β    (12)
    ln r_β = β ln [H2O2]    (13)
    r_γ = [Al2O3]^γ    (14)
    ln r_γ = γ ln [Al2O3]    (15)

Using the above equations, the rates and the orders with respect to each reactant were calculated. The orders of reaction with respect to thiosulfate, hydrogen peroxide, and catalyst were found to be 0.421, 0.556, and 0.558, respectively. The order of reaction with respect to hydrogen peroxide was found by plotting the rate of reaction against the concentration of hydrogen peroxide on logarithmic axes, as shown in Figure 9; the slope of the plot gives the order of reaction with respect to hydrogen peroxide. Using the Arrhenius equation, the activation energy was found to be 3507 kJ/mol.

Fig. 9. Plot for the reaction order with respect to H2O2.
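The log-log slope procedure of (10)-(15) can be sketched with a simple least-squares fit. The rate data below are synthetic, generated from the paper's reported H2O2 order of 0.556, so the fit simply recovers that exponent; the measured rates would be used in practice.

```python
import math

def reaction_order(concentrations, rates):
    """Least-squares slope of ln(rate) versus ln(concentration),
    i.e. the reaction order as in (11), (13), and (15)."""
    xs = [math.log(c) for c in concentrations]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic rate data obeying rate = k * C**beta exactly (k is arbitrary)
k, beta = 2.0, 0.556
conc = [0.8, 1.6, 2.4, 3.2]                  # the H2O2 loadings used above, molar
rate = [k * c ** beta for c in conc]
print(round(reaction_order(conc, rate), 3))  # -> 0.556
```

The same helper applied to the thiosulfate and catalyst series would yield the other two exponents of (8).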
IV. Conclusion
Thiosulfate can be removed from wastewater by catalytic oxidation using hydrogen peroxide as the oxidant. The following points can be concluded from this study:
• The reaction rate depends on the reactant species, which in this work are thiosulfate and H2O2. Compared to thiosulfate, however, the concentration of hydrogen peroxide has a larger effect on the reaction rate.
• The amount of catalyst also plays an important role in supporting the rate of formation of sulfate ions: increasing the amount of catalyst increases the rate of reaction.
• Temperature affects the treatment process; however, an increase in temperature above 45 °C has no positive effect on its progress. The optimum temperature of the treatment process is in the range of 25 °C to 45 °C.
• The treatment process is also affected by the dose of hydrogen peroxide. Beyond a certain limit, a higher hydrogen peroxide dosage only increases the level of hydroxyl ions and has no beneficial effect on the treatment process. The optimum dosage of hydrogen peroxide in the current study was found to be 1.6 M.
• The rate of the reaction is also influenced by the initial thiosulfate concentration: an increase in the initial concentration of thiosulfate increases the rate of the reaction.
• The orders of reaction for all reactants were found to be positive, which means that an increase in their initial concentrations will increase the rate of the reaction.

References
[1] r. h. dinegar, r. h. smellie, v. k. la mer, “kinetics of the acid decomposition of sodium thiosulfate in dilute solutions”, journal of the american chemical society, vol. 73, pp. 2050-2054, 1951
[2] t. yan, removal of cyanide, sulfides and thiosulfate from ammonia containing wastewater by catalytic oxidation, us patent no. 5360552, 2001
[3] dionex, determination of thiosulfate in refinery and other wastewaters, application note 138, dionex, 2001
[4] d. c. schreiber, s. g.
Engineering, Technology & Applied Science Research Vol. 9, No. 2, 2019, 3959-3964
www.etasr.com

Real-Time Water Quality Monitoring for Small Aquatic Area Using Unmanned Surface Vehicle

Alexander T. Demetillo
College of Engineering and Information Technology, Caraga State University, Butuan City, Philippines
atdemetillo@carsu.edu.ph

Evelyn B. Taboada
School of Engineering, University of San Carlos, Cebu City, Philippines
evelynbtaboada@gmail.com

Abstract—Most developing countries depend on conventional water quality monitoring methods, which are usually expensive, complicated, and time-consuming. In recent years, stationary and portable water quality monitoring stations and mobile surface vehicles have increased the use of on-site water measurements and monitoring. The former has the disadvantage of a small coverage area, while the latter suffers from cost and operational complexity. This paper addresses these issues by placing the materials and equipment used in fixed online water quality monitoring on a customized, low-cost unmanned surface vehicle (USV). Measurements are taken automatically by the equipment onboard the USV, transmitted wirelessly to a PC-based remote station or nearby stations, and saved there in a dedicated database. The overall system comprises a commercial water quality sensor, GSM and ZigBee modules for wireless communication, a low-cost mobility platform, and a location/positioning system. During testing, all captured data such as water quality parameters, location, and other essential parameters were collated, processed and stored in a database system. Relevant information from the USV can be viewed on a smartphone or a computer. The USV was also tested in unmanned water quality measurements along a pre-inputted navigation route, showing good results in navigation and data transmission.
Water bodies with calm water, such as lakes and rivers, can use the USV either in stand-alone mode or as part of a networked sensor system.

Keywords-unmanned surface vehicle; water quality monitoring; wireless transmission

I. INTRODUCTION
Because of population increase, industrialization and climate change, water demand is increasing at an alarming rate [1-3]. The sources of potable water are limited and vulnerable, so the need for water quality monitoring in rivers, lakes, and other freshwater bodies is becoming more pressing [4]. There is a growing demand for mechanisms that enhance the effectiveness of existing water quality monitoring methods. Water quality information gathering mostly relies on traditional or manual methods, for reasons ranging from lack of technological capacity to human resource and financial constraints [5-7]. These methodologies are usually accurate and inexpensive, but suffer from various disadvantages, such as the loss of vital information during transport, limited samples, and poor cost-effectiveness. They cannot perform real-time monitoring, which could mitigate or prevent hazardous events in a particular area [8]. Nowadays, some advanced countries perform water quality monitoring using innovative technologies, such as electrochemical sensors that provide water quality information in real time [10-11], maximizing the advantages of recent technology by combining electrochemical sensors with wireless communication technology [11]. Such a system automatically senses data, transmits them to a notification node, and raises an alarm if the parameters exceed their limits. The authors in [12] successfully demonstrated the deployment of a wireless sensor network for water quality monitoring, with a mechanism to reduce traffic between base and remote stations, wherein stationary stations could detect essential water parameters in real time.
Gathered data were transmitted every five minutes to the base station through multi-hop routing using a flooding routing protocol. However, the stationary platform has the disadvantages of the high cost of covering a whole study area [13], the need to provide sensors at every sampling point, and lack of mobility [14]. Hence, the development of a mobile technology that retains the water quality monitoring capability of stationary sensors while adding mobility to enhance coverage, effectiveness, and efficiency is necessary [15]. USV technology with sensors on board can house more research tools, thus improving water quality monitoring methods. Mobile vehicles in a marine or aquatic area can further increase the coverage of an automated water quality monitoring system through their ability to move from one location to another, increasing the spatial and temporal measurement capability of the whole system. They can also provide protection (housing and floatation) for the electronic equipment needed for monitoring and for sampling the research area. With the advancement of computational ability, the options for monitoring equipment on board a USV have increased through the utilization of new and innovative sampling methods [17, 18]. This has led to an increasing variety of approaches, designs, and implementations of mobile vehicles for water quality monitoring (WQM). Among the many types of water surface vehicles for water and environmental monitoring, autonomous underwater vehicles (AUVs), unmanned surface vehicles (USVs), and autonomous surface vehicles (ASVs) are the most widely used [18].

Corresponding author: Alexander T. Demetillo
ASVs have the capability of large-area monitoring with worry-free unmanned navigation, which is the ideal scenario for water quality monitoring, but come at a high cost [19], while AUVs can be handy tools, especially in dangerous areas, but also carry a high price along with complex operation and setup [20]. USVs offer the advantages of being less expensive and simpler to supervise in comparison, while they are easy to customize for hybrid operation depending on the actual situation of the study area, such as obstacles or floating objects that can hardly be detected by automation. Several aquatic mobile vehicles have been developed for water quality monitoring with enhancements such as customized platforms fitted to the actual conditions of the area [21], communication systems [22], and energy systems and mechanisms to cater for the much-needed power of the vehicle [24, 25]. Known USV projects for water quality monitoring include bathymetric surveying [25], data acquisition on large fish and marine mammal movements [26], research on marine inhabitant activities [28-30], coastal environmental monitoring and pollutant tracking [13], emergency response and damage survey [30], and data acquisition and sampling for sea water and air [31]. The authors in [33] focused on the improvement of surface vehicle platforms, communications and coordination, propulsion, control, and other crucial aspects. They covered basic WQM parameters ranging from temperature to sodium, with data wirelessly transmitted to a nearby computer system for storage. The authors in [34] proposed another low-cost USV for water quality monitoring, with emphasis on inland water resources, while the authors in [35] studied a multi-robot system approach to monitoring coastal waters, rivers, and lagoons in real time, with added features for the monitoring of heavy metal concentrations.
Measurements of important marine parameters, such as the partial pressure and sea-air fluxes of CO2, were conducted in [36]. The USV in [37] collects water samples at different depths, places them in different containers, and brings them back for laboratory analysis. A remote-controlled watercraft with an automatic calibrator for sensors that need frequent calibration, such as pH, was studied in [38], and low-cost bathymetry and depth monitoring in real time was performed in [25]. Small water bodies like lakes and rivers usually have shallow waters that are difficult and too expensive for an environmental ship to monitor. Operation in shallow waters requires a low draft and a protection mechanism for the propellers. A USV is able to operate in shallow waters with floating materials and plants such as water lilies, fish pens, and other barriers. Constraints on its total weight allow only a minimalistic payload onboard, such as sensor, communication and navigational equipment with their electronics. The USV aims to provide a low-cost method capable of delivering information about water physicochemical parameters in real time. This real-time monitoring can aid the traditional or conventional methods of water quality monitoring, which have high sensitivity but often perform unsatisfactorily because of budgetary and other constraints. USV utilization would make up the continuous monitoring systems essential for the investigation and characterization of water quality parameters, such as pH and temperature, which are highly variable. Received data can be further analyzed and may serve as input to decision making. This study focuses on the design of an unmanned mobile vehicle for water quality monitoring, considering the actual requirements and scenarios of the target area, Lake Mainit, Philippines. Its main features are its affordability and adaptability to the field of study.
II. MATERIALS AND METHODS
The primary goal of the USV is to provide an alternative method of water quality monitoring in dangerous areas and high-risk jobs: to collect data in hard-to-reach areas [19] and to improve the data collection frequency, which, if done manually, would be an expensive and laborious endeavor. Figure 1 shows the diagram of the proposed system, in which data collection is automated. Figure 2 shows the USV prototype during its first testing. Boat materials available in the area were used for its body, its motor and controllers are commercially available, and the water quality sensors are from Atlas Scientific (pH and temperature), making this design ready for replication. To keep it affordable, open-source software was utilized. This software receives and accumulates the data acquired from the onboard sensors of the USV, serves as an input interface for navigational operation setup, and downloads the predefined USV route. The system is divided into four subsystems, namely the USV platform, the automated sensor system, the application software, and a base station.

Fig. 1. Diagram of mobile water quality monitoring.
Fig. 2. USV prototype.

A. Construction
Most of the materials used in the catamaran-type USV are available online or in the local market. The main goal of this research is to build a floating, mobile platform that can perform water quality monitoring in a sizeable aquatic area that is too large for the deployment of stationary sensors and too risky and difficult for manual sampling. The USV design considers affordability, replicability, and flexibility to cater for the actual field situation. The USV was designed and built in the Center for Robotics, Automation and Fabrication Technology (CRAFT) of Caraga State University in Butuan City, Philippines.
It is an enhancement of a previous WQM project which used a buoy as floating material at a permanent location. The area to be covered is set to 5 km² per day, with the need to sample/measure water at a depth of at least 1 m. These requirements reflect the conditions of Lake Mainit in the Caraga region of the Philippines, where mining activities are rampant. They led to a minimum speed of 1 knot (about 0.5 m/s) and a maximum weight of 40 kg. A catamaran-type construction was chosen for its proven stability on water, which allows easy control. The USV can be operated by remote control or autonomously, depending on the location of the area and the distance to be covered. The operator might opt for remote-controlled operation if the area is small or there are obstacles, floating materials, or species that might damage the USV propeller, casing, or onboard sensors. Automatic operation suits considerable area coverage with no obstructions or floating objects. The operator can design a mission plan and define the coordinates of the sampling locations. With these, deployment makes data gathering unmanned and automated in dangerous and difficult-to-reach areas.

B. Electronic Module
In terms of electronic components, the USV system comprises two subsystems, the USV unit and the ground control station, each with corresponding electronic equipment. The ground control station comprises a computer system (laptop/PC) and wireless transceivers linking to the USV. It also hosts the path/direction application program used to update the USV path and the sampling points if needed. The communication module comprises telemetry, XBee, and RC modules. The data related to the mobility of the USV are handled by the ArduPilot using telemetry.
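The coverage and speed requirements above can be sanity-checked with simple arithmetic. The sketch below computes the track length and travel time for a lawnmower (parallel-transect) survey of the 5 km² target at the 0.5 m/s minimum speed; the transect spacing is an assumed, illustrative value, not a figure from the paper.

```python
def lawnmower_track_length(area_m2, spacing_m):
    # Total transect length needed to sweep an area with parallel
    # lines spaced `spacing_m` apart (turn lengths ignored).
    return area_m2 / spacing_m

def survey_hours(area_m2, spacing_m, speed_mps):
    # Time to cover the track at a constant speed, in hours.
    return lawnmower_track_length(area_m2, spacing_m) / speed_mps / 3600.0

if __name__ == "__main__":
    area = 5e6       # 5 km^2 daily coverage target
    speed = 0.5      # minimum speed, m/s (about 1 knot)
    spacing = 250.0  # assumed transect spacing, m
    print(lawnmower_track_length(area, spacing))       # 20000.0 m of track
    print(round(survey_hours(area, spacing, speed), 1))  # 11.1 h
```

With a 250 m spacing, the 5 km² target fits in roughly half a day of travel time at the minimum speed; halving the spacing doubles the track length and the survey time.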
The XBee module transmits the gathered data to the nearest station, while the remote controller (RC) module is used for the manual navigation control of the USV. A customized mission planner, built on an open-source program powered by the Arduino IDE, controls the navigational operation of the USV. For continuous operation, a mission plan must be downloaded to the microcontroller before the start of the mission. Automated operation uses a compass board and a GPS module to follow the predefined navigational route. Data sensing follows the same methodology as with the stationary sensors: the sensor determines the value and, after proper conditioning and boosting, transfers it to the microcontroller for preprocessing, after which it is sent to the wireless transceiver for transmission to the nearest stations or predefined cellphone numbers. Wireless transmission is done through XBee technology for short distances and a GSM/GPRS transceiver for long distances. The Arduino IDE is the software used to configure the microcontroller (Arduino Mega 2560), the XBee short-distance communication, and the GSM/GPS setup for long-distance transmission. The XBee module provides wireless connectivity to devices using end-point solutions; it uses the IEEE 802.15.4 networking protocol, which is common in low-power clustered communication setups. For long-distance transmission and reception of data, a GSM module is used. It has a library which lets an Arduino board perform the same operations as an ordinary GSM-powered cellphone, such as receiving and sending messages and connecting to the local telecommunication companies. The GSM/GPRS transceiver has a built-in modem that facilitates data transfer from a serial port to the GSM network. At the base station, the data are received by another XBee at the controller, and an open-source program processes them on the Microsoft Visual Studio platform to provide user-friendly display and storage.
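The paper does not give the on-air message format, so the sketch below assumes a simple comma-separated frame with an additive checksum, only to illustrate how a reading (timestamp, position, pH, temperature) could be packaged on the USV side and validated at the base station before storage.

```python
def frame_reading(ts, lat, lon, ph, temp_c):
    # Build a CSV payload and append a modulo-256 checksum,
    # "$WQ,...*<hex sum>". The format itself is hypothetical.
    body = "WQ,{},{:.6f},{:.6f},{:.2f},{:.2f}".format(ts, lat, lon, ph, temp_c)
    checksum = sum(body.encode("ascii")) % 256
    return "${}*{:02X}".format(body, checksum)

def parse_frame(frame):
    # Validate the checksum and return the fields, or None if corrupt.
    if not (frame.startswith("$") and "*" in frame):
        return None
    body, _, tail = frame[1:].rpartition("*")
    if sum(body.encode("ascii")) % 256 != int(tail, 16):
        return None
    tag, ts, lat, lon, ph, temp_c = body.split(",")
    return {"ts": ts, "lat": float(lat), "lon": float(lon),
            "ph": float(ph), "temp_c": float(temp_c)}

if __name__ == "__main__":
    f = frame_reading("2019-03-01T09:15:00", 9.449167, 125.529722, 7.42, 28.6)
    print(f)
    print(parse_frame(f)["ph"])  # 7.42
```

A checksum of this kind lets the base station silently drop frames corrupted on the radio link instead of storing bad rows.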
A triple-axis magnetometer compass board HMC5883L (Honeywell) and a NEO-6M GPS module (u-blox) control the navigation system of the USV, designed to cover all functions of unmanned operation using a pre-programmed mission plan. The body of the USV platform uses marine plyboard, painted to safeguard the onboard electronic components. A wooden enclosure houses the battery and other electronic components and is mounted on a catamaran-type boat 1.5 m long (from head to tail) and 1 m wide. The USV can carry a maximum payload of 40 kg. Finally, the wireless transceiver and sensor calibration were completed before testing the whole system in the actual application. The water sensor unit is the primary component of the mobile water quality monitoring system. This research employed two sensors from the Atlas Scientific company for measuring pH and temperature. With the built-in flexibility of the Arduino microcontroller, more electrodes can be added depending on the needs. The pH and temperature sensors have their own interface circuits, also provided by the manufacturer; only the temperature sensor is connected directly to an analog pin of the microcontroller. A customized motherboard connects all electronic components to the microcontroller. The board also accommodates other modules and circuits, with a feature to isolate each sensor individually and eliminate noise issues. To maintain accuracy, the electrodes are calibrated to the manufacturer's standards using their own calibration solutions. The microcontroller unit is the main part of the USV and makes it unique among other aquatic mobile vehicles. It has pins which can expand its utilization, and comprises the microcontroller hardware and a software program that coordinates the operation of the microcontroller with the rest of the USV parts. This research uses the Arduino Mega 2560, the most expandable Arduino microcontroller, to accommodate the capabilities and future expansion of the system.
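Waypoint following with a compass and GPS, as described above, reduces to repeatedly computing the distance and bearing from the current fix to the next mission coordinate. A minimal great-circle sketch using the standard haversine and initial-bearing formulas (the coordinates are illustrative, not the actual Lake Mainit mission plan):

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, m

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two GPS fixes.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    # Compass heading (0-360 deg) to steer toward the waypoint.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

if __name__ == "__main__":
    # Hypothetical current fix and next waypoint.
    here = (9.4500, 125.5300)
    wp = (9.4600, 125.5300)  # ~1.11 km due north
    print(round(haversine_m(*here, *wp)))
    print(initial_bearing_deg(*here, *wp))  # 0.0 (due north)
```

The autopilot would steer to close the gap between this bearing and the magnetometer heading, and declare the waypoint reached once the distance falls below a threshold.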
Like earlier variants of the Arduino microcontroller, it is an open-source electronics prototyping platform with flexible, easy-to-use hardware and a user-friendly programming environment [39]. The USV preprocesses the data using the preprogrammed commands downloaded to the microcontroller. The software program uploaded into the microcontroller memory sets the sensor node to measure the water quality parameters at predetermined time intervals. The data are always copied to an SD card before transmission, as backup storage.

C. Software
The software for all USV electronic modules controlled by the microcontroller is written in the Arduino IDE, while Microsoft Visual Studio was used to develop the Windows application that interfaces the USV output to the computer systems in the base station. Figure 3 shows the flowchart of the USV software. The Windows application receives data from the serial port that hosts the communication link to the USV. First, the microcontroller initializes all the components and waits for the GPS to be ready before it navigates, using either the preprogrammed path or remote control.

Fig. 3. USV program flowchart.

In RC operation, all commands come from the manual input of the operator, which is converted into a string of commands and processed by the microcontroller for execution (direction, points of sampling, etc.). In the autonomous mode, the microcontroller sets the destination points, navigates, gets the current position of the USV from the GPS, reads the sensors, and sends and logs data. With the destination points encoded, the microcontroller navigates until it reaches the home location.

D. USV Datalogger
For easy data transfer from the USV to the base station, a datalogger was designed.
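The datalogger's core job, as described here and in the next paragraphs, is to turn incoming serial lines into a reviewable table. The actual application is a Visual Studio Windows program; the Python sketch below only mirrors the idea, and the comma-separated field layout is an assumption, not the real record format.

```python
COLUMNS = ("time", "lat", "lon", "ph", "temp_c")

def parse_line(line):
    # One incoming record per line, comma-separated in COLUMNS order.
    parts = [p.strip() for p in line.strip().split(",")]
    if len(parts) != len(COLUMNS):
        return None  # drop malformed records
    return dict(zip(COLUMNS, parts))

def tabulate(rows):
    # Render rows as fixed-width text, header first.
    widths = [max(len(c), *(len(r[c]) for r in rows)) for c in COLUMNS]
    fmt = "  ".join("{:%d}" % w for w in widths)
    lines = [fmt.format(*COLUMNS)]
    lines += [fmt.format(*(r[c] for c in COLUMNS)) for r in rows]
    return "\n".join(lines)

if __name__ == "__main__":
    incoming = [
        "09:15:00, 9.449167, 125.529722, 7.42, 28.60",
        "09:25:00, 9.451900, 125.531100, 7.38, 28.70",
        "garbled-record",
    ]
    rows = [r for r in (parse_line(l) for l in incoming) if r]
    print(tabulate(rows))
```

Malformed lines are simply skipped, so a dropped character on the radio link corrupts one row rather than the whole table.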
Figure 4 shows the software application interface of the USV, which was developed using Visual Studio. It displays the incoming data in tabulated form and provides a mechanism for updating the path of the USV by simply inputting values through a graphical interface. The USV's dwell time for gathering data before it moves to another location can also be changed through this logger. All necessary inputs are displayed for the user to review.

III. RESULTS AND DISCUSSION
After laboratory bench testing of the different parts of the system, the USV was assembled. Navigation trials were conducted to check whether the specifications were met, and a series of tests was also conducted for its autonomous navigation. Laboratory tests included measurements of the quality of communication between the control unit and the USV and of interruptions of the radio link. The results of the connection measurements conform to the minimum standards.

Fig. 4. USV interface application.

A. Navigation Testing
Upon the construction of the vessel prototype, testing followed in the creek inside the university campus. Maneuvering capabilities, vessel stability, and navigation speed were tested. The maneuvering and stability results were outstanding, since the catamaran was stable and could rotate almost around its own axis. The average navigation speed was 1 knot, while a 180° turn was achieved in less than 15 s. The range of the radio link and the stability and behavior of the boat under water disturbances that mimic small sea waves were also tested. Tests showed that the radio link range in the open sea is about 150 m.

Fig. 5. USV's path.

B. WQM Sensor Testing
A calibration process was conducted on the electrochemical sensors in the lab to verify their accuracy and functionality. The calibration solutions, interface, and isolation circuits from Atlas Scientific were utilized to calibrate and test the pH electrodes.
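Electrode calibration of the kind described above is commonly done as a two-point fit: readings in two buffer solutions define a linear map from the raw sensor value to pH. The sketch below illustrates the idea with made-up raw values and buffer pHs; it is not Atlas Scientific's actual calibration procedure.

```python
def two_point_calibration(raw_low, ph_low, raw_high, ph_high):
    # Fit pH = slope * raw + offset through two buffer readings
    # and return the resulting conversion function.
    slope = (ph_high - ph_low) / (raw_high - raw_low)
    offset = ph_low - slope * raw_low
    return lambda raw: slope * raw + offset

if __name__ == "__main__":
    # Hypothetical raw readings in pH 4.00 and pH 7.00 buffers.
    to_ph = two_point_calibration(512.0, 4.00, 410.0, 7.00)
    print(round(to_ph(512.0), 2))  # 4.0
    print(round(to_ph(410.0), 2))  # 7.0
    print(round(to_ph(461.0), 2))  # 5.5 (midpoint)
```

Recalibrating before each deployment, as done here before field testing, simply refits these two constants against fresh buffer readings.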
For field testing, freshly calibrated electrodes from the lab were installed on the USV side platform. A series of final tests of the implemented electronic components was conducted before the actual operation. The configuration of the water quality sensor was the same as the one used in the laboratory. The USV was programmed to measure one minute after its arrival at the designated testing point, or at the points where the operator wanted to take measurements. Figure 6 shows the data received from the USV. It demonstrates the effectiveness of the system by accurately displaying important WQM parameters, including pH, temperature, time, and date. It validates that the system performs according to the specifications of the electronic components and that the USV delivers its primary function, which is to protect the components from water intrusion. The tests encountered no significant problem except the intermittent GSM signal. The display works well on PCs, laptops, and smart mobile phones. With these, the onboard water sensor and its peripherals can provide the needed resources, giving stakeholders an alternative method for conducting water quality monitoring continuously.

Fig. 6. Sample of the output of the system.

IV. CONCLUSIONS
With a changing water environment, the need for a low-cost and mobile measurement method is increasing. In this research, the design and development of a low-cost mobile vehicle as a tool to carry water sensors and transmit their results wirelessly was successfully implemented. The utilization of easy-to-find sensors, locally available materials for the USV parts, and the development of customized integrated software using open-source technology are the main advantages of the system compared with commercial USVs.
Laboratory and on-site results show the USV's capability to conduct water quality monitoring in a lake and other small bodies of water. Temperature, pH, time, and date were transmitted. In conclusion, this system has the potential to enhance reporting and information dissemination regarding the status of water quality.

ACKNOWLEDGEMENT
The authors wish to thank the Department of Science and Technology for the scholarship funding given through its Engineering Research and Development for Technology (ERDT) program, and Caraga State University for allowing the use of its laboratory and equipment.

REFERENCES
[1] J. Heath, H. P. Binswanger, "Natural resource degradation effects of poverty and population growth are largely policy-induced: the case of Colombia", Environment and Development Economics, Vol. 1, No. 1, pp. 65-84, 1996
[2] W. Mo, H. Wang, J. M. Jacobs, "Understanding the influence of climate change on the embodied energy of water supply", Water Research, Vol. 95, pp. 220-229, 2016
[3] C. Dalin, N. Hanasaki, H. Qiu, D. L. Mauzerall, I. Rodriguez-Iturbe, "Water resources transfers through Chinese interprovincial and foreign food trade", Proceedings of the National Academy of Sciences, Vol. 111, No. 27, pp. 9774-9779, 2014
[4] M. V. Japitana, E. V. Palconit, A. T. Demetillo, M. E. C. Burce, E. B. Taboada, M. L. S. Abundo, "Integrated technologies for low cost environmental monitoring in the water bodies of the Philippines: a review", Nature Environment and Pollution Technology, Vol. 17, No. 4, pp. 1125-1137, 2018
[5] K. Kondratjevs, A. Zabasta, N. Kunicina, L. Ribickis, "Development of pseudo autonomous wireless sensor monitoring system for water distribution network", IEEE International Symposium on Industrial Electronics, pp. 1454-1458, 2014
[6] N. Nasser, A. Ali, L. Karim, S.
Belhaouari, "An efficient wireless sensor network-based water quality monitoring system", ACS International Conference on Computer Systems and Applications, Ifrane, Morocco, May 27-30, 2013
[7] S. Silva, H. N. Nguyen, V. Tiporlini, K. Alameh, "Web based water quality monitoring with sensor network: employing ZigBee and WiMax technologies", 8th International Conference on High-Capacity Optical Networks and Emerging Technologies, Riyadh, Saudi Arabia, December 19-21, 2011
[8] T. P. Lambrou, C. C. Anastasiou, C. G. Panayiotou, M. M. Polycarpou, "A low-cost sensor network for real-time monitoring and contamination detection in drinking water distribution systems", IEEE Sensors Journal, Vol. 14, No. 8, pp. 2765-2772, 2014
[9] M. Simic, L. Manjakkal, K. Zaraska, G. M. Stojanovic, "TiO2 based thick film pH sensor", IEEE Sensors Journal, Vol. 17, No. 2, pp. 248-255, 2017
[10] E. Hoque, L. H. H. Hsu, A. Aryasomayajula, P. R. Selvaganapathy, P. Kruse, "Pencil-drawn chemiresistive sensor for free chlorine in water", IEEE Sensors Letters, Vol. 1, No. 4, pp. 1-4, 2017
[11] N. A. Cloete, R. Malekian, L. Nair, "Design of smart sensors for real-time water quality monitoring", IEEE Access, Vol. 4, pp. 3975-3990, 2016
[12] W. Y. Chung, J. H. Yoo, "Remote water quality monitoring in wide area", Sensors and Actuators B: Chemical, Vol. 217, pp. 51-57, 2015
[13] W. Naeem, T. Xu, R. Sutton, A. Tiano, "The design of a navigation, guidance, and control system for an unmanned surface vehicle for environmental monitoring", Proceedings of the IMechE, Part M, Vol. 222, pp. 67-79, 2008
[14] G. Hitz, F. Pomerleau, M. E. Garneau, C. Pradalier, T. Posch, J. Pernthaler, R. Y. Siegwart, "Autonomous inland water monitoring: design and application of a surface vessel", IEEE Robotics & Automation Magazine, Vol. 19, No. 1, pp. 62-72, 2012
[15] G. Ferri, A. Manzi, F. Fornai, F. Ciuchi, C.
Laschi, "The HydroNet ASV, a small-sized autonomous catamaran for real-time monitoring of water quality: from design to missions at sea", IEEE Journal of Oceanic Engineering, Vol. 40, No. 3, pp. 710-726, 2015
[16] G. Ferri, M. Cococcioni, A. Alvarez, "Sampling on-demand with fleets of underwater gliders", 2013 MTS/IEEE OCEANS Bergen, Bergen, Norway, June 10-14, 2013
[17] T. Huntsberger, G. Woodward, "Intelligent autonomy for unmanned surface and underwater vehicles", OCEANS'11 MTS/IEEE Kona, Waikoloa, USA, September 19-22, 2011
[18] J. E. Manley, "Unmanned maritime vehicles, 20 years of commercial and technical evolution", OCEANS 2016 MTS/IEEE Monterey, Monterey, USA, September 19-23, 2016
[19] N. Wang, S. Lv, M. J. Er, W. H. Chen, "Fast and accurate trajectory tracking control of an autonomous surface vehicle with unmodeled dynamics and disturbances", IEEE Transactions on Intelligent Vehicles, Vol. 1, No. 3, pp. 230-243, 2016
[20] L. Paull, S. Saeedi, M. Seto, H. Li, "AUV navigation and localization: a review", IEEE Journal of Oceanic Engineering, Vol. 39, No. 1, pp. 131-149, 2014
[21] B. Bayat, N. Crasta, A. Crespi, A. M. Pascoal, A. Ijspeert, "Environmental monitoring using autonomous vehicles: a survey of recent searching techniques", Current Opinion in Biotechnology, Vol. 45, pp. 76-84, 2017
[22] J. Sanchez-Garcia, J. M. Garcia-Campos, M. Arzamendia, D. G. Reina, S. L. Toral, D. Gregor, "A survey on unmanned aerial and aquatic vehicle multi-hop networks: wireless communications, evaluation tools and applications", Computer Communications, Vol. 119, pp. 43-65, 2018
[23] A. Makhsoos, H. Mousazadeh, S. S. Mohtasebi, M. Abdollahzadeh, H. Jafarbiglu, E. Omrani, Y. Salmani, A.
Kiapey, "Design, simulation and experimental evaluation of energy system for an unmanned surface vehicle", Energy, Vol. 148, pp. 362-372, 2018
[24] H. Niu, Y. Lu, A. Savvaris, A. Tsourdos, "An energy-efficient path planning algorithm for unmanned surface vehicles", Ocean Engineering, Vol. 161, pp. 308-321, 2018
[25] H. Mousazadeh, J. Hamid, O. Elham, M. Farshid, K. Ali, S. Z. Yousef, M. Ashkana, "Experimental evaluation of a hydrography surface vehicle in four navigation modes", Journal of Ocean Engineering and Science, Vol. 2, No. 2, pp. 127-136, 2017
[26] C. A. Goudey, T. Consi, J. Manley, M. Graham, B. Donovan, L. Kiley, "A robotic boat for autonomous fish tracking", Marine Technology Society Journal, Vol. 32, No. 1, p. 47, 1998
[27] E. Fumagalli, M. Bibuli, M. Caccia, E. Zereik, F. Delbianco, L. Gasperini, G. Stanghellini, G. Bruzzone, "Combined acoustic and video characterization of coastal environment by means of unmanned surface vehicles", IFAC Proceedings Volumes, Vol. 19, No. 3, pp. 4240-4245, 2014
[28] L. Bittencourt, W. Soares-Filho, I. M. S. de Lima, S. Pai, J. Lailson-Brito Jr, L. M. Barreira, A. F. Azevedo, L. A. A. Guerra, "Mapping cetacean sounds using a passive acoustic monitoring system towed by an autonomous wave glider in the southwestern Atlantic ocean", Deep Sea Research Part I: Oceanographic Research Papers, Vol. 142, pp. 58-68, 2018
[29] Y. Singh, S. Sharma, R. Sutton, D. Hatton, A. Khan, "A constrained A* approach towards optimal path planning for an unmanned surface vehicle in a maritime environment containing dynamic obstacles and ocean currents", Ocean Engineering, Vol. 169, pp. 187-201, 2018
[30] R. R. Murphy, E. Steimle, C. Griffin, C. Cullins, M. Hall, K. Pratt, "Cooperative use of unmanned sea surface and micro aerial vehicles at Hurricane Wilma", Journal of Field Robotics, Vol. 25, No. 3, pp. 164-180, 2008
[31] M. Caccia, R. Bono, G. Bruzzone, G. Bruzzone, E. Spirandelli, G. Veruggio, G. Capodaglio, A. M.
stortini, “sesamo: an autonomous surface vessel for the study and characterization of the air-sea interface”, ifac proceedings volumes, vol. 36, no. 21, pp. 259-264, 2003 [32] m. blaich, s. wirtensohn, m. oswald, o. hamburger, j. reuter, “design of a twin hull based usv with enhanced maneuverability”, ifac proceedings volumes, vol. 9, no. 33, pp. 1-6, 2013 [33] j. wiora, a. kozyra, a. wiora, “towards automation of measurement processes of surface water parameters by a remote-controlled catamaran”, bulletin of the polish academy of sciences technical sciences, vol. 65, no. 3, pp. 351-359, 2017 [34] s. siyang, t. kerdcharoen, “development of unmanned surface vehicle for smart water quality inspector”, 2016 13th international conference on electrical engineering/electronics, computer, telecommunications and information technology (ecti-con), chiang mai, thailand, june 28-july 1, 2016 [35] g. ferri, a. manzi, f. fornai, b. mazzolai, c. laschi, f. ciuchi, p. dario, “design, fabrication and first sea trials of a small-sized autonomous catamaran for heavy metals monitoring in coastal waters”, 2011 ieee international conference on robotics and automation, shanghai, china, may 9-13, 2011 [36] f. p. chavez, j. sevadjian, c. wahl, j. friederich, g. e. friederich, “measurements of pco2and ph from an autonomous surface vehicle in a coastal upwelling system”, deep sea research part ii: topical studies in oceanography, vol. 151, pp. 137-146, 2018 [37] f. fornai, g. ferri, a. manzi, f. ciuchi, f. bartaloni, c. laschi, “an autonomous water monitoring and sampling system for small-sized asvs”, ieee journal of oceanic engineering, vol. 42, no. 1, pp. 5-12, 2017 [38] a. kozyra, k. skrzypczyk, k. stebel, a. rolnik, p. rolnik, m. kucma, “remote controlled water craft for water measurement”, measurement, vol. 111, pp. 105-113, 2017 [39] y. a. 
badamasi, “the working principle of an arduino”, 11th international conference on electronics, computer and computation, abuja, nigeria, september 29-october 1, 2014 microsoft word 4-2633_s_etasr_v9_n3_pp4100engineering, technology & applied science research vol. 9, no. 3, 2019, 4100-4104 4100 www.etasr.com mohammed et al.: sanitary landfill siting using gis and ahp sanitary landfill siting using gis and ahp a case study in johor bahru, malaysia habiba ibrahim mohammed department of geoinformatics, faculty of built environment and surveying, universiti teknologi malaysia, johor, malaysia mydearhabiba@yahoo.com zulkepli majid department of geoinformatics, faculty of built environment and surveying, universiti teknologi malaysia, johor, malaysia zulkeplimajid@utm.my yamusa bello yamusa school of civil engineering, universiti teknologi malaysia, malaysia, and department of civil engineering, nuhu bamalli polytechnic, zaria, nigeria yamusabello@yahoo.com mohd farid mohd ariff department of geoinformatics, faculty of built environment and surveying, universiti teknologi malaysia, johor, malaysia mfaridma@utm.my khairulnizam m. idris department of geoinformatics, faculty of built environment and surveying, universiti teknologi malaysia, johor, malaysia khairulnizami@utm.my norhadija darwin department of geoinformatics, faculty of built environment and surveying, universiti teknologi malaysia, johor, malaysia norhadija2@utm.my abstract—one of the major problems affecting municipalities is solid waste management. there is a difficulty in selecting suitable sites for waste disposal as it involves different factors to be considered before site selection. currently, waste generation in johor bahru has steadily increased over the last few years and the only existing sanitary landfill is reaching its capacity limits, which means that a new sanitary landfill site needs to be constructed. 
in this study, geographic information system (gis) and analytical hierarchy process (ahp) methods were utilized with the integration of dynamic data, such as future population and projected waste production, in order to provide suitable sites for the construction of a sanitary landfill in the study area. thirteen criteria were considered for this study, namely water bodies, soil, geology, slope, elevation, residential areas, archeological sites, airports, population, road, railway, infrastructure, and land use. ahp was used to determine the weight of each criterion from the pairwise comparison matrix. the consistency index and consistency ratio were checked and confirmed to be suitable. the weights obtained from ahp were assigned to each criterion in the gis environment using the weighted overlay analysis tool. the final potential site map was produced, and the three most suitable potential landfill sites were identified. keywords-geographic information system; analytical hierarchy process; landfill siting; sustainable solid waste management i. introduction the most significant part of urban planning is identifying a desirable location for municipal solid waste disposal [1]. however, serious environmental problems or health hazards can arise from landfill locations and disposal methods [2]. the greatest concerns associated with landfill environmental impacts are linked to effects on ground and surface water, air, soil, odor emission, and issues regarding solid waste transportation [3]. in the majority of developed and developing countries, the most common technique adopted for solid waste disposal is the sanitary landfill [4, 5]. other methods are composting and incineration, but the landfill is the oldest and most common technique. due to landfills, a lot of problems have arisen in the waste management sector [6]. there is a need for effective and efficient solid waste management to prevent public health hazards or negative environmental impact.
global population increase and rapid industrialization mean an increase in the volume of waste, and managing the waste produced by a city has become more complex [7]. disposing of waste in landfills has become an unavoidable component of the entire solid waste management framework: regardless of reduction, reuse, and recycling activities and practices, there will always be a need to transfer the remaining waste to a landfill. the goal of a site selection exercise is to find the optimum location that satisfies a number of predefined criteria. locating a suitable and sustainable sanitary landfill site is tedious, complex, and time consuming because it involves various fields of knowledge (environmental, economic, political, social, technical, and engineering). gis has been used as a system for management, manipulation, representation and analysis of geospatial data to facilitate and cut down costs in a site selection process [8]. according to [9], gis is an ideal tool because of its ability to manage large amounts of spatial data acquired from different sources. the utilization of gis for preliminary screening is normally carried out by classifying an individual map, based on selected criteria, into exactly defined classes, or by creating buffer zones around geographic features to be protected [10]. meanwhile, multi-criteria decision analysis (mcda) investigates a number of possible choices for a siting problem, taking into consideration multiple criteria and conflicting objectives [6]. among the mcda methods, ahp [11] is the most common and popular, used to identify criteria weights using a pairwise comparison matrix [12]. (corresponding author: yamusa bello yamusa) ahp is an mcda technique used to solve different decision-making problems.
it was developed with the aim of dealing with complex many-criteria decisions [13, 14]. also, ahp is a well-structured mathematical and psychological method of organizing and analyzing complex decisions [15]. this method is widely used by decision-makers and researchers in understanding problems and choosing the solution which is best for their goal [16]. technological development, globalization and population growth have accelerated the dynamics of the urbanization process in developing countries, and suitable solid waste sites must keep pace with this rapid urbanization [17]. waste generation in johor bahru has steadily increased over the last few years. waste generation in johor bahru is about 1.06kg/person/d and, according to estimations, is expected to rise to 1.4kg/person/d by 2025 [18]. solid waste produced in johor bahru rose by nearly 30% from 2005 to 2010 and is estimated to rise by 50% by 2025 [19]. in this study, gis and ahp were used with the integration of dynamic data, such as future population and projected waste production, in order to provide suitable sites for the construction of a sanitary landfill in the study area, providing a long-term solution to solid waste management. ii. materials and methods a. study area the study area is situated in the southernmost part of peninsular malaysia. it lies within latitude 1°29′0″n and longitude 103°44′0″e, with a total land area of about 220km². it covers the administrative boundary of johor bahru (jb), which is the capital city of johor, malaysia. b. population growth rate and waste generation the population of the study area was generated according to the estimated population of jb from the statistics department of malaysia [20]. table i.
population and solid waste projection for jb [21]

s/no | year | population | solid waste (tons/year)
   1 | 2010 |   815,600 | 315,556
   2 | 2015 |   952,052 | 406,574
   3 | 2020 | 1,104,843 | 520,215
   4 | 2025 | 1,493,400 | 763,128

table i shows that the projected solid waste from 2010 to 2025 increases drastically. according to the iskandar malaysia blueprint (2010), the estimated land requirement, based on a 1,000 tons per day capacity and a landfill lifespan of 15 years, is 100 hectares excluding buffer. this justifies the need to locate new sanitary landfill sites to sustainably contain solid waste. c. data collection and processing landfill siting criteria guidelines such as the integrated solid waste management blueprint for iskandar malaysia [21] and the national strategic plan for solid waste management [22] were adopted. three main criteria were used, divided into 13 sub-criteria. the data used in this study were collected from various sources: the jb administrative boundary and land use/land cover map were acquired from the iskandar regional development agency. the geological map was derived from a scanned geological map of peninsular malaysia published by the director general of geological survey, malaysia (1985). road, water body, and railway maps were extracted by digitization of the topographical map series 4551 published in 1996. all the data are geo-referenced according to the kertau rso projection system. the digital elevation model (dem) data needed for this study were accessed from the us geological survey global visualization viewer (usgs glovis) online archive at http://glovis.usgs.gov/. the aster gdem, with a spatial resolution of 30m, was used to extract elevation and slope information of the study area. erdas imagine software was used in processing and analyzing satellite images, and arcgis software for digitizing and spatial data analysis.
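the trend in table i can be checked with a quick calculation; this is a sketch in python, where the waste figures come directly from table i and the 365-day conversion used to compare against the blueprint's tons-per-day sizing basis is an assumption:

```python
# estimate the compound annual growth rate implied by the table i
# solid-waste column (tons/year); values copied from the table
waste = {2010: 315_556, 2015: 406_574, 2020: 520_215, 2025: 763_128}

years = 2025 - 2010
cagr = (waste[2025] / waste[2010]) ** (1 / years) - 1
print(f"implied compound growth: {cagr:.1%} per year")

# compare the 2025 projection against the blueprint sizing basis of
# 1,000 tons/day per landfill (simple 365-day conversion)
tons_per_day_2025 = waste[2025] / 365
print(f"projected 2025 load: {tons_per_day_2025:.0f} tons/day")
```

the implied growth rate is roughly 6% per year, and the 2025 projection corresponds to about twice the 1,000 tons/day basis used in the blueprint, which supports the paper's argument that new landfill capacity is needed.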
considering a secure and reliable distance to the landfill site, the buffer zones allocated for each layer were based on governmental guidelines, experts’ judgment, and local and international references. each criterion was categorized into classes, and each class was given a suitability score from 0 to 10, where 0 means that the area is unsuitable and 10 that it is most suitable. distancing, reclassification and overlay analysis were undertaken in gis using the arcgis spatial analyst tool. in order to evaluate the site selection criteria, ahp was used to measure the relative importance weight of each criterion. d. buffers buffer zones for each criterion were first created in accordance with the structural hierarchy criteria for the decision-making tree. these zones were calculated based on the landfill siting guidelines and related reviews. the buffer zones for water bodies, residential areas, archeological sites, airports, roads, railways and infrastructures were generated at distances of 1000m, 2000m, 1500m, 3000m, 1500m, 1000m, and 150m respectively. furthermore, the slope, elevation, soil and geology maps were divided into different classes of suitability, from less to most suitable. land use classes such as public facilities, educational sites, agricultural land, forest and vacant land were assigned scores of 0, 0, 6, 3, and 10 respectively. pairwise comparison was then applied in order to determine the relative importance of each alternative in terms of each criterion. this is measured according to a numerical scale of 1 (equal importance) to 9 (extreme importance). this procedure enables the decision maker to assess the contribution of each factor to the objective independently, thus simplifying the decision-making process [23]. iii. results and discussion in this study, a total of 13 criteria were used for the sanitary landfill site selection analysis.
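the buffering and reclassification step described above can be sketched as follows; the buffer distances and land-use scores mirror the values given in the text, while the function name, layer keys, and the per-cell distance dictionary are illustrative assumptions, not from the paper:

```python
# scores on the paper's 0-10 suitability scale for land-use classes
LAND_USE_SCORE = {
    "public_facility": 0, "educational": 0,
    "agricultural": 6, "forest": 3, "vacant": 10,
}

# exclusion buffer per constraint layer, in meters (from the text)
BUFFER_M = {
    "water_body": 1000, "residential": 2000, "archeological": 1500,
    "airport": 3000, "road": 1500, "railway": 1000, "infrastructure": 150,
}

def cell_score(land_use, distances_m):
    """score 0-10 for one raster cell: 0 inside any buffer, else the land-use score."""
    for layer, limit in BUFFER_M.items():
        if distances_m.get(layer, float("inf")) < limit:
            return 0  # inside an exclusion buffer -> unsuitable
    return LAND_USE_SCORE.get(land_use, 0)

# vacant land well clear of all buffers vs. agricultural land too close to housing
print(cell_score("vacant", {"road": 2500, "water_body": 4000}))
print(cell_score("agricultural", {"residential": 900}))
```

in the actual study this reclassification was performed on raster layers with the arcgis spatial analyst tool; the sketch only illustrates the per-cell scoring rule.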
criterion map layers were extracted from different map sources including topographic sheets, geological and soil maps, land use/land cover maps, and the dem. the land use/land cover maps were used to obtain five criteria-based layers (residential, airport, archeological, infrastructure, and population). the dem was used to derive the elevation and slope maps of the area, while the topographic sheets were used to get roads, water bodies, and railways. all the data (map layers) were geo-referenced according to the kertau rso projection system with 30m resolution pixels. criteria weights were obtained using the ahp pairwise comparison matrix based on experts’ judgment. a. implementation of ahp using ahp, the problem was broken down in hierarchical order, making its parts easier to analyze independently. after the construction of the hierarchy, a systematic evaluation of the different criteria by pairwise comparison was done, creating the pairwise comparison matrix. values were assigned according to a numerical scale of 1 (equal importance) to 9 (extreme importance). this enables the decision makers to assess the contribution of each factor to the objective independently, thus simplifying the decision-making process [23]. b. deriving priorities (weights) for the criteria criteria importance may vary. the next step in ahp is weighting the criteria, because when siting a sanitary landfill not all criteria are of equal importance. therefore, pairwise comparison is necessary to derive the relative importance weights of the criteria, applying the saaty numerical scale of 1 to 9. the upper triangle of the matrix is filled with the values of the comparison criteria above the diagonal.
the lower triangle of the matrix is completed with the reciprocals of the corresponding upper-triangle entries. for the element of row i and column j, a_ij, the lower triangle is completed by applying:

a_ji = 1 / a_ij    (1)

the values a_ij (i = 1, 2, 3, …, m and j = 1, 2, 3, …, n) signify the performance values in terms of the i-th and j-th elements in the matrix [24-25]. thus, the complete comparison matrix for solving any decision-making problem and deriving the weight of each criterion can be represented as a decision matrix:

    a = | a_11  a_12  ...  a_1n |
        | a_21  a_22  ...  a_2n |
        | ...   ...   ...  ...  |
        | a_m1  a_m2  ...  a_mn |    (2)

experts’ decisions were entered in the comparison matrix, and the weight of each criterion was used to evaluate the best potential sanitary landfill sites in the area (table ii).

table ii. pairwise comparison matrix (columns in the same order as the rows)

criteria       | resid | water | geol | soils | land use | slope | elev | road | infra | airport | popul | archeo | railway | weights
residential    | 1     | 3     | 4    | 3     | 5        | 5     | 7    | 7    | 5     | 7       | 2     | 5      | 8       | 0.239
water bodies   | 0.33  | 1     | 2    | 3     | 5        | 4     | 5    | 5    | 4     | 5       | 2     | 5      | 9       | 0.168
geology        | 0.25  | 0.50  | 1    | 2     | 3        | 3     | 4    | 4    | 3     | 4       | 4     | 3      | 6       | 0.121
soils          | 0.33  | 0.33  | 0.50 | 1     | 3        | 3     | 1    | 3    | 4     | 4       | 4     | 3      | 5       | 0.099
land use       | 0.20  | 0.20  | 0.33 | 0.33  | 1        | 2     | 3    | 3    | 2     | 3       | 2     | 2      | 4       | 0.067
slope          | 0.20  | 0.25  | 0.33 | 0.33  | 0.50     | 1     | 2    | 2    | 2     | 2       | 3     | 2      | 3       | 0.056
elevation      | 0.14  | 0.20  | 0.25 | 1     | 0.33     | 0.50  | 1    | 2    | 3     | 2       | 2     | 3      | 2       | 0.054
road           | 0.14  | 0.20  | 0.25 | 0.33  | 0.33     | 0.50  | 0.50 | 1    | 1     | 2       | 2     | 2      | 2       | 0.036
infrastructure | 0.20  | 0.25  | 0.33 | 0.25  | 0.50     | 0.50  | 0.33 | 1    | 1     | 3       | 3     | 2      | 3       | 0.046
airport        | 0.14  | 0.20  | 0.25 | 0.25  | 0.33     | 0.50  | 0.50 | 0.50 | 0.33  | 1       | 2     | 3      | 2       | 0.033
population     | 0.50  | 0.50  | 0.25 | 0.25  | 0.50     | 0.33  | 0.50 | 0.50 | 0.33  | 0.50    | 1     | 2      | 2       | 0.038
archeological  | 0.20  | 0.20  | 0.33 | 0.33  | 0.50     | 0.50  | 0.33 | 0.50 | 0.50  | 0.33    | 0.50  | 1      | 2       | 0.025
railway        | 0.12  | 0.11  | 0.16 | 0.20  | 0.25     | 0.33  | 0.50 | 0.50 | 0.33  | 0.50    | 0.50  | 0.50   | 1       | 0.018

furthermore, the weighting results revealed that residential areas and water bodies are the most important criteria, while railway is the least important.
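the reciprocal completion rule in (1) can be sketched as follows; this is a minimal illustration, and the 3×3 judgments used here are assumed values, not taken from table ii:

```python
# complete a pairwise comparison matrix from its upper triangle
# using the reciprocal rule a_ji = 1/a_ij of (1)
def complete_matrix(upper):
    """upper[(i, j)] holds judgments for i < j; returns the full n x n matrix."""
    n = max(j for _, j in upper) + 1
    a = [[1.0] * n for _ in range(n)]   # the diagonal is always 1
    for (i, j), v in upper.items():
        a[i][j] = v
        a[j][i] = 1.0 / v               # reciprocal lower triangle
    return a

m = complete_matrix({(0, 1): 3.0, (0, 2): 5.0, (1, 2): 2.0})
for row in m:
    print([round(x, 2) for x in row])
```

only the n(n−1)/2 upper-triangle judgments need to be elicited from the experts; the rest of the matrix follows mechanically, which is why the paper fills only the values above the diagonal.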
c. eigenvector the eigenvector of each row was calculated by multiplying together the values of the given criteria across that row of the original pairwise comparison matrix and taking the n-th root, as shown in (3) [26]:

e_i = (a_i1 × a_i2 × … × a_in)^(1/n)    (3)

where e_i is the eigenvalue for row i and n is the total number of criteria in row i. the normalized eigenvector of the matrix, known as the priority vector, was calculated from the pairwise comparison matrix by normalizing its eigenvalues to 1, as shown in (4):

w_i = e_i / Σ(i=1 to n) e_i    (4)

these eigenvectors reflect weights of preferences [27] and can be defined as a method of normalized arithmetic averages [26]. the relative importance of the compared criteria was also computed from the values of the eigenvector [28]. λmax is the highest eigenvalue of the preference matrix. it was acquired as the sum of the products between each element of the priority vector and the corresponding column sum of the reciprocal matrix (5):

λmax = Σ(j=1 to n) s_j × w_j    (5)

where s_j is the summation of a single column j of the comparison matrix and w_j is the corresponding value of the priority vector for each criterion weight in the pairwise comparison matrix. d. consistency check once the weights have been calculated, it is important to check for consistency. this is because the values used for the computation of the weights are obtained from different opinions with different perspectives, so there is a possibility of encountering errors in the final stage of computation of the matrix [29].
the consistency index (ci) was computed using (6), where λmax is the greatest eigenvalue of the preference matrix and n is the total number of compared criteria:

ci = (λmax − n) / (n − 1)    (6)

the consistency ratio (cr) was calculated by dividing the value of ci by the random consistency index (ri) [11]. ri is derived as the mean ci obtained from random simulation of the pairwise comparison matrix:

cr = ci / ri    (7)

cr should be ≤0.10 for the weights to pass the consistency check; if cr>0.10, a revision of the judgments in the ahp matrix is required [30-31]. for this study, the ci value is 0.137 and the cr value is 0.088, meaning that there is consistency in the ahp matrix and the weights assigned to the criteria can be used for analysis [32, 34]. e. sanitary landfill suitability evaluation to find suitable potential areas for a sustainable sanitary landfill, the sum of the 13 weighted criteria thematic layers was computed in the gis environment using arcgis software. criteria weights were assigned to each map layer, which is in reclassified raster format, using the map algebra tool based on:

a_i = Σ(j=1 to n) w_j × c_ij    (8)

where a_i is the suitability index for area i, w_j is the relative importance weight of criterion j, c_ij is the grading value of area i under criterion j, and n is the total number of criteria [32]. the map algebra tool in the arcgis spatial analyst toolbox was used to produce the final output map of the potential sites. the map was divided into four categories: unsuitable, less suitable, suitable, and most suitable. the map in figure 1 shows the distribution of the selected areas based on suitability, where the most suitable are considered sites of higher priority. from this map, it was found that most of the study area, 57%, was unsuitable, 9% less suitable, 23% suitable and 11% most suitable.
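the pipeline in (3)–(8) can be sketched end-to-end on a small illustrative matrix; the 3×3 judgments, the saaty random index ri = 0.58 for n = 3, and the cell scores c_ij are assumptions for illustration, not values from the paper:

```python
import math

# illustrative 3x3 pairwise comparison matrix (reciprocal, diagonal = 1)
a = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [0.2, 0.5, 1.0]]
n = len(a)

e = [math.prod(row) ** (1 / n) for row in a]            # (3) row eigenvalue
w = [ei / sum(e) for ei in e]                            # (4) priority vector
col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]
lam_max = sum(col_sums[j] * w[j] for j in range(n))      # (5)

ci = (lam_max - n) / (n - 1)                             # (6)
ri = 0.58                                                # saaty random index, n = 3
cr = ci / ri                                             # (7)
print(f"weights = {[round(x, 3) for x in w]}, lam_max = {lam_max:.3f}, cr = {cr:.3f}")
assert cr <= 0.10, "judgments inconsistent - revise the matrix"

# (8): suitability index a_i = sum_j w_j * c_ij for one candidate cell,
# with illustrative reclassified scores on the paper's 0-10 scale
scores = [8, 10, 6]
a_i = sum(wj * cj for wj, cj in zip(w, scores))
print(f"suitability index a_i = {a_i:.2f}")
```

running the same steps on the full 13×13 matrix of table ii is what yields the paper's weights (0.239 for residential down to 0.018 for railway) and its reported ci of 0.137 and cr of 0.088.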
moreover, to get the most suitable sites, the potential site map was imported into the condition toolset of the arcgis spatial analyst tool to identify the most suitable, highest priority sites for the determination of the best potential sites. from the most suitable sites, the best potential sites were identified by filtering the most suitable sanitary landfill layers. the output layer, in raster format, was converted into vector format, and the areas that did not intersect with water bodies and railways were selected. based on experts’ judgment, using ahp criteria weighting and gis analysis, 3 candidate sites appear to be the best from environmental, economic, and social perspectives (figure 2). these sites fulfil the requirements for sanitary landfill siting, with a distance of at least 1000m from roads, 1500m from residential areas, and far away from public and educational facilities. fig. 1. map of potential sanitary landfill sites. fig. 2. best potential sanitary landfill sites map. furthermore, after the integration of gis and ahp, the final map was converted to keyhole markup language (.kml) file format and used in google earth pro for a further accuracy check, from which the 3 most suitable potential sites were selected among the most suitable class. the coordinates of these sites were taken from google earth pro for a field check to determine the accuracy and precision of the sites. iv. conclusion rapid population growth and the increase in economic and commercial activities have resulted in a large increase in the solid waste produced daily/annually. gis and mcda with 13 evaluation criteria were applied in this study for assessing possible potential sites for a sanitary landfill in johor bahru, malaysia. ahp was used to calculate the relative importance weights of the criteria, which were applied in the final suitability map production.
the most important criteria for this study were residential areas with 23.9% and water bodies with 16.8%, while the least important criterion was railway with 1.8%. based on experts’ judgment, using the ahp criteria weighting and gis analysis, the 3 most suitable potential sites were identified among the various sites from the most suitable class in the final map. each of these sites fulfilled the necessary requirements of the selection guidelines, with a distance of at least 1000m from water bodies and roads, 1500m from residential areas, and far away from public and educational facilities. these sanitary landfill sites can serve as backups for the existing one, which is almost at its maximum capacity. finally, for the construction of the final sanitary landfill site, further geotechnical and hydrological analysis is required to prevent groundwater contamination caused by leachate.

references
[1] s. bahrani, t. ebadi, h. ehsani, h. yousefi, r. maknoon, “modeling landfill site selection by multi-criteria decision making and fuzzy functions in gis, case study: shabestar, iran”, environmental earth sciences, vol. 75, no. 4, p. 337, 2016
[2] m. sharholy, k. ahmad, g. mahmood, r. c. trivedi, “municipal solid waste management in indian cities: a review”, waste management, vol. 28, no. 2, pp. 459-467, 2008
[3] a. chabuk, n. al-ansari, h. m. hussain, s. kamaleddin, s. knutsson, r. pusch, j. laue, “soil characteristics in selected landfill sites in the babylon governorate, iraq”, journal of civil engineering and architecture, vol. 11, no. 4, pp. 348-363, 2017
[4] h. k. jeswani, a. azapagic, “assessing the environmental sustainability of energy recovery from municipal solid waste in the uk”, waste management, vol. 50, pp. 346-363, 2016
[5] n. alavi, g. goudarzi, a. a. babaei, n. jaafarzadeh, m. hosseinzadeh, “municipal solid waste landfill site selection with geographic information systems and analytical hierarchy process: a case study in mahshahr county, iran”, waste management & research, vol. 31, no. 1, pp. 98-105, 2013
[6] b. nas, t. cay, f. iscan, a. berktay, “selection of msw landfill site for konya, turkey using gis and multi-criteria evaluation”, environmental monitoring and assessment, vol. 160, no. 1-4, pp. 491-500, 2010
[7] a. a. tahir, p. chevallier, y. arnaud, l. neppel, b. ahmad, “modeling snowmelt-runoff under climate scenarios in the hunza river basin, karakoram range, northern pakistan”, journal of hydrology, vol. 409, no. 1, pp. 104-117, 2011
[8] m. h. vahidnia, a. a. alesheikh, a. alimohammadi, “hospital site selection using fuzzy ahp and its derivatives”, journal of environmental management, vol. 90, no. 10, pp. 3048-3056, 2009
[9] b. sener, m. l. suzen, v. doyuran, “landfill site selection by using geographic information systems”, environmental geology, vol. 49, no. 3, pp. 376-388, 2005
[10] n. b. chang, g. parvathinathan, j. b. breeden, “combining gis with fuzzy multicriteria decision-making for landfill siting in a fast-growing urban region”, journal of environmental management, vol. 87, no. 1, pp. 139-153, 2008
[11] t. l. saaty, “what is the analytic hierarchy process?”, in: mathematical models for decision support, pp. 109-121, springer, 1988
[12] d. khan, s. r. samadder, “municipal solid waste management using geographical information system aided methods: a mini review”, waste management & research, vol. 32, no. 11, pp. 1049-1062, 2014
[13] t. l. saaty, “a scaling method for priorities in hierarchical structures”, journal of mathematical psychology, vol. 15, no. 3, pp. 234-281, 1977
[14] m. kurttila, m. pesonen, j. kangas, “utilizing the analytic hierarchy process (ahp) in swot analysis - a hybrid method and its application to a forest-certification case”, forest policy and economics, vol. 1, no. 1, pp. 41-52, 2000
[15] h. i. mohammed, z. majid, n. b. yusof, y. b. yamusa, “analysis of multi-criteria evaluation method of landfill site selection for municipal solid waste management”, international conference on civil and environmental engineering, penang, malaysia, november 28-29, 2017
[16] h. madurika, g. hemakumara, “gis based analysis for suitability location finding in the residential development areas of greater matara region”, international journal of scientific & technology research, vol. 4, pp. 96-105, 2015
[17] g. wang, l. qin, g. li, l. chen, “landfill site selection using spatial information technologies and ahp: a case study in beijing, china”, journal of environmental management, vol. 90, no. 8, pp. 2414-2421, 2009
[18] a. h. abba, z. z. noor, a. aliyu, n. i. medugu, “assessing sustainable municipal solid waste management factors for johor-bahru by analytical hierarchy process”, advanced materials research, vol. 689, pp. 540-545, 2013
[19] s. t. tan, c. t. lee, h. hashim, w. s. ho, j. s. lim, “optimal process network for municipal solid waste management in iskandar malaysia”, journal of cleaner production, vol. 71, pp. 48-58, 2014
[20] statistics department malaysia, total population by ethnic group, local authority area and state, malaysia, sdm, 2010
[21] blueprint for iskandar malaysia, integrated solid waste management blueprint of iskandar malaysia, 2010
[22] local government department, ministry of housing and local government malaysia, national strategic plan for solid waste management, 2005
[23] m. khodaparast, a. m. rajabi, a. edalat, “municipal solid waste landfill siting by using gis and analytical hierarchy process (ahp): a case study in qom city, iran”, environmental earth sciences, vol. 77, no. 2, p. 52, 2018
[24] m. hussain, assessment of groundwater vulnerability in an alluvial interfluve using gis, phd thesis, indian institute of technology roorkee, 2004
[25] m. uyan, “msw landfill site selection by combining ahp with gis for konya, turkey”, environmental earth sciences, vol. 71, no. 4, pp. 1629-1639, 2013
[26] t. l. saaty, l. g. vargas, models, methods, concepts & applications of the analytic hierarchy process, springer, 2012
[27] p. cabala, “using the analytic hierarchy process in evaluating decision alternatives”, operations research and decisions, vol. 20, no. 1, pp. 5-23, 2010
[28] c. kara, n. doratli, “application of gis/ahp in siting sanitary landfill: a case study in northern cyprus”, waste management & research, vol. 30, no. 9, pp. 966-980, 2012
[29] t. l. saaty, “the analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making”, in: multiple criteria decision analysis: state of the art surveys, pp. 345-405, springer, 2005
[30] t. l. saaty, “relative measurement and its generalization in decision making: why pairwise comparisons are central in mathematics for the measurement of intangible factors - the analytic hierarchy/network process”, racsam - revista de la real academia de ciencias exactas, fisicas y naturales, serie a, matematicas, vol. 102, no. 2, pp. 251-318, 2008
[31] s. djokanovic, b. abolmasov, d. jevremovic, “gis application for landfill site selection: a case study in pancevo, serbia”, bulletin of engineering geology and the environment, vol. 75, no. 3, pp. 1273-1299, 2016
[32] p. aragones-beltran, j. p. pastor-ferrando, f. garcia-garcia, “an analytic network process approach for siting a municipal solid waste plant in the metropolitan area of valencia (spain)”, journal of environmental management, vol. 91, no. 5, pp. 1071-1086, 2010
[33] m. eskandari, m. homaee, s. mahmodi, “an integrated multi criteria approach for landfill siting in a conflicting environmental, economical and socio-cultural area”, waste management, vol. 32, no. 8, pp. 1528-1538, 2012
[34] p. v. gorsevski, k. r. donevska, c. d. mitrovski, j. p. frizado, “integrating multi-criteria evaluation techniques with geographic information systems for landfill site selection: a case study using ordered weighted average”, waste management, vol. 32, no. 2, pp. 287-296, 2012

engineering, technology & applied science research vol. 9, no. 5, 2019, 4679-4684 www.etasr.com added et al.: miniaturized chipless rfid tags based on periodically loaded microstrip structure

miniaturized chipless rfid tags based on periodically loaded microstrip structure

maha added, physical department, university of tunis el manar, tunis, tunisia (maha.added@gmail.com)
safa chabaan, physical department, university of tunis el manar, tunis, tunisia (safa.chebaane90@gmail.com)
karima rabaani, physical department, university of tunis el manar, tunis, tunisia (rabaanikarmia@hotmail.fr)
noureddine boulejfen, research center for microelectronics and nanotechnology, technopole of sousse, sousse, tunisia (nboulejf@ucalgary.ca)

abstract—a compact chipless radio frequency identification (rfid) tag based on slow-wave technology is introduced in this paper. the tag consists of a resonant circuit based on open stub resonators periodically loaded by shunt stubs, allowing a coding capacity of 9 bits and operating in a frequency range from 2 to 4ghz. the receiving and transmitting antennas of the tag are particularly designed to minimize the tag size as much as possible. the proposed tag presents a robust bit pattern with a compact and fully printable structure using an fr4 substrate for a low-cost tag. keywords-chipless rfid tag; slow-wave technology; coding capacity i.
Introduction

Radio frequency identification (RFID) is one of the most rapidly growing segments of modern automatic identification and data capture. However, conventional chipped RFID systems have many limitations related to the use of the chip, such as high cost, vulnerability in harsh environments, and the short life of the chip battery packs. To overcome these limits, chipless RFID systems have emerged, in which the tag is a fully passive microwave structure whose encoded data depend only on its geometry [1]. Frequency-domain chipless RFID tags use a spectral signature to encode data [6]. Frequency-domain tags are classified into two main families: RCS-based tags and retransmission-based tags. RCS-based tags use resonant antennas that receive the interrogation signal and send it back carrying the tag signature [1-7]. They can generally reach a high coding capacity with a compact size. In [1], a compact RCS-based tag using C-shaped resonant antennas was proposed; the reported tag offers a coding capacity of 20 bits, operates in the 2-4 GHz range, and has an overall size of 25×70 mm². However, RCS-based tags generally suffer from a short reading range [2, 3] and strong mutual coupling, which limits the data encoding capacity [4]. Retransmission-based tags [8-13] use single or dual antennas to receive and transmit the signal. The spectral signature of the tag is produced by a resonant circuit with multiple resonators, each creating a notch or a peak around a given frequency. The chipless tag introduced in [13] is a retransmission-based tag using spiral resonators that encode 35 bits in the 3.1-7 GHz band. Even though retransmission-based tags are usually larger, they exhibit a robust bit pattern and ensure a longer reading range than RCS-based tags, thanks to the use of independent antennas for transmission and reception.
In this paper, a miniaturization technique based on the slow-wave approach is used to design a 9-bit compact retransmission-based tag operating in the 2-4 GHz band. To complete the chipless tag, dual cross-polarized monopole antennas were designed and connected to the resonant circuit to establish a communication link with the interrogator.

II. Tag Design

The retransmission-based tag presented in this paper consists of a resonant circuit of nine resonators connected to two cross-polarized antennas, one for transmission and one for reception. As shown in Figure 1, a basic chipless RFID tag structure can be composed of the well-known resonant circuit based on quarter-wave open-stub resonators and two identical rectangular monopole antennas. The overall dimensions of this basic structure are around 119×73 mm².

A. Resonant Circuit Design

1) Basic Resonant Circuit Structure
A basic resonant circuit can be built from the well-known quarter-wave open-stub resonators, as shown in Figure 2. This structure was realized on an FR4 substrate with thickness h=0.4 mm, dielectric constant εr=4.7, and loss tangent tgδ=0.019. (Corresponding author: Maha Added.) As presented in Figure 2, the initial prototype contains 9 open-stub resonators spaced 1 mm apart to avoid mutual coupling. The length of each resonator equals λg/4 at its resonant frequency, where λg is the guided wavelength in the substrate. The resonance frequency is independent of the resonator width [9, 16]. The parameters of each resonator in the basic multi-resonator are given in Table I.

Fig. 1. Basic chipless RFID tag structure.
Fig. 2.
Basic resonant circuit structure, g1=46 mm, g2=25 mm.

A tuning process using the Agilent ADS Momentum software revealed that high-Q resonances can be obtained with a 15 Ω feed line, such that wb=4 mm. A tapered impedance transformer is therefore required to match the input impedance of the resonant circuit to 50 Ω. The length of the taper section equals λg/4 at the lowest operating frequency; in this work, the appropriate length of the impedance transformer section is found to be lt=14.5 mm. The resonant circuit response exhibits 9 resonance frequencies and thus a coding capacity of 9 bits, with an operating frequency range between 2 and 4.5 GHz. The overall dimensions of the obtained circuit are about 46×25 mm².

2) Miniaturized Resonant Circuit Structure
Starting from the basic resonant circuit described above, a slow-wave structure is used to reduce the size of the initial circuit while keeping the same electrical behavior. Periodically loading a transmission line with shunt capacitances increases its effective electrical length [14]. Using this concept, significant size reductions of several passive microwave components have been achieved: the technology of periodically loaded slow-wave microstrip lines was used in [14] to miniaturize branch-line and rat-race couplers, and in [15] to design miniaturized single-band two-way and dual-band two-way Wilkinson power dividers. In this paper, the slow-wave structure is applied to the miniaturization of chipless RFID tags. Based on the slow-wave concept, a conventional microstrip line of a given length is replaced by a shorter microstrip line loaded with equally spaced capacitances terminated to ground. The loading capacitances slow down the wave propagation along the microstrip line, which results in a longer effective electrical length compared to an unloaded line of the same physical length.
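As a rough cross-check of the quarter-wave lengths listed in Table I, the guided wavelength can be estimated from the substrate parameters given above (h=0.4 mm, εr=4.7). The sketch below uses the standard closed-form quasi-static approximation for the microstrip effective permittivity; it ignores dispersion and the open-end length extension, so deviations of several percent from the tabulated lengths are expected. This is an illustrative calculation, not the design procedure used in the paper.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def eps_eff(eps_r: float, w: float, h: float) -> float:
    """Quasi-static effective permittivity of a microstrip line
    (standard closed-form approximation, valid for w/h >= 1)."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / w)

def quarter_wave_length_mm(f_ghz: float, w_mm: float,
                           eps_r: float = 4.7, h_mm: float = 0.4) -> float:
    """Physical length (mm) of a quarter-wave open-stub resonator at f_ghz."""
    ee = eps_eff(eps_r, w_mm, h_mm)
    lam_g = C0 / (f_ghz * 1e9 * math.sqrt(ee))  # guided wavelength (m)
    return lam_g / 4 * 1e3

# Res1 from Table I: f = 2.28 GHz, w = 1.16 mm; tabulated lQWL = 18.9 mm.
print(f"estimated lQWL = {quarter_wave_length_mm(2.28, 1.16):.1f} mm")
```

The estimate lands within roughly 10% of the tabulated 18.9 mm, which is the accuracy one can expect from a quasi-static formula without fringing corrections.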
Based on this approach, one can determine the values of the loading capacitances Cp and their spacing d that guarantee the desired electrical behavior while reducing the line length. Consider an unloaded lossless transmission line with characteristic impedance Zc_un and phase velocity vp_un given by [15]:

  Zc_un = √(L/C)    (1)
  vp_un = 1/√(LC)    (2)

where L and C are the per-unit-length inductance and capacitance of the transmission line, respectively. Periodically loading the line with equally spaced shunt capacitors Cp reduces its effective characteristic impedance Zc_lo and phase velocity vp_lo. For a spacing d between the capacitors much smaller than the signal wavelength, Zc_lo and vp_lo are given by [15] as:

  Zc_lo = √(L/(C + Cp/d))    (3)
  vp_lo = 1/√(L(C + Cp/d))    (4)

Equation (4) shows the reduction of the phase velocity vp_lo compared to that of the unloaded line. This means that a given effective electrical length can be achieved with a shorter physical length. The effective electrical length of the loaded line is expressed as:

  φ_lo = n d ω0 / vp_lo = n d ω0 √(L(C + Cp/d))    (5)

where n is the number of loading capacitors and ω0 is the angular frequency of interest. Using (1)-(5), the spacing d and the value Cp of the loading capacitors are obtained as:

  d = φ_lo vp_un Zc_lo / (n ω0 Zc_un)    (6)
  Cp = φ_lo (Zc_un² − Zc_lo²) / (n ω0 Zc_un² Zc_lo)    (7)

To obtain an entirely planar circuit, the loading capacitances Cp can be realized with open-circuit stubs by applying the following formula [15]:

  Cp = lstub / (Zc_stub vp_stub)   for   ω0 lstub / vp_stub ≪ 1    (8)

where lstub, Zc_stub and vp_stub are the length, characteristic impedance and phase velocity of the stub, respectively.

3) Design Considerations
To achieve the highest possible size reduction, the impedance of the unloaded line Zc_un should be as high as possible, which is obtained by choosing the smallest microstrip width that can be manufactured. In addition, to minimize the crosstalk between stubs, the spacing d must be greater than 3h, where h is the substrate height. This condition (d≥3h) can be relaxed by placing the stubs on both sides of the line. Taking these considerations into account, the resulting slow-wave structure can be duplicated for all the resonators of the proposed resonant circuit, with a stub length chosen according to the resonance frequency of each resonator. As previously mentioned, to obtain the highest possible line-length reduction (l/lQWL), the characteristic impedance of the unloaded microstrip line Zc_un should be at its highest value, so its width should be as small as possible. In this work, the width of all the unloaded microstrip lines is fixed to wun=0.2 mm, giving a characteristic impedance of 94 Ω. For a planar structure, the capacitances Cp are implemented as open stubs with the same width for all the resonators and a length that varies from one resonator to the other, as described by (8). To avoid mutual coupling between the stubs, the spacing d is fixed to 0.6 mm for all resonators and the stubs are placed alternately on both sides of the line, as shown in Figure 3. The number of sections n is chosen to be the same for all the resonators; consequently, different resonance frequencies are obtained by changing only the stub length lstub.

Fig. 3. Slow-wave resonant circuit, g3=46 mm, g4=14 mm.

Table I lists the resonance frequencies and the parameters of each quarter-wave resonator together with the capacitance and stub length of its corresponding slow-wave structure.
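As an illustration of (6)-(8), the sketch below estimates the capacitor spacing and loading capacitance for one resonator, and then reproduces the length-reduction column of Table I from the fixed loaded-line length n×d = 14×0.6 mm = 8.4 mm. The effective permittivity assumed for the 0.2 mm unloaded line (eps_eff = 3.2) is my quasi-static estimate, not a figure from the paper, so the computed d and Cp are first-pass values; the tabulated Cp values come from the paper's electromagnetic tuning and differ somewhat.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def slow_wave_design(f0_hz, zc_un, zc_lo, n, eps_eff, phi=math.pi / 2):
    """Capacitor spacing d (m) and loading capacitance Cp (F) from (6)-(7)
    for a loaded line emulating a quarter-wave (phi = pi/2) resonator."""
    w0 = 2 * math.pi * f0_hz
    vp_un = C0 / math.sqrt(eps_eff)  # unloaded-line phase velocity
    d = phi * vp_un * zc_lo / (n * w0 * zc_un)                      # (6)
    cp = phi * (zc_un**2 - zc_lo**2) / (n * w0 * zc_un**2 * zc_lo)  # (7)
    return d, cp

def stub_length(cp, zc_stub, vp_stub):
    """Open-stub length realizing Cp via (8): lstub = Cp * Zc_stub * vp_stub."""
    return cp * zc_stub * vp_stub

# Res5 of Table I: f0 = 3.07 GHz, Zc_lo = 53.2 ohm; Zc_un = 94 ohm, n = 14.
d, cp = slow_wave_design(3.07e9, 94.0, 53.2, 14, eps_eff=3.2)
print(f"d = {d*1e3:.2f} mm, Cp = {cp*1e12:.3f} pF")

# Length-reduction column of Table I: reduction = 1 - (n*d)/lQWL, n*d = 8.4 mm.
L_LOADED_MM = 14 * 0.6
l_qwl = [18.9, 17.23, 16.08, 15.05, 14.3, 13.64, 12.9, 12.07, 11.18]
reported = [55.5, 51.2, 47.7, 44.2, 41.6, 38.5, 34.9, 30.4, 24.9]
reductions = [100 * (1 - L_LOADED_MM / l) for l in l_qwl]
```

The computed spacing lands near the paper's chosen d = 0.6 mm, and the recomputed reductions agree with the table to within about half a percentage point.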
Knowing that d=0.6 mm and n=14, the physical length of the loaded line for all the resonators is l = n×d = 8.4 mm. Figure 3 shows the resulting slow-wave resonant circuit. The spacing between resonators is fixed so as to avoid coupling effects and to ensure the stability of the resonance frequencies when the configuration pattern of the resonant circuit changes. The dimensions of the slow-wave resonant circuit are 48×14 mm², a size reduction of 41.6% compared to the basic resonant circuit. To validate the obtained values of Cp and lstub, both circuits were simulated without considering coupling effects using the Agilent ADS simulator. The comparison between the transmission responses of the basic quarter-wave resonant circuit and the slow-wave one, illustrated in Figure 4, shows a good match between the two responses. This validates the calculated values and the efficiency of the slow-wave structure, which achieves a size reduction of 41.6% while keeping the same coding capacity and operating frequency range. The simulated and measured transmission responses of the proposed slow-wave resonant circuit in the presence of coupling effects are discussed in Section III.

Table I. Physical and electrical parameters of the slow-wave resonators vs. quarter-wavelength resonators

       Quarter-wavelength resonator               Slow-wave resonator
Res.   Freq. (GHz)  lQWL (mm)  w (mm)  Zc_QWL (Ω)  Cp (pF)  lstub (mm)  Length reduction (%)
Res1   2.28         18.9       1.16    39.7        0.257    2.8         55.5
Res2   2.52         17.23      1       43.8        0.201    2.2         51.2
Res3   2.72         16.08      0.88    47.4        0.164    1.8         47.7
Res4   2.919        15.05      0.789   50.6        0.137    1.5         44.2
Res5   3.07         14.3       0.724   53.2        0.118    1.3         41.6
Res6   3.25         13.64      0.654   56.3        0.1      1.1         38.5
Res7   3.459        12.9       0.583   60          0.08     0.9         34.9
Res8   3.716        12.07      0.507   64.4        0.063    0.7         30.4
Res9   4.045        11.18      0.426   70.1        0.045    0.5         24.9

Fig. 4.
Comparison between the transmission responses of the quarter-wave resonant circuit and the slow-wave resonant circuit, without considering coupling effects.

B. Antenna Designs
Omnidirectional monopole UWB antennas are generally used for signal reception and transmission in chipless tags (Figure 1). These antennas are known for their relatively large size, which further increases the overall size of the tag.

1) Receiving Antenna
For a miniaturized antenna structure, an omnidirectional monopole UWB antenna with a slow-wave feed line has been designed. The structure of the proposed antenna is presented in Figure 5. Slots are added in the ground plane to improve the reflection response of the antenna; their positions, shapes, and dimensions are chosen according to the surface current density in the antenna. The dimensions of the antenna have been reduced to 26×44 mm². The simulated and measured responses of the proposed antenna are presented and discussed below.

2) Transmitting Antenna
The transmitting antenna of the tag is designed to minimize the tag size as much as possible while keeping its horizontal polarization, to avoid crosstalk between the transmitted and received signals. Therefore, as shown in Figure 6, a rectangular monopole UWB antenna with a bent feed line is used. Bending the feed line allows good space management of the entire tag; however, it modifies the ground plane shape, leading to performance degradation. To overcome this problem, some feed-line length tuning has been performed. Figure 6 shows the final shape and the design parameters of the transmitting antenna.

Fig. 5.
Structure of the receiving antenna based on a slow-wave feed line: ygnd2=14.54 mm, lp=2.5 mm, wp=1 mm, lf=7 mm, wf=0.5 mm, ef=2 mm, ff=1 mm, we=2.2 mm, le=1 mm, b2=27 mm, a2=22 mm.

Fig. 6. Transmitting antenna structure: xr=39 mm, yr=36 mm, y1=5.5 mm, y2=14.25 mm, y3=5 mm, y4=50.25 mm, x1=10 mm, x2=24.25 mm, x3=2.4 mm, wan2=0.7 mm.

Figure 7 presents the new tag structure using the designed antennas. The size of the whole tag is about 66.5×55 mm², a miniaturization of 58.2% compared to the basic tag structure described in Figure 1. To study the mutual coupling between the antennas and to fix the spacing threshold between the two radiating elements, each antenna is excited by a plane wave and simulated independently in the presence of the other antenna.

Fig. 7. Final chipless RFID tag structure: g7=66.5 mm, g8=55 mm.

III. Results and Discussion

To build up a clearer picture of the real behavior of the resonant circuit, an electromagnetic simulation that accounts for coupling effects is required. Therefore, a simulation using CST Studio Suite has been performed for different tag codes. For the experimental validation of the proposed miniaturization approach, the developed resonant circuit has been fabricated in the all-one configuration, as shown in Figure 8(a), and its transmission coefficient has been measured in the 1.5-5 GHz frequency band. Figure 8(b) shows a good agreement between the measured and the CST-simulated (with coupling) transmission coefficients.

Fig. 8. Slow-wave resonant circuit: (a) realized resonant circuit, (b) simulated and measured responses.

Each resonator branch of the resonant circuit operates at a corresponding frequency, giving a coding capacity of 9 bits. When all the resonators are connected to the main transmission line, the resulting tag code is 111111111 and the resonance frequencies are: 2.28 GHz, 2.58 GHz, 2.8 GHz, 3 GHz, 3.22 GHz, 3.41 GHz, 3.63 GHz, 3.85 GHz, and 4.26 GHz.
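The frequency list above implies a simple decoding rule for a reader: a notch detected near a resonator's nominal frequency means that bit is 1, otherwise 0. The sketch below is a hypothetical illustration of that rule; the function name and the ±50 MHz matching tolerance are my assumptions, not part of the paper.

```python
# Nominal notch frequencies (GHz) of the nine resonators in the all-one tag
NOMINAL_GHZ = [2.28, 2.58, 2.8, 3.0, 3.22, 3.41, 3.63, 3.85, 4.26]

def decode_tag(detected_ghz, tol=0.05):
    """Map detected notch frequencies to a 9-bit code string.
    A bit is 1 if a notch lies within tol GHz of its nominal frequency."""
    bits = []
    for f0 in NOMINAL_GHZ:
        hit = any(abs(f - f0) <= tol for f in detected_ghz)
        bits.append("1" if hit else "0")
    return "".join(bits)

print(decode_tag(NOMINAL_GHZ))                    # all resonators connected
print(decode_tag([2.28, 2.8, 3.22, 3.63, 4.26]))  # Res2/4/6/8 disconnected
```

With all nine notches present the decoder returns 111111111; with only the odd-numbered resonators present it returns 101010101, matching the two configurations measured below.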
Each bit in the code is set or reset by connecting or disconnecting the corresponding resonator branch from the main transmission line. As presented in Figure 9, Res2, Res4, Res6 and Res8 are disconnected from the transmission line to set the tag code to 101010101. After the preliminary validation using the all-one realized resonant circuit, a comparison between simulated and measured results has been performed for the resonant circuit with the tag code set to 101010101. As revealed by Figure 10, a good agreement between the measured and the simulated results is observed.

Fig. 9. Chipless RFID tag configuration with tag code set to 101010101.
Fig. 10. Simulated and measured transmission responses of the resonant circuit with tag code set to 101010101.

As a next stage, two different measured tag codes have been compared. Figure 11 shows the measured transmission responses of resonant circuits with tag codes set to 111111111 and 101010101. The figure clearly shows that the resonance frequencies remain quite stable when the tag code changes, which demonstrates the coding robustness of the proposed resonant circuit.

Fig. 11. Measured transmission responses of resonant circuits with tag codes set to 111111111 and 101010101.

For experimental validation purposes, the designed slow-wave antenna has been fabricated and measured over the 1.5-5 GHz frequency band. The fabricated antenna is presented in Figure 12(a), while Figure 12(b) illustrates the simulated and measured reflection coefficients. According to the measured reflection coefficient, the proposed antenna is well matched between 1.6 GHz and 4.5 GHz, which covers the operating frequency band of the proposed resonant circuit.
Furthermore, the radiation efficiency, radiation pattern and gain of the proposed antenna have been simulated. As shown in Figure 13(a), the radiation efficiency of the antenna is around 70% over the entire operating frequency band. The simulated radiation pattern, illustrated in Figure 13(b), is omnidirectional with a gain of around 2.2 dBi, which confirms that the designed antenna is suitable for RFID applications.

Fig. 12. (a) Realized receiving antenna, (b) simulated and measured reflection coefficient.
Fig. 13. Radiation efficiency and radiation pattern of the slow-wave antenna: (a) radiation efficiency, (b) radiation pattern in the H and E planes, respectively.

Regarding the transmitting antenna described in Figure 6, the corresponding simulated and measured reflection coefficients are illustrated in Figure 14(b), showing that the operating bandwidth of the antenna is between 2.2 GHz and 4.3 GHz, which is also adequate for the frequency range of the multi-resonator circuit. The radiation pattern of the antenna, presented in Figure 15, is almost omnidirectional and the gain in the operating frequency range is about 2.2 dBi, which confirms that the designed antenna is suitable for RFID applications. Table II presents a comparison between different chipless retransmission-based RFID tags, including this work.

Fig. 14. Transmitting antenna: (a) realized transmitting antenna, (b) comparison of the simulated and measured reflection responses.
Fig. 15. Radiation pattern of the transmitting antenna: (a) H-plane, (b) E-plane.

Table II. Comparison between different chipless tags

Ref.       Used approach                Coding capacity  Frequency range (GHz)  Size
[9]        Triangle microstrip filter   6 bits           4-7                    150×30 mm² (without antennas)
[10]       Microstrip open resonators   8 bits           2-4                    80×60 mm² (with antenna)
[12]       SIW                          1 bit            10.5-11                10×10 mm² (without antennas)
This work  Slow-wave resonator          9 bits           2-4                    66×55 mm² (with antennas)

Fig. 16. The realized proposed chipless RFID tag.

IV. Conclusion

In this paper, a slow-wave structure was used to miniaturize a chipless retransmission-based RFID tag. Firstly, a slow-wave-structure-based multi-resonator was developed, reaching a size reduction of 41.6% compared to the quarter-wave open-stub resonators. The developed multi-resonator includes nine open-stub resonators periodically loaded by shunt stubs, where the resonance frequency of each resonator depends only on the length of its shunt stubs. The nine resulting resonance frequencies lie in the 2.28-4.26 GHz range. Simulation results not included in this paper revealed that using the Rogers 4003C substrate instead of FR4 can further reduce the size of the nine-bit multi-resonator and improve its behavior. In addition, a miniaturized rectangular monopole UWB antenna, used as the receiving antenna of the tag, was designed using a slow-wave structure, while respecting the required operating frequency range and maintaining an omnidirectional pattern. With good space management of the whole tag, a reduction of more than 58% was obtained compared to the basic tag structure, which demonstrates the efficiency of the adopted approach for designing miniaturized chipless RFID tags.

References
[1] A. Vena, E. Perret, S. Tadjini, "A fully printable chipless RFID tag with detuning correction technique", IEEE Microwave and Wireless Components Letters, vol. 22, no. 4, pp. 209-211, 2012
[2] M. A. Islam, N. C. Karmakar, "A novel compact printable dual-polarized chipless RFID system", IEEE Transactions on Microwave Theory and Techniques, vol. 60, no. 7, pp.
2142-2151, 2012
[3] H. S. Jang, W. G. Lim, K. S. Oh, S. M. Moon, J. W. Yu, "Design of low-cost chipless system using printable chipless tag with electromagnetic code", IEEE Microwave and Wireless Components Letters, vol. 20, no. 11, pp. 640-642, 2010
[4] I. Jalaly, I. D. Robertson, "Capacitively-tuned split microstrip resonators for RFID barcodes", European Microwave Conference, Paris, France, October 4-6, 2005
[5] M. Martinez, D. v. d. Weid, "Compact slot-based chipless RFID tag", IEEE RFID Technology and Application Conference, Tampere, Finland, September 8-9, 2014
[6] M. S. Bhuiyan, N. Karmakar, "Chipless RFID tag based on split-wheel resonators", 7th European Conference on Antennas and Propagation, Gothenburg, Sweden, April 8-12, 2013
[7] C. M. Nijas, U. Deepak, P. V. Vinesh, R. Sujith, S. Mridula, K. Vasudevan, P. Mohanan, "Low-cost multiple-bit encoded chipless RFID tag using stepped impedance resonator", IEEE Transactions on Antennas and Propagation, vol. 62, no. 9, pp. 4762-4770, 2014
[8] C. S. Hartmann, "A global SAW ID tag with large data capacity", IEEE Ultrasonics Symposium, Munich, Germany, October 8-11, 2002
[9] C. M. Nijas, R. Dinesh, U. Deepak, A. Rasheed, S. Mirdula, K. Vasudevan, P. Mohanan, "Chipless RFID tag using multiple microstrip open stub resonators", IEEE Transactions on Antennas and Propagation, vol. 60, no. 9, pp. 4429-4432, 2012
[10] M. E. Jalil, M. K. A. Rahim, N. A. Samsuri, R. Dewan, "Chipless RFID tag based on meandered line resonator", IEEE Asia-Pacific Conference on Applied Electromagnetics, Johor Bahru, Malaysia, December 8-10, 2014
[11] H. E. Matbouly, N. Boubekeur, F. Domingue, "A novel chipless identification tag based on a substrate integrated cavity resonator", IEEE Microwave and Wireless Components Letters, vol. 23, no. 1, pp. 52-54, 2013
[12] S. Moscato, R. Moro, M. Bozzi, L. Perregrini, S. Sakouhi, F. Dhawadi, A. Gharsallah, P. Savazzi, A. Vizziello, P.
Gamba, "Chipless RFID for space applications", IEEE International Conference on Wireless for Space and Extreme Environments, Noordwijk, Netherlands, October 30-31, 2014
[13] S. Preradovic, N. C. Karmakar, "Design of fully printable planar chipless RFID transponder with 35-bit data capacity", European Microwave Conference, Rome, Italy, September 29-October 1, 2009
[14] K. W. Eccleston, S. H. M. Ong, "Compact planar microstripline branch-line and rat-race couplers", IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 1, pp. 2119-2125, 2003
[15] J. S. Hong, M. J. Lancaster, "Capacitively loaded microstrip loop resonator", Electronics Letters, vol. 30, no. 18, pp. 1494-1495, 1994
[16] K. Rawat, F. M. Ghannouchi, "A design methodology for miniaturized power dividers using periodically loaded slow wave structure with dual-band applications", IEEE Transactions on Microwave Theory and Techniques, vol. 57, no. 12, pp. 3380-3388, 2009
[17] C. Zhou, H. Y. D. Yang, "Design considerations of miniaturized least dispersive periodic slow-wave structures", IEEE Transactions on Microwave Theory and Techniques, vol. 56, no. 2, pp. 467-474, 2008

ETASR Engineering, Technology & Applied Science Research, Vol. 3, No. 2, 2013, 416-423 | www.etasr.com

Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

S. Ilic, B. Jaksic, M. Petrovic (Dept. of Electrical and Computer Engineering, University of Prishtina, Kosovska Mitrovica, Serbia; sinisa.ilic@pr.ac.rs, branimir.jaksic@pr.ac.rs, mile.petrovic@pr.ac.rs), A. Markovic (Dept. of Telecommunications, University of Nis, Nis, Serbia; acomarkovic87@yahoo.com), V. Elcic (Dept. of Information Technology, University of Slobomir P, Bijeljina, Bosnia and Herzegovina; vanja.elcic@gmail.com)

Abstract—This paper provides an analysis of multiplexed video signal transmission through a dense wavelength division multiplexing (DWDM) network based on a quality check algorithm, which determines where the degradation of the transmission quality starts. On the basis of this algorithm, transmission simulations for specific values of the fiber parameters are executed. The analysis of the results shows how the changes in BER and Q-factor depend on the length of the fiber, i.e., on the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the OptiSystem 7.0 software package, in which systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel are designed.

Keywords—BER parameter; Q factor; DWDM network; amplifying section

I. Introduction

Dense wavelength division multiplexing (DWDM) is a technology that allows multiplexing of multiple optical carrier signals on a single optical fiber by using different wavelengths for the transmission of different information streams. The smallest attenuation of the signal in the optical fiber is achieved at the wavelength of 1550 nm, i.e., in the "third optical window" [1-4]. DWDM systems allow the expansion of existing capacity without laying additional fibers in optic cables: the capacity of the existing system is expanded using multiplexers and demultiplexers at the ends of the system [5-6]. For the successful transmission of optical signals over long distances, erbium-doped fiber amplifiers (EDFA) are used. Erbium is a rare-earth element which, when excited, emits light at a wavelength of 1.54 µm, the wavelength at which the attenuation of the signal power is minimal.
The weak signal enters the erbium-doped fiber, into which light is injected by pump lasers. This light excites the erbium atoms, and the atoms release the accumulated energy in the form of additional light at wavelengths around 1550 nm. As this process continues along the fiber, the signal is amplified. EDFA is available in the C and L windows, but with a quite narrow range (1530-1560 nm) [7-8]. An EDFA can amplify as many multiplexed optical signals in the given range as long as a strong enough signal is received; when the signal level at the input is too low, the amplifier cannot boost all the multiplexed signals.

II. BER and Q Factor

The performance of an optical communication system is specified by the bit error ratio (BER) [7-8]. BER is the probability that a pulse is interpreted incorrectly (i.e., a logical '1' is detected as '0' and vice versa). Thus, a BER of 10^-6 corresponds to an average of one error per million bits. The BER value depends on the characteristics of the laser source and of the transmission route. With the increase of flow rates in optical systems, in both systems with standard single-mode fiber and systems with special-purpose fiber, the effects of spontaneous emission, polarization mode dispersion, chromatic dispersion, optical fiber nonlinearities, and receiver noise increase. Therefore, BER measurement is of great importance when more adequate results are in question [7-8, 9]. The criterion used in optical receivers is that BER is less than 10^-9. For a fluctuating signal received at a decision circuit, sampling is performed at time td. The sampled value of the signal I varies from one bit to another around the mean value I1 or I0, depending on whether the bit corresponds to 1 or 0 in the bit stream.
The decision circuit compares the sampled value with the threshold value iD and decides bit 1 if I > iD or bit 0 if I < iD. An error occurs if I < iD for bit 1, or if I > iD for bit 0. Both errors can be included in the definition of the error probability as [7-8]:

  BER = p(1) P(0/1) + p(0) P(1/0)    (1)

where p(1) and p(0) are the probabilities of receiving bits 1 and 0, respectively, P(0/1) is the probability of deciding 0 when 1 was received, and P(1/0) is the probability of deciding 1 when 0 was received. If the probabilities of occurrence of bits 1 and 0 are equal, then p(1) = p(0) = 1/2 and the BER is given by:

  BER = [P(0/1) + P(1/0)] / 2    (2)

The BER with the optimum setting of the decision threshold depends only on the parameter Q:

  BER = (1/2) erfc(Q/√2) ≈ exp(−Q²/2) / (Q √(2π))    (3)

where erfc is short for the complementary error function, defined by [10]:

  erfc(x) = (2/√π) ∫x→∞ exp(−y²) dy    (4)

The parameter Q can be written as [1-2]:

  Q = (I1 − I0) / (σ1 + σ0)    (5)

where σ1² and σ0² are the noise variances corresponding to symbols 1 and 0, respectively. The approximate form of the BER in (3) is obtained using the asymptotic expansion of erfc(Q/√2) and is quite accurate for Q > 3.

III. System Model and Algorithm

The analysis of the DWDM transmission system was performed using the OptiSystem 7.0 software package [11]. The algorithm used for the quality check of the DWDM transmission is presented in Figure 1. The quality check algorithm applies to a fixed flow rate R. The parameters that vary are the number of DWDM channels N (8 to 512) and the length of the amplifying section AS (km), at whose ends EDFA amplifiers are placed. The parameter that is evaluated is the Q factor, which shows whether the transmission quality is good; the boundary value for this factor is Q = 6.

Fig. 1. Algorithm for the evaluation of the quality of the DWDM transmission network during the design stage.

The algorithm consists of a sub-cycle and a main cycle, whose numbers of executions vary.
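The BER-Q relation in (3) is easy to evaluate numerically. The sketch below computes both the exact expression and the asymptotic approximation at the quality boundary Q = 6, where the BER is about 10^-9:

```python
import math

def ber_exact(q: float) -> float:
    """BER = (1/2) * erfc(Q / sqrt(2)), equation (3), exact form."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def ber_approx(q: float) -> float:
    """Asymptotic approximation exp(-Q^2/2) / (Q * sqrt(2*pi)), accurate for Q > 3."""
    return math.exp(-q * q / 2) / (q * math.sqrt(2 * math.pi))

print(f"Q = 6: exact BER = {ber_exact(6):.3e}, approx = {ber_approx(6):.3e}")
```

At Q = 6 the two forms agree to within a few percent, which illustrates why Q = 6 is commonly taken as equivalent to the BER = 10^-9 quality limit.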
The sub-cycle (blue line in Figure 1) iterates over the number of amplifying sections, and the number of its executions depends on the values of the Q factor for a variable number of sections and a constant number of DWDM channels. The main cycle (red line in Figure 1) iterates over the number of DWDM channels; it contains the sub-cycle for each section count and is repeated seven times, for seven values of N (N = 8, 16, 32, 64, 128, 256, 512). At the beginning there are eight channels (N=8). After multiplexing (constant flow rate per channel, N DWDM channels), the signal is sent over an amplifying section AS of a km in length. The Q factor is calculated for the given values of AS, R and N. If the condition Q > 6 is met, quality transmission is achieved and the number of sections is increased by 1 (a extra km). The algorithm then returns to the calculation of Q, i.e., the sub-cycle (blue line in Figure 1) is repeated until the condition Q > 6 is no longer met. When the condition fails, the sub-cycle is not repeated, and the current N and AS values are the values for which high-quality transmission is not possible. Once the sub-cycle ends, the number of channels N is doubled and the algorithm starts from the beginning, i.e., from the multiplexing of the flow rate R with the increased number N of DWDM channels. Again, the cycle starts from one section of length a km, but for double the number of DWDM channels, and the sub-cycle is repeated as many times as needed to achieve the required transmission quality.
Once the sub-cycle has ended, the cycle is repeated with double the number of DWDM channels, and it is examined whether the current value of channels satisfies n < 513, since this is the condition for ending the main cycle. The algorithm is applied for the specific values of the optical fiber system defined by the ITU G.652 standard [12-13]. Two systems are observed, the first with a flow rate of 2.5 Gb/s and the second with a flow rate of 10 Gb/s. The system is analyzed for 16, 32 and 64 DWDM channels, while the length of the amplifying section ranges from 40 to 80 km. The dispersion characteristics of the fiber are given in Table I, and the diagram of the DWDM network analyzed in OptiSystem is given in Figure 2.

Fig. 2. 16-channel DWDM network. The power emitted by each source is 5 dBm.

After receiving the digital signals, multiplexing is performed in DWDM multiplexers. The frequency of each channel is separated by 1 GHz. The multiplexed signal is then sent through an optical fiber where every a km an EDFA amplifier is set, with the following parameters: gain = 20 dB, power = 15 dBm. Since the system works in the third optical window, the attenuation along the length of the fiber is 0.2 dB/km. On the receiving side, demultiplexing of the signals is done using a DWDM demultiplexer running at the same frequencies as the DWDM multiplexer. At the receiver, a BER analyzer is set to determine the values of BER and Q, on the basis of which the performance of the transmission system can be determined. Results showed that the change of BER and Q depends on the length of the fiber, i.e. on the number of amplifying sections, and they also showed what kind of effect the number of multiplexed channels and the flow rate per channel have on signal transmission.

IV. Simulation Results

Tables II and III provide the BER parameter for a flow per channel of 2.5 Gb/s and 10 Gb/s, respectively.
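The link parameters above (0.2 dB/km attenuation, 20 dB EDFA gain, 5 dBm launch power) imply that the power budget is not what limits the reach. A simple bookkeeping sketch (names ours) makes this explicit; the interpretation that accumulated amplifier noise and dispersion, rather than raw power, degrade Q is general fiber-optics knowledge, consistent with the Q degradation the simulations show despite amplification:

```python
# Per-span power bookkeeping for the simulated link.
ATTENUATION_DB_PER_KM = 0.2   # third optical window
EDFA_GAIN_DB = 20.0           # gain of each amplifier
LAUNCH_POWER_DBM = 5.0        # power emitted by each source

def power_after_sections(n_sections: int, section_km: float) -> float:
    """Signal power (dBm) after n amplifying sections, ignoring noise:
    each section attenuates by 0.2 dB/km and is re-amplified by 20 dB."""
    span_loss_db = ATTENUATION_DB_PER_KM * section_km
    return LAUNCH_POWER_DBM + n_sections * (EDFA_GAIN_DB - span_loss_db)

# Even for the longest 80 km sections the gain (20 dB) exceeds the
# span loss (16 dB), so the signal power never runs out:
print(power_after_sections(6, 80.0))  # 5 + 6*(20 - 16) = 29 dBm
```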
Based on the obtained values of BER, the graphs shown in Figures 3-7 were drawn, showing that the value of the Q factor decreases with the change in the number of DWDM channels and the length of the amplifying section. The purple dashed straight line represents the limit at which the signal transmission quality degrades. If the limit of quality transmission is a BER value of 10^-9 and Q = 6, then when the number of DWDM channels increases, quality transmission can be achieved only if the length of the section is reduced. The decrease of the Q factor is much more pronounced in the first amplifying sections, while with a greater number of them the Q factor becomes approximately constant.

Table I. Dispersion characteristics of the analyzed fibers

| Name                        | Value       | Units      | Mode   |
| Group velocity dispersion   | Include     |            | Normal |
| Third-order dispersion      | Include     |            | Normal |
| Dispersion data type        | Constant    |            | Normal |
| Frequency domain parameters | Not include |            | Normal |
| Dispersion                  | 16.75       | ps/nm/km   | Normal |
| Dispersion slope            | 0.075       | ps/nm^2/km | Normal |
| Beta 2                      | -20         | ps^2/km    | Normal |
| Beta 3                      | 0           | ps^3/km    | Normal |

Table II.
BER parameter values for the flow per channel of 2.5 Gb/s (rows: number of DWDM channels and length of the amplifying section; columns: number of amplifying sections 1-10)

| Channels | Length | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 16 | 40 km | 2.80e-183 | 2.84e-087 | 1.73e-063 | 3.36e-055 | 4.07e-043 | 2.41e-039 | 5.17e-029 | 1.95e-023 | 3.62e-020 | 5.16e-020 |
| 16 | 50 km | 1.18e-182 | 2.35e-089 | 1.32e-053 | 1.14e-036 | 9.29e-027 | 7.67e-020 | 6.60e-018 | 2.05e-015 | 7.41e-013 | 3.05e-011 |
| 16 | 60 km | 8.59e-174 | 1.78e-082 | 4.22e-056 | 2.64e-039 | 1.87e-035 | 6.12e-026 | 2.39e-019 | 1.49e-015 | 5.02e-014 | 6.67e-012 |
| 16 | 70 km | 8.68e-167 | 1.89e-112 | 4.95e-054 | 1.45e-036 | 9.72e-030 | 1.46e-026 | 1.66e-022 | 1.97e-017 | 3.49e-014 | 8.96e-015 |
| 16 | 80 km | 5.56e-166 | 3.68e-101 | 1.99e-063 | 1.68e-026 | 7.88e-011 | 0.0014 | 1 | 1 | 1 | 1 |
| 32 | 40 km | 1.17e-177 | 5.63e-076 | 5.28e-062 | 7.10e-044 | 3.13e-039 | 6.55e-032 | 2.76e-026 | 2.54e-023 | 4.82e-019 | 8.02e-017 |
| 32 | 50 km | 5.85e-160 | 2.27e-065 | 8.52e-047 | 2.52e-036 | 9.97e-025 | 4.74e-019 | 2.72e-016 | 2.67e-013 | 7.72e-012 | 2.88e-010 |
| 32 | 60 km | 5.36e-169 | 1.64e-074 | 1.94e-051 | 2.82e-034 | 6.68e-024 | 1.66e-020 | 3.20e-017 | 3.24e-014 | 1.09e-012 | 1.51e-010 |
| 32 | 70 km | 7.79e-164 | 2.58e-068 | 7.24e-042 | 1.33e-029 | 1.09e-022 | 3.94e-020 | 8.12e-016 | 8.13e-014 | 1.58e-012 | 1.16e-010 |
| 32 | 80 km | 1.78e-164 | 3.14e-081 | 9.84e-047 | 1.21e-021 | 7.66e-009 | 0.0047 | 1 | 1 | 1 | 1 |
| 64 | 40 km | 1.92e-067 | 1.41e-048 | 2.73e-043 | 1.82e-030 | 6.28e-026 | 3.22e-023 | 5.43e-022 | 2.45e-018 | 2.68e-018 | 2.52e-016 |
| 64 | 50 km | 9.73e-065 | 1.35e-044 | 1.55e-039 | 4.96e-029 | 3.15e-023 | 1.02e-018 | 2.31e-015 | 3.90e-013 | 5.85e-011 | 3.13e-010 |
| 64 | 60 km | 7.22e-065 | 8.84e-051 | 1.05e-043 | 2.53e-034 | 7.13e-024 | 3.33e-017 | 7.13e-015 | 2.90e-013 | 6.41e-013 | 3.88e-010 |
| 64 | 70 km | 5.19e-061 | 2.77e-045 | 5.63e-029 | 1.36e-024 | 5.17e-022 | 2.52e-017 | 4.53e-015 | 8.40e-014 | 2.12e-011 | 2.05e-010 |
| 64 | 80 km | 1.83e-061 | 1.94e-038 | 7.81e-024 | 8.37e-014 | 1.25e-008 | 0.0051 | 1 | 1 | 1 | 1 |

Table III.
BER parameter values for the flow per channel of 10 Gb/s (rows: number of DWDM channels and length of the amplifying section; columns: number of amplifying sections 1-10)

| Channels | Length | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 16 | 40 km | 1.25e-145 | 6.34e-069 | 3.76e-040 | 4.23e-028 | 1.08e-018 | 2.27e-013 | 1.82e-009 | 0.0014 | 0.0024 | 0.0028 |
| 16 | 50 km | 3.78e-138 | 3.18e-059 | 9.42e-031 | 5.51e-018 | 1.29e-011 | 1.18e-008 | 0.0007 | 0.001 | 0.0017 | 0.0021 |
| 16 | 60 km | 6.96e-142 | 9.46e-049 | 6.03e-024 | 5.67e-015 | 7.68e-009 | 0.0005 | 0.0009 | 0.002 | 0.0026 | 0.0034 |
| 16 | 70 km | 3.51e-126 | 6.01e-055 | 3.25e-023 | 1.39e-010 | 0.0005 | 0.0014 | 0.0016 | 0.0021 | 0.0028 | 0.0032 |
| 16 | 80 km | 1.80e-117 | 5.72e-040 | 1.65e-014 | 0.0009 | 0.0018 | 0.0414 | 1 | 1 | 1 | 1 |
| 32 | 40 km | 3.31e-094 | 6.37e-066 | 1.81e-036 | 1.29e-026 | 1.99e-017 | 4.63e-013 | 3.16e-009 | 0.0016 | 0.0026 | 0.0031 |
| 32 | 50 km | 8.33e-090 | 3.97e-048 | 8.41e-025 | 2.06e-016 | 1.98e-010 | 8.55e-008 | 0.0009 | 0.0012 | 0.0024 | 0.0032 |
| 32 | 60 km | 1.25e-080 | 2.15e-039 | 5.70e-021 | 1.87e-013 | 4.01e-008 | 0.0006 | 0.0012 | 0.0025 | 0.0037 | 0.0059 |
| 32 | 70 km | 2.47e-080 | 4.65e-032 | 8.55e-016 | 1.54e-009 | 0.0008 | 0.0014 | 0.0019 | 0.003 | 0.0034 | 0.0046 |
| 32 | 80 km | 1.31e-070 | 1.20e-030 | 3.97e-012 | 0.0009 | 0.0031 | 1 | 1 | 1 | 1 | 1 |
| 64 | 40 km | 5.602e-048 | 3.31e-047 | 2.02e-030 | 3.29e-023 | 4.42e-017 | 3.50e-012 | 3.61e-008 | 0.0026 | 0.0028 | 0.0032 |
| 64 | 50 km | 6.32e-045 | 2.04e-034 | 7.28e-023 | 6.46e-015 | 4.73e-010 | 0.0006 | 0.001 | 0.0012 | 0.0028 | 0.0046 |
| 64 | 60 km | 2.40e-044 | 1.75e-033 | 8.84e-019 | 1.83e-010 | 0.0007 | 0.0007 | 0.0012 | 0.0026 | 0.0035 | 0.0059 |
| 64 | 70 km | 3.26e-044 | 1.84e-028 | 4.34e-014 | 4.36e-009 | 0.0011 | 0.0015 | 0.0031 | 0.0037 | 0.0037 | 0.0044 |
| 64 | 80 km | 6.10e-045 | 3.13e-020 | 1.88e-009 | 0.0014 | 0.0043 | 1 | 1 | 1 | 1 | 1 |

Fig. 3. Changing the Q factor for an amplifying section length of 40 km.
Fig. 4. Changing the Q factor for an amplifying section length of 50 km.
Fig. 5. Changing the Q factor for an amplifying section length of 60 km.
Fig. 6. Changing the Q factor for an amplifying section length of 70 km.
Fig. 7.
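Since the tables report BER while the quality limit is stated as Q = 6, the Q factor behind each entry can be recovered by inverting Eq. (3) numerically. The sketch below (function names ours) uses standard-library bisection, exploiting that BER decreases monotonically in Q, and checks a few values from the 16-channel, 40 km row of Table II:

```python
import math

def ber_from_q(q: float) -> float:
    """Eq. (3): BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def q_from_ber(ber: float, lo: float = 0.0, hi: float = 40.0) -> float:
    """Invert Eq. (3) by bisection; valid for BER in (ber_from_q(hi), 0.5)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ber_from_q(mid) > ber:
            lo = mid  # Q too small: BER still above the target
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sample entries from Table II (2.5 Gb/s, 16 channels, 40 km sections):
for ber in (2.41e-39, 1.95e-23, 5.16e-20):
    q = q_from_ber(ber)
    print(f"BER = {ber:.2e} -> Q = {q:.1f}", "OK" if q > 6 else "degraded")
```

This is exactly the Q-vs-BER correspondence plotted in Figures 3-7: all entries above roughly 10^-9 fall below the purple dashed Q = 6 line.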
Changing the Q factor for an amplifying section length of 80 km.

The given figures show that, with an increasing length of the amplifying section, for the 10 Gb/s system there is no major change in quality over the larger signal transmission length. In the case of the 2.5 Gb/s system, increasing the length of the amplifying sections leads to a degradation of transmission quality. Figures 8-13 show the eye diagrams for the DWDM transmission system for an amplifying section length of 80 km, for 16, 32 and 64 channels, and with flow rates per channel of 2.5 Gb/s and 10 Gb/s. Closed lines represent sectors with BER values of 10^-8 to 10^-12.

Fig. 8. The eye diagram for a flow rate per channel of 2.5 Gb/s and 16 DWDM channels: a) 80 km, b) 240 km, c) 400 km.
Fig. 9. The eye diagram for a flow rate per channel of 2.5 Gb/s and 32 DWDM channels: a) 80 km, b) 240 km, c) 400 km.
Fig. 10. The eye diagram for a flow rate per channel of 2.5 Gb/s and 64 DWDM channels: a) 80 km, b) 240 km, c) 400 km.
Fig. 11. The eye diagram for a flow rate per channel of 10 Gb/s and 16 DWDM channels: a) 80 km, b) 240 km, c) 400 km.
Fig. 12. The eye diagram for a flow rate per channel of 10 Gb/s and 32 DWDM channels: a) 80 km, b) 240 km, c) 400 km.
Fig. 13. The eye diagram for a flow rate per channel of 10 Gb/s and 64 DWDM channels: a) 80 km, b) 240 km, c) 400 km.

V.
Conclusion

Based on a quality-check algorithm, used for calculating the distance at which the transmission quality is lost and the number of DWDM channels at which the optical signal will be distorted, simulations of the network for specific values were conducted. An analysis of the BER parameter and the Q factor shows that the length of the amplifying section, the flow rate per channel and the number of DWDM channels affect the transmission quality. Results showed that BER and Q change with the change in length of an amplifying section. The decrease of Q is much more pronounced in the first amplifying sections, while with a greater number of sections it becomes approximately constant. It was concluded that, with an increasing length of the amplifying section, for the 10 Gb/s system there is no major change in quality over the larger signal transmission length, while in the case of the 2.5 Gb/s system increasing the length of the amplifying sections leads to a degradation of transmission quality.

Acknowledgment

This work was done within the research projects TR35026 and III47016 of the Ministry of Science and Technological Development of Serbia.

References
[1] K. M. Sivalingam, S. Subramaniam (Eds.), Optical WDM Networks: Principles and Practice, Kluwer Academic, Norwell, MA, 2000
[2] M. T. Fatehi, M. Wilson, Optical Networking with WDM, McGraw-Hill, New York, 2001
[3] M. Stefanovic, D. Milic, "An approximation of filtered signal envelope with phase noise in coherent optical systems", Journal of Lightwave Technology, Vol. 19, No. 11, pp. 1685-1690, 2001
[4] I. Djordjevic, M. Stefanovic, "Performance of optical heterodyne PSK systems with Costas loop in multichannel environment for nonlinear second-order PLL model", Journal of Lightwave Technology, Vol. 17, No. 12, pp. 2470-2479, 1999
[5] R. Ramaswami, K. Sivarajan, Optical Networks: A Practical Perspective, 2nd ed., Morgan Kaufmann Publishers, San Francisco, 2002
[6] I. P. Kaminow, T.
Li, A. Willner (Eds.), Optical Fiber Telecommunications V, Elsevier/Academic Press, 2008
[7] G. Agrawal, Nonlinear Fiber Optics, 2nd ed., Academic Press, 2001
[8] G. Agrawal, Fiber-Optic Communication Systems, 3rd ed., Wiley, 2002
[9] E. G. Sauter, Nonlinear Optics, John Wiley & Sons, Inc., New York
[10] M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1970
[11] Optiwave - Design Software for Photonics, OptiSystem - Optical Communication System and Amplifier Design Software, http://www.optiwave.com/products/system_overview.html (accessed 03.11.2012)
[12] International Telecommunication Union, "Optical Fibers, Cables and Systems", ITU-T Manual, 2009
[13] International Telecommunication Union, ITU-T G.652, "Series G: Transmission Systems and Media, Digital Systems and Networks. Transmission Media and Optical Systems Characteristics: Characteristics of a Single-Mode Optical Fibre and Cable", ITU, 2009

Authors Profile

Sinisa Ilic graduated from the Faculty of Electrical Engineering in Pristina in 1992 in the field of electronics and telecommunication. As a B.Sc. engineer he worked at Television and Radio Pristina. He received his M.Sc. from the Faculty of Electrical Engineering in Belgrade in the field of digital transmission of information and defended his PhD thesis at the University of Pristina in the field of digital signal processing and computer engineering. He now teaches databases, design of information systems, infrastructure of e-commerce and biomedical informatics at the Faculty of Technical Sciences of the University of Pristina, located in Kosovska Mitrovica. His areas of interest are: databases, information systems, biomedical informatics, multimedia, digital signal processing.
He is author and co-author of many scientific papers published in journals and presented at international conferences. He is also involved in several educational projects and in several commercial projects related to the introduction of public finance management information systems.

Branimir Jaksic is an assistant at the Department of Electronic and Computing Engineering, Faculty of Technical Sciences in Kosovska Mitrovica, Serbia. He is a PhD candidate at the Faculty of Electronic Engineering, University of Nis, Serbia. His areas of research include optical and satellite communications. He has authored several scientific peer-reviewed papers on the above subject.

Mile Petrovic is a full professor at the Department of Electronic and Computing Engineering, Faculty of Technical Sciences in Kosovska Mitrovica, Serbia. His areas of interest include telecommunications and television techniques. He has authored over 50 scientific peer-reviewed papers and a large number of projects and patents. He is a member of the technical program committee and a reviewer for several international journals and symposia.

Aleksandar Markovic is a PhD candidate at the Faculty of Electronic Engineering, University of Nis, Serbia. His research interests are statistical communication theory and optical communications. He has published several journal publications on the above subject.

Vanja Elcic is an assistant at the University Slobomir P in Bijeljina, Bosnia and Herzegovina. His areas of interest include information technology and telecommunications. He has authored several scientific peer-reviewed papers in the field of information technology and telecommunication.

Engineering, Technology & Applied Science Research Vol. 8, No.
5, 2018, 3355-3359 | www.etasr.com | Laaziri et al.: Information System for the Governance of University Cooperation

Information System for the Governance of University Cooperation

Majida Laaziri, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Samira Khoulji, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Khaoula Benmoussa, Information System Engineering Research Group, National School of Applied Sciences, Abdelmalek Essaadi University, Tetouan, Morocco
Kerkeb Mohamed Larbi, Information System Engineering Research Group, Faculty of Sciences, Abdelmalek Essaadi University, Tetouan, Morocco

Abstract—Recognizing the impact of international cooperation in science and technology, all higher education institutions prioritize strategic partnerships. If setting up a partnership is important, its management, the monitoring and evaluation of cooperation actions, regular communication among partners, and the ability to allow all parties to monitor the functioning of the partnership are even more important. For good cooperation management, an information system becomes a mandatory condition. Abdelmalek Essaadi University's team has set up an information system for the governance of university cooperation called SIMACOOP, to support cooperation between governments and universities and to facilitate the process of partnership management. This system also helps in identifying the shared vision and goals of the partnership members and develops documents that define the partnership terms. In addition, SIMACOOP has put in place procedures for maintaining and monitoring the partnership evolution [1]. The purpose of this article is to give a general presentation of SIMACOOP's design and development for the governance of university cooperation.
Keywords—information system; governance of university cooperation; SIMACOOP platform; object-oriented methodology; UML

I. Introduction

Moroccan universities and research institutions face the challenge of synchronizing with the demands and expectations of modern society while reinforcing their actions on an international level. A large academic and research university is no longer a sufficient condition to provide access to high-quality education if it does not cooperate with other major universities and organizations at the national and international level. For this reason, all higher education institutions give importance to university cooperation, which enables the development of innovative international partnerships, student mobility, establishment of relationships and networks, experience and knowledge exchange, and generation of ideas and knowledge. In addition, university cooperation requires effective management to ensure the continuity of constructive and productive relations [2], and to allow all considered parties to monitor the functioning of the partnership. In this context, and with the evolution of information systems in different fields, the Abdelmalek Essaadi University (AEU) set up a tracking information system for the management of university cooperation called SIMACOOP.

II. SIMACOOP Generalities

SIMACOOP is a monitoring system designed to support governments and universities in cooperation, partnership and student exchange plans and programs, and to improve communication, collaboration and integration between universities and their partners, performance management, strategic planning, research performance evaluation and the establishment of a sound policy for the development of the institutional relationship [3].
SIMACOOP offers a range of services; it mainly gives the university and its partners the possibility to: follow up the activities of the cooperation projects, follow up cooperation agreements and their results, and get a personalized follow-up of foreign students (Figure 1). It notifies the user of the dates of the cooperation project activities, whether they are imminent or behind schedule, and informs them of the dates of the agreements in force and the non-renewable agreements. Thus, this system can detect the strong points of cooperation in order to overcome its weaknesses, benefit from the exchange of experience, skills and competences, raise the level of efficiency and productivity, and monitor the implementation, the extent to which each party fulfills its obligations and commitments, and the respect of cooperation agreements with partner missions and their strategic objectives [1]. SIMACOOP is a coherent information system centered in a global platform integrating several services that meet administrative and academic needs (Figure 2).

Fig. 1. SIMACOOP features

A. Administrative
 Makes the partnership management process easier.
 Facilitates the work of the administration and allows it to accomplish its mission.
 Provides reliable, relevant and instant information.
 Provides better partnership management.
 Follows the strategic objectives of the partnership.
 Detects successful actions and challenges.
 Allows regular communication among partners.
 Provides information about the evolution of the partnership.
 Ensures monitoring and evaluation of cooperation actions.

B. Academic
 Measures the evolutions and the consequences of the decisions taken.
 Helps the actors of the partnership in their activities.
 Schedules and controls the tasks of the partnership process.
 Allows effective communication.
 Adds depth and breadth to its impact on the scientific community.
 Identifies the shared vision and goals of the partnership members.
 Develops documents that define the partnership terms.
 Follows up the partnership as it evolves.
 Allows all parties to follow the functioning of the partnership.
 Maintains constructive and productive relationships.
 Evaluates research.
 Shares and exchanges information.
 Reduces cost.
 Takes advantage of the exchange of experience, skills and competences.
 Raises the level of efficiency and productivity.
 Improves the quality of data reported by the institutions' management.

Fig. 2. Administrative and university services of SIMACOOP

III. Object-Oriented Methodology

Object-oriented technology is a methodology that is widely applied in the field of software development. Object-oriented methodology (OOM) is a powerful technique for solving complex problems by splitting the problem into subtasks. OOM is an approach to system development that encourages and facilitates the reuse of software components. By using OOM, higher productivity, lower maintenance costs and better quality can be achieved [4, 5]. OOM requires object-oriented techniques to be used during system analysis, design and implementation. This methodology allows determining what the system objects are, how they behave over time or in response to events, and what responsibilities and relationships an object has to other objects. Object-oriented analysis allows seeing all the objects in a system, their commonalities, their differences and the way the system handles objects [6, 7]. During design, the overall architecture of the system is described. During the implementation phase, the class objects and the interrelations of these classes are translated and effectively coded. There are several object-oriented development methods (OOMs).
In the design of our information system, we used the Object Modeling Technique (OMT) because it is the most developed one and covers a good part of the system development cycle. OMT is an object-oriented software development methodology introduced by James Rumbaugh. This methodology describes a method of analysis, design and implementation of a system using an object-oriented technique. It is a fast and intuitive approach to identify and model all the objects that constitute a system [8]. OMT consists of three basic models, each capturing important aspects of the system: the object model, which describes the static, structural and system data; the dynamic model, which describes the temporal, behavioral and control aspects of the system; and the functional model, which describes the transformational and functional aspects of the system [9, 10]. The OMT methodology supports the entire life cycle of the system, according to five main phases [11-13].

A. System analysis
As in any other system development model, system analysis comes first. In this phase, the developer interacts with the system user to learn the user's needs and analyzes the system to understand how it works. On the basis of this study, the analyst prepares a model of the desired system. This model is purely based on what the system needs to do. At this stage, the implementation details are not taken into account; only the model of the system is prepared, on the basis of the idea that the system consists of a set of interacting objects. The important elements of the system are underlined.

B. System design
System design is the next stage of development, where the overall architecture of the desired system is decided. The system is organized as a set of subsystems interacting with each other.
When designing the system as a set of interacting subsystems, the analyst considers the specifications observed in the system analysis as well as the user-imposed requirements. The analysis of the system consists in perceiving the system as a set of interacting objects. A larger system can also be seen as a set of smaller interacting subsystems that are themselves composed of sets of interacting objects. When designing the system, the focus is on the objects that constitute the system, not on the processes that run in the system.

C. Object design
In this phase, the details of the system analysis and system design are implemented. The objects identified in the system design phase are designed. In this phase, the implementation of these objects is decided in the form of the required data structures and the interrelationships between objects. This concept is known as class creation. In this phase of the development process, the designer decides the classes in the system based on these concepts. He decides whether classes should be created from scratch, whether existing classes can be used as they are, or whether new classes can be inherited.

D. Implementation
During this phase, the class objects and the interrelations of these classes are translated and effectively encoded using an object-oriented programming language. The required databases are created and the entire system is transformed into an operational system.

E. Testing
This phase aims to test the implemented system. Three kinds of artifacts are produced and used:
 Test cases: what to test in the system.
 Test procedures: the procedures that allow running the tests.
 Test components: the environment needed to actually perform the test cases.

IV. OOAD with UML, an OMG Standard

Object-oriented analysis and design (OOAD) is a technical approach used in the analysis and design of a system through the application of object-oriented paradigms and concepts, including visual modeling, applied throughout the system development cycle [8].
It models a system as a group of objects. One of the most used notations to represent the objects of a system and how they interact with each other is the Unified Modeling Language (UML), standardized in 1997 by the Object Management Group (OMG) for object-oriented modeling. It is a graphical language for data and processing modeling (object-oriented) that allows the specification, representation and construction of computer system components. It combines widely accepted concepts from a number of object-oriented modeling techniques and is inherited from several other methods like OMT [14]. It is intended to model information systems and covers the different phases of an object development (analysis, design and implementation) [15, 16]. It improves and facilitates communication, representation and understanding of object solutions [4].

V. System Architecture Design

The design of the system architecture defines how the system functionality is to be provided by the system components, where the system represents a set of components that perform the defined functions [17]. The design process of the information system architecture focuses on the decomposition of the system into different components and their interactions, to satisfy functional and non-functional requirements. Figure 3 provides an overview of the design and development of the SIMACOOP information system.

Fig. 3. SIMACOOP architecture

According to Figure 3, the realization of the SIMACOOP information system was based on the functionality of the Symfony framework and the architecture of the MVC paradigm. Thanks to Symfony's functionalities, our system has benefited from a well-structured modular workspace with clear and maintainable code. The code is separated into three layers according to the MVC model, and it is characterized by an object-relational mapping (ORM) layer and a data abstraction layer, ensuring the separation of data, display, processing and actions [18].

VI.
Result

In the SIMACOOP information system, there are three types of actors who can access the platform: project manager, dean and university president. Their functions are different, so each actor corresponds to a space in the platform as follows:

A. Project manager's space
SIMACOOP offers the project manager an account to enter the general information of his cooperation project, the information of the project partners, the activities, the expenses and the budget (Figure 4 [3]).

Fig. 4. Project manager's space

SIMACOOP allows the project manager to:
 Follow the activities of cooperation projects.
 Manage projects better.
 Communicate with other partners (government, universities, national and international institutions, ...).
 Be notified of project activities that are imminent or delayed.
 Exchange and share information and cooperation documents with other project managers.

B. Dean's space
SIMACOOP offers the dean of the institution an account that enables him to introduce his information as project manager, to add the project managers belonging to his institution to the SIMACOOP platform, to validate their partnership activities, and to manage all the partners of his establishment (Figure 5 [3]).

Fig. 5. Dean's space

SIMACOOP allows the dean to:
 Manage the budget and the expenses of the cooperation projects and the partnership agreements.
 Follow the agreements and conventions of the institution over time.
 Make a personalized follow-up of the foreign students of the establishment.
 Download the annual report of the activities of the cooperation projects and partnership agreements of the institution.
 Exchange information and share cooperation documents.
 Provide visibility to all project managers at the institution, and to all national and international partners.
 Be advised of the dates of activities carried out, of unrealized projects and of the agreement dates that are in force or renewed.
 Follow the evolution and the execution of the actions of the partnerships of the establishment.
 Detect successful partnership actions and challenges.
 Ensure the continuation of constructive and productive relationships.

C. University administrator's space
SIMACOOP offers the president of the university an account with two roles: he can introduce his information as project manager, and he is also the manager of the platform. He has total visibility of the database, his task is to manage the entire system, and he is in charge of creating and validating the accounts of all the project managers who belong to the institutions of his university in the SIMACOOP platform. Therefore, he specifies the access rights of each user (Figure 6 [3]).

Fig. 6. Administrator space

SIMACOOP allows the president to:
 Validate the accounts of the project managers of the university.
 Ensure the smooth running of all agreements and partnership agreements of the university.
 Order the budgets and expenses of cooperation projects and of the university.
 Check and accredit all the information concerning all cooperation partnerships of the university.
 Download the annual report of the projects and cooperation partnerships of the university.

VII. Conclusion

The use of information systems in the university has become a reality and a central part of its operation, because they provide universities with management tools that meet their needs and help all users in their activities. They schedule and control tasks, and they enable effective communication. As the problems faced by university communities become more complex, the idea of a management information system on university cooperation can be very promising. Through partnerships, the university can contribute to and benefit from the efforts of other higher education institutions.
Through a cooperative information system, universities can accelerate learning and disseminate skills and knowledge. In addition, they can add depth and breadth to their impact on the scientific community.

References
[1] M. Laaziri, K. Benmoussa, S. Khoulji, K. Mohamed Larbi, "SIMACOOP: a framework application for the governance of university cooperation", Transactions on Machine Learning and Artificial Intelligence, Vol. 5, No. 4, pp. 785-794, 2017
[2] R. de Vry, G. Watson, "University of Delaware's faculty-IT partnership: educational transformation through teamwork", The Technology Source, available at: http://technologysource.org/article/university_of_delawares_facultyit_partnership/
[3] SIMACOOP, available at: http://simacoop.uae.ac.ma/
[4] U. A. Khan, I. A. Al-Bidewi, K. Gupta, "Object-oriented software methodologies: roadmap to the future", International Journal of Computer Science Issues, Vol. 8, No. 5, pp. 392-396, 2011
[5] M. R. B. Prakash, H. S. Chandrasekharaiah, "An object oriented perspective for AC/MTDC system simulation", IEEE International Conference on Intelligent System Application to Power Systems, Orlando, USA, January 28-February 2, 1996
[6] H. Lee, C. Lee, C. Yoo, "A scenario-based object-oriented methodology for developing hypermedia information systems", IEEE 31st Hawaii International Conference on System Sciences, Kohala Coast, HI, USA, January 9, 1998
[7] R. Chalmeta, T. J. Williams, F. Lario, L. Ros, "Developing an object-oriented reference model for manufacturing", IFAC Proceedings Volumes, Vol. 30, No. 1, pp. 351-356, 1997
[8] S. Hong, G. van den Goor, S. Brinkkemper, "A formal approach to the comparison of object-oriented analysis and design methodologies", IEEE 26th Hawaii International Conference on System Sciences, Wailea, HI, USA, January 8, 1993
[9] J.
osis, o. ivasiuta, p. rusakovs, “advanced object-oriented modeling techniques for large scale systems”, ifac proceedings volumes, vol. 31, no. 20, pp. 787-792, 1998 [10] r. h. bourdeau, b. h. c. cheng, “a formal semantics for object model diagrams”, ieee transactions on software engineering, vol. 21, no. 10, pp. 799-821, 1995 [11] ukessays, two object oriented methodologies booch and rambaugh information technology essay, available at: https://www.ukessays.com/ essays/information-technology/two-object-oriented-methodologiesbooch-and-rambaugh-information-technology-essay.php [12] m. p. selvan, k. s. swarup, “object methodology techtorial”, ieee power energy magazine, vol. 3, no. 1, pp. 18-29, 2005 [13] visual basic tutorials, object oriented methodology life cycle model, available at: http://www.freetutes.com/systemanalysis/sa2-objectoriented-methodology.html [14] g. booch, j. rumbaugh, i. jacobson, the unified modeling language user guide, addison wesley, 1998 [15] s. servigne, “conception, architecture et urbanisation des systemes d’information”, in: encyclopædia universalis, encyclopædia britannica, 2008 (in french) [16] o. glassey, j. l. chappelet, comparaison de trois techniques de modélisation de processus : adonis , ossad et uml, institute de hautes etudes en administration publique, 2002 (in french) [17] tutorials point, ooad object oriented system, available at: https://www.tutorialspoint.com/object_oriented_analysis_design/ooad_o bject_oriented_system.htm [18] symfony 4.0 documentation, available at: https://symfony.com/ doc/current/index.html microsoft word 20-3680_s1_etasr_v10_n4_pp5998-6003 engineering, technology & applied science research vol. 10, no. 
4, 2020, 5998-6003 www.etasr.com nguyen et al.: backstepping control for induction motors with input and output constraints
backstepping control for induction motors with input and output constraints
tung lam nguyen, department of industrial automation, university of science and technology, hanoi, vietnam, lam.nguyentung@hust.edu.vn
thanh ha vo, department of electrical engineering, university of transportation and communications, hanoi, vietnam, vothanhha.ktd@utc.edu.vn
nam duong le, department of electrical engineering, quy nhon university, quy nhon, vietnam, lenamduong@qnu.edu.vn
abstract−in practice, the applied control voltage for an induction motor drive system fed by a voltage source inverter has a limit depending on the dc bus capacity. in certain operations, such as accelerating, the motor might require an excessively high voltage value that the dc bus cannot supply. this paper presents a control solution for the bounded control input problem of the induction motor system by flexibly combining a hyperbolic tangent function in a backstepping control design procedure. in addition, the barrier lyapunov function is employed to force the speed tracking error within a defined value. the closed-loop system stability is proven, and the proposed control is verified through numerical simulations. keywords-backstepping; barrier lyapunov function; induction motor; foc
i. introduction
induction motors have been serving as a major workforce in various industrial applications [1, 2] due to their robustness and ease of maintenance. despite the fact that the technologies used in induction motor drive systems are well-established, the drive system still draws control researchers’ attention due to its complicated dynamical properties [3, 5]. of the two most common induction motor drive control techniques, flux oriented control (foc) [6] based schemes are more widely used than the direct torque control (dtc) structure [7].
foc renders the induction motor as a direct current motor in terms of decoupling the torque producing and flux forming processes. there have been many attempts to control the induction motor based on foc, from classical pid control [2] to advanced nonlinear strategies including model predictive control [8], sliding mode control [9], fuzzy-neural approaches [10], and genetic algorithms [11]. thanks to its systematic design and its ability to cope with system nonlinearities, the backstepping method is used intensively in induction motor drives. authors in [12] successfully employed backstepping integrated with a high-gain observer for stabilizing the motor drive without information on the rotor speed, flux, and load torque. in order to compensate system uncertainties, backstepping was combined with a recurrent neural network to enhance tracking performance in induction servo systems [13]. a similar approach can be found in [14], where a radial basis function neural network was used; the proposed control guarantees system stability and bounded signals. exploiting the robustness of sliding mode control, authors in [15] designed a backstepping sliding mode control for dealing with lumped uncertainties in linear induction motor drives, and the effectiveness of the proposed control was verified numerically and experimentally. in [16], adaptive backstepping control supported by a fuzzy system for integral action was developed for a linear induction motor; a simulation study showed the control's ability to cope with parameter variations and load disturbances. in the quest of compensating mechanical parameters such as an unknown viscous coefficient and load torque, an adaptive backstepping algorithm was designed in [17]. other applications of backstepping control in induction motor drives can be seen in [18, 19].
the aforementioned researches mainly focus on ways to improve the dynamical responses of the induction motor drive closed-loop system regardless of the limitations of the control inputs. it is clear that in certain circumstances the motor might require a voltage exceeding what the dc bus can supply; if the required voltage is not available, system performance degrades. in this paper, a method is introduced to tackle this problem by manipulating backstepping control with the assistance of a virtual control defined through a hyperbolic tangent function, limiting the control input to a specified value determined by the capacity of the dc bus. in addition, the paper also integrates a barrier lyapunov function in the design steps in order to force the speed tracking error to fall within a desired range.
ii. mathematical model formulation
the mathematical model of the induction motor is well defined; for a more detailed derivation please refer to [20]. the induction motor model in the d-q reference frame is obtained as in (1), where σ = 1 − l_m²/(l_s l_r) is the leakage factor, t_s = l_s/r_s the stator time constant, t_r = l_r/r_r the rotor time constant, k_r = l_m/l_r, r_σ = r_s + k_r² r_r, t_σ = σ l_s/r_σ, ω_s = ω + l_m i_sq/(t_r ψ_rd) the slip estimation, ω the mechanical rotor speed, z_p the number of pole pairs, j the rotor and load lumped inertia, m_l the load torque, ψ'_rd = ψ_rd/l_m the rotor flux (normalized by l_m), and l_m, l_r, l_s the mutual, rotor, and stator inductances. (corresponding author: tung lam nguyen)
di_sd/dt = −(1/(σt_s) + (1−σ)/(σt_r)) i_sd + ω_s i_sq + ((1−σ)/(σt_r)) ψ'_rd + u_sd/(σl_s)
di_sq/dt = −ω_s i_sd − (1/(σt_s) + (1−σ)/(σt_r)) i_sq − ((1−σ)/σ) ω ψ'_rd + u_sq/(σl_s)
dψ'_rd/dt = (1/t_r) i_sd − (1/t_r) ψ'_rd
dω/dt = (1/j) [(3/2) z_p (1−σ) l_s ψ'_rd i_sq − m_l]    (1)
the first two equations characterize the motor current dynamics. the last two equations denote the motor flux forming process and the equation of motion. it can be observed that the induction motor given in (1) is a coupled and nonlinear system. the block diagram representation of the induction motor is given in figure 1.
fig. 1. structure of the induction motor in dq-coordinates.
iii. backstepping controller design with control input and output constraints
a. magnetic flux control design
the control objective is to drive the motor magnetic flux to a desired value in such a way that the tracking error stays in a predefined range. denoting x1 = ψ'_rd and x2 = i_sd, the tracking error is defined as follows:
z1 = x1 − ψ'_rdref    (2)
and a barrier lyapunov candidate function is:
v1 = (1/2) log(k_b² / (k_b² − z1²))    (3)
taking the time derivative of v1 gives:
v̇1 = z1 ż1 / (k_b² − z1²)    (4)
at this point, x2 is considered as a control input, and the error between x2 and its desired value is defined as:
z2 = x2 − α1    (5)
where α1 is the virtual control.
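as an aside, the role of the barrier lyapunov candidate in (3) can be illustrated numerically: v1 is near zero for a small tracking error but grows without bound as |z1| approaches k_b, which is what keeps the error inside the prescribed band. a minimal sketch (the function and the bound k_b = 0.1 follow the paper; the sample error values are arbitrary):

```python
import math

def barrier_lyapunov(z, kb):
    """v = 0.5 * log(kb^2 / (kb^2 - z^2)); finite only while |z| < kb."""
    assert abs(z) < kb, "tracking error has left the barrier"
    return 0.5 * math.log(kb**2 / (kb**2 - z**2))

kb = 0.1  # output bound used later in the paper's simulation
for z in (0.0, 0.05, 0.09, 0.099):
    print(f"z = {z:6.3f}  ->  v = {barrier_lyapunov(z, kb):8.4f}")
```

as z moves from 0 toward k_b the printed v values increase steeply, so any controller that keeps v bounded automatically keeps |z1| < k_b.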
substituting (5) into (4) gives:
v̇1 = z1 [(z2 + α1 − x1)/t_r − ψ̇'_rdref] / (k_b² − z1²)    (6)
equation (6) suggests that the virtual control can be selected as:
α1 = x1 + t_r ψ̇'_rdref − c1 t_r (k_b² − z1²) z1    (7)
the virtual control renders v̇1 as:
v̇1 = −c1 z1² + z1 z2 / (t_r (k_b² − z1²))    (8)
to deal with the coupling term z1 z2 and force z2 to zero, we propose a lyapunov candidate function as:
v2 = v1 + (1/2) z2²    (9)
the derivative of v2 has the following form:
v̇2 = −c1 z1² + z1 z2 / (t_r (k_b² − z1²)) + z2 ż2    (10)
taking the derivative of (5) we have:
ż2 = ẋ2 − α̇1    (11)
substituting the current dynamics of (1) into (11) results in:
ż2 = −(1/(σt_s) + (1−σ)/(σt_r)) x2 + ω_s i_sq + ((1−σ)/(σt_r)) x1 + u_sd/(σl_s) − α̇1    (12)
from (10) and (12) a control signal can be achieved guaranteeing that x1 converges to ψ'_rdref with the dynamical error confined in a range defined by k_b. another control objective is to keep the control input within a limitation. it is necessary to consider input constraints in the control design since the applied voltage is supplied by an inverter whose output voltage is practically limited. toward this end, the limitation of the output voltage is described through a hyperbolic tangent function:
u_sd = g(υ) = u_m tanh(υ/u_m)    (13)
where u_m is the control limit. the design of u_sd is thus passed to the construction of υ.
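the saturation map in (13) can be sketched directly: u_m tanh(υ/u_m) is approximately linear for small commands and smoothly limited to ±u_m for large ones. a minimal sketch (u_m = 160 v as in the paper's simulation; the sample commands are arbitrary):

```python
import math

def saturated_voltage(v, u_m):
    """u = u_m * tanh(v / u_m): close to v for |v| << u_m, never exceeds u_m."""
    return u_m * math.tanh(v / u_m)

u_m = 160.0
for v in (10.0, 150.0, 500.0, -500.0):
    u = saturated_voltage(v, u_m)
    assert abs(u) < u_m  # the hard bound is never violated
    print(f"requested {v:7.1f} v -> applied {u:8.2f} v")
```

unlike a hard clip, tanh is differentiable everywhere, which is what allows the derivative ∂g/∂υ to appear in the subsequent design steps.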
differentiating (13) gives:
ġ(υ) = (∂g/∂υ) υ̇    (14)
the control design of u_sd subject to |u_sd| ≤ u_m is shifted to the task of finding a virtual signal u1 defined by:
u1 = (∂g/∂υ) υ̇, i.e. υ̇ = (∂g/∂υ)⁻¹ u1    (15)
to proceed we define:
z3 = u_sd − α2 ≡ g(υ) − α2    (16)
from (12) and (16) the virtual control α2 can be selected as:
α2 = σ l_s [α̇1 + (1/(σt_s) + (1−σ)/(σt_r)) x2 − ω_s i_sq − ((1−σ)/(σt_r)) x1 − z1/(t_r (k_b² − z1²)) − c2 z2]    (17)
the virtual control α2 renders v̇2 as:
v̇2 = −c1 z1² − c2 z2² + z2 z3 / (σ l_s)    (18)
in order to remove the coupling term z2 z3 and drive g(υ) to track α2, a lyapunov candidate function is proposed as follows:
v3 = v2 + (1/2) z3²    (19)
the derivative of (19) yields:
v̇3 = −c1 z1² − c2 z2² + z2 z3 / (σ l_s) + z3 ż3    (20)
taking the derivative of (16) and substituting the result into (20), we can finally get the virtual control that guarantees bounded control input as:
u1 = α̇2 − z2/(σ l_s) − c3 z3    (21)
substituting the proposed control into (20) gives:
v̇3 = −c1 z1² − c2 z2² − c3 z3² ≤ 0    (22)
and the design process for the magnetic flux is completed.
b. speed control design
the speed control design with input and output constraints is carried out in the same manner as in the previous section.
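the cancellations behind the negative-definite v̇ of the flux loop in (22) can be checked numerically: along the closed-loop error dynamics obtained after substituting α1, α2 and u1 (as reconstructed here), v̇3 collapses to −c1 z1² − c2 z2² − c3 z3². a sketch with arbitrary placeholder parameter values:

```python
import random

def closed_loop_vdot(z1, z2, z3, c1, c2, c3, kb, Tr, sLs):
    """v-dot along the closed-loop flux-loop error dynamics."""
    # error dynamics after substituting the virtual controls alpha1, alpha2, u1
    z1dot = z2 / Tr - c1 * (kb**2 - z1**2) * z1
    z2dot = z3 / sLs - z1 / (Tr * (kb**2 - z1**2)) - c2 * z2
    z3dot = -z2 / sLs - c3 * z3
    # v = 0.5*log(kb^2/(kb^2 - z1^2)) + 0.5*z2^2 + 0.5*z3^2
    return z1 * z1dot / (kb**2 - z1**2) + z2 * z2dot + z3 * z3dot

random.seed(0)
c1, c2, c3, kb, Tr, sLs = 1.0, 1.0, 1.0, 0.1, 0.08, 0.003
for _ in range(1000):
    z1 = random.uniform(-0.099, 0.099)  # error must stay inside the barrier
    z2 = random.uniform(-5.0, 5.0)
    z3 = random.uniform(-5.0, 5.0)
    expected = -c1 * z1**2 - c2 * z2**2 - c3 * z3**2
    assert abs(closed_loop_vdot(z1, z2, z3, c1, c2, c3, kb, Tr, sLs) - expected) < 1e-6
print("v-dot = -c1*z1^2 - c2*z2^2 - c3*z3^2 <= 0 verified at 1000 random points")
```

the cross terms z1 z2/(t_r(k_b² − z1²)) and z2 z3/(σl_s) cancel exactly in pairs, which is the whole point of the backstepping construction.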
for the sake of compact presentation, the control design steps are summarized in this section. with x3 = ω, x4 = i_sq, the torque constant k_m = (3/2) z_p (1−σ) l_s, and the output bound k_t on the speed tracking error, the design errors and virtual controls are:
t1 = x3 − ω_ref    (23)
β1 = (j/(k_m x1)) [ω̇_ref + m_l/j − d1 (k_t² − t1²) t1]    (24)
t2 = x4 − β1    (25)
β2 = σ l_s [β̇1 + (1/(σt_s) + (1−σ)/(σt_r)) x4 + ω_s x2 + ((1−σ)/σ) ω x1 − k_m x1 t1/(j (k_t² − t1²)) − d2 t2]    (26)
t3 = u_sq − β2 = g(υ2) − β2    (27)
u2 = β̇2 − t2/(σ l_s) − d3 t3    (28)
where β1, β2 are virtual controls and t1, t2, t3 are the backstepping design errors in each step. the control signal guaranteeing bounded input and output is u2.
iv. simulation results
a. settings
simulations were conducted on an im machine with the control structure shown in figure 2. space vector modulation is used for the inverter. rψ, rω, and r_i are the flux, speed, and dq-current controllers, respectively. simulation parameters are given in table i. two control schemes were implemented: conventional backstepping and backstepping with input and output constraints.
table i. simulation parameters
rated power pnom: 2.2 kw
rated speed nnom: 2880 rpm
rated phase current inom: 4.7 a rms
number of pole pairs zp: 1
rotor resistance rr: 0.42 ω
stator resistance rs: 0.37 ω
rotor inductance lr: 34.25 mh
stator inductance ls: 34.41 mh
mutual inductance lm: 33.1 mh
total inertia j: 0.001 kgm²
b. simulation procedure
at t = 0 s, the magnetizing current is established, and at t = 2 s the system speeds up to 1200 rpm. in the simulation scenario, a sudden load torque with the rated value of 1.5 nm is applied to the motor shaft.
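from the parameters of table i, the derived model constants used in (1) can be computed directly; a quick sketch (formulas from section ii, numbers from the table):

```python
# motor parameters from table i (si units)
L_m, L_r, L_s = 33.1e-3, 34.25e-3, 34.41e-3  # inductances [h]
R_r, R_s = 0.42, 0.37                        # resistances [ohm]

sigma = 1 - L_m**2 / (L_s * L_r)  # leakage factor
T_s = L_s / R_s                   # stator time constant [s]
T_r = L_r / R_r                   # rotor time constant [s]

print(f"sigma = {sigma:.4f}, T_s = {T_s:.4f} s, T_r = {T_r:.4f} s")
```

the small leakage factor (about 0.07) confirms the tight magnetic coupling typical of such a machine.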
the simulation results for the stator current and speed controllers using the conventional backstepping method and the backstepping with control input and output constraints are shown in figures 3-8, where (a) represents the response of the conventional backstepping and (b) the backstepping with input and output constraints. the proposed backstepping controller is applied to the induction motor with the coefficients chosen as: c1=1000, c2=1000, c3=5150, c4=1000, d1=500, d2=500, d3=1000, and d4=500.
fig. 2. backstepping control with output and input constraints.
in the simulation, the control signal constraint is selected as:
u_s = √(u_sd² + u_sq²) ≤ 160 v    (29)
constraints on the d-q components are determined according to a rectangular approximation:
u_sq_max = (2/3) α v_dc and u_sd_max = (1/3) α v_dc    (30)
where v_dc = 300 v and α = 0.8. this condition implies that the control inputs must satisfy:
|u_sq| ≤ u_sq_max and |u_sd| ≤ u_sd_max    (31)
the designed controller can also limit the system's output through the value k_b, which helps to control the overshoot of the system. in the simulation, we set k_b = 0.1.
fig. 3. speed response.
for the system controlled by the conventional backstepping method, we set the dc bus voltage to v_dc = 300 v. this implies that there is no hard limit applied in this case. on the contrary, the dc bus voltage is restricted to 160 v when the induction motor drive system is regulated by the backstepping control with input and output constraints. in both cases, the simulation results show that the two approaches provide good tracking performance with fast response time, as can be seen in figure 3.
fig. 4. voltage response usd.
fig. 5. voltage response usq.
fig. 6. stator current response isd.
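the rectangular approximation of (30) and the check of (31) amount to a few lines; with v_dc = 300 v and α = 0.8 the bounds evaluate to u_sd_max = 80 v and u_sq_max = 160 v, matching the 160 v constraint of (29). a sketch with the factors as reconstructed here (function names are illustrative):

```python
def dq_voltage_limits(v_dc, alpha):
    """rectangular approximation of the inverter limit:
    u_sd_max = alpha*v_dc/3, u_sq_max = 2*alpha*v_dc/3."""
    return alpha * v_dc / 3.0, 2.0 * alpha * v_dc / 3.0

def within_limits(u_sd, u_sq, v_dc=300.0, alpha=0.8):
    """check condition (31) for a candidate d-q voltage pair."""
    u_sd_max, u_sq_max = dq_voltage_limits(v_dc, alpha)
    return abs(u_sd) <= u_sd_max and abs(u_sq) <= u_sq_max

u_sd_max, u_sq_max = dq_voltage_limits(300.0, 0.8)
print(u_sd_max, u_sq_max)          # about 80 and 160
print(within_limits(60.0, 150.0))  # inside the rectangle
print(within_limits(90.0, 150.0))  # u_sd exceeds its bound
```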
fig. 7. stator current response isq.
fig. 8. voltage response us.
it is noted that at t = 2 s the motor accelerates to 1200 rpm. this speeding action requires a considerably large quadrature current and voltage (figures 4(a), 5(a), 6(a), and 7(a)), resulting in u_s reaching 180 v in the case of the conventional method, as shown in figure 8(a). meanwhile, since the control inputs are tied to 160 v, the measured u_s is suppressed well below 120 v when the backstepping with input and output limitation acts (figures 4-8(b)). this result is of practical significance when the system demands a large control input during fast acceleration or deceleration in the face of a limited dc bus capacity. the speed tracking also indicates that the overshoot remains in the bounded set specified by k_b = 0.1. the controller designed by this method helps the drive system respond well in accordance with the set values. however, the disadvantage of this method is that it is difficult to select the optimal coefficients for the controller, and the design process of calculating the control signals requires many derivative steps, leading to a large volume of calculations.
v. conclusion
the paper proposes a control mechanism backboned by backstepping control to avoid voltage saturation in induction motor drive systems. the hyperbolic tangent is embedded in the control design to deal with the voltage saturation problem.
this approach differs from other works where either the torque producing or the flux forming voltage is sacrificed when the input voltage enters the saturation zone. the simulation results show that the control can suppress the control input voltage in the prescribed domain while keeping the tracking error bounded with the help of the barrier lyapunov function. in future research, we aim at deploying the proposed control on an experimental rig.
references
[1] x. shi, h. li, and j. huang, “a practical scheme for induction motor modelling and speed control,” international journal of control and automation, vol. 7, pp. 113–124, apr. 2014, doi: 10.14257/ijca.2014.7.4.11.
[2] t. m. chikouche, a. mezouar, t. terras, and s. hadjeri, “variable gain pi controller design for speed control of a doubly fed induction motor using state-space nonlinear approach,” engineering, technology & applied science research, vol. 3, no. 3, pp. 433–439, jun. 2013.
[3] “a robust sensorless output feedback controller of the induction motor drives: new design and experimental validation,” international journal of control, vol. 83, no. 3, pp. 484–497, doi: 10.1080/00207170903193474.
[4] v. t. ha, v. h. phuong, n. t. lam, and n. p. quang, “a dead-beat current controller based wind turbine emulator,” in 2017 international conference on system science and engineering (icsse), ho chi minh city, vietnam, jul. 2017, pp. 169–174, doi: 10.1109/icsse.2017.8030859.
[5] v. t. ha, n. t. lam, v. t. ha, and v. q. vinh, “advanced control structures for induction motors with ideal current loop response using field oriented control,” international journal of power electronics and drive systems (ijpeds), vol. 10, no. 4, pp. 1758–1771, dec. 2019.
[6] j. zhang et al., “integrated design of speed sensorless control algorithms for induction motors,” in 2015 34th chinese control conference (ccc), jul. 2015, pp. 8678–8684, doi: 10.1109/chicc.2015.7261011.
[7] m. k. sahu, a. k. panda, and b. p.
panigrahi, “direct torque control for three-level neutral point clamped inverter-fed induction motor drive,” engineering, technology & applied science research, vol. 2, no. 2, pp. 201–208, apr. 2012.
[8] s. m. kazraji and m. b. b. sharifian, “model predictive control of linear induction motor drive,” in iecon 2017 43rd annual conference of the ieee industrial electronics society, oct. 2017, pp. 3736–3739, doi: 10.1109/iecon.2017.8216635.
[9] c. regaya, a. zaafouri, and a. chaari, “a new sliding mode speed observer of electric motor drive based on fuzzy-logic,” acta polytechnica hungarica, vol. 11, pp. 219–232, feb. 2014, doi: 10.12700/aph.11.03.2014.03.14.
[10] m. denai and s. attia, “fuzzy and neural control of an induction motor,” international journal of applied mathematics and computer science, vol. 12, pp. 221–233, jan. 2002.
[11] m. chebre, a. meroufel, and y. bendaha, “speed control of induction motor using genetic algorithm-based pi controller,” acta polytechnica hungarica, vol. 8, no. 6, pp. 141–153, jan. 2011.
[12] i. haj brahim, s. hajji, and a. chaari, “backstepping controller design using a high gain observer for induction motor,” international journal of computer applications, vol. 23, no. 3, pp. 1-6, jun. 2011, doi: 10.5120/2873-3730.
[13] c. m. lin and c. f. hsu, “recurrent-neural-network-based adaptive-backstepping control for induction servomotors,” ieee transactions on industrial electronics, vol. 52, no. 6, pp. 1677–1684, dec. 2005, doi: 10.1109/tie.2005.858704.
[14] j. yu, y. ma, b. chen, h. yu, and s. pan, “adaptive neural position tracking control for induction motors via backstepping,” international journal of innovative computing, information and control, vol. 7, no. 7, pp. 4503–4516, jul. 2011.
[15] f.-j. lin, p.-h. shen, and s.-p.
hsu, “adaptive backstepping sliding mode control for linear induction motor drive,” iee proceedings - electric power applications, vol. 149, no. 3, pp. 184–194, may 2002, doi: 10.1049/ip-epa:20020138.
[16] i. k. bousserhane, a. hazzab, r. mostefa, b. mazari, and m. kamli, “mover position control of linear induction motor drive using adaptive backstepping controller with integral action,” tamkang journal of science and engineering, vol. 12, pp. 17–28, mar. 2009.
[17] h. t. lee, “adaptive pc-based backstepping position control of induction motor,” international journal of power electronics, vol. 3, no. 2, p. 156, 2011, doi: 10.1504/ijpelec.2011.038891.
[18] m. moutchou, a. abbou, and h. mahmoudi, “mras-based sensorless speed backstepping control for induction machine, using a flux sliding mode observer,” turkish journal of electrical engineering and computer sciences, vol. 23, pp. 187–200, jan. 2015, doi: 10.3906/elk-1208-50.
[19] m. morawiec, “dynamic variables limitation for backstepping control of induction machine and voltage source converter,” archives of electrical engineering, vol. 61, no. 3, pp. 389–410, 2012, doi: 10.2478/v10171-012-0031-1.
[20] n. p. quang and j.-a. dittrich, vector control of three-phase ac machines: system development in the practice, 2nd ed. berlin heidelberg: springer-verlag, 2015.
engineering, technology & applied science research vol. 9, no. 4, 2019, 4428-4432 www.etasr.com usman: an efficient depth estimation technique using 3-trait luminance profiling
an efficient depth estimation technique using 3-trait luminance profiling
imran usman, college of computing and informatics, saudi electronic university, riyadh, saudi arabia, i.usman@seu.edu.sa
abstract—this paper presents an efficient depth estimation technique for the depth image-based rendering process in the 3-d television system.
it uses three depth cues, namely linear perspective, motion information, and texture characteristics, to estimate the depth of an image. in addition, suitable weights are assigned to different components of the image based on their relative perspective position in either the foreground or the background of the scene. experimental results on publicly available datasets validate the usefulness of the proposed technique for the efficient estimation of depth maps. keywords-depth estimation; 3d tv; dibr; depth image; 3-d warping
i. introduction
with the advancement of technology in recent decades and the reduced cost of hi-tech hardware, a wide range of new possibilities has become realizable, as has the demand for an enhanced viewing experience in the 3d field. in recent years, there has been rapid progress in the fields of image capturing, coding and display, which brings the realm of 3d closer to reality than ever before [1]. the real, 3d world incorporates a third dimension (z-axis) that defines depth. depth is perceived by human vision in the form of binocular disparity. as human eyes are located at slightly different positions, different views of the real world are perceived, which are then used by the human brain to reconstruct the depth information of the scene. a 3d display takes advantage of this phenomenon, creating two slightly different images of every scene and then presenting them to the individual eyes. with an appropriate disparity and calibration of parameters, a correct 3d perception can be realized. in the field of 3d, one of the important steps is to generate the 3d content itself. for this purpose, special cameras have been designed to generate 3d models directly. for example, a stereoscopic dual-camera uses a co-planar configuration of two monoscopic cameras, and depth information is computed using binocular disparity. another example is the depth range camera. the examples mentioned above are used for the direct generation of 3d content.
on the other hand, there is a tremendous amount of data in 2d which can be converted into 3d. unfortunately, the conventional 2d camera does not provide any information about the depth in the image. thus, developing a method for estimating an image’s depth map close to the real depth map has become of prime interest [2, 3]. once accurately estimated, the depth map can then be used to construct the 3d image using techniques such as depth image based rendering (dibr). authors in [4] used the concept of tensor voting at two different levels for depth estimation in a semi-automatic sparse-to-dense structure-aware depth estimation method. authors in [5] proposed a collaborative deconvolutional neural network to concomitantly model semantic segmentation and single-view depth estimation for mutual benefits. authors in [6] proposed a two-stage depth estimation technique based on a dense feature extractor and a depth map generator. the dense feature extractor extracts multi-scale information from the input image while keeping the feature maps dense; these multi-scale features are then fused by the depth map generator using a defined attention mechanism. authors in [7] proposed a method for depth estimation corresponding to every viewpoint of a dense light field. their algorithm computes the disparity for every viewpoint by taking occlusions into consideration; it also preserves the continuity of the depth space, and prior knowledge of the depth range is not required. authors in [8] proposed a network for depth estimation combining an encoder-decoder architecture with an adversarial loss. authors in [9] proposed a depth estimation framework based on a deep convolutional neural network, built on depth prediction and depth enhancement sub-networks. all the mentioned works are computationally expensive and resource-hungry.
in this work, we propose a simple idea of depth profiling based on three depth cues, namely linear perspective, motion detection and texture characteristics, to estimate the depth of an image.
ii. the proposed depth map estimation technique
in order to generate the depth map, three cues are observed: motion information, linear perspective and texture characteristics. for motion information, the temporal difference is taken between two consecutive frames and thresholding is then applied. for linear perspective, the vanishing line or the vanishing point [3] is used to find the points that are farthest in the image, and corresponding gradient depth values are assigned. texture characteristics are determined and analyzed. in addition, this work further demonstrates the use of a bilateral or gaussian filter for smoothing of the depth map to achieve better results. figure 1 presents the general architecture of the proposed technique in a flow diagram. the details of the proposed technique are presented in the following subsections.
fig. 1. general architecture of the proposed technique.
(corresponding author: imran usman)
a. motion information
the human brain pays more attention to things that are in motion than to those which are static, so information in the form of video (objects in motion) is more understandable and clearer than still images.
objects that are far away seem to be static or to move very slowly when we look at them while we are in motion, whereas objects that are nearer to us move fast when observed from a moving source. this effect is known as motion parallax: things closer to us seem to move rapidly, while objects far away move slowly or appear at rest. keeping this fact in consideration, we use the approach of temporal difference to find out which objects are moving and which are at rest. by doing so, objects are assigned to the foreground domain or the background domain of an image by thresholding. the formula for finding the temporal difference is:
δ(i, j) = |ρ_c(i, j) − ρ_{c−1}(i, j)|    (1)
where ρ_c is the current frame pixel at (i, j) and ρ_{c−1} is the reference frame pixel at (i, j). after calculating the temporal difference between the frames, the values are compared with a threshold value:
δ_th(i, j) = 1 if δ(i, j) > ∂_c, 0 otherwise    (2)
where ∂_c is the standard deviation given as:
∂_c = sqrt((1/(m − 1)) ∑_{i=1}^{m} (ρ_i − ρ̄)²)    (3)
where m is the total number of pixels and ρ̄ is the mean of all pixels in the frame. from the first two equations, moving objects are found. in a depth map, pixel values range from 0 to 255, where 0 represents the farthest point and 255 the nearest point in the image. in the algorithm, the values of the temporal difference show which objects are static in the image and which are in motion. if the temporal difference between a pixel of the current frame and the reference frame is less than the standard deviation of the frame, that pixel is assigned the value 0, denoting that it belongs to the background (farthest region).
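the thresholding rule of (1)-(3) — a difference below the frame's standard deviation means background, above it means foreground — can be sketched on list-of-lists grayscale frames (binary output only here; the final map would assign gray levels rather than 0/1):

```python
import math

def motion_mask(current, reference):
    """threshold |current - reference| by the current frame's standard
    deviation: 1 marks moving (foreground) pixels, 0 static (background)."""
    pixels = [p for row in current for p in row]
    m = len(pixels)
    mean = sum(pixels) / m
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / (m - 1))
    return [[1 if abs(c - r) > std else 0 for c, r in zip(crow, rrow)]
            for crow, rrow in zip(current, reference)]

# toy 4x4 frames: a bright 2x2 object appears in the current frame
reference = [[100] * 4 for _ in range(4)]
current = [row[:] for row in reference]
for r in (1, 2):
    for c in (1, 2):
        current[r][c] = 180

mask = motion_mask(current, reference)
print(mask)  # only the four changed pixels are flagged as foreground
```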
similarly, if the temporal difference generates a value greater than the threshold value, the pixel is assigned a higher gray level, indicating that it belongs to the foreground (nearest region) of the image. with this cue the frame is separated into two parts: the foreground and the background.
b. linear perspective
objects that are placed far away subtend a very small angle in our eye, which the mind interprets as being the farthest in the scene. parallel lines in the image, like long railway tracks running through the image, provide linear perspective that helps to see objects in a three-dimensional view. linear perspective is also a depth cue related to texture gradient and relative size. in every image, when observing closely, we find either a vanishing line or a vanishing point. the vanishing point is the farthest point in the image where all parallel lines converge; in terms of graphical perspective, it is a point in the image plane that is defined by a line in space. the vanishing line of an image is defined as a set of vanishing points all located on one line. the vanishing line or point represents the farthest region in the image, which ultimately gives the depth information. in this work, the vanishing line or point is located empirically using different tools. the algorithm, in general, handles three possibilities for the occurrence of the vanishing point/line; after finding the vanishing point/line, depth gradient values are assigned accordingly. these are discussed below in three cases.
1) case i: a horizontal vanishing line exists
this case applies when only a horizontal vanishing line appears. pixels above the vanishing line are assigned the gray level 0, and pixels below this line are assigned lighter shades
4, 2019, 4428-4432 4430 www.etasr.com usman: an efficient depth estimation technique using 3-trait luminance profiling of the gray level (1-255), since they move away from the vanishing line. 2) case ii: if a vanishing point appears on the left part of the image the approach used for assigning gradient value in this case is to first divide the image into two portions, left and right respectively. when the vanishing point appears on the left side of the frame, the upper left corner of the image is assigned ‘0’ level of gray. consequently, the upper right part of the frame is assigned ascending values of gray from 0 to 255. the lower left corner of the frame is assigned ascending values of gray level from darker to lighter shades of the gray scale. finally, the lower right part of the frame is assigned ascending values of gray ranging from 0 to 255 diagonally. the following are the four regions in which the image is divided after locating a vanishing point for the assignment of gradient values. • upper left regions: (1 : ,1 : ) 0image i j = (4) where i and j are the coordinates of the vanishing point in the image. • lower left region: (1 : ,1 : ) 0image i j = (5) where i and j are the vanishing point coordinates in the image and m is the total number of rows in the image. • upper right region: 0 (1: , 1: ) 255 image i j n gradient values+ =    (6) where i and j are vanishing point co-ordinates in the image and n is the total number of rows in the image. • lower right region: 0 ( 1: , 1: ) 255 image i m j n gradient values+ + =    (7) 3) case iii: if a vanishing point appears on the right part of the image the upper right corner of the image is assigned ‘0’ value of gray level, and the upper left part of the frame is assigned ascending values of gray from 0 to 255. similarly, the lower right corner of the frame is assigned ascending values of gray level from darker to lighter shades of the gray scale. 
the lower left part of the frame is assigned gray values ascending from 0 to 255 diagonally. the details of the four regions are given below.

• upper right region:
image(1:i, j+1:n) = 0    (8)
where i and j are the coordinates of the vanishing point and n is the total number of columns in the image.

• lower right region:
image(i+1:m, j:n) = gradient values 0 → 255    (9)
where m is the total number of rows in the image and n is the total number of columns.

• upper left region:
image(1:i, 1:j−1) = gradient values 0 → 255    (10)

• lower left region:
image(i+1:m, 1:j−1) = gradient values 0 → 255, assigned diagonally    (11)

c. texture characteristics
the texture characteristics of an image also provide very important clues about its depth. it is a common observation that an object placed closer reveals more detail about its color, shape, brightness, etc. objects located at some distance, or very far away, are hard to judge in terms of brightness and contrast: their details may be lost, blend into the background, and even seem to be part of the background itself. this common observation underlines the importance of texture characteristics in finding the nearest and farthest objects in an image, providing depth information. the texture characteristics of an image include brightness, luminance, hue and chrominance. it has been found that human perception gives priority to brightness, hue, and chrominance [10]. brightness is the characteristic of visual perception that helps in recognizing whether an object is reflecting or radiating light; it can also be termed an attribute that tells us about the luminance of an object.
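the region-wise gradient assignment of case ii can be sketched as follows. this is a minimal numpy illustration only: the function name, the linear left-to-right ramp, and the averaged diagonal ramp are all assumptions, since the paper assigns the gradients empirically.

```python
import numpy as np

def depth_gradient_left_vp(rows, cols, i, j):
    """Depth-gradient map for a vanishing point at (i, j) in the left part
    of the image (case ii): the upper-left region is darkest (0) and values
    ramp toward 255 moving away from the vanishing point."""
    depth = np.zeros((rows, cols))                 # upper left: 0, per (4)
    ramp_cols = np.linspace(0, 255, cols - j)      # left-to-right ramp
    depth[:i, j:] = ramp_cols                      # upper right, per (6)
    ramp_rows = np.linspace(0, 255, rows - i)      # top-to-bottom ramp
    depth[i:, :j] = ramp_rows[:, None]             # lower left, per (5)
    diag = (ramp_rows[:, None] + ramp_cols[None, :]) / 2
    depth[i:, j:] = diag                           # lower right, per (7)
    return depth

d = depth_gradient_left_vp(100, 120, 40, 30)
```

case iii is the mirror image of this construction, with the dark corner on the upper right.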
objects present in the foreground usually have a greater brightness value than objects present in the background, so greater brightness values are assigned to the objects in the foreground and smaller values to the objects in the background of an image. luminance, a property defining color, describes the quantity of light that is absorbed by or reflected from an object. hue indicates the wavelength of the energy band of light at which it has its maximum value. chrominance is the component of the color arrangement that remains when luminance is removed from it. converting an image into gray scale yields pixel values that express these texture characteristics [11]: grayscale conversion removes the rgb content from the image and assigns each pixel a value between 0 and 255 according to its characteristics.

d. merging the three cues to generate the depth map
the three depth cues (motion detection, linear perspective and texture characteristics) are used to estimate the depth of the image. in order to convert a 2d image into 3d, the image is divided into two parts: background and foreground. while merging the cues to estimate the depth-map values, larger weights are assigned to the objects present in the foreground of the image, whereas the objects present in the background are assigned smaller weights. this is done because objects in the foreground of the image receive more attention from the human brain than objects in the background. cue merging to generate the depth map is divided into the following two cases.
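a common way to obtain such a gray-scale texture map is the standard luma weighting of the rgb channels, shown here as an assumed illustration; the paper does not specify which conversion formula it uses.

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an HxWx3 RGB image (values 0-255) to a single 0-255
    luminance channel using the ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the last (channel) axis

# a 1x2 image: one pure-white pixel and one pure-black pixel
img = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=float)
gray = to_grayscale(img)
```

the white pixel maps to (approximately) 255 and the black pixel to 0, matching the 0-255 range described above.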
1) case i: foreground
for the foreground region, the depth is calculated as:

$depth_f = \omega_{mo} \times depth_{mo} + \omega_{lin} \times depth_{lin}$    (12)

where $depth_{mo}$ and $depth_{lin}$ are the depth values extracted by the motion-information and linear-perspective cues respectively, $\omega_{mo}$ is the weight assigned to the motion-information cue, and $\omega_{lin}$ is the weight assigned to the linear-perspective cue. as discussed earlier, moving objects get more attention from the human visual system, hence a higher weight is assigned to the depth values calculated by the motion-information cue, i.e. $\omega_{mo} = 0.7$, while the linear-perspective cue is assigned $\omega_{lin} = 0.25$. these values were determined and assigned empirically after subjective tests of the human visual system.

2) case ii: background
for the background region, the depth is calculated as:

$depth_b = \omega_{lin} \times depth_{lin} + \omega_{tex} \times depth_{tex}$    (13)

where $depth_{tex}$ and $depth_{lin}$ are the depth values calculated from the texture characteristics and linear perspective respectively. linear perspective dominates the gradient depth values for static objects and the background, therefore a higher weight is assigned to it and a lower weight to the texture characteristics: $\omega_{lin} = 0.7$ and $\omega_{tex} = 0.25$. the final depth map is generated by combining the results of these two cases:

$depth = depth_f + depth_b$    (14)

e. smoothing of the depth map
in order to reduce noise in the generated depth map while preserving its edges, and thus to achieve a smooth depth map, bilateral filters [12] are used (other filters, e.g. gaussian, can also be used). another strategy to effectively remove noise is to use a bilateral filter together with edge-dependent directional gaussian kernels for the non-hole regions. in addition, considering the similarity of the depth pixels, a trilateral filter can be selectively used in combination, according to the type of pixel. iii.
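the weighted merge of (12)-(14) amounts to a per-pixel blend of the cue maps. the sketch below uses the paper's weights; the mask-based split into foreground and background regions is an assumption about how the two cases are combined.

```python
import numpy as np

W_MO, W_LIN_F = 0.7, 0.25   # foreground weights, per (12)
W_LIN_B, W_TEX = 0.7, 0.25  # background weights, per (13)

def merge_depth(depth_mo, depth_lin, depth_tex, fg_mask):
    """Combine the three cue maps into one depth map: (12) inside the
    foreground mask, (13) elsewhere, summed as in (14)."""
    depth_f = (W_MO * depth_mo + W_LIN_F * depth_lin) * fg_mask
    depth_b = (W_LIN_B * depth_lin + W_TEX * depth_tex) * (1 - fg_mask)
    return depth_f + depth_b  # (14)

mo = np.full((2, 2), 200.0)
lin = np.full((2, 2), 100.0)
tex = np.full((2, 2), 80.0)
mask = np.array([[1.0, 0.0], [1.0, 0.0]])  # left column is foreground
d = merge_depth(mo, lin, tex, mask)
# foreground pixels: 0.7*200 + 0.25*100 = 165; background: 0.7*100 + 0.25*80 = 90
```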
results and discussion
the proposed technique was implemented in matlab on an intel core i7 3.4ghz processor with 8gb ram. for experimentation, we used the middlebury 2005 dataset [13], which comprises a number of images with a variety of depth perspectives, together with the corresponding stereo images and the created anaglyph images. the anaglyph images are created either from the stereo images or from the center image and the associated depth map. figure 2 presents some of the images from the middlebury dataset and their associated depth maps; these images show daily-life objects with different depth perspectives. the images are rectified and radial distortion has been removed. the depth maps were created using a focal length of 3740 pixels and a baseline of 160mm, while the intensity and disparity values were kept at 60. more details on the other parameter settings can be found in [13].

fig. 2. sample images from the middlebury dataset [13]: (a) i-iv are the original images, and (b) i-iv are the corresponding depth maps.

table i presents the performance comparison of the proposed technique against the depth maps from the middlebury dataset for the art, books, computer, dolls, and drumsticks images. for comparison, we take the depth maps from the middlebury dataset as the standard depth maps corresponding to each scene and compare them with the luminance-profiled depth maps generated by the proposed technique. we used peak signal-to-noise ratio (psnr) and structural similarity index measure (ssim) as comparison metrics between the estimated depth map and the original depth map.
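for reference, psnr between two 8-bit depth maps can be computed as follows (this is the standard definition, not code from the paper):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)  # constant error of 16 gray levels
value = psnr(a, b)         # 10*log10(255^2/256) ≈ 24.05 dB
```

ssim is more involved (local means, variances and covariances over a sliding window); library implementations such as scikit-image's are typically used.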
it can be observed from table i that the proposed technique yields an acceptable level of similarity to the standard depth maps for all of the selected images.

table i. performance comparison of the proposed technique and the depth maps from the middlebury dataset

image name | psnr   | ssim
art        | 28.970 | 0.689
books      | 28.516 | 0.651
computer   | 29.387 | 0.713
dolls      | 29.146 | 0.704
drumsticks | 28.632 | 0.701

figure 3 shows the average performance comparison of the proposed technique and the standard depth maps over a number of images from the middlebury dataset. it is to be noted that, for the purposes of comparison and display, the psnr values are scaled down by a dividing factor of 30 in order to bring them close to the ssim values. once again, it can be observed that the proposed technique produces depth maps within an acceptable range of similarity and luminance profiling compared to the original depth maps.

fig. 3. average performance comparison of the proposed technique and the standard depth maps.

iv. conclusion
this work presented a simple depth estimation method that is computationally fast and resource efficient. the proposed technique utilizes linear perspective, motion detection and texture characteristics to estimate the luminance profiling of an image scene. in the motion-information cue, the temporal difference is taken between two consecutive frames and thresholding is then applied. in the linear-perspective cue, the vanishing line or the vanishing point is used to find the farthest points in the image, and gradient depth values are assigned accordingly. texture characteristics are determined and analyzed in order to estimate the depth map. finally, bilateral filters are used to smooth the depth map. the experimental results show that the depth maps generated by the proposed technique are of acceptable quality and can be used in real-world applications.
acknowledgment
the author acknowledges saudi electronic university's help and financial support.

references
[1] j. son, b. javidi, s. yano, k. choi, "recent developments in 3-d imaging technologies", journal of display technology, vol. 6, no. 10, pp. 394-403, 2010
[2] l. zhang, w. j. tam, "stereoscopic image generation based on depth images for 3d tv", ieee transactions on broadcasting, vol. 51, no. 2, pp. 191-199, 2005
[3] a. almansa, a. desolneux, s. vamech, "vanishing point detection without any a priori information", ieee transactions on pattern analysis and machine intelligence, vol. 25, no. 4, pp. 502-507, 2003
[4] b. wang, j. zou, y. li, k. ju, h. xiong, y. f. zheng, "sparse-to-dense depth estimation in videos via high-dimensional tensor voting", ieee transactions on circuits and systems for video technology, vol. 29, no. 1, pp. 68-79, 2019
[5] j. liu, y. wang, y. li, j. fu, j. li, h. lu, "collaborative deconvolutional neural networks for joint depth estimation and semantic segmentation", ieee transactions on neural networks and learning systems, vol. 29, no. 11, pp. 5655-5666, 2018
[6] z. hao, y. li, s. you, f. lu, "detail preserving depth estimation from a single image using attention guided networks", 2018 international conference on 3d vision (3dv), verona, italy, september 5-8, 2018
[7] x. jiang, m. l. pendu, c. guillemot, "depth estimation with occlusion handling from a sparse set of light field views", 25th ieee international conference on image processing, athens, greece, october 7-10, 2018
[8] m. carvalho, b. le saux, p. trouve-peloux, a. almansa, f. champagnat, "on regression losses for deep depth estimation", 25th ieee international conference on image processing, athens, greece, october 7-10, 2018
[9] x. duan, x. ye, y. li, h. li, "high quality depth estimation from monocular images based on depth prediction and enhancement subnetworks", ieee international conference on multimedia and expo, san diego, usa, july 23-27, 2018
[10] k. ghosh, s. k. pal, "some insights into brightness perception of images in the light of a new computational model of figure-ground segregation", ieee transactions on systems, man, and cybernetics part a: systems and humans, vol. 40, no. 4, pp. 758-766, 2010
[11] m. song, d. tao, c. chen, x. li, c. w. chen, "color to gray: visual cue preservation", ieee transactions on pattern analysis and machine intelligence, vol. 32, no. 9, pp. 1537-1552, 2010
[12] a. v. le, s. w. jung, c. s. won, "directional joint bilateral filter for depth images", sensors, vol. 14, no. 7, pp. 11362-11378, 2014
[13] http://vision.middlebury.edu/stereo/data/scenes2005/ [accessed: 21-apr-2019]

engineering, technology & applied science research vol. 10, no. 2, 2020, 5396-5401 www.etasr.com lam et al.: simulation models for three-phase grid-connected pv inverters enabling current limitation under unbalanced faults

simulation models for three-phase grid-connected pv inverters enabling current limitation under unbalanced faults

le hong lam, faculty of electrical engineering, the university of danang—university of science and technology, da nang, vietnam, lhlam@dut.udn.vn
tran dai hoang phuc, faculty of electrical engineering, the university of danang—university of science and technology, da nang, vietnam, 105150107@sv.dut.edu.vn
nguyen huu hieu, faculty of electrical engineering, the university of danang—university of science and technology, da nang, vietnam, nhhieu@dut.udn.vn

abstract—unbalanced grid voltage dips normally lead to unbalanced, non-sinusoidal current injections, dc-link voltage oscillations, and active and/or reactive power oscillations at twice the grid fundamental frequency in three-phase grid-connected photovoltaic (pv) systems. double-grid-frequency oscillations at the dc-link of conventional two-stage pv inverters can further deteriorate the dc-link capacitor, which is one of the most important limiting components in the system.
proper control of these converters can efficiently address this problem. in such solutions, current reference calculation (crc) is one of the most important issues that must be addressed for the reliable operation of grid-connected converters under unbalanced grid faults. therefore, this paper proposes and simulates crc methods and presents the results, in order to improve the quality of the grid-connected pv system under unbalanced grid-voltage faults.

keywords–photovoltaic (pv) systems; unbalanced voltage; two-stage converters; power oscillation; dc-link voltage oscillation

i. introduction
with the fast increase of grid-connected pv generation [1, 2], pv systems should contribute to grid stability by providing ancillary services beyond basic power delivery. the new grid requirements demand that grid-connected pv systems, single- or three-phase, have the capability to operate at power factors other than unity. also, based on the recently revised grid codes, pv inverters are preferred to stay connected during grid voltage faults [3-6]. when a fault happens, the converter has to detect the incident and respond quickly to the disturbance in order to mitigate the adverse effects on the inverter, the equipment connected to the grid, and the upstream system. indeed, the revised grid codes require pv systems to inject a certain amount of reactive power in the case of a low-voltage fault [7, 8]. these issues are now gaining more consideration in pv systems, as the power capacity of individual pv systems is also increasing. detection of voltage sags, current limitation, current reference calculation [9], active and reactive power oscillation, and dc-link voltage oscillation are all important issues, and they are key to the proper operation of grid-connected pv converters under faults. among them, crc plays the most important role in satisfying the grid requirements, especially under unbalanced grid faults.
in currently researched methods, the d-q methodology is commonly implemented under grid voltage faults [3-6, 10], although it is quite complex, since it requires building blocks to convert the signals and track the frequency, such as the phase-locked loop (pll) or the dual second-order generalized integrator based frequency-locked loop (dsogi-fll). the aforementioned active power oscillation can have a negative impact on the reliable operation of grid-connected pv converters. in two-stage pv converters, where a dc-dc converter performs maximum power point tracking (mppt) [11], it is common that a pi controller determines the active power reference. thus, in case the injected active power starts fluctuating, the pi controller cannot follow its sinusoidal variations, because the pv power injected to the dc-link is constant. as a result, the dc-link voltage will fluctuate with the same frequency as the injected active power [12]. notably, due to the high failure rates of the electrolytic capacitors of two-stage pv converters, the system reliability is challenged, and this is worsened by dc-link voltage ripples. this paper makes two main contributions: (i) dc-link ripples during unbalanced faults are reduced by proper control of the dc-dc converter, and (ii) the impact of pv systems on the distribution grid is analyzed and a control strategy is proposed in which the reference currents are calculated in the stationary αβ reference frame, reducing the complexity of the control structure and improving efficiency compared to using rotating d-q reference frames. moreover, the power demanded from the main grid can be decreased by changing the power factor of wind generators [13-15]. this control strategy may offer a new solution to the voltage-sag issue in a power system with pv plants connected to the grid, by streaming their reactive power to the grid to enhance its power quality. ii.
system operation
this section analyzes the inverter operation under normal and abnormal conditions for a three-wire, three-phase pv system. the two-stage three-phase system shown in figure 1 includes a boost converter and a full-bridge inverter interconnected through the dc-link capacitor.

corresponding author: le hong lam

fig. 1. two-stage three-phase grid-connected pv system (pv array, boost converter, dc-link capacitor, full-bridge inverter with switches s1-s6, and lcl filter connected to grid phases a, b, c).

the formulation is performed in the stationary reference frame (srf). the conversion from the three-phase system into the srf is:

$\begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix}\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}$    (1)

where $v_\alpha$, $v_\beta$ are the srf voltages obtained from $v_a$, $v_b$, $v_c$. the apparent power s is written as:

$S = V \cdot I^* = P + jQ$    (2)

since under normal conditions the grid voltages and loads are balanced, there are no oscillatory components in the active and reactive power, and the injected current is completely sinusoidal. however, under unbalanced conditions, negative-sequence (ns) components appear in both the current and voltage vectors. thus, the apparent power is rewritten as:

$S = v_{\alpha\beta} \cdot i_{\alpha\beta}^* = (v_{\alpha\beta}^+ + v_{\alpha\beta}^-) \cdot (i_{\alpha\beta}^+ + i_{\alpha\beta}^-)^* = v_{\alpha\beta}^+ \cdot i_{\alpha\beta}^{+*} + v_{\alpha\beta}^+ \cdot i_{\alpha\beta}^{-*} + v_{\alpha\beta}^- \cdot i_{\alpha\beta}^{+*} + v_{\alpha\beta}^- \cdot i_{\alpha\beta}^{-*}$    (3)

in which $v_{\alpha\beta}^+$, $v_{\alpha\beta}^-$ are derived from:

$v_{\alpha\beta}^+ = \frac{1}{2}\begin{bmatrix} 1 & -q \\ q & 1 \end{bmatrix} v_{\alpha\beta}$    (4)

$v_{\alpha\beta}^- = \frac{1}{2}\begin{bmatrix} 1 & q \\ -q & 1 \end{bmatrix} v_{\alpha\beta}$    (5)

where $q = e^{-j\pi/2}$ is a 90°-lagging phase-shifting operator applied in the time domain [16]. similarly, $i_{\alpha\beta}^+$, $i_{\alpha\beta}^-$ are obtained following (4) and (5):

$i_{\alpha\beta}^+ = \frac{1}{2}\begin{bmatrix} 1 & -q \\ q & 1 \end{bmatrix} i_{\alpha\beta}$    (6)

$i_{\alpha\beta}^- = \frac{1}{2}\begin{bmatrix} 1 & q \\ -q & 1 \end{bmatrix} i_{\alpha\beta}$    (7)

the four terms of the apparent power in (3) expand as:

$v_{\alpha\beta}^+ \cdot i_{\alpha\beta}^{+*} = (v_\alpha^+ + jv_\beta^+)(i_\alpha^+ + ji_\beta^+)^* = v_\alpha^+ i_\alpha^+ + v_\beta^+ i_\beta^+ + j(v_\beta^+ i_\alpha^+ - v_\alpha^+ i_\beta^+) = P_1 + jQ_1$    (8)

$v_{\alpha\beta}^+ \cdot i_{\alpha\beta}^{-*} = (v_\alpha^+ + jv_\beta^+)(i_\alpha^- + ji_\beta^-)^* = v_\alpha^+ i_\alpha^- + v_\beta^+ i_\beta^- + j(v_\beta^+ i_\alpha^- - v_\alpha^+ i_\beta^-) = P_2 + jQ_2$    (9)

$v_{\alpha\beta}^- \cdot i_{\alpha\beta}^{+*} = (v_\alpha^- + jv_\beta^-)(i_\alpha^+ + ji_\beta^+)^* = v_\alpha^- i_\alpha^+ + v_\beta^- i_\beta^+ + j(v_\beta^- i_\alpha^+ - v_\alpha^- i_\beta^+) = P_3 + jQ_3$    (10)

$v_{\alpha\beta}^- \cdot i_{\alpha\beta}^{-*} = (v_\alpha^- + jv_\beta^-)(i_\alpha^- + ji_\beta^-)^* = v_\alpha^- i_\alpha^- + v_\beta^- i_\beta^- + j(v_\beta^- i_\alpha^- - v_\alpha^- i_\beta^-) = P_4 + jQ_4$    (11)

the constant and oscillating parts of the total active and reactive power are written as:

$P = P_0 + \tilde{P}$    (12)

$P_0 = P_1 + P_4 = v_\alpha^+ i_\alpha^+ + v_\beta^+ i_\beta^+ + v_\alpha^- i_\alpha^- + v_\beta^- i_\beta^-$    (13)

$\tilde{P} = P_2 + P_3 = v_\alpha^+ i_\alpha^- + v_\beta^+ i_\beta^- + v_\alpha^- i_\alpha^+ + v_\beta^- i_\beta^+$    (14)

$Q = Q_0 + \tilde{Q}$    (15)

$Q_0 = Q_1 + Q_4 = v_\beta^+ i_\alpha^+ - v_\alpha^+ i_\beta^+ + v_\beta^- i_\alpha^- - v_\alpha^- i_\beta^-$    (16)

$\tilde{Q} = Q_2 + Q_3 = v_\beta^+ i_\alpha^- - v_\alpha^+ i_\beta^- + v_\beta^- i_\alpha^+ - v_\alpha^- i_\beta^+$    (17)

where $P$ and $Q$ are the total active and reactive power, and $P_0$, $Q_0$, $\tilde{P}$, $\tilde{Q}$ are their constant and oscillating parts. under balanced voltage-sag faults, there is no ns component in the voltages and currents, so there are no oscillatory components in the active and reactive power. during unbalanced faults, however, ns components appear in the voltages and currents. from (12) and (15), it is concluded that the active and reactive power each have a constant part, $P_0$ and $Q_0$, and an oscillating part, $\tilde{P}$ and $\tilde{Q}$.

fundamentally, in pv power systems, all the active power generated by the pv panels is delivered to the dc-link; this active power is continuously processed by the inverter and injected into the grid. if the active power generated by the inverter is less than the power injected to the dc-link from the pv source, the dc-link voltage will increase. proper control is needed to synchronize the power flow from the pv source to the grid by regulating the dc-link voltage. accordingly, in the case that the injected active power has double-grid-frequency oscillations, the dc-link voltage will inevitably oscillate at the same frequency. double-grid-frequency oscillations of the dc-link voltage have a negative impact on the life cycle of the capacitive dc-link. iii.
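a numerical sketch of the clarke transform (1) and the power expression (2) follows; it uses a balanced positive-sequence operating point (my own example values), for which the oscillating terms of (14) and (17) vanish.

```python
import numpy as np

def clarke(va, vb, vc):
    """Amplitude-invariant Clarke transform (1): abc -> alpha-beta."""
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (vb - vc)
    return v_alpha, v_beta

# balanced three-phase voltage set with unit amplitude at angle theta
theta = 0.3
va = np.cos(theta)
vb = np.cos(theta - 2 * np.pi / 3)
vc = np.cos(theta + 2 * np.pi / 3)
v_alpha, v_beta = clarke(va, vb, vc)
# for a balanced set: v_alpha = cos(theta), v_beta = sin(theta)

phi = 0.2  # current lags voltage by phi
i_alpha, i_beta = np.cos(theta - phi), np.sin(theta - phi)
s = (v_alpha + 1j * v_beta) * np.conj(i_alpha + 1j * i_beta)  # per (2)
# P = Re(s) = cos(phi), Q = Im(s) = sin(phi): constant, no oscillation
```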
proposed control strategy
in this section, a new control strategy to overcome unbalanced grid voltage, based on current reference generation (crg), is presented, and a current limitation scheme is used to guard against the overcurrent issue.

a. current reference generation
in order to eliminate active power oscillations, (14) has to be zero. in (14) and (17), $P_0$ and $Q_0$ are set equal to the average active power reference $P_{ref}$ (the output of the dc-link regulator) and the reactive power reference $Q_{ref}$ (calculated during grid faults); these values are continuously calculated by the control algorithm. the control goal is to eliminate the oscillatory components from the active power, while allowing the reactive power to oscillate at double the grid frequency. hence, $\tilde{Q}$ is considered equal to $2(v_\beta^- i_\alpha^+ - v_\alpha^- i_\beta^+)$, and (17) is rewritten as:

$\tilde{Q} = v_\beta^+ i_\alpha^- - v_\alpha^+ i_\beta^- + v_\beta^- i_\alpha^+ - v_\alpha^- i_\beta^+ = 2(v_\beta^- i_\alpha^+ - v_\alpha^- i_\beta^+)$    (18)

$v_\beta^+ i_\alpha^- - v_\alpha^+ i_\beta^- - v_\beta^- i_\alpha^+ + v_\alpha^- i_\beta^+ = 0$    (19)

furthermore, (13), (14), (16) and (19) are written in matrix form:

$\begin{bmatrix} v_\alpha^+ & v_\beta^+ & v_\alpha^- & v_\beta^- \\ v_\alpha^- & v_\beta^- & v_\alpha^+ & v_\beta^+ \\ v_\beta^+ & -v_\alpha^+ & v_\beta^- & -v_\alpha^- \\ -v_\beta^- & v_\alpha^- & v_\beta^+ & -v_\alpha^+ \end{bmatrix} \begin{bmatrix} i_\alpha^+ \\ i_\beta^+ \\ i_\alpha^- \\ i_\beta^- \end{bmatrix} = \begin{bmatrix} P_{ref} \\ 0 \\ Q_{ref} \\ 0 \end{bmatrix}$    (20)

a formulation generating sinusoidal currents that deliver a given amount of active and reactive power [3, 17, 18] is then obtained by solving (20):

$i_\alpha^+ = \frac{v_\alpha^+}{V^{+2} - V^{-2}} P_{ref} - \frac{v_{\perp\alpha}^+}{V^{+2} + V^{-2}} Q_{ref}$    (21)

$i_\beta^+ = \frac{v_\beta^+}{V^{+2} - V^{-2}} P_{ref} - \frac{v_{\perp\beta}^+}{V^{+2} + V^{-2}} Q_{ref}$    (22)

$i_\alpha^- = -\frac{v_\alpha^-}{V^{+2} - V^{-2}} P_{ref} - \frac{v_{\perp\alpha}^-}{V^{+2} + V^{-2}} Q_{ref}$    (23)

$i_\beta^- = -\frac{v_\beta^-}{V^{+2} - V^{-2}} P_{ref} - \frac{v_{\perp\beta}^-}{V^{+2} + V^{-2}} Q_{ref}$    (24)

$\begin{bmatrix} v_{\perp\alpha} \\ v_{\perp\beta} \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix}$    (25)

$V^+ = \sqrt{v_\alpha^{+2} + v_\beta^{+2}}$    (26)

$V^- = \sqrt{v_\alpha^{-2} + v_\beta^{-2}}$    (27)

where $v_{\perp\alpha}$, $v_{\perp\beta}$ are the orthogonal voltages of the srf voltage vectors, and $V^+$, $V^-$ are the magnitudes of the positive- and negative-sequence voltages. the srf current references are thus driven from the average values of the active and reactive power; these references determine the peak-peak value of the oscillations in the reactive power.
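as a numerical check of (20), the four current references can be obtained by solving the linear system directly. this is a numpy sketch with arbitrary example sequence voltages, not values from the paper.

```python
import numpy as np

def current_references(vap, vbp, van, vbn, p_ref, q_ref):
    """Solve (20) for [i_alpha+, i_beta+, i_alpha-, i_beta-] so that the
    active power equals P_ref with no double-frequency oscillation and
    the average reactive power equals Q_ref."""
    m = np.array([
        [ vap,  vbp,  van,  vbn],   # P0 = P_ref, per (13)
        [ van,  vbn,  vap,  vbp],   # P~ = 0, per (14)
        [ vbp, -vap,  vbn, -van],   # Q0 = Q_ref, per (16)
        [-vbn,  van,  vbp, -vap],   # oscillation constraint (19)
    ])
    return np.linalg.solve(m, np.array([p_ref, 0.0, q_ref, 0.0]))

# example: 1 pu positive sequence plus a 0.2 pu negative sequence
i = current_references(1.0, 0.0, 0.2, 0.0, p_ref=1.0, q_ref=0.5)
p_osc = 0.2 * i[0] + 0.0 * i[1] + 1.0 * i[2] + 0.0 * i[3]  # P~ from (14)
```

the residual oscillating active power `p_osc` is zero by construction, which is exactly the control goal stated above.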
accordingly, a general formulation is obtained as:

$NNP = \frac{V^+ - V^-}{V_{base}} S$    (32)

where $S$ is the apparent power or the nominal power of the power converter, and $V_{base}$ is the base voltage, equal to the root-mean-square (rms) value of the line-line grid voltage. on the other hand, according to the voltage-sag depth, the reactive power is calculated as:

$Q_{ref} = \begin{cases} 0, & V_{pu} > 0.9 \\ S \times 1.5 \times (0.9 - V_{pu}), & 0.2 < V_{pu} < 0.9 \\ 1.05 \times S, & V_{pu} < 0.2 \end{cases}$    (33)

with $V_{pu}$ calculated as:

$V_{pu} = \frac{V^+}{V_{base}}$    (34)

given the nnp and a reactive power of $Q$, the maximum active power $P_{max}$ that the inverter is allowed to inject to the grid while avoiding overcurrent is:

$P_{max} = \sqrt{NNP^2 - Q^2}$    (35)

for operation of the converter under very deep voltage sags, nnp will have a low value, since $V^+ - V^-$ becomes small. therefore, under a deep voltage sag, the condition is:

if $Q > NNP$, then $Q = NNP$ and $P_{max} = 0$    (36)

fig. 2. flowchart of the proposed control algorithm: if $V_{pu} < 0.9$, a fault signal is raised, nnp, $P_{max}$ and $Q_{ref}$ are calculated, and $P_{max}$ is compared with $P^*$ to select mppt or non-mppt operation.

while the dc-dc converter normally operates with mppt-p&o, it should switch to the non-mppt mode in case a grid fault occurs and the inverter cannot inject the maximum pv power. the flowchart in figure 2 summarizes and clarifies the control system. if $V_{pu} < 0.9$, the voltage-sag detection block generates a fault signal that activates the nnp, $P_{max}$ and $Q_{ref}$ calculator block. then, $P_{max}$ is compared with $P^*$ and a comparator signal is generated. an AND block follows: if the comparator signal and the fault signal are both equal to 1, the dc-dc converter switches to non-mppt mode. in the opposite case, the fault signal is 1 while the comparator signal remains zero.
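the sag-dependent reactive power reference of (33) and the current limit of (35)-(36) can be sketched as follows (the per-unit numbers in the example are illustrative, not values from the paper):

```python
import math

def q_reference(v_pu, s):
    """Reactive power reference as a function of sag depth, per (33)."""
    if v_pu > 0.9:
        return 0.0
    if v_pu < 0.2:
        return 1.05 * s
    return s * 1.5 * (0.9 - v_pu)

def p_max(nnp, q):
    """Maximum active power avoiding overcurrent, per (35)-(36)."""
    if q > nnp:  # very deep sag: all remaining capacity goes to reactive power
        return 0.0
    return math.sqrt(nnp ** 2 - q ** 2)

s = 1.0                  # converter nominal power, pu
q = q_reference(0.5, s)  # 0.5 pu sag -> q = 1.5 * 0.4 = 0.6 pu
p = p_max(0.8, q)        # sqrt(0.64 - 0.36) ≈ 0.529 pu
```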
mppt may continue working under abnormal operation while the fault exists in the grid.

iv. control structure design
the control structure was implemented in matlab/simulink [19]. it should be noted that the proposed control strategy mainly concerns the voltage source inverter (vsi) and the current controller, since these modules play the key role in creating the reference currents that re-balance the voltage after faults.

a. voltage source inverter
in grid-connected mode, the vsi controller [20] directs the power flow from the pv system to meet the requirements of the grid and controls the voltage on the dc-link capacitor $C_{dc}$. the active power reference $P_{ref}$ required from the grid is given by the $V_{dc}$ voltage regulator, while the reactive power $Q_{ref}$ is calculated according to the current limit to stabilize the voltage on the ac bus. figure 3 shows the schematic diagram of the $V_{dc}$ voltage control loop; the controlled quantity is $V_{dc}^2$, because it is proportional to the energy stored in the capacitor. the dc voltage on the capacitor is kept stable through the energy balance (ignoring losses in the vsi):

$P_{pv} = P_{dc} + P_g$    (37)

where $P_{pv}$ is the power from the pv, $P_{dc}$ is the power taken by the capacitor, and $P_g$ is the power injected to the grid.

fig. 3. control diagram of the $V_{dc}$ voltage loop ($V_{dc}$ and $V_{dc,ref}$ are squared, compared, and passed through the regulator $K_v(s)$, with the pv power $P_{pv}$ as feedforward, to produce $P_{ref}$).

when the voltage on the dc capacitor is stable, $P_{dc}$ is considered zero:

$P_{pv} = P_g$    (38)

equation (38) indicates that $P_g$ can be controlled to achieve the desired value. then, the reference currents $i_{\alpha,ref}$ and $i_{\beta,ref}$ are calculated from the reference power values through (39) and (40) and are passed to the current controller:

$i_{\alpha,ref} =$