Journal of Engineering Research and Technology, Volume 4, Issue 1, March 2017

Design of a Tri-Band Double Wire Square Loop Frequency Selective Surface for Mobile Signal Shielding

Mohammed T. Alhaddad 1, Talal F. Skaik 2
1 Islamic University of Gaza, P.O. Box 108, Palestine, e-mail: mohhaddad1984@gmail.com
2 Islamic University of Gaza, P.O. Box 108, Palestine, e-mail: talalskaik@gmail.com

Abstract— This paper presents the design of a novel frequency selective surface (FSS) structure. The proposed FSS is constructed of square loops of copper wire interconnected by iron wires, forming a wire net. It is designed to shield the mobile signals of different networks (GSM 900, GSM 1800 and 3G) and hence operates as an electromagnetic bandstop filter. A single cell consists of double square copper loops, with the outer loop tuned to GSM 900 and the inner loop tuned to the GSM 1800 and 3G frequency bands. The structure can be easily manufactured and installed in outdoor areas. The simulated transmission coefficients show a stable frequency response for both TE and TM polarizations for angles of incidence from 0° to 30°.

Index Terms— Frequency selective surface (FSS), GSM signal shielding, wire square loop.

I. Introduction

There has been extensive research on frequency selective surfaces (FSS) in the last few decades. FSS structures are two-dimensional arrays formed of metallic patch elements, or of their complement containing apertures; they act as band-stop filters in the case of patch elements and as band-pass filters in the case of aperture elements. The shapes most commonly used in FSS design are the straight dipole, circular loop, cross dipole, three-legged dipole, square loop and Jerusalem cross, as shown in Figure 1 [1]. The frequencies of transmitted or reflected signals depend strongly on the resonant frequencies of the conductive element shapes in the FSS structure.
The applications of FSSs are various, and several designs have been reported in the literature for different purposes, including microwave ovens, antenna radomes and electromagnetic signal shielding. In [2], a transparent FSS is proposed for a microwave oven front door to prevent leakage of high-power electromagnetic waves. Large FSS structures with curved geometries are used in antenna radomes [3-4]. Some FSS designs are proposed to improve radio frequency (RF) transmission through energy saving glass in green buildings, to overcome the attenuation of electromagnetic signals by the glass [5-6]. In some places, such as hospitals, airports and places of worship such as mosques, it is desired to keep the environment free from mobile signals. For this purpose, frequency selective surfaces can be used to shield the building from electromagnetic radiation from GSM sources. The FSS structures can be mounted on walls or designed on glass to achieve spatial filtering of the desired frequency. Several FSS structures have been reported in the literature for GSM shielding [7-11]. In [7], the authors proposed an FSS structure on an FR-4 substrate that shields the 900, 1800 and 2100 MHz bands with a minimum attenuation of 20 dB. In [8], the authors presented an FSS with double square loop elements etched on FR-4 to block the GSM 900 and 1800 bands. Similarly, dual-band GSM 900 and GSM 1800 FSS structures are proposed in [9-10], but realized as wallpaper on walls for signal shielding. In [11], an FSS structure on an FR-4 substrate with circular apertures is proposed for shielding of GSM 1800 downlink signals. Furthermore, reconfigurable FSS designs have been presented for shielding, where PIN diodes are added to the structures to control the FSS filtering behavior [12-13]. Other broadband shielding FSS structures have been proposed to filter out a wide frequency range (6.5-14 GHz) [14].

Figure 1: Unit cells of common FSS elements

The previously reported structures are for indoor use, and the majority are designed with microstrip patches on dielectric substrates, which would be impractical for shielding an entire building. In this paper, we propose a novel FSS structure made of copper wire cells attached together with iron wire to form the periodic structure; it does not need a dielectric substrate, unlike the conventional structures. Each cell is formed of double square loops, and the whole periodic FSS structure is supported by iron wires that connect the loops together. To the best of the authors' knowledge, it is the first proposed FSS made of wires that can be installed outdoors to shield mobile signals at different frequency bands: GSM 900, GSM 1800 and UMTS (3G). The proposed FSS structure and the simulation results are presented in the next sections.

II. Structure of the FSS

The frequency selective surface is designed as a wire net that can be built in open areas. The signals of interest to be blocked are the downlink frequency bands of several cellular networks: GSM 900 downlink (925-960 MHz), GSM 1800 downlink (1805-1880 MHz) and UMTS downlink (2110-2170 MHz). The novel single-cell structure is shown in Figure 2. It is formed of double square copper loops interconnected by iron wires. The copper wires are cylindrical with radius r2 and have softened square corners. The iron is used to connect the copper square cells with each other, thus supporting the whole periodic structure. The iron elements are cylindrical with radius r1. The surrounding environment of the FSS is air with a dielectric constant of ε = 1.00059. The main features of this design are ease of fabrication and convenience of installation and use in open areas.
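Before running a full-wave solver, the resonance of a loop-type FSS element can be estimated with the textbook rule of thumb that a loop resonates where its circumference is roughly one free-space wavelength. The sketch below applies this to the outer-loop side length given later in Table 1 (a = 75.21 mm); treating a as one quarter of the loop perimeter, and ignoring the wire radius and the rounded corners, are our simplifying assumptions, not statements from the paper.

```python
# First-cut estimate of a square-loop FSS resonance: a loop element
# resonates roughly where its circumference equals one wavelength.
# Wire radius and corner rounding are ignored (our assumption).

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def loop_resonance_hz(side_m: float) -> float:
    """Estimate square-loop resonance: f ~ c / (4 * side)."""
    perimeter = 4.0 * side_m
    return C0 / perimeter

f_outer = loop_resonance_hz(75.21e-3)  # outer-loop side a from Table 1
print(f"outer loop estimate: {f_outer / 1e6:.0f} MHz")  # ~997 MHz
```

The estimate lands near 997 MHz, in the same ballpark as the 972.4 MHz resonance obtained from the CST simulation below; the residual difference is what the full-wave optimization of the wire radii and spacings absorbs.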
The dimensions of the FSS cell structure are given in Table 1.

Figure 2: Single cell structure of the FSS

Table 1: Dimensions of the design
  Parameter:   a      b     c      d      e    g     r1  r2
  Value (mm):  75.21  54.6  26.31  54.17  8.7  5.19  1   2.5

The whole FSS structure is shown in Figure 3. It is constructed of the double square loops joined together with iron wires to form the periodic structure shown. The bandstop characteristics are achieved by optimizing the structure using the CST simulation software [15]. The periodicity of the unit cell is 92.7 × 92.7 mm. The outer square loop is tuned to 900 MHz, while the inner square loop is tuned to the 1800 MHz and 3G bands. The diameter of the wire used to form the elements is 5 mm.

Figure 3: Periodic square cells

III. Simulation Results

The transmission coefficients are obtained over the frequency range from 500 MHz to 2600 MHz for both TE and TM polarizations using the CST simulation software. In Figure 4, the transmission coefficients are presented for TE polarization at 0° and 30° angles of incidence. The resonant frequencies at 0° are 972.4 MHz and 1994.8 MHz, while at 30° they are 965.2 MHz and 1973.2 MHz, respectively. The corresponding transmission coefficients are -55.4 dB and -58.1 dB at 0°, and -54 dB and -59.7 dB at 30°. The shift in resonant frequency from 0° to 30° is about -7.2 MHz in the 900 MHz band and -21.6 MHz in the 1800 MHz/3G band. This shows that the proposed FSS has a stable frequency response as the angle of incidence varies from 0° to 30°.

Figure 4: Simulation results of the tri-bandstop FSS for TE polarization

In Figure 5, the transmission coefficients are presented for TM polarization at 0° and 30° angles of incidence. The resonant frequencies at 0° are 972.4 MHz and 1994.8 MHz, while at 30° they are 990.4 MHz and 1926.4 MHz.
The corresponding transmission coefficients are -52.5 dB and -58.7 dB at 0°, and -52.4 dB and -57.5 dB at 30°, respectively. The shift in resonant frequency from 0° to 30° is about 18 MHz in the 900 MHz band and -68.4 MHz in the 1800 MHz/3G band. This shows fair frequency stability for the TM polarization as the angle of incidence is varied from 0° to 30°. Therefore, the FSS response is sufficiently stable for both TE and TM polarizations as the angle of incidence varies from 0° to 30°.

Figure 5: Simulation results of the tri-bandstop FSS for TM polarization

Figure 6 shows that the resonant frequencies are stable across the TE and TM polarizations when the angle of incidence is 0°. However, when the angle increases to 30°, the resonant frequencies of the two polarizations differ by 25.2 MHz in the 900 MHz band and by 46.8 MHz in the 1800 MHz/3G band, as depicted in Figure 7. Tables 2 and 3 summarize the simulation results for the TE and TM polarizations, respectively, and give the -10 dB transmission bandwidth in each frequency band.

Figure 6: TE and TM polarization at theta = 0°
Figure 7: TE and TM polarization at theta = 30°

Table 2 summarizes the simulation results for TE polarization and presents the -10 dB bandwidth of each frequency band; similarly, Table 3 presents the results for TM polarization. The achieved -10 dB bandwidths satisfy the system requirements, so blocking the signals of interest can be achieved.
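The claim that the -10 dB bandwidths cover the targeted downlink bands can be checked arithmetically from the Table 2 figures. The sketch below uses the TE normal-incidence values; the only assumption of ours is that each stop band is symmetric about its resonant frequency, i.e. it spans fr ± BW/2, which the paper does not state explicitly.

```python
# Check that the simulated -10 dB stop bands (TE, normal incidence,
# Table 2) cover the targeted downlink bands from Section II.
# Assumption (ours): each stop band is symmetric about its resonance.

TARGET_BANDS_MHZ = {
    "GSM 900":  (925.0, 960.0),
    "GSM 1800": (1805.0, 1880.0),
    "UMTS 3G":  (2110.0, 2170.0),
}

def stop_band(fr_mhz: float, bw_mhz: float):
    """Band edges assuming the stop band is centred on fr."""
    return (fr_mhz - bw_mhz / 2.0, fr_mhz + bw_mhz / 2.0)

def covers(stop, target):
    return stop[0] <= target[0] and target[1] <= stop[1]

band1 = stop_band(972.4, 225.3)    # (859.75, 1085.05) MHz
band2 = stop_band(1994.8, 639.8)   # (1674.9, 2314.7) MHz

assert covers(band1, TARGET_BANDS_MHZ["GSM 900"])
assert covers(band2, TARGET_BANDS_MHZ["GSM 1800"])
assert covers(band2, TARGET_BANDS_MHZ["UMTS 3G"])
print("all three downlink bands fall inside the -10 dB stop bands")
```

Under this symmetric-band assumption, the single wide stop band around 1994.8 MHz is what lets one inner-loop resonance cover both the GSM 1800 and UMTS downlink bands.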
Table 2: -10 dB transmission bandwidths in the 900 MHz and 1800 MHz/3G bands for TE polarization
  900 MHz band:
    TE, 0°:  resonant frequency fr1 = 972.4 MHz, bandwidth BW = 225.3 MHz
    TE, 30°: resonant frequency fr1 = 965.2 MHz, bandwidth BW = 226.4 MHz
  1800 MHz/3G band:
    TE, 0°:  resonant frequency fr2 = 1994.8 MHz, bandwidth BW = 639.8 MHz
    TE, 30°: resonant frequency fr2 = 1973.2 MHz, bandwidth BW = 503 MHz

Table 3: -10 dB transmission bandwidths in the 900 MHz and 1800 MHz/3G bands for TM polarization
  900 MHz band:
    TM, 0°:  resonant frequency fr1 = 972.4 MHz, bandwidth BW = 221 MHz
    TM, 30°: resonant frequency fr1 = 990.4 MHz, bandwidth BW = 186 MHz
  1800 MHz/3G band:
    TM, 0°:  resonant frequency fr2 = 1994.8 MHz, bandwidth BW = 626 MHz
    TM, 30°: resonant frequency fr2 = 1926.4 MHz, bandwidth BW = 478 MHz

IV. Conclusion

A frequency selective surface has been presented in this paper to shield mobile signals (GSM 900, GSM 1800 and UMTS 3G) in outdoor areas. The FSS structure is formed of double square wire loops connected together using iron wires. The simulation results for both TE and TM polarizations showed a stable frequency response as the angle of incidence is changed from 0° to 30°.

References
[1] D. Hu, "3D frequency selective surfaces," MPhil thesis, The University of Sheffield, United Kingdom, September 2012.
[2] J. Murugan, T. Kumar, P. Salil, and C. Venkatesh, "Dual frequency selective transparent front doors for microwave oven with different opening areas," Progress in Electromagnetics Research Letters, vol. 52, pp. 11-16, 2015.
[3] L. Baoqin, D. Sishen, Z. Huanmei, and Y. Xiangyan, "Design and simulation of frequency-selective radome together with a monopole antenna," ACES Journal, vol. 25, no. 7, July 2010.
[4] H. Chen, X. Hou and L. Deng, "Design of frequency-selective surfaces radome for a planar slotted waveguide antenna," IEEE Antennas and Wireless Propagation Letters, vol. 8, pp. 1231-1233, 2009.
[5] Fang Ma and Long Li, "Design of a tri-bandpass FSS on dual-layer energy saving glass for improving RF transmission in green buildings," Proceedings of the IEEE International Conference on Communication Problem-Solving (ICCP), Guilin, 2015, pp. 405-407.
[6] G. I. Kiani, L. G. Olsson, A. Karlsson, K. P. Esselle and M. Nilsson, "Cross-dipole bandpass frequency selective surface for energy-saving glass used in buildings," IEEE Transactions on Antennas and Propagation, vol. 59, no. 2, pp. 520-525, Feb. 2011.
[7] B. Döken and M. Kartal, "Triple band frequency selective surface design for global system for mobile communication systems," IET Microwaves, Antennas & Propagation, vol. 10, no. 11, pp. 1154-1158, 2016.
[8] N. Abdul Khalid and F. Che Seman, "Double square loop frequency selective surface (FSS) for GSM shielding," Australian Journal of Basic and Applied Sciences, vol. 8, no. 21, pp. 25-29, 2014.
[9] W. Kiermeier and E. Biebl, "New dual-band frequency selective surfaces for GSM frequency shielding," Proceedings of the European Microwave Conference, Munich, 2007, pp. 222-225.
[10] R. Sivasamy, L. Murugasamy, M. Kanagasabai, E. F. Sundarsingh and M. Gulam Nabi Alsath, "A low-profile paper substrate-based dual-band FSS for GSM shielding," IEEE Transactions on Electromagnetic Compatibility, vol. 58, no. 2, pp. 611-614, April 2016.
[11] R. Sivasamy, M. Kanagasabai, S. Baisakhiya, R. Natarajan, J. Pakkathillam, and S. Palaniswamy, "A novel shield for GSM 1800 MHz band using frequency selective surface," Progress in Electromagnetics Research Letters, vol. 38, pp. 193-199, 2013.
[12] M. Bouslama, M. Traii, T. Denidni and A. Gharsallah, "Reconfigurable frequency selective surface for beam-switching applications," IET Microwaves, Antennas & Propagation, 2016, in press.
[13] K. Elmahgoub, F. Yang, and A. Elsherbeni, "Design of novel reconfigurable frequency selective surfaces with two control techniques," Progress in Electromagnetics Research C, vol. 35, pp. 135-145, 2013.
[14] I. Sohail, Y. Ranga, L. Matekovits, K. P. Esselle and S. G. Hayt, "A low-profile single-layer UWB polarization stable FSS for electromagnetic shielding applications," International Workshop on Antenna Technology (iWAT), Sydney, 2014, pp. 220-223.
[15] CST, "CST Computer Simulation Technology," 2016. [Online]. Available: https://www.cst.com/. [Accessed: 04-Nov-2016].

Mohammed T. Alhaddad received the B.Sc. degree in 2007 and the M.Sc. degree in communications engineering in 2016, both from the Islamic University of Gaza. He has worked as a designer of many UHF, VHF and WiFi networks. His research interests include the design of microwave filters, antennas and optical networks.

Talal F. Skaik received the B.Sc. degree in 2004 from the Islamic University of Gaza, where he worked as a teaching assistant until 2006. He was awarded the Hani Qaddumi scholarship and received the M.Sc. degree in communications engineering with distinction in 2007 from the University of Birmingham, UK. He was awarded the ORSAS scholarship for doctoral study at the University of Birmingham, UK, and received the PhD degree in microwave engineering in 2011. Throughout his PhD study he worked as a teaching assistant, and also as a research associate on micromachined microwave circuits. He was the head of the Electrical Engineering Department at the Islamic University of Gaza from September 2014 until August 2016. He is currently an assistant professor at the Islamic University of Gaza. His research interests include the design of microwave filters, diplexers, multiplexers, energy harvesting systems, reconfigurable antennas and microwave passive components.

Journal of Engineering Research and Technology, Volume 3, Issue 3, September 2016

A Pilot Study on Smart Search for Optimal Parking Space Allocation

Zeina Z. Shakhshir 1, Ahmad I. Abu-Eisheh 2, Sameer A. Abu-Eisheh 3
1 An-Najah National University, Nablus, Palestine, shakhshirzeina@gmail.com
2 An-Najah National University, Nablus, Palestine, eng_ahmad.issa@hotmail.com
3 An-Najah National University, Nablus, Palestine, sameeraa@najah.edu

Abstract— Smart cities are concerned with the implementation of new technologies and systems that link the virtual and physical environments through appropriate applications. Transportation systems are among the fields considered in this research. A pilot study of the parking system at the An-Najah National University new campus has been conducted as part of the effort to transform the university into a smart one by 2020. Smart parking helps in managing the parking supply and in understanding the interaction between parking supply and demand. In addition, it helps in finding the optimal allocation of parking spaces by determining the best route, thereby reducing delay and achieving better time management. The goal of smart parking is an efficient parking system that delivers real improvements in user service quality and provides real-time flexibility. Proper management and control of parking lots also help by utilizing the available parking spaces (supply) most efficiently. Moreover, parking availability information contributes to a better allocation of spaces to users and to a good distribution over parking facilities. One aspect of smart parking is considered here using advanced technology: an implementation of geographic information systems for transportation (GIS-T) has been used, with associated GIS applications, taking travel time or distance travelled as the impedance to find the system-optimal assignment. The results show the system-optimal allocation of parking stalls for a sample of instructors' offices in the Faculty of Engineering at An-Najah National University.
Index Terms— GIS, GIS-Transportation, smart cities, parking systems, Palestine

I. Introduction

An-Najah National University, located in the city of Nablus, is the largest university in the West Bank. It has about 20,000 students distributed over 13 faculties [1]. Owing to the university's need to expand to meet increasing student enrollment, development and construction of a new campus, extending over 130 dunums of land in the Beit Wazan/Al-Jneid area, has been in progress for the past 15 years. There is considerable movement to, from and within the new campus. The focus of this research is on parking-related problems at the new campus and on developing a user-friendly intelligent solution, which could serve as a model for solving aspects of such parking problems. Parking is an essential component of the transportation system. Its convenience affects the time needed to reach destinations and the ease of reaching them, and therefore affects overall accessibility. Currently, An-Najah National University has its own on-street parking lanes as well as off-street parking lots. However, these parking facilities suffer from a gap between the demand for and supply of parking spaces, and from the improper allocation of parking stalls and lots to faculty and administrative staff. Limited research has been published on the optimal allocation of parking spaces within the smart parking management literature. A new trend in the general area of smart parking management, as part of smart cities, has concentrated on the application of secured wireless networks and sensor communication for parking reservation (examples are presented in [2] and [3]). Various approaches have been considered in the modeling of smart parking systems, such as multi-agent based models [4].
II. Objectives

The research aims to utilize the available parking spaces (supply) more efficiently through proper management and control of the parking supply in general, and to employ advanced technologies to better allocate the parking stalls by identifying the minimal total travelled distance for faculty and staff, which contributes to the optimal allocation of spaces to users and to a good distribution over the parking facilities.

III. Methodology

In this research, a number of steps were followed in the study, analysis, and modeling of parking conditions at the new campus of An-Najah National University. These led to the design of a process for identifying the optimal allocation of the parking spaces for given levels of parking demand. The first step was gaining insight into the study area and collecting basic parking supply and demand data, as well as any relevant data that would help in understanding the current parking situation at the new campus. This step involved conducting parking field studies related to parking supply and demand: the available parking supply was identified, and parking use studies were carried out on two days to determine all the relevant parking use parameters [5]. Then, a survey of a sample of the customers was conducted to obtain an overview of customer behavior and to verify the determined parking parameters. Academic and administrative staff offices were identified and marked on the relevant maps. Analysis was then conducted to assess the sufficiency of the supply to satisfy the current level of demand. Next, in order to create the model that optimally allocates parking spaces to the demand, the AutoCAD program was used to build the computerized model.
This model was transferred to a geographic information system (GIS), where a network was created to represent all available parking spaces and offices and the routes connecting them. Finally, a program developed in Java is used for the optimal allocation of the parking spaces to the customers. The developed method differs from methods that use mathematical programming to achieve the optimal allocation of parking spaces, such as that adopted by Geng and Cassandras [6], in that it uses GIS in the optimal allocation of spaces. Other research has considered combining GIS with GPS and 3G for parking space search and for parking guidance and information systems, but again without the optimal allocation of parking spaces [7]. It should be noted that the research focuses on the Faculty of Engineering block within the university's new campus as the case study for implementing the smart solution for allocating parking spaces to customers. The implementation of the developed approach will eventually result in proper management of the parking supply and will thus assist in transforming the parking system into a smart parking system. The methodology highlights the use of GIS applications as the means of applying advanced technologies to arrive at better solutions that ensure optimal parking allocation.

IV. Modeling, Analysis, and Design

In order to use advanced technologies for allocating the parking stalls to users, modeling was carried out to give reasonable results and a good distribution over the parking facilities. In addition, programming software is an appropriate technique for organizing the results. Modeling using software was not a straightforward task.
First, the Faculty of Engineering was modeled using two-dimensional and three-dimensional models to properly represent the offices and paths, in order to find the best route, with distance as the impedance, from each office in the Faculty of Engineering building to the best parking stall for each customer. The optimized allocation of the parking stalls corresponds to the optimal equilibrium condition, based on the concept of system equilibrium. System equilibrium implies that the overall system achieves the minimal cost, in this case the minimum total distance, which does not necessarily imply the minimum cost for each individual customer. Smart parking is applied to manage the current parking supply in the Faculty of Engineering block, without considering any expansion. The three-dimensional network of the Faculty of Engineering need not be limited to one building; it can be expanded to include the surrounding parking supply, including the parking garages, the surface lots, and the on-street parking spaces. The network dataset that was created has source features whose geometry includes z-coordinate values. The network was created in ArcScene, a 3D viewing application for GIS data within ArcGIS, applying the three-dimensional Network Analyst extension. Spatial analyses were performed on the results of this application using the relevant features of ArcGIS [8]. Figure 1 shows the Faculty of Engineering building and the parking supply network. The geoprocessing model builder shown in Figure 2 finds least-cost (least-distance) routes, which are the best routes between any two locations.
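The least-cost route computed by the Network Analyst solver is, at heart, a shortest-path search over a weighted network. A minimal stand-alone sketch of the same idea, using Dijkstra's algorithm on a toy graph; the node names and edge lengths (in metres) are invented for illustration, not taken from the campus network, although the total is chosen to match the 105.9 m reported for office 33 in Table 1.

```python
# Dijkstra's algorithm: least-cost (walking-distance) route between two
# network locations, the core of a least-cost route solver.
import heapq

def shortest_path(graph, source, target):
    """Return (total_cost, path) for the least-cost route, or (inf, [])."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:          # reconstruct the route
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Hypothetical fragment: office -> corridor -> building exit -> stall
graph = {
    "office_33": {"corridor": 12.0},
    "corridor":  {"exit_A": 30.0, "exit_B": 45.0},
    "exit_A":    {"stall_25": 63.9},
    "exit_B":    {"stall_25": 55.0},
}
cost, route = shortest_path(graph, "office_33", "stall_25")
print(round(cost, 1), route)  # 105.9 ['office_33', 'corridor', 'exit_A', 'stall_25']
```

The real network adds z-coordinates and many more nodes, but the solver's job is unchanged: pick the exit and path that minimize the summed edge lengths.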
As a result of this process, a route between location 1, an office inside the building, and location 2, one of the parking spaces, is shown as an example in Figure 3.

Figure 2: Model builder for finding the best route between two locations
Figure 3: Best route between two locations in the 3D model network

In the search for the optimal allocation of parking stalls, the 3D modeling is facilitated by a 2D transformation of the Faculty of Engineering block and the associated parking facilities. The resulting two-dimensional network is then used to find the optimal cost for each origin-destination (OD) pair of office and parking stall. Figure 4 shows the resulting two-dimensional network for part of the Faculty of Engineering block. It should be noted that this 2D transformation maps the vertical distances to equivalent horizontal spatial dimensions. The Network Analyst extension of ArcGIS, usually utilized to dynamically model realistic network conditions and solve vehicle routing problems, is used here to find the best walking route (path) between any two locations in the network, by selecting the locations within the network and solving for the best route. The OD cost matrix (origin-destination cost matrix) gives the least-cost paths (distances) within the network from multiple origins to multiple destinations [9]. The locations of the offices are loaded and defined as origin points, while the parking stalls are defined as destination points.

Figure 4: Partial presentation of the 2D network for the Faculty of Engineering block
Figure 1: Representation of the Faculty of Engineering building and the parking supply network

V. Results and Discussion

After many trials, the outcome shows that the two-dimensional model gives accurate results for more offices than the three-dimensional model does. The results support implementing both the 2D and the 3D Network Analyst concepts to arrive at the optimal OD cost matrix. A Java program was developed to achieve system equilibrium, where the optimal allocation of parking spaces to customers ensures that the overall cost (distance) is minimal. The results give the optimal allocation of offices to parking spaces by defining the optimal OD routes or paths. Table 1 shows an example of the optimal allocation for some offices. It should be noted that the computational time needed to find the optimal allocation for the study case is modest: about 150 seconds for 41 offices and 98 parking spaces.

Table 1: Partial optimal parking allocation
  Office no.:         32    33     34     46    87     90
  Parking space no.:  27    25     28     21    79     91
  Distance (m):       95.8  105.9  106.2  77.2  104.8  107.3

Figure 5 shows the optimal parking space for office 33 as an example. Similarly, the optimal parking space for each of the offices is specified.

VI. Conclusions and Recommendations

The Faculty of Engineering parking is converted into smart parking through the proper distribution and management of the parking facilities using GIS applications. The network was created in ArcScene, a 3D viewing application within ArcGIS, applying the three-dimensional Network Analyst extension. The origin-destination cost matrix results from the Network Analyst, and the Java program manages these results in a smart way by assigning to each office the optimum space in the available Faculty of Engineering parking supply.
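The system-optimum step described above is an instance of the classic assignment problem: given the OD cost matrix, pick one distinct stall per office so that the summed walking distance is minimal. The paper's Java implementation is not shown; the sketch below is our own brute-force illustration on an invented 3-office by 4-stall matrix. A production solver would use the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment), which handles the 41 x 98 case easily.

```python
# System-optimum stall assignment: minimize the TOTAL walking distance
# over all offices, with each stall used at most once. Brute force over
# permutations is fine for this tiny invented example only.
from itertools import permutations

def system_optimal_assignment(cost):
    """cost[i][j] = distance office i -> stall j (offices <= stalls).
    Returns (minimum total distance, tuple of stall index per office)."""
    n_offices, n_stalls = len(cost), len(cost[0])
    best_total, best_assign = float("inf"), None
    for perm in permutations(range(n_stalls), n_offices):
        total = sum(cost[i][perm[i]] for i in range(n_offices))
        if total < best_total:
            best_total, best_assign = total, perm
    return best_total, best_assign

# Invented 3-office x 4-stall walking-distance matrix (metres)
od = [
    [95.8, 120.0, 140.0, 110.0],
    [130.0, 105.9, 150.0, 125.0],
    [100.0, 115.0, 106.2, 90.0],
]
total, assign = system_optimal_assignment(od)
print(round(total, 1), assign)  # 291.7 (0, 1, 3)
```

Note that the system optimum minimizes the sum, not each individual's walk, which is exactly the distinction from user equilibrium drawn in the recommendations.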
This assisted in achieving the system-equilibrium route (path) assignment, which gave the overall minimum sum of distances walked by the academic and administrative staff between their offices and their allocated parking spaces. It is recommended that the model be further developed to consider the parking duration of each customer (office occupant) in order to minimize the number of parking spaces needed. As a result, equilibrium between demand and supply would be achieved over time, and the problem could therefore be solved over the whole length of the work day. Moreover, extending the model towards a user-equilibrium assignment is recommended, in order to achieve the minimum cost (distance) for each individual user rather than for the system as a whole.

Figure 5: The optimal parking space for office 33

References
[1] An-Najah National University, https://www.najah.edu/ar/about/annu-facts/, accessed on 1/6/2016.
[2] T. Giuffrè, S. M. Siniscalchi and G. Tesoriere, "A novel architecture of parking management for smart cities," SIIV 5th International Congress on Sustainability of Road Infrastructures, October 2012.
[3] S. H. Hamdoon and E. Soleit, "An automatic management of car parking using smart LAN and relational database technologies," International Journal of Computer Applications, vol. 69, no. 9, pp. 40-47, May 2013.
[4] M. Bilal, C. Persson, F. Ramparany, G. Picard and O. Boissier, "Multi-agent based governance model for machine-to-machine networks in a smart parking management system," 2012 IEEE International Conference on Communications (ICC), pp. 6468-6472, June 2012, doi: 10.1109/ICC.2012.6364789.
[5] N. Garber and L. Hoel, Traffic and Highway Engineering, 4th edition (SI edition), CL Engineering, Boston, USA, 2010.
[6] Y. Geng and C. G. Cassandras, "A new 'smart parking' system based on optimal resource allocation and reservations," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 979-984, 2013, doi: 10.1109/TITS.2013.2252428.
[7] Y. Shiue, J. Lin and S. Chen, "A study of geographic information system combining with GPS and 3G for parking guidance and information system," International Scholarly and Scientific Research & Innovation, vol. 4, no. 4, pp. 337-343, 2010.
[8] ESRI, http://www.esri.com/products/arcgis-capabilities/spatial-analysis, accessed on 1/6/2016.
[9] ESRI, http://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/od-cost-matrix.htm, accessed on 1/6/2016.

Zeina Z. Shakhshir earned her bachelor's degree in civil engineering from An-Najah National University in 2016. She has been a member of the American Society of Civil Engineers since 2012.

Ahmad I. Abu-Eisheh earned his bachelor's degree in civil engineering from An-Najah National University in 2016.

Prof. Sameer Abu-Eisheh is a professor of civil engineering at An-Najah National University, Palestine. He received his Ph.D. and M.Sc. from the Pennsylvania State University, USA, in 1987 and 1984, respectively.
He served as the President's Assistant for Planning and Development, Dean of the Faculty of Engineering, and Head of the Department of Civil Engineering at An-Najah. He was a visiting professor at Lille University, France; the University of Washington and Texas A&M University, USA; and the Technical University of Braunschweig, Germany. In addition, he served as the Palestinian Minister of Planning, Acting Minister of Finance, Acting Minister of Education and Higher Education, and Acting Deputy Prime Minister. He has authored more than 100 journal or conference papers and participated in authoring two books and five manuals.

Journal of Engineering Research and Technology, Volume 7, Issue 2, October 2020

Investigation of Power Quality Indices in the Jordan University of Science and Technology Grid-Tie Photovoltaic Plant

Mohammed S. Ibbini and Abdullah H. Adawi
DOI: https://doi.org/10.33976/jert.7.2/2020/2

Abstract— In this paper, the effects of a grid-tie photovoltaic (PV) plant on the power factor and the voltage harmonic distortion are analyzed from the power quality perspective of the distribution network. The conditions for the total harmonic distortion (THD) in the power grid related to a photovoltaic power station connected on the user side are also summarized. Using MATLAB/Simscape software, one string of the photovoltaic system at the Jordan University of Science and Technology (JUST) was simulated, and its results were compared with the measured results of the system. Measurement and simulation results illustrate that the voltage harmonic distortions injected into the power grid do not exceed the recommended levels, but the photovoltaic system needs a capacitor bank to reach a unity power factor.

Index Terms— Power quality, power factor, THD, photovoltaic, PV module, MATLAB/Simscape.
I Introduction

Photovoltaics is considered a promising technology and has witnessed rapid growth in recent years, especially after the rising cost of fossil fuels. In the case of Jordan, shrinking resources and a pressing energy crisis severely constrain economic prospects and industrial growth, so increasing the use of renewable energy is all the more urgent. Despite the great benefits of connecting a PV system to the power grid, new challenges for grid design and protection can emerge. Power flow direction and power quality (harmonics and voltage fluctuation) at the user side are affected once the PV system is connected to the distribution network [1]. The energy extracted from a PV panel depends on various factors such as temperature, irradiance, and climate conditions. In particular, solar irradiation strongly affects the performance of the power system: continuous changes in the amount of incident radiation lead to voltage flickering and fluctuation. The larger the capacity of the PV power system connected to the electrical grid, the more severe the dynamic power quality problems resulting from photovoltaic generation, such as frequency variation, voltage sag, etc. [2, 3]. Power systems designed to operate at the fundamental frequency, which is 50 Hz in Jordan, are prone to unsatisfactory operation and, at times, failure when subjected to voltages and currents that contain substantial harmonic components. Harmonics can cause many problems, such as over-voltage, over-current, resonance, transmission line losses, and protection faults. Ensuring the voltage level and the power quality at the point of common coupling of the photovoltaic system is very important for achieving the best possible power system performance [4, 5]. In the Jordan University of Science and Technology (JUST) PV plant, SMA inverters are used because of their capability to provide reactive power and voltage adjustment.
In many PV plants, reactive power can be adjusted by external equipment such as reactive power compensation systems, but it is preferable to use inverters that can adjust the reactive power themselves [6]. A sudden change in voltage gives rise to what is known as voltage flicker. Although voltage flicker usually originates at the user side, it distorts the sinusoidal voltage waveform of the power grid. In grid-tie PV plants, continuous changes in the amount of solar radiation lead to voltage flickering. Such distortions can disturb the user's equipment and cause inrush currents. Moreover, flicker interacts with the network impedance, leading to sudden voltage rises or dips [7]. The voltage flicker limits are contained in the following documents [8]: (a) IEC/TR3 61000-3-7 (1996), "Assessment of emission limits for fluctuating loads in MV and HV power systems." (b) IEC 868 / Engineering Recommendations P28 (pg 17), "Limits on voltage flicker short term and long-term severity values." Harmonics are typically produced by the user's apparatus generating waveforms that distort the fundamental 50 Hz wave. Such harmonic generation can damage user apparatus and cause failures in transmission network apparatus. The limits for harmonic distortion levels are given in the following documents:
(a) BS EN 50160:2000, "Voltage characteristics of electricity supplied by public distribution systems." (b) UK Engineering Recommendation G5/4, February 2001, "Planning levels for harmonic voltage distortion and the connection of non-linear equipment to transmission systems and distribution networks." (c) IEC/TR3 61000-3-6 (1996), "Assessment of emission limits for distorting loads in MV and HV power systems." All the requirements and standards mentioned above are essential for the power quality of the electrical grid, and they are always taken into account to ensure the energy quality of the Jordanian power system [9].

II PV System Modeling

The whole model of the PV system is built with the MATLAB/Simscape simulation software. The injection current, which flows from the photovoltaic generation system into the grid, can be decomposed into its essential components. The frequency of the inverter is the same as the power grid frequency, and its capacity is related to the output power of the PV system [10-12]. PV systems are like any other electrical power generation system, only with different components and hence different physical properties. Although a PV array produces power when exposed to sunlight, several other components are required to properly conduct, control, convert, distribute, and store the energy produced by the array. Depending on the functional and operational requirements of the system, the specific components required may include significant entities such as a DC-AC power inverter, a battery bank, etc. The essential element of a PV module is the PV cell. In the Simscape workspace, a PV cell block can be used to accurately simulate the behavior of a real PV cell. Several PV cell blocks can be combined to build a PV module out of series- and parallel-connected cells.
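As an illustration of what such a cell/module model evaluates, the following Python sketch solves the single-diode I-V characteristic, the model commonly implemented by PV cell blocks. All parameter values here are assumptions, tuned only so that the curve roughly matches the module ratings listed in Table 1; they are not taken from the paper.

```python
import math

# Assumed single-diode module parameters (illustrative only, tuned so that
# Voc ~ 38.6 V and Isc ~ 9.03 A, close to the module ratings in Table 1).
I_PH = 9.03      # photo-generated current at 1000 W/m^2 [A]
I_0 = 4.32e-8    # diode saturation current [A]
N = 1.3          # diode ideality factor
N_S = 60         # series-connected cells per module (assumed)
V_T = 0.02585    # thermal voltage at ~25 degC [V]
R_S = 0.3        # series resistance [ohm]
R_SH = 300.0     # shunt resistance [ohm]

def module_current(v, g=1000.0):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current at terminal voltage v, by bisection."""
    a = N * N_S * V_T
    iph = I_PH * g / 1000.0          # photo-current scales with irradiance

    def f(i):
        return iph - I_0 * (math.exp((v + i * R_S) / a) - 1.0) \
               - (v + i * R_S) / R_SH - i

    lo, hi = -1.0, iph + 1.0         # bracket containing the root; f is decreasing in i
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(module_current(0.0))    # short-circuit current, ~9.03 A
print(module_current(38.6))   # near open circuit, ~0 A
```

Connecting such modules in series adds their voltages, and connecting strings in parallel adds their currents, which is exactly how the Simscape model assembles the array.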
The module has one input, irradiance in W/m², and two voltage polarity outputs, +V and -V. The whole PV system, which consists of the PV modules, an MPPT controller, a DC-DC converter, and the utility, can be easily modeled in Simscape, as shown in Figure 1. The adopted PV cell specifications are given in Table 1 below.

Fig. 1. MATLAB/Simscape model of the photovoltaic system.

Table 1: PV cell specifications
Number of PV modules: 220
Number of PV modules in series: 22
Open-circuit voltage: 38.6 V
Short-circuit current: 9.03 A
Voltage at MPP: 31.4 V
Current at MPP: 8.44 A
Maximum power: 265 W
Number of inverters: 1
Inverter power: 60000 W

III Case Analysis

At the JUST PV plant, one string is simulated in the MATLAB/Simscape environment. Two hundred and twenty PV modules are connected, with every 22 PV modules connected in series for a total voltage of about 850 Vdc; these PV panels are combined in a DC combiner box, which is directly connected to an SMA 60 kW grid-tie inverter. The following figures show the practical characteristics of the system during 2018 and demonstrate the feasibility of installing such projects in Jordan. Figure 2, which shows the amount of solar radiation during the year, makes clear that for most of the year the level of solar radiation is very high, which significantly increases the efficiency of the system and shortens the payback period.

Fig. 2. Irradiation at the JUST PV plant during the year 2018.

The following tables show the differences between simulation and actual results under the same operating conditions. The simulation performance is tested under two conditions: the first at 1000 W/m² solar radiation and a temperature of 17.94 °C, and the second at 615 W/m² solar radiation and a temperature of 16.86 °C.
According to the following results, it is evident that the simulation results are very close to the real readings of the photovoltaic system under the same operating conditions.

Table 2: Simulation vs. actual results at 1000 W/m², 17.94 °C (real system / simulation)
Irradiance: 1000 W/m² / 1000 W/m²
Temperature: 17.94 °C / 17.94 °C
Maximum DC power: 50.04 kW / 58.30 kW
Maximum DC voltage: 619.4 V / 690.8 V
Active power: 47.69 kW / 58.30 kW
Power factor: 0.93 / 1
Maximum AC current L1: 74.7 A / 84.49 A
Maximum AC current L2: 74.8 A / 84.49 A
Maximum AC current L3: 74.6 A / 84.49 A
Maximum AC voltage L1: 242.95 V / 230 V
Maximum AC voltage L2: 240.57 V / 230 V
Maximum AC voltage L3: 240.23 V / 230 V

Table 3: Simulation vs. actual results at 615 W/m², 16.86 °C (real system / simulation)
Irradiance: 615 W/m² / 615 W/m²
Temperature: 16.86 °C / 16.86 °C
Maximum DC power: 30.95 kW / 36.80 kW
Maximum DC voltage: 640.8 V / 400 V
Active power: 30.47 kW / 36.80 kW
Power factor: 0.92 / 1
Maximum AC current L1: 46.5 A / 53.30 A
Maximum AC current L2: 46.6 A / 53.30 A
Maximum AC current L3: 46.5 A / 53.30 A
Maximum AC voltage L1: 239.04 V / 230 V
Maximum AC voltage L2: 236.74 V / 230 V
Maximum AC voltage L3: 236.17 V / 230 V

The values in these tables compare the real readings of the solar system with the values obtained when modeling it in MATLAB/Simscape. Several quantities, such as solar radiation and temperature, are assumed constant in the simulation; in reality they change continuously, which accounts for part of the difference between actual and simulated values. Another factor that can strongly affect the difference in readings is partial shading: under partial shading, the actual power values of the PV system are lower than the simulated values.
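The string and array figures quoted in Section III can be cross-checked with a few lines of arithmetic from the Table 1 ratings (a sketch; the variable names are mine):

```python
# Cross-check of the string/array figures in Section III from the Table 1 ratings.
N_MODULES = 220        # total modules in the simulated plant section
N_SERIES = 22          # modules per series string
V_OC_MODULE = 38.6     # module open-circuit voltage [V]
P_MAX_MODULE = 265.0   # module maximum power [W]

n_strings = N_MODULES // N_SERIES            # parallel strings
v_string = N_SERIES * V_OC_MODULE            # string open-circuit voltage [V]
p_array = N_MODULES * P_MAX_MODULE           # array peak power [W]

print(n_strings)        # 10 parallel strings
print(v_string)         # 849.2 V, i.e. "about 850 Vdc" as stated in the text
print(p_array / 1000.0) # 58.3 kW, matching the simulated maximum DC power in Table 2
```

The 58.3 kW figure agrees with the simulated maximum DC power in Table 2, while the measured 50.04 kW is lower, consistent with the real-world losses and shading discussed above.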
A simulation program cannot account for all operating conditions, but its results are clearly very close to the actual values, which indicates the soundness of the simulation model. The THD in the output voltage is illustrated in Figure 3.

Fig. 3. THD in the output voltage.

It is evident from this figure that the THD over the whole year was very low and within the levels recommended by international standards. The simulation likewise gives THD = 0.02. MATLAB/Simscape provides an FFT analysis window that can be used for many analysis tasks, such as computing the THD. The conditions for calculating and assessing the reactive power of grid-tie PV systems are [13]: 1) the voltage levels of the electrical grid must remain stable and within the normal range under PV power generation; 2) the reactive power exchange between the PV generation and the grid at the point of common coupling is zero. As mentioned previously, the inverters used in the PV power system are of the SMA type, which can compensate for reactive power and thus keep the power factor within the normal range. Figure 4 shows the power factor values during 2018.

Fig. 4. Power factor values in 2018.

It is evident from this figure that the power factor of the grid-tie photovoltaic system is about 0.9 most of the time, which is considered very good. To reach a unity power factor in the MATLAB/Simscape model, the values in Tables 2 and 3 are taken into account. The power factor correction is

\( Q_C\,[\mathrm{kVAr}] = P\,[\mathrm{kW}]\left( \tan\phi_A - \tan\phi_B \right) \)

where \( \phi_A = \cos^{-1}(\text{initial power factor}) \) and \( \phi_B = \cos^{-1}(\text{required power factor}) \). According to Table 3, \( \phi_A = \cos^{-1}(0.92) \), and the required power factor is 1.
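The capacitor sizing formula above can be reproduced with a small helper (a sketch; the function name is mine):

```python
import math

def correction_kvar(p_kw, pf_initial, pf_required=1.0):
    """Reactive power [kVAr] a capacitor bank must supply to raise the
    power factor of a load drawing p_kw from pf_initial to pf_required:
    Qc = P * (tan(phi_A) - tan(phi_B)), with phi = acos(pf)."""
    phi_a = math.acos(pf_initial)
    phi_b = math.acos(pf_required)
    return p_kw * (math.tan(phi_a) - math.tan(phi_b))

# Operating point of Table 3 (30.47 kW at power factor 0.92, target 1.0):
print(round(correction_kvar(30.47, 0.92), 2))   # ~12.98 kVAr at this instant
```

Note that this gives the compensation needed at one operating point; at the higher-output Table 2 point (47.69 kW at 0.93) the same formula gives about 18.8 kVAr, which is consistent with selecting a 20 kVAr bank per string with some margin.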
However, the active power is 30.47 kW, so, applying the previous equation, a capacitor bank of about 20 kVAr per string should be added. Figure 5 illustrates the value of the power factor after the addition of the capacitor bank.

Fig. 5. Power factor simulation result after the addition of the compensator.

V Conclusion

In this paper, the impact of the grid-tie photovoltaic power system at JUST on the power quality of the distribution network was analyzed and studied. The power quality problems resulting from connecting a photovoltaic power plant to the electrical grid were summarized, and the quality standards and requirements for maintaining power quality were listed. Using the MATLAB/Simscape simulation software, this paper simulated the THD and power factor of the Jordan University of Science and Technology grid-tie photovoltaic plant. The results show that: 1) the THD that the grid-tie photovoltaic plant injects into the grid satisfies the standards' requirements; 2) a 20 kVAr capacitor bank per string should be added to the photovoltaic system to obtain a unity power factor. It is also recommended to study the effect of voltage fluctuation, voltage flickering, and other factors that affect the quality of electrical power.

References

[1] K. Thirumala, T. Jain, and A. C. Umarikar, "Visualizing time-varying power quality indices using generalized empirical wavelet transform," Electr. Power Syst. Res., vol. 143, 2017.
[2] S. Yan, S.-C. Tan, C. K. Lee, B. Chaudhuri, and S. Y. Hui, "Use of smart loads for power quality improvement," IEEE J. Emerg. Sel. Top. Power Electron., 2016.
[3] H. Al Riyami, A. Al Busaidi, A. Al Nadabi, M. Al Siyabi, O. H. Abdalla, K. Al Manthari, B. Hagenkort, S. Mirza, and R.
Fahmi, "Power quality of Dhofar network with 50 MW wind farm connection," 2016.
[4] B. Stridh and J. Rosenlind, "Power quality experiences from Sweden's first MW photovoltaics park and impact on LV planning," 2016.
[5] S. Jo, S. Son, and J. Park, "On improving distortion power quality index in distributed power grids," vol. 4, no. 1, pp. 586-595, 2013.
[6] N. Mu, J. C. Alfonso-Gil, S. Orts-Grau, S. Seguí-Chilet, and F. J. Gimeno-Sales, "Instantaneous approach to IEEE power terms and quality indices," vol. 125, 2015.
[7] J. Barros and R. I. Diego, "A review of measurement and analysis of electric power quality on shipboard power system networks," Renew. Sustain. Energy Rev., vol. 62, 2016.
[8] D. D. Ferreira, J. M. de Seixas, A. S. Cerqueira, C. A. Duque, M. H. J. Bollen, and P. F. Ribeiro, "A new power quality deviation index based on principal curves," Electr. Power Syst. Res., vol. 125, 2015.
[9] E. Almaita, "Harmonic assessment in Jordanian power grid based on load type classification," vol. 11, 2016.
[10] M. S. Ibbini and A. H. Adawi, "A Simscape based design of a global maximum power point tracker under partial shading condition," International Journal of Smart Grid and Clean Energy, vol. 8, no. 1, January 2019.
[11] M. S. Ibbini and A. H. Adawi, "Analysis and design of a maximum power point tracker for a stand-alone photovoltaic system using Simscape," International Journal of Advanced Trends in Computer Science and Engineering, vol. 8, no. 1, January-February 2019.
[12] M. S. Ibbini and A. H. Adawi, "A Simscape based design of a dual maximum power point tracker of a standalone photovoltaic system," International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 3, June 2020.
[13] C. M. de Brito and P. van Rhyn, "The use of power quality standards to establish an equivalent transformer capability under harmonic loading," 2016.

Mohammed Salameh Ibbini received his Ph.D. in electrical engineering from the University of Illinois at Urbana-Champaign, his M.Sc. in electrical engineering from the University of Colorado at Boulder, and his B.Sc. in electronics from the ENSEEC in France. He is currently a professor of electrical and biomedical engineering at Jordan University of Science & Technology, where he is also the vice president and has held various administrative and academic positions. He has been a professor of EE and BME at JUST since 2005, authoring and co-authoring over 70 articles and attending a large number of international conferences. Professor Ibbini has taught courses in linear and nonlinear control, biomedical instrumentation, signals and systems, and machines. His research interests include, but are not limited to, nonlinear control, renewable energy, feedback linearization, ultrasound and microwave cancer therapy, diabetes, and bridging the gap between university output and labor market needs. Prof. Ibbini is a strong advocate of hands-on, real-life examples, project-based learning, learning by doing, and innovation in engineering.

Abdullah Hamed Adawi was born in Saudi Arabia in 1990. He obtained a bachelor's degree in electrical power engineering in 2012 from Al-Balqa Applied University and received his master's degree in 2019 in power and control engineering from Jordan University of Science and Technology.
Journal of Engineering Research and Technology, Volume 3, Issue 3, September 2016. Role of Flywheel Energy Storage System in Microgrid. Salima Nemsi 1, Seifeddine Abdelkader Belfedhal 2, Linda Barazane 3. 1 Laboratory of Electrical and Industrial Systems, University of Sciences and Technology Houari Boumediene, Algeria, nemsisalima@yahoo.fr. 2 Laboratory of Electrical Engineering and Plasma, University of Ibn Khaldoun, Algeria, seifedinebelfedhal@yahoo.fr. 3 Laboratory of Electrical and Industrial Systems, University of Sciences and Technology Houari Boumediene, Algeria, lbarazane@yahoo.fr.

Abstract—Recently, the one-sided model of electricity generation has changed with the introduction of the microgrid concept. A microgrid not only enables a producer to be a consumer and vice versa, but also aggregates different renewable energy sources, such as solar and wind, in order to mitigate CO2 emissions, produce clean energy, and avoid the large power losses that arise mainly from generating large amounts of electrical power in one place and transmitting it over long lines. Nevertheless, reliable and controllable operation depends strongly on storage systems with power electronic converters. In this context, this paper studies a power electronic converter supplying a squirrel-cage induction machine coupled to a flywheel. This system, known as a flywheel energy storage system (FESS), aims to improve the quality of the electric power delivered to the grid or the consumer by storing energy in kinetic form during periods of high wind production, for example, and releasing it when the primary source is in deficit. The simulation results were obtained using MATLAB/Simulink.

Index Terms—microgrid, flywheel energy storage system (FESS), MATLAB/Simulink.

I Introduction

The world is facing serious problems with electrical energy.
On one side there is the pollution caused by CO2 emissions from fossil fuels, and on the other the depletion of traditional resources such as gas. Besides, there are the large losses that arise mainly from generating large amounts of electrical power in one place and transmitting it over long lines. To overcome these drawbacks, a new form of electricity generation known as the microgrid has been proposed. Usually, a microgrid is composed of an aggregation of distributed generation units, which rely essentially on renewable energy resources such as wind and solar, together with loads and storage devices [1][2]. The whole system can either be connected to the main grid (grid-connected mode) or work autonomously (standalone mode) [3][4]. An important feature of renewable energy resources is the fluctuation of the output power over time. Hence, storage systems are important within a microgrid, especially for boosting the power supplied by the microgrid in grid-connected mode when the distributed generation sources are not supplying the expected level of energy due to their natural power variation [5]. Different types of storage exist; some are already in use and others are under development. They can be classified into two categories [6]: (a) long-term storage, where the period of storage is above 10 minutes; the well-known types are batteries and storage of water in potential form (pumped hydro). (b) short-term storage, where the period of storage is less than 10 minutes; the well-known types are the flywheel and the supercapacitor. In this context, the objective of this article is to study the flywheel energy storage system (FESS) on its own. The FESS has many advantages: simple maintenance, precise knowledge of the stored energy level, clean storage (unlike batteries), and a lifetime independent of the number of storage/retrieval cycles. This article is arranged as follows: Section II briefly describes the main components of the FESS and its working principle.
In Section III, the importance of this type of storage is presented. Section IV is devoted to the mathematical model of the whole system. Section V shows the simulation results obtained with MATLAB/Simulink and their discussion. Finally, the conclusion is presented in Section VI.

II Constitution and Working Principle of the Flywheel Storage System

Figure 1 shows the main components of the flywheel-based storage system, which comprises the following elements: a flywheel, a motor-generator, and a power electronic converter. As in most energy storage systems, there is a reversible transformation of energy. During storage, the electrical energy is converted into mechanical energy through the electric motor, and this mechanical energy is stored in the flywheel as the kinetic energy of a rotating mass. During the discharge of the FESS, the mechanical energy is converted back into electrical energy through the electric generator. The operating speed is imposed by the power electronic converter, which sets the direction of energy transfer through the electrical machine [6]. In Figure 1, U stands for the DC link voltage.

III Importance of the Flywheel Storage System

In order to illustrate the behavior of a FESS in a microgrid, we propose the scheme depicted in Figure 2, where the microgrid in our case relies on only one type of renewable energy, a wind turbine connected to the grid in the presence of a FESS. We suppose that the wind profile generates an active power Pwind, which varies due to the random character of the wind. On the other hand, the grid must receive a smoothed power [6].
Knowing the power Preg that must be delivered to the grid, the FESS reference power can be determined as follows:

\( P_{ref} = P_{wind} - P_{reg} \)  (1)

If the reference power is positive, there is an excess of energy that must be stored in kinetic form and the asynchronous machine operates as a motor; otherwise, the asynchronous machine operates as a generator and the stored energy is delivered.

IV Flywheel Energy Storage System Model

In this part, the models of all the parts constituting the FESS are presented.

A. Flywheel

In this paragraph, the value of the flywheel inertia required to store, and restore in a timely manner, a given power is determined. The relationship between power and energy is the following [8]:

\( P_f = \frac{dE_f}{dt} \)  (2)

with Pf the maximum power deliverable by the storage system (equal to the nominal power of the asynchronous machine coupled to the flywheel) [W] and Ef the stored energy [J]. The relationship between energy, inertia, and angular velocity is:

\( E_f = \frac{1}{2} J_f \Omega_f^2 \)  (3)

where Ωf is the flywheel angular velocity [rad/s] and Jf the flywheel moment of inertia [kg·m²].

Figure 1: Flywheel energy storage system constitution [7] (converter, motor/generator, and flywheel with magnetic bearings in a safety and vacuum envelope, on a DC link U).
Figure 2: Example of a flywheel energy storage system associated with wind energy [7] (electrical grid, wind turbine, and FESS exchanging the powers Pwind, Preg, and Pref during energy storage and energy delivery).
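A quick numeric check of relations (2) and (3), using the operating figures reported in the simulation section (450 kW exchanged over 30 s while the speed swings between 1500 and 3000 rpm); the inertia value obtained is my own back-calculation, not a figure from the paper:

```python
import math

# Sizing the flywheel inertia from relations (2) and (3): the energy
# exchanged at constant power P_f over a time DT equals the change in
# kinetic energy (1/2)*J_f*(w_max^2 - w_min^2).
P_F = 450e3                       # nominal power of the machine [W] (Section V)
DT = 30.0                         # duration of a full charge/discharge [s]
N_MIN, N_MAX = 1500.0, 3000.0     # speed range [rpm] (Section V)

w_min = N_MIN * 2.0 * math.pi / 60.0    # [rad/s]
w_max = N_MAX * 2.0 * math.pi / 60.0    # [rad/s]

e_f = P_F * DT                               # energy exchanged [J]
j_f = 2.0 * e_f / (w_max**2 - w_min**2)      # required inertia [kg*m^2]

print(round(e_f / 1e6, 2))   # 13.5 MJ exchanged per half-cycle
print(round(j_f, 1))         # ~364.8 kg*m^2
```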
The moment of inertia of the flywheel is a key parameter, because it characterizes the storage (or restitution) capability. By grouping equations (2) and (3), we get:

\( P_f = J_f \,\Omega_f \,\frac{d\Omega_f}{dt} \)  (4)

Passing to small variations, with Δt the time variation during a charge or discharge at maximum power [s] and ΔΩf a small variation of the angular velocity about an operating point [rad/s]:

\( P_f = J_f \,\Omega_f \,\frac{\Delta\Omega_f}{\Delta t} \)  (5)

\( E_f = \frac{1}{2} J_f \left( \Omega_{f\,max}^2 - \Omega_{f\,min}^2 \right) \)  (6)

\( J_f = \frac{2\,E_f}{\Omega_{f\,max}^2 - \Omega_{f\,min}^2} \)  (7)

where Ωf max and Ωf min are the maximum and minimum flywheel angular velocities [rad/s].

B. Asynchronous Machine

The asynchronous machine is chosen for its advantages in terms of simplicity and robustness of the rotating parts.

B.1 Electrical equations in the dq reference frame

We use the model of the asynchronous machine in the Park reference frame. The stator and rotor equations are given by the following system [6][8][9]:

\( v_{ds} = R_s i_{ds} + \frac{d\varphi_{ds}}{dt} - \omega_s \varphi_{qs}, \qquad v_{qs} = R_s i_{qs} + \frac{d\varphi_{qs}}{dt} + \omega_s \varphi_{ds} \)  (8)

\( 0 = R_r i_{dr} + \frac{d\varphi_{dr}}{dt} - (\omega_s - p\,\Omega)\,\varphi_{qr}, \qquad 0 = R_r i_{qr} + \frac{d\varphi_{qr}}{dt} + (\omega_s - p\,\Omega)\,\varphi_{dr} \)  (9)

with the flux linkages

\( \varphi_{ds} = L_s i_{ds} + M i_{dr}, \quad \varphi_{qs} = L_s i_{qs} + M i_{qr}, \quad \varphi_{dr} = L_r i_{dr} + M i_{ds}, \quad \varphi_{qr} = L_r i_{qr} + M i_{qs} \)  (10)

where Rs and Rr are the stator and rotor phase resistances; Ls and Lr the stator and rotor phase inductances; M the mutual inductance; vds and vqs the direct and quadrature components of the stator voltage; ids and iqs the direct and quadrature components of the stator current; φds and φqs the direct and quadrature components of the stator flux; p the number of pole pairs; and ωs the pulsation of the field in the stator reference frame.

B.2 Control

To determine the control (the reference voltages to be applied to the converter) of the asynchronous machine, we choose rotor-flux-oriented control, because the equations are simpler than with stator-flux or air-gap-flux orientation [6]. The position of the reference frame is chosen so as to cancel the quadrature component of the rotor flux; in other words, the rotor flux vector is aligned with the d-axis of the Park reference frame.
Suppose:

\( \varphi_{qr} = 0, \qquad \varphi_{dr} = \varphi_r \)  (11)

With this orientation, the rotor equations of system (8)-(10) reduce to a form in which the rotor flux is set by the direct stator current and the torque by the quadrature stator current. The reference flux is imposed by the field-weakening law of the asynchronous machine [6]:

\( \varphi_{r\,ref} = \varphi_{rn} \;\; \text{for} \;\; |\Omega| \le \Omega_n, \qquad \varphi_{r\,ref} = \varphi_{rn}\,\frac{\Omega_n}{|\Omega|} \;\; \text{for} \;\; |\Omega| > \Omega_n \)  (12)

where φrn is the nominal rotor flux [Wb], deduced from the nominal stator flux φsn [Wb], which is obtained from the RMS value Vs of the simple (phase) stator voltage and the grid pulsation ωs, equal to 314.16 rad/s:

\( \varphi_{sn} = \frac{\sqrt{2}\,V_s}{\omega_s} \)  (13)

The reference direct stator current is obtained from the flux reference, corrected by a PI (proportional-integral) regulator acting on the flux error:

\( i_{ds\,ref} = \frac{\varphi_{r\,ref}}{M} \)  (14)

We estimate the rotor flux through the following equation, where s is the Laplace operator and Tr = Lr/Rr the rotor time constant:

\( \hat{\varphi}_r = \frac{M}{1 + s\,T_r}\; i_{ds} \)  (15)

We want to control the power of the asynchronous machine coupled to the flywheel. From a reference power, the electromagnetic torque reference of the machine driving the flywheel is deduced by measuring the rotational speed. The electromagnetic torque can be calculated by [10]:

\( T_{em} = p\,\frac{M}{L_r}\,\varphi_r\, i_{qs} \)  (20)

C. Converter

The voltages modulated by the converter in the Park reference frame and applied to the stator of the asynchronous machine are given by the following system [9][11]:

\( \begin{bmatrix} v_{ds} \\ v_{qs} \end{bmatrix} = \frac{U}{2} \begin{bmatrix} v_{d\,reg} \\ v_{q\,reg} \end{bmatrix} \)  (21)

where vd-reg and vq-reg are the converter regulation voltages in the Park reference frame. Likewise, the current modulated by the converter is given by:

\( i_{mod} = \frac{1}{2}\left( v_{d\,reg}\, i_{ds} + v_{q\,reg}\, i_{qs} \right) \)  (22)

The control of the converter associated with the asynchronous machine is derived by inverting the system in equation (21), the regulation voltages being produced by PI current regulators completed by decoupling terms that compensate the cross-coupling between the d and q axes. The global FESS control scheme (PI regulators, iqs-ref computation, decoupling system, flux estimator, and dq/abc transformations acting on the converter and the asynchronous machine) is depicted in Figure 3. Due to the large size of that scheme, it is placed at the end of this article.

Figure 3: Flywheel energy storage system control [8].
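The control chain of Figure 3 — a rotor-flux estimate from the direct stator current, a torque reference from the power reference and measured speed, and the resulting current references — can be sketched numerically using the standard rotor-flux-oriented relations. All machine parameter values below are illustrative assumptions, not the paper's:

```python
# Sketch of the rotor-flux-oriented control chain of Figure 3.
# Machine parameters are illustrative assumptions, not the paper's values.
M = 0.078        # mutual inductance [H]
L_R = 0.080      # rotor inductance [H]
R_R = 0.060      # rotor resistance [ohm]
P_PAIRS = 2      # pole pairs
T_R = L_R / R_R  # rotor time constant [s]

def flux_estimator_step(phi_r, i_ds, dt):
    """One Euler step of the flux estimator phi_r = M*i_ds / (1 + s*T_r)."""
    return phi_r + (dt / T_R) * (M * i_ds - phi_r)

def current_references(p_ref, omega, phi_r):
    """Torque reference from power and speed, then i_qs_ref from the torque
    expression T_em = p*(M/L_r)*phi_r*i_qs; i_ds_ref maintains the flux."""
    t_ref = p_ref / omega                           # [N*m]
    i_qs_ref = t_ref * L_R / (P_PAIRS * M * phi_r)  # torque-producing current
    i_ds_ref = phi_r / M                            # flux-producing current
    return i_ds_ref, i_qs_ref

# Let the flux estimate settle at a constant i_ds, then compute references.
phi, i_ds, dt = 0.0, 10.0, 1e-3
for _ in range(20000):                  # 20 s, much longer than T_r: steady state
    phi = flux_estimator_step(phi, i_ds, dt)
print(round(phi, 3))                    # settles at M * i_ds = 0.78 Wb

i_d, i_q = current_references(p_ref=450e3, omega=250.0, phi_r=phi)
```

The steady-state flux estimate equals M·i_ds, as expected from the first-order estimator, and reversing the sign of the power reference simply reverses i_qs_ref, which is how the machine switches between motoring (storage) and generating (restitution).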
V Simulation Results

Figures (4) to (10) illustrate the operation of the FESS over a period of 60 seconds. The initial velocity of the flywheel is fixed at 1500 rpm, and the reference power is equal to the nominal power of the asynchronous machine, 450 kW.

Figure (4): Flywheel rotation speed.
Figure (5): Power of the storage system.
Figure (6): Direct stator current.
Figure (7): Quadrature stator current.
Figure (8): Rotor flux.
Figure (9): Stator current and voltage.

The flywheel rotation speed is shown in Figure (4). Note that the speed goes from 1500 to 3000 rpm in 30 seconds, which corresponds to storage. The speed then goes from 3000 back to 1500 rpm in the following 30 seconds to restore 450 kW. The power of the storage system is shown in Figure (5). In this simulation the system is requested to store 450 kW during the first 30 seconds and to return 450 kW during the remaining 30 seconds; the figure shows that the reference power is followed.
We also note that the reference power is reversed when the speed of the flywheel reaches its high or low limit (see Figures 4 and 5); the asynchronous machine is then asked to provide or consume the nominal power of 450 kW. A positive power corresponds to power consumed by the machine, and a negative power to power supplied by the machine. Figures (6), (7), and (8) respectively show the evolution of the direct current, the quadrature current, and the flux of the asynchronous machine; the references are tracked well. A second observation from these figures concerns the direct current and its relationship with the flux: the variation of the direct current component is the image of the flux. During storage, the current lags the voltage and the machine acts as a motor (see Figure 10); during restitution, the current leads the voltage and the machine works as a generator (see Figure 11). This confirms the two operating modes of the asynchronous machine.

VI Conclusion

In this article, we have presented the FESS as a solution for storing electrical energy in kinetic form during periods of excess production from renewable energy sources and restoring it in case of deficit, given the random character of such alternative sources. First, a general view of the constituent parts of this system and its operating principle was given. Then, each part of the FESS was modeled separately: the flywheel, the asynchronous machine and its control, and the power converter. Finally, the MATLAB/Simulink results confirm the benefits of the flywheel energy storage system, both in the storage period, where the system works as a motor and stores energy at 450 kW in kinetic form, and in the restitution period, where the system works as a generator.

References

[1] F. Katiraei and M. R.
Iravani, "Management strategies for a microgrid with multiple distributed generation units," IEEE Transactions on Power Systems, vol. 21, no. 4, Nov. 2006.
[2] J. M. Guerrero, N. Berbel, J. Matas, L. García de Vicuña and J. Miret, "Decentralized control for parallel operation of distributed generation inverters in microgrids using resistive output impedance," IECON 2006 - 32nd Annual Conference on IEEE Industrial Electronics, 2006.
[3] J. M. Guerrero, J. Matas, L. García de Vicuña, M. Castilla, and J. Miret, "Wireless-control strategy for parallel operation of distributed-generation inverters," IEEE Transactions on Industrial Electronics, vol. 53, no. 5, Oct. 2006.

Figure (10): Zoom of stator current and voltage during storage.
Figure (11): Zoom of stator current and voltage during restitution.

[4] J. Kim, J. M. Guerrero, P. Rodriguez, R. Teodorescu and K. Nam, "Mode adaptive droop control with virtual output impedances for an inverter-based flexible AC microgrid," IEEE Transactions on Power Electronics, vol. 26, no. 3, Mar. 2011.
[5] M. A. Abusara and S. M. Sharkh, "Control of line interactive UPS systems in a microgrid," IEEE International Symposium on Industrial Electronics, 2011.
[6] G. O. Cimuca, "Système inertiel de stockage d'énergie associé à des générateurs éoliens" [Inertial energy storage system associated with wind generators], PhD thesis in electrical engineering, Lille University, 2004.
[7] H. Ben Ahmed, B. Multon, N. Bernard, C.
kerzreho, ―le stockage inertiel électromécanique―, revue 3ei n°48, pp. 18-29, mar 2007. [8] a. davigny, ―participation aux services systèmes de fermes d’éoliennes à vitesse variable intégrant du stockage inertiel d’énergie―, phd thesis in electrical engineering―, lille university, 2004. [9] s. belfedhal, m. berkouk, ―modeling and control of wind power conversion system with a flywheel energy storage system―, international journal of renewable energy research, ijr, vol.1, no3, pp.43-52, 2011 [10] d. leclercq, ―apport du stockage inertiel associé à des éoliennes dans un réseau électrique en vue d’assurer des services systèmes―, phd thesis in electrical engineering, lille university, 2004. [11] l. leclercq, b. robyns j.m. grave, ―control based on fuzzy logic of a flywheel energy storage system associated with wind and diesel generators―, mathematics and computers in simulation, vol. 63, pp. 271–280, 2003. salima nemsi received her b.sc., m.sc. degrees in electrical engineering, in 2008 and 2011 respectively, from university of science and technology houari boumediene, algiers, algeria, where she is currently working toward the ph.d. degree. since 2013, she has been working full-time as a researcher at renewable energy development center, algiers, algeria. her research interest include photovoltaic and wind energy, storage systems, autonomous and grid integration, dc-dc and dc-ac converters. seifedine abdelkader belfedhal received his b.sc., m. sc., degrees in electrical engineering, in 2007 and 2010 respectively, from ibn khaldoun university, tiaret, algeria. he is a phd student at the same university. his research interest include wind energy, power conversion, energy management and power converters. linda barazane professor at university of science and technology houari boumediene, algiers, algeria, received her engineer degree and m. sc., in electrical engineeering from the national polytechnic school of algiers (enp), algeria, in 1989 and 1993, respectively. 
She received the doctorate degree in electrical engineering from the University of Science and Technology Houari Boumediene in 2006. Her research interests are in fuzzy logic systems, and electrical and renewable energy.

Journal of Engineering Research and Technology, Volume 4, Issue 2, June 2017

Modelica-Based Model for Activated Sludge System

Khalil T. Matar, Fahid K.J. Rabah, Mohamed M. Abdelati

Abstract—The activated sludge system is the most important stage in the municipal wastewater treatment process. It is a biological operation that treats sewage by means of microorganisms. One type of activated sludge system is the oxidation ditch. In this work, models derived through object-oriented programming are used to build a simulation model of a typical oxidation ditch. The derived model was constructed and programmed in the Modelica language. The simulation model helps to better understand the system behavior, and can therefore serve as a tool for assessing and evaluating the performance of control schemes. The tool gave the expected results in terms of reducing the concentration of organic matter in the wastewater leaving the oxidation ditch.

Key words—Oxidation ditch, modeling, Modelica, activated sludge, wastewater treatment plant

I Introduction

Wastewater is water containing solids, whether suspended or dissolved. Wastewater treatment is therefore defined as the process of separating solids from water to obtain cleaner water that can be reused or safely disposed into the environment. There are many methods for separating solids from water, including physical, chemical and biological methods [1]. In this paper a biological method for wastewater treatment is modeled. The principal biological processes used for wastewater treatment fall into two main categories: suspended growth processes and attached growth processes [2].
In suspended growth processes, the microorganisms are maintained in suspension while converting organic matter or other constituents in the wastewater to gases and cell tissue. Examples of these processes are the conventional activated sludge system, oxidation ditches (OD), sequencing batch reactors (SBR), aerated lagoons, and upflow sludge blanket reactors. In attached growth processes, by contrast, the microorganisms responsible for converting organic matter or other constituents to gases and cell tissue are attached to some inert material such as rocks, sand, ceramic, or plastic media. This research contributes to the rehabilitation of the wastewater treatment plant of the European Hospital, located in the southern Gaza Strip. This plant receives a daily average of 141 m³ of untreated wastewater from the European Hospital. The average concentration of organic matter (BOD) entering the plant is 317 mg/L [3], so this wastewater must be treated before it is injected into the aquifer or used for irrigation, to prevent pollution of the groundwater. The plant uses an oxidation ditch for the biological treatment process. The main objective of this research is to create a simulation model of the organic matter removal process in the OD using the component-oriented language Modelica, in order to better understand the process dynamics and investigate the efficiency of possible control schemes. In [4, 5] wastewater pumping stations are modeled in Modelica. In [6] the Modelica application library WasteWater, containing three activated sludge models of different complexity together with the essential components of municipal wastewater treatment plants, is presented. In [7] a dynamic optimization method is applied to a wastewater treatment plant (WWTP) model: with the help of the WasteWater library, an ASM No. 2d model of the WWTP Jena was examined and evaluated.
This research deals with the mathematical modeling and process control of OD wastewater treatment at the European Hospital in Khanyounis as a case study. The increased importance of biological purification processes has led to growing interest in their mathematical modeling, for understanding as well as for predictive and control purposes. The plant under study consists of 15 parts, as illustrated in the process flow diagram depicted in Figure 1:
Part I: Inlet pumping station, consisting of a collection tank for the incoming wastewater and two screw pumps with a capacity of 10 L/s each.
Part II: Muncher, used to remove solids.
Part III: Oxidation ditch, the most important part of the plant, since the biological treatment process occurs within it.
Part IV: Sedimentation tank, used to separate and remove settleable suspended solids.
Part V: Flowmeter and chlorination, used to control the sterilization process and to add chlorine.
Part VI: Sludge pumping station, a tank that collects the sludge for distribution either back to the oxidation ditch or onward to sludge treatment.
Part VII: Sludge consolidation tank, which mixes the sludge until it becomes homogeneous before it is transferred for drying.
Part VIII: Distribution chamber, where the treated water is distributed to downstream units.
Part IX: Final collection tanks, used to collect the treated water.
Part X: Outlet pumping station, used to transfer the treated water to the finishing ponds and soakaway area.
Part XI: Soakaway area, ponds used to inject treated water into the groundwater.
Part XII: Chlorination unit, used for disinfection of the treated water with chlorine.
Part XIII: Sludge drying beds, the place
where the sludge is dried for disposal or some industrial use.

Figure 1: European Hospital Gaza sewage treatment plant

The rest of this paper is organized as follows: Section 2 describes the biological treatment stage, Section 3 describes the modeling, and Section 4 presents simulation results. Finally, Section 5 provides a conclusion and an outlook on future work.

II Process Description

Activated sludge is a process for treating wastewater using microorganisms in the presence or absence of oxygen. It is a biological process that converts dissolved and particulate biodegradable organic compounds into acceptable end products and removes nutrients such as nitrogen and phosphorus. This paper focuses on organic matter as the major contaminant. There is a variety of activated sludge plant types, such as the conventional activated sludge system, oxidation ditches, sequencing batch reactors (SBR), aerated lagoons, and upflow anaerobic sludge blanket (UASB) reactors. The plant under study uses an oxidation ditch, which is a suspended growth system, as shown in Figure 2. It is a modified version of the conventional activated sludge system. Oxidation ditches may be considered traditional completely-mixed systems characterized by long hydraulic detention time and sludge age. An oxidation ditch consists of one or more channels forming a loop, oval or horseshoe. Wastewater is circulated in the oxidation ditch by a rotating brush. The brush has a second function: it dissolves some atmospheric oxygen into the liquor, which is necessary for microorganism growth.

Figure 2: The oxidation ditch under study

The flow velocity in the oxidation ditch is maintained in the range of 25-30 cm/s to keep the microorganisms suspended. In general the process consists of an anoxic zone followed by an aerobic one, as illustrated in Figure 3.
At the influent to the oxidation ditch, there are organic matter and nitrates (NO3) coming from the aerobic zone, along with a low level of dissolved oxygen (usually less than 0.5 mg O2/L). This is called the anoxic condition, where denitrification occurs. At the end of the anoxic zone and the beginning of the aerobic zone at the brush, the pollutants present are the remaining organic matter that was not used for denitrification, together with the O2 introduced by the aerator and the ammonium (NH4) that comes with the influent and passes through the anoxic zone without any reduction in its concentration. Under these conditions both BOD5 removal (BOD5 being a measure of organic matter) and nitrification (the conversion of NH4 to NO3) occur under aerobic conditions. At the end of the aerobic zone the dissolved oxygen is reduced again to around zero due to consumption by the microorganisms. Oxidation ditches differ from the conventional activated sludge system in their ability to achieve the removal targets with high performance and low operation and maintenance costs. In addition, the OD produces less sludge than other activated sludge biological treatment processes [8, 9, 10].

Figure 3: The process flow in oxidation ditches

III Model Derivation

Component models for the oxidation ditch process are not found in the standard simulation tool libraries. In this work the intent is not to build a sophisticated model for detailed investigations, but to arrive at a manageable working model. The objective is to design a simulation tool that simulates oxidation ditch operations, particularly the reduction of organic matter (i.e. the reduction of BOD concentration). To this end, the modeling and simulation environment Dymola, which is based on the component-oriented language Modelica, was used [2].
The oxidation ditch is modeled in the literature as a completely mixed reactor [8, 9]. The OD process is illustrated in the simplified schematic drawing shown in Figure 4. The following variable notation is used in the model development:

Q    wastewater flow rate (m³/d)
θ    hydraulic detention time of the reactor (day)
θc   mean cell-residence time (day)
S    concentration of the limiting substrate in solution (mg/L)
Y    maximum yield coefficient measured during any finite period of logarithmic growth, defined as the ratio of the mass of cells formed to the mass of substrate consumed (mg biomass/mg substrate)
kd   endogenous decay coefficient (1/day)
V    reactor volume (m³)
X0   concentration of microorganisms in the influent (mg/L)
X    concentration of microorganisms in the reactor (mg/L)
Qw   sludge waste flow rate (m³/d)
Xr   concentration of biomass in the return line (mg/L)
Qe   effluent flow rate (m³/d)
Xe   concentration of biomass in the effluent (mg/L)

Figure 4: Oxidation ditch system: completely mixed reactor with solids recycle

The biomass concentration in the influent (X0) of this reactor is negligible compared to the biomass concentration in the reactor (X). The hydraulic detention time in the OD is related to the reactor volume and the flow rate as follows:

θ = V / Q    (1)

The net production of sludge during wastewater treatment, Px, measured in kg VSS/d, is given by the following equation:

Px = Y Q (S1 − S2) / (1 + kd θc)    (2)

where S1 and S2 are the concentrations of organic matter in the influent and effluent, respectively. The oxygen supply rate R0 (kg O2/d) affects the reduction of organic matter as well as the net production of sludge. In [1, 2] it has been shown that these quantities are related as follows:

R0 = Q (S1 − S2) − 1.42 Px    (3)

Solving equations 2 and 3 results in the following equation:

S2 = S1 − K R0 / Q    (4)

where K is a coefficient that depends on Y, kd and θc.
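As a numerical illustration of equations (1) to (4), the Python sketch below computes the sludge production and the corresponding oxygen rate. The kinetic parameters (Y, kd, θc) are hypothetical placeholders; only the 141 m³/d inflow and the 290 and 20 mg/L concentrations come from the paper.

```python
def sludge_production(Y, Q, S1, S2, kd, theta_c):
    """Net sludge production Px (kg VSS/d), as in eq. (2)."""
    return Y * Q * (S1 - S2) / (1.0 + kd * theta_c)

def oxygen_rate(Q, S1, S2, Px):
    """Oxygen supply rate R0 (kg O2/d), as in eq. (3)."""
    return Q * (S1 - S2) - 1.42 * Px

# Hypothetical kinetic parameters, for illustration only:
Y, kd, theta_c = 0.5, 0.06, 10.0   # yield, decay coefficient, cell-residence time
Q = 141.0                          # m3/d, daily inflow reported for the plant
S1, S2 = 0.290, 0.020              # kg BOD5/m3 (i.e. 290 and 20 mg/L)

Px = sludge_production(Y, Q, S1, S2, kd, theta_c)   # kg VSS/d
R0 = oxygen_rate(Q, S1, S2, Px)                      # kg O2/d
```

With these placeholder kinetics the sketch yields roughly 12 kg VSS/d of sludge and an oxygen rate of about 21 kg O2/d; the point is only to show how eqs. (2) and (3) chain together.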
After describing the mathematical equations of the oxidation ditch system, we now describe the wastewater treatment control system. The main strategy is to control the treatment efficiency by controlling the oxygen concentration in the oxidation ditch. In oxidation ditches the amount of dissolved oxygen is controlled by adjusting the speed of the rotating brush. Manufacturers of aerators provide an empirical quadratic equation relating R0 to the rotational speed of the brush, r; its parameters depend on the physical characteristics of the aerator. The aerator used in this study has a characteristic equation of the quadratic form

R0 = a r² + b r + c    (5)

with coefficients a, b and c specific to the installed aerator. The feedback signal for the controller is the concentration of organic matter at the effluent (S2). An oxygen sensor is used to estimate this quantity, assuming a quadratic relationship between R0 and the oxygen level (O). For the brush aerator installed in the OD under study, the relation of O to r is:

O = 0.0116 [1250(r − 500) + r 8(r − 500) − 65r(r − 500)]    (6)

Now the control problem is to adjust the aerator speed so as to keep the organic matter of the effluent at a specified level. In order to carry out simulations, an estimate of the daily variation of the influent is required; such an estimate is depicted in Figure 5.

Figure 5: Daily flow pattern of wastewater in the hospital

IV Implementation Procedure

This section describes briefly how the modeling is implemented in Modelica using the Dymola tool. The first step is defining the wastewater connector (w). Its icon is represented by a small blue square and it is defined as follows:

connector w
  Real s(min=0.0);
  Real x(min=0.0);
  flow Real q;
end w;

Figure 6: Modelica code of the oxidation ditch
To build an oxidation ditch model, two wastewater connectors (w1 and w2) are needed, in addition to an output connector (o) of type Real and an input connector (r) of type Real. Governed by equations 1 to 6, its Modelica code is shown in Figure 6.

V Simulation Results

The goal of the simulation is to verify the derived model and to follow the progress of operations in the oxidation ditch. The control problem is a typical feedback control scenario: the concentration of organic matter at the effluent is estimated from the oxygen sensor reading, compared to a reference value, and the resulting error signal is used to drive the controller, which adjusts the aerator speed. Consequently, the level of dissolved oxygen is adapted, which is the key factor for microorganism production and hence for organic matter consumption. However, excess oxygen is undesirable, as it leads to the growth of filamentous bacteria, which cause sludge bulking; it is recommended to keep the oxygen level below 3 mg O2/L. The top-level module of such a system may be implemented in Dymola as illustrated in Figure 7. The proposed process controller is based on a PID controller with limited output, anti-windup compensation and set-point weighting, as illustrated in Figure 8. This PID controller is available in the Modelica Standard Library and is explained thoroughly in [11]. The controller is tuned and its output is limited to the range from zero to 1460 rpm, the maximum possible speed of our aerator.

Figure 7: The system model
Figure 8: PID controller with limited output

The simulated controller of this process is shown in Figure 9.

Figure 9: Process controller

The lower part of the controller has a PID module with limited output, anti-windup compensation and set-point weighting.
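The limited-output PID with anti-windup used above comes from the Modelica Standard Library; as a rough discrete-time illustration of the same idea (not the library's implementation, and with made-up gains), consider this Python sketch:

```python
class LimitedPID:
    """Discrete PID with output saturation and a simple anti-windup scheme:
    the integrator is frozen whenever the output is saturated.
    Gains and sample time are illustrative, not the paper's tuning."""

    def __init__(self, kp, ki, kd=0.0, out_min=0.0, out_max=1460.0, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        unsat = self.kp * err + self.ki * self.integral + self.kd * deriv
        out = min(self.out_max, max(self.out_min, unsat))
        if out == unsat:  # anti-windup: integrate only while unsaturated
            self.integral += err * self.dt
        return out

pid = LimitedPID(kp=50.0, ki=5.0)                 # hypothetical gains
cmd = pid.step(setpoint=20.0, measurement=15.0)   # -> 250.0
```

The 0 to 1460 rpm output limits match the aerator range quoted in the text; conditional integration is only one common anti-windup variant, and the Modelica block implements a more elaborate scheme with set-point weighting.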
Its output specifies the required speed of the aerator. The upper part has an integrator with output limited between zero and one. In regular operation the error signal is positive and the integrator saturates at unity. Once the dissolved oxygen exceeds the specified threshold (O ≈ 3 mg/L), the integrator output starts to decrease and eventually saturates at zero; this gives a measure of how persistently the dissolved oxygen exceeds the threshold. The output of this integrator is multiplied by the output of the limited PID module to generate the recommended aerator speed. The variable frequency drive (VFD) is modeled by a first-order block with a time constant of 5 s, giving an acceleration time of about half a minute to move in either direction between zero speed and rated speed. The wastewater flow pattern at the influent is illustrated in Figure 5. The concentrations of organic matter at the influent and effluent of the oxidation ditch are illustrated in Figure 10: the concentration at the influent equals 290 mg BOD5/L, while the resulting steady-state concentration at the effluent is reduced to 20 mg BOD5/L.

Figure 10: Concentration of organic matter at influent (S1 = 290 mg BOD5/L) and effluent (S2 = 20 mg BOD5/L) of the oxidation ditch

The speed of the aerator is shown in Figure 11, while the readings of the oxygen sensor are shown in Figure 12. Both are reasonable compared to field measurements and are within the acceptable range.

Figure 11: Aerator rotational speed
Figure 12: Oxygen concentration in the OD

VI Conclusion

This paper targeted a small hospital plant that depends on an oxidation ditch to treat wastewater. The study presents an easily controlled and managed model of the oxidation ditch of the wastewater treatment plant of the European Hospital.
The model provides a tool for testing the system performance and controlling its treatment process. It also helps in understanding the dynamics of the system and allows the design of a stable and robust controller for it. The following conclusions were drawn from this work:
1. The control system enabled the control of the oxygen concentration from 0 to 2 mg O2/L.
2. The system enabled the controlled reduction of the organic matter concentration from 290 to 20 mg BOD5/L by controlling the brush speed and, consequently, the concentration of dissolved oxygen in the OD.
In future work, this model will be extended to include the simulation of ammonia and nitrate reduction in the OD treatment unit. The work will develop the control system further and introduce new, effective techniques in the control process.

References
[1] Metcalf and Eddy. Wastewater Engineering: Treatment, Disposal, Reuse. New York, USA, McGraw-Hill (1979).
[2] Davis, M. and Cornwell, D. Introduction to Environmental Engineering. McGraw-Hill, 2006.
[3] Al-Sheikh, J. Design and Construct Alteration and Construction Works at the European Gaza Hospital. Gaza Strip, Palestine, Ministry of Health (2002).
[4] Abdelati, M., Felgner, F. and Frey, G. Modeling, simulation and control of a water recovery and irrigation system. The 8th International Conference on Informatics in Control, Automation and Robotics (ICINCO-8), Noordwijkerhout, Netherlands (2011) 323-329.
[5] Abdelati, M., Felgner, F. and Frey, G. Modeling wastewater pumping stations for cost-efficient control. 16th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) (2012).
[6] Reichl, G. WasteWater: a library for modeling and simulation of wastewater treatment plants in Modelica. Technische Universität Ilmenau, Ilmenau, Germany (2003).
[7] Ziehn, T., Reichl, G. and Arnold, E. Application of the Modelica library WasteWater for optimization purposes.
4th International Modelica Conference (2005) 351-356.
[8] U.S. Environmental Protection Agency (EPA). Wastewater Technology Fact Sheet: Oxidation Ditches. Retrieved September 2000 from http://water.epa.gov/scitech/wastetech/upload/2002_06_28_mtb_oxidation_ditch.pdf
[9] Abusam, A., Keesman, K.J., Spanjers, H., van Straten, G. and Meinema, K. A procedure for benchmarking specific full-scale activated sludge plants used for carbon and nitrogen removal. IFAC (2002).
[10] Pons, M., Mourot, G. and Ragot, J. Modeling and simulation of a Carrousel for long-term operation (2011, August 28). Retrieved September 2, 2011 from https://hal.archives-ouvertes.fr/hal-00632752/document
[11] Astrom, K. and Hagglund, T. PID Controllers: Theory, Design, and Tuning. North Carolina, USA, Instrument Society of America.

Journal of Engineering Research and Technology, Volume 7, Issue 1, April 2020

Effect of Pressure on the Performance of a Passive Two-Phase Closed Thermosyphon System Using R-134a

Abdelrahim Abusafa (Chemical Engineering Department, An-Najah National University, Nablus, P.O. Box 7, Palestine) and Aysar Yasin (Energy Engineering and Environment Department, An-Najah National University, Nablus, P.O. Box 7, Palestine). Corresponding author: abusafa@najah.edu. https://doi.org/10.33976/jert.7.1/2020/1

Abstract—A two-phase closed thermosyphon system for cooling high-heat-flux electronic devices was constructed and tested on a lab scale. The performance of the thermosyphon system was investigated using R-134a as the working fluid. The effects of the heat flux and the refrigerant pressure on the evaporator-side heat transfer coefficient were investigated. It was found that the heat transfer coefficient increases with increasing heat flux on the evaporator or with decreasing inside pressure. The heat transfer mode of the condenser (natural or forced) also affected the overall heat transfer coefficient of the cycle.
At a 200 W heating load, the values of the heat transfer coefficients were 32 and 1.5 kW/m²·°C for the natural and forced convection modes, respectively. The temperature difference between the evaporator and the refrigerant saturation temperature was found to depend on the heat flux and the pressure inside the system. At a 40 W heating load, the heat transfer coefficient was calculated to be 500, 3000 and 7300 W/m²·°C at reduced pressures of 0.152, 0.135 and 0.117, respectively. It can be concluded that such a thermosyphon system can be used to cool high-heat-flux devices, using an environmentally friendly refrigerant and without any need for power to force convection at the condenser.

Index Terms: heat transfer coefficient; two-phase closed-loop thermosyphon; R-134a; forced and natural convection.

1. Introduction

The heat dissipation of electronic components is growing, and traditional air cooling is becoming ineffective: it is limited to dissipating heat at a rate of about 0.05 W/cm². Higher air velocity or a considerably larger dissipation area is required when higher heat flux rates are encountered [1]. The thermal design must consider the system size, fit more components within a limited space, and reduce the acoustic noise generated by the heat sink fans. The thermal solution must dissipate the maximum power consumption of the electronic equipment while keeping it below its maximum operating temperature. Liquids have much higher thermal conductivities than gases; therefore liquid cooling is more effective than gas cooling. Two-phase heat transfer technologies such as heat pipes are used for cooling high-heat-flux devices [2-6]. Capillary pumped loops and thermosyphons are commonly applied to cooling microelectronic components and devices [7]. The closed-loop thermosyphon has a simple, reliable structure and transfers heat efficiently over long distances with only a slight reduction in temperature [8].
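The reduced pressures quoted in the abstract are the ratio of the operating pressure to the critical pressure of the working fluid. As a quick Python check, taking the critical pressure of R-134a as about 4.06 MPa (a literature value, not stated in the paper):

```python
P_CRIT_R134A = 4.059e6   # Pa, approximate critical pressure of R-134a (literature value)

def reduced_pressure(p_abs_pa):
    """Reduced pressure p_r = p / p_crit (dimensionless)."""
    return p_abs_pa / P_CRIT_R134A

# The paper's reduced pressures of 0.117-0.152 then correspond to
# absolute operating pressures of roughly 0.47-0.62 MPa:
for pr in (0.117, 0.135, 0.152):
    print(round(pr * P_CRIT_R134A / 1e6, 2), "MPa")
```

This back-of-the-envelope conversion is only meant to make the dimensionless numbers tangible; the paper itself does not report the absolute pressures this way.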
A cooling system based on a loop thermosyphon, composed mainly of a heating block, an evaporator, and an air-cooled condenser, was examined experimentally in [9]. Efficient, compact passive two-phase cooling with advanced micro-thermosyphon cooling systems was investigated in [10]. In this paper, a passive closed-loop two-phase thermosyphon system is investigated. With this technology no power is needed to transfer heat, and high heat fluxes can be dissipated. The two-phase passive thermosyphon is a gravity-dependent, wickless heat pipe. The system principally consists of an evaporator, a condenser, and risers. The evaporator is attached directly to the hot component. The condenser should be above the evaporator to exploit gravity, which returns the condensate easily to the evaporator. The riser copper tube was connected to the top part of the evaporator and to the down-comer copper tube. Circulation in the evaporator is initiated when the working fluid heats up and reaches boiling conditions. The net driving head, caused by the density difference between the liquid in the down-comer and the vapor/liquid mixture in the riser, must be able to overcome the pressure drop caused by the mass flow. Vapor bubbles start to form at the design temperature. When the working fluid becomes vapor, its density drops and it leaves the evaporator for the condenser via the riser tube, where it condenses and finds its way back to the evaporator via the down-comer to start a new circulation. The purpose of this research is to investigate the effect of pressure on heat transfer, heat transfer coefficients, and overall heat transfer coefficients.
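The circulation condition described above (the gravitational head from the density difference must exceed the loop pressure drop) can be put in numbers. In the sketch below the densities and loop height are hypothetical illustration values, not measurements from the paper:

```python
G = 9.81  # m/s^2, gravitational acceleration

def driving_head_pa(rho_downcomer, rho_riser_mix, height_m):
    """Gravitational driving pressure of a natural-circulation loop:
    the density difference between the liquid-filled down-comer and the
    vapor/liquid mixture in the riser, acting over the loop height."""
    return (rho_downcomer - rho_riser_mix) * G * height_m

# Hypothetical values: liquid R-134a around 1200 kg/m3, a riser mixture
# around 300 kg/m3, and 0.5 m between evaporator and condenser:
dp = driving_head_pa(1200.0, 300.0, 0.5)   # about 4.4 kPa of driving head
```

Circulation is sustained only while this driving head exceeds the frictional and gravitational losses around the loop, which is why the condenser must sit above the evaporator.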
The effect of different types of microstructures on the performance of the closed-loop thermosyphon system has been studied in [4, 8, 11]. The flow and heat transfer in a closed-loop thermosyphon have been studied theoretically and experimentally in [5-8] and [12-13]. The pressure drop in the riser and evaporator of an advanced two-phase thermosyphon loop has been investigated experimentally [14, 15]; it was found that the total pressure drop in the riser is the sum of the gravitational and frictional pressure drops [14, 15]. The pressure in a closed two-phase thermosyphon loop has a significant effect on the boiling heat transfer coefficient in the narrow channels of the thermosyphon evaporator [1]. An increased pressure level generally leads to a lower temperature difference and a smaller tubing diameter [16]. The thermal performance of a thermosyphon filled with R-134a was studied in [17]: the performance of the R-134a thermosyphon increased with high coolant mass flow rates, and high filling ratios and a greater temperature difference between bath and condenser affect the performance as well [17]. Pressure has a great influence on the critical heat flux (CHF) [18]: the limiting critical heat flux was found to decrease with increasing pressure but to increase with increasing diameter [18]. This paper is arranged as follows: Section 2 describes the thermosyphon system components, Section 3 illustrates the research methodology, Section 4 presents the analysis and discussion of the results, and Section 5 presents the conclusions of the study.

2. Cooling System Components

The research results are based on an experimental setup built in the laboratory [2]. The main components were the evaporator, condenser, riser pipe, down-comer pipe, seven thermocouples for temperature measurement, four pressure gauges for pressure measurement, two heating elements with potentiometers, and suitable insulation.
Fig. 1 illustrates the experimental setup of the investigated thermosyphon system. The system components and materials used are detailed below.

Fig. 1. Schematic diagram of the tested thermosyphon system

2.1 Evaporator: The tested evaporator was made from copper and weighed 440 grams. The outer dimensions of the evaporator were 63×50×20 mm, as shown in Fig. 2. The internal channel design and the top view of the tested evaporator are illustrated in Figs. 3 and 4, respectively.

Fig. 2. The external shape of the evaporator
Fig. 3. Internal channels of the tested evaporator
Fig. 4. Top view of the tested evaporator

The dimensions of the internal channels of the evaporator are illustrated in Figs. 3 and 4: four vertical channels with a diameter of 5 mm, a length of 37 mm, and a total internal surface area of 34.5 cm². Two cartridge heaters, each of 250 W capacity, were used in the experiment, inserted exactly into two cylindrical holes drilled in the evaporator. To minimize the contact resistance between the heaters and the evaporator surfaces, a special epoxy material was filled in between. The power of the heaters was equally distributed to the evaporator channels and was varied in steps from 4 to 150 W under natural convection condensation. For this purpose the heaters were connected to a potentiometer with a solid-state relay. Fig. 5 shows the position of the evaporator's internal channels with respect to the heater holes, together with the direction of refrigerant flow in the system. The heat supplied by the heaters was assumed to be dissipated entirely through the evaporator, without losses, due to the tight fixing of the heaters inside the evaporator as well as the good insulation of the evaporator with 3 mm thick rock wool.
2.2 Condenser: An aluminum-finned condenser, with the fins oriented vertically, that can be cooled by free or forced convection was used. The condenser has an outside surface area of 1.32 m², formed by 120 parallel plates of 4.4 × 25 cm with 3 mm spacing.

Fig. 5. Position of the evaporator's internal channels with respect to the heater holes

2.3 Riser and down-comer: Both the down-comer and the riser were constructed from copper tubing, which has high corrosion resistance against HFCs. The detailed dimensions of the riser and down-comer are given in Fig. 1.

2.4 Sight glasses: Two sight glasses were mounted on the riser and the down-comer close to the condenser. They gave a clear view of the fluid phases and allowed monitoring of any malfunction of the system.

2.5 Measuring devices: The input heat flux was determined by measuring the voltage and the resistance of each heater using a Fluke 45 dual multimeter and applying Ohm's law (P = V²/R). In each measurement the average values were used to eliminate the voltage fluctuations, which were less than 2%. Seven K-type thermocouple sensors were attached to the thermosyphon system to monitor the temperatures throughout the system; the position of each sensor is shown in the schematic diagram of the thermosyphon system in Fig. 1. The temperature readings were registered by a multi-channel thermometer (Lutron model TM-946). Four pressure gauges were used to read the pressures at the inlets and outlets of the evaporator and condenser; Fig. 1 shows the position of each pressure gauge. The pressure readings were recorded, and the corresponding saturation temperatures were estimated from saturated liquid-vapor tables for the refrigerant.

2.6 Working fluid: R-134a, 1,1,1,2-tetrafluoroethane (CF3CH2F), was used in this study.
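The input-power step described in 2.5 is plain Ohm's-law arithmetic; a minimal sketch, with made-up meter readings for illustration:

```python
def heater_power(voltage_v, resistance_ohm):
    """Input power of one cartridge heater from Ohm's law, P = V^2 / R (W)."""
    return voltage_v ** 2 / resistance_ohm

# e.g. a hypothetical reading of 50 V across a 25-ohm heater element:
p = heater_power(50.0, 25.0)   # -> 100.0 W
```

Averaging several such readings, as the paper does, smooths out the sub-2% voltage fluctuations mentioned in the text.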
the system was evacuated before charging the refrigerant. at the end of the charging process, the non-condensable gasses were vented out through the vents at the top of the condenser after heating the system.

3. research methodology
the research methodology was mainly based on an experimental setup of a two-phase closed thermosyphon system built in the laboratory. the effects of heat flux and pressure on the system performance were investigated. the heat flux was the main manipulated variable during the experimental work; it was increased in steps of 5 to 10 w, and the maximum heat flux was always kept below the critical heat flux (chf). after the system reached steady state, the temperatures at the designated points were recorded at 10-second intervals and averaged over three-minute windows.

4. results analysis and discussion
4.1 calculation of heat transfer coefficient at different heat loads: the experimental heat transfer coefficient was defined as the ratio of the heat flux to the temperature difference between the evaporator wall and the saturation temperature of the refrigerant, as in eq. (1). since it was difficult to measure the surface temperature of the channels, the temperature of the inner channel surface was assumed equal to the temperature measured inside the wall near the channel:

h = q / [a (te − tsat)]   (1)

where q is the heat load (w), a is the internal surface area of the evaporator channels (m²), te is the surface temperature of the evaporator channel (°c), assumed equal to ts, the measured temperature of the evaporator wall, and tsat is the saturation temperature at the working pressure. the heat transfer coefficient was affected by errors in the temperature difference between the wall and the fluid and in the heat flux measurement.
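eq. (1) reduces to a one-line calculation once q, a and the two temperatures are known. a minimal sketch (the values in the example call are illustrative, not measured data from this study):

```python
def heat_transfer_coefficient(q_w, area_m2, t_wall_c, t_sat_c):
    """experimental heat transfer coefficient per eq. (1):
    h = q / [a (te - tsat)], in w/(m^2 k)."""
    dt = t_wall_c - t_sat_c
    if dt <= 0:
        raise ValueError("wall must be hotter than the saturated fluid")
    return q_w / (area_m2 * dt)

# e.g. 50 w into the 34.5 cm^2 channel area with a 2.9 k superheat,
# giving roughly 5 kw/(m^2 k)
h = heat_transfer_coefficient(50.0, 34.5e-4, 27.9, 25.0)
```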
the temperature drop across the evaporator wall between the thermocouple location and the channel surface could in principle be subtracted from the measured wall-to-saturation temperature difference in the above equation; it was measured and found to be negligible, so it was excluded from the estimation [1]. the heat transfer coefficient depended strongly on the temperature difference between the evaporator wall and the refrigerant in the internal channels. the heat flux (q̇) supplied to the evaporator is calculated using eq. (2):

q̇ = q / a   (2)

the experimental value of the heat transfer coefficient was studied at different reduced pressures. fig. 6 shows the heat transfer coefficient versus the heat load dissipated into the evaporator channels using refrigerant r-134a with the natural convection (nc) heat transfer mode in the condenser. eq. (1) is used for calculating the experimental heat transfer coefficient. it is clear that the heat transfer coefficient increases with increasing heat load.

fig.6. the heat transfer coefficients for r134a at different heat loads using the natural convection (nc) mode in the condenser

the directly proportional relation between the heat transfer coefficient and the heat load indicates nucleate boiling heat transfer in the thermosyphon system. at low heat loads, the heat transfer coefficient with the free convection cooling mode was higher than that with forced convection (fc) at the same heat load. this was due to the higher saturation pressure in the thermosyphon system at natural convection. this appears clearly in fig. 7, which compares the forced and natural convection heat transfer coefficients. this result agrees with the conclusions obtained in [12].

fig.7. comparison between heat transfer coefficients for r134a at different heat loads and condenser cooling modes (fc and nc)

fig. 8 shows the relationship between the heat flux and the temperature difference between the evaporator wall and the saturation temperature of the refrigerant inside the channels with the natural convection mode of the condenser.

fig.8. heat flux for r134a versus the resulting temperature difference, using natural cooling in the condenser (nc)

4.2 experimental heat transfer coefficient at different heat loads and reduced pressures: fig. 9 illustrates the relationship between the heat transfer coefficient and the heat load at different reduced pressures. it shows that the heat transfer coefficient increases as the reduced pressure increases at all heat loads.

fig.9. comparison between heat transfer coefficients for different reduced pressures at different heat loads

this result has been confirmed by many researchers who studied the effect of pressure on the boiling heat transfer coefficient. it can also be generalized to different fluids, as the law of corresponding states implies that the variation of thermodynamic and transport properties with reduced pressure is similar for different fluids [1]. fig. 10 illustrates more clearly the relation between the heat transfer coefficient and the reduced pressure at a specific heat load.

fig.10. heat transfer coefficient values at different reduced pressures and a 50 w heating load

it can be seen in figs 9 and 10 that the heat transfer coefficient is promoted by increasing the reduced pressure. several investigators studied the effect of the pressure level on the heat transfer coefficient and reported a similar trend with different refrigerants [19].
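the reduced pressures quoted in this section follow directly from the gauge readings; a minimal sketch, assuming the standard r-134a critical pressure of about 4.059 mpa (a property-table value, not data from this paper):

```python
R134A_P_CRIT_MPA = 4.059  # critical pressure of r-134a (approximate)

def reduced_pressure(p_abs_mpa, p_crit_mpa=R134A_P_CRIT_MPA):
    """reduced pressure p_r = p / p_crit (dimensionless)."""
    return p_abs_mpa / p_crit_mpa

# e.g. an absolute system pressure of about 0.64 mpa gives p_r of about 0.157
pr = reduced_pressure(0.637)
```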
the relationship between the heat flux and the temperature difference at different reduced pressures is presented in fig. 11.

fig.11. relationship between the heat flux and temperature difference at different reduced pressures

it is clear that as the heat flux increases the temperature difference increases, but with different slopes depending on the saturation pressure in the thermosyphon system; the heat transfer coefficient increases with increasing saturation pressure. at reduced pressures of 0.157 and 0.135, the maximum temperature differences obtained were 2.9 ˚c at 15.2 kw/m² and 5.51 ˚c at 13.4 kw/m², respectively. the effect of reduced pressure on the temperature difference at different heat fluxes, using isobutane (r600a) refrigerant, has been studied in the literature [1]; it was found that the temperature difference decreases as the reduced pressure increases, while it increases with the heat flux.

5. conclusions
this study demonstrated that the heat transfer coefficient is directly proportional to the reduced pressure [2]. the temperature difference [teva–tsat] was found to depend on both the pressure inside the system and the heat flux applied to the evaporator. the heat transfer coefficient was highly dependent on the heat applied to the evaporator, increasing linearly with it. the heat transfer coefficient with free convection condensation was higher than that with forced convection condensation at the same heat load; this was due to the higher saturation pressure in the thermosyphon system under natural convection.

6. references
[1] r. khodabandeh, b.
palm, influence of system pressure on the boiling heat transfer coefficient in a closed two-phase thermosyphon loop, int. j. thermal sci. 41 (2002) 619-624. [2] yasin, a. m. m. (2007). cooling of high heat flux electronic devices by two-phase thermosyphon system (master thesis, an-najah national university). [3] l. l. vasiliev, micro and miniature heat pipes - electronic component coolers, applied thermal engineering, elsevier, new york, 28 (2008) 266-273. [4] p. chen, y. tang, x. k. liu, formation of integral fins function surface by extrusion-ploughing process, transactions of the nonferrous metals society of china, changshao china, vol. 11 (2006) 1029-1034. [5] daraghmeh, h., sulaiman, m., yang, k. s., & wang, c. c. (2019). investigation of separated two-phase thermosiphon loop for relieving the air-conditioning loading in datacenter. energies, 12(1), 105. [6] abreu, vasco tavares brito e. "test and optimization of a two-phase thermosyphon cooling system for microprocessors under real working conditions." (2017). [7] su dashi, liu yajun, zhou wei, a new kind of microstructures applied to closed loop thermosyphons, proceedings of the ieee international conference on mechatronics and automation, (2009) 2167-2172. [8] qiusheng liu, katsuya fukuda & purwono f. sutopo, experimental study on thermosyphon for shipboard high-power electronics cooling system, heat transfer engineering journal, volume 35, issue 11-12, july 2014, pages 1077-1083. [9] yeo, j., yamashita, s., hayashida, m. and koyama, s. (2014) a loop thermosyphon type cooling system for high heat flux. journal of electronics cooling and thermal control, 4, 128-137. doi: 10.4236/jectc.2014.44014. [10] chin lee ong, n. lamaison, j. b. marcinichen and j. r.
thome, "two-phase mini-thermosyphon electronics cooling, part 1: experimental investigation," 2016 15th ieee intersociety conference on thermal and thermomechanical phenomena in electronic systems (itherm), las vegas, nv, usa, 2016, pp. 574-581. doi: 10.1109/itherm.2016.7517599. [11] bahmanabadi, amir, meysam faegh, and mohammad behshad shafii. "experimental examination of utilizing novel radially grooved surfaces in the evaporator of a thermosyphon heat pipe." applied thermal engineering 169 (2020): 114975. [12] r. dobson and j. ruppersberg. flow and heat transfer in a closed loop thermosyphon. part i - theoretical simulation, journal of energy in southern africa, 18 (3) (2007). [13] r. dobson and j. ruppersberg. flow and heat transfer in a closed loop thermosyphon. part ii - experimental simulation, journal of energy in southern africa, 18 (3) (2007). [14] r. khodabandeh, pressure drop in riser and evaporator in an advanced two-phase thermosyphon loop, international journal of refrigeration 28 (2005) 725-734. [15] r. khodabandeh, thermal performance of a closed advanced two-phase thermosyphon loop for cooling of radio base stations at different operating conditions, applied thermal engineering 24 (2004) 2643-2655. [16] b. palm, r. khodabandeh, choosing working fluid for two-phase thermosyphon systems for cooling of electronics, j electron packaging, trans asme 125 (2003) 276-281. [17] adnan, samah ihsan, aouf abdulrahman ahmad, and adnan abdulamar abdulrasool.
"experimental study of wickless heat pipe with flat evaporator for used in cooling of electronic components." journal of the university of babylon for engineering sciences 27.2 (2019): 125-137. [18] pioro, i. l., d. c. groenveld, s. c. cheng, s. doerffer, a. z. vasic, comparison of chf measurement in r134a cooled tubes and water chf look-up table, int. j. of heat and mass transfer, 44 (2001) 73-88. [19] bao, z. y., d. f. fletcher, b. s. haynes, flow boiling heat transfer of freon r11 and hcfc123 in narrow passages, int. j. of heat and mass transfer, 43 (2000) 3347-335.

transactions template journal of engineering research and technology, volume 4, issue 1, march 2017 16

developing a security model for enterprise networks (smen)
aiman a. abu samra 1, khaled w. alnaji 2
1 aiman a. abu samra, department of computer engineering, faculty of engineering, islamic university of gaza, gaza strip, palestine
2 khaled w. alnaji, department of computer & industrial professions, university college of applied sciences, gaza strip, palestine, kalnaji@ucas.edu.ps

abstract—an enterprise network (en) supports thousands of users and interconnects many networks. an en integrates different operating systems and hosts hundreds of servers that provide several services such as web applications, databases, e-mail, and others. security threats represent a serious problem to the en: they try to damage enterprise confidentiality, integrity, and availability. security provides protection against attacks, hacking, and data theft. in this paper, we propose a security model (smen) for the en. the proposed model provides security at different layers and integrates both hardware and software security solutions. we performed a defense evaluation of the proposed model; the results show that smen was able to detect and prevent all attacks and malware that were induced by the metasploit framework.
performance evaluation shows that applying the proposed model has little negative effect on bandwidth utilization and hence on network performance.

index terms—enterprise network, security model, snort, ossec, intrusion detection/prevention.

i introduction
an enterprise network (en) is composed of a distributed infrastructure that connects different users, devices and branch networks. it includes high-performance computing servers, massive storage solutions, and high-speed networks for both lan and wan [1]. even though an enterprise network should meet a set of technical goals, it should also meet the business needs of the enterprise [2]. an enterprise network contains hundreds of network devices such as routers and switches. it integrates multiple technologies, protocols, software applications, and vendors. the en includes a data center that hosts different services such as web, e-mail, dns, ftp, and other services. the en includes a demilitarized zone (dmz) network [3], an internal network, an external network, and branch networks. it may also include other special networks such as management and monitoring networks. some branch networks are connected to the en through wan services such as vpn, leased line, and frame relay, while others may be connected using wireless technologies such as wi-fi, wimax, and microwave. an enterprise network has several requirements like availability, scalability, security, and mobility. the en requires an updated security model that reflects changes in technology and services, so we have to build and maintain robust network security for both end users and servers. this paper proposes a security model (smen) for the enterprise network. in this paper, we model the enterprise network on a real network that was in use while developing the proposed model. we use both hardware devices and open source tools such as snort, ossec, and splunk to implement our model. performing deep inspection of the traffic that passes through the en represents an important step in this work.
we need to provide secure connections between branch networks and en services. the smen model takes into account the defense-in-depth strategy by implementing security at more than one layer. defense evaluation was done using metasploit, while performance evaluation was done using freemeter.

ii related works
there are a number of security models designed for enterprise networks. some of them depend on firewalls and routers, while others depend on intrusion detection/prevention systems. security model design is affected by different factors such as the number of users, the supported services, and others. in [4], the proposed security model is implemented with connectivity fault management (cfm). cfm provides an end-to-end traffic carrier in the metro ethernet domain; it defines protocols and practices of operations, administration, and maintenance for paths through 802.1 bridges and lans. the model proposed in [4] provides effective and reliable isolation of individual traffic flows and the associated lans using cfm, but the paper did not provide a real implementation of network security concepts. the paper in [5] introduced a new concept, network business security. using the proposed concept, the paper defines the object of information security in three parts: data security, network system security, and network business security. we think this research provides a formal description of a network business security model, but there is no practical implementation using either hardware or software; it is not sufficient to use only routers and a firewall to provide network security for an en. authors in [6] proposed a network security model for the campus network. internet access exposes the campus network to attacks and intrusions, so it becomes important to provide a secure campus network that has the ability to defend against intrusions and attacks.
however, the paper in [6] did not provide a security solution for branch networks, the proposed model has no clear implementation of intrusion detection/prevention systems, and the research provides neither a defense evaluation nor a performance evaluation. the author in [7] used routers and firewalls to design and implement a network security model for cooperative networks. he listed the network security vulnerabilities in routers and firewalls and discussed prevention mechanisms against different types of threats and attacks. the model used packetshaper, a traffic management appliance, to monitor and control network traffic passing over wide-area networks. we think packetshaper alone is not sufficient, as further devices and tools are required to provide a security model for an en.

iii enterprise network model enm
here we introduce the topology of the enm and explain each component. we also describe the functions of these components and their effects on the enm. figure 1 shows the topology of our enm. the enterprise network includes several network devices such as layer 2 switches and layer 3 switches, used as access, distribution, and core devices. a router, on the other hand, is used for wan service connections. other devices such as firewalls are used to filter passing inbound and outbound traffic. an intrusion prevention system is used to detect and prevent potential attacks according to predefined signatures. we also introduced different connections such as frame relay, leased line, and vpn. while frame relay provides shared bandwidth, a leased line provides dedicated bandwidth for the connected network. vpn presents a secure, cheap solution for connecting branch networks to enterprise networks. http/https are the most used protocols in enterprise networks. we introduce the dmz network, which includes public services that public users (via the internet) are allowed to access [3]. the dmz network hosts a web server, e-mail server, dns server, ftp server, web-based applications, and others.
figure 1: enm topology

iv security model for enm
our proposed security model (smen) for the enm includes nine modules, see figure 2. in the smen model, we provide an efficient and secure enterprise network. we use a hardware firewall, which includes a network intrusion prevention system (nips) module. we prefer to place the nips module inside the firewall for the following reasons:
• reducing budget: one appliance rather than two (firewall and nips).
• reducing false positive alarms that are generated where there is no intrusion or attack.
• simplifying the determination of attacks using real ip addresses, avoiding addresses translated by nat.
• providing intrusion prevention for the dmz network and the internal network.
in addition to the nips module, the model uses snort nids for monitoring and analyzing traffic from/to the dmz and internal networks. we intend to get deep inspection of the traffic that passes through the dmz and internal networks, so we run snort nids in active response mode in order to prevent potential intrusions and attacks.

figure 2: modules of the proposed security model

figure 3 shows the proposed smen model of the enterprise network and the placement of the nips and nids in the en. we implement two snort nids to monitor real-time traffic for the dmz and the internal network. the first snort nids is connected to both the dmz layer 2 switch and the management network; the second is connected to the internal layer 3 switch and the management network. it is necessary to provide security for branch networks, which would require a nips for each branch, with the associated management and cost requirements. we prefer to use a nips module inside the router at each branch network to reduce cost and management requirements. monitoring and management of the routers are done remotely from the en management network.
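as an illustration of the signature-based filtering the snort deployment above relies on, a rule of the kind such a setup would use (the sid, message, and content pattern here are hypothetical, not taken from the paper; in an inline/active-response deployment the action would typically be drop rather than alert):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"web attack - possible sql injection"; \
    flow:to_server,established; content:"UNION SELECT"; nocase; \
    classtype:web-application-attack; sid:1000001; rev:1;)
```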
as nids and nips are not sufficient, the proposed model provides a host intrusion prevention system (hips) for individual hosts to provide security at the server and host level. we use hips to protect the servers which host the services of the enterprise network. hips will protect our servers from zero-day attacks: it uses anomaly detection, which provides the ability to stop unknown attacks.

figure 3: the proposed security model of the enterprise network

as shown in figure 3, we implement ossec on each server of our enterprise network. we also have an ossec server as a central management server for monitoring and analyzing the real-time traffic received from the servers (ossec agents). the ossec manager is configured for active response in order to stop malicious activity. traffic analysis is considered the starting point for designing a security model of the enm. we use "netflow analyzer professional plus" and mrtg as traffic analysis tools to perform network traffic analysis. traffic analysis was done during a work week (five eight-hour days). the en has an internet bandwidth of 150 mbps, which is provided by a local internet service provider (isp). the maximum inbound traffic of internet usage is about 27.2 mbps (18.13%), while the maximum outbound traffic is about 105.3 mbps (70.2%). the average inbound traffic of internet usage is about 20 mbps (13.33%), while the average outbound traffic is about 85.5 mbps (57%). we observe that internet usage is nearly the same for most work days in the week. the en interconnects its branches via wan services through a bandwidth of 90 mbps. the maximum inbound traffic used by the branch networks is about 11.6 mbps (12.88%), while the maximum outbound traffic is about 69.7 mbps (77.44%). the average inbound traffic used by the branch networks is about 10.291 mbps (11.43%), while the average outbound traffic is about 60 mbps (66.66%).
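the utilization percentages quoted above follow from dividing each measured rate by the corresponding link capacity; a minimal sketch:

```python
def utilization_pct(rate_mbps, capacity_mbps):
    """link utilization as a percentage of capacity."""
    return 100.0 * rate_mbps / capacity_mbps

# figures from the text: 150 mbps internet link, 90 mbps wan link
internet_avg_in = utilization_pct(20.0, 150.0)  # about 13.33 %
wan_max_out = utilization_pct(69.7, 90.0)       # about 77.44 %
```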
we also observe that the traffic used by the branch networks is nearly the same for most work days of the week. http applications/protocols occupy the largest share of the total traffic, about 93%. https is the second largest consumer, with about 4% of the total traffic. from the branch networks to the public network (internet) and/or the dmz network, http applications/protocols again occupy the largest share of the total traffic, about 64%; users in the branch networks use the http protocol to reach the public network, i.e. the internet. https is the second largest consumer there, with about 33% of the total traffic, so we observe that https traffic is fairly large. other applications/protocols (as stated previously) consume less than 1% of the total traffic. from one branch network to the public network (internet) and/or the dmz network, http applications/protocols occupy the largest share of the total traffic, about 74%, and https is the second largest consumer, with about 5% of the total traffic. domain services occupy about 2% of the total traffic; they are used by the domain controller for domain management. another 2% of the traffic is consumed by oracle applications, which use the ncube-lm license manager at port 1521. custom applications, which are programmed and developed by the programmers of the enterprise network, consume less than 1% of the total traffic. according to this traffic analysis of our enterprise network, we observed that most traffic is represented by http/https applications/protocols. intrusion detection and prevention systems (idps) can inspect layer 7 applications/protocols like http, ftp, and smtp. once the idps detects an intrusion, it will apply the corresponding actions that we previously defined.
even though an idps provides an additional security layer to our enterprise network, it cannot be used alone. we will use a firewall beside the idps system to provide a defense-in-depth strategy for our enterprise network: a nids never replaces a firewall device, encryption, or other authentication methods.

v implementation and evaluation
a implementation
to implement and evaluate the performance of the proposed smen model, we used a number of devices and tools; table 1 describes them. in the smen proposed model, we used a hardware firewall with an integrated ips module, a fortigate-3140b. the fortigate-3140b provides up to 58 gigabits per second (gbps) firewall throughput and includes integrated ips, application control, user-based policies, and endpoint policy enforcement. we used snort to perform real-time traffic analysis and packet logging on the enterprise network. snort provides multiple functions: it can do protocol analysis, content searching, and content matching [8]. we used ossec as the hips in our proposed model. we prefer ossec as the hips for many reasons: it has the ability to inspect encrypted protocols such as https traffic, it is a powerful correlation and analysis engine, and it integrates log analysis and does file integrity checking [9]. moreover, we can use ossec to monitor the windows registry, detect rootkits using host-based anomaly detection, and provide centralized policy enforcement.

table 1: devices and tools used
device name | specification | operating system | installed tools
pc1-ossec | cpu: core i5, 1.8 ghz; ram: 6gb | windows 7 os 64-bit | freemeter, ossec agent
pc2-no-ossec | cpu: core i5, 1.8 ghz; ram: 6gb | windows 7 os 64-bit | freemeter
pc3-metasploit | | centos 6.3 | metasploit pro 4.7
srv-snort-dmz | dell poweredge 2950 server; cpu: intel xeon 5300 sequence, dual independent 1066 mhz; ram: 32gb | centos 6.3 | snort 2.9.5.5
srv-snort-internal | dell poweredge 2950 server; cpu: intel xeon 5300 sequence, dual independent 1066 mhz; ram: 32gb | centos 6.3 | snort 2.9.5.5
srv-ossec | dell optiplex 755; cpu: core i5, 1.8 ghz; ram: 6gb | centos 6.3 | ossec-hids 2.7.1
srv-splunk | dell optiplex 755; cpu: core i5, 1.8 ghz; ram: 6gb | centos 6.3 | splunk-6.0
firewall | fortigate-3140b firewall with nips | fortigate | fortigate-3140b

in the smen proposed model, we used the ossec server with the iptables service (linux firewall) to implement and build a host-based intrusion prevention system. in this case ossec acts as a hids: when ossec hips detects an intrusion, the offending ip address is placed into iptables for a period of time to prevent its access to the network. when no more scan activity is present, iptables drops the ip address from the table. open source tools were used in implementing the smen model of the enterprise network. splunk is a free tool that can be integrated with both snort nids and ossec hips. when integrated with snort, splunk provides field extractions for snort alert logs, dashboards, graphs, event types, tags, real-time data correlation, and reports [10]. we can also integrate splunk with ossec for better correlation of the alerts generated by ossec. splunk generates reports for future analysis and management and displays logs in graphic format.

b evaluation
in this section, we evaluate both the defense and the performance of the proposed smen model. for the defense evaluation, we used the metasploit tool as a penetration testing tool, while bandwidth utilization was used for the performance evaluation. we used the metasploit framework to induce malicious code and attacks on the enterprise network. metasploit launched 688 different attacks in order to exploit security vulnerabilities of the enterprise network [11]. the smen model successfully detected all of these attacks; we had the most recent signature updates for both the firewall and the snort nids. we used a personal computer without ossec (pc2-no-ossec) to evaluate bandwidth utilization before applying the proposed smen model; pc2-no-ossec is connected directly to the internet service, bypasses the firewall and snort nids, and has no ossec agent.
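the ossec-plus-iptables active response described above boils down to inserting a drop rule for the offending address and deleting it after a timeout. a minimal sketch that only builds the iptables command lines (in a real active-response script they would be passed to subprocess.run with root privileges; the function names and the example address are illustrative):

```python
import shlex

def block_cmd(ip):
    """iptables command to drop all traffic from an offending address."""
    return ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"]

def unblock_cmd(ip):
    """matching delete, issued after the active-response timeout expires."""
    return ["iptables", "-D", "INPUT", "-s", ip, "-j", "DROP"]

# render the commands for inspection instead of executing them
print(shlex.join(block_cmd("203.0.113.7")))
print(shlex.join(unblock_cmd("203.0.113.7")))
```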
figure 4 shows the graph of bandwidth utilization of pc2: the x-axis represents time in seconds, while the y-axis represents bandwidth in megabits (mb). we can observe that pc2 consumes bandwidth in varying amounts over time, but generally it consumes little bandwidth; there is no heavy bandwidth consumption.

figure 4: bandwidth utilization before applying the proposed model

a graph of the bandwidth utilization after implementing the proposed smen model is presented in figure 5; only a small additional amount of bandwidth is consumed. compared with figure 4, the effect of the ossec agent on network bandwidth is almost unnoticeable. the firewall still has more effect on network performance; we have to remember that the firewall contains the ips module. ossec affects cpu and ram utilization rather than bandwidth utilization and hence network performance.

figure 5: bandwidth utilization after applying the proposed model

vi conclusion
the proposed security model smen provides security at different layers. it integrates both hardware and software solutions. we performed a defense evaluation of the proposed smen model; the results show that smen was able to detect and prevent all attacks and malicious code that were induced by the metasploit framework. the performance evaluation shows that applying the proposed smen model has an unnoticeable impact on the system's performance and little effect on bandwidth utilization and hence network performance.

references
[1] s. m. nadaf, h. k. rath, and a. simha, "a novel approach for an enterprise network transformation and optimization," in india conference (indicon), 2012 annual ieee, 2012, pp. 317-322.
[2] m. weinstein, "planning enterprise networks to meet critical business needs," in enterprise networking mini-conference, 1997. enm-97. in conjunction with the icc-97, first ieee, 1997, pp. 3-13.
[3] e. dart, l. rotman, b. tierney, m. hester, and j. zurawski, "the science dmz: a network design pattern for data-intensive science," in ieee/acm annual supercomputing conference (sc13), denver co, usa, 2013.
[4] s. singh, "ethersec: an enterprise ether-network security model," in networks, 2008. icon 2008. 16th ieee international conference on, 2008, pp. 1-5.
[5] w. kehe, z. tong, l. wei, and m. gang, "security model based on network business security," in computer technology and development, 2009. icctd'09. international conference on, 2009, pp. 577-580.
[6] w. zongjiang, "a new type of intelligent network security model of the campus study," in computer research and development (iccrd), 2011 3rd international conference on, 2011, pp. 325-329.
[7] s. alabady, "design and implementation of a network security model for cooperative network," int. arab j. e-technol., vol. 1, pp. 26-36, 2009.
[8] snort. (2016, october 1). snort | network intrusion and detection system. available: https://www.snort.org/
[9] ossec. (2016, october 1). home -- ossec. available: http://ossec.github.io/
[10] splunk. (2016, october 1). operations intelligence, log management, application management, enterprise security and compliance | splunk. available: https://www.splunk.com/
[11] metasploit. (2016, october 1). penetration testing software | metasploit. available: https://www.metasploit.com/

dr. aiman a. abu samra is an associate professor at the computer engineering department at the islamic university of gaza. he received his phd degree from the national technical university of ukraine in 1996. his research interests include computer networks and mobile computing. dr. aiman is a member of the review committee of the international arab journal of information technology (iajit).
eng. khaled w. alnaji graduated from the islamic university of gaza (iug-gaza), gaza strip, palestine (bachelor of computer engineering) in 2006. he received the master of computer engineering from iug-gaza, gaza strip, palestine in 2014.
He is currently an associate teacher at University College of Applied Sciences (UCAS). His interests include computer networks and network security. E-mail: kalnaji@ucas.edu.ps

Journal of Engineering Research and Technology, Volume 3, Issue 4, December 2016

Factors Affecting Increasing Waste in Gaza Strip Construction Sites

Eyad Haddad, Ali Tayh

Abstract— Waste has been recognized as a major problem in the construction industry. Not only does waste have an impact on the efficiency of the construction industry, but also on the overall economy of the country. The main objective of this study is to identify the main causes of waste in the Gaza Strip construction industry. The research primarily employed questionnaire surveys to collect the required data, following a thorough literature review and structured interviews with professionals who have work experience in the field of construction in the Gaza Strip. A comprehensive list of factors was identified and categorized into five groups, with a total of thirty-five factors. Then, eighty-four questionnaires were distributed to contracting companies working in the field of construction projects in the Gaza Strip. This study focused on material waste on construction sites in the Gaza Strip, including building work. A statistical analysis was conducted to calculate the mean, standard deviation (SD) and standard error (SE) for each factor; results were accepted when the value of the standard error was less than 0.2. The results are discussed in order to illustrate the extent of the impact of these factors on increasing waste in construction projects in the Gaza Strip.

Index Terms— Construction projects, Gaza Strip construction industry, construction waste, factors.

1. Introduction

The increasing quantities of waste have created a bad image for the construction industry.
In addition, ineffective planning and control of materials on site can lead to poor performance and undesirable project outcomes [1]. Nevertheless, the economic impact, the contribution to employment and the benefits of investment in the construction industry are enormous. Construction activity forecasts the general direction of an economy, and for this reason the industry is often described as a leading economic sector. According to Horvath (1999), the construction industry is one of the largest and most important industries, being at the same time the main consumer of natural resources and one of the largest polluters [10]. Construction materials contribute significantly to the cost of a construction project; therefore, material wastage has an adverse impact on construction cost, the contractor's profit margin and construction duration, and can be a possible source of dispute among parties to a project [2] (Fellows, Langford and Newcombe, 2002) [3]. The cost of material waste generated on building sites represents an avoidable cost in construction which can either be eliminated or reduced. Hoe (2005) stated that the extent to which waste can be prevented in the construction industry has been a long-debated issue [4]. Whereas it is impossible to completely eliminate all wastage, the concern should be how practices in the local industry can be managed to minimize waste. The main objective of this study is to identify the main causes of waste in the Gaza Strip construction industry in order to establish an initial framework for future studies to develop methods for the prevention and elimination of waste causes inherent in the construction process. The objectives of this research have been achieved through eighty-four questionnaires received from construction firms operating in the Gaza Strip.
Enshassi (1996) found, from a study of 86 housing projects in the Gaza Strip, that the material losses resulting from direct and indirect waste were about 3.6-11%, significantly higher than the values normally allowed (2-4.5%) [5].

2. Methodology

A quantitative approach was selected to determine the importance of the factors affecting the causes of material waste in construction projects in the Gaza Strip. In this research, site visits, a structured questionnaire, interviews and literature related to the construction industry were used for data gathering. This study was conducted to show the degree of influence of 35 factors divided into five groups, namely: the on-site practice group, materials handling group, materials transportation group, site management group, and site supervisor group. These factors were selected through a careful review of the literature and previous research in the field of construction waste. The population of this study includes contracting companies of the first, second and third categories that have a valid registration with the Palestinian Contractors Union (PCU) in the Gaza Strip.

Fig. 1: Field of company specialization

Fig. 2: Mean and ranking for the main groups
Site supervisor group (5): mean 4.59, rank 1
Transportation group (3): mean 4.01, rank 2
Materials handling group (2): mean 3.84, rank 3
Site management group (4): mean 3.64, rank 4
On-site practice group (1): mean 3.62, rank 5
The target population was distributed among three levels of contracting companies: the first class has 34 companies, the second class has 21 companies, and the third class has 29 companies. A statistical analysis was conducted to calculate the mean, standard deviation (SD) and standard error (SE) for each factor.

3. Results and Discussion

3.1 Work experience of respondents

Figure 1 shows that 94% of the contracting companies are involved mainly in building construction works, while 6% of them are involved in building works as a secondary activity. This gives high confidence in the quality of the answers, because the study involved building construction projects.

3.2 Factors causing material waste

In this part, the respondents were asked to identify the main causes of material waste.

3.2.1 Main groups

The questionnaire of this study considered 35 factors which cause material waste in construction, distributed into five groups, namely: on-site practice; materials handling; material transportation; site management; and site supervision. Figure 2 summarizes the collected data on the causes of material waste and illustrates the mean and ranking of each group. The survey revealed that the site supervisor group is the major cause of material waste, with the highest mean of 4.59 and the top ranking, while the lowest mean, 3.16, is for the on-site practice group.

3.2.3 Mean and ranking of the on-site practice group (G1)

The means of the sub-factors of the on-site practice group which cause material waste are presented in Table 1 in descending order; the rank of each factor is also listed. "Materials damage on site", "improper cutting of materials" and "manufacturing defects" had the highest means: 4.263, 3.915 and 3.790 respectively, while "lack of materials (due to closure)", "burglary, theft and vandalism" and "over sizing structural elements during execution" had the lowest ranks, with means 2.742, 3.417 and 3.501 respectively.
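The per-factor statistics described above (mean, standard deviation and standard error, with a result accepted when SE < 0.2) can be sketched in a few lines of Python. The Likert-scale responses below are hypothetical placeholders, not the survey's actual data:

```python
import math
from statistics import mean, stdev

def factor_stats(responses):
    """Mean, sample standard deviation, and standard error for one factor."""
    m = mean(responses)
    sd = stdev(responses)                # sample SD (n - 1 denominator)
    se = sd / math.sqrt(len(responses))  # standard error of the mean
    return m, sd, se

# Hypothetical 5-point Likert responses for a single waste factor (n = 20)
scores = [4] * 10 + [3] * 5 + [5] * 5
m, sd, se = factor_stats(scores)
accepted = se < 0.2   # acceptance criterion used in the study
```

With more respondents per factor the standard error shrinks, which is why the study's SE < 0.2 cutoff is attainable with its eighty-four returned questionnaires.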
Table 1: Mean and ranking of the on-site practice group (G1)

Factor | Mean | Rank
Materials damage on site | 4.263 | 1
Improper cutting of materials | 3.915 | 2
Manufacturing defects | 3.79 | 3
Poor quality of materials | 3.765 | 4
Existence of unnecessary materials on site | 3.69 | 5
Overproduction (production of a quantity greater than required or earlier than necessary) | 3.638 | 6
Lack of on-site materials control | 3.615 | 7
Poor storage of materials | 3.541 | 8
Using excessive quantities of materials | 3.514 | 9
Over sizing structural elements during execution | 3.501 | 10
Burglary, theft and vandalism | 3.417 | 11
Lack of materials (due to closure) | 2.742 | 12

The results in Table 1 show that the "materials damage on site" factor was ranked in the first position, with a mean value of 4.263. It was ranked in the fourth position among the ten factors that caused material waste in the materials/on-site factors group in Al-Moghany's study [2], and it was also ranked in the first position among the thirty-five factors that caused material waste (see Table 6). The results also showed that "lack of materials (due to closure)" was ranked as the lowest factor increasing waste, with a mean value of 2.742; this factor was ranked in the ninth position, with a mean value of 3.33, in Al-Moghany's study [2].

3.2.4 Mean and ranking of the materials handling group (G2)

Table 2 shows the means of the sub-factors of the materials handling group which cause material waste, in descending order; the rank of each factor is also listed. "Improper handling of materials on site", with a mean of 3.975, had the highest ranking, and "insufficient instructions about handling materials on site", with a mean of 3.701, had the lowest rank. The results in Table 2 show that the "improper handling of materials on site" factor was ranked in the first position, with a mean value of 3.975.
It was also ranked in the third position among the thirty-five factors that caused material waste (Table 6). This problem is due to a lack of training, a lack of manuals for dealing with materials at construction sites, and insufficient instructions about handling materials on site.

3.2.5 Mean and ranking of the materials transportation group (G3)

Table 3 shows the means of the sub-factors of the materials transportation group which cause material waste, in descending order; the rank of each factor is also listed. The results in Table 3 show that the "improper materials" factor was ranked in the first position, with a mean value of 4.156; it was also ranked in the second position among the thirty-five factors that caused material waste (Table 6). The "storing materials in faraway stores" factor was ranked in the second position, with a mean value of 3.863; it was also ranked in the sixth position among the thirty-five factors that caused material waste (Table 6).

3.2.6 Mean and ranking of the site management group (G4)

The means of the sub-factors of the site management group which cause material waste are presented in Table 4 in descending order; the rank of each factor is also listed. "Poor qualification of the contractor's technical staff assigned to the project", "shortage of technical professionals in the contractor's organization" and "lack of a material and time waste management plan" had the highest means: 3.878, 3.782 and 3.775 respectively, while "lack of a quality management system aimed at waste minimization", "providing the project team with insufficient information" and "contractors' slowness in taking decisions" had the lowest ranks, with means 3.415, 3.516 and 3.564 respectively. The results in Table 4 show that the "poor qualification of the contractor's technical staff assigned to the project"
factor was ranked in the first position, with a mean value of 3.878. It was also ranked in the fifth position among the thirty-five factors that caused material waste (Table 6). Lack of supervision and poor qualification of the contractor's technical staff were identified as variables that had a detrimental effect when they occurred [6]. Alwi et al. (2002) considered lack of supervision a major factor causing waste in construction projects; it was ranked in the sixth position in group (1), the human resource category [6].

Table 4: Mean and ranking of the site management group (G4)

Factor | Mean | Rank
Poor qualification of the contractor's technical staff assigned to the project | 3.878 | 1
Shortage of technical professionals in the contractor's organization | 3.782 | 2
Lack of material and time waste management plan | 3.775 | 3
Ineffective control of the project progress by the contractor | 3.692 | 4
Poor site layout | 3.626 | 5
Delay in project commencement | 3.578 | 6
Contractors' slowness in taking decisions | 3.564 | 7
Providing project team with insufficient information | 3.516 | 8
Lack of a quality management system aimed at waste minimization | 3.415 | 9

3.2.7 Mean and ranking of the site supervisor group (G5)

The means of the sub-factors of the site supervisor group which cause material waste are presented in Table 5 in descending order; the rank of each factor is also listed. "Suspension of work by the owner", "poor control of supervision and delay in giving instructions" and "change orders" had the highest means: 3.741, 3.707 and 3.627 respectively, while "owner's delay in handing over the site to the contractor", "poor coordination and communication among the consultant, the owner and the contractor" and "slow response from the consultant team to contractor inquiries" had the lowest ranks, with means 3.441, 3.45 and 3.463 respectively. The results in Table 5 show that the "suspension of work by the owner" factor was ranked in the first position, with a mean value of 3.741.
It was also ranked in the twelfth position among the thirty-five factors that caused material waste (Table 6). Al-Moghany (2006) mentioned that suspension of work by the owner was ranked in the fifth position among the ninety-two factors that caused material waste [2].

Table 2: Mean and ranking of the materials handling group (G2)

Factor | Mean | Rank
Improper handling of materials on site | 3.975 | 1
Duplication of transporting material on site | 3.854 | 2
Insufficient instructions about handling materials on site | 3.701 | 3

Table 3: Mean and ranking of the materials transportation group (G3)

Factor | Mean | Rank
Improper materials | 4.156 | 1
Storing materials in faraway stores | 3.863 | 2

The results also showed that "owner's delay in handing over the site to the contractor" was ranked as the lowest factor increasing waste, with a mean value of 3.441; this factor was ranked in the seventh position, with a mean value of 3.03, in [2]. Al-Khalil et al. (1999) mentioned that delay in delivering the site to the contractor by the owner was ranked in the twenty-sixth position among sixty factors which cause waste and project delay [8].

3.3 Overall ranks of all factors causing material waste

Table 6 lists the factors causing material waste in descending order. It indicates that the five highest factors are "materials damage on site", "improper materials", "improper handling of materials on site", "improper cutting of materials" and "poor qualification of the contractor's technical staff assigned to the project", with means 4.263, 4.156, 3.975, 3.915 and 3.878 respectively. It has been noticed that "poor coordination and communication among the consultant, the owner and the contractor", "owner's delay in handing over the site to the contractor", "burglary, theft and vandalism", "lack of a quality management system aimed at waste minimization" and "lack of materials (due to closure)" are the five lowest factors causing material waste, with means 3.45, 3.441, 3.417, 3.415 and 2.742.
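The overall ranking in Table 6 is simply the per-factor means sorted in descending order. As a sketch, using four of the means reported above (a subset of the study's thirty-five factors):

```python
# Subset of factor means reported in the paper; sorting by mean,
# descending, reproduces the overall ranking scheme of Table 6.
factor_means = {
    "materials damage on site": 4.263,
    "improper materials": 4.156,
    "improper handling of materials on site": 3.975,
    "lack of materials (due to closure)": 2.742,
}

ranked = sorted(factor_means.items(), key=lambda kv: kv[1], reverse=True)
overall_rank = {name: i + 1 for i, (name, _) in enumerate(ranked)}
```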
Table 5: Mean and ranking of the site supervisor group (G5)

Factor | Mean | Rank
Suspension of work by the owner | 3.741 | 1
Poor control of supervision and delay in giving instructions | 3.707 | 2
Change orders | 3.627 | 3
Poor cooperation of the owner towards settling contractors' payments and claims | 3.641 | 4
Delay in performing inspection and testing by the consultant team | 3.565 | 5
Poor qualification of consultant engineer's staff assigned to the project | 3.517 | 6
Slow response from the consultant team to contractor inquiries | 3.463 | 7
Poor coordination and communication among the consultant, the owner and the contractor | 3.45 | 8
Owner's delay in handing over the site to the contractor | 3.441 | 9

Table 6: Mean and rank of overall factors causing material waste

Factor | Group | Mean | Rank
Materials damage on site | G1 | 4.263 | 1
Improper materials | G3 | 4.156 | 2
Improper handling of materials on site | G2 | 3.975 | 3
Improper cutting of materials | G1 | 3.915 | 4
Poor qualification of the contractor's technical staff assigned to the project | G4 | 3.878 | 5
Storing materials in faraway stores | G3 | 3.863 | 6
Duplication of transporting material on site | G2 | 3.854 | 7
Manufacturing defects | G1 | 3.79 | 8
Shortage of technical professionals in the contractor's organization | G4 | 3.782 | 9
Lack of material and time waste management plan | G4 | 3.775 | 10
Poor quality of materials | G1 | 3.765 | 11
Suspension of work by the owner | G5 | 3.741 | 12
Poor control of supervision and delay in giving instructions | G5 | 3.707 | 13
Insufficient instructions about handling materials on site | G2 | 3.701 | 14
Ineffective control of the project progress by the contractor | G4 | 3.692 | 15
Existence of unnecessary materials on site | G1 | 3.69 | 16
Poor cooperation of the owner towards settling contractors' payments and claims | G5 | 3.641 | 17
Overproduction (production of a quantity greater than required or earlier than necessary) | G1 | 3.638 | 18
Change orders | G5 | 3.627 | 19
Poor site layout | G4 | 3.626 | 20
4. Conclusions

The construction industry has been found to be a major generator of waste. This study focused on material waste on construction sites in the Gaza Strip, including building work; it identified the major causes of waste on construction sites and presented a comprehensive analysis of these causes. A questionnaire-based survey was used to elicit the attitudes of contracting companies towards the major factors causing waste on Gaza Strip construction sites. 84 questionnaires were distributed among three levels of contracting companies: the first class has 34 companies, the second class has 21 companies, and the third class has 29 companies. The respondents were asked to indicate the degree of influence of 35 factors which increase waste, divided into five groups, namely: the on-site practice group, materials handling group, materials transportation group, site management group, and site supervisor group. The results indicated that materials damage on site, improper materials, improper handling of materials on site, improper cutting of materials, and poor qualification of the contractor's technical staff assigned to the project are the highest factors increasing waste on construction sites in the Gaza Strip. The results also indicated that lack of materials (due to closure), lack of a quality management system aimed at waste minimization, burglary, theft and vandalism, owner's delay in handing over the site to the contractor, and poor coordination and communication among the consultant, the owner and the contractor are the factors least affecting increasing waste on Gaza Strip construction sites.

5. References

[1] Jayamathan, J. & Rameezdeen, R. (2014). Influence of labour arrangement on construction material waste generation. Structural Survey, 32(2): 76-88.
[2] Al-Moghany, S. S. (2006). Managing and minimizing construction waste for Gaza Strip (unpublished master's thesis).
Faculty of Engineering, Deanery of Graduate Studies, Construction Management Programme, The Islamic University of Gaza, Gaza, Palestine.
[3] Fellows, R., Langford, D., Newcombe, R. & Urry, S. (2002). Construction Management in Practice. 2nd ed., United Kingdom: Blackwell Science Limited, pp. 180-181.
[4] Hoe, L. K. (2005). Causal model for management of subcontractors in waste minimization (unpublished PhD thesis). Department of Building, National University of Singapore, Singapore.
[5] Enshassi, A. (1996). Materials control and waste on building sites. Building Research and Information, 24(1): 31-34.
[6] Alwi, S., Hampson, K. and Mohamed, S. (2002). Non value-adding activities: a comparative study of Indonesian and Australian construction projects. Proceedings of the Tenth Annual Conference of the International Group for Lean Construction IGLC-10, Gramado, Brazil.
[7] Eyad Haddad (2015). A construction resources management system for Gaza Strip building contractors. Published PhD thesis. Al Azhar University, Cairo, Egypt.
[8] Al-Khalil, M. I., and Al-Ghafly, M. A. (1999). Important causes of delay in public utility projects in Saudi Arabia. Journal of Construction Engineering and Economics, Vol. 17, 647-655.
[9] Eyad Haddad (2006). A construction materials management system for Gaza Strip building contractors. Unpublished MSc thesis. The Islamic University of Gaza (IUG).
[10] Horvath, A. (1999). Construction for sustainable development - a research and educational agenda. Department of Civil and Environmental Engineering, University of California, Berkeley, USA. Retrieved December 16, 2011, from http://www.ce.berkeley.edu/~tommelein/cemworkshop/horvath.pdf
[11] Enshassi, A., Mohamed, S. & Abushaban, S. (2009). Factors affecting the performance of construction projects in the Gaza Strip. Journal of Civil Engineering and Management, 15(3): 269-280.
[12] Garas, G. L., Anis, A. R., and El Gammal, A. (2001). Materials waste in the Egyptian construction industry. Proceedings of the Ninth Annual Conference of the International Group for Lean Construction IGLC-9, Singapore.

Table 6 (continued): Mean and rank of overall factors causing material waste

Factor | Group | Mean | Rank
Lack of on-site materials control | G1 | 3.615 | 21
Delay in project commencement | G4 | 3.578 | 22
Delay in performing inspection and testing by the consultant team | G5 | 3.565 | 23
Contractors' slowness in taking decisions | G4 | 3.564 | 24
Poor storage of materials | G1 | 3.541 | 25
Poor qualification of consultant engineer's staff assigned to the project | G5 | 3.517 | 26
Providing project team with insufficient information | G4 | 3.516 | 27
Using excessive quantities of materials | G1 | 3.514 | 28
Over sizing structural elements during execution | G1 | 3.501 | 29
Slow response from the consultant team to contractor inquiries | G5 | 3.463 | 30
Poor coordination and communication among the consultant, the owner and the contractor | G5 | 3.45 | 31
Owner's delay in handing over the site to the contractor | G5 | 3.441 | 32
Burglary, theft and vandalism | G1 | 3.417 | 33
Lack of a quality management system aimed at waste minimization | G4 | 3.415 | 34
Lack of materials (due to closure) | G1 | 2.742 | 35

Eyad Haddad: Assistant Professor, Civil Engineering Department, Faculty of Engineering, University of Palestine, Gaza, Gaza Strip.
Ali Tayh: Associate Professor, Civil Engineering Department, Faculty of Engineering, University of Palestine, Gaza, Gaza Strip.

Journal of Engineering Research and Technology, Volume 6, Issue 1, April 2019

Producing Porous Asphalt in Palestine According to ASTM D7064

Shafik Jendia¹, Mousa Krezem²
¹ Professor of Highway Engineering, Islamic University of Gaza, Palestine
² MSc.
Degree of Infrastructure Engineering, Islamic University of Gaza, Palestine

Abstract: Porous asphalt (PA) pavements offer an alternative technology for stormwater management because they differ from traditional asphalt pavement designs in their high air void ratio, which can reach more than 20%. The aim of this research is to produce porous asphalt according to ASTM D7064 and to study the different characteristics and properties of porous asphalt. The practical program of this study starts with studying the properties of the aggregates used in preparing the porous asphalt samples, then the characteristics of the bitumen used (density, ductility, etc.). After that, the porous asphalt mix was prepared, and the PA samples were subjected to the required laboratory tests to determine the air void ratios (Va%), the optimum bitumen content (OBC) and the permeability coefficient. The test results show that Va% ranges between 21% and 29% over the several bitumen contents. The optimum bitumen content (OBC) of the non-modified porous asphalt was 3.5%, and the permeability coefficient reached approximately 55 m/day.

Keywords: Porous asphalt, air void ratio, optimum bitumen content, draindown test, Cantabro abrasion test, Marshall test, ASTM D7064.

Introduction

Porous asphalt (PA) materials are open-graded mixes composed of relatively uniformly graded aggregate and bitumen or modified binders. PA is mainly used as a drainage layer, either at the pavement surface, called a permeable friction course (PFC), or within the pavement structure [1]. A layer of porous asphalt, with a thickness in the range of 20-100 mm and an air void ratio generally between 18-22%, is normally placed as an overlay on top of an existing conventional concrete or asphalt surface. This overlay is typically referred to as porous asphalt (PA), permeable friction course (PFC), or open-graded friction course (OGFC) [2].
The high air void ratio of PA compared with dense asphalt concrete makes it a stormwater best-management practice: water goes through it rapidly, without any ponding at the surface, faster than through other traditional dense-graded pavements, as shown in Figure (1) [3].

Figure (1): Cross section of porous asphalt pavement [4]

PA is used all over the world mainly for two pavement applications. The first is as a wearing course on high-speed roadways, where a thin layer of porous asphalt, 20 to 50 mm thick, is placed over a conventional impermeable pavement surface; this overlay allows water to drain into the porous layer and then move laterally within it, which greatly helps to improve roadway safety. The second application is for stormwater management purposes, where the quantity of stormwater runoff is significantly reduced due to infiltration through the porous asphalt layer. In this type of application, a thicker porous asphalt layer (50-100 mm thick) is placed over an open-graded aggregate base course that acts as a reservoir for stormwater before it infiltrates into the underlying soil [5]. PA was previously produced in the laboratory based on the German specifications, as a new technology in Palestine, in a published scientific paper titled "Porous asphalt: a new pavement technology in Palestine" [6].

Objectives of the Study

The main aim of this paper is to study the purpose, properties, advantages and disadvantages of porous asphalt, and finally to determine the possibility of producing it in Palestine. More precisely, this paper represents a trial to:
1. Determine the most suitable aggregate gradation for the PA mix according to ASTM specifications.
2. Obtain the optimum design of PA by determining Va%, OBC, stability and flow.
3. Evaluate the permeability of PA.
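Objective 3 concerns the permeability of PA (the abstract reports a coefficient of roughly 55 m/day). This excerpt does not describe the measurement method used in the study; purely as an illustration, a falling-head permeameter calculation is one common way to obtain a permeability coefficient, and all dimensions and head readings below are hypothetical:

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Falling-head permeability coefficient: k = (a*L)/(A*t) * ln(h1/h2).
    a: standpipe cross-section area, A: specimen cross-section area,
    L: specimen height, t: elapsed time, h1/h2: initial/final head.
    Units must be consistent (here cm and s, giving k in cm/s)."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Hypothetical readings for a laboratory specimen
k_cm_s = falling_head_k(a=2.0, A=80.0, L=6.5, t=10.0, h1=30.0, h2=15.0)
k_m_day = k_cm_s * 864.0   # cm/s -> m/day (0.01 m/cm * 86400 s/day)
```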
Advantages of Porous Asphalt

Despite its useful properties and benefits, PA has not yet been paved in Palestine. Some of these advantages are presented below [7, 8]:
1. Reduction in splash and spray, and reduced aquaplaning
2. Reduction in light reflection and headlight glare
3. Noise reduction at the street surface
4. Improvement in skid resistance
5. Rut resistance
6. Less need for curbing and storm sewers
7. Recharging of the groundwater supply

Disadvantages of Porous Asphalt

Despite its useful properties and advantages, a high level of care is still needed before using PA, due to the following disadvantages [5, 7, 8, 9, 10, 11]:
1. Stripping: the large amounts of water that penetrate porous asphalt due to its high permeability can keep the asphalt wet for long periods, since some of this water is not properly drained and remains in the asphalt's structure. This continuously wet condition increases the rate of stripping of bitumen from the aggregate surfaces [5].
2. Raveling: raveling is a pavement distress resulting from the loss of individual aggregate particles at the surface of the pavement due to a loss of adhesion between the binder and the aggregate. In porous asphalt, due to the high air void content, oxygen, sunlight and water have access to a larger surface area of the mixture, which increases the exposure of the bitumen to air compared with impervious asphalt mixes. This exposure can lead to premature oxidation of the binder, making it brittle and leading to raveling [7].
3. Aging: one of the main properties of porous asphalt mixes is their high porosity, which creates an environment in which the binder film in the mix is exposed to oxygen, sunlight, water, etc.
This continuous exposure results in hardening of the bitumen, so aggregates can be stripped easily from the asphalt mix, which finally leads to a shorter service life for porous asphalt than for conventional dense mixes [8].
4. Draindown: draindown occurs in porous asphalt as a result of the lack of fine aggregate, which is accepted in order to reach higher permeability rates than dense asphalt mixes; the excess asphalt binder (bitumen) used to increase durability can also cause draindown. Draindown appears as excess asphalt binder that drains out of a porous asphalt mixture. Moreover, high summer temperatures can be a main cause of draindown: they soften the binder, which then gradually moves downwards through the asphalt layers under gravity until it stabilizes in a cooler portion of the pavement. This phenomenon causes serious problems in porous asphalt, where the excess bitumen moving downwards through the asphalt structure can clog the layers and thus decrease the permeability of the porous asphalt [7]. To prevent draindown in porous asphalt, stabilizing additives should be used, such as polymers, which stiffen the asphalt binder, and fibers, which absorb the additional binder content [9].
5. Winter maintenance: the lower thermal conductivity of porous asphalt makes it colder than dense asphalt mixes, so snow settles on the surface earlier and remains there longer [10].
Some other issues that need a high degree of care when dealing with porous asphalt are:
6. Reduction in porosity [11]
7. Shorter service life

Materials and Testing Program

A. Stage One: Determination of Aggregate and Bitumen Properties

A.1 Aggregate properties

Four main types of aggregate were used in this experimental study. Table (1) highlights the properties of the different aggregate types.
Table (1): Aggregate test results according to ASTM specifications

Test | Results | Reference
Specific gravity (g/cm³) | 2.64-2.79 | ASTM C127-15, C128-15
Water absorption (%) | 0.6-0.8 | ASTM C128
Abrasion loss value (%) | 13.1-16.2 | ASTM C128
Sieve analysis of aggregates and blending results | see Appendix (A) |

[Figure (2) plots percent passing versus sieve size (mm) against the min./max. specification limits, together with the suggested gradation.]

A.2 Bitumen properties

The bitumen was tested in order to determine its mechanical properties. Table (2) illustrates the bitumen characteristics.

Table (2): Bitumen test results according to ASTM specifications

Test | Results | Reference
Density | 1.03 g/ml | ASTM D3289-08
Penetration | 59 (1/10 mm) | ASTM D5/D5M-13
Flash point | 289 °C | ASTM D92-12b
Ductility | > 150 cm | ASTM D113-07
Solubility | 99.5% | ASTM D2042-09
Softening point | 49 °C | ASTM D36

B. Stage Two: Blending Process

According to the ASTM D7064 specifications, three different aggregate gradation curves were selected using a blending process. The blending process (Appendix (A)) is based on a trial mathematical approach: trial percentages are proposed for each aggregate type, and the resulting passing percentage for each sieve is compared with its equivalent in the specification limits. If the resulting gradation is within the acceptable limits, the process is finalized, no further adjustments need to be made, and this gradation is considered the chosen one; if not, the aggregate size proportions are adjusted and the calculations repeated until a gradation is reached in which all aggregate sizes are within the acceptable limits [12]. This procedure was used three times to select three gradations within the ASTM D7064 limits. One of the three gradation curves is shown in Figure (2), and the calculation of all gradations is illustrated in Appendix (A).
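The trial-and-error blending check described above can be sketched as follows. The sieve set, per-aggregate passing percentages, trial proportions and specification limits below are illustrative placeholders only (the study's actual gradations are in its Appendix (A)):

```python
def blend_passing(passing_by_aggregate, proportions):
    """Combined percent passing per sieve for a trial blend.
    passing_by_aggregate: one list of percent-passing values per aggregate
    (one value per sieve); proportions: trial fraction of each aggregate
    (should sum to 1.0)."""
    n_sieves = len(passing_by_aggregate[0])
    return [sum(p[i] * w for p, w in zip(passing_by_aggregate, proportions))
            for i in range(n_sieves)]

def within_limits(blended, limits):
    """True if every sieve's blended passing lies inside its (min, max) band."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(blended, limits))

# Illustrative data: 2 aggregates, 3 sieves
passing = [[100.0, 60.0, 10.0],   # coarse aggregate
           [100.0, 95.0, 40.0]]   # fine aggregate
trial = [0.7, 0.3]                # trial proportions; adjust and repeat if out of limits
limits = [(100, 100), (60, 80), (15, 25)]

blended = blend_passing(passing, trial)
ok = within_limits(blended, limits)
```

If `ok` is false, the proportions are adjusted and the calculation repeated, which is exactly the iterative loop the blending process describes.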
stage three: determination of pa properties
c.1 selection of the most suitable gradation curve
this stage aims to select the most suitable gradation from the three proposed in stage 2, based on the maximum air void ratio. to determine the air void ratio (va %) in porous asphalt, 12 porous asphalt samples (4 samples for each of the 3 gradations) were produced with a trial asphalt content of 6 %, as stated in the astm d7064 specifications [13]. after that, the bulk specific gravity (gmb) and the maximum theoretical specific gravity (gmm) were calculated and used to find the percent air void ratio (va %) for each asphalt mix using equation (1):
va % = ((gmm − gmb) / gmm) × 100 …. equation (1)
figure (2): suggested gradation curve in comparison with astm limits
the resulting air void ratios are shown in table (3):
table (3): gmb, gmm & va % for 3 different aggregate gradations
item | gradation i | gradation ii | gradation iii
gmb | 1.946 | 1.866 | 1.857
gmm | 2.427 | 2.428 | 2.424
va % | 19.82 | 23.13 | 24.36
based on the results in table (3), it is clear that gradation iii (appendix (a)) is the most suitable one, since it gives the highest air void ratio (24.36 %), with the percentages of the four aggregate types stated in table (4).
table (4): aggregate percentages of the most suitable gradation for porous asphalt
aggregate type | percentage
filler (0/0.075 mm) | 3 %
simsimia (0/12 mm) | 17 %
adasia (0/19 mm) | 75 %
folia (0/25 mm) | 5 %
total | 100 %
c.2 conducting draindown and cantabro abrasion tests
in order to determine the obc, 39 pa samples were prepared using 6 different bitumen contents (3.5–6 % with 0.5 % increments) and subjected to laboratory tests, as shown in tables (5) and (6):
table (5): number and purpose of the produced pa samples in the test program
no. of produced pa samples | purpose
6 | air void ratio calculations
6 | draindown test (astm d6390)
18 | cantabro abrasion tests (astm c131)
9 | marshall test (stability and flow)
table (6): test results of the mechanical properties of pa using 6 different bitumen contents
bitumen content (%) | va (%) | draindown (%) | abrasion loss (%)
3.5 | 28.9 | 0.04 | 61
4 | 25 | 0.07 | 59.6
4.5 | 21.6 | 0.09 | 57.1
5 | 27.3 | 0.12 | 55.4
5.5 | 28 | 0.16 | 51.3
6 | 24.4 | 0.21 | 47.8
figure (3): air void ratio (va %) vs. bitumen content (%)
as stated in astm d7064, the selected open graded friction course (ogfc) should be the one which meets the following specifications [13]:
- the total air void ratio should be a minimum of 18 %
- the draindown value should not exceed 0.3 %
- the abrasion loss of un-aged specimens in the cantabro test should not exceed 20 %, where the cantabro abrasion criterion is optional in the judgment, as per the astm d7064 recommendations.
table (6) and figure (3) indicate that as the bitumen content increases, the air void ratio decreases until the bitumen content exceeds about 4.5 %, after which the air void ratio increases again. this can be attributed to the low filler content relative to the increased bitumen content (especially at the high contents of 5, 5.5 and 6 %) in the pa mixes. the high bitumen content ultimately decreases the bulk specific gravity (gmb) and thus increases the calculated air void ratio of the pa sample (eq. 1). accordingly, it appears that the unmodified pa mix cannot be used with bitumen contents higher than 4.5 %. from the previous results, it is clear that the 3.5 %, 4 % and 4.5 % bitumen contents are the three contents that best met the astm d7064 specifications in terms of air void ratios and draindown values. therefore, these 3 bitumen contents were used to produce 9 new porous asphalt samples to be tested for stability and flow (marshall test), as presented in the next step.
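as an illustration, the air void computation of equation (1) and the astm d7064 screening used above can be sketched in python (a minimal sketch, not part of the original study; the gmb/gmm values are those reported in table (3), and the thresholds are the astm limits quoted above):

```python
# air void ratio per equation (1): va% = (gmm - gmb) / gmm * 100
def air_void_ratio(gmb: float, gmm: float) -> float:
    """percent air voids from bulk (gmb) and max theoretical (gmm) specific gravity."""
    return (gmm - gmb) / gmm * 100

# gmb / gmm pairs reported in table (3) for the three trial gradations
gradations = {"i": (1.946, 2.427), "ii": (1.866, 2.428), "iii": (1.857, 2.424)}

# gradation iii yields the highest computed ratio, matching the paper's selection
va = {name: round(air_void_ratio(gmb, gmm), 2) for name, (gmb, gmm) in gradations.items()}

# astm d7064 mandatory screening used in the text: va >= 18 % and draindown <= 0.3 %
def meets_astm(va_pct: float, draindown_pct: float) -> bool:
    return va_pct >= 18.0 and draindown_pct <= 0.3
```

for example, gradation i reproduces the reported value: air_void_ratio(1.946, 2.427) ≈ 19.82 %.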
c.3 conducting the marshall test to determine the optimum bitumen content (obc)
9 pa samples were prepared using the most suitable bitumen contents (3.5 %, 4 % and 4.5 %), with 3 samples for each bitumen content, to determine their stability and flow using the marshall test. this test identifies the obc as the content giving the highest stability value. the results are given in table (7).
table (7): stability and flow values for pa according to the used bitumen content
bitumen content (%) | stability (kn) | flow (mm)
3.5 | 4.95 | 3.6
4 | 3.46 | 3.8
4.5 | 4.44 | 4
it clearly appears that a bitumen content of 3.5 % gives the highest stability of the asphalt mix (504.55 kg ≈ 4.95 kn); accordingly, 3.5 % is taken as the obc.
d. stage four: determination of the permeability coefficient (k) of the porous asphalt mix
to evaluate the permeability of pa, the falling head test was used. the testing process included preparing 2 pa samples with 2 different bitumen contents: 3.5 % (the obc) and 4 % (for comparison purposes). table (8) highlights the final results of the permeability coefficient for each bitumen content, where the results indicate that using a bitumen content of 3.5 % results in a higher permeability of pa. more details on calculating the permeability coefficient are presented in appendix (b).
table (8): permeability coefficient (k) for porous asphalt using 2 different bitumen contents
bitumen content | permeability coefficient (k) (m/day)
3.5 % | 55.00
4 % | 52.56
conclusions
based on the experimental work results for porous asphalt mixtures, the following conclusions can be drawn:
a. porous asphalt can easily be produced in palestine using the locally available materials according to the astm specifications (astm d7064).
b. porous asphalt has a high air void ratio that can reach more than 25 %, compared with the air void ratio of dense asphalt concrete, which lies between 3 and 7 %. c.
the use of different bitumen contents leads to different air void ratios; the air void ratios were in the range of 21–29 % for bitumen contents between 3.5 and 6 %.
d. the optimum bitumen content of porous asphalt is 3.5 %, which is lower than the bitumen content of dense asphalt concrete; the latter can reach more than 5 % due to its higher durability compared with porous asphalt types.
e. the optimum bitumen content of 3.5 % gives the highest air void ratio and the lowest draindown percentage, in addition to a normal value of asphalt stability.
f. the bitumen content in porous asphalt is inversely related to the abrasion loss values: higher bitumen contents lead to lower abrasion loss values.
g. due to the higher air void ratio of porous asphalt compared to dense asphalt concrete, its stability values are lower than those of conventional asphalts.
recommendations
a. it is recommended to start using porous asphalt in palestine in order to gain its useful properties, especially those related to decreasing runoff water amounts thanks to its higher permeability.
b. it is recommended to use highly permeable base course and subbase layers under porous asphalt to increase the efficiency of water infiltration, especially given the high permeability of the porous asphalt layer, which reaches 55 m/day.
c. it is recommended to conduct more research on porous asphalt to improve its durability and water infiltration characteristics, especially regarding the internal road layers (base course and subbase).
d. it is recommended to conduct specific studies on the financial costs of porous asphalt and to compare them with the costs of conventional types of asphalt.
e. it is recommended to start preparing unique palestinian specifications for porous asphalt and its properties.
f. modifiers should be used in porous asphalt mixtures to enhance the stability and abrasion loss values beyond those obtained in this study.
references
[1] federal aviation administration (faa). (2001). “hot mix asphalt paving handbook.” advisory circular no. 150/5370-14a, u.s. department of transportation, washington, d.c.
[2] barrett, m. e., and shaw, c. b. (2006). “stormwater quality benefits of a porous asphalt overlay (no. fhwa/tx07/0-4605-2)”. center for transportation research, university of texas at austin.
[3] putman, b. j., and kline, l. c. (2012). “comparison of mix design methods for porous asphalt mixtures”. journal of materials in civil engineering, 24(11), pp. 1359-1367.
[4] permeable pavement, https://stormwater.pca.state.mn.us/index.php?title=file:permeable_pavement_volume_credit_2.png, accessed on 27 november 2018.
[5] ali, n., ramli, m. i., and hustim, m. (2012). “porous asphalt's contribution to road safety and environment”, 8th international symposium on lowland technology, september 11-13, 2012, bali, indonesia.
[6] jendia, s., aldahdooh, z., aburahma, m., et al. (2018). “porous asphalt: a new pavement technology in palestine”. journal of engineering research and technology, 5(1).
[7] lyons, k. r., and putman, b. j. (2013). “laboratory evaluation of stabilizing methods for porous asphalt mixtures”. construction and building materials, 49, 772-780.
[8] hoban, t. w. s., liversedge, f., and searby, r. (1985). “recent developments in pervious macadam surfaces”, proc. 3rd eurobitumen symp., the hague, pp. 635-640.
[9] shankar, a. u., suresha, s. n., and saikumar, g. m. v. s. (2014). “properties of porous friction course mixes for flexible pavements”. indian highways, 42(3).
[10] palatová, m. (2012). “new trends in the construction of flexible pavements”, m.sc. thesis, brno university of technology.
[11] van heystraeten, g., and moraux, c. (1990). “ten years' experience of porous asphalt in belgium”. transportation research record, (1265).
[12] jendia, s. (2000).
“highway engineering - structural design”. gaza: dar el-manara library.
[13] american society for testing and materials (astm). astm d7064/d7064m-08, “standard practice for open-graded friction course (ogfc) mix design”.
appendix a: aggregates blending
table a.1: suggested percentages for the porous asphalt aggregate mix. for each aggregate type, the first row gives the % passing of that aggregate at each grain size (mm), and the second row gives its weighted contribution to the final mix.
grain size (mm): 0.075 | 0.18 | 0.3 | 0.6 | 2.36 | 4.75 | 9.5 | 12.5 | 19 | 25 | suggested % in final mix
filler (passing): 59.68 | 76.94 | 88.73 | 98.06 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 3.00 %
filler (weighted): 1.79 | 2.31 | 2.66 | 2.94 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00
simsimia (passing): 1.34 | 1.46 | 1.57 | 1.62 | 22.00 | 54.70 | 95.63 | 100.00 | 100.00 | 100.00 | 17.00 %
simsimia (weighted): 0.23 | 0.25 | 0.27 | 0.28 | 3.74 | 9.30 | 16.26 | 17.00 | 17.00 | 17.00
adasia (passing): 0.38 | 0.45 | 0.52 | 0.56 | 0.63 | 0.70 | 23.90 | 91.16 | 100.00 | 100.00 | 75.00 %
adasia (weighted): 0.29 | 0.34 | 0.39 | 0.42 | 0.47 | 0.53 | 17.93 | 68.37 | 75.00 | 75.00
folia (passing): 0.54 | 0.59 | 0.64 | 0.72 | 0.77 | 1.52 | 13.01 | 35.64 | 95.31 | 100.00 | 5.00 %
folia (weighted): 0.03 | 0.03 | 0.03 | 0.04 | 0.04 | 0.08 | 0.65 | 1.78 | 4.77 | 5.00
∑ % passing: 2.33 | 2.92 | 3.35 | 3.67 | 7.25 | 12.90 | 37.83 | 90.15 | 99.77 | 100.00 | total 100.0 %
sieve size (mm): 0.08 | 0.15 | 0.30 | 0.85 | 2.36 | 4.75 | 9.50 | 12.50 | 19.00 | 25.00
astm d7064/d7064m-08 limits, wearing 0/12.5 (min): 2 | 5 | 10 | 35 | 85 | 100
astm d7064/d7064m-08 limits, wearing 0/12.5 (max): 4 | 10 | 25 | 60 | 100 | 100
appendix (b): determination of the permeability of the porous asphalt mix
the permeability coefficient (k) of the porous asphalt samples was calculated using the falling head test method, in which k can be found using equation (b.1) below:
k = (q × l) / (a × h × t) ……………. equation (b.1)
where:
- q: volume of water infiltrated through the porous asphalt sample (cm3)
- l: sample height (cm)
- a: cross-sectional area of the asphalt sample (cm2)
- h: height of the water column (pressure head) (cm)
- t: elapsed time to infiltrate the water through the asphalt sample (sec.)
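the falling head computation of equation (b.1), together with the cm/sec to m/day conversion used in the reported results, can be sketched in python (a minimal sketch, not part of the original study; the input values are those reported for sample 1 in table (b.1)):

```python
# falling head permeability, equation (b.1): k = (q * l) / (a * h * t)  [cm/sec]
def permeability_cm_per_sec(q_cm3: float, l_cm: float, a_cm2: float,
                            h_cm: float, t_sec: float) -> float:
    return (q_cm3 * l_cm) / (a_cm2 * h_cm * t_sec)

# 86400 sec/day divided by 100 cm/m gives a factor of 864
CM_PER_SEC_TO_M_PER_DAY = 86400 / 100

# sample 1 (3.5 % bitumen) values from table (b.1)
k = permeability_cm_per_sec(q_cm3=1923, l_cm=7.8, a_cm2=78.54, h_cm=50, t_sec=60)
print(round(k, 4))                             # ~0.0637 cm/sec
print(round(k * CM_PER_SEC_TO_M_PER_DAY, 2))   # ~55.0 m/day
```

the same call with the sample 2 values (q = 1862.67 cm3, l = 7.85 cm, a = 80.12 cm2) reproduces the reported 52.56 m/day.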
during the execution of the laboratory program, 2 porous asphalt samples with 2 different bitumen contents (3.5 % and 4 %) were prepared for the permeability test. table (b.1) below presents the results of the test.
table (b.1): falling head test data
item | sample 1 (3.5 % bitumen) | sample 2 (4 % bitumen)
q (cm3), trial 1 | 1,893 | 1,877
q (cm3), trial 2 | 1,946 | 1,861
q (cm3), trial 3 | 1,930 | 1,850
q (cm3), average | 1,923 | 1,862.67
l (cm) | 7.8 | 7.85
d (cm) | 10 | 10.1
a (cm2) = π d2 / 4 | 78.54 | 80.12
h (cm) | 50 | 50
t (sec.) | 60 | 60
from the data in table (b.1), equation (b.1) can be used to find k as follows:
for sample 1 (3.5 % bitumen): k = 0.063659282 cm/sec. = 55.002 m/day
for sample 2 (4 % bitumen): k = 0.060833581 cm/sec. = 52.56 m/day
journal of engineering research and technology, volume 4, issue 2, june 2017
developing and evaluating training programs on energy efficient building design: the iug experience, palestine
ahmed s. muhaisen 1 and omar s. asour 2
1 department of architecture, iug, gaza, palestine, e-mail: amuhaisen@iugaza.edu.ps
2 department of architecture, iug, gaza, palestine, e-mail: oasfour@iugaza.edu.ps
abstract—nowadays, great research efforts are devoted to investigating energy efficiency practices in buildings as a response to the rapid consumption of depleting fuels and the associated environmental challenges. this paper presents a systematic approach that was implemented at the islamic university of gaza (iug), palestine, to develop and evaluate a training program on energy efficiency in buildings. the aim of the training program was to bring together engineers and architects from a variety of governmental, non-governmental, and international organizations to learn about and discuss energy efficiency practices in buildings, considering the local conditions of the gaza strip. this study reports the methods used to design, implement, and assess the program, including a questionnaire, a focus group, and a reflection workshop.
the study concluded that there is a great need in the gaza strip for such training courses. there is also a need to expand the scope of the training to cover further categories of people involved in the construction sector, and to use additional training formats such as on-the-job and over-distance training.
index terms—energy, efficiency, buildings, training, gaza
i introduction
humanity's current dependence on rapidly depleting fossil fuels forms a great challenge that directly affects the security of the planet. from the perspective of sustainable development, two actions are required: to rationalise this consumption, and to exploit the available renewable energy resources. the greatest potential for realising this change lies in the buildings sector, which is a main source of global co2 emissions [1]. the role of energy efficient building design strategies is significant here: to protect the environment, rationalize resource consumption, and allow for financial savings by implementing energy-efficiency practices. this paper aims to present a systematic mixed qualitative and quantitative approach that was implemented at the islamic university of gaza (iug) to design, implement, and assess a training program on energy efficient building design. this training program is part of the peeb project, an austrian-funded project through appear. the project is carried out through a two-year partnership between iug and tu wien, and aims to provide a structured program of activities to promote energy efficient building design practices in the gaza strip. the project activities included developing an academic course that was delivered to the architecture students at iug [2], organizing an international conference on energy efficiency [3], establishing a computer lab equipped with simulation tools and environmental performance measurement devices, and carrying out a training program for local engineers and architects.
the latter activity, in terms of its implementation and evaluation, is the main focus of this paper and is discussed in the following sections.
ii energy situation in gaza
the gaza strip has been suffering from a lack of energy, especially electricity, for many years. it is estimated that the shortage of electricity is about 40-50% of the actual needs of the people in the gaza strip [4]. as a result, the electricity supply is cut off for about 8-12 hours a day. this has led over the previous years to a severe deterioration in most aspects of life. this gap between supply and demand is attributed to many reasons, including the unstable political situation, the hard economic conditions, the deteriorated distribution grids and the limited available sources of electricity [4]. thus, saving energy in buildings and utilizing the available renewable energy resources is required as an immediate action to tackle this problem. this gains further importance in view of the fact that more than 70% of the supplied electricity is consumed by residential buildings, and that solar energy is abundant in the gaza strip all year round [5]. producing energy efficient buildings is a well-recognized approach to building design and construction in response to the energy challenges in most countries of the world. applying this approach in the gaza strip is expected to reduce the dependence on depleting conventional energy resources, limit the adverse environmental impacts of burning fossil fuels and create a more sustainable and environmentally friendly built environment. energy efficiency refers to using less energy to produce the same output [6]. in general, energy efficient buildings are characterized by the use of efficient energy systems that ensure high energy performance in all building operation aspects (e.g. lighting) [7]. however, as indicated in figure 1, energy should also be conserved in
the construction stage, through the use of low embodied energy materials at all building life cycle stages. in addition, building design should include on-site renewable energy technologies to reduce the building's reliance on electricity and other non-renewable energy sources. the use of pv and solar thermal systems is a common practice in this regard.
figure 1: the concept of energy efficiency in buildings [6, adapted]
iv the role of training in the field of energy efficiency
energy efficiency in building design, construction and operation is not a new issue in the construction sector. it has been discussed in several world congresses in the last decades as a main part of the broader context of sustainability [8]. however, this may not be the case in developing countries, where inefficient old practices still have their impact and legitimacy. this includes the gaza strip, palestine. despite some preliminary efforts in this regard [9, 10], great efforts are required to improve public awareness of the importance of energy efficiency in buildings. the role of universities is essential here. university courses offer engineering students the opportunity to learn the basic skills required to design and erect energy efficient buildings. it is now very common for universities to offer comprehensive postgraduate programs in this regard, in which issues such as sustainability, energy efficiency and carbon pricing are discussed [11]. at iug, the department of architecture at the faculty of engineering has developed a dedicated course for undergraduate students in this regard. this course incorporates both theoretical and practical parts.
the theoretical part includes the concepts of sustainability and energy, renewable energy technologies, heat transfer basics, thermal comfort, thermal properties of building materials, thermal insulation, passive cooling and heating, and energy efficiency assessment. the practical part includes computerized simulation tutorials on building environmental performance to practice the thermal design of buildings [2]. however, what about those who have already graduated without studying such courses? the role of training is essential in such cases. a range of training options on energy efficiency in buildings is possible. this includes structured classroom training courses, which offer the opportunity for engineers and construction sector stakeholders to learn how to implement energy efficiency strategies in buildings. these courses are effective in providing the required knowledge using traditional means such as a black/whiteboard or relatively new ones such as powerpoint presentations and video training. in this training format it is very useful to deliver the training in an interactive mode using discussions, quizzes, and brainstorming in order to keep the trainees engaged. the most important idea here is that training is not an ordinary lecture. it should include a variety of techniques that make training an enjoyable experience and at the same time develop the required skills [12]. work-based training is also essential in this regard, since it facilitates delivering the training in a practical and technically oriented mode. this could be conducted through an organised effort of experts who team up to conduct on-site visits to explain energy efficiency strategies in building design, construction, and retrofit. this could be done in partnership with the government or concerned organisations such as universities and engineering syndicates. training on energy efficiency may also be provided over distance.
a forum or website may also be established to provide a source of continuous self-training through the provision of information on energy efficiency in buildings. this method of training is becoming more common as the internet becomes more accessible. however, it limits the potential for exchanging experience between trainees. although this paper is concerned with training on energy efficient building design, the broader context should cover building operation as well. several skills are needed in this respect in order to identify and measure energy consumption and to recommend improvement strategies, including auditing energy usage, improving energy usage, and reducing the associated risks [13].
v the implemented training program
a needs assessment stage
prior to conducting the training program, the project team carried out a needs assessment in gaza to assess the feasibility of the proposed program. the idea was based on an international report [14] that investigated the potential of sustainable construction and green jobs in the gaza strip. the report recommended that a great capacity building effort is required in this field. this was followed by a focus workshop that was held in dec. 2015 and gathered about 30 concerned stakeholders from the palestinian ministries, municipalities, local consultants, international organizations, ngos, and the private sector to discuss the potential of this training. the workshop discussed the local energy situation in gaza and diagnosed the opportunities and challenges that exist in the field of promoting energy efficient buildings.
based on these two activities, it was concluded that the proposed training was feasible and would contribute to alleviating the energy crisis in gaza and improving local sustainable development practices. it was also believed that the course would contribute to raising awareness of the importance of energy efficiency in buildings among the different sectors of society. in addition, the proposed training is consistent with the palestinian strategy to achieve sustainable energy development and ease the local energy shortage [14]. moreover, the gathered researchers and experts stressed the need to stimulate and activate the role of official institutions in adopting this approach and working on the development of new laws and regulations to ensure its application.
b implementation stage
two main activities were carried out here: preparation of the training content, and delivery of the training program. based on several brainstorming sessions conducted by the project team members in gaza and vienna, several topics were selected for the training, considering the training program limitations. these topics are as follows:
1. introduction to energy efficient design. this included the following topics: sustainability and architecture, energy and its resources, the energy efficiency concept, design principles for energy efficiency, and assessment of energy efficiency.
2. sustainable urban planning. this included the following topics: historical background, environmental aspects, socio-economic aspects, energy consumption control through urban planning, and case studies.
3. solar design. this included the following topics: solar geometry, thermal comfort, passive cooling, passive heating, and case studies.
4. building materials and thermal insulation. this included the following topics: thermal properties of materials, the thermal insulation concept, thermal insulators, building envelope insulation strategies, and case studies.
5. renewable energy.
this included the following topics: global overview, wind energy, solar energy, geothermal energy, and case studies.
6. pv design and installation. this included the following topics: pv cells, pv types, stand-alone and grid-connected systems, pv sizing, pv performance optimization, and case studies.
7. thermal modeling and its applications (two meetings). this included: basic principles, modeling engines and their limitations, output analysis, conceptual strategies for energy efficiency, and practical examples.
in the delivery stage, the concerned organizations were contacted to nominate their training candidates. nominees included engineers and architects working in the palestinian ministries, international organizations, ngos, universities, and the private sector. the training program was held at the iug community service and continuing education deanship (csced), in addition to the architecture department computer lab. the training program included eight three-hour meetings delivered over two weeks. training materials were supplied to the trainees in the form of powerpoint slides. training classes were managed in a way that facilitated group discussions of the training topics, which provided an opportunity to draw on the trainees' ideas and experiences. five professors from iug prepared and delivered the training topics. in addition to the training material given to the trainees, there was lab practice to explore energy simulation tools, and a field visit to see some practical examples, especially of thermal insulation and solar pv systems.
c assessment stage
the program's organizing team asked the participants to evaluate the program using a questionnaire form. the form included four fields and 14 questions as follows:
1. general acquired knowledge. this field included two questions as follows:
q1.1: i became able to discuss the general principles of energy efficient building design.
q1.2: the course strengthened my belief that it is crucial to protect the environment and natural resources through implementing environmentally friendly design methods.
2. detailed training content. this field included five questions as follows:
q2.1: the training period was appropriate.
q2.2: lectures were adequately divided and distributed over the week.
q2.3: the theoretical background offered in the course was appropriate and consistent with the course aim.
q2.4: the practical part of the course was sufficient to gain new basic skills in the field.
q2.5: the course time and venue were appropriate.
3. developed skills. this field included four questions as follows:
q3.1: i gained good skills in the field of estimating building material thermal properties.
q3.2: i became able to properly select the appropriate building material for energy saving based on its thermal specifications.
q3.3: i am now capable of developing my skills in using simulation programs for thermal design based on the basic skills i gained in the course.
q3.4: i am now capable of proposing ideas in the field of energy efficient building design, and integrating them into the building in order to promote energy savings.
4. future career development. this field included three questions as follows:
q4.1: i will keep working in the future to develop my knowledge and skills in the design and construction of energy efficient buildings.
q4.2: i believe that implementing the main principles of energy efficient design would improve the local architecture and the local engineering practice.
q4.3: if i were appointed to a decision-making position, i would be keen to make it obligatory to implement energy efficiency principles in building design.
the study population was 20 trainees affiliated with a variety of organizations as mentioned above, which also represents the study sample.
thus, a total of 20 questionnaire forms were fully completed. the questions were answered using a five-point likert scale. table 1 shows the obtained results. it is clear that the respondents are generally satisfied with the idea of the course. there is a consensus that the course strengthened the participants' awareness of the topic's importance and improved their ability to discuss the general principles of energy efficient building design. as for the training content and structure (q2.2 and q2.3), the trainees were generally satisfied. as for the training period (q2.1), 45% of the respondents were not sure that the training period was sufficient. this is expressed even more clearly for the practical part (q2.4): 40% of the respondents were not sure that it was long enough, and 15% of them think that it should be longer. the respondents in general believe that the course improved their skills in the field of energy efficient building design. they believe that they became able to:
- estimate building material thermal properties (65% of the respondents).
- select the appropriate building material for energy savings (95% of the respondents).
- develop their skills in using thermal simulation programs (50% of the respondents).
- propose and implement design ideas in the field of energy efficient buildings (100% of the respondents).
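the agreement shares quoted above (the sum of "agree" and "strongly agree" answers) follow directly from the likert distributions reported in table 1; a minimal python sketch of this aggregation (not part of the original study):

```python
# likert distributions (in %) for the "developed skills" questions of table 1,
# ordered (strongly disagree, disagree, neutral, agree, strongly agree)
table1 = {
    "q3.1": (0, 0, 35, 65, 0),
    "q3.2": (0, 0, 5, 75, 20),
    "q3.3": (0, 0, 50, 35, 15),
    "q3.4": (0, 0, 0, 80, 20),
}

def agreement(dist: tuple) -> int:
    """percent of respondents choosing category 4 (agree) or 5 (strongly agree)."""
    return dist[3] + dist[4]

for q, dist in table1.items():
    print(q, agreement(dist))  # reproduces the 65 / 95 / 50 / 100 % figures cited in the text
```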
table 1: training assessment questionnaire results
(1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree; values in %)
field | question | 1 | 2 | 3 | 4 | 5 | total
acquired knowledge | q1.1 | 0 | 0 | 0 | 75 | 25 | 100
acquired knowledge | q1.2 | 0 | 0 | 0 | 30 | 70 | 100
training content | q2.1 | 0 | 5 | 45 | 40 | 10 | 100
training content | q2.2 | 0 | 0 | 5 | 70 | 25 | 100
training content | q2.3 | 0 | 5 | 5 | 70 | 20 | 100
training content | q2.4 | 0 | 15 | 40 | 45 | 0 | 100
training content | q2.5 | 0 | 5 | 0 | 85 | 10 | 100
developed skills | q3.1 | 0 | 0 | 35 | 65 | 0 | 100
developed skills | q3.2 | 0 | 0 | 5 | 75 | 20 | 100
developed skills | q3.3 | 0 | 0 | 50 | 35 | 15 | 100
developed skills | q3.4 | 0 | 0 | 0 | 80 | 20 | 100
future career | q4.1 | 0 | 0 | 5 | 55 | 40 | 100
future career | q4.2 | 0 | 0 | 5 | 60 | 35 | 100
future career | q4.3 | 0 | 0 | 0 | 30 | 70 | 100
finally, the respondents agreed that the training course would have a sustainable impact on their future careers. this is because they will keep developing their capacity in this field, implement what they have learned to improve the local architecture, and, wherever possible, support decisions that enforce the implementation of energy efficiency measures in buildings. in addition to the questionnaire, the project team organised a reflection workshop following the completion of the training course to discuss the course's impact and possible improvements. the trainees expressed the following notions:
- it is essential to develop longer training programs specialized in the field of energy efficient buildings.
- the practical aspects need more attention, including lab practice, field visits, case studies, and the construction of real models.
- it is essential to develop a system (code) for the design and construction of energy efficient buildings. the issue of building retrofit may also be addressed.
- effective involvement of the private sector is required in order to promote the principles of energy efficient buildings.
- the training may be expanded to target contractors and other stakeholders in the construction sector.
vi conclusion
the promotion of energy efficiency in building design requires a multi-dimensional strategy that is based on collective and well-organised activities.
this strategy aims to preserve resources, protect the environment, and reduce both the running and initial costs of buildings. the role of training is essential here to encourage the concerned stakeholders to implement this strategy. in this context, iug has designed, implemented, and assessed a training program on energy efficient building design. the trainees' feedback showed that such training programs are relevant in palestine, and are extremely needed in gaza. however, the assessment stage revealed some recommended developments for the course. these include developing longer and more detailed training programs; increasing the training hours allocated to the practical part of the program, including lab practices, field visits, case studies, and construction of real models; and considering the issue of building retrofit as a main activity in the construction market. in this regard, there is a great responsibility on universities to develop and deliver the required training. however, other institutions may also get involved, such as the concerned professional syndicates. the role of the official institutions is essential too in order to adopt this approach and work on the development of new laws and regulations to give the training outputs more value. in addition to the classroom-based training, the authors recommend that the delivered training course be adapted and simplified, and then posted online as a free on-line training course in arabic. this could be an interactive course supported by multimedia means and self-learning strategies. an online forum may also be established to provide updated knowledge and exchange experiences and incentives.
it may also be useful to produce a further, shorter version in the form of leaflets and booklets that target the different stakeholders involved in the construction sector, to make them aware of the very basic principles of energy efficient construction techniques. from our experience following the implemented training program in gaza, the issue of thermal insulation may be at the top of the agenda in this regard.

acknowledgment

the authors would like to thank the iug and tu wien team members who participated in the development process of the training program as part of the peeb project. the authors and all the project team members are very grateful to the appear program and the austrian development cooperation (adc) for funding the project and all associated activities, including the training program.
ahmed s. muhaisen is a professor of sustainable architecture at the department of architecture, islamic university of gaza. he holds a phd degree in architecture from the university of nottingham, uk. he has special interests in subjects related to energy efficiency of buildings, passive solar design, and architectural heritage preservation.

omar s. asfour is an associate professor of sustainable architecture at the department of architecture, islamic university of gaza. he holds a phd degree in architecture from the university of nottingham, uk. he has research interests in sustainable architecture, housing policies, energy efficient design, and urban planning.

journal of engineering research and technology, volume 3, issue 4, december 2016

ultra-wideband and low noise transimpedance amplifier

jawdat y. abu-taha 1
1 department of electrical engineering, iug, gaza, palestine, e-mail: jtaha@iugaza.edu.ps

abstract— we present a new transimpedance amplifier (tia) design possessing an improved bandwidth.
this tia employs a parallel combination of two series resonant circuits with different resonant frequencies at the input of the conventional regulated cascode (rgc) architecture. in the proposed tia, we employ capacitive degeneration and series inductive peaking for pole-zero cancellation. we implemented the layout of the proposed tia in a 0.18-µm cmos process, where a 100-ff photodiode is considered. our post-layout simulation results show that the tia provides 53-dbω transimpedance gain and 24 pa/√hz input-referred noise. the designed tia consumes 11 mw from a 1.8-v supply, and its group-delay variation is 5 ps over the 13-ghz 3-db bandwidth.

index terms—bandwidth, capacitive degeneration network, input-referred noise, photodetector (pd), regulated cascode (rgc), transimpedance amplifier (tia).

i. introduction

continuous growth in wireless telecommunication has driven high levels of chip integration and focused research towards high-frequency applications [1]. cmos technology is currently the strongest candidate to satisfy the demands for low cost and high integration with reasonable speed for analog applications in the gigahertz range [2].

fig. 1. block diagram of an optical receiver.

the transimpedance amplifier (tia) is the critical block in an optical communication system that converts the induced photodiode current into an amplified voltage signal to be used by the digital processing unit (fig. 1). bandwidth is considered the highest priority in tia design. the challenge in tia design lies in the large photodiode parasitic capacitance c_pd at the input node, which degrades the performance of the tia. therefore, it is required to decrease the input parasitic effects before addressing the compromise between the bandwidth and the noise [3, 4]. two topologies have been commonly used in designing wideband cmos tias: the common-gate (cg) amplifier and the shunt-feedback amplifier [5].
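for context on the shunt-feedback topology mentioned above, the classic single-pole rule of thumb trades the feedback resistor against bandwidth: the transimpedance approaches R_F while the input pole is pushed out by the core gain. a hedged sketch with hypothetical values (these numbers are illustrative, not taken from the paper):

```python
# shunt-feedback tia rule of thumb (single-pole core amplifier assumption):
#   dc transimpedance  Z_T ~ R_F * A / (1 + A)
#   3-db bandwidth     f_3db ~ (1 + A) / (2 * pi * R_F * C_T)
import math

A = 20.0        # core voltage gain, hypothetical
R_F = 1e3       # feedback resistor in ohm, hypothetical
C_T = 200e-15   # total input capacitance (photodiode + amplifier), F, hypothetical

z_t = R_F * A / (1 + A)
f_3db = (1 + A) / (2 * math.pi * R_F * C_T)
print(f"transimpedance ~ {z_t:.0f} ohm ({20 * math.log10(z_t):.1f} db-ohm)")
print(f"bandwidth      ~ {f_3db / 1e9:.1f} ghz")
```

the sketch shows the core tension the paper addresses: a larger R_F raises the gain but, for a fixed input capacitance, directly shrinks the bandwidth.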
several bandwidth-enhancement efforts reported in the literature are based on isolating the input capacitance of the photodiode to minimize its effect on the bandwidth. inductive peaking is one of the commonly used techniques to improve the bandwidth and decrease parasitic capacitance effects [6]. placing an inductor at a strategic location in the amplifier circuit provides a resonance with parasitic capacitances, which expands the bandwidth of the tia [7-9]. capacitive peaking has been used for bandwidth extension by using a capacitor to control the pole locations of a feedback amplifier [10, 11]. multiple shunt-parallel feedback is another approach for enhancing the bandwidth [12]. the effect of the photodiode capacitance on the bandwidth limitation can be reduced more effectively by using the regulated cascode (rgc) input stage [3]. in this work, we propose a new tia design with improved bandwidth. the proposed tia is based on modifying the input part of the conventional rgc tia architecture by using a parallel arrangement of two series resonant circuits with different resonant frequencies. capacitive degeneration and series inductive peaking networks are used for pole-zero cancellation to improve the bandwidth. the paper is organized as follows: in section ii we present an overview of the traditional rgc input stage, introduce the concept of the modified rgc input stage, and analyze the architecture of the parallel arrangement of two series resonant circuits with different resonant frequencies. we present the capacitive degeneration architecture and the proposed tia design in section iii and section iv, respectively. finally, we present the noise analysis in section v, demonstrative simulation results in section vi, and the conclusions in section vii.

ii. regulated cascode (rgc) input stage

a.
conventional rgc input stage

among all the building blocks in an optical communication system, the tia is one of the most critical blocks in receiver design. it is well known that the rgc input configuration can attain better isolation from the large photodiode capacitance c_pd through its local feedback topology. fig. 2 shows the schematic diagram of the conventional rgc with a pd, which converts the incoming optical signal into a small signal current i_pd. the common-source (cs) amplifier consisting of m2 and r_d operates as a local feedback stage and regulates the cg transistor m1. from the small-signal analysis, the input resistance of the rgc circuit is given by [13], [14]

$$z_{i,rgc} = \frac{1}{g_{m1}\,(1 + g_{m2} R_d)} \qquad (1)$$

where g_m1 and g_m2 are the transconductances of m1 and m2, respectively. the input resistance is reduced because the effective transconductance is (1 + g_m2 R_d) times larger than that of the cg amplifier input stage, where 1 + g_m2 R_d is the dc gain of the local feedback. therefore, the rgc stage acts as a buffer between the pd and the tia gain stage and decreases the effect of the photodiode capacitance c_pd [14].

fig. 2. regulated cascode (rgc) tia.

b. modified rgc input stage

in the design of ultra-wideband tias, the wideband input stage plays a very critical role. our first focus is the design methodology of the narrowband tia, followed by a demonstration of how to extend its input bandwidth.

fig. 3. the input part of a narrowband rgc tia: (a) circuit, (b) small-signal model.

fig. 3 shows the input part of a typical narrowband tia topology.
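reading eq. (1) as z_i,rgc = 1/(g_m1(1 + g_m2 R_d)), the input-resistance reduction is easy to check numerically. a small sketch with hypothetical device values (g_m1 = g_m2 = 10 ms and R_d = 1 kΩ are illustrative, not from the paper):

```python
# rgc input resistance, eq. (1): z = 1 / (g_m1 * (1 + g_m2 * R_d))
# versus a plain common-gate stage: z_cg = 1 / g_m1
g_m1 = 10e-3   # S, hypothetical
g_m2 = 10e-3   # S, hypothetical
R_d = 1e3      # ohm, hypothetical

z_cg = 1.0 / g_m1
z_rgc = 1.0 / (g_m1 * (1.0 + g_m2 * R_d))

print(f"cg  input resistance: {z_cg:.1f} ohm")
print(f"rgc input resistance: {z_rgc:.2f} ohm")  # lower by the local-feedback dc gain
```

with these values the local feedback lowers the input resistance by the factor 1 + g_m2 R_d = 11, which is what pushes the input pole formed with c_pd to a higher frequency.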
the rgc tia topology alleviates the bandwidth limitation due to the input pole formed by the gate-source capacitance c_gs and the input resistance z_i. nevertheless, the large parasitic capacitance of the photodiode c_pd still degrades both the bandwidth and the noise performance of the tia. the series inductive peaking technique is used to overcome this problem: the inductor l is placed between c_pd and c_gs, which creates an inductive π network [14]. the current transfer function is derived from the small-signal model circuit shown in fig. 3(b):

$$\frac{i_o}{i_i} = \frac{1}{s^3 L R\, C_{pd} C_{gs} + s^2 L C_{gs} + s R\,(C_{pd} + C_{gs}) + 1} \qquad (2)$$

where R = (1/g_m) ∥ r_s. normalizing (2) with the dimensionless parameters k and m and the cutoff frequency ω_0 yields a third-order low-pass response, and the inductive peaking technique provides a significant bandwidth extension ratio (bwer) by selecting different values for k and m [15].

fig. 4. the input part of the rgc tia with two input branches.

to improve the input bandwidth, we use a parallel combination of two series resonant circuits with different resonant frequencies, as shown in fig. 4. the input impedance is given by

$$z_i = z_1 \parallel z_2 \qquad (3)$$

where

$$z_j = s l_j + \frac{r_j}{1 + s\, c_{gsj}\, r_j}, \qquad j = 1, 2 \qquad (4)$$

and c_gsj, l_j and r_j are the gate-source capacitance, series inductor and equivalent input resistance of transistor m1-j, respectively (j = 1, 2). one should note in (4) that, if the reactive elements are accurately selected, the input impedance becomes purely resistive. moreover, when the gates of m1-1 and m1-2 have the same bias voltages, m1-1 and m1-2 have identical cutoff frequencies. as a result, the circuit can realize a wide bandwidth.

iii.
the capacitance degeneration

the modification of the rgc input stage can be augmented by achieving a broadband frequency response through increasing the effective transconductance G_m of the circuit at high frequencies [5], [16]. to this end, we can compensate the dominant pole of the overall circuit with a zero, which can be realized through the capacitive degeneration configuration [14].

fig. 5. configuration of capacitive degeneration.

for the capacitive degeneration topology shown in fig. 5, the equivalent transconductance is calculated as [17]

$$G_m = \frac{g_m\,(1 + s R_s C_s)}{1 + g_m R_s + s R_s C_s} \qquad (5)$$

which introduces a zero z_1 at 1/(R_s C_s) and a pole at (1 + g_m R_s)/(R_s C_s). the dominant pole can be compensated by the zero; as a result, the bandwidth is limited by the second-lowest pole of the circuit. the proposed capacitive degeneration topology shown in fig. 6 is employed to provide both capacitive and resistive degeneration; therefore, extra gain and bandwidth enhancement can be achieved at the same time.

fig. 6. the proposed configuration of capacitive degeneration.

the equivalent transconductance of half of the circuit in fig. 6 is expressed as

$$G_m = \frac{g_m\,(1 + s R C)}{1 + g_m R/2 + s R C} \qquad (6)$$

note that in (6) the transconductance introduces a zero z_1 at 1/(R C) and brings an additional pole p_2 at (1 + g_m R/2)/(R C). the dominant pole p_1 appears at the drain node. if the zero z_1 cancels the pole p_1, the bandwidth is extended to the second pole of the circuit, p_2 = (1 + g_m R/2)/(R C). in the pole-zero cancellation technique, if the zero is moved to a lower frequency (large C), the frequency response displays source peaking, so the capacitor should be small to avoid gain peaking.
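the pole-zero cancellation idea can be sanity-checked numerically. taking the textbook degeneration transconductance G_m(s) = g_m(1 + sR_sC_s)/(1 + g_mR_s + sR_sC_s) driving a drain load R_d ∥ C_d, choosing R_dC_d = R_sC_s lets the zero absorb the drain pole, so the stage gain stays flat at the frequency where the drain pole alone would have rolled it off. all component values are hypothetical:

```python
# stage gain: A(s) = G_m(s) * R_d / (1 + s*R_d*C_d)
# with R_d*C_d = R_s*C_s the zero of G_m cancels the drain pole.
g_m, R_s, C_s = 10e-3, 1e3, 100e-15   # hypothetical device values
R_d = 2e3
C_d = R_s * C_s / R_d                  # enforce the cancellation condition

def gain(omega):
    s = 1j * omega
    Gm = g_m * (1 + s * R_s * C_s) / (1 + g_m * R_s + s * R_s * C_s)
    return Gm * R_d / (1 + s * R_d * C_d)

w_z = 1.0 / (R_s * C_s)                # zero (= cancelled drain-pole) frequency
flatness = abs(gain(w_z)) / abs(gain(0.0))
print(f"|A| at the cancelled pole / |A| at dc = {flatness:.4f}")  # stays near 1
```

without the cancellation, the gain at ω = 1/(R_dC_d) would already be 3 db down; with it, the roll-off is deferred to the remaining pole at (1 + g_mR_s)/(R_sC_s).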
another important advantage of the proposed circuit stems from the small variation of the amplifier's input impedance, and thus of the load seen by the preceding stage.

iv. the proposed tia

we present the proposed wideband tia based on the rgc in fig. 7. the modification of the input network of the rgc tia provides better bandwidth enhancement and decreases the input-referred noise current. a 100-ff photodiode capacitance is used at the input of the tia. the gain stage is composed of two common-source amplifiers with the capacitive degeneration technique. referring to the miller theorem [18], the shunt impedance Z = R ∥ (1/sC) connected between the drain nodes of m3 and m4 can be separated into a couple of grounded impedances. if A is the voltage gain between the two terminals of Z, the equivalent split impedances are Z/(1 − A) at one terminal and Z/(1 − 1/A) at the other. these impedances produce zeros with r_d3 and r_d4 to make perfect cancellation of the poles at the drains of m3 and m4; therefore, the bandwidth is further improved [14]. the source follower consisting of m5 and r_s5 is used as a buffer to avoid affecting the frequency response of the tia by the input parasitic capacitances of the succeeding stage in the receiver system, namely the limiting amplifier (la).

v. noise analysis

in the proposed tia of fig. 7, we consider the thermal noise generated by the active devices (m1-1, m1-2 and m2) and the thermal noise of the resistors (r_b, r_s1, r_s2 and r_d1). the flicker noise (1/f) is ignored because it is not dominant in mos transistors. the noise contribution of r_d1 is neglected because the parasitic capacitance in parallel with it makes its noise impact non-dominant. the noise analysis is performed based on the noise model shown in fig. 8.
the thermal noise in a mos transistor is modeled by a noise current source between the drain and source terminals with a spectral density of [19]

$$\overline{i_{n,d}^2} = 4 k T \gamma\, g_m \qquad (7)$$

where k is boltzmann's constant (j/k), T is the absolute temperature (k) and γ is a complex function of the transistor parameters and bias condition. the equivalent input noise current spectral density, eq. (8), is the sum of the thermal noise of the resistors, given in eq. (9), and the channel noise contributions of m1-1, m1-2 and m2, given in eq. (11), each referred to the input through frequency-dependent weighting factors such as |X|² in eq. (10), which involve the input inductors l1 and l2 and the photodiode capacitance c_pd.

fig. 8. modified rgc tia with noise sources.

for the half circuit of the capacitive degeneration stage, the input noise current spectral density, eq. (12), depends on c_d,eq and c_s,eq, the equivalent parasitic capacitances at the drain and source nodes, respectively, on r_s,eq, the equivalent resistance at the source node, and on g_d0, the zero-bias drain conductance (j = 3, 4).

fig. 7. schematic of the proposed rgc tia.
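reading eq. (7) as i²_n,d = 4kTγg_m, a quick numeric evaluation gives a feel for the magnitudes involved. the device values here are hypothetical (g_m = 10 ms, γ = 1), not taken from the paper:

```python
# mos channel thermal noise density, eq. (7): i_n^2 = 4*k*T*gamma*g_m
import math

k = 1.380649e-23   # boltzmann constant, J/K
T = 300.0          # absolute temperature, K
gamma = 1.0        # excess-noise factor, hypothetical
g_m = 10e-3        # transconductance, S, hypothetical

i_n2 = 4 * k * T * gamma * g_m   # A^2/Hz
i_n = math.sqrt(i_n2)            # A/sqrt(Hz)
print(f"i_n = {i_n / 1e-12:.1f} pA/sqrt(Hz)")
```

a single 10-ms device already contributes on the order of 13 pa/√hz, the same order as the 24 pa/√hz total reported for the tia, which is why boosting g_m and filtering the transistor noise with l1 and l2 matter.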
since the main attention is given to the modified rgc input stage and the capacitive degeneration stage, the noise contribution of the buffer is ignored. as shown in the noise analysis, the resistor noise dominates at low frequencies, and the noise of m1-1 and m1-2 becomes dominant at high frequencies. note that the input noise current is reduced appreciably at high frequencies by using l1 and l2 at the input of the tia. furthermore, the minimum noise can be realized by boosting the transconductance g_m.

vi. simulation results

we performed simulation analysis of the proposed tia circuit using cadence tools. simulations were done using an rf transistor model based on 0.18-µm hv cmos technology with a 1.8-v single supply and a 100-ff photodiode capacitance. fig. 9 shows the layout of the proposed tia, with an area cost of 147 µm × 230 µm.

fig. 9. the layout of the proposed tia.

the frequency responses of the conventional rgc and the proposed tias are presented in fig. 10. the rgc tia provides a bandwidth of 3.5 ghz, whereas the bandwidth of the proposed tia extends up to 13 ghz. the transimpedance gains of the conventional rgc and proposed tias are 47.7 db and 53.2 db, respectively. while the total power consumption of the conventional rgc tia is 5 mw, the proposed tia consumes 11 mw.

fig. 10. frequency response results of the tia.

fig. 11 shows the simulated input noise current spectral densities of the rgc and the proposed tia. as shown, the proposed tia has a lower input-referred noise current than the rgc configuration, with an average input noise current spectral density below 24 pa/√hz within the bandwidth. fig. 12 shows the group-delay variation with frequency. as shown, the proposed tia provides a smaller group-delay variation than the rgc configuration: the tia has a minimum group delay of 4 ps, increasing to 14 ps within the 13-ghz bandwidth. this small variation means that the output signal will not suffer from distortion as in the rgc tia.

fig. 11.
spectral density of the input noise current.

the transient response of the tia is shown in fig. 13 at different process corners. the width of the input current pulse is 10 ps with a rise/fall time of 1 ps, and the peak-to-peak current is 50 µa. the simulation results show that, at the different process corners, the output swing variations are very small. this indicates that the transient response of the tia is fast enough even for a small input current.

fig. 12. the group-delay variation of the tia.

fig. 13. transient response of the tia at different process corners.

table 1. performance summary and comparison with other works using 180-nm cmos technology

parameter         [20]    [16]    [21]    [22]    [23]    this work
gain (db)         59      53      54      62.3    51      53.2
bw (ghz)          8.6     8       7       9       30.5    13
c_pd (ff)         150     250     300     150     50      100
power (mw)        18      13.5    18.6    108     60.1    11
noise (pa/√hz)    25      18      18      55.7    –       24

table 1 shows a comparison of the proposed tia performance with other works. it can be seen that the noise of the proposed tia is smaller than that of the other tia configurations in which active inductors have been used. in addition, the power consumption is comparatively smaller than that of the other tia circuits.

vii. conclusion

the proposed tia design improves the performance of the rgc tia. the use of a parallel combination of two series resonant circuits with different resonant frequencies improves the bandwidth and minimizes the equivalent input noise current density of the rgc tia. the capacitive degeneration and series inductive peaking networks are used for pole-zero cancellation. the proposed design is implemented in a 0.18-µm cmos process in the presence of a 100-ff photodiode capacitance. it is observed that the tia achieves a 3-db bandwidth of 13 ghz and a transimpedance gain of 53.2 db.
the input-referred noise current spectral density is 24 pa/√hz, the average group-delay variation is 5 ps over the 3-db bandwidth, and the tia consumes 11 mw from a 1.8-v supply. simulation results show that the tia displays a broadband flat response and provides ultra-low noise performance, and hence it is suitable for applications in optical transceivers.

references

[1] m. gupta, u. singh, and r. srivastava, "bandwidth extension of high compliance current mirror by using compensation methods," 2014.
[2] s. ravikumar, "circuit architectures for high speed cmos clock and data recovery circuits," 2015.
[3] s. m. park and h.-j. yoo, "1.25-gb/s regulated cascode cmos transimpedance amplifier for gigabit ethernet applications," 2004.
[4] m. n. ahmed, "transimpedance amplifier (tia) design for 400 gb/s optical fiber communications," dept. of electrical eng., virginia polytechnic institute and state university, blacksburg, va, 2013.
[5] b. razavi, design of integrated circuits for optical communications, 2nd ed., wiley, 2012.
[6] c. li and s. palermo, "a low-power 26-ghz transformer-based regulated cascode sige bicmos transimpedance amplifier," 2013.
[7] s. s. mohan, m. d. m. hershenson, s. p. boyd, and t. h. lee, "bandwidth extension in cmos with optimized on-chip inductors," 2000.
[8] e. sackinger and w. c. fischer, "a 3 ghz, 32 db cmos limiting amplifier for sonet oc-48 receivers," in ieee int. solid-state circuits conf. (isscc) dig. tech. papers, 2000, pp. 158-159.
[9] s.-e. kim, s.-j. song, s. m. park, and h.-j. yoo, "cmos optical receiver chipset for gigabit ethernet applications," in proc. int. symp. circuits and systems (iscas), 2003, pp. i-29-i-32.
[10] f. c. a. y. chan, "bandwidth enhancement of tia by a capacitive-peaking design," 1999.
[11] k. chin-wei, c.-c. hsiao, y. shih-cheng, and y.-j.
chan, "2 gbit/s transimpedance amplifier fabricated by 0.35-μm cmos technologies," 2001.
[12] o. momeni, h. hashemi, and e. afshari, "a 10-gb/s inductorless transimpedance amplifier," 2010.
[13] r. h. mekky, p. v. cicek, and m. n. el-gamal, "ultra low-power low-noise transimpedance amplifier for mems-based reference oscillators," in proc. ieee 20th int. conf. electronics, circuits, and systems (icecs), 2013, pp. 345-348.
[14] s. qiwei, m. luhong, x. sheng, and k. yuzhuo, "novel pre-equalization transimpedance amplifier for 10 gb/s optical interconnects," 2015.
[15] j. kim and j. f. buckwalter, "bandwidth enhancement with low group-delay variation for a 40-gb/s transimpedance amplifier," 2010.
[16] l. zhenghao, y. kiat seng, m. jianguo, a. v. do, l. wei meng, and c. xueying, "broad-band design techniques for transimpedance amplifiers," 2007.
[17] m. s. al-juaid, "design of variable gain transimpedance preamplifier," 1998.
[18] a. s. sedra and k. c. smith, microelectronic circuits, oxford university press, 2014.
[19] f. levinzon, "comparison of 1/f noise and thermal noise in jfets and mosfets," in piezoelectric accelerometers with integral electronics, springer, 2015, pp. 93-106.
[20] c.-y. wang, c.-s. wang, and c.-k. wang, "an 18-mw two-stage cmos transimpedance amplifier for 10 gb/s optical application," in proc. ieee asian solid-state circuits conf. (asscc), 2007, pp. 412-415.
[21] z. lu, k. s. yeo, w. m. lim, m. a. do, and c. c. boon, "design of a cmos broadband transimpedance amplifier with active feedback," 2010.
[22] a. k. petersen, k. kiziloglu, y. ty, f. williams, jr., and m. r. sandor, "front-end cmos chipset for 10 gb/s communication," in proc. ieee radio frequency integrated circuits (rfic) symp., 2002, pp. 93-96.
[23] j. jun-de and s. s. h.
hsu, "40-gb/s transimpedance amplifier in 0.18-µm cmos technology," 2008.

journal of engineering research and technology, volume 8, issue 1, march 2021

a stress-state based peridynamics model for elasto-plastic material modeling

mahmoud m. jahjouh – university college of applied sciences (ucas)
https://doi.org/10.33976/jert.8.1/2021/1

abstract—a stress-state based pd (sspd) model using a well-known yield criterion is proposed in this paper and tested on the modeling of two-dimensional bars under different loading levels as a first step for further development. sspd is based upon peridynamics (pd), which utilizes a temporal-spatial integro-differential equation of motion and formulates continuum problems in terms of integral equations, which are capable of modeling discontinuities such as cracks. the proposed bond strength depends not only on the bond stretch but also on the current state of all bonds connected to a particle. thus, a stress-based peridynamics model is obtained. the tensile simulation compared to conventional fem shows promising performance, with an error of 5%. compression simulations, however, need more investigation to include the effect of contact forces.

index terms—peridynamics, stress-state, modeling, fem.

i introduction

peridynamics (pd) is a recently developed continuum mechanics theory that strives to unify the distinction between "discrete" and "continuous" media present in the classical continuum mechanics theory [1]. since its development in the landmark paper [1], pd has been used to model problems where classical continuum mechanics, which relies on partial differential equations, fails to do so. the majority of research in classical continuum mechanics tries to amend the theory by "adding on" terms or equations, which usually results in a modified theory with limited application to a certain class of problems [2].
one of the most prominent such extensions is the extended finite element method (xfem), presented in moes et al. [3] and fries and belytschko [4] as an extension of the conventional finite element method (fem) to capture discontinuities. pd, on the other hand, inherently provides a suitable framework that predicts material failure and discontinuities. a comparison between pd and xfem can be found in the work of agwai et al. [5]. pd has been successfully implemented in various problems investigating a continuum with discontinuities. the first bond-based pd model was presented in the landmark paper by silling [1] and further explained in [6]. yet, bond-based pd models suffer from the limitations imposed on incorporating the poisson's ratio. the introduction of state-based pd models by silling et al. [7] and silling and lehoucq [8] presented a method of dealing with this limitation [9, 10]. pd has also been successfully applied in both material and geometric non-linear analysis. the behavior of elastic, plastic and damaged models based on pd was investigated by gerstle [11]. most recently, an ordinary state-based peridynamic model was proposed for the analysis of models with geometric non-linearities [12], in which the bond stretch was defined using logarithmic functions suitable for large deformations. the flexibility and simplicity of pd have enabled it to be coupled with finite element analysis. the work of kilic and madenci [13] used pd to model the regions of discontinuities within the displacement field and coupled those with fem in an attempt to take advantage of both methods. similar research can be found in [14, 15, 16]. pd was also implemented in an fea framework in [17, 18, 19, 20]. interested readers may consult the extensive literature surveys provided by madenci and oterkus [21] and javili [22]. pd is, however, still not a flawless theory.
as will be shown in section (ii), a continuum is modeled via discrete particles connected by bonds that can be modeled as springs. the derivation of the spring constant relying on the bulk modulus seems to be heuristic [21]. engineers are used to using the young's modulus, also known as the elastic modulus, to describe tensile tests and the resulting stress-strain relationships. the research presented in this contribution derives bond forces within a pd state based on a well-established yield criterion, which enables a deeper understanding of how bonds "perform" and simplifies the implementation of advanced analysis via pd, such as fatigue. the paper continues with a brief introduction of the general pd approach in section (ii), followed by the derivation of the governing equations for pd using different yield criteria in section (iii). the numerical procedure of analysis is presented in section (iv), followed by the case studies in section (v). finally, conclusions and recommendations are drawn in section (vi).

ii bond-based peridynamics

peridynamics relies on a temporal-spatial integro-differential equation of motion [1]. thus, in a pd model, the body being modeled is discretized into a set of particles with differential volumes. the response of a body $\mathcal{R}$, shown in figure 1, to external forces is assumed to depend on the displacement of particles relative to their initial positions in the reference configuration [1]. the internal forces affecting a particle in pd are derived from a number of particles in its vicinity [22]. thus, a particle cannot interact with other particles beyond a horizon δ, i.e.
a particle x in the reference configuration can only interact with another particle x′ that lies within a neighborhood 𝓗_x of x, defined as

𝓗_x = { x′ ∈ 𝓡 : |x′ − x| < δ }   (1)

The basis of PD is the integral functional Lu(x, t), which gives the force per unit volume f on a particle x at time t resulting from its interaction with all other particles x′ ∈ 𝓗_x [1]. This integral is defined as

Lu(x, t) = ∫_{𝓗_x} f(η, ξ) dV_{x′}   (2)

As indicated in Eq. (2), a bond force f exists between two particles x′ and x that is defined by the relative position ξ = x′ − x and the relative displacement η = u(x′, t) − u(x, t), where u is the displacement field at time t. The PD equation of motion [1] is defined as

ρ(x) ü(x, t) = Lu(x, t) + b(x, t)   (3)

where b(x, t) represents the body forces on particle x at time t. In its simplest form, the bond force f can be modeled as a linear spring.

III Stress-State Based PD (SSPD)

The PD theory formulates continuum problems in terms of integral equations, unlike its classical counterparts, which rely heavily on partial differential equations and, as previously mentioned, face difficulties whenever discontinuities are present in the continuum. One of the most common discontinuities is crack propagation and growth. Cracks are considered in PD as bond breakage [2], which requires the definition of a limiting bond stretch that, when exceeded, results in bond failure. However, the definition of PD bonds as linear or even non-linear springs represents a somewhat heuristic approach. Thus, the proposed model in this research formulates the bond forces using well-known stress-strain relationships and models the material behavior using established yield criteria. The proposed bond strength depends not only on the bond stretch, called strain in this research, but also on the current state of all bonds connected to the particle. Thus, a stress-state based PD (SSPD) is realized.
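As a concrete illustration of the neighborhood in Eq. (1), the family of each particle on an equidistant 2D lattice can be built as follows. This is a minimal sketch: the 5×5 lattice, the unit spacing and the small tolerance added to the horizon are illustrative choices, not values taken from the paper.

```python
import numpy as np

def families(points, delta):
    """H_x of Eq. (1): indices of all particles x' with 0 < |x' - x| < delta."""
    fams = []
    for i, x in enumerate(points):
        d = np.linalg.norm(points - x, axis=1)
        fams.append([j for j in range(len(points)) if j != i and d[j] < delta])
    return fams

dx = 1.0
pts = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
delta = np.sqrt(2.0) * dx + 1e-9   # horizon used in the case studies, plus a tolerance
fams = families(pts, delta)

# an interior particle of an equidistant lattice is bound in 8 directions,
# a corner particle in only 3
print(len(fams[12]), len(fams[0]))  # particle at (2, 2) and particle at (0, 0)
```

With this family structure in place, the integral of Eq. (2) reduces to a sum of bond forces over each particle's family, weighted by the particle volumes.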
The bond strength f between two particles in an SSPD can be calculated as

f(η, ξ) = A_e · ((|η| − |ξ|)/|ξ|) · E · (η/|η|) ≤ A_e ε_y E (η/|η|)   (4)

where E is the modulus of elasticity (Young's modulus), ε_y is the strain at yield and A_e is the effective area allocated to the bond. The bond described in Eq. (4) is subjected to the strain (|η| − |ξ|)/|ξ|, which must not exceed the rupture strain ε_r, a characteristic value obtained in a uni-axial tension test. If the strain between two particles exceeds ε_r, the bond is broken. Broken bonds in the presented work cannot be healed. The implication of considering strain and Young's modulus in Eq. (4) is that the behavior of materials subjected to tensile stresses is incorporated in the SSPD. Such behavior is usually characterized by a linear part, followed by a non-linear part. A typical stress-strain behavior for steel is shown in Figure 2. To simplify such behavior, strain hardening is neglected and the bi-linear stress-strain curve shown in Figure 3 is adopted.

Fig. 1. Schematic of a body 𝓡.

The stresses are obtained from the bond force f by dividing it by the "tributary area". In the research presented in this contribution, the area is assumed to be

A = c × Δx   (5)

where Δx is the lattice spacing and c is the length of a side of a regular polygon inscribed in a circle of radius r_tr, calculated as

c = r_tr √(2 − 2 cos(2π/n))   (6)

where n is the number of bonds for the particle in question. A particle is usually bound to multiple particles in its vicinity, as shown in Figure 4. The bond forces described in Eq. (4) can reach the forces obtained when stresses within the material reach the yield stress σ_y. Having multiple bonds with stresses that could reach σ_y leads to the conclusion that a particle could be subjected to stresses that produce an equivalent stress well beyond σ_y, even if Eq.
(4) is limited to σ_y. The equivalent stress resulting from a two-dimensional stress state can be calculated as

σ_v = √(σ₁² − σ₁σ₂ + σ₂²)   (7)

After calculating σ_v for a particle, two cases can emerge:

Case 1: σ_v does not cause the material to yield.
Case 2: σ_v causes the material to yield.

Whether σ_v causes the material to yield or not is decided by the selected yield criterion. For example, using the yield criterion according to Tresca [23] results in a yield surface for two-dimensional stresses as shown in Figure 5. If σ_v is outside the envelope defined by the yield criterion, a return mapping should be performed to correct σ_v back to the yield surface, as no stresses higher than the yield stress are acceptable, since strain hardening is not considered in the research presented here. Such return mapping is shown in Figure 6.

Fig. 2. Typical stress-strain curve for steel.
Fig. 3. Bi-linear stress-strain curve.
Fig. 5. Yield surface according to Tresca.
Fig. 6. Return mapping for invalid stress states.

IV Numerical Implementation

The simulation of a material with SSPD starts by generating a reference configuration using a lattice. In the research presented here, an equally spaced lattice is adopted with a spacing of Δx, resulting in particles with a volume of Δx³. After defining the reference configuration, the material is defined by selecting ε_y, ε_r, E and the yield criterion to be considered. For integrating the equation of motion in Eq. (3), the Verlet integration scheme is used. The pseudo-code of SSPD is shown in Algorithm (1), and the return mapping in Algorithm (2).

V Case Studies

Three case studies are presented to investigate the performance of SSPD for 2D problems and gain first insights.
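Before turning to the case studies, the constitutive ingredients above can be collected in a short sketch: the capped bond force of Eq. (4), the tributary area of Eqs. (5)–(6) and the equivalent stress of Eq. (7). The radial scaling used here to return an over-yield stress state to the yield surface is one common choice and is an assumption of this sketch; the paper's own procedure is given in its Algorithm (2).

```python
import math

E, EPS_Y, EPS_R = 200e9, 0.002, 0.025   # steel parameters of the case studies
SIGMA_Y = E * EPS_Y                     # yield stress, 400 MPa

def tributary_area(r_tr, n_bonds, dx):
    # Eqs. (5)-(6): A = c * dx, with c the side of a regular n-gon
    # inscribed in a circle of radius r_tr
    c = r_tr * math.sqrt(2.0 - 2.0 * math.cos(2.0 * math.pi / n_bonds))
    return c * dx

def bond_force(eta_len, xi_len, A_e):
    # Eq. (4): linear in the strain, capped at the yield strain; zero once ruptured
    strain = (eta_len - xi_len) / xi_len
    if abs(strain) > EPS_R:
        return 0.0                      # broken bond, cannot be healed
    capped = max(-EPS_Y, min(EPS_Y, strain))
    return A_e * E * capped

def return_map(s1, s2):
    # Eq. (7) plus a radial return onto the yield surface (no strain hardening)
    sv = math.sqrt(s1 * s1 - s1 * s2 + s2 * s2)
    if sv <= SIGMA_Y:
        return s1, s2                   # case 1: no yielding, no correction
    k = SIGMA_Y / sv                    # case 2: scale back onto the surface
    return s1 * k, s2 * k

dx = 3e-3
A_e = tributary_area(0.583 * dx, 8, dx)
print(bond_force(1.001 * dx, dx, A_e) / A_e)  # elastic branch: stress = E * 0.001
print(bond_force(1.010 * dx, dx, A_e) / A_e)  # capped at the yield stress
```

Dividing the bond force by A_e recovers the stress carried by the bond, which is what the yield check of Eq. (7) operates on.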
Of interest are the force-displacement behavior and the failure patterns. The former is important to check that the suggested stress-state peridynamics is able to capture the behavior of the test specimen and to compare it with the well-known stress-strain behavior obtained in practice, whereas the latter is important to check whether the failure patterns meet the expectations based on sound engineering judgment. For all the cases stated below, the horizon was set to δ = √2 Δx, i.e. for an equidistant lattice, a particle is bound to its neighbors from 8 directions. The radius required to calculate the "tributary area" of the bonds was taken as r_tr = 0.583 Δx. Furthermore, the material of all cases is assumed to be steel, with E = 200 GPa, ε_y = 0.002, ε_r = 0.025 and ρ = 7850 kg/m³. The analysis is performed under gradually increasing external forces. This is realized using the function

F(t) = σ_t (Δx)² / (1 + e^(a[−(20/b)t + 10]))   (8)

where σ_t is the targeted stress level. A typical stress history is shown in Figure 7.

2D Bar Subjected to Tension

To start with, a simple bar, shown in Figure 8, is subjected to tensile forces and analyzed. The bar dimensions were 60 mm × 21 mm × 3 mm, and Δx was assumed to be 3 mm. The discretized bar is shown in Figure 9, in which the center of each element is shown as a node and the bonds are shown as links.

Fig. 7. Typical external applied stress history.
Fig. 8. 2D bar subjected to forces.
Fig. 9. Discretization of the 2D bar.

Case 1: σ_t ≤ σ_y

To test the accuracy of SSPD, a targeted stress level σ_t well below the yield stress σ_y is applied in Eq. (8). Thus, σ_t = 250 MPa is selected. The displacements resulting from the SSPD analysis of the aforementioned bar are shown in Figure 10 and Figure 11 for the x-direction and y-direction, respectively.
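The load ramp of Eq. (8) can be sketched directly. The interpretation of a as a steepness parameter and b as the ramp duration is inferred from the formula, and the values below are illustrative rather than taken from the paper.

```python
import math

def applied_force(t, sigma_t, dx, a, b):
    # Eq. (8): logistic ramp from ~0 toward the target force sigma_t * dx^2
    return sigma_t * dx ** 2 / (1.0 + math.exp(a * (-20.0 * t / b + 10.0)))

sigma_t, dx = 250e6, 3e-3          # case-1 target stress and lattice spacing
target = sigma_t * dx ** 2
print(applied_force(0.0, sigma_t, dx, a=1.0, b=1.0) / target)  # ~0 at t = 0
print(applied_force(1.0, sigma_t, dx, a=1.0, b=1.0) / target)  # ~1 at t = b
```

The gradual ramp avoids exciting the violent transient vibrations that an instantaneously applied load would cause in the explicit Verlet integration.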
For comparison purposes, the resulting displacements using a conventional finite element analysis software (Autodesk Robot) are shown in Figure 12 and Figure 13, respectively. A qualitative comparison of the displacement fields shown in Figures 10 through 13 shows that both methods provide similar displacement fields. Furthermore, a quantitative comparison shows that the maximum displacement U_x is 0.70 mm and 0.73 mm for SSPD and FEM respectively, indicating an error of 0.03 mm or 4.3%. Similarly, the maximum displacement U_y is 0.042 mm and 0.040 mm for SSPD and FEM, indicating an error of 0.002 mm or 5%.

Fig. 10. SSPD results for U_x.
Fig. 11. SSPD results for U_y.
Fig. 12. FEM results for U_x.
Fig. 13. FEM results for U_y.

Case 2: σ_t > σ_y

For the material characteristics defined in this research (E = 200 GPa, ε_y = 0.002), the yield stress σ_y is expected to be 400 MPa. Thus, a stress level σ_t of 500 MPa is selected. Multiple strain gauges are defined along the centerline of the bar, namely at x = [16.5, 31.5, 46.5] mm. The resulting stress-strain diagram measured at the strain gauges is shown in Figure 14. This stress-strain diagram was obtained just before failure, which is evident from the strain of the sensor at x = 46.5 mm approaching a value of 0.025, the value set for ε_r. After failure, the stress-strain diagram becomes chaotic, as the bar starts vibrating violently. The linear portion of the stress-strain diagram has a slope of 200 GPa, which corresponds to the provided Young's modulus. The inelastic part follows a profile corresponding to a force-driven tensile test with no strain hardening. The bar just after exceeding ε_r is shown in Figure 15. It can be seen that failure occurs near model non-discontinuities, i.e. near the boundary conditions and force application particles.

2D Bar Subjected to Compression

The same bar is used in a compression test. Here, a targeted stress level σ_t of 500 MPa is assumed, and identical SSPD parameters are used. A stress-strain diagram and a failure pattern identical to those shown in Figures 14 and 15 are obtained. This, however, is not compatible with the stress-strain diagrams obtained from usual compression tests, which indicates an area for future investigation. The SSPD fails to capture compression correctly, mainly due to the absence of contact forces between the particles used in the PD analysis. In the current proposed SSPD, particles are allowed to get "crushed", i.e. to have a distance of zero to their neighbors. The distance can even become negative, which indicates that a particle has been crushed and has pierced through its neighbor. Such behavior is non-physical and requires further investigation. A recommendation that can be given here is to include contact forces within the framework of SSPD [22]. This is going to be the subject of future investigations.

2D Bar with Notches Subjected to Tension

The bar described previously is notched from both sides and used in a tensile test. The bar is shown in Figure 16. The targeted stress level σ_t of 500 MPa is applied. The resulting stress-strain diagram, obtained just before failure, is shown in Figure 17. The linear portion of the stress-strain diagram has a slope of 200 GPa, which corresponds to the provided Young's modulus. The inelastic part follows a profile corresponding to a force-driven tensile test with no strain hardening. The bar just after exceeding ε_r is shown in Figure 18. It can be seen that failure occurs near model non-discontinuities, i.e. near the crack as well as the boundary conditions and force application particles.

Fig. 15. 2D bar just after failure.
Fig. 18. 2D bar with notches after failure.
Fig. 14.
2D bar stress-strain diagram.
Fig. 16. Discretization of the 2D bar with a notch.
Fig. 17. Stress-strain diagram for the bar with notches.

Conclusion

A newly developed stress-state based peridynamics model (SSPD) was presented in this contribution and applied to three numerical cases:

- The first numerical case applies stresses well below the yield stress and shows that the proposed SSPD conforms well with FEM results, with an error of around 5%. Furthermore, it shows the application of stresses well beyond the yield stress, resulting in stress-strain diagrams that conform with the expected diagram when a bi-linear stress-strain relationship is assumed.
- The second numerical case shows a bar under compression. The results are identical to those obtained under tension.
- The SSPD fails to capture compression correctly, mainly due to the absence of contact forces between the particles used in the PD analysis. A recommendation that can be given here is to include contact forces within the framework of SSPD. This is going to be the subject of future investigations.
- The third numerical case shows a bar with notches to encourage failure near those notches. The failure does indeed occur near those notches, but still occurs near the force application points too.

For future research, it is planned to investigate the inaccuracies with regard to compression simulation and to derive ways of including contact forces to prevent the "crushing" of the mesh, as well as to implement a stress-strain diagram that features strain hardening, cyclic load analysis and fatigue. Furthermore, tests with 3D bar models are planned and will be compared to the results obtained from 2D bars.

References

[1] S. A. Silling, "Reformulation of elasticity theory for discontinuities and long-range forces," J. Mech. Phys. Solids, vol. 48, no.
1, pp. 175–209, 2000.
[2] S. A. Silling, "Introduction to peridynamics," in Handbook of Peridynamic Modeling, Chapman and Hall/CRC, 2016, pp. 63–98.
[3] N. Moës, J. Dolbow, and T. Belytschko, "A finite element method for crack growth without remeshing," Int. J. Numer. Methods Eng., vol. 46, no. 1, pp. 131–150, 1999.
[4] T. Fries and T. Belytschko, "The extended/generalized finite element method: an overview of the method and its applications," Int. J. Numer. Methods Eng., vol. 84, no. 3, pp. 253–304, 2010.
[5] A. Agwai, I. Guven, and E. Madenci, "Predicting crack propagation with peridynamics: a comparative study," Int. J. Fract., vol. 171, no. 1, pp. 65–78, 2011.
[6] S. A. Silling and E. Askari, "A meshfree method based on the peridynamic model of solid mechanics," Comput. Struct., vol. 83, no. 17–18, pp. 1526–1535, Jun. 2005, doi: 10.1016/j.compstruc.2004.11.026.
[7] S. A. Silling, M. Epton, O. Weckner, J. Xu, and E. Askari, "Peridynamic states and constitutive modeling," J. Elast., vol. 88, no. 2, pp. 151–184, 2007.
[8] S. A. Silling and R. B. Lehoucq, "Peridynamic theory of solid mechanics," Adv. Appl. Mech., vol. 44, pp. 73–168, 2010.
[9] Y. Wang, X. Zhou, Y. Wang, and Y. Shou, "A 3-D conjugated bond-pair-based peridynamic formulation for initiation and propagation of cracks in brittle solids," Int. J. Solids Struct., vol. 134, pp. 89–115, 2018.
[10] Y. Wang, X. Zhou, and Y. Shou, "The modeling of crack propagation and coalescence in rocks under uniaxial compression using the novel conjugated bond-based peridynamics," Int. J. Mech. Sci., vol. 128, pp. 614–643, 2017.
[11] S. Nikravesh and W. Gerstle, "Improved state-based peridynamic lattice model including elasticity, plasticity and damage," Comput. Model. Eng. Sci., vol. 116, no. 3, pp. 323–347, 2018.
[12] C. T. Nguyen and S. Oterkus, "Ordinary state-based peridynamic model for geometrically nonlinear analysis," Eng. Fract. Mech., vol. 224, p. 106750, 2020.
[13] B. Kilic and E.
Madenci, "Coupling of peridynamic theory and the finite element method," J. Mech. Mater. Struct., vol. 5, no. 5, pp. 707–733, 2010.
[14] E. Oterkus, E. Madenci, O. Weckner, S. Silling, P. Bogert, and A. Tessler, "Combined finite element and peridynamic analyses for predicting failure in a stiffened composite curved panel with a central slot," Compos. Struct., vol. 94, no. 3, pp. 839–850, 2012.
[15] W. Liu and J.-W. Hong, "A coupling approach of discretized peridynamics with finite element method," Comput. Methods Appl. Mech. Eng., vol. 245, pp. 163–175, 2012.
[16] Y. H. Bie, X. Y. Cui, and Z. Li, "A coupling approach of state-based peridynamics with node-based smoothed finite element method," Comput. Methods Appl. Mech. Eng., vol. 331, pp. 675–700, 2018.
[17] R. W. Macek and S. A. Silling, "Peridynamics via finite element analysis," Finite Elem. Anal. Des., vol. 43, no. 15, pp. 1169–1178, Nov. 2007, doi: 10.1016/j.finel.2007.08.012.
[18] S. W. Han et al., "Peridynamic direct concentration approach by using ANSYS," in 2016 IEEE 66th Electronic Components and Technology Conference (ECTC), May 2016, pp. 544–549, doi: 10.1109/ECTC.2016.251.
[19] C. Diyaroglu, S. Oterkus, E. Oterkus, and E. Madenci, "Peridynamic modeling of diffusion by using finite-element analysis," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 7, no. 11, pp. 1823–1831, Nov. 2017, doi: 10.1109/TCPMT.2017.2737522.
[20] Z. Yang, E. Oterkus, C. T. Nguyen, and S. Oterkus, "Implementation of peridynamic beam and plate formulations in finite element framework," Contin. Mech. Thermodyn., vol. 31, no. 1, pp. 301–315, Jan. 2019, doi: 10.1007/s00161-018-0684-0.
[21] E. Madenci and E. Oterkus, "Peridynamic thermal diffusion," in Peridynamic Theory and Its Applications, Springer, 2014, pp. 203–244.
[22] A. Javili, R. Morasata, E. Oterkus, and S. Oterkus, "Peridynamics review," Math. Mech.
Solids, vol. 24, no. 11, pp. 3714–3739, Nov. 2019, doi: 10.1177/1081286518803411.
[23] H. Tresca, "Mémoire sur l'écoulement des solides à de fortes pressions," Acad. Sci. Paris, vol. 2, no. 1, p. 59, 1864.

Mahmoud M. Jahjouh is an assistant professor at the University College of Applied Sciences (UCAS) and is the head of its Engineering and Information Systems Department. Dr. Jahjouh obtained his Ph.D. in structural engineering from the Leibniz University of Hannover and is interested in topics related to structural optimization, shape optimization, modeling and damage detection.

Transactions Template Journal of Engineering Research and Technology, Volume 4, Issue 1, March 2017

Figure 1: Data mining process [20].

Improving Premium Domain Names Registration for .ps Domain ccTLD Basing on Knowledge Discovery and Data Mining Techniques

Ibrahim Alfayoumi 1, Wael Al Sarraj 2
Faculty of Information Technology, Islamic University of Gaza, Gaza, Palestine, me@ibrahim.ps, wsarraj@iugaza.edu.ps

Abstract—Country code top-level domains represent the identity of their own country. About ten million Palestinians spread around the world belong or affiliate to the domain name (.ps), which represents their identity although they do not have their own state. In addition, (.ps) represents Palestinians' history, culture and lifestyle, as well as the Palestine homeland issue on the cyber and internet space. Domain names are valuable in today's e-commerce and are considered one of the potential intellectual capital assets worldwide. Premium domain names (PDN) are generic single keywords that are high-value, memorable and easily marketable; their cost can be significantly more than a typical domain purchase due to their perceived higher value.
In this paper, the research is based on knowledge discovery and data mining techniques to discover the regions of the world that pay the most attention to registering .ps domains, in order to manage acquisition knowledge, which will benefit in providing marketing plans and identifying stakeholders and target customers. Two phases were performed to achieve the required work plan: the data mining (knowledge discovery) phase and the knowledge management (sharing and planning) phase. The techniques in both phases were chosen to give the best accuracy and particularity of the target audience involved in the study. The results show that what we suggest could be efficiently used to recognize patterns in such data set behavior to generate new marketing strategies and plans.

Index Terms—DNS, data mining, DNS mining, classification, knowledge discovery, domain name marketing.

I Introduction

In today's e-commerce, domain names are a valuable intellectual capital asset: domains are unique, each representing a unique IP address or a location/address on the internet; they often comprise words that introduce valuable trademarks, and businesses spend huge amounts of money on marketing their domain names to facilitate access and traffic to their websites. Premium domain name prices are usually higher than those of regular domains because of their high marketing value. The importance of premium domains appears in all novel internet usages, such as the Internet of Things (IoT), e-marketing and e-branding [1-3]. Country code top-level domains represent the identity of their own country. About ten million Palestinians spread around the world belong or affiliate to the domain name (.ps), which represents their identity although they do not have their own state. The .ps domain also represents Palestinians' history, culture and lifestyle, in addition to the Palestine homeland issue on the cyber and internet space.
In addition to premium names, the (.ps) domain is characterized by its letters, which can be used to create distinctive names, for example (tri.ps) as a domain that could represent travel and tourism companies, (ma.ps) which represents maps and GPS locations, and (li.ps) for cosmetics companies. All of the above gives a clear impression of the importance of this domain name as intellectual capital that could support the development of perfect marketing plans and strategies and could make remarkable revenue. Data mining (DM) is a process of discovering patterns in data sets; a pattern is an arrangement of repeated parts, which represents knowledge, or we can say that it is a set of rows that share the same values in two or more columns of a data table. DM involves machine learning, artificial intelligence, statistics, and database systems. Extracting information from data and transforming it into knowledge as an understandable structure for further use is DM [4, 5]. DM also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating, as shown in Figure 1. DM is the analysis step of the knowledge discovery in databases (KDD) process [6, 7].

Ibrahim S. Alfayoumi, Wael W. Alsarraj / Improving Premium Domain Names Registration for .ps Domain ccTLD Basing on Knowledge Discovery and Data Mining Techniques (2017)

Figure 2: Knowledge management processes [21].

Knowledge management (KM) is a process of creating and utilizing knowledge [8]. KM processes, as shown in Figure 2, can be integrated with a corporation's systems, which can help marketers get the knowledge easily [9]. Good marketing decisions are based on knowledge that comes from customers' behavior.
Such knowledge is considered a key for the marketing functions and can be found in the organization's databases, but most of it is hidden [10]. Marketing decisions are very important for any organization to increase profit and can affect customers' behavior. In this article, we propose to apply association rules and classification methods as data mining techniques to discover the regions of the world that pay the most attention to registering domains under the Palestinian country code top-level domain (ccTLD) (.ps) as a case study. As a secondary goal, we will try to find how many Palestinians register such domains through international registrars, which will show whether or not the Palestinians in the diaspora are interested in their national domain. The Palestinian National Internet Naming Authority (PNINA) is the certified registry organization for the Palestinian ccTLDs (.ps and .فلسطين). This study will give PNINA good knowledge, which can be used to improve branding and premium domain marketing. The two phases we propose are as follows: data mining (DM) and knowledge management (KM), each divided into two steps. The DM phase is the knowledge discovery phase, which contains discovering useful patterns using association rules, and classification; the classification method we applied, k-nearest neighbor (k-NN), shows an accuracy of up to 96.5%, which is the best figure we have obtained compared to other mining techniques. The second phase is KM, which contains the process of sharing the discovered patterns among employees and experts of the .ps managing staff, and the decision step to generate real marketing plans that could achieve the overall goal of the organization.
The rest of the paper is organized as follows: Section 2 discusses related works, Section 3 presents the methodology, Section 4 shows the results and their discussion, Section 5 highlights some useful recommendations that the .ps ccTLD could use to improve domain registration, and finally Section 6 concludes the paper.

II Related Works

A systematic methodology that uses data mining and knowledge management techniques was proposed by Shaw, M.J., et al. to manage marketing knowledge and support marketing decisions. This methodology can be the basis for enhancing customer relationship management [8]. Shu-Hsien Liao, Yin-Ju Chen and Hsin-Hua Hsieh analyzed consumer adumbration, lifestyle habits and purchasing behavior in an application of internet marketing to the direct selling industry and the cosmetics market in Taiwan by implementing association rules and cluster analysis as approaches for data mining [9]. Sérgio Moro and Raul M. S. Laureano described an implementation of a DM project based on the CRISP-DM methodology. Their data set was collected from a Portuguese marketing campaign related to bank deposit subscription. Their business goal was to find a model that can explain the success of a contact. They claimed that their model can increase campaign efficiency, helping in a better management of the available resources and the selection of a high-quality and affordable set of potential buying customers [10]. Social networks have generated great expectations connected with their potential business value. Surma, J. and A. Furmanek proposed research to prove that data mining techniques can bring statistically significant improvement in marketing response accuracy throughout the virtual community. In their test, a classification and regression tree approach was used to generate a classification tree to formulate specific rules to identify the proper target group, and they showed that it is possible to improve marketing response [11]. In 1997, Karl M.
Wiig clarified that progressive managers consider intellectual capital management (ICM) and knowledge management (KM) to be vital for sustained viability. Recent practices support this notion and have provided important approaches and tools; KM supports ICM by focusing on detailed, systematic, explicit processes, there is overlap and synergy between ICM and KM, and advanced enterprises pursue deliberate strategies to coordinate and exploit them [12]. Ya-Hui Ling selected samples from a list of the top 1,000 Taiwanese companies using a type of purposive sampling. The selection criteria required sample companies to be located in Taiwan and to compete globally, to confirm that intellectual capital is positively associated with a firm's global performance and that knowledge management strategy has a moderating effect on the relationship between intellectual capital and global performance [13]. Zekić-Sušac, M., & Has, A. claimed that all previous research on integrating DM in KM has shown the success of data mining methods in marketing, but that the integration into a knowledge management system still needs more investigation. So, they suggested an integration of two data mining techniques, association rules and neural networks, in marketing modeling, to integrate with knowledge management and produce better marketing decisions [14].

Figure 3: The proposed integrated methodology.
Figure 4: Sample of the data set records and attributes used.

Knowledge management and data mining techniques are really useful for marketing, especially for organizations which have huge amounts of purchase transactions. Knowledge management and data mining can also help to increase profit because of the correct decisions made by marketers. Al Essa, A., & Bach, C.
showed how knowledge management and data mining can be used to provide better answers from huge amounts of data on customers and purchase transactions. The goal of their article is to demonstrate the importance of using knowledge management and data mining for supporting marketing decisions. It shows how data mining techniques and tools can extract hidden purchase patterns that can help marketers make better decisions [15]. Despite the valuable contributions of the previous related works in helping us recognize and refine our research methodology, our research focuses on the importance of integrating knowledge discovery as DM and knowledge management processes, especially in domain name marketing. We would also like to improve the (.ps) domain name as an intellectual capital asset which represents the Palestinian identity, Palestinians' history, culture and lifestyle, and further the Palestine homeland issue on the cyber and internet space.

III Methodology

In this section, we describe the data mining techniques used in this research, including data preparation and representation, association rules and data classification, as well as how we integrated DM with KM to gain knowledge and use it for improving marketing. Figure 3 shows the integrated methodology, which consists of two main phases: the data mining phase and the knowledge management phase. A detailed description of the methodology and its phases is given in Section D.

A Data Preparation and Representation

The Palestinian National Internet Naming Authority (PNINA), the registry organization for the .ps ccTLD, uses an open source domain registry system called the CoCCA registry system.
The dataset contains 12 attributes and 489 records for the 2- and 3-character and premium domains, which have a high value in terms of registration fees. It contains all domain registration information about the registrar and the owner of the domain, including domain name, registrar name, ID, email, phones, country TLD and address, and the same information for the owner [22]. PNINA has 96 registrar companies, 21 of which are international while the others are spread over the whole of Palestine. The CoCCA registry system adds an attribute for each registrar and owner, the ccTLD, which identifies the nationality of the registrar company and the owner of the domain according to other attributes like phone, fax or address, as shown in Figure 4. We used a number of operators to prepare the data set for the subsequent mining techniques, such as replacing missing values, filtering, sampling, outlier handling algorithms and attribute selection.

B Applying Association Rules (ARs)

We use the association rules (AR) data mining technique for discovering relations between patterns in a large database to identify strong rules that could be used to capture knowledge [16]. In our case, we can predict that ARs may give us a good result when taking the owner's country TLD attribute as a label to generate all ARs according to the nationality of the owner.

C Applying Classification Methods

We applied the k-nearest neighbor (k-NN) classification data mining technique on our data set, which we labeled with "owner_country_tld". This may help us find which countries or regions around the world are interested in registering under the Palestinian ccTLD. The k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression [17]. The input is a feature node, which is classified according to its similarity to the nearest k nodes in the trained feature space. In classification, the output is class membership.
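The majority-vote rule used by k-NN can be illustrated with a toy sketch; the two-feature points and labels below are hypothetical stand-ins, not the actual PNINA records.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Assign x to the class holding the majority among its k nearest neighbors."""
    dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each record
    nearest = np.argsort(dist)[:k]               # indices of the k closest records
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# hypothetical encoded records labeled by the owner's country TLD
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = ["ps", "ps", "ps", "us", "us"]
print(knn_predict(X, y, np.array([0.15, 0.15]), k=3))  # -> "ps"
```

In practice the attribute values must first be encoded numerically, and the choice of k is tuned against the dataset, as discussed next.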
an input node is classified by a majority vote of its neighbors and assigned to the class most common among its k nearest neighbors. k is a positive integer that has to be given; it represents the number of nodes against which the input is compared to find its best class. the accuracy of the k-nn algorithm depends on k, and k may depend on the structure of the dataset itself [18, 19]. ibrahim s. alfayoumi, wael w. alsarraj / improving premium domain names registration for .ps domain cctld basing on knowledge discovery and data mining techniques (2017). figure 5: gained association rules. figure 6: k-nn confusion matrix. figure 7: k-nn graphic count. d integrating dm and km (dki model) we propose a methodology that shows how we integrated dm techniques into km to generate a good marketing plan for improving premium domain marketing, as shown in the figure. we introduce our methodology as two interconnected phases: data mining (knowledge discovery) and knowledge management (sharing and planning). each phase consists of two steps besides data preparation, and these four steps should be accomplished in order. the data mining phase consists of two steps. step 1 applies association rules (ar's) to discover interesting patterns that should give a vision of the relationship between registrars and customers. ar's provide patterns of the form x → y (if x then y), where x is the registrar and y is the customer, or vice versa. step 2 is the classification method, where the dataset is classified using the k-nearest neighbor (k-nn) algorithm, which classifies customers according to their nationality to find the regions of the world most interested in registering premium domain names. the knowledge management phase also consists of two steps.
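the majority-vote rule described above can be sketched in plain python (a toy illustration with made-up 2-d feature vectors and labels, not the rapidminer setup used in the paper):

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """classify `query` by majority vote among its k nearest
    training nodes, using euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: math.dist(train[i], query))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# hypothetical 2-d feature vectors labeled by owner country tld
train = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8), (0.2, 0.1)]
labels = ["ps", "ps", "us", "us", "ps"]
pred = knn_predict(train, labels, (0.15, 0.15), k=3)  # "ps"
```

a query near the cluster of "ps" points is assigned "ps", since all three of its nearest neighbors carry that label; with k = 1 the prediction follows the single closest node, which is why the choice of k matters.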
step 1 is about sharing the interesting patterns and classification results about customer profiles with the employees, or with a set of employees involved in knowledge management, to collect new ideas, rank them and select the ideas that could be transformed into good marketing strategies. this step should achieve visibility and reasoning, enabling employees to participate actively in generating and innovating new marketing plans that achieve the main goal and develop the organizational environment and learning. step 2 is usually managed by the marketing and sales managers or the general manager, as in pnina, together with other professional employees, to generate real marketing plans that achieve the organization's overall goal of improving registration of premium domain names. these plans mainly focus on the following types of marketing strategies: (1) using media and techniques for product promotion, (2) revising the pricing of such domain names, (3) developing a full package for the domain name, such as offering the domain with a strategy of use based on an innovative project idea, (4) gathering all expected names in a domain field and promoting them directly to the targeted customers, (5) finding new registrars in neglected regions, and (6) offering domain names according to countries' and people's culture and policies. this step's effects should increase sales, the cross-selling index and the competitiveness of the company. iv results and discussion in this section we list the results obtained from each step of the two phases of the dki model described in the methodology section. a phase 1: data mining (dm) step 1 figure 5 shows two selected association rules; we can see that palestinian owners cooperate with local registrars to register domains. from the second selected association rule, we conclude that all local registrars are selling these domains to palestinian customers.
for the first rule, we can explain it by the price range at which pnina sells domains to local and international registrars: people try to buy at the lowest price. for the second rule, we can attribute it to weakness in marketing and technical capacity; most local registrars do not use a fully automated registration system that helps customers buy and modify their domains. step 2 the k-nn classifier confusion matrix in figure 6 shows an accuracy of up to 96.5%, which is a good number that we can rely on. k-nn built 10 classes depending on country location. the count graph in figure 7 shows that local sales have the biggest share, the us comes second and switzerland third. the others come last: china (cn), great britain (gb), germany (de), indonesia (id), czech republic (cz), norway (no) and sweden (se). we conclude that we need to concentrate on global marketing, especially in europe and the us. other places such as africa, latin america and australia can be future work. b phase 2: knowledge management (km) step 1 the activities of sharing interesting patterns and classification results about customer profiles with employees, or with a set of employees, to collect new ideas, rank them and select the promising ones can be implemented with the help of the km software tools available today; the choice of a tool should depend on the functionalities needed and its cost. sharing can contribute to the organizational environment and learning, generate new ideas on marketing strategies and achieve the main overall goal. step 2 appointed managers and officials should take this step seriously, because we count on it to generate real marketing plans that achieve the organization's overall goal.
v recommendations as a result of the four steps in the two phases of the methodology, and as the results in figure 7 show, registration of the .ps domain is concentrated in palestine. this draws attention to the fact that using media and techniques for product promotion should increase the targeted audience worldwide. with reference to the types of marketing strategies mentioned in subsection iii.d, we present the following recommendations:
- use media and techniques for product promotion, such as: sponsoring contests, registrar referral incentives as a way of encouragement, promotion while supporting causes and charity, branded promotional gifts, registrar and customer appreciation events, and customer surveys and feedback.
- revise the pricing of such domain names to increase sales and encourage customers to buy.
- develop a full package for the domain name, such as offering the domain with a strategy of use based on an innovative project idea.
- gather all expected names in a domain field and promote them directly to the targeted customers.
- find new registrars in neglected regions, thus creating new points of sale and facilitating dealing with customers.
- offer domain names according to countries' and people's culture and policies.
- identify renowned brands in each market area and begin a promotion plan for their names under the cctld (.ps).
vi conclusion integrating data mining techniques such as association rules and classification methods into knowledge management facilitates the process of discovering knowledge and making decisions for generating marketing strategies and solutions. this approach has been used in previous research; the originality here is applying it to the .ps domain name as a case study. as a result of the low prices of local registrars, local registrars register more than 94% of the (.ps) domains; the variation in prices between international and local registrars gives locals a big chance to sell more domains.
however, international customers do not register through a local registrar, because of the weakness in marketing and technical capacity: local registrars do not use a fully automated registration system that helps customers buy and modify their domains. the classification methodology shows that most interest in registering such domains is concentrated in palestine, with a small but noticeable share in the us and switzerland (ch). we also find some registrations in china (cn), great britain (gb), germany (de), indonesia (id), czech republic (cz), norway (no) and sweden (se). this means that we still need good marketing plans for such domains in europe, asia and the usa. some markets, such as africa, australia, canada and south america, are absent; these markets must be exploited well in order to expand the target customers and thus increase sales. references [1] smith, m.d. and e. brynjolfsson, consumer decision-making at an internet shopbot: brand still matters. the journal of industrial economics, 2001. 49(4): p. 541-558. [2] murphy, j., l. raffa, and r. mizerski, the use of domain names in e-branding by the world's top brands. electronic markets, 2003. 13(3): p. 222-232. [3] atzori, l., a. iera, and g. morabito, the internet of things: a survey. computer networks, 2010. 54(15): p. 2787-2805. [4] chakrabarti, s., et al., data mining curriculum: a proposal (version 1.0). intensive working group of acm sigkdd curriculum committee, 2006: p. 140. [5] kriegel, h.-p., et al., future trends in data mining. data mining and knowledge discovery, 2007. 15(1): p. 87-97. [6] fayyad, u., g. piatetsky-shapiro, and p. smyth, from data mining to knowledge discovery in databases. ai magazine, 1996. 17(3): p. 37. [7] han, j., j. pei, and m. kamber, data mining: concepts and techniques. 2011: elsevier. [8] shaw, m.j., et al., knowledge management and data mining for marketing. decision support systems, 2001. 31(1): p. 127-137. [9] liao, s.-h., y.-j. chen, and h.-h.
hsieh, mining customer knowledge for direct selling and marketing. expert systems with applications, 2011. 38(5): p. 6059-6069. [10] moro, s., r. laureano, and p. cortez, using data mining for bank direct marketing: an application of the crisp-dm methodology. in proceedings of the european simulation and modelling conference (esm'2011). 2011. eurosis. [11] surma, j. and a. furmanek, improving marketing response by data mining in social network. in asonam. 2010. [12] wiig, k.m., integrating intellectual capital and knowledge management. long range planning, 1997. 30(3): p. 399-405. [13] ling, y.-h., the influence of intellectual capital on organizational performance—knowledge management as moderator. asia pacific journal of management, 2013. 30(3): p. 937-964. [14] zekić-sušac, m. and a. has, data mining as support to knowledge management in marketing. business systems research, 2015. 6(2): p. 18-30. [15] al essa, a. and c. bach, data mining and knowledge management for marketing. international journal of innovation and scientific research, 2014. 2(2): p. 321-328. [16] piatetsky-shapiro, g., discovery, analysis, and presentation of strong rules. knowledge discovery in databases, 1991: p. 229-238. [17] altman, n.s., an introduction to kernel and nearest-neighbor nonparametric regression. the american statistician, 1992. 46(3): p. 175-185. [18] phyu, t.n., survey of classification techniques in data mining. in proceedings of the international multiconference of engineers and computer scientists. 2009. [19] archana, s. and k. elangovan, survey of classification techniques in data mining. international journal of computer science and mobile applications, 2014. 2(2): p. 65-71. [20] becerra-fernandez, i., a.j. gonzález, and r. sabherwal, knowledge management: challenges, solutions, and technologies.
2004: pearson/prentice hall. p. 367. [21] becerra-fernandez, i., a.j. gonzález, and r. sabherwal, knowledge management: challenges, solutions, and technologies. 2004: pearson/prentice hall. p. 262. [22] registered premium domain names under the (.ps) cctld, the palestinian national internet naming authority (pnina), 2016. ibrahim s. alfayoumi is the dns administrator of the palestinian country code top level domain (.ps) at the palestinian national internet naming authority (pnina). he holds a bachelor of science degree in computer engineering from the islamic university of gaza in palestine (2006) and is studying for a master's degree in information technology at iug. his main research interests are in the fields of dns security, data mining, dns mining and knowledge management. wael f. al sarraj is an assistant professor of computer science at the faculty of information technology at the islamic university of gaza, palestine. he holds a bachelor of science degree in computer engineering from iug in palestine (2000), a master's degree in electronic business from unile in italy (2002) and a ph.d. degree in computer science from vub in belgium (2012). his main research interests are in web engineering and human-computer interaction, in particular the engineering of web information systems and applications involving web technology, end-user modelling, web usability evaluation, adaptation and personalization. journal of engineering research and technology, volume 3, issue 3, september 2016. malware detection based on permissions on android platform using data mining tawfiq s. barhoom 1, mohammed i. nasman 2 1 islamic university gaza, tbarhoom@iugaza.edu.ps 2 islamic university gaza, mnasman@gmail.com abstract— with the spread of smart mobile devices to nearly every person, the android operating system is dominating the mobile operating systems.
due to the weak policy for submitting applications to the google play store, attackers developed malware to attack users of the android operating system with malware applications or by including malicious code in applications. research has been done in this area, but existing solutions require installing the applications to monitor the malware behavior, or taking action after installing the application. we propose a new method using data mining to detect new and unknown malware using the applications' permissions as base features. to create a binary dataset we collected 103 benign and malware android app samples; the dataset consists of five different feature sets built from different numbers of attributes and conditions. different evaluation measures were used to evaluate the proposed method; the results show that we achieved 96.74% f-measure and 0.993 area under the roc curve. index terms— malware, android, data mining, permissions, apk, classifications. i. introduction with the rapid growth of android applications every day, there are growing threats to mobile users, who install more malware without the ability to detect it before installing the applications on their devices. the name malware comes from "malicious software": software designed to secretly access a system without the device owner's knowledge. malware can affect mobile resources or just make the device unresponsive to the user, and it may exhibit dangerous behaviors such as stealing private information without the user noticing any harmful action [1]. malware has different types, the same for pcs and mobiles, which can be listed in different categories such as adware, bots, rootkits, spyware, trojan horses ("trojans"), viruses and worms.
according to data from the international data corporation (idc), the worldwide smartphone market grew 27.2% year over year in the second quarter of 2014, with just over a third of a billion shipments at 335 million units; 2014 promised to close at nearly 1.3 billion shipments, with android taking the lion's share, spread across over 180 tracked vendors [2]. market research firm strategy analytics estimated that in the second quarter of 2014 the android platform's share of the global market reached 84.6 percent [3]. for mobile devices that use android as their platform, the official way to install applications is the google play store [4], which serves as a repository of applications developed for android and is installed by default on all android devices. the current review process for applications submitted by developers to the google play store takes only two hours [5], compared to six days for the apple app store [5]. google may phase out discovered malware, but only after it has spread; for example, more than 50 applications on google's android market were discovered to be infected with malware called "droid dream", which can compromise personal data by taking over the user's device, and were "suspended" from the store [6]. current mobile malware detection tools use pattern recognition to identify malware, but they fail to distinguish the threats. android controls access to the device's resources (such as writing files, accessing the internet, location, sms, etc.) with a permissions system; the permissions are defined for each android application package (apk) in a special file called "androidmanifest.xml". any application that needs to access any of these resources declares the required permissions in "androidmanifest.xml" at development time; after the application is compiled and uploaded to the google play store, the permissions required are shown to users when installing the application.
but with the lack of understanding and knowledge of most users, they may install an application that has access to a sensitive resource with a potentially harmful use. accordingly, a new method to recognize malware applications before they are installed by users is important to prevent malware from attacking their mobile resources and data. this paper focuses on a new method for detecting malware based on the permissions required by applications, using classification techniques to distinguish malware apps from benign ones. ii. related work many researchers have used different approaches to detect malware; some methods require processing on the mobile device, while others do the processing in the cloud using data collected on the mobile device. sanz et al. [7] presented a detection method using string analysis that obtains the strings from an android application by disassembling it, extracts the strings in const-string instructions, and uses machine learning to train on the dataset and assign a category (malware or goodware). the problem with this method is that malware developers may use non-english languages in const-strings, making the malware undetectable by this method; likewise, if the developer encrypts the strings, the malware will not be detectable either.
burguera and zurutuza [8] developed a framework for detecting malware on the android platform. the framework consists of multiple components: data acquisition, using a small application called "crowdroid" installed from the google play store, which monitors linux calls on the device and compares them with the same application downloaded from other sources, so it may detect whether the application has been modified with malware code; data manipulation, which manages and parses the data collected from android users; and malware analysis and detection, which analyzes and clusters the feature vectors extracted by the other components. the method consists of several tools on the client and server side; its main problem is that if a malware application is submitted to the google play store and has no other sources, it will not be detected as malware. yerima et al. [9] proposed and evaluated a new approach for detecting android malware by reverse engineering android applications using an apk analyzer, building a dataset from a set of 58 properties from api calls, commands and permissions, and then using a bayesian classifier for the learning and detection stages. the study showed that the proposed method has better detection rates than signature-based anti-virus, but the method requires disassembly of the application and then extraction of the used features, which may not be suitable as a preventive method. cheng et al. [10] presented a collaborative virus detection and alert system for smartphones (smartsiren); they used behavioral analysis of smartphone viruses by ontology, certainty factor (cf) function generation based on certainty factor theory, and a reasoning process for detecting viruses with an fpn model.
they developed a mobile malware detection system (mmds) that filters files received by sms or mms by extracting their behaviors and determining the danger level; if the user confirms the danger of these files, the system rejects the files sent by sms or mms. the presented method requires an application (mmds) to be installed on the smartphone, and also requires interaction from users to confirm the danger of the files; novice users will have a hard time determining whether a received sms or mms contains a dangerous file, especially if it was received from a known number. koundel et al. [11] proposed a method to build a dataset from the applications installed on users' mobile phones. the method uses an application installed on the user's mobile that sends the list of installed applications, their permissions and the applications' battery usage to a server as a csv file, which the server parses to build the dataset. the downside of this method is that it requires an application to be installed to gather the data from the end user's mobile; the application may itself drain the battery, which is another downside. liu et al. [12] proposed a general malware detection method called virusmeter, which monitors battery power usage on mobile devices and compares it to a pre-defined power consumption model to identify abnormal battery usage; using the os api, it calculates how much power is used by running services and compares it to the pre-defined model. the proposed model monitors only the power usage of the system to determine whether there is malware on the mobile device, so it may give misleading alarms, since normal services may require more power for various reasons such as background updating or downloading data.
sahs and khan [13] used an open source application called "androguard" to extract features of the apk file and used the scikit-learn framework to train a one-class support vector machine that marks a sample as negative if it differs sufficiently from the training data. this method treats all applications as benign unless they are sufficiently different from the training data, so it may mark a malware application as benign if nothing similar was previously added to the training dataset. jacobsson et al. [14] built two models, "bag-of-words" and "meta eula model", to find spyware: they collected more than 1000 (900 clean, 100 bad) end user license agreements (eulas) and applied the models with multiple classifiers such as naïve bayes, decision stump and j48; the results support their hypothesis that eulas can be used as a basis for classifying the corresponding software as good or bad. this method will not work if spyware authors start to copy the eulas of good applications and use the same text. shaban [15] built a model to detect spyware using data mining on windows portable executable (pe) files: the researcher collected many windows pe files, including benign and spyware executables, exported the api calls, put them into categories, and applied data mining classification for detecting the spyware. the proposed model requires the files to be saved first and then analyzed to extract the api calls; we need a way to determine whether a file is malware before installing it. iii. data mining classification methods we evaluate a variety of classification methods: k-nearest neighbor (k-nn), naïve bayes, support vector machine (svm) and decision tree, with different feature sets. performance evaluation 1.
coincidence matrix for classification problems: the main source of performance measurement is the coincidence (confusion) matrix, from which the most commonly used metrics can be calculated, as shown in eq. (1) to eq. (5): true positive rate = tp / (tp + fn) (1), true negative rate = tn / (tn + fp) (2), accuracy = (tp + tn) / (tp + tn + fp + fn) (3), precision = tp / (tp + fp) (4), recall = tp / (tp + fn) (5). 2. accuracy: the percentage of true results (true positives and true negatives) among the total number of cases examined [18]. 3. precision: the fraction of retrieved instances that are relevant to the query [19]. 4. recall: the fraction of the documents relevant to the query that are successfully retrieved [19]. 5. auc: the receiver operating characteristic (roc) curve is created by plotting the true positive rate (tpr) against the false positive rate (fpr) at various threshold settings. the roc has recently been introduced to evaluate the ranking performance of machine learning algorithms [20]. the auc combines all the features of the roc into a single value by calculating the area under the roc curve; the closer the roc gets to the optimal point of prediction, the closer the auc gets to one [21]. 6. f-measure: the f-measure is the harmonic mean of precision and recall, calculated as f-measure = 2 × precision × recall / (precision + recall). 7. cross validation: in k-fold cross-validation, the initial data are randomly divided into k mutually exclusive subsets or "folds", d1, d2, ..., dk, each of approximately the same size. training and testing are done k times: in iteration i, fold di is used as the test set and the remaining folds are used to train the model. 8. identification methods for malware: malware detection techniques mainly fall into these categories: ● signature-based detection: searches for a sequence of unique bytes that defines the malware and compares it to a database of other malware data; most anti-malware uses this technique [22].
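the confusion-matrix metrics above follow directly from the four counts; as a small sketch (the counts below are made up for illustration, not taken from the paper's experiments):

```python
# sketch: evaluation metrics computed from confusion-matrix counts
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to the true positive rate
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# hypothetical counts for a 103-sample test set
acc, prec, rec, f1 = metrics(tp=90, tn=8, fp=2, fn=3)
```

note that accuracy and f-measure can disagree on imbalanced data (here 93 positives versus 10 negatives), which is one reason the paper reports f-measure and auc rather than accuracy alone.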
● behavior-based detection: monitors many factors of the malware, such as the source, target and other statistical properties, then evaluates the damage to the system in a controlled environment using dynamic behaviors. iv. methodology and experiments the method works as shown in figure 1: when a new application needs to be downloaded, we read its permissions first; after extracting them, we apply the classifier to the extracted data to find whether the application is malware or not. figure 1 – overview of the method (new app to download → read permissions → classification → benign or malware). our method starts by collecting data to build the dataset, then finding an appropriate classifier, and finally evaluating and testing the method. 1. collect the data: first, we collect the benign and malware applications from different sources. a. benign applications: the benign applications were downloaded from the google play store. google policy does not allow downloading apk files directly from the website, but users can install them directly from the play store application on their mobile device; android phones also do not allow extracting the apk files, because they are hidden with the system files. we therefore used the "apk downloader" [23] website, which simulates a mobile device and offers the apk to be downloaded directly to our pcs. the files downloaded with apk downloader were checked for viruses with the "virustotal" website, which scans uploaded files with 53 anti-virus engines to make sure the applications have not been infected by any malware. b. malware applications: the malware dataset was downloaded from "free range security" [16]; it contains 189 malware applications. 2.
extract permissions: we then extract the permissions from the apk files. the android asset packaging tool and a read-permissions tool built to automate this work export the extracted permissions into one file. after cleaning the data we built our dataset with five different feature sets, based on attribute weights and on the dangerous permissions list provided by google. 3. building the dataset: we use three feature sets selected by weight, a fourth feature set containing google's dangerous permissions, and a fifth feature set of the attributes used more in malware than in benign applications and not listed in the dangerous permissions list; the final dataset is described in table 1. the experimental environment used for all experiments was a laptop with a core i7 cpu, 500gb ssd and 16gb ram. the software and tools used were rapidminer 5, microsoft excel 2010, pspad, android sdk tools, delphi 2010 and 7-zip. 4. apply classification and evaluate the method: after preparing our dataset with 5 different feature sets, we applied the classification algorithms (k-nn, naïve bayes, svm, decision tree). the settings used in the evaluation phase for each classifier are shown in figure 2. figure 2:
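as an illustrative sketch of this permission-extraction step (not the authors' aapt-based tool: real apks carry a binary manifest that must first be decoded, and the manifest snippet and permission list below are hypothetical):

```python
import xml.etree.ElementTree as ET

# hypothetical decoded AndroidManifest.xml snippet
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.SEND_SMS"/>
</manifest>"""

# namespace-qualified key for the android:name attribute
ANDROID_NAME = "{http://schemas.android.com/apk/res/android}name"

def extract_permissions(xml_text):
    """return the permission names requested in a decoded manifest."""
    root = ET.fromstring(xml_text)
    return [el.get(ANDROID_NAME) for el in root.iter("uses-permission")]

def to_feature_vector(perms, attribute_list):
    """binary row for the dataset: 1 if the apk requests the permission."""
    requested = set(perms)
    return [1 if p in requested else 0 for p in attribute_list]

perms = extract_permissions(manifest)
vec = to_feature_vector(perms, ["android.permission.INTERNET",
                                "android.permission.READ_CONTACTS",
                                "android.permission.SEND_SMS"])
```

each apk thus becomes one binary row over the chosen permission attributes, plus a benign/malware label, which is the form of dataset the classifiers in the next step consume.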
rapidminer with k-nn and the validation process. table 1: weighted feature sets
feature set | weight threshold | number of attributes
feature set 1 | 0.1 | 38 regular, 1 special (attributes 16 to 54 as listed in table 4.1)
feature set 2 | 0.2 | 26 regular, 1 special (attributes 29 to 54 as listed in table 4.1)
feature set 3 | 0.3 | 15 regular, 1 special (attributes 40 to 54 as listed in table 4.1)
feature set 4 (dangerous permissions) | no weight | 24 regular, 1 special
feature set 5 (extended dangerous permissions) | no weight | 27 regular, 1 special
● experiment scenario 1 (feature set with weight > 0.1): the number of samples is 103 and the number of attributes is 38; the svm classifier was the highest in both auc and f-measure, as shown in table 2 and figure 3. table 2: experimental results with feature set 1
classifier | auc | f-measure
k-nn | 0.5 | 88.58%
naïve bayes | 0.989 | 92.31%
svm | 0.993 | 95.20%
decision tree | 0.756 | 86.39%
figure 3: experimental results of feature set 1. ● experiment scenario 2 (feature set with weight > 0.2): in this experiment the number of samples is 103 and the number of attributes is 36; here the naïve bayes classifier was the highest in both auc and f-measure, but svm gave the same auc value as naïve bayes, as shown in table 3 and figure 4. table 3: experimental results with feature set 2
classifier | auc | f-measure
k-nn | 0.5 | 88.89%
naïve bayes | 0.993 | 96.74%
svm | 0.993 | 94.48%
decision tree | 0.773 | 92.04%
figure 4: experimental results with feature set 2. ● experiment scenario 3 (feature set with weight > 0.3): we applied the 4 classifiers to the dataset; the number of samples is 103 and the number of attributes is 15. the naïve bayes classifier had the highest auc and svm gave the highest f-measure, as shown in table 4 and figure 5. table 4: experimental results with feature set 3
classifier | auc | f-measure
k-nn | 0.5 | 95.56%
naïve bayes | 0.988 | 90.43%
svm | 0.985 | 95.75%
decision tree | 0.766 | 92.17%
figure 5:
experimental results with feature set 3. ● experiment scenario 4 (dangerous permissions): we applied the 4 classifiers to the dataset; the number of samples is 103 and the number of attributes is 24. the naïve bayes classifier had the highest f-measure and svm gave the highest auc, as shown in table 5 and figure 6. table 5: experimental results with dangerous permissions (feature set 4)
classifier | auc | f-measure
k-nn | 0.500 | 85.93%
naïve bayes | 0.979 | 92.85%
svm | 0.985 | 89.98%
decision tree | 0.908 | 91.59%
figure 6: experimental results with dangerous permissions (feature set 4). ● experiment scenario 5 (extended dangerous permissions): we applied the 4 classifiers to the dataset; the number of samples is 103 and the number of attributes is 27. the k value of k-nn was 1, naïve bayes was used with laplace correction checked, and both svm and decision tree used the default values set by rapidminer, as shown in table 6 and figure 7. table 6: experimental results with extended dangerous permissions (feature set 5)
classifier | auc | f-measure
k-nn | 0.500 | 88.49%
naïve bayes | 0.983 | 94.64%
svm | 0.986 | 92.79%
decision tree | 0.894 | 92.13%
figure 7: experimental results with extended dangerous permissions (feature set 5). from the experiments above, we notice that feature set 2 has the highest rates in the metrics used for evaluation (auc and f-measure), as shown in figure 8. figure 8: experimental result summary. we achieved the highest scores in auc (0.993) and f-measure (96.74%) with feature set 2 using the naïve bayes classifier. feature set 1 gave the same high auc (0.993) using the support vector machine (svm) classifier.
Feature set 5 (dangerous permissions with extended attributes) gave higher rates than feature set 4 (the dangerous permissions specified by Google). We think these extra attributes should be considered by Google in order to warn users about their dangerous effects. In our experiments the k-NN classifier had the worst performance in both AUC and F-measure across all feature sets, while Naïve Bayes and SVM had the best performance.

IV. Conclusion

In this paper we built a dataset from benign and malware Android applications. We then separated the dataset into five different feature sets based on attribute weight and on Google's dangerous permissions. After that we evaluated our method with RapidMiner to find the attributes that have the most effect in detecting malware applications. Our results show that feature set 2 with the Naïve Bayes classifier gave the most accurate result for detecting malware. We also found that more attributes should be categorized as dangerous by Google, because in our experiments adding these three attributes gave better detection on all the feature sets we used. Our method aims to detect malware before it is installed on the user's mobile device. However, with thousands of new applications added daily to the Google Play Store, we need a better way to obtain the permissions of applications without first downloading the APK and extracting them. Currently, Google does not provide any official API to access application information in the Google Play Store; some open-source projects try to achieve this, but they may stop working when Google changes its protocol. Other options are to reverse engineer Google's API to find a better way to get the permissions from the store, or to scrape the Play Store website to gather the information.
Tawfiq S. Barhoom is an associate professor of computer science and head of the Computer Science Department, Faculty of Information Technology, at the Islamic University of Gaza, Palestine. He received his Ph.D. degree from Shanghai Jiao Tong University (SJTU) in 2004. His current research interests include secure software, XML security, web services and their applications, and information retrieval.

Mohammed I. Nasman is a professional software developer and trainer. Mohammed completed his master's thesis at the Islamic University of Gaza; his research interests lie in the area of programming languages, for topics including software engineering, security, and mobile platforms.
Journal of Engineering Research and Technology, Volume 6, Issue 3, April 2019

Cul-de-Sac in the Sky: "Re-reading Habitat 67"

Mohammed A. F. Itma, Maha A. F. Atmeh

Abstract— A "cul-de-sac" is commonly a residential dead-end street. However, it is also an important urban element used for articulating vernacular housing in Arab cities to form a wide range of clusters. It has a main role in providing a safe and healthy environment for residents, suitable for social domestic activities. This paper aims to highlight a potential way to recover the concept of the vernacular cul-de-sac in a contemporary form. Thus, we will try to shed light on a modern type of residential street, the upper street, which functions as a main stem for articulating the houses around it. This type of street was named in the modern movement the "street in the sky". The paper investigates the connection between streets in the sky and the vernacular cul-de-sac by discussing a famous case of modern housing: Habitat 67, a cluster housing project articulated around layers of streets in the city of Montreal. The paper concludes that the upper streets of cluster housing are an adequate approach for recovering the vernacular cul-de-sac in terms of social and visual aspects, since they can combine characteristics of both the vernacular cul-de-sac and the street in the sky.

Index Terms— Cul-de-sac, Habitat 67, street in the sky, urban housing, modern architecture.
I. Introduction

A successful housing design takes into account the historical development of the urban space while keeping pace with contemporary life 1. In this era of global housing concepts, the urban housing environment is often devoid of spatial and social qualities. It is therefore necessary to shed light on some successful examples of urban-space design for housing in the modern history of architecture, in order to keep developing and adapting such examples for a better housing environment. This paper aims at exploring the possibility of recovering a vernacular concept of urban space in a contemporary way: this concept is the cul-de-sac. To clarify this possibility, the following text reviews the architectural characteristics of the vernacular cul-de-sac in Arab cities, with special reference to Palestinian cities and to Jerusalem in particular 2. The modern ideas of designing upper domestic streets for articulating housing are then highlighted 3. Accordingly, the upper streets of Habitat 67 in particular, a cluster housing project designed by the well-known architect Moshe Safdie in Montreal in 1967 4, are discussed thoroughly. In conclusion, we try to link the concepts of the vernacular cul-de-sac to such modern ideas.

Cul-de-Sac

Cul-de-sac 5 is a French word meaning a road with a dead end; a way without an exit (Rey, 1994, p. 523). In architectural terms, it is a type of dead-end street or passageway (Sheppard, 2015, p. 232) used to provide privacy in the housing environment for a group of housing units (Signoretta et al., 2003, pp. 54-56). It is considered an efficient way of reducing crossing streets and creating a suitable environment for pedestrians in residential areas (Eisner, Gallion, & Eisner, 1993, p. 308). This type is common, both in vernacular and contemporary architecture, in urban areas around the world.
Many contemporary scholars have studied the cul-de-sac and recommended it for increasing the social quality of the housing environment (Brown & Werner, 1985). Other scholars have described the safety benefits of using the cul-de-sac in planning residential areas (Cozens & Hillier, 2008). However, the idea of the cul-de-sac in the vernacular architecture of Arab cities goes beyond the concept of a dead-end passageway. It is rather an organizing component that plays an important role in generating the compact fabric of the old cities (Dumper & Stanley, 2007, p. 266). Thus, the cul-de-sac is the urban space that has been the centre of the cluster in residential areas. The summation of these spaces with their surrounding structures could generate a wide range of endless clusters (Rapoport, 1969, p. 5). The cul-de-sacs are distinctive in terms of their various sizes and shapes inside the vernacular housing. This unique composition of Arab cities was built in a flexible way to meet the families' gradual needs (Ragette, 2003, p. 50). Note the sequence between the exposed and covered areas within the cul-de-sacs created by the presence of upper rooms named in Arabic "al-qantara" (see Figure 1). These rooms provide space for rest and protection from rain and sun for pedestrians. Al-qantara also has great importance in the continuity of the urban fabric around the cul-de-sac: it increases the density of buildings by providing rooms above parts of the open-air space that serve as parts of the houses on the upper floors (Ragette, 2003). In addition, the cul-de-sac became a key element in the urban-space design of the vernacular housing of Arab cities, symbolizing the need for privacy. The cul-de-sac was mainly found in vernacular architecture to define the territory of each group of inhabitants in the peasant time (Signoretta et al., 2003, pp. 54-56).
It has also been considered a private space for people with limited income living around it, compared to rich people living in large houses and palaces with their own inner spaces or courtyards (Hakim, 2013, p. 168). Through architectural treatments, elements deliver a message without a written sign, like the "small entrances" or "the mouth", which gives the feeling that this is a private area that should not be trespassed by strangers without permission (Hakim, 2013, p. 26). The cul-de-sac in the vernacular architecture of Palestine, for example, is usually composed of a sequence of corridors and interior spaces (courtyards). It starts with a main entrance, which is usually broken to provide privacy. Then follows the sequence of corridors and courtyards, forming a minor element for both static and dynamic activities. Usually, the entrances of the houses that surround the cul-de-sac open onto the inner corridors and courtyards. The cul-de-sac is also surrounded by a series of rooms spread over more than one floor. The ground floor includes shared services like storage and guest rooms, while the upper floors constitute houses for families with the same assets. As for the paths, they are bending and rarely straight due to the nature of the fabric, which is of organic growth 6. Bending paths are also useful for increasing the privacy of inhabitants. The paths of the cul-de-sac are narrow and shaded; they are surrounded by the high walls of the houses. In some cases, the paths are relatively wider to serve as distributors for the houses instead of the courtyards, which provide light compared to the shaded paths (see Figure 2). Thus, the cul-de-sac takes several shapes depending on the height of the surrounding houses; increasing the height of the building requires a wider courtyard. The shape and the area of the cul-de-sac are also influenced by the number of houses surrounding it 7. It has been verified that the vernacular cul-de-sac is an idea in line with its time.
We can observe its benefits through the existing vernacular housing of Palestine 8. The cul-de-sac is still the main outdoor space of the vernacular housing in the old cities in the contemporary time. It is the main urban space that provides a quiet, cool, and safe place for social interaction. In the modern era, this space is still suitable for women and children, keeping them away from crowded places and car movement. Besides, the cul-de-sac serves as a climate moderator, because it provides shade and a moderate climate for residents at minimum cost. This space is also able to enhance cooperation between inhabitants through shared spaces and circulation elements inside the cul-de-sac, which helps to create stronger relationships between residents. These benefits introduce the cul-de-sac as a motivating approach for designing contemporary social housing. Finally, referring to the great benefits of the vernacular cul-de-sac, there is a need to learn from modern ideas that may recover traditional concepts in a contemporary way 9. Therefore, the following sheds light on the "street in the sky", since it is an urban space used in modern housing projects to achieve goals similar to those of the vernacular cul-de-sac, such as density, privacy, and social interaction.

Figure 1: An example of al-qantara.

Figure 2: Up: the cluster housing in the Jerusalem old city. Down: a sequence of spaces in the vernacular cul-de-sac.

Street in the Sky

In the first half of the twentieth century, the idea of a domestic street that articulates housing units was often used in modern housing blocks as a closed environment; it is an interior but shared corridor (Sherwood, 1981, pp. 119-120). This idea stems from the main function of the corridor, which is to be a suitable place for circulation, allowing one to reach one's house safely.
This corridor provides a closed environment for circulation between the main entrances of the block, the staircases, and the entrances of the houses 10. The wide use of corridors in modern housing evolved to suit the need for high-density housing in residential areas after the Second World War (Pfeifer & Brauneck, 2008). Using the corridor was mainly aimed at reaching apartments on multiple floors. The corridor was organized in a manner consistent with the ideas of modern architects, who built architectural forms on the repetition of a unit or network (Hurlburt, 1982, pp. 9-21) 11. The corridor is often connected to vertical circulation elements, and it provided a successful function in line with the functional characteristics of the architecture of that time. Alison and Peter Smithson, with reference to Le Corbusier's ideas 12, introduced the upper corridor, or "deck", as a development of the connecting corridor between housing units (Mumford, 2001, p. 58). The famous housing blocks in London known as Robin Hood Gardens are an example of their work. In this project, each of the upper floors is considered a ground floor, and the corridor is the street that connects the residential units on each floor. The street is wide enough to be used for social communication between residents and to suit their daily activities. Moreover, the corridor is open to the view of the site in order to increase visual communication with the outside while circulating in the street (see Figure 3). Thus, the open view increases communication between inside and outside to sustain the sense that the high floors can be considered ground floors (Leupen, 2006, p. 139).

Habitat 67

The streets of the Habitat 67 project are another important example of upper streets opened to the outside view. However, the architect Moshe Safdie used a different approach to housing organization around these streets compared to the blocks mentioned above.
Habitat 67 is organized in a way that gives each house its importance as a nuclear cell in the whole organization (see Figure 4). The resulting organization can be described as vertical terraced housing articulated through upper streets. The houses are assembled in a way that provides a private garden for each house, forming a wide range of cluster organization around the streets (Eckardt, 1978, p. 70). It is an iconic building that includes many revolutionary ideas and was considered a precedent for its time 13. The urban spaces of Habitat 67 may be influenced by the vernacular housing design of the city of Jerusalem in many ways. The arrangement of the structures gives the feeling that the houses are similar to cells growing in a spontaneous way, in which the urban space has the ability to pass through the structures, being well defined but endless. The structures are configured to form a dominant urban mass for the surroundings, which encloses the space and controls it in a coherent way. It is therefore difficult to separate mass and space, because of the coherent way the structures are attached to their surroundings. In addition, the placement of the streets on the back facade of the project enhances the sense of dominance of the structures, which also recalls some vernacular concepts of designing domestic streets: it is a place with the highest degree of peace, away from the main road. However, the urban space of Habitat 67 is also influenced by the trend of modernity through its adaptation of the matrix. Standardization and repetition of the urban spaces between the structures are noticeable as a result of designing the structures with reference to one repeated pattern. In addition, the use of vertical circulation in specific areas facilitates reaching each house in a modern way, making a modular division of the urban spaces, which is also influenced by the matrix.

Figure 3: Robin Hood Gardens, London, UK, designed by A. & P. Smithson.
Up: a side of the street in the sky and its relation to the dwellings. Down: a section showing the street in the sky as an open social space (Balters, 2011).

Figure 4: Habitat 67, Montreal, Canada, designed by the Israeli/Canadian architect Moshe Safdie, 1967 (Safdie, 2009, p. 11).

Thinking about the street as a social environment in this project included some ideas for the comfortable use of inhabitants. Looking at Habitat 67, we can note that the design of the street respects the privacy of the housing units on the different floors. Each group of houses has a shared space that is used as a distributor for their entrances. There is thus a gradual transfer from the public street to the house entrances, passing through this collective area, which allows the house entrances to be a third level of private areas. In this way, the housing entrances have much more privacy than if they were directed straight onto the street. The clear orientation of the street to the sea view also helps to create a pleasant environment for people to stay and meet each other. This idea can be seen as a development of the function of the domestic street, which used to be simply a pathway for people's movement. Changing the direction of the street, or bending it, also helps to break the long view for pedestrians, which can be a suitable way of defining territories for groups of houses. In addition, this change is a way of respecting the emerging context of the housing groups in this project, as these groups are defined by a slight bending that follows the shape of the cluster. In Habitat 67, Moshe Safdie used three types of streets. On the ground floor there is a wide space around which residential units, shared spaces, and vertical movement elements are clustered 14. This ground street has an organized shape for passing cars, but a wide and rich shape for pedestrian flow.
The bending shape is used to follow the pattern of the cluster buildings that have been shaped by the structures. Different ceiling heights are used in the pedestrian area of the street as a result of the flying structures above it, giving pedestrians visual pleasure while moving. The second type of street is a middle street, placed on the upper floors. It is more regular and narrower than the street on the ground. The same bending shape is clear and moves along the housing composition, while being exposed, from several angles, to the surrounding view. The street is nevertheless still able to reach the entrances of the residential houses and the vertical movement in a systematic way. The third type of street is the upper passageway on the roof, with larger parts of it open to the air and reaching some squares on the roof. The upper passageway clearly recalls the vernacular cul-de-sac (see Figure 5). It has the sequence of path and squares, and also the sequence between covered and open-air spaces. The structures surrounding and enclosing the space use the spontaneous organization of the houses. Finally, modernization and authenticity are both clear in the design of the domestic streets of Habitat 67, for two reasons. The first is the conservation of the physical aspect of the architecture, which goes beyond the direct translation of functional aspects: in this project, streets and structures cooperate to enrich the fabric, which is a revival of the vernacular concepts of designing urban housing. The second is the conservation of the social aspects of the street, which makes the streets of this project a successful example of streets for the community.

Conclusion

This paper has re-read Habitat 67 as a possible approach to recovering the idea of the vernacular cul-de-sac in a contemporary way. Although the streets used in this project are not dead-end streets on the ground, they include the spirit of the vernacular cul-de-sac of the Arab cities.
The main concept of the vernacular cul-de-sac has been recovered, namely a safe collective space between houses for social activities. The concept of exceeding the functional use of the domestic street so that it becomes part of the social housing environment has also been recovered. Thus, in an environment where car movement is vital, raising the cul-de-sac into the sky could recover the vernacular environment in a contemporary way. This conclusion leads us to believe that the upper streets of a cluster organization can simulate vernacular cul-de-sacs. These can be used as the main stem of a cluster organization, the same as in vernacular housing, but in multiple layers. The upper streets of cluster housing have the characteristics of streets in the sky that spread vertically in layers to reach all the house entrances and provide a pleasant view for pedestrians. In addition, they include a diversity of shapes and forms of urban spaces able to provide visual attractiveness. As a result, the upper street of cluster housing combines characteristics of both the cul-de-sac and the street in the sky, and can be described as a "cul-de-sac in the sky" (see Table 1). Finally, it is hoped that this study can contribute to raising designers' awareness of learning from the traditional concepts of domestic streets in Arab cities. It is recommended that contemporary streets be designed with reference to social and visual needs in addition to functional needs. It is also recommended that other studies try to fill part of the gap between vernacular and contemporary streets, in order to lead towards the creation of more creative housing design in the Arab cities.

Figure 5: Street types in Habitat 67 (C.A.C., 2001).
Table 1: An analysis of the social and visual characteristics of the three types of streets.

Characteristic                               Vernacular cul-de-sac   Street in the sky   Upper streets of Habitat 67
Social environment                           yes                     yes                 yes
Open to the view                             -                       yes                 yes
Sequence of spaces                           yes                     -                   yes
Sequence of open-air and covered areas       yes                     -                   yes
Availability for upper floors                -                       yes                 yes
Stem of clustered housing                    yes                     -                   yes
Bending and visual attractiveness            yes                     -                   yes

References

[1] Brown, B. B., & Werner, C. M. (1985). Social cohesiveness, territoriality, and holiday decorations: the influence of cul-de-sacs. Environment and Behavior, 17(5), 539-565.
[2] Balters, S. (2011). AD Classics: Robin Hood Gardens / Alison and Peter Smithson. ArchDaily, 18 Aug 2011. Accessed 6 Nov 2019. ISSN 0719-8884.
[3] C.A.C. (2001). AD Classics: Habitat 67 / Safdie Architects. (M. University, Editor). ArchDaily: www.archdaily.com
[4] Cozens, P., & Hillier, D. (2008). The shape of things to come: new urbanism, the grid and the cul-de-sac. International Planning Studies, 51-73.
[5] Dumper, M., & Stanley, B. (2007). Cities of the Middle East and North Africa: A Historical Encyclopedia. California: ABC-CLIO.
[6] Eckardt, W. V. (1978). Back to the Drawing Board!: Planning Livable Cities. Michigan: New Republic Books.
[7] Eisner, S., Gallion, A., & Eisner, S. (1993). The Urban Pattern. New York: John Wiley & Sons.
[8] Hakim, B. S. (2013). Arabic Islamic Cities: Building and Planning Principles. New York: Routledge.
[9] Hurlburt, A. (1982). Grid: A Modular System for the Design and Production of Newspapers, Magazines, and Books. New Jersey: John Wiley & Sons.
[10] Isachar, H. (2005). Israel, Jerusalem, an aerial view of Jerusalem old city and Mount Zion. Alamy: www.alamy.com
[11] Itma, M. (2018). Impact of socio-cultural values on housing design in Palestine. In C. K. Hofbauer, E. M. Kandjani, & J. M. Meuwissen (Eds.), Climate Change and Sustainable Heritage (pp. 130-142). Cambridge: Cambridge Scholars Publishing.
[12] Leupen, B. (2006). Frame and Generic Space. Rotterdam: 010 Publishers.
[13] Messoudi, T. (2017). L'architecture vernaculaire, une solution durable : cas de la maison traditionnelle kabyle (nord algérien). HAL archives-ouvertes.
[14] Mumford, E. (2001). The emergence of mat or field buildings. In H. Sarkis, P. Allard, & T. Hyde (Eds.), Case: Le Corbusier's Venice Hospital and the Mat Building Revival (pp. 48-65). New York: Prestel.
[15] Pfeifer, G., & Brauneck, P. (2008). Courtyard Houses: A Housing Typology. Basel, Boston & Berlin: Springer Science & Business Media.
[16] Ragette, F. (2003). Traditional Domestic Architecture of the Arab Region. Sharjah: American University of Sharjah.
[17] Rapoport, A. (1969). House Form and Culture. New Jersey: Prentice-Hall, Englewood Cliffs.
[18] Rey, J. R.-D. (1994). Le Nouveau Petit Robert. Montréal / Paris: Le Petit Robert.
[19] Safdie, M. (2009). Moshe Safdie: Volume 1. Melbourne: Images Publishing.
[20] Senan, Z. (1993). Political Impacts on the Built Environment: Colonization and the Development of Place Identity, the Case Study of the Rural West Bank (Palestine). Newcastle: University of Newcastle.
[21] Sheppard, M. (2015). Essentials of Urban Design. Clayton: CSIRO Publishing.
[22] Sherwood, R. (1981). Modern Housing Prototypes. Harvard: Harvard University Press.
[23] Signoretta, P., & others. (2003). Urban Design: Method and Techniques. Oxford: Routledge.
[24] Smithson, A. & P. (1967). Urban Structuring: Studies of Alison & Peter Smithson. New York: Studio Vista.

1 The nature of man contains elements of constancy and change, which affects the subject of housing (Rapoport, 1969, pp. 78-79).
2 Jerusalem was chosen as a case because of its important vernacular housing.
3 This was described by Alison and Peter Smithson as the "street in the sky"
(Smithson, A. & Smithson, P., 1967).
4 The works of Moshe Safdie are influenced by the vernacular architecture of the city of Jerusalem, as well as by the trend of prefabricated housing that prevailed at the time (Safdie, 2009, p. 15).
5 "Fond de certains objets" (the bottom of a thing), p. 523.
6 Most vernacular housing did not have a previous plan.
7 See also (Messoudi, 2017).
8 A study by the author maintains that the cul-de-sac in the vernacular housing of Palestine is still able to respond to contemporary users' needs (Itma, 2018).
9 In the beginning of the twentieth century, there was a shift from vernacular concepts to international concepts in most Arab cities. The use of types of urban spaces for social needs therefore became less important.
10 This corridor is common in the design of housing blocks around the world.
11 The Modulor of Le Corbusier made a great contribution to modern architecture.
12 Le Corbusier named the interior corridor a "street" in his famous housing project, the Unité d'Habitation, in Marseille, France, 1947.
13 Architect Moshe Safdie presented the idea of the project in his architecture thesis and then developed it into a bold and important example of housing design at the Canadian and global levels (Safdie, 2009).
14 Such as parking, gardens, and services, which also recall the previously discussed idea of using the ground structures for shared functions in the vernacular cul-de-sac.

Mohammed A. F. Itma is a Ph.D. holder in architectural engineering, specialized in urban housing design (2016), with a master's in architectural conservation (2007) and a B.Sc. in architectural engineering (2000). He has worked in the Department of Architectural Engineering, Faculty of Engineering, An-Najah National University as a research assistant (2000-2008) and as an instructor (2009 until now). He is currently an assistant professor with 8 publications in the field of housing and architectural heritage. Maha A. F. Atmeh is a Ph.D. holder in sociolinguistics, French language (2011).
she is an assistant professor in the department of french, faculty of human sciences, an-najah national university, nablus, palestine. transactions template journal of engineering research and technology, volume 8, issue 1, march 2021 9 enhancing the documentation process of traffic accidents registry in gaza city using gis maher a. el-hallaq https://doi.org/10.33976/jert.8.1/2021/2 abstract— gaza city is the largest city in the gaza strip and the palestinian territories. the city has a population of about 700,000 inhabitants with an average population density of 12,600 persons/km2. the documentation procedure for traffic accidents in the city is traditional and inefficient: it is not only paper based, but it is also not performed in the correct way, since it misses the exact position where these accidents occur. furthermore, the documentation data do not involve an appropriate level of detail and need to be re-organized. this study aims to enhance the documentation process of traffic accidents in gaza city using gis, as well as to develop appropriate recommendations that can raise the level of traffic safety in the city. to enhance the documentation, new mobile and desktop applications based on gis cloud are developed in order to automatically store spatial and descriptive data about traffic accidents in a computerized geodatabase. many gis analyses of the 2019 accident statistics are performed. results indicate that the hot neighborhood in the city is northern remal, the hot road is salah al-din street, the hot period is 12–14 o'clock, most accidents are caused by cars, and medium injuries are the dominant outcome of these accidents.
the study recommends building a digital geodatabase that involves all necessary data associated with traffic accidents, because it is the basis for planning, analyzing and making decisions. it is also recommended to establish a gis department in the ministry of transportation. there is a strong need to legislate binding laws to prevent the increase in the number of traffic accidents, in addition to taking field actions such as setting up traffic lights, as much as possible, near health centers, schools and other public places. index terms— documentation, gaza city, gis, mobile application, traffic accidents, traffic analysis i introduction traffic accidents are one of the most important and dangerous problems facing societies worldwide, as they lead to a large amount of human and physical damage. statistics issued by the world health organization, who, indicate that the world annually loses about 2.1 million people, and a range of 20 to 50 million people are also injured. annual losses from traffic accidents in the world are estimated to reach 518 billion dollars, which constitutes 1–3% of the world's gross national income. many countries have come to the conclusion that national traffic mechanisms and strategies are needed to meet the traffic safety challenges associated with many sectors and relevant authorities [1]. in developing countries, road accident fatality rates (deaths per 10,000 vehicles) are very high and often more than 5 times greater than those for countries of western europe and north america [2]. in 2007, traffic accidents were considered the second leading cause of death in jordan. who [1] indicates that in saudi arabia, an average of 20 people die every day due to road crashes, which are the primary cause of death in males aged 16 to 36. if current trends continue, saudi arabia could have more than 4.0 million traffic accidents a year by 2030. every year in egypt, about 12,000 egyptians lose their lives as a result of road traffic accidents.
many thousands more suffer non-fatal injuries, some with resultant long-term disability [3]. based on a world bank study in 2006, the annual rate of increase in the number of crashes and fatalities in the palestinian territories has been about 5.0% since 1994. the percentage of pedestrians involved in injury crashes was as high as 30%. the fatality rate increased sharply in 2003 compared to other years, when it was 16.5 fatalities/10,000 vehicles [4]. the available traffic statistics in gaza city are disorderly and without good management. they are also limited to statistics conducted for solving a particular problem or for developing some roads. it is reported that road crashes in the palestinian territories increased by 170% in the period 2007–2013, injuries increased by 120%, and deaths increased by 33% over the same period [5]. the number of traffic crashes in the gaza strip was 1,985 in 2000 and increased to 4,046 in 2012, then slightly decreased in 2013. there has been a remarkable rise in the number of victims due to unlawful use of motorcycles and disrespect for rules and laws concerning speed limits and automobile standards. in 2011, 75 persons died due to traffic crashes, 36 of them in motorcycle-related crashes [6]. ii problem statement gaza city is the administrative center as well as being the main city in the gaza strip. it is one of the most densely populated areas in the world, as its population reaches more than 700,000 over a small region (55.6 km2) [5]. the city suffers, like the rest of the world, from daily traffic accidents. the recorded traffic accidents have increased significantly, resulting in a large number of deaths and injuries, which reflects the dangerous situation on its roads.
in the city, traffic accidents are paper documented and there is no use of detailed databases which could be a platform for an effective analysis process. in addition, the locations of those accidents are not properly documented in terms of their local coordinates. the success of any preventive action depends on intensive analysis of traffic accident records. efficient and accurate analyses are consequently based on the descriptive documented data needed to understand which factors or causes are the most influential in traffic accidents, in addition to the spatial data associated with the location of those accidents. location is one of the critical variables in the analysis process. identifying the main causes of traffic accidents will enable specialists to adequately prepare the best awareness and guidance programs, aiming at directing the focus and targeting the real groups, whether they cause or are affected by traffic accidents. this will make these programs more effective in performing their role. determining the hot spots related to traffic accidents, as well as knowing the influencing causes as they occur, will help planners and decision-makers in taking actions and creating laws with real effectiveness in reducing traffic accidents, and consequently human losses and economic implications. thus, this study aims to enhance the documentation process of the traffic accidents registry in gaza city using gis, as well as to develop appropriate recommendations that can raise the level of traffic safety in the city. the technology of gis is one of the important tools used globally for studying and analyzing traffic accidents in terms of their causes and consequences. iii ministry of transportation the ministry of transportation, mot, is one of the ministries of the palestinian authority; it was established upon the arrival of the palestinian authority in 1995.
its main mission is to preserve human lives by minimizing traffic accidents on roads and setting the standards and conditions necessary for establishing transport facilities and services, in coordination with other relevant ministries. mot is also responsible for [7]:
- developing a comprehensive plan to rebuild the infrastructure of the land, sea and air transport sector and to follow up programs and projects related to this sector.
- preparing laws, regulations and instructions that regulate the various transport sectors and agencies affiliated to the ministry.
- preparing systems and safety standards for establishing service facilities, in coordination with the competent ministries.
- supervising all sectors of land, sea and air transport, and coordinating with all governmental and non-governmental entities in order to provide the best service to citizens.
- preparing and organizing transport agreements between mot and the sector companies to ensure that this sector is run locally and internationally.
- undertaking the necessary surveys of road networks and setting the standards and specifications necessary for their development, participation and supervision.
- supervising and taking over all means of government transport organization, operation, maintenance and follow-up.
- regulating the issuance of licenses for vehicles, drivers, garages, spare parts, driving education schools, institutes, training centers, rental companies, and travel agencies, and the payment of fees for all different types of licenses.
- promoting the work of the land crossings, updating their working mechanisms, and simplifying their procedures and organization of work.
the "engineering and traffic safety unit", etsu, is considered one of the most important departments in mot, as it is responsible for setting the necessary standards and final approvals for licensing the facilities to be followed by mot.
among the most important basic tasks of this unit are: setting special and regulatory standards for parking lots, the use of safety barriers and islands, organizing traffic within the municipalities, and preparing the necessary traffic studies (traffic volumes, peak hours, road intersections and their efficiency, levels of service), in addition to collecting and analyzing information about traffic accidents (places of occurrence, resulting damages, time, etc.) in order to understand their causes and try to find appropriate solutions. unfortunately, gaza suffers from problems and limitations in data, lack of accuracy, and the absence of statistical series that allow monitoring of traffic accidents. there is a lack of sufficient information about different age groups, an absence of standardization and classification of traffic accidents, and a lack of commitment by many of the relevant official departments to record accurate information about traffic accidents [8]. iv the study area gaza city is the largest city in the gaza strip and the palestinian territories. the city is frequently termed "gaza city" in order to distinguish it from the larger gaza strip. the history of gaza, one of the oldest cities in the world, has been shaped by its strategic location. the city is located on the mediterranean coastal route, between north africa and the greener lands of west asia. the area of the city is about 55.6 square kilometers and it is located at 34° longitude and 31° latitude. the city has a population of about 700,000 inhabitants with an average population density of 12,600 persons/km2 [9]. figure 1 illustrates the characteristics of the city regarding its geographic location, its neighborhoods, population density, as well as its road network.
gaza city shares borders with the towns of jabalya, beit lahiya and beit hanoun in the north, while it is enclosed by the mediterranean sea in the west and al-zahraa city in the south, and the 1978 border line restricts the city from the east. it is divided into twenty-one neighborhoods: el daraj, sheikh radwan, el awda city, northern remal, southern remal, sabra, al-nasr, tuffah, ijdaida, east ijdaida, old city, sheikh ejleen, zaytoun, tal el-hawa, beach camp, turkman east, turkman, murabteen, new east extension and new west extension. zaytoun neighborhood involves the highest number of residents in the city, while al-shati camp is the most densely populated [9]. in gaza city, the transportation system relies on land transport. roads are considered the only mode of transportation; there are no rail lines, water or air transport facilities. the gaza city road network combines the radial network system in the old part of the city with the grid system in the newer parts [10]. salah al-din and al-rasheed streets are considered the major arterial roads that cross the city in the east-west direction. the city also includes main roads such as jamal abdel nasser, al-wihda, al-nasr, omer mukhtar and al-jalaa streets, in addition to hundreds of other local streets. figure 1: gaza city characteristics. v new methodology figure 2(a) shows the current documentation of traffic accidents [9]. the documentation procedure for accidents in the city is not only based on paper formatting but is also not performed in the correct way, since it misses the exact position where these accidents occur. for example, it may be written that an accident occurred at al-rasheed street, knowing that this street extends over 45 km.
furthermore, the documentation data do not involve an appropriate level of detail and need to be re-organized and enhanced for meaningful and effective analysis. figure 2(b) indicates the suggested methodology of the newly developed system for traffic accident documentation and analysis. this new system involves three basic stages: field accident data collection, accident data management, and accident analysis. all tasks of each stage are automatically performed under the gis umbrella. the first stage includes the field collection of accident data. such data involve both spatial and tabular data. spatial data relate to the location of these accidents in terms of their coordinates, while tabular data concern the descriptive attributes of those accidents such as injury information, accident causes, accident time, etc. for this purpose, a computerized mobile application is created to spatially register traffic accidents in terms of their local coordinates and replace the inefficient current paper registration system. here, an account on gis cloud, and then an application on it, are created to record and save accident locations and attributes. each new accident is automatically assigned a unique primary key. the second stage involves the management of the collected traffic accident data. once accident data are completed by the mobile field application, they are automatically stored and saved in a file geodatabase, so that the data can be easily accessed and smoothly managed by any gis software. each accident is geographically represented in a point feature format, located on the map according to its coordinates and holding the corresponding tabular data of this accident. the file geodatabase also has other data layers associated with traffic accidents such as, for example, the gaza city road network (line vector format), gaza governorate and the city neighborhoods (area vector format), etc.
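the storage scheme described above — one point feature per accident, a unique primary key, and descriptive attributes alongside the coordinates — can be sketched in a few lines. the snippet below uses sqlite as a simple stand-in for the file geodatabase; all field names and values are illustrative assumptions, not the actual mot schema.

```python
import sqlite3

# Stand-in for the file geodatabase: each accident is a point feature
# (latitude/longitude) plus descriptive tabular attributes. Field names
# and values are illustrative assumptions only.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE accidents (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- unique key per accident
        lat REAL NOT NULL,                     -- spatial data: coordinates
        lon REAL NOT NULL,
        neighborhood TEXT,                     -- tabular (descriptive) data
        street TEXT,
        hour INTEGER,
        cause TEXT,
        injury_level TEXT
    )
""")

# A record as it might arrive from the mobile field application
con.execute(
    "INSERT INTO accidents (lat, lon, neighborhood, street, hour, cause, injury_level) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    (31.52, 34.45, "northern remal", "salah al-din", 13, "over speed", "medium"),
)

# The auto-assigned primary key plays the role of the unique accident id
row = con.execute("SELECT id, neighborhood, hour FROM accidents").fetchone()
print(row)  # (1, 'northern remal', 13)
```

a real deployment would store the geometry in a gis point layer rather than two plain columns, but the principle — one row per accident, keyed automatically, queryable by any attribute — is the same.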
tabular data involve fields with text, numeric, code formats, etc. storing data in a geodatabase facilitates data retrieving, updating, sharing and performing queries. the third stage is then established in order to perform statistical and spatial analysis of traffic accidents. it is necessary to determine hot spots, hot neighborhoods, hot streets, hot hours and hot months, in addition to other statistics such as injury statistics, accident causes, etc. these analyses can help to draw recommendations and decide actions to avoid future traffic accidents. figure 3 shows the main interface of the new desktop application on the central computer of the etsu of mot. it allows controlling base map properties, adding new data layers, importing external traffic data tables, and exporting the existing database into various other formats. this application helps to electronically collect traffic accident data in a single place and consequently enhances sharing, documenting, managing and analyzing it. it can also be installed on a smartphone in order to facilitate documenting accident data in the field. the yellow coloured feature in figure 3 represents an accident; other accidents documented using the mobile application will automatically have the same appearance but at different locations. old paper-documented data can be arranged in excel or csv files and directly imported into the main interface, as well as being stored in the file geodatabase with the new data. fig. 2. the current and suggested methodologies for traffic accident documentation. to document an accident in the field, the application is opened on a smart phone, and login to the gis cloud is possible either by entering a defined username and password or by using other means such as facebook or google accounts.
figure 3: the main user interface of the new application. a project concerning traffic accidents is first selected and then a template is filled in regarding the descriptive attributes of the accident. the application also allows locating the position of the traffic accident either by typing it manually on a map or by using the gps of the mobile phone. it also allows capturing photos of the accident and saving them in the database. when finished, the data are sent to the central computer by pressing the "send" button. after sending the file from the mobile phone, all entered data can be automatically archived, accessed and managed by the specialists of etsu. steps illustrating the use of this application, as well as the data required to construct a template form within it, are shown in the appendix. vi analysis of traffic accidents since there is not enough information about traffic accidents and accident history, the analysis is limited to the documented statistics of 2019 as a case study to test the newly developed application. the number of traffic accidents in gaza governorate in this year reached 513, distributed among the 21 neighborhoods of the gaza administrative governorate. unfortunately, only 374 of these accidents have a correct geographic location, knowing that this location is expressed in a descriptive manner and not in terms of the accident coordinates. those accidents were transferred into excel format according to the developed template instead of the paper registry format and then imported by the new application. figure 4 shows the characteristics of the spatial phenomenon of these accidents. at first, descriptive data about accidents are linked to their geographical location using gis software, and then a group of maps and graphs are produced in order to understand this phenomenon.
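the grouping behind such maps and graphs — finding the "hot" neighborhood, street, or hour as the most frequent value of a field — can be illustrated with a toy example in plain python. the records below are invented for illustration and are not the actual 2019 data.

```python
from collections import Counter

# Toy accident records as (neighborhood, hour) pairs; a real analysis
# would read these from the geodatabase. Values are invented examples.
records = [
    ("northern remal", 13), ("northern remal", 12), ("al-nasr", 13),
    ("southern remal", 9), ("northern remal", 17), ("al-nasr", 12),
]

# "Hot" neighborhood / hours = the most frequent values in each field
hot_neighborhood = Counter(n for n, _ in records).most_common(1)[0]
hot_hours = Counter(h for _, h in records).most_common(2)

print(hot_neighborhood)  # ('northern remal', 3)
print(hot_hours)
```

the same frequency counts, joined back to the point geometries, are what the gis software renders as hot-spot maps and the bar charts of hot hours and hot months.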
figure 4a shows the distribution of accidents in the city, while figure 4b indicates that the neighborhoods most exposed to accidents are al-nasr and northern and southern remal, due to the large traffic flow in these neighborhoods. fig. 4c locates the distribution of hot and cold spots for accidents in the neighborhoods, where the hottest point is in northern remal and the coldest point is in the old city neighborhood. fig. 4d shows the five hottest streets in descending order: salah al-din street, al-rasheed street, al-jalaa street, omar al-mukhtar street and awn al shawa street. figure 5 shows additional analysis of the traffic accidents: (a) hot hours, (b) hot months, (c) type of injuries, and (d) accidents per traffic composition. it illustrates that most accidents occur in the daily time period between 12 and 14 o'clock; the reason for this is the heavy movement of people at this time, in particular school students, employees, workers, etc. april and october are the hot months. it is noticed that most of the accidents cause moderate injuries, with 151 accidents, followed by damage-only accidents, with 102. the classification of accidents according to traffic composition shows that most of the accidents are caused by cars, with 260 accidents. the analysis also indicates that:
- most of the vehicles that cause accidents have no insurance.
- the average age of drivers who cause accidents is 40 years.
- most injuries occur among children and the elderly.
vii conclusion and recommendations this study shows that the documentation process of traffic accidents in gaza city is traditional and inefficient since it depends on paper registry documents.
to enhance the documentation, new mobile and desktop applications based on gis cloud are developed in order to automatically store spatial and descriptive data about traffic accidents in a computerized geodatabase. many gis analyses of the 2019 accident statistics are performed. results indicate that the hot neighborhood is northern remal, the hot road is salah al-din street, the hot time period is 12–14 o'clock, most accidents are caused by cars, and medium injuries are the dominant outcome of these accidents. attention should be paid to building a digital geodatabase which can include all necessary data for roads and traffic accidents, because it is the basis for planning and decision making based on sound scientific foundations. it is recommended to establish a department for geographic information systems in mot. there is a strong need to legislate binding laws to prevent the increase in the number of traffic accidents, in addition to taking field actions such as setting up traffic lights, as much as possible, near health centers, schools and other public places. acknowledgment i would like to express my gratitude to all those who gave me the possibility to accomplish this study. i am deeply indebted to mahmoud abu rjaila, abdelmonim abu sultan and yanis al-kafarnah for their continuous valuable effort during data collection and data processing. references [1] world health organization, who, "world report on road traffic injury prevention", geneva, 2004. online: www.who.int/publications/i/item/world-report-on-road-traffic-injury-prevention [2] g. d. jacobs and i. sayer, "road accidents in developing countries," accid. anal. prev., vol. 15, no. 5, pp. 337–353, 1983. [3] world health organization, who, "road safety in ten countries", department of injuries and violence prevention and disability, 2010. [4] world bank, "infrastructure assessment in the west bank and gaza: the transport sector assessment", final report, 2006. [5] y. sarraj, "developing road accidents recording system in palestine", an-najah university journal for research – natural sciences, vol. 30, no. 1, nablus, palestine, 2016. [6] al-mezan institute, "al mezan organizes workshop on traffic accidents", palestinian territories, 2011. available from: www.mezan.org/en/post/12608 [7] ministry of interior and national security, interview with the head of the traffic accidents department, september 2020. [8] palestinian ministry of transportation gaza, online: www.mot.gov.ps/objectives/, accessed october 2020. [9] k. al-sahili and h. abu-zant, "development of major aspects of the traffic safety program for palestinian cities", ite annual meeting, seattle, washington, usa, august 2003. [10] m. a. el-hallaq, "spatio-temporal analysis in land use and land cover using gis case study: gaza city (period 1999–2007)", journal of engineering research and technology, vol. 2, no. 1, pp. 48–55, 2017. [11] b. bashbash and y. sarraj, "a study to establish traffic statistical records in gaza city, palestine", international journal of engineering research, vol. 5, no. 10, pp. 1–15, 2017. maher a. el-hallaq was born in gaza city, palestine, on the 29th of december 1967. in 2010, he obtained a phd in surveying and geodesy engineering from cairo university, arab republic of egypt. currently, he works as an associate professor of geomatics engineering in the department of civil engineering at the islamic university of gaza.
he participates in teaching many courses such as surveying i and ii, geomatics, statics and dynamics, global navigation satellite systems, geodesy, remote sensing principles, cartography, and gis. in addition, he has many journal and conference publications, as well as a book entitled "map comparison using template image matching techniques". dr. el-hallaq is a consultant for many local municipalities and private agencies in the gaza strip. nowadays, he is a member of the "geodesy" committee of the geo-molg project, ministry of local government. he is also a reviewer for many journals as well as being an editorial member of the american journal of remote sensing (ajrs). appendix
a. accident time: date: day: hour:
b. accident location: governorate: municipality: neighborhood: street info: nearest landmark: nearest intersection: coordinates:
c. vehicle type (repeated when there is more than one vehicle): ( ) private ( ) public taxi ( ) public bus ( ) heavy truck ( ) rented vehicle ( ) driving education vehicle ( ) motorcycle ( ) agricultural tractor ( ) tactic ( ) animal drawn cart ( ) bicycle ( ) year of production etc.
d. legal status of vehicle (repeated when there is more than one vehicle): licensed or not: insurance condition: insurance type: driver license status: driver license type: driver age:
e. accident type and description: ( ) a vehicle with one or more vehicles ( ) a vehicle with a motorcycle ( ) a vehicle with a parked vehicle ( ) a vehicle with a bicycle ( ) a vehicle with individuals ( ) a vehicle with road components ( ) a vehicle with an animal drawn cart ( ) a vehicle with trees/concrete barriers/guardrails ( ) others accident description: f.
accident reason: ( ) the driver does not comply with traffic signs ( ) wrongly cutting off traffic lights ( ) over speed ( ) wrong overtaking ( ) going against the road ( ) the driver is distracted from driving ( ) driving under the influence of a drug ( ) pedestrians not crossing the road from the pedestrian area ( ) failure to maintain the vehicle ( ) pedestrians not abiding by the pedestrian traffic lights ( ) bad road conditions ( ) weather: rain, fog, raised sand and dust, or strong winds ( ) the use of bright lights ( ) others description:
g. injuries (repeated when there is more than one injury): condition: ( ) death ( ) serious injury ( ) medium injury ( ) minor injury ( ) no injuries injury data: name: ................ age ................ gender ............... mobile ................... injury details: ( ) an animal-drawn cart user ( ) a motorcycle rider ( ) the driver of the vehicle ( ) the seat next to the driver ( ) a passenger in the back seat or in a public vehicle ( ) walking on the side of the road ( ) walking across the road ( ) carried in the box of a transport vehicle ( ) a cyclist use of safety factors: ( ) seat belt ( ) airbag ( ) helmet ( ) a seat for children ( ) no use of any safety equipment
h. non-injuries: individuals who were in the vehicles involved in the accident and had no injuries: number of these persons ...................... did they use vehicle safety equipment .................... its type ........................
i. damages: damages description:
a1: data involved in the traffic accidents template form.
1. login to the mobile app.
2. choose a project.
3. locate an accident manually or by mobile gps.
4. fill part of the template form and send the information to the central computer.
a2: steps to document accidents in the field using the mobile application.
transactions template journal of engineering research and technology, volume 7, issue 2, october 2020 17 integrated land use and transportation modeling within data poor contexts emad b. dawwas (*) (*) assistant professor of urban planning, urban planning engineering department, college of engineering, an-najah national university, nablus, palestine. p o box: 7, dawwas@najah.edu doi: https://doi.org/10.33976/jert.7.2/2020/3 abstract integrated land use and transportation models (ilutms) are revolutionary planning support tools that have been used in the developed countries since the early 1990s. ilutms evolved in response to the complexity of the urban planning process, which has become a more communicative and collaborative process involving different stakeholders with diverse and conflicting interests. the main challenge for ilutms to be used in the developing countries is the cost of the rich data needed for these models to give satisfactory results. this paper discusses the technical problems facing researchers and urban planners in adopting ilutms. the research proposes an alternative modeling approach that makes ilutms applicable in the developing countries' context. the suggested approach is centered on the idea of functioning within a data-poor context instead of the costly data-rich context. the paper concludes with the expected limitations of the new modeling approach and suggests some guidelines for researchers in order to overcome these limitations. keywords: land use planning, modeling land use and transportation, integrated land use and transportation modeling. i introduction planning is a process that mainly aims to produce future-oriented plans used as guidelines at different levels of decision-making. this process is more complicated in urbanized areas, in which almost 50% of the world's total population and nearly three-quarters of all westerners live (fragkias and seto, 2007).
the fast urban growth and higher densities call for sophisticated planning tools able to deal with the interconnected environmental, socioeconomic and geopolitical issues. due to the environmental impacts of urban growth and the high cost of infrastructure required to accommodate this growth, the planning tools should be able to predict where and how much growth will occur, and how strong the change will be. this motivated scientists and researchers from different disciplines to cooperate in order to build what are so-called integrated land use and transportation models (ilutms). ilutms represent an advanced attempt to simulate the urban dynamics represented in the interactions among urban development, land use changes and transportation (dawwas, 2018). ilutms are computerized models consisting of large data sets, which contain information about population, employment, commercial areas, and other socioeconomic characteristics. the data sets are stored in a central data bank for a base period called the base year (jianquan and ian, 2002; dawwas, 2018). base year data sets, which are spatially distributed, are used to run the submodels included in an ilutm over a period of time, five or ten years. the base year is used as a temporal reference to predict and to allocate the different changes, like migration rates into or out of the study area, economic changes, built-up area changes, etc. it is very common to work on land use and transportation planning within a poor data context, especially in developing countries, with no foreseeable solutions to acquiring more accurate and detailed data. this study aims to propose a conceptual model for modified ilutms that can function under poor data conditions in terms of the quantity and the quality of the available data sets. the proposed approach consists of developing new ideas and techniques that attempt to bridge the simplicity of the aggregate modeling approaches and the advantages of disaggregate approaches.
This endeavor requires defining clear borderlines, in terms of statistical uncertainty, between the lowest levels of aggregate data and the highest levels of disaggregate data for the various data sets used in the different submodels of an ILUTM.

II Literature Review

2.1 Modeling under a Data-Poor Context

Urban development is a complex dynamic process because it involves a high number of unrepeatable events and various actors with different patterns of behavior. Given this complexity, planners need to model future urban development patterns in advance, which requires enormous amounts of data. Therefore, modelers working in data-poor situations should abandon the traditional models requiring rich data sets in favor of simpler models that can function using the available substitute data. An excellent example of modeling under data scarcity is a study by Fragkias and Seto about modeling urban growth in a data-sparse environment (Fragkias and Seto, 2007). Taking into account that most of the expected urban growth in the next two decades will occur in developing countries, where the available data are usually sparse (Fragkias and Seto, 2007), the challenge in that study was to develop an urban growth model that used only spatially explicit data. Utilizing the available binary urban/nonurban maps, which are usually generated from satellite images, the researchers used a discrete choice framework to evaluate the probabilities of urban growth for a baseline period by employing a spatially explicit logistic regression analysis. The model achieved relatively high accuracy (73%-77%), and the uncertainty could be captured and reduced by an explicit policy-making framework, which in turn could effectively address problems relating to predictive bias.
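The spatially explicit logistic regression behind such an approach can be sketched as follows. The covariate (distance to the nearest road), the coefficients, and the synthetic cells below are hypothetical illustrations, not values from Fragkias and Seto's model:

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training data: binary urban/nonurban cells, where cells
# closer to a road urbanize more often (an assumed "true" process).
cells = []
for _ in range(500):
    dist = random.uniform(0.0, 10.0)           # km to nearest road
    p_true = sigmoid(2.0 - 0.6 * dist)
    cells.append((dist, 1 if random.random() < p_true else 0))

# Fit b0, b1 by gradient ascent on the logistic log-likelihood.
b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    g0 = g1 = 0.0
    for dist, y in cells:
        err = y - sigmoid(b0 + b1 * dist)
        g0 += err
        g1 += err * dist
    b0 += lr * g0 / len(cells)
    b1 += lr * g1 / len(cells)

# Predicted conversion probability for cells near and far from a road.
p_near = sigmoid(b0 + b1 * 1.0)
p_far = sigmoid(b0 + b1 * 9.0)
```

The fitted slope comes out negative, so the predicted urbanization probability falls with distance, which is the kind of spatially explicit relationship such models extract from binary maps.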
There is another study, by Jianquan and Ian (2002), in which the researchers worked under poor data conditions. They tried to answer a fundamental question about what should be modeled in spatial patterns of urban growth by modeling the urban growth pattern at three levels. The first was the macro level, defined as the probability or possibility of land use changing from nonurban to urban use. The second was the meso level, defined as 'the density', or the possibility of land-use change agglomerating in any pixel. The third was the micro level, defined as 'the intensity', or the possibility of high-density land-use change intensifying in any pixel. The results of this study were incomplete for two reasons. The first was that the third level, which reflects more spatial behavior, was excluded from the model due to the highly detailed data required in the spatial dimension. This micro level of detail requires more disaggregated data at the parcel, census block, or building level, from which information such as number of floors, ownership, and land value can be extracted. The second was that the number of independent variables used at the macro and meso levels was limited because of the data limitation. The results showed that the hierarchical system used in the study was constrained by data limitation and could only partially provide a conceptual and logical framework for the spatial analysis and spatial patterns of urban growth.

2.2 Aggregated vs. Disaggregated Models

ILUTMs are disaggregated models that are mainly based on discrete choice models, which are in turn based on choice behavior. Gensch and Ghose (1997) attempted to compare aggregated with disaggregated models by studying a property of discrete choice models, namely the independence of irrelevant alternatives (IIA), at the two levels.
According to this property, the ratio between two choices is assumed constant when more choices are introduced into the choice set in which the two alternatives exist (Koppelman and Bhat, 2006). When Gensch and Ghose tested the IIA violation at the individual level and at the aggregate level, they found that even when the IIA assumption is valid for each individual, IIA is always violated at the aggregate level. The only exception occurs when there is no heterogeneity among the individuals' choice patterns, which implicitly means that all individuals have identical choice patterns. Therefore, heterogeneity across individuals, rather than violations at the individual level, could be the reason behind the violation of IIA at the aggregate level. These significant findings make it essential to look at the IIA property from a full-choice-set perspective (the aggregated level) rather than a single-pair perspective (the disaggregated level). Consequently, the authors recommended that instead of developing sophisticated and complex choice models that require enormous data at the individual level, it is possible, in some cases, to develop more aggregated choice models that do not require highly detailed data, while at the same time segmenting the study area in order to reduce the heterogeneity. This study builds on this pivotal finding to adapt ILUTMs from a rich-data modeling approach to a poor-data modeling approach. Recently, the competition between aggregated and disaggregated modeling approaches has intensified in travel demand modeling. The aggregated approach is represented by the traditional "4-step" travel demand models, which rely on aggregate demographic data at the traffic analysis zone (TAZ) level. The disaggregated approach, on the other hand, is represented by activity-based microsimulation methods, which employ robust behavioral theory while focusing on individuals and households.
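The IIA property, and its breakdown under aggregation, can be illustrated with a small multinomial logit sketch; the travel modes and utility values below are invented for illustration:

```python
import math

def mnl_probs(utilities):
    """Multinomial logit choice probabilities: P_i = exp(u_i) / sum_j exp(u_j)."""
    expu = [math.exp(u) for u in utilities]
    s = sum(expu)
    return [e / s for e in expu]

# One individual, choice set {car, bus}: the car/bus probability ratio
# equals exp(u_car - u_bus) and is unchanged when rail is added (IIA).
p2 = mnl_probs([1.2, 0.4])
ratio_before = p2[0] / p2[1]
p3 = mnl_probs([1.2, 0.4, 0.9])   # enlarge the choice set with rail
ratio_after = p3[0] / p3[1]

# Two heterogeneous individuals: each satisfies IIA on their own, but the
# aggregate car/bus ratio shifts once the third alternative is introduced.
a2 = mnl_probs([2.0, 0.0]); b2 = mnl_probs([0.0, 2.0])
agg_before = (a2[0] + b2[0]) / (a2[1] + b2[1])
a3 = mnl_probs([2.0, 0.0, 1.9]); b3 = mnl_probs([0.0, 2.0, -1.0])
agg_after = (a3[0] + b3[0]) / (a3[1] + b3[1])
```

For the single individual the ratio is exp(1.2 − 0.4) before and after; for the aggregate of two heterogeneous individuals, the ratio moves away from its original value of 1, which is exactly the aggregate-level IIA violation Gensch and Ghose describe.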
One of the few studies that have compared the two approaches is that of McWethy and Kockelman (2007), who compared microscopic activity-based and traditional models of travel demand. Using identical sets of data, they tried to identify the tradeoffs between these two methodologies. They calibrated and then applied an activity-based model and a traditional aggregate model to the same study area. The results of the analyses showed several differences regarding performance and accuracy. Activity-based models required more calibration and application effort in order to ensure that synthetic populations matched key criteria and that activity schedules matched surveyed behaviors. At the same time, the modeling process remains realistic and consistent across household members. On the other hand, activity-based models were found to be more sensitive to changes in model inputs, such as the capacity expansion and employment location tests (McWethy and Kockelman, 2007). This gives additional support to the notion that aggregate models ignore behavioral distinctions across the population.

2.3 Missing Data and Imputation

The presence of missing values is an important issue facing modelers and planners because these missing values make data analysis and usage problematic. This problem is more challenging in a poor data environment, where missing data are usually more widespread. Analyses from some highway agencies show that up to 50% of permanent traffic counts have missing values (Zhong et al., 2002). In this case, it is difficult to simply eliminate such a significant portion of the data from traffic analysis. Therefore, these missing data must be substituted through a process called data imputation, which includes different methods with different associated levels of accuracy.
There are many studies concentrating on different methodologies for analyzing missing data, including the basic concepts and applications of multiple imputation techniques and the analysis of results from multiply imputed data sets (Yang, 2002). Bradley (1994) discussed three main topics related to missing data: (1) bootstrap methods for missing data, (2) the relationship of bootstrap methods to the theory of multiple imputation, and (3) computationally efficient ways of executing them. The results showed that the simplest form of nonparametric bootstrap confidence interval turns out to give convenient and accurate answers. In addition, there were interesting practical and theoretical differences between bootstrap methods and the multiple imputation approach, as well as some useful similarities. In another study, fully conditional specification was used for multiple imputation of discrete and continuous data (Buuren, 2007). In this paper, two approaches for imputing multivariate data were presented and their results compared: joint modeling (JM), which is based on parametric statistical theory, and fully conditional specification (FCS), which is a semi-parametric and flexible alternative. JM and FCS were applied to a data set containing 3801 observations with missing data. Imputations for these data sets were created under two models: a multivariate normal model with rounding and a conditionally specified discrete model. The JM approach introduced biases in the reference curves, whereas FCS did not. The paper concluded that FCS is a useful and easily applied flexible alternative to JM when no convenient and realistic joint distribution can be specified. Regarding ILUTMs, using a proper imputation method can help maintain data integrity and improve the output accuracy of the models.
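A minimal sketch of the multiple imputation idea, loosely in the FCS spirit: regress the incomplete variable on an observed one, draw several imputations with residual noise, and pool the estimate across the completed data sets. The household variables and figures below are hypothetical, and a real application would use a full MI implementation:

```python
import random, statistics

random.seed(7)

# Hypothetical data: household income (some values missing, shown as None)
# is roughly linear in household size.
size   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 6]
income = [10, 14, None, 18, 19, None, 25, 27, None, 33]

# Fit a simple regression of income on size using the observed pairs.
obs = [(s, y) for s, y in zip(size, income) if y is not None]
xs = [s for s, _ in obs]; ys = [y for _, y in obs]
xbar, ybar = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in obs)
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
resid_sd = statistics.stdev(y - (intercept + slope * x) for x, y in obs)

# Create M completed data sets, each imputing missing values with noise,
# then pool the quantity of interest (here, the mean income).
M = 20
pooled = []
for _ in range(M):
    completed = [y if y is not None
                 else intercept + slope * s + random.gauss(0, resid_sd)
                 for s, y in zip(size, income)]
    pooled.append(statistics.mean(completed))
mi_estimate = statistics.mean(pooled)
```

Drawing with noise rather than plugging in the regression prediction is what keeps MI's variance estimates honest; pooling across the M data sets is the step that distinguishes multiple from single imputation.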
This practically means improving the model's capability to predict future change in land use. Based on a pattern matching technique, Zhong et al. (2006) proposed a new method of data imputation for data from an automatic traffic recorder (ATR) in Alberta, Canada. According to their results, the new method improved the model outputs and performance over the traditional models. In another study, genetically designed neural network and regression models, factor models, and autoregressive integrated moving average (ARIMA) models were developed. It was found that genetically designed regression models based on data from before and after the failure gave the most accurate results. Average errors for refined models were lower than 1% with stable patterns, and for counts with relatively unstable patterns, average errors were lower than 3% in most cases (Zhong et al., 2004). This study takes advantage of these results in order to propose an alternative approach that allows ILUTMs to function effectively within a data-poor context.

III Poor-Data Context Framework

Modeling in a data-poor context means using lower levels of detailed data, which are widely available and can be easily obtained, as input to the ILUTMs. One form of data-poor context is to use aggregated data (coarse resolution) instead of disaggregated data (fine resolution). Data aggregation can be classified into two main types, spatial and temporal. Spatial aggregation means increasing the size of the smallest spatial unit within which people's choices, activities, and behavior are assumed to be homogeneous; for example, instead of using data at the parcel level, we use neighborhood boundaries within a city. Temporal aggregation means aggregating data over a longer period of time, such as months or even a year, instead of days or daily time intervals.
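The two kinds of aggregation can be sketched as follows; the parcel records and traffic counts are invented for illustration:

```python
from collections import defaultdict

# Hypothetical parcel-level records: (neighborhood, households, jobs).
parcels = [
    ("north", 12, 4), ("north", 8, 0), ("north", 20, 15),
    ("south", 5, 2), ("south", 30, 22),
]

# Spatial aggregation: parcel level -> neighborhood totals.
by_hood = defaultdict(lambda: [0, 0])
for hood, hh, jobs in parcels:
    by_hood[hood][0] += hh
    by_hood[hood][1] += jobs

# Temporal aggregation: daily traffic counts -> monthly total.
daily_counts = {("2020-03", d): 1000 + 10 * d for d in range(1, 31)}
monthly = defaultdict(int)
for (month, _), count in daily_counts.items():
    monthly[month] += count
```

Within each neighborhood (and month), the aggregate model then treats behavior as homogeneous, which is exactly the information given up in exchange for cheaper data.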
A data-poor context can also mean dealing with inconsistent data that come from different sources, were collected by different techniques, and date from different periods. This usually leads to misunderstanding and misreporting about what these data sets mean and how the data should be interpreted. Because of data aggregation and data inconsistency, more missing data are expected to appear, which increases the uncertainty that already exists in any model. Generally, uncertainty is a major problem that enters into all aspects of model development in two phases. The first phase is the development of a conceptual model, which is a qualitative representation of the relationships between the different parts of the urban system being modeled. Uncertainty at this phase does not increase due to data aggregation or data inconsistency because no quantitative data are needed; therefore, in this phase we have almost the same level of uncertainty in both contexts, the data-rich and the data-poor. The second phase is the development of a quantitative model, in which variables representing the relationships developed in the conceptual model are identified and the parameters of these variables are estimated. This is a critical phase because all data enter the model, and the model outputs are obtained, in this phase. The difference in uncertainty between the data-rich and data-poor contexts is expected to appear here, and it should be tested here as well. According to the flowchart in Figure (1), the modeling process is conducted through three main steps, discussed as follows:

Fig. 1. Proposed modeling framework

Step 1) Input Data

This step is the main step, in which the data are prepared to be input to the following steps. Major changes should take place to move from the full-data modeling approach to the data-poor modeling approach.
1. Data Preparation

The data sets are prepared in two stages as follows:

a. Data Aggregation

Defining the limits of aggregation and the relevant aggregate population is the first issue to resolve in making an aggregate forecast. This study will exploit the available procedures for aggregating data. The following methods will be used for aggregating the available data to serve as input to the aggregate model (Ben-Akiva and Lerman, 1985):

1. Average individual: an average individual is constructed for the population, and this average is used as an approximation for a typical individual;
2. Classification: the population is divided into a number of nearly homogeneous subgroups of different sizes, and the choice probability of the average individual within each subgroup is used;
3. Statistical differentials: this method is a "technique for approximating the expected value of a function of random variables from information about the moments of their joint distribution";
4. Explicit integration: the distribution of the attributes in the population is represented with an analytically continuous distribution. The main assumption in this method is that the population is defined in such a way that all individuals in it have the same choice set. If this condition is violated, the population must be divided, and separate aggregate forecasts must be made for each subgroup;
5. Sample enumeration: this method uses a random sample of the population to represent the entire population.

b. Missing Data

When treating large amounts of data from different sources with different resolutions, the missing data problem is inevitable. Common methods for solving this problem usually introduce substantial bias and, in most cases, yield standard errors that are too low (David, 2002). However, better methods are available.
Three of these methods will be used in this study: listwise deletion (LD), maximum likelihood (ML), and multiple imputation (MI). Selecting one of these methods depends mainly on the assumption about the missingness and on the model being estimated, as shown in Figure (2).

Fig. 2. Missing data treatment

Following is a brief description of these methods (David, 2002).

Listwise deletion

Listwise deletion (LD) is accomplished simply by deleting any observation with missing data on any variable in the model of interest and then carrying out the required analysis on the complete data set. Provided that the missing data are MAR (missing at random) and the amount of missing data is tolerable, LD has two obvious advantages. The first is that it can be used for any kind of statistical analysis, and the second is that no special computational methods are required. Furthermore, the standard errors and test statistics obtained with the LD data set are as appropriate as those from the full data.

Maximum likelihood

Maximum likelihood (ML) can be used if the missing data are MAR but their amount is intolerable, and, at the same time, if we are dealing with linear or log-linear models. A major limitation of this method is precisely that it is limited to linear and log-linear models, which creates the need for a suitable alternative to ML.

Multiple imputation

Multiple imputation (MI) is an alternative approach with the same optimal properties as ML, but it can deal with any kind of data and any kind of model. Furthermore, MI usually produces consistent estimates when the data are MAR. When the data are not MAR, ML and MI can still be used, but it is difficult to obtain high-quality results, because these methods are very sensitive to the assumptions about the missingness mechanism; in addition, there is no way to test these assumptions.
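The simplest of the three methods, listwise deletion, can be sketched in a few lines; the survey rows below are hypothetical:

```python
import statistics

# Hypothetical survey rows: (trips_per_day, household_cars); None = missing.
rows = [(2, 1), (3, None), (4, 2), (None, 0), (5, 2), (1, 1)]

# Listwise deletion: keep only fully observed rows, then analyse as usual.
complete = [r for r in rows if None not in r]
mean_trips = statistics.mean(t for t, _ in complete)

# The cost of LD: the share of observations discarded.
share_dropped = 1 - len(complete) / len(rows)
```

The `share_dropped` quantity makes the tradeoff explicit: with the 50% missingness rates reported for permanent traffic counts, LD would discard too much data, which is when ML or MI becomes necessary.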
2. Data Integration

ILUTMs require extensive amounts of data, which makes the acquisition, maintenance, and calibration of these data the most time-consuming part of the modeling process. ILUTM data requirements can be classified mainly into:

a. Socioeconomic data, including data about population and household characteristics;
b. Land use data, including available land use and land supply, land use plans, and density of development;
c. Economic data, including businesses and employment;
d. General measures of accessibility in urban areas;
e. Data about the environmental constraints.

Data integration requires dealing with all these data, which come from different sources and at different levels of detail, ranging from the parcel level to the growth boundary level. After preparing the data, a database containing them will be built. In addition, a series of interconnected data tables will be linked to the study area, which will be converted to grid cells. Each grid cell will therefore contain these data together with the related policies specifying the development rules, according to which the database will be updated over time during the modeling process.

Step 2) Modeling (Adjusting Models to New Data and Creating Scenarios)

The main difference between the full model and the aggregate model is the input data, so existing models will be modified to deal with the aggregated data prepared in Step 1. There will therefore be differences in the number of variables in the two models due to the differences in the input data; however, the variables used in the aggregate model will be a subset of those used in the disaggregate model. To examine the sensitivity of each model, policies and scenarios should be applied to the models. Based on the scenarios, the different results of the models can be compared and evaluated in Step 3.
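A toy sketch of the grid-cell data bank and the scenario mechanism described above: each cell carries base-year data and a zoning rule, two policy scenarios apply different growth rules, and their outputs can then be compared. The cells, attributes, and growth rates are assumptions for illustration, not part of the proposed model:

```python
# Base-year data bank: grid cells with attributes and a development rule.
base_grid = {
    (0, 0): {"households": 120, "zone": "residential"},
    (0, 1): {"households": 0,   "zone": "greenbelt"},
    (1, 0): {"households": 40,  "zone": "residential"},
}

# Two hypothetical policy scenarios: growth multiplier per zone per period.
scenarios = {
    "business_as_usual": {"residential": 1.10, "greenbelt": 1.00},
    "densification":     {"residential": 1.25, "greenbelt": 1.00},
}

def run(grid, growth, periods=2):
    """Advance the data bank `periods` modeling steps under one scenario."""
    cells = {k: dict(v) for k, v in grid.items()}  # keep the base year intact
    for _ in range(periods):
        for cell in cells.values():
            cell["households"] = round(cell["households"] * growth[cell["zone"]])
    return cells

# Scenario outputs to be compared and evaluated (Step 3).
totals = {name: sum(c["households"] for c in run(base_grid, g).values())
          for name, g in scenarios.items()}
```

Copying the grid inside `run` mirrors the role of the base year as a fixed temporal reference: every scenario starts from the same data bank, so differences in `totals` are attributable to policy alone.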
Some changes will be required in the existing models in order to accommodate the distinct changes in the input data.

Step 3) Modeling Output

The output of the modeling process will be sets of different maps and tables representing the results of different scenarios based on different sets of policies and regulations. More specifically, the results may include:

- Acreage by land use
- Housing units by housing type
- Square feet of nonresidential space by type
- Property values
- Businesses and employment by sector
- Households by type (income, age, presence of children, household size)
- Accessibility measures to employment by type and population by type

IV Conclusion & Recommendations

The output of the proposed modeling approach will certainly have lower levels of detail when compared to the data-rich modeling approach. The results, however, will enable decision makers and urban planners to predict the future trends and patterns of land uses and the corresponding transportation demands with a reasonably high level of accuracy. Urban planners who adopt the proposed modeling approach, and researchers who carry out further work on it, should keep in mind that this modeling approach has some limitations. Firstly, there will be uncertainty about the source of errors, namely whether they come from the data aggregation and input variation or from the change in the aggregate model parameters. The uncertainty in the model parameter estimates may be a significant source of uncertainty in some model outputs. Secondly, the validity of all results will be constrained by the limitations of the model used as the integrated land use-transportation modeling software. Finally, the results cannot be generalized, because problems of incomplete and missing data are not limited to one or two studies; several studies should be conducted before we reach acceptable levels of generalizability, where the results of one study can be applied to other cases.

References

[1] Ben-Akiva, M. and Lerman, S.
(1985). Discrete Choice Analysis: Theory and Application to Travel Demand (pp. 253-275). Cambridge, Massachusetts: The MIT Press.
[2] Bradley, E. (1994). Missing data, imputation, and the bootstrap. Journal of the American Statistical Association, Vol. 89, No. 426.
[3] Buuren, S. (2007). Multiple imputation of discrete and continuous data by fully conditional specification. Statistical Methods in Medical Research, Vol. 16, No. 3, 219-242.
[4] David, P. (2002). Missing Data (pp. 4-12). California: Sage Publications. Dawwas, E. B. (2018). Towards a land use-transportation interactive modeling: a conceptual model for collaborative planning. Journal of Engineering and Architecture, Vol. 6, No. 1, 91-100.
[5] Fragkias, M. and Seto, K. (2007). Modeling urban growth in data-sparse environments: a new approach. Environment and Planning B: Planning and Design, Volume 34, 858-883.
[6] Gensch, D. and Ghose, S. (1997). Differences in independence of irrelevant alternatives at individual vs aggregate levels, and at single pair vs full choice set. Omega, Int. J. Mgmt Sci., Vol. 25, No. 2, 201-214.
[7] Jianquan, C. and Ian, M. (2002). Modelling urban growth patterns: a multiscale perspective. Environment and Planning A, Volume 35, 679-704.
[8] Koppelman, F. and Bhat, C. (2006). A Self Instructing Course in Mode Choice Modeling: Multinomial and Nested Logit Models. (Accessed on 02/29/2008 at www.civil.northwestern.edu/people/koppelman/pdfs/lm_draft_060131final-060630.pdf)
[9] McWethy, L. and Kockelman, K. (2007). Comparing microscopic activity-based and traditional models of travel demand: an Austin area case study. Center for Transportation Research, University of Texas at Austin.
[10] Parsons Brinckerhoff Quade & Douglas, Inc. (1998). Land Use Impacts of Transportation: A Guidebook. Transportation Research Board and National Research Council. (Accessed on 02/19/2008 at http://nepa.fhwa.dot.gov/renepa/renepa.nsf/)
[11] Springfield MPO website. (http://www.bestplaces.net/city/springfield_or-54169600000.aspx)
[12] Waddell, P. (2002). UrbanSim: modeling urban development for land use, transportation and environmental planning. (Accessed on 02/15/2008 at the UrbanSim website: http://www.urbansim.org/papers/)
[13] Yang, C. (2002). Multiple imputation for missing data: concepts and new development. (Accessed on 02/29/2008 at www.sas.com/rnd/app/papers/multiple-imputation.pdf)
[14] Zhong, M., Sharma, S.C. and Lingras, P. (2006). Matching patterns for updating missing values of traffic counts. Journal of Transportation Planning and Technology, Vol. 29.
[15] Zhong, M., Lingras, P. and Sharma, S. (2002). Estimation of missing traffic counts using factor, genetic, neural, and regression techniques. (Accessed on 02/29/2008 at www.unb.ca/civil/mingzhong.htm)
[16] Zhong, M., Lingras, P. and Sharma, S.C. (2004). Estimation of missing traffic counts using factor, genetic, neural, and regression techniques. Transportation Research Part C: Emerging Technologies, No. 12, 139-166.

Emad B. Dawwas is an assistant professor in Urban Planning Engineering at An-Najah National University (ANU), College of Engineering. I got my PhD in 2011 from the University of Washington, Seattle, USA.
I have been working with municipalities, village councils, and Palestinian ministries since I got my master's degree in 2002. I have prepared many master physical plans and strategic investment plans for Palestinian communities over the last five years. My research interests are focused on land use and transportation planning and on employing new technologies to improve the planning process. I am mainly interested in developing planning support systems and urban analytical models that suit developing countries, where less data and smaller budgets are available.

Journal of Engineering Research and Technology, Volume 6, Issue 2, October 2019

Development of a Device for Measuring Parameters of the Sea Wave

Ahmed M. Alqataa, Eskender A. Bekirov, and Ennan R. Murtazaev

Abstract—In this work we study the developed instrument for measuring the parameters of sea waves. The presented theoretical study of wave parameters determines wave speed, height, period, and frequency in digital form. The measured parameters are transmitted from the microcontroller through a GSM/GPRS RS485/232 system to the base station.

Index Terms—wave parameters, microcontroller, encoder, infrared sensor, signal transmission.
I Introduction

The Black Sea is a closed sea; its tides are so small that they are almost imperceptible. The magnitude of the tidal fluctuations of the Black Sea level is from 3 to 10 cm [1]. In the open sea, winter waves reach a height of 6-7 m; the Black Sea's shock waves reach a meter in height. Currents in the Black Sea along the coast of Crimea are predominantly counterclockwise. The currents are weak; their speed rarely exceeds 0.5 m/s. Their main causes are river runoff and wind exposure. The greatest wave heights observed in the Black Sea were 14 m, with a wavelength of 200 m; on the approaches to the coast, the maximum wave height was 6 m and the length 120 m [1]. Wind speed and the length of its fetch over the sea have a great influence; waves up to 3 m high usually prevail. In open waters, maximum wave heights reach more than 10 m, and during strong storms they may exceed this level. Seasonal fluctuations also have a great influence on the sea level: in May-July a high rise in the sea water level is observed, while in October-November a decrease is observed. The difference between the winter and summer sea levels is 40 cm. The most frequent fluctuations of the Black Sea level are wind-driven. Their formation depends on certain atmospheric processes within the natural synoptic period; their duration ranges from 4 to 8 days. The averaged wave oscillation value for the Black Sea is 0.8 m. The theory of the onset of wave flow was developed by Academician V.V. Shuleikin [2] in 1954. The Black Sea wave climate is assessed in [3] using a total of 38 years of data (1979-2016). As a first step, the long-term variations of the main wave parameters were evaluated using data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Based on these values, the nearshore and offshore conditions in the Black Sea were evaluated.
As to the satellite measurements, there is no correlation between water depth and wave resources, with more consistent values being reported in the western part of the basin. Regarding the spatial distribution of extreme events, it seems that the storm conditions occurring in the western part are more consistent, while in the eastern sector storm conditions are more likely to be reported for a relatively short time window. Based on these results, we can conclude that the Black Sea is a dynamic environment where the wave energy budget changes on a seasonal or inter-annual scale. These variations bring opportunities but also challenges, such as beach erosion due to wave action. Nevertheless, for navigation and offshore activities, the occurrences of rough events are more important, since they negatively influence the safety and productivity of these sectors [3]. The paper [4] shows the results of a hindcast study of wind waves on the Black Sea based on a continuous numerical calculation for the period between 1949 and 2010. The large time span of this period makes it possible to obtain reliable statistical and extreme parameters of wind waves, as well as to assess the evolution of the Black Sea's wave climate. In this research, average and extreme parameters of wind waves on the Black Sea were derived, which generally match the most recently published results. Additionally, an assessment of the interannual and seasonal variability of storms on the Black Sea was carried out. A slight negative trend in both the annual duration and the quantity of storms was observed. The present state of the Black Sea wave power was estimated in [5] based on the period 2012-2015, using wind data from the limited-area atmospheric model ALADIN. It was found that the mean annual wave energy flux reaches 4.8
kW/m for the southwestern Black Sea and above 4 kW/m for the western shelf. A 110-year wave hindcast was performed to evaluate the changes in the wave power, and it was found that the wave power increased during the first half of the 20th century in the western part of the sea (where it is highest) and decreased after the seventies. The study of the influence of teleconnections showed that the changes in the wave power at the western shelf are driven by other factors (mainly linked with NAO and EA/WR) than in the northern and eastern parts of the sea, where it is linked with AMO and PDO and is highest when both are negative. As for the applicability of wave energy as a renewable energy resource, the conclusions, taking into account the negative trend and the climate projections, are hardly optimistic, and it may have some applications only in combined wind-wave energy converters. One of the urgent tasks of modern hydropower is the use of the energy of sea waves in order to convert wave energy into electrical energy. To solve this problem, it is necessary to have the technical parameters of the wave. Since the waves are not systematic and do not vary according to predetermined values, it is necessary to analyze the disturbing effects that create the waves. Waves can be longitudinal and transversal. In longitudinal waves, particles of water oscillate along the direction of wave propagation. Oscillations of water particles perpendicular to the direction of wave propagation create transversal waves. Wave movements include longitudinal and transversal oscillations; the gravitational motions arising on the water surface, in which particles move in circles, decrease with depth. To determine the energy transferred by a wave, which is characterized by the Poynting vector, or vector of energy flux density, it is necessary to know the magnitude, length, and speed of the wave.
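These parameters (length, period, speed, and frequency) are connected by the standard harmonic-wave relations; a small numeric sketch follows, with assumed (illustrative) values for the wavelength and period:

```python
import math

# Harmonic wave phi(x, t) = A * sin(w*t - k*x + phi0), with assumed values.
wavelength = 120.0             # m (illustrative, cf. long Black Sea waves)
period = 8.0                   # s (illustrative)

k = 2 * math.pi / wavelength   # wave number, rad/m
w = 2 * math.pi / period       # cyclic frequency, rad/s
nu = 1 / period                # linear frequency, Hz
v = w / k                      # phase speed = wavelength / period

def phi(x, t, A=1.5, phi0=0.0):
    """Water surface displacement of the harmonic wave at (x, t)."""
    return A * math.sin(w * t - k * x + phi0)

# A crest travels one wavelength per period, so the displacement repeats:
same = phi(0.0, 0.0) - phi(wavelength, period)
```

With these values the phase speed is 15 m/s; measuring any two of the period, length, and speed fixes the third, which is the basis for computing wave parameters digitally from sensor readings.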
II Theoretical Studies

Like any oscillation, waves can be represented as a superposition of harmonic waves varying according to a sinusoidal law with different parameters. The equation of a one-dimensional harmonic wave is

φ(x, t) = A sin[2π(t/T − x/λ) + φ₀]   (1)

or

φ(x, t) = A sin(ωt − kx + φ₀)   (2)

where k = 2π/λ is the wave number; A is the oscillation amplitude; T is the wave period, T = 2π/ω = 1/ν; ω is the cyclic frequency; ν is the linear frequency of oscillation of a particle in the wave; λ is the wavelength; and φ₀ is the initial phase of the particle in the wave. In wave motion in an elastic medium there is no transfer of matter; in the fluctuation of sea waves there is a transfer of matter. Depending on the direction of oscillation of the particles of the medium (water), waves are longitudinal or transversal: in longitudinal waves the particles oscillate along the direction of wave propagation, in transversal waves perpendicular to it. In gravitational waves, which contain components of both longitudinal and transversal oscillations and appear, for example, on the surface of water, the particles move vertically along circles whose radius decreases with depth. The source of the waves, acting on the adjacent volumes of water, continuously transfers energy to them, which propagates the wave through the water. When the longitudinal wave described by equation (2) propagates, the change in the energy of a volume dV can be determined. As the volume dV, let us choose an elementary cylinder (fig. 1). The wave of weight P and radius r is driven by the force of the wave F₁ and the force of the wind F₂. R is the normal reaction force of the water plane, shifted relative to the centre of inertia C of the wave by the magnitude of the rolling friction coefficient f_k in the direction of motion.
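The harmonic wave relation (2) above can be illustrated with a minimal sketch; the amplitude, period and wavelength values below are purely illustrative, not taken from the paper.

```python
import math

def wave_displacement(A, T, lam, x, t, phi0=0.0):
    """Displacement of a one-dimensional harmonic wave, eq. (2):
    phi(x, t) = A*sin(omega*t - k*x + phi0),
    with cyclic frequency omega = 2*pi/T and wave number k = 2*pi/lam."""
    omega = 2 * math.pi / T   # cyclic frequency, rad/s
    k = 2 * math.pi / lam     # wave number, rad/m
    return A * math.sin(omega * t - k * x + phi0)

# Illustrative values: 1 m amplitude, 6 s period, 56 m wavelength.
d = wave_displacement(A=1.0, T=6.0, lam=56.0, x=0.0, t=1.5)
# At x = 0 and t = T/4 the phase is pi/2, so the displacement equals A.
```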
F is the reaction force at the wave crest, equal in magnitude to the force applied to the wave; F_tr is the friction force of the wave on the horizontal plane of the water surface. In accordance with the direction of the s axis, we take as positive the direction of the wave slope angle α.

Figure 1. Elementary cylinder (forces F₁, F₂, F, reaction R, friction F_tr, displacement ds, angle α).

Let us write the theorem on the change in the kinetic energy of a system of material points as the wave moves under the action of the force F, with the centre of inertia C of the wave moving through the distance ds:

T₂ − T₁ = ΣA^(e) + ΣA^(i)   (3)

Since the wave is treated as an immutable (rigid) material system, the sum of the work of the internal forces is zero, therefore

T₂ − T₁ = ΣA^(e)   (4)

If the centre of inertia C of the wave moves under the action of the force of the wave and the force of the wind through the elementary displacement ds directed along the s axis, then, taking into account the position of the instantaneous centre of velocities, we have

dφ = ds/r   (5)

where dφ is the elementary angular displacement of the wave about the instantaneous centre of velocities; it is associated with the elementary displacement ds of the centre of gravity C by this dependence. The work of the external forces on the elementary displacement ds is

dA = dA^(P) + dA^(roll) + dA^(F₂) + dA^(F)   (6)

Since the movement of the centre of inertia C is horizontal,

dA^(P) = 0   (7)

The elementary work of rolling friction is

dA^(roll) = −m_roll dφ   (8)

The work of the rolling friction couple is negative, since the direction of its moment is opposite to the direction of the wave motion. Since

m_roll = f_k R = f_k (P − F sin α)   (9)

then, taking formula (5) into account, we find that

dA^(roll) = −(f_k/r)(P − F sin α) ds   (10)

The friction force F_tr does no work (when rolling without sliding, the velocity of the contact point is zero).
The elementary work of the wind force is

dA^(F₂) = F₂ 𝒱 dt = F₂ ds   (11)

Let us calculate the elementary work of the force F. Choosing point C as the pole,

dA^(F) = F · ds + m_C(F) dφ   (12)

where ds is the vector of the elementary displacement of the centre of inertia C, and m_C(F) is the moment of the force F about the axis passing through point C perpendicular to the fixed plane, i.e. m_C(F) = F r. Then

dA^(F) = F cos α ds + F r dφ   (13)

Using formula (5), we have

dA^(F) = F(1 + cos α) ds   (14)

After substituting formulas (7), (10), (11) and (14) into (6), we obtain the elementary work of the external forces applied to the wave on the elementary displacement ds:

dA = [F(1 + cos α) + F₂ − (f_k/r)(P − F sin α)] ds   (15)

To determine the work of the external forces over the displacement s of the centre of inertia, we integrate formula (15) from 0 to s, obtaining

ΣA^(e) = [F(1 + cos α) + F₂ − (f_k/r)(P − F sin α)] s   (16)

Let us calculate the kinetic energy of the wave. In the initial position the wave was at rest, so

T₁ = 0   (17)

The kinetic energy in the final position of the wave is

T₂ = (1/2) M 𝒱_C² + (1/2) I_C ω²   (18)

where M = P/g is the wave mass, I_C = (P/g) ρ² is the moment of inertia, ρ is the radius of inertia, and ω = 𝒱_C/r is the angular speed. Therefore,

T₂ = (P/2g) 𝒱_C² (1 + ρ²/r²)   (19)

Substituting (16), (17) and (19) into equation (4) and solving it for 𝒱_C, we find the desired speed of the centre of the wave C:

𝒱_C = √( 2gs [F(1 + cos α) + F₂ − (f_k/r)(P − F sin α)] / (P(1 + ρ²/r²)) )   (20)

It can be seen from formula (20) that the wave is in motion if the forces satisfy the condition

F(1 + cos α) + F₂ − (f_k/r)(P − F sin α) > 0   (21)

The sliding friction of a fluid is characterized by the viscosity η, which depends on the water temperature; the dynamic viscosity of water is about 1 mPa·s (1 cP) at 20 °C.

III Development of a Device for Measuring the Parameters of the Sea Wave
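The energy balance for the moving wave can be checked numerically. The sketch below assumes the reconstructed reading of formula (20), 𝒱_C = √(2gs·W/(P(1 + ρ²/r²))) with the bracketed work term W of (15), and all input values are hypothetical.

```python
import math

def wave_speed(F, F2, P, alpha, fk, r, rho, s, g=9.81):
    """Speed of the wave's centre of inertia after displacement s,
    following the reconstructed form of eq. (20). The wave moves only
    if the bracketed work term of eq. (15) is positive (eq. (21))."""
    work = F * (1 + math.cos(alpha)) + F2 - (fk / r) * (P - F * math.sin(alpha))
    if work <= 0:
        return 0.0  # condition (21) not satisfied: the wave does not move
    return math.sqrt(2 * g * s * work / (P * (1 + rho**2 / r**2)))

# Hypothetical values: wave force 100 N, wind force 10 N, weight 50 N,
# alpha = 0, rolling friction coefficient 0.01 m, radius 1 m,
# radius of inertia 0.7 m, displacement 2 m.
v = wave_speed(F=100.0, F2=10.0, P=50.0, alpha=0.0,
               fk=0.01, r=1.0, rho=0.7, s=2.0)
```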
Various devices have been developed for measuring wave energy [6-18], but they do not allow the required data to be obtained promptly; their accuracy is low, and they suffer from complex circuit solutions, complex signal processing and complicated designs. To improve measurement accuracy and to provide continuous monitoring of wave parameters, with storage and remote transmission of information about the speed, height and length of the wave, an electronic device has been developed for measuring the parameters of the sea wave which can be used both in the coastal zone and at a considerable distance from the shore. The developed device (figure 2) consists of a unit for measuring the wave speed with an infrared sensor; a unit for measuring the wave height with an ultrasonic sensor; a wavelength calculator; an encoder and a decoder; a control panel; a microcontroller; a reference frequency generator; a liquid-crystal indicator; and a unit for the reception and transmission of data. The expected technical result is increased accuracy of parameter measurement due to a simplified design and the use of modern electronic components and a microcontroller.
The task is solved as follows. The device contains units for measuring the speed and height of the wave and a unit for receiving and transmitting these wave parameters. An ultrasonic sensor measuring the wave height level is connected through an encoder to an input of the microcontroller. An infrared radiation sensor illuminates a disk encoder with holes through which the light signal reaches a receiving device; from the duration of the resulting pulses the wave speed can be determined. The disk is rotated by a screw through a reducer. Data from the infrared receiver are sent to the second input of the microcontroller, which processes the incoming signals and passes the result through a decoder to the indicator. Pulses of megahertz frequency come from the reference frequency generator to the counting input of the microcontroller. The microcontroller stores the information in a buffer and transmits it over a distance to transceiver devices via GSM/GPRS cellular networks.

Figure 2. Block diagram of the device for measuring the parameters of the sea wave.

Figure 3 shows the constructive solution of the developed device for measuring the wave, which contains: float (1); control panel with liquid-crystal display (2); disk (3); screw (4); control panel (5); infrared sensor (6); gearbox (7); hydraulic cylinder (8); protective pipe (9); elastic element, a spring (10); ultrasonic sensor (11); rod (12); anchor (13). The following elements are used to measure the wave speed: screw (4), gearbox (7), disk (3) and infrared sensor (6). As the wave speed increases, the rotation frequency of the screw increases.
The screw, through a reducer, rotates a disk with holes (fig. 4). The disk acts as an incremental encoder, which allows the rotation to be encoded. A stepping optical encoder consists of the following components (fig. 5): a light source, a photosensitive sensor and a coded disk with a certain number of holes through which light from the source reaches the photosensitive sensor. When the disk rotates, a series of pulses v = f(t) comes from the photosensitive sensor, the frequency of which is directly proportional to the speed of the wave. If a worm gear and an integrating pulse-counting mechanism were installed on the disk shaft, it would be possible to estimate the average change in the wave speed at a given location over a certain time interval. When the disk rotates, modulated pulses (fig. 5b) come from the sensor (fig. 5a) and are fed to the microcontroller (fig. 6) of the electronic unit of the measuring device. The wave speed meter operates in frequency-pulse modulation mode: at the output we have pulses whose frequency is modulated according to the speed, length and head of the wave. An increase in the wave speed leads to an increase in the screw rotation frequency and, accordingly, in the pulse frequency.

Figure 3. Design of the device for measuring the sea wave.
Figure 4. Disk incremental encoder.
Figure 5. Diagram of the device for measuring the wave speed: a) sensor; b) pulses at the receiver.
Figure 6. Block diagram of the electronic unit of the measuring device.
Figure 7. Parameters of the trochoidal wave.

The disk has m = 72 holes. If N is the number of pulses per second from the sensor, the number of disk revolutions per second is k = N/m, where k is the disk constant. Since there are 72 holes on the disk, every 72 pulses from the sensor correspond to one full revolution of the disk.
To measure the wave speed, it is necessary to count the pulses generated by the sensor. Suppose the sensor produces N = 360 pulses per second; dividing by the number of holes in the disk gives the number of disk revolutions per second:

k = N/m = 360/72 = 5 rev/s

Therefore, with one revolution corresponding to one metre of wave travel, the wave speed is 𝒱 = 5 m/s = 18 km/h. With a disk radius of 25 cm and 72 measurement holes, the angle between adjacent holes is 360°/72 = 5°. The circumference of the disk is

l = 2πr = 2π · 25 ≈ 157 cm

Suppose the disk makes five full rotations, θ = 5 · 360° = 1800°; then

λ = θ · l / 360° = 1800 · 157 / 360 = 785 cm = 7.85 m

For wavelength measurement, the deep-water dispersion relation gives

λ = 2πg/ω² = gT²/(2π), or equivalently λ = g/(2πf²)

where T is the wave period and f is the wave frequency; the wavelength is directly proportional to the square of the wave period. The error in measuring the wave speed at m = 72 is (5°/360°) · 100 % ≈ 1.4 %. To determine the wavelength, the horizontal distance between two successive wave crests measured along the direction of propagation, we express the angular frequency ω, wavelength λ and period T of the trochoidal wave (fig. 7) as follows. The angular speed of the wave is ω = 2π/T = 2πf; the wavelength is λ = 2πg/ω²; the period, the time interval between two successive crests at a fixed point, is T = √(2πλ/g) = 2π/ω; the wave speed is 𝒱 = λ·f; and the time interval between two successive crests at a fixed point is t = λ/𝒱. The obtained data on the speed and length of the wave are sent to the microcontroller (block 3, fig. 6), whose output is connected to the liquid-crystal display (block 4, fig. 6), on which the results for the speed and length of the wave are displayed.
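The worked example above (360 pulses/s, 72 holes, 25 cm disk radius) can be verified with a short script; the assumption that one disk revolution corresponds to one metre of wave travel follows the text's 5 rev/s → 18 km/h example.

```python
import math

# Encoder arithmetic from the worked example above.
PULSES_PER_SECOND = 360
HOLES = 72
k = PULSES_PER_SECOND / HOLES        # disk revolutions per second -> 5

# Disk of radius 25 cm: circumference l = 2*pi*r ~ 157 cm; five full
# rotations (theta = 1800 degrees) give lambda = theta*l/360 ~ 785 cm.
r_cm = 25
l_cm = 2 * math.pi * r_cm            # circumference, cm
theta_deg = 5 * 360
lam_cm = theta_deg * l_cm / 360      # wavelength estimate, cm

def wavelength_from_period(T, g=9.81):
    """Deep-water dispersion relation: lambda = g*T^2 / (2*pi), metres."""
    return g * T**2 / (2 * math.pi)
```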
If necessary, these values, the speed and length of the wave, can be transmitted through the transmitter or a cellular communicator to the recording device of a meteorological monitoring station, if that device is located a short distance from the coast. To determine the level of the wave crest, i.e. the wave height, it is proposed to use an ultrasonic distance finder, which determines the height of the wave h (fig. 7) on the principle of an echo sounder. The ultrasonic sensor 11 (fig. 3) is connected with the float, mounted in the hydraulic cylinder 8 and protected by the pipe 9. Its principle of operation is as follows: when the wave height changes, the position of the float changes simultaneously with the wave, and ultrasonic pulses are transmitted from the ultrasonic sensor, unit 2 (fig. 6), to the input of the microcontroller, unit 3 (fig. 6). The microcontroller processes the signal and calibrates it, and the ultrasonic signal, calibrated to the wave height parameter in metres, is transmitted to the reading device, the liquid-crystal display, unit 4 (fig. 6). The ultrasonic sensor (fig. 8) emits ultrasonic waves at a frequency of 40 kHz; a sensor of type HC-SR04 can be used. The sensor generates a signal that allows the distance, and consequently the wave height H = 2a (fig. 8), to be determined. The ultrasonic module HC-SR04 (fig. 9) has four contacts: contact 1 is supplied with the 5 V supply voltage; contact 2 (trigger) receives a positive 10 μs radiating pulse; contact 3 (echo) outputs the reflected signal; contact 4 is connected to ground.
The sensor emits a short ultrasonic pulse at the start of counting (time 0), which is reflected from the object and received back by the sensor. The distance is calculated from the time between the emission of the signal and the reception of its echo, using the speed of sound (fig. 10). The sensor receives the echo and outputs the distance encoded as the duration of the electrical signal on its echo output. The next pulse can be emitted only after the echo from the previous one has died away; this time is called the cycle period, and the recommended period between pulses is at least 50 ms. If a 10 μs pulse is applied to the trigger pin (fig. 10), the ultrasonic module emits eight bursts of ultrasound at 40 kHz and registers their echo. The measured distance to the object is proportional to the echo width and can be calculated by the formula

H = t/58

where t is the echo duration (timer time) in microseconds and 58 is the calibration constant in microseconds per centimetre of round trip at the speed of sound. The height of the wave crest is determined by the formula

a = t/(58 · 2), since H = 2a

The obtained values of the speed, length and height of the wave are fed from the sensors to the microcontroller, unit 3 (fig. 6), and recorded on the indicator, unit 4. A reference generator, unit 5 (fig. 6), is provided for the microcontroller's operation. To transfer the wave parameters over a distance, i.e. ashore, it is necessary to modulate the wave parameter data onto a carrier of up to 150 MHz for the radio transmitter and to install a demodulator on the shore. The transmitter can be powered from 9 V batteries.

Figure 8. Ultrasonic sensor.
Figure 9. Ultrasonic module HC-SR04.
Figure 10. Drawings of output voltage.
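The echo-to-height conversion above can be sketched as a small function; the 58 µs/cm factor is the usual HC-SR04 calibration cited in the text, and the 580 µs example value is illustrative.

```python
def echo_to_height(echo_us):
    """Convert an HC-SR04 echo pulse width (microseconds) into the
    measured distance change H = t/58 (cm) and the crest height
    a = H/2, since the full wave height is H = 2a."""
    H_cm = echo_us / 58.0   # distance registered by the float, cm
    a_cm = H_cm / 2.0       # wave crest height, cm
    return a_cm, H_cm

# Illustrative reading: a 580 microsecond echo corresponds to 10 cm.
a, H = echo_to_height(580)
```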
Signals about the parameters of the wave, the height, length and speed of the sea wave, can be transmitted from a considerable distance offshore using wireless communication over cellular networks, with further transmission to the Internet. Data can be transmitted over GSM/GPRS cellular networks using equipment developed by Energiya-Source LLC, Cheliabinsk; the structural block diagram is presented in fig. 11. In this block diagram of wireless data transmission: EN I-405 (GSM/GPRS - RS 485/232) is a terminal device (modem) for cellular communication at f = 900/1800 MHz with a SIM card; EN I-750 is a programmable logic controller (PLC) that manages the polling of the sensors and the formation and retrieval of information, or an on-line signal upon request; EN I-751 are conversion units which measure the 4-20 mA current from the sensors of the pulse transmission unit (PTU) and transmit it to the EN I-750; ICU are information conversion units. The receiving equipment, based on the EN I-405, uses an RS 485 network for connection to a laptop or personal computer (PC); from there the data can be used in any convenient place.

Conclusions

An electronic device has been developed for measuring the parameters of the sea wave: the speed, height and frequency of the wave.

References
[1] Energy of sea waves. http://user ospu. 8dessa. ua/~shev/emd_m/nie/1_2_9.htm
[2] A.A. Zagorodnikov. 'Radar survey of sea unrest', L., Gidrometeoizdat, 1978, pp. 141-158.
[3] F. Onea, L. Rusu. A long-term assessment of the Black Sea wave climate, Sustainability 2017, 9, 1875; doi:10.3390/su9101875.
[4] V.S. Arkhipkin, F.N. Gippius, K.P. Koltermann, G.V. Surkova. Wind waves in the Black Sea: results of a hindcast study, Nat. Hazards Earth Syst. Sci., 14, 2883-2897, 2014.
[5] V. Galabov.
The Black Sea wave energy: the present state and the twentieth century changes, submitted on 5 Jul 2015.
[6] 'Determination of wave parameters by the combined system of measuring the vessel's speed and wave height', Vanaev A.P., Cherniavets V.A. Shipbuilding, No. 8-9, 1993, pp. 6-8.
[7] 'Device for measuring wave parameters', Cherniavets V.V., Vanaev A.P., Nebylov A.V. Patent of the Russian Federation No. 2137153, 1999.
[8] 'Radiation method for determining the parameters of the sea surface and the device for its implementation', Patent of the Russian Federation No. 2024034, Dobrovolskii D.D., Putiashev N.N., Iakubovskii E.G., 1994.
[9] V.D. Andreev. Theory of inertial navigation. Moscow, Science, 1966, 580 pp.
[10] Iu.M. Smirnov, G.I. Vorobyov. Specialized computers. Moscow, High School, 1989, 144 pp.
[11] Set of LSI K 1804 in processors and controllers. V.M. Meshcheriakov, I.E. Lobov, S.S. Glebov et al.; edited by V.B. Smolov. M., Radio and Communication, 1990, 256 pp.
[12] Volosov P.S., Dubinko Iu.S. et al. Shipboard satellite navigation systems. L., Shipbuilding, 1983, 272 pp.
[13] Rivkin S.S. Determination of linear speeds and accelerations of the ship's motion by the inertial method. L., Central Research Institute 'Rubin', 1980, 180 pp.
[14] Network satellite radio navigation systems. V.S. Shibshevich, P.P. Dmitriev, N.V. Ivantsevich et al. M., Radio and Communication, 1982, 272 pp.
[15] Onboard devices of satellite radio navigation. N.V. Kudriavtsev, I.I. Mishchenko, A.I. Volynkin et al. M., Transport, 1988, 201 pp.
[16] Zaitsev A.V., Reznichenko V.I. Determination of the ship's ground speed based on signals from the mid-orbit space navigation system. Notes on Hydrography, 1982, No. 208a, pp. 62-64.
[17] Wave energy. http://koi.tspu.ru/waves/
[18] Shuleikin V.V. Physics of the Sea. Publisher Science, M., 1968, 1083 pp.
Figure 11. Structural block diagram of wireless data transmission over cellular networks.

Ahmed M. Alqataa. Graduate student, Physics-Technology Institute, Department of Power Installations Based on Renewable Energy; baccalaureate in electrical power and electrical engineering (record 15, dated June 10th, 2014); master in electrical industry and electrical technology (proceedings 9, dated June 15, 2016); 5 articles published, 1 in Scopus and 4 in journals of the Higher Attestation Commission (HAC); 2 inventions and 2 articles in print.

Eskender A. Bekirov. Doctor of Technical Sciences, professor, head of the Department of Electric Power and Electrical Engineering; has 7 monographs, more than 100 inventions and more than 250 articles.

Ennan R. Murtazaev. Assistant; 4 inventions, 8 articles.

Journal of Engineering Research and Technology, Volume 3, Issue 3, September 2016

Designing Domain Model for Adaptive Web-Based Educational System According to Herrmann Whole Brain Model

Mohammed Ahmed Ghazal 1, Nor Azan Mat Zin 2, Zurina Muda 2
1 University College of Science and Technology, Khan Younis, Gaza Strip, Palestine, m.ghazal@cst-kh.edu.ps
2 University Kebangsaan Malaysia, Selangor, Malaysia

Abstract: Educational materials represent the domain model of an adaptive web-based educational system (AWBES). However, these materials should be designed to cover the differences in learners' preferences. The Herrmann Whole Brain Model (HWBM) is a reliable learning style (LS) model which can be used to extract a learner's preferences in an educational environment according to the learner's brain structure. In this paper, the learning materials of an essential programming language course (C++) are organized to cover all learners' differences according to their brain dominance.
The learning materials were described and classified by instructional metadata to fit the preferences of the four brain quadrants (rational, organizational, interpersonal and intuitive) within diverse learning objects. The main advantage of this approach is that it is not tied to a particular type of learner but covers different learners according to their brain structure. A system applying this model can detect learner preferences dynamically and thus personalize the learning materials within a web-based educational system (WBES).

Index Terms: domain model, adaptive web-based educational system, Herrmann Whole Brain Model, learning style, learner model.

I Introduction

A learning object (LO) is any digital learning content that can be used to develop the learning environment in order to support the learning process. The main motivation for the advent of learning objects is the need to re-use learning materials authored by the teacher or another person. Currently, most research on learning systems aims to enhance the machine-driven, automated generation of learning objects; for instance, the lesson presented to the student is assembled automatically from a set of learning objects. However, the main challenge is how learning objects and courses can be used to personalize the content presentation to the learner through adequate matching between learner preferences and the most relevant learning objects. Today, achieving accurate adaptivity between learners and the related contents of a learning environment is not really possible: automatic adaptivity requires further educational metadata that carry useful information about each learning object [1]. IEEE Learning Object Metadata (LOM) is the most widely accepted and used standard, created to describe learning objects out of the very practical need to assemble different learning materials from reusable learning objects [2].
This standard identifies 76 different attributes to support interoperability and adaptivity between the learner and the domain of learning objects [3]. A metadata field named "learning resource type" is the most dominant attribute related to the pedagogical and instructional perspectives of educational resources. The possible values of this attribute are: exercise, simulation, questionnaire, diagram, figure, graph, index, slide, table, narrative text, exam, experiment, problem statement, self-assessment, or lecture [4]. The Dublin Core Metadata Initiative (DCMI) standard has a more broad-purpose metadata schema, comprising 15 attributes in the Dublin Core metadata set to describe a wide range of learning objects [5]. DCMI has conducted various activities through working groups, conferences, global workshops and educational efforts to achieve widespread acceptance of metadata standards. The Dublin Core Metadata Element Set (DCMES) was the first metadata standard developed through DCMI as an Internet Engineering Task Force (IETF) standard. The DCMES identifies different sets of vocabulary to describe core information properties (e.g., "title", "creator", "date" and "description") [6]. Furthermore, one of the main challenges of the existing standards is that IEEE LOM fails to provide a sufficient level of granularity to describe and identify the instructional part of learning resources [6, 7]. The data elements related to learning resource type should contain both technical and instructional information. Therefore, LOM has covered the instructional role of the learning object (e.g., exercise, experiment, simulation) and the technical information concerning its format (e.g., figure, graph, diagram, table, slide).
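A metadata record combining the Dublin Core elements and the LOM learning resource type described above might be sketched as follows; the record contents and the camel-case key names are illustrative, while the vocabulary values are those listed in the text.

```python
# Hypothetical metadata record for one learning object, mixing Dublin
# Core elements (title, creator, date, description) with a LOM-style
# learning resource type; the record itself is illustrative only.
learning_object = {
    "title": "Introduction to C++ loops",           # DC "title"
    "creator": "Course author",                     # DC "creator"
    "date": "2016-09-01",                           # DC "date"
    "description": "Worked examples of for/while",  # DC "description"
    "learningResourceType": "exercise",             # LOM vocabulary value
}

# Vocabulary of the LOM learning resource type attribute, as listed above.
LOM_RESOURCE_TYPES = {
    "exercise", "simulation", "questionnaire", "diagram", "figure",
    "graph", "index", "slide", "table", "narrative text", "exam",
    "experiment", "problem statement", "self-assessment", "lecture",
}
```

Validating a record then amounts to checking its type against the vocabulary, e.g. `learning_object["learningResourceType"] in LOM_RESOURCE_TYPES`.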
However, LOM and other learning object classifications have failed to cover several instructional types, such as example, definition, terminologies, theorem, storytelling, journaling, FAQ, drama and others, which are needed for tracking learners' needs and preferences in the context of a holistic learning environment. To overcome this limitation, Gascueña and Fernandez-Caballero [8] proposed a domain ontology to represent and describe the components of learning materials independently, organizing courses into sets of concepts and learning objects so as to provide adaptivity and re-use of the learning objects. These learning objects were described to cover the diverse preferences of the Felder-Silverman learning style model. However, the previous standards and ontologies, designed according to pedagogical learning theories, do not cover the requirements of the LSs of the HWBM [9, 10].

II Adaptive Web-Based Educational System (AWBES)

An adaptive web-based educational system (AWBES) is a form of online instruction that addresses the challenges of a WBES [11]. It provides mechanisms to track learner interactions in order to identify learner preferences, which leads to personalising the design features of a WBES [12, 13]. The system also helps learners accomplish their learning tasks and obtain required information by adjusting the environment according to their individual differences, thus automatically fulfilling the learners' requirements [14, 15].
As shown in figure 1, an AWBES comprises three main components [11]: (1) learner actions, which track and audit a learner's interactions with the design features of the WBES in order to derive learner characteristics such as preferences and styles; (2) the learner profile, which uses different methods (explicit, e.g. a questionnaire, or implicit, e.g. a prototype) to record a learner's characteristics in the learner model; and (3) adaptation methods, which are derived from the learner's characteristics in the learner model. These learner characteristics are the basic features for developing the adaptation methods of an AWBES [16]. In this research, we focus on learner actions as the main source for identifying learner characteristics implicitly.

A Learner Actions

Learner actions are used to identify the interaction preferences of the learner in the system. Learner behaviour describes the real actions of a learner within the system and is therefore considered a more realistic and accurate source for building the learner model. There are two approaches to managing behaviour information within the system. In the first approach, when learner behaviour recurs in the system, the system can translate the consistently repetitive behaviour into learning patterns. These patterns can be used to identify the learner's real interests and preferences from his/her actual behaviour, and thus derive more accurate adaptation methods [17]. The second approach is a cognitive-science-based approach, which draws on literature from different domains, such as education and psychology. For example, the learning style (LS) model has been used to capture the prospective relationships between learners and their preferences by analysing learners' interactions within a learning environment; these relationships are represented by predefined learning patterns [18].
Therefore, this research is conducted based on the HWBM as a brain-based learning style model [19].

Figure 1: The architecture of an AWBES.

B Herrmann Whole Brain Model Learning Style

Learning style describes the habitual approach and individual preferences used to organise and represent information [20]. LS reflects the individual learning preferences that affect how a learner tends to acquire knowledge in the learning process [21]. Keefe, cited in Brown and Brailsford [22], defined LS as the "characteristic, cognitive, affective and psychological behaviours that serve as relatively stable indicators of how learners perceive, interact with, and respond to, the learning environment." This research applies a brain-based LS model, in which the learner's brain structure is the dominant factor in promoting effective learning [23]. Additionally, Becta and Radwan [24] showed that the best approach to integrating LS with the most innate and psychological preferences is to base the LS on brain-based learning theories; for example, the right hemisphere of the brain accommodates creative activities, while the left hemisphere accommodates logical activities. Furthermore, LS is conceptualised as consistent patterns of learning activities that reflect the attitudes, preferences, beliefs and motivational orientations of a learner towards his/her learning environment [25]. Therefore, incorporating the learning patterns of LS models into the design features of a WBES is useful for linking the identification of an LS to the behaviour of a learner within the system, rather than making the identification process static. This research in particular benefits from using HWBM LSs for modelling the most innate and intrinsic learner preferences implicitly and automatically.
The HWBM is one of the most reliable and important LS models [26-29]. The HWBM is used to extract the most innate and intrinsic learner preferences, which are derived from identifying a learner's brain dominance [27]. Furthermore, the HWBM is represented by predefined learning patterns. These patterns aim to integrate a learner's brain dominance with several learning preferences and styles into the features of a learning environment [27, 30]. The HWBM classifies every learner's brain into four brain quadrants [31], where each quadrant corresponds to a set of homogeneous LSs. QA learners can be described as having rational, analytical, logical and theoretical LSs. QB learners can be described as having organising and sequential LSs. QC learners can be described as having interpersonal, emotional, kinaesthetic, expressive and practical LSs. QD learners can be described as having holistic, intuitive, integrated and synthesising LSs [31].

III. Designing a Content Model for an Adaptive Web-Based Educational System (AWBES)
Based on the review of the HWBM LS conducted in [32], it has been found that learning content is an important part of the WBES design features, particularly when auditing and tracking learner behaviour in a WBES. The content model of a WBES should address diverse learner requirements according to the HWBM LS. Here, the content model was used to propose an adaptive learner model by identifying learner preferences and LS in the WBES via analysis of learner behavioural interaction with the learning content design features. This section presents a dedicated way of structuring and classifying learning content in a WBES. The learning content should be presented with more descriptive information, so that more information can be gained from the behaviour of learners within each aspect of the learning content.
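The four-quadrant classification above can be captured as a simple lookup table. The sketch below is illustrative only; the quadrant labels QA-QD and the style lists are taken from the text [31], while the class and field names are assumptions.

```java
import java.util.List;
import java.util.Map;

// HWBM brain quadrants mapped to their associated learning styles,
// as listed in the text [31].
public class HwbmQuadrants {
    static final Map<String, List<String>> QUADRANT_STYLES = Map.of(
        "QA", List.of("rational", "analytical", "logical", "theoretical"),
        "QB", List.of("organising", "sequential"),
        "QC", List.of("interpersonal", "emotional", "kinaesthetic", "expressive", "practical"),
        "QD", List.of("holistic", "intuitive", "integrated", "synthesising"));

    public static void main(String[] args) {
        // Look up the styles a WBES would associate with a QC-dominant learner.
        System.out.println("QC styles: " + QUADRANT_STYLES.get("QC"));
    }
}
```

A table like this is the natural anchor for the adaptation methods discussed earlier: once a learner's dominant quadrant is inferred from behaviour, the associated styles drive content selection.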
A. Organising the Learning Content of a WBES
In this study, the online course is the most complex learning object; the smallest learning objects are represented by the different parts of a learning resource, including the introduction, abstract, image, figure, video, and example. The proposed structure was designed to give learners the main role in the learning process, in the context of traditional classrooms or educational systems, as it is based on a book taxonomy rather than a course taxonomy.

Figure 2: Organisation of learning content in a WBES

This research conceptualises learning content as a hierarchical organisation, as shown in Figure 2. Each course consists of several modules; each module consists of a set of lessons; each lesson contains one or more topics; and each topic comprises several different types of educational resources represented by fragments. The lowest granularity level comprises the smallest learning objects, which were implemented and stored as physical files along with their associated metadata. The C++ programming language course was selected for this study.
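As a minimal sketch, the course → module → lesson → topic → fragment hierarchy described above can be modelled as nested classes. The class and field names below are illustrative assumptions, not the paper's actual implementation.

```java
import java.util.List;

// Illustrative sketch of the content hierarchy: a Course contains
// Modules, which contain Lessons, which contain Topics, whose
// Fragments are the smallest learning objects (introduction,
// example, video, ...), each stored as a physical file.
public class ContentHierarchy {

    static class Fragment {
        final String type, file;
        Fragment(String type, String file) { this.type = type; this.file = file; }
    }
    static class Topic {
        final String objective; final List<Fragment> fragments;
        Topic(String objective, List<Fragment> f) { this.objective = objective; this.fragments = f; }
    }
    static class Lesson {
        final String title; final List<Topic> topics;
        Lesson(String title, List<Topic> t) { this.title = title; this.topics = t; }
    }
    static class Module {
        final String subject; final List<Lesson> lessons;
        Module(String subject, List<Lesson> l) { this.subject = subject; this.lessons = l; }
    }
    static class Course {
        final String name; final List<Module> modules;
        Course(String name, List<Module> m) { this.name = name; this.modules = m; }
    }

    public static void main(String[] args) {
        // Example drawn from the paper: the C++ course, a "loop statements"
        // module, and a "for loop" lesson with two fragments.
        Topic syntax = new Topic("understand for-loop syntax",
                List.of(new Fragment("introduction", "for_intro.html"),
                        new Fragment("example", "for_example.cpp")));
        Course cpp = new Course("C++",
                List.of(new Module("loop statements",
                        List.of(new Lesson("for loop", List.of(syntax))))));

        System.out.println(cpp.modules.get(0).lessons.get(0).title);
    }
}
```

Keeping fragments as the leaves of the tree is what enables the fine-grained tracking discussed next: each learner interaction can be attributed to one physical file and its metadata.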
The structure of the learning content was designed according to this hierarchy: (1) the C++ course consists of several modules, where each module covers only one subject area; for instance, statements, loops and arrays represent three different modules covering three different subjects; (2) each module consists of different lessons designed to cover a set of learning objectives; for instance, the 'for loop', 'while loop' and 'do-while loop' are three different lessons belonging to the loop statements module; (3) every lesson has different topics designed to achieve different learning objectives, and each lesson comprises different global learning objects such as the syllabus, objectives, overview and assessments; (4) every topic aims to achieve one learning objective and comprises different fragments, represented by the smallest-granularity learning objects such as the introduction, abstract, prerequisites, tests (pre-test or final test), examples and other learning objects, which present the concepts of the topic in different styles. According to Popescu [33], this organisation is the structure that teachers most commonly use when organising their material. Moreover, this organisation helps resolve the following issues: (1) exchanging and reusing learning objects in different ways; (2) tracking the learner's interactions with the different types of learning content; and (3) achieving fine-grained adaptivity.

B. Designing Learning Object Metadata
Educational metadata was used to add descriptive information to the learning objects. The metadata was applied to facilitate the association between learning objects and learner preferences, so that learner preferences regarding learning content could be modelled.
For example, Ullrich [7] and Gascueña, Fernandez-Caballero [8] proposed two independent ontologies to represent the educational metadata that associates LS with the most appropriate learning objects. However, the approach proposed by Ullrich [7] is fraught with problems. For example, the ontology that links the metadata with particular dimensions of LS is static and not related to the behavioural interactions of the learner, and it does not apply implicit techniques in learner modelling. Moreover, the learning object does not carry enough information about the learner. The limitations of the work of Gascueña, Fernandez-Caballero [8], on the other hand, relate to linking learning objects with the Felder-Silverman learning style model: learning objects are classified into limited categories without including significant learning objects that may be related to other LSs, such as communication LOs, help and support LOs, and several fundamental LOs (e.g., definitions, objectives, problems, case studies, experiment information, etc.). Therefore, this research has added some extensions to the metadata file to better cover the requirements of the HWBM and the design features of a WBES. These extensions aim to enhance previous approaches, including the Dublin Core Metadata Initiative element set, Gascueña et al.'s [8] instructional ontology, and Ullrich's [7] instructional material. Below are the metadata characteristics of the learning objects used in this research.

C. General Metadata Characteristics of a Learning Object
The following metadata characteristics were selected from the standard characteristics that describe learning content:
a. Title (resource name) → dc:title;
b. Identifier (the resource address, e.g., a URL) → dc:identifier;
c. Type (the genre, nature or form of the content of the resource, e.g., service, software, collection, moving image or sound) → dc:type; and
d.
Format (the digital or physical manifestation of the resource, e.g., size, number of pages, and duration) → dc:format.

D. Educational Learning Object Metadata
The hierarchical educational learning objects were used to describe the learning resources that are related to learners' preferences according to the Herrmann whole brain learning theory. The proposed metadata does not merely describe the learning content; rather, it classifies it, where each class of metadata refers to a particular instructional role and its related learning resource [34]. An instructional role is a kind of protocol specification that identifies characteristics and behaviour, but not the role player itself [35, 36]. Integrating instructional roles into the metadata model can solve the problem of annotating different theories and instructional principles in the learning design [36]. In other words, instructional roles can facilitate learner modelling, enable automatic modelling, and serve as centres of reference. As illustrated in Figure 3, the proposed hierarchy of educational learning objects represents the different instructional roles of learning resources. The hierarchy components were identified from the confirmed design-feature requirements of the HWBM LS, which were investigated in [19, 32]. Each class of the proposed metadata represents a particular instructional role that allows mapping, exchange, reuse and search at that level. The proposed hierarchy presents a set of categories, and each instructional role identifies a set of vocabulary within a category. The educational_object is the root of the metadata structure. Two main classes are identified as subclasses of educational_object: the fundamental_concept and the auxiliary_concept. Both classes are grouped into four categories of learning objects (i.e., theoretical, procedural, practical, and interactive) according to the confirmed learning-content design features of the HWBM LS.
The fundamental_concept refers to the main learning objects presented for a whole lesson (covering a number of topics) in a particular course. The auxiliary_concept covers the supplementary knowledge or resources presented in the details of each topic in a particular lesson. For instance, theoretical classes are subsumed under both fundamental_concept and auxiliary_concept: the theoretical class of the fundamental_concept can be presented by learning objects such as objectives, prerequisites, problems and individual assignments, while the theoretical class of the auxiliary_concept can be presented by resources and learning objects such as book chapters, flowcharts, and explanations. The aforementioned descriptors are structured based on the HWBM LS. A WBES can then infer the actual learning preferences of learners by analysing their behavioural interaction with the learning objects described by this metadata (e.g., time spent, hit rate and visit rate on each learning object). Furthermore, the hierarchy of educational metadata is useful in gathering more behavioural information, since the information about the visited learning resources will later be identified by the designer or teacher. The proposed structure of instructional roles is associated with a diverse population of learning objects that covers all learner requirements according to the brain-based structure. A teacher has to annotate these learning objects (the static descriptions) only once. The behavioural interactions of a learner with the WBES are used to annotate the dynamic descriptions. Therefore, learner modelling based on metadata depends on both static and dynamic descriptions.
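Putting the general Dublin Core characteristics (section C) and the proposed educational classification together, a metadata record for a single learning object might look like the following fragment. The elements outside the dc: namespace are hypothetical illustrations of the proposed extensions, not a published schema, and the identifier URL is invented.

```xml
<learningObject>
  <!-- General Dublin Core characteristics (section C) -->
  <dc:title>For-loop worked example</dc:title>
  <dc:identifier>http://example.edu/cpp/loops/for_example.cpp</dc:identifier>
  <dc:type>text</dc:type>
  <dc:format>text/x-c++src</dc:format>

  <!-- Hypothetical educational extensions based on the proposed hierarchy -->
  <educational_object>
    <concept>auxiliary_concept</concept>          <!-- fundamental_concept | auxiliary_concept -->
    <category>practical</category>                <!-- theoretical | procedural | practical | interactive -->
    <instructional_role>example</instructional_role>
  </educational_object>
</learningObject>
```

Under this scheme the static descriptors are written once by the teacher, while dynamic descriptors (time spent, hit rate) accumulate from learner interactions.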
The classes of the hierarchy, reconstructed below, pair the fundamental_concept and auxiliary_concept learning objects within each of the four categories:

Theoretical (fundamental_concept): pre-requisites, objective, FAQ, wiki, individual_assignment, open_question
Theoretical (auxiliary_concept): reference, flowchart, explanations, rule
Procedural (fundamental_concept): guideline (instructions), exercise, brochures (catalogue), wizard
Procedural (auxiliary_concept): slideshow, tutorial, notebook
Practical (fundamental_concept): introduction, video_tour, group_assignment, group_discussion
Practical (auxiliary_concept): example, simulation (trial and error), case study
Interactive (fundamental_concept): abstract, overview, outline, mind_map, summary, multiple_choices, comprehensive_exam
Interactive (auxiliary_concept): animation_flash, flash_cards, interactive_game

IV. Conclusion and Future Work
The learning styles of the HWBM are a new approach to designing a domain model for an AWBES. The domain model is a basic feature of the design of a learning environment; it can be used for tracking, auditing and identifying learner preferences, and thus for adapting the behaviour of a system to match those preferences. This research overcomes the limitations of incorporating educational and pedagogical theory in describing and organising the learning content of educational systems. The outcome of the research aims to help web developers build a learner model implicitly, as well as to adapt the learning content of a WBES to match learner preferences according to the brain structure of learners. Further research should be conducted to explore the impact of the domain model on the design of an AWBES.

References
[1] Anido, L.E., et al., Educational metadata and brokerage for learning resources. Computers & Education, 2002. 38(4): p. 351-374.
[2] IEEE Learning Technology Standards Committee, IEEE Standard for Learning Object Metadata, in 1484.12.1. 2002.
[3] Brooks, C. and G. McCalla, Towards flexible learning object metadata. Int. J. Cont. Engineering Education and Lifelong Learning, 2006. 16(1/2).
[4] Friesen, N., Interoperability and learning objects: an overview of e-learning standardization.
Interdisciplinary Journal of Knowledge and Learning Objects, 2005. 1(1): p. 23-31.
[5] Dublin Core Metadata Initiative, Dublin Core Metadata Element Set, Version 1.1. 2012.
[6] Johnson, K. and T. Hall, Designing and testing an open-source learning management system for small-scale users, in E-Learning Networked Environments and Architectures, S. Pierre, Editor. 2007, Springer London. p. 209-250.
[7] Ullrich, C., The learning-resource-type is dead, long live the learning-resource-type. Learning Objects and Learning Designs, 2005. 1(1): p. 7-15.
[8] Gascueña, J.M., A. Fernandez-Caballero, and P. González. Domain ontology for personalized e-learning in educational systems. In Sixth IEEE International Conference on Advanced Learning Technologies (ICALT'06). 2006. IEEE Computer Society Press.
[9] Ghazal, M.A., N.A.M. Zin, and Z. Muda, Towards applying a brain-based learning style model to improve learner modelling in an adaptive web-based educational system (AWBES). International Journal of Digital Content Technology & its Applications, 2015. 9(2).
[10] Ghazal, M.A., M.M. Yusof, and N.A.M. Zin. Adaptive educational hypermedia system using cognitive style approach: challenges and opportunities. In International Conference on Electrical Engineering and Informatics (ICEEI). 2011. Bandung: IEEE.
[11] Inan, F., R. Flores, and M. Grant, Perspectives on the design and evaluation of adaptive web based learning environments. Contemporary Educational Technology, 2010. 1(2): p. 148-159.
[12] Lin, F., S. Graf, and R. McGreal, Adaptive and intelligent web-based educational systems, guest editorial preface. 2004, Athabasca University, Canada. Retrieved from http://64.225.152.8/files/prefaces/ijwltt%20preface%204(1).pdf [last seen 23/7/2015].
[13] Yang, T.-C., G.-J. Hwang, and S.J.-H.
Yang, Development of an adaptive learning system with multiple perspectives based on students' learning styles and cognitive styles. Educational Technology & Society, 2013. 16(4): p. 185-200.

Figure 3: Hierarchy of educational learning objects based on the HWBM, rooted at educational_object

[14] Radwan, N., An adaptive learning management system based on learner's learning style. International Arab Journal of e-Technology, 2014. 3(4): p. 7.
[15] Lo, J.-J. and Y.-C. Chan, Design of adaptive web interfaces with respect to student cognitive styles, in Education and Educational Technology, Y. Wang, Editor. 2012, Springer Berlin Heidelberg. p. 331-338.
[16] Chen, L.-H., Enhancement of student learning performance using personalized diagnosis and remedial learning system. Computers & Education, 2011. 56(1): p. 289-299.
[17] Schiaffino, S. and A. Amandi, Intelligent user profiling, in Artificial Intelligence: An International Perspective, M. Bramer, Editor. 2009, Springer Berlin Heidelberg. p. 193-216.
[18] Gena, C. and S. Weibelzahl, Usability engineering for the adaptive web, in The Adaptive Web, P. Brusilovsky, A. Kobsa, and W. Nejdl, Editors. 2007, Springer Berlin/Heidelberg. p. 720-762.
[19] Ghazal, M.A., N.A.M. Zin, and Z. Muda, Analysing the relationship between learner's brain dominance and behavioural learning patterns in web-based educational systems. Journal of Theoretical and Applied Information Technology, 2015. In press.
[20] Riding, R. and S. Rayner, Cognitive styles and learning strategies: understanding style differences in learning and behaviour. 1998: D. Fulton Publishers.
[21] Mampadi, F., et al., Design of adaptive hypermedia learning systems: a cognitive style approach.
Computers & Education, 2010.
[22] Brown, E., et al., Evaluating learning style personalization in adaptive systems: quantitative methods and approaches. IEEE Transactions on Learning Technologies, 2009. 2(1): p. 10-22.
[23] Scales, P., Teaching in the lifelong learning sector. 1st ed. 2008: Open University Press.
[24] Becta and N. Radwan, Learning styles: an introduction to the research literature. 2005 [cited 8/6/2014]; available from: http://dera.ioe.ac.uk/14118/1/learning_styles.pdf.
[25] Vermunt, J.D., Metacognitive, cognitive and affective aspects of learning styles and strategies: a phenomenographic analysis. Higher Education, 1996. 31(1): p. 25-50.
[26] Graf, S., Adaptivity in learning management systems focussing on learning styles, in Faculty of Informatics. 2007, Vienna University of Technology: Vienna. p. 192.
[27] Coffield, F., et al., Learning styles and pedagogy in post-16 learning: a systematic and critical review. 2004, London, England: The Learning and Skills Research Centre (LSRC).
[28] Herrmann-Nehdi, A., The best of both worlds: making blended learning really work by engaging the whole brain, in Better Results Through Better Thinking. 2009. Retrieved from http://www.hbdi.com/uploads/100016_whitepapers/100607.pdf [last seen 22/7/2015].
[29] Bawaneh, A.K.A., et al., Jordanian students' thinking style based on Herrmann whole brain model. International Journal of Humanities & Social Science, 2011. 1(9): p. 89-97.
[30] Lin, T. and Kinshuk, Cognitive profiling in life-long learning, in Encyclopedia of Distance Learning. 2009, Information Science Reference: Hershey, New York. p. 295.
[31] Herrmann, N., The theory behind the HBDI® and Whole Brain® technology, in Better Results Through Better Thinking. 1999, Herrmann International. Retrieved from http://www.hbdi.com/home/?directory=100024_articles&actualfile=100543.pdf&savename=theory-behind-the-hbdi.pdf [last seen 22/7/2015].
[32] Ghazal, M.A., et al., Mapping between learning style and design features of web-based educational systems. Under publication, 2016.
[33] Popescu, E., C. Badica, and P. Trigano, Description and organization of instructional resources in an adaptive educational system focused on learning styles, in Advances in Intelligent and Distributed Computing, C. Badica and M. Paprzycki, Editors. 2008, Springer Berlin/Heidelberg. p. 177-186.
[34] Richter, C., W. Nejdl, and H. Allert. Learning objects on the semantic web: explicitly modelling instructional theories and paradigms. In Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2002.
[35] Ullrich, C. Description of an instructional ontology and its application in web services for education. In Proceedings of the Workshop on Applications of Semantic Web Technologies for E-Learning (SW-EL). 2004.
[36] Allert, H., H. Dhraief, and W. Nejdl. Meta-level category 'role' in metadata standards for learning: instructional roles and instructional qualities of learning objects. In The 2nd International Conference on Computational Semiotics for Games and New Media. 2002. University of Augsburg, Germany.

Mohammed Ahmed Ghazal: Assistant Professor at the University College of Science and Technology (UCST). He received a PhD in Information Technology from the National University of Malaysia in 2015 and a master's degree in Computer Science from the Free University of Brussels in 2004. Currently, he is the head of the research department at UCST. His research interests are adaptive web-based educational systems, learner modelling, development of user interaction and usability applications, and user-centered website design and development.

Nor Azan Mat Zin: Associate Professor at the National University of Malaysia. Currently, she is the head of the Multimedia & Usability research group. Her primary research interest is advanced technology for learning.
Zurina Muda: Associate Professor at the National University of Malaysia. Currently, she is a member of the Multimedia & Usability research group. Her primary research interests are multimedia intelligent design and development, spatial image annotation and retrieval, and interactive game design and development.

Transactions Template / Journal of Engineering Research and Technology, Volume 8, Issue 1, March 2021

Text File Privacy on the Cloud Based on Diagonal Fragmentation and Encryption
Tawfiq Barhoom and Mahmoud Y. Abu Shawish
https://doi.org/10.33976/jert.8.1/2021/3

Abstract— Despite the growing reliance on cloud services and software, privacy remains difficult to guarantee. We store our data on remote servers in cloud environments that are untrusted. If we do not handle the stored data well, data privacy can be violated without any awareness on our part. Although it requires expensive computation, encrypting the data before sending it appears to be a solution to this problem. So far, all known solutions for protecting textual files using encryption algorithms have fallen short of privacy expectations. This is because encryption cannot stand by itself.
The encrypted data on the cloud server remains a complete file in hand, making the privacy of this data intrusion-prone: intruders can access the file data once they manage to decrypt it. This study aimed to develop an effective cloud confidentiality model based on combining fragmentation and encryption of text files to compensate for the reported deficiency of encryption methods. The fragmentation method uses the strategy of dividing text files into two triangles through the diagonal axis, whereas the encryption method uses the Blowfish algorithm. The research concluded that high confidentiality is achieved by building a multi-layer model: encrypting the file, dividing it into chunks, and fragmenting every chunk, to prevent intruders from reaching the data even if they are able to decrypt the file. Using the privacy accuracy equation (developed for this purpose in this research), the model achieved accuracy levels of 96% and 90% when using 100 and 200 words per chunk, respectively, on small, medium, and large files.

Index Terms— cloud computing; symmetric encryption; cloud confidentiality; privacy; fragmentation.

I. Introduction
Organizations use clouds for various purposes, such as organizational communication (email), services, and archiving correspondence, to maintain mobility and efficiency with regard to organizational resources and cost across multiple platforms. Nevertheless, the cloud is risky, because it is vulnerable to unauthorized access, which is potentially harmful to organizations in ways that cause serious problems. It is therefore highly critical to protect the privacy of cloud data, here through the proposed combined model referred to as the protected textual cloud model. The issue of protecting the privacy of files in the cloud environment has become a problem that affects all companies and institutions.
The mounting need to use clouds stems from their low cost compared to building an integrated server room together with the human resources necessary to manage it. It is worth noting that the importance of files lies in the value of the data stored in them. When vulnerable to review or modification by intruders, this data affects companies and institutions in terms of security. Cloud computing is the use of computing resources such as networks, servers, storage, applications, and services over the internet while maintaining confidentiality, availability and integrity. The National Institute of Standards and Technology (NIST) defined standard service models such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). IaaS enables customers to operate over operating systems and storage, with limited control of the network. PaaS allows customers to deploy applications with some configuration of the hosting applications. SaaS supports customers in using applications on the cloud with limited configuration settings. NIST also defined standard deployment models: the private model used by a single organization, the community model used by a specific group of institutions sharing common goals, the public model used by the general public, and the hybrid model composed of two or more cloud models (private, community, public) [1]. These cloud models are believed to function better if they are protected in terms of confidentiality and privacy. Cloud confidentiality and privacy are critical topics these days because of the huge usage of cloud services. Several applications and research papers handle the issue of cloud privacy; these can be categorized into two classes: fragmentation-defragmentation and encryption-decryption. Privacy is a core challenge in cloud computing with regard

Tawfiq Barhoom and Mahmoud Y.
Abu Shawish / Text File Privacy on the Cloud Based on Diagonal Fragmentation and Encryption (2021)

to the need to protect identity information and policy components. Many organizations are not comfortable storing their data and applications on systems that reside outside their own premises and data centers [2]. Cloud computing technologies provide organizations and individuals alike with the advantage of accessing their files from anywhere. Yet they are becoming more risky as the number of intruders increases [1]. Thus, the growing dependency on cloud computing has led to rising concern regarding cloud privacy. Cloud privacy is intended to protect personal data from collection, usage, disclosure, storage, and damage. It is essential to protect the data from unauthenticated access; hence, it is necessary to identify an optimal solution to protect data on clouds. There are different techniques for cloud privacy, such as encryption and fragmentation. The idea of cryptography depends on building methods and protocols, based on mathematical problems, to protect private messages and prevent any intruders from accessing them. It is used in several areas of information security, including protecting data integrity and privacy. Encryption is a mechanism to protect data or messages from access by unauthorized persons, and it is used in many methods of data security, such as protection of data integrity or protection of privacy. There are several methods of encryption, including symmetric and asymmetric (also known as public-key cryptography). Symmetric algorithms have several types, such as the Data Encryption Standard, the Advanced Encryption Standard, Twofish and Blowfish. However, encryption has drawbacks, such as difficulty in searching and modifying [3]. On the other hand, using encryption alone leaves complete files in hand; these files may therefore be decrypted and, thus, the privacy violated.
Besides encryption, fragmentation is a valid technique for cloud privacy. The main objective of fragmentation is to segment data into several parts, in different ways, and store them on different cloud servers. Fragmentation decreases processing time and optimizes data manipulation in terms of transferring data across clouds. To illustrate this, experimentation shows that sending a group of small files is faster than sending a single file of the same total size, depending on the internet speed. In this context, several studies provided evidence for using fragmentation as a solution for tabular data, including horizontal fragmentation, vertical fragmentation, and hybrid fragmentation [4], [5]. Based on the above, this paper introduces a model that aspires to provide highly reliable cloud privacy using a combination of text fragmentation and encryption. The overarching objective is to develop an applicable model that protects the privacy of text files in the cloud environment well, by combining a well-known encryption algorithm with diagonal fragmentation techniques at acceptable processing time. The value of this study lies in providing organizations such as businesses and academic institutions with workable solutions to address the critical issue of textual cloud confidentiality. It makes an important contribution in the form of a useful textual cloud privacy model that combines an encryption algorithm with a diagonal fragmentation technique, which organizations can use to control textual cloud storage with high confidentiality and thereby maximize their information security. Our model addresses the problem of not owning a cloud environment of our own, in addition to our distrust of the owner of that environment. It is distinguished from other approaches in that the file is fragmented after encrypting it, so that every part is sent to a different cloud.
By doing so, it increases the strength of the encryption technique being used, as there is no way to decrypt an incomplete file. In addition, this technique isolates every part in ways that prevent the cloud owner from assembling it.

II. Related Works
It is believed that encryption alone cannot provide security, as it can be defeated by assaults or brute-force techniques. The study in [6], entitled 'Data security, privacy, availability and integrity in cloud computing: issues and current solutions', used an encryption key that represents a combination of a user's password and the file name, converted to bits, to defend against brute-force attacks. The privacy of the file is secured by matrix multiplication, and validity is guaranteed by using a hash-based message. Attribute-based encryption (ABE), known as fuzzy identity-based encryption, provides a secret key through a collection of attributes. They also described the form of access control used, which employs encryption to encrypt and share data between users: public-key encryption is used to encrypt the data with the public key, and only the holder of the private key can decrypt it. The authors of [7] introduced a confidentiality technique based on integrating encryption with confusion. Confusion uses mathematical functions or programming techniques to misinform illegal users. For the numeric data type, an obfuscation algorithm is used; obfuscation is a technique that uses specific mathematical functions or programming techniques to confuse data. When the data is alphabetic or alphanumeric, it is encrypted using symmetric encryption because of its speed and computational efficiency in handling large-volume data encryption.
The authors of [8] suggested a solution to the privacy issue that saves CPU and memory by encrypting data with a symmetric algorithm in mobile cloud computing, sending it to the private cloud, and then re-encrypting the data and sending it to the public cloud via an asymmetric algorithm. The experimental results were obtained by comparing and evaluating encryption algorithms, such as the symmetric Blowfish algorithm and the asymmetric DSA algorithm, to find the least decryption time. The authors of [9] proposed an algorithm that protects both confidentiality and integrity. The key for encryption is concatenated with a user's password. A coding-based scheme (COS) is used in the uploading process, where the system forces the user to enter a password before uploading the file to the cloud; likewise, when downloading files, the system requires entering the correct password. The encryption-based scheme (ENS) includes an MD that performs data authentication and integrity checks. The authors of [10] used hybrid fragmentation, segmenting tabular data both vertically and horizontally. The data is divided into several types according to its sensitivity: one type is encrypted, another is sent without encryption, and the third need not be sent and is stored in the owner's hands. The authors of [11] proposed a confidentiality strategy based on hosting a number of virtual machines (V1, ..., Vn), splitting the file into a number of chunks (C1, ..., Cn), and sending every chunk randomly to its virtual storage. By fragmenting the file, the information on each virtual storage is incomplete, making it difficult for intruders to violate the information. The authors of [12] presented a vertical fragmentation algorithm for systems with complex attributes and complex methods.
In distributed object-oriented database systems, this form of fragmentation allows query decomposition, optimization, and concurrent processing. In [13], the authors applied a vertical partitioning algorithm to the problem of horizontal fragmentation; the main idea was to guarantee confidentiality by adapting a vertical partitioning algorithm to horizontal partitioning, and they achieved successful results. In [8], many cryptographic algorithms such as DES, 3DES, AES, RSA, and Blowfish were applied and tested. It was concluded that the Blowfish algorithm recorded the shortest encryption time, whereas the RSA algorithm recorded the slowest. If confidentiality and integrity are important factors, then the AES algorithm should be selected.

In summary, previous research on protecting the privacy of text files with encryption alone leaves a problem: the file remains complete in the hands of the cloud provider, and it is difficult to be certain when testing privacy. Therefore, we intend to increase the complexity of confusion by combining existing approaches to protecting cloud data with the idea of fragmentation.

III Material and Methods

This section presents the methodology used to achieve the research objective, namely the privacy of text files in the cloud environment. It demonstrates the proposed architecture for determining the most effective confidentiality techniques, and clarifies the implementation of the confidentiality protection approach. The implementation is based on the architecture of the approach and realizes the confidentiality techniques specified as a basis for data encryption and fragmentation. The steps involved in developing the textual-file confidentiality model are elaborated below.

A Methodology of the Model

The Java language was used to program the model because it provides all the capabilities required for this work.
The model was divided into sequential steps, as shown in the following subsections and illustrated in Figure 1 for uploading the file.

Fig. 1. Text file privacy on the cloud based on diagonal fragmentation and encryption.

Figure 2 shows the sequential steps involved in downloading the file, including collecting the parts of the file from the cloud servers on which it was previously uploaded and then defragmenting them to obtain the original file.

Fig. 2. The file retrieval strategy. [Diagram: on the client side, a web interface performs Blowfish encryption and fragmentation into parts P1, P2, P3, and P4 (steps 1-3); on the cloud-service-provider side, the parts are uploaded to servers such as S1 and S2 (step 4).]

B Implement the Model

The experimental environment consists of an HP Omen 15t-dc100 laptop with the following specifications: Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, 16 GB RAM, and a GeForce RTX 2070 GPU (8 GB of VRAM). The system uses the Java language, an SQL database, and Dropbox cloud file storage.

i Secure the File Strategy

The main idea is to split the file diagonally into four parts (P1, P2, P3, P4) before uploading them to the cloud environment, encrypting each part, and then uploading each part separately to a different server (S1, S2, S3, S4). This process makes it difficult for an intruder to access the data, because the incomplete, split file cannot be decrypted.

ii Encryption

As shown in Figure 1, the model starts by browsing files and determining the file to be uploaded to the cloud environment. The second step (Step 2) relies on pre-upload file encryption, evaluating and selecting the best and fastest symmetric encryption algorithm. According to [13], symmetric encryption such as Blowfish has a higher encoding speed than asymmetric encryption such as RSA.
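The pre-upload encryption step (Step 2) can be sketched with the standard `javax.crypto` Blowfish cipher, which the paper names as its symmetric algorithm. The cipher mode, padding, key size, and the class name `BlowfishDemo` below are assumptions for illustration; the paper's actual pseudocode is not reproduced here.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class BlowfishDemo {
    // Encrypts one file part with Blowfish. ECB mode with PKCS5 padding
    // is an assumption; the paper does not specify mode or padding.
    public static byte[] encrypt(SecretKey key, String plaintext) throws Exception {
        Cipher c = Cipher.getInstance("Blowfish/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        return c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
    }

    // Decrypts a file part with the same key (used during retrieval).
    public static String decrypt(SecretKey key, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("Blowfish/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key);
        return new String(c.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("Blowfish");
        kg.init(128);                          // 128-bit key (an assumption)
        SecretKey key = kg.generateKey();
        byte[] ct = encrypt(key, "file part P1");
        System.out.println(decrypt(key, ct)); // prints the recovered plaintext
    }
}
```

A decrypted part is only meaningful once all four parts are collected, which is the point of combining encryption with fragmentation.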
Key parts of the model include the encryption mechanism of the Blowfish algorithm, shown as pseudocode in Figure 3, implemented in Java and verified with a simple, successful experiment on a text file. The encryption process is executed with the Blowfish algorithm on the client side.

C Fragmentation

The third step (Step 3) in Figure 1 is fragmentation. Two important factors must be considered when choosing a partitioning mechanism: the first is the accuracy ratio for privacy, and the second is the system speed. Figure 4 shows the result of fragmentation into two parts; applying the privacy-accuracy formula to this case yields a large error ratio, so the file is instead split into several chunks.

Fig. 4. One part resulting from the two-part fragmentation technique.

To achieve both good accuracy and good system speed, the file must be divided into equal chunks, each consisting of a fixed number of words. For the purposes of this research, 100 words were chosen for each chunk because this number was shown to achieve excellent privacy accuracy. Each chunk was then divided into two parts, and each part was divided diagonally into two sections, resulting in four sections. The sections were then written to text files, and the process was repeated on the remaining chunks until the whole text file was processed.

D The File Retrieval Strategy

This is the reverse of the process explained above (Secure the File Strategy). First, the file is selected for download through the control panel so that the model identifies the servers where its parts are located, as described in the first step (Step 1) in Figure 2. Then, the parts of the file are defragmented, as described in the second step (Step 2) of the same figure.
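The chunk-then-split procedure of the fragmentation step (Section C above) can be sketched as follows. The paper does not define the "diagonal" split rule precisely, so the alternating-character interpretation used here for the four sections is an assumption for illustration, and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class Fragmenter {
    // Splits text into chunks of a fixed word count (the paper uses 100 words).
    public static List<String> chunk(String text, int wordsPerChunk) {
        String[] words = text.trim().split("\\s+");
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < words.length; i += wordsPerChunk) {
            StringBuilder sb = new StringBuilder();
            for (int j = i; j < Math.min(i + wordsPerChunk, words.length); j++) {
                if (sb.length() > 0) sb.append(' ');
                sb.append(words[j]);
            }
            chunks.add(sb.toString());
        }
        return chunks;
    }

    // One plausible "diagonal" split: halve the chunk, then take alternating
    // characters from each half, yielding four sections P1..P4. This exact
    // rule is an assumption -- the paper does not spell it out.
    public static String[] diagonalSplit(String chunkText) {
        int mid = chunkText.length() / 2;
        String left = chunkText.substring(0, mid);
        String right = chunkText.substring(mid);
        return new String[] { alternate(left, 0), alternate(left, 1),
                              alternate(right, 0), alternate(right, 1) };
    }

    private static String alternate(String s, int offset) {
        StringBuilder sb = new StringBuilder();
        for (int i = offset; i < s.length(); i += 2) sb.append(s.charAt(i));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Demo: fragment the first chunk of a short text into four sections.
        for (String part : diagonalSplit(chunk("one two three four five six", 100).get(0)))
            System.out.println(part);
    }
}
```

Because each section holds only interleaved characters of half a chunk, no single section (or server) contains a readable run of the original words.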
Finally, the four files are compiled in the reverse manner of the separation process, and then the same key is used to decode the file; the file is decrypted as shown in the third step (Step 3) of Figure 2.

Fig. 3. Encrypt method.

IV Experiments and Results

This section presents the practical experiments and results of the research, which measure the effectiveness and efficiency of the diagonal fragmentation and encryption model in terms of high privacy for text files in the cloud environment. The experiments take into consideration the privacy accuracy, the size of the files, the time needed for the fragmentation and encryption processes, and the time needed to retrieve the file. Privacy accuracy is based on the count of complete words (CW) divided by the whole word count (AW) in the files, as in the following equation:

Privacy accuracy PA = (1 - (CW/AW)) * 100

where CW is the complete-word count and AW is the all-word count. The counting process takes place in two stages in the experiment. The first stage counts all the words (AW) in the original file; this is done programmatically before the fragmentation process. The second stage is done after the fragmentation process and counts the complete words (CW) still present in the fragmented files, which constitute an error rate; this stage is done manually.

A Files

As shown in the Microsoft Windows Explorer standard file-size classification table below, text files of various sizes were used in the experiments to check the sizes before and after fragmentation and to measure the time required for the fragmentation process. Accordingly, the experiments were conducted on small, medium, and large files.
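The privacy-accuracy formula above can be computed directly. The minimal sketch below (the class name is hypothetical) reproduces the counts from the paper's 27 KB example, CW = 1586 and AW = 3817.

```java
public class PrivacyAccuracy {
    // PA = (1 - CW/AW) * 100, where CW is the count of complete words
    // surviving fragmentation and AW is the count of all words in the file.
    public static double pa(int completeWords, int allWords) {
        return (1.0 - (double) completeWords / allWords) * 100.0;
    }

    public static void main(String[] args) {
        // Paper's 27 KB example: CW = 1586, AW = 3817 (PA is about 58.4%).
        System.out.printf("PA = %.2f%%%n", pa(1586, 3817));
    }
}
```

A PA of 100% would mean no complete word of the original file survives in any fragment; lower values mean more intact words leak.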
Table 1: Microsoft Windows Explorer file size classification [22]
Small    | 10-100 KB
Medium   | 100 KB - 1 MB
Large    | 1-16 MB
Gigantic | >128 MB

B Effect of Fragmentation

Increasing security has a cost in terms of time and accuracy, and the experiments below quantify this cost. It must be clarified that, to preserve the integrity of the data, all the fragmented files must be preserved in order to recover the original file.

i Diagonal fragmentation of the file as one part

In this experiment, the file was divided diagonally as a single chunk to take the necessary readings, including the privacy percentage, the effect of the division on file size, and the time needed for fragmentation. Figure 5 shows the time needed to divide a 984-byte file diagonally into two blocks or four blocks as one chunk. The results show that the model took more time to divide the file into four blocks than into two.

Fig. 5. Privacy accuracy of a small file using 4 and 2 files.

ii Chunks: diagonal fragmentation of the file as multiple parts

When examining files of larger size, such as 27 KB, the privacy accuracy decreased: PA = (1 - (1586/3817)) * 100 = 58.44% for the 27 KB file, as shown in Figure 6. Therefore, a more advanced diagonal fragmentation of the text file is needed: the file is divided into a number of chunks depending on the word count, and each chunk is then divided diagonally into two parts. The file was thus divided into chunks of words, each containing a fixed number of words, and these chunks were divided diagonally into two parts.

Fig. 6. Privacy accuracy of a small file (27 KB) using 4 files.

iii Measuring the best value for a chunk's word count

To measure the best value for the word count in each chunk, two important factors are used.
The first is the time needed by the model for this number of words, and the second is the privacy accuracy the model achieves with the same number of words.

Time-consumption factor: Figure 7 shows the time consumption achieved by the model using three different word-count categories, namely 100, 200, and 500 words per chunk. The results show that the division times for these three word-count categories were convergent across the three file-size categories illustrated in Table 1 (small, medium, and large).

Fig. 7. Time consumption achieved by the model using three different word-count categories, in ms.

iv Privacy-accuracy factor: Given the privacy-accuracy equation PA = (1 - (CW/AW)) * 100, the model achieved accuracy levels of 96%, 90%, and 88% when using 100 and 200 words per chunk on small, medium, and large files respectively, as shown in Figure 8. The case with 500 words per chunk was not optimal, as the privacy-accuracy percentage ranged from 76% to 82%, which is not acceptable.

Fig. 8. Privacy accuracy PA = (1 - (CW/AW)) * 100.

D Sending Data to the Cloud

The time needed to transfer the four files to the cloud was examined and compared with the transfer time of the original single file without fragmentation. The purpose was to measure the model's efficiency in terms of fragmented-file upload time. The results demonstrated a time increase when dividing files and distributing them over four servers on the cloud, as shown in Table 2. However, the upload time also depends on two additional factors: the speed of the Internet connection and the cloud environment to which the files are transferred. This can be addressed by increasing the Internet speed and choosing a fast cloud service.

Table 2: Upload time for the files to the cloud
File size | One-file transfer time | Four-files transfer time
3 KB      | 0.7 second             | 4 second
3.429 KB  | 2,226 second           | 4 second

V Conclusion and Future Works

This section highlights the major contribution of the research to the existing body of knowledge in information security, with particular emphasis on confidentiality, and outlines the future work to be addressed by further research. The major contribution of this research is an effective text-file privacy model for the cloud based on diagonal fragmentation and encryption. The synergy between encryption and fragmentation adds strength and accuracy to the privacy of cloud textual files. The idea is that encryption alone does not mean the file is fully secured, because once the file is decrypted its privacy is violated; applying encryption and fragmentation together in one model makes violating privacy far harder, because a file that is first encrypted and then fragmented does not expose its information to intruders even if they hold the fragmented files. Another contribution of the current study is the application of fragmentation techniques to textual files. Five experiments were carried out with five different methods of fragmentation, each performed on textual files of different sizes ranging from small and medium to large, in accordance with the standard classifications of file size. In each experiment, the time used by the model to secure the file on the cloud and then retrieve it was calculated. The strength of privacy was also calculated with a formula developed for that purpose: this research introduced the equation PA = (1 - (CW/AW)) * 100 to measure privacy accuracy, thus addressing the issue of privacy measurement.
It is worth noting that the model developed in this study achieved privacy accuracy exceeding 96% using fragmentation alone; after integrating encryption with fragmentation, the privacy accuracy was ideal, supporting the results of this research. There is still a compelling need to work on cloud-based material other than textual files, such as media. In this regard, media can be administered as an array of bytes, dealt with as chunks of bytes, and fragmented like words. The other option for media is to divide it into frames and then develop a mechanism to fragment each frame so that the model confuses the media.

References
[1] P. Mell and T. Grance, "The NIST definition of cloud computing," NIST Special Publication 800-145, p. 7, 2011.
[2] D. Chen and H. Zhao, "Data security and privacy protection issues in cloud computing," Proc. 2012 Int. Conf. Comput. Sci. Electron. Eng. (ICCSEE 2012), vol. 1, pp. 647-651, 2012.
[3] V. Mai and I. Khalil, "Design and implementation of a secure cloud-based billing model for smart meters as an Internet of Things using homomorphic cryptography," Future Gener. Comput. Syst., vol. 72, pp. 327-338, 2017.
[4] T. Kalidoss et al., "Data anonymisation of vertically partitioned data using Map Reduce techniques on cloud," Int. J. Commun. Networks Distrib. Syst., vol. 20, no. 4, pp. 519-531, 2018.
[5] A. Hudic et al., "Data confidentiality using fragmentation in cloud computing," Int. J. Pervasive Comput. Commun., vol. 9, no. 1, pp. 37-51, 2013.
[6] S. Aldossary and W. Allen, "Data security, privacy, availability and integrity in cloud computing: issues and current solutions," Int. J. Adv. Comput. Sci. Appl., vol. 7, no. 4, 2016.
[7] L. Arockiam and S.
Monikandan, "Efficient cloud storage confidentiality to ensure data security," Proc. 2014 Int. Conf. Computer Communication and Informatics (ICCCI 2014), pp. 1-5, 2014.
[8] T. S. Barhoom and M. M. Abu Ghosh, "Reduce resources for privacy in mobile cloud computing using Blowfish and DSA algorithms," Int. J. Res. Eng. Sci. (IJRES), ISSN (online) 2320-9364, vol. 4, no. 1, 2016.
[9] W. Ren et al., "Lightweight and compromise-resilient storage outsourcing with distributed secure accessibility in mobile cloud computing," Tsinghua Sci. Technol., vol. 16, no. 5, pp. 520-528, 2011.
[10] R. Hussein Al-Talaa, "A confidentiality protection approach based on three-way fragmentation for cloud outsourcing of mobile data," 2015.
[11] A. Butoi and N. Tomai, "Secret sharing scheme for data confidentiality preserving in a public-private hybrid cloud storage approach," Proc. 2014 IEEE/ACM 7th Int. Conf. Utility and Cloud Computing (UCC 2014), pp. 992-997, 2014.
[12] Y. Zhang and M. E. Orlowska, "On fragmentation approaches for distributed database design," Inf. Sci. Appl., vol. 1, no. 3, pp. 117-132, 1994.
[13] M. Nazeh Abdul Wahid et al., "A comparison of cryptographic algorithms: DES, 3DES, AES, RSA and Blowfish for guessing attacks prevention," J. Comput. Sci. Appl. Inf. Technol., vol. 3, no. 2, pp. 1-7, 2018.
[13] K. Kartheeban and Murugan A. D., "Privacy preserving data storage technique in cloud computing," 2017 IEEE Int. Conf. Intelligent Techniques in Control, Optimization and Signal Processing (INCOS), pp. 1-6, 2017.
[14] J. Domingo-Ferrer, O. Farras, J. Ribes-González, and D. Sánchez, "Privacy-preserving cloud computing on sensitive data: a survey of methods, products and challenges," Comput. Commun., vol. 140, pp. 38-60, 2019.
[15] R. Kumar and R. Goyal, "On cloud security requirements, threats, vulnerabilities and countermeasures: a survey," Comput. Sci. Rev., vol. 33, pp. 1-48, 2019.
[16] A. Singh and K.
Chatterjee, "Cloud security issues and challenges: a survey," J. Netw. Comput. Appl., vol. 79, pp. 88-115, 2017.
[17] P. Li, J. Li, Z. Huang, C.-Z. Gao, W.-B. Chen, and K. Chen, "Privacy-preserving outsourced classification in cloud computing," Cluster Comput., vol. 21, no. 1, pp. 277-286, 2018.
[18] B. Esslinger, "The CrypTool book: learning and experiencing cryptography with CrypTool and SageMath," p. 531, 2018.
[19] K. Fan et al., "Cloud computing top threats in 2016," Future Gener. Comput. Syst., vol. 101, no. 11, pp. 1028-1040, 2018.
[20] R. Mogull et al., "Security Guidance v4 (final)," 2017.
[21] S. T. Lulu and T. Barhoom, "A model to detect the integrity violation of shared files in the cloud," 2016.
[22] Z. Saqallah and T. Barhoom, "A model to ensure data integrity in the cloud," 2016.

Tawfiq S. Barhoom is an associate professor in the Computer Science Department, Faculty of IT, Islamic University of Gaza. He received his B.Sc. in computer science from Omdurman Ahlia University, Sudan (1991-1995), and his master's degree and Ph.D. in computer science from the Department of Computer Science and Engineering, Shanghai Jiao Tong University (SJTU), Shanghai, China (1999 and 2004, respectively). His current research interest is information security.

Mahmoud Y. Abu Shawish was born in Gaza City, Palestine. He is an instructor in the Information Engineering Department, University College of Applied Sciences, Gaza. He holds a B.A. in computer engineering (2002-2007) and a master's degree in information technology from the Islamic University, Palestine. His current research interest is cyber security.

Journal of Engineering Research and Technology, Volume 3, Issue 4, December
2016

Study of the Mechanical and Physical Properties of Self-Healing Asphalt

Shafik Jendia 1, Noor Hassan 2, Khadija Ramlawi 2, Hadeel Abu-Aisha 2
1 Professor of Highway Engineering, Islamic University of Gaza, Palestine
2 Faculty of Engineering, Islamic University of Gaza, Palestine

Abstract — Asphalt pavement needs continuous maintenance to fulfill the required level of performance. This research studies the effect of adding steel wool (SW) on the self-healing property of the wearing layer of an asphalt mix. Several tests were conducted on the aggregates, SW, and bitumen to evaluate their properties. Asphalt mixes were prepared in accordance with standard specifications, and the samples were then tested to obtain values of stability, flow, and specific gravity. The Marshall method was used to design the asphalt mix, to determine the optimum SW content, and to obtain the characteristics of the SW-modified asphalt mixture. Twenty samples were prepared to determine the optimum SW content and to investigate the self-healing property through various tests. Asphalt samples with an SW content of 5% showed the best thermal and electrical conductivity. The results also showed that it is possible to use SW in asphalt wearing layers to enhance the self-healing property by heat induction. Asphalt samples with SW contents of 3% and 5% presented the strongest bonding among the tested percentages. Accordingly, the Marshall test results showed that the optimal SW content is 4.33% (by volume of the optimum bitumen content).

Index Terms — self-healing asphalt, steel wool, heat induction.

1 Introduction

Asphalt pavement consists of several layers: asphalt layers (mainly binder and wearing courses) and base layers (mainly base and sub-base courses) constructed on a subgrade with suitable bearing capacity. An asphalt layer consists of mineral aggregates, asphalt binder (bitumen), and air voids.
The pavement's behavior under traffic and climatic loads depends on many mechanisms that are strongly related to load transfer between layers and between aggregate particles within each layer. The increase in dynamic (traffic) loading, combined with neglected preventive maintenance, causes accelerated and continuous deterioration of the road network. To overcome this problem, an effective pavement management system is strongly needed to control the pavement condition during its analysis or performance period. Asphalt pavement performance is affected by several factors, e.g., the properties of the components (binder, aggregate, and additives) and the proportions of these components in the mix. The performance of asphalt mixtures can be improved with various types of additives, including polymers, latex, steel wool, and many others [1]. Induction heating of asphalt concrete is a technique that heats electrically conductive particles, for example steel wool fibers, previously mixed into the asphalt concrete mixture. With the help of an induction heating device, it is possible to heat the particles locally and, through heat diffusion, heat the binder and heal the cracks. It was illustrated in [2] that even a very small volume of fibers (anything above 0%) serves to increase the temperature via induction heating. The first prerequisite for induction heating is that the heated material must be conductive. The second prerequisite is that the fillers and fibers are connected in closed-loop circuits. First, a micro-crack appears in the bitumen. If a sufficient volume of conductive fibers or fillers is added, they form closed-loop circuits all around the micro-crack. If this magnetically susceptible and electrically conductive material is placed in the vicinity of a coil, eddy currents are induced in the closed-loop circuits at the same frequency as the magnetic field.
Heat is generated through the energy lost when the eddy currents meet the resistance of the material; finally, the bitumen melts and the crack closes [3]. In this research, the possibility of using conductive materials, such as steel wool (SW), to produce self-healing asphalt is investigated. The principal objectives of this research are to:
- study the effect of adding different percentages of local SW on the self-healing mechanism of the asphalt mix;
- compare it with the properties of a conventional mix;
- identify the optimum percentage of SW to be added to the HMA.

2 Materials and Test Procedures

2.1 Materials and Laboratory Tests

Bitumen: The laboratory tests performed to evaluate the bitumen properties are specific gravity, ductility, flash point, penetration, and softening point. The properties of the asphalt binder are presented in Table (1) and compared with the ASTM specification limits.

Steel wool material: Steel wool was obtained from a local market (kitchen-wash steel wool) and then chopped manually. The properties of the steel wool are shown in Table (2).

To define the properties of the aggregates used, a number of laboratory tests were conducted:
a. Sieve analysis (ASTM C136)
b. Specific gravity test (ASTM C127)
c. Water absorption (ASTM C128)
d. Los Angeles abrasion (ASTM C131)

Shafik Jendia, Noor Hassan, Khadija Ramlawi, Hadeel Abu-Aisha / Study of the Mechanical and Physical...
(2016)

Table 1: Summary of bitumen properties
Test | Specification | Result | ASTM specification limits
Penetration (0.01 mm) | ASTM D5-06 | 65 | 60-70
Ductility (cm) | ASTM D113-86 | 146 | min 100
Softening point (°C) | ASTM D36-2002 | 50 | 48-56
Flash point (°C, open cup) | ASTM D92-02 | 308 | min 230 °C
Solubility (%) | ASTM D2042 | - | -
Specific gravity (g/cm³) | ASTM D70 | 1.05 | 1.01-1.06

Table 2: Steel wool properties
Property | Detail
Diameter (µm) | 97.54
Length (mm) | 3.0-7.0
Density (g/cm³) | 6.61

Aggregates: The coarse and fine aggregates used were crushed rocks imported from Egypt. The aggregates used in the asphalt mix can be divided as shown in Table (3).

Table 3: Aggregate types
Type of aggregate | Particle size (gradation) (mm)
Adasia | 0/12.5
Simsimia | 0/9.50
Trabiah | 0/4.75

Table (4) presents the gradation of the proposed mix, and Table (5) presents the results of the aggregate tests.

2.2 Methods

2.2.1 Obtaining the optimum bitumen content (OBC)

The Marshall method for designing hot asphalt mixtures is used to determine the OBC. A previously prepared gradation and the obtained OBC are taken from reference [2]. In this research, 20 samples weighing 1230 g each were prepared with four different steel wool contents (SWC: 0, 3, 5, and 7%) and compacted with 75 blows according to the standard Marshall design method designated in ASTM D1559 [4]. Five samples were prepared for each SW percentage; three samples of each SWC were used to perform the Marshall stability [5], bulk density [6], and theoretical maximum specific gravity [7] tests.

Table 4: Gradation of proposed mix with ASTM specification limits
Sieve size (mm) | % passing
19.5 | 100
12.5 | 90
9.5 | 75
4.75 | 55
2.36 | 37
1.18 | 23
0.60 | 13
0.30 | 8
0.15 | 6
0.075 | 4

Table 5: Results of aggregate tests
Test | Eff.
specific gravity (g/cm³) | Absorption (%)
Adasia 0/12.5 | 2.67 | 1.8
Simsimia 0/9.50 | 2.595 | 3.39
Trabiah 0/4.75 | 2.59 | 3.36
ASTM designation: C127 (specific gravity), C128 (absorption); specification limit for absorption: < 5.

2.2.2 Preparation of asphalt mix with different steel wool contents

There was no specific procedure for mixing the SW with the aggregate and bitumen, so initial experimental trials were conducted to find the most applicable addition process. In this study, the aim of adding SW to the asphalt mix is to investigate the self-healing ability of asphalt, not to enhance the strength of the mix. After blending the aggregate with the previously obtained OBC (5.6%), 20 samples were prepared to evaluate the effect of adding SW to the asphalt mixture, considering four proportions of SW (0.0, 3.0, 5.0, and 7.0% by volume of bitumen). The procedure for incorporating SW into the asphalt mix can be summarized as follows:
a) The SW is chopped with scissors to achieve the required length range (2.0-7.0 mm).
b) The aggregates (Adasia 0/12.5, Simsimia 0/9.5, Trabiah 0/4.75) are heated for 24 hours at 150 °C to obtain a completely dry condition.
c) The requisite amount of bitumen is heated for 4 hours at a temperature of 150 °C.
d) The aggregates are mixed together, followed by the addition of hot bitumen at the OBC.
e) The requisite amount of chopped SW is sprinkled gently onto the mix over several batches with continuous mixing. Care should be taken to mix uniformly in order to prevent SW clusters from forming.
f) All ingredients are mixed vigorously to form a homogeneous asphalt mixture.
g) After preparing the modified asphalt mix, specimens are prepared, compacted with 75 blows, and tested according to the standard Marshall method (ASTM D1559 [6]).
2.2.3 Evaluation of the self-healing mechanism in asphalt

To perform these tests properly, the compacted specimens were cut into thinner cylindrical samples with a thickness of 20 mm and a diameter of 101.6 mm. This was done to obtain flat surfaces that allow good contact between the different elements of the test instruments and the surface of the sample. After cutting, the samples were placed in an oven at 40 °C for 2 hours to remove the moisture introduced during the cutting process and to prevent the SW from corroding on the surface of the samples. Inside the sample, the SW does not corrode because it is completely coated with bitumen [8].

Electrical resistivity measurement test: To study the electrical conductivity of the asphalt mixes, their electrical resistance was measured at a room temperature of 22.4 °C. A Fluke digital multimeter was used to measure resistances below 36×10⁶ Ω. Two aluminum plate electrodes connected to the multimeter were placed at the two ends of the sample to measure its resistance, and a small pressure was applied to the electrodes to obtain good contact with the surface of the sample. After measuring the resistance, the electrical resistivity of each sample was obtained from the second form of Ohm's law:

ρ = R·S / L ................................ (1)

where
ρ = the electrical resistivity [Ω·m],
R = the measured resistance [Ω],
S = the electrode conductive area [m²],
L = the thickness of the cylinder [m].

Heat transmission measurement test: A hot plate was used to study heat transmission through 20 mm thick asphalt samples with different SWCs (0.0, 3.0, 5.0, and 7.0%). The temperature of the hot plate was set to 70 °C, and a laser thermometer was used to measure the temperature at the top and bottom surfaces of the samples at five-minute intervals for 30 minutes.
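The resistivity relation ρ = R·S/L defined in the text can be evaluated directly. The minimal sketch below uses the sample geometry given above (101.6 mm diameter, 20 mm thickness) with a hypothetical resistance reading, since no measured resistance values are quoted; the class name is an assumption.

```java
public class Resistivity {
    // Electrical resistivity from the second form of Ohm's law: rho = R * S / L,
    // with R in ohms, S the electrode contact area in m^2, L the thickness in m.
    public static double rho(double resistanceOhm, double areaM2, double thicknessM) {
        return resistanceOhm * areaM2 / thicknessM;
    }

    public static void main(String[] args) {
        double d = 0.1016;                 // sample diameter, m (101.6 mm)
        double s = Math.PI * d * d / 4.0;  // circular electrode contact area, m^2
        double l = 0.020;                  // sample thickness, m (20 mm)
        double r = 1.0e6;                  // measured resistance, ohm (hypothetical)
        System.out.printf("rho = %.1f ohm*m%n", rho(r, s, l));
    }
}
```

Lower ρ at a given SW content indicates better electrical conductivity, which is the property the induction-healing mechanism relies on.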
The room temperature was recorded as 22.4 °C (the initial temperature of the asphalt samples).

Healing of fully cracked asphalt samples using a microwave: Asphalt samples with different SW contents (0.0, 3.0, 5.0, and 7.0%) were cut to a thickness of 20 mm and frozen at −17.5 °C for 48 hours to ensure homogeneous hardening of the samples, which facilitates the breaking process. A full crack was then introduced into each sample with a hammer in order to investigate its healing tendency. The samples were then heated in a 700 W microwave in two stages. First, the samples were heated for 60 seconds and their temperatures were measured with a digital thermometer. Second, the samples were heated for 90 seconds and their temperatures were again recorded; immediately afterwards, the samples were put back into their molds until they cooled down. Twenty-four hours later, they were extracted from the molds and visually inspected to determine which samples had the better healing tendency.

3 Result Analysis

The effects of adding different SW percentages on the properties of the asphalt mix are analyzed and compared with the properties of the conventional asphalt mix (no additive), which acts as the control group. These properties include Marshall stability, flow, bulk density, air voids (Va), and voids in mineral aggregates (VMA), in addition to other tests that evaluate the self-healing property of the asphalt.

3.1 Mechanical Tests

3.1.1 Stability – SW content relationship

The stability of the modified asphalt mixes oscillates. First, the addition of SW beyond 0% (by volume of bitumen) reduces the stability by approximately 27% at an SWC of 3%; the stability then increases until it reaches a maximum of 1950.10 kg at an SWC of 5%, which is very close to the stability of the conventional asphalt mix. The curve then falls again up to an SWC of 7%.
Figure (1) shows the oscillation of the stability of the modified asphalt mix, with maximum stability at an SWC of 5%.

Figure 1: Asphalt mix stability – SW content relationship.

The noticeable initial drop in stability is attributable to the replacement of aggregate with a small amount of SW, reaching a minimum stability of 1441.30 kg at 3% SW by volume. Beyond that, further replacement of aggregate increases the stability of the mix until it reaches a maximum value at 5% SW by volume. The stability then decreases again because a large amount of aggregate is replaced by SW, which does not provide the same stability that aggregate does. The decrease in stability can also be explained by the nesting of SW around the aggregate during mixing, which leads to a slight loss of bonding between the contents of the mixture.

3.1.2 Flow – SW content relationship

Generally, the flow of the modified asphalt mix is higher than that of the conventional asphalt mix (2.47 mm). Figure (2) shows that the flow increases continuously as the SWC increases, reaching 3.88 mm at an SWC of 7%.

Figure 2: Asphalt mix flow – SW content relationship.

3.1.3 Bulk density – SW content relationship

The bulk density of the compacted SW-modified asphalt mixture is lower than that of the conventional asphalt mix (2.367 g/cm³). The general trend shows that the bulk density decreases as the SWC increases. The maximum bulk density is 2.30 g/cm³ at SWCs of 3.0 and 5.0%, and the minimum bulk density is 2.27 g/cm³ at an SWC of 7%. This decrease in bulk density can be explained by the increasing percentage of voids as the SWC increases.
Figure (3) shows the asphalt mix bulk density – SWC relationship.

Figure 3: Asphalt mix bulk density – SW content relationship.

3.2 Air voids (VA) – SW content relationship
In general, the air void proportion of the modified asphalt mixes is higher than that of the conventional asphalt mix (3.82%). The VA% of the modified asphalt mixes increases gradually as the SW content increases, reaching its highest value at 7% SWC. Modified asphalt mixes generally have a VA% exceeding the specification range due to the nesting problem mentioned above, which indicates that a higher compaction energy is required for asphalt mixes modified with SW. Figure (4) shows the asphalt mix air voids – SWC relationship.

Figure 4: Asphalt mix air voids (VA%) – SW content relationship.

3.3 Voids in mineral aggregates (VMA) – SW content relationship
The voids in mineral aggregates percentage (VMA%) of an asphalt mix is governed by the air voids (VA) and the voids filled with bitumen (VB). The VMA% of the modified asphalt mixes is generally higher than that of the conventional asphalt mix (16.44%). The VMA% of the modified asphalt mixes is approximately constant (20.94% at 7% SW content). Even though VA% increases as the SW content increases, VMA remains roughly the same because the decreasing bulk density also decreases VB. Figure (5) shows the asphalt mix VMA% – SWC relationship.

Figure 5: Asphalt mix voids in mineral aggregates (VMA) – SW content relationship.

3.4 Optimum steel wool (modifier) content
A set of controls is recommended in order to obtain the optimum modifier (SW) content that produces an asphalt mix with the best mechanical properties [9, 10]. An asphalt mix with the optimum modifier content satisfies the following:
- maximum stability;
- maximum bulk density;
- VA% within the allowed range of the specifications.
Figures (1, 3 and 4) are used to find the SW percentages that satisfy these three controls; they are summarized in Table (6).
- The optimum SW content is the average of the three SW contents above.
- Optimum SW content (by OBC weight) = 4.33%.

Table 6: Summary of controls used to obtain the optimum modifier content
Property | SW (% by OBC volume)
Maximum stability | 5%
Maximum bulk density | 5%
VA% within the allowed range of the specifications | 3%

3.5 Comparison of control mix with SW-modified mix
A comparison of the mechanical properties of the SW-modified asphalt mix at the optimum SWC (4.33% by OBC volume) with the properties of the conventional (control) asphalt mix is shown in Table (7). Minimum and maximum allowed limits according to the Municipality of Gaza specifications are also presented in Table (8) [11]. It is clearly shown that the asphalt mix modified with 4.33% SWC (by OBC volume) has a slightly lower stability and stiffness than the conventional asphalt mix, but both are still above the specified minimum. The other properties of the modified mix are within the specified range, except for the air voids, which are slightly outside the allowed range. This can be improved by a special mixing procedure that guarantees better homogeneity of the mixture.

3.6 Self-healing mechanism tests
Several tests were conducted to investigate the self-healing behavior of asphalt samples with different SWCs (0.0, 3.0, 5.0 and 7.0% by OBC volume). These tests include electrical resistivity measurements, heat transmission measurements, and healing of cracked asphalt samples using a microwave.
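The optimum-content rule above is a plain average of the three SW contents that satisfy the controls in Table 6; a minimal sketch:

```python
# Optimum SW content = average of the contents satisfying the three
# controls in Table 6: 5% (maximum stability), 5% (maximum bulk density),
# 3% (VA% within specification).
controls = {"maximum stability": 5.0,
            "maximum bulk density": 5.0,
            "VA% within specification": 3.0}

optimum_swc = sum(controls.values()) / len(controls)
print(round(optimum_swc, 2))  # 4.33
```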
3.6.1 Electrical resistivity measurement test
The resistivity of the conventional and modified asphalt mix samples was measured using a Fluke digital multimeter. Figure (6) shows the relationship between electrical resistivity and SWC in the asphalt mix.

Figure (6): Asphalt mix electrical resistivity – SWC relationship.

The figure shows that at a SW content of 0 the electrical resistivity is very high, exhibiting insulating behavior. Adding more SW to the mix decreases the electrical resistivity. A SWC of 3–5% can be considered the optimal content for conductivity purposes, because adding steel wool beyond this content improves the electrical conductivity only weakly.

3.6.2 Heat transmission measurement test
This test was performed to investigate heat transmission through the samples using a hot plate. Figure (7) shows, as an example, the temperature change over time at the top and bottom surfaces of the asphalt sample with a SWC of 5%. Figures (8) and (9) illustrate how the temperature changes over a period of 30 minutes on the top and bottom surfaces of asphalt samples with different SWCs. Samples with SWCs of 3% and 5% exhibit the highest heat transmission between the top and bottom surfaces among the four SWC percentages. Although the asphalt samples with a SWC of 7% have the highest electrical conductivity, they exhibit the lowest heat transmission. Figure (10) summarizes the heat transmission rate at the bottom surfaces of asphalt samples with different SWCs after five minutes.
Table 7: Comparison of the SW-modified asphalt mix and the conventional (control) mix properties
Property | Conventional asphalt mix | 4.33% SW-modified asphalt mix (by OBC volume) | Change
OBC (%) | 5.60 | 5.60 | –
Stability (kg) | 1973.49 | 1811.67 | −8.20%
Flow (mm) | 2.47 | 3.65 | +47.77%
Stiffness (kg/mm) | 798.98 | 492.60 | −37.22%
VMA% | 16.42 | 19.43 | +18.33%
VA% | 3.80 | 7.17 | +88.68%
Bulk density (g/cm³) | 2.367 | 2.30 | −2.83%

Table 8: Properties of the SW-modified asphalt mix with the specification range
Property | 4.33% SW-modified asphalt mix (by OBC volume) | Local spec. (MoG, 1998) [11] min. | max.
Stability (kg) | 1811.67 | 900 | *
Flow (mm) | 3.65 | 2.0 | 4.0
VMA% | 19.43 | 13.5 | *
VA% | 7.17 | 3.0 | 7.0
Bulk density (g/cm³) | 2.30 | 2.30 | *

Figure (7): Relationship between top and bottom surface temperatures over time at a SWC of 5% (by OBC volume).
Figure (8): Relationship between bottom surface temperatures over time for different SWCs.
Figure (9): Comparison of top surface temperatures over time for different SWCs.

Careful consideration of Figure (10) shows significant differences in heat transmission between asphalt samples with different SWCs. For example, the heat transmission after five minutes indicates a high heat transmission rate for asphalt samples with SWCs of 3% and 5% (105% and 111%, respectively) and a slow rate for the conventional asphalt samples (SWC = 0%), which is consistent with the electrical resistivity behavior of asphalt samples with different SWCs described above.

Figure (10): Heat transmission rate after five minutes at the bottom surface of the asphalt samples.

The specified optimum SWC (4.33% by OBC volume) lies within the range of SWCs that provides the highest heat transmission rate.
Further analysis of the figure shows that increasing the SWC beyond the optimum range does not provide a higher heat transmission rate. On the contrary, asphalt samples with a SWC of 7% show an efficacy reduced by 5% compared with asphalt samples with 0% SW content. This is attributed to the inhomogeneity of the mix, which forms clusters that do not distribute heat homogeneously through the surface but instead produce local heating where the clusters occur.

3.6.3 Healing of cracked asphalt samples using microwave
The testing procedures were performed and the results recorded. Table (9) shows the temperature of asphalt samples with different SWCs after 60 and 90 seconds of heating in the microwave.

Table 9: Temperature (°C) of asphalt samples after 60 and 90 seconds in the microwave
SW content (% by OBC volume) | After 60 seconds | After 90 seconds
0 | 24 | 31.5
3 | 60 | 80
5 | 55 | 75
7 | 50 | 70

The table shows that samples with different SWCs exhibit different microwave heat absorption behavior in each time interval. The same pattern observed for heat transmission and electrical resistivity is followed for microwave heating. Samples with 3% SWC reach the highest temperature after 60 and 90 seconds of microwave heating, while the temperature of samples with SWC = 0% remains around room temperature after 60 seconds, which clearly shows the inefficacy of the conventional asphalt mix when heat is introduced for self-healing purposes. Samples with a SWC of 7% exhibit good microwave heat absorption, but partial damage to the sample was observed after 90 seconds of heating due to the low stability and inhomogeneity of the sample. The samples were visually inspected after being left for 24 hours to cool down to room temperature in the molds where they were originally cast. Samples with SW contents of 3% and 5% showed excellent bonding between the fractured surfaces, whereas the 7% SW sample did not heal efficiently.
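The temperatures in Table 9 can be turned into approximate heating rates. A minimal sketch using the quoted values and assuming the recorded room temperature of 22.4 °C as the starting temperature:

```python
# Temperatures (°C) after 60 s and 90 s in the 700 W microwave (Table 9).
temps = {0: (24, 31.5), 3: (60, 80), 5: (55, 75), 7: (50, 70)}
T_ROOM = 22.4  # recorded room / initial sample temperature (°C)

# Approximate average heating rate over the first 60 s, in °C per second.
rates = {swc: (t60 - T_ROOM) / 60 for swc, (t60, t90) in temps.items()}

best = max(rates, key=rates.get)
print(best)                 # 3  -> the 3% SWC samples heat fastest
print(round(rates[0], 3))   # 0.027 -> the conventional mix barely heats
```

The ranking reproduces the observation in the text: 3% SW heats fastest, and the unmodified mix stays near room temperature.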
Figure (11) shows the relationship between SWC, the dramatic decrease in electrical resistivity, and the gradual increase in air void percentage.

Figure (11): Relationships between SWC, electrical resistivity and air void ratio.

4 Conclusions
Based on the experimental results for SW-modified asphalt mixtures compared with conventional asphalt mixtures, the following conclusions can be drawn:
a) The optimum amount of SW to be added as a modifier of the asphalt mix was found to be 4.33% by volume of the OBC of the asphalt mix.
b) A higher compaction energy is required for asphalt mixes modified with SW in order to obtain higher bulk density values and reduce the air void percentage to within the specified range.
c) Asphalt mix with SW needs more mixing time than the conventional asphalt concrete mix because of its special mixing process, in which chopped SW is added gently to the mixture of aggregate and bitumen over several batches during continuous mixing.
d) Asphalt mix modified with SW exhibits higher flow values as the SW percentage increases.
e) Asphalt samples with SW contents of 3% and 5% show the best healing behavior compared with samples with 7% SW content.
f) Adding SW to the asphalt mix beyond the optimal content causes nesting (clusters) in the mixture, which reduces the stability of the mix due to the higher air void percentage.
g) Adding steel wool to the asphalt mix makes it electrically conductive; the electrical resistance of the asphalt concrete depends on the SW content.
There is an optimal SW content for obtaining the highest conductivity; beyond this content, adding more SW does not increase the conductivity further. Excess SW can make the mixture difficult to mix.
h) Asphalt concrete with steel wool can be heated with induction energy. The SW content is important for the heating speed of the asphalt concrete; there is an optimal steel wool content, and adding SW above the optimum does not increase the induction heating speed further.
i) The durability of asphalt concrete roads will be improved by induction heating owing to the improved healing capacity.

5 References
[1] Awwad, M. & Shabeeb, L., "The use of polyethylene in hot asphalt mixtures", American Journal of Applied Sciences, 2007, pp. 390-396.
[2] García, A., Bueno, M., Norambuena-Contreras, J. and Partl, M., "Induction healing of dense asphalt concrete", Switzerland, 2013.
[3] Schlangen, E., "Addressing infrastructure durability and sustainability by self-healing mechanisms: recent advances in self-healing concrete and asphalt", 2nd International Conference on Rehabilitation and Maintenance in Civil Engineering, 2013.
[4] ASTM D 1559, "Standard test method for Marshall test", American Society for Testing and Materials, West Conshohocken, 2002.
[5] ASTM, "Test method for resistance to plastic flow of bituminous mixtures using Marshall apparatus", American Society for Testing and Materials, West Conshohocken, 2004.
[6] ASTM D 2726, "Test method for bulk specific gravity and density of compacted bituminous mixtures using saturated surface-dry specimens", Philadelphia, US, 1992.
[7] ASTM D 2041, "Test method for theoretical maximum specific gravity and density of bituminous paving mixtures", Philadelphia, US, 1992.
[8] Liu, Q., "Induction healing of porous asphalt concrete", PhD thesis, Delft University of Technology, the Netherlands, 2012.
[9] Jendia, S., "Highway Engineering: Structural Design",
Dar Almanara Library, first edition, Gaza, Palestine, 2000, pp. 63-68 (Arabic reference).
[10] Jendia, S. and El-Sai Aly, M., "Study of the possibility to reuse waste plastic bags as a modifier for asphalt mixtures properties (binder course layer)", The 11th Global Conference on Sustainable Manufacturing, 22-25 September 2013, Berlin, Germany.
[11] Municipality of Gaza (MoG), "General and special conditions with technical specification", Preparation and Development of Projects Administration, Municipality of Gaza, Gaza Strip, Palestine, 1998 (Arabic reference).

Journal of Engineering Research and Technology, Volume 7, Issue 2, October 2020

Maximum Power Point Tracking Control for Grid-Connected Photovoltaic System under Partial Shading Conditions
Mohammed S. Ibbini and Areen G. Al-Obeidallah
DOI: https://doi.org/10.33976/jert.7.2/2020/1

Abstract—This manuscript uses an active bypass circuit to protect photovoltaic (PV) modules from the significant reduction in power generation that results from partial shading. The active bypass circuit controls each PV module separately, including the shaded ones, by detecting the shaded region on these modules using an image processing technique. The active bypass circuit ensures that every PV module in the solar system operates at its maximum power operating point. Under partial shading conditions, conventional maximum power point tracking (MPPT) methods usually fail to track the global MPP. In order to overcome this limitation, this paper combines a fuzzy logic technique with an active bypass circuit to ensure the detection of a unique global maximum power point (GMPP).
A current source inverter (CSI) is used with a double-tuned resonant filter to eliminate the undesirable harmonics on the direct current (DC) side. In addition, an LC filter is used on the AC side to reduce the switching harmonics. The simulation results demonstrate an improvement in the performance of the proposed MPPT method under partial shading conditions. Hence, the combination of the active bypass circuit and the image processing technique, based on a fuzzy logic decision-making approach, provides an effective process for detecting the shaded region on the surface of a PV module. In addition, this combination ensures a unique global maximum power point (GMPP) for the proposed PV solar system.

Index Terms—Photovoltaic (PV) module, image processing, active bypass circuit, shadow detection, fuzzy logic, maximum power point tracking (MPPT), grid-connected.

I. Introduction
The performance of a photovoltaic (PV) module mainly depends on the amount of solar irradiance that can be absorbed by each module. PV modules must commonly be connected in series in order to generate the desired power. If any part of these modules is prevented from receiving light by partial shading, their power generation is reduced; as a result of partial shading on a PV module, a significant decrease in the total PV power generation is observed. To solve this problem, conventional approaches commonly connect a bypass diode in parallel with each PV module [1-4]. However, the shaded PV modules then cannot produce their inherent power, and the overall power generation of the solar system is still smaller than the power generation without shading [5]. In order to handle the problem of partial shading on PV modules, this manuscript uses an active bypass circuit, also known as a generation control circuit (GCC) [6-7].
In fact, the active bypass circuit controls each PV module separately, including the shaded ones, by detecting the shaded region on these modules using the image processing technique [8-9]. The active bypass circuit ensures that every PV module of the solar system works at its maximum power operating point [10-11]. To track the maximum power point of the proposed solar system, a new method based on the combination of an active bypass circuit and a fuzzy logic technique is proposed. This manuscript is organized as follows. Section 2 presents the proposed PV solar system. Section 3 presents the shadow detection method on a PV array using a digital image processing technique. Section 4 explores the proposed configuration of the active bypass circuit and its control principle. Section 5 presents the fuzzy logic control technique for tracking the maximum power point. Section 6 presents the simulation results. Finally, Section 7 gives the conclusions and discussion.

II. Proposed PV Solar System Description
Fig. 1 illustrates the circuit configuration of the proposed PV solar system, where two PV modules, each consisting of ten solar cells, are connected in series in order to generate the required voltage and current. A monitoring camera gives a quick indication of an existing shadow on these modules using the proposed digital image processing technique. The digital processing technique is based on a fuzzy logic decision-making system that permits the determination of the shadow type. The proposed technique enables the active bypass circuits, which are connected in parallel with each PV module, to generate a control signal from a pulse width modulator (PWM) to ensure the detection of a unique MPP for the solar system. A current source inverter (CSI) with a double-tuned parallel resonant circuit is connected to the DC side of the solar system to boost the DC input voltage with a very reduced level of unwanted harmonics.
The AC side of the system is connected to the grid through an LC filter to avoid switching harmonics on the AC side.

M. Ibbini and A. Al-Obeidallah / Maximum power point tracking control method for grid-connected photovoltaic system under partially shaded conditions (2020)

III. Proposed Method of Shadow Detection on a PV Array Using Digital Image Processing
The need for an effective process to detect shadow on PV panels becomes a necessity, especially in a massive PV solar system where the shadow can cover a large part of a panel. The main motive for using a digital image processing technique is to establish a dynamic and effective way to detect the shadow on PV panels and to determine its type [8-9]. The first step is to install a camera system that continuously monitors the PV arrays. The image obtained by the monitoring camera is then treated by a digital processing technique, based on a fuzzy logic decision-making system, and the shadow type is determined. Fig. 2 illustrates the different stages of the overall system, which will be discussed in detail.

A. Background subtraction method
Background subtraction consists of extracting a foreground mask from the preserved background model obtained with a stationary camera. This technique is widely used for detecting objects or tracking targets by subtracting the input frame from the background model, which is also known as the reference frame. The background subtraction method can be represented by the mathematical formula [12]:
M_t(x, y) = 1 if D(I(x, y, t), R(x, y, t)) > T, and 0 otherwise (1)
where M_t(x, y) is the motion mask at time t, x and y are the pixel location variables, D is the distance between the input frame I(x, y, t) and the reference frame R(x, y, t), and T is the threshold, an empirically selected value.
The distance between the input frame and the reference frame is compared with the threshold T; if the distance is greater than the threshold, there is a difference in pixels between foreground and background, which indicates the presence of a new object in the image. This distance can be expressed by the following formulas:
d0 = |I(x, y, t) − R(x, y, t)| (2)
d1 = |I_R(x, y, t) − R_R(x, y, t)| + |I_G(x, y, t) − R_G(x, y, t)| + |I_B(x, y, t) − R_B(x, y, t)|
where d0 is the distance for the gray image and d1 is the distance for the RGB image. The flowchart of the background subtraction method is illustrated in Fig. 2. The main steps of this method can be summarized as follows:
1. Estimate the reference frame R(x, y, t) at time t.
2. Subtract the reference frame from the input frame I(x, y, t).
3. Apply a threshold T to the absolute difference to obtain the foreground mask.

Fig. 2. Flowchart of the background subtraction method.
Fig. 1. Proposed grid-connected PV solar system.

The background subtraction method is sensitive to noise and illumination changes, and it depends on the threshold T, which is an empirically selected value and is not a function of time, unlike the input and reference frames. In order to improve the results of background subtraction, this manuscript uses a blob detection method that highlights the new object in the reference image, based on the hypothesis that a new object has different properties, such as color distribution and brightness, from the surrounding regions. The blob detection method returns a zero pixel value if both inputs are equal and a one pixel value if they differ. The detection method uses a median filter to remove any unexpected noise from the gray image. Finally, it traces the region boundaries in a binary image.
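The thresholded difference of Eqs. (1)–(2) can be sketched in a few lines of NumPy. This is a minimal grayscale (d0) illustration, not the authors' implementation; the frames and threshold below are toy values:

```python
import numpy as np

# Background subtraction as in Eqs. (1)-(2): the motion mask is 1 where the
# absolute difference between input frame I and reference frame R exceeds
# an empirically chosen threshold T (grayscale case, distance d0).
def motion_mask(I, R, T):
    """Return the binary foreground mask M_t(x, y)."""
    d0 = np.abs(I.astype(float) - R.astype(float))
    return (d0 > T).astype(np.uint8)

# Toy example: a flat background and an input frame with one bright region.
R = np.zeros((4, 4))
I = R.copy()
I[1:3, 1:3] = 200            # a new object appears in the input frame
mask = motion_mask(I, R, T=50)
print(int(mask.sum()))       # 4 pixels flagged as foreground
```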
The input and reference images of the proposed PV solar panels, together with the results of applying the background subtraction and blob detection methods, are shown in Fig. 3.

B. Edge detection method
Edge detection is one of the essential steps in digital image processing. It finds the boundaries of objects within images by detecting sudden discontinuities in the brightness of an image. There are many common edge detection algorithms, such as Roberts, Sobel, Prewitt and Canny [13-18]. Before applying the edge detection method, this manuscript uses a morphological process to reduce the unexpected noise of the image obtained from background subtraction by applying dilation and erosion operations. Dilation adds pixels to the boundaries of the shaded area in a PV image, while erosion removes pixels from these boundaries. The morphological process depends on the shape and size of the structuring element used to treat the image [19]. The Canny algorithm is then applied to detect the shadow boundaries of a PV solar panel [17]. The Canny edge detector is an image processing technique that detects edges in an image with minimal noise sensitivity and is more likely to detect weak edges [20]. It is a multistage edge detection algorithm; its main steps are shown in Fig. 4. The main goal of using the Canny algorithm is to reduce the amount of data to be processed in the digital image while preserving only the significant structural properties of the shaded area of the PV image. The proposed edge detection method is shown in Fig. 5.

C. Fuzzy logic method
Fuzzy logic is an artificial intelligence approach based on human reasoning rather than Boolean logic. It treats levels of possibility of the inputs to obtain a definite output.
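The dilation and erosion step described above can be sketched with a 3x3 cross-shaped structuring element. This is a NumPy-only toy (it uses wrap-around shifts, so it is only accurate for interior pixels), not the morphological routines the authors used:

```python
import numpy as np

# Minimal binary dilation/erosion with a 3x3 cross: dilation grows the
# shaded-area boundary by one pixel, erosion shrinks it. np.roll wraps at
# the image border, so this sketch is only exact for interior pixels.
def _shifts(mask):
    up, down = np.roll(mask, -1, 0), np.roll(mask, 1, 0)
    left, right = np.roll(mask, -1, 1), np.roll(mask, 1, 1)
    return mask, up, down, left, right

def dilate(mask):
    return np.maximum.reduce(_shifts(mask))

def erode(mask):
    return np.minimum.reduce(_shifts(mask))

m = np.zeros((5, 5), dtype=np.uint8)
m[2, 2] = 1                      # a single "shaded" pixel
print(int(dilate(m).sum()))      # 5: the pixel plus its 4 neighbours
print(int(erode(m).sum()))       # 0: erosion removes the isolated pixel
```

In practice an opening (erosion then dilation) removes isolated noise pixels like the one above while preserving larger shaded regions.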
In order to benefit from the flexibility of the fuzzy logic approach, this manuscript uses the method as a decision-making process to classify the type of shadow on the PV solar panel. The proposed fuzzy logic method takes the brightness (B) and color distortion (CD) values of the shaded area on the PV solar panel as input parameters and the type of this shaded area as the output parameter.

Fig. 4. A flowchart of the Canny edge detector.
Fig. 5. The proposed edge detection method.
Fig. 3. The proposed detection methods for a PV solar panel.

Brightness is a relative expression of the intensity of the output energy of a visible light source. It can be represented as the total energy value, or as the amplitude at the wavelength of visible light where the greatest intensity occurs. In other words, brightness represents the intensity value of every pixel in the digital image. In the RGB color space, the amplitudes of red, green and blue for a specific color can take a range from 0 to 100 percent of full brightness. These levels are represented by decimal numbers from 0 to 255 or hexadecimal numbers from 00 to FF. When the brightness value is 0, the color space is completely black; as the value increases, the brightness of the color space increases and different colors appear. In this manuscript, the normalized values of R, G and B in the range [0, 1] are used. The following equation calculates the brightness of the image as the weighted mean of the intensity values of all pixels in the shaded area.
Some data points count more strongly than others, in that they are given more weight in the calculation:
B_shaded = (1/m) Σ_{i=1}^{m} (0.3 R_i + 0.3 G_i + 0.4 B_i) (3)
where B_shaded is the brightness value of the shaded region and m is the number of pixels in the RGB color space. Color distortion is a deviation from the rectilinear projection of pixels in an image; it is a type of optical deviation. Color distortion can be calculated using the Mahalanobis distance between the arithmetic mean of the red, green and blue pixel values of the shaded area and the arithmetic mean of the pixel values of the reference image of the PV solar panel. The Mahalanobis distance gives a useful measure of how similar a set of pixels is to an ideal set of pixels. The following formula calculates the color distortion of the shaded region:
CD_shaded = Maha_Dis(Mean(RGB_shaded), Ref) (4)
where CD_shaded is the color distortion of the shaded region, RGB_shaded is the red, green and blue pixel values of the shaded area, and Ref is the pixel values of the reference image. The Mahalanobis distance is based on the arithmetic mean of the RGB values of the shaded region and the variance of the variables, and it takes the covariance matrix of all the variables into consideration. The region of constant Mahalanobis distance around the mean forms an ellipse in 2D space when only two variables are measured, and an ellipsoid or hyperellipsoid when more variables are used. This distance is zero if the RGB_shaded pixel lies at the mean of the reference image, and it increases as the pixel moves away from the mean. The Mahalanobis distance is given by:
Maha_Dis = [(μ_shaded − μ_r) S⁻¹ (μ_shaded − μ_r)ᵀ]^(1/2) (5)
where μ_shaded is the arithmetic mean of the shaded region, μ_r is the arithmetic mean of the reference image, and S is the covariance matrix, calculated from the standard deviations of the pixel values of the shaded region.
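Equations (3) and (5) are straightforward to sketch in NumPy. The pixel arrays below are made-up toy values (rows are pixels, columns are normalized R, G, B), not measurements from the paper:

```python
import numpy as np

# Eq. (3): weighted-mean brightness of the shaded region, with the weights
# 0.3, 0.3, 0.4 for R, G, B stated in the text (normalized values in [0, 1]).
def brightness(shaded_rgb):
    r, g, b = shaded_rgb[:, 0], shaded_rgb[:, 1], shaded_rgb[:, 2]
    return float(np.mean(0.3 * r + 0.3 * g + 0.4 * b))

# Eq. (5): Mahalanobis distance between the mean RGB of the shaded region
# and the mean RGB of the reference image, using the shaded region's
# covariance matrix S.
def mahalanobis(shaded_rgb, ref_rgb):
    mu_s = shaded_rgb.mean(axis=0)
    mu_r = ref_rgb.mean(axis=0)
    S = np.cov(shaded_rgb, rowvar=False)   # 3x3 RGB covariance matrix
    diff = mu_s - mu_r
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

# Toy data: a dark "shaded" patch vs. a bright reference patch.
shaded = np.array([[0.10, 0.12, 0.08], [0.15, 0.09, 0.11],
                   [0.08, 0.14, 0.10], [0.12, 0.10, 0.14]])
ref = np.array([[0.80, 0.80, 0.80], [0.82, 0.79, 0.81],
                [0.78, 0.81, 0.80], [0.80, 0.80, 0.79]])
print(round(brightness(shaded), 4))   # 0.1105
```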
The input membership functions for brightness and color distortion and the output membership function of the proposed fuzzy logic method are represented in Fig. 6.

Fig. 6. The input and output membership functions of the proposed fuzzy logic method.

The fuzzy logic rules are shown in Table 1, where B_shaded is the brightness value of the shaded region, CD_shaded is its color distortion, and S, M and B denote small, medium and big. To assess the effectiveness of the proposed digital image processing technique in detecting shadow on a PV solar system and classifying its type, two different images were obtained with a mobile camera. The results of the proposed MATLAB-based digital image processing method are shown in Fig. 7 (a) and (b). In the first case a shadow cast by a human hand covers the PV panel, whereas in the second case a piece of sacking covers the PV panel. As Figure 7(a) illustrates, the brightness and color distortion values of the shaded region calculated with the proposed method are B_shaded = 0.1068 and CD_shaded = 0.2388; applying rule number 6 to these values, the output is "shadow". For the second input image, in Figure 7(b), the values are B_shaded = 0.1323 and CD_shaded = 2.7019; applying rule number 4 to these values of brightness and color distortion, the output is "object". The fuzzy logic decision-making approach thus defines the type of the shaded region in the PV input image, and this information is then used as an input parameter for the active bypass circuit. In addition, once the type of shadow has been determined, the result can be used for cleaning purposes: if the shadow is caused by an object, the object should be removed.
On the other hand, if a true shadow is detected, the panel should be cleaned using any suitable cleaning method.

IV. Proposed Configuration of the Active Bypass Circuit and its Control Principle
Fig. 8 illustrates the proposed configuration of the active bypass circuit, where a chopper circuit consisting of a capacitor, an inductor and a MOSFET is connected in parallel with each PV module. The arrangement forms a buck-boost converter, which converts the fixed DC input voltage directly into a variable DC output voltage. Fig. 9 shows the gate signals for the multistage chopper circuit.

Table 1. Fuzzy logic rules
Rule no. | Rule
1 | B_shaded = B & CD_shaded = B, then output = object
2 | B_shaded = B & CD_shaded = M, then output = object
3 | B_shaded = B & CD_shaded = S, then output = object
4 | B_shaded = M & CD_shaded = B, then output = object
5 | B_shaded = M & CD_shaded = M, then output = shadow
6 | B_shaded = M & CD_shaded = S, then output = shadow
7 | B_shaded = S, then output = shadow

Fig. 7. The proposed shadow detection method: (a) shadow region; (b) object region.
Fig. 8. The proposed configuration of the active bypass circuit.
Fig. 9. The gate signals for the multistage chopper circuit.

The generation control voltage for each PV module (V_pvk) depends on the switching frequency of the buck-boost converter and is given by:
V_pv1 : V_pv2 : ... : V_pvk = d1 : d2 : ... : dk (6)
The switching duty ratio is given by:
dk = t_k(off) / t_total (7)
where dk is the switching duty ratio, t_k(off) is the off-time of switch gk, t_total is the total switching interval, k is the number of PV modules, V_pvk is the generation control voltage and V_o is the output voltage.
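Table 1 is a small decision table and can be encoded directly as a lookup. This sketch covers only the rule table itself; the membership functions that fuzzify raw B_shaded / CD_shaded values into the S/M/B levels are not reproduced here:

```python
# Table 1 encoded as a lookup: given the fuzzified brightness and color
# distortion levels (S/M/B), return the classified region type.
RULES = {
    ("B", "B"): "object", ("B", "M"): "object", ("B", "S"): "object",
    ("M", "B"): "object", ("M", "M"): "shadow", ("M", "S"): "shadow",
}

def classify(b_level, cd_level):
    if b_level == "S":            # rule 7: small brightness -> shadow
        return "shadow"
    return RULES[(b_level, cd_level)]

print(classify("M", "S"))  # shadow (rule 6, the hand-shadow example)
print(classify("M", "B"))  # object (rule 4, the sacking example)
```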
in order to benefit from the active bypass features, together with the shadow detection method based on the digital image processing technique, this approach uses a pulse width modulator (pwm) to control the active bypass circuit by defining the gate signals of the mosfets. the proposed pwm for the active bypass circuit is shown in fig. 10, where the pwm is controlled by the brightness and color distortion values of the shaded area obtained from the shadow detection method. fig. 11 shows the gate signals g1 and g2 for the active bypass circuit.
fig. 10. the proposed pulse width modulator.
fig. 11. the gate signals for the proposed active bypass circuit.
v. fuzzy logic control technique for tracking mpp
in order to track the mpp of the proposed grid-connected pv solar system, this paper uses a fuzzy logic controller (flc). this type of controller locates the mpp quickly by controlling the gate signals of the mosfets of the proposed active bypass circuit. fig. 12 shows the proposed simulation model of the flc based on the mamdani method using the matlab/simulink fuzzy logic toolbox. fig. 13 illustrates the inductor current (il1) waveform in the active bypass circuit when the signal has been received from the fuzzy logic controller.
fig. 12. block diagram of the proposed flc.
fig. 13. the inductor current waveform in the active bypass circuit.
the inputs of the proposed flc are the changes in the pv array output power and current:
∆p_pv(j) = p_pv(j) − p_pv(j−1)
∆i_pv(j) = i_pv(j) − i_pv(j−1)
where ∆p_pv is the change in the output power of the pv array, ∆i_pv is the change in the output current of the pv array, and j is the sampling instant. the output of the proposed flc is the duty ratio value (d) of the active bypass circuit, which controls the on-off time of the switching period for the gate signal of each mosfet.
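the flc input computation above can be sketched in a few lines of python (a minimal sketch; the sample values are illustrative):

```python
def flc_inputs(p_pv, i_pv):
    """compute the flc input sequences dP(j) = P(j) - P(j-1) and
    dI(j) = I(j) - I(j-1) from sampled pv array power and current."""
    d_p = [p_pv[j] - p_pv[j - 1] for j in range(1, len(p_pv))]
    d_i = [i_pv[j] - i_pv[j - 1] for j in range(1, len(i_pv))]
    return d_p, d_i

# example: three samples of pv power (W) and current (A)
d_p, d_i = flc_inputs([200.0, 215.0, 210.0], [14.0, 14.5, 14.2])
print(d_p)                          # [15.0, -5.0]
print([round(x, 2) for x in d_i])   # [0.5, -0.3]
```

the signs of these two deltas at each sampling instant j are what the rule base of table 2 evaluates to decide how to adjust the duty ratio d.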
the input membership functions, the change in power and the change in current of the pv array, and the output membership function of the proposed flc are presented in fig. 14. the fuzzy logic rules are shown in table 2, where 16 fuzzy control rules are used to reach the desired value of the mpp, and where nb is negative big, ns is negative small, pb is positive big, and ps is positive small. the surface view of the proposed flc is shown in fig. 15.
fig. 14. the input and output membership functions of the proposed flc.
table 2 fuzzy logic rules.
fig. 15. surface viewer for the proposed flc.
the csi output current is illustrated in fig. 16. in order to reduce the output harmonics on the ac side, an lc filter is connected between the csi and the grid. the proposed fuzzy logic based mppt controller includes three basic components, a fuzzification module, an inference engine and a defuzzification module, as shown in fig. 17. the fuzzification makes it possible to pass from the crisp variables to the linguistic variables. the actual voltage (v_pv) and current (i_pv) of the pv module are measured continuously and the power is then calculated (p_pv = v_pv × i_pv). the control is determined on the basis of the satisfaction of two criteria relating to the two input variables of the proposed controller, namely ∆p_pv and ∆i_pv at the sampling instant j.
fig. 16. the csi output current.
fig. 17. structure of the fuzzy logic controller.
the output is the duty ratio value (d) of the active bypass circuit, which controls the on-off time of the switching period for the gate signal of each mosfet. the inference engine applies the rules to the fuzzy inputs (generated by the fuzzification process) to determine the fuzzy outputs.
therefore, before the rules can be evaluated, the crisp input values must be fuzzified to obtain the corresponding linguistic values (which are necessary to determine the active, or fired, rules) and the degree to which each part of the antecedent has been satisfied for each rule. note that the inference methods provide a membership function for the resulting variable; they thus act on fuzzy information. as the dc-dc converter requires a precise control signal d at its input, it is necessary to transform this fuzzy information into a deterministic value; this phase is called defuzzification. defuzzification is normally performed by one of two algorithms: the center of area (coa) method and the max criterion method (mcm). the most widely used defuzzification method is the determination of the center of area (coa) of the final combined fuzzy set, which is the union of all rule output fuzzy sets obtained using the maximum aggregation method. to validate the performance of the proposed pv solar system shown in fig. 1, the maximum power point tracking design is simulated using matlab/simulink. the circuit specifications are given in table 3. the output power and current of the proposed pv solar system under the uniform condition (with irradiance level 500 w/m²) are shown in fig. 18.
fig. 18. (a) pv output power. (b) pv output current under the uniform conditions.
when the second pv module in fig. 1 is partially shaded (with irradiance level 250 w/m²), the output power and current of the proposed pv solar system are shown in fig. 19.
fig. 19. (a) pv output power. (b) pv output current under partially shaded conditions.
fig. 20. block diagram of the grid voltage and current.
table 3 the pv solar system specifications.
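the coa defuzzification described above can be sketched over a discretized universe of duty-ratio values (a minimal python sketch; the aggregated membership values below are illustrative, since the paper shows the membership functions only graphically):

```python
def coa_defuzzify(universe, membership):
    """center-of-area defuzzification of an aggregated fuzzy set:
    d = sum(x * mu(x)) / sum(mu(x)) over a discretized universe."""
    num = sum(x * mu for x, mu in zip(universe, membership))
    den = sum(membership)
    return num / den if den else 0.0

# illustrative aggregated output set over duty-ratio values 0.0 .. 1.0
duty = [i / 10 for i in range(11)]
mu   = [0.0, 0.0, 0.2, 0.6, 1.0, 1.0, 0.6, 0.2, 0.0, 0.0, 0.0]
print(round(coa_defuzzify(duty, mu), 3))  # 0.45
```

the centroid of the aggregated set becomes the crisp duty ratio d that drives the mosfet gate signal.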
the comparison between the output power and current of the pv modules under the normal conditions and under partial shading conditions, using the proposed mppt control method, is shown in table 4.
table 4 pv output power and current.
pv output power without a shadow: 225 w
pv output power with shadow: 182.35 w
pv output current without a shadow: 15 a
pv output current with shadow: 13.5 a
the ac grid voltage and current are shown in fig. 21, where the two signals are out of phase. in order to bring both signals into phase, a phase-locked loop is used as shown in fig. 22. the ac grid voltage and current using the phase-locked loop are shown in fig. 23.
fig. 21. (a) the grid voltage. (b) the grid current.
fig. 23. the ac grid voltage and current using the phase-locked loop.
fig. 24 shows the simulation result of the proposed mppt control method under different shadow conditions. the power generated from the two pv modules under the uniform conditions at t1 is equal to 225 w. when the second module is partially shaded at t2, the pv output power decreases to 158.77 w. when the shadow is removed at t3, the pv output power increases again to the same value. this result shows that the proposed mppt control method is stable under the different shadow conditions.
fig. 24. simulation results of the proposed mppt under different shadow conditions.
vi. conclusion
with the increased demand for renewable energy resources, especially solar energy, it is important to keep the pv array of the solar system operating at its maximum power point under partial shading conditions. in this work, the partial shading problem of the photovoltaic solar system has been investigated and addressed; specifically, the global maximum power point has been tracked.
in fact, conventional algorithms cannot solve the problem of the multiple peaks that appear in the p-v curve when the solar irradiance on the pv array changes as a consequence of partial shading. to overcome that problem, this work has used an artificial intelligence algorithm based on fuzzy control. furthermore, this work presented an effective process to detect the shaded region on a pv module: a digital image processing technique is used to take an image of the pv module and determine whether or not a shadow exists on the pv array. finally, using a fuzzy decision-making method, this work has introduced a practical way to discriminate the type of this shadow. the results obtained from the shadow detection method have been used to design a pulse width modulator which enables the gate drive of the active bypass circuit. this active bypass circuit ensures that every pv module in the solar system works at its maximum operating point, which should increase the power generation of the pv solar system. consequently, an effective method to track the maximum power point under partial shading conditions has been developed based on the combination of the fuzzy logic technique and the active bypass circuit, which ensures a unique mpp for the solar system. in order to connect the pv array of the proposed solar system to the ac grid, a current source inverter (csi) has been used together with a double tuned resonant filter, which eliminated the undesirable harmonics on the dc side. in addition, an lc filter has been used on the ac side to reduce the switching harmonics. the simulation results showed an improvement in the performance of the proposed mppt method under partial shading conditions. hence, the combination of the active bypass circuit and the image processing technique, which relies on a fuzzy logic decision-making approach, provides an effective approach to detect the shaded region on the pv module.
in addition, this combination ensures a unique mpp for the proposed pv solar system. the proposed fuzzy control system can reach the maximum power in a short time and can maintain its stability under any partial shading conditions.

journal of engineering research and technology, volume 3, issue 3, september 2016
energy management scheme for buildings subject to planned grid outages
mohammed hijjo, felix felgner and georg frey
chair of automation and energy systems, saarland university, germany
abstract—considerable attention is paid to integrating buildings with renewable energy resources (res) as a key factor in meeting increasing demand. meanwhile, utilizing different power resources can be a viable alternative for communities suffering from frequent and planned power outages. conventionally, diesel generators are utilized as emergency backups to lessen the impact of power outages, but they consume a massive amount of fuel in case of extended outage periods.
in this context, this work proposes an alternative microgrid power supply system incorporating a photovoltaic solar array and a lead-acid battery bank in addition to the conventionally utilized diesel generator for buildings experiencing frequent power outages. besides, an energy management scheme is proposed to operate the proposed supply system. the main components of the system are modeled in matlab and the simulation is performed over a relatively long period (two weeks) to capture the pertinent dynamics of the system. the work is conducted using the example of al-shifa’ hospital in the gaza-strip to verify the effectiveness of the proposed approach. moreover, different operation scenarios are tested from different perspectives with respect to the planned outages in the gaza-strip. simulation results indicate significant fuel savings in addition to a reduction in the total operation time of the diesel generator set (genset).
index terms—microgrid, power outage, diesel generator, battery bank, energy management system
i introduction
local power supplies are becoming a popular idea to overcome the problem of frequent power outages. these systems can be installed and operated to cover the essential demand of a wide range of facilities. mainly, such systems are equipped with diesel generators, which need an enormous amount of fuel in case of extended outages. however, a proper battery storage system, in addition to renewable energy resources (res), can play a vital role in alleviating the risk of outage and decreasing fuel consumption significantly. in spite of that, robust operation of local power systems is complex since power outages are stochastic events, and local energy suppliers have different operation costs, constraints, and efficiency characteristics [1]. concurrently, different smart grid topologies are utilized and developed for specific purposes. an effective outage management system (oms) is an ambitious target for smart grid realization [2].
formally, a microgrid is defined by the u.s. department of energy as “a group of interconnected loads and distributed energy resources (ders) with clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid and can connect and disconnect from the grid to enable it to operate in both grid-connected or island modes” [3]. microgrids first appeared in [4] as an innovative concept assuming a cluster of loads and micro-sources, all operating as a single controllable system. by this definition, microgrids can be a good alternative to serve different societies, buildings, rural areas and small districts. they are a viable solution for buildings depending on resources other than the main grid (where main grid expansion is either impossible or not economical). especially in developing countries, a large number of buildings including healthcare facilities, schools and small businesses are not connected to a main electrical grid system [5]. although considerable related work has been done to manage power flows and energy in buildings, every new application sets up different requirements, so that previous approaches cannot be fully applied. for instance, [1] and [6] present online energy management systems to supply constant loads for different purposes based on solving the economic dispatch of generators in real time. however, they consider relatively long time steps to work out this problem (e.g. one hour). besides, other online energy management schemes have been implemented, as in [7], but they did not consider safety ranges for the operation of the battery bank, which ensure a longer lifetime [8]. the contribution of this work is to offer an online energy management scheme (ems) for buildings experiencing planned grid electricity outages.
it aims to reduce the fuel consumption of the diesel genset and maximize the use of res in order to supply the load efficiently, as well as to hold the state of charge (soc) of the battery bank within safe operation ranges. the proposed scheme is carried out over an accurate load profile with relatively short time steps (two minutes). also, a certain degree of uncertainty is imposed in order to offer a better understanding of the system dynamics.
m. hijjo, f. felgner and g. frey / icscs (2016)
this paper is organized as follows: section ii presents the main problem to be solved; section iii presents the system model including resources and loads; section iv presents the proposed energy management scheme; section v demonstrates a case study, including the experienced outages and the load profile of a hospital complex in gaza city, with the corresponding simulation results concerning various criteria. finally, section vi discusses the results and concludes the outcomes of this paper.
ii problem statement
the focus of this work is to solve the problem of frequent and planned grid outages, where the load must be disconnected from the main grid at certain periods of time. the distribution company usually informs customers in advance about the period of interruption, so they can manage to schedule their consumption or even use a proper standby power supply system to cover their demand. simply, this work proposes a microgrid-based power supply and the corresponding management criteria by which the load demand can be supplied more efficiently. specifically, such a problem includes, on the one hand, the modeling of the available power from the grid and other resources such as renewables, battery storage and the diesel genset. on the other hand, it includes managing these resources to decrease fuel consumption and increase the reliability of the system.
iii system modeling
the system under consideration can be described by three interacting subsystems which form all the power flows, namely: load demand, grid and secondary power supply. the system consists of a microgrid composed of a pv array, a lead-acid storage bank and a diesel genset, in addition to the critical load, all connected to the main grid as depicted in figure 1.
figure 1 general model of the system
a load demand:
the load demand or profile represents the instantaneous power consumption of the load and can be modeled by discrete values P_L(τ) over a fixed time horizon (e.g., a day), where τ ∈ [t_0, t_0 + T]. obviously, the most important part of proposing an efficient ems for buildings is to identify their individual consumption patterns and maintain power supply even during blackouts, rather than to save energy. to this end, a method is required to mock up a load profile for a relatively long period from available basic data. for the purpose of testing control strategies, the load forecasting model should avoid complicated configuration processes [9]. essentially, such a method is also advantageous when a comprehensive monitoring action is not possible. consider a load profile P_b(τ) available over a specific period, which represents the basic data window. assuming the consumption pattern is constant or changes only slightly, the load of the next window can be described in terms of the basic window including a scaling factor and a time delay; the load profile for day i can then be mathematically formulated as follows:
P_i(τ) = α_i · P_b(τ + β_i) (1)
where α_i is the weighting factor indicating the uncertainty of the power demand of the next window (day) referred to the basic data window, i.e. if its value is equal to 1.15, the whole load profile of day i is 1.15 times the basic load profile, and β_i is the shifting factor representing the global time shift of the next day referred to the basic day.
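eq. (1) can be sketched as follows; the uniform ranges chosen for α and β below, and the periodic wrap-around of the shifted index, are illustrative assumptions:

```python
import random

def derive_day_profile(p_basic, alpha, beta):
    """eq. (1): P_i(tau) = alpha_i * P_b(tau + beta_i); the basic day
    is treated as periodic so the shifted index wraps around."""
    n = len(p_basic)
    return [alpha * p_basic[(t + beta) % n] for t in range(n)]

def derive_horizon(p_basic, days, alpha_range=(0.9, 1.15),
                   beta_range=(-5, 5), seed=0):
    """stack `days` scaled/shifted copies of the basic profile."""
    rng = random.Random(seed)
    profile = []
    for _ in range(days):
        alpha = rng.uniform(*alpha_range)   # weighting factor alpha_i
        beta = rng.randint(*beta_range)     # shifting factor beta_i
        profile.extend(derive_day_profile(p_basic, alpha, beta))
    return profile

# a basic day sampled every two minutes -> 720 points
basic = [100.0 + 50.0 * (t % 360) / 360 for t in range(720)]
two_weeks = derive_horizon(basic, days=14)
print(len(two_weeks))  # 14 * 720 = 10080
```

the resulting two-week sequence plays the role of the derived long-time load profile used later in the simulation.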
in this work, the factors α_i and β_i are generated using uniformly distributed random variables within a proper range of uncertainty.
b main grid:
the grid is supposed to supply the load continuously, but for various reasons there might be a total deficit in supply at certain periods. here, we consider the grid as a binary-state power supply: the grid can be on at certain periods and off for the rest. note that the grid power can supply the load sufficiently whenever it is in the on state. such a timely grid behavior is illustrated simply in figure 2.
figure 2 state of the grid
c microgrid supply:
the microgrid mainly acts as a secondary power supply represented by the different micro power resources. it can interact with the main grid to charge the battery bank in case of low charge, owing to the grid's relatively low-cost power. simultaneously, it can supply the load in the absence of the grid according to specific management criteria, which will be discussed in the next section. mainly, the proposed microgrid supply consists of three sources: pv array, lead-acid battery bank and diesel genset. they can be modeled as follows:
pv array: the output power of a pv module can be obtained from its rated output power at the standard test condition, the light intensity, and the operating ambient temperature [10]:
P_pv = P_stc (G / G_stc) [1 + k(T − T_stc)] (2)
P_pv is the output power of a single pv panel. the standard test condition (stc) means that the solar irradiance G_stc is 1000 w/m², the pv temperature T_stc is 25 ℃, and the relative atmospheric optical quality is the am1.5 condition. G is the irradiance at the operating point, k is the power temperature coefficient, P_stc is the rated output power under stc, and T is the pv temperature at the operating point.
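eq. (2) can be sketched directly; the rated power matches the 180 wp panel of table i, while the temperature coefficient k is an illustrative assumption (a typical crystalline-silicon value):

```python
def pv_output(g, t_cell, p_stc=180.0, g_stc=1000.0, t_stc=25.0, k=-0.004):
    """eq. (2): P = P_stc * (G / G_stc) * (1 + k * (T - T_stc)).
    k = -0.4 %/K is an assumed, typical crystalline-silicon coefficient."""
    return p_stc * (g / g_stc) * (1.0 + k * (t_cell - t_stc))

# a 180 Wp panel at stc delivers exactly its rated power:
print(pv_output(1000.0, 25.0))              # 180.0
# at 800 W/m2 and a 45 degC cell temperature the output drops:
print(round(pv_output(800.0, 45.0), 1))     # 132.5
```

scaling this per-panel output by the 556 panels of table i gives the array-level generation fed to the dc-dc converter.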
note that the output of the pv array is connected directly to a dc-dc power converter which contains a maximum power point tracking unit (mppt) to maximize power extraction under the different operational conditions.
battery bank: the energy storage system (ess) is the most essential part of most microgrids. it is, therefore, necessary to have a well-sized battery bank in order to ensure that the power supplied by res during high-generation periods will be available when the load requires it [11]. the strategy of managing batteries can significantly impact the performance of the overall system. the following condition is imposed to limit the power flows into and out of the battery:
P_ch ≤ P_batt ≤ P_dis (3)
where P_batt is the power drawn from or injected into the battery. it is positive when discharging and negative when charging, and it should not exceed the charging limit P_ch and the discharging limit P_dis, to ensure acceptable operation conditions [12]. besides, another variable that should be kept within a certain range is the state of charge (soc). it can be expressed as:
SOC(τ) = E_batt(τ) / C_batt (4)
SOC_min ≤ SOC ≤ SOC_max (5)
where E_batt, C_batt, SOC_min and SOC_max are the actual energy stored in the battery at time τ, the total energy capacity of the battery, and the minimum and maximum allowed state of charge of the lead-acid batteries, respectively. the soc value at time (τ + ∆) is determined by the soc value at time τ and the battery power during the time period. it can be expressed by the following equations:
E_batt(τ + ∆) = E_batt(τ) − P_batt(τ) × ∆ (6)
SOC(τ + ∆) = SOC(τ) − (P_batt(τ) / C_batt) × ∆ (7)
the charging and discharging efficiencies are both assumed to be η_b = 95 % [13].
diesel generator: basically, diesel generators act as a backup power source. in accordance with the concern of this work, the model of the diesel genset is limited to its fuel consumption fc (l/kwh), which can be formulated as a quadratic function of the generated power [14]. consequently, the fuel cost can be found using eq.
(9), where D_c is the diesel fuel cost per liter ($/l):
F_c(P) = ∑(aP² + bP + c) × ∆ (8)
Fuel Cost = D_c · F_c(P) (9)
where a, b and c are the coefficients of the fuel cost function and can be found by curve fitting according to the charts given by different manufacturers [15]. in addition, the generated power P_gen should not violate the rated capacity P_max and should also not drop below a certain lower limit P_min, to keep the efficiency high [16]:
P_min ≤ P_gen ≤ P_max (9)
power inverter: a bi-directional power inverter is assumed to perform the needed power conversion between the ac and dc buses. the following equation describes an abstract model of the used inverter:
P_out = η_inv · P_in (10)
where η_inv is the efficiency of the power conversion.
iv energy management system
at each time step, the following power balance must be fulfilled:
P_pv + P_batt + P_genset = P_load + P_loss (11)
where P_pv is the total output power from the photovoltaics and P_loss is the total power loss in the system. the developed control strategy consists of three cascaded stages:
1. the first stage gives priority to the grid to supply the load in case of insufficient power generated from renewables.
2. the second stage becomes active in case of a power outage, where the microgrid takes responsibility for supplying the load in case of insufficient renewable power.
3. the third and last stage is a master-slave control strategy adopted from [17, 18], typically developed for islanded operation mode, where the battery bank and the diesel generator serve in succession depending on the available soc and power from renewables.
overall, the control strategy gives the priority of power supply to res even if its contribution is modest compared with the diesel genset. basically, the diesel genset operates in case of grid outage and low soc of the battery, to supply the load demand and charge the batteries (with excess power) up to a certain point soc_max according to the constraint in eq. (5). note that two maximum thresholds are chosen to stop the charging process: soc_stp1 stops charging from res, and soc_stp2 stops charging either from the grid or from the diesel genset while it is running. here, soc_stp2 is chosen lower than soc_stp1 to maximize the usage of res rather than depending too much on the grid or the diesel genset.
figure 3 pseudocode of the proposed 3-stage ems algorithm:
declare state variables soca, pgenset, pgrid, pload, pres, pbatt, fincr
declare genset coefficients a, b, c
read soci, socstp1, socstp2, socmin, pgen, pbatt_max, gensetflag, eff, k
soca[1] = soci
fc = 0
for t = 1 to T
    demand[t] = pload[t] - pres[t]
    if demand[t] < 0                         // res surplus
        pgenset[t] = 0
        pgrid[t] = 0
        if soca[t] < socstp1                 // charge battery from res
            pbatt[t] = eff * demand[t]
        else
            pbatt[t] = 0
        end if
        soca[t+1] = soca[t] - (pbatt[t] / k)
    else if grid_on[t] == 1                  // stage 1: grid available
        pgenset[t] = 0
        if soca[t] >= socstp2
            pbatt[t] = 0
        else
            pbatt[t] = -pbatt_max            // charge battery from grid
        end if
        pgrid[t] = demand[t] - (pbatt[t] / eff)
        soca[t+1] = soca[t] - (pbatt[t] / k)
    else                                     // stages 2-3: grid outage
        pgrid[t] = 0
        if (soca[t] >= socmin) and (gensetflag == 0)
            pbatt[t] = demand[t] / eff       // battery supplies the load
            pgenset[t] = 0
        else
            pgenset[t] = demand[t]           // genset supplies the load
            pbatt[t] = demand[t] - pgen      // excess genset power charges battery
            if soca[t] < socstp2
                gensetflag = 1
            else
                gensetflag = 0
            end if
        end if
        soca[t+1] = soca[t] - (pbatt[t] / k)
    end if
    fincr[t] = (a * pgenset[t]^2 + b * pgenset[t] + c) / k   // fuel increment for this time step
    fc = fc + fincr[t]
end for
in addition, the control strategy aims to operate the diesel genset in its most efficient range by keeping the loading factor as high as possible instead of letting it fluctuate with the load, where the excess power is used to charge the battery as long as it does not violate the charging constraints. bear in mind that too fast discharging of the batteries is to be avoided, cf. the constraints in eq. (3). additionally, highly frequent changes in the state of the diesel genset have to be prevented.
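the per-step bookkeeping of eqs. (6)–(8) that the pseudocode of figure 3 relies on can be sketched in python (a minimal sketch; the sign convention follows the text, with p_batt positive when discharging, and the genset coefficients below are illustrative):

```python
def step_battery(soc, p_batt, c_batt, dt):
    """eq. (7): SOC(t+dt) = SOC(t) - (P_batt / C_batt) * dt.
    p_batt > 0 discharges the battery, p_batt < 0 charges it."""
    return soc - (p_batt / c_batt) * dt

def fuel_increment(p_genset, a, b, c, dt):
    """eq. (8): quadratic fuel consumption of the genset over one step;
    no fuel is burned while the genset is off."""
    return (a * p_genset**2 + b * p_genset + c) * dt if p_genset > 0 else 0.0

# discharging the 960 kWh bank at 96 kW for one hour drops soc by 10 %:
soc = step_battery(0.60, 96.0, 960.0, 1.0)
print(round(soc, 2))  # 0.5
# fuel burned in one 2-minute step at rated output (illustrative a, b, c):
print(round(fuel_increment(185.0, 0.0001, 0.2, 5.0, 1 / 30), 3))
```

accumulating `fuel_increment` over all steps gives fc, which multiplied by the diesel price D_c yields the fuel cost of eq. (9).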
therefore, the design parameters, such as the battery charging limit P_ch and the capacity of the diesel genset, should be carefully chosen according to the system behavior and characteristics. the advantages of this strategy can be summarized as follows:
1. it reduces fuel consumption, which is desirable from the environmental and economic points of view.
2. it allows the utilization of renewable energy by minimizing the operation time of the diesel generator.
3. it increases the expected life span of the generator by reducing its daily operation time.
4. it keeps the soc within an acceptable range, even if there is no generation from res, which leads to increased battery lifetime.
5. it offers sufficient time for system maintenance by reducing the total operation time of the diesel genset.
in the following section, the ems is applied and the simulation results are demonstrated in detail.
v case study
al-shifa’ hospital in gaza city was chosen to conduct this work. it is considered the largest healthcare facility in the gaza-strip. it faces a semi-predictable daily power outage due to several electrification problems in the region [19]. an exhaustive description of the power system of that hospital was demonstrated and clearly discussed in a previous work [20]. an authentic load profile was adopted from [21] to conduct this work. it was a single-day load profile measured every two minutes. however, the simulation of the present work is carried out over an extended period (two weeks) to capture the system dynamics more comprehensively. according to [21], it was identified that the daily electrical load profile has almost the same pattern during the whole year; this can be observed since the clinical services are continuous all over the year [21]. obviously, this may be debatable because of seasonal variations.
however, an acceptable explanation of this phenomenon is the mediterranean-arid nature of the climate in that region, with mild winters and dry, hot summers subject to drought [22]. therefore, the cooling load in summer approximately substitutes the heating load in winter. besides, another interpretation, validated in [9] and [23], is that weather variables do not influence load consumption as much as the work calendar does. hence, a fraction of this load (one fifth) is taken as the basic data from which the load profile is generated for a relatively long period according to eq. (1). the uncertainty parameters αi and βi are generated using uniformly distributed random variables. both load profiles, the basic one and the derived long-time one, are illustrated in figure 4.

figure 4: load profile: (a) for the basic day, (b) over two weeks.

two outage scenarios are frequently experienced in gaza-strip [20] and are modeled according to the earlier illustration in figure 2. the first scenario, called the half-period schedule, occurs when the grid deficit is around 50 %, in which case the grid is on for eight hours and then off for the next eight hours over an extended period (cf. figure 5a). the second scenario, the one-third schedule, occurs when the deficit is around 70 %, in which case the grid is on for six hours and then off for the next twelve hours (figure 5b).

figure 5: grid states: (a) half-period schedule, (b) one-third schedule.

the components of the proposed microgrid are adopted from [17] but without the wind-turbines. they are listed in detail in table i.
table i: components of the microgrid system

module                        | power rating  | quantity | capacity
pv solar panel                | 180 wp        | 556      | 100 kwp
diesel generator              | 200 kw        | 1        | 200 kw
deep-cycle lead-acid battery  | 2 v / 1000 ah | 480      | 960 kwh

note that the rating of the diesel genset is given in kw, not in kva (kilovolt-ampere); kw is the unit of real power and kva a unit of apparent power. the power factor, unless defined, is an approximate value (typically 0.8), and the kva value will always be higher than the kw value [24]. the parameters used in the simulation, with the corresponding nomenclature and values, are listed in table ii.

table ii: simulation parameters

parameter  | nomenclature                           | value
ebat       | total battery bank capacity            | 960 kwh
socint     | initial state of charge                | 60 %
socmin     | minimum allowable state of charge      | 45 %
socstp1    | stop-charging threshold from res       | 100 %
socstp2    | stop-charging threshold from genset    | 85 %
ηb         | efficiency of charging and discharging | 95 %
ηc         | inverter efficiency                    | 95 %
pbatt_max  | maximum charging/discharging power     | ∓𝐸 /(5 h)
pgenset    | maximum output power from genset       | 185 kw
dc         | diesel fuel cost per liter             | 1.85 $/l
∆          | time step                              | (1/30) h

the expected lifespan of the batteries is highly affected by the temporal variation of the soc, i.e., by the depth of discharge. according to [12] and [25], the effective cumulative lifetime of lead-acid batteries is associated with their operating soc values. therefore, the maximum charging current should be carefully chosen to guarantee a long battery lifetime and to protect the battery from overheating or fast degradation. different manufacturers have different recommendations and preferences regarding the charging technology. generally, the maximum current must not exceed c/(4 h) [26], where c is the battery capacity in ampere-hours (ah). the matlab simulation is performed using the generated two-weeks load profile.
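the charging limits in table ii can be cross-checked numerically. the sketch below reads the ∓𝐸/(5 h) entry as the bank energy divided by five hours (an interpretation, since the subscript of 𝐸 is lost in the table) and applies the c/(4 h) current rule of [26] per 1000 ah cell:

```python
# cross-check of the storage limits in tables i and ii.
e_bat_kwh = 960.0             # table ii: total battery bank capacity
p_batt_max = e_bat_kwh / 5.0  # reading the limit as e/(5 h): 192 kw
c_ah = 1000.0                 # table i: 2 v / 1000 ah cells
i_max_a = c_ah / 4.0          # c/(4 h) charging rule of [26]: 250 a per cell
```

the 192 kw figure is consistent with the bank never being emptied or filled in less than five hours at the maximum power.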
to conduct this work, a baseline simulation is carried out first, considering the diesel genset only as a standby power supply. for illustration, figure 6 presents the baseline power consumption from both grid and diesel genset over three days, applying the half-period outage scenario. the fuel consumption of the diesel genset over the whole simulation period is found to be 4850 and 6444 liters in case of the half-period and one-third schedules, respectively. obviously, during these schedules, the genset is operated for half of the time in the first case and for almost two-thirds of the whole period in the second.

figure 6: baseline power consumption in the half-period schedule case.

the solar meteorological data required for carrying out this work is gathered from the global meteorological database software meteonorm [27]. extensive simulation for the whole-year data indicates that the solar power gained at the site can reach 1750 kwh/kwp annually. figure 7 presents the generated power from the pv array over the 14 days.

figure 7: generated power from the pv array.

for illustration purposes, three-day simulation results of the corresponding system are presented in figure 8, where the aggregated power flows of both grid and microgrid, with the corresponding load profile and soc dynamics, are shown.

figure 8: aggregated power flows (grid & microgrid with ems).

both the battery and the load can profit from the power of the pv array during the hours of daylight. in addition, the power gained from the pv array is modest as compared with the charging load of the battery or even the load demand. therefore, the soc decreases more slowly in the morning than in the evening because of the availability of the solar power. expectedly, the diesel genset is off when the grid is on or the soc is greater than its minimum allowed threshold.
besides, the proposed ems tries to operate the diesel genset at its maximum efficiency with the highest loading factor, instead of varying the supply according to demand as in the baseline case (cf. figure 6). the final results according to the simulation parameters are listed in table iii.

table iii: final simulation results (14 days)

                                             half-period schedule                          one-third schedule
output                                       grid & diesel | grid & microgrid             grid & diesel | grid & microgrid
                                             (baseline)    | (with proposed ems)          (baseline)    | (with proposed ems)
total energy supplied by genset (kwh)        15645         | 5826                         20798         | 7117
total energy supplied by battery bank (kwh)  (n/a)         | 6667                         (n/a)         | 9829
total energy supplied by grid (kwh)          15673         | 13778                        10520         | 11950
total energy supplied from pv array (kwh)    (n/a)         | 7714                         (n/a)         | 7714
fuel consumption (l)                         4850          | 3724                         6444          | 4034
operating time of the genset (hr)            168           | 59                           222           | 71
final soc (%)                                (n/a)         | 85.44                        (n/a)         | 63.04
operational cost ($) (*)                     11794         | 9369                         13815         | 9614

(*) the end price of grid purchase per kwh is considered 0.18 $.

the total operating hours and the corresponding fuel consumption of the genset are significantly decreased after applying the microgrid with the proposed ems. in addition, the cost savings are calculated and found to be 2425 $ and 4201 $ in case of the half-period and one-third schedules, respectively.

vi discussion and conclusion

this work presents an energy management scheme for a microgrid operating in two modes: grid-connected and islanded. it consists of photovoltaic (pv) solar arrays and a deep-cycle lead-acid battery bank, in addition to the conventionally utilized power source, a diesel generator (genset). simulations are performed on a load profile of a hospital in gaza-city. the genset is switched on when the grid is off and the state of charge (soc) of the batteries is below a defined threshold socmin. likewise, the generator is kept on at least until the batteries reach a certain threshold socstp2 below the fully-charged level.
the presented ems provides significant fuel savings and can extend the lifespan of the diesel genset. lastly, the simulation results indicate the advantage of performing such model-based analyses before deploying new components on site. future work will consider long-term optimization of the system parameters in order to further increase the overall efficiency and maximize the lifetime of the system components.

acknowledgment

the authors thank the german academic exchange service —deutscher akademischer austauschdienst (daad)— for providing a scholarship for mohammed hijjo to pursue his phd degree at saarland university.

references

[1] a. hooshmand, b. asghari and r. sharma, “a power management system for planned & unplanned grid electricity outages,” innovative smart grid technologies latin america (isgt latam), 2015 ieee pes, montevideo, 2015, pp. 382-386.
[2] g. kumar and n. m. pindoriya, “outage management system for power distribution network,” smart electric grid (iseg), 2014 international conference on, guntur, 2014, pp. 1-8.
[3] microgrid exchange group, “doe microgrid workshop report,” aug. 2011.
[4] r. lasseter, “microgrids,” power engineering society winter meeting, ieee, 2002.
[5] j. lukuyu, “wind-diesel microgrid system for remote villages in kenya,” north american power symposium (naps), champaign, il, 2012.
[6] a. c. luna, n. l. diaz, m. graells, j. c. vasquez and j. m. guerrero, “online energy management system for distributed generators in a grid-connected microgrid,” 2015 ieee energy conversion congress and exposition (ecce), montreal, qc, 2015, pp. 4616-4623.
[7] p. malysz, s. sirouspour and a. emadi, “an optimal energy storage control strategy for grid-connected microgrids,” ieee transactions on smart grid, vol. 5, no. 4, pp. 1785-1796, july 2014.
[8] ieee guide for optimizing the performance and life of lead-acid batteries in remote hybrid power systems, ieee std 1561-2007, pp. c1-25, may 2008.
[9] i. fernández, c. e. borges and y. k. penya, “efficient building load forecasting,” emerging technologies & factory automation (etfa), 2011 ieee 16th conference on, toulouse, 2011, pp. 1-8.
[10] e. gavanidou and a. bakirtzis, “design of a stand alone system with renewable energy sources using trade off methods,” ieee transactions on energy conversion, vol. 7, no. 1, pp. 42-48, mar 1992.
[11] f. marra, g. yang, c. træholt, j. østergaard and e. larsen, “a decentralized storage strategy for residential feeders with photovoltaics,” ieee transactions on smart grid, vol. 5, no. 2, pp. 974-981, march 2014.
[12] d. jenkins, j. fletcher and d. kane, “lifetime prediction and sizing of lead-acid batteries for microgeneration storage applications,” iet renewable power generation, vol. 2, no. 3, pp. 191-200, september 2008.
[13] powersonic, technical manual (pdf), p. 19, retrieved march 2015.
[14] g. moshi, m. pedico, c. bovo and a. berizzi, “optimal generation scheduling of small diesel generators in a microgrid,” 2014 ieee international energy conference (energycon), pp. 867-873, 13-16 may 2014.
[15] approximate diesel fuel consumption chart, retrieved march 2016: www.dieselserviceandsupply.com
[16] c. wang, m. liu and l. guo, “cooperative operation and optimal design for islanded microgrid,” innovative smart grid technologies (isgt), 2012 ieee pes, pp. 1-8, 16-20 jan. 2012.
[17] b. zhao, x. zhang, j. chen, c. wang and l. guo, “operation optimization of standalone microgrids considering lifetime characteristics of battery energy storage system,” ieee transactions on sustainable energy, vol. 4, no. 4, pp. 934-943, oct. 2013.
[18] j. peas lopes, c. moreira and a. madureira, “defining control strategies for analysing microgrids islanded operation,” power tech, 2005 ieee russia, pp. 1-7, 27-30 june 2005.
[19] united nations office for the coordination of humanitarian affairs (ocha), “the humanitarian impact of gaza’s electricity and fuel crisis,” march 2014; last accessed may 2016: http://www.ochaopt.org/documents/ocha_opt_electricity_factsheet_march_2014_english.pdf
[20] m. hijjo, p. bauer, f. felgner and g. frey, “energy management systems for hospitals in gaza-strip,” proceedings of the ieee global humanitarian technology conference (ghtc 2015), pp. 18-25, seattle/wa, usa, oct. 2015.
[21] m. mushtaha and g. krost, “performance study of self-sufficient and renewables based electricity supply of a hospital in the near east region,” power and energy society general meeting, 2012 ieee, pp. 1-8, 22-26 july 2012.
[22] the climate of gaza (palestinian territories), online, retrieved march 2016: www.whatstheweatherlike.org
[23] y. k. penya, c. e. borges and i. fernández, “short-term load forecasting in non-residential buildings,” africon, 2011, livingstone, 2011, pp. 1-6.
[24] g. moshi, m. pedico, c. bovo and a. berizzi, “optimal generation scheduling of small diesel generators in a microgrid,” 2014 ieee international energy conference (energycon), pp. 867-873, 13-16 may 2014.
[25] r. kaiser, “optimized battery-management system to improve storage lifetime in renewable energy systems,” journal of power sources, vol. 168, issue 1, pp. 58-65, may 2007.
[26] e. koutroulis and k. kalaitzakis, “novel battery charging regulation system for photovoltaic applications,” iee proceedings - electric power applications, vol. 151, no. 2, pp. 191-197, mar 2004.
[27] meteonorm: global meteorological database.
online, retrieved march 2016: http://www.meteonorm.com/en/

journal of engineering research and technology, volume 3, issue 4, december 2016

assessment of geometric accuracy of jordanian cadastral maps in the west bank-palestine

najeh s. tamim 1 and ahmad a. taha 2
1 department of civil engineering, an-najah national university, nablus, palestine.
2 department of geography, an-najah national university, nablus, palestine.

abstract— the cadastral maps in palestine were produced before 1967 by the jordanian and british mandate authorities. most surveyors, as well as the different governmental and local authorities, use these maps as they are, without any processing, as if these maps were free of errors and problems. even though they are good for some applications, it was found that these maps suffer serious shortcomings concerning geometric accuracy and content (as they have not been updated). this paper aims to assess the internal geometric accuracy of the scanned jordanian block maps using different georeferencing techniques. based on the achieved results, it was found that the 2nd-order polynomial looks to be the most promising one, especially when supported with directly measured field data.

index terms—cadastral maps, block maps, palestine, west bank.

i a historical review of the cadastral mapping in palestine:

the history of cadastral mapping in palestine dates back to the times of the british mandate, which started after world war i. in the 1920s, land registration saw the transition from registration without proper reference to location to statutory cadastral maps, which became indispensable for land settlement and registration [3]. for this purpose, the department of surveys was established in 1920. the entire palestine was covered with major and third-order triangulation networks and 1:10,000 plane-table surveys showing all topographical features. in urban areas, a dense network of fourth-order triangulation was provided.
the village in palestine was the main registration unit, and it was divided into blocks of convenient size called registration blocks. each block was subdivided into parcels, and a unique identification number was given to each block and to every parcel in it [8]. first, demarcation on the ground of the parcel boundaries as claimed by the individuals was made, followed by field surveying of these boundaries. later, the areas of the parcels were computed, and the final paper registration block plans were prepared by the survey department. these plans showed the location, shape and size of every individual parcel of land within the area described in the registration block. the measurements for cadastral mapping during the british mandate over palestine were performed mainly using the method of chain surveying. these measurements were linked to a network of traverse and national control points established around that time. the calculation of traverse points was based on separate adjustment of each traverse individually, and not on a rigorous adjustment as a uniform network [3]. individual traverses were adjusted using the elementary bowditch rule, which first deals with the angular misclosure followed by the linear misclosure. by the end of the british mandate period (in 1948), and due to the political complications, only 20% of the land of palestine was registered and mapped [3]. it is estimated that 91% of this registered land (approximately 5 million donums) lies in the lands occupied in 1948, while the rest lies in the west bank (about 0.5 million donums). during the period 1948-1967, the west bank and gaza were temporarily controlled by jordan and egypt respectively, until they fell under the (israeli) occupation. the lands and survey departments in both banks of the river jordan were unified, with the headquarters in amman being responsible for all land registry offices throughout the country [6].
in 1952 and 1953, most of the laws concerning land and water settlement, registration, etc. were enacted and applied in both banks, and the process of land registration, settlement of land rights and cadastral mapping was resumed. since jordan was controlled by britain (the same as palestine), a similar procedure of considering the village as the main registration unit, dividing it into blocks and parcels, was followed. densification of the british geodetic control network was first carried out, and then plane-table measurements were performed for the land parcels. additional information was added to the prepared plane-table maps; this includes the recording of direct tape measurements of some distances and the plotting of grid-line marks. these marks were inserted at 500 m intervals (in most maps) in both directions as guidelines to establish the coordinate reference system for the block. overall, about 44% of the lands in the west bank were surveyed and mapped during the jordanian and british mandate periods (figure 1). these statistics were estimated by rough digitization of the mapped areas shown in figure 1 and confirmed by the palestinian survey department. the mapping and settlement-of-rights activities ceased with the fall of the west bank under the (israeli) military occupation in 1967. since then, and due to the political instability, no serious cadastral mapping projects on a national level have been initiated. however, there has been some sporadic registration of isolated land parcels initiated by individuals, known as new registration, to protect their rights to land. after the establishment of the palestinian authority in the mid-1990s, and according to sources from the palestinian survey department [5], a few limited land registration and mapping initiatives were launched.
these include pilot projects for registering some lands in the hebron, bethlehem, ramallah and salfeet districts.

ii status of existing cadastral maps:

from the discussion in the previous section, it can be deduced that the land in the west bank takes the shape of patches of areas which have been completely registered and mapped, separated by areas for which the cadastral mapping has not been finished or has not even been initiated (see figure 1). those areas that have not undergone any settlement of rights to land and mapping suffer several problems. these include the lack of a concrete foundation for the resolution of boundary disputes between neighbors, given that land borders are not mapped, which also makes any planning in these areas difficult. moreover, landholders do not own firm documents of ownership that show the area and extent of their lands. this has made their lands subject to confiscation by the (israelis) for the purpose of building settlements and military camps, and has also limited hopes for land development projects. for those areas for which cadastral maps have been prepared, the maps share no common guidelines and lack uniformity with regard to cartographic method, scales, legal status, quality and appearance. specifically, the cadastral maps share the following characteristics:
- they do not provide full coverage of the west bank and gaza.
- they are graphic in nature.
- the cadastral blocks have been prepared at different scales, ranging from 1:625 to 1:10,000.
- the internal graphic accuracy of these maps is in the range of 0.5 to 0.8 mm [2].
- the relative accuracy of adjoining map sheets is poor, which means these maps generally do not match when placed next to each other, due to the existence of gaps and overlaps.

figure 1: the extent of cadastral mapping in the west bank (source: palestinian survey department).

several factors have contributed to the poor quality of these cadastral maps. these include: 1.
the surveying techniques that were used for the measurements. as mentioned earlier, the measurements were made employing chain-surveying and plane-table equipment. these instruments have a limited accuracy compared to the modern surveying equipment currently available, which includes total stations, photogrammetric and gps instruments, especially for a complex topography like that of palestine.
2. drawing errors. if the finest drawing pen used was 0.2 mm, then there will be a minimum inherent drawing error of 0.5 m in the position of any point on a map of 1:2500 scale (0.2 mm × 2500 = 0.5 m), in addition to the measurement errors.
3. the measurements were tied to the low-accuracy geodetic control network established in the 1920s. this network has proven to be unreliable by today's standards [4].
4. the measurements were performed by teams of varying skills and care.
5. the cadastral maps available in the hands of the palestinians are scanned images of copies of the original maps. the original cadastral maps prepared at the time of the british mandate are held by (israel), while those prepared between 1951 and 1967 are held by jordan. the graphic copies as well as the original maps have been subject to expansion, contraction and tearing, which affected their physical condition and hence their geometric accuracy.
6. the cadastral maps are old (50 to 90 years old) and do not reflect the changes which have happened since then, especially land subdivision, consolidation of adjacent parcels, and so on. no updating mechanism has been followed.
recently, there has been an increasing trend towards the computerization of paper cadastral maps. for example, jordan initiated a project in 1995 [1] to create a digital cadastral map.
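the drawing-error estimate in point 2 generalizes to any pen width and map scale: the ground-position uncertainty is simply the pen width multiplied by the scale denominator. a one-line helper (hypothetical, for illustration):

```python
def drawing_error_m(pen_width_mm, scale_denominator):
    """minimum ground-position uncertainty (m) due to pen width alone."""
    return pen_width_mm / 1000.0 * scale_denominator

# a 0.2 mm pen on a 1:2500 map gives about 0.5 m on the ground;
# the quoted 0.5 mm internal graphic accuracy gives about 1.25 m.
```

the same calculation applied to the 1:10,000 block scale mentioned earlier shows why coarse-scale blocks are markedly less precise on the ground.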
for this purpose, all available paper cadastral maps were scanned, and a technique for minimizing scanning errors and edge-matching adjacent maps was developed to create a seamless digital map for the country. the affine transformation was used in most of the analysis [1]. an error of up to 4 m was observed in the 1:2500 maps. the egyptians have also launched several projects to automate their cadastral maps [7][9]. the available cadastral maps were digitized and verified, but nothing is documented on the accuracy of these maps. concerning the west bank, there is a shortage of research related to the assessment of geometric accuracy and the computerization of cadastral maps. this work aims to bridge this gap.

iii assessment of geometric accuracy of jordanian block maps in palestine:

in order to form an idea about the geometric accuracy of the existing jordanian cadastral maps, three random sample scanned maps from three villages have been selected and closely inspected. these include a block from naqoorah village in the nablus area, a block from yamoon village in the jenin area, and a block from silwad village in the ramallah area (figure 2). the reason for choosing these blocks in particular is that the researchers have done some surveying work there, which facilitated the availability of data for inspection. the following methodology was used for the geometric accuracy assessment:

step 1: manual inspection of paper cadastral maps. the distances between the grid-line marks, as well as the distances between the control points that appear on the chosen maps, were precisely scaled and compared with their counterparts known from the coordinates. figure 2a shows the scaled distances between the grid-line marks of naqoora/block 6. these distances are supposed to be 500 m. errors ranging from -1 m to 7 m were observed on the three chosen maps. it is worth mentioning that larger errors were observed in the east-west direction in all three inspected maps.
this could be due to thermal copying from the original maps that are stored in jordan, in addition to small scaling errors. table 1 shows sample values for scaled and computed distances between some control points in block 6/naqoora, given that there are 11 control points in this block. the errors range between 0.99 m and 2.58 m. in yamoon, an error of up to 4.53 m was observed between the scaled and computed distances of the 7 available control points. it was also noticed that larger errors are observed between points that lie in the east-west direction with respect to each other. this type of error could not be checked in the silwad block since it contains only one control point.

table 1: scaled and computed distances between control points in block 6/naqoora.

from   | to      | scaled distance (m) | computed distance (m) | difference (m)
644 w  | 1309 bc | 301                 | 299.04                | 1.96
644 w  | 1320 bc | 717                 | 715.36                | 1.64
644 w  | 1419 bc | 574                 | 574.99                | -0.99
644 w  | 1321 bc | 434                 | 431.42                | 2.58

figure 2: the three chosen blocks for this study: (a) naqoora/block 6, (b) yamoon/block 16, (c) silwad/block 14.

step 2: evaluation of the control points' accuracy within the study area. to check that the inherent errors and inconsistencies observed in the maps are not caused by the control points, the coordinates of three control points were observed using gps and compared with their old known coordinates. the network real-time kinematic (rtk) technique was used in the data collection of the control points. this is because the control points are located in open-sky areas, where the rtk horizontal accuracy is usually within centimeter level. on average, each control point was observed for about 10 seconds with a horizontal precision of less than 1 cm. it can be deduced from the linear errors shown in table 2 that the small differences in the coordinates are not the cause of the errors in these maps.

table 2: old known and gps-measured coordinates of control points in the study area (naqoora village).

control point | old known coordinates       | gps coordinates             | linear error (m)
number        | easting (y) | northing (x)  | easting (y) | northing (x)  | √(Δy² + Δx²)
644 w         | 170760.50   | 185648.14     | 170760.51   | 185648.35     | 0.21
1309 bc       | 170464.66   | 185691.77     | 170464.42   | 185691.61     | 0.29
230 b         | 170047.72   | 184235.98     | 170047.65   | 184236.11     | 0.15

step 3: accuracy of distances registered on the blocks. as indicated earlier, some distances were directly taped and recorded on the blocks that were prepared using the plane-table instruments. to check the accuracy and consistency of these recorded distances, a search was made for old cut marks that still exist, as well as some old angle irons that have not been moved, and their coordinates were measured using gps. the distances computed from these coordinates were compared with the registered ones. figure 3 shows a small portion of block 6/naqoora where some of these points were observed.

figure 3: portion of block 6/naqoora.

table 3 shows a comparison between registered and measured distances. even though most errors are within an acceptable tolerance, some errors appear to be large (1.18 m). this, in turn, affects the overall accuracy of the block and the level to which surveyors can rely on these registered values for the relocation of boundaries and the resolution of border disputes between neighbors. similar and even larger errors were observed by the researchers in other blocks during their surveying work. the blocks of yamoon and silwad could not be checked since they do not contain registered distances.
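the linear errors of table 2 are simply the horizontal distances between the old known and the gps coordinate pairs. the following sketch reproduces the tabulated values from the coordinates:

```python
from math import hypot

def linear_error(old_en, gps_en):
    """horizontal distance (m) between an old known and a gps coordinate pair,
    each given as (easting, northing)."""
    (e1, n1), (e2, n2) = old_en, gps_en
    return hypot(e2 - e1, n2 - n1)

# control point 644 w from table 2; the result rounds to the tabulated 0.21 m
err_644w = linear_error((170760.50, 185648.14), (170760.51, 185648.35))
```

the same call on points 1309 bc and 230 b rounds to the tabulated 0.29 m and 0.15 m, confirming the table's internal consistency.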
table 3: comparison between registered and measured distances.

line | registered distance (m) | measured distance (m) | error (m)
1-2  | 47.40                   | 47.25                 | 0.15
2-3  | 42.70                   | 42.26                 | 0.44
3-4  | 45.90                   | 45.86                 | 0.04
4-5  | 44.90                   | 43.72                 | 1.18
5-6  | 13.50                   | 13.42                 | 0.08

step 4: georeferencing. to move from a paper cadastre to a digital one, these maps need to be entered into the computer. this could be done through either direct manual digitization or scanning. the palestinian survey department has chosen the second option and scanned all the available blocks. these scanned images are available in “tif” format and are being used by licensed surveyors and local government authorities. to form an idea about the accuracy of these scanned images, the distances between the control points of table 1 were measured on the scanned image of block 6/naqoora. the results are summarized in table 4. it is clear from these results that the scanning process has degraded the accuracy of the blocks, especially in the east-west direction. the same applies to the map of yamoon. these scanned images of the blocks are used by most surveyors as they are, and this does not produce satisfactory results by any means. some surveyors try to improve the block accuracy by locally rescaling the block in the work area based on two measured far points.

table 4: scanned and computed distances between control points in block 6/naqoora.

from  | to      | scanned distance (m) | computed distance (m) | difference (m)
644 w | 1309 bc | 302.30               | 299.04                | 3.26
644 w | 1320 bc | 719.30               | 715.36                | 3.94
644 w | 1419 bc | 574.60               | 574.99                | -0.39
644 w | 1321 bc | 435.30               | 431.42                | 3.88

based on the previous tables and discussion, it is clear that the scanned block maps contain unacceptable errors, which are at the level of several meters in some cases.
as an alternative, the best way to achieve a high level of accuracy would be to re-survey all blocks using modern surveying instruments, such as gps and total stations, and to input the data into the computer directly in digital form. however, this is very costly and time-consuming, given that there are several thousand block maps in the survey department. in addition, a large number of the old border marks have been lost or destroyed. therefore, the work in this research is focused on making an initial improvement to the geometric accuracy of the scanned blocks to make them relatively ready and available for use. to do this, several georeferencing techniques that use mathematical transformations have been applied to the scanned images of the blocks, employing the available grid and control points that appear on these blocks. to assess the accuracy improvement resulting from these techniques, a field survey using gps was performed to measure the coordinates of available old cut marks and angle irons. after that, the output coordinates from the transformations were compared with the field ones and the errors computed. some of the suitable transformation techniques available in many software packages include the affine transformation, 2nd-order polynomial, 3rd-order polynomial, spline method and projective transformation. for this purpose, the capabilities of arcgis 10.2 were used. the program was run on the three chosen blocks using several options that can be summarized as follows:

1) block 6/naqoora. the scanned block was georeferenced using all the above-mentioned transformation techniques, but the 3rd-order polynomial and projective transformation gave high residuals and were dropped from further analysis. emphasis is given here to the affine transformation, 2nd-order polynomial and spline method.
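as background, the affine georeferencing used here fits six parameters to the control-point pairs by least squares; the 2nd-order polynomial extends the same design matrix with quadratic terms. the sketch below is an illustrative numpy implementation of the affine case, not arcgis code, and the check values in any usage are synthetic:

```python
import numpy as np

def fit_affine(src, dst):
    """least-squares affine transform mapping src (image) points to dst
    (ground) points; src, dst are (n, 2) arrays with n >= 3."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # design matrix for x' = a*x + b*y + c and y' = d*x + e*y + f
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)                     # interleaved x', y' observations
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                           # (a, b, c, d, e, f)

def apply_affine(params, pts):
    """apply a fitted affine transform to an (n, 2) array of points."""
    pts = np.asarray(pts, float)
    a, b, c, d, e, f = params
    return np.column_stack([a * pts[:, 0] + b * pts[:, 1] + c,
                            d * pts[:, 0] + e * pts[:, 1] + f])

def rmse(params, src, dst):
    """root mean square residual of the fit, as reported by gis packages."""
    r = apply_affine(params, src) - np.asarray(dst, float)
    return float(np.sqrt(np.mean(np.sum(r ** 2, axis=1))))
```

with more control points than the minimum, the residuals and rmse reported by this fit play the same role as the rmse, min.res. and max.res. statistics quoted for each block below.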
these transformations were applied first using the 11 available control points, then using the 11 control points with 12 grid points added to them, and lastly using an additional 6 old marks whose coordinates had been measured in the field. the root mean square error (rmse) and the minimum and maximum residuals (min.res. & max.res.) were also computed for all these transformations. the maximum residual was checked and found to be less than 3*rmse (i.e., no blunders). figure 4a shows the distribution of points used in the transformation: control points (1 to 11), grid points (12 to 23) and old points (24 to 29). furthermore, figure 4b shows a sample output of the affine transformation results using the 11 control points, 12 grid points and 6 old marks. the transformation parameters resulting from these techniques were applied to the digitized coordinates of another 6 old marks that had been observed in the field using gps. this was performed for comparison and computation of residuals (a total of 12 old marks were observed in the field: 6 for adjustment enhancement and 6 for comparison of coordinates). table 5 shows the residual errors. 2) block 16/yamoon. the previous procedure was repeated for block 16/yamoon. the georeferencing was first applied using the 7 available control points, then using the control points with 9 grid points added to them, and lastly using an additional 6 old marks whose coordinates had been measured in the field. again, 6 measured old cut marks and angle irons were used for the comparison of the georeferenced coordinates with the gps-measured coordinates. the results are summarized in table 6. figure 4: distribution of points used in the transformation (a), and output of the affine transformation (b). table 5: residual errors after using three transformations for block 6/naqoora.
point #    using 11 control points        + 12 grid points               + 6 observed old marks
           affine   2nd poly   spline     affine   2nd poly   spline     affine   2nd poly   spline
fit rmse   0.45     0.32       0.00       1.74     1.38       0.00       1.73     1.41       0.00
min.res.   0.04     0.09       0.00       0.36     0.39       0.00       0.34     0.42       0.00
max.res.   0.83     0.57       0.00       2.84     2.35       0.00       3.30     2.24       0.00
1          2.50     2.25       2.47       1.27     0.89       1.64       1.32     0.99       1.80
2          1.20     0.97       0.94       0.80     0.52       2.55       0.57     0.55       2.43
3          2.90     2.98       2.99       2.40     1.83       2.52       2.06     1.62       2.44
4          1.05     1.02       1.05       0.92     0.45       0.87       0.96     0.45       2.10
5          3.75     3.50       3.72       3.10     2.54       4.61       2.83     2.34       2.09
6          1.17     0.94       0.97       1.23     0.76       2.16       1.18     0.93       1.06
mean       2.10     1.94       2.02       1.62     1.17       2.39       1.49     1.15       1.99

table 6: residual errors after using three transformations for block 16/yamoon.

point #    using 7 control points   + 9 grid points                + 6 observed old marks
           affine   2nd poly        affine   2nd poly   spline     affine   2nd poly   spline
fit rmse   0.51     0.28            0.68     0.40       0.00       0.94     0.69       0.00
min.res.   0.15     0.04            0.19     0.14       0.00       0.08     0.14       0.00
max.res.   0.97     0.46            1.48     0.87       0.00       2.07     1.58       0.00
1          0.50     0.59            0.58     0.48       0.32       0.63     0.36       0.55
2          1.10     0.63            1.04     0.91       0.46       1.13     1.14       0.59
3          1.51     1.78            1.43     1.48       1.48       1.22     1.21       0.89
4          2.35     2.28            2.16     2.04       1.99       1.87     1.53       1.25
5          1.16     1.66            0.96     1.13       0.85       0.75     0.90       0.73
6          0.29     0.73            0.28     0.25       0.26       0.28     0.28       0.51
mean       1.15     1.28            1.08     1.05       0.89       0.98     0.90       0.75
(spline is n/a with the 7 control points alone: it needs more than 7 control points.)

3) block 14/silwad. the previous procedure was also repeated on block 14/silwad.
the affine and 2nd order polynomial transformations were first applied using the one available control point and 6 grid points. this number of points is not sufficient to run the spline method. again, 6 measured old cut marks and angle irons were used for the comparison of the georeferenced coordinates with the gps-measured coordinates. later, two of the six observed old marks, together with another available old point, were added to the control point and the six grid points to give a total of 10 points for running the three transformations. the remaining four observed field points were used for the comparison. the results are summarized in table 7.

table 7: residual errors after using three transformations for block 14/silwad.

point #    using 1 control point          + 3 observed old marks
           + 6 grid points
           affine   2nd poly              affine   2nd poly   spline
fit rmse   0.68     0.03                  0.65     0.23       0.00
min.res.   0.45     0.00                  0.40     0.04       0.00
max.res.   1.05     0.05                  0.95     0.45       0.00
1          0.85     4.75                  0.90     1.20       1.24
2          0.49     4.66                  --       --         --
3          0.39     2.45                  0.31     0.11       0.29
4          0.40     0.52                  0.10     0.19       0.35
5          0.82     1.77                  0.93     1.54       1.40
6          0.66     4.63                  --       --         --
mean       0.60     3.13                  0.56     0.76       0.82
(the two old marks added to the second adjustment have no comparison residuals, shown as --.)

iv analysis and results: looking closely at the results of the different transformations in the previous tables, the following can be noticed: 1) a small rmse in the different transformations gives an indication of good precision achieved in the mathematical solution. however, this does not necessarily mean a high level of accuracy.
for example, the rmse in the spline transformation is always 0.00 since this transformation passes exactly through all points in the solution, but the errors are not equal to zero (2 m on average in naqoora block 6 in the three transformation tests). 2) given that the errors in the cadastral maps do not have a steady, known pattern, it is difficult to judge firmly which mathematical transformation technique is the best. however, a close look at the results in the previous three tables reveals that the 2nd order polynomial appears to be the most promising one, given that it deals with translation, scaling, rotation, curling and skewing errors. the best results are obtained when the number of control points, grid points and observed old marks used in the transformation exceeds 10. this is particularly noticeable in the results in table 7. even though the affine transformation gave better results when the number of control and grid points was 7, the mean error of the 2nd order polynomial dropped from 3.13 m to 0.76 m when 3 observed points were added, becoming close to the affine transformation results. the affine transformation results did not show any significant improvement when the number of points increased from 7 to 10 (the mean error dropped from 0.60 m to 0.56 m only). 3) measuring the coordinates of a few (say 5 to 10) well-distributed old marks in the field contributes slightly to the accuracy of the georeferenced maps (see tables 5 & 6). these points were given a weight similar to all other control and grid points in this research. however, better results are expected if these points are given a higher weight in the solution. to check numerically the accuracy improvement achieved by the georeferencing process, two procedures were followed: a. the errors from the transformation techniques used here were compared with their counterparts from the procedure followed by licensed surveyors (table 8).
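the 2nd order polynomial favoured above extends the affine model with quadratic terms that can absorb the curling and skewing errors; it needs at least 6 points per axis, which is why more than 10 points gave the best results. a minimal least-squares sketch (numpy), with hypothetical tie points for illustration only:

```python
import numpy as np

def poly2_design(pts):
    x, y = pts[:, 0], pts[:, 1]
    # terms: 1, x, y, x^2, x*y, y^2  (6 coefficients per output axis)
    return np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])

def fit_poly2(src, dst):
    """least-squares 2nd order polynomial transform (>= 6 points required)."""
    G = poly2_design(src)
    cx, *_ = np.linalg.lstsq(G, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(G, dst[:, 1], rcond=None)
    return cx, cy

def apply_poly2(cx, cy, pts):
    G = poly2_design(pts)
    return np.column_stack([G @ cx, G @ cy])

# demo on a hypothetical 3x3 grid of tie points with a mildly curled mapping
gx, gy = np.meshgrid(np.arange(3.0), np.arange(3.0))
src = np.column_stack([gx.ravel(), gy.ravel()])
dst = src * 1.001 + 0.002 * src ** 2
cx, cy = fit_poly2(src, dst)
print(np.abs(apply_poly2(cx, cy, src) - dst).max())
```

with exactly 6 points the fit interpolates them (residuals 0, like the spline case discussed above); only surplus points make the rmse a meaningful precision measure.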
as mentioned earlier, surveyors use the scanned maps by rescaling them based on two measured far points in the work area. the errors used for the comparison are those from the 2nd order polynomial using all available control points and grid marks (when the number of control and grid points is more than 10, which does not apply to the silwad block). as can be seen from table 8, there is a noticeable difference (improvement) between the georeferenced map using the 2nd order polynomial and the map as used by surveyors. to be specific, the mean error was reduced from 2.70 m to 1.17 m in naqoora/block 6 and from 1.70 m to 1.05 m in yamoon/block 16. this represents an overall initial improvement of more than 40% in the geometric accuracy. even though the georeferencing techniques used in this research do not eliminate the errors completely, they are still much better than the procedure used by surveyors. hence, the output adjusted map can be used by both surveyors and the governmental authorities in charge of planning and carrying out engineering projects with more confidence than previously. b. the actual "correct" area of a five-sided figure was computed from the measured gps coordinates of the observed field points for the three chosen blocks. the same area was also computed from the surveyors' map and from the map resulting from the 2nd order polynomial procedure. the absolute value of the difference in area from the correct one was computed in both cases. table 9 shows the results. it can be seen that there was a noticeable improvement after applying the 2nd order polynomial to the scanned images of the blocks. the improvement ranged from 43% for the silwad block to 97% (an exceptional case) for the yamoon block. table 8: comparison of residuals with those according to surveyors' procedure.
point #    naqoora/block 6                       yamoon/block 16
           surveyors'   after 2nd order poly     surveyors'   after 2nd order poly
1          1.28         0.89                     2.42         0.48
2          2.10         0.52                     1.56         0.91
3          5.33         1.83                     0.72         1.48
4          1.96         0.45                     0.71         2.04
5          3.87         2.54                     2.47         1.13
6          1.64         0.76                     2.33         0.25
mean       2.70         1.17                     1.70         1.05

table 9: comparison of areas from the surveyors' procedure and the 2nd order polynomial with the correct ones.

v conclusion: existing cadastral maps in the west bank suffer from problems that degrade their accuracy and make them almost unusable as they are without some processing. these problems stem from the old surveying techniques, the stored paper maps and the scanning process. tests have been performed by the researchers to assess the magnitude of the inherent errors in these maps, employing three maps from different geographical areas. the errors ranged from a few centimeters to several meters (up to 7 meters in some maps). it is difficult, expensive and time consuming to resurvey all the areas covered by these maps. therefore, and to make these maps readily usable by the surveying community as well as by the governmental authorities in charge of planning and carrying out engineering projects, the researchers have worked on making an initial improvement to the accuracy of these scanned maps. this is accomplished by georeferencing them, mainly using the affine transformation and the 2nd order polynomial, and employing the available control points and grid marks that appear on these maps. the 2nd order polynomial gave better results when the number of control and grid points exceeded 10. for maps that have fewer control and grid points (<10), the affine transformation can be used.
however, in order to achieve robust results, it is recommended to survey a few old marks in the field and add them when running the 2nd order polynomial. in general, it has been observed that measuring in the field the coordinates of a few old marks that are well distributed over the map enhances the accuracy when they are used in the transformation together with the control points and grid marks. a good improvement in the accuracy of points on the maps has been noticed. it is highly recommended that the palestinian survey department apply this approach to all available scanned cadastral maps. other countries with a similar situation, such as jordan, can also benefit from this technique.

0.05). this means that this design aspect requires further improvement from residents' point of view. similar to above, 60.2% of respondents said that kids' bedrooms are flexible for future changes (statement 6).
however, this does not show a statistically significant difference considering the sig. value (0.472 > 0.05). this means that this design aspect also requires further improvement to enhance design flexibility in kids' bedrooms. the third aspect that requires further improvement from residents' point of view is the kitchen area (statement 10): 60.6% of respondents said that the kitchen area is sufficient to include a dining space.

table 3: respondents' answers to the questions related to the second hypothesis

no.   statement                                                                   mean (0-5)   relative weight (%)   sig.-value
1     internal columns form a main obstacle to future design alterations          3.20         63.96                 0.058
2     open plan is a main advantage that facilitates future design alterations    3.73         74.63                 0.00*
3     the possibility of vertical expansion to add new spaces to the housing
      unit enhances design flexibility, if applicable                             3.57         71.35                 0.00*
4     it's an advantage to give residents more flexibility in changing space
      functions                                                                   3.99         79.82                 0.00*
5     the use of adaptable partitions such as sliding doors and panels is
      better than fixed ones                                                      3.74         74.73                 0.00*
6     merging similar functions, such as living, guest and dining rooms, in
      one space helps enhance design flexibility                                  3.46         69.19                 0.00*
7     it is possible to designate two different functions to a space in day
      and night times                                                             3.28         65.56                 0.00*
8     multi-functional spaces reduce housing cost                                 3.42         68.47                 0.00*
9     the possibility of horizontal expansion to add new spaces to the housing
      unit enhances design flexibility, if applicable                             3.87         77.45                 0.00*
10    the use of studio housing units for new families is practical, given
      that they will move to a larger one when family size increases              3.67         73.45                 0.00*
      all statements                                                              3.67         73.41                 0.00*
* mean is significant at α ≤ 0.05

as for the second hypothesis, table 3 shows that it is accepted.
this hypothesis states that "implementation of housing design flexibility to increase housing utilization efficiency is supported by residents." the total mean is 3.67 and the relative weight is 73.41%. this value is statistically significant considering the sig-value (0.0 < 0.05). in addition, table 3 shows the following observations: respondents agreed on almost all the suggested design flexibility measures. flexibility in changing space functions comes top (79.8%), which means that spaces shouldn't be designed and customized for a single use. this is supported by other answers, such as the use of portable partitions (74.7%), the use of open plan (74.6%), and the use of the time dimension by assigning two different functions to a space in day and night times (65.6%). 73.5% of respondents believe that the use of studio housing units for new families is practical under the condition that the family will move to a larger one when family size increases. this is an important indicator since this type of housing is required in the gaza strip considering the high need for and shortage of housing supply and available urban land.

omar s. asfour, and raghda alsousi / exploring residents' attitude towards implementing housing design flexibility in the gaza strip (2016)

finally, table 4 shows that the third hypothesis is accepted too. this hypothesis states that "implementation of furniture design flexibility to increase housing utilization efficiency is supported by residents." the total mean is 3.98 and the relative weight is 79.6%. this value is the highest score observed in the three hypotheses, which shows respondents' support of furniture design flexibility. this value is also statistically significant considering the sig-value (0.0 < 0.05). in addition, table 4 shows the following observations: respondents agreed on almost all the suggested furniture flexibility measures. these measures were illustrated by colour illustrations in the actual questionnaire to ensure that respondents understood the idea.
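the relative weights and sig.-values quoted throughout these tables follow the standard likert-scale computation: the relative weight is the mean score over the scale maximum, and significance is tested against the scale's neutral midpoint. a minimal sketch (the sample responses are hypothetical; the study's raw data are not reproduced here):

```python
import math

def relative_weight(mean_score, scale_max=5):
    """relative weight (%) = mean score / scale maximum * 100."""
    return mean_score / scale_max * 100

def one_sample_t(sample, mu0=3.0):
    """t statistic testing whether the sample mean differs from the
    neutral midpoint mu0 of the 1-5 scale."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

print(relative_weight(3.67))              # ~73.4, the overall weight reported in table 3
print(one_sample_t([4, 4, 3, 5, 4, 3, 4]))  # positive t: agreement above the midpoint
```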
89.5% of respondents understand that furniture selection is highly dependent on housing unit area. considering housing unit area limitations, 83% of them believe in multi-functional furniture. respondents were given six examples of furniture design flexibility. they agreed on them all, and the results were within the range of 73% to 81.8%. this gives an indication for furniture manufacturers to invest in the field of multi-functional furniture. some designs in this field are quite simple and don't need high technical capacity or significant cost.

table 4: respondents' answers to the questions related to the third hypothesis

no.   statement                                                                   mean (0-5)   relative weight (%)   sig.-value
1     you consider easiness of furniture disassembly and reassembly as an
      advantage                                                                   4.17         83.42                 0.00*
2     easiness of furniture disassembly and reassembly facilitates its
      portability                                                                 3.90         78.02                 0.00*
3     you believe in multi-functional furniture                                   4.15         83.09                 0.00*
4     new furniture designs should be introduced to fit small housing areas       4.48         89.54                 0.00*
5     the master bedroom can be used as a living room at daytime, e.g. by
      using a sofa that can be spread to form a bed                               3.65         72.97                 0.00*
6     multi-functional designs of kids' beds are practical, e.g. beds
      including a study station and storage space                                 3.77         75.50                 0.00*
7     multi-functional designs of dining tables are practical, e.g. tables
      that can be folded and converted to a storage chest                         3.74         74.77                 0.00*
8     the conventional work station can be replaced by a portable one that
      can be folded down and stored                                               3.90         78.02                 0.00*
9     in living rooms: multi-functional sofas can be used, e.g. for securing
      storage space                                                               3.89         77.84                 0.00*
10    it is practical to fold up the ironing table and use it as a mirror,
      for instance, to save storage space                                         4.09         81.80                 0.00*
      all statements                                                              3.98         79.6                  0.00*
* mean is significant at α ≤ 0.05

c. study variables testing. the study examined the effect of five variables on the three main assumptions discussed above, as follows: the effect of gender using an independent-samples t-test, and the effect of age, family size, income level, and housing unit area using an anova test. table 5 shows the effect of gender on the main study assumptions using the independent-samples t-test. the null hypothesis suggests that there is no statistically significant difference between the means of male and female answers at significance level α ≤ 0.05. it can be noticed that the means of female and male answers are close in the three parts of the questionnaire, which represent the three main assumptions of the study. the differences noticed between male and female answers are statistically insignificant, given that the sig-value is higher than the significance level 0.05. therefore, the following null hypotheses are accepted: part 1 of the questionnaire: at significance level 0.05, there is no statistically significant difference between males and females regarding their satisfaction with the current status of their housing unit design in terms of functional requirements. part 2 of the questionnaire: at significance level 0.05, there is no statistically significant difference between males and females regarding their support of the implementation of housing design flexibility to increase housing utilization efficiency. part 3 of the questionnaire: at significance level 0.05, there is no statistically significant difference between males and females regarding their support of the implementation of furniture design flexibility to increase housing utilization efficiency.
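the independent-samples t-test used for the gender comparison pools the two group variances; a minimal implementation of the statistic (the scores below are hypothetical, not the study's data):

```python
import math

def independent_t(a, b):
    """pooled-variance independent-samples t statistic
    (equal group variances assumed, as in the standard test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# hypothetical male/female mean scores on the 1-5 scale
male = [3.2, 3.5, 2.8, 3.9, 3.1]
female = [3.6, 3.4, 3.8, 3.2, 3.5]
print(independent_t(male, female))
```

a t value near zero (sig-value above 0.05) is what leads to accepting the null hypotheses listed above.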
table 5: the effect of gender on the main study assumptions using the independent-samples t-test

questionnaire part   assumption                                                        male (μ1)   female (μ2)   t-value   sig.-value
part 1               the current status of housing unit design satisfies the
                     functional requirements                                           3.28        3.49          -1.239    0.218
part 2               residents believe that housing design flexibility increases
                     housing utilization efficiency                                    3.64        3.72          -0.846    0.400
part 3               residents believe that furniture design flexibility increases
                     housing utilization efficiency                                    3.98        4.12          -1.394    0.166
* means' difference is significant at α ≤ 0.05   ** h0: μ1 = μ2

table 6 shows another set of variables: age, family size, income level, and housing unit area. the aim here is to examine the effect of these variables on the main study assumptions. this was done using analysis of variance (one-way anova test). the null hypothesis again suggests that there is no statistically significant difference between the three specified means in each variable, considering a significance level of α ≤ 0.05. table 6 shows the following results: as for the age variable, it can be noticed that it has an effect on residents' satisfaction with the functional aspects of their housing units. the observed means for the three examined age categories were between 3.08 and 3.55. this shows a statistically significant difference, as the sig-value is less than the significance level of α ≤ 0.05. younger people showed more satisfaction with the functional requirements of their housing units, possibly due to the relatively small size of their families. however, this is not the case for the second and third assumptions. the results show that there is no statistically significant difference between the means provided by the different ages at significance level α ≤ 0.05 regarding their support of housing design and furniture design flexibility. this shows that these ideas are supported by all age categories.
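the one-way anova used here compares between-group variance to within-group variance; a minimal sketch of the f statistic (group scores hypothetical):

```python
def anova_f(groups):
    """one-way anova f statistic across k groups:
    mean square between groups / mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))  # 13.5
```

a large f (small sig-value) indicates that at least one category mean differs, as found for age, income and housing unit area on the first assumption.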
as for the family size variable, it can be noticed that it has no effect on any of the three assumptions of the study. the results show that there is no statistically significant difference between the means provided by the different family sizes at significance level α ≤ 0.05. as for the income level variable, it can be noticed that it has an effect on the first assumption, i.e. residents' satisfaction with the functional aspects of their housing units. the observed means for the three examined income categories were between 2.99 and 3.54, with the lowest satisfaction observed in the lower income category. as for the second and third assumptions, the results also show that there is no statistically significant difference between the means provided by the different income categories at significance level α ≤ 0.05. this shows that housing design and furniture design flexibility are supported by all income categories. finally, the effect of housing unit area is examined. it can be noticed that housing unit area has an effect on the first assumption of the study: residents' satisfaction with the functional aspects of their housing units increases as the area of the unit increases (from 2.24 to 4.01). this is also true for their support of furniture design flexibility, where people with smaller housing units showed more enthusiasm for the suggested measures.

table 6: the effect of age, family size, income level, and housing unit area on the main study assumptions using the anova test

by age:
part   29 & less (μ1)   30-39 (μ2)   40 & more (μ3)   f-value   sig.-value
1      3.55             3.24         3.08             3.494     0.034*
2      3.75             3.65         3.52             2.457     0.091
3      4.11             4.02         3.89             1.753     0.178

by family size:
part   2-4 (μ1)   5-6 (μ2)   7 & more (μ3)   f-value   sig.-value
1      3.28       3.50       3.36            0.562     0.572
2      3.63       3.73       3.68            0.361     0.698
3      3.97       4.09       4.07            0.586     0.558

by income:
part   low (μ1)   middle (μ2)   high (μ3)   f-value   sig.-value
1      2.99       3.48          3.54        4.590     0.012*
2      3.60       3.79          3.59        2.401     0.095
3      3.94       4.11          4.01        1.042     0.356

by housing unit area:
part   100 m² & less (μ1)   100-159 m² (μ2)   160 m² & more (μ3)   f-value   sig.-value
1      2.24                 3.28              4.01                 28.025    0.000*
2      3.54                 3.72              3.69                 0.652     0.626
3      4.14                 4.14              3.89                 3.138     0.018*

* means' difference is significant at α ≤ 0.05   ** h0: μ1 = μ2 = μ3

iv conclusion. the housing sector in the gaza strip faces several challenges. these include the limited available resources, including urban land, the deteriorated economic situation, and the great deficit between housing demand and supply. these challenges require housing solutions that help families find adequate housing that responds to their present and future needs. in this context, this study pays particular attention to the issue of housing land consumption and the role of design flexibility in rationalising this consumption. implementing the principle of design flexibility in the gaza strip housing sector has been discussed and expanded to include furniture flexibility as well. in this regard, the study carried out a field study based on a questionnaire. the questionnaire examined three assumptions.
the first assumption states that the current status of housing unit design satisfies the functional requirements from residents' point of view. the second one states that residents believe that housing design flexibility increases housing utilization efficiency. similarly, the third one states that residents believe that furniture design flexibility increases housing utilization efficiency. the quantitative analysis carried out led to the acceptance of these three assumptions. this means that residents are generally satisfied with the functional capacity of their housing units. however, they accept ideas that enhance this capacity, such as the use of design flexibility and furniture flexibility. statistical analysis of the study variables revealed that design and furniture flexibility are accepted by all the examined categories classified under gender, age, income level, family size, and housing unit area. given the statistically insignificant differences observed within most of these categories, the principles of design and furniture flexibility seem to be generally accepted in the gaza strip. thus, it is recommended to promote these design strategies in the local market in order to improve the efficiency of housing supply. furthermore, it is vital for the gaza strip to encourage the culture of 'reduce' in natural resources consumption. considering the scope of this study, this sustainable principle is achievable in the housing sector through implementing the principles of housing design flexibility. this could have a great impact on the housing sector in terms of increasing supply, reducing cost, and improving residents' satisfaction with their housing units.

omar asfour is an associate professor of sustainable architecture at the department of architecture, islamic university of gaza. he holds a phd degree in architecture from the university of nottingham, uk.
he has a research interest in sustainable architecture, housing policies, energy-efficient design, and urban planning. raghda m. alsousi has been an architect since 2012. she holds a master's degree in architecture from the department of architecture, islamic university of gaza. she has a special interest in furniture design and its relation to architecture. she also has a research interest in islamic architecture and its integration into the modern context of architecture.

journal of engineering research and technology, volume 4, issue 1, march 2017

treatment of desalination brine using an experimental solar pond

yunes mogheir 1, amal qarroot 2
1 environmental engineering department, iug, gaza, palestine, e-mail: ymogheir@iugaza.edu.ps
2 infrastructure msc. program, civil engineering department, iug, gaza, palestine.

abstract— brine from seawater desalination plants is disposed into the sea, causing a negative impact on marine life. solar evaporation ponds are especially suitable for disposing of reject brine from inland desalination plants in arid and semi-arid areas due to the abundance of solar energy. nearly all forms of salt production require evaporation of water to concentrate brine and ultimately produce salt crystals. in this research, an experimental shallow solar pond (ssp) with a surface area of 1×1 m² and a depth of 20 cm was built. the solar pond was tested using two reflector mirrors over five days, from 12 to 16 july 2015. the mirrors, which are adjustable to five different angles with the horizontal, were used as reflectors in order to increase the thermal energy at the surface of the solar pond during the day. the main factors affecting the evaporation rate, namely relative humidity, wind speed, ambient air temperature and solar radiation, were studied.
the results showed that a slight decrease in evaporation rate was observed with increasing relative humidity, with the maximum evaporation rate observed at a relative humidity of 67.6%; a slight increase in evaporation rate was observed with increasing ambient air temperature; the evaporation rate appeared to decrease slightly as wind speed increased; and a gradual increase in evaporation rate was observed with increasing solar radiation. comparisons between experimental and theoretical results were performed, in which good agreement was achieved. the results showed that the evaporation rate increases with decreasing mirror angle β with the horizontal. it was concluded that using two mirrors as reflectors is much more effective than using one mirror, and that the best evaporation performance can be achieved when the mirrors are employed as reflectors. in conclusion, this system proved to be promising: using two mirrors reduced the required solar pond area and hence the area needed for brine evaporation in gaza strip desalination plants. the research can be further developed to achieve better results using a large-scale solar pond. key words: brine, solar system, shallow solar ponds.

i introduction. the desalination of seawater is a common method for providing fresh drinking water in the gaza strip as a solution to increase water resources. however, the disposal of the brines generated by the desalination process poses significant environmental issues and negative impacts, due to the high concentrations of salts and the increased concentrations of transition and heavy metals. this brine is usually discharged into inland water bodies or into the sea and constitutes a threat to ecosystems and species. in the last decade, new demonstration projects have been addressed at achieving effluent volume reduction by either solar evaporation ponds or thermal evaporation.
Brine volume reduction by evaporation techniques yields a solid product that can be disposed of more easily than the original concentrate, while the low-salinity effluent can be reused to increase the water production ratio or discharged directly into surface or ground water bodies [1]. Because of the environmental concerns raised by brine disposal, in addition to its high cost, many recovery technologies have been developed to avoid discharge into the sea; examples are renewable energy generation and the use of evaporation ponds to produce salt or chemicals for industry. Nevertheless, more investigation is needed to reduce brine quantity and to allow its recovery and reuse. Owing to the declining economic situation, the Gaza Strip suffers from an energy crisis. Solar energy, on the other hand, is a renewable resource: abundant, inexhaustible and free [2]. Solar evaporation consists of leaving brine in shallow evaporation ponds, where water evaporates naturally under the sun's energy; the salt is left in the ponds or taken out for disposal. Evaporation ponds are relatively easy to construct and require low maintenance and little operator attention compared with mechanical systems. In addition, no mechanical equipment is required except the pump that conveys the brine to the pond, which keeps operating costs low.
Nevertheless, evaporation ponds for the disposal of desalination-plant concentrate must be constructed as designed and maintained and operated properly so as not to create environmental problems, especially groundwater pollution [3].

Figure 1. Annual monthly average variations in solar radiation in the three climate zones of the Palestinian territories [4].

Solar evaporation is a suitable technology for arid regions where land is available. However, because of the large area needed to treat large volumes of brine, evaporation ponds may have limited use in the Gaza Strip, where land is scarce. Increasing the evaporation rate reduces the pond area required, so using the solar energy available in Gaza could be an appropriate option to increase the evaporation rate. The main goal of this article is therefore to utilize solar energy for brine evaporation using a shallow solar pond (SSP) and to optimize a model suitable for the Gaza Strip.

Justification of the study: the Gaza Strip is a semi-arid coastal region. Fig. 1 shows the annual monthly average variation in solar radiation in the three climate zones of the Palestinian territories [4]. Solar insolation has an annual average of 5.4 kWh/m²·day, fluctuates significantly over the day and the year, and amounts to approximately 2860 mean sunshine hours per year. Measured values in the different areas show annual average insolation of about 5.24, 5.63 and 5.38 kWh/m²·day in the coastal area, hilly area and Jordan Valley, respectively. The average annual global horizontal radiation over all stations is 2017 kWh·m⁻²·year⁻¹ [4].

Shallow Solar Pond (SSP)

The shallow solar pond is a large solar-energy collector that consists of a plastic envelope containing water [6].
As the name suggests, the depth of water in a convective shallow pond is relatively small, usually between 4 and 15 cm [7], and the layer is homogeneous. The concept underpinning the SSP has been known since the beginning of the twentieth century, when Willsie and Boyle used the idea to produce shaft power. They tried various solar pond designs; one was a wooden tank lined with tar paper and covered with a double glass window, with the sides and bottom insulated with hay, and a water level of 7.5 cm. Other designs used asphalt and sand for insulation, but the sand could not be kept dry, so heat loss from the base was high. In 1906 and 1908, Willsie and Boyle succeeded in raising the temperature from 38 °C to 80 °C using dual stages and single and double glass covers (of 110 m²); 11 kW of peak power was obtained. Also at the beginning of the twentieth century, Shuman ran a steam engine on the same system used by Willsie and Boyle. Furthermore, shallow ponds were used in Japan for domestic purposes in the 1930s. About half a century later, the shallow pond technique was suggested for power production by D'Amelio [8], and research to develop SSPs was adopted by the Office of Saline Water, US Department of the Interior [9]. More recently, a research team at the University of Arizona developed an SSP combined with a multiple-effect solar still for desalination; this system produced 19 m³/day of distilled water using 5 ponds (each about 90 m × 2 m) [10]. Around 1975, the Lawrence Livermore Laboratory in California, USA [11] and the Solar Energy Laboratory at the Institute for Desert Research in Israel [12] were established and formed teams for solar-energy research. The former constructed several large-scale SSP projects of different designs [13], and many significant results were soon obtained and published by W. Dickinson and other researchers [11].
At the latter center, the SSP was part of a large-scale solar-energy project that delivered good experimental results. Later, Kudish and Wolf [14] designed a portable shallow pond for camping and military use. During the past 30 years, SSPs have been used in many countries, such as Iran [14] and Egypt [15]. A typical SSP consists of a low-depth volume of water enclosed in an approximately 60 m × 3.5 m plastic bag with a blackened bottom and a colorless top film. The bag is insulated below with foam and covered above with a single- or double-glazed panel, as shown in Fig. 2 [13]. The shallow solar pond can be operated in batch or continuous mode. In batch operation the water is insulated during the daytime; before nightfall it is pumped into a large insulated tank for night storage, and it is pumped back into the bag after sunrise each day. If the water flows continuously through the water bag, the method is called the flow-through mode, which some researchers [13] also call the deep saltless solar pond [16].

Figure 2. A typical shallow solar pond [17].
Figure 3. Schematic diagram of the reflectors (adapted from [18]).

II. Formulation of the Mirror System Used as a Reflector

To overcome heat loss from the pond surface to the atmosphere at night, and to increase the solar-energy harnessing area during the day, a reflection mirror system was designed and used as shown in Fig. 3. This section gives a mathematical formulation of the system; the model in Figure 3 is used to calculate the amount of sunlight energy reflected by the reflectors.

X1Y1 = M1L1 × cos C    (1)

Note: for calculation results see Appendix E.
where C is the angle between X1Y1 and the reflector, a function of time, and M1L1 is the side length of the reflector. C is given by

C = 90 − (180 − α − β1) = 90 − 180 + α + β1 = −90 + α + β1    (2)

Substituting C in Eq. (1), the length of X1Y1 in the triangle X1Y1O1 is

X1Y1 = M1L1 × cos(−90 + α + β1)    (3)

The projection area of the reflector normal to the incident light, Sm1, is

Sm1 = X1Y1 × a    (4)

where a is the length of one side of the pond, equal to the length of the reflector. Substituting the value of X1Y1 gives

Sm1 = M1L1 × cos(−90 + α + β1) × a    (5)

The amount of solar energy reflected by the first reflector into the solar pond is

G1 = Sm1 × B1    (6)

where B1 is the amount of solar energy falling per unit time on a one-square-meter area perpendicular to the incident light; it is equal to B2. The amount of solar energy falling per unit time on one square meter of the pond, U1, is obtained from G1:

U1 = G1 / a²    (7)

or

U1 = M1L1 × cos(−90 + α + β1) × B1 / a    (8)

Note: for calculation results see Appendix E.

A. Solar energy reflected by the second mirror

Following the same approach, an expression for the amount of energy reflected from the other reflector, U2, is

U2 = M1L1 × cos(90 + α + β2) × B1 / a    (9)

where M1L1 is equal to M2L2. Note: for calculation results see Appendix F. In the computational modeling, it is necessary to know the angles between the light beams coming from the reflectors and the normal to the surface of the solar pond.

Figure 4. SSP schematic diagram.
Figure 5. Photograph of the locally fabricated SSP.
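As a numerical check, Eqs. (1) to (8) for the first mirror chain together in a few lines. The function below is a sketch: the interpretation of α as the sun's elevation angle and the unit of B1 are assumptions taken from the surrounding text, not stated explicitly in the paper.

```python
import math

def reflected_energy_per_m2(m1l1, a, alpha_deg, beta1_deg, b1):
    """U1 of Eq. (8): solar energy reflected by the first mirror per
    square meter of pond surface per unit time.

    m1l1      -- side length of the reflector (m)
    a         -- side length of the pond / reflector (m)
    alpha_deg -- sun angle alpha (deg); assumed to be the solar elevation
    beta1_deg -- mirror angle with the horizontal (deg)
    b1        -- irradiance on 1 m^2 normal to the incident light
    """
    c = -90.0 + alpha_deg + beta1_deg            # Eq. (2)
    x1y1 = m1l1 * math.cos(math.radians(c))      # Eq. (3)
    sm1 = x1y1 * a                               # Eqs. (4)-(5): projected area
    g1 = sm1 * b1                                # Eq. (6): total reflected energy
    return g1 / a ** 2                           # Eqs. (7)-(8): per m^2 of pond
```

For a 1 m mirror on a 1 m pond the result reduces to B1·cos(−90° + α + β1), which is largest when α + β1 = 90°, i.e. when the mirror faces the sun squarely.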
These angles are denoted F1 and F2, respectively, and their expressions were obtained from the geometry of the system shown in Figure 3 in terms of α, β1 and β2:

F1 = 90 + α − 2β1    (10)

F2 = 270 − 2β2 − α    (11)

Note: for calculation results see Appendices E and F. These equations were used in the theoretical model calculations. To model the solar pond with mirrors, a numerical solution of the equations defining how much energy arrives from the reflectors at the pond surface was implemented in our existing code [19, 20].

III. Experiment Description

A. Detailed description of the construction of the SSP

A schematic diagram of the constructed small-scale SSP is shown in Figure 4, and a photograph in Fig. 5. The locally made shallow solar pond consists of a wooden box (carpentry timber) with a depth of 0.2 m and a bottom surface area of 1.0 m². A galvanized-iron sheet (0.001 m thick) was used to fabricate the pond basin, with a depth of 0.2 m and a bottom surface area Ap of 1 m², which acts as the absorbing surface for the incident solar radiation. The surface of the absorber plate exposed to the sun was painted black to maximize the absorbed solar radiation. To minimize heat losses from the sides and back of the SSP, a 0.05 m thick layer of sawdust was used as insulation. A movable plane mirror with an area equal to that of the pond surface (1 m²) is hinged at the top of the pond to increase the intensity of solar radiation incident on the pond cover and to improve the pond's thermal performance. The mirror also served as an insulating cover for the pond at night, with a 0.05 m thick layer of sawdust between the back surface of the mirror and a wooden sheet. The angle β between the mirror and the horizontal is adjusted to increase the amount of solar radiation reflected to the pond (Fig. 5).

B.
Experimental procedure

The experiments were carried out outdoors from 12 am to 12 pm on 11 successive days (12–22 July) of the 2015 summer season. To follow the brine evolution, a brine sample from the seawater desalination plant in Deir al-Balah (Gaza Strip, Palestine) was collected and its chemical constituents determined. The climate at the site was consistently humid and hot, with no rainfall during the study period. The brine sample was collected in clean bottles without air bubbles; the bottles were tightly sealed and labeled in the field and weighed regularly. The pond was filled with natural brine at 12 am by continuous addition until it was completely filled to an initial depth of 0.075 m inside the SSP. The water level in the pond was fixed at 12 cm using an overflow system consisting of a PVC pipe of 2 inches diameter. The system was oriented to face south to maximize the solar radiation received by the pond. The ambient air temperature, relative humidity, wind speed and solar radiation during the experimental work were recorded every hour at the field site using a computerized automatic weather service (http://www.accuweather.com). As the brine is heated by solar radiation it evaporates; the water level in the pond was measured in situ every 24 hours (1 day) using a ruler, and evaporation rates were determined from the difference between initial and final readings. During the evaporation process, brine samples were taken in sequence at different densities for chemical analysis.

Figure 6. Growth rate of salt.
Figure 7. Output salt samples.
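The daily evaporation rate described above, the difference between successive ruler readings, is a simple subtraction. A minimal sketch, using hypothetical level readings:

```python
def daily_evaporation_mm(levels_cm):
    """Evaporation per 24 h interval, in mm, from successive water-level
    readings in cm (one ruler reading per day, as in the procedure)."""
    return [(h0 - h1) * 10.0  # 1 cm = 10 mm
            for h0, h1 in zip(levels_cm, levels_cm[1:])]

# Hypothetical three-day series of ruler readings (cm):
rates = daily_evaporation_mm([12.0, 11.6, 11.25])  # mm lost per day
```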
All brine samples were analyzed for major ions (Na⁺ and Cl⁻) by titration; pH and total dissolved solids (TDS) were determined with pH and TDS meters, respectively. To evaluate the performance of the solar pond theoretically and experimentally, the evaporation pond was run under three distinct scenarios: 1) two reflecting mirrors, each at five different angles β, for five days from 12 to 16 July 2015 (the angles were 125°, 130°, 135°, 142° and 145°); 2) no mirrors, on 17 July 2015; and 3) one reflecting mirror at five different angles, for five days from 18 to 22 July 2015. The daily evaporation rate of the solar pond for each scenario was then calculated with the aid of Eq. (1), and the theoretical and experimental results were compared with each other. A computer program was prepared to solve the evaporation-rate equations for the solar pond; its input parameters include climatic and design parameters. A further computation calculated the total solar radiation incident on the mirror and that reflected to the pond. The same procedure was repeated with new values of the climatic conditions for every day of the experiments.

IV. Results and Discussion

A. Effect of brine salinity on salt (NaCl) output

To illustrate the utility of predicting the salt-making process, a brine sample was collected from the desalination plant. Sample no. 1 has a NaCl concentration different from that of sample no. 2, as shown in Table 1; sample no. 2 was collected and analyzed after 10 days of the brine evaporation experiment.

Table 1. Brine chemical analysis
No.  Test   Unit   Sample no. 1   Sample no. 2
1    pH     —      7.78           8.03
2    TDS    mg/l   63360          —
3    EC     μS     99000          >199000
4    Cl⁻    mg/l   35600          100000
5    Na⁺    mg/l   37622          60774

The first sample took 10 days of evaporation to concentrate the brine sufficiently to begin collecting salt; about 8.03 kg were then collected at the end of the period. The second sample produced about 6.77 kg of salt and took about 8 days. A total of 14.8 kg of salt was produced by the salt-making process during the experiment period. The amount of salt changed by about 15.7% as salinity increased from 73222 mg/l to 160774 mg/l. The experiments continued every day until the water level dropped to zero and layers of salt began to appear, as shown in Fig. 6. The salt (solid) samples were collected in small polyethylene bags, tightly sealed and labeled in the field, weighed on an electronic balance, and examined (Fig. 7); the samples were carefully protected from atmospheric conditions until examination.

B. Solar evaporation process

In this section the results obtained from the model and the experiment are discussed and compared with each other, and the effects of the various parameters are examined. Evaporation was predicted using the steps outlined above under the same three scenarios: two reflecting mirrors at five angles for both mirrors (12–16 July 2015), no mirror (17 July 2015), and one reflecting mirror at five angles (18–22 July 2015) (Table 2).
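As a rough cross-check on the salt yields above, the dissolved-salt mass in a given brine volume follows directly from its TDS. The sketch below uses the pond's initial fill (1 m² × 0.075 m = 75 L) and the sample-1 TDS from Table 1 as illustrative inputs; the actual yields were larger because brine was added continuously through the overflow system.

```python
def dissolved_salt_kg(tds_mg_per_l, volume_l):
    """Mass of dissolved solids (kg) in a brine volume, from its TDS."""
    return tds_mg_per_l * volume_l / 1.0e6  # mg -> kg

# Initial fill of the experimental pond at the sample-1 TDS of Table 1:
initial_salt = dissolved_salt_kg(63360, 75)  # about 4.75 kg in the first fill
```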
Each scenario requires a weather-data and site-specific information file, created from the measured data covering the period 12 to 22 July 2015 during which the experimental work was carried out. The online records, taken every hour during the day from an automatic weather service (http://www.accuweather.com), were used as input to the Excel computer model. Solar radiation, ambient air temperature, relative humidity (RH) and wind speed were recorded every hour. Evaporation was observed in situ daily using a scale ruler; hourly evaporation was approximated by applying the Penman equation explained above for each hour and proportioning the daily total evaporation over the 24 h period.

Table 2. Average hourly, maximum and minimum ambient air temperature (°C) for the three scenarios
Scenario         Date        Average   Max   Min
Two mirrors      12/7/2015   26.71     32    22
                 13/7/2015   27.04     33    22
                 14/7/2015   27.75     34    23
                 15/7/2015   26.54     30    24
                 16/7/2015   27.625    34    21
Without mirror   17/7/2015   27.54     34    22
One mirror       18/7/2015   27.623    32    24
                 19/7/2015   27.875    33    23
                 20/7/2015   27.54     33    23
                 21/7/2015   27.875    33    22
                 22/7/2015   28.21     35    23

C. Effect of the reflector mirrors on evaporation rate

To see the effect of the reflectors as a function of their position, simulations were carried out at five different angles β. With one reflector mirror, the angle β2 between the horizontal and the RHS reflector was set at 35°, 38°, 45°, 50° and 55°. With two reflector mirrors, mirror 1 (LHS reflector) and mirror 2 (RHS reflector) make angles β1 and β2 with the horizontal, respectively; the LHS reflector was set at 35°, 38°, 45°, 50° and 55°, with the same angles for the RHS reflector.
To see the effect of the reflectors as a function of their dimensions, the length of the reflector mirror was varied and substituted in Equations 5 and 6. The data in Tables 3 and 4 show that the evaporation rate did not change when the dimensions of the reflector mirrors were changed: the computed rate was identical for mirror lengths a of 6, 4, 2 and 1 m, so a single value is reported per row below.

Table 3. Effect of reflector mirror dimensions on evaporation rate using one mirror (RHS)
Angle β2   Date       U (km/hr)   T (°C)   RH (%)   Rs (MJ/m²/d)   E (mm/d)
35°        18/07/15   10.17       28       68.83    10.47          3.34
           19/07/15   9.88        28       67.42    10.45          3.74
           20/07/15   9.46        28       68.38    10.44          3.69
           21/07/15   9.21        27.5     66.63    10.42          3.65
           22/07/15   9.13        29       67.46    10.40          4.59
38°        18/07/15   10.17       28       68.83    10.47          3.18
           19/07/15   9.88        28       67.42    10.45          3.55
           20/07/15   9.46        28       68.38    10.44          3.5
           21/07/15   9.21        27.5     66.63    10.42          3.46
           22/07/15   9.13        29       67.46    10.40          4.4
45°        18/07/15   10.17       28       68.83    10.47          2.84
           19/07/15   9.88        28       67.42    10.45          3.15
           20/07/15   9.46        28       68.38    10.44          3.09
           21/07/15   9.21        27.5     66.63    10.42          3.05
           22/07/15   9.13        29       67.46    10.40          3.98
50°        18/07/15   10.17       28       68.83    10.47          2.63
           19/07/15   9.88        28       67.42    10.45          2.9
           20/07/15   9.46        28       68.38    10.44          2.84
           21/07/15   9.21        27.5     66.63    10.42          2.79
           22/07/15   9.13        29       67.46    10.40          3.73
55°        18/07/15   10.17       28       68.83    10.47          2.44
           19/07/15   9.88        28       67.42    10.45          2.68
           20/07/15   9.46        28       68.38    10.44          2.62
           21/07/15   9.21        27.5     66.63    10.42          2.56
           22/07/15   9.13        29       67.46    10.40          3.49

Table 4.
Effect of reflector mirror dimensions on evaporation rate using two mirrors (RHS and LHS); the rate was again identical for mirror lengths a of 6, 4, 2 and 1 m.

Angle β1=β2   Date       U (km/hr)   T (°C)   RH (%)   Rs (MJ/m²/d)   E (mm/d)
35°           12/07/15   9.63        27.00    61.42    10.57          4.59
              13/07/15   10.29       27.50    60.79    10.56          4.16
              14/07/15   10.08       28.50    65.04    10.54          3.95
              15/07/15   10.13       27.00    70.21    10.52          3.38
              16/07/15   9.38        27.50    65.92    10.51          3.82
38°           12/07/15   9.63        27.00    61.42    10.57          4.65
              13/07/15   10.29       27.50    60.79    10.56          4.14
              14/07/15   10.08       28.50    65.04    10.54          3.93
              15/07/15   10.13       27.00    70.21    10.52          3.47
              16/07/15   9.38        27.50    65.92    10.51          3.77
45°           12/07/15   9.63        27.00    61.42    10.57          4.6
              13/07/15   10.29       27.50    60.79    10.56          4.13
              14/07/15   10.08       28.50    65.04    10.54          3.88
              15/07/15   10.13       27.00    70.21    10.52          3.53
              16/07/15   9.38        27.50    65.92    10.51          3.49
50°           12/07/15   9.63        27.00    61.42    10.57          4.63
              13/07/15   10.29       27.50    60.79    10.56          4.18
              14/07/15   10.08       28.50    65.04    10.54          3.76
              15/07/15   10.13       27.00    70.21    10.52          3.41
              16/07/15   9.38        27.50    65.92    10.51          3.15
55°           12/07/15   9.63        27.00    61.42    10.57          4.68
              13/07/15   10.29       27.50    60.79    10.56          4.26
              14/07/15   10.08       28.50    65.04    10.54          3.72
              15/07/15   10.13       27.00    70.21    10.52          3.19
              16/07/15   9.38        27.50    65.92    10.51          2.85

The regression model equations concluded for brine evaporation are shown in Tables 5 and 6.

Table 5.
The concluded regression equations for brine evaporation using one mirror. For every fitted model the residuals and standard errors were zero and R² = 1:

β2 = 35°: E = 17.76 Rs − 0.245 RH + 0.909 T − 1.138 U − 179.607
β2 = 38°: E = 13.72 Rs − 0.222 RH + 0.860 T − 0.928 U − 139.775
β2 = 45°: E = 6.13 Rs − 0.181 RH + 0.764 T − 0.513 U − 65.036
β2 = 50°: E = 1.626 Rs − 0.153 RH + 0.716 T − 0.271 U − 21.151
β2 = 55°: E = −0.641 Rs − 0.134 RH + 0.679 T − 0.139 U + 0.755

Table 6. The concluded regression equations for brine evaporation using two mirrors (again with zero residuals, zero standard error and R² = 1):

β1 = β2 = 35°: E = 11.92 Rs − 0.046 RH + 0.061 T − 0.516 U − 115.236
β1 = β2 = 38°: E = 15.21 Rs − 0.025 RH + 0.020 T − 0.525 U − 150.069
β1 = β2 = 45°: E = 19.86 Rs + 0.00064 RH − 0.017 T − 0.328 U − 201.731
β1 = β2 = 50°: E = 24.857 Rs + 0.00031 RH − 0.075 T − 0.163 U − 254.51
β1 = β2 = 55°: E = 29.6 Rs − 0.01147 RH − 0.051 T − 0.061 U − 305.506

where E is the evaporation rate (mm/day), Rs the solar radiation (MJ/m²/day), RH the relative humidity (%), T the ambient air temperature (°C) and U the wind speed (km/hr). These models show the importance of the global solar radiation, relative humidity, ambient air temperature and wind speed as explanatory variables.
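The fitted equations in Tables 5 and 6 are plain linear models and can be evaluated directly. The sketch below encodes the one-mirror coefficients; plugging in the 18/07/15 weather row from Table 3 reproduces its β2 = 35° evaporation rate to within coefficient rounding.

```python
# (Rs, RH, T, U, intercept) coefficients from Table 5 (one mirror):
ONE_MIRROR = {
    35: (17.76, -0.245, 0.91, -1.14, -179.607),
    38: (13.72, -0.222, 0.86, -0.928, -139.775),
    45: (6.13, -0.181, 0.764, -0.513, -65.036),
    50: (1.626, -0.153, 0.716, -0.27, -21.151),
    55: (-0.641, -0.134, 0.679, -0.138, 0.755),
}

def evaporation_mm_per_day(beta, rs, rh, t, u):
    """E (mm/day) from the fitted linear regression for mirror angle beta."""
    a_rs, a_rh, a_t, a_u, c = ONE_MIRROR[beta]
    return a_rs * rs + a_rh * rh + a_t * t + a_u * u + c

# Weather of 18/07/15 (Table 3): Rs=10.47, RH=68.83, T=28 C, U=10.17 km/hr
e = evaporation_mm_per_day(35, 10.47, 68.83, 28, 10.17)  # ~3.36 vs 3.34 tabulated
```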
Secondary variables were assumed negligible; the coefficient of determination R² for the final models was 100%. Tables 5 and 6 show that when the solar radiation was 10.99 MJ/m²/d, the daily average humidity 66%, the average ambient air temperature 28 °C and the daily average wind speed 9.5 km/hr, mirror angles of 55°, 50°, 45°, 38° and 35° with the horizontal gave evaporation rates of 2.56 mm, 4.1 mm, 6.9 mm, 11.62 mm and 14.06 mm, respectively, with one mirror, and 17.03 mm, 15 mm, 12.98 mm, 10.98 mm and 9.5 mm, respectively, with two reflector mirrors. This is an improvement of about 84.93%, 72.72% and 46.81% in the efficiency of the solar pond with two reflector mirrors compared with one reflector mirror for β = 55°, 50° and 45°, respectively, and a performance deficiency of about 5.47% and 32.16% for β = 38° and 35°, respectively (Tables 7 and 8). This means that reflectors play a vital role in the performance of solar ponds, harvesting much more solar energy by increasing the energy-harvesting area.

Table 7. The increase in evaporation rate using one mirror and two mirrors, compared with each other and with the no-mirror case
Mirror angle   One mirror (mm/day)   Two mirrors (mm/day)   Increase (%)
β = 35°        4                     4.5                    11.11
β = 38°        3                     5                      40
β = 45°        2.7                   4                      32.5
β = 50°        2.5                   3.6                    30.6
β = 55°        2.2                   3                      26.7

Without mirror (mm/day)   One mirror (mm/day)   Increase (%)
2.5                       4                     37.5
                          3                     14.3
                          2.7                   7.4

Without mirror (mm/day)   Two mirrors (mm/day)   Increase (%)
2.5                       4.5                    44.44
                          5                      50
                          4                      37.5
                          3.6                    30.6
                          3                      16.7

Table 8.
Solar pond area required when using one mirror and two mirrors, expressed as a percentage of the baseline area (for a fixed brine volume, the required area scales inversely with the evaporation rate)
Mirror angle   One mirror (mm/day)   Two mirrors (mm/day)   Relative area (%)
β = 35°        4                     4.5                    88.89
β = 38°        3                     5                      60
β = 45°        2.7                   4                      67.5
β = 50°        2.5                   3.6                    69.4
β = 55°        2.2                   3                      73.3

Without mirror (mm/day)   One mirror (mm/day)   Relative area (%)
2.5                       4                     62.5
                          3                     83.33
                          2.7                   92.6

Without mirror (mm/day)   Two mirrors (mm/day)   Relative area (%)
2.5                       4.5                    55.6
                          5                      50
                          4                      62.5
                          3.6                    69.44
                          3                      83.33

V. Conclusions

In this article the brine characteristics were studied using a shallow solar pond (SSP). The main input parameters for the three scenarios (solar pond without a mirror, with one mirror and with two mirrors) were solar radiation, ambient air temperature, relative humidity and wind speed. The method described here covers one technique for evaporating brine, solar evaporation, which requires no fuel but may take days or weeks and is limited to geographic areas with high evaporation and little precipitation. At the end of the experiments the following conclusions were drawn: the evaporation rate increases as the mirror's angle with the horizontal decreases; the evaporation rate increases gradually with increasing solar radiation when two reflector mirrors are used; reflectors play a vital role in the performance of solar ponds, harvesting much more solar energy by increasing the energy-harvesting area; experimental and theoretical model results for the solar pond without a mirror, with one reflector mirror and with two reflector mirrors are in good agreement with each other; and mirrors are very effective when used as reflectors, with the best pond performance achieved when the mirrors are employed as reflectors.
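The area percentages in Table 8 follow from the evaporation rates in Table 7: for a fixed daily brine volume, the required pond area scales inversely with the evaporation rate. A one-line sketch:

```python
def relative_pond_area_pct(e_baseline, e_improved):
    """Pond area needed with the improved setup as a percentage of the
    baseline area, assuming a fixed daily brine volume (A ~ 1/E)."""
    return 100.0 * e_baseline / e_improved

# No mirror (2.5 mm/day) vs one mirror at beta=35 (4 mm/day), as in Table 8:
ratio = relative_pond_area_pct(2.5, 4.0)  # 62.5 % of the original area
```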
Using one mirror and two mirrors reduced the solar pond area and hence the area needed for brine evaporation at Gaza Strip desalination plants.

VI. References

[1] Mogheir Y. and Al Bohissi N., "Optimal Management of Brine from Seawater Desalination Plants in Gaza Strip: Deir Al Balah STLV Plant as Case Study", Journal of Environmental Protection, 6, 2015, pp. 599-608.
[2] Truesdall J., Mickley M. and Hamilton R., "Survey of membrane drinking water disposal methods", Desalination, 102, 1995, pp. 93-105.
[3] Koening L., R&D Progress Report No. 20, Office of Saline Water, Washington, DC, 1958.
[4] "Palestinian Energy Authority." [Online]. Available: http://pea-pal.tripod.com/. [Accessed: 20-Jun-2013].
[5] Alaydi D., "The solar energy potential of Gaza Strip", Glob. J. Res., vol. 11, no. 7, 2011.
[6] Duffie J. A. and Beckman W. A., Solar Engineering of Thermal Processes, 3rd edn., New Jersey: John Wiley & Sons, 2006.
[7] Garg H., Advances in Solar Energy Technology: Collection and Storage Systems, v. 1, Dordrecht: D. Reidel Publishing Company, 1987.
[8] D'Amelio L., "Thermal machines for the conversion of solar energy into mechanical power", UN Conference on New Sources of Energy, Rome, 1961, p. 12.
[9] Brice D., "Saline water conversion", Advances in Chemistry Series, American Chemical Society, 38, 1963, pp. 190-199.
[10] Hodges C. N., Thompson T. L., Groh J. E. and Frieling D. H., "Solar distillation utilizing multiple-effect humidification", Office of Saline Water Research and Development Progress, 1966 (Report No. 194).
[11] Dickinson W. C. and Cheremisinoff P., Solar Energy Technology Handbook, New York: Marcel Dekker, 1980, p. 374.
[12] Solar Energy Laboratory at the Institute for Desert Research in Israel, "Solar ponds". Available at: http://www2.technion.ac.il/~ises/papers/israelsectionisesfinal.pdf (Accessed: Jan. 2011).
[13] Kreider J. F.
and Kreith F., Solar Heating and Cooling: Active and Passive Design, 2nd edn., London: Hemisphere, 1982, p. 284.
[14] Ali H., "Mathematical modelling of salt gradient solar pond performance", Energy Research, 10(4), 1986, pp. 377-384.
[15] Sebaii A., "Thermal performance of a shallow solar pond integrated with a baffle plate", Applied Energy, 81(1), 2005, pp. 81-33.
[16] Enersalt website, "Solar pond". Available at: http://www.enersalt.com.au/local%20publish/html/more_info.html (Accessed: Mar. 2011).
[17] Casamajor A. B. and Parsons R. E., "Design Guide for Shallow Solar Ponds", Lawrence Livermore Laboratory, Livermore, California, 1979 (UCRL-52385 Rev. 1).
[18] Bezir N. Ç., Dönmez O., Kayali R. and Özek N., "Numerical and experimental analysis of a salt gradient solar pond performance with or without reflective covered surface", 2008.
[19] Kayali R., Bozdemir S. and Kıymaç K., "A rectangular solar pond model incorporating empirical functions for air and soil temperatures", Solar Energy, 63, 1998, pp. 6-345.
[20] Kayali R., "Derivation of analytic functions for air and soil temperatures and usage of these functions in a computer model developed for solar ponds", J. Eng. Environ. Sci., 1993, pp. 17-65.

Journal of Engineering Research and Technology, Volume 3, Issue 2, June 2016

Quantities Predictor Model (QPM) Based on Artificial Neural Networks for Gaza Strip Building Contractors

Eyad Haddad

Abstract— The management of resources is an essential task in each construction company. The aim of this study is to develop a new technique for predicting the quantities of key construction materials (cement, reinforced steel and aggregate) for building projects in the Gaza Strip, through a model able to help the parties involved in construction projects (owner, contractors and others), especially contracting companies, decide whether to proceed with or leave a project. This model is built on artificial neural networks.
To build this model, quantitative and qualitative techniques were used to identify the significant parameters for predicting the quantities of the key construction materials (cement, steel, aggregate). A database covering 72 weeks was collected from the construction industry in the Gaza Strip. The ANN model considered eleven significant parameters as independent input variables affecting three dependent output variables, the passing quantities of cement, steel and aggregate per ton. NeuroSolutions software was used to train the models. The results of the trained models indicated that the neural network succeeded reasonably in predicting the quantities of the three key materials: the correlation coefficient (R) is 0.98, 0.99 and 0.97 for cement, reinforced steel and aggregate, respectively, indicating a good linear correlation between the actual values and the quantities estimated by the neural network. A sensitivity analysis showed that the "open crossings" factor has the highest influence on the total quantities of materials.

Index Terms— construction materials, artificial neural networks (ANN), NeuroSolutions.

I. Introduction

Buildings are part of the built environment in which many activities are performed. In many countries the building industry is a major economic driver; as such, facilities need to be constructed efficiently while meeting the aesthetic and functional requirements. One of the key factors affecting the successful delivery of construction projects is managing the construction resources well [1]. This study aimed at developing a new technique for predicting the quantities of key construction materials (cement, reinforced steel and aggregate) for building projects in the Gaza Strip based on artificial neural networks.
II. Artificial Neural Networks (ANNs)

There is no universally accepted definition of a neural network (NN), but most definitions are broadly similar. Swingler (1996) defined neural networks as "statistical models of real world systems which are built by tuning a set of parameters. These parameters, known as weights, describe a model which forms a mapping from a set of given values known as inputs to an associated set of values, the outputs" [7]. Artificial neural networks (ANNs), as the name suggests, are inspired by the biology of the brain's neurons. The human brain can perform a wide range of complex tasks relatively easily compared to computers. Therefore, researchers have looked for ways in which human intelligence can be incorporated into machines so that they can also perform certain complex tasks easily. ANNs resemble the human brain in two aspects: knowledge is acquired by the network through a learning process, and interneuron connection strengths known as synaptic weights are used to store the knowledge [2]. In the early stage of a project there is limited availability of information, and limited applicability of traditional methods that require precise knowledge of all parameters and their interrelations. Therefore, the researchers have worked to develop a new technique for predicting the quantities of key construction materials (cement, reinforced steel and aggregate) for building projects in the Gaza Strip. Weckman et al. (2010) note that the major benefit of an ANN is its ability to understand and simulate complex functions, including their dimensions, attributes and other factors. Regarding the structure of ANNs, they are inspired by the functionality and structure of the human brain, consisting of a set of neurons grouped in one or more hidden layers connected by means of synapse connections [4].
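As a concrete illustration of the weight-based input-to-output mapping in Swingler's definition, a single artificial neuron can be sketched in a few lines. This sketch is ours, not part of the paper; the tanh activation mirrors the TanhAxon transfer function mentioned later in the model architecture.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs (the
    'synaptic weights' that store the learned knowledge) passed
    through a tanh activation."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(total)

# Three inputs mapped to one output through a set of learned weights.
out = neuron([0.5, 1.0, -0.2], [0.8, -0.3, 0.5], bias=0.1)
```

Training a network amounts to tuning `weights` and `bias` for every neuron so that the overall mapping from inputs to outputs matches the data.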
The connections between neurons are called synapses and can have different levels of electrical conductivity, which is referred to as the weight of the connection. This network of neurons and synapses stores knowledge in a "distributed" manner: the information is coded as an electrical impulse in the neurons and is stored by changing the weight (i.e. the conductivity) of the connections [5].

III. NeuroSolutions 5.07 Application

Several applications support the establishment of neural networks, such as SPSS and MATLAB. In this research, the NeuroSolutions application was selected. As mentioned by NeuroDimension, Inc. (2012), NeuroSolutions is a premier neural network simulation environment that combines a modular, icon-based network design interface with advanced learning procedures and genetic optimization, and can perform cluster analysis, sales forecasting, sports predictions, medical classification and much more. It is:
- Powerful and flexible: the software makes it possible to build fully customizable neural networks or choose from numerous pre-built neural network architectures, and to modify the hidden layers, the number of processing elements and the learning algorithm (NeuroDimension, Inc., 2012).
- Easy to use: NeuroSolutions is an easy-to-use and intuitive neural network development tool for Microsoft Windows. It does not require any prior knowledge of neural networks and is seamlessly integrated with Microsoft Excel and MATLAB. NeuroSolutions also includes neural wizards so that both beginners and advanced users can easily get started (NeuroDimension, Inc., 2012).
IV. Factors Affecting the Quantities of Resources from Gaza Crossings

One of the most significant steps in building the neural network model is identifying the factors that have a real impact on the quantities of resources passing through the crossings. Given the importance of selecting these factors, several techniques were carefully adopted to identify them for Gaza Strip building projects, namely reviewing literature studies and applying the Delphi technique through expert interviews.

V. Delphi Technique

The Delphi technique was used to determine the factors affecting the quantities of resources passing through the crossings. It aims to achieve a convergence of opinion on the factors affecting those quantities. It provides feedback to experts in the form of distributions of their opinions and reasons; the experts are then asked to revise their opinions in light of the information contained in the feedback. This sequence of questionnaire and revision is repeated until no further significant opinion changes are expected [16]. The Delphi process is conducted over several rounds. The first round begins with an open-ended questionnaire, which serves as the cornerstone for soliciting specific information about a content area from the Delphi subjects. After receiving the responses, the researcher converts the collected information into a well-structured questionnaire to be used as the survey instrument for the second round of data collection. In the second round, each Delphi participant receives the second questionnaire and is asked to review the items summarized by the investigators based on the information provided in the first round; in this round, areas of disagreement and agreement are identified. In the third round, each Delphi panelist is asked to revise his or her judgments or to specify the reasons for remaining outside the consensus.
In the fourth and often final round, the list of remaining items, their ratings, minority opinions and items achieving consensus are distributed to the panelists. This round provides a final opportunity for participants to revise their judgments. Accordingly, the number of Delphi iterations depends largely on the degree of consensus sought by the investigators and can vary from three to five [17]. Five experts in the construction field were selected to reach a consensus on the key parameters. The results from those five experts were significantly close to the questionnaire results, and only three rounds were conducted because of the high degree of consensus. The experts proposed to exclude retaining walls and curtain walls from the factors because of their rarity in Gaza's projects.

VI. Model Development

A. Introduction

A neural network training program, NeuroSolutions, was used as a standalone environment for neural network development and training. Moreover, to verify this work, an extensive trial-and-error process was performed to obtain the best model architecture. The following sections present the steps performed to design the artificial neural network model, the limitations of the adopted model, and finally the discussion and analysis of the results.

B. Model Limitations

In spite of the great accuracy of using ANNs in construction material prediction, they have a considerable drawback: they depend mainly on historical data. This dependency has several disadvantages:
- The diversity of values for the effective factors is limited to what is available in the collected data.
- New values that were not included in the adopted model will not be handled.
Therefore, in this study most of the construction variables used in the Gaza Strip were included, except those without enough frequency. After analyzing the collected data, it was found that some limitations on the input parameters should be assigned to give the best output.
Table 1 illustrates the available ranges of the input data in the ANN model; for example, the number of crossings is from one to three.

Table 1: Input limitations in the model

Model variable | Minimum value | Maximum value
Number of crossings | 1 | 3
Maximum capacity of one crossing (trucks) | 0 | 700

C. Model Building

Several types of ANN software can be used to predict future values based on past data, such as SPSS, MATLAB and NeuroSolutions. Many researchers have used the NeuroSolutions application to build their neural networks and achieved good performance (Edara, 2003; Gunaydin & Dogan, 2004; Bouabaz & Hamami, 2008; Dowler, 2008; Attal, 2010; Wang et al., 2012). The model developed in this research is based on NeuroSolutions 5.07 for Excel. It was selected for its ease of use, speed of training, and flexibility in building and executing the NN model. In addition, the modeler has the flexibility to specify the neural network type, learning rate, momentum, activation functions, number of hidden layers/neurons, and graphical interpretation of the results. Finally, it offers multiple criteria for training and testing the model.

D. Data Encoding

Artificial networks only deal with numeric input data. Therefore, the raw data must often be converted from the external environment to numeric form (Kshirsagar & Rathod, 2012) [18]. This may be challenging because there are many ways to do it and, unfortunately, some are better than others for neural network learning [19]. In this research the data were converted to numeric form as shown in Table 2, which presents the data organization for the eleven factors.

1) Data Organization

Initially, the first step in implementing the neural network model in the NeuroSolutions application is to organize the NeuroSolutions Excel spreadsheet.
Then the input factors that have already been encoded are specified. They consist of eleven factors: the number of opened crossings, the percentage of closed time, the amount of the first payment, the type of project, the value of the NIS in dollars (e.g. 1$ = 3.55 NIS), transportation fees, taxes, the needed quantities of cement in tons, the needed quantities of reinforced steel in tons, the needed quantities of aggregate in tons, and labour wages. The desired parameters (outputs) are the quantities of cement, steel and aggregate that arrive through the crossings.

Table 2: Inputs/output encoding

No. | Input factor | Encoding | Code
1 | Number of opened crossings (e.g. Tunnels, Rafah, Karm Abo Salem) | number form | 1, 2 or 3
2 | Percentage of closed time | percentage | %
3 | Amount of first payment | complex / middle / easy | 1 / 2 / 3
4 | Type of project | international / governmental / people & special | 1 / 2 / 3
5 | Value of NIS in dollars | number form | e.g. 3.55 NIS
6 | Transportation fees | more than normal / normal / less than normal | 1 / 2 / 3
7 | Taxes | more than normal / normal / less than normal | 1 / 2 / 3
8 | Needed quantities of cement per week | number form | tons
9 | Needed quantities of reinforced steel per week | number form | tons
10 | Needed quantities of aggregate per week | number form | tons
11 | Labour wages | more than normal / normal / less than normal | 1 / 2 / 3

No. | Output parameter | Encoding | Code
1 | Quantities of cement passing from all crossings per week | number form | tons
2 | Quantities of steel passing from all crossings per week | number form | tons
3 | Quantities of aggregate passing from all crossings per week | number form | tons

2) Data Sets

The available data were divided into three sets: a training set, a cross-validation set and a test set.
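The encoding of Table 2 can be sketched as follows. This is a minimal illustration of our own; the dictionary keys and function name are assumptions, and only a subset of the eleven factors is shown.

```python
# Three-level categorical factors (taxes, transportation fees, labour
# wages) are mapped to 1/2/3 as in Table 2; quantities and the NIS
# rate stay in plain number form.
LEVELS = {"more than normal": 1, "normal": 2, "less than normal": 3}

def encode_week(record):
    """Convert one weekly record into part of the numeric input
    vector expected by the network."""
    return [
        record["opened_crossings"],      # already numeric (1 to 3)
        record["closed_time_pct"],       # percentage of closed time
        record["nis_per_dollar"],        # e.g. 1$ = 3.55 NIS
        LEVELS[record["taxes"]],         # categorical -> 1/2/3
        LEVELS[record["labour_wages"]],  # categorical -> 1/2/3
    ]

vec = encode_week({"opened_crossings": 2, "closed_time_pct": 10,
                   "nis_per_dollar": 3.55, "taxes": "normal",
                   "labour_wages": "less than normal"})
```

The same pattern extends to the remaining factors of Table 2.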
The training and cross-validation sets are used in learning the model: the training set is used to modify the network weights to minimize the network error, and this error is monitored with the cross-validation set during the training process. The test set does not enter into the training process and has no effect on it; it is used to measure the generalization ability of the network and evaluate network performance [15]. In the present study, the total available data of 72 exemplars was divided randomly, following previous literature, into three sets with the following ratios: a training set of 54 exemplars (≈ 75%), a cross-validation set of 10 exemplars (≈ 14%) and a test set of 8 exemplars (≈ 11%).

3) Building the Network

Once all data were prepared, the subsequent step is to create the initial network by selecting the network type, the number of hidden layers/nodes, the transfer function, the learning rule, and the number of epochs and runs. Before the model became ready, a supervised learning control was checked to specify the maximum number of epochs and the termination limits. Figure 1 presents the initial network, a multilayer perceptron (MLP) network that consists of one input layer, one hidden layer and one output layer. Before starting the training phase, the training data were normalized, which is recognized to improve the performance of trained networks; as shown in Figure 2, the NeuroSolutions program scaled the data to the range 0 to +0.9.

Figure 2: Selecting the normalization limits of the data

4) Model Training

The objective of training a neural network is to obtain a network that performs best on unseen data, by training many networks on a training set and comparing their errors on the validation set [20].
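The 54/10/8 split and the 0-to-0.9 normalization described above can be sketched as follows (the function names and the use of a fixed random seed are our assumptions, not from the paper):

```python
import random

def split_data(exemplars, seed=0):
    """Shuffle the 72 exemplars and split them into training (54,
    ~75%), cross-validation (10, ~14%) and test (8, ~11%) sets."""
    data = list(exemplars)
    random.Random(seed).shuffle(data)
    return data[:54], data[54:64], data[64:]

def normalize(column, upper=0.9):
    """Min-max scale one column of raw values into [0, upper],
    matching the 0 to +0.9 range selected in NeuroSolutions."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) * upper for v in column]
```

Normalizing each input and output column separately keeps every value inside the active region of the tanh-type transfer function.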
Therefore, several network parameters, such as the number of hidden layers, the number of hidden nodes, the transfer functions and the learning rules, were varied over multiple training runs to produce the best weights for the model. As a preliminary step to filter out the preferable neural network type, a test process was applied to most of the network types available in the application. Two types, the multilayer perceptron (MLP) and the generalized feed-forward (GFF) network, were chosen as the focus of the subsequent training process due to their good initial results. It is worth mentioning that previous models applied to quantity estimation with neural networks used these two network types because they gave the best outcomes. The training procedure to obtain the model with the best weights and minimum error percentage starts with selecting the neural network type, either MLP or GFF. For each type, five learning rules were used; with every learning rule, six transfer functions were applied; and a single hidden layer was utilized with the number of hidden nodes incremented from 1 up to 40. In other words, 2 network types × 5 learning rules × 6 transfer functions × 40 hidden-node counts, i.e. some 2,400 trials, were executed to obtain the best neural network model. Figure 3 illustrates the training variables for one trial, comprising the number of epochs, runs, hidden nodes and other training options.
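The search space described above can be enumerated as follows. This is a sketch; the particular learning-rule and transfer-function names are placeholders, since the paper does not list them.

```python
import itertools

NETWORK_TYPES = ["MLP", "GFF"]
LEARNING_RULES = ["rule_%d" % i for i in range(1, 6)]  # 5 learning rules
TRANSFER_FUNCS = ["func_%d" % i for i in range(1, 7)]  # 6 transfer functions
HIDDEN_NODES = range(1, 41)                            # 1 to 40 hidden nodes

def all_trials():
    """Enumerate every training configuration:
    2 types x 5 rules x 6 functions x 40 node counts = 2400 trials."""
    return list(itertools.product(NETWORK_TYPES, LEARNING_RULES,
                                  TRANSFER_FUNCS, HIDDEN_NODES))

trials = all_trials()
```

Each tuple in `trials` corresponds to one training run of the kind shown in Figure 3; the configuration with the lowest cross-validation error is kept.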
Figure 1: Multilayer perceptron (MLP) network

Figure 3: Training options in the NeuroSolutions application

Ten runs of 3000 epochs each were applied, where a run is a complete presentation of 3000 epochs and an epoch is one complete presentation of all of the data [19]. In each run, new weights were applied in the first epoch and then the weights were adjusted to minimize the error percentage in the remaining epochs. To avoid overtraining the network during the training process, the cross-validation option was selected, which computes the error on a cross-validation set at the same time that the network is being trained with the training set. The model was started with one hidden layer and one hidden node, in order to begin with a simple architecture, and then the number of hidden processing elements was increased by one node at a time up to 40 hidden nodes.

5) Model Results

As mentioned above, the purpose of the testing phase of the ANN model is to ensure that the developed model was successfully trained and that generalization is adequately achieved. Through a process of trial and error, the best model that provided accurate quantity estimates without being overly complex was a multilayer perceptron (MLP) with one input layer of 11 input neurons, one hidden layer of 22 hidden neurons, and one output layer of three output neurons (the passing quantities of cement, reinforced steel and aggregate through the Gaza Strip crossings). However, the main downside of the multilayer perceptron structure is that it requires more nodes and more training epochs to achieve the desired results. Table 3 summarizes the architecture of the model.

Table 3: Architecture of the model

6) Results Analysis

The testing dataset was used to assess generalization, that is, the ability to produce good output for unseen examples.
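The cross-validation guard against overtraining can be sketched as an early-stopping loop. The `patience` threshold and the callback structure are our assumptions; NeuroSolutions performs the equivalent monitoring internally.

```python
def train_with_early_stopping(train_step, cv_error, max_epochs=3000,
                              patience=100):
    """Run up to max_epochs; after each epoch measure the error on the
    cross-validation set, remember the best epoch, and stop early once
    the CV error has not improved for `patience` epochs."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(1, max_epochs + 1):
        train_step()        # one presentation of the training set
        err = cv_error()    # error on the held-out cross-validation set
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break           # overtraining: CV error stopped improving
    return best_err, best_epoch
```

For example, with a CV error that falls for five epochs and then rises, training halts `patience` epochs after the minimum, and the weights from the best epoch are kept.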
Data from 8 cases were used for testing purposes. The NeuroSolutions test tool was used to test the adopted model with the adopted weights. Tables 4 and 5 present the results of these 8 cases, comparing the real quantities of the tested cases with the quantities estimated by the neural network model; the absolute errors, in both tons and percent, are also presented.

Table 4: Results of the neural network model at the testing phase

Case | Actual cement (ton) | Actual steel (ton) | Actual aggregate (ton) | Estimated cement (ton) | Estimated steel (ton) | Estimated aggregate (ton)
1 | 13990 | 2260 | 45325 | 13066 | 2221 | 43458
2 | 13090 | 1850 | 45300 | 13066 | 2221 | 43458
3 | 11660 | 3185 | 31618 | 11552 | 3081 | 29817
4 | 13090 | 2280 | 45586 | 13066 | 2221 | 43458
5 | 56928 | 23878 | 121032 | 55589 | 23566 | 120774
6 | 50122 | 20143 | 130620 | 50062 | 20117 | 123818
7 | 57600 | 22360 | 121503 | 57283 | 22292 | 131209
8 | 38604 | 14511 | 90656 | 45681 | 17483 | 101933

Table 5: Absolute errors of the neural network model at the testing phase

Case | AE cement (ton) | AE steel (ton) | AE aggregate (ton) | AE cement (%) | AE steel (%) | AE aggregate (%)
1 | 924.33 | 39.28 | 1867.10 | 7 | 2 | 4
2 | 24.33 | 370.72 | 1842.10 | 0 | 20 | 4
3 | 108.00 | 104.23 | 1801.45 | 1 | 3 | 6
4 | 24.33 | 59.28 | 2128.10 | 0 | 3 | 5
5 | 1339.27 | 312.07 | 257.05 | 2 | 1 | 0
6 | 59.34 | 25.39 | 6802.27 | 0 | 0 | 5
7 | 316.90 | 68.38 | 9705.52 | 1 | 0 | 8
8 | 7077.46 | 2971.88 | 11277.49 | 18 | 20 | 12

- Minimum absolute error: the smallest absolute errors over the test cases in Table 5 are 59.3 tons for cement, 25.4 tons for reinforced steel and 257.1 tons for aggregate (see also Table 6), which is acceptable for the Gaza Strip construction industry. However, absolute error alone is not a significant indicator of model performance, because it ignores project size: the same error may be negligible for a large project yet represent a large margin of error for a small one.

- Mean absolute percentage error: the mean absolute percentage error (MAPE) of the model, calculated from the test cases in Table 5, equals 4% for cement and 6% for both steel and aggregate. This can be expressed as accuracy performance (AP), defined by Wilmot and Mei (2005) as (100 − MAPE)%: AP = 100% − 6% = 94% for steel and aggregate, and 100% − 4% = 96% for cement. That means the accuracy of the adopted model in estimating the quantities of these important construction materials is 94% for steel and aggregate and 96% for cement, a good result, especially given the many obstacles facing the construction industry of the Gaza Strip.

- Correlation coefficient (R): regression analysis was used to ascertain the relationship between the estimated and actual quantities. The results of the linear regression are illustrated in Table 6. The correlation coefficient (R) is 0.98, 0.99 and 0.97 for cement, reinforced steel and aggregate respectively, indicating a good linear correlation between the actual values and the quantities estimated by the neural network.
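The error measures above can be reproduced directly from Table 4. The sketch below (ours, not from the paper) computes MAE, MAPE and the Wilmot-and-Mei accuracy performance AP = 100 − MAPE for the cement column:

```python
def error_metrics(actual, estimated):
    """Return (MAE, MAPE in %, AP in %) for paired actual/estimated
    values; AP = 100 - MAPE per Wilmot and Mei (2005)."""
    n = len(actual)
    abs_errors = [abs(a - e) for a, e in zip(actual, estimated)]
    mae = sum(abs_errors) / n
    mape = sum(err / a for err, a in zip(abs_errors, actual)) / n * 100
    return mae, mape, 100 - mape

# Cement quantities (tons) for the 8 test cases of Table 4.
actual_cement = [13990, 13090, 11660, 13090, 56928, 50122, 57600, 38604]
estimated_cement = [13066, 13066, 11552, 13066, 55589, 50062, 57283, 45681]
mae, mape, ap = error_metrics(actual_cement, estimated_cement)
# MAPE rounds to the 4% reported for cement, so AP is close to 96%.
```

The same call on the steel and aggregate columns reproduces the 6% MAPE figures reported above.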
Table 6: Results of the performance measurements

Performance | Cement (ton) | Steel (ton) | Aggregate (ton)
MSE | 16716008.43 | 2525194.80 | 113330607.53
NMSE | 0.05 | 0.03 | 0.09
MAE | 2999.60 | 971.88 | 8594.01
Min abs. error | 59.34 | 25.39 | 257.05
Max abs. error | 7610.85 | 3251.87 | 21053.52
R | 0.98 | 0.99 | 0.97

The results of the performance measures are presented in Table 6, where the correlation coefficients of the adopted model are 0.98, 0.99 and 0.97. Figure 4 compares the actual quantities with the estimated quantities for the test dataset; only a slight difference between the two quantity lines is noted.

Figure 4: Comparison between the desired output and the actual network output for the test set

7) Sensitivity Analysis

Sensitivity analysis was carried out with the NeuroSolutions tool to evaluate the influence of each input parameter on the output variables, in order to understand the significance of the input parameters for the model output. Figure 5 presents the sensitivity analysis results for each input parameter.

Figure 5: Sensitivity about the mean

A larger standard deviation indicates a stronger influence of that parameter on the overall quantities; Figure 5 shows that the "open crossings" factor has the highest influence on the total quantities of materials.

8) User Interface Building

After testing the QPM model using varied projects, and given that the results of the sensitivity analysis were logical, the model can be generalized. For further convenience, a Visual Basic interface was developed to facilitate data entry for the model.
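The "sensitivity about the mean" computation can be sketched as follows. This is our illustration of the general idea; the ±10% sweep and the step count are assumptions, not documented NeuroSolutions parameters.

```python
def sensitivity_about_mean(predict, means, index, spread=0.1, steps=10):
    """Vary input `index` about its mean while holding the other
    inputs at their means, and return the standard deviation of the
    network output; a larger value means a stronger influence."""
    outputs = []
    for k in range(steps + 1):
        x = list(means)
        # sweep from (1 - spread) to (1 + spread) times the mean
        x[index] = means[index] * (1 - spread + 2 * spread * k / steps)
        outputs.append(predict(x))
    mean_out = sum(outputs) / len(outputs)
    variance = sum((o - mean_out) ** 2 for o in outputs) / len(outputs)
    return variance ** 0.5

# A toy 'network' in which the first input matters three times as much:
model = lambda x: 3 * x[0] + x[1]
s0 = sensitivity_about_mean(model, [10.0, 5.0], 0)
s1 = sensitivity_about_mean(model, [10.0, 5.0], 1)
# s0 > s1: the first input has the stronger influence on the output.
```

Repeating this for each of the eleven inputs and plotting the standard deviations gives a chart of the kind shown in Figure 5.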
This interface provides the user with alternative options for the nine input parameters that describe the project.

8.1 Custom Solution Wizard (CSW)

The Custom Solution Wizard is a tool that takes an existing neural network created with NeuroSolutions and automatically generates and compiles a dynamic link library (DLL). This allows the programmer to easily incorporate neural network models into other NeuroDimension products and other applications, such as Visual Basic (VB) [21]. While using the wizard to create the DLL, it gives the option of creating a shell for any of the following programming environments: Visual Basic, Visual C++, Microsoft Excel, Microsoft Access, and Active Server Pages (Developers level only). Each shell provides a sample application along with source code to give the programmer a starting point for integrating the generated DLL into the desired application. The generated neural network DLL provides a simple protocol for assigning the network input and producing the corresponding network output [21].

8.2 Generating the DLL

The following steps were followed to generate the DLL file using the CSW tool:
- Choose breadboard type. After running the CSW, the first panel that appears is the "Choose Breadboard Type" panel. This panel gives the option of using the active NeuroSolutions breadboard or opening an existing NeuroSolutions breadboard for generating the neural network DLL. The active NeuroSolutions breadboard shown in Figure 6 is the breadboard of the best model, which was chosen to create the DLL.

Figure 6: Choose breadboard type

- Choose project type. In this step, as shown in Figure 7, the panel allows choosing the type of project with which the programmer prefers to use the generated DLL. In this research, Visual Basic 6 was chosen.

Figure 7: Choose project type

After the network DLL had been created, the Custom Solution Wizard created a project shell in the Visual Basic 6 format.
The shell is provided as a guide to help the programmer get started with developing a custom application using the generated neural network DLL.

8.3 Visual Basic Interface

The NeuroSolutions object library is NeuroSolutionsOL.dll, which is installed in the Windows\System or WinNT\System32 directory (depending upon the operating system) during the Custom Solution Wizard installation. The NeuroSolutions object library provides a simple protocol (made up of properties and methods) for communicating with neural network DLLs generated by the Custom Solution Wizard. This protocol makes it extremely easy to use the generated network DLLs from within an application. The object library allows creating NeuroSolutions recall-network neural network objects. The easiest way to build a Visual Basic application that uses a neural network DLL is to start with a project shell. A VB project shell was generated automatically, as described in the previous section, by choosing the VB project type on the "Choose Project Type" panel during the creation of the neural network DLL. This creates a sample application (with source code) that loads the DLL and allows training the network and obtaining the network's output. The main aim of the VB application interface for the QPM model is to facilitate data entry for the model. Therefore, the interface was drawn using VB buttons (see Figure 8) and the VB code was written into the VB shell code panel.

Figure 8: The VB interface for the QPM model
9) Conclusion

- The Quantities Predictor Model (QPM) gives the contractor a mechanism to decide whether or not to go ahead with a construction project, according to the predicted quantities of the key materials (cement, steel and aggregate).
- Developing the ANN model passed through several steps, starting with selecting the application used to build the model. The NeuroSolutions 5.07 program was selected for its efficiency in several previous studies, in addition to its ease of use and of extracting results. The data sets were encoded and entered into an MS Excel spreadsheet to start the training process for the different models.
- Many models were built, but the best model that provided the most accurate results was a multilayer perceptron (MLP) network structured with one input layer of 11 input neurons, one hidden layer of 22 hidden neurons, three output neurons, the TanhAxon transfer function, and the momentum learning rule.
- The correlation coefficients (R) of the adopted model were 0.98, 0.99 and 0.97 for cement, reinforced steel and aggregate respectively, indicating a good linear correlation between the actual values and the quantities estimated by the neural network.
- Sensitivity analysis was performed using the NeuroSolutions tool to study the influence of the adopted factors on the predicted quantities of the key materials (cement, steel and aggregate). The sensitivity of each input factor is based on determining the standard deviation of the output with respect to that factor. The results were generally logical, with the number of opened crossings having the highest influence.
- According to the Quantities Predictor Model (QPM), the results show that the quantities of the key materials (cement, steel and aggregate) passing through all Gaza Strip crossings can be affected by several factors, such as the number of opened crossings, the percentage of closed time, the amount of the first payment, the type of project, the value of the NIS in dollars, transportation fees and taxes.
- Some assumptions and limitations were made in the study according to the available collected data. These limitations include, first, the number of crossings (1 to 3), assumed because the other crossings are not used for materials, and second, the maximum capacity of one crossing (700 trucks).

References

[1] Harmon, K.M., 2003. Conflicts between owner and contractors: proposed intervention process. Journal of Management in Engineering, 19(3), pp. 121-125.
[2] Edara, P., 2003. Mode choice modeling using artificial neural networks. Master thesis in civil engineering, Virginia Polytechnic Institute and State University, Virginia.
[3] Cheng, M.-Y., Tsai, H.-C. & Sudjono, E., 2010. Conceptual cost estimates using evolutionary fuzzy hybrid neural network for projects in construction industry. Expert Systems with Applications, 37, pp. 4224-4231.
[4] Weckman, G. et al., 2010. Using neural networks with limited data to estimate manufacturing cost. Journal of Industrial and Systems Engineering, 3(4), pp. 257-274.
[5] Cavalieri, S., Maccarrone, P. & Pinto, R., 2004. Parametric vs. neural network models for the estimation of production costs: a case study in the automotive industry. Int. J. Production Economics, 91, pp. 165-177.
[6] Elsawy, I., Hosny, H. & Abdel Razek, M., 2011. A neural network model for construction projects site overhead cost estimating in Egypt. International Journal of Computer Science Issues (IJCSI), 8(1).
[7] Swingler, K., 1996. Applying Neural Networks: A Practical Guide. 3rd ed. Morgan Kaufmann.
[8] Bouabaz, M.
& Hamami, M., 2008. A cost estimation model for repair bridges based on artificial neural network. American Journal of Applied Sciences, 5(4), pp. 334-339.
[9] Haykin, S., 1999. Neural Networks: A Comprehensive Foundation. 2nd ed. New Jersey: Prentice-Hall.
[10] Kim, G.-H., An, S.-H. & Kang, K.-I., 2004. Comparison of construction cost estimating models based on regression analysis, neural networks, and case-based reasoning. Building and Environment, 39, pp. 1235-1242.
[11] Hegazy, T. & Ayed, A., 1998. Neural network model for parametric cost estimation of highway projects. Journal of Construction Engineering and Management, pp. 210-218.
[12] Sodikov, J., 2005. Cost estimation of highway projects in developing countries: artificial neural network approach. The Eastern Asia Society for Transportation Studies, 6, pp. 1036-1047.
[13] Willmott, C. & Matsuura, K., 2005. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 30, pp. 79-82.
[14] Gunaydin, M. & Dogan, Z., 2004. A neural network approach for early cost estimation of structural systems of buildings. International Journal of Project Management, 22, pp. 595-602.
[15] Arafa, M. & Alqedra, M., 2011. Early stage cost estimation of buildings construction projects using ANN. Journal of Artificial Intelligence, 4(1), pp. 63-75.
[16] Creedy, G.D., Skitmore, M. & Sidwell, T., 2006. Risk factors leading to cost overrun in the delivery of highway construction projects. Australia: Research Centre, School of Urban Development.
[17] Hsu, C., 2007. The Delphi technique: making sense of consensus. Practical Assessment, Research & Evaluation, 12(10).
[18] Kshirsagar, P. & Rathod, N., 2012. Artificial neural network.
Eyad Haddad is an Assistant Professor in the Civil Engineering Department, Faculty of Engineering, University of Palestine, Gaza, Gaza Strip.

Journal of Engineering Research and Technology, Volume 4, Issue 2, June 2017

Hybrid FLC/BFO Controller for Output Voltage Regulation of Zeta Converter

H. Elaydi and M. Alsbakhi

Abstract— Renewable energy sources are usually connected to the power grid via power converters. Zeta converters are very important for microgrid and smart grid applications. The objective of this paper is to design a Mamdani fuzzy logic controller (FLC) and a hybrid fuzzy logic controller with the bacterial foraging optimization algorithm (FLC/BFO) to improve and regulate the output voltage response of a Zeta converter operating in continuous conduction mode (CCM) against disturbances such as changes in the voltage source or the load. Simulations of the open-loop system, the closed-loop fuzzy logic controller, and the hybrid FLC/BFO controller were analyzed and compared for different output voltages and for different working conditions. The results show a significant improvement for the proposed FLC/BFO controller. The designs and simulations were performed in the MATLAB/Simulink environment, and the results were compared with earlier results obtained using the particle swarm optimization (PSO) algorithm.

Index Terms— Bacterial foraging optimization algorithm, continuous conduction mode, fuzzy logic controller, renewable energy sources, Zeta converter.

I. Introduction

Power systems produce electricity depending on load demands.
Over the years, load demands have increased in developed countries. Conventional energy sources have limitations on reliability of supply, and they cause environmental pollution, global warming, and the risk of nuclear accidents; thus, a need for renewable energy sources was born [1]. Such sources include wind, solar, hydro, and geothermal [2]. Renewable energy sources are usually connected to power grids via power converters. The choice of the appropriate inverter topology depends on many factors, such as the type of the renewable energy source and the total amount of power to be handled [3]. DC-DC Zeta converters are one type of converter used to interface renewable energy sources to the grid. Moreover, Zeta converters are used in many applications, such as supplying suitable DC voltage to modern portable electronic equipment not directly connected to the AC mains, power quality improvement, power factor correction, and industrial applications. In this paper, we use a Zeta converter for converting and supplying a suitable DC voltage from a DC voltage source to a load.

In June 2010, Vuthchhay and Bunlaksananusorn used a linearized model of the Zeta converter in CCM mode to regulate the output voltage against disturbances [4]. In order for the output voltage to meet a desired value, a PWM feedback controller was used, and then a PI controller was added to improve the system response. In 2011, Moaveni et al. presented a model reference adaptive controller (MRAC) with back-propagation neural networks (NN) to control the output voltage of the Zeta converter operating in CCM [5]. In 2012, Izadian et al. implemented a model reference adaptive controller (MRAC) for the Zeta converter operating in CCM mode for output voltage tracking [6]. In 2013, Sarkawi et al. studied the Zeta converter operating in CCM mode to regulate the output voltage using a full-state feedback controller [7]. They presented the system model using the SSA technique.
The small-signal linear model considered two inputs to the system: the input voltage and the load current. The feedback gain matrix K was found by two methods: the pole placement method and the linear quadratic regulator (LQR). They found that LQR gave better results than pole placement, because LQR finds the optimal control effort; however, their system model was complicated. In June 2014, Ahmad and Sultan studied the Zeta converter operating in CCM mode to improve its output voltage and to control it under different working conditions or disturbances, such as changes in the load resistance or the input voltage [8]. A fuzzy logic controller (FLC) and an FLC with particle swarm optimization, known as a hybrid FLC/PSO controller, were presented to achieve the control goal. They compared the results of the open-loop system with the FLC and the FLC/PSO and concluded that the FLC/PSO produced the best results. Their work is one of the few reported in the literature to present a hybrid FLC/PSO controller, which reduced the system modeling of the Zeta converter for controlling the output voltage; however, the output response had small ripples.

This paper presents the design of a fuzzy logic controller (FLC) to reduce the control complexity of the Zeta converter system in CCM mode and to improve its performance under different working conditions such as load and voltage source disturbances. The designs and simulations were performed in the MATLAB/Simulink environment. The main contribution is the use of a new optimization algorithm, the bacterial foraging optimization algorithm (BFOA), to improve the FLC performance by optimizing its scaling gains, resulting in the design of a hybrid FLC/BFO controller [9].
The effectiveness of the BFO algorithm is demonstrated via the improvement of the FLC performance for different working conditions. This paper is organized as follows: Section 2 presents the Zeta converter and its modeling using the SSA technique. Section 3 presents the fuzzy logic control design. Section 4 presents the bacterial foraging optimization algorithm (BFOA) as the optimization method used to obtain the best FLC performance. Section 5 presents the hybrid FLC/BFO controller design. Section 6 presents the results and discussion. Section 7 concludes the paper and presents future work. Section 8 presents the references.

II. Zeta Converter and Its Modeling

The Zeta converter is a 4th-order nonlinear DC-DC converter [10]. As shown in Figure 1, it has two inductors, each with a DC resistance (DCR); two capacitors, each with an equivalent series resistance (ESR); and a diode. The Zeta converter can operate in step-up or step-down mode to supply a load. The input to the Zeta converter is a DC voltage, and the circuit has an operating switch (MOSFET). Zeta converters may operate in one of two operating modes: the continuous current mode (CCM) and the discontinuous current mode (DCM). Within one switching period T, CCM offers two circuit states while DCM offers three circuit states. This paper focuses on CCM. Figure 2 illustrates the difference between CCM and DCM in the on and off states [11]. The modeling of the converter is represented as a state-space model. As explained in the next section, the overall model is obtained by the state-space averaging (SSA) technique from two state-space models, by calculating the weighted average of the two sets of equations using the nominal fractions of time spent in each circuit state as the weights.

A. Description of Each Circuit State

When the MOSFET switch is on, the diode is reverse biased and thus open-circuited, as shown in Figure 3.
In this state, the inductors $L_1$ and $L_2$ are charging, and the inductor currents increase linearly. The second state is when the MOSFET switch is off: the diode is forward biased and thus short-circuited, as shown in Figure 4. In this state, the inductors are discharging; the energies in $L_1$ and $L_2$ are discharged to the capacitors $C_1$ and $C_2$ (the output parts), respectively, and the inductor currents decrease linearly.

Figure 1: Zeta converter circuit.
Figure 2: Inductor current waveforms in (a) CCM mode and (b) DCM mode.
Figure 3: The equivalent Zeta converter circuit when the switch is on.
Figure 4: The equivalent Zeta converter circuit when the switch is off.

To ensure that the inductor currents increase and decrease linearly (i.e., that the converter stays in CCM), the inductances must satisfy the following conditions (written here for the ideal converter; the full conditions including the parasitic resistances are given in [4]):

$$L_1 \ge \frac{(1-d)^2 R}{2 d f}, \qquad L_2 \ge \frac{(1-d) R}{2 f} \tag{1}$$

where $f$ is the switching frequency and $d$ is the duty cycle of the switch.
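The CCM conditions above are easy to check numerically. The following is a minimal Python sketch (the paper's own implementation is in MATLAB/Simulink; the function names and the use of the ideal-converter limits are assumptions for illustration):

```python
def l1_critical(d, r, f):
    """Critical input inductance for CCM: L1 >= (1-d)^2 * R / (2*d*f)."""
    return (1 - d) ** 2 * r / (2 * d * f)

def l2_critical(d, r, f):
    """Critical output inductance for CCM: L2 >= (1-d) * R / (2*f)."""
    return (1 - d) * r / (2 * f)

def in_ccm(l1, l2, d, r, f):
    """True if both inductances satisfy the ideal-converter CCM limits."""
    return l1 >= l1_critical(d, r, f) and l2 >= l2_critical(d, r, f)

# Parameters used later in the paper: f = 5 kHz, R = 10 ohms, L1 = L2 = 5 mH
print(in_ccm(5e-3, 5e-3, 0.428, 10, 5e3))  # the chosen inductances keep the converter in CCM
```

With the paper's design values the check passes with a wide margin, which is why the linear ripple approximation holds.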
B. State-Space Modeling of Each Circuit State

With state vector $x = [i_{L1}\;\; i_{L2}\;\; v_{C1}\;\; v_{C2}]^T$, the full state-space models of the two circuit states include the DCR and ESR terms and are given in [7]. Neglecting the parasitic resistances, they reduce to the following. When the MOSFET switch is on (Figure 3):

$$\dot{x} = A_1 x + B_1 v_s, \qquad A_1 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & \tfrac{1}{L_2} & -\tfrac{1}{L_2} \\ 0 & -\tfrac{1}{C_1} & 0 & 0 \\ 0 & \tfrac{1}{C_2} & 0 & -\tfrac{1}{R C_2} \end{bmatrix}, \qquad B_1 = \begin{bmatrix} \tfrac{1}{L_1} \\ \tfrac{1}{L_2} \\ 0 \\ 0 \end{bmatrix}$$

When the MOSFET switch is off (Figure 4):

$$\dot{x} = A_2 x, \qquad A_2 = \begin{bmatrix} 0 & 0 & -\tfrac{1}{L_1} & 0 \\ 0 & 0 & 0 & -\tfrac{1}{L_2} \\ \tfrac{1}{C_1} & 0 & 0 & 0 \\ 0 & \tfrac{1}{C_2} & 0 & -\tfrac{1}{R C_2} \end{bmatrix}$$

In both states the output is $v_o = v_{C2}$ (with the ESR $r_{c2}$ included, $v_o = \frac{R\, r_{c2}}{R + r_{c2}} i_{L2} + \frac{R}{R + r_{c2}} v_{C2}$).

C. State-Space Averaging Technique (SSA)

During the first state, the MOSFET switch is on for an interval $dT$, while during the second state the MOSFET switch is off for an interval $(1-d)T$.
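The weighted averaging described next can be verified numerically: averaging the on-state and off-state matrices with weights d and (1−d) must reproduce the averaged model. A small Python sketch using NumPy, assuming the ideal (parasitic-free) circuit states; the parameter values are illustrative:

```python
import numpy as np

# Illustrative parameter values (states ordered iL1, iL2, vC1, vC2)
L1, L2, C1, C2, R, d = 5e-3, 5e-3, 90e-6, 10e-6, 10.0, 0.5

# On-state (switch closed, diode open) and off-state (switch open, diode conducting)
A1 = np.array([[0, 0, 0, 0],
               [0, 0, 1/L2, -1/L2],
               [0, -1/C1, 0, 0],
               [0, 1/C2, 0, -1/(R*C2)]])
B1 = np.array([1/L1, 1/L2, 0, 0])

A2 = np.array([[0, 0, -1/L1, 0],
               [0, 0, 0, -1/L2],
               [1/C1, 0, 0, 0],
               [0, 1/C2, 0, -1/(R*C2)]])
B2 = np.zeros(4)

# State-space averaging: A_av = A1*d + A2*(1-d), B_av = B1*d + B2*(1-d)
A_av = A1 * d + A2 * (1 - d)
B_av = B1 * d + B2 * (1 - d)

# The result matches the closed-form averaged model quoted later as Equation (3)
A_expected = np.array([[0, 0, -(1-d)/L1, 0],
                       [0, 0, d/L2, -1/L2],
                       [(1-d)/C1, -d/C1, 0, 0],
                       [0, 1/C2, 0, -1/(R*C2)]])
print(np.allclose(A_av, A_expected))  # True
```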
The averaged (overall) state-space model for the Zeta converter is obtained as follows [7]:

$$A_{av} = A_1 d + A_2 (1-d), \qquad B_{av} = B_1 d + B_2 (1-d), \qquad C_{av} = C_1 d + C_2 (1-d) \tag{2}$$

In this paper, we assume an ideal Zeta converter, where all DC resistances and equivalent series resistances are zero; thus, the averaged state-space model becomes:

$$\begin{bmatrix} \frac{di_{L1}}{dt} \\ \frac{di_{L2}}{dt} \\ \frac{dv_{C1}}{dt} \\ \frac{dv_{C2}}{dt} \end{bmatrix} = \begin{bmatrix} 0 & 0 & -\frac{1-d}{L_1} & 0 \\ 0 & 0 & \frac{d}{L_2} & -\frac{1}{L_2} \\ \frac{1-d}{C_1} & -\frac{d}{C_1} & 0 & 0 \\ 0 & \frac{1}{C_2} & 0 & -\frac{1}{R C_2} \end{bmatrix} \begin{bmatrix} i_{L1} \\ i_{L2} \\ v_{C1} \\ v_{C2} \end{bmatrix} + \begin{bmatrix} \frac{d}{L_1} \\ \frac{d}{L_2} \\ 0 \\ 0 \end{bmatrix} v_s, \qquad v_o = v_{C2} \tag{3}$$

The relation between the input and output voltages of the ideal Zeta converter is characterized by the duty ratio as follows [4]:

$$v_o = \frac{d}{1-d}\, v_s \tag{4}$$

For CCM mode, the critical values of the inductances and capacitances of the ideal Zeta converter are as follows [4]:

$$L_1 = \frac{(1-d)^2 R}{2 d f}, \qquad L_2 = \frac{(1-d) R}{2 f}, \qquad C_1 = \frac{d}{8 f (1-d) R}, \qquad C_2 = \frac{1}{8 f R} \tag{5}$$

The ripples produced in the inductor currents and the capacitor voltages of the ideal Zeta converter are given in terms of $v_s$ and the switching frequency $f$ as follows [8]:

$$\Delta i_{L1} = \frac{d\, v_s}{f L_1}, \qquad \Delta i_{L2} = \frac{d\, v_s}{f L_2}, \qquad \Delta v_{C1} = \frac{d\, v_s}{8 f^2 C_1 L_1}, \qquad \Delta v_{C2} = \frac{d\, v_s}{8 f^2 C_2 L_2} \tag{6}$$

III. Fuzzy Logic Controller

In this paper, three linguistic variables are used: two input variables to the fuzzy logic controller (the error and the change of the error) and one output variable, the control signal to the Zeta converter system after defuzzification. Each input variable has 5 triangular membership functions, forming 5×5 = 25 rules. The Mamdani inference system is used, and the centroid method is used as the defuzzification method.
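The steady-state gain of Equation (4) follows directly from the averaged model of Equation (3): setting the derivatives to zero and solving $A_{av} x = -B_{av} v_s$ gives $v_{C2} = \frac{d}{1-d} v_s$. A quick numerical check in Python (NumPy; the parameter values are illustrative):

```python
import numpy as np

L1, L2, C1, C2, R = 5e-3, 5e-3, 90e-6, 10e-6, 10.0
vs = 12.0

def dc_output(d):
    """Steady-state output voltage of the ideal averaged Zeta model (Eq. 3)."""
    A = np.array([[0, 0, -(1-d)/L1, 0],
                  [0, 0, d/L2, -1/L2],
                  [(1-d)/C1, -d/C1, 0, 0],
                  [0, 1/C2, 0, -1/(R*C2)]])
    B = np.array([d/L1, d/L2, 0, 0])
    x = np.linalg.solve(A, -B * vs)   # 0 = A x + B vs  =>  x = -A^{-1} B vs
    return x[3]                       # state order: iL1, iL2, vC1, vC2

print(round(dc_output(0.5), 6))  # d = 0.5 gives vo = vs = 12 V, as Eq. (4) predicts
```

The same computation with d = 0.428 and d = 0.555 reproduces the 9 V and 15 V operating points used throughout the paper.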
The membership functions and their ranges for the three linguistic variables are shown in Figures 5-7. The fuzzy associative memory (FAM), or table of rules, is shown in Table 1 [8]. The membership functions shown in Figures 5-7 need to be tuned, in addition to the FLC scaling gains for the inputs (which represent a PD controller) and for the output, in order to obtain the desired output performance; thus, the control is achieved by an FLC with PD action. The tuning was performed manually, as explained in Section 5.

Table 1: The rule base of the FLC

e \ Δe | N   NS  Z   PS  P
N      | N   N   N   NS  Z
NS     | N   N   NS  Z   PS
Z      | N   NS  Z   PS  P
PS     | NS  Z   PS  P   P
P      | Z   PS  P   P   P

Figure 5: The error MF. Figure 6: The change-of-error MF. Figure 7: The output voltage MF.

IV. Bacterial Foraging Optimization Algorithm (BFOA)

The bacterial foraging optimization algorithm (BFOA), proposed by Passino [12], is a simulation of the social foraging behavior of the Escherichia coli bacteria present in the human intestine. Generally, this type of bacterium moves a longer distance in a friendly environment. The chemotaxis of a bacterium may be a continuous swim, a swim followed by a tumble, a tumble followed by a tumble, a tumble followed by a swim, or a combination of these [13]. Figure 8 shows the swim and tumble modes of a bacterium.

A. Processes of BFOA

If $J(\theta)$ is the problem to be optimized, where $\theta$ is a $p$-dimensional vector, the four processes of the BFOA are as follows:

1. Chemotaxis. A chemotactic step is a tumble followed by either another tumble or a swim. Let $P(j,k,l) = \{\theta^i(j,k,l) \mid i = 1, 2, \ldots, S\}$ represent the positions of the bacteria in the population at the $j$-th chemotactic step, $k$-th reproduction step, and $l$-th elimination-dispersal event; the position of the $i$-th bacterium is written simply as $\theta^i$.
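One Mamdani inference step with the rule base of Table 1 can be sketched in Python. The triangular membership-function breakpoints on a normalized [-1, 1] universe are assumptions for illustration (the paper tunes its own ranges, shown in Figures 5-7); rule firing uses min, aggregation uses max, and defuzzification is the centroid:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed breakpoints for the 5 labels on a normalized [-1, 1] universe
MF = {'N': (-1.5, -1.0, -0.5), 'NS': (-1.0, -0.5, 0.0), 'Z': (-0.5, 0.0, 0.5),
      'PS': (0.0, 0.5, 1.0), 'P': (0.5, 1.0, 1.5)}

# Table 1: rows are the error label, columns the change-of-error label
RULES = {'N':  {'N': 'N', 'NS': 'N', 'Z': 'N', 'PS': 'NS', 'P': 'Z'},
         'NS': {'N': 'N', 'NS': 'N', 'Z': 'NS', 'PS': 'Z', 'P': 'PS'},
         'Z':  {'N': 'N', 'NS': 'NS', 'Z': 'Z', 'PS': 'PS', 'P': 'P'},
         'PS': {'N': 'NS', 'NS': 'Z', 'Z': 'PS', 'PS': 'P', 'P': 'P'},
         'P':  {'N': 'Z', 'NS': 'PS', 'Z': 'P', 'PS': 'P', 'P': 'P'}}

def flc(e, de):
    """One Mamdani step: min for rule firing, max aggregation, centroid defuzz."""
    u = np.linspace(-1.0, 1.0, 2001)
    agg = np.zeros_like(u)
    for le, mfe in MF.items():
        for lde, mfde in MF.items():
            w = min(float(tri(e, *mfe)), float(tri(de, *mfde)))  # firing strength
            if w > 0.0:
                agg = np.maximum(agg, np.minimum(w, tri(u, *MF[RULES[le][lde]])))
    return float(np.sum(agg * u) / (np.sum(agg) + 1e-12))

print(round(flc(0.0, 0.0), 3))  # zero error and zero change give zero control action
```

By the antisymmetry of Table 1, positive (error, change-of-error) pairs produce positive control action and negative pairs the mirror-image negative action.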
The position of a bacterium in the next chemotactic step, after a tumble, can be represented as follows:

$$\theta^i(j+1,k,l) = \theta^i(j,k,l) + C(i)\,\frac{\Delta(i)}{\sqrt{\Delta^T(i)\,\Delta(i)}} \tag{7}$$

where $\Delta(i)$ is a vector in a random direction whose elements lie in $[-1, 1]$ and $C(i)$ is the step size. If the fitness value of the bacterium improves after the tumble, it continues swimming in the same direction until the fitness value degrades, and then it tumbles.

2. Swarming. Swarming means that the bacteria send signals to each other to congregate into high-density groups and move together toward the desired location. Let $J(i,j,k,l)$ represent the cost or fitness at the location $\theta^i(j,k,l)$ of the $i$-th bacterium. Swarming can be represented as follows:

$$J_{cc}(\theta, P(j,k,l)) = \sum_{i=1}^{S}\left[-d_{attractant}\exp\!\left(-w_{attractant}\sum_{m=1}^{p}(\theta_m - \theta_m^i)^2\right)\right] + \sum_{i=1}^{S}\left[h_{repellant}\exp\!\left(-w_{repellant}\sum_{m=1}^{p}(\theta_m - \theta_m^i)^2\right)\right] \tag{8}$$

where $J_{cc}(\theta, P(j,k,l))$ is a time-varying term added to the actual fitness function being optimized, $\theta = [\theta_1, \theta_2, \ldots, \theta_p]^T$ is a point in the $p$-dimensional search domain, and $d_{attractant}$, $w_{attractant}$, $h_{repellant}$, and $w_{repellant}$ are coefficients that must be chosen properly.

3. Reproduction. The bacteria that have obtained sufficient nutrients reproduce an exact replica of themselves, and the least healthy bacteria die. The number of reproduced bacteria equals the number of dead ones, so the population size remains constant during the evolution process.

4. Elimination and dispersal. Elimination and dispersal simulates the sudden environmental changes or attacks that may occur to real bacteria: a group of bacteria may be killed, and others may move to other places. In simulation, this reduces the chance of being trapped at a local optimum.
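The four processes above can be sketched compactly. The following Python sketch minimizes a simple quadratic test function instead of the converter's IAE cost (the step size, bounds, and test function are illustrative assumptions; the paper's implementation runs a Simulink model per bacterium and may also add the swarming term of Equation (8), omitted here for brevity):

```python
import numpy as np

def bfo_minimize(J, p=3, S=16, Nc=25, Ns=4, Nre=4, Ned=2, Ped=0.25,
                 step=0.1, lo=-5.0, hi=5.0, seed=0):
    """Minimal BFOA loop: chemotaxis (tumble + swim), reproduction,
    and elimination-dispersal. Swarming (Eq. 8) is omitted for brevity."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(lo, hi, (S, p))       # bacterial positions
    best = {'x': None, 'f': np.inf}

    def cost(x):                              # evaluate and track the best point seen
        v = J(x)
        if v < best['f']:
            best['f'], best['x'] = v, x.copy()
        return v

    for _ in range(Ned):                      # elimination-dispersal events
        for _ in range(Nre):                  # reproduction steps
            health = np.zeros(S)              # accumulated cost per bacterium
            for _ in range(Nc):               # chemotactic steps
                for i in range(S):
                    f = cost(theta[i])
                    d = rng.uniform(-1.0, 1.0, p)
                    d /= np.linalg.norm(d)    # random unit direction, as in Eq. (7)
                    theta[i] = theta[i] + step * d          # tumble
                    fnew = cost(theta[i])
                    swims = 0
                    while fnew < f and swims < Ns:          # swim while improving
                        f = fnew
                        theta[i] = theta[i] + step * d
                        fnew = cost(theta[i])
                        swims += 1
                    health[i] += fnew
            order = np.argsort(health)        # healthiest half of the population
            theta = np.concatenate([theta[order[:S // 2]]] * 2)  # each splits in two
        disperse = rng.random(S) < Ped        # scatter some bacteria at random
        theta[disperse] = rng.uniform(lo, hi, (int(disperse.sum()), p))
    return best['x'], best['f']

# Minimize the sphere function as a stand-in for the converter's IAE cost
x, f = bfo_minimize(lambda t: float(np.sum(t ** 2)))
```

The default parameters mirror Table 7 of the results section (p=3, S=16, Nc=25, Ns=4, Nre=4, Ned=2, Ped=0.25).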
V. Hybrid FLC/BFO Controller

In this paper, we optimize the scaling gains of the normalized, manually tuned membership functions, using the integral of the absolute value of the error (IAE) as the fitness function. The scaling gains for the inputs and the output of the FLC are the variables optimized by the BFOA; in this case, the controller is called a hybrid FLC/BFO controller. The BFO algorithm produces trial solutions for the scaling gains and determines whether they minimize the error in the system response by evaluating the IAE fitness function; the best scaling gains are then selected for the best system response. Figure 9 illustrates the process of the hybrid FLC/BFO controller controlling a system plant.

Figure 8: Modes of an E. coli bacterium: (a) tumble mode, (b) swim mode. Figure 9: Hybrid FLC/BFO controller.

VI. Results and Discussion

The simulation of the open-loop Zeta converter system, the design of a fuzzy logic controller (FLC) for the closed-loop system, and the design of a hybrid FLC/BFO controller for the closed-loop system were performed for the output voltages 9, 12, and 15 V, for the nominal values and under different working conditions such as a load disturbance, a voltage source disturbance, or both. Comparisons were made between our results and the results of Ahmad et al. [8] to demonstrate the effectiveness of our results and methodology. The designs and simulations were performed in the MATLAB/Simulink environment.

A. The Normal Open-Loop Zeta Converter System Analysis

The averaged state-space model in Equation (3) has six parameters that must be defined in order to find the state-space matrices: $L_1$, $L_2$, $C_1$, $C_2$, $R$, and $d$.
The critical values of $L_1$, $L_2$, $C_1$, and $C_2$ in CCM mode depend mainly on the switching frequency $f$, the load $R$, and the duty ratio $d$; the inductor current and capacitor voltage ripples are also affected. The critical values (limits) and the ripples are given in Equations (5) [4] and (6) [8], respectively. The duty ratio $d$ can be obtained as follows:

$$d = \frac{v_o}{v_o + v_s} \tag{9}$$

Selecting $v_s = 12$ V, each $v_o$ yields a duty ratio $d$. Table 2 lists the duty ratio for $v_o$ = 9, 12, and 15 V. Table 3 lists the critical limits of $L_1$, $L_2$, $C_1$, and $C_2$ for these values of $d$ and $v_o$ when the switching frequency is f = 5 kHz and the load is R = 10 Ω. Clearly, we must choose values that satisfy all the critical limits in order to design a Zeta converter system valid for converting the input voltage to output voltages of 9, 12, and 15 V; that is, we must choose L1 ≥ 0.764 mH, L2 ≥ 0.572 mH, C1 ≥ 3.12 µF, and C2 ≥ 2.5 µF. Table 4 lists the Zeta converter parameter values used in this paper. The normal open-loop responses with the reference voltages are shown in Figure 10, and the normal open-loop system performances are listed in Table 5.

Table 2: The duty ratio for different voltages (vs = 12 V)

Output voltage vo (V) | Duty ratio d
9                     | 0.428
12                    | 0.5
15                    | 0.555

Table 3: Critical values of the Zeta converter parameters (vs = 12 V)

Parameter | vo=9 V, d=0.428 | vo=12 V, d=0.5 | vo=15 V, d=0.555
L1 (mH)   | 0.764           | 0.5            | 0.356
L2 (mH)   | 0.572           | 0.5            | 0.445
C1 (µF)   | 1.87            | 2.5            | 3.12
C2 (µF)   | 2.5             | 2.5            | 2.5

Table 4: The Zeta converter parameters used in this paper

f  | 5 kHz
R  | 10 Ω
L1 | 5 mH
L2 | 5 mH
C1 | 90 µF
C2 | 10 µF
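The duty ratios of Equation (9) and the critical limits of Equation (5) that populate Tables 2 and 3 are straightforward to compute. A Python sketch (the function names are illustrative):

```python
def duty_ratio(vo, vs):
    """Duty ratio of the ideal Zeta converter, Eq. (9): d = vo / (vo + vs)."""
    return vo / (vo + vs)

def critical_values(d, r, f):
    """Critical limits of Eq. (5) for CCM operation of the ideal Zeta converter."""
    return {'L1': (1 - d) ** 2 * r / (2 * d * f),
            'L2': (1 - d) * r / (2 * f),
            'C1': d / (8 * f * (1 - d) * r),
            'C2': 1 / (8 * f * r)}

# Reproduce the vo = 9 V column of Table 3 with f = 5 kHz and R = 10 ohms
d = 0.428  # Table 2 lists the truncated value of 9 / (9 + 12) = 0.4286...
print(critical_values(d, 10, 5e3))  # L1, L2, C1, C2 limits in H and F
```

The printed values (about 0.764 mH, 0.572 mH, 1.87 µF, 2.5 µF) match the first column of Table 3, and the same call with d = 0.5 and d = 0.555 reproduces the other columns.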
B. Fuzzy Logic Controller Analysis

The simulation of the three normal closed-loop Zeta converter systems, tracking the output voltages $v_o$ = 9, 12, and 15 V with $v_s$ = 12 V, compared with the three normal open-loop systems for a simulation time t = 0.1 s, is shown in Figure 11.

Figure 10: The normal open-loop system responses.

Table 5: The normal open-loop Zeta converter system performances (vs = 12 V)

d     | vo (V) | OS (%) | ts (ms) | sse (%) | vo ripples (V)
0.428 | 9      | 26.5   | 28.9    | 0.23    | 0.513
0.5   | 12     | 25.33  | 20.5    | 0       | 0.6
0.555 | 15     | 22.15  | 14.7    | 0.22    | 0.66

Figure 11: The response of the normal FLC closed-loop and the normal open-loop Zeta converter.

The normal FLC closed-loop system performances are listed in Table 6; Figure 12 compares them with the normal open-loop performances of Table 5.

C. The System Disturbance Analysis

The open-loop and closed-loop Zeta converter systems for the output voltages 9, 12, and 15 V are simulated here under the fuzzy logic controller for a simulation time t = 0.15 s. Changes in the load current and the load voltage are followed efficiently when load changes, voltage source changes, or both occur, thus protecting the load from damage or malfunction.

C.1 System Analysis with the Load Disturbance

The load R is made to change linearly, sweeping the values 10 → 40 → 10 Ω: it starts to change from 10 Ω at time t = 0.05 s, reaches 40 Ω at t = 0.1 s, and then changes back from 40 Ω to 10 Ω at t = 0.15 s. Figure 13 shows the changes in the value of the gain 1/R.
The response of the three closed-loop Zeta converter systems compared with the three open-loop systems under the load disturbance is shown in Figure 14.

Table 6: The normal FLC closed-loop Zeta converter system performances (vs = 12 V)

vref (V) | OS (%) | ts (ms) | sse (%) | vo ripples (V)
9        | 1.49   | 2.8     | 0.67    | 0.128
12       | 0.74   | 3.6     | 0.71    | 0.128
15       | 0.50   | 4.15    | 0.9     | 0.153

Figure 12: The normal FLC closed-loop vs. normal open-loop systems. Figure 13: The load disturbance in the 1/R gain signal. Figure 14: FLC closed-loop and open-loop system responses with the load disturbance.

It is clearly shown that the open-loop response is affected by the load change: the variations in the open-loop output voltage due to the load change grow as the converted output voltage increases, whereas in the FLC closed-loop systems these variations are minimized and kept under control.

C.2 The System Analysis with the Voltage Source Disturbance

The voltage source $v_s$ is made to change linearly, sweeping the values 12 → 11 → 13 → 14 → 12 V: it starts to change from 12 V at time t = 0.05 s, reaches 11 V at t = 0.055 s, then changes from 11 V to 13 V at t = 0.1 s, from 13 V to 14 V at t = 0.125 s, and finally from 14 V back to 12 V at t = 0.15 s. Figure 15 shows the changes in the value of the voltage source.

Figure 15: The voltage source disturbance signal. Figure 16: FLC closed-loop and open-loop system responses with the voltage source disturbance.

The response of the three closed-loop and open-loop Zeta converter systems with the voltage source disturbance is shown in Figure 16.
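The piecewise-linear disturbance profiles of Sections C.1 and C.2 are easy to reproduce as lookup signals for feeding a simulation. A Python sketch using linear interpolation over the breakpoints stated above (the function names are illustrative):

```python
import numpy as np

# Breakpoints of the load sweep 10 -> 40 -> 10 ohms (Section C.1)
T_R, V_R = [0.0, 0.05, 0.10, 0.15], [10.0, 10.0, 40.0, 10.0]

# Breakpoints of the source sweep 12 -> 11 -> 13 -> 14 -> 12 V (Section C.2)
T_S = [0.0, 0.05, 0.055, 0.10, 0.125, 0.15]
V_S = [12.0, 12.0, 11.0, 13.0, 14.0, 12.0]

def load_R(t):
    """Load resistance at time t (piecewise linear)."""
    return float(np.interp(t, T_R, V_R))

def source_vs(t):
    """Source voltage at time t (piecewise linear)."""
    return float(np.interp(t, T_S, V_S))

# Midpoint of the 10 -> 40 ramp and midpoint of the 11 -> 13 ramp
print(round(load_R(0.075), 6), round(source_vs(0.0775), 6))  # 25.0 12.0
```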
It is clearly shown that the open-loop response is greatly affected by the voltage source changes, with large variations in the open-loop output voltage. The FLC closed-loop system, on the other hand, handles and controls these variations efficiently.

C.3 The System Analysis with Both the Load and the Voltage Source Disturbances

In a real implementation of the Zeta converter system, the load and voltage source disturbances are expected to occur simultaneously; this is the worst-case scenario for the system. The response of the three open-loop and three closed-loop Zeta converter systems for a simulation time t = 0.15 s with both types of disturbances is illustrated in Figure 17; the disturbances were changed linearly in the same manner explained previously. We may conclude that the designed fuzzy logic controller (FLC) handles and controls the response efficiently even in this worst case of disturbances.

C.4 Hybrid FLC/BFO Controller Design

In this section, $v_o$ is limited to 15 V as the worst-case scenario for the simulated results, and our results are compared with those of Ahmad et al. [8]. The designed hybrid FLC/BFO controller is simulated with both types of disturbances present, and the FLC scaling gains are tuned using the BFO algorithm. Table 7 lists the parameters used in the BFO algorithm.
Figure 17: FLC closed-loop and open-loop system responses with both disturbances.

Table 7: The BFO algorithm parameters

Symbol | Value
p      | 3
S      | 16
Nc     | 25
Ns     | 4
Nre    | 4
Ned    | 2
Ped    | 0.25

The BFO algorithm is implemented in MATLAB/Simulink using three MATLAB m-files: the first is the BFOA main code; the second is a function that runs the Zeta converter system with each bacterium (a trial solution) and computes its fitness or performance using the fitness function, the integral of the absolute value of the error (IAE); and the third is the cell-to-cell attraction function that simulates the swarming behavior of the bacteria in the population. The fitness function used is:

$$J = IAE = \int_0^{0.15} |e|\, dt \tag{10}$$

The response of the closed-loop Zeta converter system with reference voltage vref = 15 V, for a simulation time t = 0.15 s and with both types of disturbances applied as before, is illustrated in Figure 18. Table 8 lists the performance of the normal closed-loop FLC/BFO Zeta converter system for vref = 15 V compared with the normal open-loop and FLC closed-loop systems. Comparing our designs with one another, Table 8 shows that the FLC/BFO controller handles and controls the load and voltage source disturbances more efficiently than the other controller types developed in this work. Compared with the FLC closed-loop controller, the FLC/BFO closed-loop controller improved the overshoot by 22%, the steady-state error by 44%, and the output voltage ripples by 0.8%. However, there was a 9% increase in the settling time as a direct result of the decreased overshoot: the added damping that reduces the overshoot naturally increases the settling time.
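The IAE fitness of Equation (10) is just a numerical integral of |e(t)| over the 0.15 s horizon. A Python sketch using the trapezoidal rule (the sampled error signals here are synthetic, for illustration only):

```python
import numpy as np

def iae(t, e):
    """Integral of the absolute error, J = integral of |e| dt (trapezoidal rule)."""
    a = np.abs(e)
    return float(np.sum((a[1:] + a[:-1]) * np.diff(t) / 2.0))

t = np.linspace(0.0, 0.15, 1501)
e_const = np.ones_like(t)      # a constant 1 V error over the whole horizon
e_decay = np.exp(-t / 0.01)    # a decaying error, as a settling loop would produce

print(round(iae(t, e_const), 6))          # 0.15: constant error integrates to the horizon
print(iae(t, e_decay) < iae(t, e_const))  # True: the settling response scores better
```

This is exactly the quantity the second m-file returns for each bacterium: a lower IAE means the trial scaling gains produced a faster, better-damped voltage response.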
When comparing our results with those of Ahmad et al. [8], we note that they simulated the Zeta converter system using 9, 12, and 15 V as output voltages for the open-loop system with input voltage vs = 12 V. They also used 9, 12, and 15 V as reference voltages for the FLC closed-loop system, and they optimized the scaling gains of the FLC using the particle swarm optimization (PSO) algorithm, resulting in the hybrid FLC/PSO controller. The Zeta converter circuit parameters used in [8] are listed in Table 9.

Figure 18: The closed-loop Zeta converter response using the FLC/BFO controller.

Table 8: Zeta converter system performance comparison for different system designs, vref = 15 V

System design              | OS (%) | ts (ms) | sse (%) | vo ripples (V)
Open-loop system           | 22.15  | 14.7    | 0.22    | 0.66
FLC closed-loop system     | 0.50   | 4.15    | 0.9     | 0.153
FLC/BFO closed-loop system | 0.39   | 4.52    | 0.5     | 0.1518

As previously discussed, Table 3 listed the critical limits of $L_1$, $L_2$, $C_1$, and $C_2$ for f = 5 kHz and R = 10 Ω. From Table 9 we can conclude that the values selected for $L_1$ and $L_2$ in [8] did not satisfy the critical limits. These critical limits guarantee that the currents in $L_1$ and $L_2$ increase and decrease linearly, which in turn guarantees that the average current in the load R equals the average current in the output inductor $L_2$. Table 10 compares our results with those of Ahmad et al. [8] for the normal systems with vref = 15 V.
Thus, we can conclude that our work gave better results than Ahmad et al. [8]: all of the critical limits of the Zeta converter system parameters were satisfied; the open-loop system performance was very good in terms of overshoot, settling time, and steady-state error; our designed FLC clearly improved on the open-loop performance; and the BFOA for the hybrid FLC system gave better results than PSO with regard to overshoot and settling time. The steady-state errors of our FLC closed-loop and hybrid FLC systems were close to those in [8].

VII. Conclusion and Future Work

In this paper, the simulation of the open-loop Zeta converter system and the design of an FLC and a hybrid FLC/BFO controller were performed in the MATLAB/Simulink environment, for the nominal values and for different working conditions (load disturbance, voltage source disturbance, or both) and for the output voltages 9, 12, and 15 V with a source voltage of 12 V. The FLC using the Mamdani inference system performed better than the open-loop system: it improved the overshoot, the settling time, and the output voltage ripples, with a very small increase in the steady-state error, for the different output voltages and working conditions. The hybrid FLC/BFO controller performed better than the FLC controller: it further improved the overshoot, the steady-state error, and the output voltage ripples, with a very small increase in the settling time. A comparison between our results and those of Ahmad et al. [8], who used an FLC/PSO controller, was performed for the reference voltage 15 V. The comparison led to the conclusion that our results were better in terms of overshoot and settling time for the open-loop systems, the FLC closed-loop systems, and the hybrid FLC controller.
The steady-state error of our open-loop systems was better than in [8], while for the FLC closed-loop and hybrid FLC systems the steady-state error was close to that in [8]. Thus, we may conclude that the BFOA is competitive with PSO, presenting better and more competitive results in the hybrid FLC system.

Table 9: The Zeta converter system parameters used in [8]

f  | 5 kHz
R  | 10 Ω
L1 | 0.5 mH
L2 | 0.5 mH
C1 | 900 µF
C2 | 1000 µF

Table 10: System performance comparison between our results and Ahmad et al. [8] for vref = 15 V

System design              | Ahmad et al. [8]: OS (%), ts (ms), sse (%) | Our design: OS (%), ts (ms), sse (%)
Open-loop system           | 51.7, 38, 6.7                              | 22.15, 14.7, 0.22
FLC closed-loop system     | 2.7, 7, 0.47                               | 0.50, 4.15, 0.9
FLC/PSO closed-loop system | 0.91, 5, 0.4                               | --
FLC/BFO closed-loop system | --                                         | 0.39, 4.52, 0.5

Future work may include: using the BFO algorithm to optimize the rule base or the membership functions of the FLC, using the improved BFO (IBFO) algorithm, using the Sugeno inference system in the FLC design, using a type-2 fuzzy logic system in the FLC design, or using other optimization algorithms with the FLC, such as the genetic algorithm (GA) or the ant colony optimization (ACO) algorithm.

References

[1] R. C. Viero and F. S. dos Reis, "Designing closed-loop controllers using a MATLAB dynamic model of the Zeta converter in DCM," 10th IEEE/IAS International Conference on Industry Applications (INDUSCON), 2012.
[2] A. Kumar, H. A. Giftson, V. A. Rinoj, G. A. Jebamani, R. Balakrishnan, and M. S. Chinnathampy, "Solar energy implementation with grid interfacing," International Journal of Advanced Research in Management, Architecture, Technology and Engineering (IJARMATE), 2015, 1(2), pp. 9-12.
[3] E. F. Camacho, T. Samad, M. Garcia-Sanz, and I.
Hiskens, "Control for renewable energy and smart grids," The Impact of Control Technology, IEEE Control Systems Society, pp. 69-88, 2011.
[4] E. Vuthchhay and C. Bunlaksananusorn, "Modeling and control of a Zeta converter," IEEE International Power Electronics Conference (IPEC), 2010.
[5] B. Moaveni, H. Abdollahzadeh, and M. Mazoochi, "Adjustable output voltage Zeta converter using neural network adaptive model reference control," 2nd IEEE International Conference on Control, Instrumentation and Automation (ICCIA), 2011.
[6] A. Izadian, P. Khayyer, and H. Yang, "Adaptive voltage tracking control of Zeta buck-boost converters," IEEE Energy Conversion Congress and Exposition (ECCE), 2012.
[7] H. Sarkawi, M. H. Jali, T. A. Izzuddin, and M. Dahari, "Dynamic model of Zeta converter with full-state feedback controller implementation," International Journal of Research in Engineering and Technology (IJRET), 2(08), pp. 34-43, 2013.
[8] A. H. Ahmad and N. S. Sultan, "Design and implementation of controlled Zeta converter power supply," American Journal of Electrical and Electronic Engineering, 2(3), pp. 121-128, 2014.
[9] M. Alsbakhi, "Hybrid FLC/BFO controller for output voltage regulation of Zeta converter," M.S. thesis, Islamic University of Gaza, 2016.
[10] H. Sira-Ramirez and R. Silva-Ortigoza, Control Design Techniques in Power Electronics Devices, Springer Science & Business Media, 2006.
[11] S. Maniktala, Switching Power Supplies A to Z, Elsevier Inc., 2006.
[12] S. Das, A. Biswas, S. Dasgupta, and A. Abraham, Foundations of Computational Intelligence Volume 3: Global Optimization, Vol. 203, Springer, 2009.
[13] H. Supriyono, Novel Bacterial Foraging Optimisation Algorithms with Application to Modelling and Control of Flexible Manipulator Systems, Ph.D. thesis, The University of Sheffield, United Kingdom, 2012.

Hatem A. Elaydi received a B.S. degree in electrical engineering from Colorado Technical University in 1990, and M.S. and Ph.D.
degrees in electrical engineering from New Mexico State University in 1992 and 1997, respectively. He is currently an associate professor at the Electrical Engineering Department, the Islamic University of Gaza. He has held several positions, including department head, assistant dean, head of the Resources Development Center, head of the Quality Assurance Unit, and associate vice president for academic affairs. His research interests include control systems, with concentration on optimal control, robust systems, and convex optimization, in addition to quality assurance in higher education and university governance. He has conducted several studies and consultations in Palestine and the region and is certified as a regional subject and institutional reviewer. He is a member of IEEE, SIAM, Tau Alpha Pi, AMS, the Palestine Engineering Association, and the Palestine Mathematics Society. He has served as an editorial board member, technical council member, and scientific committee member for several local, regional and international journals and conferences.

Mohammed Alsbakhi received a bachelor's degree in electrical engineering from the Islamic University of Gaza in 2006, and an M.Sc. degree in electrical engineering/control systems from the same university in 2016. For more than 10 years, most of his training and work experience has been in computing and information technology, including computer networking infrastructure, Microsoft systems engineering, hardware and software maintenance of computers and computer networks, helpdesk, technical support, and office applications. He currently works at the Ministry of Health (MOH) as a computer networks engineer and technical support specialist, and also works as a technical instructor in the IT field in the private sector.
Journal of Engineering Research and Technology, Volume 7, Issue 1, April 2020

Health Risk Assessment of Groundwater Contamination. Case Study: Gaza Strip

Luay I. Qrenawi (*), Reem Abu Shomar (**)
(*) Corresponding author, Civil Engineering Department, University College of Applied Sciences, Gaza, Palestine. P.O. Box 1415, lqrenawi@ucas.edu.ps
(**) Program Coordination Unit - PWA, Palestinian Water Authority, reemabushomar@gmail.com
https://doi.org/10.33976/jert.7.1/2020/2

Abstract—The Gaza governorates suffer from shortage and poor quality of the groundwater pumped from 281 municipal wells. According to the latest data available at the Palestinian Water Authority (PWA), water consumption is distributed between municipal consumption (96.428 MCM) and agricultural sector consumption (95.3 MCM). The annual recharge is less than the pumping rate by more than 90 MCM, resulting in declining water levels, seawater intrusion, and hence high chloride concentrations. Nitrate levels are increasing due to improper wastewater disposal systems, excessive use of fertilizers, and landfill leachate. The nitrate level exceeded the WHO limit in more than 90.6% of the Gaza governorates' municipal wells in 2018 (223 wells out of a total of 245 wells). Due to the health impacts of nitrate, a health risk assessment was conducted based on the available quality data of 245 municipal wells, using the risk assessment method adopted by the United States Environmental Protection Agency. Three categories of receptors were assessed: infants, children and adults. The study revealed that the health risk value for adults is acceptable in only 22 wells and unacceptable in the other 223 wells. For children and small infants the situation was riskier: the study found that none of the municipal wells in the Gaza governorates was suitable for drinking purposes for these two categories of people.
The study recommended that actions be taken to minimize the risk associated with drinking groundwater, that alternative water resources be seriously considered, that community participation be encouraged, that people be informed that their source of water is unsuitable, and that further studies considering the impact of nitrate in groundwater on public health in the Gaza Strip be performed.

Index Terms—Gaza governorates, nitrate, municipal wells, health risk assessment, groundwater, public health.

I. Introduction

The Gaza Strip is a narrow area located along the coastal southwestern zone of occupied Palestine on the Mediterranean Sea. It is divided into five governorates with a total area of about 365 km2 and a population of about 2 million. High current and expected future population growth rates will undoubtedly lead to greater impacts on natural resources, especially water. Over-pumping of the Gaza groundwater aquifer (the only source of water) has resulted in continuous decline of local groundwater levels and degradation of its quality. Seawater intrusion and up-coning of deep brine water are the major challenges impacting the existing groundwater in the Gaza Strip. Located in an arid area with an unsustainable water resource, the Gaza Strip faces serious water problems in terms of both quantity and quality. According to the water budget components of the Gaza Strip, the water resources usually fluctuate around a stationary average, while the population increases continuously. The high natural population growth rate of the Gaza Strip (indicated by many organizations as the highest in the world) has turned the population-water resources equation into a chronic and worsening imbalance (Qrenawi, 2007; Eldadah et al., 2007; Qrenawi et al., 2002).
The present state of the water sector in Gaza is distressing and has been described by many organizations as a humanitarian crisis. The main source of domestic and agricultural water is groundwater, which is almost totally polluted and at present yields a flow of unacceptable quality for domestic usage. The amount of water available to the people of Gaza is also insufficient, while its deteriorated quality causes large adverse public health impacts. According to the latest data available in the Palestinian Water Authority (PWA) database, water consumption is distributed among the different sectors as follows: municipal consumption 96.428 MCM (52%), of which 13 MCM is suitable for drinking purposes, and agricultural sector consumption 95.3 MCM (48%) (WRD-PWA, 2014; WRD-PWA, 2018). The annual net deficit in the groundwater aquifer in 2016 was about 90 MCM and is predicted to reach 180 MCM by 2035, indicating that the only source of fresh water in the Gaza Strip is being drastically over-pumped; hence the aquifer is showing clear signs of irreversible failure or collapse, with quickly advancing deterioration of the Gaza water resources in terms of both quality and quantity (Aiash and Mogheir, 2017; Eldadah, 2013). Municipal water consumption varies slightly with the time of the year; August has the maximum consumption while February has the minimum. Figure 1 outlines the annual municipal water consumption for the Gaza Strip governorates in 2018. According to the data available at the PWA, this amount of water was extracted from 281 municipal wells distributed all over the Gaza Strip, as shown in Figure 2 (Eldadah et al., 2007; WRD-PWA, 2018).
Figure 1: Municipal water consumption in Gaza governorates (WRD-PWA, 2018)

Currently, groundwater extraction far exceeds the rate of aquifer recharge. The groundwater level is declining and chloride concentrations are increasing; these expected results render the water unsuitable for either drinking or irrigation purposes. The random disposal of raw wastewater and solid waste on the ground surface and the uncontrolled use of fertilizers have also contaminated the groundwater and raised nitrate levels in certain areas to unacceptable concentrations (Eldadah et al., 2007). Recently, due to the continuous degradation of groundwater quality in the Gaza Strip, public attention has grown significantly and has concentrated on the anthropogenic causes of the problem. The Gaza Strip aquifer is the most important source of water for agricultural, domestic, and industrial demands (Lubbad and Al-Yaqoubi, 2007). The nitrate concentration in groundwater has increased in recent years, primarily in the coastal area, where the water sources are close to population clusters and to industrial and agricultural regions. The increased accumulation of nitrates in groundwater creates health dangers for the population using this contaminated water (Lubbad and Al-Yaqoubi, 2007). In this paper, a health risk assessment, as a decision support tool, of groundwater contamination by nitrate will be presented; risk management will also be outlined.

II. Literature Review

Sources of nitrate: Nitrate is a stable oxidized form of combined nitrogen in the majority of environmental media. Different sources of combined organic or ammonia nitrogen can be considered the main sources of nitrate, since most nitrogenous compounds in water tend to be transformed by some means into nitrate.
Nitrates exist naturally in mineral deposits such as sodium or potassium nitrate, in soils, seawater, freshwater sources, the atmosphere, and in animal and plant life. Main nitrate sources include, but are not limited to, agricultural fertilizers, wastewater, landfill leachate, and livestock waste (Shelton and Lance, 1999). In the Gaza governorates, nitrate is commonly introduced into the groundwater through widespread or diffuse sources, normally known as non-point sources, which cannot be easily identified. Point source contaminants are also a major cause of contamination. From the land use map of the Gaza Strip, one can conclude that agricultural activities are mainly located in the eastern part, which has a thick unsaturated zone with very low permeability layers, while the urban areas are located mainly in the western part, which has a relatively thin unsaturated zone with highly permeable sandy soil.

Figure 2: Location map of municipal wells in Gaza Strip (PWA, 2018)

Routine nitrate analysis results for wells located in different agricultural and urban locations showed that the problem is particularly serious in public water supply wells, as these are mainly located in unsewered urban areas. These areas rely on cesspits for wastewater disposal, which is basically not an efficient disposal system. Wastewater infiltrates through the soil, which is mostly sandy with high permeability, to the groundwater, leading to high nitrate levels. The main causes of high nitrate in the groundwater of Gaza are:
- Infiltration of untreated wastewater from cesspits, septic tanks and sewage discharges.
- Leaching of chemical fertilizers.
- Landfill leachate (Lubbad and Al-Yaqoubi, 2007).

Health impacts of nitrate: Nitrate is an inorganic dissolved substance that is generally found in groundwater samples.
Nitrate is a naturally occurring chemical in the environment, a component of the nitrogen cycle. Relatively little of the nitrate found in natural waters is of mineral origin; most of it comes from organic sources (solid waste and wastewater discharges) and inorganic sources (artificial agricultural fertilizers). Oxidation by bacteria and nitrogen fixation by plants can both result in the formation of nitrate. High levels of nitrate and nitrite can cause dangerous illness through acute exposure, and high concentrations of nitrate in drinking water cause both environmental and health concerns due to its toxicity. Methaemoglobinaemia, known as blue baby syndrome, is the major health concern, affecting infants that are bottle-fed with formula prepared with drinking water (CAWST, 2009; Shelton and Lance, 1999; EPA, 2001). Nitrate toxicity arises because the human body reduces nitrate to nitrite. This reaction occurs in human saliva at all ages and in the infant's gastrointestinal tract within the first three months of life. Nitrite toxicity is demonstrated by vasodilatory/cardiovascular impacts at high dose levels and methemoglobinemia at lower dose levels. Methemoglobinemia is an effect in which hemoglobin is oxidized to methemoglobin, resulting in asphyxia. Infants up to the age of three months are the population most sensitive to nitrate: in adults and children, approximately 10% of ingested nitrate is converted to nitrite, while in infants up to 100% of ingested nitrate can be converted to nitrite. The impacts of methemoglobinemia are quickly reversible, and there is therefore no accumulation of these impacts. It results in difficulty breathing and the skin turning blue due to the absence of oxygen; it is a dangerous condition that can sometimes cause death (CAWST, 2009; Shelton and Lance, 1999).
Due to insufficient animal and human data, the EPA classifies both nitrate and nitrite as Group D contaminants. For infants, nitrate compounds tend to demonstrate adverse toxic impacts, and because of the widespread occurrence of this toxicity in water, nitrate has been regulated. Recent research has indicated that nitrate at high concentrations may cause cancer in adults. The World Health Organization (WHO) states that the concentration of nitrate in drinking water should be < 50 mg/L; this regulation is set to protect bottle-fed infants, in short-term exposure, from methaemoglobinaemia (Shelton and Lance, 1999; Qrenawi, 2006; CAWST, 2009).

III. Significance of the Research

While the WHO guideline for nitrate is 50 mg/L, it exceeds 300 mg/L in some areas of the Gaza Strip. Since groundwater is the most important potable water source in the Gaza Strip, this encouraged the researchers to perform a health risk assessment of groundwater contamination in the Gaza Strip. Figures 3 to 7 outline the nitrate concentration in the Gaza governorates in 2018 for 243 municipal wells.

Figure 3: Nitrate concentration in Northern Governorate municipal wells (PWA, 2018)
Figure 4: Nitrate concentration in Gaza Governorate municipal wells (PWA, 2018)
Figure 5: Nitrate concentration in the Middle Governorate municipal wells (PWA, 2018)
Figure 6: Nitrate concentration in Khan-Younis Governorate municipal wells (PWA, 2018)
Figure 7: Nitrate concentration in Rafah Governorate municipal wells (PWA, 2018)

Due to the unstable political situation in the Palestinian territories and its negative impacts on security and socioeconomics, the water situation in the Gaza Strip has worsened in terms of water quality deterioration, water depletion, and water supply system efficiency.
Referring to the figures above, one can conclude that the nitrate level exceeded the WHO limit in more than 90.6% of the Gaza governorates' municipal wells in 2018 (222 wells out of a total of 245 wells), indicating that the water problem is not only one of quantity but also one of quality. A WHO study found a high concentration of nitrates in the water supply from wells in different localities within the Gaza Strip, and this nitrate contamination was found to be the cause of the incidence of "blue-baby syndrome" among infants in the Gaza Strip (IBRD, 2009). Whilst this disease primarily affects young children, nitrate contamination can also affect pregnant women and might increase the risk of certain types of cancer (Abu Naser et al., 2007). Shomar et al. (2008) stated that recent observations revealed a high positive correlation between nitrate concentrations in the groundwater of the Gaza Strip and the occurrence of methemoglobinemia in babies younger than 6 months: among 640 babies tested in Gaza, 50% showed signs of methemoglobinemia in their blood samples. A study of the relationship between the concentration of nitrates in drinking water and methemoglobinemia in children under 6 months old was conducted in 2001. Twelve primary health care centers were involved, and the results showed a strong positive relationship between nitrate levels and contraction of the disease. The highest incidence of the disease was found in Khan Younis, coinciding with the highest levels of water nitrates. Moreover, the proportion of methemoglobinemia incidence was highest in children between 1 and 3 months old, due to their dependence on milk and hence their higher intake of nitrates, while it decreased in those between 3 and 6 months old (Ramahi, 2013).
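As a quick sanity check on the exceedance statistic above, the share of wells above the WHO guideline can be computed directly. The small helper and the sample readings below are illustrative assumptions; only the 222-of-245 ratio comes from the paper.

```python
# Count wells whose nitrate reading (mg/L) exceeds the WHO guideline.
WHO_NITRATE_LIMIT = 50  # mg/L, WHO guideline for nitrate in drinking water

def exceedance_share(readings_mg_per_l, limit=WHO_NITRATE_LIMIT):
    """Return (number of exceeding wells, percentage of total)."""
    exceeding = sum(1 for c in readings_mg_per_l if c > limit)
    return exceeding, 100 * exceeding / len(readings_mg_per_l)

# The paper's 2018 figure: 222 of 245 municipal wells above the limit.
print(f"{100 * 222 / 245:.1f}%")  # -> 90.6%

# Tiny synthetic example of the helper (made-up readings):
n, share = exceedance_share([30, 55, 120, 48, 300])
print(n, f"{share:.0f}%")  # -> 3 60%
```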
IV. Risk Assessment

The use of risk assessment as a decision-making tool has gained importance in the last two decades, because it has become evident that different situations cannot easily be labeled as either safe or unsafe (Langley et al., 2002). Risk does not have a single clear definition; everyday language uses the term to indicate a chance of danger or catastrophe. When used in the risk assessment context, it has a more precise definition: the combination of the probability or frequency of occurrence of a defined hazard and the magnitude of the consequences of that occurrence. Alternatively, risk assessment is the set of systematic steps that determine the potential effects of a chemical, physical, microbiological or psychosocial hazard on a certain human population or ecological system under a certain set of conditions and for a specified period of time. It is a set of logical, systematic and well-defined activities that provide risk managers, policy makers, regulators and decision makers with comprehensive information and a sound identification, measurement, estimation and evaluation of the risk linked to specific natural incidents or man-made activities, so that the best possible decisions can be made (Blumberga, 2001; Langley et al., 2002). The risk assessment process gives a systematic framework for characterizing the nature and extent of the risks related to specific hazards. Its main goal is to provide the best available scientific, social and practical information concerning the risks, so that this information can be studied extensively, the best alternatives formulated, and the best decisions taken (Petts and Eduljee, 1994; Langley et al., 2002). Tracking a risk requires an incident, a transport pathway, and a receptor that could be impacted at the place of exposure. Basically, risk assessment gives a well-organized framework for figuring out the nature and extent of the relationship between cause and effect (Petts and Eduljee, 1994; Langley et al., 2002). Good risk assessment requires a high level of scientific skill and objectivity and should be distinguished from the risk management process, which chooses among alternatives based on the results of health risk assessments. Risk management requires scientific, social, financial and political data, and also requires judgment to measure the required degree of risk tolerance and the reasonableness of expenses. Risk assessment should give a reliable, objective, applicable and balanced analysis (Langley et al., 2002).

Health risk assessment: The process used to determine the potential effects of physical, biological, chemical or social agents on a certain human population under specified conditions within a certain time period is referred to as health risk assessment (Langley et al., 2002). It includes identifying, analyzing and presenting information in terms of risk to human health, to support planning and decision making. In this process, assembling all social and economic data concerning the decision is not always required; nor is its approach intended to be part of the planning and management processes (MOELP, 2000). In the past two decades, the public's consciousness of health risks has been greatly raised despite the limited risk assessment work conducted. A great portion of this increased awareness has arisen because of the extensive risk associated with some wastes, such as nuclear waste, and the consequent dumping of such waste into the ground; this practice transfers the toxic substances in the waste to the groundwater and hence contaminates it.
The public generally do not have benchmarks for making a scientific comparison among the different threats to their health, their safety and the quality of the environment. The public's increased awareness of the risks of groundwater contamination has not only driven the search for clean and safe water but also greatly contributed to the search for alternative safe water resources (Garrick, 2002).

Strengths of risk assessment:
- It is a mechanism that aids decision making, especially the choice between options for risk reduction.
- It is a means of comparison between risks, to determine whether there is equity of action or whether the action is proportionate to the risk.
- It is a technique that can break down complex systems and identify the areas of a process or plant where risk reduction options can be most effective.
- It clearly outlines the relationship connecting the natural environment and human activities.
- It reassures stakeholders that probable changes to the environment from human activities are being taken into account.
- It is scientifically valid, defensible and applicable (MOELP, 2000).

Uncertainty of risk assessment: Although risk assessment is a well-established scientific approach, it has some limitations. It must be recognized that the present state of knowledge concerning the impacts of a specific constituent is incomplete. Thus, each step in risk assessment involves uncertainty, and the best possible use of the available information must be ensured. In hazard identification, most assessments depend on animal tests, yet the biological systems of animals differ from those of humans. In dose-response assessment, it is often unknown whether safe levels or thresholds exist for a given toxic chemical. Exposure assessment usually involves modeling, with the attendant uncertainty as to substance release, release characteristics, meteorology and hydrology.
Because of these uncertainties, risk assessment gives only an estimate of the risk, not the real impacts associated with a proposed project or existing facility; the results of such an analysis should therefore be used only as a guide in decision making (Langley et al., 2002; MOE, 1999; MOELP, 2000). To overcome these uncertainties, the worst-case or reasonable worst-case hazard exposure scenario is assumed. Using higher-than-expected exposure cases confirms any expected problems and identifies the dangerous pollutants or sensitive exposure pathways for more detailed study. In general, this will overestimate the real risk values; risk values that exceed an environmental or human health protection standard can then be considered a warning flag rather than a real risk or impact (MOE, 1999).

V. Methods and Materials

Risk assessment models: Risk assessment work is mainly based on deterministic approaches whose objective is to track the transport of contaminants from source to receptor along different pathways. These approaches are usually simple and need not be complicated. Deterministic analysis models usually use the mean or median values at the central part of the parameter distributions. Another method of analysis is the use of model parameter values to predict the risks for people with exposure above the 90th percentile of the population. To address parameter and model uncertainty, and to estimate the risk at different pre-selected percentiles of the population, probabilistic analysis is used in the risk assessment process. The outputs of such approaches can be presented as point estimates or as probabilistic risk estimates (50th, 90th, 95th, 97.5th and 100th percentiles). Estimated risk values can be obtained for different categories of receptors, for example adults, children and residents (Metcalf and Eddy, 2003).
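The deterministic-versus-probabilistic distinction above can be sketched with a small Monte Carlo simulation. Everything in this sketch is an assumption for illustration only: the distributions, their parameters, and the `simulate_intake` helper are invented, not taken from this study. The point is only how percentile risk estimates are produced from repeated parameter draws instead of single mean values.

```python
import random

# Illustrative probabilistic risk analysis: instead of a single
# deterministic mean, draw exposure parameters from assumed
# distributions and report intake at pre-selected percentiles.
random.seed(1)

def simulate_intake():
    # Assumed distributions, for illustration only.
    cw = max(random.gauss(100, 25), 0.0)   # concentration, mg/L
    ir = max(random.gauss(2.0, 0.4), 0.0)  # ingestion rate, L/day
    bw = max(random.gauss(70, 10), 1.0)    # body weight, kg
    return cw * ir / bw                    # chronic intake, mg/kg/day

intakes = sorted(simulate_intake() for _ in range(10_000))

def percentile(sorted_vals, p):
    # Nearest-rank percentile of an already-sorted sample.
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

for p in (50, 90, 95, 97.5):
    print(f"{p}th percentile intake: {percentile(intakes, p):.2f} mg/kg/day")
```

The upper percentiles (95th, 97.5th) correspond to the higher-than-expected exposure cases discussed above, which deliberately overestimate risk for screening purposes.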
Steps of risk assessment: Risk assessment involves four distinct steps:

1. Hazard identification: This includes defining the pollutants assumed to pose human health hazards, measuring their concentrations in the environment, and identifying the exact form of toxicity and the conditions under which these forms of toxicity may be found in the exposed population. The step also includes assessing the available evidence and specifying whether a substance or pollutant causes a certain adverse health hazard. As part of hazard identification, evidence is gathered on the potential for a substance to cause negative health impacts to humans or unacceptable environmental impacts. For humans, the principal sources of this information are clinical studies, controlled epidemiological studies, experimental animal studies, and evidence gathered from accidents and natural disasters (Garrick, 2002; Metcalf and Eddy, 2003).

2. Exposure assessment: Exposure is the means by which the hazard comes into contact with the organism; exposure, or access, is the path that bridges the gap between the hazard and the population. For humans, exposure can occur through different pathways, including breathing air, ingesting food or drinking water, absorption through the skin via dermal contact, or exposure to radiation. The main steps of exposure assessment are: defining the expected receptors from a given population, evaluating pathways and routes of exposure, and quantifying the amount of exposure (Garrick, 2002; Watts, 1998; Metcalf and Eddy, 2003).

3. Dose-response assessment: The fundamental goal of this step is to define a relation, usually mathematical, between the quantity of the toxic substance to which a human is exposed and the risk (unhealthy response) of that dose in humans.
Typical dose-response models that have been proposed and used for human exposure include the single-hit model, the multi-stage model, the linear multistage model, the multi-hit model, and the probit model (Metcalf and Eddy, 2003).

4. Risk characterization: This is the last step in risk assessment, in which the questions of who is affected and what the likely effects are, are answered to the extent they are known. It integrates the hazard identification, dose-response, and exposure assessments. It gives a general evaluation of the quality of the whole risk assessment process, as well as the confidence levels the assessors have in estimating the risk and formulating conclusions. It provides a description of the risk to individuals and communities in terms of the extent and severity of potential harm, and it connects the outputs of the risk assessment process to the risk manager (Garrick, 2002; Metcalf and Eddy, 2003). For carcinogenic pollutants, human exposures are transformed into a lifetime cancer risk; many standards specify a lifetime risk of 0.000001 (10^-6) or less as insignificant. For pollutants responsible for other toxic effects, such as lung disease, birth defects or nerve damage, exposure is compared to an established health protection standard and the exposure ratio is then estimated; a ratio < 1 indicates that minimal or no negative health impacts are expected (MOE, 1999). Based on the previous steps, the risk value can be calculated using the equations adopted by Watts (1998):

I = (CW × IR × EF × ED) / (BW × AT)   (1)

Risk = HI = I / RfD   (2)

where:
I = daily intake (mg/kg.day)
CW = contaminant concentration (mg/L)
IR = ingestion rate (L/day)
EF = exposure frequency (days/year)
ED = exposure duration (years)
BW = body weight (kg)
AT = averaging time (years)
HI = hazard index
RfD = reference dose

VI. Results and Discussion

Nitrate, a soluble non-carcinogenic chemical, is used in the risk assessment task.
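Equations (1) and (2) translate directly into code. The sketch below is illustrative only: the exposure parameter values are assumptions, not figures from this study, and the averaging time is expressed here in days so that its units cancel with EF × ED (a common convention for non-carcinogenic chronic exposure).

```python
def daily_intake(cw, ir, ef, ed, bw, at_days):
    """Eq. (1): I = (CW * IR * EF * ED) / (BW * AT).

    cw: contaminant concentration (mg/L)
    ir: ingestion rate (L/day)
    ef: exposure frequency (days/year)
    ed: exposure duration (years)
    bw: body weight (kg)
    at_days: averaging time in days, so units reduce to mg/kg/day
    """
    return (cw * ir * ef * ed) / (bw * at_days)

def hazard_index(intake, rfd):
    """Eq. (2): Risk = HI = I / RfD. HI < 1 means minimal expected impact."""
    return intake / rfd

# Illustrative adult scenario (assumed values, not from the study):
# 100 mg/L nitrate, 2 L/day ingestion, daily exposure over 30 years.
i = daily_intake(cw=100, ir=2.0, ef=365, ed=30, bw=70, at_days=30 * 365)
hi = hazard_index(i, rfd=1.6)  # nitrate RfD of 1.6 mg/kg/day, per the paper
print(f"I = {i:.2f} mg/kg/day, HI = {hi:.2f}")  # HI > 1: unacceptable risk
```

With daily exposure over the full averaging period, EF × ED / AT reduces to 1, so the intake simplifies to CW × IR / BW; this is why the well concentration and the receptor's body weight dominate the result.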
Nitrate has a reference dose (RfD) of 1.6 mg/kg/day, and the exposure route is ingestion. Nitrate levels in excess of 150 mg/L pose an extreme risk to infants' health in the form of blue baby syndrome; moreover, high nitrate levels may have carcinogenic effects in adults (Sharma and Reddy, 2004; Agha, 2006). The concentrations for the years 2007, 2011, 2015 and 2018 in the municipal wells of the Gaza governorates are used in the risk assessment calculations and presentation. Referring to Figures 8 to 11, the risk value for adults is acceptable (slightly less than 1) in only 27, 45, 38 and 22 municipal wells for the years 2007, 2011, 2015 and 2018, respectively, indicating that these wells may be used for municipal purposes. Most of these wells are located in the Middle and Gaza governorates; few are located in the Rafah, North and Khan Younis governorates. This unbalanced distribution is due to the intensive agricultural activities in the northern and Rafah governorates and the lack of full sewage network coverage in the Khan Younis governorate. The remaining wells all over the Gaza Strip cannot be used for some municipal purposes, since the health risk associated with their direct use is unacceptable; therefore, the probability of adverse health effects appearing in people who drink this water is high. It is worth mentioning that the number of wells suitable for municipal purposes increased in 2011 due to the digging of new wells in the Al-Mawassi area (which has good water quality). Unfortunately, the quality of this water deteriorated soon afterwards, because the Gaza aquifer is over-pumped at four times its sustainable yield, resulting in seawater intrusion. The situation is riskier for children (as indicated in Figures 12 to 15) if they depend on these municipal wells for drinking purposes.
for infants, the situation is the riskiest if they depend on these municipal wells for drinking purposes, as shown in figures 16 to 19. referring to figures 12 to 19, one may conclude that the risk value for children and infants is unacceptable in almost all wells of the gaza governorates. for children, the number of wells with an acceptable risk value is 3, 5, 1 and 0 for the years 2007, 2011, 2015 and 2018 respectively; for infants, only 2 wells had an acceptable risk value, in the year 2011. currently, none of the wells anywhere in the gaza strip is suitable as a drinking source for children and infants. referring to the risk values for the year 2018, the smallest risk value for children is 1.5 and for infants 2.1 (both > 1), which is unacceptable, while the largest values exceed 30 and 40 for children and infants respectively, which is extremely risky.

figure 8: risk map for adults (2007)
figure 9: risk map for adults (2011)
figure 10: risk map for adults (2015)
figure 11: risk map for adults (2018)
figure 12: risk map for children (2007)
figure 13: risk map for children (2011)
figure 14: risk map for children (2015)
figure 15: risk map for children (2018)
figure 16: risk map for infants (2007)
figure 17: risk map for infants (2011)
figure 18: risk map for infants (2015)
figure 19: risk map for infants (2018)

the causes of the higher risk values for infants and children compared with adults are that the dose is normalized to body weight, which is small for children and infants, and that the immune system of children and infants is still weak. groundwater should therefore not be given to children and infants for drinking anywhere in the gaza governorates. in the gaza governorates, the health risk value due to drinking groundwater is acceptable for adults at only a very limited number of wells, while it is unacceptable at all wells for infants. the risk value is expected to increase in the future, since nitrate concentrations are likely to rise unless immediate actions and intervention plans are taken. qrenawi (2006) performed a risk assessment of drinking groundwater in northern jordan; that study was supported by experimental measurements, and the health risk values were acceptable for adults (0.269 < 1) while unacceptable for infants (1.037 > 1). the outcomes of these studies indicate that groundwater contamination is a regional problem, and therefore efforts must be united and cooperation implemented at the highest levels to find reliable solutions.

vii. risk management

risk assessment alone gives a quantification of the risk (a numeric value); it is usually followed by risk management. risk management is a decision-making process based on the quantitative value obtained from risk assessment, coupled with judgment and experience (watts, 1998).
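a risk manager often needs the inverse question answered: what concentration would bring the hazard index down to 1? inverting equations (1) and (2) at hi = 1 gives the largest acceptable contaminant concentration for each exposure group. the sketch below does this for nitrate; the body weights and ingestion rates are illustrative assumptions, not the study's exact inputs.

```python
# sketch: inverting equations (1) and (2) at hi = 1 gives the largest
# contaminant concentration with no expected adverse effect; body weights
# and ingestion rates below are illustrative assumptions
def acceptable_concentration(rfd, bw, ir):
    """cw_max (mg/l) such that hi = 1, assuming at (days) = ed * 365 and
    ef = 365 days/year, so eq. (1) reduces to i = cw * ir / bw."""
    return rfd * bw / ir

RFD_NITRATE = 1.6  # mg/kg/day, as given in the text
for group, bw, ir in [("adult", 70.0, 2.0),
                      ("child", 15.0, 1.5),
                      ("infant", 5.0, 0.75)]:
    print(group, round(acceptable_concentration(RFD_NITRATE, bw, ir), 1))
```

with these assumed parameters the child and infant thresholds come out near 16 and 11 mg/l, the same order as the nitrate reduction targets discussed for the gaza governorates' wells.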
the primary function of risk management is to propose mitigation options that minimize the risk. this can be accomplished by different actions, including avoiding the action, lowering its extent, correcting the impacts by rehabilitation or restoration, and compensating for the impact by providing substitute resources or environments (petts and eduljee, 1994). risk management is commonly described as the process of evaluating alternative regulatory or other actions directed at reducing the health risk, and selecting among these alternatives. the selected alternative should be applicable: for example, reducing the risk values in the gaza governorates' wells to acceptable levels requires reducing nitrate to 50 mg/l, 16 mg/l and 11 mg/l for adults, children and infants respectively, which is not feasible at this time for children and infants due to the current complicated local situation. the health risk can be managed by reducing or eliminating the dependency on groundwater for drinking purposes. residents of the gaza governorates have the right to know that their sources of water are contaminated to unsafe levels.

viii. conclusions and recommendations

 the problem of groundwater in the gaza governorates is a combined one, since it is both a quantity and a quality problem.
 the health risk values for adults are acceptable in only 8 municipal wells in the gaza governorates, which means that 237 wells should not be used for municipal purposes or direct drinking without proper treatment.
 the risk values for infants are unacceptable in all municipal wells in the gaza governorates, indicating that none of them is suitable for drinking.
 the main sources of nitrate in the gaza governorates are the infiltration of untreated wastewater into subsurface layers, the leaching of chemical fertilizers, and landfill leachate.
 the problem of groundwater contamination is a regional one that needs joint efforts to find suitable and sustainable solutions.
 risk assessment has been widely used in recent decades as a powerful decision-support tool.
 immediate actions should be taken to reduce the risk associated with drinking groundwater to acceptable levels in order to protect the health of the people who rely on it.
 looking for alternative water resources should be put at the top of the agenda of the responsible authorities so that public health is protected.
 community participation should be encouraged through education and awareness campaigns; schools, universities and non-governmental organizations (ngos) are well placed to conduct such activities.
 residents of the gaza governorates who depend on groundwater for drinking have the right to know that their source of water is unsafe.
 further studies on the impact of nitrate in groundwater on public health in the gaza strip should be performed, supported by field surveys and reviews of medical reports, to find the relationship between nitrate concentration and the accompanying diseases.

ix. acknowledgements

this work was performed during study in the phd program in water technology, civil engineering department, islamic university of gaza. special thanks are directed to the middle east desalination research centre (medrc) for their fellowship and financial support, and to the palestinian water authority (pwa) for providing data.

x. references

abu naser, a., ghbn, n. & khoudary, r. 2007. relation of nitrate contamination of groundwater with methaemoglobin level among infants in gaza. eastern mediterranean health journal, 13.
agha, s. r. 2006. use of goal programming and integer programming for water quality management – a case study of gaza strip. european journal of operational research, 174, 1991-1998.
aiash, m. & mogheir, y. 2017. comprehensive solutions for the water crisis in gaza strip. iug journal of natural studies, 25, 63-75.
blumberga, m. 2001. risk assessment of the skede landfill in liepaja, latvia. msc thesis, stockholm.
cawst 2009. introduction to drinking water quality testing. canada.
eldadah, j. 2013. using treated wastewater as a potential solution of water scarcity and mitigation measure of climate change in gaza strip. journal of water resources and ocean science, 2, 79-83.
eldadah, j., latif, m. a. & bana, m. e. 2007. agricultural and municipal water demand in gaza governorates for 2006. gaza: palestinian water authority.
epa (environmental protection agency) 2001. parameters of water quality "interpretation and standards". ireland.
garrick, b. j. 2002. the use of risk assessment to evaluate waste disposal facilities in the united states of america. safety science, 40, 135-151.
ibrd 2009. west bank and gaza: assessment of restrictions on palestinian water sector development. washington dc: middle east and north africa region, sustainable development.
langley, a., dempsey, j., davies, l., et al. 2002. environmental health risk assessment: guidelines for assessing human health risks from environmental hazards. australia: commonwealth of australia.
lubbad, i. & al-yaqoubi, a. 2007. investigation of the main sources of nitrate in the groundwater of gaza strip.
gaza: palestinian water authority (pwa).
metcalf & eddy 2003. wastewater engineering: treatment and reuse. mcgraw-hill, new york, ny.
moe 1999. environmental risks of municipal non-hazardous waste landfilling and incineration: technical report summary. ontario: ministry of the environment, environmental sciences & standards division.
moelp 2000. environmental risk assessment (era): an approach for assessing and reporting environmental conditions. british columbia: ministry of environment, lands and parks (moelp), habitat branch, technical bulletin 1.
petts, j. & eduljee, g. 1994. environmental impact assessment for waste treatment and disposal facilities. john wiley & sons ltd.
pwa (palestinian water authority) 2018. quality data of municipal wells. in: water resources directorate (ed.). gaza strip, palestine.
qrenawi, l. 2006. environmental and health risk assessment of al-akaider landfill. master's thesis, jordan university of science and technology.
qrenawi, l. 2007. wastewater situation in gaza governorates – facts and challenges. palestinian water authority.
qrenawi, l., qeshta, m. & syam, m. 2002. construction and demolition wastes reuse and management. bachelor's thesis, islamic university of gaza.
ramahi, s. 2013. the health risks posed by water pollution in the gaza strip. london, united kingdom: middle east monitor (memo).
sharma, h. d. & reddy, k. r. 2004. geoenvironmental engineering: site remediation, waste containment, and emerging waste management technologies. john wiley & sons.
shelton, t. b. & lance, s. e. 1999. interpreting drinking water quality analysis: what do the numbers mean? rutgers cooperative extension.
shomar, b., osenbrück, k. & yahya, a. 2008. elevated nitrate levels in the groundwater of the gaza strip: distribution and sources. science of the total environment, 398, 164-174.
watts, r. j. 1998. hazardous wastes: sources, pathways, receptors.
wrd-pwa 2014. gaza water resources status report 2013/2014.
gaza: palestinian water authority (pwa), water resources directorate (wrd).
wrd-pwa 2018. gaza water status report 2017. palestinian water authority (pwa), water resources directorate (wrd).

luay i. qrenawi: mr. qrenawi is a phd student at the joint water technology program between the islamic university and al azhar university. he holds an msc in environmental and water resources engineering from the jordan university of science and technology (2006) and a bsc in civil engineering from the islamic university of gaza (2002). currently, he is a lecturer at the university college of applied sciences. he has more than 15 years of experience in both the academic and professional fields related to civil engineering, with strong experience in environmental and water resources engineering, focusing on the solid waste, wastewater and water sectors. he occupied the position of wastewater reuse expert at the palestinian water authority, and was the project coordinator at the islamic university of gaza on a project concerning the improvement of drinking water quality in elementary schools using reverse osmosis. he has worked effectively on many local and regional projects concerning solid waste management, reuse, recycling and recovery, as well as assessing the impacts and risks associated with landfilling practices, and he participated in a comprehensive study of solid waste dumpsites in the gaza strip. during many research and environmentally oriented projects, he mastered the concepts and techniques of compost production from agricultural and municipal solid wastes. mr. qrenawi's publications are:
1. sludge management in water treatment plants: literature review, international journal of environment and waste management, july 2019 (accepted for publication)
2. developing an automated system for irrigation, university college of applied sciences, 2014
3. solid waste landfills as a source of green energy: case study of al akeeder landfill, international conference and exhibition on green energy & sustainability for arid regions & mediterranean countries, 2009
4. simulation of leachate production from an arid landfill using help model, solid waste technology and management, 2008
5. health risk assessment of groundwater contamination in gaza strip, jan. 2008
6. agricultural wastewater reuse guidelines and standards in selected countries, november 2007
7. wastewater situation in gaza governorates (facts and challenges), november 2007
8. assessment of solid waste dumpsites in gaza strip, march 2007 (arabic + english versions)
9. monitoring program report of rafah beach, 2007
10. protection of marine environment, the safe and sustainable utilization of fish, october 2006
11. pesticides and their effects on the environment and public health, august 2006
12. utilization of construction and demolition waste in making concrete hollow blocks, icpcm a new era of building, egypt, 2003
https://www.researchgate.net/profile/luay_qrenawi2

reem abu shomar: mrs. abu shomar is a phd student at the joint water technology program between the islamic university and al azhar university. she is a senior public health specialist who holds a master's degree in public health from al quds university, jerusalem. she has professional experience in the management of public health research projects implemented by multi-disciplinary teams and coordinated with multiple stakeholders. her working experience involves national and international organizations, and she is currently working as a public health and environmental specialist at the program coordination unit hosted by the palestinian water authority. she participated in the development of national strategies and guidelines.
she also participated in many local and international conferences related to public health, water, sanitation and hygiene. research gate: https://www.researchgate.net/profile/reem_abu_shomar

journal of engineering research and technology, volume 5, issue 1, march 2018

porous asphalt: a new pavement technology in palestine

shafik jendia 1, ziad aldahdooh 2, mohammed aburahma 2, mahmoud abujayyab 2, abdelkarim eldahdouh 2
1 professor of highway engineering, islamic university of gaza, palestine
2 faculty of engineering, islamic university of gaza, palestine

abstract—porous asphalt (pa) is researched and produced in europe and the united states of america to increase the skid resistance of the asphalt surface. this is achieved when storm water infiltrates directly into the ground through the porous surface layer. so far, this type of pavement has not been used in palestine; if pa is used there, it may contribute to solving many local problems, especially the groundwater deficit. therefore, this research studies the possibility of producing porous asphalt as a new pavement technology in palestine. based on several international studies, proposed limits for the aggregate gradation were determined. to investigate the applicability of the local materials and the proposed gradation limits, several tests were conducted, including sieve analysis, specific gravity, absorption, abrasion, impact and crushing value. bitumen tests were also conducted, such as penetration, softening point, ductility and specific gravity. asphalt mixtures were then prepared in accordance with the proposed gradation curves, followed by testing of 24 pa specimens to determine the mechanical properties, especially stability, flow and bulk density.
the results showed that any aggregate blending curve lying between the proposed gradation limits can be used to produce pa. marshall test results showed that the optimal bitumen content was approximately 4% by weight of the total mixture, and the void ratio obtained for the produced asphalt was approximately 21%.

index terms— porous asphalt, marshall, void ratio, aggregate gradation.

i. introduction

porous asphalt (pa) is a new pavement technology. it has been researched and produced at several sites worldwide, especially in europe and the united states. the surface permeability and high porosity of pa allow water to pass vertically through the pavement to the subgrade below, naturally recharging groundwater levels. the water is stored temporarily in a stone reservoir in the base: a uniformly graded layer of clean crushed stone with around 40% voids, thick enough to allow sufficient water storage during the anticipated rain events; the stored water is often allowed to infiltrate into permeable subgrade soils and can recharge the groundwater directly [1, 2, 3, 4]. unlike conventional pavements, pa is typically built over an uncompacted subgrade to maximize the infiltration of water into the soil. above the uncompacted subgrade is a geotextile fabric, which prevents the migration of fines from the subgrade into the stone recharge bed while still allowing water to pass through, as shown in figure (1). pa has not been used in palestine despite its potential benefits, such as reducing runoff, increasing skid resistance and improving the quality of the surface water that infiltrates directly into the ground through the porous surface layer. due to its advantages (listed below), pa could contribute to solving many local problems in palestine. the objectives of this study are: a) to determine a suitable aggregate gradation for the local aggregates used in the asphalt mix, and b) to determine the highest void ratio that can be reached using the local materials.
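as a rough illustration of the storage idea described above, a common sizing rule for such a stone bed divides the design-storm runoff depth by the bed's void ratio; the sketch below (with assumed numbers, not values from this study) shows the arithmetic.

```python
# common sizing sketch for a stone recharge bed under porous asphalt: the bed
# must be deep enough to store the design-storm runoff in its ~40% voids;
# the numbers below are illustrative assumptions, not values from this study
def reservoir_depth_m(runoff_depth_m, void_ratio=0.40):
    """minimum stone-bed depth so that (depth * void_ratio) >= runoff depth."""
    return runoff_depth_m / void_ratio

# e.g. storing a 50 mm design storm falling on the pavement itself
print(round(reservoir_depth_m(0.050), 3))  # -> 0.125 (m)
```

so a bed roughly 12.5 cm deep would store a 50 mm storm under these assumptions; real designs also account for run-on from adjacent areas and the subgrade infiltration rate.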
figure 1: typical section of porous asphalt pavement.

ii. advantages

there are several advantages of porous asphalt pavement [1 – 8]. some of them are summarized below:
1. removing pollutants and improving water quality.
2. melting snow and ice quickly, reducing the need for de-icing salt.
3. recharging groundwater to underlying aquifers and providing flood control.
4. increasing permeability, potentially improving water quality through filtering.
5. improving water and oxygen transfer to nearby plant roots.
6. improving skid resistance, splash and spray behavior, and driving speed.
7. reducing hydroplaning on pavement surfaces and reducing glare on the road surface, specifically in wet night conditions.
8. absorbing noise from tires and engines (sound is not reflected but absorbed by the porous layer).
9. reducing fuel consumption due to enhanced smoothness.
10. reducing tire wear on the asphalt.
11. extending pavement life due to a well-drained base.

iii. materials and test results

a. material properties

in this study, all important laboratory tests were conducted to evaluate the properties of the used bitumen. table (1) illustrates the test results.

table 1: bitumen properties and specifications.
test | standard | result | specification values
density (g/cm3) | aashto t 228-94 / astm d 70-03 | 1.03 | 1.01-1.06
penetration (1/10 mm) | aashto t 49-96 / astm d 5-97 | 68.33 | 60-70
ductility (cm) | aashto t 51-94 / astm d 113-99 | 150 | min 100
softening point (°c) | aashto t 53-96 / astm d 36-95 | 49.6 | min 48, max 56
flash and fire points (°c) | aashto t 48-96 / astm d 92-02b | 286 / 326 | min 230, max 330

also, laboratory tests were conducted on the aggregates to determine their properties. table (2) illustrates the test results.

table 2: aggregate test results according to astm specifications.
aggregate test | result
specific gravity (g/cm3) | 2.58-2.61
water absorption (%) | 1.87-3.0
resistance to degradation of small-size coarse aggregate by abrasion in the los angeles machine (%) | 19.2
sieve analysis of fine and coarse aggregates | see appendix [b]
materials finer than the no. 200 sieve in mineral aggregates by washing | see appendix [b]

b. blending of aggregates

several specifications and studies [1-10] were examined to determine the best aggregate gradation, as shown in appendix [a]. the result is presented in figure (a-1), which illustrates the proposed gradation limits in comparison with the international gradations. the suggested limits and the result of the aggregate blending process followed for this purpose are illustrated in figure (2) and table (3). the blending procedure [9] for all aggregate types is presented in table (b-1); for this purpose, all aggregates were brought from the stockpiles available in the asphalt factories in the gaza strip. from table (b-1) it is clear that the filler portion (particle size less than 0.075 mm) is approximately 4%. the maximum filler percentage (5.0%) is relatively small compared to that for dense asphalt. this means that, in order to produce a large void ratio in the pa, the amount of mortar (bitumen + filler) should be relatively small.

table 3: blending of stockpile aggregates.
aggregate (size, mm) | blending (%)
simsimia (0/12.5) | 50
adasia (0/25) | 45
folia (0/37.5) | 5

figure 2: aggregate gradation curve in comparison with the suggested limit curves

c.
mechanical test results

in order to study the mechanical properties of the porous asphalt (pa) mixture (stability, flow, bulk density ρ, air voids v_a, volumetric bitumen content v_b, voids in mineral aggregates vma and voids filled with bitumen vfb), the marshall method was used in this research. accordingly, the selected aggregates with the determined gradation were mixed carefully with three different bitumen contents in the laboratory, and eight marshall specimens were produced for each bitumen content (24 specimens in total). the results of the mechanical properties are given in table (4).

table 4: marshall test results.
m_b (%) | stability (kn) | flow (mm) | ρ (g/cm3) | v_a (%) | v_b (%) | vma (%) | vfb (%)
3.50 | 6.57 | 2.85 | 1.90 | 20.90 | 6.45 | 27.37 | 23.64
4.00 | 4.97 | 2.88 | 1.91 | 20.19 | 8.59 | 27.60 | 26.85
4.50 | 5.38 | 2.90 | 1.91 | 19.42 | 8.34 | 27.78 | 30.09

from table (4) it is obvious that the air void ratio reached in the produced pa specimens is relatively large (approximately 20%) compared with that of dense asphalt (maximum 8%). this ratio lies in the acceptable range discussed in the references mentioned above [1-10].

d. optimum bitumen content m_b (%)

based on the specimen testing, the main relationships between bitumen content and the obtained values of marshall stability, flow, bulk density and air void content are presented in table (4). the optimum bitumen content of the mixture is the numerical average of three values, as described in the following equation [9]:

m_b = (a + b + c) / 3

where:
m_b = optimum bitumen content (%)
a = bitumen content at maximum bulk density (%)
b = bitumen content at maximum stability (%)
c = bitumen content at minimum air void content (%)

table (5) illustrates the bitumen content for each property.
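the averaging rule from [9] can be sketched directly; the three bitumen contents used in the call below are those reported in table (5).

```python
# sketch of the optimum-bitumen-content rule m_b = (a + b + c) / 3 [9]:
# the simple average of the bitumen contents at maximum bulk density,
# maximum stability and the required air-void content
def optimum_bitumen_content(a, b, c):
    """m_b = (a + b + c) / 3, all in % by weight of total mixture."""
    return (a + b + c) / 3.0

# values from table (5): a = 4.5 (max bulk density), b = 3.5 (max stability),
# c = 3.5 (required v_a)
print(round(optimum_bitumen_content(4.5, 3.5, 3.5), 2))  # -> 3.83
```

rounding 3.83% to the nearest practical value gives the 4% optimum bitumen content adopted in the study.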
table 5: mechanical properties of pa and corresponding bitumen contents.
property | value | m_b (%)
maximum stability | 6.57 kn | 3.50
maximum bulk density | 1.91 g/cm3 | 4.50
required v_a | 20.9 % | 3.50

optimum m_b (%) = (3.5 + 4.5 + 3.5) / 3 = 3.83 ≈ 4%

figure (3) presents the water permeability through one of the produced specimens with v_a = 20.9%.

iv. conclusions and recommendations

1. porous asphalt (pa) can be produced successfully with local materials in palestine, provided the gradation of the selected aggregate lies within the limits suggested in table (6).

table 6: sieve size passing percentages for the limit curves.
sieve size (mm) | min passing (%) | max passing (%)
22.4 | 100 | 100
16 | 93 | 100
12.5 | 85 | 100
11.2 | 70 | 100
9.5 | 5 | 100
4.75 | 5 | 35
2 | 5 | 15
0.075 | 2 | 5

2. the effective bitumen content obtained using the marshall method should be approximately 4%. a bitumen content much less than 4% increases the possibility of surface raveling.
3. the maximum air void ratio that can be reached using local materials is approximately 21%.
4. the marshall stability and bulk density of the pa specimens are lower than those of dense asphalt concrete.
5. the marshall flow values are suitable; they lie in the acceptable range (2-4) mm.

figure 3: porous asphalt specimen during the permeability test

v. references

[1] environmental protection agency (epa), "porous pavement. national pollutant discharge elimination system", 2007. [online]. available: http://cfpub.epa.gov/npdes/stormwater/menuofbmps/index.cfm. [accessed 1 november 2016].
[2] lebens, m., "porous asphalt pavement performance in cold regions", minnesota department of transportation, minnesota, 2012.
[3] wisconsin asphalt pavement association (wapa), "tech bulletin: porous asphalt pavements", wisconsin, 2015.
[4] u.s. federal highway administration (fhwa), "porous asphalt pavements with stone reservoirs", 2015.
[5] lori, k. s.,
“porous asphalt pavement designs: proactive design for cold climate”, university of waterloo, canada, 2007.
[6] the unh stormwater center, "porous asphalt pavement for storm water management", 2009.
[7] cahill, h., michele, a., courtney, m., "stormwater management with porous pavements", 2005.
[8] quantao, l., “induction healing of porous asphalt concrete”, university of technology, p.r. china, 2012.
[9] jendia, s., “highway engineering – structural design”, dar al manara, gaza, 2000.
[10] velske, s., mentlein, h., eymann, p., “strassenbau – strassenbautechnik”, germany, 2778.

appendix a

figure a-1: the proposed gradation limits in comparison with the international gradations (curves shown: dutch, napa, franklin, velske pa11, velske pa8, suggested gradation limits)

appendix b

table b-1: local aggregate blending procedure (percent retained per size range; the second row of each stockpile is scaled by its blending share).
size range (mm) | 0-0.075 | 0.075-0.3 | 0.3-0.6 | 0.6-2 | 2-4.75 | 4.75-9.5 | 9.5-12.5 | 12.5-25 | 25-37.5 | blending %
simsimia (12.5) | 5.58 | 1.82 | 0.88 | 4.31 | 31.53 | 55.45 | 0.42 | – | – | 50
(scaled) | 2.79 | 0.91 | 0.44 | 2.15 | 15.76 | 27.73 | 0.21 | 0.00 | 0.00 |
adasia (25) | 2.72 | 1.31 | 0.21 | 0.31 | 1.22 | 21.18 | 42.31 | 30.73 | – | 45
(scaled) | 1.22 | 0.59 | 0.10 | 0.14 | 0.55 | 9.53 | 19.04 | 13.83 | 0.00 |
folia (37.5) | 1.50 | 0.50 | 0.10 | 0.21 | 1.01 | 7.84 | 16.60 | 69.71 | 2.54 | 5
(scaled) | 0.07 | 0.02 | 0.01 | 0.01 | 0.05 | 0.39 | 0.83 | 3.49 | 0.13 |
total | 4.09 | 1.52 | 0.54 | 2.31 | 16.36 | 37.65 | 20.08 | 17.32 | 0.13 | 100
req. blending (passing) | 4.09 | 5.61 | 6.16 | 8.46 | 24.83 | 62.48 | 82.56 | 99.87 | 100 | 100
min. of proposed gradation | 2 | 2.3 | 2.8 | 5 | 5 | 5 | 85 | 100 | 100 |
max. of proposed gradation | 5 | 6 | 8 | 15 | 35 | 100 | 100 | 100 | 100 |
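the blending arithmetic behind table (b-1) is a weighted sum: each stockpile's percent-retained values are scaled by its blending share and summed across stockpiles, and accumulating the totals from the finest range upward gives the "req. blending" passing curve. a minimal sketch, using assumed two-stockpile numbers rather than the table's full data:

```python
# sketch of the stockpile-blending arithmetic used in table (b-1);
# the two-stockpile, three-size-range example numbers are assumptions
def blend(fractions_by_stockpile, shares):
    """combine per-size-range percent-retained lists (finest range first)
    by scaling each stockpile's fractions by its blend share (%) and summing."""
    n = len(fractions_by_stockpile[0])
    total = [0.0] * n
    for fracs, share in zip(fractions_by_stockpile, shares):
        for i, f in enumerate(fracs):
            total[i] += f * share / 100.0
    return total

def cumulative_passing(retained):
    """running sum of retained fractions, finest first = percent passing."""
    out, s = [], 0.0
    for r in retained:
        s += r
        out.append(round(s, 2))
    return out

combined = blend([[10.0, 30.0, 60.0], [40.0, 40.0, 20.0]], [50, 50])
print(combined)                      # -> [25.0, 35.0, 40.0]
print(cumulative_passing(combined))  # -> [25.0, 60.0, 100.0]
```

applying the same two functions to the nine size ranges and 50/45/5 shares of table (b-1) reproduces its "total" and "req. blending" rows (up to rounding).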
journal of engineering research and technology, volume 7, issue 2, october 2020

blockchain-based quality of service for healthcare system in the gaza strip

abdelkhalek i. alastal, raed a. salha, maher a. el-hallaq
doi: https://doi.org/10.33976/jert.7.2/2020/4

abstract— electronic health records (ehrs) are critical, highly sensitive private information in healthcare, and need to be frequently shared among different parties, for example patients, physicians and administrators. blockchain provides a shared, immutable and transparent record of all transactions, enabling applications to be built with trust, accountability and transparency. this provides a unique opportunity to develop a secure and trustable ehr data management and sharing system using blockchain. this study aims to develop the use of health records and to assess the current status of ehrs by designing a checklist that measures the extent of ehr use in gaza strip hospitals, and to explore the possibility of using blockchain technology to develop the use of electronic health records so that accurate and complete health data can be shared between multiple parties such as patients, doctors and managers in an effective, transparent and secure manner.

index terms— blockchain technology, distributed ledger technology, electronic health records, healthcare, information management.

i. introduction

correct, accurate and timely information is the most important foundation for smart cities. blockchain, big data and artificial intelligence are among the most important technologies used in smart city applications [1-3], and one of the most important applications of smart cities is healthcare. an ehr management system aims to provide the full history of a patient, containing the right information in the right place, in the right order, at the right time, for the right person, at the lowest cost, while considering the security of patient data.
there are no ehrs designed to manage multi-institutional, lifetime medical records. many global healthcare systems are in crisis because of prohibitive costs, limited access to care, patient safety and privacy concerns, data breaches, and varying quality of care. a patient's health record is an important part of the information needed for medical treatment, including personal data, a personal medical profile, allergies, etc. several hospitals adopt information and communication technology to manage patient medical records in what is called an electronic health record system. the current ehr systems in most countries are not very effective, as hospitals use different programs. the ehr systems in hospitals are private and developed for use within one organization only: some hospitals have developed their own systems, while others have purchased ready-to-use application software, so the systems are not connected to each other. if a patient transfers for any reason to another hospital, the patient's medical profile cannot be accessed from the new hospital. as a result, patients receive limited benefit from the data in private ehr systems. such problems make hospitals unable to transfer important information between the stakeholders in the system with confidence, and patients are not confident about the security and privacy of their data [4]. blockchain technology has the potential to accommodate the exchange of existing data safely, keeping information confidential, accurate and ready to use [5]. blockchain offers a potential solution: it enables us to put the patient at the center of the healthcare ecosystem and increases the security, auditability and privacy of sensitive health data, as well as the interoperability of the systems that contain such personal information.
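the immutability and transparency attributed to blockchain above rest on hash-linking: each record embeds the hash of its predecessor, so altering any past entry invalidates every later link. the toy sketch below (an illustration of the idea only, not a production ehr design) shows this property.

```python
# minimal, illustrative hash-chain: each block embeds the hash of the
# previous block, so tampering with history is detectable on re-validation
# (a teaching toy, not a production ehr or consensus design)
import hashlib
import json

def make_block(record, prev_hash):
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        # each block's stored hash must match its recomputed contents,
        # and its "prev" pointer must match the previous block's hash
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"patient": "p1", "note": "visit"}, "0" * 64)]
chain.append(make_block({"patient": "p1", "note": "lab result"},
                        chain[-1]["hash"]))
print(chain_is_valid(chain))           # -> True
chain[0]["record"]["note"] = "edited"  # tamper with history
print(chain_is_valid(chain))           # -> False
```

a real blockchain adds distributed consensus, access control and off-chain storage of the sensitive payloads; the hash-chain is only the core mechanism that makes unauthorized edits evident.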
Therefore, doctors can diagnose symptoms precisely from past medical treatment, and pharmacists can supply medicines correctly and accurately from the prescriptions through the system [6]. Additionally, it can provide information accurately and timely for the strategic planning of healthcare service providers more effectively [7]. Blockchain's success depends on whether all stakeholders are willing not only to adopt its technical infrastructure and its core principles but also to participate in healthcare standards development and ongoing governance of blockchain-based healthcare platforms. The need to increase the efficiency of healthcare delivery and reduce duplication and errors is essential to enable health organizations to meet the increasing demand for services in the future. The experience of many countries indicates that adoption of a comprehensive EMR system will assist in achieving these objectives [7]. This study aims to investigate the availability, use and efficiency of the electronic health record in the Gaza Strip hospitals, to explore the possibility of applying blockchain technology to improve healthcare there, and to identify the obstacles that hinder the use of such technology in the field of EHRs. For this purpose, a checklist is designed and analyzed to achieve the above aim.

II The Study Questions

Unleashing blockchain's potential in healthcare will require organizations to address significant challenges. Since its greatest benefits revolve around streamlining the coordination among multiple providers and patients, healthcare organizations will need to measure and study the current situation of EHRs and how appropriate an EHR is for using and benefiting from blockchain technology. This is what this research tries to address.
It also offers a new vision of smart healthcare, through the potential formulation of effective healthcare policies for decision-makers, patients, healthcare providers in different healthcare institutions, and other stakeholders. In addition, it tries to identify the reality of the EHR and its effectiveness in providing healthcare in the Gaza Strip, and to explore the possibility of applying blockchain technology to improve healthcare in its medical centers. Considering the Gaza Strip case study, the importance of the study can be realized from answering the following key research questions:
• What are the capabilities of the EHR?
• How ready are medical centers such as hospitals and clinics for using blockchain technology?
• What are the obstacles that obstruct the use of blockchain technology in the field of EHR?
• What are the capabilities of the new blockchain technology in the field of healthcare and EHR?
• What are the benefits of using blockchain technology for patients, medical staff, hospital management, and the community?

III Blockchain and EHR Concepts

An EHR is a longitudinal electronic record of patient health information generated by one or more encounters in any care delivery setting. Included in this information are patient demographics, progress notes, problems, medications, vital signs, past medical history, immunizations, laboratory data, and radiology reports. The EHR automates and streamlines the clinician's workflow. The EHR has the ability to generate a complete record of a clinical patient encounter, as well as supporting other care-related activities directly or indirectly via interface, including evidence-based decision support, quality management, and outcomes reporting [8]. An EMR, electronic medical record, is an application environment composed of the clinical data repository, clinical decision support, controlled medical vocabulary, order entry, computerized provider order entry, pharmacy, and clinical documentation applications.
This environment supports the patient's electronic medical record across inpatient and outpatient environments, and is used by healthcare practitioners to document, monitor, and manage healthcare delivery within a care delivery organization (CDO). The data in the EMR is the legal record of what happened to the patient during his or her encounter at the CDO and is owned by the CDO [8]. Conceptually, blockchain can be explained as a type of database for recording and confirming transactions. Each transaction is verified, recorded and combined with other transactions to produce a new block in the ledger, which is then copied to peer nodes in the participating network, thus creating a distributed ledger. These transactions can range from moving data to transferring money, and even to handling the electronic health record and confidential personal information. Salha et al. [9] discuss in depth the concept of blockchain in terms of definitions from different perspectives, related to the multiple uses by stakeholders across different domains and applications. Healthcare is one of the most important areas in which emerging blockchain technology and healthcare initiatives can be used effectively. In less than a decade, blockchain has surged from a technology with narrow applications related to digital currencies to one with important applications in many domains, especially healthcare, drawing the attention of policymakers at different levels. EHRs were never designed to manage multi-institutional, lifetime medical records. Patients leave data scattered across various organizations, and thus lose easy access to past data, as the provider, not the patient, generally retains primary stewardship of the EHR. Patients thus interact with records in a fractured manner that reflects how these records are managed [10]. Table 1 presents the most important advantages of EHRs and their challenges.

Table 1 Advantages and challenges of EHRs.
Advantages of EHRs [11]:
• Patient information is accurate, up-to-date, and complete
• Quick and convenient information access at the point of care
• Secure information sharing with patients and other clinicians
• Safer and more reliable medication prescribing
• Legible and complete documentation
• Accurate and streamlined billing
• Reduced medical errors and safer care
• Improved provider efficiency and productivity
• Reduced costs of ordering and documentation

Challenges to EHRs adoption [12, 13]:
• High financial cost
• User resistance to using the system
• Organizational cultural change
• Lack of user support
• Lack of computer experience
• Technical limitations, e.g. slow system performance
• Lack of quality in patients' information
• Interoperability: no standard protocols for data exchange
• Lack of EHR standards
• Lack of incentives
• Confidentiality concerns
• Difficulty of data transition
• Issues of implementation, maintenance, upgrades, and training
• Privacy and/or security

Blockchain technology can enhance the efficiency of EHRs, which are essential in healthcare. At its core, blockchain relies on a decentralized, digitalized and distributed ledger model. By its nature, this is more robust and secure than the centralized models which are currently used in the healthcare ecosystem. Blockchain technology creates a viable, decentralized record of EHR transactions, the distributed ledger, which allows the substitution of a single master database. It keeps an immutable record of all transactions, back to the originating point of a transaction.
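The chained, tamper-evident ledger just described can be illustrated with a minimal sketch. The field names and the sample transactions below are hypothetical, invented for illustration and not taken from any system discussed in this paper; the point is only the mechanism: each block stores the hash of its predecessor, so altering any recorded transaction invalidates every later link.

```python
import hashlib
import json

def block_hash(body):
    # hash a canonical json serialization of the block body
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # each new block records the hash of the previous block
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def verify(chain):
    # re-derive every hash and check each block points at its predecessor
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"patient": "P-001", "event": "lab result added"}])
add_block(chain, [{"patient": "P-001", "event": "record viewed by clinic"}])
assert verify(chain)
chain[0]["transactions"][0]["event"] = "tampered"  # any edit breaks verification
assert not verify(chain)
```

Verifying the chain walks every hash back to the first block, which is exactly the record-to-origin property of the distributed ledger noted above.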
This is also known as the provenance, which is essential in healthcare, allowing healthcare institutions, stakeholders and patients to review all transaction steps, reducing the risk of fraud and preventing changing and tampering with the data of the electronic health record of the patients. Table 2 shows current issues within the smart healthcare industry.

Table 2 Current issues within the smart healthcare industry [14].

Issue | Activity
Healthcare data interchange | Data must pass between healthcare providers to necessary third parties, insurers, and patients while meeting data protection regulation in the healthcare sector.
Nationwide interoperability | Having a single standard for patient data exchange allows for ease of passing data between healthcare providers, which legacy systems often do not provide.
Medical device tracking | Medical device tracking from supply chain to decommissioning allows for swift retrieval of devices, prevention of unnecessary repurchasing, and fraud analytics.
Drug tracking | As with medical devices, blockchain offers the capability to track the chain of custody from supply chain to patients, allowing for tracking any transaction and prevention of counterfeit drugs.

IV Related Work

A problem facing healthcare record systems is how to share medical data with more stakeholders for various purposes without sacrificing data privacy and integrity. Blockchain, as a promising technology to manage transactions, has been gaining popularity in the domain of healthcare. It also has the potential to securely, privately, and comprehensively manage patient health records. Zhang M. and Ji Y. [15] discuss the latest status of blockchain technology and how it could solve the current issues in healthcare systems. They evaluated blockchain technology from multiple perspectives around healthcare data, including privacy, security, control, and storage.
They reviewed the current projects and research on blockchain in the domain of healthcare records and provide insight into the design and construction of next generations of blockchain-based healthcare systems. Conceição et al. [16] discuss how blockchain technology and smart contracts could help in some typical scenarios related to data access, data management and data interoperability for the specific healthcare domain. They propose the implementation of a large-scale information architecture to access EHRs based on smart contracts as information mediators. The main contribution of that study is the framing of data privacy and accessibility issues in healthcare and the proposal of an integrated blockchain-based architecture. Park et al. [17] confirm that it is possible to exchange EHR data in a private blockchain network. They concluded that to develop a blockchain-based EHR platform that can be used in practice, many improvements are needed, including reductions in data volume, improved protection of personal information, and reduced operating costs. Ekblaw et al. [18] propose a novel, decentralized record management system to handle EHRs using blockchain technology. Their system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Xiao Yue et al. [19] propose a patient-centric access model that ensures patients control their healthcare data on their own; a simple, unified indicator-centric schema makes it possible to organize all kinds of personal healthcare data practically and easily. Their method also indicates that MPC (multiparty computing) is one promising solution to enable an interested third party to conduct computation over patient data without violating privacy. Zhang et al. [20] address the adoption of blockchain in the social network domain but do not fully explore the benefits of the blockchain.
V EHRs in the Gaza Strip

A The Study Area

The Gaza Strip constitutes the southwestern part of the Palestinian coastal plain of the Mediterranean Sea and is confined between the Mediterranean Sea in the west, the Sinai of Egypt in the south, the Negev desert in the east and the Green Line in the north. The Gaza Strip area is about 365 km2. Its length is about 41 km along the western Mediterranean coast and its width varies from 7 to 12 km (Figure 1). It is located at a latitude of 31° 16′ to 31° 45′ north and a longitude of 34° 20′ to 34° 25′ east [21]. The population density of Palestine is generally high at 826 persons/km2, particularly in the Gaza Strip, where it is 5,453 persons/km2 compared to a lower population density in the West Bank of 528 persons/km2 in mid-2019 [22].

B Healthcare in the Gaza Strip

The healthcare sector in the Gaza Strip is composed of (a) primary healthcare, represented in clinics, and (b) secondary healthcare, represented in hospitals [23]. The Ministry of Health (MOH) is considered the main provider of secondary healthcare in the Gaza Strip. It defines a hospital as a place prepared to receive patients for more than one day for diagnosis or treatment [24]. There are 30 hospitals in the Gaza Strip: 13 hospitals supervised by MOH, 14 hospitals supervised by NGOs, and 3 hospitals supervised by the Ministry of Interior (MOI). The private sector and the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) do not operate hospitals in the Gaza Strip [25]. The distribution of healthcare facilities according to the supervising agency is shown in Table 3.

Table 3 The distribution of healthcare facilities according to the supervising agency.
Supervision agency | Primary healthcare facilities | Secondary healthcare facilities
MOH | 53 | 13
MOI | 4 | 3
NGOs | 19 | 14
UNRWA | 21 | 0
Total | 97 | 30

C Diseases Status

Vaccination coverage levels are one of the best indicators of health system performance. Therefore, MOH provides all the needed services to ensure that immunizations for each child are granted according to the national immunization schedule. Figure 2 shows the communicable diseases status in the Gaza Strip for the year 2018 [26]. As shown in Table 4, among non-communicable diseases (NCDs), ischemic heart disease is ranked first (40%) among reported chronic diseases in the Gaza Strip in 2010, cancer is ranked second (20%), followed by cerebrovascular disease (CVD) (13%). Respiratory disease is ranked fourth (10.4%) among reported chronic diseases in the Gaza Strip population.

Table 4 Non-communicable diseases in the Gaza Strip [27].

Rank | Cause | Total deaths per 100,000 people (%)
1 | Heart diseases (IHD) | 72.8 (40.0)
2 | Cancer (lung & breast) | 36.6 (20.0)
3 | Cerebrovascular disease | 23.3 (13.0)
4 | Respiratory disease (COPD) | 18.9 (10.4)
5 | Chronic kidney disease | 11.2 (6.2)
6 | Hypertension | 10.5 (5.8)
7 | Diabetes mellitus | 8.5 (4.6)

Fig. 1. Geographic location of the Gaza Strip.
Fig. 2. Number of communicable diseases in the Gaza Strip.

D EHRs Challenges

Challenges facing the development of EHRs in the Gaza Strip involve [26]:
• Interoperability
• Lack of adoption of uniform standards and interoperability
• Privacy and confidentiality
• Social and organizational barriers
• Technology limitations
• Preserving electronic records
• The legal status of EMR
• Lack of national information standards
• A lack of funding
• Concern about physician usage
• Resistance to EHR systems
• Patient confidentiality and privacy
• System maintenance and downtime
• EHR software quality and ease of use
• Lack of awareness and experience about the usefulness of EHR

VI Checklist Analysis and Discussion

A checklist is designed to assess the performance of EHR and the potential use of blockchain at governmental and NGO hospitals in the Gaza Strip. This design is well suited to meeting the study purposes. The target population of this study is hospital managers and IT managers who are working in governmental and NGO hospitals in the Gaza Strip. The total number of healthcare providers is 84, and thus the target community in the study consists of 84 samples. The survey is conducted over six weeks. The checklist has been distributed to hospital administrators, and where an IT department is available, the head of the IT department is asked to answer the checklist. Ethical approval is obtained from the Islamic University of Gaza as well as the official approval from the General Directorate of Human Resources in MOH-Gaza. Every participant in the study received a complete explanation of confidentiality and research purposes.

A Checklist Design

The checklist has been prepared on the subject of the study and consists of two sections. Section one includes demographic data such as personal, general and technical data of the respondent, like the institution's name, city name, name of governorate, institution level, supervising authority, institution age, etc. Section two involves five dimensions, which are:
• Dimension one: the capabilities of EHR (17 variables).
• Dimension two: challenges in using EHR technology (16 variables).
• Dimension three: the capabilities of the approved EHR technology in terms of coding, user authentication and record access (11 variables).
• Dimension four: the audit log and the metadata characteristics adopted in the EHR technology (17 variables).
• Dimension five: the approved EHR features for patients' access to their data (8 variables).

B Checklist Validity and Reliability

Content validity refers to the extent to which the paragraphs in the checklist, or the measuring tool, represent the content that has been chosen for inclusion in the test; it is checked by presenting the checklist to arbitrators for their opinions. The checklist is presented to a number of experts, as shown in Table 5. The arbitrators' opinions are taken into consideration and amendments are made in light of the submitted proposals; thus the checklist is finalized.

Table 5 Arbitrators' names and positions.

Name | Position
Dr. Medhat Abbas | Director-General of Primary Health Care, Ministry of Health, Palestine
Dr. Rami Hader Alabadla | Infection Control Consultant, Ministry of Health, Gaza, Palestine
Dr. Ayman Yassin Al Astal | Director of the Emergency Department, Nasser Medical Complex, Ministry of Health, Gaza, Palestine
Dr. Ibrahim Hamed Al Astal | Dean, Faculty of Education, Islamic University, Gaza, Palestine
Dr. Faisal Abdelfattah | Department of Psychology, College of Education, Imam Abdulrahman Bin Faisal University (formerly University of Dammam), KSA

Internal consistency is a check to ensure all of the test items are measuring the concept they are supposed to be measuring. Table 6 shows the internal consistency analysis performed using Cronbach's alpha to verify the reliability of each dimension. Cronbach's alpha for all checklist variables is 0.964.

Table 6 Cronbach's alpha test results for each dimension.
Checklist dimension | Number of paragraphs | Cronbach's alpha | Self-validity*
Dimension one: the capabilities of EHR | 17 | 0.843 | 0.918
Dimension two: challenges in using EHR technology | 8 | 0.710 | 0.842
Dimension three: the capabilities of the approved EHR technology in terms of coding, user authentication and record access | 35 | 0.953 | 0.976
Dimension four: the audit log and the metadata characteristics adopted in the EHR technology | 58 | 0.968 | 0.983
Dimension five: the approved EHR features for patients' access to their data | 23 | 0.767 | 0.875
All dimensions | | 0.848 | 0.920
*Self-validity = positive square root of the Cronbach's alpha factor.

C Demographic Data Analysis

Figure 3 summarizes the demographic data analysis of the checklist. Figure 3-a shows that Gaza governorate comes first in terms of the number of hospitals and clinics among the governorates of the Gaza Strip. This is due to its high population as well as the presence of governmental, educational, economic and other institutions. Figure 3-b indicates that 46% of health institutions are between 10 and 20 years old and 23.1% are over 30 years old. These results mean that health institutions in the Gaza Strip possess the medical and technical expertise, experience, and cadres that can improve the quality of healthcare.

Figure 3. The demographic data analysis of respondents according to: (a) number of hospitals in each governorate, (b) institution experience, (c) years of using EHR, (d) the official's degree, (e) title position of respondent, and (f) kind of EHR used in the institution.

Figure 3-c shows that 53.8% of the sample have used EHR for the past 0-5 years.
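As a check on the arithmetic behind Table 6: the self-validity column is simply the positive square root of each reported Cronbach's alpha, and alpha itself can be computed from raw item scores. The sketch below uses the alpha values reported in Table 6; the respondent scores passed to `cronbach_alpha` are toy data invented for illustration, not the study's survey data.

```python
import math

# self-validity = positive square root of cronbach's alpha (values from table 6)
reported_alpha = {"dimension one": 0.843, "dimension two": 0.710,
                  "dimension three": 0.953, "dimension four": 0.968,
                  "dimension five": 0.767, "all dimensions": 0.848}
self_validity = {name: round(math.sqrt(a), 3) for name, a in reported_alpha.items()}
# e.g. sqrt(0.843) ≈ 0.918, agreeing with the table

# cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
def cronbach_alpha(items):
    """items: one list of respondent scores per checklist item."""
    k, n = len(items), len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# two perfectly consistent toy items yield the maximum alpha of 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

Higher alpha means the items in a dimension move together across respondents, which is what the reliability figures in Table 6 summarize.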
This indicates that the use of EHR is relatively recent and it may not meet all the needs of patients in the health sector. It also appeared from the study that there are a number of health institutions that do not have EHR. Figure 3-d shows that 73.3% of the sample who responded to the study hold a bachelor's degree in computer science, 19.2% a master's degree and 7.7% a doctoral degree. These results confirm once again that the health sector has the scientific and technical personnel able to develop the EHR and improve healthcare. It can be observed in Figure 3-e that 46.2% of the respondents are heads of computer departments in health institutions, and the same percentage, 46.2%, of respondents are managers of health institutions; this matches the importance and accuracy of the information that can be obtained about EHR for this study. Figure 3-f indicates that 46.2% of the EHR technology used is one or more products developed internally, and 38.5% is a combination of ready-made products and internally developed ones. This means that there are different groups of health system technologies. Consequently, there may be incompatibilities between these different systems, as well as non-interoperability between them.

D Analysis of Checklist Paragraphs

Cross-tabulation is used to analyze the paragraphs of the checklist's dimensions, as cross tables are appropriate for the analysis of this kind of data, which consists of dichotomies.

Dimension one: the capabilities of the current EHR. Table 7 summarizes the analysis result of the first dimension. It indicates, for example, that the share of those agreeing with paragraph no. (2), "It is possible to order medical checks and follow up laboratory results through EHR", is 91.7%. This means that radiology, medical and laboratory examinations are conducted and followed widely in most health institutions through the EHR.
This occurs in most health facilities separately, and not between them through an international, regional, or even local network. This answers the first question of this research, about the capabilities of the electronic health record for hospitals and clinics in the Gaza Strip. Only 12.5% agree with paragraph (7), which says "Patient allowed to access his health record electronically or share it with other specialists in the patient's case". This indicates that the patient cannot share or view his EHR through the internet. Here, blockchain technology can greatly help to connect health institutions and stakeholders, especially patients, so patients can view their data in the health record. For paragraph no. (8), which says "There is an alarm system in EHR when results for a medical analysis appear to need an urgent intervention", 12.5% of respondents agree. This indicates that the EHR in the health institutions does not have an alarm system. Hence, there is an urgent need to use blockchain technology to develop the use of the EHR, to provide true, accurate and up-to-date information anytime and anywhere for the patient through the EHR. Thus, the second and third questions in this study are answered about the capabilities of the electronic health record, and the capabilities and benefits of blockchain technology in the healthcare field, especially in the Gaza Strip.

Table 7 Results of dimension one (percentages by governorate).

Paragraph | Khan Younis | Gaza | North Gaza | Middle | Rafah | Total
(1) The available EHR technology operates over a network in all hospitals and clinics. | 8.3% | 16.7% | 8.3% | 8.3% | 12.5% | 54.2%
(2) It is possible to order medical checks and follow up laboratory results through EHR. |
16.7% | 41.7% | 12.5% | 8.3% | 12.5% | 91.7%
(3) The EHR can be used in the field of scientific research by extracting accurate statistics about diseases, their symptoms, and geographical distribution. | 12.5% | 29.2% | 12.5% | 4.2% | 8.3% | 66.7%
(4) It is possible to view (by specialists inside or outside the institution) the previous laboratory tests of the patient through EHR. | 20.8% | 41.7% | 12.5% | 8.3% | 12.5% | 95.8%
(5) Request radiology examinations and track results through the EHR. | 16.7% | 41.7% | 12.5% | 8.3% | 4.2% | 83.3%
(6) Doctor allowed to monitor the patient's health record from his home or private clinic. | 4.2% | 8.3% | 8.3% | 0.0% | 4.2% | 25.0%
(7) Patient allowed to monitor his health record electronically or share it with other specialists in the patient's case. | 0.0% | 0.0% | 4.2% | 4.2% | 4.2% | 12.5%
(8) There is an alarm system in EHR when results for a medical analysis appear to need an urgent intervention. | 4.2% | 4.2% | 0.0% | 4.2% | 0.0% | 12.5%
(9) The EHR has a warning system for communicable diseases with which the patient may have been infected. | 4.2% | 4.2% | 0.0% | 8.3% | 0.0% | 16.7%
Total | 20.8% | 41.7% | 16.7% | 8.3% | 12.5% | 100%
Percentages and totals are based on respondents. Dichotomy group tabulated at value 1.

Dimension two: challenges facing the use of EHR. Table 8 illustrates that for phrase no. (1), which says "The current EHR technology meets the necessary requirements for entering all data", 45.8% of the respondents agreed; of this percentage, 29.2% comes from government supervision bodies and 16.7% from the private sector. This means that it must be ensured that the EHR has the capacity and competence to meet the basic needs in order to enter data correctly and accurately. For phrase no.
(5), which asks "Is there a mobile application that allows monitoring of electronic health records (by the doctor, patient or family of the patient) or requesting medical examinations?", 29.2% agree; all of these respondents are from government health institutions, and the percentage of respondents from private health institutions is 0%. There may be a lack of clarity among respondents, as there is a special program for the Ministry of Health, but this program does not include functions for the EHR that can be viewed by the patient or doctor via the internet. Thus, it can be said that the healthcare system needs to develop a mobile program to enable patients, stakeholders and doctors to view the patient's health record details according to the health law as well as the permissions and privileges of each user. Here, the answer to the second and third questions of the study is confirmed in terms of the capabilities of the EHR. The urgent need to use blockchain technology to link all governmental and private health institutions is also evident.

Table 8 Results of dimension two (count and percentage by supervision side).

Paragraph | Governmental | Private | Total
(1) The current EHR technology meets the necessary requirements for entering all data. | 7 (29.2%) | 4 (16.7%) | 11 (45.8%)
(2) The current EHR technology allows all necessary patient information to be recorded. | 10 (41.7%) | 3 (12.5%) | 13 (54.2%)
(3) Are there enough computers capable of achieving the required tasks? | 8 (33.3%) | 7 (29.2%) | 15 (62.5%)
(4) Are iPads available to follow up the patient's case in the morning follow-up tour? | 5 (20.8%) | 1 (4.2%) | 6 (25.0%)
(5) Is there a mobile app that allows EHR to be monitored (by the doctor, patient, or patient's family) or to order medical tests? | 7 (29.2%) | 0 (0.0%) | 7 (29.2%)
(6) Is a medical secretary available for data entry? | 11 (45.8%) | 8 (33.3%) | 19 (79.2%)
(7) Are the necessary technical and medical cadres available to achieve, operate, monitor and maintain EHR? |
11 (45.8%) | 8 (33.3%) | 19 (79.2%)
(8) Do technical and medical cadres receive the necessary training to achieve, operate, monitor and maintain EHR? | 10 (41.7%) | 7 (29.2%) | 17 (70.8%)
Total | 15 (62.5%) | 9 (37.5%) | 24 (100.0%)
Dichotomy group tabulated at value 1.

Dimension three: adopting EHR technology for coding, user authentication, and EHR access. Table 9 shows the important factors in dimension three. Regarding phrase no. (1), "Hospital has plans to adopt computer-assisted coding", 94.4% of total responses agree with the phrase, with hospitals responding at 50% and medical complexes at 27.8%. This result is positive, as it becomes clear that there is an awareness among the health institutions of the importance of developing performance by adopting the development of coding, user authentication, and EHR access. The rest of the phrases on this axis have very weak approval, and this indicates weakness or lack of access by external entities to the EHR. Thus, private hospitals, insurance companies, and patients are not able to access the health record. We have answered here the fourth and fifth questions of the study, in terms of the capabilities and benefits of blockchain technology that can link health institutions to each other, address the gaps and problems that exist in the EHR, and allow patients, doctors, and stakeholders to access the patient's health record, thus making the patient the center of attention of all health operations.

Table 9 Results of dimension three (count and percentage by institution level).

Paragraph | Medical complex | Hospital | Clinic | Total
(1) Hospital has plans to adopt computer-assisted coding. | 5 (27.8%) | 9 (50.0%) | 3 (16.7%) | 17 (94.4%)
(2) Hospital allows any outside entity (such as private hospitals, insurance companies, patients) access to the EHR technology. | 0 (0.0%) | 2 (11.1%) | 0 (0.0%) | 2 (11.1%)
(3) Hospital establishes unique user IDs to track outside entities' activity. | 0 (0.0%) | 1 (5.6%) | 0 (0.0%) | 1 (5.6%)
(4) Hospital limits outside entities' access. |
0 (0.0%) | 3 (16.7%) | 0 (0.0%) | 3 (16.7%)
(5) Are outside entities allowed access to patients' audit logs and metadata? | 0 (0.0%) | 1 (5.6%) | 0 (0.0%) | 1 (5.6%)
Total | 5 (27.8%) | 10 (55.6%) | 3 (16.7%) | 18 (100.0%)
Percentages and totals are based on respondents. Dichotomy group tabulated at value 1.

Dimension four: audit log and metadata properties approved in the EHR. Dimension four is summarized in Table 10 to Table 12, where the most important paragraphs are presented in Table 10 and details of phrases no. (2) and (3) are illustrated in Table 11 and Table 12 respectively. For example, paragraph no. (3), which says "Audit log includes the following data", reveals that 50% of all respondents who agree with this paragraph are heads of computer departments. This clearly indicates the presence of highly qualified medical and technical cadres to deal with the EHR project. Paragraph no. (4), "Internet protocol (IP) / media access control (MAC) address", in Table 12 is one of the most important phrases. The approval rate of the computer department heads is 35.7%, and their rates are likewise high in the rest of the paragraphs. This indicates the importance of the fourth dimension's data in terms of documenting access to EHR data. The same idea applies to paragraph (2) of Table 12. For paragraph no. (6) in Table 10, "Can the audit record be deleted?", the total of those agreeing is 14.3%. This percentage indicates that there is an important basis for protecting the audit record from deletion or change, and this saves the patient's EHR from alteration and manipulation and thus protects the patient's rights. Thus, blockchain technology is needed to enhance the documentation of access to the EHR, where the patient's health information is provided through the EHR for the patient, the doctor, stakeholders and society.
This is the answer to the fifth question of this study.

Table 10. Results of dimension four.
Paragraph | Head computer dept. | Dept. director | Manager | Total
(1) Is there an audit record in the institution? | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
(2) Audit log records the following event data.* | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
(3) Audit log includes the following data.** | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
(4) Is the audit log operational whenever the EHR technology is available for updates or viewing? | 5 (35.7%) | 1 (7.1%) | 4 (28.6%) | 10 (71.4%)
(5) Can the audit log be disabled? | 2 (14.3%) | 2 (14.3%) | 1 (7.1%) | 5 (35.7%)
(6) Can the audit log be deleted? | 1 (7.1%) | 0 (0.0%) | 1 (7.1%) | 2 (14.3%)
(7) Can the audit log be edited? | 4 (28.6%) | 2 (14.3%) | 3 (21.4%) | 9 (64.3%)
(8) Does the EHR technology allow the destruction of EHR audit log data or any other data according to the hospital's data retention policies? | 0 (0.0%) | 2 (14.3%) | 2 (14.3%) | 4 (28.6%)
(9) Can the EHR technology produce a user-friendly version of the audit log (i.e., a summary of audit data in a readable format or embedded in an electronic form) for transmitting, printing, or exporting? | 5 (35.7%) | 2 (14.3%) | 4 (28.6%) | 11 (78.6%)
(10) Does any qualified or certified person in the hospital analyze the audit log data? | 3 (21.4%) | 2 (14.3%) | 4 (28.6%) | 9 (64.3%)
Total | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
Dichotomy group tabulated at value 1. * Details are in Table 11. ** Details are in Table 12.

Dimension five: approved EHR technology features for patients' access to their data.

The important paragraphs of dimension five are shown in Table 13. Regarding the barriers, paragraph (4), "Concerns about the patient's security and privacy", indicates that 64.7% agree with this paragraph and consider it one of the most important obstacles preventing patients from accessing their data through the EHR. Of these, 17.6% belong to the medical complex, 29.4% to the hospital and 17.6% to the clinic.
This means that the issue of maintaining the privacy and security of patient information is a very important component, and it must be addressed at the level of health laws, policies, and procedures before being addressed in electronic health records. Also, for paragraph (8), "Concerns with EHR system performance", 29.4% of respondents agree with this paragraph; out of this percentage, 11.8% is for the medical complex, 17.6% for the hospital, and 0% for the clinic.

Table 11. Audit log records the following event data, by position (cross-tabulation).
Paragraph | Head computer dept. | Dept. director | Manager | Total
Each entry or access to the EHR. | 6 (42.9%) | 2 (14.3%) | 5 (35.7%) | 13 (92.9%)
Signature event (the proactive or auto default completion of a patient encounter). | 4 (28.6%) | 2 (14.3%) | 4 (28.6%) | 10 (71.4%)
Export of EHR document (printed, electronically exported, emailed). | 4 (28.6%) | 2 (14.3%) | 4 (28.6%) | 10 (71.4%)
Corrections or modifications of data. | 6 (42.9%) | 2 (14.3%) | 3 (21.4%) | 11 (78.6%)
Import of data. | 6 (42.9%) | 2 (14.3%) | 4 (28.6%) | 12 (85.7%)
Disabling of audit log. | 2 (14.3%) | 2 (14.3%) | 3 (21.4%) | 7 (50.0%)
Release of encounter for billing. | 4 (28.6%) | 2 (14.3%) | 5 (35.7%) | 11 (78.6%)
Access by an authorized outside entity. | 3 (21.4%) | 2 (14.3%) | 2 (14.3%) | 7 (50.0%)
Total | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100.0%)
Percentages and totals are based on respondents. Dichotomy group tabulated at value 1.

Table 12. Audit log includes the following data, by position (cross-tabulation).
Paragraph | Head computer dept. | Dept. director | Manager | Total
(1) Patient national number (ID number). | 7 (50.0%) | 1 (7.1%) | 5 (35.7%) | 13 (92.9%)
(2) Date/time/user stamps. | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
(3) Access type (creating, editing, viewing, printing, etc.). | 5 (35.7%) | 0 (0.0%) | 5 (35.7%) | 10 (71.4%)
(4) Internet Protocol (IP)/Media Access Control (MAC) address. | 5 (35.7%) | 1 (7.1%) | 2 (14.3%) | 8 (57.1%)
(5) Network Time Protocol (NTP)/Simple Network Time Protocol (SNTP) synchronized time. | 5 (35.7%) | 1 (7.1%) | 1 (7.1%) | 7 (50.0%)
(6) Method of data entry (direct entry, speech recognition, automated, copy/import, copy forward, dictation). | 3 (21.4%) | 2 (14.3%) | 3 (21.4%) | 8 (57.1%)
(7) Date/time/user stamp of original author when data are copied. | 5 (35.7%) | 2 (14.3%) | 3 (21.4%) | 10 (71.4%)
(8) Date/time/user stamp of original author if data are entered on behalf of another (e.g., an assistant enters clinical information for a physician). | 3 (21.4%) | 2 (14.3%) | 2 (14.3%) | 7 (50.0%)
Total | 7 (50.0%) | 2 (14.3%) | 5 (35.7%) | 14 (100%)
Dichotomy group tabulated at value 1.

This means that 70.6% of the respondents do not agree with paragraph (8): they consider the performance of the EHR system good, though it may need improvement, and this is considered a strong point of the electronic health system. As for the second part of Table 13, which concerns procedures to check a patient's EHR data, paragraph (2), "Verifying identity based on information an individual can verify (e.g., address, date of birth)", reveals that 91.7% of the respondents agreed with it. This percentage is distributed as 29.2% for the medical complex, 50.0% for the hospital and 12.5% for the clinic. On the other hand, for paragraph (3), "Biometric identification", the approval rate is only 12.5%. This means that the procedures used to verify the patient's EHR are the same as those in the old system, and these procedures and tools need to be developed to fit into the electronic health system.
This axis addresses the obstacles facing the EHR, which are the same obstacles that face the implementation of blockchain technology. They can be resolved by enacting laws and legislation, by implementing measures that maintain the privacy and security of health information, and by developing procedures for verifying the identity of the patient who is trying to access his own EHR, together with other procedures that enhance confidence in the EHR. Here, the answer to the third question of the study is emphasized.

Table 13. Results of dimension five.
Barriers to patients accessing their EHR, by institution level (cross-tabulation).
Paragraph | Medical complex | Hospital | Clinic | Total
(1) EHR technology does not support access to their information. | 4 (23.5%) | 4 (23.5%) | 0 (0.0%) | 8 (47.1%)
(2) Hardware does not support access to their information. | 1 (5.9%) | 3 (17.6%) | 1 (5.9%) | 5 (29.4%)
(3) Resistance by physicians to patients having access to their information. | 0 (0.0%) | 3 (17.6%) | 2 (11.8%) | 5 (29.4%)
(4) Concerns with patient security and privacy. | 3 (17.6%) | 5 (29.4%) | 3 (17.6%) | 11 (64.7%)
(5) Funding restrictions/additional costs to implement. | 2 (11.8%) | 3 (17.6%) | 2 (11.8%) | 7 (41.2%)
(6) Insufficient training on EHR technology. | 2 (11.8%) | 4 (23.5%) | 2 (11.8%) | 8 (47.1%)
(7) Inability to integrate with existing systems. | 2 (11.8%) | 3 (17.6%) | 2 (11.8%) | 7 (41.2%)
(8) Concerns with EHR system performance. | 2 (11.8%) | 3 (17.6%) | 0 (0.0%) | 5 (29.4%)
(9) Hospital policy prevents such access. | 1 (5.9%) | 2 (11.8%) | 1 (5.9%) | 4 (23.5%)
Total | 5 (29.4%) | 8 (47.1%) | 4 (23.5%) | 17 (100%)

Procedures to check a patient's EHR data, by institution level (cross-tabulation).
Paragraph | Medical complex | Hospital | Clinic | Total
(1) Photo identification. | 1 (4.2%) | 5 (20.8%) | 0 (0.0%) | 6 (25.0%)
(2) Verifying identity based on information an individual can verify (e.g., address, date of birth). | 7 (29.2%) | 12 (50.0%) | 3 (12.5%) | 22 (91.7%)
(3) Biometric identification. | 0 (0.0%) | 3 (12.5%) | 0 (0.0%) | 3 (12.5%)
Total | 7 (29.2%) | 14 (58.3%) | 3 (12.5%) | 24 (100%)
Dichotomy group tabulated at value 1.
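The respondent-based percentages reported in these tables can be reproduced directly from the counts. The short sketch below redoes the arithmetic for the second part of Table 13 (n = 24 respondents) in plain Python; the dictionary layout and variable names are ours, not the authors' SPSS workflow:

```python
# Recomputing the respondent-based percentages of Table 13, part 2.
# Counts are taken from the table itself; n = 24 respondents
# (dichotomy group tabulated at value 1).
counts = {
    "photo identification": {"medical complex": 1, "hospital": 5, "clinic": 0},
    "verifying identity (e.g., address, date of birth)":
                            {"medical complex": 7, "hospital": 12, "clinic": 3},
    "biometric identification": {"medical complex": 0, "hospital": 3, "clinic": 0},
}
n_respondents = 24

for item, by_level in counts.items():
    total = sum(by_level.values())
    cells = ", ".join(f"{lvl}: {c} ({100 * c / n_respondents:.1f}%)"
                      for lvl, c in by_level.items())
    print(f"{item}: {cells}; total {total} ({100 * total / n_respondents:.1f}%)")
```

Running this reproduces, for example, the 91.7% agreement (22 of 24 respondents) for identity verification by personal information, matching the table.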
VII Discussion

Based on the checklist results, the current EHR in the Gaza Strip has some strengths: qualified medical and technical cadres, experience in the field of EHR extending over many years, and a health-system infrastructure that already uses computer technology and networks. EHR systems, however, need legislation, laws, and procedures developed in line with modern technology, as well as tools to improve the quality of health services. Blockchain technology can improve the use of the EHR by linking government and private health institutions into a unified network, so that the patient finally has permanent access to his medical data through the electronic health record anytime and anywhere. A roadmap to improve the performance of the EHR using blockchain in the health sector in the Gaza Strip should include:
• Digitization of all health institutions, since some health institutions are found to be still operating on a paper basis.
• Unifying medical terms by setting clear and specific standards to achieve high efficiency in the performance of the EHR.
• Developing laws, legislation, and health procedures to suit modern technology and improve the performance of the EHR through blockchain technology.
• Ensuring that the current EHR meets all the requirements for entering the patient's medical and demographic data across health institutions, since no existing EHRs were designed to manage multi-institutional, lifetime medical records.
• Addressing issues related to patient information privacy and security.
• Connecting all health facilities to a unified network through blockchain technology to enable the patient, doctor and stakeholders to access the electronic health record anytime and anywhere.
• Improving the performance of EHR programs to deal with the large, and significantly growing, amount of health data.
• Developing and enhancing procedures to verify the identity of the patient who is trying to access his medical data through the EHR, and enhancing public confidence in the electronic health record.
• Developing a model to improve the performance of the current EHR in the Gaza Strip using blockchain.
• Developing mobile software that allows the patient, doctor, and stakeholders to access the EHR anytime and anywhere.
• Setting a road map to raise awareness, among the public and the medical community, of the importance of the EHR for the patient and all stakeholders, and enhancing confidence in the information in the EHR.

VIII Conclusion

This study shows that the use of the EHR in the Gaza Strip is relatively recent, with 50% of respondents having used the electronic health record for 0-5 years, and it may not meet all the needs of patients in the health sector. It also reveals that a number of health institutions do not have an EHR. The study also shows that 70.6% of respondents believe that the performance of the EHR system is good, though it may need improvement; this is considered a strong point of the electronic health system. The procedures used to verify the patient's access to the EHR are the same as those in the old paper system, and these procedures and tools need to be developed to fit the EHR. The study indicates that maintaining the privacy and security of patient information is a very important component that must be addressed at the level of health laws, policies, and procedures before being addressed in electronic health records. It also shows that radiology reports, medical examinations, and laboratory data follow-up are conducted on a large scale in most health institutions through the EHR; however, this occurs in each health facility separately, not between them through an international, regional, or even local network.
This confirms the presence of important patient-specific medical data distributed among different and independent health institutions; the patient does not benefit from it, because the EHR was designed on the basis of the paper-based medical system rather than around computer use. The study concludes that the patient cannot share or view the EHR online. Here, blockchain technology can greatly help connect health institutions and stakeholders, especially patients, so that patients can view their data in the health record. In addition, the EHR software differs from one health institution to another: some of it is ready-made, some internally developed, and some a mixture of the two, leading to incompatibility between these programs and an inability to interoperate between them. Hence the urgent need for blockchain technology to solve these problems. EHRs in health institutions also lack an alarm system for when medical analysis results require urgent intervention. Consequently, there is an urgent need to use blockchain technology to develop EHR use and to provide true, accurate, up-to-date information to the patient through the EHR, anytime and anywhere. The study shows that the healthcare system needs a mobile program to enable stakeholders and doctors to view patient health record details in accordance with the health law and with the validity and privileges of each user. Finally, it is concluded that there is a strong need for blockchain technology to link public and private health institutions to one another, address the deficiencies and problems present in the EHR, and allow the patient, physician, and stakeholders to access the patient health record.

References

[1] Alastal, A. I. (2019). Enhancing sustainable urban development through smart city using (GIS & BIM): Case study of Hamad City, Khan Younis.
Master's thesis, Islamic University of Gaza.
[2] Alastal, A. I., Salha, R. A., and El-Hallaq, M. A. (2019). The reality of Gaza Strip cities towards the smart city's concept. A case study: Khan Younis City. Current Urban Studies, 7, 143-155.
[3] El-Hallaq, M. A., Alastal, A. I. and Salha, R. A. (2019). Enhancing sustainable development through web based 3D smart city model using GIS and BIM. Case study: Sheikh Hamad City. Journal of Geographic Information System, 11, 321-330.
[4] Wanitcharakkhakul, L. and Rotchanakitumnuai, S. (2017). Blockchain technology acceptance in electronic medical record system. The 17th International Conference on Electronic Business, Dubai, UAE, December 4-8, 2017.
[5] Linn, L. A., and Koo, M. B. (2016). Blockchain for health data and its potential use in health IT and health care related research. In ONC/NIST Use of Blockchain for Healthcare and Research Workshop. Gaithersburg, Maryland, United States: ONC/NIST.
[6] Ricciardi, L., Mostashari, F., Murphy, J., Daniel, J. G., and Siminerio, E. P. (2013). A national action plan to support consumer engagement via e-health. Health Affairs, 32(2), 376-384.
[7] Blaya, J. A., Fraser, H. S., and Holt, B. (2010). E-health technologies show promise in developing countries. Health Affairs, 29(2), 244-251.
[8] The MITRE Corporation. Health Information Management Systems Society. (2006). Electronic health records overview.
[9] Salha, R. A., El-Hallaq, M. A. and Alastal, A. I. (2019). Blockchain in smart cities: Exploring possibilities in terms of opportunities and challenges. Journal of Data Analysis and Information Processing, 7, 118-139.
[10] Ekblaw, A., Azaria, A., Halamka, J. and Lippman, A. (2016). A case study for blockchain in healthcare: "MedRec" prototype for electronic health records and medical research data. White paper.
[11] Rocha, R., Collins, S. and Ramelson, H.
(2017). Electronic health record systems for providers and patients. Division of General Internal Medicine and Primary Care, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School.
[12] Kruse, C. S., Kothman, K., Anerobi, K., et al. (2016). Adoption factors of the electronic health record: A systematic review. JMIR Medical Informatics, 4(2): e19.
[13] El Mahalli, A. (2015). Adoption and barriers to adoption of electronic health records by nurses in three governmental hospitals in Eastern Province, Saudi Arabia. Perspectives in Health Information Management.
[14] Bell, L., Buchanan, W., Cameron, J. and Lo, O. (2018). Applications of blockchain within healthcare. Blockchain in Healthcare Today, ISSN 2573-8240 online.
[15] Zhang, M., and Ji, Y. (2018). Blockchain for healthcare records: A data perspective. PeerJ Preprints 6: e26942v1.
[16] Conceição, A., Silva, F., Rocha, V., Locoro, A., and Barguil, M. (2018). Electronic health records using blockchain technology. arXiv:1804.10078 [cs.CY].
[17] Park, Y., Lee, E., Na, W., Park, S., Lee, Y., and Lee, J. (2019). Is blockchain technology suitable for managing personal health records? Mixed-methods study to test feasibility. Journal of Medical Internet Research.
[18] Ekblaw, A., Azaria, A., Halamka, J. and Lippman, A. (2016). A case study for blockchain in healthcare: "MedRec" prototype for electronic health records and medical research data. IEEE conference.
[19] Yue, X., Wang, H., Jin, D., Li, M. and Jiang, W. (2016). Healthcare data gateways: Found healthcare intelligence on blockchain with novel privacy risk control. Journal of Medical Systems, 40(10).
[20] Zhang, J., Xue, N., and Huang, X. (2016). A secure system for pervasive social network-based healthcare. IEEE.
[21] MOPIC (1994). Gaza environmental profile, part I. Environmental Planning Directorate (EPD), Ministry of Planning and International Co-operation (MOPIC), Gaza, Palestine.
[22] Palestinian Central Bureau of Statistics (2019).
About 13 million Palestinians in the historical Palestine and diaspora, on the occasion of the International Population Day 11/7/2019.
[23] Health Cluster in the occupied Palestinian territory. (2014). Joint health sector assessment report. Palestine: Health Cluster.
[24] Palestinian Central Bureau of Statistics (2006). Health care providers and beneficiaries survey 2005: Main findings. Ramallah, Palestine.
[25] Ministry of Health. (2016). Annual report for hospitals in Gaza Strip. Palestine: Ministry of Health.
[26] Ministry of Health. (2018). PHIC, health status.
[27] Mosleh, M., Aljeesh, Y., Dala, K. (2016). Burden of chronic diseases in the Palestinian healthcare sector using disability-adjusted life years (DALY), Palestine. Diversity and Equality in Health and Care, 13(3): 261-268.

Abdelkhalek I. Alastal was born in Khan Younis City, Palestine, on the 10th of April 1964. He obtained a B.A. in geography from Tanta University in the A.R.E. in 1986, and an M.A. in geography and information systems from the Islamic University in Gaza in 2019. He worked as an IT specialist, programmer, IT coordinator and IT administrator in the Faculty of Humanities and Social Sciences at U.A.E. University from 1994 to 2012. He participated in teaching many courses, such as hardware and software maintenance and operating systems (Mac & Windows), and applications such as SPSS and MS Office, for faculty members, students and staff. In addition, he has many publications in refereed specialized journals. He also published a book entitled "Big Data in Smart Cities: Exploring Possibilities in Terms of Opportunities and Challenges" (Lambert Academic Publishing, 28 May 2020). He currently works on research entitled "Big Data for Sustainable Efficiency in Advanced Healthcare".

Raed A. Salha received his B.Sc. degree in geography from King Saud University, Kingdom of Saudi Arabia. He also obtained his master's and PhD in geography and urban planning from a research institute in Egypt.
Nowadays, Dr. Salha is an associate professor of geography and urban planning at the Department of Geography and Information Systems, the Islamic University of Gaza, as well as the dean of the Faculty of Arts.

Maher A. El-Hallaq was born in Gaza City, Palestine, on the 29th of December 1967. He obtained a PhD in surveying and geodesy engineering from Cairo University in the Arab Republic of Egypt in 2010. He has worked as an associate professor of geomatics engineering in the Department of Civil Engineering at the Islamic University of Gaza since 2010. He participates in teaching many courses, such as Surveying I, Surveying II, Geomatics, Engineering Mechanics: Statics and Dynamics, Global Navigation Satellite Systems, Geodesy, Remote Sensing Principles and Applications, Cartography, and GIS/DSS of infrastructure. In addition, he has many publications in refereed specialized journals and international scientific conference proceedings. He also published a book entitled "Map Comparison Using Template Image Matching Techniques". Dr. El-Hallaq is a consultant for many local municipalities and private agencies in the Gaza Strip. Nowadays, he is a member of the "Geodesy" committee of the GeoMOLG project, Ministry of Local Government. He is also a reviewer for ASCI-Surveying, JERT, IUG and many other journals, as well as an editorial member of the American Journal of Remote Sensing (AJRS).

Journal of Engineering Research and Technology, Volume 4, Issue 2, June 2017

Sustainable Household Water Model: Abasan Al-Kabera as a Case Study

Husam Al-Najar
Civil and Environmental Engineering Department, Faculty of Engineering, The Islamic University of Gaza. E-mail: halnajar@iugaza.edu.ps

Abstract— In this study, a viable model is adopted to reuse grey water and stormwater at the household level.
Abasan Al-Kabera is studied specifically due to its mixed rural and urban characteristics. The model consists of rainwater collection from the rooftops of houses as well as greywater reuse for flushing the toilet, while the surplus is injected into the groundwater. The total inflow to the aquifer from stormwater accounts for 1,756,875 m³/year, of which 146,060 m³/year is collected from the rooftops of public buildings and households, while the recovery of greywater is 571,536 m³/year. Additionally, the estimated return flow from irrigation equals 506,220 m³/year. The outflow for domestic and agricultural use equals 738,895 and 1,687,400 m³/year, respectively. In conclusion, the water balance is achieved, but it requires a proper stormwater collection system. Moreover, the greywater treatment and reuse systems should be developed and enhanced to guarantee the quality of groundwater recharge.

Index Terms— Rooftops, greywater, water balance, rainwater, Abasan Al-Kabera.

I Introduction

Researchers often adopt water balance models prepared for developed countries in countries lacking the capacity to implement these models, due to a lack of experience and the economic situation. The Gaza Strip is a semi-arid area where rain falls in the winter season, from September to April, and the long-term average rainfall over the Gaza Strip ranges from 200 mm/year in the southern areas to 400 mm/year in the northern areas (MOA, 2009; PWA, 2015). The groundwater aquifer is considered the main water supply source for all kinds of human usage in the Gaza Strip (domestic, agricultural and industrial). The groundwater has been overexploited and has deteriorated in both quality and quantity: the growth of urban areas has decreased the recharge of the aquifer, while the increasing population raises demand, depleting the groundwater aquifer and leading to seawater intrusion (Qahman, 2009; Shomer, 2010; CMWU, 2016a).
The groundwater aquifer beneath the Gaza Strip is limited in area, while its natural boundary reaches Haifa in the north, extends to Sinai in Egypt in the south, and is bounded by Hebron in the east and the Mediterranean Sea in the west (Metcalf and Eddy, 2000; PWA, 2003; Abu Jabal et al., 2017). The groundwater quality is monitored twice a year, in cooperation with both MOH and CMWU, for all cations and anions (Cl, NO3, Mg, Ca, Na, K, F, NH3, SO4, TDS, EC, pH, alkalinity and hardness) in all municipal wells and some agricultural wells distributed all over the Gaza Strip (CMWU, 2016). The groundwater quality varies from place to place and from depth to depth. The chloride ion concentration varies from less than 250 mg/l in the sand dune areas, such as the northern and south-western areas of the Gaza Strip, to more than 10,000 mg/l where seawater intrusion has occurred. The fresh groundwater area in the Gaza aquifer (Cl ≤ 250 mg/l) exists only in a limited part of the aquifer, located in the north of Gaza and west of Khan Younis (Mawasi); see Figure 1. The major parts of the aquifer have a Cl concentration of 500-1500 mg/l, while along the coastline the Cl concentration exceeds 2000 mg/l because of the influence of seawater intrusion. The map also shows that the Cl concentration in the southeastern part of the Gaza Strip is more than 1500 mg/l, reflecting the upward leakage of highly saline water from the underlying water horizons (PWA, 2015). The nitrate ion in the groundwater originates from different sources, i.e. intensive use of agricultural fertilizers and the existence of septic tanks for disposing of domestic wastewater in areas where there is no proper wastewater collection system.
The nitrate ion concentration reaches very high levels in different areas of the Gaza Strip, while the WHO standard recommends a nitrate concentration of less than 50 mg/l for drinking purposes. As shown in Figure 2, the NO3 concentration in the pumped domestic water ranges between 50 mg/l and more than 300 mg/l. The high NO3 concentrations occur mainly in the residential areas of the Gaza Strip, reflecting the percolation of wastewater to the underlying aquifer through the networks or cesspits.

Husam Al-Najar / Sustainable Household Water Model (2017)

Figure 2. Nitrate contour map (PWA, 2015).

Khan Younis, and especially Abasan Al-Kabera, has the highest concentration, since most of the residential area is not served by a sewerage system, many areas are still served by cesspits, and the area is characterized by rural zones where fertilizers are used intensively (Al-Najar et al., 2014). Moreover, Abu Jabal et al. (2014, 2015) discussed a further parameter from Khan Younis domestic wells, showing high fluoride concentrations. The previously mentioned facts about the groundwater beneath the Khan Younis governorate in general, and Abasan Al-Kabera in particular, agree with the United Nations (UN) report concerning the Gaza Strip's environmental and health status, "Gaza in 2020: a liveable place?" (UN, 2012). The report emphasizes that, without remedial action now, Gaza's problems in water, education and health will only get worse over the coming years, as the top United Nations official for humanitarian and development aid in the occupied Palestinian territory, Maxwell Gaylard, warned. "Gaza will have half a million more people by 2020 while its economy will grow only slowly. In consequence, the people of the Gaza Strip will have an even harder time getting enough drinking water." Mr.
Gaylard, together with Jean Gough of UNICEF and Robert Turner of the United Nations Relief and Works Agency for Palestine Refugees (UNRWA), launched a new United Nations report that summarizes trends in Gaza and forecasts for the year 2020. The report says that the population of the Gaza Strip will increase from 1.6 million people today to 2.1 million people in 2020, resulting in a density of more than 5,800 people per square kilometer. Infrastructure in electricity, water and sanitation, and municipal and social services is not keeping pace with the needs of the growing population. Gaza's population of about 1.6 million is still overwhelmingly young and urban. By all accounts, demographic pressures in the Gaza Strip, in terms of population density, growth rate, poverty and unemployment, are extraordinarily high compared to neighboring countries and regions. The population pressure, combined with limited resources, places immense strain on the natural environment. Politicians and planners are faced with many competing claims for the use of scarce water and land in the Gaza Strip to fulfill the growing demand for development. The aim of the current research is to highlight, as an emergency action, the possible means to remediate the resources and sustain the water cycle, as a response to the UN 2020 report, in order to save water for the coming generations. Abasan Al-Kabera is discussed as a case study due to its special rural and urban characteristics.

II Study Area and Methodology

To achieve the planned objectives of a sustainable water cycle in the Gaza Strip, Abasan Al-Kabera is proposed as a model. The approach is to start from the household water cycle and scale up to the large-scale water cycle. The main source of domestic water is the six municipal groundwater wells: N9, N22 and Rashwan 1, 2, 3 and 4. The water is distributed from two main reservoirs (a ground reservoir of 2000 m³ and an elevated reservoir of 300 m³). As shown in Fig.
3 and Table 1, the Abasan structural plan area is 7028 dunums (1 dunum = 1000 m²), of which 42.84% is residential area, while the agricultural residential areas represent 43.66%; the rest represents commercial areas, roads and green areas (MOAK, 2016).

Figure 1. Chloride contour map (PWA, 2015).

Figure 3. The structural plan of Abasan Al-Kabera, 2016.

The population is 22,493 persons (PCBS, 2016) and the number of water connections is 3042 customers (CMWU, 2016b). The irrigation water requirement is estimated by modeling the average meteorological data of the last ten years using CROPWAT version 8.0, a program that uses the FAO (2004) Penman-Monteith method for calculating reference crop evapotranspiration and irrigation water requirements. The agricultural water demand, in addition to the domestic water demand, is calculated and compared with the data registered in the municipality archive.

Table 1. The land use of the Abasan Al-Kabera structural plan 2007.
Land use | Dunum | %
Agricultural residential area | 2052 | 29.2
Assistant agricultural residential area | 1016 | 14.46
Residential area | 3011 | 42.84
Commercial area | 189 | 2.69
Public buildings | 176 | 2.5
Green area | 56 | 0.8
Cemetery | 28 | 0.4
Main streets | 499.5 | 7.11

III Water Cycle at Household Level

A. Stormwater collection

As stated previously, the increasing urbanization in the Gaza Strip represents a significant land use change, which has affected the major components of the overall water balance within the Gaza Strip. The recommended safe yield of the coastal aquifer in the Gaza Strip is 55 million cubic metres a year (PWA, 2012), and it would be good if a significant proportion of this could be returned to the aquifer through natural recharge. However, while urbanization increases overland runoff, this comes at the expense of infiltration and groundwater recharge, making the attainment of this objective difficult (Al-Najar and Adeloy, 2005; Hamdan et al., 2007; PWA, 2007).
So while the use of a comprehensive modelling tool might be desirable, it remains a matter for the future as far as the Gaza Strip is concerned. As an interim solution, an attempt was made to estimate the quantities of water that can potentially be harvested as rainwater from the rooftops of houses. To estimate the annual collected volumes, the basic water balance equation is used: Q = A × P, where Q is the annual collected volume from the rooftop, A is the roof area and P is the annual rainfall. From the data of the water customer services, the number of connections is 3042, giving an estimate of the number of houses. Considering the total population and the number of houses, this results in 7 persons per house. Assuming the average rooftop surface in Abasan Al-Kabera is 120 m² and the annual rainfall is 250 mm (MOA, 2016), the total collected rainwater from each house is 30 m³. The required water supply per year per house equals 0.09 m³/capita/day × 7 persons × 365 days = 230 m³/year per household. Thus, the collected rainwater from each house represents 13% of the total family demand. The collected rainwater should be directly infiltrated to the groundwater, to minimize the required storage area and to prevent the growth of microorganisms, as the Gaza Strip has experienced bad water quality due to the lack of monitoring programs and the prevalence of water-borne diseases (WHO, 2006; Yassin et al., 2006; Sadallah and Al-Najar, 2015).

B. Greywater treatment and reuse

Greywater reuse is a promising alternative water source, which could be exploited on a continuous basis and treated for non-potable uses (Chong et al., 2015). The decentralized household grey and wastewater treatment units in the Gaza Strip have long been used in rural areas in a conventional form: a cylindrical shape constructed from concrete bricks for the external walls, without any lining at the bottom.
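The rooftop arithmetic above can be sketched in a few lines of Python. The variable names are ours; the figures (120 m² roof, 250 mm/year rainfall, 7 persons per house, 0.09 m³/capita/day supply) are the assumptions stated in the text:

```python
# Household rooftop rainwater yield for Abasan Al-Kabera,
# using the balance Q = A * P with the figures given in the text.
roof_area_m2 = 120.0           # assumed average rooftop area
annual_rainfall_m = 0.250      # 250 mm/year (MOA, 2016)
persons_per_house = 7
supply_m3_per_capita_day = 0.09

collected_m3 = roof_area_m2 * annual_rainfall_m                    # Q = A * P
demand_m3 = supply_m3_per_capita_day * persons_per_house * 365

print(f"Collected rainwater: {collected_m3:.0f} m3/year")          # 30 m3/year
print(f"Household demand:    {demand_m3:.0f} m3/year")             # 230 m3/year
print(f"Share of demand:     {collected_m3 / demand_m3:.0%}")      # 13%
```

This reproduces the 30 m³/year yield and the 13% share of the 230 m³/year household demand quoted in the text.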
these units are of unsanitary construction and depend mainly on infiltration of the wastewater, which contaminates the groundwater. another onsite treatment variant in palestine adds a rectangular separation tank before the septic tank. it is estimated that 40% of the houses in gaza have such a conventional septic tank (al-najjar, 2013). the idea of decentralized wastewater treatment plants (dewats) has traditionally been used by the local community. the unsanitary septic tank system was used for a long time as a result of the unavailability of sewerage (el-halabi, 2005). the first initiative to develop the decentralized treatment system was adopted by ngos at the beginning of the last decade to treat greywater in rural areas, as follows: twenty-five septic tanks were implemented by the union of agricultural work committees (uawc) using the traditional system but with some additional sanitary measures, such as tank ground lining and fully closed tank walls, which prevented to some extent the infiltration to the groundwater but gave only a small gain in treatment efficiency. the two main ngos were the palestinian hydrology group (phg), through their wastewater treatment and reuse in agriculture project, and the palestinian agricultural relief committees (parc). the palestinian hydrology group (phg) model was one of the first trials in the gaza strip to treat wastewater in rural areas. the system was designed to treat greywater and to serve as a potential source of treated wastewater for reuse. it was implemented in many parts of the gaza strip, especially in the rural areas where treated wastewater can be reused. the aims of the system implementation were to protect the environment, to enhance the use of non-traditional water resources and to decrease the use of cesspools.
the palestinian agricultural relief committees (parc) model has been implemented in the west bank and gaza in rural areas to reuse treated wastewater for irrigating farms. action against hunger (acf) has installed 25 units in the eastern villages of khanyounis, including abasan al-kabera (acf, 2016). the decentralized wastewater management approach, on the other hand, could be a valuable alternative to conventional, centralized approaches if low-cost processes adapted to the local conditions are applied and properly maintained (epa, 2008). water is increasingly becoming a scarce resource. large and small scale users need to take action to conserve it, not only because it is prudent practice for their own benefit, but also because it is an active demonstration of their concern about global pollution and environmental problems. acquiring the capacity to develop and implement greywater recovery technology in residential areas is essential to alleviate the consequences of water scarcity.
iv water balance in abasan al-kabera
considering a family size of 7 persons in abasan al-kabera, toilet flushing of 2 times per person a day produces around 8 liters/person/day of wastewater (i.e. 56 liters/family/day). traditionally, the residents use part of the water supply to irrigate the surrounding garden and wash the yard, so that the generated wastewater (both blackwater and greywater) equals 90% of the water supply. the remainder of the generated wastewater (0.9 × 90 − 8 = 73 l/person/day) is greywater, so the potential recovery of greywater per family per year equals 0.073 m³ × 7 persons × 365 days = 186 m³/year. since the water supply per family per year equals 230 m³, greywater recovery represents about 81% of it.
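the per-family greywater arithmetic above can be checked with a short script; all rates are the figures given in the text (the script's 186.5 m³/year is rounded down to 186 in the paper):

```python
# sketch of the per-family greywater arithmetic; supply rate, wastewater
# fraction and toilet-flushing volume are the figures from the text.
SUPPLY_L_PER_CAP_DAY = 90      # water supply, l/person/day
WASTEWATER_FRACTION = 0.9      # share of supply that becomes wastewater
BLACKWATER_L_PER_CAP_DAY = 8   # toilet flushing (2 flushes/person/day)
PERSONS_PER_FAMILY = 7

greywater_l = WASTEWATER_FRACTION * SUPPLY_L_PER_CAP_DAY - BLACKWATER_L_PER_CAP_DAY
per_family_m3_year = greywater_l / 1000 * PERSONS_PER_FAMILY * 365
supply_m3_year = SUPPLY_L_PER_CAP_DAY / 1000 * PERSONS_PER_FAMILY * 365

print(f"greywater: {greywater_l:.0f} l/person/day, "
      f"{per_family_m3_year:.1f} m3/family/year "
      f"({per_family_m3_year / supply_m3_year:.0%} of supply)")
```

running this reproduces the 73 l/person/day and the roughly 81% recovery share used in the balance below.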
balancing water supply, blackwater generation, greywater recovery and rainwater collection, around 94% of the water supply could be recovered (81% greywater + 13% rooftop rainwater collection). this calculation model is nearly fixed: toilet flushing is restricted, while people use water in large amounts in the bathroom, kitchen and washing machines, all of which produce greywater; in other words, if the water supply increases, greywater production increases accordingly. as shown in figure 4, not all the produced greywater can be used to flush the toilet; only 56 l/family/day can be utilized, and the rest (511 − 56 = 455 l/day) should be infiltrated to the groundwater through the rainwater injection boreholes. greywater reuse is very suitable in the agricultural and assistant agricultural residential areas, which represent 43.66% of the total area of abasan (see table 1). moreover, the greywater could be used to irrigate the agricultural lands cultivated with olives, guava and citrus, and the rest should be infiltrated.
figure 4 recovery model of greywater and rainwater collection
as shown in table 1, the purely residential area represents only 42.84% of the total area of abasan and is thus characterized as urban, while 43.66% is agricultural and assistant agricultural with scattered buildings, characterized as rural. the recovery model of figure 4 is designed for the houses, which number 3,402; the total rainwater collected from household rooftops is 30 m³/house × 3,402 = 102,060 m³/year. the public buildings represent 176 dunums, as shown in table 1, so the rainwater collected from public building rooftops equals 44,000 m³/year. the collected rainwater could be infiltrated to the groundwater.
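the recovery-model bookkeeping of this section (rooftop harvest, 94% recovery share, 455 l/family/day to infiltrate, rooftop totals) can be reproduced as follows; the house count, roof area and rainfall are the paper's assumptions:

```python
# recovery-model figures from this section; house count (3,402), roof area
# (120 m2) and rainfall (250 mm/year) are the paper's assumptions.
rooftop_harvest_m3 = 120 * 0.250            # q = a * p = 30 m3/house/year
greywater_lpd = 73 * 7                      # 511 l/family/day produced
to_infiltrate_lpd = greywater_lpd - 56      # 455 l/family/day after toilet reuse

houses = 3402
household_rooftops_m3 = rooftop_harvest_m3 * houses    # 102,060 m3/year
public_rooftops_m3 = 176 * 1000 * 0.250                # 176 dunums -> 44,000 m3/year

recovered_share = 0.81 + 0.13               # greywater + rooftop rain, ~94%
print(household_rooftops_m3, public_rooftops_m3, to_infiltrate_lpd, recovered_share)
```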
the entire area of abasan is 7,027.5 dunums, so the quantity of rainwater that could be collected equals 1,756,875 m³/year, of which 146,060 m³ is collected from the rooftops of public buildings and households. the water cycle in abasan consists of abstraction from the groundwater for domestic and irrigation water, and recharge of the groundwater, as follows:
a) outflow from the groundwater: domestic demand = 90 l/c/d × 22,493 persons × 365 = 738,895 m³/year; agricultural demand in the agricultural and assistant agricultural areas = 3,068 dunums × 550 m³/dunum = 1,687,400 m³/year, assuming all the agricultural and assistant agricultural land is cultivated and planted. clearly, the agricultural demand is more than twice the domestic demand.
b) inflow to the groundwater: total rainwater volume = 1,756,875 m³/year; assuming 20% is lost to runoff, the recharge of the groundwater from rainwater = 1,405,500 m³/year; return of irrigation water to the groundwater = 0.3 × 1,687,400 = 506,220 m³/year; recovery of greywater = 168 m³/household × 3,402 = 571,536 m³/year.
finally, the outflow from the groundwater accounts for 2,426,295 m³/year, while the inflow from stormwater and greywater reuse accounts for 2,483,256 m³/year. in conclusion, the water balance is achieved in the case of abasan, but it is necessary to adopt a proper stormwater collection system at the household level and from agricultural areas. moreover, the greywater treatment and reuse systems should be developed and enhanced to guarantee the quality of groundwater recharge based upon the palestinian standards.
references
acf, 2016. enhance resilience and maintain livelihoods of palestinian food insecure households affected by the conflicts, palestine project, wastewater and greywater expert. final report: greywater and wastewater treatment at household level in the gaza strip: extension and potential reuse.
al-najar h., a.j.
adeloye 2005. the effect of urban expansion on groundwater as a renewable resource in the gaza strip. rics research 5(8): 7-21.
al-najar, h., al-dalou, f., snounu, i. and j. al-dadah. 2014. framework analysis of socio-economic and health aspects of nitrate pollution from urban agricultural practices: the gaza strip as a case study. journal of agriculture and environmental sciences, vol. 3(2): 355-370.
coastal municipal water utility (cmwu). 2016a. water and wastewater situation in the gaza strip: summary about water and wastewater situation in gaza strip.
coastal municipal water utility (cmwu). 2016b. assessment of the customers satisfaction from the service providers. survey study supported by icrc.
hamdan, s., uwe troeger and abelmajid nassar. 2007. stormwater availability in the gaza strip, palestine. int. j. environment and health, vol. 1, no. 4: 580-594.
metcalf and eddy consultant co. (camp dresser and mckee inc.), 2000. coastal aquifer management program, integrated aquifer management plan (gaza strip), usaid study task 3, executive summary, vol. 1, and appendices b-g. gaza, palestine.
ministry of agriculture 2016. rainfall seasonal report 2015/2016, pna.
mohd s. abu jabal, abustan, i., rozaimy, m.r., and h. al najar. 2014. fluoride enrichment in groundwater of semi-arid urban area: khan younis city, southern gaza strip (palestine). journal of african earth sciences, 100: 259-266.
mohd s. abu jabal, abustan, i., rozaimy, m.r., and h. el najar. 2015. groundwater beneath the urban area of khan younis city, southern gaza strip (palestine): hydrochemistry and water quality. arabian journal of geosciences, vol. 8(4): 2203-2215.
mohd s. abu jabal, ismail abustan, mohd remy rozaimy and hussam el najar. 2017. groundwater beneath the urban area of khan younis city, southern gaza strip (palestine): assessment for multi-domestic purposes. arab j geosci 10: 257, pp 1-15.
municipality of abasan al-kabera (moak) 2016. structural plan of abasan al-kabera. planning directorate.
palestinian central bureau of statistics (pcbs). 2016. statistic brief (population, housing and establishment census). palestinian national authority, gaza, palestine.
palestinian water authority 2007. guiding information towards domestic groundwater supply management in the gaza strip governorates, palestine. water resources directorate.
palestinian water authority (pwa) 2012. national water strategy for palestine: toward building a palestinian state from a water perspective. pwa library, gaza.
palestinian water authority (pwa), 2015. evaluation of water resources in the five governorates of gaza strip. water resources planning directorate.
qahman, abdelkader larabi, driss ouazar, ahmed naji, alexander h.-d. cheng 2009. optimal extraction of groundwater in gaza coastal aquifer. j. water resource and protection, 4, 249-259.
sadallah h., and h. al-najar. 2015. disinfection of intermitted water supply system and its health impact: um al nasser village as a case study. world journal of environmental engineering, vol. 3(2): 32-39. doi: 10.12691/wjee-32-2.
shomar, b. 2010. groundwater contaminations and health perspectives in developing world case study: gaza strip. environ geochem health, 11 june 2010.
united nations (un) report 2012. gaza in 2020: a liveable place?
world health organization. guidelines for drinking-water quality, 3rd ed, 2006.
yassin maged, salem s. abu amr, husam m. al-najar. 2006. assessment of microbiological water quality and its relation to human health in gaza governorate, gaza strip. public health, 120: 1177-1187.
el-halabi, m. 2005. evaluation and design model of decentralized units for wastewater treatment. master's thesis, the islamic university, gaza strip.
al-najjar, y. h. 2013. onsite wastewater treatment for semi-urban areas: abasan case study. master's thesis, the islamic university, gaza strip.
epa/625/r-00/2008. usepa onsite wastewater treatment systems manual.
polish humanitarian action (pah). 2012. interim progress report: improvements of household sanitation by using low cost treatment unit supported by solar energy and applying wastewater reuse for agriculture in the rural area of abasan, gaza.
chong, m.n., cho, y.j., poh, p.e., jin, b., 2015. evaluation of titanium dioxide photocatalytic technology for the treatment of reactive black 5 dye in synthetic and real greywater effluents. journal of cleaner production 89: 196-202.
husam al-najar. he holds a bachelor's, a master's and a phd in environmental sanitation. he has worked for several years for local as well as international consultancy firms and has gained wide experience in the field of water and environmental sanitation. he has also led training and research groups in the fields of water resources management, infrastructure planning and soil and environmental protection. currently, he is working as a lecturer at the islamic university of gaza, teaching water supply, irrigation and drainage, and wastewater treatment and reuse courses, in addition to courses in post-disaster management.
journal of engineering research and technology, volume 4, issue 3, september 2017
arabic text genre classification
alaa m. el-halees
faculty of information technology, islamic university of gaza, gaza, palestine, email alhalees@iugaza.edu.ps
abstract— text genre is a type of written text. arabic text genre classification predicts the genre of a specific text document written in arabic, independent of its topic. in this paper, an approach is proposed that takes an arabic document and classifies it into one of four genres: advertisements, news, subjective and scientific documents. since the word-frequency approach produces low performance when used for genre, an attempt was made to generate attributes based on the style of the text. this approach was evaluated using a corpus collected for this purpose.
using four machine learning methods, our approach was compared with the word-frequency approach, and it was found that our approach outperforms this mainstream approach. it was also found that predicting the subjective and scientific genres is more accurate than predicting advertisements and news.
index terms— text genre, text genre classification, arabic language processing, text mining, machine learning methods.
i introduction
text genre classification is concerned with predicting the type of an unknown text correctly, independent of its topic [1]. genre refers to the kind of text; it is the functional role of the text, not its topic. examples of text genre are scientific articles, news reports, reviews, and advertisements. the importance of text genre comes from users wanting a specific type of text. the typical example is in information retrieval and search engines, where the user may desire to see documents for a specific reason, such as a review of some object (i.e. people's opinion of a product) or a scientific article on some subject [2]. text genre classification is different from traditional text classification, where traditional classification is based on the frequency of certain words in the document using a tf-idf representation. in classifying genre, text style is used instead. most research in the area of text genre classification deals with english text. some works deal with other languages, but in arabic, which is the language of millions of people, there is no work on text genre classification. arabic is a challenging language for several reasons. it has a complex morphology compared to other languages like english, due to the unique nature of the arabic language: arabic is an inflectional and derivational language, which makes morphological analysis a very complex task [3]. the first and most important task of genre classification is to choose the genre types.
based on the field of linguistics, three abstract and very general classes are used, namely, expressive, appellative, and informative text [4]. accordingly, texts are tagged as subjective (expressive), advertisement (appellative), and scientific papers and news (informative). then, cues such as sentence structure, sentence length, characters used and punctuation are used to generate the features. machine learning methods are then used to classify the genre. four machine learning methods were used: support vector machine, naive bayes, k-nearest neighbors and decision trees. to evaluate this approach, a corpus was collected from many arabic websites, since no other work has been done on this topic. finally, our method was compared with the traditional tf-idf method used in topic classification. the remainder of the paper is organized as follows: section two covers related work in this area; section three, genre classification; section four, our methodology; section five, the experiments and results; finally, the paper closes with a conclusion and an outlook on future work.
ii related works
in the english language, text genre classification has been addressed by many works, such as that of kessler et al. in [1], who proposed a theory of genres as bundles of facets, which correlate with various surface cues. they argued that genre detection based on surface cues is as successful as detection based on deeper structural properties. they developed a taxonomy of genres and facets. also, they found an effective strategy for variable selection to avoid overfitting during training with neural networks that has higher performance on average. karlgren and cutting in [5] used discriminant analysis to categorize texts into pre-determined genre categories.
they argued that discriminant analysis makes it possible to use a large number of parameters that may be specific to a certain corpus, and to combine them into a small number of functions, with the parameters weighted by how useful they are for discriminating text genres. also, liu et al. in [6] discussed automatic genre classification and its applications. they argued that word-level features and sentence-level features are two important measures which vary in number among different genres. based on these two aspects, they explored an approach in which the co-training method is employed to obtain genre classification. stamatatos et al. in [7] took full advantage of existing natural language processing tools to propose style markers, including analysis-level measures that characterize the way in which the input text has been analyzed and capture valuable stylistic information. they presented a set of small-scale experiments in text genre detection, author identification, and author verification tasks. they showed that the proposed method performs better than most distributional lexical measures, functions of vocabulary wealth and frequencies of occurrence of the most frequent words. galitsky et al. in [8] proposed to use methods based on deep textual parsing, which depend on finding complex features such as syntactic and discourse structures of the text, to improve the quality of genre classification. in their paper they presented three experiments on style and genre classification. for the genre classification task, they adopted a corpus annotated with 7 different genres and conducted a series of pairwise classifications between two genres. melissourgou and frantzi in [9] investigated a range of genres involved in writing tasks presented in english language teaching material. they explained how they identified genres based on systemic functional linguistics (sfl) principles.
they added another stage, which is the 'naming' of genre categories, mainly based on purpose and mode, to guide anyone who needs to understand genre requirements. in multi-language text genre classification, petrenz in [10] described a new approach to classifying text genres across languages. it can bring the benefits of genre classification to the target language without the cost of manual annotation while achieving good results. in his experiments, he considered the english and chinese languages, because these languages are very dissimilar linguistically. he expected the approach to work at least equally well for more closely related language pairs.
iii genre classification
genre classification is different from the topic classification that most classification research has dealt with. from an information retrieval point of view, a retrieval query about a certain topic would retrieve many documents related to that topic, but they may be of different genres [11]. for example, if someone searches for a certain product, the retrieved pages will be any documents that contain the name of that product. genre, however, lets the user specify whether they want, for example, news, an advertisement, or a critical review of that product [2]. genres give a way to describe the nature of a text, which allows for assigning the document to classes. arabic genre classification is concerned with correctly predicting the genre classes of unknown arabic documents, independent of their topic. in arabic genre classification, let c = {c1, c2, ..., cm} be a set of genre classes and d = {d1, d2, ..., dn} a set of arabic documents. the task of arabic genre classification consists of assigning a class label ci to each document dj if dj belongs to ci, where exactly one class must be assigned to each dj. based on the field of linguistics, text genre can be classified into three general classes, namely, expressive, appellative, and informative [4].
expressive means that the text aims to express the attitude, feelings, and opinions of a person. according to this definition, an opinion mining corpus maps to the expressive genre. appellative means appealing to the receiver's experience, feelings, knowledge and sensibility to make him or her react in a specific way [12]. the text that best maps to this genre is the advertisement, which is used in this research. finally, informative text provides information about any topic of knowledge and is identified by an impersonal, objective, non-emotive style [4]. two classes were mapped to this genre: scientific papers and news.
iv methodology
our methodology consists of the following steps:
a. generate features
text genre is mostly characterized by its text style. to generate features, this work concentrated on two levels of text style: the token level and the lexicon level. the token level considers the text as a set of tokens grouped in sentences. at this level, features were generated from each document such as the average number of words in a sentence, the average number of short words in a document (where short words are considered to be words with fewer than six characters), the average number of words per phrase and the average number of characters per word. at the lexicon level, features were generated from each document such as the average number of nouns, adjectives, and verbs per word. features were also added such as the average number of pronouns, coordinating conjunctions, cardinal numbers, and determiners per document.
b. corpus
as stated above, this research used four text genres: subjective, advertisements, news and scientific. since there are no other works on arabic genre classification, no corpus exists in the literature; therefore, our own corpus was collected. as shown in table 1, 7825 documents were used for the four genre types, where each genre type contains more than one topic.
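the token-level style features listed above (average words per sentence, short-word ratio, average characters per word) can be sketched in a few lines; this is an illustrative stdlib sketch, not the paper's code, and real arabic text would need proper tokenization and a pos tagger for the lexicon-level features:

```python
import re

def token_style_features(text: str) -> dict:
    """token-level style features of the kind described above; a simplified
    sketch (sentence splitting includes the arabic question mark U+061F)."""
    sentences = [s for s in re.split(r"[.!?\u061f]+", text) if s.strip()]
    words = text.split()
    n_words = max(len(words), 1)
    return {
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        # short words: fewer than six characters, as in the text
        "avg_short_words": sum(len(w) < 6 for w in words) / n_words,
        "avg_chars_per_word": sum(len(w) for w in words) / n_words,
    }

feats = token_style_features("this is a short sentence. here is another one.")
print(feats)
```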
for example, the subjective genre has positive and negative reviews on topics such as movies, hotels, books, etc. advertisements have topics from many products such as electronics, furniture, medical and sports equipment. news has topics from culture, economy, international affairs and sports. finally, scientific papers have topics from medicine, science, economy and literature.
table 1 corpus used in the experiments

genre type     topics                          no. documents    total
subjective     1. positive                     1430
               2. negative                     1430             2860
advertisement  1. computers and electronics    340
               2. furniture                    522
               3. medical equipment            254
               4. sports equipment             342              1456
news           1. culture                      500
               2. economy                      500
               3. international                500
               4. sports                       500              2000
scientific     1. medicine                     512
               2. science                      327
               3. economy                      378
               4. literature                   292              1509

c. methods
in our experiments to classify documents into genres using two approaches, tf-idf representation and text style extraction, four classifiers were applied: naïve bayes, k-nearest neighbors, support vector machine and decision trees. the naïve bayes classifier is widely used because of its simplicity and computational effectiveness. the model assigns a class label to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. for text, training consists of relative-frequency estimation of words in a document as word probabilities, which are then used to assign a class to the document. to estimate the term p(d | c), where d is the document and c is the class, naïve bayes decomposes it by assuming the features are conditionally independent [13]. k-nearest neighbors is another classification method. the training examples are vectors in a multidimensional feature space, each with a class label. the training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.
for text, in the training phase the documents have to be indexed and converted to a vector representation. to classify a new document d, the similarity of its document vector to each document vector in the training set has to be computed. its k nearest neighbors are then determined by measuring similarity, for example by the euclidean distance [14]. support vector machine is a classification algorithm proposed by [15]. in its simplest linear form, it is a hyperplane that separates a set of positive examples from a set of negative examples with maximum margin. for text, test documents are classified according to their positions relative to the hyperplane. a decision tree is a structure that includes a root node, branches, and leaf nodes. each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. a decision tree text classifier consists of a tree in which internal nodes are labeled by words, branches departing from them are labeled by tests on the weight that the words have in the representation of the test document, and leaf nodes are labeled by categories ci. such a classifier categorizes a test document dj by recursively testing the weights that the words labeling the internal nodes have in the representation of dj, until a leaf node ci is reached; the label of this leaf node is then assigned to dj [16].
v experiment and results
a. experiments
two sets of experiments were applied: the first for topic classification as a baseline, and the second to evaluate the generated features. for the first set of experiments, the baseline used the corpus described above with topic-based classification. before classification, some pre-processing was done, such as tokenization, stop-word removal and arabic light stemming.
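the naïve bayes scheme described above (relative-frequency word probabilities combined under conditional independence) can be sketched as follows; the add-one smoothing and the toy two-class vocabulary are our additions for illustration, not the paper's exact setup:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """multinomial naive bayes: p(c|d) is proportional to p(c) * prod p(w|c),
    with add-one smoothing; a minimal sketch of the scheme described above."""

    def fit(self, docs, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        def log_score(c):
            total = sum(self.word_counts[c].values())
            denom = total + len(self.vocab)  # add-one smoothing denominator
            score = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in doc.split():
                score += math.log((self.word_counts[c][w] + 1) / denom)
            return score
        return max(self.class_counts, key=log_score)

# hypothetical two-document training set for illustration
nb = NaiveBayes().fit(["goal match team", "price offer discount"], ["news", "ad"])
print(nb.predict("big discount offer"))  # → ad
```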
then, vector representations of the terms were obtained from their textual representations by computing tf-idf weights, a well-known term-weighting scheme often used in text mining. terms with a low frequency of occurrence were removed. for classification, the four methods described above were used: naïve bayes, k-nearest neighbors, support vector machine and decision trees. in the second set of experiments, using the same corpus, features were generated based on the nature and lexicon of the documents in the corpus. part-of-speech (pos) tagging was used to generate word classes, such as nouns, adjectives, and verbs. then the four machine learning methods were applied. the experiments were evaluated using 10-fold cross-validation, and the f-measure was computed, a combined metric that takes both precision and recall into consideration.
b. results
table 2 and figure 1 show the f-measure for the baseline classification, which is based on tf-idf, and for the classification based on features generated from the text, using the four machine learning methods. it is clear that the generated features give better results than the baseline with all machine learning methods. however, there is little difference in performance when using naïve bayes. the biggest difference is with decision trees, where the f-measure is 48.87% using the baseline and 100% using the generated features. that is mainly because the baseline depends on the frequency of words, and words may be frequent in more than one genre if they are on the same topic (e.g. a word from a sports topic can appear in news, advertisements, subjective or scientific documents). that is not the case for the generated features, which depend on style, not frequency.
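the f-measure used to report these results combines precision and recall; a minimal helper, with hypothetical counts for illustration:

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """f1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts: 90 true positives, 10 false positives, 30 misses
print(round(f_measure(90, 10, 30), 4))  # → 0.8182
```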
figure 1: f-measure for arabic genre classification
table 2 f-measure for baseline and generated features in arabic genre classification

method                  baseline %    generated features %
support vector machine  90.89         98.07
k-nearest neighbour     85.42         95.58
naïve bayes             80.37         80.47
decision trees          48.87         100

table 3 shows the f-measure for the four selected corpus genres, where b.l stands for baseline and g.f for generated features. it was noted that all methods accurately recognized the subjective and scientific genres. also, there is a little confusion between advertisements and news, which is natural because they have many common characteristics.
table 3 f-measure for four arabic text genres

                support vector    k-nearest       naïve          decision
                machine %         neighbour %     bayes %        trees %
                b.l    g.f        b.l    g.f      b.l    g.f     b.l   g.f
advertisements  84.9   96.3       79.6   90.32    90.7   68.57   28.7  100
news            94.4   96.0       80     88.0     81.2   53.3    67.4  100
subjective      86.2   100        85.1   100      76.25  100     0     100
scientific      98     100        97     100      97.23  100     99.3  100

from the generated decision tree, as seen in figure 2, it can be seen that the average number of words per phrase, the average number of characters per word and the average number of words per sentence are the most important attributes for distinguishing genre.
figure 2: decision tree for generated features
vi conclusion and future works
there are some significant differences between text topic classification and text genre classification. text topic classification depends mainly on the frequency of certain words in a document to recognize that document. this does not work for text genre classification, because words may be frequent in multiple genres. in this paper, arabic documents were classified into genres. four genre types were chosen: advertisement, news, subjective and scientific. we generated attributes based on arabic language style. the work was evaluated using a corpus collected manually. using four machine learning methods, we found that our generated features perform better than the results obtained using the tf-idf method with the same machine learning methods and the same corpus. also, we concluded that the subjective and scientific genres give better performance than news and advertisements. future work may consider other arabic genres such as poems, islamic scripts, events, biographies, etc. other attributes that can recognize genre, such as the syntactical level of the arabic language, may also be investigated. in addition, the features are currently generated manually; using techniques such as deep learning, they could be generated automatically.
references
[1] b. kessler, g. nunberg, and h. schütze, "automatic detection of text genre," in the proceedings of the 35th annual meeting of the association for computational linguistics and 8th conference of the european chapter of the association for computational linguistics, 7–12 july, madrid, 1997.
[2] y. b. lee and s. h. myaeng, "text genre classification with genre-revealing and subject-revealing features," in proceedings of the 25th annual international acm sigir conference on research and development in information retrieval, pp. 145-150, 2002.
[3] b. hammo and s. lytinen, "qarab: a question answering system to support the arabic language," in the proceedings of computational approaches to semitic languages, p. 11, 2002.
[4] h. wachsmuth and k. bujna, "back to the roots of genres: text classification by language function," in the proceedings of the 5th international joint conference on natural language processing, chiang mai, thailand, november 8-13, 2011.
[5] j. karlgren and d. cutting, "recognizing text genres with simple metrics using discriminant analysis," in the proceedings of the 15th conference on computational linguistics, vol.
2, pp. 1071-1075, 1994.
[6] R. Liu, M. Jiang, and Z. Tie, "Automatic genre classification by using co-training," in Proceedings of the 6th International Conference on Fuzzy Systems and Knowledge Discovery, vol. 1, pp. 129-132, 2009.
[7] E. Stamatatos, N. Fakotakis, and G. Kokkinakis, "Automatic text categorization in terms of genre and author," Computational Linguistics, vol. 26, pp. 471-495, 2000.
[8] B. A. Galitsky, D. A. Ilvovsky, E. L. Chernyak, and S. O. Kuznetsov, "Style and genre classification by means of deep textual parsing," in Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies "Dialogue 2016," Moscow, June 1-4, 2016.
[9] M. Melissourgou and K. Frantzi, "Representation of text types and genres in English language teaching material," Corpus Pragmatics, Springer International Publishing, April 2017.
[10] P. Petrenz, "Cross-lingual genre classification," in Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pp. 11-21, April 2012.
[11] K. Crowston and B. H. Kwasnik, "Can document-genre metadata improve information access to large digital collections?" Library Trends, vol. 1, no. 315, pp. 1-29, 2003.
[12] J. Vaičenonienė, "The language of advertising: Analysis of English and Lithuanian advertising texts," Studies About Languages, no. 9, pp. 43-55, 2006.
[13] S. L. Ting, W. H. Ip, and A. H. C. Tsang, "Is Naïve Bayes a good classifier for document classification?" International Journal of Software Engineering and Its Applications, vol. 5, no. 3, pp. 37-46, 2011.
[14] B. Dasarathy, Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press, 1991.
[15] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, 1995.
[16] E. Gabrilovich and S. Markovitch, "Feature generation for text categorization using world knowledge,"
in Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK, July 30 - August 5, 2005, pp. 1048-1053.

Alaa M. El-Halees is a professor in computing in the Faculty of Information Technology at the Islamic University of Gaza, Palestine. He holds a PhD degree in data mining from Leeds Metropolitan University, UK (2004), an MSc degree in software engineering from Leeds Metropolitan University, UK (1998), and a BSc in computer engineering from the University of Arizona, USA. Alaa has more than 24 years of experience, including leading a range of IT-related projects. Prof. Alaa supervises MSc students in information technology, and leads and teaches modules at both BSc and MSc levels in information technology. His research activities are in the area of data mining, in particular text mining, machine learning and e-learning, software engineering, and computer ethics.

Journal of Engineering Research and Technology, Volume 4, Issue 4, December 2017

The Impact of Sea Groins in the Egyptian Side of Rafah on the Erosion of the Beaches of the Southern Area of the Gaza Strip Using Remote Sensing and GIS

Maher A. El-Hallaq
Associate Professor of Surveying and Geodesy, Civil Engineering Department, The Islamic University of Gaza, Palestine, mhallaq@iugaza.edu.ps

Abstract— Understanding spatio-temporal changes is essential to many aspects of engineering, geographic and planning research. The coastal zone is the most important and the most intensively used area compared with the other populated areas in the Gaza Strip. The rapid increase of population in the Gaza coastal area leads to depletion of the coastal zone resources and changes the coastal morphology. In this research, five Landsat satellite images collected during the period from 2008 to 2016 are used; first, all satellite images are radiometrically and atmospherically corrected.
Remote sensing techniques and a geographic information system are used for spatio-temporal analysis in order to detect changes in the shoreline position and coastal areas of the southern governorates. Results indicate that the net change in the beach area of Rafah governorate equals -71 donum during the analysis period, which is equivalent to -8.9 donum/year. The net change in the beach area of Khan Younis governorate equals -105.5 donum, at a rate of -13.2 donum/year. The Digital Shoreline Analysis System (DSAS) analysis indicates that the net average rate for Rafah's beach equals -3.7 m/year (erosion) and -2.7 m/year for Khan Younis. All of these statistics indicate that the southern beach suffers from a serious erosion problem. This study emphasizes that the coastal band is a critical area; it is therefore necessary to mobilize all stakeholders to monitor and protect the southern area of the Gaza Strip beach from the risk of drift that threatens vital installations and environmental features along the beach, such as streets, hotels, tourism facilities, mosques and houses.

Index Terms— Gaza Strip, shoreline erosion, remote sensing, spatio-temporal analysis.

I Introduction

The coastal zone is one of the most important and most densely populated areas in the Gaza Strip. The rapid increase in population in this region leads to an increase in the consumption of its resources, and the region is therefore subject to high pressure from both human and geomorphological activities. In some parts of the southern coast of the Gaza Strip, the process of seashore erosion is critical, as it threatens to demolish many buildings and roads that lie directly on the coast. In 2010, Egypt built sea groins at the coast of the Egyptian side of Rafah using large rocks. They are placed 2 km from the sea border between Gaza and Egypt and extend for about 1 km into the sea.
Recently, erosion can be clearly seen in the southern governorates of the Gaza Strip (Figure 1). The beach deteriorates as a result of sea waves hitting it; in the case of high waves, water reaches the sand dunes and removes parts of them. When a sea obstruction is established against the movement of sand-laden sea currents, sand accumulates south of the southern groin while erosion takes place north of the northern groin [1]. Serious problems are expected in the coming few years. Given the lack of a sufficient number of studies on the coastal area of the Gaza Strip, this study is performed to highlight the impact of the Egyptian marine groins on the erosion of the southern shores of the Gaza Strip using GIS. GIS is one of the most advanced technologies capable of dealing with large amounts of data and conducting many computerized operations as well as extensive spatial analysis.

Figure 1 Seashore erosion of Rafah city, Palestine

II The Study Area

Rafah is an Egyptian-Palestinian city, half of which is located inside the Egyptian border and is called Rafah-Egypt. The other half is located in the Gaza Strip and is called Rafah-Palestine; the latter lies in the southern part of the Gaza Strip. Its population in 2016 is about 233,490 people [2], and it is located 13 km from Khan Younis, 16 km from the village of Sheikh Zuwayd in Sinai, and 45 km from the Egyptian city of El Arish. Rafah rises 48 meters above the sea. It is characterized by sandy land, surrounded by sand dunes on each side, with rainfall decreasing and fertility ending towards the desert. The average temperature ranges between 30 degrees in summer and 10 degrees in winter. The average rainfall in Rafah is 250 mm [3].
Figure 2 shows the Egyptian and Palestinian sides of Rafah city.

III The Study Aim

This study aims to conduct a spatio-temporal analysis to detect the changes of the southern coastline of the Gaza Strip during the period 2008-2016, based on analyzing satellite imagery using remote sensing techniques and a geographic information system. To implement this study, the following objectives should be achieved:
• Detecting the amount and rate of change in the area of the coast of the southern Gaza Strip.
• Calculating the linear rate of change along the shoreline using the Digital Shoreline Analysis System (DSAS).
• Making recommendations to the responsible authorities.

IV Methodology

There are multiple approaches used in geographic research, as each approach meets the requirements of a particular stage of research, but this study depends mainly on: (a) the descriptive approach, which addresses the geographical, historical, social and economic profile of the study area, as well as concepts and knowledge of GIS, and involves data collection from the USGS website with a focus on satellite imagery captured by Landsat satellites; (b) the historical approach, which follows a historical phenomenon, not only to understand the past but for future planning as well; (c) the applied or analytical approach, which uses an appropriate image processing environment such as ERDAS software to preprocess, enhance, classify and transform imagery, in addition to ArcGIS and its tools for detecting changes and rates of change along the coast of the study area. Figure 3 outlines the overall framework of the used methodology.

A Data Collection

In this study, satellite images from the U.S.
Geological Survey (USGS) website are downloaded for the period 2008-2016, as shown in Table 1, according to the following criteria: downloading images of January; choosing images that are nearly free from noise, to reduce preprocessing operations; and preferring Thematic Mapper (TM) images over Multi Spectral Scanner (MSS) and Enhanced Thematic Mapper Plus (ETM+) images to avoid black gaps [4]. The Landsat 7 ETM+ imagery downloaded for 2008-2012 is SLC-off data (it contains black gaps, DN = 0). This type of gap should be minimized by taking two radiometrically corrected ETM+ scenes and then combining them for more complete coverage. Finally, all bands are used in GeoTIFF format.

Figure 3 Methodology framework
Figure 2 Rafah-Egypt and Rafah-Palestine

B Preprocessing Task

Preprocessing of the downloaded images involves various operations such as clipping the study area by image subset, performing the radiometric and geometric corrections, and enhancing and reducing image noise by removing black gaps. The digital sensors record the electromagnetic radiation intensity of each point on the surface of the Earth as a digital number (DN) for each spectral range, and the range of DN values that a sensor captures depends on its radiometric resolution. The Landsat MSS sensors measure radiation on a DN scale of 0-63, while Landsat TM and ETM+ measure it on a scale of 0-255. Preprocessing includes digital image corrections to improve the accuracy of the brightness values [5]. It should be noted that the sources of noise, and the proper ways to correct them, depend in part on the type of sensor, the nature of the image, and the nature of the imaging method used to capture the digital image data.
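The two-scene gap combination described above can be sketched in a few lines. This is a minimal numpy sketch, not the ERDAS workflow the study actually used; the function name and the array values are illustrative:

```python
import numpy as np

def fill_slc_gaps(primary, secondary, gap_value=0):
    """Fill SLC-off black gaps (DN == gap_value) in a primary ETM+ band
    using the corresponding pixels of a co-registered second scene."""
    filled = primary.copy()
    gaps = (primary == gap_value)   # boolean mask of the gap pixels
    filled[gaps] = secondary[gaps]  # copy values only where the gaps are
    return filled

# Example: two tiny co-registered "bands"; zeros mark the SLC-off gaps.
primary = np.array([[120, 0, 98],
                    [0, 75, 0]])
secondary = np.array([[118, 64, 97],
                      [51, 74, 88]])
combined = fill_slc_gaps(primary, secondary)
```

Valid pixels of the primary scene are kept untouched; only DN = 0 pixels are replaced, which matches the stated goal of more complete coverage.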
Figure 4 shows examples of some preprocessing tasks applied to the collected imagery.

C Image Classification

Supervised classification is used because the difference between land and water is clear in the collected images [6]. Several land and water training samples are selected using ERDAS IMAGINE 2014 software. The user specifies the pixel values, or spectral signatures, that should be associated with each class (here, land or water). This is done by selecting representative sample sites of known cover type, called training sites. It is important to choose training sites that cover the full range of variability within each class, to allow the software to accurately classify the rest of the image. The computer algorithm then uses the spectral signatures from these training sites to classify the whole image. Figure 5 shows an example of a supervised classification of one of the images under consideration.

D Change Detection Analysis

At this stage, two types of analysis are conducted: the change of the beach area between any two sequential shorelines of a selected interval, and the linear rate of change of the shoreline over the considered interval using the Digital Shoreline Analysis System (DSAS) tool. To calculate the change in area between two shorelines of the beach, ArcGIS tools are used. First, the shorelines are merged using the Append tool into one feature class. Then, they are processed to form a closed space, and the lines are converted to polygons using the "Feature To Polygon" tool. Finally, accretion and erosion areas for both Rafah and Khan Younis can be calculated. To calculate the linear rate of change along the shoreline, DSAS is used. It enables the user to calculate the rate of change over different time periods at different locations along the shoreline.
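The supervised classification step can be illustrated with a minimal minimum-distance-to-means classifier: class means are computed from the training sites, and each pixel is assigned to the class with the nearest mean signature. This is an assumption for illustration only; ERDAS offers several decision rules and the paper does not state which one was used. All values here are invented:

```python
import numpy as np

def classify_supervised(image, training):
    """Minimum-distance-to-means supervised classification.
    image: (rows, cols, bands) array; training: {class_name: (n_samples, bands)}.
    Each pixel is assigned to the class whose training-mean signature is
    closest in Euclidean distance."""
    names = sorted(training)
    means = np.array([training[c].mean(axis=0) for c in names])   # (k, bands)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)     # (N, bands)
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dist.argmin(axis=1)                                  # nearest class mean
    return np.array(names)[labels].reshape(image.shape[:2])

# Illustrative two-band scene: water is dark in both bands, land is bright.
training = {"water": np.array([[12.0, 8.0], [14.0, 9.0]]),
            "land":  np.array([[80.0, 95.0], [85.0, 99.0]])}
scene = np.array([[[13.0, 8.0], [82.0, 97.0]],
                  [[11.0, 10.0], [79.0, 90.0]]])
classified = classify_supervised(scene, training)
```

The same two-class (land/water) setup is what makes this problem easy: the spectral signatures are well separated, so even this simplest decision rule separates the shoreline cleanly.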
The DSAS tool creates transects perpendicular to a reference line and calculates rate-of-change statistics for the shoreline within the attribute table. The tool requires two shorelines and a reference line; the reference line is drawn by the user and is the starting point for the generation of transects perpendicular to it, at regular distances specified by the user, along which the tool conducts its statistical calculations. Figure 6 illustrates the basic steps to generate transects and the corresponding statistical calculations by the DSAS tool [7]. All data entered into the DSAS tool must be within a personal geodatabase, which involves the reference line and the shorelines for each two-year period in a single feature class. The End Point Rate (EPR) is calculated by dividing the distance between the oldest and the most recent shorelines by the number of years between them. The basic advantage of this method is the simplicity of its calculations and how widely it is applied; it also gives excellent accuracy over long periods. Its main drawback is that it cannot handle more than two shorelines.

Figure 5 Supervised classification of images
Figure 4 Part of preprocessing task

Table 1 Imagery characteristics
Image no. | Image source | Imagery date | Spatial resolution, m
1 | Landsat 7 | 9-1-2008 | 30 x 30
2 | Landsat 7 | 9-1-2010 | 30 x 30
3 | Landsat 7 | 9-1-2012 | 30 x 30
4 | Landsat 8 | 9-1-2014 | 30 x 30
5 | Landsat 8 | 9-1-2016 | 30 x 30

V Analysis and Results

The study area is classified into two zones: (a) Rafah, with a 2.4 km long shoreline, and (b) Khan Younis, with a 10.4 km long shoreline. The two zones are shown in Figure 7. Figure 8 highlights the extracted shorelines of Rafah-Palestine in the period 2008-2016.
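The End Point Rate described above is a single division per transect; a minimal sketch (positions measured along a transect, with landward retreat negative; the numbers are illustrative, chosen to match the reported Rafah average):

```python
def end_point_rate(old_position, new_position, old_year, new_year):
    """End Point Rate (EPR): displacement of the shoreline along a transect
    between the oldest and the most recent dates, divided by the elapsed
    years. Negative values indicate erosion (landward retreat)."""
    return (new_position - old_position) / (new_year - old_year)

# A transect whose shoreline intersection retreated 29.6 m landward
# between 2008 and 2016:
rate = end_point_rate(0.0, -29.6, 2008, 2016)   # -3.7 m/year, i.e. erosion
```

This also makes the stated drawback concrete: only the two end shorelines enter the formula, so any intermediate shorelines are ignored.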
A The Change in Area Analysis

The study interval from 2008 to 2016 is divided into four two-year periods. Using GIS tools, the confined space between the shorelines of each period is calculated and presented in Tables 2 and 3, where negative values represent erosion and positive values represent accretion. Table 2 illustrates the results of zone A, which corresponds to the analysis of the shoreline of the Rafah-Palestine governorate. It is noticed that the quantities of erosion increase continuously with time, especially in the periods 2012-2014 and 2014-2016, while the amounts of accretion are somewhat unstable and remain roughly constant over time. The total net change of the beach area during the study period (2008-2016) is about -71 donum, at a rate of -8.9 donum/year.

Table 2 The change of area analysis for the Rafah-Palestine shoreline
Period | Erosion total x10^3 m^2 | Erosion rate x10^3 m^2/yr | Accretion total x10^3 m^2 | Accretion rate x10^3 m^2/yr | Net total x10^3 m^2 | Net rate x10^3 m^2/yr
2008-2010 | -10.25 | -5.12 | 4.28 | 2.14 | -5.96 | -2.98
2010-2012 | -11.14 | -5.57 | 6.05 | 3.02 | -5.10 | -2.55
2012-2014 | -30.84 | -15.42 | 1.18 | 0.59 | -29.66 | -14.83
2014-2016 | -35.12 | -17.56 | 4.54 | 2.27 | -30.57 | -15.29
Total | -87.35 | -10.92 | 16.05 | 2.01 | -71.29 | -8.91
Note: (+) sign indicates accretion, (-) sign indicates erosion.

Figure 7 The study area zones
Figure 8 The shoreline of zone A (2008-2016)
Figure 6 DSAS basic steps

Table 3 summarizes the results of zone B, which corresponds to the analysis of the shoreline of the Khan Younis governorate.
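The Rafah totals can be reproduced directly from the per-period erosion and accretion values of Table 2 (a small arithmetic check; the table's printed net of -71.29 differs from the direct sum by 0.01 only because its per-period nets are rounded):

```python
# Per-period (erosion, accretion) totals for Rafah-Palestine, Table 2,
# in thousands of m^2 (1 donum = 1000 m^2):
periods = {
    "2008-2010": (-10.25, 4.28),
    "2010-2012": (-11.14, 6.05),
    "2012-2014": (-30.84, 1.18),
    "2014-2016": (-35.12, 4.54),
}
erosion_total = sum(e for e, _ in periods.values())     # -87.35
accretion_total = sum(a for _, a in periods.values())   # 16.05
net_change = erosion_total + accretion_total            # about -71.3 donum
annual_rate = net_change / 8                            # about -8.9 donum/year
```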
One can notice that, starting in 2010, there is an increase in the quantities of erosion at a roughly linear rate, as expected due to the presence of the Egyptian marine groins, while the quantities of accretion are oscillatory and atypical. The total net change is about -105.5 donum during the study period (2008-2016), at a rate of -13.2 donum/year.

B The Linear Change in Shoreline Analysis

Coastline linear change rates are calculated using the DSAS tool with the EPR statistical technique. In total, 341 transects are created, spaced at a regular distance of 40 m. Zone A is covered by 56 transects; Table 4 shows the average linear change rate results for zone A, Rafah-Palestine, whose average erosion rate is -3.7 m/year. Transects 57 to 341 are generated by the DSAS tool to cover zone B, Khan Younis city, which has a 10.4 km long shoreline; Table 5 summarizes the average linear change rate of this zone, whose average erosion rate is -2.69 m/year.

Table 3 The change of area analysis for the Khan Younis shoreline
Period | Erosion total x10^3 m^2 | Erosion rate x10^3 m^2/yr | Accretion total x10^3 m^2 | Accretion rate x10^3 m^2/yr | Net total x10^3 m^2 | Net rate x10^3 m^2/yr
2008-2010 | -52.83 | -26.42 | 26.25 | 13.13 | -26.58 | -13.29
2010-2012 | -28.86 | -14.43 | 67.43 | 33.72 | 38.57 | 19.29
2012-2014 | -60.34 | -30.17 | 2.02 | 1.01 | -58.31 | -29.16
2014-2016 | -89.78 | -44.89 | 30.62 | 15.31 | -59.16 | -29.58
Total | -231.8 | -28.98 | 126.32 | 15.79 | -105.48 | -13.19
Note: (+) sign indicates accretion, (-) sign indicates erosion.
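The transect layout that DSAS builds from the reference line can be sketched geometrically: walk along the baseline at the chosen spacing and emit a perpendicular segment at each station. This is an illustrative sketch of the idea, not the DSAS implementation; the function name and all coordinates are invented:

```python
import math

def generate_transects(baseline, spacing, half_length):
    """Generate DSAS-style transects: segments perpendicular to a reference
    baseline, placed at a regular spacing along it. Each transect extends
    half_length to either side of the baseline; units follow the
    coordinates (e.g. metres). baseline is a list of (x, y) vertices."""
    transects = []
    for (x0, y0), (x1, y1) in zip(baseline, baseline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        ux, uy = (x1 - x0) / seg, (y1 - y0) / seg    # along-baseline direction
        nx, ny = -uy, ux                             # perpendicular direction
        d = 0.0
        while d <= seg:
            cx, cy = x0 + ux * d, y0 + uy * d        # station on the baseline
            transects.append(((cx - nx * half_length, cy - ny * half_length),
                              (cx + nx * half_length, cy + ny * half_length)))
            d += spacing
    return transects

# A 200 m straight baseline with transects every 40 m, 50 m to each side:
lines = generate_transects([(0.0, 0.0), (200.0, 0.0)], spacing=40.0, half_length=50.0)
```

At 40 m spacing, the 341 transects reported above correspond to roughly 13.6 km of baseline, consistent with the combined 2.4 km + 10.4 km shorelines of the two zones.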
Figure 9 Annual shoreline linear change rate (2008-2016)

Table 4 Average linear change rate for Rafah-Palestine (2008-2016)
Period | Transects | Erosion avg (m/yr) | Erosion max (m/yr) | Accretion avg (m/yr) | Accretion max (m/yr) | Net average (m/yr)
2008-2010 | 1-56 | -5.45 | -9.33 | 3.21 | 6.99 | -3.44
2010-2012 | 1-56 | -4.36 | -9.00 | 3.32 | 8.87 | -1.15
2012-2014 | 1-56 | -8.03 | -15.45 | 1.85 | 2.92 | -6.78
2014-2016 | 1-56 | -5.35 | -15.97 | 6.34 | 10.83 | -3.44
Average linear change rate 2008-2016: -3.70
Note: (+) sign indicates accretion, (-) sign indicates erosion.

Table 5 Average linear change rate for Khan Younis (2008-2016)
Period | Transects | Erosion avg (m/yr) | Erosion max (m/yr) | Accretion avg (m/yr) | Accretion max (m/yr) | Net average (m/yr)
2008-2010 | 57-341 | -5.23 | -13.83 | 4.91 | 13.45 | -1.71
2010-2012 | 57-341 | -3.53 | -8.43 | 4.59 | 11.05 | 1.70
2012-2014 | 57-341 | -8.11 | -21.24 | 1.51 | 4.15 | -7.51
2014-2016 | 57-341 | -6.00 | -15.54 | 5.74 | 12.87 | -3.25
Average linear change rate 2008-2016: -2.69
Note: (+) sign indicates accretion, (-) sign indicates erosion.

The results obtained above for both zones are presented graphically in Figure 9. From Figure 9, the following notes can be concluded:
• During the period 2008-2010, before the construction of the Egyptian marine groins, fluctuations in the values of erosion and accretion along the coast are observed.
• During the periods 2010-2012, 2012-2014 and 2014-2016, the prevailing pattern along the shoreline is erosion, as the quantities of shoreline deterioration are much larger than the amounts of accretion, which are negligible in these periods.
• The period 2012-2014 is the period most exposed to erosion relative to the other periods.
• Accretion amounts distributed along the shore are noticed during the period 2014-2016, due to the construction of several small marine groins on the Palestinian side as a partial solution to reduce the erosion of the shore.
• The effect of the Egyptian marine groin applies to both the Rafah and Khan Younis governorates, as the pattern during each period is similar along the entire length of the beach.

VI Conclusion

The analysis of Landsat satellite observations of the Mediterranean coast in the governorates of Rafah and Khan Younis during the period 2008-2016 shows that there is a change in the patterns of erosion and accretion. The results reveal that the net change in the beach area of the Rafah governorate equals about -71 donum of sand, at an average erosion rate of -8.9 donum annually. The net change in the beach area of Khan Younis equals about -105.5 donum, at a rate of -13.2 donum annually. The impact of the Egyptian sea groin on Rafah is about three times greater than on Khan Younis. This is consistent with the result that the mean net change of the Rafah beach using the EPR analysis equals -3.7 m/year (erosion) and -2.69 m/year for Khan Younis (erosion). It is recommended to support researchers and projects in this field, given its great importance. For future studies, it is suggested to calculate the volume change of critical areas and to refine the analysis taking the tidal data into account. The concerned authorities should conduct periodic studies to follow future changes. To benefit from the results of this study, strategies and systematic steps should be devised to solve the problem of erosion of the coast of the study area, namely Rafah and Khan Younis.

References

[1] Zviely, D. and M. Klein, "The environmental impact of the Gaza Strip coastal constructions," Journal of Coastal Research, vol. 19, no. 4, pp. 1122-1127, 2003.
[2] Palestinian Central Bureau of Statistics (PCBS), "Population, Housing and Establishment Census 2007-2016," Rafah governorate
census final results, accessed on 1 April 2017. URL: http://www.pcbs.gov.ps/portals/_rainbow/documents/rafaa.htm
[3] Wikipedia, the free encyclopedia, accessed on 10 March 2017. URL: https://en.wikipedia.org/wiki/rafah
[4] El-Hallaq, M.A. and Habboub, M.O., "Using GIS for time series analysis of the Dead Sea from remotely sensed data," Open Journal of Civil Engineering, vol. 4, pp. 386-396, 2014. http://dx.doi.org/10.4236/ojce.2014.44033
[5] Edmund Green, Peter Mumby, Alasdair Edwards and Christopher Clark, Remote Sensing Handbook for Tropical Coastal Management, vol. 3, UNESCO Publishing, Paris, 2000.
[6] Jesús D. Chinea, Supervised Classification, Universidad de Puerto Rico, Recinto Universitario de Mayagüez [online], 2006, cited 3/7/2017. http://www.uprm.edu/biology/profs/chinea/gis/lectesc/tut4_3.pdf
[7] Himmelstoss, E.A., Zichichi, J.L., and Ergul, Ayhan, "Digital Shoreline Analysis System (DSAS) version 4.0 — an ArcGIS extension for calculating shoreline change," U.S. Geological Survey Open-File Report 2008-1278, 2009. Updated for version 4.3.

Maher A. El-Hallaq is an associate professor of surveying and geodesy. He has been a member of the Civil Engineering Department at the Islamic University of Gaza since 1996. He also works as a consultant for many local municipalities and private institutions in the Gaza Strip. His primary research and professional interests are in the various fields of geomatics. In addition, he has published a book and a great number of conference and journal papers, and serves as a reviewer for local and international journals.

Journal of Engineering Research and Technology, Volume 4, Issue 3, September 2017

Combining IWC and PSO to Enhance Data Clustering

Ahmed Z. Skaik, Wesam M.
Ashour
Computer Engineering Department, The Islamic University of Gaza, Gaza Strip, Palestine, 2017
ahmskaik@gmail.com, washour@iugaza.edu.ps

Abstract—In this paper we propose a clustering method based on a combination of particle swarm optimization (PSO) and the inverse weighted clustering (IWC) algorithm. It is shown how PSO can be used to find the centroids of a user-specified number of clusters, with IWC then refining the clusters found by PSO. The PSO algorithm has been shown to converge successfully during the initial stages of a global search, but around the global optimum the search process becomes very slow; the IWC algorithm, on the contrary, can achieve faster convergence to the optimum solution. Experimental results show that the proposed technique has much potential to improve the clustering process.

Index Terms— data clustering, particle swarm optimization, inverse weighted k-means.

I Introduction

Data clustering is the process of grouping together similar multi-dimensional data attributes into a number of clusters or groups. Clustering algorithms have been applied to a wide range of problems, such as exploratory data analysis, data mining, pattern recognition and machine learning [1]. More specifically, objects are represented by a set of features which characterize them. The object features are usually represented as a data point in a multi-dimensional space, so clustering can be considered a partitioning of data points based on a homogeneity criterion. When the number of clusters, k, is known a priori, clustering is formulated in such a way that objects in the same cluster are more similar in some sense than those in different clusters. The IWC algorithm, starting with k arbitrary cluster centres in space, partitions the set of given objects into k subsets based on a distance metric. The centres of the clusters are iteratively updated based on the optimization of an objective function.
This method has been shown to be less sensitive to poor initialisation than the traditional k-means algorithm [2]. Recently, many clustering algorithms based on evolutionary computing, such as genetic algorithms, have been introduced, and only a couple of applications have used particle swarm optimization [3]. Unlike the genetic algorithm (GA), PSO does not have complicated evolutionary operators such as crossover and mutation [4]. In the PSO algorithm, the potential solutions, called particles, "fly" through the problem space by following the current optimum particles. Generally speaking, the PSO algorithm has a strong ability to find the most optimistic result, but it suffers from converging to a local optimum. By suitably modulating the PSO parameters, convergence can be accelerated and the ability to find the globally optimistic result can be enhanced. The idea is that PSO, at the beginning stage of the algorithm, is able to search the whole space for the optimum solution and reduce the search area. When the PSO algorithm reaches a solution roughly close to the optimum, the clustering process switches to the IWC algorithm to finish the process faster and more accurately. A proper stage for switching the clustering process is sensed by inspecting the PSO fitness function along the process. The paper is organized as follows: in the next section we discuss related work in the field, and in Section 3 we introduce the IWC algorithm. In Section 4 we review the standard PSO algorithm. We explain the proposed algorithm in Section 5, and in Section 5.1 we present the results of experiments on synthetic and real data sets. Finally, we draw the paper to a conclusion in Section 6.

II Related Work

Various studies have been carried out to improve the efficiency of the k-means algorithm with particle swarm optimization.
Particle swarm optimization gives the optimal initial seed, and using the best seed the k-means algorithm produces better clusters and much more accurate results than the traditional k-means algorithm. W. Barbakh and C. Fyfe [5, 6] proposed enhanced methods for assigning data points to suitable clusters and solved the problem of sensitivity to initial conditions. Shafiq Alam [7] proposed a novel clustering algorithm called the evolutionary particle swarm optimization (EPSO) clustering algorithm, which is based on PSO. The proposed algorithm is based on the evolution of swarm generations, where the particles are initially uniformly distributed in the input data space and, after a specified number of iterations, a new generation of the swarm evolves. Lekshmy P. Chandran et al. [8] describe a recently developed metaheuristic optimization algorithm named harmony search, which helps to find near-global optimal solutions by searching the entire solution space, whereas k-means performs a localized search. Chunqin Gu and Qian Tao [9] proposed a new combination of chaotic particle swarm and k-means, which features better search efficiency than k-means, PSO and CPSO.

III Inverse Weighted Clustering

One of the most important components of a clustering algorithm is the measure of similarity used to determine how close two patterns are to one another. The IWC algorithm [10], which solves the problem of sensitivity to initial conditions in the k-means algorithm, groups the set of data points in space into a predefined number of clusters. In this regard, the Euclidean distance is commonly used as a similarity measure. The strategy in this algorithm is to group data points in such a way that the Euclidean distance between data points belonging to each group is minimized.
The data points in each group (cluster) are represented by the group's centre of mass, referred to as the cluster centroid; hence the IWC algorithm attempts to find the best points in space to serve as the cluster centroids. The IWC algorithm follows the update logic of equations (1) and (2), with the weight bik defined in (3). Taking the partial derivative of Ji with respect to mk maximizes the performance function Ji. Therefore, the implementation of (2) will always move mk to the closest data point to maximize Ji towards infinity; however, the implementation of (2) will not identify any clusters, as the prototypes [11] always move to the closest data point. The advantage of this performance function is that it doesn't leave any prototype far from the data: all the prototypes join the data. The authors enhance this algorithm to be able to identify the clusters, without losing its property of pushing the prototypes inside the data, by changing bik in (3) to the form in (4), where mk* is the closest prototype to xi. With this change, an interesting behavior is obtained: (4) works to maximize Ji by moving the prototypes to the free data points (or clusters) instead of the closest data point (or local cluster). Note that (3) and (4) never leave any prototype far from the data, even if the prototypes are initialized outside the data: the prototypes are always pushed to join the closest data points using (3), or to join the free data points using (4). However, (3) doesn't identify clusters while (4) does; (4) keeps the property of (3) of pushing the prototypes to join the data, and adds the ability to identify clusters. The clustering process terminates when one of the following conditions is satisfied:
1. The number of iterations exceeds a predefined maximum.
2. The change in the cluster centroids is negligible.
3. There is no change in cluster membership.

IV Particle Swarm Optimization

Particle swarm optimization (PSO) is an optimization algorithm which simulates the movement and flocking of birds [12].
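The three termination conditions of Section III can be made concrete as the stopping logic of a prototype-update loop. Since the IWC update equations (1)-(4) are not reproduced above, a plain centre-of-mass update stands in for the IWC rule here; only the stopping criteria are the point of this sketch, and all names and values are illustrative:

```python
import math

def cluster(data, centres, max_iter=100, tol=1e-6):
    """Iterative prototype update with the three IWC stopping criteria:
    iteration cap, negligible centre movement, or unchanged membership.
    A centre-of-mass update stands in for the IWC update rule."""
    members = None
    for _ in range(max_iter):                                   # criterion 1
        new_members = [min(range(len(centres)),
                           key=lambda k: math.dist(p, centres[k]))
                       for p in data]
        if new_members == members:                              # criterion 3
            break
        members = new_members
        moved = 0.0
        for k in range(len(centres)):
            pts = [p for p, m in zip(data, members) if m == k]
            if pts:
                new_c = tuple(sum(c) / len(pts) for c in zip(*pts))
                moved = max(moved, math.dist(new_c, centres[k]))
                centres[k] = new_c
        if moved < tol:                                         # criterion 2
            break
    return centres, members

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centres, members = cluster(data, [(0.0, 1.0), (4.0, 4.0)])
```

Any one of the three criteria ends the loop; in practice the membership test (criterion 3) usually fires first on well-separated data like the example above.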
Particles are agents that represent individual solutions, while the swarm is the collection of particles that represents the solution space. The particles move through the solution space by maintaining a velocity value v and keeping track of the best previous position each has achieved so far. This position is known as the personal best position and is denoted by the vector pi = {pi1, pi2, ..., pin}. At each iteration, the velocity of a particle and its new position are defined according to the following equations:

vi(t) = w * vi(t-1) + c1 * r1 * (pi - xi(t-1)) + c2 * r2 * (g - xi(t-1))   (5)
xi(t) = xi(t-1) + vi(t)   (6)

where w is the inertia weight that controls the impact of the particle's previous velocity on its current one. In the references [13, 14], several selection strategies for the inertia weight w have been given. Generally, at the beginning stages of the PSO algorithm the inertia weight should decrease rapidly; once the swarm converges around the optimum solution, the inertia weight must decrease slowly. r1 and r2 are two independent, uniformly distributed random variables in the range [0, 1]. c1 and c2 are positive constant parameters called acceleration coefficients, which control the maximum step size between successive iterations. The global best, denoted by the vector g = {g1, g2, ..., gn}, is the best solution achieved by any of the particles; the fitness of each particle, and of the entire swarm, is evaluated by a fitness criterion. The flow chart of basic PSO is shown in Figure 1. According to equation (5), the velocity of a particle at each iteration is calculated from three terms: the velocity of the particle at the previous iteration, the distance of the particle from its best previous position, and its distance from the best position of the entire population. Having the velocity of the particle, the particle flies to a new position according to equation (6). This process is repeated until a termination condition is reached.
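Equations (5) and (6) translate directly into one update step per particle. A minimal sketch; the values of w, c1 and c2 are common illustrative choices, not parameters from the paper:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One particle update per equations (5) and (6): the new velocity blends
    inertia (w), attraction to the particle's personal best (c1), and
    attraction to the swarm's global best (c2); the position then moves by
    the new velocity."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

# A particle already at both its personal and the global best, with zero
# velocity, stays put: every attraction term vanishes.
x_new, v_new = pso_update([1.0, 2.0], [0.0, 0.0], [1.0, 2.0], [1.0, 2.0])
```

Note that r1 and r2 are drawn per update (here, shared across dimensions), which is what gives the swarm its stochastic exploration.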
Two common conditions for terminating the PSO algorithm are the number of iterations exceeding a predefined limit and negligible change in the particles over successive iterations. Figure 1. Basic flow diagram of PSO.

V. Hybrid PSO-IWC for Clustering

The proposed algorithm works in two phases. Phase I applies particle swarm optimization to find the region of the global optimum, while Phase II applies the IWC algorithm. Phase I provides better seed selection and reduces the search area, since PSO is a global search algorithm with a strong ability to find globally good results; however, the convergence speed of PSO near the solution is very slow. The IWC algorithm, on the contrary, converges quickly to a local optimum, but finding the global solution with it alone takes too many iterations. The output of Phase I is given as input to Phase II, which generates the final clusters. The clusters generated by the proposed algorithm are more accurate, obtained faster and of better quality than those of the standalone IWC algorithm. By combining the PSO and IWC algorithms, a novel clustering approach is formulated in this paper; we refer to it as the PSO-IWC hybrid algorithm. The motivations for combining these clustering methods are:
1. To avoid different distributions of centroids over multiple runs.
2. To accelerate the search for centroids by reducing the search area.
3. To deal with multi-dimensional data (three dimensions are proposed and tested experimentally).
We start the data clustering with the PSO algorithm, which searches the whole space for a global solution. When the region of the global optimum is found by PSO, we continue the clustering using IWC; in this way the IWC algorithm finalizes the clustering task. This strategy improves both convergence speed and accuracy. We detect the proper stage for switching from PSO to IWC using the PSO fitness function.
When the value of the fitness function changes negligibly over a number of successive iterations, the clustering algorithm switches to IWC. Like the PSO-KM algorithm [15], we start by initializing a group of random particles in the solution space. First, all the particles are updated according to equations (5) and (6) until a new generation of particles is produced; the flying particles are used to search for the global best position in the solution space [16]. Finally, the IWC algorithm is used to search around the global optimum. In this way, the proposed hybrid algorithm finds the optimum solution more quickly. The procedure for the PSO-IWC algorithm can be summarized as follows:

Step 1: Initialize the position and velocity of the particles randomly. Each particle is a potential solution to the clustering problem at hand; in the context of clustering, a single particle represents the centroids of the clusters. Hence the i-th particle is initialized as follows:

x_i(0) = (z_i1(0), z_i2(0), ..., z_iK(0))   (7)

where z_ij(0) refers to the j-th cluster centroid in the solution suggested by the i-th particle. A swarm therefore suggests a number of candidate sets of cluster centroids.

Step 2: Evaluate the fitness of each particle based on the clustering criterion. The fitness of particle i in the swarm is defined as the total within-cluster dispersion:

f(x_i) = Σ_{j=1..K} Σ_{o_p ∈ C_ij} ||o_p - z_ij||   (8)

where C_ij is the set of data points assigned to centroid z_ij and np is the total number of data points input to the clustering process. By minimizing this fitness function, the dispersion of the clusters is minimized.

Step 3: If the number of iterations exceeds a predefined limit, go to Step 7; otherwise go to Step 4.

Step 4: Store the position of the best particle in the swarm. Then update the positions of all the particles according to equations (5) and (6).
If a particle flies beyond the boundary [x_min, x_max] (the range of possible solutions), its position is set to x_min or x_max; similarly, if a new velocity is beyond the boundary [v_min, v_max], it is set to v_min or v_max.

Step 5: Reduce the inertia weight ω according to the strategy described in Section 3.

Step 6: If the global best of the particles, g, remains unchanged for a number of iterations (ten in our implementation), go to Step 7; otherwise go to Step 3.

Step 7: Use the IWC algorithm to finish the clustering task. Clustering terminates when one of the conditions stated in Section 2 is satisfied.

Ahmed Z. Skaik, Wesam Ashour / Combining IWC and PSO to Enhance Data Clustering (2017)

VI. Experiments

To evaluate the performance of the proposed clustering algorithm, we conducted two experiments using synthetic and real data. In these experiments we compare the proposed PSO-IWC method with PSO clustering, PSO-KM and standalone IWC. All experiments were carried out in MATLAB R2015a on the same machine, with a Core i7 CPU at 2.70 GHz, 16.0 GB RAM and the Windows 10 operating system. Figure 2 shows the result of applying the PSO-IWC algorithm to the synthetic, Wine and Liver-Disorders data sets; the algorithm consistently performs better than the other approaches even when executed many times, and the same two clusters result from applying the algorithm twice. Figure 3 summarizes the result of applying PSO-KM four times and confirms that its results differ on each execution. The second experiment was conducted using the Iris and Cancer datasets. These data sets are classical and often used to examine and compare the performance of algorithms in the field of classification; the datasets are available online. The second and third columns of Table 1 show the number of data points in each dataset and in each individual cluster, respectively.
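The switching logic of Steps 1-7 above can be sketched as follows, with the PSO update, the fitness criterion and the IWC refinement abstracted as callables (every name here is hypothetical; stall_limit=10 mirrors Step 6's ten-iteration rule).

```python
# Sketch of the switching logic in Steps 1-7 above. The PSO update, the fitness
# criterion and the IWC refinement are passed in as callables, since the paper
# builds on external definitions of both algorithms; every name here is
# hypothetical. stall_limit=10 mirrors Step 6's "unchanged for ten iterations".
def hybrid_cluster(swarm, pso_step, fitness, iwc_refine,
                   max_iter=100, stall_limit=10, tol=1e-9):
    g_best, g_fit, stall = None, float("inf"), 0
    for _ in range(max_iter):                 # Step 3: iteration budget
        improved = False
        for particle in swarm:                # Step 2: evaluate each candidate
            f = fitness(particle)
            if f < g_fit - tol:               # track the global best position
                g_best, g_fit, improved = particle, f, True
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:              # Step 6: g unchanged for too long
            break
        swarm = pso_step(swarm, g_best)       # Steps 4-5: move and clamp swarm
    return iwc_refine(g_best)                 # Step 7: finish clustering w/ IWC
```

Here each "particle" can be any encoding of candidate centroids, such as the vector of equation (7).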
The results of clustering these datasets using the proposed hybrid PSO-IWC, PSO and PSO-KM are presented in Table 2, where the results obtained by the proposed algorithm are significantly better than those of the other approaches. A comparative analysis of attributes such as time, accuracy, error rate and number of iterations is tabulated in Table 3, and the results show a general improvement in performance when using PSO-IWC.

Figure 2. Top: results of applying the PSO-IWC algorithm on the artificial data set; middle: on the Wine data set; bottom: on the Liver-Disorders data set.

Figure 3. Result of applying the PSO-KM algorithm four times (different cluster distributions obtained).

Table 1. Information for datasets

Data set          # data in set   # data in clusters   # clusters   # space dimensions
Set I             210             70 each              3            2
Set II            210             70 each              3            2
Iris              150             50 each              3            4
Cancer            683             444 & 239            2            9
Wine              178             59, 71, 48           3            13
Liver-Disorders   7297            21 each              7            7

Table 2. Error rates on the synthetic and real datasets

Data set          PSO      PSO-KM   PSO-IWC
Set I             0%       0%       0%
Set II            7.6%     7.2%     7.01%
Iris              12%      12%      10.5%
Cancer            4.7%     3.7%     2.87%
Wine              6.4%     4.5%     3.2%
Liver-Disorders   5.02%    8.1%     6.3%

Table 3. Performance of the clustering methods

Data set          Time (s)             Iterations           Accuracy
                  PSO-IWC   PSO-KM    PSO-IWC   PSO-KM     PSO-IWC   PSO-KM
Set I             0.32      0.47      3         4          93.03%    92.11%
Set II            0.58      0.78      2         3          91.11%    91.15%
Iris              0.3347    1.7830    2         8          90.69%    84.9%
Cancer            1.2699    5.3059    3         14         89.7%     87.1%
Wine              0.3695    1.8361    7         18         92.2%     88.05%
Liver-Disorders   0.6473    3.7687    8         14         91.36%    90.06%

VII. Conclusions

In this paper, we have proposed a method based on a combination of particle swarm optimization (PSO) and the IWC algorithm. We showed that the combined method has the advantages of both the PSO and IWC methods.
As the PSO algorithm successfully searches the whole space during the initial stages of a global search, we used PSO in the earlier stage of PSO-IWC. Once the particles in the swarm are close to the global optimum, the algorithm switches to IWC, since IWC converges faster than PSO at that stage. We detected the proper stage for switching from PSO to IWC using the fitness function. Future studies will extend the fitness function to explicitly handle higher-dimensional problems and large numbers of patterns. The PSO-IWC clustering algorithm will also be extended to dynamically determine the optimal number of clusters.

References

[1] D.W. van der Merwe and A.P. Engelbrecht, "Data clustering using particle swarm optimization," University of Pretoria, South Africa, 2013.
[2] C.-Y. Chen and F. Ye, "Particle swarm optimization algorithm and its application to clustering analysis," in Proc. IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, pp. 789-794, 2004.
[3] S. Paterlini and T. Krink, "Differential evolution and particle swarm optimization in partitional clustering," Computational Statistics and Data Analysis, 50, pp. 1220-1247, 2006.
[4] D.W. Boeringer and D.H. Werner, "Particle swarm optimization versus genetic algorithms for phased array synthesis," IEEE Transactions on Antennas and Propagation, 52(3), pp. 771-779, 2004.
[5] W. Barbakh and C. Fyfe, "Inverse weighted clustering algorithm," Computing and Information Systems, 11(2):10-18, ISSN 1352-9404, 2007.
[6] W. Barbakh, "The family of inverse exponential k-means algorithms," Computing and Information Systems, 11(1):1-10, ISSN 1352-9404, 2007.
[7] W. Barbakh, M. Crowe, and C. Fyfe, "A family of novel clustering algorithms," in 7th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2006), pages 283-290, ISSN 0302-9743, ISBN-13 978-3-540-45485-4,
2006.
[8] S. Alam, G. Dobbie, and P. Riddle, "An evolutionary particle swarm optimization algorithm for data clustering," IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, September 21-23, 2008.
[9] C. Gu and Q. Tao, "Clustering algorithm combining CPSO with k-means," International Congress on Technology, Communication and Knowledge (ICTCK), 2015.
[10] L.P. Chandran and K.A. Abdul Nazeer, "An improved clustering algorithm based on k-means and harmony search optimization," IEEE, 2011.
[11] D.J. MacKay, Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[12] W. Barbakh and C. Fyfe, "Inverse weighted clustering algorithm," Computing and Information Systems, 11(2):10-18, ISSN 1352-9404, 2007.
[13] J. Kennedy and R.C. Eberhart, "Particle swarm optimization," in Proc. IEEE International Joint Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.
[14] Y. Shi and R.C. Eberhart, "A modified particle swarm optimizer," in Proc. IEEE World Congress on Computational Intelligence, pp. 69-73, 1998.
[15] R.C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congress on Evolutionary Computing, vol. 1, pp. 84-88, 2000.
[16] A. Ahmadyfard and H. Modares, "Combining PSO and k-means to enhance data clustering," in Telecommunications, 2008.

Journal of Engineering Research and Technology, Volume 4, Issue 1, March 2017

A New Model in Arabic Text Classification Using BPSO/REP-Tree

Hamza Naji 1, Wesam Ashour 2 and Mohammed Alhanjouri 3
1, 2, 3 Department of Computer Engineering, Islamic University of Gaza, Palestine.
Abstract—Assigning a title or a specific category to a single page of text is a fairly easy process, but what if there are many such pages, up to a huge number of documents? The process becomes difficult and exhausting for the human mind. Automatic text classification is the solution to this problem: it identifies a category for each document automatically. This can be achieved by machine learning, by building a model that contains the possible attributes (features) of the text. As the number of attributes grows, we must pick the distinguishing features, since the model has to simulate a very large number of attributes (thousands of attributes). To deal with the high dimensionality of the original dataset, we use a feature selection process to reduce it by deleting the irrelevant attributes (words), such that the remaining features still contain the relevant information needed for the classification process. In this research, a new approach, binary particle swarm optimization (BPSO) with the reduced error pruning tree (REP-Tree), is proposed to select the subset of features for the Arabic classification process. We compare the proposed approach with two existing approaches: BPSO with k-nearest neighbor (kNN) and BPSO with support vector machine (SVM). After obtaining the subset of attributes that results from the feature selection process, we use three common classifiers, decision trees (J48), SVM and the proposed algorithm REP-Tree (as a classifier), to build the classification model. We created our own Arabic dataset, the BBC Arabic news dataset, collected from the BBC Arabic website, and also used an existing dataset in our experiments, the Alkhaleej news dataset. Finally, we present the experimental results, which show that the proposed algorithm is promising in this area of research.
Index Terms—text classification, BPSO, REP-Tree, binary particle swarm optimization.

I. Introduction

The huge increase in the use of text on electronic devices, and on web sites in particular, is a motivation for categorizing these texts automatically, because human ability is insufficient to handle them manually. The core task here is text categorization or classification (TC): classifying huge collections of texts, each called a text dataset or corpus, into predefined classes. In the case of a news dataset, for example, the classes can be sport, health, etc., and other classes based on the content. The text classification process in general consists of two phases. The first is the preprocessing phase, defined as the processing applied to the texts to reduce unnecessary terms. The preprocessing phase also reduces multiple surface forms of one term through a process called stemming. Stemming eliminates the derived forms of one basic word: for example, the words "making" and "makes" are turned into their root "make". As another example, the words (argue, argued, argues, arguing) are turned into the stem "argu", while (argument, arguments) are turned into the stem "argument". The preprocessing phase may also simply remove certain prefixes and suffixes from the word instead of extracting the original root. The second phase of the text classification process is the classification step: classifying the text preprocessed in the previous phase and representing the corpus using a mechanism called a classifier. To apply these two phases, we need to convert each dataset into a term vector, which is the basis of text processing [1]. But how many terms we need from each dataset, and which terms, is a question to be answered.
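As an illustration of the suffix-stripping flavor of stemming described above, a toy stemmer might look as follows; the suffix list and minimum-stem length are illustrative assumptions, and production systems use Porter-style rule sets for English or Khoja/ISRI-style stemmers for Arabic.

```python
# Toy suffix-stripping stemmer for the English examples above. The suffix list
# and minimum-stem length are illustrative assumptions; production systems use
# Porter-style rule sets for English, or Khoja/ISRI-style stemmers for Arabic.
def stem(word, suffixes=("ing", "ed", "es", "e", "s")):
    for suf in suffixes:                       # try longer suffixes first
        if word.endswith(suf) and len(word) - len(suf) >= 4:
            return word[: -len(suf)]           # strip the matched suffix
    return word                                # no rule applied: keep the word
```

With these rules, "argue", "argued", "argues" and "arguing" all reduce to "argu", while "argument" and "arguments" keep the stem "argument", matching the examples above.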
The previous question leads us to add a new step to the text classification process (Arabic text classification in this paper). There is a middle step between preprocessing and classification called "feature selection" [2]. It is a complementary process to the preprocessing stage, performed after it, which removes redundant terms (features) while keeping terms sufficient to continue the classification process [3]. We demonstrate a combination of binary particle swarm optimization (BPSO) and the reduced error pruning tree (REP-Tree) for this process of selecting good sets of features for the Arabic TC task. We then use the second half of the hybridized approach, REP-Tree, as a classifier, as mentioned above. Text classification can be done easily for the English language due to its smooth environment. In contrast, Arabic is considered a complex language that contains many formations and many different forms of a word. This difficulty of the Arabic language requires greater effort in text classification. This paper focuses on the classification of Arabic text; an added difficulty is that the Arabic expressive style is also employed in other languages such as Persian and Urdu and other regional languages of Pakistan, Afghanistan and Iran. Arabic language content constitutes about 3% of web text content, ranking fourth among languages used online [4]. This amount of content needs accurate and effective classification to help people use it easily; thus, in the last 10 years the need for effective and accurate classification has grown quickly.
There are some classification algorithms that work for general text classification and can be applied to Arabic, such as: support vector machine (SVM), naive Bayes (NB), k-nearest neighbor (kNN), maximum entropy (ME), artificial neural network (ANN), decision tree (DT) and the Rocchio feedback algorithm. More recently, the reduced error pruning tree (REP-Tree) has been investigated for Arabic TC. REP-Tree is a fast decision tree learner which builds a decision/regression tree using information gain (or variance reduction) as the splitting criterion, and prunes it using reduced error pruning [5]. REP-Tree was first used in Indian and English text classification in 2015 and 2012 [6], [7]. The rest of the paper is organized as follows: Section 2 reviews related work. Section 3 explains BPSO concepts. Section 4 explains the second term of the proposed approach, REP-Tree. Section 5 presents the proposed work. Section 6 presents the results, and finally we conclude the paper in Section 7.

II. Related Works

In the discussion below, we focus on works addressing Arabic TC. Since the number and quality of the features used to represent texts has a direct effect on classification algorithms, the following discussion covers the main goal of feature reduction and selection and their impact on TC. Brahimi, Touahria and Tari (2016) [8] addressed sentiment analysis for tweets in the Arabic language using two freely available datasets (2000 tweets). They applied light and root stemmers in the preprocessing phase and investigated the impact of reducing the size of the dataset, by selecting the most relevant features, on the classification efficiency and accuracy of three widely used machine learning algorithms: support vector machine (SVM), naive Bayes (NB) and k-nearest neighbor (kNN).
Oraby, El-Sonbaty and El-Nasr (2013) [9] studied the impact of stemming by applying the Khoja stemmer [10], the Information Science Research Institute (ISRI) stemmer [11], and the Tashaphyne light Arabic stemmer [12] to two datasets of the opinion classification problem; the results show that the Khoja stemmer is the best one. Shoukry and Rafea (2012) [13] ran the support vector machine (SVM) and naive Bayes (NB) classifiers on a dataset collected from the Twitter website. They applied the experiments to two sets of Arabic tweets, and the results showed that SVM performed better than NB. Al-Thwaib (2014) [14] used the Sakhr summarizer (Sakhr company website, 2016) as a feature selector to choose the best words of each document instead of using all words, together with the TF feature. Documents, after using TF for feature selection, are classified using an SVM classifier. The data set used consists of 800 Arabic text documents, a subset of a 60913-document corpus collected from many newspapers and other web sites. The author succeeded in increasing the accuracy by using the summarized corpus as input for the SVM classifier. Al-Hindi and Al-Thwaib (2013) [15] compared two datasets, each containing 1000 Arabic documents, with text summarization applied to one but not the other. Accuracy did not improve much, but there was a difference in time: when summarized documents were used, less time was needed to build the learning model. Abu-Errub (2014) [16] proposed a method to classify Arabic text by comparing a document with predefined document categories based on its contents using the term frequency times inverse document frequency (TF.IDF) measure; the document is then classified into the appropriate sub-category using the chi-square measure. The dataset used in this study contained 1090 documents for training and 500 documents for testing, categorized into ten main categories.
The results show that the proposed algorithm can classify Arabic text datasets into the predefined categories. Goweder, Elboashi and Elbekai (2013) [17] used their centroid-based technique to classify Arabic text. The proposed algorithm is evaluated using a dataset containing 1400 Arabic documents collected from 7 different classes. The results show that the adapted centroid-based algorithm can classify Arabic documents without problems. They used the measurements micro-averaged recall, precision, F-measure, accuracy and error rate, which recorded 90.7%, 87.1%, 88.9%, 94.8% and 5.2% respectively. Abidi and Elberrichi (2012) [18] presented a comparative study to assess the effect of a conceptual representation of text. The k-nearest neighbor classifier was used, and feature extraction was achieved via three preprocessing schemes: bag of words, n-grams, and a conceptual representation. The F-measure was 64% for bag of words, 68% for n-grams, and 74% for the conceptual representation; as these results show, the conceptual representation was the best. Raho, Al-Shalabi, Kanaan and Nassar (2015) [19] investigated the importance of feature selection in Arabic corpus classification by comparing the performance of different classifiers in different situations, with and without stemming. The dataset was collected from the BBC Arabic website, and the classifiers used were DT, k-nearest neighbors (kNN), the naive Bayesian model (NBM) and naive Bayes (NB); they also used measurements such as precision, recall, F-measure, accuracy and time.
The results showed the accuracy of each classifier as follows: DT 99.4%, kNN 66.3%, NBM 92%, and NB 91.9%. Mohammad, Al-Momani and Alwada (2016) [20] provided a comparative study of Arabic text classification between three types of classifiers (k-nearest neighbor, decision trees C4.5, and the Rocchio classifier). These well-known algorithms were applied to a collected Arabic data set consisting of 1400 documents belonging to 8 categories; the same number of documents was used in the study's experiments. They used two measurements, precision and recall; the results of the experiments showed that k-nearest neighbor recorded an average of 80% recall and 83% precision, while the Rocchio classifier recorded an average of 88% recall and 82% precision. Both classifiers were better than C4.5, which averaged 64% recall and 67% precision. Kanan and Fox (2015) [21] described a new approach to stemming in Arabic text classification; they developed a new model called tailored stemming, a new Arabic light stemmer, used with an SVM classifier. The experiments were performed under 10-fold cross-validation and gave the following results for the predefined classes after using SVM: art and culture 91.8%, economics 93.5%, politics 91.5% and society 99.1%. Al-Anzi and AbuZeina (2016) [22] grouped similar unlabeled documents into a pre-specified number of topics using latent semantic indexing (LSI) and singular value decomposition (SVD) methods. The corpus they used contains 1000 documents on 10 topics, 100 documents for each topic. The results showed that the EM method was the best of the methods examined, with an average categorization accuracy of 89%. Zubi (2009) [23] studied using web content, applying some Arabic classification techniques to it; the general purpose of the study was to compare two classifiers.
The author used the k-nearest neighbor (kNN) classifier and the naive Bayes (NB) classifier in the experiment. As mentioned by the author, a corpus of Arabic text documents was collected from online Arabic newspaper archives, including Al-Jazeera, Al-Nahar, Al-Hayat, Al-Ahram, and Al-Dostor, as well as a few other specialized websites; 1562 documents were collected and classified into 6 different categories. After the comparison experiment, the results showed that k-nearest neighbors (kNN), with an average of 86.02%, was better than the naive Bayes classifier, which had an accuracy of 77.03%. Zrigui, Ayadi, Mars and Maraoui (2012) [24] developed a new model based on latent Dirichlet allocation (LDA) and the support vector machine (SVM); they used LDA to sample "topics" from groups of texts. The results showed that the proposed LDA-SVM algorithm achieves high effectiveness for the Arabic text classification task (macro-averaged F1 of 88.1% and micro-averaged F1 of 91.4%).

III. Binary Particle Swarm Optimization (BPSO)

Before discussing BPSO as a feature selection algorithm, we first describe what the word "swarm" means in the full name of the PSO ("particle swarm optimization") algorithm: what is the swarm, and where did this name come from? Forms of collective life in some organisms have inspired researchers to develop successful theories for solving problems based on this seemingly random behavior. There is a group of successful techniques based on this mode of thinking, including DNA computing, membrane algorithms, the particle swarm optimization algorithm, artificial immune systems, and ant colony optimization. The particle swarm optimization algorithm was developed in 1995 by Eberhart and Kennedy [25]; the idea is built on the collective behavior of flocks of birds.
PSO is a randomized optimization algorithm that proposes solutions (particles) at positions in the search space. Each particle holds an initial random velocity within the search space, symbolized by V_i = (V_i1, V_i2, ..., V_in), and each particle's best position is symbolized by P_i = (P_i1, P_i2, ..., P_in). A particle updates its velocity according to its own experience and the experience of other particles. The best particle in the search space (swarm) is called the global best, symbolized by g. When the velocity has been updated, the particle finds its new position with the latest velocity according to the following equations [26]. The main equation is:

X_id = X_id + V_id   (1)

(new position = current position + new velocity)

V_id = ω * V_id + C1 * rand() * (P_id - X_id) + C2 * rand() * (P_gd - X_id)   (2)

where rand() is a random number in (0, 1) [27]; C1 and C2 are acceleration factors, usually C1 = C2 = 2; P_gd is the global best; and V_id is the velocity of the particle [28]. X_i is the current position of the particle, initialized with random binary values, where 0 means the corresponding feature is not selected and 1 means the feature is selected. P_i is the best previous position of the particle, initialized with the same value as X_i, and V_i is its velocity. If there were no previous velocity, particles would navigate to the same position (the local search); with a velocity term, a particle extends its search (the global search). Balancing these two behaviors raises problems, which the inertia weight ω solves by balancing local and global search. A sequence of experiments in [ ] gives the best value of ω as 1.2. In binary particle swarm optimization (binary PSO), the particle position is treated as a binary vector, but how do binary vectors deal with velocities?
[29] provides equations to deal with velocity: a real-valued vector whose entries are mapped into the range (0, 1), giving a set of probabilities. Accordingly, we can use BPSO to select the relevant features in Arabic text classification. As mentioned in [30], the probability of a bit changing is determined by the following:

S(V_id) = 1 / (1 + e^(-V_id))   (3)

If rand() < S(V_id) then X_id = 1; else X_id = 0   (4)

where rand() is a random number in (0, 1) [27].

IV. Reduced Error Pruning Tree (REP-Tree)

More recently, the reduced error pruning tree (REP-Tree) has been investigated for Arabic TC [31]. REP-Tree is a fast decision tree learner which builds a decision/regression tree using information gain (or variance reduction) as the splitting criterion, and prunes it using reduced error pruning. REP-Tree was first used in Indian and English text classification in 2015 [32] and 2012 [33]. REP-Tree first runs the training process on the existing dataset and builds the training model by decisions; it then evaluates instances from a pruning set, a held-out part of the dataset used for post-pruning of the tree, before performing the test process. For a sub-tree of the tree, if replacing it by a leaf does not produce more prediction errors on the pruning set than the sub-tree does, the sub-tree is replaced by a leaf. That means REP-Tree considers pruning each node after the initial tree is grown: a node is pruned whenever the misclassification error determined on the pruning set for the replacement leaf is not larger than the misclassification error of the original sub-tree. The misclassification counts are presented in Figure 1 below.
Figure 1. Misclassification detection in the pruning set of REP-Tree (binary sample) [34], using the pruning set shown in the following table:

Table 1. Pruning set samples

Category   x   y   z
a          0   0   1
b          0   1   1
b          1   1   0
b          1   0   0
a          1   1   1
b          0   0   0

Figure 2. The final REP-Tree.

REP-Tree starts from the bottom, at node three. Node three can be turned into a leaf which makes fewer errors on the pruning set than it does as a sub-tree: as a sub-tree (before pruning) the classification occurs at nodes four and five, and one error happens at node five, while no error happens when node three is a leaf. The same happens at nodes six and nine. However, node two cannot be made into a leaf, since as a leaf it makes one error, while as a sub-tree, with the newly created leaves three and six, it makes no errors, as shown in Figure 2. Pruning is a solution to the sub-tree replication problem that occurs when a decision tree starts splitting; this case is defined as: "when sub-tree replication occurs, identical sub-trees can be found at several different places in the same tree structure" [28].

V. Proposed Work

In this section, the whole Arabic text classification process is explained; the work is then divided into a collection of systems, each with a particular combination of the components described in the previous sections, to produce the final classification process after preparing the dataset.

Arabic text datasets

In this subsection, we present the datasets used in the experiments of this paper.

BBC-Arabic news dataset. The first data set contains 4680 documents of BBC-Arabic news, classified into the following predefined categories: {'middle east', 'world news', 'business', 'sport', 'newspapers', 'science', 'misc.'}.
we chose a random set of 3000 of the existing documents manually, knowing that all documents are classified with "single label" classification, as mentioned in section (3.2.1) "types of text classification". the following table, table (2), shows the division of the documents into seven preset categories.

table 2 the division of bbc-arabic news dataset based on 60% training set

#  class        training set  testing set  full dataset
1  middle east  630           420          1050
2  world news   222           148          370
3  business     124           82           206
4  sport        348           232          580
5  newspapers   234           155          389
6  science      141           94           235
7  misc.        102           68           170
   total        1801          1199         3000

note that the bbc-arabic dataset was collected during our work, while the other two datasets already exist in the literature (arabic corpora, mourad abbas) and (arabic corpora, alj-news).

alkhaleej news dataset

the second dataset contains 5690 documents of alkhaleej news (arabic corpora, mourad abbas), (arabic corpora, alj-news), classified into the following predefined categories {'international news', 'local news', 'sport', 'economy'}. we chose a random set of 2770 documents, knowing that all the documents are classified with single label classification (abbas, smaili 2005). the following table, table (3), shows the division of the documents into four preset categories.

table 3 the division of alkhaleej news dataset based on 60% training set

#  class               training set  testing set  full dataset
1  local news          630           400          1030
2  international news  480           320          800
3  economy             264           176          440
4  sport               300           200          500
   total               1674          1096         2770

the tables above show that the data is partitioned into two parts, data for learning and data for testing, based on 60% for learning; this style exists in the weka tool with many options for this purpose.

the proposed systems

in this section, we give a set of systems, each containing some of the processes listed in the previous section; a comparison is then performed between all the existing combinations, in the form of independent systems, and the results are extracted in the next section.

system a: binary particle swarm optimization and k-nearest neighbor

system a is the first proposed system. it classifies arabic documents using the three main processes: preprocessing, feature selection and classification, as mentioned. this system contains three process chains, shown in figure (3):

(1) tokenization and stop words discarding, (2) bpso/knn, (3) j 48.
(1) tokenization and stop words discarding, (2) bpso/knn, (3) svm.
(1) tokenization and stop words discarding, (2) bpso/knn, (3) rep-tree.

figure 3 system a

figure 3 shows the processes of system a using the bbc-arabic dataset with the previous processes.

bpso+knn experiment steps

step 1. we prepare a population of particles in the feature space and spread the particles randomly. xi is the current position of the particle, initialized with random binary values, where 0 means that the corresponding feature is not selected and 1 means that the feature is selected. pi is the best previous position of the particle, initialized with the same value as xi. vi is the velocity of pi. according to the evaluation of each particle in the swarm, gbest (the global best) is initialized with the best fitness value of a particle.

step 2. (determining the fitness). the fitness of the subset produced by a particle is evaluated after each feature selection iteration. the best fitness is the best accuracy in the evaluation process of the selected subset of features, measured by the classifier algorithm (knn), according to the following equation [27]:

Fitness = (α × Acc) + (β × (N − T) / N)   (5)

where

• acc refers to the classification accuracy of the particle using the chosen classifier.
• to balance classification accuracy against the dimension of the feature subset selected by the particles, we use the β and α parameters, with α in the range [0, 1] and β = 1 − α. n refers to the number of all features and t to the number of features selected by particle p.

• the fitness is now updated, and then the personal best of each particle is updated.

step 3. (updating gbest). the gbest is now updated.

step 4. (updating position). according to the bpso velocity equation from section three, we can alter and update both velocity and position for all particles (mendes, kennedy and neves, 2004), equations (1) and (2). as mentioned in [25], the probability of a bit changing is determined by equations (3) and (4), where rand() is a random number between (0, 1) [27], c1 and c2 are acceleration factors (usually c1 = c2 = 2), p_gd is the global best and v_id is the velocity of the particle [28].

step 5. if the fitness value is better than the best fitness value in history (gbest), set the current value as the new gbest.

step 6. for evaluation, in our case knn, we use the euclidean distance (ed) to measure the relevancy between the current instance and the other instances in the dataset.

step 7. define the repository r: if the predicted classification of an instance is similar to the predefined classification, increase repository r by 1.

step 8. now we can measure the classification accuracy of particle p by [27]:

ClassificationAccuracy = R / N   (6)

where r is the number of correct results after testing the features on the whole training set n.

the experiment parameters (bpso+knn):
(1) the inertia weight (w) in equation (2) balances the local search and the global search [27]; from the literature, the best value of w is 1.2.
(2) the swarm dimension is 50 particles.
(3) the number of iterations is 200.
(4) α is in [0, 1] and β = 1 − α.
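equations (5) and (6) can be written down directly. the sketch below assumes the α = 0.70, β = 0.30 split chosen for the experiments; the example particle counts are made up for illustration:

```python
# sketch of the particle fitness of equation (5) and the classification
# accuracy of equation (6); alpha = 0.70 and beta = 0.30 as in the text.
ALPHA, BETA = 0.70, 0.30

def classification_accuracy(correct, n):
    """equation (6): R / N, correctly classified over all tested instances."""
    return correct / n

def fitness(acc, n_features, n_selected):
    """equation (5): reward accuracy, penalise large feature subsets."""
    return ALPHA * acc + BETA * ((n_features - n_selected) / n_features)

# a hypothetical particle that keeps 300 of 1000 features and classifies
# 85 of 100 test instances correctly
f = fitness(classification_accuracy(85, 100), 1000, 300)
```

with these numbers the fitness is 0.70 × 0.85 + 0.30 × 0.70 = 0.805, showing how a smaller subset can compensate for a slightly lower accuracy.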
if we use α = 1, then β = 0, which means that the dimension of the feature subset is neglected; so we choose a value in [0, 1] for α (0.70), and β is 1 − 0.70 = 0.30.

system b: binary particle swarm optimization and support vector machine

the second system in this study also inserts the middle phase (feature selection). in this system we use bpso with svm, and then classify the resulting features by decision tree (j 48), support vector machine (svm) and reduced error pruning tree (rep-tree), as shown in figure (4).

figure 4 system b

figure (4) shows the processes of system b using the bbc-arabic dataset with the previous processes, adding bpso+svm as the feature selection; the resulting features are classified for arabic words using the three classifiers: svm (as classifier), j 48 and rep-tree.

bpso+svm experiment steps

step 1. the same as in system a.
step 2. (determining the fitness). here we use the previous fitness equation from system a, equation (5), but we use svm to measure the classification accuracy instead of knn.
step 3. (updating gbest). the same as in system a.
step 4. (updating position). the same as in system a, using equations (2), (3) and (4).
step 5. the same as in system a.
step 6. for evaluation, in our case svm, we use the svm classifier in the weka tool to measure the relevancy between the current instance and the other instances in the dataset. then repeat steps 7 and 8 as mentioned in system a, with the same parameters as in the system a experiments.

system c: binary particle swarm optimization and reduced error pruning tree

the last system in this study also inserts the middle feature selection phase, including the previous processes and contents of systems a and b.
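since the three systems share the same bpso skeleton and differ only in how a particle's feature subset is scored (knn, svm or rep-tree accuracy), the loop can be sketched as follows. this is a minimal illustration with an assumed toy evaluator standing in for the weka classifiers; the parameters w = 1.2 and c1 = c2 = 2 follow the text (the demo call uses a smaller swarm than the 50 particles / 200 iterations of the experiments), and the velocity clamp is a standard bpso practice added here to keep the sigmoid well behaved:

```python
# minimal BPSO feature-selection skeleton shared by systems a, b and c:
# only the `evaluate` callback (knn, svm or rep-tree accuracy) changes.
import math
import random

W, C1, C2 = 1.2, 2.0, 2.0   # inertia weight and acceleration factors from the text
VMAX = 6.0                  # conventional velocity clamp (an assumption here)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))            # equation (3)

def bpso_select(n_features, evaluate, swarm_size=50, iterations=200, seed=1):
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(swarm_size)]
    V = [[0.0] * n_features for _ in range(swarm_size)]
    P = [x[:] for x in X]                        # personal best positions
    p_fit = [evaluate(x) for x in X]
    g = max(range(swarm_size), key=lambda i: p_fit[i])
    gbest, g_fit = P[g][:], p_fit[g]             # global best
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(n_features):
                v = (W * V[i][d]                                   # equations (1)-(2)
                     + C1 * rng.random() * (P[i][d] - X[i][d])
                     + C2 * rng.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-VMAX, min(VMAX, v))
                X[i][d] = 1 if rng.random() < sigmoid(V[i][d]) else 0  # equation (4)
            f = evaluate(X[i])
            if f > p_fit[i]:
                P[i], p_fit[i] = X[i][:], f
                if f > g_fit:
                    gbest, g_fit = X[i][:], f
    return gbest, g_fit

# toy evaluator standing in for the knn/svm/rep-tree accuracy: features
# 0 and 2 are "relevant", and extra features are penalised as in equation (5)
def toy_evaluate(mask):
    acc = (mask[0] + mask[2]) / 2
    return 0.7 * acc + 0.3 * (len(mask) - sum(mask)) / len(mask)

best_mask, best_fit = bpso_select(5, toy_evaluate, swarm_size=10, iterations=30)
```

swapping `toy_evaluate` for a knn, svm or rep-tree accuracy function yields systems a, b and c respectively.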
in this system we use bpso with the reduced error pruning tree (rep-tree), which has not been used in the arabic text classification field yet and was recently used in english news classification. finally, we classify the resulting features by decision tree (j 48), support vector machine (svm) and rep-tree (as a classifier), as shown in figure (5).

figure 5 system c

figure (5) shows system c, adding bpso+rep-tree as the feature selection (with rep-tree as the evaluator); the resulting features are classified for arabic words using the three classifiers (svm, j 48 and rep-tree as classifier).

bpso+rep-tree experiment steps

step 1. the same as in system a.
step 2. (determining the fitness). here we use the reduced error pruning tree (rep-tree) as the feature evaluator to measure the classification accuracy of the particle on the training set, instead of knn as in system a.
step 3. (updating gbest). the same as in system a.
step 4. (updating position). the same as in system a, using equations (2), (3) and (4).
step 5. the same as in system a.
step 6. for evaluation, in our case rep-tree, we use the rep-tree classifier in the weka tool to measure the relevancy between the current instance and the other instances in the dataset. then repeat steps 7 and 8 as mentioned in system a, with the same parameters as in the system a experiments.

we can alternate the last three steps by measuring the f-measure factor to estimate the classification accuracy. the previous steps can be listed in short, general points as follows:

(1) first, after preparing the feature (term) space and spreading the particles randomly, we determine the classification accuracy (acc) of a particle p on the training dataset using rep-tree.
(2) start extracting and filtering the feature subset of the training set selected by the particle.
(3) evaluate the previously extracted feature dataset with rep-tree using 60% training set validation.
(4) determine the f-measure factor resulting from the rep-tree experiment to determine the fitness of the particle.

vi. experimental results

in this section, the experimental results of the systems described in the last section are presented. we executed our experiments on two datasets, the bbc-arabic news dataset and the alkhaleej news dataset. as mentioned in the previous section, we split the data into 60% for training and 40% for testing, and then display the results in tables and figures; after that, every system is compared with the others in a specific graph. we start by presenting the results of system a using the three classifiers previously described in section 4; then we gradually review the results of system b, and finally we end with system c.

6.1 system a.a ("bpso+knn"/j 48)

the experimental results of system a with the j 48 tree are shown in tables (4) and (5) using the previous two datasets:

table 4 system a with j 48 tree applied on bbc-arabic dataset

table (4) shows the classification of bbc-arabic documents using bpso+knn as a feature selector and the j 48 decision tree as a classifier. as is clear from the table, the results are as follows: the best classification is in the "newspapers" class, with precision of 87.3, recall of 88.9 and f1-measure of 88.0. the second performance rank goes to "misc.", with precision of 83.9, recall of 89.6 and f1-measure of 86.6. there is a convergence in the outcome of both "world news" and "sport", with a slightly better recall of 85.4 for the "world news" class.
the worst two classes were "science" and "middle east": precision of 62.7, recall of 86.1 and f1-measure of 72.5 for "science", and the worst precision, 67.3, with an f1-measure of 68.4, for "middle east". then we have the second dataset (alkhaleej news dataset) with the same previous experiment; table (5) shows the results as follows:

table 5 system a with j 48 tree applied on alkhaleej news dataset

table (5) shows the classification of alkhaleej news documents using bpso+knn as a feature selector and the j 48 decision tree as a classifier. the best f1-measure is for the "sport" class with 84.2, and the worst f1-measure is for the "economy" class with 62.8.

6.2 system a.b ("bpso+knn"/svm)

the experimental results of system a with the svm classifier are shown in tables (6) and (7) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 6 system a with svm classifier applied on bbc-arabic dataset

table (6) shows the classification of bbc-arabic documents using bpso+knn as a feature selector and svm as a classifier. as is clear from table (6), the results are as follows: the best classification is for the "misc." class, with precision of 89.4, recall of 95.6 and f1-measure of 92.3. the second performance rank goes to "business", with precision of 84.5, recall of 92.4 and f1-measure of 88.2. there is a convergence in the f1-measure of "middle east" and "science", with f1-measures of 83.7 and 83.4 respectively. the worst class is "sport", with precision of 87.2, recall of 79.7 and f1-measure of 83.2.
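the per-class precision, recall and f1-measure percentages reported in these result tables are obtained as follows (a minimal sketch; the label lists are made-up examples, while the actual figures come from the weka evaluation):

```python
# per-class precision, recall and f1-measure (as percentages) from
# predicted vs. true labels, as reported in the result tables
def per_class_scores(y_true, y_pred, labels):
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (100 * prec, 100 * rec, 100 * f1)   # precision%, recall%, f1%
    return scores

# made-up labels for illustration only
y_true = ['sport', 'sport', 'economy', 'sport', 'economy']
y_pred = ['sport', 'economy', 'economy', 'sport', 'sport']
s = per_class_scores(y_true, y_pred, ['sport', 'economy'])
```

the per-row "average" in the tables is then the mean of these per-class values over all classes (macro averaging).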
table 4 (system a with j 48 tree, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  67.3        69.7     68.4
world news   81.5        85.4     83.4
business     72.4        73.4     72.8
sport        84.2        79.7     81.8
newspapers   87.3        88.9     88.0
science      62.7        86.1     72.5
misc.        83.9        89.6     86.6
average      77          81.8     79

table 5 (system a with j 48 tree, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          75.8        78.4     77
international news  74.6        72.3     73.4
economy             65.2        60.7     62.8
sport               81.3        87.5     84.2
average             74.2        74.7     74.3

table 6 (system a with svm classifier, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  88.3        79.7     83.7
world news   81.7        87.3     84.4
business     84.5        92.4     88.2
sport        87.2        79.7     83.2
newspapers   86.4        88.2     87.3
science      81.4        85.6     83.4
misc.        89.4        95.6     92.3
average      85.5        86.9     86

now we will apply system a (the same previous experiment with svm) on the second dataset (alkhaleej news dataset); table (7) shows the results as follows:

table 7 system a with svm classifier applied on alkhaleej news dataset

table (7) shows the classification of alkhaleej news documents using bpso+knn as a feature selector and svm as a classifier. the best f1-measure is for the "sport" class with 92.3, and the worst f1-measure is for the "international news" class with 82.

6.3 system a.c ("bpso+knn"/rep-tree)

the third combination of system a uses our proposed classifier, rep-tree, which has recently been used in english text classification as mentioned in the previous sections. here, rep-tree is used as a classifier to classify the group of features resulting from the feature selection by bpso+knn. the experimental results of system a with the rep-tree classifier are shown in tables (8) and (9) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 8 system a with rep-tree classifier applied on bbc-arabic dataset

table (8) shows the classification of bbc-arabic documents using bpso+knn as a feature selector and rep-tree as a classifier.
as is clear from table (8), the results are as follows: the best classification is for the "middle east" class, with precision of 87.7, recall of 91.5 and f1-measure of 89.5. the second rank of performance is for the "newspapers" class, with precision of 89.2, recall of 88.7 and f1-measure of 88.9. we can detect a convergence between the previous class performance and the "business" class performance, with precision of 86.1, recall of 90.6 and f1-measure of 88.2. the worst performance is the "misc." class, with precision of 79.2, recall of 72.3 and f1-measure of 75.5. as in all previous experiments, we apply the rep-tree classifier on the other dataset: now we apply system a (the same previous experiment with rep-tree) on the second dataset (alkhaleej news dataset), and table (9) shows the results as follows:

table 9 system a with rep-tree classifier applied on alkhaleej news dataset

accuracy results were comparable between rep-tree and svm, with an average f1-measure of 87% for rep-tree and 88% for svm. in more detail, the best f1-measure is for the "local news" class with 89.9, and the worst f1-measure is for the "economy" class with 81.8.

6.4 system b.a ("bpso+svm"/j 48)

the experimental results of system b with the j 48 tree are shown in tables (10) and (11) using the previous two datasets (bbc-arabic news dataset and alkhaleej news dataset):

table 10 system b with j 48 tree applied on bbc-arabic dataset

table (10) shows the classification of bbc-arabic documents using bpso+svm as a feature selector and the j 48 decision tree as a classifier. as is clear from the table, the results are as follows: the best classification performance is the "newspapers" class, with precision of 85.2, recall of 87.3 and f1-measure of 86.2. the second rank of classification performance is "world news", with precision of 88.3, recall of 83.1 and f1-measure of 85.6.
we can see that the worst classes are "middle east" and "science": precision of 70.4, recall of 72.6 and f1-measure of 71.4 for "middle east", and the worst precision, 61.0, with an f1-measure of 68.2, for "science". here we can be quite sure that the j 48 tree failed in the classification of the "science" class by 31.8% according to its f1-measure.

table 7 (system a with svm classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          86.1        90.4     88.1
international news  82.4        81.7     82
economy             91.6        87.8     89.6
sport               95.3        89.5     92.3
average             88.8        87.3     88

table 8 (system a with rep-tree classifier, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  87.7        91.5     89.5
world news   85.9        85.7     85.7
business     86.1        90.6     88.2
sport        80.3        72.2     76
newspapers   89.2        88.7     88.9
science      83.8        87.8     85.7
misc.        79.2        72.3     75.5
average      84.6        84.1     84.2

table 9 (system a with rep-tree classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          88.4        91.5     89.9
international news  93.2        85.2     89
economy             80.1        83.6     81.8
sport               92.7        82.7     87.4
average             88.6        85.7     87

table 10 (system b with j 48 tree, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  70.4        72.6     71.4
world news   88.3        83.1     85.6
business     77.5        71.2     74.2
sport        87.7        78.5     82.8
newspapers   85.2        87.3     86.2
science      61          77.4     68.2
misc.        82.5        87       84.6
average      78.9        79.5     79

now we have the second dataset (alkhaleej news dataset) with the same previous experiment; table (11) shows the results as follows:

table 11 system b with j 48 tree applied on alkhaleej news dataset

table (11) shows the classification accuracy of alkhaleej news documents using bpso+svm as a feature selector and the j 48 decision tree as a classifier. the best f1-measure is for the "sport" class with 76.6, and the worst f1-measure is for the "local news" class with 51. also here we can be quite sure that the j 48 tree failed in the classification of the "local news" class by 49% according to its f1-measure.
6.5 system b.b ("bpso+svm"/svm)

the experimental results of system b with the svm classifier are shown in tables (12) and (13) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 12 system b with svm classifier applied on bbc-arabic dataset

table (12) shows the classification of bbc-arabic documents using bpso+svm as a feature selector and svm as a classifier. as is clear from table (12), the results are as follows: the best classification is for the "misc." class, with precision of 90.4, recall of 98.8 and f1-measure of 94.4. the second performance rank goes to "world news", with precision of 98.7, recall of 90.3 and f1-measure of 94.3. the worst class is "sport", with precision of 60.3, recall of 80.7 and f1-measure of 69. now we will apply system b (the same previous experiment with svm) on the second dataset (alkhaleej news dataset), and table (13) shows the results as follows:

table 13 system b with svm classifier applied on alkhaleej news dataset

table (13) shows the classification of alkhaleej news documents using bpso+svm as a feature selector and svm as a classifier. the best accuracy (f1-measure) is for the "economy" class with 93.6, and the worst f1-measure is for the "local news" class with 85.8.

6.6 system b.c ("bpso+svm"/rep-tree)

the third combination of system b uses our proposed classifier, rep-tree, which, as mentioned in the previous experiments, has recently been used by (kalmegh, 2015) and (patel and upadhyay, 2012) in english text classification and by (naji and ashour, 2016) in arabic text classification (a previous paper related to the present one), as mentioned in the first section. here rep-tree is a classifier, used to classify the group of features resulting from the feature selection by bpso+svm.
table 11 (system b with j 48 tree, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          49.8        52.4     51
international news  93.3        62.4     74.7
economy             67.1        77.5     71.9
sport               85.3        69.8     76.7
average             73.8        65.5     68.5

table 12 (system b with svm classifier, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  67.9        88.7     76.9
world news   98.7        90.3     94.3
business     87.9        89.3     88.5
sport        60.3        80.7     69
newspapers   79.8        84.2     81.9
science      99.2        85.6     91.8
misc.        90.4        98.8     94.4
average      83.4        88.2     85.2

table 13 (system b with svm classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          83.2        88.6     85.8
international news  88.5        85.7     87
economy             96.6        90.9     93.6
sport               90.3        89.7     89.9
average             89.6        88.7     89

the experimental results of system b with the rep-tree classifier are shown in tables (14) and (15) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 14 system b with rep-tree classifier applied on bbc-arabic dataset

class        precision%  recall%  f1-measure%
middle east  77          89.4     82.7
world news   98.3        96.1     97.1
business     87.2        78.5     82.6
sport        79.5        75.8     77.6
newspapers   88.2        88.9     88.5
science      85.4        87.1     86.2
misc.        89          69.4     77.9
average      86.3        83.6     84.6

table (14) shows the classification of bbc-arabic documents using bpso+svm as a feature selector and rep-tree as a classifier. as is clear from table (14), the results are as follows: the best classification is for the "world news" class, with precision of 98.3, recall of 96.1 and f1-measure of 97.1. the second rank of performance is for the "newspapers" class, with precision of 88.2, recall of 88.9 and f1-measure of 88.5. we can detect a convergence between the "middle east" class performance and the "business" class performance, with f1-measures of 82.7 and 82.6. the worst performance is the "sport" class, with precision of 79.5, recall of 75.8 and f1-measure of 77.6. as in all previous experiments, we apply the rep-tree classifier on the other dataset.
now we will apply system b (the same previous experiment with rep-tree) on the second dataset (alkhaleej news dataset), and table (15) shows the results as follows:

table 15 system b with rep-tree classifier applied on alkhaleej news dataset

from table (15) we see that the best rep-tree f1-measure is 91.2, for the "sport" class, and the worst f1-measure is for the "local news" class with 75. we note that the results were comparable with the svm classifier.

6.7 system c.a ("bpso+rep-tree"/j 48)

system c consists of binary pso as a feature selector and the proposed rep-tree as an evaluator to check the best group of features; we then use the three previous classifiers (j 48, svm and rep-tree) to build the classification model. the classification is performed on the resulting group of features in the training set, to reduce the dimension of the original dataset, and the classifiers are then applied on the test dataset. we have previously noted that rep-tree has recently been used by (kalmegh, 2015) and (patel and upadhyay, 2012) to classify english text and by (naji and ashour, 2016) in arabic text classification. the experimental results of system c with the j 48 tree are shown in tables (16) and (17) using the previous two datasets (bbc-arabic news dataset and alkhaleej news dataset):

table 16 system c with j 48 tree applied on bbc-arabic dataset

table (16) shows the classification of bbc-arabic documents using bpso+rep-tree as a feature selector and the j 48 decision tree as a classifier. as is clear from the table, the results are as follows: the best classification performance is the "world news" class, with precision of 90.4, recall of 87.4 and f1-measure of 88.8. the second rank of classification performance is "middle east", with precision of 88.7, recall of 83.3 and f1-measure of 85.9. we can note that the worst class is "business", with precision of 75.2, recall of 70.5 and f1-measure of 72.7.
here we can be quite sure that the j 48 tree failed in the classification of the "business" class by 27.3% according to its f1-measure. now we have the second dataset (alkhaleej news dataset) with the same previous experiment; table (17) shows the results as follows:

table 17 system c with j 48 tree applied on alkhaleej news dataset

class               precision%  recall%  f1-measure%
local news          60.3        56.8     58.4
international news  68.6        70.9     69.7
economy             90.4        75.9     82.5
sport               84.8        72.5     78.1
average             73.5        69       72.1

table (17) shows the classification accuracy of alkhaleej news documents using bpso+rep-tree as a feature selector and the j 48 decision tree as a classifier. the best f1-measure is for the "economy" class with 82.5, and the worst f1-measure is for the "local news" class with 58.4. also here we can be quite sure that the j 48 tree failed in the classification of the "local news" class by 41.6% according to its f1-measure.

table 15 (system b with rep-tree classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          72          78.3     75
international news  89.6        92.2     90.8
economy             87.3        88.3     87.7
sport               95.4        87.5     91.2
average             86          86.5     86.1

table 16 (system c with j 48 tree, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  88.7        83.3     85.9
world news   90.4        87.4     88.8
business     75.2        70.5     72.7
sport        84.8        74.2     79.1
newspapers   80.1        83.8     81.9
science      79.8        78.3     79
misc.        77.6        85.7     81.4
average      82.3        80.4     81.2

6.8 system c.b ("bpso+rep-tree"/svm)

the experimental results of system c with the svm classifier are shown in tables (18) and (19) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 18 system c with svm classifier applied on bbc-arabic dataset

class        precision%  recall%  f1-measure%
middle east  98.6        94.4     96.4
world news   68.2        88.9     77.1
business     82.3        85.7     83.9
sport        64.6        78.5     70.8
newspapers   81.4        82.8     82
science      97.2        87.1     91.8
misc.        92.5        96.9     94.6
average      83.5        87.7     85.2

table (18) shows the classification of bbc-arabic documents using bpso+rep-tree as a feature selector and svm as a classifier. from table (18) we note the equality of the average f1-measure using the same svm classifier with a different feature selection combination (bpso+rep-tree); the current results have been compared with tables (12) and (13) (bpso+svm feature selection). we get here average f1-measures of 85.2 and 89.6 for svm (the same classifier but a different feature selector). as usual, we apply system c (the same previous experiment with svm) on the second dataset (alkhaleej news dataset), and table (19) shows the results as follows:

table 19 system c with svm classifier applied on alkhaleej news dataset

table (19) shows the classification of alkhaleej news documents using bpso+rep-tree as a feature selector and svm as a classifier. the best accuracy (f1-measure) is for the "local news" class with 95.4, and the worst f1-measure is for the "sport" class with 79.7. in this experiment, we note the equality and convergence of the classification results using the same svm classifier with a different feature selection combination (bpso+rep-tree).

6.9 system c.c ("bpso+rep-tree"/rep-tree)

the third combination of system c consists of binary pso as a feature selector and the proposed rep-tree as an evaluator; we then use rep-tree as the classifier, as mentioned in the system c subsection of the previous section.
the experimental results of system c with the rep-tree classifier are shown in tables (20) and (21) using the previous two datasets (bbc-arabic and alkhaleej), as follows:

table 20 system c with rep-tree classifier applied on bbc-arabic dataset

table (20) shows that rep-tree has been effective enough in the classification of bbc-arabic documents using bpso+rep-tree as a feature selector and rep-tree as a classifier. the results are as follows: the best classification is for the "middle east" class, with precision of 97.2, recall of 95.3 and f1-measure of 96.2. next we have the second classification performance, "newspapers", with precision of 86.1, recall of 98.4 and f1-measure of 91.8. the third classification accuracy is "business", with an f1-measure of 87.9. we can detect a convergence between the "science" class performance and the "world news" class performance, with f1-measures of 83.3 and 83.2. the worst performance is the "sport" class, with an f1-measure of 77.8. as usual, we apply the rep-tree classifier on the other dataset: now we apply system c (the same previous experiment with rep-tree) on the second dataset (alkhaleej news dataset), and table (21) shows the results as follows:

table 21 system c with rep-tree classifier applied on alkhaleej news dataset

from table (21) we see that the best rep-tree accuracy (f1-measure) is 97.6, for the "local news" class, and the worst f1-measure is for the "economy" class with 86.3. the average accuracy of rep-tree in this experiment was 91.8.

6.10 performance of the three systems

in this subsection, we make a comparison between the previous results on the two datasets (bbc-arabic and alkhaleej) before adding some enhancements to each system in the preprocessing phase. table (22) and figure (6) show the results of this comparison.
table 19 (system c with svm classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          97.2        93.7     95.4
international news  94.5        82.9     88.3
economy             90.3        95.5     92.8
sport               79.5        80       79.7
average             90.3        88       89.05

table 20 (system c with rep-tree classifier, bbc-arabic dataset):

class        precision%  recall%  f1-measure%
middle east  97.2        95.3     96.2
world news   88.6        78.5     83.2
business     87.3        88.6     87.9
sport        79.9        75.9     77.8
newspapers   86.1        98.4     91.8
science      80          86.9     83.3
misc.        82.5        92       86.9
average      85.9        87.9     86.7

table 21 (system c with rep-tree classifier, alkhaleej news dataset):

class               precision%  recall%  f1-measure%
local news          98          97.4     97.6
international news  91.3        92.5     91.8
economy             85.7        87.1     86.3
sport               93.8        89.6     91.6
average             92.2        91.6     91.8

table 22 comparison between the f-measure averages of the three systems

datasets         system a (bpso+knn)%  system b (bpso+svm)%  system c (bpso+rep-tree)%
bbc-ar (j48)     79                    79                    81.2
bbc-ar (svm)     86                    85.2                  85.2
bbc-ar (rep)     84.2                  84.6                  86.7
alkhaleej (j48)  74.3                  68.5                  72.1
alkhaleej (svm)  88                    89                    89
alkhaleej (rep)  87                    86.1                  91.8

figure 6 comparison between the accuracy of the three systems

from table (22) and figure (6), we draw the overall results of all the experiments, calculate the averages of the f1-measure values and compare all the systems with each other.

vii. conclusion

this paper proposed a new feature selection approach to select the best subset of features from the original arabic documents. we showed that the proposed approach works well in this area through the experimental results. the proposed approach can be used in the field of arabic search engines and in classifying huge amounts of arabic web pages into hierarchical classes (labels). we proposed the reduced error pruning tree classifier, which had not been used in arabic text classification before, for two purposes. the first is as an evaluator to evaluate the subset of features resulting from the binary particle swarm optimization (bpso) feature selection algorithm.
to evaluate this approach (bpso+rep-tree), we used two arabic datasets, the bbc-arabic news dataset and the alkhaleej news dataset. the second purpose of rep-tree is to use it as a classifier to build the learning model. we compared the first purpose (bpso+rep-tree) with two existing approaches, (bpso+knn) and (bpso+svm), and the second purpose (the rep-tree classifier) with two well-known classifiers, j 48 and svm. we named the three feature selection approaches a for (bpso+knn), b for (bpso+svm) and c for (bpso+rep-tree). from the experimental results, we concluded that the proposed approach, system c, is effective. we chose the f1-measure, which combines the precision and recall factors, to estimate the accuracy of the classification process. the f1-measure values for system a are in the range of 73% to 79% with the j 48 classifier, 86% to 88% with svm, and 84% to 87% with the proposed rep-tree classifier. next, the f1-measure values of the second system (b) with the same classifiers are as follows: with j 48 in the range of 60.9% to 84.6%, with svm 85.2% to 89.6%, and with the proposed rep-tree classifier 84.6% to 89.5%; the last two algorithms are comparable in accuracy. finally, applying the experiments to our proposed approach, system (c), in the feature selection domain gave the following accuracy ranges: with j 48 in the range of 69.5% to 79.6%, with svm 87% to 89.8%, and with the proposed rep-tree classifier 86.7% to 91.8%.

references

[1] g. salton and c. buckley, "term-weighting approaches in automatic text retrieval", information processing & management, vol. 24, no. 5, (1988), pp. 513-523. doi:10.1016/0306-4573(88)90021-0
[2] s. li, r. xia, c. zong and c. huang, "a framework of feature selection methods for text categorization",
Journal of Engineering Research and Technology, Volume 4, Issue 3, September 2017

Traffic Impact of Planned Gaza Seaport on Major Roads in Gaza Strip, Palestine

Hussein Kh. Abu Zarifa 1, Yahya R. Sarraj 2
1 Department of Civil Engineering, Islamic University of Gaza, Palestine, hussein12380@gmail.com
2 Department of Civil Engineering, Islamic University of Gaza, Palestine, ysarraj@iugaza.edu.ps

Abstract—The establishment of a commercial seaport in the Gaza Strip, Palestine, is a strategic national project with implications for many aspects of life. The aim of this research is to study the impact of establishing the Gaza commercial seaport on the roadway network in the Gaza Strip. Data was collected on the main roads, and the TransCAD program was used as a research tool. The results show that the morning traffic peak occurs between 7:00 and 10:00 and that the average peak hour factor is 0.91. The heaviest traffic flow, 20,915 vehicles over the three-hour count period, was recorded at the intersection of Al-Jalaa and Omar Al-Mukhtar Streets, known as the Saraya intersection. The results also show that traffic in the areas near the seaport is expected to be the most affected by the seaport construction.
The traffic flow at the intersection of Al-Rasheed and Al-Hurreya Streets (known as the Netzareem intersection) is estimated to increase by more than 10%, whereas no effect is expected on traffic flow at the Saraya intersection. The total vehicle hours of travel (VHT) on the network was 19,981 vehicle hours in 2015 and is estimated at 23,729 vehicle hours in 2020 without the port; the latter figure is expected to reach 32,635 vehicle hours in 2020 if the port is constructed. It is recommended to redesign the Gaza seaport with a larger capacity and to expedite its construction in order to respond to the increasing local demand for goods, considering that the seaport has been found to have a limited effect on the traffic network in the Gaza Strip.

Index Terms—seaport, Gaza Strip, Palestine, traffic impact.

I. Introduction

The transportation system in the Gaza Strip currently consists of road transport only. The Gaza Strip has a limited and poorly developed road network, consisting of 76 km of main roads, 122 km of regional roads and 99 km of local roads (PCBS, 2010). Transportation planning relies on travel demand forecasting, which involves predicting the number of vehicles or travelers that will use a particular transportation facility in the future (Almasri, Sarraj, & El Jamassi, 2010).

II. Background

Before 1967, a single railway line ran from north to south along the center of the Gaza Strip. Nowadays the track is deserted and in disrepair, and little of it remains. The Gaza Strip had a small airport to the east of Rafah governorate; however, it was destroyed by the (Israeli) occupation forces in 2001. At the start of the Palestinian National Authority (PNA), a small seaport was built, which is used only by fishermen. Since it was built, foreign ships have not been allowed to dock at this seaport (Almasri, 2012).
Figure 1 shows a map of the Gaza Strip indicating the proposed site of the seaport. The Gaza Strip suffers from a strict siege by land, sea and air, imposed by the (Israeli) occupation after the legislative elections in 2006; (Israel) then reinforced the blockade in June 2007. The siege includes closing all borders between the Gaza Strip and both Egypt and (Israel), and preventing cement, gravel, fuel and many other commodities from entering the Gaza Strip. Another aspect of the siege is restricting the fishing area in the sea. The Palestinian National Authority (PNA) has worked hard to create a seaport south of Gaza City, which is considered one of the most important strategic projects in Palestine, politically and economically. The political importance of this project lies in establishing the concept of the rule of the Palestinian state over its territorial waters. The project also works to determine the extent of the territorial waters and the right of the Palestinian state in the areas of international waters and in the exploration of natural resources. In July 2014, the (Israeli) occupation launched a new, cruel and devastating aggression on the Gaza Strip, which lasted for 50 days. The most important Palestinian demands in the ceasefire negotiations were lifting the siege and establishing a commercial seaport. The Palestinians' aim in establishing this seaport is to provide a free crossing from Palestine to the outside world, which would improve the economic situation, because the commercial seaport is considered one of several important steps to connect the local economy with the global economy. It would also help expand international trade and develop exports, local industries and business services. This, in turn, would increase the GDP and raise income levels, in addition to creating many permanent jobs.
The Gaza commercial seaport has many expected positive effects on the Palestinian economy; however, the establishment of the new commercial seaport should be based on scientific and careful planning. Furthermore, the effects of establishing the seaport on the different sectors must be studied. One of the most important of these sectors is transportation, which is the subject of this research.

III. Gaza Seaport

The Palestinian National Authority (PNA) has a plan to develop a new deep-water port in Gaza, just south of Gaza City. It expects that direct access to the port will enable the economy of Gaza, as well as that of the West Bank, to expand, diversify its foreign trade, and foster growth in export-oriented industries and trade-related services. Growth in external trade-oriented industries and services will in turn entail growth in domestic output and incomes and create new and sustained employment opportunities. A further important benefit will be lower transportation costs for Palestinian imports and exports (Parsons Brinkerhoff International, Inc., 2001). Landside uses related to port development and operations include direct and indirect industrial activities and the transportation system needed to support them. The Palestinian Authority, in its Regional Plan for Gaza Governorates, Volume II, December 1997, states that it plans to establish a "harbour free trade and export processing zone" of 1,700 donums next to the port, designed to handle heavy products for shipment. In addition, there are plans to establish several industrial areas throughout Gaza that will absorb new industrial investment and provide sites for relocating some existing industries (Parsons Brinkerhoff International, Inc., 2001).
The Ministry of Planning reaffirmed that it is necessary to develop and rehabilitate the current road network in the Gaza Strip, to re-operate the airport, and to construct the seaport as soon as possible (Ministry of Planning, 2006). The Euro-Mid Observer, in its report on the Gaza seaport, stated that "amidst the chronic crisis, the most effective long-term solution has been ignored: reopen Gaza's seaport routes to the outside world" (Euro-Mid Observer, 2014).

IV. Methodology

The main objective of this research is to study the effect of establishing a new commercial seaport in the Gaza Strip on traffic flow. The methodology is based on the travel demand forecasting approach, which is a major step in transportation planning.

Figure 1: Gaza Strip map (adown.tk/gaza-strip-map, accessed April 2017).

Data required for modelling the network, such as the characteristics of links and zones, was collected. Link characteristics include name, classification, length, free-flow speed, travel time, direction and capacity. Zone characteristics comprise size, boundaries, centroids and connectors to the links. To analyze the existing situation, traffic counts were conducted at 79 locations on selected main roads. These locations were carefully chosen to be as close as possible to the 22 main road intersections in the Gaza Strip. Traffic counts were performed on 25/10/2015 for three hours, from 7:00 am to 10:00 am, with the assistance of civil engineering students. The research was carried out in five stages.

First stage (network building): the Gaza Strip was divided into traffic analysis zones (TAZ) in order to build the network.
Second stage (base-year O-D estimation): an O-D matrix was estimated from traffic volume counts collected at the major road intersections in the base year.

Third stage (future O-D estimation): future vehicular attraction at the Gaza seaport was forecasted using cargo forecasts for the Gaza seaport and the ITE Trip Generation Manual.

Fourth stage (estimating traffic generated by the seaport): the impact of the Gaza seaport on traffic demand and the highway network in the Gaza Strip was determined.

Fifth stage (traffic flow assignment): the final step in the transportation forecasting process was to determine the actual street and highway routes that will be used and the number of vehicles expected on each highway segment.

For more details about the methodology and the mathematical approach, reference can be made to the master's thesis entitled Traffic Impact of Planned Gaza Seaport on Major Roads in Gaza Strip (Abu Zarifa, 2016).

V. Results and Analysis

A. Network building

Network building is essential for determining the traffic assignment. The first step of network building is determining the streets in the study area. The second step is to input the data required for building the O-D matrix. There are two kinds of data: traffic data (traffic flow and speed) and geometric data (direction, time, and capacity). Traffic flow was calculated on 79 different roads (links), making use of traffic flow counts at 22 main road intersections in the Gaza Strip. The Al-Saraya intersection has the highest traffic flow, 7,615 vehicles per hour, because it is located at the heart of a commercial center in Gaza City. The intersection of Al-Rasheed Street with the Khan Younis entrance has the lowest traffic flow, 399 vehicles per hour during the peak period.

B. Base-year O-D estimation

O-D matrix estimation from traffic counts is preferred because of the lack of data on the socioeconomic characteristics of people living in the Gaza Strip.
To find the number of trips, the O-D matrix method is used. This method depends on traffic counts and geometric street data such as capacity and travel time. It was not possible to use a trip generation or a trip distribution model, for the reason explained in the Dar Al Handasa report: "however, developing countries usually suffer from the lack of socioeconomic and land use data in addition to the expected growth factor. Thus, estimates of the rate of growth in vehicle ownership can be estimated by direct projection of their historical data" (Dar Al Handasa, 1999).

C. Future O-D estimation

After the establishment of the Palestinian National Authority in 1994, Dr. Sarraj, in a study on the behavior of road users in Gaza, Palestine, stated: "statistics show that there was a very sharp and sudden increase of more than 20% in the number of registered vehicles in the Gaza Strip between 1993 and 1994. In 1995 the increase in the number of registered vehicles was even greater; it was about 35%" (Sarraj, 2001). Based on data published by the Palestinian Central Bureau of Statistics on the number of registered vehicles in the Gaza Strip between 1970 and 2014 (PCBS, 2014), the annual growth rate of the number of registered motor vehicles was almost constant in the period 1970-1985, fluctuated strongly between 1985 and 1995, and has been almost steady during the last few years. For this research, the average growth rate for the fifteen years from 1999 to 2014 was calculated as 3.1% and was used to predict the number of vehicles in the future.

D. Estimating additional traffic due to the new seaport

To assess the impact of sensitive and important new facilities in an area, a study should be conducted on the private sector facilities in the region that may be affected by the creation of the new facility.
There are two methods to calculate the number of trips attracted to the port.

1. ITE Trip Generation Manual method

This method depends on the number of berths in the seaport. A berth is a place in which a vessel is moored or secured; a place alongside a quay where a ship loads or discharges cargo. The number of berths in the Gaza seaport differs in each construction phase, as shown in Table 1. The ITE Trip Generation Manual suggests the following equation for a water port/marine terminal (ITE, 2012):

trips/day = 172 × number of berths

The number of trips per hour can be estimated by dividing the total number of trips per day by the number of working hours. It was assumed that the Gaza seaport works for only 6 hours per day, since the official daily working time for civil servants in Gaza is from 8 a.m. to 2 p.m. Therefore, the following equation was used:

trips/hour = (trips/day) / 6

The trips in this method include all trips (trucks and passenger cars) attracted by the Gaza seaport.

2. Average truck load method

This method depends on the maximum tonnage of the import and export forecasts at the seaport. The trips generated in this method are truck trips only. The maxima of the import and export forecasts at the Gaza seaport differ at each phase. To estimate the number of trips, it was assumed that the average truckload is 30 tons, which is the average loading capacity of the trucks that enter Gaza through the Karm Abu Salem crossing. Table 2 presents an estimation of the import and export forecasts in each phase of construction as suggested by Smaling (Smaling, Velsink, Groenveld, & Booy, 1996). As can be noticed, the ITE method considers all trips that will be generated by the construction of the Gaza seaport, including cars and trucks.
However, the average truck load method considers only the trips made by trucks. Thus, this study adopts the first method. Table 3 shows the number of trips per hour expected in each phase after the construction of the Gaza seaport in 2020. It was not possible to deal separately with public transportation, simply because it is included in the figures in Table 3 and this service is not well developed at the local level; the most commonly used public transportation service in Gaza is the shared taxi.

Table 1: Number of berths and trips in each phase. Source: (Smaling, Velsink, Groenveld, & Booy, 1996).

Construction phase   Phase I   Phase II   Phase III
No. of berths        3         4          7
Trips/day            516       686        1201
Trips/hr             86        114        200

Table 2: Import and export forecasts at the Gaza seaport and trips in each phase. Source: thesis of D. Smaling (Smaling, Velsink, Groenveld, & Booy, 1996).

Construction phase        Phase I   Phase II    Phase III
Containers, full (TEU)    10,991    42,500      175,000
Containers, empty (TEU)   3,428     6,428       25,000
General cargo (t)         215,300   340,000     1,400,000
Dry bulk, grain (t)       92,000    115,000     350,000
Dry bulk, marble (t)      100,000   100,000     100,000
Liquid bulk (t)           366,000   970,000     2,000,000
Total (t)                 905,192   2,035,000   5,950,000
Trips (30 t truck)/year   30,173    67,833      198,333
Trips (30 t truck)/day    96        217         634
Trips (30 t truck)/hr     16        36          106
Note: 1 TEU = 12 tons.

Table 3: Number of trips per hour for the Gaza seaport.

Phase of Gaza seaport   Trips/hr
Phase I                 86
Phase II                114
Phase III               200

E. Traffic flow assignment stage

The first result is the O-D matrix, which is the most essential input for current and future traffic prediction when assigned to the network. The traffic assignment process requires an accurate prior O-D matrix. The last and most important performance measure is the volume-over-capacity ratio (VOC), which is calculated for each link in the network. Figure 2 presents the maximum VOC map for the Gaza Strip in the base year 2015.

Figure 2: Maximum volume over capacity (VOC) in the Gaza Strip for 2015.

TransCAD estimates the traffic volumes for each link in the traffic network. This process needs an O-D matrix (the estimated one) and a line network layer with its attributes. The stochastic user equilibrium method was selected to perform the traffic assignment because it gives more realistic results. More details may be obtained from a study entitled "Network capacity with probit-based stochastic user equilibrium problem" (Lu L, 2017). Figure 3 shows the estimated total traffic flow in 2015 on each link, represented by line width.

According to the analysis of the available data described previously, the estimation of the future O-D matrix was based on the average growth rate of vehicles in the Gaza Strip, which is 3.1%. Therefore, the future O-D matrix in 2020 can be obtained by applying this growth rate to each cell of the current (2015) O-D matrix. The future demand O-D matrix is obtained by multiplying the base-year O-D matrix by the expected growth in trips, as shown in the following equation (O'Flaherty, 1997):

T_ij^f = T_ij^p × G    (7)

where T_ij^f = trips for O-D pair ij in future year f; T_ij^p = trips for O-D pair ij in present year p; G = expected growth in trips between years f and p.

The forecast period considered was 5 years, from 2015 to 2020, because the construction of the Gaza seaport is expected to take about 3.5 to 5 years according to the seaport report by the Ministry of Transportation (Ministry of Transportation, 2005).
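The two trip calculations and the growth factor of equation (7) can be sketched in a few lines. This is an illustrative check, not the authors' code (the helper names are mine); for Phase I the ITE formula reproduces Table 1 exactly, and the truck method reproduces the Table 2 annual trip figure:

```python
# ITE water port / marine terminal rate: trips/day = 172 * berths,
# spread over the 6-hour working day assumed for Gaza.
def ite_trips(berths, working_hours=6):
    trips_per_day = 172 * berths
    return trips_per_day, trips_per_day // working_hours

# Average-truck-load method: truck trips only, 30 t per truck.
def truck_trips_per_year(total_tonnes, truck_load=30):
    return round(total_tonnes / truck_load)

# Equation (7): future O-D cell = present cell * G,
# with G = (1 + 0.031)**5 for the 2015 -> 2020 horizon.
G = (1 + 0.031) ** 5

print(ite_trips(3))                    # Phase I: (516, 86), as in Table 1
print(truck_trips_per_year(905_192))   # Phase I: 30173 truck trips/year
print(round(G, 3))                     # growth factor of about 1.165
```

Note that a cell-by-cell application of G to the 2015 O-D matrix gives the 2020 "without seaport" demand; the seaport trips are then added on top of it.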
In Table 4, the difference between total flow at the main intersections in the future year 2020 without the Gaza seaport and total flow with the Gaza seaport is presented. Intersections located far from the Gaza seaport are not expected to be heavily affected by its construction; these include the Al-Saraya and Dolah intersections, where the difference between the future traffic flow in 2020 with and without the seaport was close to zero. However, the intersections near the port are heavily affected, such as the Netzareem intersection, with a 10.18% increase, and the Al-Zahra intersection, with a 14.3% increase. The farther a road intersection is from the Gaza seaport, the smaller the impact on it, and vice versa. The impact of the construction of the Gaza seaport on major intersections in the Gaza Strip is graphically presented in Figure 4.

Figure 3: Total estimated traffic flows in 2015.

VI. Conclusions and Recommendations

Traffic counts were carried out by IUG civil engineering students from 7 to 10 a.m. at 22 locations near main intersections distributed over the Gaza Strip. Analysis of the counts showed that the greatest traffic flow was recorded in Gaza City, close to the intersection of Al-Jalaa Street with Omar Al-Mukhtar Street; traffic volume in both directions in the vicinity of this intersection was 20,915 vehicles per 3 hours, or 7,615 vehicles/hr in the peak hour. This is followed by the intersection of Al-Jalaa Street with Jamal Abdul Nasser Street, where traffic volume in both directions was 18,062 vehicles per 3 hours, or 6,656 vehicles/hr in the peak hour. The peak-hour-factor values ranged from 0.77 to 0.98, and the average value for the network was 0.91.
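Two of the figures quoted above are easy to verify. The percentage change in Table 4 is the relative increase in flow caused by the seaport, and the peak-hour factor follows the usual definition PHF = V / (4 × V15), where V is the peak-hour volume and V15 the highest 15-minute count. In the sketch below the helper names are mine and the four 15-minute counts are hypothetical, chosen only so that they sum to the 7,615 veh/hr Al-Saraya peak:

```python
def pct_change(flow_without, flow_with):
    """Relative increase in flow caused by the seaport, in percent."""
    return (flow_with - flow_without) / flow_without * 100

# Zemo intersection, Table 4: 3,739 -> 3,818 veh/hr.
print(round(pct_change(3739, 3818), 2))   # 2.11, as reported

def peak_hour_factor(counts_15min):
    """PHF = V / (4 * V15), per the standard HCM definition."""
    return sum(counts_15min) / (4 * max(counts_15min))

counts = [1800, 1950, 2000, 1865]   # hypothetical 15-min counts
print(sum(counts))                  # 7615 veh in the peak hour
print(round(peak_hour_factor(counts), 2))  # 0.95
```

A PHF near 1.0 indicates uniform flow within the hour; the network average of 0.91 reported above means flow is fairly steady across the morning peak.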
The average percentage of passenger cars in the Gaza Strip traffic composition was 79%, while trucks made up 6.5% and the remaining vehicle types 14.5%. The future O-D matrix was estimated on the basis of the average growth rate of registered vehicles in the Gaza Strip, which is 3.1%; the 2020 matrix was obtained by applying this growth factor to each cell of the current (2015) O-D matrix. The impact of the establishment of the Gaza seaport was found to be concentrated on the nearby main roads of the Gaza Strip, and it decreases as the distance between an intersection and the seaport increases. The most significant impact is expected at the following intersections: the Netzareem intersection, the Al-Zahra entrance on Al-Rasheed Road, and the Deir Al-Balah entrance on Al-Rasheed Road. The total vehicle hours of travel (VHT) on the network was 19,981 vehicle hours in 2015 and has been estimated at 23,729 vehicle hours in 2020 without the port, an increase of 18.7%. The latter figure is expected to reach 32,635 vehicle hours in 2020 if the port is constructed, an increase of more than 37% in total hours of travel.

The following actions are recommended:
- To carry out further studies covering the afternoon traffic activity.
- To increase the number of roads included in the network analysis, as well as the number of traffic counting nodes, in order to cover more areas of the Gaza Strip.
- To update traffic counts and traffic network data every 3-5 years in order to support traffic planning and decision making in the Gaza Strip.
- To carry out further studies investigating the impact of other strategic facilities on traffic, such as Arafat (Gaza) International Airport.
- To construct a bridge above the intersection of Salah Al-Deen and Al-Karama Streets (the Netzareem intersection) in order to serve the increasing traffic flow at this intersection, especially after the construction of the Gaza seaport.
This is because this intersection is planned by the Ministry of Transportation (2015) to be the main entrance of the seaport.
- It is essential to develop the main roads that are expected to play a major role in transporting goods to and from the seaport, namely Al-Karama, Al-Rasheed, and Salah Al-Din Streets.
- Finally, it is recommended to redesign the Gaza seaport with a larger capacity and to expedite its construction in order to respond to the increasing demand for goods, considering that the seaport has been found to have a limited effect on the traffic network in the Gaza Strip.

Table 4: The estimated traffic flow in 2020 with and without the construction of the Gaza seaport.

ID   Intersection name                                                Flow 2020 without seaport   Flow 2020 with seaport   Change (%)
2    Salah Al-Din with Al-Quds (Zemo)                                 3739                        3818                     2.11
5    Salah Al-Din with Al-Wahada and Omar Al-Mukhtar (Al-Shejaiya)    7110                        7406                     4.16
6    Omar Al-Mukhtar with Al-Jalaa (Al-Saraya)                        10161                       10161                    0.00
9    Salah Al-Din with road "8" (Dolah)                               6879                        6879                     0.00
11   Salah Al-Din with Al-Karama (Netzareem)                          3913                        4311                     10.18
13   Salah Al-Din with Al-Nuseirat                                    6218                        6671                     6.42
15   Salah Al-Din with Deir Al-Balah                                  2443                        2559                     4.77
17   Salah Al-Din with Abasan (Bani Suhaila)                          4502                        5688                     1.67
18   Salah Al-Din with Taha Hussein (Khrbat Al-Adas)                  1736                        1753                     0.95
22   Al-Rasheed with Palestine (Al-Zahra)                             2323                        2655                     14.30
23   Al-Rasheed with Al-Nuseirat                                      495                         528                      6.65
24   Al-Rasheed with Akeala                                           495                         528                      6.65
25   Al-Rasheed with Khan Younis                                      572                         605                      5.75

References

Abu Zarifa, H. K. (2016). Traffic impact of planned Gaza seaport on major roads in Gaza Strip. Gaza, Palestine: Islamic University of Gaza.
Adown. (2017). Gaza Strip map. Retrieved April 20, 2017, from http://adown.tk/gaza-strip-map/
Almasri, E. H. (2012, October 15-16). Improving traffic performance by coordinating traffic signals using TRANSYT7F model (Eljalaa arterial road in Gaza City as a case study). Proc. of the Fourth International Engineering Conference. Gaza, Palestine: Islamic University of Gaza.
Almasri, E. H., Sarraj, Y. R., & El Jamassi, A. (2010). Transportation challenges in developing cities: a practice from Rafah, Palestine. Alexandria Engineering Journal, 49(3), 283-295.
Dar Al-Handasah. (1999). The Greater Amman urban transport study. Amman, Jordan.

[Figure 4: Impact of Gaza seaport construction on major intersections in Gaza Strip.]

Euro-Mid Observer. (2014). Gaza seaport: a windowpane to the world. Geneva: Euro-Mid Observer for Human Rights.
ITE. (2012). Trip generation manual. s.l.: Institute of Transportation Engineers.
Lu, L., W. J. (2017). Network capacity with probit-based stochastic user equilibrium problem. PLoS ONE, 12(2): e0171158. Retrieved August 24, 2017, from https://doi.org/10.1371/journal.pone.0171158
Ministry of Planning. (2006). Regional plan: southern governorates 2005-2015. Gaza: Palestine News Agency WAFA.
Ministry of Transportation. (2005). Maritime transport sector, Palestinian. Palestine: Ministry of Transportation.
O'Flaherty, C. A. (1997). Transport planning and traffic engineering. London: Butterworth-Heinemann Elsevier.
Paltoday.ps. (2017). Gaza map. Palestine Today. Retrieved February 23, 2017, from https://paltoday.ps/ar/post/233632
Parsons Brinkerhoff International, Inc. (2001). Gaza seaport study and assessment: final report. Gaza: submitted to USAID West Bank & Gaza.
PCBS. (2010). Annual census. Palestinian Central Bureau of Statistics (PCBS).
Ramallah: Palestinian Central Bureau of Statistics, Palestinian National Authority.
PCBS. (2014). Demographic and transportation statistics. Palestinian Central Bureau of Statistics, Palestinian National Authority.
Sarraj, Y. R. (2001). Behavior of road users in Gaza, Palestine. Journal of the Islamic University of Gaza, 9(2), 85-101.
Smaling, D., Velsink, H., Groenveld, R., & Booy, N. (1996). Gaza sea port. Delft: Delft University of Technology.
Tsinker, P. G. (2004). Port engineering. USA: Wiley & Sons.

Hussein Kh. Abu Zarifa. BSc degree in civil engineering, 2013; MSc degree in civil engineering (infrastructure), 2016. Worked as a project manager of several infrastructure projects in the Gaza Strip. Research interests include traffic management and road safety. Member of the Association of Engineers in Palestine.

Yahya R. Sarraj. BSc degree in civil engineering, 1985; MSc degree in transport planning and engineering, 1987; PhD degree in traffic management, 1996. Rector of the University College of Applied Sciences and Technology (UCAS); associate professor of transportation, Islamic University of Gaza; senior consultant, RAI Consult, Gaza. Has published several papers on the behavior of road users in the Gaza Strip, estimating passenger car units at signalized intersections, and developing a road accident recording system in Palestine. Member of the Association of Engineers in Palestine and member of the Board of Commissioners of the Independent Commission for Human Rights in Palestine.

Journal of Engineering Research and Technology, Volume 5, Issue 1, March 2018

Turbo-Coded V-BLAST/MAP MIMO System

Alaa H. Al Habbash¹, Ammar M. Abu-Hudrouss²
¹Faculty of Engineering, Islamic University of Gaza, Palestine.
²Associate Professor of Communications, Islamic University of Gaza, Palestine.

Abstract—Multiple-input multiple-output (MIMO) systems can greatly increase spectral efficiency.
There is therefore a need to design detection algorithms that can recover the transmitted signals with acceptable complexity, combined with a suitable coding scheme to achieve high performance. In this paper, several MIMO detection techniques with turbo coding are introduced and evaluated in terms of bit error rate, and a V-BLAST with maximum a posteriori (MAP) detection technique is introduced for a turbo-coded MIMO system.

Index Terms—MAP; V-BLAST; MIMO; MMSE; turbo coding

I. Introduction

Multiple-input multiple-output (MIMO) systems are an integral technology in the implementation of fourth- and fifth-generation wireless systems. The advantages of MIMO systems include high capacity, improved error performance, and interference suppression [1]. However, the high complexity associated with MIMO technology is the main limitation for some applications [2]. It is known that the computational complexity of any optimal joint detection and decoding scheme for MIMO systems grows exponentially with the burst size [2-4]. To solve the detection problem in MIMO systems, research has focused on suboptimal receiver models that are powerful in terms of error performance and at the same time practical for implementation [2, 3]. Different transmission techniques can be used with MIMO systems, such as space-time codes (STC) [5] and the vertical Bell Labs space-time architecture (V-BLAST) [6]. STCs are used for diversity gain, while V-BLAST is used for capacity advantage. Many detection techniques have been introduced for spatially multiplexed MIMO channels [7-11]. Vertical Bell Labs space-time architecture/maximum a posteriori (V-BLAST/MAP) is a symbol detection algorithm for spatially multiplexed MIMO channels that applies the MAP rule in the detection process of the V-BLAST algorithm [9].
This leads to a substantial performance enhancement: the symbol error rates of V-BLAST/MAP are close to those of the optimal but complex maximum likelihood (ML) scheme, while its complexity remains close to that of V-BLAST [9]. Low-density parity-check (LDPC) codes [12] and turbo codes [13] are considered near-optimal in their bit error rate performance. Turbo coding uses multiple convolutional coders and a random interleaver to counter, or at least minimize, the effects of burst errors. Turbo codes can operate close to the Shannon limit, making them among the most efficient codes with reasonable complexity [2, 14]. Recently, promising research has considered applying the iterative ("turbo processing") principle to improve the performance of multiple-antenna systems; one of the resulting classes of MIMO systems is referred to as turbo-V-BLAST [15, 16]. Turbo codes achieve their best performance when the fading coefficients at each coded bit in a codeword are independent. In this paper, the symbol error rates of the V-BLAST algorithm with zero-forcing (ZF) and minimum mean square error (MMSE) detection are investigated. The performance of the turbo-V-BLAST algorithm with ZF and MMSE detection is also evaluated, and the V-BLAST/MAP detection technique is used with turbo coding. The bit error rate performance of this scheme is investigated through simulation in MATLAB.

II. V-BLAST/MAP Detection Method

The error performance and decoding complexity of any spatial multiplexing MIMO receiver should always be taken into consideration. The aim of this study is to design a structure that is powerful in terms of error performance and practical to implement. When a MIMO system is used for multiplexing gain, the maximum likelihood (ML) receiver suffers from very high computational complexity [2]. Suboptimal receiver models are used to reduce the high decoding complexity in MIMO systems.
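As a reference point for the suboptimal linear receivers discussed here, a minimal NumPy sketch of ZF and MMSE detection for the model r = Ha + v is shown below. The matrix sizes, BPSK alphabet, and noise level are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 4                       # 4x4 MIMO, as in the paper's simulations
sigma2 = 0.1                    # illustrative noise variance

# i.i.d. zero-mean complex Gaussian channel, unit variance per entry
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
a = rng.choice([-1.0, 1.0], size=M) + 0j        # BPSK symbols for simplicity
v = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = H @ a + v

# Zero-forcing: invert the channel, ignoring the noise
a_zf = np.linalg.pinv(H) @ r

# MMSE: regularize the inversion by the noise variance
a_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(M), H.conj().T @ r)

print(np.sign(a_zf.real), np.sign(a_mmse.real))  # hard symbol decisions
```

The only difference between the two detectors is the `sigma2 * np.eye(M)` regularization term, which is what gives MMSE its advantage at low SNR.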
Research in this area has focused on developing algorithms that have error performance close to ML while being practical to implement. The V-BLAST receiver is an example of these suboptimal receivers; it uses a layered architecture and applies successive cancellation by dividing the channel vertically [2].

[Figure 1: Uncoded V-BLAST system.]

The maximum a posteriori (MAP) rule is used in detection to minimize the probability of error, $P_e$ [15]. The MAP rule is defined as

$$\hat{a} = \arg\max_{a' \in A^M} \Pr(a' \mid r \text{ is received}) \qquad (1)$$

The MAP rule offers optimal error performance; nevertheless, it has exponential complexity. V-BLAST/MAP combines the V-BLAST and MAP rules. The algorithm has a layered structure similar to V-BLAST, but uses a different technique, inspired by the MAP rule, for ordering the channel processing. As a result of the combination, V-BLAST/MAP has higher complexity than V-BLAST, but with a substantial performance enhancement. Simulations show that V-BLAST/MAP attains symbol error rates only marginally worse than the optimal maximum likelihood (ML) scheme, while having the much lower complexity inherited from V-BLAST [17].

III. System Model

In this study, an M × N MIMO channel model is considered. In each transmission interval, a vector $a = (a_1, a_2, \cdots, a_M)^T$ of modulated signals is sent and a vector $r = (r_1, r_2, \cdots, r_N)^T$ is received. We assume an input-output relationship of the form

$$r = Ha + v, \qquad (2)$$

where H is the N × M matrix representing the channel, given by

$$H = \begin{bmatrix} h_{11} & \cdots & h_{1M} \\ h_{21} & \cdots & h_{2M} \\ \vdots & \ddots & \vdots \\ h_{N1} & \cdots & h_{NM} \end{bmatrix}, \qquad (3)$$

where $h_{ij}$ is the complex channel gain between transmitter $j$ and receiver $i$.
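With equal-prior symbols and Gaussian noise, the MAP rule in (1) reduces to choosing the candidate vector that minimizes $\|r - Ha'\|^2$. A brute-force sketch over a small alphabet (illustrative sizes, not the paper's code) makes the exponential $|A|^M$ search explicit:

```python
import itertools
import numpy as np

def map_detect(r, H, alphabet):
    """Exhaustive MAP detection: with uniform priors and Gaussian noise,
    maximizing Pr(a' | r) is equivalent to minimizing ||r - H a'||^2.
    Complexity is |alphabet|**M, i.e. exponential in the antenna count."""
    M = H.shape[1]
    best, best_metric = None, np.inf
    for cand in itertools.product(alphabet, repeat=M):
        cand = np.array(cand)
        metric = np.linalg.norm(r - H @ cand) ** 2
        if metric < best_metric:
            best, best_metric = cand, metric
    return best

rng = np.random.default_rng(1)
M = N = 3
alphabet = [-1 + 0j, 1 + 0j]                      # BPSK for brevity
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
a = np.array([1, -1, 1], dtype=complex)
r = H @ a                                         # noiseless: detector must recover a
print(map_detect(r, H, alphabet))
```

This exponential cost is exactly what motivates the V-BLAST/MAP compromise described above, which keeps the layered successive-cancellation structure instead of searching all $|A|^M$ candidates.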
Each entry of H is assumed to be an independent, identically distributed (i.i.d.) zero-mean complex Gaussian random variable with unit variance [18], and $v = (v_1, v_2, \cdots, v_N)^T$ is the white Gaussian noise vector; we assume throughout the paper that the complex elements of $v$ are drawn from an i.i.d. Gaussian distribution, $v_i \sim \mathcal{CN}(0,1)$. Perfect channel state information (CSI) is assumed only at the receiver side, which is practical for a relatively slowly time-varying channel [18]. The V-BLAST system was introduced in [19]. Figure 1 shows the transmitter and receiver of an uncoded V-BLAST system with M transmit and N receive antennas. The bit stream $b$ is demultiplexed into M sub-streams: $b_1, b_2, \cdots, b_M$. These sub-streams are mapped to complex symbols $s_1, s_2, \cdots, s_M$ and transmitted from $TX_1, TX_2, \cdots, TX_M$, respectively. The V-BLAST algorithm uses a layered structure. The layering is horizontal, as all the symbols of a certain stream are transmitted through the same antenna. On the transmitter side, the streams are transmitted independently; the M streams are separated and then modulated separately. On the receiver side, one of the V-BLAST detectors, ZF, MMSE or V-BLAST/MAP, is used [8]. The input of the detector is the received vector $r_1, r_2, \cdots, r_N$ and the output is an estimate of the transmitted symbols, denoted $\hat{s}_1, \hat{s}_2, \cdots, \hat{s}_M$. The estimated symbol vector is demodulated and multiplexed to recover the transmitted data bits. Figure 2 shows the V-BLAST process for a transmitter with 4 antennas. After demultiplexing and modulation of the bit stream $b$, the symbol vectors transmitted from modulators 1, 2, 3 and 4 are denoted $s_1, s_2, s_3$ and $s_4$, respectively. $s_1$ can be expressed as $[s_{11}, s_{12}, s_{13}, s_{14}]$; similarly, $s_2, s_3$ and $s_4$ can be expressed as $[s_{21}, s_{22}, s_{23}, s_{24}]$, $[s_{31}, s_{32}, s_{33}, s_{34}]$, and $[s_{41}, s_{42}, s_{43}, s_{44}]$. Figure 3 shows the basic block diagram of a coded V-BLAST transmitter with M transmit antennas.
The bit stream $b$ is demultiplexed into M sub-streams, $b_1, b_2, \cdots, b_M$, and each sub-stream is coded separately by a rate-1/2 turbo code consisting of two convolutional encoders. The bits of each sub-stream are encoded by the first encoder, and the same bits are encoded by the second encoder after they have been interleaved.

[Figure 2: Uncoded V-BLAST vectors at the transmitter. Figure 3: Coded V-BLAST transmitter. Figure 4: Codeword interleaving at the transmitter.]

The output of the turbo encoder consists of the systematic bits $c_{ij}$ of the first encoder and the parity bits $c_{pij}$. The parity bits of the first encoder are punctured using a puncturing vector based on the pattern $p_p = [1, 0]$, and the parity bits of the second encoder are punctured using a puncturing vector based on the pattern $p_p' = [0, 1]$. The bits $c_1, c_2, \cdots, c_M$ are interleaved using a pseudo-random interleaver, and the interleaved bits $c_1', c_2', \cdots, c_M'$ are mapped to complex symbols $s_1, s_2, \cdots, s_M$ using K-ary QAM modulation. Finally, these symbols are transmitted from $TX_1, TX_2, \cdots, TX_M$, respectively. Figure 4 shows the codeword interleaving at the transmitter. Figure 5 shows the basic block diagram of a coded V-BLAST receiver with N receiving antennas. After the vectors $r_1, r_2, \cdots, r_N$ are received, estimates of the transmitted symbols, $\hat{s}_1, \hat{s}_2, \cdots, \hat{s}_M$, are calculated by one of the detection schemes (ZF, MMSE, V-BLAST/ZF, V-BLAST/MMSE, V-BLAST/ZF/MAP or V-BLAST/MMSE/MAP). After demodulation, the output bits $c_1', c_2', \cdots, c_M'$ are de-interleaved to undo the interleaving applied at the coded V-BLAST transmitter. The output bits of each de-interleaver are then arranged and separated into two bit streams, $y_1$ and $y_2$: the first stream contains the systematic bits together with the parity bits of the first encoder, and the second contains the de-interleaved systematic bits.
This compensates for the interleaving between the two constituent encoders; the second stream also carries the parity bits of the second encoder. The bit streams are then ready to be fed to the decoders. The detailed detection process can be found in [8].

[Figure 5: Coded V-BLAST receiver.]

IV. Performance Analysis

All simulations were run in MATLAB 2013 on an i7 processor with 4 GB of RAM. The bit error rate performance of the system was simulated for different values of the signal-to-noise ratio (SNR); the SNR is a figure of merit measured at the receiver side. The schemes under investigation are the BLAST schemes (uncoded V-BLAST and V-BLAST coded with a turbo code), while the detection strategies used in this paper are zero-forcing, MMSE, V-BLAST/ZF, V-BLAST/MMSE, V-BLAST/ZF/MAP, V-BLAST/MMSE/MAP, V-BLAST/ZF with ordering, and V-BLAST/MMSE with ordering. Different frame lengths for the turbo decoder are also considered in the simulation. The channel encoder is a rate-1/2 turbo encoder built from two punctured 4-state convolutional encoders. The convolutional encoder is punctured with the pattern in Table 1 and has generator polynomials (7, 5) in octal; see Figure 6. Table 2 shows the rate-1/2 convolutional code used in this paper. The channel decoder is a log-MAP decoder and the modulation is 16-QAM. The channel is Rayleigh fading with additive white Gaussian noise (AWGN). For each frame, a new random realization of the channel matrix H is used. The number of frames is 10,000 and each frame has 16 bits. A frame is considered to be received incorrectly if any single bit of the frame is wrongly decoded.

[Figure 6: Illustration of how the generator polynomials are determined. Figure 7: SER performance for different frame sizes of turbo/normal MMSE. Figure 8: SER performance for coded V-BLAST/ZF using a turbo code, with and without best ordering.]
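A minimal sketch of the building blocks just described: a memory-2 (4-state) convolutional encoder with octal generators (7, 5), followed by the puncturing patterns of Table 1 ([1 1] systematic, [1 0] and [0 1] for the two parity streams). This is an illustrative reconstruction, not the authors' MATLAB code; in particular, practical turbo codes use recursive systematic encoders, while this sketch uses a plain feedforward one (and a toy reversal interleaver) to keep the generator-polynomial arithmetic visible:

```python
# Rate-1/2 turbo-code building blocks: a memory-2 (4-state) convolutional
# encoder with octal generators (7, 5), plus the puncturing of Table 1.
G0, G1, MEMORY = 0b111, 0b101, 2      # generators 7 and 5 in octal

def conv_parity(bits, gen):
    """Parity stream of one generator polynomial over a shift register."""
    state, out = 0, []
    for b in bits:
        reg = (b << MEMORY) | state               # newest bit + 2 memory bits
        out.append(bin(reg & gen).count("1") % 2) # XOR of the tapped positions
        state = reg >> 1
    return out

def puncture(bits, pattern):
    """Keep bit i only where pattern[i % len(pattern)] is 1."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
par1 = conv_parity(msg, G0)           # parity stream of encoder 1
par2 = conv_parity(msg[::-1], G0)     # encoder 2 sees interleaved bits
                                      # (toy reversal interleaver, for illustration)

codeword = (puncture(msg,  [1, 1]) +  # systematic bits: all kept
            puncture(par1, [1, 0]) +  # odd-position parity of encoder 1
            puncture(par2, [0, 1]))   # even-position parity of encoder 2
print(len(msg), "->", len(codeword))  # 8 message bits -> 16 coded bits, rate 1/2
```

The impulse response of `conv_parity` for generator 7 (binary 111) is 1, 1, 1, 0, matching the $1 + D + D^2$ polynomial of Table 2, and the alternating parity puncturing is what brings the overall rate back to 1/2.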
Figure 7 compares the symbol error rate performance for different frame sizes of turbo/normal MMSE without interference nulling and interference cancellation. It can be seen that the larger the frame size, the better the SER performance. Figures 8 and 9 compare the symbol error rate performance of coded V-BLAST using a turbo code without ordering and with best-order architectures, using a 4×4 MIMO system and 16-QAM modulation; detection is done with the ZF and MMSE techniques. From Figures 8 and 9, we can see that turbo/V-BLAST/ZF with the best-order architecture performs better than turbo/V-BLAST/ZF without ordering. For example, at SER = $10^{-1}$, the ordered coded system has a gain of 3.3 dB over the system without symbol ordering, while the gain from symbol ordering for the turbo code with MMSE detection is 5.3 dB at a symbol error rate of $10^{-2}$. Figure 10 compares the symbol error rate performance of turbo/normal ZF without interference nulling and interference cancellation, turbo/V-BLAST/ZF, and the proposed turbo/V-BLAST/ZF/MAP technique with 4×4 antennas and 16-QAM modulation. It can be seen from Figure 10 that turbo/V-BLAST/ZF/MAP performs best among the three techniques, and turbo/V-BLAST/ZF performs better than turbo/normal ZF without interference nulling and interference cancellation. For example, at SER = $10^{-1}$, the turbo/V-BLAST/ZF system has a coding gain of 4 dB over the turbo/normal ZF system, while the turbo/V-BLAST/ZF/MAP system gains 1.6 dB over the turbo/V-BLAST/ZF system at a symbol error rate of $10^{-2}$. Figure 11 compares the symbol error rate performance of turbo/normal MMSE without interference nulling and interference cancellation, turbo/V-BLAST/MMSE, and the proposed turbo/V-BLAST/MMSE/MAP technique with 4×4 antennas and 16-QAM modulation.
From Figure 11, we can see that turbo/V-BLAST/MMSE/MAP performs best among the three techniques, and turbo/V-BLAST/MMSE performs better than turbo/normal MMSE without interference nulling and interference cancellation. For example, at SER = $10^{-2}$, the turbo/V-BLAST/MMSE system has a coding gain of 5 dB over the turbo/normal MMSE system, while the turbo/V-BLAST/MMSE/MAP system gains 1 dB over the turbo/V-BLAST/MMSE system at a symbol error rate of $10^{-3}$.

[Figure 9: SER performance for coded V-BLAST/MMSE using a turbo code, with and without best ordering. Figure 10: SER performance for coded normal ZF, V-BLAST/ZF and V-BLAST/ZF/MAP using turbo coding. Figure 11: SER performance for coded normal MMSE, V-BLAST/MMSE and V-BLAST/MMSE/MAP using turbo coding. Figure 12: SER performance for uncoded V-BLAST/MMSE and coded V-BLAST/MMSE using a turbo code.]

Figure 12 compares the symbol error rate performance of V-BLAST/MMSE without coding and V-BLAST/MMSE with a turbo code, with 4×4 antennas and 16-QAM modulation. From Figure 12, we can see that coded V-BLAST/MMSE performs better than uncoded V-BLAST/MMSE; for example, at SER = $10^{-2}$, the turbo-coded system has a coding gain of 4.3 dB over the uncoded V-BLAST/MMSE system.

V. Conclusion

This paper has addressed a number of important issues associated with turbo-coded MIMO detection techniques. In particular, it has provided a detailed description, analysis and comparison of the SER performance of several detection techniques, and has recommended those promising techniques that are potentially amenable to hardware implementation.
In this paper, a system design combining a turbo code with the V-BLAST/MAP detection technique, "turbo/V-BLAST/MAP", was successfully implemented. The turbo/V-BLAST system was also evaluated with different detection techniques. Comparing these schemes shows that the MMSE algorithm performs slightly better than the ZF algorithm; likewise, V-BLAST/MMSE performs better than V-BLAST/ZF, and V-BLAST/MMSE/MAP performs better than V-BLAST/ZF/MAP. Using V-BLAST/MAP with either ZF or MMSE improves the performance of the system significantly. The main conclusion of this paper is that turbo-coded V-BLAST/MAP offers significantly better SER performance than the other V-BLAST detection techniques at a modest increase in complexity.

References

[1] N. H. M. Adnan, I. M. Rafiqul, and A. Z. Alam, "Massive MIMO for fifth generation (5G): opportunities and challenges," in Computer and Communication Engineering (ICCCE), 2016 International Conference on, 2016, pp. 47-52: IEEE.
[2] N. B. Sinha, R. Bera, and M. Mitra, "Capacity and V-BLAST techniques for MIMO wireless channel," Journal of Theoretical & Applied Information Technology, vol. 14, 2010.
[3] M. Salemdeeb and A. Abu-Hudrouss, "Performance and capacity comparison between hybrid BLAST-STBC, V-BLAST and STBC systems," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 10, pp. 12-22, 2012.
[4] A. Elshokry and A. Abu-Hudrouss, "Performance evaluation of MIMO spatial multiplexing detection techniques," Journal of Al Azhar University-Gaza (Natural Sciences), vol. 14, pp. 47-60, 2012.
[5] V. Tarokh, N. Seshadri, and A. R. Calderbank, "Space-time codes for high data rate wireless communication: performance criterion and code construction," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 744-765, 1998.
[6] P. W. Wolniansky, G. J. Foschini, G. Golden, and R. A.
Valenzuela, "V-BLAST: an architecture for realizing very high data rates over the rich-scattering wireless channel," in Signals, Systems, and Electronics, 1998. ISSSE 98. 1998 URSI International Symposium on, 1998, pp. 295-300: IEEE.
[7] G. Golden, C. Foschini, R. A. Valenzuela, and P. Wolniansky, "Detection algorithm and initial laboratory results using V-BLAST space-time communication architecture," Electronics Letters, vol. 35, no. 1, pp. 14-16, 1999.
[8] S. Yadav, S. Jani, and B. Pal, "Analysis of various symbol detection techniques in multiple-input multiple-output system (MIMO)," arXiv preprint arXiv:1204.5839, 2012.
[9] Z. Luo and D. Huang, "Integrated MAP detector for V-BLAST systems," in Information Theory, 2008. ISIT 2008. IEEE International Symposium on, 2008, pp. 2668-2672: IEEE.
[10] M. Di Renzo, H. Haas, A. Ghrayeb, S. Sugiura, and L. Hanzo, "Spatial modulation for generalized MIMO: challenges, opportunities, and implementation," Proceedings of the IEEE, vol. 102, no. 1, pp. 56-103, 2014.
[11] J. Choi, D. J. Love, D. R. Brown III, and M. Boutin, "Quantized distributed reception for MIMO wireless systems using spatial multiplexing," IEEE Trans. Signal Processing, vol. 63, no. 13, pp. 3537-3548, 2015.
[12] A. Lulu and A. M. Abu-Hudrouss, "LDPC code construction using randomly permutated copies of parity check matrix," Al-Najah University Journal for Research - Natural Sciences, vol. 38, no. 1, 2018.

Table 1: Puncturing patterns.
Stream                                  | Puncturing pattern
Systematic output of the first encoder  | [1 1]
Parity output of the first encoder      | [1 0]
Parity output of the second encoder     | [0 1]

Table 2: Parameters of the rate-1/2 convolutional code.
Memory size | Generator polynomial (octal) | Generator polynomial (binary)
2           | [7, 5]                       | [111, 101]

[13] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo-codes. 1," in Communications, 1993. ICC '93 Geneva. Technical Program, Conference Record, IEEE International
Conference on, 1993, vol. 2, pp. 1064-1070: IEEE.
[14] A. Nooraiepour and T. M. Duman, "Randomized turbo codes for the wiretap channel," in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017, pp. 1-6: IEEE.
[15] R. Johannesson and K. S. Zigangirov, Fundamentals of Convolutional Coding. John Wiley & Sons, 2015.
[16] I. G. P. Astawa, Y. Kurniawan, A. Pratiarso, M. Yoedy, B. Hendy, and Z. Ahmad, "Performance of single-RF based MIMO-OFDM 2×2 using turbo code," in Radar, Antenna, Microwave, Electronics, and Telecommunications (ICRAMET), 2017 International Conference on, 2017, pp. 83-88: IEEE.
[17] D. Na, G. Pin-Biao, and C. Ning, "A low-complexity iterative receiver scheme for turbo-BLAST system," in Signal Processing (ICSP), 2010 IEEE 10th International Conference on, 2010, pp. 1548-1551: IEEE.
[18] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wireless Personal Communications, vol. 6, no. 3, pp. 311-335, 1998.
[19] G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Technical Journal, vol. 1, no. 2, pp. 41-59, 1996.

Alaa H. Al Habbash was born in Riyadh, Saudi Arabia, in 1985. He received the B.Sc. degree and the M.Sc. degree in telecommunication engineering from the Islamic University of Gaza, Palestine, in 2008 and 2013, respectively. He is currently a communication engineer at the Ministry of Communication and Information Technology in Gaza, Palestine. His current research interests are space-time coding, turbo codes, and spatial modulation.

Ammar M. Abu-Hudrouss was born in Khan Younis, Palestine, in 1977. He received the B.Sc. degree from the Islamic University of Gaza, Palestine, in 1995. He received the M.Sc. degree in telecommunication engineering and the Ph.D.
degree in communication engineering from Birmingham University, Birmingham, U.K., in 2003 and 2007, respectively. He was a visiting researcher at the University of York from 9/2012 to 9/2013; the research visit was funded by the Arab Fund for Social and Economic Development as part of a Distinguished Scholar Award. He is currently an associate professor at the Islamic University of Gaza, Palestine. His current research interests are spatial modulation, space-time coding, turbo codes, and low-density parity-check codes.

Journal of Engineering Research and Technology, Volume 1, Issue 2, June 2014

Improving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator

Mahmoud M. Al Ashi¹, Iyad Abu Hadrous², and Hatem Elaydi³

Abstract—This paper introduces an intelligent adaptive control strategy called the neural online torque compensator (NOTC), based on the learning capabilities of artificial neural networks (ANNs), in order to compensate for the structured and unstructured uncertainties in the parameters of a robotic manipulator trajectory tracking control system. A two-layered neural perceptron was designed and trained using an error backpropagation algorithm (EBA) to learn the difference between the actual torques generated by the joints of a 2-DOF robotic arm and the torques generated by a previously designed computed-torque disturbance rejection controller. An object-oriented approach based on Modelica was adopted to develop a model for the whole robotic arm trajectory tracking control system. The simulation results obtained demonstrate the effectiveness of the NOTC in improving the performance of the computed-torque disturbance rejection controller by compensating for both structured and unstructured uncertainties.
Index Terms—trajectory tracking control, robotic manipulator, neural network control, computed torque method, backpropagation algorithm

I. Introduction

Robotic manipulators are widely used in many industrial automation applications, such as car manufacturing, space exploration, search and rescue, and waste treatment in nuclear plants, in addition to their applications in medical surgery. Owing to their increasingly versatile and complex tasks, the development of intelligent control mechanisms to optimize the trajectory tracking capability of robotic manipulators has become a necessity and an important research area. One of the commonly used methods for controlling the trajectory tracking of robotic manipulators is the computed-torque method [1]. This method was used in [2] as a disturbance torque rejection method to cancel out the coupled torques generated by the joints of a 2-DOF robotic arm due to its inverse dynamics. Despite the proven effectiveness of the computed-torque method in improving the trajectory tracking of the robotic arm, it performed poorly in compensating for structured and unstructured uncertainties. A structured uncertainty means that the mathematical models for both the manipulator and the actuators are accurate, but the values of the parameters used in these models are imprecise. An unstructured uncertainty represents any unmodeled dynamics, such as gear frictional forces or sudden changes in the payloads held by the end-effector during online operation [3]. Several adaptive approaches have been used to maintain accurate trajectory tracking of robotic manipulators in the presence of structured uncertainties [4, 5]. However, adaptive control approaches may not be effective in compensating for unstructured uncertainties.
This led to the development of more intelligent adaptive control strategies that help maintain the trajectory tracking capability of robotic manipulators even in the presence of unstructured uncertainties. Artificial neural networks (ANNs) are one of the modern intelligent tools that have been utilized in position trajectory tracking applications of robotic manipulators, owing to their simple structure and model as well as their universal complex-function approximation and learning capabilities, gained through simple training algorithms [6]. The primary goal of incorporating a neural network in a robotic manipulator trajectory tracking control system is either to learn only the unknown factors in the system, such as the structured and unstructured uncertainties, in a so-called model-based control system configuration, or to identify the whole plant dynamics and adaptively compensate for any online changes in those dynamics in a so-called non-model-based control system configuration [7]. In [8], a radial basis neural network was used in a new adaptive control approach that aims to improve the transient response of a robotic manipulator by formulating an objective performance bound; this bound enables the robot end-effector to track a desired setpoint or trajectory within a unified bound. In this paper, a two-layered neural network is designed and utilized in a model-based trajectory tracking control system configuration involving a computed-torque controller that was designed and tested on a 2-DOF robotic manipulator

————————————————
1. Mahmoud M. Al Ashi is a postgraduate researcher with the Electrical Engineering Department, Islamic University of Gaza, Gaza, Palestine.
2. Iyad Abu Hadrous is with the Department of Engineering Professions, Palestine Technical College, Deir El Balah, Gaza, Palestine.
3. Hatem Elaydi is with the Electrical Engineering Department, Islamic University of Gaza, Gaza, Palestine.
in a previous work [2]. The neural network will be trained online using the steepest-descent error backpropagation algorithm to compensate for the structured and unstructured uncertainties that caused the poor performance of the computed-torque controller. This paper is organized as follows: Section 2 consists of two subsections, the first of which shows how the neural network is incorporated into the Dymola model of the computed-torque trajectory tracking control system for a 2-DOF robotic arm, while the second explains the steepest-descent error backpropagation algorithm used for the online training of the neural network. In Section 3, the results of simulating the neural online torque compensator are analyzed and compared with those obtained by the computed-torque control system. The paper ends with concluding remarks and future work in Section 4.

II. Neural Network Controller

A. System configuration

A two-layered neural network was designed and incorporated into the computed-torque trajectory tracking control system of the 2-DOF robotic arm designed in [2], in order to learn the uncertainties involved in the system during its operation. The neural network contains two neurons in the output layer with a pure-linear activation function and three neurons in the hidden layer with a log-sigmoid activation function, as shown in Figure 1.

[Figure 1: Structure of the neural network controller. Figure 2: Dymola model of the trajectory tracking control system.]

Figure 2 shows the configuration of the computed-torque trajectory tracking control system involving the neural network controller. The blocks $\hat{D}$ and $\hat{H}$ represent the estimated inertia matrix and the estimated centrifugal, Coriolis and gravitational forces, respectively.
these estimated matrices were computed using the nonlinear inverse dynamics equations of the 2-dof robotic arm. the designed neural network controller receives the actual angular positions of the joints as input signals and generates two corresponding output signals to compensate for the torque difference caused by the structured and unstructured uncertainties. b learning algorithm as mentioned earlier, the designed neural network must be trained to learn the changing behavior of the system dynamics in the presence of both structured and unstructured uncertainties. the algorithm used for training the designed neural network in this paper is the steepest descent error backpropagation algorithm. this algorithm continuously changes the weights and biases of the neural network according to a specified error performance index function, which is usually called the teaching signal of the network [6]. figure 3 shows a flow chart summarizing the steps of the steepest descent error backpropagation algorithm used to train the neural network. figure 3: flow chart of the steepest descent algorithm. E(k) in the flow chart denotes the value of the error teaching signal at the k-th iteration. in this paper, we take the teaching signal of the designed neural network to be the vector containing the outputs of the linear pd controllers of the joint actuating motors shown in figure 2. the weights and biases of the designed neural network controller are updated using the following rules: W^m(k+1) = W^m(k) − α S^m (a^{m−1})^T (1) b^m(k+1) = b^m(k) − α S^m (2) where W^m(k) and b^m(k) denote the weight matrix and bias vector of layer m of the network at the k-th iteration, respectively.
S^m is the sensitivity vector associated with layer m of the network and is computed using the following formulae: S^M = −2 Ḟ^M(n^M) e(k) (3) S^m = Ḟ^m(n^m) (W^{m+1})^T S^{m+1} (4) where M is the index of the output layer of the neural network, and Ḟ^M(n^M) is the diagonal matrix containing the derivatives of the activation functions used in the output layer with respect to their corresponding net inputs. likewise, Ḟ^m(n^m) denotes the diagonal matrix containing the derivatives of the activation functions used in layer m of the neural network. iii simulation results in order to test the effectiveness of the proposed neural network online torque compensator in compensating for both structured and unstructured uncertainties, the following simulation is conducted: the structured uncertainty is simulated by changing the values of the parameters of the motors and the robotic arm from the inaccurate values used in [2] to the accurate values given in table 1. the unstructured uncertainty is simulated by applying a constant disturbance torque of τd = 2 N·m on each joint at the time instant t = 5 sec, as shown in figure 2. figures 4 and 5 show the simulation results of the computed-torque control system without the neural network compensator in the presence of the structured and unstructured uncertainties, respectively. these figures clearly show that the computed-torque and feed-forward (ffc) controllers were capable of compensating for neither the parameter uncertainties nor the externally applied disturbance torques. this is because both the computed-torque and feed-forward (ffc) controllers are designed based on mathematical models involving the uncertain parameters of the robotic manipulator and the actuating motors. figure 4 shows a steady-state error in the horizontal parts of the figure of about 0.5 rad for the shoulder joint motor position and about 0.04 rad for the elbow joint motor position.
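the update rules of eqs. (1)-(4) can be sketched as a single training step in numpy; this is a hedged illustration of the steepest descent backpropagation step for the 3-hidden / 2-output network, assuming (as eq. (3) implies for a pure-linear output layer) that the output-layer derivative matrix is the identity. names and the learning rate value are illustrative assumptions:

```python
import numpy as np

def dlogsig(a):
    # derivative of log-sigmoid expressed via its output a = logsig(n)
    return a * (1.0 - a)

def backprop_step(q, e, W1, b1, W2, b2, alpha=0.05):
    """one steepest-descent update following eqs. (1)-(4).

    e is the teaching signal (here, the pd controller outputs).
    layer 1 = hidden (log-sigmoid), layer 2 = output (pure linear).
    """
    # forward pass
    a1 = 1.0 / (1.0 + np.exp(-(W1 @ q + b1)))
    a2 = W2 @ a1 + b2
    # eq. (3): output-layer sensitivity; Fdot is identity for purelin
    s2 = -2.0 * e
    # eq. (4): backpropagate the sensitivity through the hidden layer
    s1 = dlogsig(a1) * (W2.T @ s2)
    # eqs. (1)-(2): weight and bias updates
    W2 = W2 - alpha * np.outer(s2, a1)
    b2 = b2 - alpha * s2
    W1 = W1 - alpha * np.outer(s1, q)
    b1 = b1 - alpha * s1
    return W1, b1, W2, b2, a2
```

run repeatedly at each sample instant, this drives the network outputs so as to reduce the pd teaching signal, which is the mechanism the paper relies on.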
the steady-state errors in the vertical parts are about 0.5 rad for the shoulder joint motor position and about 0.12 rad for the elbow joint motor position. figure 4: simulation of computed torque control in the presence of structured uncertainties. figure 5: simulation of computed torque control in the presence of unstructured uncertainties. figure 6 shows the simulation results of the trajectory tracking control system involving the designed neural network online torque compensator in the presence of both structured and unstructured uncertainties. it can be observed from figure 6 that the designed neural network compensator has effectively learned both the uncertain and unknown factors involved in the system model and was capable of adaptively compensating for such uncertainties in order to maintain the desired trajectory tracking capability of the joint actuating motors, which is clearly reflected by the results in figure 6.

table 1: parameters for the motor and the robotic arm

dc-motor parameters (inaccurate / accurate):
R = 3.5 Ω / 4 Ω
L = 1.3 mH / 2.4 mH
Kb = Km = 0.047 / 0.053
Jm = 3.3 × 10⁻⁶ / 4.2 × 10⁻⁶
Bm = 0.0001 / 0.001

robotic arm parameters (inaccurate / accurate):
a1 = 0.25 m / 0.35 m
a2 = 0.15 m / 0.30 m
m1 = 1.95 kg / 2.3 kg
m2 = 0.93 kg / 1.2 kg

figure 6: simulation of trajectory tracking control system with notc. in figure 6, the steady-state errors in both the horizontal and vertical parts are approximately 0.006 rad for both the shoulder joint and elbow joint motor positions. figures 7 and 8 show another indicator of the effectiveness of the proposed neural network controller by plotting the total disturbance torque applied on the inertia of each actuating motor before and after adding the neural network compensator, respectively.
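table 1 quantifies the structured uncertainty as the gap between the inaccurate design values and the accurate ones; a small sketch (parameter values transcribed from table 1, function name illustrative) makes the relative parameter errors explicit:

```python
# (inaccurate, accurate) parameter pairs transcribed from table 1
params = {
    "R": (3.5, 4.0), "L": (1.3e-3, 2.4e-3), "Km": (0.047, 0.053),
    "Jm": (3.3e-6, 4.2e-6), "Bm": (1e-4, 1e-3),
    "a1": (0.25, 0.35), "a2": (0.15, 0.30),
    "m1": (1.95, 2.3), "m2": (0.93, 1.2),
}

def structured_uncertainty(name):
    # relative error (%) of the inaccurate design value w.r.t. the accurate one
    inacc, acc = params[name]
    return 100.0 * abs(acc - inacc) / acc
```

for example, the motor resistance is off by 12.5% and the second link length by 50%, which is the scale of model mismatch the computed-torque controller alone could not handle.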
figure 7: disturbance torques on the motors before adding the notc. figure 8: disturbance torques on the motors after adding the notc. figure 7 shows that before the time instant t = 5 sec, the total disturbance torques were about −0.05 N·m for the shoulder joint motor and about −0.005 N·m for the elbow joint motor. these disturbance torques were generated by the structured uncertainties represented by the uncertain parameters of both the arm and the actuating motors. at the time instant t = 5 sec, when the unstructured uncertainties began to take effect, the disturbance torques applied on the actuating motors increased to about −0.07 N·m for the shoulder joint motor and about −0.014 N·m for the elbow joint motor. when the designed neural network online compensator was added to the system, it successfully learned the disturbance torque differences and hence eliminated the total disturbance torque applied on each joint driving motor within a period of 9 msec, as shown in figure 8. figures 9 and 10 show the sum of the outputs of the linear pd and feed-forward (ffc) controllers for each motor control system before and after adding the neural network online torque compensator (notc), respectively. it is clearly seen in figure 9 that the pd and ffc controllers supply a non-decreasing control signal to each actuating motor. before the time instant t = 5 sec, the control signal supplied to the shoulder joint motor has the value of 3.8 and the control signal supplied to the elbow joint motor has the value of 0.4. after the time instant t = 5 sec, the pd and ffc controllers supplied a signal of about 5.4 to the shoulder joint motor and a signal of about 1.06 to the elbow joint motor.
figure 9: sum of the outputs of pd and ffc controllers before adding the notc. figure 10: sum of the outputs of pd and ffc controllers after adding the notc. these non-decreasing control signals supplied by the linear pd and ffc controllers to the actuating motors indicate their incapability of compensating for the structured and unstructured uncertainties in the system model. however, after adding the neural online torque compensator, the control signals supplied by the linear pd and ffc controllers to the joint driving motors began decreasing dramatically until they both reached the value of 0.006 within a period of about 9 msec, as shown in figure 10. iv conclusion and future work this paper presented an adaptive control strategy called neural online torque compensator (notc) to enhance the trajectory tracking capability of a robotic manipulator using a neural network. a two-layered neural network was designed and trained using the steepest descent error backpropagation algorithm to identify and compensate for the structured and unstructured uncertainties involved in a 2-dof robotic manipulator trajectory tracking control system driven by a computed-torque controller. an object-oriented model of the system components was developed using modelica and simulated on dymola. the simulation results demonstrated the effectiveness of the designed neural network controller in adaptively identifying and compensating for both structured and unstructured uncertainties. in this paper, we designed the neural network controller to handle the challenges faced by the computed-torque controller by training it to learn only the unknown factors and the uncertain parameters of the system model. such a control system is relatively simple for a 2-dof robotic manipulator.
however, for higher-dof manipulators, the design of a computed-torque controller requires the derivation of highly nonlinear and more complicated inverse dynamics equations. therefore, we aim in a future paper to avoid this problem by designing a neural network controller which works both as a system identifier to learn the complex inverse dynamics of the manipulator and as a compensator for the structured and unstructured uncertainties involved in the system. references [1] mark w. spong, seth hutchinson, m. vidyasagar, robot modeling and control, first edition, john wiley and sons, pp. 244-247. [2] m. alashi, h. elaydi, i. abu hadrous, "object-oriented modeling, simulation, and control of a robotic manipulator using pd-computed torque method", submitted for publication. [3] s. okuma, a. ishiguro, t. furuhashi, y. uchikawa, "a neural network compensator for uncertainties of robotic manipulators", proceedings of the 29th conference on decision and control, honolulu, hawaii, december 1990. [4] j. j. craig, adaptive control of mechanical manipulators, addison wesley publishing company, 1988. [5] j. e. slotine, w. li, "adaptive manipulator control: a case study", ieee transactions on automatic control, vol. ac-33, no. 11, november 1988. [6] martin t. hagan, howard b. demuth, mark beale, neural network design, pws publishing company. [7] seul jung, "neural network controllers for robot manipulators", phd dissertation, university of california, 1996. [8] xiang li, chien chern cheah, "adaptive neural network control of robot based on a unified objective bound", ieee transactions on control systems and technology, vol. 22, no. 3, april 2014. journal of engineering research and technology, volume 5, issue 2, june 2018 utilization of waste iron powder as fine aggregate in cement mortar bassam a. tayeh1 doha m.
al saffar2 1civil engineering department, the islamic university of gaza, gaza, palestine, btayeh@iugaza.edu.ps 2civil engineering department, al mansour university college, baghdad, iraq, doha.mothefer@muc.edu.iq abstract— this paper reports on the use of recycled iron powder (ip) in producing cement mortar under normal conditions. a flow table test was performed on the fresh mortar. destructive tests were conducted on cubes of the hardened mortar to obtain the compressive and flexural strengths of the cement mortar. the effects of adding 10%, 20%, 30%, and 40% of waste ip as a natural sand replacement were assessed and compared. the waste iron powder is of two types: ip, which shows a particle size distribution similar to that of the sand used in making the samples, and fine iron powder (fip), which contains finer particles. the compressive strength decreased with the increased amount of added ip in the mixtures, but it increased with the addition of 10% fip and decreased gradually with increased fip levels. by contrast, the flexural strength significantly increased with increased fip in the mixtures. recommendations regarding the applications of recycling to conserve resources and raw materials and prevent environmental pollution are provided. index terms— recycled materials, iron powder, waste materials, green product i introduction concrete is currently the most widely used construction material worldwide; its numerous applications include bridges, dams, house constructions, highway pavements, and sidewalks [1]. the use of manufactured fine aggregates has been increasing in the united states because good-quality natural sand is not economically viable in many areas. manufactured fine aggregates differ from natural sand in terms of grading, particle shape, and texture [2]. research and field experience have shown that good-quality concrete with proper workability and finishability can be realized using manufactured fine aggregates [3,4].
various image analysis techniques [5,6] have been used to determine the shape and texture of aggregates. the increasing amount of waste iron is one of the major environmental issues in the gaza strip. the large amount of waste originates from the industrial sector, and these wastes are deposited in landfills. the present study investigated the utilization of the large amount of iron waste from workshops, factories, and demolished buildings in building construction. consequently, new opportunities will be created with the use of this new material in construction, thereby improving many of the overall building parameters. moreover, the shortage of natural sand in several areas has been increasing annually [7-12]. in recent years, commonly recycled materials have been used for either building construction or road repairs, such as asphalt paving. these materials include wood, gypsum wallboard, building concrete, and metals. thus, in this study, waste iron powder (ip) was reused as a partial sand replacement in a mortar mixture to achieve higher compressive and flexural strengths than those of standard mortar mixes [7-13]. the municipal waste components in gaza city consist of organic matter (57%), paper and cardboard (15%), plastics (15%), iron metal (4%), glass (3%), and other materials (6%), as shown in fig. 1. this study mainly aimed to evaluate the use of waste ip in cement mortar mixtures and its effects on their properties. this objective was achieved as follows: first, the effects of adding different percentages of waste were examined and compared with those of a conventional mixture. afterward, the optimum percentage of waste ip added to the mortar mixture to enhance its properties was determined. this study also aimed to evaluate the use of waste ip as part of the solution to the environmental problems resulting from its disposal. the cement mortar performance was improved by using waste ip as a sand replacement in the mixture.
figure 1: municipal waste components. ii experimental investigation a testing program the compressive strength was determined on 50 mm cubic specimens [14], and the flexural strength on prisms with dimensions of 4 cm × 4 cm × 16 cm (1.6 in × 1.6 in × 6.3 in) [15]. a total of 69 cubes were used for the compression test: three cubes for each percentage at 7, 28, and 54 days. fifteen prisms were used for the flexural test, with three prisms for each percentage at 28 days only. b material and mixture proportion portland cement type i 42.5 n was used for all the mortar mixtures [16]. the chemical compositions are provided in table 1. two types of waste ip were utilized: ip, which showed the same sieve analysis as the sand [17], and fine ip (fip), which was passed through a 1.18 mm sieve and retained on sieve #200, as shown in fig. 2. the measured size distributions are presented in table 2 and fig. 3. nine mortar mixtures were cast, of which one was a conventional mixture, four contained ip, and four contained fip. three specimens were evaluated for each test, and the mean values were reported. the mixture proportions are given in table 3. this study considered different amounts of waste ip (0%, 10%, 20%, 30%, and 40%) as sand replacement. figure 2: waste iron powder used in this study.

table 1: chemical and physical properties of cement

chemical properties:
lime, cao: 66.07%
silica, sio2: 19.01%
alumina, al2o3: 4.68%
iron oxide, fe2o3: 3.2%
magnesia, mgo: 0.81%
so3: 1.17%
lime saturation factor (l.s.f): 1.08

physical properties (measured value / specification limit):
specific surface area (blaine method): 2900 cm2/g / min. 2800
initial setting time (vicat apparatus): 2:15 hrs:min / not less than 45 min
final setting time: 3:30 hrs:min / not more than 10 hrs
compressive strength, 3-day: 20.4 mpa / not less than 15 mpa
compressive strength, 7-day: 28.2 mpa / not less than 23 mpa
water demand: 0.26% / no limit

figure 3: grading curves of aggregates.

table 3: mixture composition of all experiment series, kg/m3

mixture id | cement | sand | ip | fip | % of replacement | w/c | water | flow (mm)
mr    | 500 | 1500 |  -  |  -  |  0 | 0.5 | 250 | 193
mip1  | 500 | 1350 | 150 |  -  | 10 | 0.5 | 250 | 187
mip2  | 500 | 1200 | 300 |  -  | 20 | 0.5 | 250 | 186
mip3  | 500 | 1050 | 450 |  -  | 30 | 0.5 | 250 | 185
mip4  | 500 |  900 | 600 |  -  | 40 | 0.5 | 250 | 183
mfip1 | 500 | 1350 |  -  | 150 | 10 | 0.5 | 250 | 190
mfip2 | 500 | 1200 |  -  | 300 | 20 | 0.5 | 250 | 186
mfip3 | 500 | 1050 |  -  | 450 | 30 | 0.5 | 250 | 187
mfip4 | 500 |  900 |  -  | 600 | 40 | 0.5 | 250 | 176

iii experimental results and discussion a fresh properties effects of ip and fip as aggregate replacements: as shown in fig. 4, the mixtures generally showed a lower flow with increased percentages of ip and fip, which resulted in low fluidity, especially for mfip4. flow table test results [12] revealed a reduction in diameter with increased waste ip compared with the reference mixture; the reduction was approximately 2.95%, 3.73%, 4.41%, and 5.44% for the mip1, mip2, mip3, and mip4 mixtures, respectively. the reduction in diameter for fip was about 1%, 3.1%, 3.4%, and 9.6% for mfip1, mfip2, mfip3, and mfip4, respectively, as shown in fig. 4. this trend may be due to the heterogeneity and angularity of waste ip, which is consistent with that reported by ismail [7]. figure 4: effect of ip and fip replacement on flowability of cement mortar. b hardened properties figs. 5–7 show the variability (standard deviations as error bars) in hardened density, compressive strength, and flexural strength, respectively, of the different mixtures. table 5 presents the detailed results.
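the flow reductions quoted in the fresh-properties discussion above can be cross-checked against the flow column of table 3; the minimal sketch below (flow values transcribed from table 3, function name illustrative) computes each mix's relative reduction against mr. note the computed percentages approximate, but do not exactly reproduce, the percentages quoted in the text:

```python
# flow-table diameters (mm) transcribed from table 3
flow = {"mr": 193, "mip1": 187, "mip2": 186, "mip3": 185, "mip4": 183,
        "mfip1": 190, "mfip2": 186, "mfip3": 187, "mfip4": 176}

def flow_reduction(mix, ref="mr"):
    # percentage reduction in flow diameter relative to the reference mix
    return 100.0 * (flow[ref] - flow[mix]) / flow[ref]
```

for instance, mip1 comes out at about 3.1% and mfip4 at about 8.8%, against the quoted 2.95% and 9.6%; the discrepancy presumably reflects rounding of the tabulated flow values.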
effects of ip and fip as aggregate replacements on hardened density:

table 2: physical properties of fine aggregate and iron powder

property | fine aggregate | ip | specification
specific gravity | 2.884 | 6.584 | astm c127-04 [11]
absorption, % | 39 | - | astm c127-04
dry loose unit weight, gm/cm³ | 1.596 | - | astm c29/c29m-02
dry rodded unit weight, gm/cm³ | 1.742 | - | astm c29/c29m-02

the specimens made with ip and fip displayed higher densities than that of the mr mixture. this difference can be attributed to the fact that the waste ip possessed a 2.28 times higher specific gravity (sg) than that of the sand. the unit weight of cement mortar increased with the incorporation of ip as aggregate replacement. such replacement by fip also decreased the air content, which consequently increased the unit weight of the mixtures. effects of ip and fip as aggregate replacements on compressive strength: the compressive strength decreased with increased ip percentage. the reduction for mip1 was 2.17%, 6.45%, and 6.69% after 7, 28, and 54 days, respectively. the corresponding values after 7, 28, and 54 days were 16.6%, 21.9%, and 33.3% for mip2; 13.56%, 25.97%, and 42.3% for mip3; and 10%, 17.7%, and 28.76% for mip4, respectively. the compressive strength of the mfip1 mixture increased by 10.14% and 10.17% after 7 and 28 days, respectively. this trend may be due to the high density and strength of the waste ip, which is consistent with the findings in [18]. the percentage of increment was 6% for 28 days. the compressive strength of mfip2 after 7 days was nearly the same as that of the mr mix, but a small reduction of 4.48% was observed after 28 days. mfip3 and mfip4 showed significant reductions of 2.42% and 11.19% after 7 days and 10.37% and 11.17% after 28 days, respectively.
these reductions may be attributed to the small voids appearing in the internal texture of the specimen after failure in the compression test, or to the high percentage of ip, which may affect the hydration process of cement and consequently reduce the strength [7]. the optimum percentage of fip replacement was obtained at 10%, with an increase of 10.14% and 10.17% after 7 and 28 days, respectively, compared with the reference mixture.

table 5: hardened properties

mixture id | density, kg/m3 (7 d / 28 d) | compressive strength, mpa (7 d / 28 d / 54 d) | flexural strength, mpa (28 d)
mr    | 2470 / 2480 | 27.6 / 33.92 / 39.87 | 4.07
mip1  | 2560 / 2590 | 27 / 31.73 / 37.2 | 4.12
mip2  | 2640 / 2640 | 23.76 / 29.32 / 35.87 | 4.63
mip3  | 2670 / 2720 | 21.53 / 25.11 / 32.8 | 4.68
mip4  | 2760 / 2760 | 18.4 / 22.23 / 28.4 | 4.82
mfip1 | 2478 / 2480 | 30.40 / 37.37 / - | 4.65
mfip2 | 2520 / 2570 | 27.51 / 32.40 / - | 4.93
mfip3 | 2680 / 2720 | 26.93 / 30.40 / - | 4.94
mfip4 | 2790 / 2810 | 24.51 / 30.13 / - | 5.21

figure 5: effect of waste iron powder on hardened density at different ages. figure 6: effect of waste iron powder on compressive strength at different ages. figure 7: effect of waste iron powder on flexural strength at different ages. failure occurred during the compression test on the specimens, and voids appeared on the internal surface texture of the specimens. the number of voids increased with increased waste ip percentage. thus, the reduction in the compressive strength may be attributed to this phenomenon, as shown in fig. 8. effects of fip as aggregate replacement on flexural strength: the preceding table and fig. 7 show that the flexural strength increased with increased fip percentage compared with that of the reference mixture. the mfip4 mixture presented the highest flexural strength, which was 14.83% higher than that of the mr mixture. these results are consistent with those reported in [7,18].
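the percentage changes discussed above can be recomputed from the 28-day compressive strengths in table 5; a short sketch (strength values transcribed from table 5, helper name illustrative) shows the arithmetic, and reproduces, for example, the reported 10.17% gain for mfip1 and the roughly 6.45% loss for mip1:

```python
# 28-day compressive strengths (mpa) transcribed from table 5
fc28 = {"mr": 33.92, "mip1": 31.73, "mip2": 29.32, "mip3": 25.11,
        "mip4": 22.23, "mfip1": 37.37, "mfip2": 32.40, "mfip3": 30.40,
        "mfip4": 30.13}

def pct_change(mix, ref="mr"):
    # positive = increase over the reference mix, negative = reduction
    return 100.0 * (fc28[mix] - fc28[ref]) / fc28[ref]
```

not every percentage quoted in the text lines up this cleanly with table 5, so the check is a useful consistency filter rather than a full confirmation.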
the highest flexural strength was that of the mfip2 mixture after 28 days; the value was 27.86% higher than that of the reference mixture at the same curing period. iv conclusions according to the present findings, the following conclusions can be drawn: the cost showed no increase when the waste ip-modified mortar mix was used compared with the conventional mortar mix. the flow test results revealed a reduction in diameter with increased waste content compared with the reference mix. the dry densities of the cement mortar specimens with 10%, 20%, 30%, and 40% ip and fip were higher than that of the reference mix. the compressive strength decreased with increased ip percentage. when 10% ip was added to the mixture, the compressive strength decreased by 2.17%, 6.45%, and 6.69% after 7, 28, and 54 days, respectively, compared with the control mixture. the corresponding reductions were 16.6%, 21.9%, and 33.3% for 20% ip; 13.56%, 25.97%, and 42.3% for 30% ip; and 10%, 17.7%, and 28.76% for 40% ip, after 7, 28, and 54 days, respectively. the compressive strength of the cement mortar mix modified with 10% fip increased by 10.14% after 7 days and 10.17% after 28 days. with the addition of 20% fip, the compressive strength after 7 days was nearly the same as that of the reference mix, and a small reduction of 4.48% was detected after 28 days. with the addition of 30% and 40% fip, significant reductions of 2.42% and 11.19% after 7 days and 10.37% and 11.17% after 28 days were observed. figure 8: specimens under flexural and compressive stress. the flexural strength increased with increased fip percentage. the specimens with 40% fip showed the highest flexural strength, which was 14.83% higher than that of the reference mix.
acknowledgement the authors are grateful to the staff of the islamic university of gaza (iug) soil and materials lab for their help during the sample preparation and testing. special thanks are directed to the senior civil engineering students feras emad al-khozondar, mahmoud adnan albuhaisi, mohammed bassam alqady and suliman mohammed alagha for helping the authors in carrying out the experimental program. references [1] shehdeh g., husam n., and rosa v., "experimental study of concrete made with granite and iron powders as partial replacement of sand", sustainable materials and technologies, volume 9, september 2016, pages 1-9. [2] quiroga, p. n., ahn, n., & fowler, d. w. (2006). concrete mixtures with high microfines. aci materials journal, 103(4), 258. [3] de larrard, f. (2014). concrete mixture proportioning: a scientific approach. crc press, london, 320pp. [4] bigas, j. p., & gallias, j. l. (2002). effect of fine mineral additions on granular packing of cement mixtures. magazine of concrete research, 54(3), 155-164. [5] kim, h., haas, c. t., rauch, a. f., & browne, c. (2002). dimensional ratios for stone aggregates from three-dimensional laser scans. journal of computing in civil engineering, 16(3), 175-183. [6] kuo, c. y., frost, j., lai, j., & wang, l. (1996). three-dimensional image analysis of aggregate particles from orthogonal projections. transportation research record: journal of the transportation research board, (1526), 98-103. [7] ismail z. z. and al-hashmi e. a., "reuse of waste iron as a partial replacement of sand in concrete," waste management, vol. 28, no. 11, 2008, pp. 2048-2053. [8] bassam a. tayeh (2018). investigation the effect of marble, timber and glass powder as a partial replacement of cement. journal of civil engineering and construction 7, 02, 63-71. [9] zhao s., fan j., and sun w., "utilization of iron ore tailings as fine aggregate in ultra-high performance concrete," construction and building materials, vol. 50, pp. 540-548, 2014.
[10] alwaeli and j. nadziakiewicz, "recycling of scale and steel chips waste as a partial replacement of sand in concrete," construction and building materials, vol. 28, no. 1, pp. 157-163, 2012. [11] ahmed s. o., hamdy a. abdel-g., "the effect of replacing sand by iron slag on physical, mechanical and radiological properties of cement mortar", hbrc journal, volume 13, issue 3, december 2017, pages 255-261. [12] astm c109/c109m-2013e1, "standard test method for compressive strength of hydraulic cement mortars (using 2-in. or [50-mm] cube specimens)", american society for testing and materials. [13] astm c348-2002, "standard test method for flexural strength of hydraulic-cement mortars", american society for testing and materials. [14] astm c150, 2004, "standard specification for portland cement", american society for testing and materials. [15] astm c566, 2004, "standard test method for bulk density ("unit weight") and voids in aggregate", american society for testing and materials. [16] astm c127-c128, 2004, "standard test method for specific gravity and absorption of fine aggregate", american society for testing and materials. [17] astm c230/c230m, 2008, "standard specification for flow table for use in tests of hydraulic cement", american society for testing and materials. [18] ali n. alzaed, "effect of iron filling in concrete compressive and tensile strength", international journal of recent development in engineering and technology, volume 3, issue 4, october 2014, pages 121-125. bassam a. tayeh is an assistant professor at the civil engineering department at the islamic university of gaza (iug), gaza, palestine. currently he is the manager of the iwan center of the engineering department at the islamic university of gaza. since 2015 he has been president of the engineers association at the north gaza governorate, palestine.
he has extensive experience in both academia and practice in many fields of civil engineering; he has published many papers in international journals with high impact factors. he was a presenter at several international conferences and is a reviewer for many international journals. email: btayeh@iugaza.edu.ps orcid: http://orcid.org/0000-0002-2941-3402 doha m. al-saffar is an assistant lecturer of civil engineering at al mansour university college. she received her bsc in 2009 from the university of technology, building and construction engineering department, and her msc in 2012 from the university of technology, iraq. she has the appropriate expertise to judge building materials. her research interests include lightweight concrete, the use of recycled materials in concrete, and fiber reinforced concrete. she has many years' experience of teaching in the civil engineering department at al mansour university college. email: doha.mothefer@muc.edu.iq orcid: http://orcid.org/0000-0002-8580-2672 journal of engineering research and technology, volume 1, issue 1, march 2014 design of a hierarchical sugeno fuzzy controller for hvac in large buildings basil hamed1, loai khashan2 1electrical engineering department, islamic university of gaza, gaza, palestine 2gaza ministry of health, gaza, palestine 1bhamed@iugaza.edu.ps 2loai7000@yahoo.com abstract— in this paper a fuzzy controller is used to control a heating, ventilating and air conditioning (hvac) system, which is a time-varying nonlinear system. this controller consists of two fuzzy levels. the first fuzzy level controls two varying feedback parameters (air temperature and air quality) and makes the controller adapt to these changes, while the second level acts on the error and change of error coming from the first fuzzy control level.
in this research, a hierarchical sugeno-type fuzzy controller is designed. the proposed structure is used to reduce the number of rules and the computational time required for the simulation processes, so it is suitable for nonlinear temperature control of low-energy large buildings with features such as large capacity and long time delay. the controller is developed using a computer simulation of a virtual building containing most parameters of a real building. fuzzy rules are learned from experts and from observations of system performance. the proposed controller is tested using the matlab/simulink environment, and the results show that the sugeno controller has a good response. achieving these purposes will increase thermal comfort and reduce energy consumption. index terms— thermal comfort, hvac, sugeno fuzzy control, large buildings. i. introduction after the energy crisis in the 1970s, energy conservation has been considered a major parameter in all buildings. the greatest energy consumption in buildings occurs during their operation rather than during their construction. there are many types of construction buildings, some of which are called low-energy buildings. low-energy buildings are any type of building that, through design, technologies and building products, uses less energy from any source than a traditional or average contemporary building. in the practice of sustainable design, sustainable architecture, low-energy building and energy-efficient landscaping, low-energy houses often use active solar and passive solar building design techniques and components to reduce their energy expenditure. based on surveys, the energy consumption of hvac equipment in all residential, commercial, and industrial buildings constitutes about 40% to 50% of the world's energy consumption [1]-[4], as shown in figure 1. thus, in recent years, many techniques have been considered for reducing the energy consumption of hvac systems.
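the rule-count reduction claimed for the hierarchical structure can be illustrated with simple combinatorics (the input counts and membership-function counts below are illustrative assumptions, not figures from the paper): a flat rule base grows exponentially with the number of inputs, while splitting the inputs across levels makes the per-level rule counts add instead of multiply:

```python
def flat_rule_count(n_inputs, mfs_per_input):
    # a single flat rule base needs one rule per combination of
    # membership functions across all inputs
    return mfs_per_input ** n_inputs

def hierarchical_rule_count(level_input_counts, mfs_per_input):
    # each level has its own small rule base; the totals add
    return sum(mfs_per_input ** n for n in level_input_counts)
```

for example, four inputs with five membership functions each need 625 rules in a flat design, but only 50 when split into two 2-input levels, which is the kind of saving that also shortens simulation time.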
The implementation of different control methodologies for controlling the parameters of heating, ventilating and air-conditioning (HVAC) systems as part of building automation systems, together with other energy consumption factors and energy sources, was investigated in [1].

Figure 1: Energy consumption by sector [5]

Classical HVAC control techniques such as on/off controllers (thermostats) and proportional-integral-derivative (PID) controllers are still very popular because of their low cost. In the long run, however, these controllers are expensive because they operate at very low energy efficiency. Proper control of low-energy buildings [2], which is more difficult than in conventional buildings due to their complexity and sensitivity to operating conditions, is essential for better performance. This paper presents a hierarchical fuzzy controller for an HVAC system capable of maintaining comfort conditions, with high air quality, within a thermal space subject to time-varying thermal loads. To achieve this objective, we design an HVAC control system that counteracts the effect of thermal loads on the space comfort conditions. The controller adapts to the varying parameters of the thermal loads acting on the system and uses a two-level hierarchical fuzzy structure to take the appropriate control actions that maintain space comfort conditions. The first fuzzy level is an adaptive level for varying parameters: it controls the difference between the air temperature just after entering the room and the actual temperature in the room. This variation is due to the slow spread of heat in large open spaces and to changes occurring in the space, such as large windows being opened or other external disturbances [3].
The second varying parameter is the air quality inside the space: if the space is crowded, the CO2 concentration changes and fresh air becomes essential, so new cool air must enter the space. These varying parameters are nonlinear and their changes cannot be predicted, so an adaptive controller is needed; an intelligent controller such as a fuzzy controller is very useful and flexible for systems that are not fully known [4].

II. Sugeno Type Fuzzy Inference

In this section the Sugeno method of deductive inference for fuzzy systems based on linguistic rules is introduced. The Sugeno procedure was proposed in an effort to develop a systematic method for producing fuzzy rules from a given input-output data set. A generic rule in a Sugeno model with two inputs x and y and output z is as follows [6]:

If x is A and y is B, then z = f(x, y)

where z = f(x, y) is a crisp function, usually a polynomial in the inputs x and y, although in general it can be any function characterizing the output of the system inside the fuzzy region. When f(x, y) is a constant, the inference system is known as a zero-order Sugeno model; it is a special case of the Mamdani system in which each rule's consequent is a fuzzy singleton. When f(x, y) is a linear function of x and y, the inference system is known as a first-order Sugeno model. In a Sugeno model each rule has a crisp output given by a function, so the total output is obtained via weighted-average defuzzification (Eq. 1) [7]. The weighted average is one of the most popular defuzzification methods in fuzzy applications because it is computationally efficient. The algebraic expression is:

Z* = Σ μc(z)·z / Σ μc(z)    (1)

where Σ denotes the algebraic sum and z is the centroid of each symmetric membership function.
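As a minimal illustrative sketch (the rule strengths and consequent coefficients below are assumed, not taken from the paper), a first-order Sugeno output is the weighted average of the crisp rule consequents:

```python
# Sketch of first-order Sugeno inference with weighted-average defuzzification.
# Each rule i has a firing strength w_i and a crisp consequent z_i = f_i(x, y);
# the overall output is Z* = sum(w_i * z_i) / sum(w_i).

def sugeno_output(firing_strengths, consequents):
    """Weighted-average defuzzification of crisp rule outputs."""
    num = sum(w * z for w, z in zip(firing_strengths, consequents))
    den = sum(firing_strengths)
    return num / den

# Two rules with first-order consequents z_i = p_i*x + q_i*y + r_i (assumed values)
x, y = 2.0, 1.0
w = [0.8, 0.2]                      # rule firing strengths
z = [1.0 * x + 0.5 * y + 0.0,       # rule 1 consequent -> 2.5
     0.2 * x + 1.0 * y + 1.0]       # rule 2 consequent -> 2.4
print(sugeno_output(w, z))          # (0.8*2.5 + 0.2*2.4) / 1.0 = 2.48
```

The weighted average keeps the output a simple ratio of sums, which is why Sugeno systems are cheap to evaluate compared with Mamdani defuzzification over a fuzzy output set.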
In the design procedure of such a controller two input linguistic variables are used, namely the error (e) as x and its rate of change (ė) as y. Increasing or decreasing the control signal is the output linguistic variable (u). To form the fuzzy if-then rules, Gaussian membership functions are used for the input linguistic variables x and y. The general form of the input membership functions is:

μ(z) = exp(−(z − c)² / (2α))    (2)

where c is the mean and α is the variance of each membership function. The parameter z is the crisp input to be fuzzified and μ(z) is its membership degree, a numerical value in the interval [0, 1]. Also, 25 output polynomial functions are defined for the first-order Sugeno inference. Applying the input membership functions and output polynomial functions results in a rule base of 25 rules, as in Table 1, for example:

R1: If x is zero and y is zero, then u1 = p1x + q1y + r1

Table 1: Fuzzy control rule base (rows: error e; columns: change of error ė)

  e \ ė        zero   small   medium   large   very large
  zero         Z      S       M        L       VL
  small        S      M       L        VL      VL
  medium       M      L       VL       VL      VL
  large        L      VL      VL       VL      VL
  very large   VL     VL      VL       VL      VL

As shown in Figure 2, the above 25 if-then rules are combined in the form of a first-order Sugeno model.

Figure 2: Sugeno fuzzy model

III. Hierarchical Fuzzy Control

When designing a fuzzy system with a good degree of accuracy, an increase in the number of input variables results in an exponential increase in the number of rules required. If there are n input variables and m fuzzy sets are defined for each, then the number of rules in the fuzzy system is m^n. This can be shown with a small example: suppose there are 5 input variables and 3 fuzzy sets for each variable; then the total number of rules is 3^5 = 243.
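The rule-count arithmetic can be checked directly. The sketch below assumes the flat m^n count just described and a hierarchy of two-input levels (25 rules per level when m = 5), as the paper proposes:

```python
# Flat fuzzy system: m fuzzy sets per input and n inputs need m**n rules.
def flat_rules(m, n):
    return m ** n

# Hierarchical scheme: each level takes two inputs, so m**2 rules per level;
# the total grows linearly with the number of levels instead of exponentially.
def hierarchical_rules(m, levels):
    return levels * m ** 2

print(flat_rules(3, 5))          # 243: 5 inputs with 3 sets each
print(flat_rules(5, 5))          # 3125: the same 5 inputs with 5 sets each
print(hierarchical_rules(5, 3))  # 75: three 25-rule levels
```

This is the arithmetic behind the paper's later claim that a hierarchical design replaces a 625-rule flat base (5^4) with 75 rules, a saving of 550 rules.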
Now suppose the number of fuzzy sets is increased to 5 (to increase the accuracy of the system); the number of rules then becomes 5^5 = 3125, a huge increase. The idea behind a two-level hierarchical scheme is to build a layered control structure in which each layer takes a certain number of variables into account and produces a single output variable. Hence the complexity of the system is reduced and, along with it, the number of rules to be framed [8].

Figure 3: A three-level hierarchical fuzzy control

A hierarchical fuzzy rule-based control strategy is proposed for optimal control of the heating system. Fuzzy rule-based controllers are widely used on systems with high uncertainties and can be interpreted linguistically (Figure 3). Fuzzy rules are generated from optimization results calculated for values of the inputs at the centers of the fuzzy sets. Even if each fuzzy input variable is described by only 5 fuzzy sets, the total number of rules in a flat system would be very large (625); such a rule base would take a very long time to generate and would be very difficult to understand. A hierarchical approach is adopted to reduce the rule base to 25 rules per level, i.e. 75 rules for the whole three-level system, a reduction of 550 rules compared with the conventional method.

IV. HVAC System

A single-zone HVAC system is shown in Figure 4. It consists of the following components: a heat exchanger (air conditioner), a circulating air fan, the thermal space [9], the chiller providing chilled water to the heat exchanger, connecting ductwork, dampers, and mixing-air components. In our discussion we assume the system is operating in cooling mode (air conditioning). The basic operation in cooling mode is as follows. First, 25% fresh air is admitted into the system and mixed with 75% recirculated air (position 5) at the flow mixer.
Second, the air mixed at the flow mixer (position 1) enters the heat exchanger, where it is conditioned. Third, the air leaving the heat exchanger is already conditioned to enter the thermal space and is called supply air (position 2). Fourth, the supply air enters the thermal space to offset the sensible (actual heat) and latent (humidity) thermal loads acting on the system. Finally, the air in the thermal space is drawn through a fan (position 4); 75% of this air is recirculated and the rest is exhausted from the system. The control inputs for the system are the pumping rate of chilled water from the chiller to the heat exchanger and the circulating airflow rate set by the variable-speed fan. This set of control actions characterizes the HVAC system as:
 a variable-air-volume (VAV) system, which results in the lowest energy consumption;
 a variable chilled-water flow rate system, which allows a reduction of pump energy at light loads [10].

Figure 4: Model of the HVAC system

The system consists of:
 System: air temperature inside the room or hall.
 Fuzzy level 1: controls the error of the varying parameters to adjust the main controller.
 Fuzzy level 2: the main temperature controller.
 Disturbances: opening windows or doors, and CO2 concentration.
 Temperature T0: the reference air temperature.
 Temperature T1: air temperature after disturbances.
 Temperature T2: air temperature after being heated by the heating system (AHU).
 Temperature T3: air temperature inside the room, as shown in Figure 5.

Figure 5: HVAC system

V. Sugeno Fuzzy Controller Design

In designing a fuzzy controller with a good degree of accuracy, an increase in the number of input variables to the fuzzy system results in an exponential increase in the number of rules required.
Therefore, we use hierarchical fuzzy logic control (Sugeno method) for implementation in large buildings such as malls, hypermarkets or large centers. It is extremely hard to obtain a mathematical model for the system, so we must adapt to the varying parameters, the essential ones being:
 occupant crowding and the amount of heating needed;
 air quality degradation and the need for fresh air;
 doors and windows being opened, which disturbs the system.
It is therefore hard to predict these varying parameters, whether offline or online. Applying the principle of fuzzy control to the HVAC system, the proposed fuzzy control structure is shown in Figure 6.

Figure 6: Fuzzy controller scheme

We assume the initial air temperature for both T1 and T3 is −20 °C. T1 and T3 hold the same value until they reach 20 °C, after which they take different values. This divergence of T1 and T3 occurs because at the beginning of the day we predict that no new air is needed; later, the variation between T1 and T3 is due to the disturbances discussed previously. The controller's performance over the range −20 °C to 30 °C, shown in Figure 7, is needed to stabilize the system and meet the desired design requirements.

Figure 7: Air temperatures T1 and T3

Using the fuzzy logic graphical user interface, the fuzzy inference system (FIS) for the Sugeno-type controller is built; each input variable has five membership functions, as shown in Figures 8 and 9 respectively.

Figure 8: Membership functions of the input error e
Figure 9: Membership functions of the input change of error ė

The error membership functions are zero, small, medium, large and very large, and the change-of-error membership functions are likewise zero, small, medium, large and very large.
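As a sketch of how five such membership functions fuzzify a crisp input (the centers and variance below are assumed for illustration; the paper does not list them), the Gaussian form of Eq. (2) can be evaluated as:

```python
import math

# Gaussian membership function, Eq. (2): mu(z) = exp(-(z - c)^2 / (2*alpha)),
# with mean c and variance alpha.
def gaussian_mf(z, c, alpha):
    return math.exp(-((z - c) ** 2) / (2 * alpha))

# Five membership functions for the error input; centers and variance assumed.
centers = {"zero": 0.0, "small": 2.5, "medium": 5.0, "large": 7.5, "very large": 10.0}
alpha = 2.0

e = 3.0  # a crisp error value to fuzzify
degrees = {name: round(gaussian_mf(e, c, alpha), 3) for name, c in centers.items()}
print(degrees)  # "small" fires most strongly at e = 3.0
```

Each crisp input thus maps to five membership degrees in [0, 1], which the 25-rule base then combines via the rule firing strengths.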
The output variable also has five membership functions, namely zero, small, medium, large and very large, as shown in Figure 10.

Figure 10: Fuzzy control output

The rules in Table 1 were applied to the inputs and output of the Sugeno-type fuzzy inference controller, as shown in Figure 11.

Figure 11: HVAC fuzzy controller Simulink block diagram

The system is time-varying, so an intelligent control approach such as fuzzy control is needed. The control method relies on two control levels: in the first fuzzy level the input is the change of error, and in the second fuzzy level the inputs are the error and the output of fuzzy level 1 (change of error). The error is the difference between the reference air temperature (T0) and the air temperature inside the room (T3), as in any traditional closed-loop control. Fuzzy level 1 (change of error) corrects the control operation of fuzzy level 2 for the following reasons:
1. If new fresh air enters the system there is no feedback for this change, and hence no change in the error T0 − T3.
2. Heat spreads slowly, so the air temperature just after the heating system cannot be known; if it is not taken into consideration, the heating system is reduced to on/off operation (open-loop control of the heating system).
These problems make fuzzy level 1 essential for correcting the control loop. The first-level Sugeno fuzzy controller adapts to the varying parameters, which may be caused by disturbances or unexpected system behavior. The response of the Sugeno fuzzy controller for a desired reference temperature of 30 °C is shown in Figure 12 and indicates good performance.

Figure 12: Sugeno method result (blue line is T2, red line is T1, green line is T3)

VI.
Conclusion

A hierarchical fuzzy control approach (Sugeno method) has been introduced to control an HVAC system for large buildings. This approach reduces the number of fuzzy rules while maintaining the linguistic meaning of the fuzzy variables, and it adapts to changes and disturbances that may affect the system at any time. The hierarchical fuzzy method reduces the number of rules used in the controller, makes rule evaluation easy to understand, and makes it possible to increase the number of inputs without fear of rule explosion. It also makes it easy to partition the controller, giving a better understanding of its operation. The proposed controller was tested using MATLAB/Simulink. The results show that the HVAC system controlled by the Sugeno fuzzy logic controller performs well.

References
[1] Z. Jiang, "An information platform for building automation system", Proceedings of the IEEE International Conference on Industrial Technology, vol. 2, pp. 1455-1460, IEEE, 2005.
[2] J. K. W. Wong, H. Li, and S. W. Wang, "Intelligent building research: a review," Automation in Construction, vol. 14, no. 1, pp. 143-159, 2005.
[3] A. Dounis and C. Caraiscos, "Advanced control systems engineering for energy and comfort management in a building environment - a review," Renewable and Sustainable Energy Reviews, vol. 13, pp. 1246-1261, 2009.
[4] F. Alalami, "Hierarchical fuzzy controller for building automation systems to reach lowest energy consumption," Islamic University of Gaza, March 2013.
[5] U.S. Department of Energy, Buildings Energy Data Book, Sept. 2008.
[6] T. J. Ross, Fuzzy Logic with Engineering Applications, 3rd ed., John Wiley & Sons, 2010, ISBN 978-0-470-74376-8.
[7] K. T. Elnounou, "Design of GA-Sugeno fuzzy controller for sun and maximum power point tracking in solar array systems," Islamic University of Gaza, March 2013.
[8] S. Maram, Hierarchical Fuzzy Control of the UPFC and SVC Located in AEP's Inez Area, Virginia Polytechnic Institute and State University.
[9] B. Tashtoush, M. Molhim, and M. Al-Rousan, "Dynamic model of an HVAC system for control analysis," Energy, Elsevier, vol. 30, no. 10, pp. 1729-1745, 2005.
[10] M. H. Khooban, M. R. Soltanpour, D. Nazari Maryam Abadi, and Z. Esfahani, "Optimal intelligent control for HVAC systems," Journal of Power Technologies, vol. 92, no. 3, pp. 192-200, 2012.

Journal of Engineering Research and Technology, Volume 5, Issue 2, June 2018

Modeling Disputes - RMFA - as a Decision Support System (DSS) to Proceed Through Arbitration

Khalil M. Alboursh1, Hussam A. Alborsh2
1 PhD in Civil Engineering, NCST in Gaza, Palestine, e-mail: dr.kboursh@gmail.com
2 MSc in Civil Engineering, Islamic University of Gaza, Palestine, e-mail: allhussam88@hotmail.com

Abstract— Although the scale of disputes in construction projects mainly follows the disputed claim amount, going through arbitration is considered a single scale from the client's point of view, so a study of the foreseeable risks of current and future situations is worthwhile before moving to a claim. Due to the multivariate nature of construction contracts, things never go as planned; humans have therefore developed many methods to resolve disputes, and arbitration is one of them. This study focuses on modeling disputes occurring in the construction industry, especially in the Gaza Strip. A mathematical model, the Regret Model for Arbitration (RMFA), has been built as a decision support system (DSS) that recommends to the user (contractor) whether or not to proceed to arbitration. The developed model depends mainly on the regret approach together with two logical and mathematical methods, net present value (NPV) and multi-criteria analysis (MCA), to obtain more accuracy.
This required a survey of thirty questionnaire respondents and interviews with arbitration experts to identify the influential evaluation criteria that must be input into the model. These criteria were selected and weighed, after statistical analysis with the Statistical Package for the Social Sciences (SPSS), to measure their relative importance and their impact on the three probability values (Pe, Pf and Po): the probabilities of winning, of current losses and of future losses respectively, which are determined automatically by the RMFA. The weight values of the ten selected evaluation criteria ranged from 3.62 to 5; the contract criterion is the highest, and the time and BOQ criteria are the lowest. The model's results were tested against four actual dispute cases, and the model achieved an efficiency of 75%.

Index Terms— Arbitration, modeling, regret approach, RMFA, MCA, NPV.

I. Introduction

Human relationships are by nature a mix of harmony and difference, and differences usually stimulate disputes, so we face various disputes in our lives: social, financial, political, job-related, and so on. Fortunately, humanity has legislated various lawful and effective methods to resolve them. This paper focuses on the dispute that occurs between the contractor and the client (owner) in engineering construction projects; the approach followed to resolve such disputes is arbitration based on legal references. There are many dispute resolution methods; besides arbitration they include conciliation, mediation, mooting, early neutral evaluation (ENE), fact finding and med-arb. They are widely used in dispute resolution, and each has its advantages and disadvantages.
Arbitration is considered one of the most recently regulated dispute resolution methods in construction in the Gaza Strip, and the Engineering Arbitration Center (EAC) is considered the first responsible dispute resolution center there. The importance of arbitration increases as project sizes increase, and classification of project size depends primarily on the budget. Recently, Palestine, as a developing country, has received many funds to implement vital projects in various fields, especially after the last three successive wars, in 2008, 2012 and 2014, waged by Israel against the Gaza Strip. Due to the multivariate nature of construction contracts, things never go as planned, so many conflicts have been raised by contractors as official claims, a very critical step that most contractors try to avoid. The method of resolving conflicts and disputes can have differing consequences. Going to arbitration to resolve construction disputes may not be easy because the consequences may be dire: a contractor's reputation may be affected by the arbitration case, and even if the contractor is certain to win, it may lose potential future projects with the same client or with others in the market. The long-term losses to the contractor may therefore outweigh the immediate benefit of going through arbitration, and going through arbitration may become a source of regret [1]. This paper presents a mathematical Regret Model for Arbitration (RMFA) proposed as a decision support system for the decision to go through arbitration. It uses a regret theory approach including several uncertain factors that are extracted accurately by two mathematical methods, net present value (NPV) and multi-criteria analysis (MCA), to obtain more accurate results.
II. Modelling Disputes - RMFA

A. Research Concept

The Regret Model for Arbitration (RMFA) has been built as a decision support system (DSS) advising the user (contractor) whether or not to proceed to arbitration. Figure 1 shows the framework of this mathematical model, which consists of three stages (input, analysis and output). The regret approach has been developed with two mathematical methods, multi-criteria analysis (MCA) and net present value (NPV), to obtain more accurate results, since the results of a mathematical model depend on the quality of the entered data: uncertain inputs lead to inaccurate outputs from the analysis stage of any mathematical model. The developed model takes into consideration the flexibility of the data that must be input. Some of these variable data are factors of the regret approach expressed as probable percentages, and these probabilities would be a source of error if the user could not estimate them professionally. The model has therefore been adjusted so that the user avoids uncertain data and enters only confirmed data, mainly financial; the probabilistic variables of the regret approach are then calculated automatically during the analysis stage. Accordingly, there are two classes of variables that must be input into the model by the user (evaluation criteria and historical data), as shown in Table 1. Finally, the predicted outcome of the model allows the decision-maker to understand whether or not raising a claim is worth the risk.

Table 1: Input data into the model

  Class: Evaluation criteria
    Crit. 1: 1 - 5
    Crit. 2: 1 - 5
    Crit. 3: 1 - 5
    Crit. 4: 1 - 5
  Class: Historical data
    Net cost of projects with client yearly (Cc): $
    Payments from client yearly (Bc): $
    Internal rate (i): %
    Budget of project (A): $
    Cost of the claim (C): $
    Disputed claim amount (D): $

Figure 1: Framework of the mathematical model RMFA

Practically, the RMFA has been programmed in a specific programming language and presented simply, as shown in Figure 2 (the model interface), to facilitate all steps: data entry, analysis and presentation of results. In addition, the model was calibrated against four real previous dispute cases, which were projected onto the model and analyzed; the model's results were then compared with the actual outcome of each dispute, and they almost matched.

Figure 2: RMFA interface

B. Materials and Methods

As shown in the previous section (Figure 1), the improved mathematical model RMFA, using the regret theory approach, depends on two methods (MCA and NPV) to support the decision to go through arbitration more accurately. These methods are discussed below.

B.1 Regret Theory Approach

Theoretically, regret can be illustrated by the following well-known example. During the winter season the chance of rain is 50%, so the decision whether to take an umbrella follows traditional decision theory. Four different scenarios are possible. The first two have positive outcomes and the other two negative ones: 1) the person does not take an umbrella and it does not rain; 2) the person takes an umbrella and it rains; 3) the person takes the umbrella but it does not rain, causing the person to regret the choice; 4) the person does not take an umbrella and it rains.
However, due to uncertainty, the person cannot truly predict the outcome: there is an equal 50% chance of a positive or negative outcome. Although the chances are equal, the person will regret one of those outcomes more than the other [2]; because of regret, the choice the person makes still matters even though the chances are equal. Since arbitration may have uncertain consequences depending on how both parties react to the procedure, a regret model is evidently better suited for decision-making. The regret approach depends on several factors that are taken into consideration in dispute resolution and arbitration cases. Since arbitration takes a long time and is costly, these factors must be assessed to understand the overall risk of going through arbitration, the predicted results, and the benefits. The following factors enter the regret inequality of Eq. (1) [3]:

Pe(D) - C - Pf(F) - Po(O) > (1 - Pe)[C + Pf(F) + Po(O)]    (1)

where:
  D = disputed claim amount
  C = cost of arbitration
  Pe = probability of winning
  F = amount of possible effects on current projects' losses
  Pf = probability of current projects' losses
  O = amount of future opportunity loss
  Po = probability of future opportunity loss with the same client or others

The probabilities of winning (Pe), of current losses (Pf) and of future opportunity loss (Po) range between 0 and 1. The probability of winning depends on having strong evidence. The costs estimated by the contractor within this model must also be based on the real present values of future costs or benefits [1]. The regret approach advises the user to proceed to arbitration if the left-hand side of Eq. (1) exceeds the right-hand side.

B.2 Net Present Value (NPV)

The net present value method is a financial indicator used to study the feasibility of the financial step being decided.
Theoretically, NPV is the difference between the present value of cash inflows and the present value of cash outflows, as in Eq. (2) [4]:

NPV = Σ (from t = 1 to T) Ct / (1 + r)^t − Co    (2)

where:
  Ct = net cash inflow during period t
  Co = total initial investment cost
  r = discount rate
  T = number of time periods

This method has been utilized to extract accurately some of the financial factors included in the regret theory approach, namely the amount of possible effects on current projects' losses and the amount of future opportunity loss. The result of the NPV method may be a net positive or negative revenue over a limited period; this value is translated into the value lost now or in the future if the contractor decides to go through arbitration. The bottom side of the cash flow diagram (CFD) illustrated in Figure 3 represents the expected current and future losses as negative cash flows related to the NPV of going through arbitration.

Figure 3: CFD for going through arbitration

The upper side of the CFD represents the payments already made, the disputed claim amount (D), and other financial commitments payable by the client to the contractor as positive cash flows. The net cash inflow (Ct) during an identified period (t) is the sum of the positive and negative cash flows. The NPV method helps measure the value of future cash flows: because of the time value of money (TVM), money in the present is worth more than the same amount in the future, both because of the earnings that could potentially be made with the money in the intervening time and because of inflation. In other words, an opportunity lost in the future is not worth as much as one lost in the present.

B.3 Multi-Criteria Analysis (MCA)

In general, multi-criteria analysis is undertaken to make a
comparative assessment between heterogeneous measures. In the evaluation field, multi-criteria analysis is usually an evaluation tool, particularly used for the examination of strategic choices. In this study, MCA was used to estimate accurate values for the probabilities of the following measures: 1) current projects' losses (Pf), 2) future opportunity loss (Po) and 3) winning (Pe). Since these probabilities are crucial factors in the regret theory approach, the criteria influencing them were studied carefully and determined by a questionnaire survey, developed and designed in Arabic to be more understandable to the targeted population, and then analyzed with the Statistical Package for the Social Sciences (SPSS). As a first step of the multi-criteria analysis, the generated evaluation criteria were weighed to measure their relative importance and their impact on the three probability values (P'e, P'f and P'o), which are determined by the NPV method; the expected values Pe, Pf and Po can then be estimated accurately by Eqs. (3) and (4) [5]:

Expected value % (Pe, Pf, Po) = probability (P'e, P'f, P'o) × impact    (4)

Table 2 shows all the variables generated as selected evaluation criteria that have the greatest impact on the expected values (Pe, Pf and Po).

Table 2: Selected evaluation criteria

The literature review, interviews with arbitration experts, and all the information that could help achieve the study objectives were collected, reviewed and organized to suit the study survey; a questionnaire was then developed with closed and open-ended questions. Each question follows the five-point Likert scale shown in Table 3 [6].
Table 3: Scale of questions

  Level               Scale
  Strongly agree      5
  Agree               4
  Neutral             3
  Disagree            2
  Strongly disagree   1

The Statistical Package for the Social Sciences (SPSS) was utilized to analyze the questionnaire data in order to obtain the evaluation criteria, which were ranked according to their effect on arbitration as a dispute resolution indicator.

Weighting criteria: one of the rules of multi-criteria analysis is to weigh the criteria; the relative importance index and the mean values were used in this study. Relative index techniques have been widely used in construction research for measuring attitudes with respect to surveyed variables. Triple scaling was used for ranking questions that have agreement levels. The respondents were asked to give their perceptions on groups of questions on a five-point scale reflecting their assessment of the arbitration procedures. The importance index was computed using the relative importance index formula (5) [7]:

RII = (1·n1 + 2·n2 + 3·n3 + 4·n4 + 5·n5) / (A·N)    (5)

where the weighting w given to each factor by a respondent ranges from 1 to 5 (n1 = number of respondents who strongly disagree, n2 = number who disagree, n3 = number who are neutral, n4 = number who agree, n5 = number who strongly agree), A is the highest weight (5 in this study) and N is the total number of samples. The relative importance index ranges from 0 to 1 [8].

III. Matrix and Analysis

The model presented in this paper provides the user with the final result of the regret approach: whether or not to proceed to arbitration.
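The relative importance index just described can be computed as a weighted sum over the Likert counts. The respondent split below is illustrative only, not the study's survey data:

```python
# Relative importance index: RII = (1*n1 + 2*n2 + 3*n3 + 4*n4 + 5*n5) / (A*N),
# where n_k is the number of respondents at Likert level k, A = 5 is the
# highest weight, and N is the total number of respondents.

def rii(counts):
    """counts[k-1] = number of respondents who answered level k (1..5)."""
    n = sum(counts)
    weighted = sum(level * c for level, c in enumerate(counts, start=1))
    return weighted / (5 * n)

# Illustrative 30-respondent split (assumed, not survey data)
print(round(rii([1, 2, 6, 12, 9]), 3))  # 0.773
```

The result always lies in [0.2, 1] for non-empty samples, so criteria can be ranked directly by their RII values.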
the regret approach consists of four main steps leading to a fifth, the regret step shown in equation (1), and takes two additional significant factors into consideration [1]: (1) the total contract amount (a) and (2) the acceptable negotiated amount (n). in addition, the developed model has a preliminary step (step 0) that determines the probability of winning (pe) automatically using the mca and npv methods, before proceeding sequentially through steps 1 to 4, as shown in figure (4). each step yields a specific result recommending the necessary action to the user. these sub-results are very important for the decision-maker to understand the risks involved in deciding whether or not to raise a claim without current or future losses. according to decision theory, the typical decision flow for going to arbitration is:
step 0: estimate the probability of winning using the mca and npv methods, then
step 1: decision to raise a claim: is pe(d) > c? if yes, then
step 2: decision to negotiate: is pe(d) – c > n? if yes, then
step 3: decision to accept an amicable settlement: is ((pe(d) – c)/a) significant? if yes, then
step 4: decision to arbitrate: is pe(d) > c + pf(f) + po(o)? if yes, then proceed to arbitration.
figure 4: claim's flowchart
however, viewed through a regret theory approach, the maximum regret coincides with the maximum loss, which would result from losing the arbitration case, or from losing the opportunity of winning the claim had the case gone to arbitration. in such a case the contractor's reputation suffers, in addition to the loss of future opportunities with the same client. therefore, an additional step is needed to identify the cost that would be least regretted [1]. the best outcome is pe(d) – c – pf(f) – po(o); the worst outcome is the total cost c + pf(f) + po(o).
step 5 (regret approach): decision of arbitration: is pe(d) – c – pf(f) – po(o) > (1 – pe)[c + pf(f) + po(o)]? if yes, then proceed to arbitration. in this extra step, it is important to predict whether one would regret proceeding to arbitration and losing, or not proceeding and thereby forgoing a win. this last regret step can be adapted to any other available model, as the principle of regret is a major factor in realistic decision-making [1].
iv results and discussions
the results obtained by the developed mathematical model rmfa were largely as expected and satisfactory. as mentioned above, the model depends on evaluation criteria related to the disputed issue, which the contractor estimates on a scale of 1 to 5. the selected evaluation criteria were determined through a questionnaire distributed to thirty dispute parties and analyzed with the statistical package for social sciences (spss); the results are shown in table (4):
table 4: evaluation criteria analysis
as illustrated in the table, the weight values are comparative: they reflect the differing impact of each criterion on the three probability values (pe, pf and po), which are determined automatically by the rmfa. the highest weight value belongs to (contract) and the lowest to (previous financial problems with client); in total, ten influential criteria, highlighted in gray, were identified according to the weighting analysis.
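the whole flow, steps 1 to 5, can be sketched as one decision function. symbols follow the paper (d = disputed amount, c = claim cost, f = current loss, o = future opportunity, n = acceptable negotiated amount, a = total contract amount); the 10%-of-contract threshold for "significant" in step 3 is an assumption of this sketch, not a value from the paper:

```python
# sketch of the claim decision flow (steps 1-5). pe/pf/po are the winning,
# current-loss and future-opportunity-loss probabilities; the sig threshold
# for step 3 is assumed, not taken from the paper.

def decide(pe, pf, po, d, c, f, o, n, a, sig=0.10):
    if pe * d <= c:                        # step 1: is raising the claim worth it?
        return "do not raise a claim"
    if pe * d - c <= n:                    # step 2: negotiated amount is better
        return "negotiate"
    if (pe * d - c) / a < sig:             # step 3: gain not significant vs contract
        return "accept amicable settlement"
    if pe * d <= c + pf * f + po * o:      # step 4: expected win covers the losses?
        return "no arbitration"
    best = pe * d - c - pf * f - po * o    # step 5: regret comparison
    worst = (1 - pe) * (c + pf * f + po * o)
    return "arbitration" if best > worst else "no arbitration"
```

for example, with pe = 0.7, pf = 0.2, po = 0.1, d = 100000, c = 5000, f = 20000, o = 30000, n = 40000 and a = 500000, every gate passes and the regret comparison favours proceeding.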
these selected criteria are presented in chart (1) with their weight values, which range between 3.62 and 5. the maximum value was the (contract), which is considered the main reference for the relationship between the contracting parties because it contains the legally binding obligations between them; the contract also sets out those obligations and the financial and legal actions that can be taken if they are not met. the minimum values were (time & boq); other criteria with lower effect on the three probability values (pe, pf and po) are comparatively unreliable sources because they could be challenged on grounds of fraud.
chart 1: weight of selected evaluation criteria
simply, the evaluation criteria support the user in applying the regret approach to estimate the probability of winning and then proceeding through the other steps. the final recommendation of the model (rmfa) is either "arbitration" or "no arbitration"; but what happens if the contractor takes the opposite course? to test this, the accuracy of the model results was checked by entering four real past dispute cases into the developed model as case studies in the transportation, structural building and infrastructure fields (sources: association of engineers in gaza governorates and the contractors), and then comparing the actual actions with the model recommendation for each case study. the results almost entirely matched, as detailed in table (5), evidencing a model efficiency of at least 75%. this percentage is dependable given how easy and quick it is to act using the model (rmfa).
table 5: efficiency of the rmfa
v conclusion
the developed model, comprising the regret approach and two logical and mathematical methods, net present value (npv) and multi-criteria analysis (mca), produces accurate results for the probability values (pe, pf and po), i.e. the winning, current-loss and future-loss probabilities respectively, which are determined automatically by the model. this required surveying thirty questionnaires to identify the influential evaluation criteria that must be input into the model. the selected evaluation criteria were weighed in order to measure their relative importance and their impact on the three values. the weight values of the ten selected evaluation criteria ranged between 3.62 and 5; the contract criterion is the highest and the time & boq criteria are the lowest. the model results were tested against four actual dispute cases, and the model achieved an efficiency of at least 75%, a dependable figure given the multivariate nature of construction contracts. thus, using the regret model for arbitration, rmfa, offers an abridged and quick way to decide whether or not to go to arbitration, saving time, cost and effort.
references
[1] galadari, a. (2011) "regret model for arbitration: predicting the outcome." international journal of innovation, management and technology, vol. 2, no. 6.
[2] bell, d. e. (1982) "regret in decision making under uncertainty," operations research, 30: 961-981.
[3] al-yousuf, a., al-ali, a., ustadi, a. and galadari, a. (2009) "deciding the way forward of construction contracts during cash flow deficits," international conference on financial theory and engineering, 28-30, dubai, uae.
[4] khan, m. and jain, p. (2007) financial management, text problems and cases, new delhi, indian institute of technology delhi.
[5] wang, j. j., jing, y. y., zhang, c. f. et al. (2010) review on multi-criteria decision analysis aid in sustainable energy decision-making.
renewable and sustainable energy reviews, 13: 2263-2278.
[6] wuensch, karl l. (october 4, 2005) "what is a likert scale? and how do you pronounce 'likert?'". east carolina university. retrieved april 30, 2009.
[7] barron, f. j. and barret, b. e. (1996) 'decision quality using ranked attribute weights', management science, 42, pp. 1515-23.
[8] kmietowicz, z. w. and pearman, a. d. (1984) 'decision theory, linear partial information and statistical dominance', omega, 12, pp. 301-99.
journal of engineering research and technology, volume 4, issue 4, december 2017 137
investigating the effect of sulfate attack on compressive strength of recycled aggregate concrete
mohamed arafa, bassam a. tayeh*, mamoun alqedra, samir shihada, hesham hanoona
civil engineering department, islamic university of gaza, gaza, palestine
*corresponding author: bassam a. tayeh, civil engineering department, islamic university of gaza, gaza, palestine, tel: +972-82644400; fax: +972-82644800, e-mail: btayeh@iugaza.edu.ps
abstract— this research aims at studying the effect of sulfate attack on concrete compressive strength with various percentages of recycled aggregate replacement (0%, 30%, 60% and 100%). a water cement ratio of 0.42 was used, and mgso4 solution at two concentrations (6% and 9%) was used to represent the effect of sulfate attack on the concrete compressive strength. the experimental tests focused on the physical properties of the recycled aggregate: density, unit weight, sieve analysis, los angeles abrasion and specific gravity. compressive strength tests were also performed on 108 concrete cube samples at 7, 14 and 28 days.
the compressive strengths at 28 days using (0%, 30%, 60% and 100%) recycled aggregate are (330, 280, 266 and 244) kg/cm2 respectively, with reductions in compressive strength of (15.2%, 19.4% and 26%) for replacement ratios (30%, 60% and 100%) respectively. the mgso4 solution affected the compressive strength after 90 days of immersion in the 6% and 9% concentrations. the results of this study show that concrete made with recycled aggregate is more sensitive to sulfate attack than concrete made with natural aggregate.
index terms— recycled aggregate, recycled aggregate concrete, sulfate attack, compressive strength, mgso4 solution, replacement ratio.
1. introduction
as a result of fast economic development, rapid urbanization has led to huge-scale new construction. these construction works require the consumption and production of large amounts of natural aggregate, which results in a shortage of natural aggregate resources and difficulty in sustainable development [1] [2]. about 70% of construction waste is composed of concrete waste, so it is important to establish a recycling technology that reuses concrete waste as a material for concrete production [3]. it is very important to investigate other sources of raw materials in order to reduce energy consumption and preserve available natural resources. crushing concrete to produce coarse aggregate for the production of new concrete is a common way to achieve more environment-friendly concrete, reducing the consumption of natural resources as well as the consumption of landfill space [4] [5]. since aggregate makes up most of the concrete by volume, it makes sense to investigate the use of concrete waste as aggregate in new concrete; reusing and recycling this type of waste saves valuable landfill space and natural resources [6].
generally, the compressive strength of concrete made with recycled coarse aggregate is lower than that of concrete made with natural coarse aggregate. the strength of concrete with recycled coarse aggregate can be 10-25% lower than that of conventional concrete made with natural coarse aggregate [6]. the compressive strength (28 days) of concrete with recycled coarse aggregate was 27-30% less than the strength of concrete made with natural coarse aggregates [7]. the compressive strength of concrete with recycled coarse aggregate at 28 days was slightly lower than the target strength [4]. to improve the strength of concrete with recycled coarse aggregate so that it equals or exceeds the strength of concrete with natural coarse aggregate, [8, 9] suggested adding fly ash or silica fume to the concrete mixture as a fine aggregate replacement. the successive wars on the gaza strip resulted in a large amount of concrete waste, and the disposal of this waste in the gaza strip is a challenging problem due to the shortage of open land and the limited size of municipal dumping sites, which cannot accommodate large quantities of debris and unprocessed concrete waste. the random and uncontrolled disposal of construction and demolition waste creates several undesirable impacts. this demolition waste was estimated in unrwa (2009) reports at 600000 tons, and this quantity adds to the previously available construction and demolition waste [10]. the demolition waste from the summer of 2014 was estimated in undp reports at 75000 tons [11].
this research aims at investigating the possibility of using recycled aggregate in concrete mixes and choosing the optimum recycling ratio after determining the physical and mechanical performance of the recycled aggregate used in the concrete mixtures, in addition to identifying the effects of a sulfate attack solution (mgso4) at different concentrations on the compressive strength.
2. experimental program
the concrete samples were prepared using 4 different mixes, each distinguished from the others by its percentage content of recycled coarse aggregate. several tests were conducted on each of these sets: concrete cube unit weight, concrete cube density, slump test (workability measurement) and compressive strength at 7, 14, 28, 58, 88 and 118 days.
2.1 materials
the materials used in this study were ordinary portland cement, natural coarse aggregate, recycled coarse aggregate and sand.
2.1.1 cement
portland cement type i, grade 42.5r, was used throughout the investigation. the cement was obtained from a local concrete manufacturer and kept in a dry location.
2.1.2 water
tap water, potable and free of salts or chemicals, was used in the study. the water source was the soil and materials laboratory at the islamic university of gaza.
2.1.3 natural coarse aggregates
two main categories of aggregate were used, coarse and fine. the classification of aggregate into fine and coarse follows astm c33 [12]. three sizes of crushed limestone coarse aggregate were used, with a maximum nominal size of 25 mm and a minimum size of 4.75 mm, as illustrated in table 1. these aggregates are known locally in the gaza strip as foliya, adasiya and simsymia. several aggregate properties must be known in order to prepare the samples.
table 1: used aggregate types
commercial name used in gaza | size (mm)
foliya (type 1 (25mm))       | 25-19
adasiya (type 2 (19mm))      | 19-9.5
simsymia (type 3 (9.5mm))    | 9.5-4.75
2.1.3.1 specific gravity
the aggregate specific gravity is a dimensionless value used to determine the volume of aggregate in concrete mixes. table 2 gives the specific gravity values for all natural coarse aggregates used in preparing the concrete mixes. the specific gravity of coarse and fine aggregate was determined according to astm c127 and astm c128 respectively [13, 14]. the specific gravity was calculated for two conditions: dry and saturated surface dry (ssd).
2.1.3.2 moisture content
the aggregate moisture content is the percentage of water present in a sample of aggregate, either inside the pores or on the surface. the moisture content of coarse and fine aggregate was determined according to astm c566 [15]. the moisture content was 0.23% for all types; the equipment used in this test was a drying oven and a weighing balance.
2.1.3.3 unit weight
the unit weight or bulk density of aggregate is the weight of aggregate per unit volume. the bulk density value is necessary to select concrete mixture proportions. the astm c29 procedure was used to determine aggregate bulk density [16].
2.1.3.4 aggregate absorption
absorption of aggregate is the weight of water present in the aggregate pores, expressed as a percentage of the aggregate dry weight. astm c127 was used to determine coarse aggregate absorption and astm c128 for fine aggregate [13, 14]. table 2 gives the absorption percentages of all aggregates.
table 2: natural coarse aggregate physical properties
aggregate type | ssd wt. (g) | dry wt. (g) | wt. in water (g) | gsb (dry) | gsb (ssd) | absorption %
type 1 (25mm)  | 2008        | 1973        | 1194             | 2.424     | 2.467     | 1.789
type 2 (19mm)  | 2015        | 1964        | 1208             | 2.411     | 2.496     | 2.6
type 3 (9.5mm) | 1901        | 1872        | 1155             | 2.509     | 2.548     | 1.6
2.1.3.5 grading and sieve analysis
the sieve analysis of aggregate includes the determination of coarse and fine aggregate gradation using a series of sieves. the astm c136 procedure was used to determine the sieve analysis of coarse and fine aggregate [17]. table 3 shows the sieve grading of the three types of coarse aggregate, together with the maximum and minimum passing limits according to [12]. it is also noticeable in table 3 that the sieve grading of each individual type of coarse aggregate does not fit the requirements of [12], so the types can be mixed to meet the grading requirements. many trials of changing the weight of each type in the overall mix were made to reach the optimum sieve grading of the coarse aggregate.
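the gsb and absorption columns of table 2 follow directly from the three weighings via the astm c127 relations; a short sketch checking the type 1 (25 mm) row:

```python
# bulk specific gravity and absorption from the three weighings in table 2
# (astm c127 relations). input values are the type 1 (25 mm) row.

def coarse_aggregate_props(ssd, dry, in_water):
    gsb_dry = dry / (ssd - in_water)         # bulk specific gravity, dry basis
    gsb_ssd = ssd / (ssd - in_water)         # bulk specific gravity, ssd basis
    absorption = (ssd - dry) / dry * 100.0   # absorbed water, % of dry weight
    return gsb_dry, gsb_ssd, absorption

gsb_dry, gsb_ssd, absorption = coarse_aggregate_props(2008, 1973, 1194)
# gsb_dry ≈ 2.424 and gsb_ssd ≈ 2.467, matching the table 2 values
```

the computed absorption lands near the table's 1.789% (small differences come from rounding in the source).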
table 3: sieve grading of natural coarse aggregate types (% passing)
sieve size (mm) | type 1 (25mm) | check  | type 2 (19mm) | check  | type 3 (9.5mm) | check  | astm c33-03 min | astm c33-03 max
37.5            | 100           | ok     | 100           | ok     | 100            | ok     | 100             | 100
25              | 100           | ok     | 100           | ok     | 100            | ok     | 100             | 100
19              | 67.54         | not ok | 97.8          | ok     | 100            | ok     | 90              | 100
12.5            | 4.23          | -      | 57.6          | -      | 100            | -      | -               | -
9.5             | 4.62          | not ok | 14.3          | not ok | 93.4           | not ok | 20              | 55
4.75            | 3.06          | ok     | 3.56          | ok     | 26.25          | not ok | 0               | 10
2.36            | 2.06          | ok     | 2.32          | ok     | 6.54           | not ok | 0               | 5
table 4: coarse aggregate grading, astm c33-03 (blend: 49.50% type 1, 26.80% type 2, 23.70% type 3)
sieve size (mm) | mix of aggregate % passing | astm c33-03 min | astm c33-03 max | check
37.5            | 100                        | 100             | 100             | ok
25              | 100                        | 100             | 100             | ok
19              | 83.3                       | 90              | 100             | not ok
12.5            | 41.2                       | 35              | 80              | -
9.5             | 28.2                       | 20              | 55              | ok
4.75            | 8.6                        | 0               | 10              | ok
2.36            | 3.2                        | 0               | 5               | ok
figure 1: natural coarse aggregate gradation (all-in coarse aggregate); the 95th percentile is 22 mm.
2.1.4 natural fine aggregates
table 5 shows the fine aggregate sieve grading as defined in [12]; the physical properties of the sand used are given in table 6.
table 5: sieve grading of gaza sand (% passing)
opening (mm) | sand  | astm c33-03 min | astm c33-03 max | check
4.75         | 100   | 95              | 100             | ok
2.36         | 100   | 80              | 100             | ok
1.18         | 100   | 50              | 85              | not ok
0.6          | 98.75 | 25              | 60              | not ok
0.425        | 89.35 | 10              | 30              | not ok
0.3          | 50.52 | 5               | 30              | not ok
0.15         | 2.02  | 2               | 10              | ok
0.075        | 0.6   | 0               | 7               | ok
pan          | 0     | 0               | 0               | ok
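the blended gradation in table 4 is simply the proportion-weighted average of the three stockpile gradations in table 3. a sketch for three representative sieves, using the table 4 blending fractions and the table 3 passing values:

```python
# blended gradation: each sieve's passing % is the proportion-weighted
# average of the three stockpiles (fractions from table 4, passing values
# from table 3). astm c33-03 limits shown are those quoted in table 4.

fractions = {"type1": 0.495, "type2": 0.268, "type3": 0.237}
passing = {  # % passing per sieve size (mm)
    19.0: {"type1": 67.54, "type2": 97.8,  "type3": 100.0},
    9.5:  {"type1": 4.62,  "type2": 14.3,  "type3": 93.4},
    4.75: {"type1": 3.06,  "type2": 3.56,  "type3": 26.25},
}

def blend(sieve):
    return sum(fractions[t] * passing[sieve][t] for t in fractions)

limits = {19.0: (90, 100), 9.5: (20, 55), 4.75: (0, 10)}
for sieve, (lo, hi) in limits.items():
    p = blend(sieve)
    status = "ok" if lo <= p <= hi else "not ok"
    print(f"{sieve} mm: {p:.1f}% passing, astm {lo}-{hi}: {status}")
```

the 19 mm blend comes out at 83.3%, reproducing the "not ok" entry of table 4; trial-and-error on the fractions is exactly the "many trials" mentioned above.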
figure 2: sieve grading of fine aggregate (gaza sand)
table 6: recycled coarse aggregate physical properties
gsb dry         | 2.275
gsb ssd         | 2.375
absorption %    | 4.429 %
dry unit weight | 1370 kg/m3
2.1.5 recycled aggregates
the recycled aggregate was collected from one of the gaza municipality sites for the following reasons: its maximum size is not much greater than 37.5 mm, it has good grading, and impurities such as bricks, asphalt, glass and wood were negligible. recycled coarse aggregate was collected and tested. table 7 shows the physical properties of the recycled coarse aggregate, and table 8 and figure 3 show its sieve analysis. the los angeles abrasion test was also performed; the abrasion percentage of the recycled coarse aggregate was 36%, which is considered high compared with the natural aggregate (6%).
table 7: recycled coarse aggregate physical properties
gsb dry         | 2.275
gsb ssd         | 2.375
absorption %    | 4.429 %
dry unit weight | 1370 kg/m3
table 8: grading of recycled coarse aggregate (% passing)
opening (mm) | % passing | astm c33-03 min | astm c33-03 max | check
37.5         | 100       | 100             | 100             | ok
25           | 100       | 100             | 100             | ok
19           | 99.86     | 90              | 100             | ok
12.5         | 51.26     | -               | -               | -
9.5          | 20.21     | 20              | 55              | ok
6.3          | 2.59      | 10              | 30              | not ok
4.75         | 0         | 0               | 10              | ok
figure 3: grading of recycled coarse aggregate. the 95th percentile is 18 mm, meaning the aggregate is poorly graded.
2.7 job mix
the mix design was performed according to aci 211.1, as given in table 9.
table 9: mix design (weights in kg/m3)
component                | rr 0% | rr 30% | rr 60% | rr 100%
type 1 (25mm), natural   | 634   | 443.8  | 253.6  | 0
type 1 (25mm), recycled  | 0     | 190.2  | 380.4  | 634
type 2 (19mm), natural   | 343   | 240.1  | 137.2  | 0
type 2 (19mm), recycled  | 0     | 102.9  | 205.8  | 343
type 3 (9.5mm), natural  | 304   | 212.8  | 121.6  | 0
type 3 (9.5mm), recycled | 0     | 91.2   | 182.4  | 304
sand                     | 639   | 639    | 639    | 639
water                    | 169.5 | 179    | 188.5  | 201.2
cement                   | 309   | 309    | 309    | 309
3.
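the coarse-aggregate rows of table 9 follow one rule: each type's total weight is split between natural and recycled fractions by the replacement ratio. a minimal sketch reproducing those rows:

```python
# table 9 coarse-aggregate rows: natural = total * (1 - rr),
# recycled = total * rr, where rr is the replacement ratio.

totals = {"type1": 634, "type2": 343, "type3": 304}  # kg/m3 at rr = 0%

def split(total, rr):
    """return (natural, recycled) weights in kg/m3 for replacement ratio rr."""
    return round(total * (1 - rr), 1), round(total * rr, 1)

for rr in (0.0, 0.30, 0.60, 1.0):
    row = {t: split(w, rr) for t, w in totals.items()}
    print(f"rr = {rr:.0%}: {row}")
```

for example, type 1 at rr = 30% gives (443.8, 190.2), matching the table; only the water content changes with rr (to compensate for the recycled aggregate's higher absorption), while sand and cement stay fixed.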
results and discussion
3.1 density
table 10 and figure 4 show the 28-day concrete density versus the percentage of recycled coarse aggregate. the concrete density decreases as the percentage of recycled coarse aggregate (replacement ratio) increases; the decrease in density between rr = 30% and 60% is small compared with the decrease between rr = 0% and 30%, which can be attributed to interlocking between aggregate particles.
table 10: average 28-day density of concrete
replacement ratio (rr %) | density (kg/m3)
0                        | 2405.6
30                       | 2367.7
60                       | 2357.1
100                      | 2330.0
figure 4: average 28-day concrete density versus percentage of recycled aggregate
3.2 workability
the slump value was used as an indication of mix workability, and all mixes were designed for a slump of 80-100 mm. table 11 and figure 5 show a decrease in workability as the replacement ratio increases, which can be attributed to friction caused by cementation of the aggregate particles.
table 11: average slump value of concrete specimens
replacement ratio (rr %) | slump (mm)
0                        | 60
30                       | 35
60                       | 15
100                      | 10
figure 5: average slump value of concrete specimens
3.3 compressive strength
the compressive strength of concrete is affected by both the aggregate properties and the characteristics of the new cement paste developed during the maturing of the concrete. the potential strength of concrete is partly a function of mix proportioning aspects such as cement content, water/cement ratio and choice of suitable aggregate, but also a function of proper curing while chemical bonding develops. the w/c ratio, proper compaction and adequate curing affect the development of the concrete microstructure, as well as the amount, distribution and size of pores.
the most important aggregate parameters affecting the compressive strength are the shape, texture, maximum size and strength of the coarse aggregate [18-20]; in addition, the effect of the replacement ratio is considered in this study.
3.3.1 effect of replacement ratio (rr %) on the compressive strength
table 12 and figure 6 show the average compressive strength of the concrete at 7, 14 and 28 days in relation to the recycled aggregate replacement ratio. it is clear that as the recycled aggregate replacement ratio (rr %) increases, the compressive strength decreases.
table 12: average compressive strength of concrete specimens at 7, 14 and 28 days
rr % | fc' 7d (kg/cm2) | % of loss | fc' 14d (kg/cm2) | % of loss | fc' 28d (kg/cm2) | % of loss
0    | 324.4           | 0         | 279              | 0         | 330.3            | 0
30   | 259.4           | 20        | 254.1            | 8.9       | 281.7            | 14.7
60   | 257.1           | 20.7      | 242.3            | 13.2      | 266.6            | 19.3
100  | 221.8           | 31.6      | 208.3            | 25.3      | 240.4            | 27.2
figure 6: average compressive strength of concrete specimens at 7, 14 and 28 days
3.3.2 effect of mgso4 solution on the compressive strength
after curing for 28 days, the concrete cubes were immersed in mgso4 solutions of different concentrations (6% and 9%) for 30, 60 and 90 days, and the compressive strength was recorded for each replacement ratio. table 13 and figure 7 show the compressive strength of the concrete specimens after immersion in 6% mgso4. the compressive strength decreases as the replacement ratio of recycled coarse aggregate increases.
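the "% of loss" columns in table 12 are the strength drop relative to the 0% replacement control at the same age; a short sketch verifying the 28-day column:

```python
# "% of loss" = strength drop relative to the rr = 0% control at the same age

def percent_loss(control, value):
    return round((control - value) / control * 100, 1)

# 28-day column of table 12 (control fc' = 330.3 kg/cm2):
losses_28d = [percent_loss(330.3, fc) for fc in (281.7, 266.6, 240.4)]
print(losses_28d)
```

the computed values, 14.7%, 19.3% and 27.2%, reproduce the table's 28-day loss column for rr = 30%, 60% and 100%.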
table 13: average compressive strength of concrete specimens in 6% mgso4
rr % | fc' 58d (kg/cm2) | % of loss | fc' 88d (kg/cm2) | % of loss | fc' 118d (kg/cm2) | % of loss
0    | 324.4            | 0         | 303.7            | 0         | 291.8             | 0
30   | 259.4            | 20        | 234.1            | 22.9      | 224.4             | 23.1
60   | 257.1            | 20.7      | 225.5            | 25.7      | 213.6             | 26.8
100  | 221.8            | 31.6      | 203.1            | 33.1      | 190.3             | 34.8
figure 7: average compressive strength of concrete specimens in 6% mgso4
table 14 and figure 8 show the compressive strength of the concrete specimens after immersion in 9% mgso4. the compressive strength decreases as the replacement ratio of recycled coarse aggregate increases, and it also decreases as the mgso4 concentration increases.
table 14: average compressive strength of concrete specimens in 9% mgso4
rr % | fc' 58d (kg/cm2) | % of loss | fc' 88d (kg/cm2) | % of loss | fc' 118d (kg/cm2) | % of loss
0    | 317.5            | 0         | 298.3            | 0         | 285.9             | 0
30   | 256.7            | 19.1      | 231.4            | 22.4      | 219.5             | 23.2
60   | 252.5            | 20.5      | 218.5            | 26.8      | 204.7             | 28.4
100  | 215.2            | 32.2      | 195.2            | 34.6      | 182.4             | 36.2
figure 8: average compressive strength of concrete specimens in 9% mgso4
4. conclusions
experimental work on the use of recycled aggregates has proven that good quality concrete can be produced with recycled aggregates, and the use of aggregates produced from recycled construction and demolition waste should be further promoted. based on the experimental investigation reported in this work, the following conclusions are drawn:
• the dry density of the recycled aggregate is about 93% of the dry density of the natural aggregate, and the overall density of recycled aggregate concrete is about 96% of the natural aggregate concrete density, which is not much lower.
• the workability of the recycled aggregate concrete mix is lower than that of the natural aggregate concrete mix.
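the sulfate-attack reductions quoted in the conclusions compare the 118-day strength (90 days immersed, table 13) against the 28-day strength before immersion (table 12) for the same replacement ratio; a sketch for the 6% solution:

```python
# reduction due to 90 days of immersion: compare the 118-day strength
# (table 13, 6% mgso4) with the 28-day pre-immersion strength (table 12).

def immersion_loss(fc_28d, fc_118d):
    return round((fc_28d - fc_118d) / fc_28d * 100, 1)

# (28-day, 118-day) pairs for rr = 0%, 30%, 60%, 100% in 6% mgso4:
pairs_6pct = [(330.3, 291.8), (281.7, 224.4), (266.6, 213.6), (240.4, 190.3)]
losses_6pct = [immersion_loss(a, b) for a, b in pairs_6pct]
print(losses_6pct)
```

the computed values (11.7%, 20.3%, 19.9%, 20.8%) match the conclusions' quoted 11.6-20.8% range to within rounding of the tabulated strengths.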
• the compressive strength of concrete increases as the percentage of recycled aggregate decreases; the reduction in compressive strength was (15.2%, 19.4% and 26%) for replacement ratios (30%, 60% and 100%) respectively.
• the compressive strength of concrete increases as the (mgso4) concentration decreases; the reduction in compressive strength after 90 days of immersion in the 6% mgso4 solution was (11.6%, 20.4%, 19.9% and 20.8%) for replacement ratios (0%, 30%, 60% and 100%) respectively.
• the reduction in compressive strength after 90 days of immersion in the 9% mgso4 solution was (13.4%, 22.1%, 23.3% and 24%) for replacement ratios (0%, 30%, 60% and 100%) respectively.
acknowledgement
the authors are grateful to the staff of the islamic university of gaza (iug) soil and materials lab for their help during sample preparation and testing. special thanks are directed to the senior civil engineering students rostom a. alkashef, sufyan a. abu negaila and rafiq m. el ramlawi for helping the authors carry out the experimental program.
references
1. zhou, c. and chen, z., mechanical properties of recycled concrete made with different types of coarse aggregate. construction and building materials, 2017. 134: p. 497-506.
2. babashamsi, p., et al., recycling toward sustainable pavement development. jurnal teknologi, 2016. 78(72): p. 25-32.
3. gull, i., testing of strength of recycled waste concrete and its applicability. journal of construction engineering and management, 2010. 137(1): p. 1-5.
4. rahal, k., mechanical properties of concrete with recycled coarse aggregate. building and environment, 2007. 42(1): p. 407-415.
5. guo, y., wang, x. and qian, j., experimental research on the wettability of recycled aggregate concrete with fly ash.
open civil engineering journal, 2013. 7: p. 232-236.
6. tabsh, s.w. and abdelfatah, a.s., influence of recycled concrete aggregates on strength properties of concrete. construction and building materials, 2009. 23(2): p. 1163-1167.
7. rustom, r.n., et al., properties of recycled aggregate in concrete and road pavement applications. iug journal of natural studies, 2015. 15(2).
8. corinaldesi, v. and moriconi, g., influence of mineral additions on the performance of 100% recycled aggregate concrete. construction and building materials, 2009. 23(8): p. 2869-2876.
9. behera, m., et al., recycled aggregate from c&d waste & its use in concrete - a breakthrough towards sustainability in construction sector: a review. construction and building materials, 2014. 68: p. 501-516.
10. unrwa, the reconstruction of the southern provinces. gaza: report. unrwa, 2009.
11. undp, the reconstruction of the southern provinces. gaza: report. undp, 2015.
12. astm, astm c33. standard specification for concrete aggregates. 2004.
13. astm, astm c127. standard test method for specific gravity and absorption of coarse aggregate. 2004.
14. astm, astm c128. standard test method for specific gravity and absorption of fine aggregate. 2004.
15. astm, astm c566. standard test method for total evaporable moisture content of aggregate by drying. 2004.
16. astm, astm c29. standard test method for bulk density ("unit weight") and voids in aggregate. 2004.
17. astm, astm c136. standard test method for sieve analysis of fine and coarse aggregates. 2004.
18. xuan, d., zhan, b. and poon, c.s., assessment of mechanical properties of concrete incorporating carbonated recycled concrete aggregates. cement and concrete composites, 2016. 65: p. 67-74.
19. lotfi, s., et al., performance of recycled aggregate concrete based on a new concrete recycling technology. construction and building materials, 2015. 95: p. 243-256.
20. arora, s. and s.
singh, analysis of flexural fatigue failure of concrete made with 100% coarse recycled concrete aggregates. construction and building materials, 2016. 102: p. 782-791.
journal of engineering research and technology, volume 5, issue 1, december 2018 13
applying reused steel bars to new constructions (a case study in gaza strip)
bassam a. tayeh 1,a*, mohammed w. hasaniyah 1,b, mohammed ziad abu anza 1,c, mohammed a. abed 2,d
1 civil engineering department, faculty of engineering, islamic university of gaza, gaza, palestine
2 budapest university of technology and economics, müegyetem rkp. 3, budapest 1111, hungary
a btayeh@iugaza.edu.ps, b mhasaniyah@gmail.com, c mohemm34@gmail.com, d mohammed.a.m.abed@hotmail.com
*corresponding author: bassam a. tayeh, civil engineering department, islamic university of gaza, gaza, palestine, tel: +972-82644400; fax: +972-82644800, e-mail: btayeh@iugaza.edu.ps
abstract—demand for construction materials, and for steel in particular, is increasing globally. in 2008, the construction sector consumed 56% of the total 1088 million tons of steel demand. steel production is a major contributor to greenhouse emissions, with an estimated 25% of total co2 emissions. therefore, reusing and recycling steel could help lower global co2 emissions. this paper examines the possibility of using steel from the debris of buildings damaged during the 2014 war on gaza, palestine. the lack of steel bars and their rising prices in the gaza strip have encouraged the use of reclaimed steel in new constructions. the paper examines the properties of used steel in comparison with the standards. it also compares steel of known and unknown extraction sources, and steel extracted under expert supervision with steel extracted by local residents. the validity of reused steel is examined through a process of re-certification.
the process includes applying tensile and bend and re-bend tests to the used steel bars. the results indicate that some reused steel bars meet the specifications for new constructions. the results also show that steel bars extracted under specialist supervision perform better than those extracted by local steel collectors in gaza.
index terms: co2 emissions, bend & re-bend test, destroyed buildings, gaza strip, reused steel bars, tensile test.
i introduction
steel is the most widely used engineering material around the world; the global demand for steel in 2008 was estimated at 1088 million tons. the construction sector consumed approximately 56% of global steel demand, divided into 358 million tons in buildings and 238 million tons in infrastructure projects [1]. steel production is one of the major causes of greenhouse gas emissions: global industrial carbon emissions are around 10000 million tons of co2, and the steel industry produces approximately 25% of these emissions [2]. therefore, research attempts at recycling or reusing steel might have beneficial effects on reducing co2 emissions. in research carried out in the netherlands, the united kingdom and sweden, it was found that at the end of a facility's life, 83% of the steel is recycled, 14% is reused and 3% is landfilled [3]. although several steel sections are reused, steel bars from reinforced concrete buildings are never reused globally [4]. it was also found that reusing a particular amount of steel once can reduce co2 emissions by 35% compared with using newly fabricated steel members, and that recycling these used members one more time can decrease co2 emissions by 45% compared with using new members. these numbers reflect the importance of recycling and reusing steel instead of fabricating new steel members. they also show that there is a barrier to steel reuse and that industry prefers steel recycling [5, 6].
Reusing steel members offers greater environmental advantages than recycling, since few (or no) environmental impacts are associated with reprocessing. For example, reusing a steel beam in its existing form is more energy- and cost-efficient than re-melting the beam and fabricating a new member from it. Over 500 million tons of steel are recovered and recycled annually worldwide. The United Kingdom construction industry consumes around 420 million tons of materials annually and generates some 90 million tons of construction, demolition and excavation waste, of which 25 million tons end up in landfills [7]. The economic value of reusing steel is strongly linked to the cost of new steel [8]. Reusing steel in construction is therefore advantageous from both economic and environmental perspectives. However, some barriers limit the application of reused steel in new constructions. These barriers are summarized in the following points:

1. Sourcing steel: Reusing steel members in new constructions requires that the members extracted from old constructions meet the design requirements of the new construction. This requires the design team to investigate the reuse supply early in the design process in order to secure appropriate sections [6]. There is usually a limited supply of reused steel that fits the new design, which results in a mixture of reused and new steel.
Sourcing steel requires intensive work from the structural design team, leading to increased construction cost. The sourcing process generally requires a longer project preparation program to ensure the steel is sourced, tested, re-fabricated (where required) and delivered to site ready for construction. Sourcing can also be a challenge because construction is usually faster than demolition, which limits the supply of reused steel [6, 9].

2. Cost implications of structural steel reuse: There are no comprehensive data on the costs of reusing steel, owing to the limited worldwide experience with it. The cost must, however, cover deconstruction, shot-blasting, labor, reconditioning/certification and fabrication with reclaimed steel. Additional costs could arise from delays in the construction process; these delays can be avoided if the use of reused steel is planned at an early stage [2, 6, 10].

3. Steel re-certification: If steel is to be reused, a specialist must take responsibility for certifying its suitability. A visual inspection is first required to identify distortion, deflection and significantly corroded segments of the member to be reused. If the steel grade is unknown, either the lowest grade can be assumed and the structure designed accordingly, or a tensile test can be conducted to determine the grade [4, 6].

4. Lack of client demand or negative client perceptions: Clients of new constructions prefer new steel members over reused steel, which results in lower demand for reused steel [2, 10].

Case Study

Gaza Strip is one of the most crowded areas in the world, with about 2 million inhabitants living in approximately 365 km2 [11, 12]. During the past decade, the Strip suffered the consequences of consecutive wars, which wholly or partially destroyed many buildings and much infrastructure. The prices of steel in Gaza have increased as a result of continuously increasing demand and limited supply.
The amount of construction waste from the houses destroyed in the 2014 war is estimated at about 2.5-3.0 million tons, 22% of which is steel bars [4, 13, 14] (Fig. 1: steel bars collection). The 2008 war left approximately 1 million tons of construction waste [12]. The Strip requires huge amounts of construction materials for building rehabilitation and natural growth. In October 2016, Gaza Strip's requirements for construction materials were estimated at about 31 million tons [15]. According to the United Nations Office for Project Services (UNOPS), the quantities of construction materials allocated on its system, the Gaza Reconstruction Mechanism (GRM), are 1,046,005 tons of cement, 3,904,891 tons of aggregates and 185,161 tons of steel bars, a total of 5,136,057 tons [15]. The demand for steel bars in 2014 was 1,049,079.75 tons, yet only 113,702 tons of steel bars were allowed into Gaza Strip between 2014 and 2016, leaving a deficit of 935,377.75 tons in steel supply [15]. This study attempts to validate the reuse of steel bars from destroyed constructions in new buildings. This approach might help lower the Strip's need for new steel bars. The sources of reused steel bars are buildings destroyed by bombing or demolition. This paper examines the possibility of using steel from the debris of buildings damaged during the wars on Gaza. It examines the properties of used steel against the standards, and compares steel of known and unknown extraction sources, as well as steel extracted under expert supervision with steel extracted by local residents.
II Methodology

This study aims to verify the performance and quality assurance requirements for reused steel bars [16] by examining one of the barriers to steel reuse: steel re-certification. The grade and performance of reused steel are determined and compared with the standards for new steel bars, so that the behavior and suitability of reused steel bars can be assessed. The steel bar samples are collected and categorized according to their extraction source and whether the bars were sampled under specialist supervision. 27 steel bar samples of various diameters are collected. The three categories of samples are illustrated in Table 1.

Table 1: Categories of steel bar samples according to extraction source and the availability of a specialist
Sample no. | Extraction source | Specialist supervision during extraction | Notes
1 | Unknown extraction place | No | Collected from local shops in Gaza that sell reused steel
2 | Known extraction place | No |
3 | Known extraction place | Yes |

The samples of known extraction sources are compared with those of unknown extraction sources, and the samples extracted under the supervision of a construction engineer are compared with those extracted by local residents. This methodology is followed to check whether the extraction place and specialist supervision affect the performance and quality of reused steel. Steel re-certification is applied to all samples by first visually inspecting the steel bars for distortions, deflections or significant corrosion. Then, to determine the performance of the steel bars, two tests are conducted:

1. Steel tensile test: this test is conducted to determine the steel grade.
The tensile test is performed according to the ASTM A370 standard to determine the yield stress (N/mm2), the ultimate stress (N/mm2), the elongation percentage and the fu/fy ratio for each steel bar [17] (Fig. 2: tensile test of steel bars).

2. Bend and re-bend test: this test is executed according to the ASTM A370 standard to determine whether there are any cracks in the steel bars [17] (Fig. 3: bend and re-bend test).

The tested reused steel bars are then compared with the specifications of the PS 52-1997 standard [18].

III Results

Table 2 presents the results of the tensile test for the 27 steel bar samples of unknown extraction source. The results indicate that 40.7% of all samples failed at least one limit of the standard test (Fig. 4: samples after tensile test). Out of the seven Ø14 bars, five failed to reach the minimum yield stress, and one out of four Ø18 bars failed to reach the minimum as well. All Ø10 bars and two out of seven Ø16 bars exceeded the maximum yield stress of 520 N/mm2. Table 2 also shows that three Ø14 bars failed to reach the minimum ultimate strength of 500 N/mm2.

Table 3 shows the results of the tensile test for 33 steel bar samples collected from known sources, some under specialist supervision. The table reveals that 60.6% of the samples failed at least one limit of the tensile standard test. Some samples show no yield stress point, which indicates that the bar had already passed its yield limit at the extraction site, before testing. Table 3 also shows that 18 steel bars failed to reach the minimum elongation percentage. The ground beam steel bars, which were extracted under supervision, met all the requirements of the tensile test.
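The acceptance limits above can be expressed as a small checker. The following is a minimal sketch (not part of the paper's procedure), assuming the PS 52-1997 limits quoted in the tables: yield stress 400-520 N/mm2, ultimate stress at least 500 N/mm2, elongation at least 12%, and fu/fy ratio at least 1.25:

```python
# Hypothetical acceptance check against the PS 52-1997 limits quoted in the
# paper: yield stress 400-520 N/mm^2, ultimate stress >= 500 N/mm^2,
# elongation >= 12 %, fu/fy ratio >= 1.25.
def check_bar(yield_stress, ultimate_stress, elongation_pct):
    failures = []
    if not (400 <= yield_stress <= 520):
        failures.append("yield stress outside 400-520 N/mm^2")
    if ultimate_stress < 500:
        failures.append("ultimate stress below 500 N/mm^2")
    if elongation_pct < 12:
        failures.append("elongation below 12 %")
    if yield_stress > 0 and ultimate_stress / yield_stress < 1.25:
        failures.append("fu/fy ratio below 1.25")
    return failures  # empty list -> bar passes

# Bar no. 1 of Table 2 (yield 478.5, ultimate 657.9, elongation 20.5) passes:
print(check_bar(478.5, 657.9, 20.5))  # -> []
# Bar no. 15 (yield 185.5, ultimate 497, elongation 17.5) fails two limits:
print(check_bar(185.5, 497, 17.5))
```

Running such a checker over all rows of Tables 2 and 3 reproduces the failure rates quoted in the text.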
Table 2: Tensile test results of samples from unknown source (shaded cells in the original indicate a failure of the sample)

Bar no. | Size (mm) | Yield stress (N/mm2) | Ultimate stress (N/mm2) | Elongation (%) | fu/fy ratio
Limits | - | 400-520 | min 500 | min 12 | min 1.25
1 | 8 | 478.5 | 657.9 | 20.5 | 1.38
2 | 8 | 478.5 | 657.9 | 21 | 1.38
3 | 8 | 477.5 | 676.5 | 20.5 | 1.42
4 | 10 | 534.7 | 702.2 | 15.7 | 1.31
5 | 10 | 536.7 | 704.2 | 12.5 | 1.31
6 | 10 | 530 | 670 | 13.5 | 1.26
7 | 12 | 484.9 | 657.1 | 17.6 | 1.36
8 | 12 | 420 | 530 | 12.5 | 1.26
9 | 12 | 450 | 567 | 12.6 | 1.26
10 | 14 | 423 | 584 | 14 | 1.38
11 | 14 | 419.5 | 542 | 13.2 | 1.29
12 | 14 | 395 | 658 | 12 | 1.66
13 | 14 | 311.4 | 543.3 | 16 | 1.74
14 | 14 | 251.8 | 377.7 | 17.5 | 1.5
15 | 14 | 185.5 | 497 | 17.5 | 2.68
16 | 14 | 231.9 | 463.8 | 17 | 2
17 | 16 | 463.8 | 589 | 12.9 | 1.27
18 | 16 | 474 | 730.6 | 14.9 | 1.54
19 | 16 | 474 | 739.2 | 16.9 | 1.56
20 | 16 | 547.9 | 740.7 | 17 | 1.35
21 | 16 | 522.5 | 700.1 | 16 | 1.34
22 | 16 | 507.3 | 700.1 | 16.5 | 1.38
23 | 16 | 507.3 | 700.1 | 16.5 | 1.38
24 | 18 | 409.9 | 668.6 | 15.5 | 1.63
25 | 18 | 394.8 | 592.2 | 15 | 1.5
26 | 18 | 403.5 | 585.5 | 14.5 | 1.45
27 | 18 | 403.5 | 549.9 | 14.5 | 1.36

Table 3: Tensile test results of samples from known extraction source (shaded cells in the original indicate a failure of the sample)

Bar no. | Extraction place | Size (mm) | Yield stress (N/mm2) | Ultimate stress (N/mm2) | Elongation (%) | fu/fy ratio
Limits | - | - | 400-520 | min 500 | min 12 | min 1.25
1 | Slab | 12 | 505.1 | 721.5 | 10 | 1.43
2 | Slab | 12 | 519.3 | 662.5 | 6 | 1.28
3 | Slab | 12 | 504.9 | 685.3 | 8.5 | 1.36
4 | Slab | 12 | 432.8 | 694.3 | 8.5 | 1.6
5 | Slab | 12 | 289.2 | 385.6 | 9.5 | 1.33
6 | Column | 12 | 432.8 | 712.3 | 10.5 | 1.65
7 | Column | 12 | 423.8 | 694.3 | 7 | 1.64
8 | Slab | 12 | 487 | 708 | 12.5 | 1.45
9 | Slab | 12 | 443 | 700 | 15 | 1.58
10 | Column | 12 | - | 735 | 15 | 0
11 | Column | 12 | 531 | 717 | 15 | 1.35
12 | Column | 12 | 576 | 691 | 10 | 1.2
13 | Column | 12 | 461 | 708 | 10 | 1.53
14 | Slab | 14 | - | 760 | 5 | 0
15 | Slab | 14 | - | 617 | 2 | 0
16 | Slab | 14 | 487 | 747 | 4 | 1.53
17 | Slab | 14 | - | 682 | 5 | 0
18 | Ground beam* | 14 | 461 | 721 | 17.5 | 1.56
19 | Ground beam* | 14 | 461 | 734 | 15 | 1.59
20 | Ground beam* | 14 | 516 | 720 | 16 | 1.35
21 | Ground beam* | 14 | 520 | 688 | 14 | 1.32
22 | Ground beam* | 14 | 515 | 735 | 15.5 | 1.42
23 | Ground beam* | 14 | 500 | 704 | 16 | 1.408
24 | Ground beam* | 14 | 516 | 705 | 12.5 | 1.36
25 | Column | 14 | 448 | 682 | 15 | 1.52
26 | Column | 14 | 454 | 689 | 17.5 | 1.51
27 | Column | 14 | - | 682 | 7.5 | 0
28 | Slab | 16 | 429.9 | 687.8 | 8 | 1.6
29 | Slab | 16 | 546.3 | 698 | 10.5 | 1.28
30 | Slab | 16 | 489.6 | 636 | 8.5 | 1.3
31 | Column | 20 | 448.1 | 746.8 | 12.5 | 1.67
32 | Column | 20 | 438.3 | 766.2 | 12.5 | 1.75
33 | Column | 20 | 431.8 | 756.5 | 11.5 | 1.75
* Extracted under the supervision of an expert. A "-" marks bars with no measurable yield stress point (fu/fy reported as 0).

Table 4 shows the results of the bend and re-bend test for 27 samples collected from known extraction sources. The results indicate that all the samples passed the test (Fig. 5).

Table 4: Results of bend & re-bend tests of reused steel samples
Nominal Ø (mm) | Sample no. | Extraction place | Actual Ø (mm) | Result
8 | 1 | Slab | 7.73 | Pass
8 | 2 | Slab | 7.57 | Pass
8 | 3 | Slab | 7.71 | Pass
12 | 4 | Slab | 11.92 | Pass
12 | 5 | Slab | 11.9 | Pass
12 | 6 | Slab | 11.88 | Pass
12 | 7 | Slab | 11.98 | Pass
12 | 8 | Slab | 11.89 | Pass
12 | 9 | Slab | 11.93 | Pass
12 | 10 | Slab | 12 | Pass
12 | 11 | Column | 12 | Pass
12 | 12 | Slab | 12 | Pass
14 | 13 | Slab | 14.56 | Pass
14 | 14 | Slab | 13.67 | Pass
14 | 15 | Slab | 14.83 | Pass
14 | 16 | Slab | 13.72 | Pass
14 | 17 | Slab | 14.59 | Pass
14 | 18 | Slab | 15.32 | Pass
14 | 19 | Slab | 15.56 | Pass
14 | 20 | Slab | 16.33 | Pass
14 | 21 | Slab | 15.46 | Pass
14 | 22 | Slab | 15.38 | Pass
14 | 23 | Slab | 16.19 | Pass
14 | 24 | Slab | 15.56 | Pass
14 | 25 | Slab | 16 | Pass
14 | 26 | Slab | 16 | Pass
20 | 27 | Column | 19.6 | Pass

Fig. 5: samples after bend & re-bend test

IV Conclusion

The possibility of applying reused steel to new constructions in Gaza has been studied in this paper through the process of re-certification: a visual inspection, a tensile test and a bend and re-bend test were performed on each reused steel bar. The results varied depending on the site and the construction elements from which the steel bars were collected. The demolition process of the source building is another important factor affecting the quality of the steel bars. Extraction of steel bars under the supervision of a specialist clearly enhanced the performance of reused steel bars; the bars extracted from ground beams under specialist supervision did not fail any of the tensile test requirements. The bars that passed all the tests could be used in new constructions as alternatives to newly manufactured steel bars. In conclusion, although steel reuse is not yet a common practice worldwide, it is recommended to use reused steel bars in Gaza Strip after applying all the tests required to re-certify each bar.

Acknowledgement

The authors are grateful to the staff of the Islamic University of Gaza (IUG) Soil and Materials Lab for their help during sample preparation and testing.
Special thanks are directed to senior civil engineering students Mohammed Mohamed Abu Mostafa, Tareq Atef Zourob and Hamdan Sofian Elastal for helping the authors carry out the experimental program.

References
1. J. Cullen and M. Drewniok, "Structural steel reuse. Steel and the circular economy", The Building Centre, London, University of Cambridge, 30 November 2016.
2. J. Cullen, "Steel reuse in construction", Resource Efficiency Collective, University of Cambridge, Jun. 2016.
3. E. Durmisevic and N. Noort, "Re-use potential of steel in building construction", in CIB Publication, 2003.
4. D.R. Cooper and J.M. Allwood, "Reusing steel and aluminum components at end of product life", Environmental Science & Technology, vol. 46, no. 18, pp. 10334-10340, 2012.
5. M. Fujita and M. Iwata, "The reuse management model of building steel structures", in Proceedings of the International ECCE Conference EuroInfra, 2009.
6. D.D. Tingley and J. Allwood, "Reuse of structural steel: the opportunities and challenges".
7. C.S. Poon, "Management and reuse of construction wastes", Hong Kong: The Hong Kong Polytechnic University.
8. M. Gorgolewski, V. Straka, J. Edmonds and C. Sergio, "Facilitating greater reuse and recycling of structural steel in the construction and demolition process", Ryerson University, Can. Inst. Steel Construct., 2006.
9. M. Gorgolewski, "Designing with reused building components: some challenges", Building Research & Information, vol. 36, no. 2, pp. 175-188, 2008.
10. P. Hradil, "Barriers and opportunities of structural elements re-use", research report, VTT Technical Research Centre of Finland, Espoo, 2014.
11. Information Center Statistics, General Administration of Civil Status, Palestinian Ministry of Interior, Oct. 2016.
12. Palestinian Central Bureau of Statistics, "The annual report of census", 2016.
13. Report on the environmental effects of the Israeli aggression on the Gaza Strip, Environmental Quality Authority, State of Palestine, Sep. 2014.
14. The war on the Gaza Strip 2014: the environmental impact assessment of the Gaza war using participatory methodology, Palestinian Environmental NGOs Network, Friends of the Earth Palestine, Oct. 2015.
15. W. Sh. Abushaaban, "Evaluation of construction materials supply into Gaza Strip: case study of Karm Abu Salem crossing border", Master of Business Administration research, The Islamic University-Gaza, Oct. 2016.
16. The Education and Research Arm of the Building and Construction Authority, "Design guide on use of alternative steel materials to BS 5950", BCA Sustainable Construction Series 3, Building and Construction Authority, Singapore, 2008.
17. ASTM A370, "Tensile testing and bend testing of steel reinforcement bar", 2012.
18. The Palestinian standard for testing reinforcement, PS 52-1997.

Journal of Engineering Research and Technology, Volume 4, Issue 4, December 2017

Solar Energy to Optimize the Cost of RO Desalination Plant. Case Study: Deir Elbalah SWRO Plant in Gaza Strip

Hussam A. Alborsh 1, Said M. Ghabayen 2
1 MSc in Civil Engineering, Islamic University of Gaza, Palestine, e-mail: allhussam88@hotmail.com
2 PhD in Civil Engineering, Islamic University of Gaza, Palestine, e-mail: sghabayen@iugaza.edu.ps

Abstract—Seawater desalination by reverse osmosis (SWRO) is currently considered one of the most widely used and reliable technologies for providing additional water supply to areas suffering from water scarcity. The high energy consumption of reverse osmosis plants is one of the biggest challenges, particularly in developing countries such as Palestine.
The future demand for fresh water, and thus for energy, drives researchers to find methods of integrating renewable energy into the desalination process. Palestine has a high solar energy potential: the average solar radiation intensity on a horizontal surface is 5.31 kWh/m2 per day. In this research, the possibilities of using solar energy to optimize the cost of the desalination process in Gaza were studied. The research focused on the optimal use of solar energy and the selection of the most economically feasible configuration for utilizing this source, either fully or partially, in the SWRO process. The internal rate of return (IRR) was used as an economic indicator to analyze the feasibility of establishing a SWRO desalination plant with a capacity of 600 m3/d in Gaza based on the optimal energy sources. The available energy source options were a traditional system, an off-grid solar system and an on-grid solar system. The economic study found IRR values of 6.6%, 3.80% and 7.64% for the first, second and third options respectively. The higher the IRR, the more attractive the option is for investment; the IRR should exceed the market interest rate (6.43% in Palestine) by a comfortable margin. Based on the results, the on-grid solar system is able to balance system production against the plant power requirement, which is about 105 kWh per hour of operation. With the on-grid system, the unit cost of desalinated water was reduced from 1.08 $/m3 (electric utility as a baseline) to 0.89 $/m3, a saving of about 17%.

Index Terms—optimization, desalination, solar energy and IRR.

I Introduction

The problem of fresh water scarcity faces most countries because of increasing consumption and population growth. Gaza Strip, in particular, has problems of both water quantity and quality due to depletion of the groundwater aquifer and intensive land use.
In Gaza Strip, the main source of fresh water is the coastal aquifer (a shallow aquifer). Gaza Strip lies on the seashore with a coastal length of approximately 42 km, and most of the water pumped from wells has high salinity due to seawater intrusion; it does not meet the World Health Organization (WHO) drinking water guidelines or Palestinian Water Authority (PWA) standards. The PWA therefore considers desalination a strategic option for Gaza Strip [1]. In desalination by reverse osmosis (RO), found to be the most economically feasible technique for Gaza Strip [2], energy comprises about 30% to 50% of the total cost of the produced water, depending on the type of energy used [2]; the total cost of desalination can therefore be reduced significantly by reducing energy consumption. Under the current energy crisis in Gaza Strip, energy resources are controlled by Israel, which employs policies restricting the electrical production capacity of the Palestinian territories; from the standpoint of sustainability, it is therefore recommended to utilize renewable energy sources for desalination. Several studies show that solar energy is the most feasible of all types of renewable energy for the region's climate (the Middle East) [3]. Palestine has a high solar energy potential: the average solar energy varies between 2.63 kWh/m2 per day in December and 8.5 kWh/m2 per day in June, the daily average solar radiation intensity on a horizontal surface is 5.31 kWh/m2 per day, and the total annual sunshine amounts to about 3000 hours [3]. Previous studies of solar systems in the water field in Gaza addressed environmental purposes only, whereas this study has three essential dimensions for the desalination process. The first is an environmental dimension, through the use of a solar energy system.
The second is an economic dimension, using the internal rate of return (IRR) method as the economic indicator for the feasibility study, and the third is a technical dimension, using the RO technique recommended in previous studies.

II Methodology

A. Research concept

The research focused on the optimal use of solar energy and the selection of the most economically feasible configuration for utilizing this source, either fully or partially, in the seawater reverse osmosis (SWRO) process. The methodology consisted of three main stages. In the first stage, research data were collected. The second stage was the modeling of three main energy scenarios for establishing a SWRO desalination plant with a capacity of 600 m3/d in Gaza, represented in Figure 1 (research framework): the electricity energy system (EES), the solar energy system (SES) and the combined energy system (CES). In the third stage, the internal rate of return (IRR) was used as an economic indicator to analyze the feasibility of an energy system sized with adequate voltage and current ratings for each component of the photovoltaic system to meet the electric demand of the facility. In general, the aim of this research is to optimize the cost of desalination using solar energy as a complementary energy source. To achieve this aim, the following specific objectives were pursued: system modeling of energy consumption processes in Deir El-Balah desalination plant; economic feasibility analysis of energy alternatives; and sizing the optimal solar system for optimal energy cost.
B. Case study: SWRO desalination plant - technical description

At present, Gaza Strip has three seawater desalination plants under construction and one existing plant in operation, in Deir Elbalah city, which is the case study of this research (Figure 2: location of the SWRO desalination plant in Gaza Strip). The plant consists in general of the following process units [4]: seawater intake (beach wells), seawater pretreatment plant, reverse osmosis desalination plant, post-treatment plant, and potable water storage and distribution. The seawater comes from two beach wells close to the shore. The wells have been constructed for the delivery of 65 m³/h of raw water (maximum quantity 1,500 m³/d), and the pumps and electromechanical equipment are also designed for 75 m³/h of raw water per well. After the water has passed the cartridge filters (5 micron), which are equipped with a differential pressure transmitter (maximum 1.5 bar), the high-pressure pump pressurizes the pretreated seawater to the required operating pressure of 65 to 70 bar and feeds it into the RO unit. The RO unit is equipped with spiral-wound thin-film composite RO membrane elements of 8-inch diameter and 40-inch length; six of these elements may be installed in each pressure vessel. In this case, the feed water flow rate to the RO unit is 65 m3/h, the permeate flow rate is 26 m3/h and the brine flow rate is 39 m3/h; the water recovery is therefore 40%, and the remaining 60% brine stream is discharged to the sea [5]. An automatic flushing of the pressure vessel, the RO elements, the high-pressure pump and the pipework is performed with permeate, using a flushing pump, at every plant stop. The high-pressure feed pump system consists of pumps connected in series.
The first pump is coupled with a Pelton turbine to recover energy from the high-pressure brine stream. The second pump is equipped with a speed-controlled electric motor to raise the pressure to the above-mentioned operating pressure.

III Materials and Methods

A. First stage: data collection

Three main types of data were collected to model the energy source scenarios for the SWRO plant: power consumption data of the SWRO desalination plant, data on the new energy source (the solar energy system), and the investment costs of establishing the plant.

Power consumption data of the SWRO desalination plant: The design capacity of the SWRO desalination plant is 600 m3/day of potable water, so the mechanical and electromechanical equipment of the reverse osmosis unit itself was designed for a maximum capacity of 600 m³/d. The calculated power consumption was based on seawater with a feed TDS of 40,000 ppm and a plant recovery of 40%. The total energy required to operate the RO over 24 hours was calculated as 2,520 kWh/day: 90 kWh per hour for the high-pressure unit and 15 kWh per hour for the other units of the desalination plant, as shown in Table 1 [5].

Table 1: Power consumption of the SWRO plant
Description | Required operating power | Specific power consumption
Well pumps for water supply | 14 kWh/h | 0.65 kWh/m3
Pretreatment and RO plant | 105 kWh/h | 4 kWh/m3
Water filling and distribution station | 27 kWh/h | max. 0.50 kWh/m3

Solar energy data: Palestine has a high solar energy potential. The average solar energy is between 2.63 kWh/m2 per day in December and 8.5 kWh/m2 per day in June; the daily average solar radiation intensity on a horizontal surface, the peak sunshine hours (PSSH), is 5.31 kWh/m2 per day; and the total annual sunshine amounts to about 3,000 hours. The annual average temperature is 22 °C and exceeds 30 °C during the summer months [6]. These figures are very encouraging for using photovoltaic generators for a SWRO desalination plant.
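The flow and energy figures above can be cross-checked with a short mass- and energy-balance script. This is a sketch using only the numbers quoted in the text (feed 65 m3/h, permeate 26 m3/h, RO-section load 105 kWh/h, 600 m3/day product):

```python
# Consistency checks on the RO figures quoted in the text.
feed = 65.0       # m^3/h, feed water to the RO unit
permeate = 26.0   # m^3/h, product water
brine = feed - permeate          # steady-state mass balance
recovery = permeate / feed
print(brine, recovery)           # 39.0 m^3/h brine, 0.4 (the stated 40 %)

ro_load = 105.0                  # kWh per hour, pretreatment + RO (Table 1)
daily_energy = ro_load * 24
print(daily_energy)              # 2520.0 kWh/day, matching the text

# Specific consumption: 105 kWh/h over 600/24 = 25 m^3/h of product water,
# close to the 4 kWh/m^3 listed in Table 1.
print(ro_load / (600 / 24))      # 4.2 kWh/m^3
```

The small gap between 4.2 and the tabulated 4 kWh/m3 is within the rounding of the published figures.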
The solar radiation data have a great effect on the performance of photovoltaic (PV) systems. Table 2 shows the historical monthly values of solar energy [6].

Table 2: Average PSSH in Palestine (1989-2002)
Month | Mean PSSH (kWh/m2/day)
Jan | 3.36
Feb | 3.97
Mar | 4.33
Apr | 5.19
May | 6.46
Jun | 7.78
Jul | 7.40
Aug | 6.76
Sep | 5.88
Oct | 4.73
Nov | 4.31
Dec | 3.53
Mean | 5.31
Standard deviation | 1.5

Figure 3 plots the average monthly solar energy on a horizontal surface (kWh/m2/day) from the data of Table 2. Based on the stated power demand of the SWRO desalination plant (105 kWh), the power produced by a solar system (AC energy) was calculated for each month, as shown in Figure 4 (produced monthly energy), which illustrates the variation in the monthly daily-average total insolation on a horizontal surface; it depends on the PSSH of each month and peaks during the summer season.

SWRO desalination plant investment cost: Unit product cost calculations depend on the plant capacity, site characteristics and design features. The plant capacity specifies the sizes of the various process equipment, pumping units and membrane area. Site characteristics strongly affect the type of pretreatment and post-treatment equipment and the consumption rates of chemicals. In addition, the design features of the process affect the consumption of energy and chemicals. Desalination plant implementation costs can be categorized as capital costs (fixed costs) and operation and maintenance costs (variable costs).
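The monthly PSSH figures of Table 2 translate into monthly AC output in a simple way. The following is a rough sizing sketch, not the paper's exact calculator: it assumes daily AC energy of roughly PSSH × array size × derate, where the derate factor (taken here as 0.8) lumping inverter and wiring losses and the resulting array size are illustrative assumptions:

```python
# Rough PV sizing sketch (illustrative, not the paper's model): daily AC
# output is approximated as PSSH * array_kw * derate.
pssh = {"jan": 3.36, "feb": 3.97, "mar": 4.33, "apr": 5.19, "may": 6.46,
        "jun": 7.78, "jul": 7.40, "aug": 6.76, "sep": 5.88, "oct": 4.73,
        "nov": 4.31, "dec": 3.53}  # kWh/m^2/day, Table 2

derate = 0.8              # assumed combined system losses
daily_demand = 105 * 24   # kWh/day (the plant's 105 kWh per-hour load)

# Array size needed to cover the demand on an average day (mean PSSH = 5.31):
array_kw = daily_demand / (5.31 * derate)
print(round(array_kw))    # ~593 kW peak

# Daily AC energy, month by month, for that array size:
for month, hours in pssh.items():
    print(month, round(hours * array_kw * derate), "kWh/day")
```

Under these assumptions the summer months overproduce and the winter months fall short, which is exactly the surplus/deficit pattern the on-grid scenario exploits through net metering.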
These costs were prepared in consultation with experts at the Coastal Municipalities Water Utility (CMWU), the Palestinian Water Authority (PWA), the Palestine Energy & Natural Resources Authority (PENRA), the Palestine Land Authority (PLA) and other companies. The annual cash flow of the SWRO plant was modeled over a 22-year lifespan, the same as the expected life of the solar system. The investment costs were calculated as presented in Figure 5 (types of plant costs over time). Some costs, such as plant establishment and land costs, are the same across scenarios, while the profits of the project were estimated from the sale price per production unit, 1.7 $/m3 of desalted water.

B. Second stage: modeling energy source systems

To model the solar energy systems, we used a solar photovoltaic calculator originally developed by Abualtayef (2015) and modified by the researchers [7] (Figure 6: solar photovoltaic calculator). The model was built in Microsoft Excel as an analysis tool to determine the sizing of the photovoltaic generator and the system costs, which depend on several energy and economic parameters in Gaza. The energy sources were the electricity energy system, the solar energy system and the combined energy system.

Electricity energy system (EES): Figure 7 represents the first scenario for the proposed plant, which depends on the local electricity network. The required energy of the desalination plant was 105 kWh, and the commercial price of electricity was estimated at 0.172 $/kWh.

Solar energy system (SES): Figure 8 represents the second scenario for the proposed plant, which depends on a solar energy system
only, known as an off-grid system. Off-grid systems are not connected to the electricity grid, so the output depends entirely on the intensity of the sun [8]: the more intense the sun exposure, the greater the output (Figure 8: SWRO plant depending on the off-grid system).

Combined energy system (CES): Figure 9 represents the third scenario for the proposed plant, which depends on a solar energy system combined with the traditional source, known as an on-grid system [9]. The prime advantage of this type of system is its ability to balance system production against the plant power requirement. When a grid-tied system produces more power than the plant consumes, the excess can be sold back to the utility in a practice known as net metering; when the system does not produce sufficient power, the plant can draw power from the utility grid (Figure 9: SWRO plant depending on the on-grid system).

C. Third stage: economic feasibility analysis

Proposed capital projects can be evaluated by an economic feasibility study, which may also include an economic analysis of the project. The purpose of the economic analysis is to determine whether there is an economic case for the investment decision. In this study, the internal rate of return (IRR) method was used as an economic indicator to compare the feasibility of establishing a SWRO desalination plant with a capacity of 600 m3/d in Gaza based on the best energy sources. This method refers to the percentage rate of return implicit in the flows of benefits and costs of a project. The internal rate of return is defined as "the discount rate at which the present value of returns minus costs is zero".
thus, irr is the discount rate which equates the present value of cash inflows, representing the fixed annual profits, with the present value of cash outflows, representing the direct and indirect costs of the project, as shown in figure (10). irr is based on a discounting technique, like the net present value (npv) method: the future cash inflows are discounted in such a way that their total present value is just equal to the present value of total cash outflows. irr can be measured from the following relation [10]:

C = A1/(1 + r) + A2/(1 + r)^2 + ... + An/(1 + r)^n

where A1, A2, ..., An are the cash inflows at the end of the first, second, ..., nth years respectively. for a single inflow, the rate of return follows from C = A1/(1 + r)^n, where C is the cash outflow or initial capital investment, A1 is the cash inflow at the end of the first year, and r is the rate of return on the investment. figure 10 irr function at microsoft excel in palestine, with the absence of a national currency, irr is calculated from the debit and credit interest rates on the major currencies in circulation in palestine (the jordanian dinar, the u.s. dollar and the israeli shekel), periodically, depending on data supplied by banks to the palestine monetary authority (pma) on the basis of the weighted mean average. the economic analysis of this project used an interest rate of 6.43 %, the mean value for us dollars over the last three years. the irr function, which is available as a financial function within excel, can be used to build the economic feasibility model for the three scenarios, as shown in figure (10). iv results and discussions a comparison between the three scenarios is discussed with regard to the cost and the saving values and their reflection on the specific cost of desalinated water.
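the irr defined above, the discount rate at which npv is zero, can be found numerically in the same way a spreadsheet's irr function does. a minimal sketch with illustrative cash flows (the values are hypothetical, not the study's):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 outflow (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0):
    """Bisection on the discount rate until NPV crosses zero."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:
            lo = mid  # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# illustrative: 100 invested now, 110 returned after one year -> irr = 10 %
rate = irr([-100.0, 110.0])
```

a project is then judged cost effective when this rate exceeds the local interest rate (6.43 % in this study).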
a energy system cost the investment cost of the swro desalination plant that depended on the combined system (on grid solar system) as the energy source was the lowest among the scenarios, at 0.13 $/kwh. this was reflected positively in the economic indicator (irr value), and an inverse relationship between the irr and the energy system cost can be noted, as shown in table (3).

table 3 comparison of energy systems cost
energy system | irr | $/kwh
solar energy system (off grid) | 3.8% | 0.27
electricity energy system (electric utility) | 6.6% | 0.17
combined energy system (on grid) | 7.64% | 0.13

the interest rate in palestine is 6.43 %, so the irr value should be higher than the local interest rate for the project to be considered cost effective from the standpoint of investors; see figure (11) to recognize the differences. figure 11 relationship between irr and energy system cost b analysis of combined system costs the on grid solar system was able to balance the system production and the plant power requirements. in may, june, july and august, the grid inter-tied system produced more power than the plant consumed; the excess could be sold back to the utility, which amounted to 61,046 kwh/year from the plant. when the system did not produce sufficient power, the plant could draw power back from the utility grid, estimated at 133,719 kwh/year for the plant, as shown in table (4). table 4 saving cost of combined system by net metering system the net meter reading each year was thus up to 72,673 kwh/year, estimated at 12,500 $/year, as shown in figure (12). figure 12 net cost of on grid solar system c saving values of optimal system to estimate the saving value, we should take the traditional power as the baseline value. the cost of the electricity system during the lifespan of the project was higher than the investment cost of the on grid solar system.
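the net-metering balance in table (4) can be reproduced directly from the yearly exchange figures quoted above:

```python
exported = 61_046    # kWh/year sold back to the utility (high-sun months)
imported = 133_719   # kWh/year drawn from the grid when PV falls short
tariff = 0.172       # $/kWh, commercial electricity price used in the study

net_metered_kwh = imported - exported   # yearly net meter reading (72,673)
annual_cost = net_metered_kwh * tariff  # yearly bill at the tariff (~12,500 $)
```

this is the arithmetic behind the quoted 72,673 kwh/year net reading and its approximate 12,500 $/year cost.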
the net metering process increased the solar system's efficiency and could save more value, up to a 17 % saving by the on grid solar system compared with the other systems, as shown in table (5) and figure (13).

table 5 comparison of the saving value for energy systems
energy system | ces | ees | ses
irr value % | 7.64 | 6.60 | 3.8
energy cost $/kwh | 0.13 | 0.17 | 0.27
saving value % | 17 | 0.00* | 30
* the traditional power is the baseline to calculate the saving value

the net saving value (17%) was enough to cover the operating cost of the swro desalination plant for 7.5 years. figure 13 saving value of combined system there are many desalination plants in the gaza strip that desalinate brackish water (bw) from water wells using the reverse osmosis (ro) technique and that depend on traditional energy (the electric utility) as the energy source. as is known, the total cost of ro plants that desalinate brackish water (bwro) is lower than that of seawater desalination plants (swro) with regard to the tds and plant recovery, and this is reflected in the energy costs. so, the saving value that has been calculated increases when brackish water is fully or partially desalinated by ro. the study chose seawater desalination as it is the worst case in energy consumption, so the results will have an effect on decision makers. d specific cost reduction of desalinated water reducing the operation and maintenance costs of the plant decreases the production cost of desalinated water per cubic meter (m3); see figure (14). figure 14 specific cost of desalinated water e cost reduction of pv modules there is significant potential for further reductions of pv system costs.
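the 17 % saving quoted above can be cross-checked against the unit water costs reported in the study's conclusion: 1.08 $/m3 for the electricity-only scenario versus 0.89 $/m3 for the on-grid scenario.

```python
baseline_cost = 1.08   # $/m3, electricity-only scenario (the baseline)
on_grid_cost = 0.89    # $/m3, combined on-grid scenario

# fraction of the unit cost saved relative to the baseline
saving = (baseline_cost - on_grid_cost) / baseline_cost
# (1.08 - 0.89) / 1.08 is about 0.176, i.e. roughly the 17 % reported
```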
less expensive materials would allow the costs of pv modules to be cut significantly. even if it is still uncertain how fast and to what extent this potential could be tapped by the pv industry, there was enough evidence in the research to indicate that pv systems in grid-connected building-integrated applications would be able to reach the break-even price and then compete with electricity without incentives [11]. v conclusion the third scenario (combined system, on grid system) was selected as the best economic option to optimize the cost of the desalination plant in gaza. considering the on-grid system, the unit cost of desalinated water will be reduced from 1.08 $/m3 to 0.89 $/m3 (with the electric utility as the baseline), which is about a 17% saving, enough to cover the operating cost of the swro desalination plant for 7.5 years. this study offers an optimal method in palestine for planning strategic water projects that depend on an energy source for operational purposes, especially desalination plant projects, whose structural design should include all specific details of the solar energy system such as plant site, roof areas, storage locations, etc. finally, using an on grid solar energy system as the energy source in desalination plants in gaza decreases the treated water cost, which will decrease further in the future as the solar industry of all types in our region is supported. references [1] el sheikh r., ahmed m., hamdan m., (2003). strategy of water desalination in the gaza strip. desalination (elsevier), 156(2003), 39-42. [2] ghali k., ghaddar n., and alsaidi a., (2010). optimized operation of an integrated solar desalination and air-conditioning system. paper presented at the international conference on renewable energies and power quality, granada, spain. [3] ouda m., (2003). prospects of renewable energy in gaza strip, energy research and development center, islamic university of gaza, palestine.
[4] palestinian water authority (2006). the ro plant of deir el-balah city. final report 1. gaza: pwa. [5] wassertechnik-gwt, (2005). process description and design data for reverse osmosis seawater desalination plant. final report 8. gaza: pwa. [6] yasin a., (2008). optimal operation strategy and economic analysis of rural electrification of atouf village by electric network, diesel generator and photovoltaic system, najah university, nablus, palestine. [7] abualtayef m., (2015). solar photovoltaic calculator, islamic university, palestine. [8] zriba a., shublaq r., gunaim e., (2012). assessment of maswdp and solar water model, islamic university, gaza, palestine. [9] kenneth l., (2013). an updated study on the undergrounding of overhead power lines, usa, edison electric institute. [10] khan m., jain p., (2007). financial management, text problems and cases, new delhi, indian institute of technology delhi. [11] laughton, m., (2002). renewable and the uk electricity grid supply infrastructure, platts power in europe, uk. journal of engineering research and technology, volume 5, issue 3, september 2018 influence of internal curing on the mechanical properties of normal strength concrete mamoun a. alqedra associate professor of structural engineering, faculty of engineering, islamic university of gaza, palestine. abstract— the current study investigated the influence of internal curing on the mechanical properties of normal concrete produced at the islamic university of gaza (iug) laboratories. due to the high absorption capacity of local broken pottery fragments, crushed pottery material was utilized in this study as the internal water storage source.
the specific gravity and the absorption capacity of the crushed pottery (cp) sieve sizes were identified to ensure the suitability of such material. the current study investigated the partial replacement of 0%, 10%, 15%, 20% and 25% of fine aggregates with cp material. slump tests were carried out for the cp samples to obtain the effect of the various contents on the workability of the mixes. the cp specimens were also prepared to obtain the mechanical properties, including the compressive strength and flexural strength. the compressive strength of the samples was obtained at ages of 7, 14 and 28 days. the optimum cp content was identified and applied to study the flexural strength. the results of the experiments showed that the use of the locally available crushed pottery as an internal curing material was effective in curing the concrete samples. the partial replacement of the natural fine aggregates with various contents of cp also had a positive effect on the slump values. further, the application of cp material was successful in improving the early compressive strength and flexural strength. a slight improvement in the 28-days strength results was revealed with increasing cp content. the slump and strength results indicated that a 20% partial replacement of fine aggregates with cp material was the optimum. index terms— internal curing, crushed pottery, compressive strength, flexural strength. i introduction concrete curing is the term given to the procedures used for promoting the hydration of the cement, and consists of a control of temperature and of moisture movement from and into the concrete [1]. curing allows continuous hydration of cement and consequently a continuous gain of strength. proper curing of concrete structures is significant to meet the concrete performance and durability requirements. in conventional curing this is achieved by external curing applied after mixing, placing and finishing.
curing of concrete maintains a satisfactory moisture content in concrete during its early stages in order to develop the desired properties. however, effective curing is not always practically achieved in many cases. therefore, the need to develop more effective curing agents has attracted several researchers [2-5]. self-curing or internal curing (ic) is a technique that can be used to provide additional internal moisture inside concrete in order to enhance hydration of cement and reduce self-desiccation. internal curing is a means of maintaining moisture or supplying an internal water source for concrete that promotes more cement hydration. self-curing concrete is a special concrete that would overcome insufficient curing due to human negligence. further, such concrete would help in case of scarcity of water in arid areas, inaccessibility of structures in difficult terrains, and in areas where the presence of fluorides in water would badly affect the characteristics of concrete [6-8]. among the methods available for internal curing are the use of saturated porous lightweight aggregate (lwa) and the use of polyethylene glycol, which reduces the evaporation of water from the surface of concrete and helps in water retention. further, the application of superabsorbent polymers is utilized as an internal curing method; these could absorb and retain extremely large amounts of a liquid relative to their own mass [8, 9]. bentur, et al. [10] studied concrete with replacement of 25% of the normal weight aggregates with saturated lightweight aggregate. they indicated that this concrete exhibited no autogenous shrinkage, whereas the normal-weight concrete with the same matrix exhibited large shrinkage. the results showed that such concrete was very effective in eliminating the autogenous shrinkage and restrained stresses of the normal-weight concrete. kamal, et al.
[11], in the first stage of their study, investigated the effect of internal curing agents on the main properties of normal-strength and high-strength self-compacted concrete. the internal curing agents included chemical curing agents and light expanded clay aggregates (leca) as internal reservoirs. results indicated that the curing agents reduced the water evaporation from concrete, and increased the water retention capacity with sufficient hardened concrete properties. shen, et al. [12] utilized internal curing (ic) with up to 50 % prewetted lightweight aggregates (lwas) to enhance the early-age behavior of high performance concrete. this included temperature, shrinkage, creep deformation and stress for low cracking potential. they indicated that internally cured concrete with prewetted clay lwas is more robust for construction at early ages. wu, et al. [13] investigated the replacement of normal aggregates by several proportions of waste recycled brick aggregate (rba) as an internal curing material in concrete. the results demonstrate that rba has a great potential for internal curing purposes in recycled aggregate concrete. kevern and nowasell [14] obtained the effect of replacing a small fraction of normal fine aggregates in concrete with prewetted lightweight aggregates. this study concluded that the results of the concrete strength, degree of hydration, shrinkage, and freeze-thaw testing showed substantial improvements over the control mixture. therefore, the study strongly recommended applying internal curing as a routine curing for concrete. lee, et al. [15] investigated the potential of utilizing recycled aggregates (ra) as an internal curing agent for an alkali activated slag (aas) system compared with using artificial lightweight aggregates.
they indicated that ra could reduce the autogenous shrinkage of an aas system without a decrease in compressive strength. further, the addition of ra did not increase the degree of hydration for aas mortar. this resulted from the dilution effect of the alkali activator, which was caused by the additional water supplied from the internal curing materials. in high performance concrete, high self-desiccation and high temperature rise occur due to the low water-to-cement ratio, which would increase the cracking potential of concrete at an early age. therefore, shen, et al. [16] studied experimentally the effect of using pre-wetted lightweight aggregates (lwas) on the tensile creep and cracking potential of high performance concrete at early age under adiabatic conditions. they showed that using pre-wetted lwas to internally cure concrete reduced the autogenous shrinkage and tensile creep/shrinkage as compared with the corresponding values of normal concrete. mousa, et al. [17] investigated the mechanical properties of concrete having pre-soaked lightweight aggregate replacing sand at several ratios (0%, 10%, 15% and 20%). they indicated that the optimum ratio of pre-soaked lightweight aggregate was 15%, which showed improved mechanical properties. mousa, et al. [18] indicated also that replacing 20% of sand with pre-soaked lightweight aggregate was effective for improving permeability and mass loss, but adversely affected the sorptivity and volumetric water absorption of the tested concrete samples. the aim of the current study is to investigate the influence of internal curing on the mechanical properties of normal concrete produced at iug laboratories. crushed pottery material was utilized as an internal water storage source. several contents of crushed pottery were added to concrete mixes in partial replacement of fine aggregates to obtain its effect on the mechanical properties. ii materials and experimental program. a.
material properties crushed pottery (cp) was utilized as an internal source of water to perform the internal curing process. the crushed pottery was introduced to the mix as a partial replacement of fine aggregates in several percentages. the specific gravity and the absorption capacity of the crushed pottery were tested in accordance with astm d6473 [19] and astm c642 [20], respectively. the crushed pottery (cp) was obtained from several factories that deal with such damaged material, as shown in figure 1. the cp was crushed and dried at 105 °c for 24 hours, and the absorption capacities of the various sizes of cp are presented in table 1. figure 1. crushed pottery (cp) the absorption values of cp were compared with the light weight aggregates (lwa) used by dayalan and buellah [21]. the absorption of the 2.36 to 0.6 mm cp sizes ranges between 10.82% and 12.2%, which compares well with that of lwa (10.03%) indicated by dayalan and buellah [21]. therefore, an equal combination of the cp sizes of 2.36, 1.18 and 0.6 mm was made for the internal curing process. the experiments showed that the specific gravity of the cp is 2.52.

table 1. absorption results of crushed pottery (cp)
cp particle size (mm) | absorption capacity, % (astm d6473-15)
2.36 | 11.1
1.18 | 12.2
0.6 | 10.82
0.425 | 7.23
0.3 | 6.14
0.15 | 4.61
0.075 | 4.46

ordinary portland cement cem ii/a-m 42.5 n was used in this study; the specific gravity of the cement was taken as 3.15. the fineness of the cement particles was 4200 cm2/g; the initial and final setting times were 1.5 and 6.5 hours, respectively, based on astm c191 [22]. the maximum size of the coarse aggregate (ca) was 20 mm. the average specific gravity and absorption of the coarse aggregates were 2.61 and 2.1 %, respectively. the grading of coarse aggregates is shown in table 2.
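since the three coarser cp fractions are combined in equal parts, the blend's effective absorption capacity is simply the mean of the corresponding values in table 1:

```python
# absorption capacities (%) of the three cp sizes used for internal
# curing, taken from table 1 (keys are particle sizes in mm)
absorption = {2.36: 11.10, 1.18: 12.20, 0.60: 10.82}

# equal combination of the three sizes -> arithmetic mean
blend_absorption = sum(absorption.values()) / len(absorption)
# about 11.4 %, which compares well with the 10.03 % reported for lwa [21]
```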
sand was applied as the fine aggregate (fa); the specific gravity and absorption of the fine aggregates, obtained in accordance with astm d6473 and astm c642, were 2.41 and 0.9%, respectively. the grading of the fine aggregate is presented in table 3.

table 2. grading for coarse aggregate
sieve opening, mm | passing, %
19.00 | 100
12.50 | 90
9.50 | 45
4.75 | 0

table 3. grading for fine aggregate
sieve opening | percent passing
no. 16 | 100
no. 30 | 76
no. 50 | 10
no. 100 | 4

b. experimental program the current study investigated partial replacement of fine aggregates with four contents of cp (10%, 15%, 20% and 25%) in addition to the 0% control mix. the mix design of the concrete samples was carried out based on aci 211 [23]. the resulting mix design proportions are included in table 4.

table 4. mix proportions for 1 m3 concrete
proportions | 0% | 10% | 15% | 20% | 25% cp
w/c ratio | 0.43 (all mixes)
cement, kg | 442 (all mixes)
ca, kg | 1073 (all mixes)
fa, kg | 795 | 716 | 676 | 636 | 597
cp, kg | 0 | 80 | 120 | 159 | 199

the 150 mm cube samples were kept in a dry place at a room temperature of 25 °c. these samples were maintained in this place until the day of testing. the fracture of various cp cube samples revealed a good distribution of cp particles in the concrete matrix, as shown in figure 2. this good distribution ensures that the internal curing process covers the complete concrete matrix. slump tests were carried out for the cp samples according to astm c143 [24] to obtain the effect of the various cp contents on the workability of the mixes. the cp specimens were tested to obtain the mechanical properties, namely the compressive strength and flexural strength, according to astm c39 [25] and astm c78 [26]. the compressive strength of the samples was obtained at ages of 7, 14 and 28 days for a w/c of 0.43. afterwards, the optimum cp content mix was applied to study the flexural strength. finally, the long-term compressive strength was also studied at an age of 90 days. figure 2. well distribution of crushed pottery particles iii.
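the fine-aggregate and cp masses in table 4 follow from applying the replacement percentage to the control mix's 795 kg of fine aggregate. a minimal sketch of that bookkeeping (note the published table rounds some entries slightly differently, e.g. 120 kg cp at 15 %):

```python
FA_CONTROL = 795  # kg fine aggregate per m3 in the 0 % cp control mix

def cp_mix(replacement_pct):
    """Masses (kg/m3) of fine aggregate and crushed pottery for a
    given percent replacement of fine aggregate with cp."""
    cp = round(FA_CONTROL * replacement_pct / 100)
    fa = FA_CONTROL - cp
    return fa, cp

# e.g. at 20 % replacement: 636 kg fine aggregate and 159 kg cp
fa_20, cp_20 = cp_mix(20)
```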
results and analysis. it was observed that the cp samples showed a wet appearance on their surfaces at the age of 3 days and at room temperature. this indicates that the cp particles started releasing their internal water to the mix and hence the internal curing continued well, as shown in figure 3. this wet appearance remained for the first 7 days and afterwards disappeared gradually. this finding was not observed in the 0% cp samples. figure 3. wet appearance on the cp samples compared with 0% cp samples at 3 days age and room temperature. a. slump slump values of 3.3, 3.5, 3.7, 3.8 and 4.1 cm for the 0%, 10%, 15%, 20% and 25% cp samples are presented in figure 4. the results indicated that the partial replacement of the natural fine aggregates (sands) with various contents of cp had a positive effect on the slump values. this would be attributed to a small portion of the internal water stored in the cp particles being added to the free water of the mix, which in turn improves the workability. such workability improvement behavior needs more investigation and study. figure 4. slump values for cp samples b. compressive strength the compressive strengths of the cp samples were obtained at ages of 7, 14, 28 and 90 days. figures 5 to 8 present the compressive strength of the cp samples at the various ages. figure 5. compressive strength for cp samples at 7 days figure 5 shows that increasing the cp content up to 20 % of the sand content resulted in improving the 7 days compressive strength (early strength) by up to 7.5%. any further increase in cp content (25%) showed a reduction in the early strength improvement. the 14 days strength of the cp samples, as shown in figure 6, showed a similar improvement to that of the 7 days strength values.
as the cp content increased to 20% replacement of sand, the 14 days strength values improved by 6.7%. a reduction in the 14 days strength was indicated beyond the 20% cp content (i.e. 25%). figure 6. compressive strength for cp samples at 14 days figure 7. compressive strength for cp samples at 28 days a slight improvement in the 28 days strength results was revealed as the cp contents increased, as shown in figure 7. no significant improvement in the 28 days compressive strength was observed beyond the 20% cp content of sand replacement. the compressive strength results indicated that the use of cp as an internal curing material was effective in curing the concrete samples. this finding was confirmed by the higher strength values obtained in this study. this behavior can be attributed to the continuing process of cement hydration due to the availability of more internal relative humidity stored inside the cp particles. this continuation of the cement hydration process ensures fewer voids and pores, and a greater bond between the cement paste and the aggregate particles, as mentioned by several researchers [27-29]. further, the use of cp material for internal curing was successful in improving the early strength of the samples. it can be concluded that the optimum partial replacement of fine aggregates by cp content is 20%. this optimum cp content was applied for the subsequent stages of the experimental program. this result agrees very well with the optimum content of light weight aggregates applied by [17] and [16], who obtained an optimum content between 15 and 20 %.
the long-term effect of the internal curing of cp material on the compressive strength was also investigated using the optimum cp content (20%). the 0% and 20% cp content samples were kept in a dry place for 90 days at a room temperature of 25 °c. the compressive strengths at 7, 28 and 90 days of the 0% and 20% cp content samples are presented in figure 8. figure 8. compressive strength of 0% and 20% cp samples at 7, 28 and 90 days the long-term compressive strength results presented in figure 8 showed that the improvement in strength continues beyond the 28-days strength at almost the same rate as compared with the 0% cp samples. this finding can also be attributed to the continuation of the cement hydration for longer times. c. flexural strength the flexural strengths of the 0% and 20% cp samples at 7 and 28 days are indicated in figure 9. the results revealed that there was an increase of 19.5% in the 28 days flexural strength of the 20% cp content sample as compared with that of the 0% cp content sample. this finding can be attributed to the obtained improvement of the compressive strength, which was in turn reflected positively in the flexural strength. iv conclusions the current study investigated the influence of internal curing on the mechanical properties of normal concrete. crushed pottery material was utilized as the internal water storage source. having performed the experimental program and analyzed the results, the following conclusions were reached: figure 9. flexural strength of 0% and 20% cp samples at 7 and 28 days.
1. the use of the locally available crushed pottery as an internal curing material was effective in curing the concrete samples. 2. the partial replacement of the natural fine aggregates (sands) with various contents of cp had a positive effect on the slump values. the workability of the cp samples was higher than that of the control samples (0% cp). 3. the use of cp material for internal curing was successful in improving the early strength of the concrete samples. there was an increase of 7.5% and 6.7% in the 7 days and 14 days strength for the 20% cp samples, respectively, as compared with the control samples. 4. a slight improvement in the 28 days strength results was revealed as the cp contents increased. no significant improvement was obtained in the 28 days compressive strength beyond the 20% cp content of sand replacement. 5. the slump and strength results indicated that the optimum partial replacement of fine aggregates by cp content as an internal curing material is 20%. 6. the long-term compressive strength results showed that the improvement in strength continues after the 28 days strength at the same rate as compared with the 0% cp samples. 7. the results indicated that there was an increase of 19.5% in the 28 days flexural strength of the 20% cp content sample as compared with that of the 0% cp content sample. acknowledgment the author wishes to thank ahmed jarad, ibrahim khalafallah, mohammed assaf and mahmoud abu-mustafa for their assistance in conducting the experimental program. references [1] y. nahata, n. kholia, and t. tank, "effect of curing methods on efficiency of curing of cement mortar," apcbee procedia, vol. 9, pp. 222-229, 2014. [2] j. yang, j. fan, b. kong, c. cai, and k. chen, "theory and application of new automated concrete curing system," journal of building engineering, vol. 17, pp. 125-134, 2018. [3] a. m.
neville, properties of concrete, fifth ed. pearson education, 2013. [4] b. mather, "self-curing concrete, why not?," concrete international, vol. 23, no. 1, pp. 46-47, 2001. [5] r. dhir, p. hewlett, j. lota, and t. dyer, "an investigation into the feasibility of formulating 'self-cure' concrete," materials and structures, vol. 27, no. 10, p. 606, 1994. [6] s. weber and h. reinhardt, "various curing methods applied to high-performance concrete with natural and blended aggregates," in proceedings of the fourth international symposium on the utilisation of high strength/high performance concrete, 1996, vol. 3, pp. 29-3. [7] o. m. jensen and p. lura, "techniques and materials for internal water curing of concrete," materials and structures, vol. 39, no. 9, pp. 817-825, 2006. [8] d. p. bentz and w. j. weiss, internal curing: a 2010 state-of-the-art review. us department of commerce, national institute of standards and technology, gaithersburg, maryland, 2011. [9] m. arafa, b. a. tayeh, m. alqedra, s. shihada, and h. hanoona, "investigating the effect of sulfate attack on compressive strength of recycled aggregate concrete," journal of engineering research and technology, vol. 4, no. 4, 2017. [10] a. bentur, s.-i. igarashi, and k. kovler, "prevention of autogenous shrinkage in high-strength concrete by internal curing using wet lightweight aggregates," cement and concrete research, vol. 31, no. 11, pp. 1587-1591, 2001. [11] m. kamal, m. safan, a. bashandy, and a. khalil, "experimental investigation on the behavior of normal strength and high strength self-curing self-compacting concrete," journal of building engineering, 2017. [12] d. shen, j. jiang, y. jiao, j. shen, and g. jiang, "early-age tensile creep and cracking potential of concrete internally cured with pre-wetted lightweight aggregate," construction and building materials, vol. 135, pp. 420-429, 2017. [13] k. wu, f. chen, c. xu, s.-q. lin, and y.
nan, "internal curing effect on strength of recycled concrete and its enhancement in concrete-filled thin-wall steel tube," construction and building materials, vol. 153, pp. 824-834, 2017. [14] j. t. kevern and q. c. nowasell, "internal curing of pervious concrete using lightweight aggregates," construction and building materials, vol. 161, pp. 229-235, 2018. [15] n. k. lee, s. y. abate, and h.-k. kim, "use of recycled aggregates as internal curing agent for alkali-activated slag system," construction and building materials, vol. 159, pp. 286-296, 2018. [16] d. shen, j. jiang, j. shen, p. yao, and g. jiang, "influence of prewetted lightweight aggregates on the behavior and cracking potential of internally cured concrete at an early age," construction and building materials, vol. 99, pp. 260-271, 2015. [17] m. i. mousa, m. g. mahdy, a. h. abdel-reheem, and a. z. yehia, "mechanical properties of self-curing concrete (scuc)," hbrc journal, vol. 11, no. 3, pp. 311-320, 2015. [18] m. i. mousa, m. g. mahdy, a. h. abdel-reheem, and a. z. yehia, "physical properties of self-curing concrete (scuc)," hbrc journal, vol. 11, no. 2, pp. 167-175, 2015. [19] astm d6473-15, "standard test method for specific gravity and absorption of rock for erosion control," astm international, west conshohocken, pa, 2015. [20] astm c642-13, "standard test method for density, absorption, and voids in hardened concrete," astm international, west conshohocken, pa, 2013. [21] j. dayalan and m. buellah, "internal curing of concrete using prewetted light weight aggregates," international journal of innovative research in science, engineering and technology, vol. 3, no. 3, pp. 10554-10560, 2014. [22] astm c191-15, "standard test methods for time of setting of hydraulic cement by vicat needle," astm international, west conshohocken, pa, 2015.
Journal of Engineering Research and Technology, Volume 5, Issue 3, September 2018

Incorporation of Waste Glass and Bottom Ash in Concrete Construction

Mohammed H. Arafa
Associate Professor of Civil Engineering, Islamic University of Gaza, Palestine

Abstract: This research investigated the effect of incorporating waste glass powder and raw bottom ash in concrete construction. Laboratory work was designed to identify the performance of concrete mixes using waste glass powder and raw bottom ash as a partial substitute for cement. The performance of the prepared mixes was evaluated in terms of workability and compressive strength. The first series of concrete mixes was prepared with 10%, 20% and 30% glass powder as a replacement of the cement content. In the second series, the cement content was replaced with 10%, 15% and 20% raw bottom ash.
Further, a third series was carried out with 20% of the cement content replaced by 10% glass powder and 10% raw bottom ash. Incorporating glass powder and/or raw bottom ash as a partial replacement of the cement content indicated a significant improvement in slump values. Replacing 10% of the cement with glass powder proved to be the optimum content, for which the ultimate compressive strength was higher than that of the control mix. However, incorporating raw bottom ash in concrete as a partial substitute for cement decreased the strength compared with the control mix.

Index Terms: concrete, glass powder, bottom ash, slump, compressive strength.

1. Introduction

The Gaza Strip suffers from scarcity of land: about two million inhabitants live in an area of no more than 360 km2. This leaves very limited areas available for landfills, making the disposal of solid waste a real issue in the Gaza Strip. Further, the flow of construction materials into the Gaza Strip, such as cement, is a very complicated process. This situation pushed researchers to optimize solid waste recovery in order to reduce the solid waste quantities going to the current landfills. Glass waste and raw bottom ash are solid waste constituents that could be utilized as cement-replacing materials owing to their pozzolanic reaction. The quantities of waste glass in the Gaza Strip have been increasing significantly without being recycled, increasing the risk to public health given the scarcity of land. A study conducted by the United Nations Development Programme in 2012 [1] showed that the Gaza Strip produced about 37 tons per day of waste glass in 2012 and anticipated this quantity to reach 72 tons per day in 2040. Using waste glass in construction is one of the potential ways to both enhance the performance of concrete and reduce the adverse social and environmental effects of disposing of such solid waste.
Several studies have shown that glass has a chemical composition that can be classified as cementitious, so that it can partially replace the cement content in concrete mixes [2-5]. Grinding and milling waste glass to micro sizes can improve the pozzolanic reaction between the glass constituents and the cement hydrates, resulting in the production of calcium silicate hydrates [6-9]. Researchers have replaced a varying percentage of the cement content in concrete mixes with waste glass powder [4, 8-11], over a wide range of contents from 10% to 30%. The effect of adding waste glass powder was studied on various mechanical properties of concrete, such as workability, compressive strength, flexural strength and tensile strength. Due to the slow pozzolanic reaction between the silicic acid available in waste glass powder (WGP) and the cement hydrates, the improvement in mechanical properties of concrete samples containing WGP was observed at later concrete ages [4]. However, Rashad [7] reviewed past research on glass concrete and indicated that it was not conclusive in terms of workability and strength; the chloride resistance of glass concrete was similar to that of the control sample. Studies have also investigated the potential of using glass cullets as fine and/or coarse aggregates [4, 7, 11-13]. The results of these studies indicated the possibility of replacing natural aggregates with waste glass; however, the alkali-silica reaction and deleterious chemical constituents should be considered. Aggarwal, et al. [14] and Andrade, et al. [15] studied the effect of using bottom ash as a replacement of fine aggregates. They found that the strength development for various replacement percentages (0-50%) of fine aggregates with bottom ash can easily be equated to the strength development of normal concrete at various ages. Rafieizonooz, et al.
[16] investigated concrete mixes in which sand was replaced by bottom ash waste and cement by fly ash. Concrete specimens were prepared incorporating 0, 20, 50, 75 and 100% bottom ash replacing sand, and 20% coal fly ash by mass as a substitute for ordinary Portland cement. The pozzolanic properties of a coal combustion bottom ash were investigated in [17]. Plain pastes containing equal amounts of calcium hydroxide and bottom ash were prepared and analyzed at different ages for their strength and calcium hydroxide consumption. At early ages, bottom ash does not react with calcium hydroxide; its pozzolanic reaction proceeds slowly and accelerates gradually, becoming significant after 28 days and especially after 90 days. Jaturapitakkul and Cheerarot [18] investigated the potential of using bottom ashes as a pozzolanic material. They incorporated the bottom ashes as a replacement of Portland cement Type I in mortars and concrete mixes. The results indicated that the compressive strength of mixes having 20 to 30% bottom ash as cement substitute was significantly less than that of the cement mortar. The current research aims at investigating the incorporation of waste glass and/or bottom ash in concrete construction. This aim was achieved through laboratory work to identify the performance of concrete mixes using waste glass powder and/or raw bottom ash as a partial substitute for cement. The performance of the prepared mixes was evaluated in terms of workability and compressive strength.

2. Experimental Investigation

2.1 Materials

Waste glass was obtained from a local glass-manufacturing factory. The waste glass was milled and the fraction passing the 75 µm sieve was collected for the experimental work. The fineness of the waste glass powder is 3319.12 cm2/kg measured by Blaine air permeability according to ASTM C204 [19].
The specific gravity and pH of the waste glass are 2.60 and 10.26, respectively. Cement CEM II A-M 42.5 N conforming to EN 197-1 was utilized in the current study. The specific gravity and fineness of the cement were measured to be 3.14 and 3898.42 cm2/kg according to ASTM C187 [20] and ASTM C786/C786M [21], respectively. The natural coarse and fine aggregates and the glass coarse aggregates were prepared according to the requirements of ASTM C778 [22]; their gradations are shown in Tables 1 and 2.

Table 1: Gradation of natural aggregates (% passing)

Coarse aggregates:
Sieve opening (mm)   Type I   Type II   Type III
37.5                 100      100       100
25                   91.46    100       100
19                   58.06    94.13     100
12.5                 2.03     33.07     92.20
9.5                  0.95     4.36      82.54
4.75                 0.6      0.15      5.82
Pan                  0.04     0.01      0.06

Fine aggregates:
Sieve opening (mm)   % passing
1.180                100
0.600                99.97
0.425                94.34
0.300                32.73
0.150                1.708
0.075                1.00
Pan                  0.72

Table 2: Gradation of glass coarse aggregates (% passing)

Sieve opening (mm)   Grade 1   Grade 2
25                   100       100
19                   88.23     100
12.5                 27.41     99.99
9.5                  8.23      99.97
4.75                 0.29      1.1627
Pan                  0.01      0.01

The absorption and bulk specific gravity of the natural aggregates are presented in Table 3.

Table 3: Physical properties of aggregates

Aggregate type    Absorption capacity (%)   Bulk specific gravity (SSD)   Dry-rodded unit weight (kg/m3)
Coarse Type I     1.58                      2.58                          111
Coarse Type II    0.97                      2.63                          111
Coarse Type III   1.27                      2.62                          111
Fine              1.27                      2.61                          -

2.2 Mix Proportions and Preparation

The concrete mix was designed according to ACI 211.1 [23]. The obtained mix proportions are presented in Table 4.

Table 4: Mix proportions of concrete

Material              Quantity (kg/m3)
Cement                330
Coarse Type I         540
Coarse Type II        330
Coarse Type III       350
Sand                  660
Water                 155
Water/cement ratio    0.47
Superplasticizer      2.31

The replacement of cement was conducted in three phases. The first phase was carried out by replacing 10%, 20% and 30% of the cement with glass powder. In the second phase, 10%, 15% and 20% of the cement was substituted by bottom ash.
The third phase comprised replacing 20% of the cement with 10% glass powder and 10% raw bottom ash in order to study their combined effect. In all mixes, the weights of Type I, Type II and Type III coarse aggregate and of sand were kept constant at 540, 330, 350 and 660 kg per cubic meter, respectively. The w/c ratio was likewise fixed at 0.47 and the superplasticizer content at 2.31 kg. Table 5 presents the details of the prepared specimens, including the control mix, the glass powder mixes, the bottom ash mixes and the glass powder-bottom ash mix. The concrete was prepared according to the standard method of making and curing test specimens in the laboratory [24]. After 24 hours, the hardened concrete was removed from the 10 cm cubic molds and placed for curing in a water tank at a temperature ranging from 21 to 25 °C until the date of testing.

Table 5: Glass and bottom ash content of the mixes

Mix                          Cement (kg)   Glass powder (kg)   Bottom ash (kg)
Control mix                  330           -                   -
Glass powder mix, 10%        297           33                  -
Glass powder mix, 20%        264           66                  -
Glass powder mix, 30%        231           99                  -
Bottom ash mix, 10%          297           -                   33
Bottom ash mix, 15%          280.5         -                   49.5
Bottom ash mix, 20%          264           -                   66
10% glass + 10% bottom ash   264           33                  33

2.3 Testing Program

The slump test according to ASTM C143/C143M [25] was carried out to obtain the effect of the various glass and bottom ash contents on the workability of the concrete. The cubic compressive strength test based on ASTM C39/C39M [26] was performed; a single test result was obtained by averaging three companion cubes. The testing program comprised three series. In the first phase, the compressive strength was tested for concrete specimens in which cement was replaced by 10%, 20% and 30% glass powder; the strength of these specimens was obtained at the ages of 7, 14, 28 and 60 days. The second phase comprised concrete specimens in which cement was replaced by 10%, 15% and 20% raw bottom ash.
Strength results of these specimens were obtained at the ages of 14 and 28 days. The third phase included concrete specimens with 20% of the cement replaced by 10% glass powder and 10% raw bottom ash; the strength of these specimens was obtained at the ages of 7, 14, 28 and 60 days.

3. Results and Analysis

3.1 Workability

The workability of the tested concrete was measured using the slump test. Figure 1 presents the slump values of the specimens having the cement content partially replaced by 0%, 10%, 20% and 30% glass powder. The slump results showed that the workability of the specimens improved as the glass powder content replacing cement increased, with the maximum improvement at 30% glass powder replacement. This can be attributed to the mechanical effect of the smooth glass powder particles: the presence of such smooth fine particles is believed to ease the movement of the aggregate particles over each other, improving the overall workability of the mix. Further, the absorption capacity of the glass powder particles is less than that of the cement particles, which leaves more free water for the lubricating effect of the mixing water. These findings agree very well with [27, 28].

Figure 1: Slump with glass powder content

The workability obtained when utilizing raw bottom ash as a partial substitute for cement is indicated, in terms of slump, in Figure 2. The slump results show that the investigated contents of raw bottom ash slightly improve the slump values of the specimens. There was no adverse effect on slump with partial substitution of cement by raw bottom ash at any of the investigated replacement levels. This could be attributed to the mechanical lubricant action of the bottom ash particles, which allows the aggregate particles to move more freely around each other [17].

Figure 2: Slump with bottom ash content
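The cement-replacement quantities of Table 5 follow from simple arithmetic on the base cement content of the mix design; a minimal sketch, assuming the 330 kg/m3 figure from Table 4 (the helper name `mix_quantities` is illustrative, not from the paper):

```python
# Sketch: reproduce the cement / glass powder / bottom ash quantities of Table 5.
# BASE_CEMENT is the cement content of the control mix design (Table 4);
# the function name is a hypothetical helper, not from the paper.

BASE_CEMENT = 330.0  # kg per cubic meter of concrete

def mix_quantities(glass_frac: float, ash_frac: float) -> dict:
    """Masses (kg/m3) when the given fractions of the cement are replaced
    by glass powder and/or raw bottom ash."""
    replaced = BASE_CEMENT * (glass_frac + ash_frac)
    return {
        "cement": BASE_CEMENT - replaced,
        "glass_powder": BASE_CEMENT * glass_frac,
        "bottom_ash": BASE_CEMENT * ash_frac,
    }

print(mix_quantities(0.10, 0.00))  # 10% glass powder mix: 297 kg cement, 33 kg glass
print(mix_quantities(0.00, 0.15))  # 15% bottom ash mix: 280.5 kg cement, 49.5 kg ash
print(mix_quantities(0.10, 0.10))  # combined mix: 264 kg cement, 33 kg each
```

The aggregate and water quantities stay at their Table 4 values in every mix, so only the binder column changes between rows of Table 5.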
3.2 Compressive Strength of Glass Powder Samples

The compressive strength of specimens with partial substitution of cement by glass powder was obtained at several ages. Table 6 and Figure 3 show the compressive strengths of specimens with cement replaced by 10%, 20% and 30% glass powder at the ages of 7, 14, 28 and 60 days.

Table 6: Compressive strength of specimens with glass powder (kg/cm2)

Glass powder content   7 days   14 days   28 days   60 days
0%                     235      303       338       414
10%                    216      265       337       418
20%                    186      234       278       293
30%                    143      167       231       270

Figure 3: Compressive strength at several glass powder replacement ratios at 7, 14, 28 and 60 days

At 7 days, the specimens with 10%, 20% and 30% of the cement replaced by glass powder achieved 91.91%, 79.14% and 60.85% of the control mix (without glass powder), respectively. At 14 days, the 10%, 20% and 30% glass powder specimens showed strengths of 87.45%, 77.22% and 55.11% of the control mix, respectively. The 28-day strengths of the 10%, 20% and 30% glass powder specimens were 99.70%, 82.24% and 68.34% of the control mix, respectively. At 60 days, the 10%, 20% and 30% glass powder specimens revealed strengths of 100.96%, 70.77% and 65.21% of the control mix, respectively. In general, the results indicated that the strength is adversely affected by increasing the glass powder content. Figure 4 presents the compressive strength results for all glass powder contents with respect to specimen age. The results show that the compressive strength continues to develop with age for all glass powder contents. The 10% glass powder specimens started to compensate for the early strength reduction at the age of 28 days.
Further, at the age of 60 days the strength of the 10% glass powder specimens became slightly higher than that of the 0% glass powder specimens. This can be attributed to the fact that the hydration of glass powder as a cementitious material is slow at the beginning and accelerates later [28-30]. It can be concluded that replacing 10% of the cement content with glass powder gives the optimum compressive strength without sacrificing the ultimate strength. The failure mode for the control mix and for all specimens with glass powder was the normal pyramid failure mode. Figure 5 shows a photo of the failure mode of concrete specimens with 10% glass powder replacement after 60 days.

Figure 4: Compressive strength for glass powder contents with respect to specimen age

Figure 5: Failure mode of concrete specimens with 10% glass powder after 60 days

3.3 Compressive Strength of Bottom Ash Samples

The compressive strength of the specimens with raw bottom ash replacing 0%, 10%, 15% and 20% of the cement content is presented in Figures 6 and 7. The 14-day compressive strengths of the specimens with 10%, 15% and 20% of the cement substituted by raw bottom ash achieved 71.28%, 67.65% and 65.34% of the control mix, respectively. At 28 days, the compressive strengths of the specimens with 10%, 15% and 20% raw bottom ash reached 69.52%, 67.45% and 66.27% of the control mix, respectively. These results indicate that the incorporation of original bottom ash has a negative influence on the compressive strength, which agrees well with [18].
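The percent-of-control figures quoted in the text are straightforward ratios against the 0% column of Table 6; a quick check (the helper name is illustrative, and small last-decimal differences versus the paper are rounding in the original text):

```python
# Sketch: recompute glass powder strengths as a percentage of the control
# mix, using the values of Table 6 (kg/cm2). The helper name
# `percent_of_control` is a hypothetical label, not from the paper.

control  = {7: 235, 14: 303, 28: 338, 60: 414}   # 0% glass powder
glass_10 = {7: 216, 14: 265, 28: 337, 60: 418}   # 10% glass powder

def percent_of_control(strengths: dict, control: dict) -> dict:
    """Strength of a mix expressed as % of the control mix at each age (days)."""
    return {age: round(100.0 * s / control[age], 2)
            for age, s in strengths.items()}

print(percent_of_control(glass_10, control))
# 7 days ~ 91.91%, 28 days ~ 99.70%, 60 days ~ 100.97% (paper rounds to 100.96%)
```

The same ratio applied to the 60-day column shows the crossover the text describes: 418/414 exceeds 100%, so the 10% glass powder mix slightly outperforms the control at 60 days.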
Jaturapitakkul and Cheerarot [18] found that original bottom ash should not be used as a pozzolanic material.

Figure 6: Compressive strength at several raw bottom ash replacement ratios at 14 and 28 days

Figure 7: Compressive strengths of bottom ash contents at 14 and 28 days

Figure 8 shows the average 7, 14 and 28-day compressive strengths of the concrete mix combining both glass powder and raw bottom ash as a replacement of cement. This mix was prepared with 10% of the cement replaced by glass powder and a further 10% replaced by bottom ash. Its average compressive strength at 7, 14 and 28 days achieved 81.27%, 75.24% and 76.33% of the control mix (without glass powder and bottom ash), respectively. The 28-day compressive strengths of the control mix, the 10% glass powder mix, the 10% bottom ash mix and the 10% glass powder + 10% bottom ash mix were 338, 265, 235 and 258 kg/cm2, respectively. These results indicate that the presence of 10% glass powder reduces the adverse effect of the bottom ash content: the 28-day compressive strength of the 10% bottom ash mix improved from 235 kg/cm2 to 258 kg/cm2 when a further 10% of the cement was replaced by glass powder.

Figure 8: Compressive strengths with 10% bottom ash and 10% glass powder contents at 28 days

4. Conclusions

This research investigated the potential of incorporating waste glass powder and raw bottom ash as a substitute for cement in concrete construction. The performance of the prepared mixes was evaluated in terms of workability and compressive strength. The first series of concrete mixes was prepared with 10%, 20% and 30% glass powder as a replacement of the cement content. In the second series, the cement content was replaced with 10%, 15% and 20% raw bottom ash. Further, a third series was carried out with 20% of the cement content replaced by 10% glass powder and 10% raw bottom ash.
The following conclusions are drawn:

1. The slump results showed that the workability of the specimens improved as the glass powder content replacing cement increased; the maximum improvement was reached at 30% glass powder replacement.

2. The slump results indicated that incorporating raw bottom ash as a replacement of cement slightly improves the slump values of the specimens; no adverse effect on slump was observed at any of the investigated replacement levels.

3. The compressive strength results showed that strength development continues with age for all glass powder contents. The 10% glass powder specimens started to compensate for the early strength reduction at the age of 28 days, and at 60 days their strength became slightly higher than that of the 0% glass powder specimens. It can be concluded that replacing 10% of the cement content with glass powder gives the optimum compressive strength without sacrificing the ultimate strength.

4. The 14-day compressive strengths of the specimens with 10%, 15% and 20% of the cement substituted by raw bottom ash achieved 71.28%, 67.65% and 65.34% of the control mix, respectively.
At 28 days, the compressive strengths of the specimens with 10%, 15% and 20% raw bottom ash reached 69.52%, 67.45% and 66.27% of the control mix, respectively. The 14 and 28-day compressive strengths of mixes with cement replaced by raw bottom ash thus indicate a negative influence on the compressive strength.

5. The compressive strength of the concrete mix having both glass powder and raw bottom ash as a replacement of cement indicated that the presence of glass powder reduces the adverse effect obtained when replacing cement by bottom ash alone.

Acknowledgment

The author wishes to thank Mohammed Ismail, Mostafa Albaz, Mohammed Maliha and Mosab Abu Anza for their assistance in conducting the experimental program.

5. References

[1] UNDP-PAPP, "Feasibility study and detailed design for solid waste management in the Gaza Strip," UNDP, Gaza, 2012.
[2] P. Soroushian, "Strength and durability of recycled aggregate concrete containing milled glass as partial replacement for cement," Construction and Building Materials, vol. 29, pp. 368-377, 2012.
[3] M. Kamali and A. Ghahremaninezhad, "Effect of glass powders on the mechanical and durability properties of cementitious materials," Construction and Building Materials, vol. 98, pp. 407-416, 2015.
[4] G. S. Islam, M. Rahman, and N. Kazi, "Waste glass powder as partial replacement of cement for sustainable concrete practice," International Journal of Sustainable Built Environment, vol. 6, no. 1, pp. 37-44, 2017.
[5] M. Torres-Carrasco and F. Puertas, "Waste glass as a precursor in alkaline activation: chemical process and hydration products," Construction and Building Materials, vol. 139, pp. 342-354, 2017.
[6] V. Vaitkevičius, E. Šerelis, and H. Hilbig, "The effect of glass powder on the microstructure of ultra high performance concrete," Construction and Building Materials, vol. 68, pp. 102-109, 2014.
[7] A. M.
Rashad, "Recycled waste glass as fine aggregate replacement in cementitious materials based on Portland cement," Construction and Building Materials, vol. 72, pp. 340-357, 2014.
[8] A. Omran and A. Tagnit-Hamou, "Performance of glass-powder concrete in field applications," Construction and Building Materials, vol. 109, pp. 84-95, 2016.
[9] H. Du and K. H. Tan, "Properties of high volume glass powder concrete," Cement and Concrete Composites, vol. 75, pp. 22-29, 2017.
[10] G. Vijayakumar, M. H. Vishaliny, and D. D. Govindarajulu, "Studies on glass powder as partial replacement of cement in concrete production," International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 2, pp. 153-157, 2013.
[11] K. Afshinnia and P. R. Rangaraju, "Impact of combined use of ground glass powder and crushed glass aggregate on selected properties of Portland cement concrete," Construction and Building Materials, vol. 117, pp. 263-272, 2016.
[12] A. Siam, "Properties of concrete mixes with waste glass," M.Sc. thesis, Department of Civil Engineering, The Islamic University of Gaza, Gaza, 2011.
[13] H. Abdeen, "Properties of fired clay bricks mixed with waste glass," M.Sc. thesis, Department of Civil Engineering, The Islamic University of Gaza, 2016.
[14] P. Aggarwal, Y. Aggarwal, and S. Gupta, "Effect of bottom ash as replacement of fine aggregates in concrete," Asian Journal of Civil Engineering (Building and Housing), vol. 8, no. 1, pp. 49-62, 2007.
[15] L. B. Andrade, J. Rocha, and M. Cheriaf, "Evaluation of concrete incorporating bottom ash as a natural aggregates replacement," Waste Management, vol. 27, no. 9, pp. 1190-1199, 2007.
[16] M. Rafieizonooz, J. Mirza, M. R. Salim, M. W. Hussin, and E. Khankhaje, "Investigation of coal bottom ash and fly ash in concrete as replacement for sand and cement," Construction and Building Materials, vol. 116, pp. 15-24, 2016.
[17] M. Cheriaf, J. C. Rocha, and J.
Pera, "Pozzolanic properties of pulverized coal combustion bottom ash," Cement and Concrete Research, vol. 29, no. 9, pp. 1387-1391, 1999.
[18] C. Jaturapitakkul and R. Cheerarot, "Development of bottom ash as pozzolanic material," Journal of Materials in Civil Engineering, vol. 15, no. 1, pp. 48-53, 2003.
[19] ASTM C204, "Standard test methods for fineness of hydraulic cement by air-permeability apparatus," ASTM International, USA, 2017.
[20] ASTM C187, "Standard test method for amount of water required for normal consistency of hydraulic cement paste," ASTM International, USA, 2016.
[21] ASTM C786/C786M, "Standard test method for fineness of hydraulic cement and raw materials by the 300-μm (No. 50), 150-μm (No. 100), and 75-μm (No. 200) sieves by wet methods," ASTM International, USA, 2017.
[22] ASTM C778, "Standard specification for standard sand," ASTM International, USA, 2013.
[23] ACI 211.1, "Standard practice for selecting proportions for normal, heavyweight, and mass concrete," ACI Committee 211, 2002.
[24] ASTM C109/C109M, "Standard practice for making and curing concrete test specimens in the laboratory," ASTM International, USA, 2016.
[25] ASTM C143/C143M, "Standard test method for slump of hydraulic-cement concrete," ASTM International, USA, 2007.
[26] ASTM C39/C39M, "Standard test method for compressive strength of cylindrical concrete specimens," ASTM International, USA, 2017.
[27] J. Khatib, E. Negim, H. Sohl, and N. Chileshe, "Glass powder utilisation in concrete production," European Journal of Applied Sciences, vol. 4, no. 4, pp. 173-176, 2012.
[28] A. A. Kadir, M. R. Ismail, M. M. Al Bakri, N. A. Sarani, and M. I. H. Hassan, "The usage of glass waste as cement replacement," Key Engineering Materials, vol. 673, p. 95, 2016.
[29] Y. Aggarwal and R.
Siddique, "Microstructure and properties of concrete using bottom ash and waste foundry sand as partial replacement of fine aggregates," Construction and Building Materials, vol. 54, pp. 210-223, 2014.
[30] H. Lee, A. Hanif, M. Usman, J. Sim, and H. Oh, "Performance evaluation of concrete incorporating glass powder and glass sludge wastes as supplementary cementing material," Journal of Cleaner Production, vol. 170, pp. 683-693, 2018.

Journal of Engineering Research and Technology, Volume 1, Issue 1, February 2014

Load Reduction in Wind Energy Converters Using Individual Pitch Control

Hala J. El-Khozondar, Amani S. Abu Reyala, Mathias S. Müller

Abstract: To capture the maximum amount of energy at low cost, wind turbines should be large, and the individual pitch control (IPC) technique has to be considered. Asymmetric loads on the rotor blades, caused by the wind speed and the gravitational force, might lead to fatigue and damage of the blades. Consequently, it is important to consider health monitoring of wind energy converters to detect structural problems early. For this purpose, reliable and efficient optical grating sensors are added at the root of the blades to sense the stresses and strains caused by the wind and gravitational forces. The purpose of this work is to develop a simple model of the blade to study the moments generated by the wind and by gravity at each blade. It is shown that the calculated values of the strains and stresses agree with previously calculated values.

Index Terms: individual pitch control, wind turbine, power coefficient, stress, strain, moments.

I. Introduction

Wind energy is a renewable energy source that does not cause environmental pollution. The European Wind Energy Association (EWEA) [1] has set targets through 2020-2030 to obtain more energy from renewable sources, including wind energy, resulting in great growth in wind energy technology.
Wind turbine rotor blades may rotate around a horizontal or a vertical axis. Tower heights vary from 40 m to 80 m, rotor blade diameters range from 50 m to 130 m, and power ratings may reach 5 MW and above [2]. The wind turbine capacity increases with the rotor blade diameter and the tower height. However, this increase threatens the stability of the turbine, which might lead to unrecoverable damage due to dynamic loads caused by wind turbulence, tower shadow and rotor unbalance [3]. This requires a reliable control method for load reduction [4]. Various control methods have been proposed for this purpose, among them individual pitch control (IPC), which is employed to lessen loads [5-8]. In IPC the three blades are controlled individually to decrease the unbalanced rotor load by adjusting the pitch angle of each blade independently. IPC relies on load sensors to measure the loads acting on the blades, and control algorithms based on load sensors have been implemented and assessed by different authors [5, 9]. The available load sensors with high performance and reliability are expensive; this increases the cost of the wind turbine parts and thus the cost of the energy produced by the wind energy converter. This has motivated several companies to produce low-cost, reliable load sensors. Optical fiber Bragg grating (FBG) sensors are immune to electromagnetic interference, resist environmental harshness, are compact in size and provide accurate measurements of stresses and strains, as shown in previous studies [10-15]. fos4x installs the FBG sensors at the roots of the rotor blades. The complex assembly of the rotor blade makes aeroelastic simulations hard. The aim of this work is to introduce a simplified model for individual pitch control of a wind energy converter to obtain the maximum possible value of the power coefficient. The model is used to study the moments on the blades caused by the applied forces of wind and gravity.
The stresses and strains are then derived from the related moments and compared with the values measured by the FBG sensors. Section 2 covers the physics concepts of wind energy and the effect of the tip speed ratio on the power coefficient. Section 3 is dedicated to the need for a reliable and efficient sensor for sensing the bending moments, stresses and strains caused by wind, gravitational and centrifugal forces at the root of the blade. Section 4 focuses on the analytical model of the moments generated by the wind and by gravity for each blade. Section 5 is devoted to computing the stress and strain from the bending moments and comparing them with the real values measured by the fiber Bragg grating sensors of the German company fos4x.

II. Physics Concepts of Wind Energy

In a wind turbine, wind power is converted to electrical power. Understanding the physical concepts of the conversion of wind energy to electricity helps to enhance the performance of wind turbines. The wind power is related to the wind speed as follows:

P = (1/2) ρ A u³    (1)

where A is the rotor blades' sweeping area, ρ is the air density, and u is the relative wind speed. Equation (1) shows that the wind power depends on the cube of the wind speed; consequently, accurate estimation of wind power requires accurate wind speed data. The wind attacks the rotor plane at an angle α between the wind direction and the chord line of the airfoil, as displayed in Figure 1.

H. J. El-Khozondar is with the Electrical Engineering Department, Islamic University of Gaza, Gaza, Palestine. E-mail: hkhozondar@iugaza.edu
A. S. Abu Reyala is with Palestine University, Gaza, Palestine.
M. S. Müller is with the Measurements and Sensor Technology Institute, TUM, Germany.
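Equation (1) can be evaluated directly; a minimal sketch, where the rotor radius, air density and wind speed are illustrative example values, not taken from the paper:

```python
# Sketch: available wind power from equation (1), P = 0.5 * rho * A * u^3.
# The rotor radius and wind speed below are hypothetical examples.
import math

def wind_power(rho: float, radius: float, u: float) -> float:
    """Power (W) in the wind crossing a rotor disc of the given radius (m)
    at relative wind speed u (m/s); rho is the air density (kg/m^3)."""
    area = math.pi * radius ** 2          # swept area A
    return 0.5 * rho * area * u ** 3

# Example: 40 m blades (80 m rotor diameter), standard air, 10 m/s wind
p = wind_power(rho=1.225, radius=40.0, u=10.0)
print(f"{p / 1e6:.2f} MW available in the wind")
```

The cubic dependence is the practical point: doubling the wind speed multiplies the available power by eight, which is why small wind-speed measurement errors translate into large power-estimation errors.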
The two main components of the wind force acting on the rotor blade are the lift force $F_L$ and the drag force $F_D$. These forces are given as follows:

$F_L = \frac{1}{2} c_L \rho u^2 A$  (2)

$F_D = \frac{1}{2} c_D \rho u^2 A$  (3)

where $c_L$ is the lift force coefficient and $c_D$ is the drag force coefficient [2]. The drag force is parallel to the initial direction of motion, while the lift force is perpendicular to it, as illustrated in Figure 1. The wind energy converted to mechanical energy on the wind turbine blades is limited by Betz' law (maximum 59%). The ratio of the mechanical power of the rotor blades, $P_r$, to the wind power, $P$, defines the power coefficient of the turbine, $c_p$:

$c_p = P_r / P$  (4)

where $P_r$ is determined from the following formula:

$P_r = \frac{1}{2} c_p \rho A u^3$  (5)

The dependence of $c_p$ on the tip speed ratio $\lambda$ and the pitch angle $\varphi$ can be approximated as in Equation (6):

$c_p(\lambda, \varphi) = c_1 \left( \frac{c_2}{\beta} - c_3 \varphi - c_4 \varphi^x - c_5 \right) \exp\!\left( -\frac{c_6}{\beta} \right)$  (6)

where the coefficients $c_1$-$c_6$ and $x$ take different values for different wind turbines and $\beta$ is a parameter defined by

$\frac{1}{\beta} = \frac{1}{\lambda + 0.08\,\varphi} - \frac{0.035}{\varphi^3 + 1}$  (7)

with $\lambda = \omega r / u$, where $r$ is the blade length and $\omega$ is the blade angular velocity [16]. Equation (6) is plotted in Figure 2 for pitch angles $\varphi$ = 0°, 5°, 10°, 15°, 20°, using the coefficient values $c_1 = 0.5$, $c_2 = 116$, $c_3 = 0.4$, $c_4 = 0$, $c_5 = 5$, $c_6 = 21$ [17].

Figure 1: Forces exerted on the airfoil
Figure 2: The relationship between $c_p$ and $\lambda$

Figure 2 shows a group of typical $c_p$-$\lambda$ curves. The mechanical power converted by the turbine blades is a function of the rotational speed and reaches its maximum at a particular rotational speed, which varies with the pitch angle $\varphi$.

III Individual Pitch Control (IPC)

The wind turbine rotor bears different types of loads: aerodynamic loads, gravitational loads and centrifugal loads.
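The $c_p(\lambda, \varphi)$ approximation of Eqs. (6) and (7) is straightforward to evaluate numerically. The sketch below uses the coefficient set quoted above ($c_1 = 0.5$, $c_2 = 116$, $c_3 = 0.4$, $c_4 = 0$, $c_5 = 5$, $c_6 = 21$; the exponent is taken as $x = 1$, which is immaterial here since $c_4 = 0$) and locates the peak of each $c_p$-$\lambda$ curve, reproducing the behavior described for Figure 2.

```python
import math

# Coefficients quoted in the text [17]; x = 1 is an assumption (c4 = 0 anyway).
C1, C2, C3, C4, C5, C6, X = 0.5, 116.0, 0.4, 0.0, 5.0, 21.0, 1.0

def beta_inv(lam: float, phi_deg: float) -> float:
    """Eq. (7): 1/beta = 1/(lambda + 0.08*phi) - 0.035/(phi^3 + 1)."""
    return 1.0 / (lam + 0.08 * phi_deg) - 0.035 / (phi_deg ** 3 + 1.0)

def cp(lam: float, phi_deg: float) -> float:
    """Eq. (6): power coefficient versus tip speed ratio and pitch angle."""
    bi = beta_inv(lam, phi_deg)
    return C1 * (C2 * bi - C3 * phi_deg - C4 * phi_deg ** X - C5) * math.exp(-C6 * bi)

# The peak cp, and the tip speed ratio at which it occurs, shift as the
# pitch angle changes (the family of cp-lambda curves in Figure 2).
for phi in (0.0, 5.0, 10.0):
    best = max((cp(l / 10.0, phi), l / 10.0) for l in range(20, 160))
    print(f"phi = {phi:4.1f} deg  cp_max = {best[0]:.3f} at lambda = {best[1]:.1f}")
```

Increasing the pitch angle lowers the attainable $c_p$, which is exactly the lever that pitch control uses to shed load.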
These loads cause fatigue and vibration in the blades, which degrade the rotor blades [18]. These loads can be mitigated, and the amount of collected power controlled, using pitch control (PC), i.e. turning the attack angle of a rotor blade into or out of the wind. Each blade is exposed to different loads due to the variation of the wind speed across the rotor plane. For this reason, individual electric drives are used to control the pitching of each blade, in a process called IPC [5]. In this case the power coefficient $c_p$ defined in Equation (6) depends on the wind velocity, and the converted mechanical power becomes

$P_r(t) = c_p(\lambda(u(t)), \varphi)\, P = \frac{1}{2}\, c_p(\lambda(u(t)), \varphi)\, \rho A\, u^3(t)$  (8)

Figure 3: Adjustable pitch blades
Figure 4: Effect of the pitch angle on the output power

The variation of the received power with the wind speed and the pitch angle is plotted in Figure 4. The figure shows that as the pitch angle changes, the value of the maximum output power varies. The load sensor is the most important component of IPC for obtaining accurate values of the asymmetric loads on the blades. The measured load values are fed into the control algorithm, which takes the consequential action of pitching each individual blade to the right angle. Measuring stress is very important for obtaining correct information about the structural properties of the wind turbine tower and jacket in a harsh environment [17]. Fiber Bragg grating (FBG) sensors are used to obtain a continuous set of stress and strain data for monitoring structural behavior under different load conditions.
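The effect captured by Eq. (8) can be illustrated with a small sketch. The `cp()` used here is a simple illustrative placeholder (any model such as Eq. (6) could be substituted), and the rotor parameters are assumed values, not taken from the paper.

```python
import math

RHO = 1.225          # air density, kg/m^3
R = 40.0             # blade length / rotor radius, m (assumed)
OMEGA = 1.8          # rotor angular speed, rad/s (assumed)

def cp(lam: float, phi_deg: float) -> float:
    # Illustrative placeholder peaking near lambda = 8; stands in for Eq. (6).
    return max(0.0, 0.45 - 0.01 * (lam - 8.0) ** 2 - 0.005 * phi_deg)

def rotor_power(u: float, phi_deg: float) -> float:
    """Eq. (8): at fixed rotor speed, lambda = omega*r/u varies with the wind."""
    lam = OMEGA * R / u
    area = math.pi * R ** 2
    return 0.5 * cp(lam, phi_deg) * RHO * area * u ** 3

# Pitching the blade reduces cp and hence the captured power (cf. Figure 4).
print(rotor_power(9.0, 0.0) > rotor_power(9.0, 10.0))   # -> True
```

This is the mechanism IPC exploits: each blade's pitch is moved individually to trade captured power against load.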
An FBG sensor consists of one cable that transmits data from many different operating points and can withstand severe environments.

IV The Mathematical Model

Four FBG sensors placed at the root of each blade feed the system with the stress and strain. The values of the stress and strain imposed on the blades are calculated using a mathematical model and analytical simulation. The blade structure is very complex; therefore, the rotor blade is represented by a half-cylinder beam to simplify the problem. A full-cylinder beam was considered in previous work by Siddiqi [18]; the half cylinder is used here because it approximates the shape of the rotor blade more closely than the full cylinder. The half cylinder has length $r$, cross-section area $A$, outer chord length $c_2$, and inner chord length $c_1$, as shown in Figure 5. The blade rotates about the horizontal axis, taken to be the z-axis, by the angle $\theta$, and at the same time each blade rotates about its own axis, the x'-axis, by the pitch angle $\varphi$, as shown in Figure 6.

Figure 5: The half-cylinder model of the blade
Figure 6: Rotation of the blade around the z-axis and pitching around its x'-axis

In addition to the gravitational force, the wind flow against the rotor blades translates into two perpendicular forces, the lift and drag forces given in Equations (2) and (3). The total forces are transformed into the prime frame (x'y'z') using the transformation matrix in Equation (9):

$\begin{pmatrix} a_{x'} \\ a_{y'} \\ a_{z'} \end{pmatrix} = \begin{pmatrix} \sec\varphi & \tan\varphi\cos\theta & \tan\varphi\sin\theta \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} a_x \\ a_y \\ a_z \end{pmatrix}$  (9)

Starting with Equation (2), the lift force on one foil is integrated over the cross section of one foil as follows:

$dF_L = \frac{1}{2}\, c_L \rho u^2 \left( \hat a_x \sin\theta + \hat a_y \cos\theta \right) dA$  (10)
where

$dA = dA_2 - dA_1 = \frac{1}{2}\left[ c_2\,(2\,dr) - c_1\,(2\,dr) \right] = (c_2 - c_1)\, dr$  (11)

$dF_L$ is transformed to the prime frame using the transformation matrix in Equation (9), and a similar procedure is followed for the drag force $F_D$. The two forces in the prime frame are then resolved into an axial force ($dF_{x'}$) along the x' direction (the beam axis), a tangential force ($dF_{y'}$), and a normal force ($dF_{z'}$). Integration over the length of the beam gives the following forces:

$F_{x'} = \frac{1}{4}\, c_L \rho u^2 (c_2 - c_1)\, r \tan\varphi$  (12)

$F_{y'} = \frac{1}{4}\, \rho u^2 (c_2 - c_1)\left( c_L \sec\varphi \cos\theta - c_D \sin\theta \right) r$  (13)

$F_{z'} = \frac{1}{4}\, \rho u^2 (c_2 - c_1)\left( c_L \sec\varphi \sin\theta + c_D \cos\theta \right) r$  (14)

Moments in the edgewise and flapwise directions can then be calculated. Integrating the tangential forces over the blade length produces the rotor torque, while integrating the normal forces produces the rotor thrust:

$M_{torque} = \int_0^r r'\, \hat a_{x'} \times dF_{y'} = \frac{1}{8}\, \rho u^2 (c_2 - c_1)\left( c_L \sec\varphi \cos\theta - c_D \sin\theta \right) r^2\, \hat a_{z'}$  (15)

$M_{thrust} = \int_0^r r'\, \hat a_{x'} \times dF_{z'} = -\frac{1}{8}\, \rho u^2 (c_2 - c_1)\left( c_L \sec\varphi \sin\theta + c_D \cos\theta \right) r^2\, \hat a_{y'}$  (16)

The axial force from the wind produces zero moment because its cross product with the beam axis vanishes. The same procedure is applied to calculate the moment due to the gravitational force $F_g$. As the blade rotates, the gravitational force acts in the negative y-direction at the center of gravity (cg), located at a distance of one third of the total blade length from the root:

$F_g = -m g\, \hat a_y$  (17)

where $m$ is the mass of the blade and $g$ is the acceleration of gravity. In the prime frame the gravitational force is

$F'_g = -m g \left( \cos\theta\, \hat a_{y'} - \sin\theta\, \hat a_{z'} \right)$  (18)

and the corresponding gravitational moment $M_g$ is

$M_g = c_g\, \hat a_{x'} \times F'_g = -c_g\, m g \left( \cos\theta\, \hat a_{z'} + \sin\theta\, \hat a_{y'} \right)$  (19)

where $c_g = r/3$. The static load, i.e. the stress and strain, can then be calculated from the rotor thrust and rotor torque. Strain is a measure of the internal forces acting within a deformable body.
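The closed-form moments lend themselves to a quick numeric sketch: the functions below evaluate the rotor-torque magnitude of Eq. (15) and the gravitational-moment magnitude implied by Eq. (19). All parameter values (lift and drag coefficients, chord lengths, blade mass) are assumed for illustration and are not taken from the paper.

```python
import math

RHO = 1.225            # air density, kg/m^3
CL, CD = 1.0, 0.05     # lift and drag coefficients (assumed)
C2, C1 = 2.0, 1.2      # outer and inner chord lengths, m (assumed)
R = 40.0               # blade length, m (assumed)
M_BLADE = 6500.0       # blade mass, kg (assumed)
G = 9.81               # gravitational acceleration, m/s^2

def torque_moment(u: float, theta: float, phi: float) -> float:
    """Magnitude of Eq. (15): (1/8) rho u^2 (c2-c1)(cL sec(phi) cos(theta) - cD sin(theta)) r^2."""
    return 0.125 * RHO * u ** 2 * (C2 - C1) * (
        CL / math.cos(phi) * math.cos(theta) - CD * math.sin(theta)) * R ** 2

def gravity_moment() -> float:
    """Magnitude of Eq. (19): the blade weight acting at cg = r/3 from the root."""
    return (R / 3.0) * M_BLADE * G

# The wind-induced moment scales with u^2; the gravity moment does not depend
# on the wind speed at all.
print(torque_moment(12.0, 0.0, 0.0), gravity_moment())
```

Note the $u^2$ scaling: doubling the wind speed quadruples the wind-induced bending moment, while the gravitational moment stays fixed, which is why the two contributions must be tracked separately in the load model.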
Normal stress is the force per unit area applied in a direction perpendicular to the surface of an object. Table 1 summarizes the stresses and strains produced by the different types of forces on the blades; the total stress and strain are found using the superposition principle.

Table 1: Stresses and strains in the structural models

Normal force:
  Stresses: $\sigma_{xx} = N/A$; $\sigma_{yy} = \sigma_{zz} = 0$; $\tau_{xy} = \tau_{yz} = \tau_{xz} = 0$
  Strains:  $\varepsilon_{xx} = \sigma_{xx}/E$; $\varepsilon_{yy} = \varepsilon_{zz} = -\nu\,\sigma_{xx}/E$; $\gamma_{xy} = \gamma_{yz} = \gamma_{xz} = 0$

Symmetric bending about the z-axis:
  Stresses: $\sigma_{xx} = M_z\, y / I_{zz}$; $\tau_{xs} = V_y Q_z / (I_{zz}\, t)$; $\sigma_{yy} = \sigma_{zz} = 0$; $\tau_{yz} = 0$
  Strains:  $\varepsilon_{xx} = \sigma_{xx}/E$; $\varepsilon_{yy} = \varepsilon_{zz} = -\nu\,\sigma_{xx}/E$; $\gamma_{xs} = \tau_{xs}/G$; $\gamma_{yz} = 0$

Symmetric bending about the y-axis:
  Stresses: $\sigma_{xx} = M_y\, z / I_{yy}$; $\tau_{xs} = V_z Q_y / (I_{yy}\, t)$; $\sigma_{yy} = \sigma_{zz} = 0$; $\tau_{yz} = 0$
  Strains:  $\varepsilon_{xx} = \sigma_{xx}/E$; $\varepsilon_{yy} = \varepsilon_{zz} = -\nu\,\sigma_{xx}/E$; $\gamma_{xs} = \tau_{xs}/G$; $\gamma_{yz} = 0$

where $N$ is the normal force, $A$ the area, $\sigma$ the normal stress, $\varepsilon$ the strain, $\gamma$ the shear strain, $\nu$ Poisson's ratio, $E$ Young's modulus of elasticity, $G$ the modulus of rigidity, $M$ the bending moment, $V$ the shear force, $Q$ the first moment of area, $I$ the moment of inertia, $t$ the thickness and $J$ the polar moment of area.

V Simulation Results and Discussion

The results are calculated numerically using the expressions summarized in Table 1; the calculations are performed on the first blade. Figure 7 shows the variation of stress and strain with time, keeping the other variables constant: the nominal wind speed is 12.3 m/s, the attack angle is 10 degrees and the pitch angle is chosen to give maximum power. In Figure 8, the stress and strain are plotted as a function of the wind speed with all other variables fixed: the time equals 10 s, the pitch angle is selected to give maximum power, and the attack angle is 10 degrees.
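The superposition described under Table 1 amounts to adding the axial stress $N/A$ and the bending stress $M y / I$ at a point of the section, then applying Hooke's law. A minimal sketch follows, with assumed section properties and material constants ($E$ and $\nu$ are illustrative values, not given in the paper):

```python
E = 14.0e9        # Young's modulus, Pa (assumed, composite-like material)
NU = 0.3          # Poisson's ratio (assumed)

def axial_stress(n: float, area: float) -> float:
    """sigma_xx from a normal force: N / A."""
    return n / area

def bending_stress(m: float, y: float, i: float) -> float:
    """sigma_xx from a bending moment: M * y / I."""
    return m * y / i

def strains(sigma_xx: float) -> tuple[float, float]:
    """Hooke's law: eps_xx = sigma/E, eps_yy = eps_zz = -nu*sigma/E."""
    return sigma_xx / E, -NU * sigma_xx / E

# Superpose a normal force and a bending moment at a point y on the section
# (force, area, moment and section values are illustrative).
sigma = axial_stress(2.0e5, 0.05) + bending_stress(1.5e6, 0.4, 0.12)
eps_xx, eps_trans = strains(sigma)
print(sigma, eps_xx, eps_trans)
```

Each load case in Table 1 contributes one such $\sigma_{xx}$ term; the total strain at the FBG location is the sum of the individual contributions.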
Figure 9 displays the dependence of stress and strain on the pitch angle, keeping the other variables constant: the nominal wind speed is taken, the attack angle equals 10 degrees and the time equals 10 s. Note that the stress and strain vary sinusoidally with the pitch angle, as expected from the theoretical equations. The variation of stress and strain with the attack angle is illustrated in Figure 10, with the other variables kept constant. The calculated mean value of the strain ($\varepsilon_{xx}$) from Figures 7-10 is 0.4335 mm/m, 0.4329 mm/m, 0.4152 mm/m, and 0.4388 mm/m respectively. Siddiqi [18] calculated a mean strain ($\varepsilon_{xx}$) of 0.55 mm/m; the mean value obtained from our calculation deviates by 21% from that result [18]. The mean values of $\varepsilon_{yy} = \varepsilon_{zz} = -0.4780$ mm/m.

Figure 7: Stress and strain versus time
Figure 8: Stress and strain versus wind speed
Figure 9: Dependence of stress and strain on pitch angle
Figure 10: Variation of stress and strain with attack angle

VI Conclusion

The basic concept of individual pitch control of a wind energy converter to obtain the maximum possible value of the power coefficient has been developed. We used a simple half-cylinder model to represent the blades of the wind turbine in order to analyse and calculate the asymmetric loads on the rotor blades. Analytical equations were derived for the moments generated by the wind and gravity on each blade, and the strains and stresses were then calculated. The results agree with the expected values.
References

[1] J. Muth and E. Smith, 45% by 2030: Towards a truly sustainable energy system in the EU, report by the European Renewable Energy Council (EREC), May 2011.
[2] K. Grogg, Harvesting the wind: the physics of wind turbines, Physics and Astronomy Department, Carleton College, 2005.
[3] Y. Xingjia, W. Xiaodong, X. Zuoxia, L. Yingming and L. Jun, Individual pitch control for variable speed turbine blade load mitigation, ICSET, IEEE (2008) 767-772.
[4] K. Selvam, Individual pitch control for large scale wind turbine: multivariable control approach, report by the Energy Research Centre of the Netherlands (ECN), ECN-E-07-053 (2007).
[5] E. Bossanyi, Individual blade pitch control for load reduction, Wind Energy 6 (2003) 119-128.
[6] P. Caselitz, W. Kleinkauf, W. Krueger, J. Peschenka, M. Reichardt and K. Stoerzel, Reduction of fatigue loads on wind energy converters by advanced control methods, Proc. European Wind Energy Conference, Dublin, 1997, pp. 555-558.
[7] T. van Engelen and E. van der Hooft, Individual pitch control inventory, report ECN-C-03-138, ECN (2003).
[8] T. Larsen, H. Madson and K.
Thomson, Active load reduction using individual pitch, based on local blade flow measurements, Wind Energy 8 (2005) 67-80.
[9] E. Kittilä, Implementation and evaluation of wind turbine control concepts, Master of Science thesis, Mechanical and Materials Engineering Department, Tampere University of Technology, 11 January 2012.
[10] A. Pérez Grassi, M. Müller, H. J. El-Khozondar and A. Koch, Method for strain tensor reconstruction with embedded fiber Bragg grating sensors, Smart Materials and Structures 20 (2011) 105031.
[11] H. El-Khozondar, M. Müller, C. Wellenhofer, R. El-Khozondar and A. Koch, Four-mode coupling in fiber Bragg grating sensors using full fields of the fundamental modes, Fiber and Integrated Optics 29 (2010) 420-430.
[12] H. El-Khozondar, M. Müller, T. Buck, R. El-Khozondar and A. Koch, Experimental investigation on polarization rotation in twisted optical fiber using laboratory coordinate system, Fiber and Integrated Optics 29 (2010) 1-9.
[13] H. El-Khozondar, M. Müller, R. El-Khozondar and A. Koch, Polarization rotation in twisted polarization maintaining fibers using a fixed reference frame, Journal of Lightwave Technology 27 (2009) 5590-5596.
[14] M. Müller, T. Buck, H. El-Khozondar and A. Koch, Shear strain influence on fiber Bragg grating measurement systems, Journal of Lightwave Technology 27 (2009) 5223-5229.
[15] M. Müller, H. El-Khozondar, T. Buck and A. Koch, Analytical solution of four-mode coupling in shear strain loaded fiber-Bragg-grating sensors, Optics Letters 34 (2009) 2622-2624.
[16] J. Zhang, M. Cheng, Z. Chen and X. Fu, Pitch angle control for variable speed wind turbines, DRPT 2008, 6-9 April, Nanjing, China, 2008, pp. 1-6.
[17] A. Rolán, An approach to the performance-oriented model of variable-speed wind turbines, IEEE, 2010, pp. 3853-3858.
[18] D. Siddiqi, Active load reduction of wind energy converters using individual pitch control, Master thesis, Technical University of Munich, September 2011.
[19] Hermann-Josef Wagner and Jyotirmay Mathur, Introduction to Wind Energy Systems: Basics, Technology and Operation, Springer-Verlag, Berlin, 2009.

Hala J. El-Khozondar was born in Gaza, Palestinian Territory. She received her B.Sc. in physics from Birzeit University, Palestinian Territory, in 1987, and earned her Ph.D. in physics from New Mexico State University (NMSU), USA, in 1999. She joined the physics faculty at Birzeit University in 1987 and held a postdoctoral award at the Max Planck Institute in Heidelberg, Germany, in 1999. In 2000 she became an assistant professor in the Electrical Engineering (EE) Department at the Islamic University of Gaza, and in 2007 she was promoted to associate professor in the EE Department. She is now a full professor in the EE Department and a fellow of The World Academy of Sciences (FTWAS) and of its Arab regional office (FTWAS-ARO). She worked on initiating and developing the quality assurance unit and the external relations office at the Islamic University of Gaza. She advises several graduate and undergraduate students and has participated in numerous conferences and workshops. Her research interests focus on wireless communication, optical communication, nonlinear optics, optical fiber sensors, magneto-optical isolators, optical filters, MTM devices, biophysics, electro-optical waveguides, and numerical simulation of the microstructural evolution of polycrystalline materials. She has several publications in highly ranked journals, including the Journal of Lightwave Technology, IEEE Journal of Quantum Electronics and Optics Letters. She is a recipient of international awards and recognitions, including a Fulbright scholarship, a DAAD short study visit, an Alexander von Humboldt-Stiftung scholarship, Erasmus Mundus, and the Islamic University deanery prize for applied sciences. She is also coordinator of several projects, including Tempus for promoting lifelong education and Al-Maqdisi to enhance collaboration with French partners.

Amani Salem Abu Reyala was born in Gaza.
She received her B.Sc. and M.Sc. degrees in electrical engineering from the Islamic University of Gaza. After earning her B.Sc. she worked as a teaching assistant in the Electrical Engineering Department at the Islamic University of Gaza. She is currently an instructor at Palestine University.

Mathias S. Müller was born in Germany in 1980. He received the B.Sc. degree in electrical engineering and information technology from the Technical University of Munich (TU Munich), Munich, Germany, in 2006. After finishing his Ph.D. thesis in 2010 in the field of fiber optic sensors, he started research on optomechanical interactions in fiber Bragg grating sensors at the Institute for Measurement Systems and Sensor Technology, TU Munich, in the same year.

Journal of Engineering Research and Technology, Volume 1, Issue 2, June 2014

Preparation Critical Success Factors for Public Private Partnership (PPP) Projects in Palestine
Nabil I. El-Sawalhi and Mohammed A. Mansour

Abstract— Public private partnership (PPP) is a well-established method for incorporating the private sector to deliver a service or implement a project. PPP projects are a new concept in Palestine. The aim of this study is to explore the critical success factors (CSFs) for PPP projects in Palestine. A structured questionnaire was distributed to 45 experts from different sectors to identify and rank the critical success factors of PPP projects. The CSFs of PPP projects are: stability of the political situation, a clear and detailed contract, existence of a sound economic policy, reliable delivery of service, analysis and allocation of risks, a suitable legal framework, an experienced private sector, profitability to the private sector, and an accepted level of toll/tariff for a project. This study recommends that the government create a legal PPP framework and establish PPP standard guidelines and processes to guide the implementation of PPP projects by stakeholders.
Keywords— public private partnership, critical success factors, Palestine

I Introduction

Undeveloped infrastructure and limited funding constrain the government in developing infrastructure in Palestine. This led the Palestinian National Authority (PNA) to privatize some infrastructure projects and public services. There has been growing public interest in finding alternative solutions for infrastructure development and improvement of service delivery through partnerships with the private sector. The PNA, like many other developing countries, is under increased pressure to accelerate the development of infrastructure and provide much-needed social services to its population, such as potable water, sanitation systems, transportation and electricity. Future development plans emphasize enhancing private sector participation in infrastructure development. This, however, requires a more scientific approach in order to identify the key issues of concern. Various partnership initiatives have emerged since the establishment of the PNA in 1994 to provide public services, and significant amounts of money have been invested in such projects. Poor performance, unauthorized competition, and overall weak governance and regulation characterize most of these partnership projects. Notably, in current partnership projects there is a perception that service delivery in most parts of the country is still of a low standard while the price for such services is high, which influences community satisfaction. Current laws and regulations do not encourage investing in partnership projects. Public private partnership (PPP) projects are a new concept in Palestine, and to date there has been little serious investigation into such projects. A thorough literature review revealed that, to date, no recorded studies have been conducted to establish key principles concerning PPP development and application in Palestine.
This study will help in understanding and identifying the issues and success factors involved in the implementation of a PPP system.

II Success Factors of PPP

In order to achieve successful projects, some suggestions have been reported in the literature. A number of success factors of PPP projects were indicated by different researchers to explore what characterizes a successful partnership or alliance. Successful PPP implementation requires a stable political and social environment, which in turn relies on the stability and capability of the host government [1]. Lambert et al. [2] identify the following components for establishing a successful partnership: mutual trust and commitment, joint planning, joint operating controls, effective communication, risk/reward sharing, style of contract, scope of the activities, and the extent to which financial resources are shared. Li and Akintoye [3] have identified some CSFs based on the UK PPP/PFI study, showing that effective procurement, project implementability, government guarantee, favorable economic conditions, and an available financial market are essential for PPP to thrive. Ozdogan and Birgonul [4] have developed a categorization of success factors that can be used as a method for assessing the success of BOT projects. The success factors are divided into four main groups: (1) financial and commercial factors; (2) political and legal factors; (3) technical factors; and (4) social factors.

Nabil El-Sawalhi, Civil Engineering Department, Islamic University of Gaza, Gaza, Palestine. Mohammed Ali Mansour, M.Sc. graduate researcher, Civil Engineering Department, Islamic University of Gaza, Gaza, Palestine.
The key to a successful implementation of a BOT infrastructure project is an in-depth analysis of all aspects related to the economic, environmental, social, political, legal, and financial feasibility of the project. For these reasons, the analysis of the project feasibility decision needs a technique that includes the qualitative decision factors that have a strong impact on the project [5]. Jütting [6] has identified macro-level conditions in favor of setting up a PPP. These include a political environment supporting the involvement of the private sector, an economic and financial crisis creating pressure on the public sector to think of new ways of service provision, and a legal framework which guarantees a transparent and credible relationship between the different actors. At the micro level, the capacities of the actors, e.g. their personal interest, skills, and organizational and management structure, are identified as important. Not all projects can be undertaken successfully using BOT-type schemes. A particularly cooperative PPP is a precondition for successful procurement using BOT. Both successful and unsuccessful BOT-based projects testify to the truism that appropriate political, legal, and economic environments are a prerequisite for the initiation of such schemes; the host government must foster such environments [7]. In Pakistan, BOT contracts may be complicated due to their long-term contractual obligations and multiparty involvement; moreover, the legal, economic, and technical framework needs to be developed on a large scale for successful execution of the project [8].

III PPP in Developing Countries

It is probably fair to characterize PPPs as high-risk, high-reward propositions for governments, the more so as one moves across the PPP spectrum from management contracts to leases-affermages and then to concessions.
PPP projects are complex arrangements; they are difficult to implement in the context of developing countries' weak institutional capacity and economic volatility, and they involve significant transaction costs. They are vulnerable to vested interests and, unfortunately, can make easy targets for opportunistic politicians, especially during the early years when the population often does not yet perceive tangible improvements in service. Finally, the fact that private operators do not always deliver must not be overlooked [9]. PPP experience is modest in the Middle East and North Africa (MENA); Morocco and Jordan are the most advanced compared with other countries in the region [10].

A Privatization Policy in Palestine

From the time of the establishment of the PNA in 1994, the PNA has adopted the strategy of private sector partnership in development programs. Tensions in the relationship between the public and private sectors became apparent. Since 1994, the national economy has witnessed increasing levels of interference by the Palestinian bureaucracy in the activities of the private sector through different forms, including regulation, taxation, monopolies of certain services, and others. This interference seems to have had an adverse effect on the growth and development of the private sector from the viewpoint of the business community [11].

B PPP Experience in Palestine

The Palestinian private sector was encouraged to participate in infrastructure investment, particularly in the energy and communications sectors. The PNA has contracted with the private sector in some vital and strategic projects, such as the electrical power plant, the telecommunications sector, and the Palestinian Industrial Estates and Free Zones Authority (PIEFZA), which may be considered similar to PPPs in some aspects since it involves the financing of public infrastructure by the private sector.
The local authorities (municipalities), especially in the West Bank, have adopted partnership contracts in their projects. They have some kind of small-scale PPP with local businesses; these PPP projects are invariably management contracts. El-Bireh and Bethlehem municipalities have individually entered into PPP projects with a real estate development company on a BOT basis. The projects are a multi-story service taxi station in El-Bireh and a bus station in Bethlehem. Both projects are in operation, but both municipalities are facing difficulties with the PPP partner [12].

IV Research Objectives and Methodology

The aim of the research is to assess the CSFs for PPP projects in Palestine. The research was exploratory in nature, and a structured questionnaire survey was used as the main tool to achieve the objectives. Since the study is exploratory, a baseline study was carried out to gain an adequate understanding of the existing relationship between the public and private sectors in Palestine. It included a number of informal contacts with businessmen, local entrepreneurs and experts, as well as a review of the available literature and other secondary data gleaned from books, previous studies, and official documents relevant to the issue of study. After reviewing the literature and interviewing experts who had dealt with the subject at different levels, all the information that could help in achieving the study objectives was collected, reviewed, and finalized to be suitable for the survey. The questionnaire design was based on identified success factors categorized under main groups. To accomplish the aim of the research, respondents were asked to rate the degree of importance of the factors influencing the success of PPP projects from their own perspectives. A 5-point Likert scale was used to calculate the mean score of importance. The targeted group was selected from individuals in the public and private sectors. The public sector covers
officers and engineers with experience and knowledge of PPP projects. The private sector covers practitioners including contractors, consultants and investors. Forty-five experts of different backgrounds were purposely selected to compare the similarities and differences between the different sectors, making the findings more representative. The selected sample was composed of senior engineers, project managers, and general directors; the data given by respondents in high-ranking positions is considered more accountable and credible than that provided by lower-ranking officials.

V Results and Discussion

The distribution of respondents' posts, representing the different sectors, is shown in Figure 1 (Figure 1: Respondents' posts); general engineers had the highest participation in the survey. The experts come from a total of 19 organizations. The distribution of organizations for respondents representing the public, private, and other sectors is shown in Figure 2 (Figure 2: Respondents' organizations); governmental departments had the highest participation in the survey. Regarding the respondents' experience, Table 1 shows that most respondents have more than five years of experience, which supports the validity of the obtained data and can lead to accurate results.

Table 1: Respondents' experience
  Experience (years)    Number   Percent (%)
  Less than 5 years        2        4.4
  5-10 years              15       33.3
  11-15 years             10       22.2
  More than 15 years      18       40.0

A total of 38 influential factors were collected to determine the CSFs of the projects. These factors were identified through a detailed review of the literature, previous studies of the same or similar subjects, and consultation with experts on this topic. The factors were finalized after conducting a pilot study to match the local market.
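The percentage column of Table 1 follows directly from the raw counts over the 45 returned questionnaires, as the short check below reproduces:

```python
# Experience bands and raw respondent counts transcribed from Table 1.
counts = {"less than 5 years": 2, "(5-10) years": 15,
          "(11-15) years": 10, "more than 15 years": 18}
total = sum(counts.values())                 # 45 respondents in all

for band, n in counts.items():
    print(f"{band}: {n} ({100 * n / total:.1f}%)")
```

The printed percentages (4.4, 33.3, 22.2 and 40.0) match the table's Percent column.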
The factors were consolidated and categorized under five main groups: (1) technical factors; (2) financial and economic factors; (3) social factors; (4) political and legal factors; and (5) other factors. The study outlines the mean value and ranking of the overall success factors for PPP projects in Palestine. In general, the results show that all factors have a mean rating higher than the midpoint of 3 on the 5-point Likert scale, indicating the importance of the identified factors for ensuring the success of PPP. Table 2 summarizes the mean value and ranking of each group; the mean values of the five groups are quite similar.

Table 2: Overall factor groups
  Group                            Mean    Rank   Weight ratio (%)
  Political and legal factors      4.226    1       84.52
  Technical factors                4.137    2       82.74
  Financial and economic factors   4.119    3       82.37
  Social factors                   4.115    4       82.30
  Other factors                    4.079    5       81.58

A Political and Legal Factors

The most critical group was the political and legal factors group, with a mean value of 4.226, ranked first among the groups. This group contributes three CSFs among the highest-ranked success factors, as illustrated in Table 3. This indicates that political and legal instability remains a serious hurdle in the formulation of various infrastructure development reforms of PPP. The following challenges represent the major political and legal constraints on PPP projects faced by stakeholders in Palestine: the Israeli occupation, foreign policies, tension at borders and crossings, and legal aspects.
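The "weight ratio" column in Table 2 is simply the group mean re-expressed as a percentage of the maximum Likert score of 5 (e.g. 4.226/5 = 84.52%), as the quick check below reproduces. The means are transcribed from Table 2; differences of about 0.01 against the printed column come from the rounding of the means.

```python
# Group means transcribed from Table 2; weight ratio = 100 * mean / 5.
group_means = {"political and legal": 4.226, "technical": 4.137,
               "financial and economic": 4.119, "social": 4.115, "other": 4.079}

for group, mean in group_means.items():
    print(f"{group}: weight ratio = {100 * mean / 5:.2f}%")
```

The same conversion applies to the per-factor weight ratios in Tables 3 and 4.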
the factor "stability of political situation" is rated as the most critical csf, with a mean value of 4.533. the political situation in palestine is complex: palestine has lived under israeli occupation throughout the past period. the political situation influences the development of ppp projects through the occupation's restrictions, such as israel's control over the crossings and the closures. the international agreements of the pna cause political sensitivity regarding restrictions on the promotion and implementation of infrastructure projects. this discourages the private sector from investing in ppp projects. "clear and detailed contract that accommodate changes in the project requirements over concession period" is rated highly by respondents, with a mean value of 4.4. this emphasizes that the contracts of some current projects are neither clear nor fair to the parties. contracts that include all legal, financial and service details, with flexibility to accommodate changes, contribute to the continuity of the project. "suitable legal and regulatory framework" is rated an important legal factor, with a mean value of 4.311. in general, legislative and regulatory practice toward the private sector ranges from the absence of a government legislative policy and regulatory agenda to ambiguity, overlap, and lack of legislative harmonization. 
establishing a legal framework can help both the public and private sectors understand the core value of a ppp project and minimize disputes between the project parties. the lack of laws and regulations does not encourage the private sector to participate in partnership projects.

table 3 political and legal factors
success factors                                                   mean    rank   weight ratio%
p1 stability of political situation                               4.533   1*     90.67
p2 government support for the project                             4.156   17     83.11
p3 suitable legal and regulatory framework                        4.311   6*     86.22
p4 sufficient legislative authority to enter the project          3.867   36     77.33
p5 the project is compatible to the local regulations             4.089   24     81.78
p6 clear and detailed contract that accommodate changes in the
   project requirements over concession period                    4.400   2*     88.00

b technical factors

the second group is the technical factors group, with a mean value of 4.137. this group includes one factor among the highest-ranked success factors (table 4). "strong and experienced private sector" is rated as a csf. this indicates that the selected project partner should have the required technical and financial qualifications for the project. this factor is the greatest determinant of the success or failure of ppp projects.

table 4 technical factors
success factors                                                   mean    rank   weight ratio%
t1 well organized and committed public sector                     4.178   13     83.56
t2 availability of government experience                          4.067   26     81.33
t3 private sector technical innovation and creative solutions     3.911   32     78.22
t4 strong and experienced private sector                          4.289   7*     85.78
t5 determining the service specifications and standards           4.178   14     83.56
t6 nature and size of project                                     4.200   11     84.00

c financial and economic factors

the third group is the financial and economic factors group, with a mean value of 4.119. it includes two high-ranking csfs, as illustrated in table 5. "existence of a sound governmental economic policy" is rated as the most important economic factor, with a mean value of 4.356. 
the national economy has witnessed increasing levels of bureaucratic government interference in the activities of the private sector. therefore, the issue needs to be critically examined to determine whether it has been a major cause of the deterioration of this sector. the high ranking of this factor suggests that the government should adopt economic policies that maintain a stable and growing economic environment in which the private sector can operate with confidence. "profitability to the private sector" is rated as a csf. the private sector's interest in public facilities and services differs from that of the government and the community; it is concerned with the financial aspect when assessing a ppp project.

table 5 financial and economic factors
success factors                                                   mean    rank   weight ratio%
f1 project can attract foreign capital                            3.844   37     76.89
f2 profitability to the private sector                            4.267   8*     85.33
f3 stimulating the banks to offer long financing for the project  4.044   29     80.89
f4 stable economic environment                                    4.222   10     84.44
f5 project is more cost effective than traditional forms of
   project delivery                                               4.044   30     80.89
f6 existence of a sound governmental economic policy              4.356   3*     87.11
f7 thorough and realistic cost/benefit assessment                 4.111   19     82.22
f8 the project should achieve better value for money which leads
   to low project life cycle cost                                 4.067   27     81.33
f9 investors and lenders are seeking safe returns                 4.111   20     82.22

d social factors

the fourth group is the social factors group, with a mean value of 4.115. it includes two factors among the highest-ranked success factors (table 6). "delivery of service is stable and reliable" is rated as a csf with a mean value of 4.356. 
the program of community-based projects should be an important vehicle for ensuring timely and efficient implementation of essential improvements in service delivery.

table 6 social factors
success factors                                                   mean    rank   weight ratio%
s1 social acceptance and awareness                                4.111   21     82.22
s2 level of toll / tariff for a project is acceptable             4.267   9*     85.33
s3 creation more job opportunities                                3.711   38     74.22
s4 delivery of service is stable and reliable                     4.356   4*     87.11
s5 there is a long-term demand of the service in the community    4.089   23     81.78
s6 project is an environmentally sustainable                      4.156   16     83.11

the community's interest in a ppp project is to have a good service. the continuing success of a ppp project depends on sustainable and reliable service delivery. "level of toll/tariff for a project is acceptable" is rated as a csf. to ensure that the benefits of the project are realized, the design of the tariff should be suitable for the community.

e other factors

the last group is the other factors group, with a mean value of 4.079, ranked in the fifth position. it includes one factor among the highest-ranked success factors, as illustrated in table 7. 
table 7 other factors
success factors                                                   mean    rank   weight ratio%
o1 providing guarantee from the government for the project        3.889   34     77.78
o2 shared authority between public and private sector             3.889   35     77.78
o3 a detailed analysis and appropriate allocation of risks        4.333   5*     86.67
o4 satisfying the integrity between resources in society          4.067   28     81.33
o5 long term monitoring and control mechanism over the private
   sector                                                         4.178   15     83.56
o6 existing of institutional policy framework for partnership
   projects                                                       4.111   22     82.22
o7 reduction in disputes, claims and litigation                   3.911   33     78.22
o8 capacity building and offering counseling for governmental
   institutions                                                   4.200   12     84.00
o9 management skill of private sector                             4.044   31     80.89
o10 government transparency in contracting with private sector    4.156   18     83.11
o11 existing of relationship management and coordination between
    public and private sectors                                    4.089   25     81.78

"a detailed analysis and appropriate allocation of risks" is rated as an important factor, with a mean value of 4.333. the palestinian context is linked with high risks, since the political and commercial risks are not stable. the key to successful ppp contracting is the analysis and fair allocation of project risks between the public and private sector partners.

vi conclusion

it is important to investigate the factors that contribute directly to the successful application of ppp in palestine. a structured questionnaire was conducted to explore the local practices relating to the success factors for ppp projects. the study findings indicate that the political and legal factors group is considered the most important group influencing the success of ppp projects. the most important factor to be considered by the government is "stability of political situation", which influences investment in ppp projects. 
the political situation remains a serious hurdle in the formulation of various infrastructure development reforms of ppp. the political situation, in terms of israeli occupation practices and restrictions, has in general a direct effect on investment in projects and on ppp projects in particular. the main precondition for ppp projects is political stability, in view of the long implementation period and duration of ppp projects. other csfs should be considered at the planning stage by the government to ensure successful application: stability of political situation, clear and detailed contract, existence of a sound economic policy, reliable delivery of service, analysis and allocation of risks, suitable legal framework, experienced private sector, profitability to the private sector, and an accepted level of toll/tariff for the project. a framework should be set up that promotes ppp activities and protects the rights of those involved in the project. ppp guidelines and the implementation process should be standardized for use by the various ppp stakeholders. policy makers and planners should analyze the ppp acts, transfer knowledge and skills from other countries which have successful ppp systems, and adopt plans for ppp projects. 
journal of engineering research and technology, volume 1, issue 2, june 2014

fpga based generalized switched reluctance motor controller

yassen gorbounov

abstract— this paper presents an approach to building a dedicated, fully digital, embedded switched reluctance motor (srm) controller based on a field programmable gate array (fpga) circuit. the essentials of switched reluctance motor operation and control principles are briefly introduced, and the hardware implementation of the digital fpga controller is discussed. prior to building the controller, a precise mathematical model based on an artificial neural network was built that solves the problems related to the high nonlinearity of the magnetic circuit of the motor. the coefficients obtained with the aid of this model are included as parameters in the verilog hdl description of the main algorithms. the paper highlights some advantages of fpga devices, outlining their power and flexibility for running parallel processing algorithms.

index terms— switched reluctance motor, parallel algorithms, motor control, embedded controller, artificial neural network.

i introduction

switched reluctance motors (srm) are doubly salient electric machines which have neither windings nor magnets on the rotor. the torque is produced by the tendency of the rotor to move to the position of lowest reluctance, i.e. where the inductance of the excited phase is maximized [1, 2]. although known since the 1830s and having one of the simplest constructions, their use remained limited until around the 1960s due to the specifics of their control. 
with the rapid development of digital control logic and the great achievements in power electronics, sr motors have gained continuing popularity in areas such as coal mines [3], the aircraft and aerospace industries [4], electric vehicles [5], home and office appliances [6], and renewable energy sources in the form of switched reluctance generators (srg) in wind turbines [7], etc. one of the major peculiarities of these motors is the high nonlinearity of the stator inductance, which is a function of both the angular position and the current flowing through the active phase winding. that is why fast-running parallel algorithms are required, which disqualifies most cpus and makes fpgas extremely suitable. this paper deals with a dedicated digital srm controller built around the xilinx spartan 3 starter kit [17, 18] that demonstrates the ability of fpga circuits to run parallel algorithms. the article outlines the basic control principle of this motor type and proposes a specific controller implementation for an srm 12-8. verilog hdl has been used as the basis for the digital hardware description.

ii operation principles of the srm

the switched reluctance motor has a very simple and robust construction. the main concepts that characterize this type of motor are discussed below, together with the related power converter.

a structure of the srm

the cross-section of a typical 3-phase srm of type 12-8 is given in figure 1.

figure 1 cross-section of srm12-8

each stator winding is independent from the others, and the torque produced is insensitive to the current polarity, which simplifies the power converter topology. to move the rotor in the desired direction of rotation, the motor phases are switched sequentially by applying dc voltage pulses. the rotation is produced by the tendency of the rotor to move to a position where the reluctance in the air gap between the rotor and the stator is minimized [2], and not by an induced magnetic field in the rotor, as is the case with other motor types. to increase the static torque, the number of rotor poles is reduced [15]. this, however, causes the specific audible noise of this motor type, which is due to the change of the stator geometry produced by the torque ripples. physically the motor is made of laminated steel, has no impregnating resins in the coils and has no sparking mechanical commutator. these properties make this motor ecological, highly reliable and durable. it is extremely suitable for use in harsh environments. in addition, sr motors are fault tolerant because they remain operable even if one of the stator phases is shorted or interrupted.

————————————————
y. gorbounov is with the department of electrical drives automation, technical university of sofia, bulgaria

b topology of the power converter

since the stress in this paper is on the digital control principle, only the main features of the power converter are referred to here. the most widely used topology of the power inverter in sr drives is shown in figure 2. this is an asymmetric converter with freewheeling and regeneration capability. as can be seen, this inverter avoids one of the major drawbacks of ordinary classical inverters – there is no possibility of shoot-through, as the upper and lower power switch legs are connected through the motor coil. there also exist three-phase topologies with only 4 power switches in total, with a common lower leg consisting of just one transistor [1], and other topologies are possible as well. these features of the inverter contribute to the overall fault tolerance of the sr drive. 
figure 2 srm power inverter

c the problem of nonlinearity

as mentioned earlier, the main difficulty of switched reluctance motor control is related to the high nonlinearity of the flux linkage, torque and stator inductance, which all vary with both the angular rotor position and the current flowing through the phase coil. the specific shape of the stator inductance as a function of two variables, namely the angular position θ and the phase current i, is shown in figure 3. this inductance is of major interest for the control of the srm, as its precise identification gives the most direct picture of the motor state, and the most important parameters can be derived from it.

figure 3 stator inductance as a function of the angular position θ and the phase current i

the self-inductance is described by equation (1):

L(θ, i) = ψ(θ, i) / i   (1)

where θ is the angular position, i is the phase current and ψ is the magnetic flux through the active phase. linear modeling of the inductance is frequently discussed in the literature, but it is not precise enough and its usage can compromise the quality of the control. a thorough comparative analysis involving some known methods is presented in [8]. the problem of the mathematical modeling of the nonlinearity is most usually solved with the aid of finite element method (fem) analysis [9], which is a numerical technique for finding approximate solutions of partial differential equations and requires extensive computational effort. that is why other intelligent methods exist that employ an interpolation approach. one method of this kind is function decomposition using the fast fourier transform (fft) [12, 14], an advanced and fast numerical approximation algorithm that is commonly used in digital signal processing. in the proposed controller the artificial neural network (ann) modeling approach has been adopted [10, 11, 13], which proved to be quite precise. 
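the behavior described by equation (1) can be illustrated with a simple numeric sketch. the flux-linkage model below is entirely hypothetical (the saturation constants and the cosine position dependence are our assumptions, not the paper's ann model); it only demonstrates that L = ψ/i peaks at the aligned position and drops with saturation:

```python
import math

# Hypothetical saturating flux-linkage model psi(theta, i), used only to
# illustrate equation (1): the apparent self-inductance L = psi / i.
def flux_linkage(theta, i, psi_sat=0.4, k=2.0):
    # Position dependence: 1 at the aligned position, 0 at the unaligned one.
    align = 0.5 * (1 + math.cos(theta))
    return psi_sat * (0.2 + 0.8 * align) * (1 - math.exp(-k * i))

def inductance(theta, i):
    return flux_linkage(theta, i) / i   # equation (1)

# At low current the inductance is highest at the aligned position...
print(inductance(0.0, 1.0) > inductance(math.pi, 1.0))   # True
# ...and magnetic saturation reduces it as the current grows.
print(inductance(0.0, 1.0) > inductance(0.0, 5.0))       # True
```

this is exactly the two-variable dependence shown qualitatively in figure 3.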
artificial neural networks are often used for applications where formal analysis would be difficult or impossible, such as pattern recognition, function approximation and nonlinear system identification and control. using an ann, equation (1) can also be expressed as (2) [19]:

L(i, θ) = L(i) · A(θ)   (2)

where i is the phase stator current, θ is the rotor angle, L(i) is the inductance at the aligned position of the rotor, which depends only on the phase current, and A(θ) is the normalized inductance of the phase at zero current or when the motor is not loaded. next, the partial derivatives of the phase inductance can be obtained by differentiating expression (2), giving the form (3) [19]:

∂L(i, θ)/∂i = A(θ) · dL(i)/di
∂L(i, θ)/∂θ = L(i) · dA(θ)/dθ   (3)

thus the partial derivatives can be expressed through ordinary ones. the curves for A(θ) and L(i) can be obtained experimentally. having the experimental values, it is easy to describe them by artificial neural networks, which is very useful in avoiding the complex problem of solving the differential equations. the need for very precise approximation when modeling the inductance with an ann is justified by the fact that each phase has to be energized from some given moment to an exact moment just before the inductance reaches its maximum; otherwise a negative torque will be generated, and this will significantly deteriorate the power characteristics of the drive.

iii digital controller

the aforementioned specifics of the srm hindered the development of their control and their wider use until around the 1960s because of the very high demands on the control hardware. nowadays fpgas offer unbeatable performance characteristics. a fully digital fpga-based controller is proposed here that represents an embedded system on a chip. 
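the key benefit of the separable model of equations (2) and (3) above is that the two-variable partial derivatives reduce to ordinary derivatives of the one-variable curves. a quick numeric cross-check, using hypothetical curve fits for L(i) and A(θ) in place of the paper's neural-network approximations:

```python
import math

# Separable inductance model of equation (2), L(i, theta) = L(i) * A(theta),
# with assumed (hypothetical) analytic shapes standing in for the ANN fits.
def L_i(i):                       # aligned-position inductance vs current
    return 0.35 / (1 + 0.2 * i)   # saturation-like decay (assumed shape)

def A_theta(theta):               # normalized unloaded inductance profile
    return 0.5 * (1 + math.cos(theta))

def L(i, theta):                  # equation (2)
    return L_i(i) * A_theta(theta)

# Equation (3): each partial derivative factors into an ordinary derivative.
def dL_di(i, theta):
    dLi = -0.35 * 0.2 / (1 + 0.2 * i) ** 2
    return dLi * A_theta(theta)

def dL_dtheta(i, theta):
    dA = -0.5 * math.sin(theta)
    return L_i(i) * dA

# Cross-check the analytic partials against central finite differences.
h = 1e-6
i0, th0 = 3.0, 1.0
fd_i = (L(i0 + h, th0) - L(i0 - h, th0)) / (2 * h)
fd_th = (L(i0, th0 + h) - L(i0, th0 - h)) / (2 * h)
print(abs(fd_i - dL_di(i0, th0)) < 1e-6)        # True
print(abs(fd_th - dL_dtheta(i0, th0)) < 1e-6)   # True
```

in the actual controller the same factorization means only two one-dimensional curves need to be learned and stored.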
it is the lower-level controller; the outer control loops, such as the current and speed control loops, are not discussed here. the proposed device transforms the control of the sr motor into a dc-like one. the phase switching frequency depends on the voltage assignment and determines the motor speed, while the current limitation embedded in the power supply module controls the motor torque and hence the power. in the ideal case the commutation of each phase happens in the time frame between the minimum inductance of the coil and its maximum. because of the self-inductance, if the phase switching is made this way, for some time after powering off the coil the magnetic field will still not be fully dissipated and a negative torque will be generated. this torque is a braking one and acts while the following phase is being powered. to avoid this problem the so-called advance angle α is introduced, which forces the commutation to complete earlier, before the maximum inductance is reached. this angle is obtained from the mathematical model built around equations (1) to (3). the general control principle is illustrated in figure 4.

figure 4 general overview of the control principle using movement intervals

the inductance shape in this figure is idealized and has a trapezoidal form. in the positive direction of rotation the phases are switched consecutively in the order c-b-a-c, and in the negative direction in the order b-c-a-b, as can be seen in the shaded areas. the symbol δ denotes the angle of phase overlap, which makes it possible to reduce the torque ripples and thus the harmful audible noise. l1m, l2m and l3m mark the maximum values of the inductance of each phase. they appear as upper limits for each range of rotation. 
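the advance-angle adjustment described above amounts to shifting each phase's switch-off limit by α, earlier for one direction of rotation and later for the other. a minimal sketch (the encoder resolution and all numeric limit values here are hypothetical, chosen only for illustration):

```python
# Sketch of the advance-angle adjustment: the per-phase switch-off limits
# L1m..L3m are shifted by alpha so commutation completes earlier in the
# chosen direction of rotation. PERIOD and all values are assumptions.
PERIOD = 4096  # assumed encoder counts per cycle (12-bit position)

def shifted_limits(limits, alpha, direction):
    # direction: +1 forward (commutate earlier), -1 backward
    return [(l - direction * alpha) % PERIOD for l in limits]

l_max = [1365, 2730, 4095]                 # hypothetical L1m, L2m, L3m
print(shifted_limits(l_max, 100, +1))      # [1265, 2630, 3995]
print(shifted_limits(l_max, 100, -1))      # [1465, 2830, 99]
```

the modulo wrap-around mirrors how a limit shifted past the end of the electrical cycle re-enters at its beginning.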
given some advance angle α, these maximum values are shifted left or right depending on the direction of rotation, so that the new values become l1, l2 and l3. all quantities (l1m, l2m, l3m, α and δ) are passed to the control algorithm as parameters, allowing them to be adjusted dynamically and adaptively in some future work. this can be done with the aid of a new, externally running parallel algorithm, without changing the existing controller structure. the generalized block diagram of the digital controller, including all the major interconnections between its modules, is given in figure 5. some specifics of the main modules are discussed in more detail below.

figure 5 generalized block diagram of the controller

a the starting algorithm

the starting algorithm is a very important module, as the starting conditions for running the motor depend entirely on it. at the same time its implementation is not very simple, as it works in the most unstable region – that of the lowest speeds – and it does not have any prior information about the absolute rotor position. there exist several methods to start the motor. the current implementation solves the task in the following manner: 1. energizes phase a to hold it in a locked position. 2. 
cycles the switching of each phase depending on the signal for the direction of rotation – button "startf" for the clockwise or "startb" for the counterclockwise direction. 3. issues an enabling signal for the normal running regime algorithm. this happens at a position where the motor has managed to develop sufficient torque; before the desired position for switching off the phase is reached, control is transferred to the normal control algorithm, which is based on comparing the ranges of rotation. this algorithm uses only two of the encoder channels – a and b – and does not consider the single pulse c. after issuing the enabling signal (en), the starting algorithm stays in the a3 state and waits to be reset or to receive the stopping signal (stop), after which it can be started again. the control graph of the moore state machine is shown in figure 6.

figure 6 control graph of the starting algorithm

if d-type flip-flops are used, the input equations are shown in (4):

d0 = a0·startf·¬reset + a1 + a2·(pos > l2)·(pos < l3 − dev) + a3·¬stop
d1 = a0·startb·¬reset + a1·(pos > dev)·(pos < l1) + a2 + a3·¬stop   (4)

b the normal running mode algorithm

since the advance angle control algorithm is very simple (it just subtracts or adds the advance angle to the limiting values), it will not be discussed here. the normal running mode algorithm is presented in figure 7.

figure 7 general view of the normal running mode algorithm

as seen from the figure, it consists of three parallel running algorithms, namely a1, a2 and a3. each of them strictly follows the control principle depicted in figure 4, where a1 is responsible for "range 1", a2 for "range 2" and a3 for "range 3" respectively. 
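the state transitions of the starting algorithm's moore machine can be sketched in software. the translation below follows the input equations (4); the reset input is assumed deasserted, and the 12-bit position and limit values are hypothetical, not taken from the paper:

```python
# Illustrative Python translation of the starting-algorithm Moore machine
# (states A0..A3). Reset is assumed deasserted; limits are hypothetical.
def next_state(state, pos, dev, l1, l2, l3, startf, startb, stop):
    if state == "A0":                    # phase A energized, rotor locked
        if startf: return "A1"           # forward start request
        if startb: return "A2"           # backward start request
        return "A0"
    if state == "A1":                    # stays until pos enters (dev, l1)
        return "A3" if dev < pos < l1 else "A1"
    if state == "A2":                    # stays until pos enters (l2, l3 - dev)
        return "A3" if l2 < pos < l3 - dev else "A2"
    return "A0" if stop else "A3"        # A3: EN asserted, wait for stop

# Forward start: A0 -> A1, then hand-over (EN) once pos enters (dev, l1).
s = next_state("A0", 0, 50, 1000, 2000, 3000, 1, 0, 0)
s = next_state(s, 500, 50, 1000, 2000, 3000, 0, 0, 0)
print(s)   # A3
```

such a model is convenient for checking the transition conditions before committing them to the hdl description.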
these algorithms are completely independent of one another, which makes the controller flexible and easily extendable both in terms of phase control – advance angle and overlapping – and phase count: with a proper mathematical model it is not very hard to add another rotational range, for a 4-phase srm for instance. the functioning of the a1-a3 algorithms is explained in figure 8, figure 9 and figure 10.

figure 8 normal running mode algorithm – a1
figure 9 normal running mode algorithm – a2
figure 10 normal running mode algorithm – a3

iv simulation results and experimental work

to prove the effectiveness of the algorithm and the controller, thorough tests and analyses have been conducted, both via software simulation and through practical experiments with a 400w srm12-8 motor. due to space constraints, only the starting algorithm simulation results are given. the verilog code of the starting algorithm is shown in table 1. it strictly follows the control graph given in figure 6 and the derived equations (4). 
table 1 dataflow verilog description of the starting algorithm

`timescale 1ns / 100ps
module starter(input clk, reset,
               input [11:0] pos,  // the relative current rotor position
               input [11:0] dev,  // deviation exit position for this algorithm
               input [11:0] l1,   // upper limit of 'range2'
               input [11:0] l2,   // upper limit of 'range3'
               input [11:0] l3,   // upper limit of 'range1'
               input startf, startb, stop,
               output as, output bs, output cs, output en);
  // internal wires
  wire a0, a1, a2, a3;
  wire q0, q1;
  wire d0, d1;
  // states
  assign a0 = ~q0 & ~q1;
  assign a1 =  q0 & ~q1;
  assign a2 = ~q0 &  q1;
  assign a3 =  q0 &  q1;
  // input equations
  assign d0 = a0 & startf & ~reset | a1
            | a2 & (pos > l2 & pos < l3 - dev) | a3 & ~stop;
  assign d1 = a0 & startb & ~reset
            | a1 & (pos > dev & pos < l1) | a2 | a3 & ~stop;
  // output equations
  assign as = a0;
  assign bs = a2;
  assign cs = a1;
  assign en = a3;
  // flip-flops
  d_ff dff0 (.q(q0), .d(d0), .clk(clk), .rst(reset));
  d_ff dff1 (.q(q1), .d(d1), .clk(clk), .rst(reset));
endmodule

// d-type flip-flop
module d_ff (input clk, rst, d, output reg q);
  always @(posedge clk)
    if (rst) q <= 1'b0;
    else     q <= d;
endmodule

the simulation waveforms when running in the clockwise and counterclockwise directions are shown in figure 11 and figure 12 respectively. the exact position at which the enabling signal (en) is issued can be clearly seen in the figures.

figure 11 starting in forward direction
figure 12 starting in backward direction

finally, the three phase current waveforms, together with the control signal for one phase, have been obtained from the real working system. they are shown in the oscillogram in figure 13, where the motor is running at average speed in steady state in the forward direction. as can be seen, phase c is switched on immediately after phase a is switched off. the asymmetry in the current shape is due to the nonlinear phase inductance. 
figure 13 running at low speed

v conclusion

to meet the increasing demands of contemporary industrial applications, electrical drives are required to possess qualities such as high speed, low torque ripple, precise positioning and good dynamics, and to be ecological and highly reliable. these requirements call for cutting-edge performance of the control electronics, especially for nonlinear control. the operational principles of switched reluctance motors have been briefly discussed in the paper, and a dedicated, fully digital, embedded srm controller implementation was proposed. built around the spartan 3 fpga, it was shown to be capable of performing fast parallel algorithms that disqualify most of the cpus available on the market. some of the main algorithms of the controller have been discussed, and a highly adaptive architecture has been proposed that is possible only with fast digital logic capable of parallelism. the presented approach is suitable for any kind of 3-phase srm with central geometrical symmetry of the phases, and it can easily be expanded to similar sr motors with a greater number of phases. the simulation results of the starting algorithm, which proved to be a critically important part of the controller, were presented. finally, extensive experimental work has been done, and the oscillogram of the phase currents given in the article proves the efficiency of the proposed controller.

references

[1] t. j. e. miller, electronic control of switched reluctance machines. reed educational and professional publishing ltd, 2001, isbn 0-7506-5073-7
[2] r. krishnan, switched reluctance motor drives: modeling, simulation, analysis, design, and applications. virginia tech, 2001, crc press
[3] h. chen, c. 
pavlitov, large power analysis of switched reluctance machine system for coal mine. mining science and technology journal, vol. 19 no. 5, elsevier, 2009, issn 1674-5264 [4] c. cossar, l. kelly, t. j. e. miller, c. whitley, c. maxwell, d. moorhouse, the design of a switched reluctance drive for aircraft flight control surface actuation. iee colloquium on electric machines and systems for the more electric aircraft, 1999, london [5] a. nishimiya, h. goto, h. guo, o. ichinokura, control of sr motor ev by instantaneous torque control using flux based commutation and phase torque distribution technique. epe-pemc 2008, poznan, poland [6] b. shino, g. dinesh, design of a switched reluctance motor for mixer-grinder application. ictt 2010, college of engineering, trivandrum, india [7] p. lobato, a. cruz, j. silva, a. pires, the switched reluctance generator for wind power conversion. 9th spanish portuguese congress on electrical engineering, 2005, marbella, spain [8] s. song, w. liu, a comparative study on modeling methods for switched reluctance machines. computer and information science, 2010, vol. 3, no. 2, issn 1913-8989 [9] k. ohyama, m. n. f. nashed, k. aso, h. fujii, h. uehara, design using finite element analysis of switched reluctance motor for electric vehicle, information and communication technologies, ictta 2006, damascus, syria, isbn 0-7803-9521-2 [10] c. pavlitov, y. gorbounov, r. rusinov, a. alexandrov, k. hadjov, d. dontchev, an approach to identification of a class of switched reluctance motors. speedam 2008, ischia, italy, isbn 978-1-4244-1663-9 [11] v. trifa, o. rabulea, switched reluctance motor phase inductivity approximation using artificial neural networks. workshop on variable reluctance electrical machines, 2002, technical university of cluj-napoca [12] f. kucuk, h. goto, h. guo, o. ichinokura, fourier series based characterization of switched reluctance motor using runtime data. icem 2010, isbn 978-14244-4174-7 [13] c. pavlitov, y. gorbounov, tz. 
georgiev, application of artificial neural networks for identification of variable reluctance motors, 16th international conference on electrical drives and power electronics, 2007, the high tatras, slovak republic, isbn 978-80-8073-8686 [14] s. misawa, i. miki, a rotor position estimation using fourier series of phase inductance for switched reluctance motor. speedam 2010, pisa, italy, isbn 978-14244-4986-6 [15] m. mikhov, a. avramov, r. ognyanov, (in bulgarian) prospects for application of drives with switched reluctance motors, proceedings of the union of the bulgarian scientists, 2001, sofia, bulgaria, issn 1311-2864 [16] d. silage, embedded design using programmable gate arrays. 2008, isbn 978-1-58909-486-4, bookstand first a. author, second b. author., and third c. author / research name (2013) 51 publishing [17] www.xilinx.com [18] www.digilentinc.com [19] c. pavlitov, electrical drives with switched reluctance motors, 2005, technical report, project no. if-0266/13.12.2005 [20] k. asghar, analysis of switched reluctance motor drives for reduced torque ripple using fpga based simulation technique. vol.6 no.2 2013, issn 25685171, american journal of information sciences [21] a. krishnamurthy, fpga baser control with high temperature switched reluctance motor for improving the input power quality. 2004, university of kentucky master's theses, paper 246 [22] y. gorbounov, identification and control of a class of switched reluctance motors, 2013, phd thesis, faculty of automation, technical university of sofia yassen gorbounov. the author received a phd degree from the faculty of automation at technical university of sofia, department of electrical drives automation in 2013. he is a member of the ieee, region 8, the federation of the scientific and engineering unions in bulgaria (fnts) and john atanasoff union of automation and informatics (uai). he is an author or co-author of over 20 journal and conference papers and is a co-author of 1 book. 
his research interests include automatic control of electrical drives, switched reluctance motor and generator control, the application of neural networks and fuzzy logic to motor control, and parallel processing algorithms with programmable logic devices.
transactions template journal of engineering research and technology, volume 6, issue 2, october 2019
households' affordability and willingness to pay for water services in khan younis city, palestine
mazen abualtayef 1, yousef oukal 1, said ghabayen 1, mohamed eila 2, hatem abueltayef 3
1 environmental engineering department, the islamic university of gaza, palestine
2 projects and international cooperation, environmental quality affairs, palestine
3 water department, municipality of khan younis, palestine
abstract— willingness and affordability to pay for water services are significant factors in deciding the success or failure of water supply services. because households' willingness to pay for water services is heavily influenced by specific circumstances, culture, and various socio-economic factors, this study aims to offer a comprehensive picture of households' willingness and affordability to pay for water services in khan younis city. this may provide a guideline for palestinian policy makers in developing successful water pricing. the study also aims to identify the factors that determine households' willingness to pay and to assess households' perception of the existing water supply situation and water problems. to fulfill the aim of the study, a quantitative research method was used and a questionnaire survey was conducted. the questionnaire was distributed to 400 citizens in khan younis city. a pilot study was carried out in which thirty copies of the questionnaire were distributed to collect feedback, modify the form, and make it easier for respondents to complete. 
spss software was used to identify the most relevant factors affecting households' affordability and willingness to pay. the results of the analyses indicated that income, water distribution schedule, water quality, water quantity, municipality services, marital status, water network maintenance, water continuity, satisfaction with the techniques available in the municipality for delivering citizens' complaints, and the speed of staff response to delivered complaints were determinants of household customers' willingness to pay. furthermore, the results revealed that households' satisfaction with water network maintenance, municipality services, the available complaint-delivery techniques, the speed of staff response to complaints, and water quality was not good. therefore, the municipality of khan younis city has to make improvements to raise its service quality. the analysis also indicated different reasons for not committing to pay water bills; the most important were low income (20.5%), the bad quality of water (25.2%) and bad municipality services (17.9%). based on the study findings, it is recommended that the municipality of khan younis city improve water quality. in addition, the municipality and the coastal municipalities water utility should present their services separately so that citizens have a clear picture of each provider's services and are motivated to pay for improved water services.
index terms— affordability and willingness to pay, water services, khan younis
i introduction
water service is a basic human right. in palestine today, it cannot be fully enjoyed. the near decade-long blockade on gaza denies palestinians control over their water resources and prevents them from developing adequate water services [1]. 
the water situation in the gaza strip is critical for many reasons, among them the repetition of conflicts, political instability, the lack of locally available resources, the weak institutional framework, the dependency on external funding, and the difficulty of importing materials and equipment. significant effort has been made worldwide in recent years to improve access to water [2]. still, the situation is far from perfect, particularly in the gaza strip. adequate financial resources are needed to build a water supply system and to preserve and improve its services; accordingly, the users of the service must contribute to the cost of the improved service. according to cmwu [3], the situation was worsened by the conflict of july-august 2014. the operation had a devastating humanitarian impact, and in particular the damage to water infrastructure was severe and extensive. additionally, the damage to energy generation and the electrical supply system resulted in a significant reduction in water supply to the population as well as deterioration in water quality (salinity). with the continuous disruption to water services, the people of the gaza strip have been made vulnerable to an ever-worsening humanitarian crisis due to the lack of basic service provision, an unfolding environmental disaster and the potential health-related risks that come with it. consequently, water services are considered an economic good: they have a price, prices are derived from tariffs, and tariffs are advocated and formulated in line with the adopted water policy [4]. water pricing is an effective strategy to manage water use. 
moving to a more suitable price scheme can curb inefficient domestic water use by changing household water demand. developing countries are in particular need of practical and effective water pricing methods, since they usually suffer from insufficient water supply services and lack sophisticated and inclusive water pricing systems. the factors that affect households' willingness to pay for improved water sources are too heavily influenced by specific circumstances, culture, and various social factors to be used outside the specific scope of a study [5]. people's attitude towards paying for water is a key factor in deciding the success or failure of water supply projects [6]. further, most municipalities struggle to recover water service costs, since citizens do not pay their water bills. this crisis should be managed carefully to secure the water demand at a successful price. any improvement to the water supply service will increase its cost, and valuation of the water service is the key component of an appropriate incentive for balanced and coordinated investment development in the different parts of the city. hence, this study examines some of the factors that affect households' willingness to pay for improved water services and presents their ability to pay for those services in the city. it sheds light on households' willingness and affordability to pay for water services in khan younis city to generate useful baseline information for policy makers to develop a successful water supply policy.
ii materials and methods
the methodology adopted in this study is an observational cross-sectional approach, integrated to achieve the study objectives: measuring the ability and willingness to pay (wtp) for the water supply service in khan younis city and developing a water tariff model for municipal water departments in the gaza strip. 
a questionnaire was developed for the target group, the water supply customers in khan younis city. the study was built on a quantitative research method in which a questionnaire survey was conducted. in view of the features of the quantitative method as a technique for easier and more precise thorough analysis, the questionnaire was chosen to identify the factors affecting households' willingness and affordability to pay for water services. according to the palestinian central bureau of statistics, the total number of households in khan younis city is 241,870, which yields a sample of 400 households. the sample is random; the questionnaire forms were distributed, filled in and analyzed in khan younis city, which has around 20 localities. based on the review of related literature, a questionnaire with closed and open-ended questions was established. it was prepared in arabic, as most of the target residents were unfamiliar with english. each questionnaire included a cover letter addressing ethical considerations and assisting with questionnaire filling. the questionnaire consisted of four sections. the first section concerned the social and economic background of the respondent. the second addressed the current water supply situation. the third addressed the quality and quantity of the service and the customers' satisfaction. the fourth discussed the affordability of water consumption. moreover, the questionnaire was distributed to panels of experts with experience in the same research area to obtain their notes. after amending the questionnaire according to the experts' comments, and before distributing the final version to the entire sample, a pilot study was carried out in which 30 copies of the questionnaire were distributed. 
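the paper does not state how the sample size was derived, but 400 households drawn from the reported figure of 241,870 is consistent with yamane's common rule of thumb n = N / (1 + N·e²) at a 5% margin of error; a quick check (illustrative sketch, not the authors' stated method):

```python
import math

def yamane_sample_size(population, margin_of_error=0.05):
    """yamane's rule of thumb for survey sample size: n = N / (1 + N * e^2)."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# 241,870 is the figure reported in the paper
print(yamane_sample_size(241_870))  # -> 400
```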
the questionnaire was designed to be administered as a structured interview, and pilot respondents were drawn from the localities where the final survey was expected to be conducted. at the end of this process the modifications were discussed with the supervisor, adjustments and additions were introduced, and the final form of the questionnaire was constructed.
iii results and discussion
a total of 400 households from different localities of khan younis city answered the questionnaire. of these, 20 responses were dropped because some lacked the required information and others gave unreliable or inconsistent answers; hence, 380 questionnaires were used for the analysis. regarding the social and economic profile of the khan younis community, 87.4% of respondents were male and 12.6% were female. of the 380 sampled households, 74% were educated and the remaining 26% were not. according to the survey findings, 91.6% of the respondents are married and 5% are single, which suggests that married respondents are responsible for water cost and consumption. the data on respondents' age show that most fall between 30 and above 40 years.
1. cost of water paid from other resources per month
82.1% of the households paid 50 nis per month for water from other resources. it is believed that they buy this water not because of water shortage but because the water quality is low: they use it for cooking, showering and washing, as the municipal water is too salty. 
figure 1 shows the percentage and frequency of the cost of water paid from other resources per month.
figure 1 cost of water paid from other resources per month (50 nis: 312 households, 82.1%; 51-100 nis: 48 households, 12.6%; more than 100 nis: 20 households, 5.3%)
2. relationship between wtp and water quality satisfaction
results revealed that the water quality satisfaction of 45.5% of the households is poor and that of 45.0% is average, while just 9.5% is good. this indicates that water quality in khan younis city is not good and needs improvement. table 1 shows that water quality satisfaction affects the willingness to pay, since the significance is 0.00. this asserts that for most households, the better the quality of the water they receive, the higher their wtp. this result is confirmed by several research findings.

table 1 one-way anova for water quality satisfaction vs. willingness to pay water bill monthly
                sum of squares   df    mean square   f       sig.
between groups  9.08             2     4.54          11.36   0.00
within groups   150.67           377   0.40
total           159.75           379

it was noted while distributing the questionnaire that the majority of the households are willing to pay in return for raised water quality, since they buy water from other resources not because of water shortage but because of bad water quality. this is confirmed by item (32) of the questionnaire, which shows that 64% of the respondents agree to pay more for water quality improvements. 
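the f statistic in table 1 follows directly from the reported sums of squares (f = ms_between / ms_within); a quick pure-python check of the tabulated values:

```python
def anova_f(ss_between, df_between, ss_within, df_within):
    """one-way anova f ratio from sums of squares and degrees of freedom."""
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within

# values from table 1 (water quality satisfaction vs. wtp)
f = anova_f(9.08, 2, 150.67, 377)
print(round(f, 2))  # -> 11.36
```

the same computation reproduces the f value reported for water network maintenance satisfaction later in the paper (table 4).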
a review of the literature on variables used in previous studies linking improved water quality to households' willingness to pay shows that households were commonly found to be willing to pay more for improved water quality. the study of cho et al. [7] found that rural residents in minnesota are willing to pay to improve their drinking water quality by reducing the iron and sulfate concentrations in the water; however, willingness to pay is lower for consumers who perceive that they are already provided with good-quality water. in addition, a chi-square test was performed to test the effect of water quality satisfaction on households' acceptance to pay more for water improvement and quality. table 2 below illustrates that 63.9% of the households accepted to pay more for improvement, whereas 36.1% did not accept to pay more for improving the services. besides, the chi-square test demonstrated that the significance value of the test is 0.02, which is less than 0.05, so it can be said that water quality satisfaction has an effect on households' acceptance to pay more for water improvement and quality. 
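the pearson chi-square reported in table 3 can be reproduced from the observed counts in the crosstab of table 2; a pure-python check (no stats library needed):

```python
def pearson_chi_square(table):
    """pearson chi-square statistic for a 2-d contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# observed counts from table 2: rows = yes/no, columns = good/average/poor
counts = [[30, 114, 99],
          [6, 57, 74]]
print(round(pearson_chi_square(counts), 3))  # -> 9.807
```

the degrees of freedom are (2 - 1) x (3 - 1) = 2, matching table 3.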
table 2 crosstab of water quality satisfaction (q_19) against the question "if the municipality intends to raise the water services price for improvement and quality, do you agree to pay more for this improvement?" (q_32)

                            water quality satisfaction
agree to pay more?          good     average   poor      total
yes    count                30       114       99        243
       % within q_32        12.3%    46.9%     40.7%     100.0%
       % within q_19        83.3%    66.7%     57.2%     63.9%
no     count                6        57        74        137
       % within q_32        4.4%     41.6%     54.0%     100.0%
       % within q_19        16.7%    33.3%     42.8%     36.1%
total  count                36       171       173       380
       % within q_32        9.5%     45.0%     45.5%     100.0%
       % within q_19        100.0%   100.0%    100.0%    100.0%

table 3 chi-square tests for water quality satisfaction
                               value    df   asymp. sig. (2-sided)
pearson chi-square             9.807    2    .007
likelihood ratio               10.496   2    .005
linear-by-linear association   9.385    1    .002
n of valid cases               380

3. relationship between wtp and water network maintenance satisfaction
table 4 shows the values of the one-way anova and regression tests. they reveal that water network maintenance satisfaction affects the households' willingness to pay for water services; the significance value is 0.00.

table 4 one-way anova for water network maintenance satisfaction vs. willingness to pay water bill monthly
                sum of squares   df    mean square   f        sig.
between groups  18.488           2     9.244         24.671   0.00
within groups   141.259          377   0.375
total           159.747          379

results show that the respondents suffer from many problems in the water networks and do not mind paying extra money to solve these problems and to have their networks maintained continuously. this correlates with the satisfaction findings: the water network maintenance satisfaction of 35.5% of the households is poor and that of 50% is average, while just 14.5% is good, indicating that the water networks in khan younis city need maintenance to increase satisfaction. it is clear that the siege of the gaza strip prevents the municipality from improving and maintaining the water networks; the cost of maintenance equipment is also very high. these results agree with the study of al-ghuraiz and enshassi [8], in which the majority of the respondents (97.2%) believed it necessary to improve the quality and quantity of the water supply service. in that study, 74.5% were convinced that the improvement process needs extra cost while 14.3% believed the contrary, and 82.8% of the respondents were willing to pay for improved services, whereas 17.2% preferred the situation to remain as it is without any improvement because they are not able to pay.
4. affordability to pay for water services
to check the respondents' affordability to pay for water services, chi-square tests, percentages and frequencies are addressed in this section. the results in table 5 show that education affected the respondents' commitment to pay: 69% of the educated respondents can afford to pay. the findings are similar to those of other studies in which the level of education influenced the respondents' affordability to pay [9, 10]. 
this is because the more educated respondents, having better-paying jobs, could not afford the time to collect water from sources outside their homesteads; thus, they were willing to pay for a reliable water service instead of struggling to get water.

table 5 payment commitment per month (q_37)
                                        high   %    average   %    low   %    total
q_2 gender         male                 95     29   113       34   124   37   332
                   female               14     29   12        25   22    46   48
q_3 age            less than 20         1      25   1         25   2     50   4
                   20-30                9      16   24        43   23    41   56
                   31-40                47     32   47        32   53    36   147
                   above 40             52     30   53        31   68    39   173
q_4 marital status married              102    29   114       33   132   38   348
                   single               4      21   6         32   9     47   19
                   widower              1      17   2         33   3     50   6
                   divorced             2      29   3         43   2     29   7
q_5 education      elementary           2      18   5         45   4     36   11
                   preparatory          7      33   6         29   8     38   21
                   secondary            17     25   21        31   29    43   67
                   bachelor             58     26   81        36   84    38   223
                   higher education     25     43   12        21   21    36   58
q_7 occupation     employee             49     34   45        31   50    35   144
                   worker               24     25   40        42   32    33   96
                   private work         30     30   30        30   41    41   101
                   unemployed           6      15   10        26   23    59   39
q_8 income         500-1000 nis         20     21   35        36   41    43   96
                   1000-2500 nis        51     29   61        35   63    36   175
                   2500-4000 nis        30     33   22        24   39    43   91
                   above 4000 nis       8      44   7         39   3     17   18

figure (2) clarifies that 28.7% of the respondents have the ability to pay, since their commitment is high, while 38.4% have low ability to pay. this reveals that the majority of the respondents are not committed to paying for water services, for different reasons. 
figure 2 payment commitment per month (high: 109 households, 28.7%; average: 125, 32.9%; low: 146, 38.4%)
figure (3) demonstrates that 27.4% of the respondents are committed to paying their bills and 72.6% are not committed to paying their water bills, for different reasons. the most important reasons are low income (20.5%), the bad quality of water (25.2%) and bad municipality services (17.9%).
figure 3 the reasons for not paying (high cost of bill: 6.8%; low income level: 20.5%; bad quality and quantity of water: 25.2%; bad municipality services: 17.9%; others: 2.1%)
correspondingly, it can be concluded that high cost, low income level, bad water quantity and quality, and bad municipality services (water network maintenance, quick responses to citizens' complaints and regular meter-reader visits) have an impact on the respondents' commitment to pay their water bills every month. the results of the analyses indicated that income, water distribution schedule, water quality, water quantity, municipality services, marital status, water network maintenance, water continuity, satisfaction with the complaint-delivery techniques available in the municipality, and the speed of staff response to complaints, which were assumed to be the factors in household customers' willingness to pay, are all significant. these factors are expected and in line with previous studies. several studies have asserted the importance of water quality and color [11, 12] and discussed the fact that water users are willing to pay more for better water quality [9], to reduce water pollution [7] and to be provided with a continuous water supply. the current study revealed that income is a factor in household customers' willingness to pay, which agrees with several previous studies [14, 7, 15]. it is presumed that when a household's income increases, its willingness to pay for water should also increase, no matter the amount. 
moreover, wahid and hooi [16] suggested that willingness to pay is the result of numerous factors or features that are emotionally important to household customers. the results of this study are also supported by al-ghuraiz and enshassi [8], who concentrated on essential factors such as water consumption, quality and quantity, socio-economic situation, wtp, ability, and affordability.
iv conclusions
socio-economic characteristics of the respondents: the majority of respondents were household heads (84%) aged 31 and above, and 15% were aged 20 to 30 years. 91.6% of the respondents are married and 5% are single. the respondents' educational level is very high: almost 60% hold a bachelor degree and 15% hold a higher education degree. 46% of the respondents have an income of 1001 to 2500 nis and 25% have a very low income of 500 to 1000 nis.
household access to water and water quality and quantity: the water quality satisfaction of 45.5% of the households is poor and that of 45% is average, while just 9.5% is good. 48.4% of the households paid 55 nis, 36.1% paid 25 nis, 11.3% paid 115 nis and just 4.2% paid more than 175 nis, so water bills can be paid since they are not too high. 67.1% of the households have a very good water distribution schedule, with water distributed every day or every two days; only 5.3% of the respondents suffer from water shortage, with water arriving every week. 82.1% of the households paid 50 nis per month for water from other resources; it is believed that they buy water not because of water shortage but because the water quality is low. the water network maintenance satisfaction of 35.5% of the households is poor and that of 50% is average, while just 14.5% is good. 43.4% of the households consider the municipality service poor, while 18.9% are satisfied. 
therefore, the municipality of khan younis city has to make improvements to raise its service quality. satisfaction with the techniques available in khan younis municipality for delivering citizens' complaints is poor for 41% of the households and average for 44%, while just 15% rate it good. 48.4% of the respondents rate the speed of staff response to delivered complaints as average and 41.6% rate it poor.
factors affecting willingness to pay for water: the results of the analyses indicated that income, water distribution schedule, water quality, water quantity, municipality services, marital status, water network maintenance, water continuity, satisfaction with the complaint-delivery techniques available in the municipality, and the speed of staff response to complaints, which were assumed to be the factors in household customers' willingness to pay, are all significant.
affordability to pay for water: 28.7% of the respondents have the ability to pay, since their commitment is high, while 38.4% have low ability to pay. results showed that education affected the respondents' commitment to pay: 69% of the educated respondents can afford to pay. there are different reasons for not committing to pay water bills; the most important are low income (20.5%), the bad quality of water (25.2%) and bad municipality services (17.9%). 
there is an effect of the cost of water consumption paid per month on households' acceptance to pay more for water improvement and quality. there is an effect of water quality satisfaction on households' acceptance to pay more for water improvement and quality. there are statistically significant differences at (α ≤ 0.05) between regular water-meter reader visits and the households' affordability to pay for water improvement and quality.
references
[1] palestinian water authority, pwa (2015). national water and wastewater policy and strategy for palestine: toward building a palestinian state from a water perspective.
[2] who and unicef (2016). drinking water equity, safety and sustainability. thematic report.
[3] coastal municipalities water utility, cmwu (2015). delivering world class water services. cmwu, palestine.
[4] shonnar, b. (2007). households' affordability and willingness to pay for water and wastewater services in ramallah and al-bireh district, palestine.
[5] wedgewood and sansom (2003). willingness-to-pay surveys: a streamlined approach. guidance notes for small town water services. wedc, loughborough university.
[6] kamaludin, m., rahim, k. a., & radam, a. (2014). assessing consumers' willingness to pay for improving domestic water services in kelantan, malaysia.
[7] cho, y., easter, k. w., mccann, l. m., & homans, f. (2005). are rural residents willing to pay enough to improve drinking water quality? jawra journal of the american water resources association, 41(3), 729-740.
[8] al-ghuraiz, y., & enshassi, a. (2005). ability and willingness to pay for water supply service in the gaza strip. building and environment, 40(8), 1093-1102.
[9] whittington, d. (1990). guidelines for conducting willingness-to-pay studies for improved water services in developing countries. wash technical report no. 56, water and sanitation for health project. washington, dc: us agency for international development.
[10] kanyoka, p. (2008). 
water value and demand for multiple uses in the rural areas of south africa: the case of ga-sekoror of the annual conference; management and regulations.
[11] doria, m. f. (2010). factors influencing public perception of drinking water quality. water policy, 12(1), 1-19.
[12] warren, l. (1996). linking customer satisfaction to performance measures. paper presented at the proceedings.
[13] beaumais, o., briand, a., millock, k., & nauges, c. (2014). what are households willing to pay for better tap water quality? a cross-country valuation study.
[14] bogale and urgessa (2012). households' willingness to pay for improved rural water service provision: application of the contingent valuation method in eastern ethiopia. journal of human ecology, 38(2), 145-154.
[15] genius, m., hatzaki, e., kouromichelaki, e., kouvakis, g., nikiforaki, s., & tsagarakis, k. p. (2008). evaluating consumers' willingness to pay for improved potable water quality and quantity. water resources management, 22(12), 1825-1834.
[16] wahid, n. a., & hooi, c. k. (2014). factors determining household consumers' willingness to pay for water consumption in malaysia. asian social science, 11(5), 26.
transactions template journal of engineering research and technology, volume 4, issue 3, september 2017
true multi-objective optimal power flow in a deregulated environment using intelligent technique
f. r. zaro, palestine polytechnic university (ppu)
abstract—in this paper, a multi-objective particle swarm optimization (mopso) technique is proposed for solving the optimal power flow (opf) problem in a deregulated environment. the opf problem is formulated as a nonlinear constrained multiobjective optimization problem in which the fuel cost and the wheeling cost are optimized simultaneously. the mva-km method is used to calculate the wheeling cost in the system. the proposed approach handles the problem as a true multiobjective optimization problem. 
The results demonstrate the capability of the proposed approach to generate true and well-distributed Pareto-optimal solutions of the multiobjective OPF problem in a single run. In addition, the effectiveness of the proposed approach and its potential to solve the multiobjective OPF problem are confirmed. The IEEE 30-bus system is used to demonstrate the suitability of the algorithm.

Index Terms—Optimal power flow, particle swarm optimization, wheeling cost, fuel cost, multiobjective optimization.

I Introduction

The optimal power flow (OPF) problem is one of the most widely studied subjects in the power system field. Researchers are still working on OPF problems posed by present-day challenges of power systems such as liberalized markets and microgrids. Due to the deregulation of the electricity market and the consideration of dynamic system properties, the OPF requirements have become more complex than before, and classical power system concepts and practices have been superseded by the management of the economic market [1]. In a deregulated electricity market, OPF results can be extended into many research fields: electricity transmission fee computation, locational real-time pricing, available transfer capability estimation, network congestion management, etc. [2]. The most common methods for solving the OPF are linear programming, nonlinear programming, quadratic programming, Newton-Raphson, interior-point and artificial intelligence (AI) methods. AI methods include the genetic algorithm (GA), evolutionary programming (EP), artificial neural networks, ant colony optimization, fuzzy logic and particle swarm optimization (PSO) [3]-[4]. Power transfer allocation is one of the important issues in the deregulated power industry. The most common methods for allocating the payment for electricity transmission systems are MVA-mile, MW-mile, contract path, postage-stamp rate, unused transmission capacity, counter-flow, and distribution factors [5].
The results of GA, EP and PSO have been promising and encourage further research on solving the OPF problem [6]-[8]. In a standard OPF problem, several objectives can be defined. Traditionally, the multiobjective OPF problem was converted to a single-objective problem by a linear combination of the different objectives as a weighted sum. However, this requires multiple runs, depending on the number of desired Pareto-optimal solutions (POS). Additionally, this method cannot find POS in problems having a non-convex Pareto-optimal front. Evolutionary algorithms can be used efficiently to eliminate most of these difficulties of conventional methods [9]-[11]. Since they use a population of solutions in their search, multiple POS can be found in a single run. Multiobjective evolutionary algorithms have been applied to different power system optimization problems with impressive success [12]-[15]. Generally, the major challenges in multiobjective optimization are generating a uniformly distributed Pareto set with maximum diversity, selecting the best compromise solution from the Pareto set, and computational efficiency. Several methods have been proposed to solve multiobjective optimization problems [16]-[19]. In this paper, a multi-objective particle swarm optimization (MOPSO) technique is utilized to solve the OPF problem. The OPF problem is formulated as a nonlinear constrained multiobjective optimization problem in which the fuel cost and the wheeling cost are treated as competing objectives. A hierarchical clustering technique is implemented to manage the Pareto-optimal set size. Furthermore, fuzzy set theory is used to choose the best compromise solution from the Pareto-optimal solutions. Several runs have been carried out on the standard IEEE 30-bus test system. The rest of this paper is organized as follows. The problem statement is described in Section II.
Multiobjective optimization and the proposed approach are described in Sections III and IV respectively. The implementation of the proposed technique is described in Section V. Finally, the results and conclusions are given in Sections VI and VII respectively.

F. R. Zaro / True Multi-Objective Optimal Power Flow in a Deregulated Environment Using Intelligent Technique (2017)

II Problem Statement

A. Problem Objectives

1. Minimization of fuel cost: the generator cost curves can be represented as

F_i = a_i + b_i P_gi + c_i P_gi^2   ($/hour)   (1)

where F_i is the fuel cost of the i-th generator; a_i, b_i and c_i are the cost coefficients of the i-th generator (Table I contains the values of these coefficients); and P_gi is the real power output of the i-th generator. In this study, J_1 represents the total fuel cost, i.e.,

J_1 = f(P_g) = \sum_{i=1}^{NG} F_i   ($/hour)   (2)

where NG is the number of generators.

2. Minimization of wheeling cost: the wheeling cost of branch i is represented by

C_i = w_f S_i L_i   (cent/hour)   (3)

where w_f is a weighting factor with unit cent/(hour·MVA·km), S_i is the average apparent power flow in branch i (MVA) and L_i is the length of branch i (km). J_2 represents the total wheeling cost (CT):

J_2 = CT = \sum_{i=1}^{NB} w_f S_i L_i   (4)

where NB is the number of branches.

B. Problem Constraints

The equality constraints are the load flow equations:

P_gi − P_di − V_i \sum_{j=1}^{NB} V_j [G_ij cos(δ_i − δ_j) + B_ij sin(δ_i − δ_j)] = 0   (5)

Q_gi − Q_di − V_i \sum_{j=1}^{NB} V_j [G_ij sin(δ_i − δ_j) − B_ij cos(δ_i − δ_j)] = 0   (6)

where i = 1, …, NB and NB is the number of buses; P_g and Q_g are the generator real and reactive power respectively; P_d and Q_d are the load real and reactive power respectively; G_ij and B_ij are the transfer conductance and susceptance between bus i and bus j respectively [12].
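The two objectives in (1)-(4) are cheap to evaluate directly; the following is a minimal Python sketch of J_1 and J_2, using illustrative generator and branch data rather than the paper's actual system values:

```python
def fuel_cost(p_g, a, b, c):
    """Total fuel cost J1 in $/hour: sum of a_i + b_i*P_gi + c_i*P_gi^2
    over all generators (equations (1)-(2))."""
    return sum(ai + bi * p + ci * p ** 2 for ai, bi, ci, p in zip(a, b, c, p_g))

def wheeling_cost(s, l, w_f):
    """Total wheeling cost J2 in cent/hour: w_f * sum(S_i * L_i) over all
    branches, with S_i in MVA and L_i in km (equations (3)-(4))."""
    return w_f * sum(si * li for si, li in zip(s, l))

# Hypothetical 2-generator, 2-branch example (illustrative numbers only)
j1 = fuel_cost([100.0, 50.0], a=[0.0, 0.0], b=[2.0, 1.75], c=[0.00375, 0.0175])
j2 = wheeling_cost(s=[40.0, 25.0], l=[10.0, 20.0], w_f=1.0)
```

In the full problem these two scalars form the objective vector [J_1, J_2] that the MOPSO search minimizes simultaneously.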
The inequality constraints are the system operating constraints:

- Generation constraints: the generator voltages V_g and reactive power outputs Q_g are restricted by their lower and upper limits as follows:

V_gi^min ≤ V_gi ≤ V_gi^max,   i = 1, …, NG   (7)
Q_gi^min ≤ Q_gi ≤ Q_gi^max,   i = 1, …, NG   (8)

where NG is the number of generators.

- Transformer constraints: the transformer tap settings T are bounded as follows:

T_i^min ≤ T_i ≤ T_i^max,   i = 1, …, NT   (9)

where NT is the number of transformers.

- Switchable VAR source constraints: the switchable VAR compensations Q_c are restricted by their limits as follows:

Q_ci^min ≤ Q_ci ≤ Q_ci^max,   i = 1, …, NC   (10)

where NC is the number of switchable VAR sources.

- Security constraints: these include the constraints on load bus voltages and transmission line loadings as follows:

V_li^min ≤ V_li ≤ V_li^max,   i = 1, …, NL   (11)

where NL is the number of load buses, and

S_li ≤ S_li^max,   i = 1, …, NB   (12)

C. Problem Formulation

The multiobjective optimization problem can be mathematically formulated as a nonlinear constrained problem as follows:

Minimize [J_1, J_2]   (13)
subject to: g(x, u) = 0   (14)
h(x, u) ≤ 0   (15)

where x is the vector of dependent variables, consisting of the real power generated at the slack bus, the load bus voltages V_l, the generator reactive power outputs Q_g, and the transmission line loadings S_l. Hence, x can be expressed as

x = [P_g1, V_l1, …, V_lNL, Q_g1, …, Q_gNG, S_l1, …, S_lNB]^T   (16)

u is the vector of control variables, consisting of the generator voltages V_g, the generator real power outputs, the transformer tap settings T, and the shunt compensations Q_c. Hence, u can be expressed as

u = [V_g1, …, V_gNG, P_g2, …, P_gNG, T_1, …, T_NT, Q_c1, …, Q_cNC]^T   (17)

g is the set of equality constraints and h is the set of inequality constraints.
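The inequality constraints (7)-(12) all share the same lower/upper-bound form, which a feasibility check can treat uniformly; a small sketch (the sample voltage values are illustrative, not taken from the test system):

```python
def satisfies_limits(values, lower, upper):
    """Check box constraints of the form x_min <= x <= x_max, as in the
    inequality constraints (7)-(12)."""
    return all(lo <= v <= hi for v, lo, hi in zip(values, lower, upper))

# e.g. three generator voltages against 0.95-1.10 p.u. bounds
ok = satisfies_limits([1.05, 1.04, 1.01], [0.95] * 3, [1.10] * 3)  # feasible
```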
III Multiobjective Optimization

Generally, a multiobjective optimization problem consists of a number of objectives to be optimized simultaneously, subject to a number of equality and inequality constraints [11]-[12], [19]. It can be formulated as follows:

Minimize f_i(x),   i = 1, …, N_obj   (18)
subject to: g_j(x) = 0, j = 1, …, M;   h_k(x) ≤ 0, k = 1, …, K   (19)

where f_i is the i-th objective function, x is a decision vector that represents a solution, and N_obj is the number of objectives. In a multiobjective minimization problem, a solution x^1 dominates a solution x^2 if and only if:

1. ∀ i ∈ {1, 2, …, N_obj}: f_i(x^1) ≤ f_i(x^2)   (20)
2. ∃ j ∈ {1, 2, …, N_obj}: f_j(x^1) < f_j(x^2)   (21)

The set of nondominated solutions is denoted the Pareto-optimal set, and its image in the objective space is the Pareto-optimal front.

IV The Proposed Approach

A. Overview

The MOPSO technique is proposed for solving the OPF problem [20]-[24]. The OPF problem is formulated as a nonlinear constrained multiobjective optimization problem in which the fuel cost and the wheeling cost are treated as competing objectives. A hierarchical clustering technique is implemented to manage the Pareto-optimal set size [25]. Furthermore, fuzzy set theory has been used to find the best compromise solution since, for decision-making purposes and practical reasons, one is interested in only one solution [26], [27]. The detailed flow chart of the proposed MOPSO is shown in Fig. 1. The basic elements of the proposed MOPSO technique are briefly defined as follows [17], [28]-[34]:

- Nondominated local set, S_j*(t): a set that stores the nondominated solutions obtained by the j-th particle up to the current time. As the j-th particle moves through the search space, its new position is added to this set and the set is updated to keep only the nondominated solutions.
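The dominance conditions (20)-(21) and the extraction of a nondominated set can be sketched directly; `pareto_front` is a hypothetical helper name, and the objective pairs below are illustrative (fuel cost, wheeling cost) values:

```python
def dominates(f1, f2):
    """x1 dominates x2 (minimization) iff f_i(x1) <= f_i(x2) for every
    objective i and f_j(x1) < f_j(x2) for at least one j -- eqs (20)-(21)."""
    return all(u <= v for u, v in zip(f1, f2)) and any(u < v for u, v in zip(f1, f2))

def pareto_front(points):
    """Keep only the objective vectors not dominated by any other point
    (the nondominated, i.e. Pareto-optimal, subset)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (800, 1500) and (830, 1400) trade off against each other;
# (900, 1600) is worse than both in every objective
front = pareto_front([(800, 1500), (830, 1400), (900, 1600)])
```

The same filter is what updates the local, global and external sets described next.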
An average-linkage-based hierarchical clustering algorithm is employed to reduce the nondominated local set size if it exceeds a prespecified value.

- Nondominated global set, S**(t): a set that stores the nondominated solutions obtained by all particles up to the current time. First, the union of all nondominated local sets is formed. Then, the nondominated solutions of this union become the members of the nondominated global set.

- External set: an archive that stores a historical record of the nondominated solutions obtained along the search process. This set is updated continuously after each iteration by applying the dominance conditions to the union of this set and the nondominated global set. The nondominated solutions of this union become the members of the updated external set.

- Local best, x_j*(t), and global best, x_j**(t): in order to guide the search towards the Pareto-optimal front, the local and global best individuals are selected as follows. The distances between the members of the nondominated local set of the j-th particle, S_j*(t), and the members of the nondominated global set, S**(t), are measured in the objective space. The members x_j*(t) of S_j*(t) and x_j**(t) of S**(t) that give the minimum distance are selected as the local best and the global best of the j-th particle respectively.

B. MOPSO Steps

The steps of MOPSO can be summarized as follows:

Step 1: Initialization: Set the time counter t = 0 and randomly generate n particles, {x_j(0), j = 1, …, n}, where x_j(0) = [x_j,1(0), …, x_j,m(0)] and m is the number of optimized parameters. x_j,k(0) is generated by randomly selecting a value with uniform probability over the k-th optimized parameter's search space [x_k^min, x_k^max]. Similarly, randomly generate the initial velocities of all particles, {v_j(0), j = 1, …, n}, where v_j(0) = [v_j,1(0), …, v_j,m(0)].
v_j,k(0) is generated by randomly selecting a value with uniform probability over the k-th dimension [−v_k^max, v_k^max]. Each particle in the initial population is evaluated using the objective functions. For each particle, set S_j*(0) = {x_j(0)} and the local best x_j*(0) = x_j(0), j = 1, …, n. Search for the nondominated solutions and form the nondominated global set S**(0). The nearest member of S**(0) to x_j*(0) is selected as the global best x_j**(0) of the j-th particle. Set the external set equal to S**(0). Set the initial value of the inertia weight w(0).

Step 2: Time updating: Update the time counter t = t + 1.

Step 3: Weight updating: Update the inertia weight w(t) = α w(t−1), where α is a decrement constant smaller than, but close to, 1.

Step 4: Velocity updating: Using the global best and local best of each particle, the j-th particle's velocity in the k-th dimension is updated according to

v_j,k(t) = w(t) v_j,k(t−1) + c_1 r_1 (x*_j,k(t−1) − x_j,k(t−1)) + c_2 r_2 (x**_j,k(t−1) − x_j,k(t−1))   (22)

Step 5: Position updating: Based on the updated velocities, each particle changes its position according to

x_j,k(t) = x_j,k(t−1) + v_j,k(t)   (23)

If a particle violates its position limits in any dimension, its position is set at the proper limit.

Step 6: Nondominated local set updating: The updated position of the j-th particle is added to S_j*(t). The dominated solutions in S_j*(t) are truncated and the set is updated accordingly. If the size of S_j*(t) exceeds a prespecified value, the clustering algorithm is invoked to reduce the size to its maximum limit.

Step 7: Nondominated global set updating: The union of all nondominated local sets is formed, and the nondominated solutions of this union are extracted to be the members of the nondominated global set S**(t).
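The updates (22)-(23) together with the clamping of Step 5 can be sketched as follows; the acceleration constants c1 = c2 = 2.0 and the injectable random source are assumptions for illustration, not values stated in the paper:

```python
import random

def update_particle(x, v, x_local, x_global, w, c1=2.0, c2=2.0,
                    x_min=None, x_max=None, rng=random.random):
    """One MOPSO move per equations (22)-(23): inertia plus attraction
    towards the particle's local best and the swarm's global best, then a
    position update with clamping to the search-space limits (Step 5)."""
    x_new, v_new = [], []
    for k in range(len(x)):
        r1, r2 = rng(), rng()
        vk = w * v[k] + c1 * r1 * (x_local[k] - x[k]) + c2 * r2 * (x_global[k] - x[k])
        xk = x[k] + vk
        if x_min is not None:
            xk = max(x_min[k], min(x_max[k], xk))  # set position at the proper limit
        v_new.append(vk)
        x_new.append(xk)
    return x_new, v_new
```

Passing a deterministic `rng` makes the update reproducible, which is convenient when checking the algebra of (22).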
The size of this set is reduced by the clustering algorithm if it exceeds a prespecified value.

Step 8: External set updating: The external Pareto-optimal set is updated as follows. Copy the members of S**(t) to the external Pareto set. 1. Search the external Pareto set for the nondominated individuals and remove all dominated solutions from the set. 2. If the number of individuals stored in the external Pareto set exceeds the maximum size, reduce the set by means of clustering.

Step 9: Local best and global best updating: The distances between the members of S_j*(t) and the members of S**(t) are measured in the objective space. The members x_j*(t) of S_j*(t) and x_j**(t) of S**(t) that give the minimum distance are selected as the local best and the global best of the j-th particle respectively.

Step 10: Stopping criteria: If the number of iterations exceeds its maximum preset limit, stop; otherwise go to Step 2.

C. Reducing the Pareto Set by Clustering

The hierarchical clustering algorithm is utilized to manage the Pareto-optimal set. From the decision maker's point of view, reducing the size of the Pareto-optimal set without affecting the trade-off front is desirable [35].

D. Best Compromise Solution

The decision maker makes the final judgment based on the best compromise solution, which is selected from among the Pareto-optimal solutions using fuzzy set theory [36]. To formulate the fuzzy membership function, the decision maker is asked to assess an unacceptable value of each objective f_i, denoted by f_i^max, and a satisfactory value, denoted by f_i^min. A membership value of 0 means least satisfaction, whereas 1 indicates maximum satisfaction.
Mathematically, the fuzzy membership function for each objective can be defined as:

μ_i = 1,   f_i ≤ f_i^min
μ_i = (f_i^max − f_i) / (f_i^max − f_i^min),   f_i^min < f_i < f_i^max
μ_i = 0,   f_i ≥ f_i^max   (24)

The normalized membership value μ^k is calculated as:

μ^k = \sum_{i=1}^{N_obj} μ_i^k / \sum_{k=1}^{M} \sum_{i=1}^{N_obj} μ_i^k   (25)

where M is the number of nondominated solutions. The best compromise solution is the one that attains the maximum value of μ^k.

E. Proportional Sharing Principle

The proportional sharing principle has been used to trace the power flow. Fig. 2 shows this principle, where f_1 and f_2 represent the outflows at the connected node whereas f_a and f_b represent the inflows [37].

F. Upstream Looking Algorithm

In this study, the tracing algorithm for the electricity flow looks at the flows entering the network nodes, so it is referred to as upstream looking.

Figure 1: Flow chart of the proposed approach.

Figure 2: Proportional sharing principle, in which each outflow is shared among the inflows in proportion to their magnitudes:

f_1 = (f_a / (f_a + f_b)) f_1 + (f_b / (f_a + f_b)) f_1
f_2 = (f_a / (f_a + f_b)) f_2 + (f_b / (f_a + f_b)) f_2

The upstream looking technique develops a set of real and reactive power contribution factors, using the results of an AC power flow and the law of conservation of apparent power. From these contribution factors, the share of each generator in each transmission line and in the transmission losses can be calculated. This algorithm determines the gross power flow, showing how the power output of each generator is distributed among the lines and loads [37]-[40]. For the reactive power flow, a transmission line is represented by its π equivalent, and its charging capacitance effects are included in its terminal bus loads according to the AC power flow solution. The reactive power flows at the two terminals of a line have different directions.
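The membership function (24) and the normalized selection rule (25) can be sketched as follows; the function names and the sample satisfactory/unacceptable bounds are illustrative:

```python
def membership(f, f_min, f_max):
    """Fuzzy membership of one objective value (equation (24)): 1 at or
    below the satisfactory value f_min, 0 at or above the unacceptable
    value f_max, linear in between."""
    if f <= f_min:
        return 1.0
    if f >= f_max:
        return 0.0
    return (f_max - f) / (f_max - f_min)

def best_compromise(front, f_min, f_max):
    """Index of the Pareto solution with the highest normalized membership
    sum (equation (25)); normalization does not change the argmax."""
    mu = [sum(membership(fi, lo, hi) for fi, lo, hi in zip(sol, f_min, f_max))
          for sol in front]
    total = sum(mu)
    return max(range(len(front)), key=lambda k: mu[k] / total)
```

With a front of (fuel cost, wheeling cost) pairs, the solution that is reasonably good in both objectives scores higher than either extreme.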
A virtual bus is added at the middle of each transmission line. This bus acts as a reactive source or sink, responsible for the line's reactive generation or consumption [37]. Fig. 3 shows the virtual bus model.

Figure 3: Virtual bus model.

G. MVA-km Methodology

The MVA-km method is an AC power flow based method and an extended version of the MW-km method that considers both real and reactive power. The MVA-km method allocates the wheeling cost based on the magnitude of power and the geographical distance between the delivery point and the receipt point [41]-[43]. The total transmission network cost can then be calculated using (3) and (4).

H. Implementation of the Proposed Approach

In this study, the techniques used were developed and implemented in MATLAB. In all optimization runs, the population size is 50 and the maximum number of generations is 500. The maximum sizes of the Pareto-optimal set and the local best set were selected as 20 and 10 respectively. The clustering technique is used when the size of the Pareto-optimal set in the global best set or the local best set exceeds the respective bound.

I. Results and Discussion

The IEEE 30-bus system has been used to investigate the effectiveness of the proposed approach. Fig. 4 shows the single-line diagram of the test system; the detailed data are given in [44]. The system has six generators at buses 1, 2, 5, 8, 11 and 13, and four transformers with off-nominal tap ratios in lines 6-9, 6-10, 4-12 and 27-28. Table II lists the lower and upper limits and the initial settings of the control variables, together with the initial values of the objective functions. First, the fuel cost and wheeling cost objectives are optimized individually, and the best results for each are given in Table II. The convergence of the fuel cost and wheeling cost objectives is shown in Fig. 5 and Fig. 6 respectively.
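The proportional sharing principle of subsection E, on which the upstream-looking tracing and the MVA-km allocation rest, amounts to splitting each outflow among the node's inflows by magnitude; a minimal sketch with hypothetical flow values:

```python
def split_outflow(outflow, inflows):
    """Proportional sharing at a node (Fig. 2): each outflow is supplied
    by the inflows in proportion to their magnitudes, e.g.
    f1 = (fa / (fa + fb)) * f1 + (fb / (fa + fb)) * f1."""
    total = sum(inflows)
    return [outflow * f_in / total for f_in in inflows]

# Node with inflows fa = 60 MVA, fb = 40 MVA and an outflow f1 = 70 MVA:
# the source behind fa supplies 42 of f1, the source behind fb supplies 28
shares = split_outflow(70.0, [60.0, 40.0])
```

Applying this split line by line yields the per-generator contribution factors that the MVA-km method prices by S_i L_i.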
The problem was then handled as a multiobjective optimization problem in which both objectives were optimized simultaneously with the proposed approach. Two cases have been simulated:

Case 1: The generator cost curves are represented by quadratic functions as shown in (1); the values of the coefficients are given in Table I. The diversity of the Pareto-optimal set over the trade-off surface is shown in Fig. 7. The best fuel cost, the best wheeling cost and the best compromise solution are given in Table II and shown in Fig. 7.

Case 2: In this case, the cost curves of the generators at buses 1 and 2 are represented by piecewise quadratic functions as given in Table III [45]. The result of this case is shown in Fig. 8.

Figure 4: Single-line diagram of the IEEE 30-bus test system.
Figure 5: Fuel cost ($/hour) versus iterations for the IEEE 30-bus test system.
Figure 6: Wheeling cost (cent/hour) versus iterations for the IEEE 30-bus test system.
Figure 7: Multiobjective optimization for the IEEE 30-bus test system, Case 1 (best fuel cost, best wheeling cost and best compromise solution marked on the trade-off curve).
Figure 8: Multiobjective optimization for the IEEE 30-bus test system, Case 2.

Table II: Optimal settings of control variables. Columns: lower limit, upper limit, base case [21]; individual optimization (best fuel cost, best wheeling cost); proposed MOPSO approach (best fuel cost, best wheeling cost, best compromise solution).

Generator outputs (MW)
P1      50    200   99.226   177.24   74.25   169.27   78.72   135.08
P2      20    80    80.000   48.77    80.00   49.05    77.25   45.83
P5      15    50    50.000   21.33    50.00   21.73    47.23   35.99
P8      10    35    20.000   21.19    35.00   24.89    35.00   33.59
P11     10    30    20.000   11.55    30.00   14.79    30.00   27.01
P13     12    40    20.000   12.00    18.07   12.00    19.09   12.00

Generator voltages (p.u.)
V1      0.95  1.10  1.050    1.10     1.05    1.10     1.10    1.07
V2      0.95  1.10  1.040    1.04     1.00    1.04     1.06    1.03
V5      0.95  1.10  1.010    1.06     1.04    1.05     1.07    1.02
V8      0.95  1.10  1.010    1.10     1.09    1.05     1.07    1.07
V11     0.95  1.10  1.050    1.10     1.00    1.04     1.04    1.03
V13     0.95  1.10  1.050    1.10     1.03    1.04     1.05    1.03

Transformer tap positions
T6-9    0.90  1.10  1.078    0.94     0.98    1.01     1.01    1.00
T6-10   0.90  1.10  1.069    1.10     0.91    0.95     0.90    0.91
T4-12   0.90  1.10  1.032    1.03     1.01    1.03     1.02    1.01
T28-27  0.90  1.10  1.068    0.98     1.00    1.00     1.01    1.01

Shunt elements (MVAr)
Qc10    0.00  5.00  0.0      5.00     3.19    4.39     3.04    3.26
Qc12    0.00  5.00  0.0      4.96     5.00    3.39     3.82    3.21
Qc15    0.00  5.00  0.0      5.00     5.00    2.35     4.66    4.10
Qc17    0.00  5.00  0.0      3.49     3.96    3.92     3.19    4.67
Qc20    0.00  5.00  0.0      3.24     2.26    2.87     1.43    1.89
Qc21    0.00  5.00  0.0      5.00     5.00    3.09     5.00    3.63
Qc23    0.00  5.00  0.0      0.52     5.00    2.13     2.25    3.07
Qc24    0.00  5.00  0.0      5.00     5.00    4.14     2.05    3.02
Qc29    0.00  5.00  0.0      2.51     4.62    4.52     3.48    2.66

Fuel cost ($/hour)        901.84     799.21    926.24     800.65     909.73     829.97
Wheeling cost ($/hour)    1,796.81   1,835.04  1,333.21   1,707.65   1,411.15   1,506.58

Table I: Generator cost coefficients.

        G1     G2     G5     G8     G11    G13
a       0.0    0.0    0.0    0.0    0.0    0.0
b       200    175    100    325    300    300
c       37.5   175    625    83.4   250    250

Table III: Generator cost coefficients for Case 2.
        From (MW)   To (MW)   a      b      c
G1      50          140       55.0   0.70   0.0050
        140         200       82.5   1.05   0.0075
G2      20          55        40.0   0.30   0.0100
        55          80        80.0   0.60   0.0200

V Conclusion

The multi-objective particle swarm optimization technique has been employed to obtain a multi-objective solution to the optimal power flow problem of the IEEE 30-bus power system model. In all optimization runs, the swarm size is taken as 50 and the maximum number of generations is set at 500. The fuel cost and the wheeling cost have been considered as competing objectives. Furthermore, a non-smooth fuel cost curve has been considered. A clustering technique has been employed to manage the number of Pareto-optimal solutions. Moreover, fuzzy set theory has been utilized to extract the best compromise solution over the trade-off curve. The results show the performance and efficiency of the proposed technique in solving the multiobjective optimal power flow problem.

Acknowledgment

The author wishes to thank Prof. M. A. Abido for his valuable remarks and comments during the conduct of this research.

References

[1] Z. Qiu and G. Deconinck, "A literature survey of optimal power flow problems in the electricity market context," Power Systems Conference and Exposition, 2009, pp. 1-6.
[2] K. S. Pandya and S. K. Joshi, "A survey of optimal power flow methods," Journal of Theoretical and Applied Information Technology, pp. 450-458, 2008.
[3] Q. Kang, M. Zhou, J. An and Q. Wu, "Swarm intelligence approaches to optimal power flow problem with distributed generator failures in power networks," IEEE Transactions on Automation Science and Engineering, vol. 10, no. 2, pp. 343-353, 2013.
[4] M. R. Al-Rashidi and M. E. El-Hawary, "Applications of computational intelligence techniques for solving the revived optimal power flow problem," Electric Power Systems Research, vol. 79, no. 4, pp. 694-702, Apr. 2009.
[5] M. Shahidehpour, H. Yamin, and Z.
Li, Market Operations in Electric Power Systems: Forecasting, Scheduling, and Risk Management. John Wiley & Sons, Inc., 2002.
[6] L. Lai and J. Ma, "Improved genetic algorithms for optimal power flow under both normal and contingent operation states," Int. J. Electrical Power & Energy Systems, vol. 19, no. 5, pp. 287-292, 1997.
[7] J. Yuryevich and K. P. Wong, "Evolutionary programming based optimal power flow algorithm," IEEE Trans. on Power Systems, vol. 14, no. 4, pp. 1245-1250, 1999.
[8] M. A. Abido, "Optimal power flow using particle swarm optimization," International Journal of Electrical Power and Energy Systems, vol. 24, no. 7, pp. 563-571, Oct. 2002.
[9] K. Thenmalar and A. Allirani, "Particle swarm optimization scheme for the solution of economic dispatch," 7th International Conference on Intelligent Systems and Control (ISCO), pp. 143-147, 2013.
[10] B. Xue, M. Zhang and W. N. Browne, "Particle swarm optimization for feature selection in classification: a multi-objective approach," IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1-16, 2013.
[11] E. Zitzler and L. Thiele, "An evolutionary algorithm for multiobjective optimization: the strength Pareto approach," Swiss Federal Institute of Technology, TIK-Report no. 43, 1998.
[12] M. A. Abido, "Multiobjective evolutionary algorithms for electric power dispatch problem," IEEE Trans. on Evolutionary Computation, vol. 10, no. 3, pp. 315-329, June 2006.
[13] M. A. Abido, "Environmental/economic power dispatch using multiobjective evolutionary algorithms," IEEE Trans. on Power Systems, vol. 18, no. 4, pp. 1529-1537, November 2003.
[14] Abdullah M. Shaheen, Ragab A. El-Sehiemy and Sobhy M.
Farrag, "Solving multiobjective optimal power flow problem via forced initialized differential evolution algorithm," IET Generation, Transmission & Distribution, vol. 10, iss. 7, pp. 1634-1647, 2016.
[15] Hao Wang, Huilan Jiang, Ke Xu and Guodong Li, "Reactive power optimization of power system based on improved particle swarm optimization," Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), 2011 4th International Conference, pp. 606-609, 2011.
[16] T. Niknam, M. R. Narimani, J. Aghaei and R. Azizipanah-Abarghooee, "Improved particle swarm optimisation for multi-objective optimal power flow considering the cost, loss, emission and voltage stability index," IET Generation, Transmission & Distribution, vol. 6, iss. 6, pp. 515-527, 2012.
[17] M. A. Abido, "Multiobjective particle swarm optimization for environmental/economic dispatch problem," Electric Power Systems Research, vol. 79, no. 7, pp. 1105-1113, July 2009.
[18] Shu-Kui Liu, Jing Tang, Qi Li, Xia Wu and Yan Luo, "Reactive power optimization in power system based on adaptive focusing particle swarm optimization," Electrical and Control Engineering (ICECE), 2010 International Conference, pp. 4003-4006, 2010.
[19] R. Azizipanah-Abarghooee, V. Terzija, F. Golestaneh and A. Roosta, "Multiobjective dynamic optimal power flow considering fuzzy-based smart utilization of mobile electric vehicles," IEEE Transactions on Industrial Informatics, vol. 12, iss. 2, pp. 503-514, 2016.
[20] M. A. Abido, "Multiobjective particle swarm optimization for optimal power flow problem," 12th International Middle-East Power System Conference, MEPCON 2008, pp. 392-396.
[21] N. Mancer, B. Mahdad, K. Srairi and M. Hamed, "Multi objective ORPF using PSO with time varying acceleration considering TCSC," Environment and Electrical Engineering (EEEIC), 2012 11th International Conference, pp. 802-807, 2012.
[22] Wang Jianguo, Liu Wenjing, Zhang Wenxing and Yang Bin, "Multi-objective particle swarm optimization algorithm based on self-update strategy," Industrial Control and Electronics Engineering (ICICEE), 2012 International Conference, pp. 171-174, 2012.
[23] J. Praveen and B. Srinivasa Rao, "Multi objective optimization for optimal power flow with IPFC using PSO," 3rd International Conference on Electrical Energy Systems (ICEES), pp. 85-90, 2016.
[24] J. E. Fieldsend and S. Singh, "A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence," Proceedings of the 2002 U.K. Workshop on Computational Intelligence, 2-4 September 2002, pp. 37-44.
[25] N. Morse, "Reducing the size of the nondominated set: pruning by clustering," Computers and Operations Research, vol. 7, no. 1-2, pp. 55-66, 1980.
[26] J. S. Dhillon, S. C. Parti, and D. P. Kothari, "Stochastic economic emission load dispatch," Electric Power Systems Research, vol. 26, no. 3, pp. 179-186, April 1993.
[27] F. R. Zaro and M. A.
Abido, "Multi-objective particle swarm optimization for optimal power flow in a deregulated environment," Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference, pp. 1122-1127, 22-24 Nov. 2011.
[28] Govind Rai Goyal and H. D. Mehta, "Multi-objective optimal active power dispatch using swarm optimization techniques," 5th Nirma University International Conference on Engineering (NUiCONE), pp. 1-6, 2015.
[29] M. Reyes-Sierra and C. A. C. Coello, "Multi-objective particle swarm optimizers: a survey of the state-of-the-art," International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287-308, 2006.
[30] R. I. Chang, S. Y. Lin and Y. Hung, "Particle swarm optimization with query-based learning for multi-objective power contract problem," Expert Systems with Applications, vol. 39, iss. 3, pp. 3116-3126, 15 Feb. 2012.
[31] L. Wang and C. Singh, "Stochastic combined heat and power dispatch based on multi-objective particle swarm optimization," International Journal of Electrical Power & Energy Systems, vol. 30, iss. 3, pp. 226-234, March 2008.
[32] J. Cai, X. Ma, Q. Li and H. Peng, "A multi-objective chaotic particle swarm optimization for environmental/economic dispatch," Energy Conversion and Management, vol. 50, iss. 5, pp. 1318-1325, May 2009.
[33] Xiangjing Su, Mohammad A. S. Masoum and Peter J. Wolfs, "PSO and improved BSFS based sequential comprehensive placement and real-time multiobjective control of delta-connected switched capacitors in unbalanced radial MV distribution networks," IEEE Transactions on Power Systems, vol. 31, iss. 1, pp. 612-622, 2016.
[34] Sanjib Ganguly, "Multi-objective planning for reactive power compensation of radial distribution networks with unified power quality conditioner allocation using particle swarm optimization," IEEE Transactions on Power Systems, vol. 29, iss. 4, pp. 1801-1810, 2014.
[35] E. Zitzler and L.
Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, Nov. 1999.
[36] L. Wang and C. Singh, "Stochastic economic emission load dispatch through a modified particle swarm optimization algorithm," Electric Power Systems Research, vol. 78, no. 8, pp. 1466-1476, August 2008.
[37] J. Bialek, "Tracing the flow of electricity," IEE Proc.-Gener. Transm. Distrib., vol. 143, no. 4, pp. 313-320, July 1996.
[38] D. Shirmohammadi et al., "Evaluation of transmission network capacity use for wheeling transactions," IEEE Trans. on Power Systems, vol. 4, no. 4, pp. 1405-1413, Oct. 1989.
[39] D. Shirmohammadi et al., "Cost of transmission transactions: an introduction," IEEE Trans. on Power Systems, vol. 6, no. 4, pp. 1546-1560, Nov. 1991.
[40] J. W. Marangon Lima, "Allocation of transmission fixed charges: an overview," IEEE Transactions on Power Systems, vol. 11, no. 3, pp. 1409-1418, August 1996.
[41] M. D. Ilic et al., "Toward regional transmission provision and its pricing in New England," Utility Policy, vol. 6, no. 3, pp. 245-256, September 1997.
[42] J. Pan, Y. Teklu, S. Rahman, and K. Jun, "Review of usage-based transmission cost allocation methods under open access," IEEE Transactions on Power Systems, vol. 15, no. 4, November 2000.
[43] J. Bialek, "Allocation of transmission supplementary charge to real and reactive loads," IEEE Trans. on Power Systems, vol. 13, no. 3, pp. 749-754, Aug. 1998.
[44] E. A. Amorim, S. H. M. Hashimoto, F. G. M. Lima, J.
F. R. Zaro / True Multi-Objective Optimal Power Flow in a Deregulated Environment Using Intelligent Technique (2017)

F. R. Zaro received his B.Sc. degree in electrical engineering from Palestine Polytechnic University, and later his M.Sc. and Ph.D. degrees in electrical engineering from King Fahd University of Petroleum and Minerals (KFUPM). He is currently an assistant professor at Palestine Polytechnic University. His research interests include power quality, artificial intelligence techniques, power system planning and operation, FACTS devices, and smart grids.

Journal of Engineering Research and Technology, Volume 1, Issue 2, June 2014

Empirical Capacitive Deionization ANN Nonparametric Modeling for Desalination Purposes

Adel El Shahat

Abstract— This paper proposes a nonparametric model of the operational conditions of capacitive deionization (CDI) for desalination purposes. The CDI technique is advantageous due to its low energy consumption, low environmental pollution, and low fouling potential. The objective of this paper is to model the effect of different operational conditions (total dissolved solids (TDS) concentration, temperature, and flow rate) on CDI electrosorption efficiency and energy consumption.
The modeling is based on real experimental data from laboratory-scale experiments conducted with a commercial CDI unit with activated carbon electrodes developed by Aqua EWP [1]; these data are used for training and are expressed as algebraic functions connecting the various operational characteristics. This is done by developing four models with the aid of an artificial neural network (ANN). The first expresses the electrosorptive performance of CDI at different solution temperatures, with temperature and time as inputs and TDS as output. The second gives efficiency as output, with temperature, time and TDS as inputs. The third illustrates the effect of flow rate on electrosorption efficiency and energy consumption, with flow rate and time as inputs and TDS as output. The fourth gives energy consumption as output, with operational flow rate, time and TDS as inputs. All characteristics are depicted as 3D figures forming the training data for the ANN models, to show the validity of the proposed technique for interpolation and estimation. The ANN technique is adopted for estimating the various characteristics and generating functions for these experimental data because of its advantages. The ANN models are created with suitable numbers of layers and neurons; they are trained, simulated, checked and verified, and their algebraic equations are derived accurately with excellent regression constants.

Index Terms— Capacitive deionization (CDI), modeling, neural network, estimation.

I Introduction

Clean water is one of the key technological, social, and economic challenges of the 21st century, and it is acknowledged as a basic human right by the United Nations [5]. Currently, techniques such as reverse osmosis, electrodialysis and distillation are applied to desalinate salty water.
Capacitive deionization (CDI) has emerged over the years as a robust, energy-efficient, and cost-effective technology for desalination of water with a low or moderate salt content [1]-[3]. Its simple operational principle is shown in Fig. 1. The system consists of a fluidic inlet and outlet with a channel in between. Within the channel two desalination electrodes are situated, which in our case lie in the same plane. Salt water enters the channel and a potential is applied across the electrodes. The cations and anions are attracted to the oppositely charged electrodes and stored in the electrical double layer. After the electrodes are saturated, the system is regenerated [2], [4]. This technique is particularly interesting for portable desalination units.

Figure 1 Capacitive deionization. A potential is applied across two electrodes; ions are attracted to the oppositely charged electrode and fresh water exits the system. After the electrodes are saturated, the system is regenerated.

CDI operates at a relatively low electrical voltage for the removal of ions and does not produce any secondary regeneration wastes [4], [6]. In addition, CDI does not require pressure-driven membranes or high-pressure pumps, so it avoids the scaling problems that occur with conventional membrane-based desalination technologies [7], [8]. Regeneration of the electrodes is achieved by applying a reverse potential to release the adsorbed ions into the waste stream [9]. The electrosorptive performance of modified activated carbon cloth as CDI electrodes has also been investigated [10]. Several papers have examined the effect of operational conditions on CDI efficiency and energy consumption [11]-[14], and some research has also relied on artificial neural networks for modeling desalination techniques [15].
Response surface methodology (RSM) and artificial neural networks (ANN) have been used to develop predictive models for simulation and optimization of the reverse osmosis (RO) desalination process [16]. A solar-powered membrane distillation system has been used to develop an optimizing control strategy using an ANN model of the system based on experimental data under various operating conditions [17]. A comparative study of the energy consumption of capacitive deionization has been carried out [18]. A nonlinear inverse model control strategy based on neural networks has been proposed for desalination plants to handle complex and nonlinear process relationships [19]. Many lessons drawn from these previous works are applied here.

————————————————
Adel El Shahat is with the Department of Electrical and Computer Engineering at the University of Illinois at Chicago (UIC) as a visiting assistant professor, and is an assistant professor in the Engineering Science Department, Faculty of Petroleum and Mining Engineering, Suez University, Egypt.

Adel El Shahat / Empirical Capacitive Deionization ANN Nonparametric Modeling for Desalination Purposes (2014)

This work addresses the nonparametric modeling of capacitive deionization (CDI) operational conditions. The model investigates the effects of operational conditions such as total dissolved solids (TDS) concentration, temperature, and flow rate on CDI electrosorption efficiency and energy consumption. The modeling is based on several electrosorption experiments conducted with a commercial CDI unit (Aqua EWP) at different flow rates, feed solution TDS concentrations and solution temperatures, as reported in [1]. These real experimental data are used as training data and expressed as algebraic functions connecting the various operational characteristics. Four ANN models are developed to predict and estimate the performance within the range of the training data, for both measured and unmeasured values.
The 1st model expresses the electrosorptive performance of CDI at different solution temperatures, taking temperature and time as inputs and TDS as output. The 2nd model gives efficiency as output, with temperature, time and TDS as inputs. The 3rd model captures the effect of flow rate on electrosorption efficiency and energy consumption, with flow rate and time as inputs and TDS as output. The 4th model gives energy consumption as output, with operational flow rate, time and TDS as inputs. All characteristics are depicted in 3D figures forming the training data for the ANN models. The ANN technique is adopted for estimating the various characteristics and generating functions for these experimental data because of its advantages. ANN models using the back-propagation (BP) technique are created with suitable numbers of layers and neurons; they are trained, simulated, checked and verified, and their algebraic equations are derived accurately with excellent regression constants. The ANN models' algebraic equations are given for direct use. The models are validated by comparing the real data with the corresponding simulated data from the ANN models, with acceptably small error between targets and outputs.

II The Commercial CDI Pilot Plant [1]

The commercial CDI unit used in this research [1] was developed by Aqua EWP, USA. Fig. 2 shows a schematic diagram of the unit. The influent water is pumped from a storage tank through a pre-filter and then passes over a flow weir, which measures the influent flow, into two carbon-electrode cells connected in series. The electrodes within the cells are charged by an applied DC potential in the range of 1 to 1.5 VDC. The whole operational cycle of the CDI takes 2.5 minutes. The cycle consists of two main steps, the regeneration mode step and the purification mode step [1].
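The 2.5-minute cycle lends itself to a simple timed state machine. The sketch below encodes the step durations and valve/power states reported in [1] and detailed in the following paragraph; the state of SV2 during purification is an assumption, since the text only states that SV0 and SV1 are open then.

```python
# Sketch of one 150-second CDI operating cycle, using the step timings and
# valve states reported in [1]. SV2 "closed" during purification is an
# assumption (the text only says SV0 and SV1 are open in that step).
CYCLE = [
    # (step name, duration in seconds, valve/power states)
    ("regeneration_idle",  30, {"SV0": "closed", "SV1": "closed", "SV2": "closed", "power": "off"}),
    ("regeneration_flush", 30, {"SV0": "open",   "SV1": "closed", "SV2": "open",   "power": "reverse 1.5 VDC"}),
    ("purification",       90, {"SV0": "open",   "SV1": "open",   "SV2": "closed", "power": "on"}),
]

CYCLE_LENGTH = sum(duration for _, duration, _ in CYCLE)  # 150 s = 2.5 min

def step_at(t_seconds):
    """Return (step name, states) for a time offset into the repeating cycle."""
    t = t_seconds % CYCLE_LENGTH
    for name, duration, states in CYCLE:
        if t < duration:
            return name, states
        t -= duration
```

For example, `step_at(100)` falls 40 s into the 90-second purification step, and `step_at(160)` wraps around into the next cycle's idle regeneration step.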
The regeneration step commences with 30 seconds during which the effluent solenoid valve (SV1) and the influent solenoid valve (SV0) are closed and the supplied power is off, followed by another 30 seconds during which the effluent waste solenoid valve (SV2) and the influent solenoid valve (SV0) are opened and the power is turned on with opposite polarity at 1.5 VDC. After 60 seconds the regeneration step is finished. The purification step starts immediately afterwards and takes 90 seconds to purify the feed solution; here the influent solenoid valve (SV0) and the effluent solenoid valve (SV1) are opened. The CDI contains a citric acid cleaning tank for cleaning the electrodes when the purification does not meet the standards. A heater maintains the required temperature of the feed solution [1].

Figure 2 CDI schematic diagram

Fig. 3 shows a schematic diagram of the construction of the CDI cells. The electrodes are mainly composed of activated carbon with an organic binder; each cell contains 1354 grams of activated carbon. The electrodes within the cell consist of a conductive surface sandwiched between layers of activated carbon, and a nonconductive spacer material separates the plates from each other. The electrodes are connected to the two sides of a DC power supply by connecting leads [1].

Figure 3 CDI construction schematic diagram

III Experimental Data

A series of laboratory experiments was conducted to investigate the effect of the operational conditions (TDS concentration, flow rate, temperature) on CDI electrosorption efficiency and energy consumption [1].

A Electrosorptive Performance of CDI at Different Temperatures

Fig. 4 shows the purified-stream TDS concentration of the CDI unit at different temperatures. The purified-stream TDS concentration increases gradually with increasing solution temperature.
These results from [1] are consistent with those reported by Xu et al. [7] for treating brackish produced water from a natural gas operation site.

Figure 4 TDS at different feed solution temperatures [1], [7]

Fig. 5 shows the relationship between electrosorption removal efficiency and solution temperature [1]. The electrosorption removal efficiency is inversely proportional to the solution temperature. The higher removal efficiency at low temperature may be caused by the transition from hydrophobic to hydrophilic behavior on the surface of the activated carbon [20].

Figure 5 Removal efficiency as a function of solution temperature [1], [20]

As Figure 5 shows, the removal efficiency changes from 90% to 80% as the temperature changes from 20 to 50 °C. The change in removal efficiency is not very significant compared with the change in temperature, which is one of the advantages of this experimental apparatus.

B Effect of Flow Rate on Electrosorption Efficiency and Energy Consumption

Fig. 6 depicts the variation of the purified-stream TDS concentration with different flow rates [1]. The TDS concentration increases with increasing flow rate. These results are consistent with those reported by Li et al. [21].

Figure 6 Purified-stream TDS at different flow rates [1], [21]

Fig. 7 shows the effect of different operational flow rates on the energy consumption: as the flow rate increases, the energy consumption decreases [1], [21].

Figure 7 Effect of operational flow rate on energy consumption [1], [21]

The preceding figures are used as training (learning) data for the ANN models, as shown later.
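Before such curves can serve as training data, each variable is scaled; the normalized forms used in Section IV, x_n = (x − center) / scale, suggest standardization by mean and spread. A generic sketch (with hypothetical temperature samples; the paper's actual constants appear in its eqs. (1)-(2) and (11)-(12)):

```python
import statistics

def standardize(values):
    """Scale raw samples to zero mean and unit spread, returning the scaled
    list plus the (center, scale) pair needed to normalize new inputs.
    This mirrors the x_n = (x - center) / scale form used in Section IV."""
    center = statistics.mean(values)
    scale = statistics.stdev(values)
    return [(v - center) / scale for v in values], (center, scale)

# Hypothetical temperature samples (deg C) spanning the experimental range
temps = [20, 25, 30, 35, 40, 45, 50]
temps_n, (center, scale) = standardize(temps)
```

The same `(center, scale)` pair is then reused to normalize any new input before it is fed to the trained network, and inverted to unnormalize the network's output.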
IV Artificial Neural Network Modeling

The artificial neural network (ANN), with the back-propagation technique [22]-[29], is used to implement four models: the first expresses the electrosorptive performance of CDI at different solution temperatures, with temperature and time as inputs and TDS as output; the second gives efficiency as output, with temperature, time and TDS as inputs; the third illustrates the effect of flow rate on electrosorption efficiency and energy consumption, with flow rate and time as inputs and TDS as output; and the fourth gives energy consumption as output, with operational flow rate, time and TDS as inputs. These models aid in parameter and characteristic estimation, exploiting the ability of neural networks to interpolate between points and between curves. Finally, the algebraic equations are derived so that the models can be used without retraining the network each time.

A CDI Temperature Electrosorptive ANN Model

This model takes temperature and time as inputs and TDS as output, as shown in Fig. 8 (hidden layer: 19 logsig neurons; output layer: 1 purelin neuron).

Figure 8 Schematic diagram of the 1st ANN model

The model's algebraic equations are derived as follows:

Temperature_n = (Temperature − 35) / 10.0722   (1)

Time_n = (Time − 13.5288) / 7.3753   (2)

These equations give the normalized inputs (the subscript n denotes a normalized variable) for the ANN model, and the following equations lead to the required output equation:

e = W(1) [Temperature_n; Time_n] + b(1)   (3)

f_i = 1 / (1 + exp(−e_i)),  i = 1, …, 19   (4)

TDS_n = Σ_{i=1..19} w_i(2) f_i + b(2)   (5)

where W(1) and b(1) are the trained hidden-layer weights and biases, and w(2) and b(2) are the trained output-layer weights and bias. The unnormalized output is

TDS = 38.2848 · TDS_n + 168.0582   (6)

Figure 9 Output vs. target for the 1st model
Figure 10 Training state and error for the 1st model
Figure 11 Regression for the 1st model (R = 1)

B Electrosorption Removal Efficiency ANN Model

This model takes temperature, time and TDS as inputs and gives the electrosorption efficiency as output, as shown in Fig. 12 (hidden layer: 3 logsig neurons; output layer: 1 purelin neuron).

Figure 12 Schematic diagram of the 2nd ANN model

The model's algebraic equations are derived as follows, using the previous normalization equations for the inputs:

e = W(1) [Temperature_n; Time_n; TDS_n] + b(1)   (7)

f_i = 1 / (1 + exp(−e_i)),  i = 1, 2, 3   (8)

Efficiency_n = 3.2296 f_1 + 0.0157 f_2 + 3.7335 f_3 + 1.2829   (9)

The unnormalized output is

Efficiency = 0.0366 · Efficiency_n + 0.8386   (10)

Figure 13 Output vs. target for the 2nd model
Figure 14 Training state and error for the 2nd model
Figure 15 Regression for the 2nd model (R = 0.99983)

The data are depicted in the following 3D figures showing the inputs and targets (outputs) of the first two models, matching the function of the ANN technique and covering all the data as a mapping surface.
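The forward pass shared by all four models (normalize, logsig hidden layer, purelin output, unnormalize) can be written down directly. The sketch below uses only the constants of eqs. (1), (2) and (6); the hidden- and output-layer weights are placeholders (zeros here, for illustration), standing in for the trained matrices:

```python
import math

def logsig(x):
    # MATLAB-style log-sigmoid activation, eq. (4)
    return 1.0 / (1.0 + math.exp(-x))

def model1_tds(temperature, time_min, W1, b1, w2, b2):
    """TDS prediction of the 1st ANN model, following eqs. (1)-(6).
    W1 is a 19x2 weight matrix (list of rows), b1 and w2 are length-19 lists,
    b2 is a scalar; here they are placeholders for the trained values."""
    x = [(temperature - 35.0) / 10.0722,   # eq. (1)
         (time_min - 13.5288) / 7.3753]    # eq. (2)
    e = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(W1, b1)]  # eq. (3)
    f = [logsig(ei) for ei in e]                                             # eq. (4)
    tds_n = sum(w * fi for w, fi in zip(w2, f)) + b2                         # eq. (5)
    return 38.2848 * tds_n + 168.0582                                        # eq. (6)

# Placeholder (zero) weights: the network then outputs the unnormalization
# offset of eq. (6), i.e. the baseline TDS level.
W1 = [[0.0, 0.0] for _ in range(19)]
b1 = [0.0] * 19
w2 = [0.0] * 19
print(model1_tds(35.0, 13.5288, W1, b1, w2, 0.0))  # -> 168.0582
```

Substituting the paper's trained weights for the placeholders reproduces the published model; the other three models differ only in input list, neuron count, and scaling constants.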
Figure 16 3D relation of TDS vs. temperature and time
Figure 17 3D relation of efficiency vs. temperature and time
Figure 18 3D relation of efficiency vs. TDS and time

C CDI Flow-Rate Electrosorptive Performance ANN Model

This model takes flow rate and time as inputs and TDS as output, as shown in Fig. 19 (hidden layer: 19 logsig neurons; output layer: 1 purelin neuron).

Figure 19 Schematic diagram of the 3rd ANN model

The model's algebraic equations are derived as follows:

FlowRate_n = (FlowRate − 2.75) / 1.1529   (11)

Time_n = (Time − 13.2811) / 7.2835   (12)

Eqs. (11) and (12) give the normalized inputs. Then

e = W(1) [FlowRate_n; Time_n] + b(1)   (13)

f_i = 1 / (1 + exp(−e_i)),  i = 1, …, 19   (14)

TDS_n = Σ_{i=1..19} w_i(2) f_i + b(2)   (15)

where W(1), b(1), w(2) and b(2) are the trained weights and biases of this model. The unnormalized output is

TDS = 90.3743 · TDS_n + 181.3325   (16)

Figure 20 Output vs. target for the 3rd model
Figure 21 Training state and error for the 3rd model
Figure 22 Regression for the 3rd model (R = 0.99999)

D Energy Consumption ANN Model

This model takes operational flow rate, time and TDS as inputs and gives energy consumption as output, as shown in Fig. 23 (hidden layer: 3 logsig neurons; output layer: 1 purelin neuron).

Figure 23 Schematic diagram of the 4th ANN model

The model's algebraic equations are derived as follows, using the previous normalization equations for the inputs:
e = W(1) [FlowRate_n; Time_n; TDS_n] + b(1)   (17)

f_i = 1 / (1 + exp(−e_i)),  i = 1, 2, 3   (18)

EnergyConsumption_n = 1.1306 f_1 + 2.1270 f_2 + 0.3765 f_3 + 1.0036   (19)

The unnormalized output is

EnergyConsumption = 1.5219 · EnergyConsumption_n + 3.26   (20)

Figure 24 Output vs. target for the 4th model
Figure 25 Training state and error for the 4th model
Figure 26 Regression for the 4th model (R = 0.9998)

The data are depicted in the following 3D figures showing the inputs and targets (outputs) of the 3rd and 4th models, matching the function of the ANN technique and covering all the data as a mapping surface.

Figure 27 3D relation of TDS vs. flow rate and time
Figure 28 3D relation of energy consumption vs. flow rate and time
Figure 29 3D relation of energy consumption vs. TDS and time

V Conclusion

This paper gives a perspective on CDI technology as a novel electrosorption process for water desalination. CDI has many advantages owing to its low energy consumption, low environmental pollution, and low fouling potential. This work addresses the nonparametric modeling of CDI operational conditions. The modeling is based on several electrosorption experiments conducted with a commercial CDI unit (Aqua EWP) at different flow rates, feed solution TDS concentrations and solution temperatures, as reported in [1].
Four ANN models are efficiently developed to predict and estimate the performance within the range of the training data, for both measured and unmeasured values. The 1st model takes temperature and time as inputs and TDS as output; the 2nd gives efficiency as output, with temperature, time and TDS as inputs; the 3rd takes flow rate and time as inputs and TDS as output; and the 4th gives energy consumption as output, with operational flow rate, time and TDS as inputs. All characteristics are depicted as 3D figures. ANN models using the back-propagation (BP) technique are created with suitable numbers of layers and neurons, and the models' algebraic equations are derived for direct use. The results obtained are sufficiently accurate to apply the models with little computational effort. The models are checked and verified by comparing actual and ANN-predicted values, with small errors and excellent regression factors between 0.99983 and 1, implying high accuracy. Artificial neural networks (ANNs) can handle complex and nonlinear process relationships and are robust to noisy data. The networks are trained on nearly 70% of the extracted training data and then checked on the remaining 30% together with the 70%, i.e., over the whole 100% range, in the form of comparisons. The data used include not only the plotted points but also values from between the shown points, for better coverage. The ANN is also used over the whole ranges and between curves (which are not known a priori), as in the 3D figures shown for all parameters and characteristics.

References

[1] Mohamed Mossad and Linda Zou, "Effects of operational conditions on the electrosorption efficiencies of capacitive deionization", Chemeca 2011: Engineering a Better World, Sydney, NSW, Australia, 18-21 September 2011, Engineers Australia, pp. 1648-1660.
[2] S. Porada, R. Zhao, A. van der Wal, V. Presser, P. M. Biesheuvel, "Review on the science and technology of water desalination by capacitive deionization", Progress in Materials Science, 58, 2013, pp. 1388-1442.
[3] Li H., Pan L., Lu T., Zhan Y., Nie C., Sun Z., "A comparative study on electrosorptive behavior of carbon nanotubes and graphene for capacitive deionization", J. Electroanal. Chem., 2011, 653:40-4.
[4] Oren Y., "Capacitive deionization for desalination and water treatment – past, present and future", Desalination, 2008, 228(1-3), pp. 10-29.
[5] http://www.un.org/news/press/docs/2010/ga10967.doc.htm
[6] Anderson, M. A., Cudero, A. L., Palma, J., "Capacitive deionization as an electrochemical means of saving energy and delivering clean water. Comparison to present desalination practices: will it compete?", Electrochimica Acta, 2010, 55, pp. 3845-3856.
[7] Xu, P., Drewes, J. E., Heil, D., Wang, G., "Treatment of brackish produced water using carbon aerogel based capacitive deionization technology", Water Research, 2008, 42, pp. 2605-2617.
[8] Seo, S. J., Jeon, H., Lee, J. K., Kim, G. Y., Park, D., Nojima, H., Lee, J., Moon, S. H., "Investigation on removal of hardness ions by capacitive deionization (CDI) for water softening applications", Water Research, 2010, 44, pp. 2267-2275.
[9] Broséus, R., Cigana, J., Barbeau, B., Daines-Martinez, C., Suty, H., "Removal of total dissolved solids, nitrates and ammonium ions from drinking water using charge-barrier capacitive deionization", Desalination, 2009, 249, pp. 217-223.
[10] Ryoo M. W., Kim J. H., Seo G., "Role of titania incorporated on activated carbon cloth for capacitive deionization of NaCl solution", J. Colloid Interface Sci., 2003, 264(2), pp. 414-419.
[11] Zhao R., Biesheuvel P. M., van der Wal A., "Energy consumption and constant current operation in membrane capacitive deionization", Energy Environ. Sci., 2012, 5:9520-7.
[12] Demirer O. N., Naylor R. M., Rios Perez C. A., Wilkes E., Hidrovo C., "Energetic performance optimization of a capacitive deionization system operating with transient cycles and brackish water", Desalination, 2013, 314:130-8.
[13] Rica R. A., Ziano R., Salerno D., Mantegazza F., Bazant M. Z., Brogioli D., "Electro-diffusion of ions in porous electrodes for capacitive extraction of renewable energy from salinity differences", Electrochim. Acta, 2013, 92:304-14.
[14] Kim Y.-J., Choi J.-H., "Improvement of desalination efficiency in capacitive deionization using a carbon electrode coated with an ion-exchange polymer", Water Res., 2010, 44:990-6.
[15] Al-Zoubi, H., Hilal, N., Darwish, N. A., Mohammad, A. W., "Rejection and modelling of sulphate and potassium salts by nanofiltration membranes: neural network and Spiegler-Kedem model", Desalination, 2007, 206, pp. 42-60.
[16] M. Khayet, C. Cojocaru, M. Essalhi, "Artificial neural network modeling and response surface methodology of desalination by reverse osmosis", Journal of Membrane Science, Vol. 368, Issues 1-2, 15 February 2011, pp. 202-214.
[17] R. Porrazzo, A. Cipollina, M. Galluzzo, G. Micale, "A neural network-based optimizing control system for a seawater-desalination solar-powered membrane distillation unit", Computers & Chemical Engineering, Vol. 54, 11 July 2013, pp. 79-96.
[18] Zhao, Y., Wang, Y., Wang, R., Wu, Y., Xu, S., Wang, J., "Performance comparison and energy consumption analysis of capacitive deionization and membrane capacitive deionization processes", Desalination, Vol. 324, 2 September 2013, pp. 127-133.
[19] Shokoufe Tayyebi, Maryam Alishiri, "The control of MSF desalination plants based on inverse model control by neural network", Desalination, Vol. 333, Issue 1, 15 January 2014, pp. 92-100.
[20] Wang, H. J., Xi, X. K., Kleinhammes, A., Wu, Y., "Temperature-induced hydrophobic-hydrophilic transition observed by water adsorption", Science, 2008, 322, pp. 80-83.
[21] Li, H., Zou, L., Pan, L., Sun, Z., "Using graphene nano-flakes as electrodes to remove ferric ions by capacitive deionization", Separation and Purification Technology, 2010, 75, pp. 8-14.
[22] Adel El Shahat, "PV module optimum operation modeling", Journal of Power Technologies, Vol. 94, No. 1, 2014.
[23] Adel El Shahat, "PM Synchronous Machine New Aspects: Modeling, Control, and Design", ISBN 978-3-659-25359-1, LAP Lambert Academic Publishing, Germany, 2012.
[24] Adel El Shahat, "DC-DC converter duty cycle ANN estimation for DG applications", Journal of Electrical Systems (JES), ISSN 1112-5209, Vol. 9, Issue 1, March 2013.
[25] Adel El Shahat and Hamed El Shewy, "High fundamental frequency PM synchronous motor design neural regression function", Journal of Electrical Engineering, ISSN 1582-4594, Article 10.1.14, Edition 1, Vol. 10, March 2010.
[26] A. El Shahat and H. El Shewy, "PM synchronous motor control strategies with their neural network regression functions", Journal of Electrical Systems (JES), ISSN 1112-5209, Vol. 5, Issue 4, Dec. 2009.
[27] Adel El Shahat, "Photovoltaic power system simulation for micro-grid distribution generation", 8th International Conference on Electrical Engineering, 29-31 May 2012, Military Technical College, Egypt, EE137, ICEENG 2012.
[28] A. El Shahat, H. El Shewy, "Neural unit for PM synchronous machine performance improvement used for renewable energy", paper ref. 910, Global Conference on Renewable and Energy Efficiency for Desert Regions (GCREEDER 2009), Amman, Jordan.
[29] Adel El Shahat, "PV cell module modeling & ANN simulation for smart grid applications", Journal of Theoretical and Applied Information Technology, E-ISSN 1817-3195, ISSN 1992-8645, Vol. 16, No. 1, June 2010, pp. 9-20.

Dr.
Adel El Shahat is currently a visiting assistant professor in the Department of Electrical and Computer Engineering at the University of Illinois at Chicago (UIC), Laboratory for Energy and Switching-Electronics Systems (LESES) (2014-present), and an assistant professor in the Engineering Science Department, Faculty of Petroleum and Mining Engineering, Suez University, Egypt (2011-2014). Previously, he was a visiting researcher at the ECE Department, Mechatronics-Green Energy Lab, The Ohio State University, USA (2008-Sept. 2010). His interests are photovoltaic power, wind energy, electric machines, artificial intelligence, renewable energy, power systems, control systems, power electronics, and smart grids. He was an associate lecturer (2004-2008) in the Faculty of Petroleum & Mining Engineering, Suez Canal University, Egypt, and a teaching assistant (2000-2004) in the same faculty. He received his Ph.D. in electrical engineering (2011) through joint supervision between The Ohio State University, USA and Zagazig University, Egypt; his M.Sc. in electrical engineering in 2004; and his B.Sc. in electrical engineering in 1999 from the Faculty of Engineering, Zagazig University. He has many publications, including international journal papers, refereed conference papers, books, book chapters, and abstracts or posters. He is a member of IEEE, IEEE Computer Society, ASEE, IAENG, IACSIT, EES, WASET and ARISE, and is a member of the editorial team and a reviewer for six international journals. His honors and recognitions include awards from The Ohio State University, USA (2009); Suez Canal University honors with the university medal (2012 and 2006); the merit top-ten students award from the Arab Republic of Egypt (2000) and from EES (1999), Egypt; and student distinguished awards (1994-1999) from Helwan and Zagazig Universities, Egypt.

Journal of Engineering Research and Technology, Volume 1, Issue 2, June 2014

Antireflection Coatings Combining Silicon Nitride with Silicon Nanoparticles

M. Beye 1, F. Ndiaye 2, and A. S.
Maiga2

1 Department of Applied Physics, Gaston Berger University of Saint Louis, Senegal.
2 Laboratory of Electronics, Data Processing, Telecommunications and Renewable Energies, Saint Louis, BP 234, Senegal

Abstract — In this work, solar cell antireflection coatings (ARC) combining a silicon nitride layer with silicon nanoparticles are investigated. Two configurations are considered: silicon nanoparticles placed on the surface of a SiNx layer, and nanoparticles partially embedded in a SiNx layer. Numerical results show that this ARC concept performs better in the configuration where the silicon nanoparticles are placed on the dielectric layer. A silicon nitride refractive index of 2.3 is found to be most advantageous, yielding a low weighted reflectance of 2.1% in the wavelength range 300–1100 nm. The reflectance is also stable for oblique incidence at angles below 40°. With this new ARC concept, a relative increase of 8.6% in the short-circuit current density over a solar cell with the standard Si3N4 coating can be expected.

Index Terms — Silicon nitride, silicon nanoparticles, antireflection coatings

I. Introduction

Silicon nitride (SiNx) layers are widely used in photovoltaic (PV) applications owing to their attractive optical and electrical properties [1]. Planar layers of this and other dielectric materials, generally used as antireflection coatings (ARCs) in solar cells and other optoelectronic devices, are often optimized for a single wavelength; they therefore reduce the reflection from a surface only in a narrow wavelength range. It has been shown that an optimized SiNx double layer offers little advantage over the standard Si3N4 single layer [2, 3]. Scattering from metal particles has been demonstrated to couple light efficiently into waveguide modes of thin semiconductor layers through their surface plasmon resonances [4, 5]. Metal and dielectric nanoparticles have also been widely used to enhance light absorption in thin-film solar cells [6–12].
Metal nanoparticles embedded in a dielectric layer [13, 14] and patterned on it or directly onto a substrate [15, 16] have been found to reduce the reflection from a solar cell active layer surface over a broad spectral range. Recently, a new approach not involving metals has been proposed, in which a two-dimensional periodic array of cylindrical silicon nanoparticles on a Si3N4 spacer layer is used; simulations and experimental results showed a weighted reflectance lower than 2% over the spectral range 450–900 nm for angles of incidence up to ±60° [17]. In this work, we consider a solar cell antireflection coating combining a silicon nitride layer with silicon nanoparticles in two configurations: silicon nanoparticles on a SiNx layer surface, and silicon nanoparticles partially embedded in a SiNx layer. The effects of the dielectric layer refractive index and of the incorporation of the silicon nanoparticles into the dielectric layer on the performance of this ARC concept are investigated.

II. Models Description and Optimization Procedure

We consider a silicon solar cell surface of length L and width L, on which a regular array of spherical silicon nanoparticles is arranged, as schematically shown in Fig. 1. In this figure, r is the radius of a particle and p the period of the two-dimensional array. The two configurations considered in this work are shown in Fig. 2.

Figure 1: Schematic representation of a regular array of spherical silicon nanoparticles.
Figure 2: Schematic representations of the ARCs: (a) silicon nanoparticles on a SiNx layer surface; (b) silicon nanoparticles embedded in a SiNx layer.

For each configuration, a modified transfer matrix method [18, 19, 20] is used to calculate the reflectance from the silicon surface. The ARC structures represented in Fig.
2 can be considered as composed of two layers (and therefore three interfaces) on a Si substrate (Fig. 3). The amplitude of the uniform electric field, E, at any position in a layer can be divided into two components: the transmitted component, E+, and the reflected component, E−. The field amplitudes on the two sides of an interface are related by the so-called transmission or refraction matrix of the interface, I_{k−1,k}, as follows [19]:

$$\begin{pmatrix} E^{+}_{k-1} \\ E^{-}_{k-1} \end{pmatrix} = I_{k-1,k} \begin{pmatrix} E'^{+}_{k} \\ E'^{-}_{k} \end{pmatrix} \qquad (1)$$

where

$$I_{k-1,k} = \frac{1}{t_{k-1,k}} \begin{pmatrix} 1 & r_{k-1,k} \\ r_{k-1,k} & 1 \end{pmatrix} \qquad (2)$$

The Fresnel coefficients of transmission, t_{k−1,k}, and reflection, r_{k−1,k}, of the interface are defined as follows [20]:

For the s polarization:

$$t_{k-1,k} = \frac{2 n_{k-1} q_{k-1}}{n_{k-1} q_{k-1} + n_k q_k} \qquad (3)$$

$$r_{k-1,k} = \frac{n_{k-1} q_{k-1} - n_k q_k}{n_{k-1} q_{k-1} + n_k q_k} \qquad (4)$$

For the p polarization:

$$t_{k-1,k} = \frac{2 n_{k-1} q_{k-1}}{n_k q_{k-1} + n_{k-1} q_k} \qquad (5)$$

$$r_{k-1,k} = \frac{n_k q_{k-1} - n_{k-1} q_k}{n_k q_{k-1} + n_{k-1} q_k} \qquad (6)$$

where n_k is the complex refractive index of the k-th layer and

$$q_k = \sqrt{1 - \left( \frac{n_0 \sin\theta_0}{n_k} \right)^2} \qquad (7)$$

θ0 being the angle of incidence. The field amplitudes at the top and bottom sides of the k-th layer are related by the so-called propagation matrix of the layer, P_k, as follows [18]:

$$\begin{pmatrix} E'^{+}_{k} \\ E'^{-}_{k} \end{pmatrix} = P_k \begin{pmatrix} E^{+}_{k} \\ E^{-}_{k} \end{pmatrix} \qquad (8)$$

where

$$P_k = \begin{pmatrix} \exp(-i\delta_k) & 0 \\ 0 & \exp(i\delta_k) \end{pmatrix} \qquad (9)$$

The phase shift, δ_k, due to the wave passing through the k-th layer is given by

$$\delta_k = \frac{2\pi}{\lambda}\, n_k d_k \cos\theta_k \qquad (10)$$

where λ is the wavelength of the incident light, n_k is again the complex refractive index of the k-th layer, d_k its thickness, and θ_k the propagation angle following Snell's law (n0 sinθ0 = n1 sinθ1 = n2 sinθ2 = n3 sinθ3). The above matrix transformations can be applied to the 2 layers and 3 interfaces in Fig.
3, resulting in:

$$\begin{pmatrix} E^{+}_{0} \\ E^{-}_{0} \end{pmatrix} = I_{01} P_1 I_{12} P_2 I_{23} \begin{pmatrix} E'^{+}_{3} \\ E'^{-}_{3} \end{pmatrix} = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} \begin{pmatrix} E'^{+}_{3} \\ E'^{-}_{3} \end{pmatrix} \qquad (11)$$

The product matrix resulting from the above procedure is a 2 × 2 matrix called the system transfer matrix, T. The complex reflection coefficient of the multilayer can be calculated from the elements of the system transfer matrix as follows [18, 19]:

$$r = \left. \frac{E^{-}_{0}}{E^{+}_{0}} \right|_{E'^{-}_{3} = 0} = \frac{T_{21}}{T_{11}} \qquad (12)$$

from which the reflectance, R, is obtained as |r|². Clearly, to calculate R we need to know the complex refractive index and the thickness of each layer. For the ARC configuration shown in Fig. 2a, the first layer can be considered a composite material consisting of silicon nanoparticles in air (we assume n_air = n0 ≈ 1) with a thickness d1 equal to the diameter of the nanoparticles, and the second is a homogeneous SiNx spacer layer of thickness d2. For the configuration shown in Fig. 2b, two layers of composite material can be considered, with the host media consisting of air and SiNx, respectively. The complex refractive index of a material is related to its permittivity by the well-known formula ε = n²; therefore, knowing the permittivity, we can calculate the complex refractive index. The Maxwell-Garnett effective medium approximation is used to calculate the effective permittivity of the composite materials formed by silicon nanoparticles in air and in SiNx. This effective permittivity is given by [21]:

$$\varepsilon_{eff} = \varepsilon_b \, \frac{\varepsilon_i (1 + 2f) + 2\varepsilon_b (1 - f)}{\varepsilon_i (1 - f) + \varepsilon_b (2 + f)} \qquad (13)$$

where ε_b is the permittivity of the host (base) material, ε_i the permittivity of the particles' material, and f the volume fraction of the particles in the host medium.

Figure 3: Notation of the electric field amplitudes in the two-layer ARC structure. The prime is used for waves on the down side of an interface.
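As an illustration, the transfer-matrix recipe of Eqs. (1)–(12) can be sketched in a few lines of Python. The refractive indices used below (3.94 and 4 for the substrate, 2.0 for the layer) are illustrative, dispersionless, real-valued assumptions, not the tabulated optical data of [22, 23]:

```python
import numpy as np

def interface_matrix(n1, n2, q1, q2, pol="s"):
    """Interface (refraction) matrix I_{k-1,k} of Eqs. (2)-(6); q = cos(theta)."""
    if pol == "s":
        t = 2 * n1 * q1 / (n1 * q1 + n2 * q2)
        r = (n1 * q1 - n2 * q2) / (n1 * q1 + n2 * q2)
    else:  # p polarization
        t = 2 * n1 * q1 / (n2 * q1 + n1 * q2)
        r = (n2 * q1 - n1 * q2) / (n2 * q1 + n1 * q2)
    return np.array([[1, r], [r, 1]], dtype=complex) / t

def propagation_matrix(n, d, lam, q):
    """Propagation matrix P_k with phase delta = 2*pi*n*d*q/lambda, Eqs. (9)-(10)."""
    delta = 2 * np.pi * n * d * q / lam
    return np.array([[np.exp(-1j * delta), 0], [0, np.exp(1j * delta)]])

def reflectance(ns, ds, lam, theta0=0.0, pol="s"):
    """R = |T21/T11|^2 for a stack; ns = [n_ambient, n_layer1, ..., n_substrate]."""
    qs = [np.sqrt(1 - (ns[0] * np.sin(theta0) / n) ** 2 + 0j) for n in ns]  # Eq. (7)
    T = interface_matrix(ns[0], ns[1], qs[0], qs[1], pol)
    for k in range(1, len(ns) - 1):
        T = T @ propagation_matrix(ns[k], ds[k - 1], lam, qs[k])
        T = T @ interface_matrix(ns[k], ns[k + 1], qs[k], qs[k + 1], pol)
    return abs(T[1, 0] / T[0, 0]) ** 2  # Eq. (12)

# Bare substrate (no layers) reduces to the single-interface Fresnel result;
# a quarter-wave layer with n1^2 = n0*n_sub (here 2^2 = 1*4) cancels reflection.
r_bare = reflectance([1.0, 3.94], [], 600.0)
r_quarterwave = reflectance([1.0, 2.0, 4.0], [75.0], 600.0)
```

Wavelength and thickness only need to share the same unit (nanometres here). Extending the chain in `reflectance` to more layers reproduces the I·P·I·P·I product of Eq. (11).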
For a layer of thickness d, its volume can be expressed as V_layer = L × L × d; for simplicity, we take L = 1 cm. The volume of a spherical particle of radius r is V_p = 4πr³/3, and the number of particles on the surface is N_p = (L/p) × (L/p). Thus, the volume fraction of the particles in the corresponding layer is f = (N_p × V_p)/V_layer. Optical data for silicon are taken from [22]. For silicon nitride, the optical constants are calculated using the data provided in the appendix of [23]. The reflectance weighted over the AM1.5G solar spectrum (air mass 1.5 global: terrestrial solar spectral irradiance on a surface of specified orientation under specified atmospheric conditions) is evaluated in the wavelength range 300–1100 nm to take into account the influence of the photon flux and of the solar cell internal quantum efficiency. This parameter is defined as [24]:

$$WR = \frac{\int_{\lambda_{min}}^{\lambda_{max}} F(\lambda)\, IQE(\lambda)\, R(\lambda)\, d\lambda}{\int_{\lambda_{min}}^{\lambda_{max}} F(\lambda)\, IQE(\lambda)\, d\lambda} \qquad (14)$$

where F is the photon flux, IQE the solar cell internal quantum efficiency (defined as the number of electrons collected per photon not reflected), and R the reflectance. The photon flux, F, is related to the irradiance, I, by the following expression [25]:

$$F(\lambda) = \frac{I(\lambda)\, \lambda}{h\, c} \qquad (15)$$

where λ is the wavelength of the incident light, h Planck's constant, and c the speed of light. Irradiance data are taken from [26]. The short-circuit current density (Jsc) is also evaluated, as it is an important solar cell parameter. For a single-junction solar cell, it is given by [23]:

$$J_{sc} = q \int_{\lambda_{min}}^{\lambda_{max}} F(\lambda)\, IQE(\lambda)\, A(\lambda)\, d\lambda \qquad (16)$$

where A(λ) is the absorption by silicon. We suppose in this work that IQE = 1, meaning that all photons that are not reflected are collected. Thus, in Equation (16), A(λ) can be replaced by [1 − R(λ)].
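The volume-fraction bookkeeping and the Maxwell-Garnett mixing rule of Eq. (13) are simple enough to sketch directly; the silicon permittivity value below is an illustrative, dispersionless assumption:

```python
import math

def volume_fraction(r, p, d):
    """f = N_p * V_p / V_layer for (L/p)^2 spheres of radius r in a layer of thickness d."""
    v_particle = 4.0 / 3.0 * math.pi * r**3
    return v_particle / (p**2 * d)   # the L*L surface area cancels out

def maxwell_garnett(eps_b, eps_i, f):
    """Effective permittivity of inclusions eps_i in a host eps_b, Eq. (13)."""
    return eps_b * (eps_i * (1 + 2 * f) + 2 * eps_b * (1 - f)) / (
        eps_i * (1 - f) + eps_b * (2 + f))

# 125 nm diameter spheres on an 840 nm pitch: the "particle layer" is mostly air,
# so its effective index stays close to 1
f = volume_fraction(r=62.5, p=840.0, d=125.0)     # ~0.012
eps_eff = maxwell_garnett(1.0, 12.1, f)           # eps_i ~ 12.1 for Si: illustrative
n_eff = math.sqrt(eps_eff)
```

Note the limiting behavior: f = 0 returns the host permittivity and f = 1 the inclusion permittivity, which is a quick way to sanity-check Eq. (13).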
III. Silicon Nanoparticles on a SiNx Layer Surface

The weighted reflectance under normal incidence from the silicon surface in the configuration shown in Fig. 2a is about 17.3% for an array of spherical particles of diameter 125 nm, spaced by 450 nm, and a Si3N4 layer thickness of 60 nm with a refractive index of 2.0. These values are the optimal ones found in [15] for cylindrical nanoparticles. This high reflectance suggests that the dimensional parameters have to be re-optimized, since the shape of the particles has changed. The dependences of the weighted reflectance (RW) on the period (p) of the regular array and on the spacer layer thickness (d2) are shown in Fig. 4; the parameters used in the calculations are listed in Table 1. It can be seen that there is an optimum in both the period and the thickness. It is clear from the insets of Figs. 4a-b that these optimal values are about 840 nm for the period and 66 nm for the spacer layer thickness. It has been argued that the optimum in the array period arises because a smaller period results in a strong interparticle coupling effect, while a larger period corresponds to a smaller surface coverage, reducing the overall light scattering. The optimum in the spacer layer thickness exists because thicker spacer layers reduce the near-field coupling, while thinner layers cause a shift of the plasmon resonance [16].

Figure 4: Dependences of the weighted reflectance on the dimensional parameters for a particle diameter of 125 nm: (a) period dependence of RW; (b) thickness dependence of RW.

Table 1. Parameters used in the calculations.
Parameter                    | For Fig. 4a | For Fig. 4b
Diameter of particles (d1)   | 125 nm      | 125 nm
Period (p)                   | variable    | 840 nm
Spacer layer thickness (d2)  | 60 nm       | variable

The reflectance under normal incidence from the silicon surface in the configuration with the new optimized parameters is shown in Fig. 5. Reflectance curves for a bare silicon surface and for a surface coated with a standard 80 nm thick Si3N4 layer are also shown for comparison. The optimization of the dimensional parameters, except the particle diameter, results in a decrease of the weighted reflectance from 17.3% to 3.2%. The strong reduction of the reflection is explained by the combined effect of the preferential forward scattering of the incident light by the silicon nanoparticles and the destructive interference from the SiNx layer [17]. The scattering from particles is known to depend strongly on the dielectric environment. Figure 6 presents the dependence of the weighted reflectance on the SiNx refractive index. For the optimal dimensional parameters found in Fig. 4, an optimal refractive index of about 2.3 leads to a weighted reflectance of 2.1%. However, it is well known that the extinction coefficient, and hence the absorption within a SiNx layer, increases slightly in the lower wavelength range for high refractive indices [23]. Therefore, the pronounced antireflection effect at higher index may be partly offset by the increased absorption within the layer. This claim may be verified by taking into account the dispersion of the SiNx optical constants. A good antireflection coating must be stable over a wide range of angles of incidence. Figure 7 shows the angular dependence of the weighted reflectance for the configuration shown in Fig. 2a with the optimal parameters. The reflectance shows good stability for angles of incidence below 40°.
An evaluation of the short-circuit current density from Equation (16) shows that for a solar cell with the standard Si3N4 single-layer ARC a theoretical Jsc value of 38.4 mA/cm² can be expected, whereas for this new ARC concept with the optimal parameters the value is 41.7 mA/cm², corresponding to a relative increase of about 8.6%.

IV. Silicon Nanoparticles Partially Embedded in a SiNx Layer

Starting from the configuration of silicon nanoparticles on a SiNx layer with the optimal parameters (diameter 125 nm, period 840 nm, spacer layer thickness 66 nm, refractive index 2.3), let us incorporate the nanoparticles gradually into the dielectric layer. We can define a factor, f_emb, corresponding to the portion of a particle embedded in the SiNx layer. Here, the maximum value of f_emb is 66/125, or about 0.52, representing the ratio of the optimal spacer layer thickness to the optimal diameter of the nanoparticles. The increase of the weighted reflectance with the factor f_emb (Fig. 8) allows us to conclude that embedding the silicon nanoparticles in the SiNx layer has a negative effect. Therefore, the proposed ARC performs better in the configuration where the silicon nanoparticles are placed on the SiNx layer.

Figure 5: Reflectance under normal incidence from a bare silicon surface (black), a silicon surface coated with an 80 nm thick Si3N4 layer (green), and a silicon surface coated with a 60 nm thick SiNx layer on which silicon nanoparticles of diameter 125 nm spaced by 450 nm are arranged (red).
Figure 6: Dependence of RW on the SiNx refractive index for the optimal dimensional parameters (diameter 125 nm, period 840 nm and spacer layer thickness 66 nm).
Figure 7: Angular dependence of WR for the ARC configuration with the optimal parameters.
Figure 8: Dependence of RW on the parameter f_emb under normal incidence for the ARC with the optimal parameters.
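The two headline numbers of this section, the maximum embedding fraction and the relative Jsc gain, are simple ratios and can be checked directly:

```python
# Maximum embedding fraction: optimal spacer thickness over particle diameter
f_emb_max = 66 / 125               # = 0.528, the "about 0.52" quoted above

# Relative Jsc gain of the new ARC over the standard Si3N4 single layer
jsc_ref, jsc_new = 38.4, 41.7      # mA/cm^2, values from the text
gain = (jsc_new - jsc_ref) / jsc_ref   # ~0.086, i.e. the quoted ~8.6%
```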
V. Conclusion

Two configurations of an antireflection coating combining silicon nanoparticles with a silicon nitride layer were considered. The antireflection performance of this combination is found to be better in the configuration where the silicon nanoparticles are placed on the SiNx layer. For this configuration, a SiNx refractive index of 2.3 provides the best performance, yielding a low weighted reflectance of 2.1% in the wavelength range 300–1100 nm. Stability of the reflectance for oblique incidence at angles below 40° is observed. A relative increase of 8.6% in the short-circuit current density over a silicon solar cell with the standard Si3N4 coating can theoretically be expected. Partially embedding the silicon nanoparticles in the dielectric layer yielded a negative effect. The new ARC concept proposed in this work can be used for efficiency improvement in silicon solar cells.

Acknowledgments

The authors would like to thank the staff members of the Department of Applied Physics, A. Ndiaye, D. Diouf and M. Sene, for helpful discussions and suggestions. This work was also supported by the University Gaston Berger (grant no. 0003765).

References

[1] M. Lipiński, "Silicon nitride for photovoltaic application", Archives of Materials Science and Engineering 46, 2 (2010) 69–87.
[2] D. Gong, Y.-J. Lee, M. Ju, J. Ko, D. Yang, Y. Lee, G. Choi, S. Kim, J. Yoo, B. Choi, and J. Yi, "SiNx double layer antireflection coating by plasma-enhanced chemical vapor deposition for single crystalline silicon solar cells", Japanese Journal of Applied Physics 50, 08KE01 (2011) 1–5.
[3] Y. Lee, D. Gong, N. Balaji, Y. Lee, and J. Yi, "Stability of SiNx/SiNx double stack antireflection coating for single crystalline silicon solar cells", Nanoscale Research Letters 7, 50 (2012) 1–6.
[4] H. R. Stuart and D. G.
Hall, "Island-size effects in nanoparticle-enhanced photodetectors", Applied Physics Letters 73, 26 (1998) 3815–3817.
[5] H. R. Stuart and D. G. Hall, "Absorption enhancement in silicon-on-insulator waveguides using metal island films", Applied Physics Letters 69, 16 (1996) 2327–2329.
[6] P. Matheu, S. H. Lim, D. Derkacs, C. McPheeters, and E. T. Yu, "Metal and dielectric nanoparticle scattering for improved optical absorption in photovoltaic devices", Applied Physics Letters 93, 191113 (2008) 1–3.
[7] E. T. Yu, D. Derkacs, S. H. Lim, P. Matheu, and D. M. Schaadt, "Plasmonic nanoparticle scattering for enhanced performance of photovoltaic and photodetector devices", Proc. of SPIE 7033 (2008) 70331V-1–70331V-9.
[8] K. R. Catchpole and A. Polman, "Design principles for particle plasmon enhanced solar cells", Applied Physics Letters 93, 191113 (2008) 1–3.
[9] K. R. Catchpole and A. Polman, "Plasmonic solar cells", Optics Express 16, 26 (2008) 21793–21800.
[10] F. J. Beck, S. Mokkapati, A. Polman and K. R. Catchpole, "Light trapping for solar cells using localised surface plasmons in self-assembled Ag nanoparticles", 24th European Photovoltaic Solar Energy Conference, Hamburg, Germany (2009) 232–235.
[11] K. C. Sahoo, M. K. Lin, E. Y. Chang, Y. Y. Lu, C. C. Chen, J. H. Huang, and C. W. Chang, "Fabrication of antireflective sub-wavelength structures on silicon nitride using nano cluster mask for solar cell application", Nanoscale Research Letters 4 (2009) 680–683.
[12] H. A. Atwater and A. Polman, "Plasmonics for improved photovoltaic devices", Nature Materials 9 (2010) 205–213.
[13] Y. Wang, N. Chen, X. Zhang, X. Yang, Y. Bai, M. Cui, Y. Wang, X. Chen and T. Huang, "Ag surface plasmon enhanced double-layer antireflection coatings for GaAs solar cells", Journal of Semiconductors 30, 7 (2009) 072005-1–072005-5.
[14] R. Sircar, D. P. Srivastava and B.
Tripathi, "Study of Ag nanoparticle embedded silicon nitride anti-reflection coating (ARC) for silicon solar cells", International Journal of Photonics 2, 1 (2010) 7–14.
[15] P. Spinelli, M. Hebbink, R. de Waele, L. Black, F. Lenzmann, and A. Polman, "Optical impedance matching using coupled plasmonic nanoparticle arrays", Nano Letters 11 (2011) 1760–1765.
[16] P. Spinelli, V. E. Ferry, J. van de Groep, M. van Lare, M. A. Verschuuren, R. E. I. Schropp, H. A. Atwater, and A. Polman, "Plasmonic light trapping in thin-film Si solar cells", Journal of Optics 14 (2012) 1–11.
[17] P. Spinelli, M. A. Verschuuren, and A. Polman, "Broadband omnidirectional antireflection coating based on subwavelength surface Mie resonators", Nature Communications 3, 692 (2012) 1–5.
[18] C. C. Katsidis and D. I. Siapkas, "General transfer-matrix method for optical multilayer systems with coherent, partially coherent, and incoherent interference", Applied Optics 41, 19 (2002) 3978–3987.
[19] M. C. Troparevsky, A. S. Sabau, A. R. Lupini, and Z. Zhang, "Transfer-matrix formalism for the calculation of optical response in multilayer systems: from coherent to incoherent interference", Optics Express 18, 24 (2010) 24715–24721.
[20] S. A. Dyakov, V. A. Tolmachev, E. V. Astrova, S. G. Tikhodeev, V. Yu. Timoshenko, and T. S. Perova, "Numerical methods for calculation of optical properties of layered structures", Proc. of SPIE 7521 (2010) 75210G-1–75210G-10.
[21] R. Ruppin, "Evaluation of extended Maxwell-Garnett theories", Optics Communications 182 (2000) 273–279.
[22] M. A. Green and M. Keevers, "Optical properties of intrinsic silicon at 300 K", Progress in Photovoltaics 3, 3 (1995) 189–192.
[23] H. Nagel, A. G. Aberle, and R. Hezel, "Optimized antireflection coatings for planar silicon solar cells using remote PECVD silicon nitride and porous silicon dioxide", Progress in Photovoltaics 7 (1999) 245–260.
[24] D. Bouhafs, A. Moussi, A. Chikouche, and J. M.
Ruiz, "Design and simulation of antireflection coating systems for optoelectronic devices: applications to silicon solar cells", Solar Energy Materials and Solar Cells 52 (1998) 79–93.
[25] S. A. Boden and D. M. Bagnall, "Sunrise to sunset optimization of thin film antireflective coatings for encapsulated, planar silicon solar cells", Progress in Photovoltaics 17 (2009) 241–252.
[26] American Society for Testing and Materials (ASTM), "Standard tables for reference solar spectral irradiance at air mass 1.5: ASTM G-173-03", (2003).

M. Beye obtained his PhD in applied physics in 2013 from Gaston Berger University. He received his MS (2005) in semiconductor materials and structures and his BS (2003) from the Saint Petersburg Electrotechnical University (Russia). He has held various teaching positions since 2006, in particular at the Department of Applied Physics of Gaston Berger University, which he joined in 2010. He has also worked at the Laboratory of Electronics, Data Processing, Telecommunications and Renewable Energies. His research interests include antireflection coating technologies, solar cells and related topics, nanotechnology, and the environment.

F. Ndiaye is a PhD student at the Laboratory of Electronics, Data Processing, Telecommunications and Renewable Energies of Gaston Berger University. She is working on the relations between environmental conditions and PV module performance.

A. S. Maiga obtained his PhD in 1993 from the University of Rennes (France). He is now an associate professor at the Department of Applied Physics of Gaston Berger University. He also leads the Laboratory of Electronics, Data Processing, Telecommunications and Renewable Energies.
Journal of Engineering Research and Technology, Volume 5, Issue 3, September 2018

Mechanical Behavior of High Strength Concrete Reinforced with Glass Fiber Reinforced Polymer

Mahmoud M. Hilles and Mohammed M. Ziara

Abstract — The effects of alkali-resistant glass fiber reinforced polymer (AR-GFRP) in various proportions on the mechanical behavior of high strength concrete (HSC) were investigated in this study. Concrete mixtures were prepared with AR-GFRP proportions of 0.3, 0.6, 0.9, and 1.2% by weight of cement. The mixtures were cast and tested for compressive, splitting tensile, and flexural strengths in accordance with ASTM standards. The experimental results showed that the strengths increase as the fiber percentage increases: the compressive strength increased from 57.85 to 66.6 MPa, the splitting tensile strength from 3.06 to 4.92 MPa, and the flexural strength from 4.84 to 7.27 MPa as the fiber percentage increased from 0.0 to 1.2%. In comparison with the plain HSC control specimens, which showed sudden, destructive failure, the formation of the cracks that led to failure in the AR-GFRP specimens became more gradual as the fiber percentage increased. Hence it can be concluded that the presence of fibers in the matrix contributes to preventing sudden crack formation and thus enhances concrete ductility.

Index Terms — HSC, GFRC, AR-GFRP, HSGFRC, mechanical behavior, mechanical properties, mode of failure.

I. Introduction

It has been recognized that high strength concrete (HSC) exhibits brittle behavior at the ultimate limit state of loading. The addition of closely spaced and uniformly dispersed small fibers to concrete acts as a crack arrester and substantially improves its mechanical behavior and ductility. The addition of fibers results in a product with higher flexural and tensile strengths than normal concrete [1].
Conventional fiber reinforced concrete (FRC), made by adding fibers to normal strength concrete (NSC), exhibits only an increase in ductility compared with the plain matrix, whereas high strength fiber reinforced concrete (HSFRC), made by adding fibers to HSC, exhibits a substantial strain-hardening type of response, which leads to a large improvement in both strength and toughness compared with the plain matrix [2]. Many types of fiber are available in practice; glass fiber reinforced polymer (GFRP) is preferred over other types due to its high ratio of surface area to weight and its high ratio of strength to unit cost. However, the glass fiber originally used in conjunction with cement was found to be affected by the alkaline condition of cement. The alkali-resistant glass fiber reinforced polymer (AR-GFRP) used more recently has overcome this defect and can be used effectively in concrete [3]. For new materials such as high strength glass fiber reinforced concrete (HSGFRC), studies on mechanical behavior are of paramount importance for building confidence among engineers. Most of the research carried out on the mechanical behavior of FRC has used steel, carbon, and natural fibers; few attempts have been made with glass fibers. In addition, the literature indicates that most of the available studies were made with normal strength concrete (NSC) reinforced with insufficient proportions of glass fibers. Therefore, the present study was conducted to investigate the mechanical behavior of HSC reinforced with various percentages of glass fibers. The aim of this research is to study the effect of the addition of AR-GFRP in various proportions on the mechanical behavior of HSC. The strength characteristics and mode of failure of HSGFRC were compared with those of plain HSC by performing laboratory tests on compressive strength, splitting tensile strength, flexural strength, and density.
II. Experimental Program

A. Materials

The HSGFRC constituent materials used in this research include ordinary Portland cement, coarse aggregate, fine aggregate, normal range water reducer (NRWR), and GFRP. The proportions of these constituent materials were chosen carefully in order to optimize the packing density of the mixture. Ordinary Portland cement CEM II 42.5R was used; the cement met the requirements of the ASTM C150 specifications [4]. According to the local market survey, the available fine aggregate is a dune sand type which is finer than required by the standard specifications of ASTM C33 [5], and its gradation does not fall within the limits. However, many researchers have noted that the role of the fineness modulus and gradation of the fine aggregate in HSC is not as crucial as in conventional strength mixtures [6] [7]. The grain distribution of the fine aggregate is shown in Table 1. The coarse aggregate was natural crushed limestone of 12.5 mm nominal maximum size; its sieve analysis according to ASTM C33 [5] is shown in Fig. 1. The specific gravities of the coarse and fine aggregates were 2.66 and 2.63, respectively. An NRWR conforming to the ASTM C494 [6] Type A specification was used at a dosage of 2 lit./m³ according to the manufacturer's suggestion. The properties of the AR-GFRP used, according to the manufacturer's data sheet, are shown in Table 2.

B. Mix Proportioning

The reference concrete mix (without GFRP) was developed on a trial and error basis to obtain a 28-day design cylinder compressive strength of 50 MPa. The first trial mixture was based on Kosmatka et al. [8]; modifications were then applied to obtain the best determinable mix design proportions that achieved the target design strength.
Five fiber percentages were used: 0.0, 0.3, 0.6, 0.9, and 1.2% by weight of cement. Accordingly, the five mixes shown in Table 3 were used to evaluate the effect of AR-GFRP on the mechanical behavior of plain HSC. These percentages were chosen in a range that allows the mechanical behavior of HSGFRC to be observed and evaluated both when it contains a small amount of fiber and when it contains a large amount. For the reference mixture M50 F0 (without fibers), the mixing procedure was applied in accordance with ASTM C192 [7]. However, special attention was paid when adding the glass fibers: the fibers were always added last and mixed for the minimum time required to achieve uniform dispersion and to prevent their damage by excessive mixing.

Table 3. HSGFRC mixtures for 1 m³ of concrete.
Mix    | AR-GFRP (% by wt. of cement) | C (kg) | FA (kg) | CA (kg) | NRWR (lit.) | w/c
M50 F0 | 0.0                          | 600    | 484     | 1068    | 2           | 0.37
M50 F1 | 0.3                          | 600    | 484     | 1068    | 2           | 0.37
M50 F2 | 0.6                          | 600    | 484     | 1068    | 2           | 0.37
M50 F3 | 0.9                          | 600    | 484     | 1068    | 2           | 0.37
M50 F4 | 1.2                          | 600    | 484     | 1068    | 2           | 0.37

C. Test Specimens and Methods

Compressive strength, splitting tensile strength, flexural strength, and density tests were carried out to evaluate the strength properties of HSGFRC using a Matest C104 Servo-Plus machine with a capacity of 2000 kN. Each test was carried out at ages of 7 and 28 days, except for the density, which was determined at 28 days. Cylinder specimens of 150 x 300 mm were prepared for testing the adopted HSC mix proportions developed in the trial mix stage to achieve the target 28-day design strength of 50 MPa. The compressive strength test was made according to ASTM C39 [8]; the specimens were loaded in compression at a constant loading rate of 1.4 MPa/min, conforming to the standard requirements. Cube specimens of 150 x 150 x 150 mm were prepared for the compressive strength and density tests. This compressive strength test was made according to the BS 1881, Part 108 standard test method [9].
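As a sketch, the per-cubic-metre quantities implied by Table 3 can be tabulated directly; the water content is an inferred quantity (from the w/c ratio), not listed explicitly in the table:

```python
cement = 600.0          # kg per m^3 of concrete, from Table 3
w_c = 0.37              # water-to-cement ratio, from Table 3
water = w_c * cement    # ~222 kg of mixing water per m^3 (inferred from w/c)

# AR-GFRP mass for each mix, dosed as a percentage of the cement weight
fiber_kg = {pct: pct / 100 * cement for pct in (0.0, 0.3, 0.6, 0.9, 1.2)}
# e.g. the 1.2% mix (M50 F4) uses about 7.2 kg of fibers per m^3
```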
The specimens were loaded at a constant loading rate of 0.34 MPa/sec, conforming to the standard requirements. Cylinder specimens of 150 x 300 mm were prepared for the splitting tensile strength test in accordance with the ASTM C496 standard test method [10]; the specimens were loaded at a constant loading rate of 1.4 MPa/min, conforming to the standard requirements. Prism specimens of 100 x 100 x 500 mm were prepared for the flexural strength test using center-point loading in accordance with ASTM C293 [11]; the specimens were loaded so that the extreme fiber stress increased at a rate of 1 MPa/min, conforming to the standard requirements. For each mix, three specimens were made for each test at 28 days and two specimens for each test at 7 days. The mean value of the test results of the specimens was taken as the result of the experiment.

Table 1. Grain distribution of fine aggregate.
Sieve size (mm) | % passing
4.75            | 100
2.36            | 100
1.18            | 95.99
0.6             | 87.33
0.425           | 73.14
0.3             | 39.33
0.15            | 1.50
0.075           | 0
Fineness modulus (FM): 1.75

Table 2. Properties of AR-GFRP.
Fiber length            | hybrid, 8 to 30 mm
Diameter                | 14 µm
Density                 | 2.7 t/m³
Modulus of elasticity   | 72 GPa
Tensile strength        | 1,700 MPa
Chemical resistance     | very high
Electrical conductivity | very low
Softening point         | 860 °C
ZrO2 content            | 15–20%
Material                | alkali-resistant glass

Figure 1: Coarse aggregate sieve analysis according to ASTM C33 [5] (% passing vs. sieve size, with ASTM minimum and maximum limits).
III. Test Results and Discussion

A. Mix Design Compressive Strength Test Results

The average 28-day compressive strength of three cylinder specimens for the adopted HSC mix proportions developed in the trial mix stage was 51.15 MPa, which clearly achieves the target design strength of 50 MPa.

B. Size Effect

The results of the 7- and 28-day compressive strength tests are shown in Table 4. The average 28-day cylinder compressive strength of the plain HSC specimens (without fiber) obtained from the adopted mix proportions was 51.15 MPa, whereas the average 28-day cube compressive strength of the plain HSC specimens, as shown in Table 4, was 57.85 MPa. Therefore, the ratio of cylinder to cube compressive strength was 0.88, which is clearly higher than for a normal strength grade. It should be mentioned that this ratio increases as the concrete strength increases, approaching 1 for ultra-high strength concrete [12]. The compression test assumes a state of pure, uniaxial compression, which is untrue because of the friction between the ends of the specimen and the bearing plates of the test machine. Through friction, the bearing plates restrain the lateral expansion of the ends of the specimen and introduce a lateral confining pressure near the specimen ends. This confining pressure is greatest right at the specimen end and gradually dies out toward the middle of the specimen [12]. It is believed that the restraining effect of the bearing plates of the testing machine may extend over the entire height of a cube specimen, whereas it leaves part of a cylinder specimen unaffected. It is therefore to be expected that the strength of a cube specimen is greater than that of a cylinder specimen made from the same concrete. For NSC, the ratio of cylinder to cube compressive strength is around 0.8, but in reality there is no simple relation between the strengths of specimens of the two shapes.
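A quick arithmetic check of the quoted cylinder-to-cube ratio, using the trial-mix cylinder strength and the cube strength from Table 4:

```python
cyl_28d = 51.15     # MPa, average 28-day cylinder strength of the plain mix
cube_28d = 57.85    # MPa, average 28-day cube strength of the plain mix (Table 4)
ratio = cyl_28d / cube_28d   # ~0.88, higher than the ~0.8 typical of NSC
```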
however, for hsc, the effect of the specimen's size and shape on the compressive strength is not as significant as for nsc. the ratio of cylinder to cube compressive strength increases strongly with an increase in strength and is nearly one at strengths of more than 100 mpa [13]. this increase in the ratio can be explained by the fact that the lateral expansion in hsc is small, so the plate confining effect is insignificant in both the cube and the cylinder; both specimens are in this case subjected to a uniaxial state of stress.
table 4 compressive strength test results.
mix      % gfrp   7 days (mpa)   28 days (mpa)   s (28 days)   cv % (28 days)   % increase over reference mix (28 days)
m50 f0   0        46.05          57.85           1.49          2.58             0
m50 f1   0.3      47.96          61.05           2.48          4.06             5.238
m50 f2   0.6      50.73          66.01           2.86          4.33             12.36
m50 f3   0.9      49.65          66.34           1.95          2.94             12.79
m50 f4   1.2      49.58          66.60           2.60          3.90             13.14
c effect of ar-gfrp on the compressive strength of hsc
from table 4, it is observed that the compressive strength increases with an increase in fiber percentage. as shown in figure 2, the 28 days' compressive strength increases sharply from 57.85 to 66.01 mpa as the fiber percentage increases from 0.0 to 0.6. then, only a very slight increase is observed, from 66.01 to 66.6 mpa, as the fiber percentage increases from 0.6 to 1.2. in general, as shown in figure 3, the percentage of increase over the reference mix at fiber percentages of 0.6 and 1.2 is 12.36 and 13.14 percent, respectively; hence a fiber percentage of 0.6 can be considered the optimum value of fiber addition for compressive strength enhancement, since the difference between those two values is insignificant.
figure 2 effect of ar-gfrp on 28 days compressive strength of hsc.
figure 3 the percentage of increase in compressive strength over the reference mix due to addition of ar-gfrp on hsc.
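the "% increase over the reference mix" column of table 4 can be reproduced; note that the tabulated figures correspond to the strength difference divided by the fiber-mix strength (e.g. (66.01 - 57.85)/66.01 = 12.36 %), not by the reference strength. a sketch:

```python
# reproduce the "% increase" column of table 4 (28-day cube strengths, mpa)
strengths = {"m50 f0": 57.85, "m50 f1": 61.05, "m50 f2": 66.01,
             "m50 f3": 66.34, "m50 f4": 66.60}
ref = strengths["m50 f0"]
for mix, fc in strengths.items():
    # the paper's column matches normalization by fc, not by the reference
    increase = 100.0 * (fc - ref) / fc
    print(f"{mix}: {increase:.2f} %")
```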
the behavior shown in figure 2 can be explained as follows: according to li [14], the reinforcement provided by fibers works at both the micro and macro levels. at the micro level, fibers arrest the development of micro cracks; the ability of the fibers to control micro crack growth depends mainly on the number of fibers. at the macro level, fibers control crack opening and increase the energy absorption capacity of the composite. hence, a higher number of fibers in the matrix leads to a higher probability of a micro crack being intercepted by a fiber, leading to higher compressive strengths, as can be seen in figure 2, where a sharp increase in the compressive strength is observed as the fiber percentage increases from 0.0 to 0.6. at the macro level, the mode of failure is enhanced due to the large amount of energy consumed by the fibers and the postponement of the formation of the first major crack in the matrix. on the other hand, fiber addition causes some perturbation of the matrix, which appears strongly at high fiber percentages and can result in more voids at the micro level due to the fiber debonding and pullout process, as long as there is no fiber fracture. when the macro level starts, these voids act as defects where macro cracking starts. in addition, as the amount of fiber exceeds the optimal value, a larger surface area of the coarse aggregate particles is surrounded by fibers, which, being a soft polymeric material, could weaken the aggregate interlock, thereby reducing the compressive strength of the concrete.
it is safe then to say that, at higher percentages, the fibers cannot further enhance the compressive strength, as discussed above and as can be seen in figure 2, where only a very slight increase is observed as the fiber percentage goes from 0.6 to 1.2. the test results show good agreement with other researchers who studied the effect of adding gfrp to structural concrete: swami et al. [15] and ghorpade [16] showed that the concrete compressive strength increases noticeably when a small amount of fiber is used, but that there is no additional significant enhancement in compressive strength when the fiber percentage is increased above the optimum value, as shown in figure 4. ghorpade, who used very high strength concrete (vhsc) to study the effect of gfrp addition, showed that the optimum amount of fiber addition is about 1 percent, while for hsc, as in this research, the optimum amount is about 0.6 percent; consequently, for lower strength concrete, as used by swami et al., the optimum amount could decrease to 0.3 percent. hence, it is established that as the concrete strength increases, the optimum amount of fiber addition for compressive strength enhancement also increases, as shown in figure 4.
figure 4 comparisons of compressive strength test results with other related researches.
d effect of ar-gfrp on the strength gain with age of hsc
figure 5 illustrates the strength gain with age for each mix, and figure 6 illustrates the relative strength gain at 7 and 28 days against the ar-gfrp percentage. in reference to figure 6, it is evident that the ratio of 7 days' to 28 days' compressive strength for the reference mix (m50 f0), typically 79.6 percent, is higher than for normal strength grades. indeed, according to aci committee 363 [17], it has been recognized that hsc shows a higher rate of strength gain at early ages compared to nsc.
the higher rate of strength development of hsc at early ages is caused by an increase in the internal curing temperature in the concrete mixture, due to a higher heat of hydration, and by the shorter distance between hydrated particles, due to the low water-cement ratio. however, as shown in figure 6, the ratio of 7 days' to 28 days' compressive strength decreased from 79.6 to 74.43 percent as the fiber percentage increased from 0.0 to 1.2. this can be explained by the possibility that the fibers absorb part of the generated heat and widen the distance between hydrated particles, which in turn results in a lower internal curing temperature in the concrete mixture.
figure 5 effect of ar-gfrp on the strength gain with age of hsc.
figure 6 effect of ar-gfrp on the ratio of 7 days' to 28 days' compressive strength.
e effect of ar-gfrp on the density of hsc
figure 7 shows the effect of ar-gfrp on the concrete density. it is observed that the density increases very slightly with an increase in fiber percentage. this can be explained by the extremely light weight and the high ratio of surface area to weight of the ar-gfrp.
figure 7 effect of ar-gfrp on density of hsc.
f crack pattern and mode of failure during compressive strength test
observation of the specimens during the compressive strength tests showed that, in the case of plain hsc specimens (figure 8), the failure was sudden, brittle and destructive, with a loud sound. however, with the addition of 0.3 percent fiber, the failure of the hsc specimens became similar to that of normal strength concrete, as shown in figure 9.
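the decline of the 7-to-28-day strength ratio plotted in figure 6 follows directly from the table 4 data (a sketch):

```python
# ratio of 7-day to 28-day compressive strength for each mix (table 4, mpa)
data = {"m50 f0": (46.05, 57.85), "m50 f1": (47.96, 61.05),
        "m50 f2": (50.73, 66.01), "m50 f3": (49.65, 66.34),
        "m50 f4": (49.58, 66.60)}
for mix, (f7, f28) in data.items():
    print(f"{mix}: {100.0 * f7 / f28:.2f} %")  # falls from ~79.6 % to ~74.4 %
```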
hsc specimens with 0.6 and 0.9 fiber percentages showed finer cracks and less dispersion compared with the hsc specimens with 0.3 fiber percentage, as shown in figures 10 and 11, respectively. at the highest fiber percentage of 1.2, the specimens remained intact after failure, as shown in figure 12, and the cracks were extremely fine. in conclusion, it is observed that as the fiber percentage increases, failure takes place gradually with the formation of cracks, and sudden crack formation is arrested. moreover, although a higher fiber percentage could not enhance the compressive strength, as explained before, at a macro level fibers at higher percentages can control crack opening, consume a great amount of energy, and postpone the formation of the first major crack in the matrix, hence changing the failure mode from brittle to quasi-ductile.
figure 8 mode of failure and crack pattern of plain hsc specimens (without fiber).
figure 9 (a) and (b) mode of failure and crack pattern of hsc specimens with 0.3 fiber percentage.
figure 10 mode of failure and crack pattern of hsc specimens with 0.6 fiber percentage.
figure 11 mode of failure and crack pattern of hsc specimens with 0.9 fiber percentage.
figure 12 mode of failure and crack pattern of hsc specimens with 1.2 fiber percentage.
g effect of ar-gfrp on the splitting tensile strength of hsc
table 5 shows the results of the 7 and 28 days' splitting tensile strength.
according to aci committee 363 [17], equation (1) was recommended for the prediction of the splitting tensile strength (fsp) of hsc with a 28 days' cylinder compressive strength (fc') within 21 to 83 mpa:

fsp = 0.59 √fc'  (mpa)   (1)

the 28 days' cylinder compressive strength of plain hsc was equal to 51.149 mpa, which makes the predicted splitting tensile strength fsp, using equation (1), equal to 4.21 mpa. this value is very close to the average experimental value of 4.12 mpa for the plain hsc specimens (m50 f0) shown in table 5; the difference is only 2.1%, showing good agreement.
table 5 splitting tensile strength test results.
it is observed from table 5 that the splitting tensile strength increases significantly with an increase in fiber percentage. as shown in figure 13, the splitting tensile strength at 7 days increased from 3.06 to 4.92 mpa as the fiber percentage increased from 0.0 to 1.2; at 28 days, the strength increased from 4.12 to 6.7 mpa over the same range. it is observed from figure 14 that the percentage of increase in the splitting tensile strength over the reference mix due to the addition of fibers is much higher than for the compressive strength. moreover, the increasing trend in splitting tensile strength due to the addition of fibers continued ascending up to the highest value of 6.73 mpa (at 28 days) at the highest fiber percentage of 1.2, as shown in figure 13. in comparison, the compressive strength in figure 2 ascends continuously only up to a fiber percentage of 0.6, after which, from 0.6 to 1.2, the rate of increase drops off.
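the aci 363 check described above can be scripted; fsp = 0.59 √fc' is the aci 363 form consistent with the quoted prediction of 4.21 mpa (a sketch):

```python
import math

# aci 363 prediction of splitting tensile strength: fsp = 0.59 * sqrt(fc')
# stated valid for 21 mpa <= fc' <= 83 mpa
fc = 51.149                      # 28-day cylinder strength of plain hsc, mpa
fsp_pred = 0.59 * math.sqrt(fc)  # predicted splitting tensile strength, mpa
fsp_meas = 4.124                 # measured average for m50 f0, mpa
print(f"predicted {fsp_pred:.2f} mpa vs measured {fsp_meas:.2f} mpa")
```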
this difference between the increasing modes of the compressive strength and splitting tensile strength curves, shown in figure 2 and figure 13, can be explained simply: the defects caused by higher fiber percentages at the micro level (as discussed before, the voids due to the fiber debonding and pullout process, and the weakening of the aggregate interlock due to the soft, polymeric character of the fibers) appear strongly when the concrete fails under compressive stress. in the splitting tensile test, on the other hand, although the cylinder specimen is subjected to a compressive load, the specimen fails due to the induced tensile stresses before it reaches its ultimate compressive strength capacity (e.g. for the same mixture m50 f4, the maximum average applied compressive load at 28 days in the compression test is 1505.4 kn, while in the splitting tensile test it is only 473.18 kn). it is safe then to say that, with a higher percentage of fibers, according to li [14], micro cracks formed in the matrix at the micro level can be stabilized by the interaction between the matrix and the fibers through bonding, hence postponing the formation of the first major crack in the matrix. thus, the apparent tensile strength of the matrix can be increased. hence it is established that ar-gfrp inclusion in hsc mixtures is more powerful for enhancing the tensile strength than the compressive strength.
mix      % gfrp   7 days (mpa)   28 days (mpa)   s (28 days)   cv % (28 days)   % increase over reference mix (28 days)
m50 f0   0        3.066          4.124           0.22          5.5              0
m50 f1   0.3      3.579          4.777           0.55          11.6             15.83
m50 f2   0.6      3.917          5.538           0.33          6.1              34.28
m50 f3   0.9      4.177          5.845           0.45          7.8              41.73
m50 f4   1.2      4.924          6.731           0.39          5.8              63.22
figure 13 effect of ar-gfrp on splitting tensile strength of hsc.
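the failure loads quoted in the parenthesis above are consistent with the reported strengths; assuming 150 x 300 mm cylinders for the splitting test (as stated earlier) and 150 mm cubes for the compression test (an assumption, since the cube size is not restated here), a sketch:

```python
import math

# back-calculate stresses from the m50 f4 failure loads quoted in the text
p_split = 473.18e3   # splitting tensile test load, n
p_comp = 1505.4e3    # compression test load, n

# splitting tensile stress on a 150 x 300 mm cylinder: fsp = 2p / (pi * l * d)
fsp = 2 * p_split / (math.pi * 300 * 150)
# compressive stress assuming a 150 mm cube (assumed specimen size)
fc = p_comp / (150 * 150)
print(f"fsp = {fsp:.2f} mpa, fc = {fc:.1f} mpa")  # ≈ 6.69 and 66.9 mpa, close to the table values
```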
figure 14 the percentage of increase over the reference mix due to addition of ar-gfrp on hsc: comparison between compressive strength and splitting tensile strength.
figure 15 comparisons of splitting tensile strength test results with other related researches.
the test results show good agreement with swami et al. [15] and ghorpade [16], who showed that the concrete splitting tensile strength increases significantly with the addition of glass fiber, even when a large amount is used. however, ghorpade, who used vhsc, reported that the splitting tensile strength decreased at fiber percentages larger than 1 percent, as shown in figure 15.
figure 16 mode of failure of plain hsc specimens (without fiber).
h crack pattern and mode of failure during splitting tensile strength test
observation of the specimens during the splitting tensile strength tests showed that, in the case of plain hsc, the failure was brittle and occurred by sudden, complete splitting accompanied by a loud sound (figure 16). with the addition of 0.3 percent fiber, the mode of failure of the hsc specimens remained sudden, similar to the plain hsc specimens, as shown in figure 17. however, the hsc specimens with 0.6 and 0.9 fiber percentages showed a less sudden splitting failure and remained intact, as shown in figures 18 and 19. at the highest fiber percentage of 1.2, the hsc specimens remained standing after failure and the crack was fine, as shown in figure 20. in conclusion, as the fiber percentage increased, the splitting crack occurred progressively, sudden crack formation was arrested, and the brittle characteristic of hsc was overcome.
figure 17 mode of failure of hsc specimens with 0.3 fiber percentage.
figure 18 mode of failure and crack pattern of hsc specimens with 0.6 fiber percentage.
figure 19 mode of failure and crack pattern of hsc specimens with 0.9 fiber percentage.
figure 20 mode of failure and crack pattern of hsc specimens with 1.2 fiber percentage.
i effect of ar-gfrp on the flexural strength (modulus of rupture, fr) of hsc
the results of the 7 and 28 days' flexural strength (modulus of rupture, fr) are shown in table 6. according to aci committee 363 [17], equation (2) was recommended for the prediction of fr of hsc with a 28 days' cylinder compressive strength fc' within 21 to 83 mpa:

fr = 0.94 √fc'  (mpa)   (2)

the 28 days' cylinder compressive strength of plain hsc (without fiber) was equal to 51.149 mpa. the prediction of fr using equation (2) is 6.72 mpa, in comparison with the measured average value of 6.35 mpa for the plain hsc specimens (m50 f0) shown in table 6, a difference of only 5.5%, showing good agreement.
table 6 flexural strength (modulus of rupture) test results.
it is observed from table 6 that fr increased significantly with the increase in fiber percentage. as shown in figure 21, the flexural strength at 7 days increased continuously from 4.84 to 7.27 mpa as the fiber percentage increased from 0.0 to 1.2, and at 28 days from 6.35 to 9.68 mpa over the same range.
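similarly, the modulus-of-rupture check can be scripted; fr = 0.94 √fc' is the aci 363 form consistent with the quoted prediction of 6.72 mpa (a sketch):

```python
import math

# aci 363 prediction of modulus of rupture: fr = 0.94 * sqrt(fc')
fc = 51.149                     # 28-day cylinder strength of plain hsc, mpa
fr_pred = 0.94 * math.sqrt(fc)  # predicted modulus of rupture, mpa
fr_meas = 6.35                  # measured average for m50 f0, mpa
diff = 100 * (fr_pred - fr_meas) / fr_pred
print(f"predicted {fr_pred:.2f} mpa, measured {fr_meas} mpa, difference {diff:.1f} %")
```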
it is therefore concluded that the percentage of increase in the flexural strength over the reference mix due to the addition of fibers is much higher than for the compressive strength, but a little less than for the splitting tensile strength, except at 0.3 fiber percentage, where the flexural strength shows the highest percentage of increase, as shown in figure 22. the rate of increase in flexural strength due to the addition of fibers follows the same trend as for the splitting tensile strength, ascending up to the highest value of 9.68 mpa (28 days) at the highest fiber percentage of 1.2, as shown in figure 21. in comparison, the compressive strength in figure 2 ascends continuously only up to a fiber percentage of 0.6, after which, from 0.6 to 1.2, the increase is very slight. this difference between the increasing modes of the compressive strength and flexural strength curves, shown in figures 2 and 21, can be explained in the same way as for the splitting tensile strength. hence it is established that ar-gfrp inclusion in hsc mixtures is more effective in enhancing the tensile strength than the compressive strength.
mix      % gfrp   7 days (mpa)   28 days (mpa)   s (28 days)   cv % (28 days)   % increase over reference mix (28 days)
m50 f0   0        4.84           6.35            0.49          7.79             0
m50 f1   0.3      5.28           7.53            0.20          2.63             18.50
m50 f2   0.6      6.26           8.28            0.31          3.75             30.41
m50 f3   0.9      6.68           8.79            0.37          4.24             38.35
m50 f4   1.2      7.27           9.68            0.47          4.85             52.36
figure 21 effect of ar-gfrp on flexural strength (modulus of rupture) of hsc.
figure 22 the percentage of increase over the reference mix due to addition of ar-gfrp on hsc: comparison between compressive, flexural and splitting tensile strength.
the test results show good agreement with swami et al. [15] and ghorpade [16], as discussed earlier and as shown in figure 23 for 1 percent fiber.
figure 23 comparisons of flexural strength test results with other related researches.
iv conclusion
the effects of the addition of alkali resistant glass fiber reinforced polymer (ar-gfrp) on the strength and mode of failure of high strength concrete (hsc) of up to 51 mpa cylinder compressive strength were experimentally investigated. the following conclusions were drawn based on the results of the compressive strength, splitting tensile strength, flexural strength, and density tests carried out at 7 and 28 days.
i. the maximum compressive strength of hsc was obtained at a fiber percentage of 1.2, achieving a 13.14% increase over the reference mix without fibers; with 0.6% fiber, a 12.36% strength increase was recorded.
ii. the ratio of 7 days' to 28 days' compressive strength decreased from 79.6% to 74.43% as the fiber percentage increased from 0.0 to 1.2.
iii. the density of hsc increased slightly from 2417 to 2441 kg/m3 as the fiber percentage increased from 0.0 to 1.2%.
iv. the splitting tensile strength of hsc increased continuously, reaching a 63.22% increase at 1.2% fiber.
v. the flexural strength (modulus of rupture) increased continuously, reaching a 52.36% increase at 1.2% fiber.
vi. it was observed that the percentage of increase in the splitting tensile and flexural strengths over the reference mix due to the addition of fibers is much higher than for the compressive strength. hence, it is established that ar-gfrp inclusion in hsc mixtures is more significant for enhancing the tensile strength than the compressive strength of hsc.
vii. the specimen failure in the compressive and splitting tensile strength tests became gradual as the fiber percentage increased, whereas the failure of the plain hsc specimens was sudden, brittle and completely destructive, with a loud sound. hence it is established that the presence of fibers in the matrix contributes towards preventing sudden crack formation.
viii. the test results show good agreement with aci committee 363 and other relevant research.
references
[1] j. gustavo and m. parra, "high-performance fiber-reinforced cement composites: an alternative for seismic design of structures," aci structural journal, vol. 102, no. 5, pp. 668-675, 2005.
[2] o. buyukozturk and d. lau, "high performance concrete: fundamentals and application," in the international conference on developments and application of concrete, istanbul, turkey, 30 november, 2007.
[3] aci committee 544.1r, "state-of-the-art report on fiber reinforced concrete," aci, detroit, 2002.
[4] astm, american society for testing and materials, "astm c150: standard specification of portland cement," astm, philadelphia, pennsylvania, 2004.
[5] astm, american society for testing and materials, "astm c33: standard specification for concrete aggregates," astm, philadelphia, pennsylvania, 2003.
[6] aci committee 363r, "state-of-the-art report on high-strength concrete," aci, detroit, 2010.
[7] s. kosmatka, b. kerkhoff and w. panarese, design and control of concrete mixtures, 14th ed., illinois: portland cement association, 2003.
[8] astm, american society for testing and materials, "astm c494: standard specification for chemical admixtures for concrete," astm, philadelphia, pennsylvania, 2004.
[9] astm, american society for testing and materials, "astm c192: standard practice for making and curing concrete test specimens in the laboratory," astm, philadelphia, pennsylvania, 2002.
[10] astm, american society for testing and materials, "astm c39: standard test method for compressive strength of cylindrical concrete specimens," astm, philadelphia, pennsylvania, 2003.
[11] bs, british standard institution, "method for making test cubes from fresh concrete," bs 1881: testing concrete, part 108, british standard institution, uk, 1993.
[12] astm, american society for testing and materials, "astm c496: standard test method for splitting tensile strength of cylindrical concrete specimens," astm, philadelphia, pennsylvania, 2004.
[13] astm, american society for testing and materials, "astm c293: standard test method for flexural strength of concrete (using simple beam with center-point loading)," astm, philadelphia, pennsylvania, 2002.
[14] mohamed m. ziara, the influence of confining the compression zone in the design of structural concrete beams, phd thesis, heriot-watt university, uk, 1993.
[15] a. neville, properties of concrete, 5th ed., london: british library, 2011.
[16] z. li, advanced concrete technology, hoboken, new jersey: john wiley and sons, 2011.
[17] b. swami, a. asthana and u. masood, "studies on glass fiber reinforced concrete composites – strength and behavior," challenges, opportunities and solutions in structural engineering and construction, vol. 4, no. 5, pp. 144-203, 2010.
[18] v. ghorpade, "an experimental investigation on glass fiber reinforced high performance concrete with silica fume as admixture," in 35th conference on our world in concrete and structures, singapore, 25-27 august 2010.
journal of engineering research and technology, volume 1, issue 3, 2014
bandwidth enhancement of patch antenna with anisotropic substrate using inset l-shaped feed and l slots on ground plane
amel boufrioua1
1 electronics department, technological sciences faculty, university constantine 1, ain el bey road, 25000, constantine, algeria, e-mail: boufrioua_amel@yahoo.fr
abstract—in this paper, an inset l-shaped feed rectangular patch antenna with dual l slots etched on the ground plane is proposed and analysed for increasing the bandwidth of the microstrip patch antenna. the results in terms of return loss and radiation pattern are given. the results show that dual wide bands are achieved and a better impedance matching for the upper and lower resonances is obtained. simulation results for the effect of a uniaxial substrate on the return loss and bandwidth of the rectangular patch antenna using an inset l-shaped feed with dual rectangular slots on the ground plane are also presented. our return loss results using ansoft hfss are compared with those available in the literature, showing good agreement.
index terms—wide band antenna, rectangular patch, anisotropic substrate, modified ground plane.
i introduction
the development of modern wireless communication has led to a need for broadband antennas, which have found widespread application in the wireless communication industry because of attractive features such as easy fabrication, low cost, and linear and circularly polarized radiation characteristics. because of these features, broadband antennas are used in many wireless applications such as wi-fi, bluetooth, gsm and gprs. the rectangular and circular patches are extensively used radiators, but they have a very limited bandwidth [1]. this limits their application in several practical cases; however, the narrow bandwidth of the microstrip antenna can be widened.
recently, most of the research on microstrip antennas has focused on methods to increase their bandwidth [2-10]. several patch designs with a single feed and dual frequency operation have been proposed recently [3-10]. when a microstrip patch antenna is loaded with reactive elements such as slots, stubs or a shorting pin, it exhibits tunable or dual frequency characteristics [8]. since the slots are cut at an appropriate position inside the patch, they neither increase the patch size nor largely affect the radiation pattern of the patch [9]. these slots can take different shapes, such as a rectangular or square slot, step slot, tooth-brush shaped slot, v-slot, u-slot, etc. [9]. the slot adds another resonant mode near the fundamental mode of the patch and realizes a dual frequency response [9]. the study of anisotropic substrates is interesting since it has been found that the use of such materials may have a beneficial effect on a circuit or antenna [2]. however, designers should carefully check for anisotropic effects in the substrate materials with which they work. in this paper, a novel wide band rectangular patch antenna printed on an isotropic or uniaxial anisotropic substrate is designed by using an inset l-shaped feed with dual l slots on the ground plane. the proposed antenna provides a significant size reduction and can considerably increase the bandwidth. the performance analyses of the proposed antenna are carried out using the ansoft hfss software, which is based on the finite element method; different parametric studies are performed, and the effect of the various antenna parameters on the return loss and the radiation of the proposed antenna is presented. we also present the effect of the uniaxial substrate on the bandwidth and the return loss at the lower and upper resonant frequencies.
ii antenna design
a simple rectangular patch antenna
the proposed structure, an inset l-shaped feed with dual l slots on the ground plane, is introduced in this study to increase the bandwidth; it is based on a simple rectangular microstrip patch, which is designed first in order to compare it with the proposed structure. the geometry of the simple rectangular microstrip patch of dimensions w × l (see figure 1) is printed on a grounded substrate of uniform thickness h, having a relative permittivity εr; the dielectric material is assumed to be nonmagnetic with permeability μ0.
figure 1 geometry of a simple rectangular patch antenna.
table 1 shows the different parameters of the simple rectangular patch antenna.
table 1 design parameters of a simple rectangular patch antenna
w: 15.8 mm
l: 8 mm
h: 1.6 mm
relative permittivity: εr = 4.5
ground plane: 34 × 20 mm2
figure 2 simulation of return loss s11 of a simple rectangular patch antenna.
figure 2 shows the frequency response of the simple rectangular patch antenna.
b rectangular patch antenna using inset l-shaped feed and l slots in the ground plane
the geometry of the proposed antenna, based on the previous simple rectangular patch, is shown in figure 3; in this case two l-shaped slots with the same dimensions are etched on the ground plane, and the feeding is accomplished with an inset l-shaped feed.
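the dimensions in table 1 can be cross-checked against the standard transmission-line model of a rectangular patch (textbook formulas, not part of this paper; a sketch, and the resulting frequency is only an estimate):

```python
import math

# transmission-line model estimate of the resonant frequency of the
# simple patch in table 1 (w = 15.8 mm, l = 8 mm, h = 1.6 mm, er = 4.5)
c = 3e8                      # speed of light, m/s
w, l, h, er = 15.8e-3, 8e-3, 1.6e-3, 4.5

# effective permittivity and fringing-length extension (hammerstad formulas)
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / w)
dl = 0.412 * h * (e_eff + 0.3) * (w / h + 0.264) / ((e_eff - 0.258) * (w / h + 0.8))
f_res = c / (2 * (l + 2 * dl) * math.sqrt(e_eff))
print(f"estimated resonant frequency ≈ {f_res / 1e9:.1f} ghz")
```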
table 2 design parameters of the rectangular patch antenna using an inset l-shaped feed with dual l slots on the ground plane
w: 15.8 mm
l: 8 mm
h: 1.6 mm
microstrip feed line: 2.8 mm
hs = hfi: 6 mm
wfi = hfl: 0.5 mm
wl = wfl: 2 mm
ws = hl: 12 mm
relative permittivity: 4.5
ground plane: 34 × 20 mm2
figure 3 geometry of the inset l-shaped feed with l slots in the ground plane.
figure 4 comparison of return loss and bandwidth between the proposed structure and the simple rectangular patch antenna.
it is clear that dual frequencies, with a very significant improvement in the bandwidth, are obtained for the proposed rectangular patch using an inset l-shaped feed with dual l slots on the ground plane compared to the simple rectangular patch antenna.
c different parametric study
with the parameters given in table 2 fixed, wl is varied over 2 mm, 4 mm and 6 mm. the variation of the return loss as a function of frequency for different values of the width wl of the l slot etched on the ground plane is shown in figure 5. it is also worth noting that a comparison of the return loss s11 between the structure with the dual l slots on the ground plane and the structure with dual rectangular slots on the ground plane, obtained by setting the parameter hh of the l slot on the ground plane equal to zero, is given.
figure 5 comparison of return loss of the structure with hh = 0 and the structure with hh = 6 mm, for three values of wl (mm).
it is clear that a significant improvement in the bandwidth of the proposed antenna using an inset l-shaped feed with dual l slots on the ground plane is obtained compared to the rectangular patch using an inset l-shaped feed with rectangular slots on the ground plane (the case hh = 0); we can also see clearly that for wl = 2 mm we obtain a wider bandwidth with optimum matching. the radiation patterns at the upper resonant frequency for different values of the slot width wl of the proposed structure, compared with the radiation pattern of a simple rectangular patch antenna, are illustrated in figures 6 and 7. the rectangular patch using an inset l feed with dual rectangular slots on the ground plane is obtained when the parameter hh of the proposed structure given in figure 3 is set equal to zero (hh = 0); because this structure is simpler than the previous one, some results pertaining to this case are given in figures 8 and 9.
figure 6 radiation pattern of a simple rectangular patch.
figure 7 radiation pattern of the rectangular patch antenna using inset l-shaped feed with dual l slots on the ground plane.
with the parameters given in table 2 fixed, the variation of the return loss as a function of frequency for different values of the notch length wfl of the inset l-shaped feed is shown in figure 8. in the case of figure 9, the rectangular patch using an inset l feed with dual rectangular slots on the ground plane is also studied; the patch is embedded in a substrate containing anisotropic materials with the optical axis normal to the patch and has a uniform thickness h.
the relative permittivity in this case can be represented by a tensor, with the relative permittivity in the direction perpendicular to the optical axis denoted εx (with εx = εy) and the relative permittivity in the direction of the optical axis denoted εz, as given by [2, 11]. it is worth noting that the results of figures 5 and 8 agree very well with those obtained by s. satthamsakul et al. [3].
figure 8 variation of return loss as a function of frequency for different values of the notch length wfl of the inset l-shaped feed, for hh=0
figure 9 effect of the uniaxial anisotropic substrate on the bandwidth and the return loss of the rectangular patch using an inset l feed with dual rectangular slots on the ground plane
from figure 9, the obtained results show that a significant improvement in the bandwidth is achieved for the anisotropic ratio ar > 1, where ar = εx/εz.

iii conclusion
in this paper, an inset l-shaped feed rectangular patch antenna with dual l slots etched on the ground plane has been analyzed. from the analysis it is found that a large bandwidth can be achieved by this novel structure, and consequently this antenna is very suitable for many applications, especially at the access points of wireless communications.

references
[1] i. j. bahl and p. bhartia, "microstrip antennas," artech house, dedham, ma, 1980.
[2] a. boufrioua, "resistive rectangular patch antenna with uniaxial substrate," in: antennas: parameters, models and applications, ch. 6, pp. 163-190, edited by albert i. ferrero, nova publishers, new york, 2009.
[3] s. satthamsakul, n. anantrasirichai, c. benjangkaprasert and t.
wakabayashi, "rectangular patch antenna with inset feed and modified ground-plane for wideband antenna," sice annual conference 2008, august 20-22, 2008, japan.
[4] h. f. pues and a. r. van de capelle, "an impedance matching technique for increasing the bandwidth of microstrip antennas," ieee trans. antennas propagat., vol. 37, pp. 1345-1354, 1989.
[5] a. boufrioua, "bilayer microstrip patch antenna loaded with u and half u-shaped slots," icmcs'14, 4th ieee international conference on multimedia computing and systems, april 14-16, 2014, morocco.
[6] m. k. meshram, b. r. vishvakarma, "gap-coupled microstrip array antenna for wide-band operation," international journal of electronics, vol. 88, pp. 1161-1175, 2001.
[7] j. a. ansari, a. mishra, "half u-slot loaded semicircular disk patch antenna for gsm mobile phone and optical communications," progress in electromagnetics research c, vol. 18, pp. 31-45, 2011.
[8] d. k. srivastava, j. p. saini, d. s. chauhan, "broadband stacked h-shaped patch antenna," international journal of recent trends in engineering, vol. 2, pp. 385-389, 2009.
[9] a. a. deshmukh, k. p. ray, "resonant length formulations for dual band slot cut equilateral triangular microstrip antennas," wireless engineering and technology, vol. 1, pp. 55-63, 2010.
[10] j. a. ansari, s. k. dubey, p. singh, r. u. khan, b. r. vishvakarma, "analysis of u-slot loaded patch for dual band operation," international journal of microwave and optical technology, vol. 3, pp. 80-84, 2008.
[11] a. boufrioua, a. benghalia, "effects of the resistive patch and the uniaxial anisotropic substrate on the resonant frequency and the scattering radar cross section of a rectangular microstrip antenna," aerospace science and technology, elsevier, vol. 10, pp. 217-221, 2006.
amel boufrioua was born in constantine, algeria; she received the b.s. degree in electronic engineering in 1996, and the m.s. and ph.d.
degrees in microwaves from the electronics department, constantine university, algeria, in 2000 and 2006 respectively. from february 2002 to december 2003, she was a research assistant with the space instrumentation laboratory at the national centre of space techniques "cnts" (oran, algeria); in november 2003, she became an assistant professor at the electronic engineering department (constantine university). since 2008, she has been a lecturer with the electronic department, university constantine 1; her areas of interest are microwaves and microstrip antennas. dr. amel boufrioua is the corresponding author and can be contacted at: boufrioua_amel@yahoo.fr.
(figure 9 legend: εx=εy=3.6, εz=5; εx=εy=5, εz=3.6; εx=εy=5, εz=6.4; εx=εy=6.4, εz=5; εx=εy=εz=5. figure 8 legend: wfl=6 mm, 4 mm, 2 mm, 0. axes in both: frequency (ghz) versus return loss s11 (db).)

transactions template journal of engineering research and technology, volume 6, issue 1, april 2019 11
detecting significant events in arabic microblogs using soft frequent pattern mining
jehad h. zendah, ashraf y. maghari
abstract— nowadays, people use microblogs as a main platform to write about events that occur in their environment. much research has been conducted on event detection for the english language, but the arabic context has not received as much attention. furthermore, existing approaches rely on platform-dependent features such as hashtags, mentions, or retweets, which makes them less effective when these features are not present. further, some approaches that depend only on bursty or frequently used words detect general viral topics instead of event-related topics. in this work, we present a new approach for detecting events written in arabic using frequent event triggers. the approach first identifies the part-of-speech tags of a sentence and then analyzes them to extract event triggers. a soft frequent pattern mining method is applied to find co-occurring event triggers.
the approach has been evaluated using a subset of the evetar dataset. we divided the data into timely constrained windows to mimic data stream behavior. two experiments with different time intervals were conducted, using 6-hour and one-day windows, achieving average f-measures of 0.644 and 0.717 respectively. the results show that our approach outperformed some widely known approaches and was comparable with others.
index terms—event detection, event trigger, soft frequent pattern mining.

i introduction
nowadays, microblogs have become the main virtual environment for connecting people and sharing digital content. they allow users to post and share short texts, images, and/or short videos of any type. this content is delivered to the network of followers or virtual friends of the content creator almost instantly. content type depends on the user's interests and situation. at the occurrence of an event, users post details about the event for their friends. with this instant delivery, event news usually spreads faster and reaches a wider audience in microblogs compared to mainstream media [1]. events are real-world occurrences that take place in a certain geographical location over a certain time period [2]. capturing information about an event can help in many aspects: for example, it can accelerate crisis response when information about disastrous events is retrieved at the time of occurrence, and it can help people easily track occurring events. in the microblog context, dou, w., et al. [3] defined an event as "an occurrence causing change in the volume of text data that discusses the associated topic at a specific time. this occurrence is characterized by topic and time, and often associated with entities such as people and location". the definition of event detection (ed) depends on the task held by the researcher. in general, ed is the process of discovering and identifying new or previously unidentified events from a set of documents [4].
event detection from text has been addressed by many researchers, employing techniques from fields such as machine learning, data and text mining, and natural language processing. the first event detection research program was topic detection and tracking (tdt), conducted by [2]. the techniques introduced in tdt are used to monitor different newswire sources so that users can be aware of occurring events. they were applied to the full text of well-written news articles; however, with the emergence of microblogs, new challenges arise when using these techniques. for example, content generated by microblog users is constrained to be very short, so if the traditional term frequency-inverse document frequency (tf-idf) is used on such short documents, it will result in a sparse vector issue. microblog posts are a very noisy source of data: according to [5], microblog posts do not always refer to an actual event or a subject of importance; most of the time they contain meaningless or uninteresting daily life activity. in addition, the content is generated by anyone, so grammatical errors, informal words, or incomplete context will be introduced. event detection techniques are classified based on the event type, the detection task, and the detection method [6]. the type of event can be specified or unspecified, where specified event detection techniques require prior knowledge about the event, mainly a related keyword or a query. on the other hand, unspecified event detection techniques do not require any prior knowledge; such techniques require ongoing monitoring and analysis of the incoming documents to find an increased keyword appearance count, which can be an indicator of a potential event. based on the detection task, the detection process can target new or retrospective events. many existing approaches to event detection are tested on english data. to the best of our knowledge, only a few studies have been conducted for the arabic language [7]. the arabic language introduces many challenges in the text mining and natural language processing (nlp) fields, due to its vocabulary richness and its morphological and orthographic nature [8]. these challenges are inherited by the event detection problem. other existing approaches utilize platform-specific features such as hashtags, retweets, and followers, together with external knowledge, to enhance event detection [9, 10]. the problem with these approaches is that when these attributes do not appear, for whatever reason, the accuracy of the event detection is affected. in addition to these limitations, most of the approaches found in the literature depend on the burst behavior of specific words, but not every word that shows a burst is related to an event. for example, in the arabic context, users usually post praising words to allah such as "subhan allah wa behamdeh" (سبحان الله وبحمده). these words will sometimes show bursts, but in reality they do not belong to any event. in this paper we propose an approach for detecting significant events from arabic microblog posts that tackles these challenges. our approach relies on extracting event triggers from post text using pre-defined rules applied to the part-of-speech (pos) tags of the tweet. this process is essential to separate posts that may describe an event occurrence from posts that do not. a soft frequent pattern mining approach is applied to the posts containing event triggers, and the resulting clusters are treated as detected events. the proposed approach targets arabic content as a first stage for early event reporting, situational awareness and summarization for an arabic audience.
arabic event detection can be very useful in crisis response applications, as many places in the middle east have conflicts and thus different types of events can occur. the rest of this paper is organized as follows: section 2 reviews event detection techniques, section 3 explains the methodology, section 4 demonstrates the experiments and the evaluation, and the conclusion is presented in section 5.

ii literature review
many approaches to event detection in microblogs use platform-specific features, which makes their accuracy dependent on those features and prevents them from being ported to other platforms. other approaches focus on finding a burst of words in the stream to identify hot topics; these approaches detect a topic without considering whether that topic is an actual event or a viral general topic. in addition, considerable research has been done and tested on english data, but only a few studies consider the arabic language. in the following sections we review some of these studies. alsaedi, n. and p. burnap [7] proposed a novel arabic event detection framework for twitter data. in the framework, the data undergoes a preprocessing step to enhance data quality. a naïve bayes classifier is used to distinguish event-related tweets from irrelevant tweets; the classifier is trained on 1500 tweets, with their terms used as features. tweets are represented by a set of temporal, spatial, and textual features such as retweet ratio, mention ratio, hashtag ratio, tweet sentiment, etc. tweets are then clustered to distinguish events using an online clustering algorithm, with the average weight of each term over all documents in a cluster used as the cluster centroid. the output was evaluated by splitting the dataset into days and then calculating the average precision, which was 80.24% for disruptive events. our approach uses a similar scheme by filtering tweets before further processing.
using the terms of a small dataset to train the classifier has two drawbacks, as stated in [11]: first, it can decrease classifier accuracy, as the keywords introduced in the dataset are tied to specific events and can lead to undesirable results when new events emerge; second, using the bag of words will result in vector sparseness, especially in a dynamic and rapidly changing corpus. our approach, however, uses a set of rules that examine the tweet's syntax and determine whether it contains an event trigger; tweets that do not contain an event trigger are filtered out. in [12] an approach for event detection using a multimodal factor analysis model is proposed. the approach depends on two feature sets: the hashtag's bag of words, created from all the tweets containing the hashtag, and a geolocation vector containing the latitude and longitude values of all tweets containing the hashtag. a probabilistic generative model is used to fuse these features, and an expectation maximization algorithm is derived for finding the maximum likelihood estimates of the model parameters. the approach assumes that hashtags are used during event occurrence and that tweets containing these hashtags are geographically close together. in [13] an unsupervised approach for event extraction from arabic tweets is presented. the approach tags the event expression and the related entities and links them to a knowledge base. event expressions are identified using a set of rules based on the arabic annotation guidelines for events (consortium, 2005). this approach processes each tweet independently of the others, so it will fail to identify significant events; our approach works on the burst behavior of event triggers, so only trending/significant events are detected. in [9] an approach based on a multiple assignment graph partitioning algorithm is introduced, where an event is represented by a cluster of related words.
the authors address the problem of message posting delays, which lead to event attributes being scattered across different timestamps, so that the significance of event-related words decreases as time goes on. words are modeled using three twitter-based information-theoretic metrics. the first metric is conditional word tweet frequency-inverse trend word tweet frequency (cwtf-itwtf), a time-varying measurement similar to the popular tf-idf; its objective is to decrease the weight of trendy ongoing events. the second metric is word frequency-inverse trend word frequency (wf-itw), a time-varying measure that considers the frequency of keywords. lastly, weighted conditional word tweet frequency-inverse trend weighted conditional word tweet frequency (wcwtf-itwcwtf) depends on features from twitter, such as the number of followers and the number of retweets, to find the importance of a keyword. a fuzzy time series signal is produced from the three metrics. the approach is evaluated over a dataset collected manually using the twitter stream api, with a time window of 6 hours over a one-month horizon. to create a ground truth, the most frequent keywords of each time window are extracted and presented to experts for annotation. the evaluation measurements used are keyword recall/precision and f1-score. this approach cannot be used on microblogging streams that do not produce these features; our approach, in contrast, does not consider any twitter-based features. the reviewed approach also depends only on the bursty pattern of a set of keywords, so the detected event can be a trending topic rather than a real-world event, and it requires different parameter tuning to achieve a good f1-score.
in [14] a generic framework for event detection that depends on a dynamic multivariate graph is presented. a user-to-user undirected graph is built where vertices represent users and edges represent the follow relationship between them. every vertex contains a textual feature vector of domain-specific keywords. the approach focuses on searching for evolving subgraphs over time with anomalous features. the evaluation metrics used are false positive rate (fpr) and true positive rate (tpr). other approaches depend on frequently used words, such as the approach presented in [10], which depends on clustering wavelet signals. a signal is built for each word using wavelet analysis to reduce space and storage. auto-correlation is calculated for each signal; signals that produce skewed auto-correlation are identified as insignificant words and removed. similarity between words is calculated using the cross-correlation between every pair of words, and similar words are determined using a threshold value, where words with a similarity higher than the threshold are clustered together. the approach is evaluated over a manually collected dataset from twitter in which non-english characters are removed from the text. the evaluation is conducted manually considering only precision, as the authors could not enumerate all events that occurred at the time of collection, so recall is discarded. different experiments with different configurations are used, and the best result achieved was 16.7%. clustering based on only a pair of words will produce generic events or topics [15]; for example, if bombing events occurred in the same country at different locations, such an algorithm would detect an event that does not differentiate between the two locations. in [15] a novel method called soft frequent pattern mining (sfpm) is introduced. the method is used to tackle the problem of using patterns of only pairs of terms for event detection.
the method uses the same concept as the frequent pattern mining (fpm) technique in that it examines simultaneous co-occurrence patterns of degree greater than two; however, it is less strict than fpm, as it does not require all terms in the pattern to be frequent in the same document, but only a large subset of them. the approach consists of two main components. the first is term selection, in which a fixed number of terms is selected for grouping. the selection process depends on the existence of a randomly collected set of tweets called a reference corpus. the likelihood of appearance is estimated for each term in both the reference corpus and the incoming tweet corpus, and the ratio of the two likelihoods is computed. terms with the highest ratio are the most significant, as they have a frequency in the incoming corpus that is higher than usual relative to the reference corpus. the second component is the application of the algorithm itself to the selected top terms. using a static reference corpus will bias the term selection, as newly emerging terms will produce lower ratios and thus not be selected. in [1] an approach based on the traditional fp-growth method is presented. fp-growth produces the most frequently used combinations of words that co-occur together in a tweet, where what counts as frequent depends on a fixed threshold value. the approach introduces a dynamic procedure for calculating the threshold, so that it can handle the changing number of words over time; the procedure depends on a combination of statistical values, namely the average and median of the words' frequencies. a preprocessing step is performed for text tokenization and removal of stop words, mentions, urls, and hashtags. a post-processing step is also performed to eliminate duplicate patterns, which are determined by calculating the cosine similarity between patterns with a threshold of 0.75.
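the likelihood-ratio term selection described above can be sketched as follows; this is an illustrative sketch of the idea in [15], and the smoothing constant and function names are our assumptions, not from the paper:

```python
from collections import Counter

def term_ratios(incoming_docs, reference_docs):
    """Score each term by its likelihood of appearance in the incoming
    corpus divided by its likelihood in the reference corpus."""
    inc = Counter(w for doc in incoming_docs for w in doc)
    ref = Counter(w for doc in reference_docs for w in doc)
    n_inc = sum(inc.values())
    n_ref = sum(ref.values())
    eps = 1e-6  # smoothing so terms unseen in the reference corpus do not divide by zero
    # high ratio = unusually frequent in the incoming corpus relative to the reference
    return {w: (inc[w] / n_inc) / (ref[w] / n_ref + eps) for w in inc}
```

a term like a breaking-news keyword, absent from the reference corpus but frequent in the incoming window, receives a very large ratio and is kept for the pattern-mining step.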
the approach was tested using two datasets collected manually by querying the twitter stream api with sets of keywords identifying two events: the uk general elections 2015 and the greece crisis 2015. in [16] a multiple-source approach that collects data from the twitter stream and newswire websites is introduced. every source is considered an independent stream, and every stream undergoes two stages. first, a weighted graph is built in which nodes represent words and edges represent the number of documents in which the two connected words co-occur. a pruning process is conducted on the graph to keep emerging and important words. the multiple sources are combined by merging the pruned graphs, and events are detected using a voltage-based clustering algorithm on the resulting graph. the approach was tested using two sources, twitter and tumblr, and achieved an f-measure of 0.897.
(figure 1 here depicts the pipeline: data collection → data preprocessing (tokenization, cleaning, normalization, stemming, part-of-speech (pos) tagging, stop word removal) → event trigger extraction (apply pre-defined rules on pos tags to extract event triggers) → significant event detection (identify top event triggers and apply soft frequent pattern mining).)
the performance of approaches that depend on platform-specific features is affected by the absence of these features. furthermore, approaches that depend only on the presence of frequent keywords are very likely to detect general topics instead of real events. in this paper, however, we introduce an approach that depends on the textual features of the sentence instead of platform-specific features. moreover, the approach depends on frequent event triggers, which enables it to detect real events.

iii methodology
figure 1 shows the steps of our approach for event detection, i.e.
data collection, data preprocessing, event trigger extraction (applying a set of pre-defined rules to the extracted part-of-speech tags), and finally event detection (applying the soft frequent pattern mining algorithm to the top event triggers). these steps are discussed hereafter.

a data collection
in this step, we collect the data to be analyzed. usually, tweets are collected from the twitter stream, but in our case an existing dataset that supports our task is used. practically, a twitter stream cannot be analyzed all at once, so the incoming data is processed in parts in chronological order; we call these parts timely constrained windows.

b data preprocessing
this is a preliminary step required for preparing the dataset. we perform tokenization on every tweet. then a cleaning step is performed, in which we remove latin alphabets, special characters, emoticons and urls. we also remove hashtag symbols while keeping their textual content in the tweets. tweets with fewer than two words are removed. after that, a normalization step followed by stemming is performed to enhance arabic word similarity. part-of-speech (pos) tagging is performed on every processed tweet. lastly, stop words are removed from the dataset.

c event trigger extraction
an event trigger is a term or a group of terms that represents the event itself. in our approach we use event triggers as indicators of the occurrence of an event in a tweet; they also represent the important words in the text. this helps us shorten the mining process and extract real events instead of popular topics. a set of pre-defined rules is used to extract event triggers as follows:
verb phrase (vp)
rule 1.1: if it contains a verb in base form (vb), past tense form (vbd), non-3rd person singular present form (vbp) or past participle form (vbn) tag followed by a noun (nn) tag, then we consider both tags as an event trigger,
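the cleaning step above can be sketched roughly as follows; this is a minimal illustration, as the paper does not specify its exact tools, and the regular expressions here are our assumptions:

```python
import re

def clean_tweet(text):
    """Clean one tweet: drop URLs, hashtag symbols (keeping the hashtag's
    word), Latin alphabets and remaining punctuation/emoticon characters."""
    text = re.sub(r"https?://\S+", " ", text)   # remove URLs first
    text = text.replace("#", " ")               # keep hashtag content, drop '#'
    text = re.sub(r"[A-Za-z]+", " ", text)      # remove latin alphabets
    text = re.sub(r"[^\w\s]", " ", text)        # remove punctuation/emoticons
    tokens = text.split()
    # tweets with fewer than two words are removed
    return tokens if len(tokens) >= 2 else None
```

note that python's `\w` matches arabic letters by default, so only non-word symbols are stripped while the arabic text survives.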
as shown in table 1.
rule 1.2: if it contains a (vb), (vbd), (vbp) or (vbn) tag followed by an adjective (jj) tag, then we consider both tags as an event trigger, as shown in table 2.
rule 1.3: if the above rules do not apply, then we consider the (vb), (vbd), (vbp) or (vbn) tag alone as the event trigger.
noun phrase (np)
rule 2.1: if it contains a noun (nn) tag followed by an (nn) or singular proper noun (nnp) tag, then we consider both tags as an event trigger, as shown in table 3.
rule 2.2: if it contains a noun with an (nn) or (nnp) tag followed by a verb with a (vb), (vbd), (vbp) or (vbn) tag, then we consider both tags as an event trigger, as shown in table 4.
(figure 1: steps used to detect significant arabic events.)
after identifying all event triggers in the dataset, a list of the event triggers and their frequencies is maintained. as the approach focuses only on significant events, we assume a significant event is represented by the most-used combinations of event triggers; thus we keep event triggers with high frequencies and remove the rest. for simplicity, the average of all frequencies is used as the threshold value for determining the frequent event triggers.

d significant event detection
in this step we generate groups of similar event triggers, where each group represents an event. we use an adapted version of the soft frequent pattern mining algorithm [15] to cluster event triggers that co-occur frequently in the documents; note that the terms are not necessarily frequent in the same document. originally, petkos et al. [15] extract important words by comparing the words in the dataset with a reference corpus.
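rules 1.1-2.2 can be sketched as a single scan over (token, tag) pairs; this is an illustrative sketch in which the determiner-prefixed tags used in the paper's tables (dtnn, dtjj) are simplified away:

```python
VERBS = {"VB", "VBD", "VBP", "VBN"}

def extract_triggers(tagged):
    """Apply the trigger rules to a list of (token, pos_tag) pairs."""
    triggers = []
    i = 0
    while i < len(tagged):
        word, tag = tagged[i]
        nxt_word, nxt_tag = tagged[i + 1] if i + 1 < len(tagged) else ("", "")
        if tag in VERBS and nxt_tag in {"NN", "JJ"}:       # rules 1.1 and 1.2
            triggers.append((word, nxt_word)); i += 2
        elif tag == "NN" and nxt_tag in {"NN", "NNP"}:     # rule 2.1
            triggers.append((word, nxt_word)); i += 2
        elif tag in {"NN", "NNP"} and nxt_tag in VERBS:    # rule 2.2
            triggers.append((word, nxt_word)); i += 2
        elif tag in VERBS:                                 # rule 1.3: verb alone
            triggers.append((word,)); i += 1
        else:
            i += 1
    return triggers
```

for example, a verb tagged vbd followed by a noun tagged nn (the pattern of table 1) yields a two-word trigger, while an isolated verb falls back to rule 1.3.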
the problem with this solution is that maintaining a reference corpus requires a lot of effort; also, related words of a new event that do not appear in the reference corpus will be considered unimportant, so the identification of important words is very biased. in our work, however, we select important words by selecting event triggers using rules that depend on the part-of-speech tags. soft frequent pattern mining (sfpm) is derived from the concept of frequent pattern mining (fpm). fpm is the process of finding frequent items in a set of transactions [17]. an item is said to be frequent if its frequency is above a predefined threshold; a frequent pattern is a set of items that co-occur in the same transaction with a frequency above a threshold. we consider the task of event detection as a frequent pattern mining problem. when an event occurs, a group of users will start tweeting about it using similar word patterns. we assume that an event is represented by a set of event triggers, not just one. this is due to the differences in writing style between users, which make the pos tagger produce different tags, and thus different event triggers, for the same event. in addition, the pos tagger may incorrectly tag a set of words that happen to satisfy the rules shown above; in our work, these incorrectly captured features still have high frequency in the case of events, so they do not affect the detection process. to formulate the algorithm, suppose we have a set of tweets t of size n and k event triggers (et), where every tweet in t contains at least one event trigger. our task is to group similar event triggers together and retrieve their common tweets as the detected event; the objective of the algorithm is thus to produce sets of grouped event triggers. initially, every event trigger et is treated as a single event. to be able to merge ets, a numerical representation is needed.
a vector ds of size n is calculated for each et, where ds(n) = 1 when the et appears in the nth tweet and ds(n) = 0 otherwise. the popular cosine similarity measure is used for comparison, and two event triggers are merged when their calculated similarity is above a threshold value. after merging two event triggers, a vector summation is performed on their ds vectors, and the resulting vector is assigned to the newly created group. if no event trigger is merged in a single full iteration, the produced group of event triggers is treated as an event.
(tables 1-4: arabic example tweets, each shown with its pos tags and the event trigger extracted by rules 1.1, 1.2, 2.1 and 2.2, respectively.)
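the ds vectors and their cosine comparison can be sketched as follows; this is a minimal illustration using plain lists, with function names of our own choosing:

```python
def ds_vector(trigger, tweets):
    """ds(n) = 1 when the trigger appears in the nth tweet, 0 otherwise."""
    return [1 if trigger in tweet else 0 for tweet in tweets]

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

two triggers that appear in exactly the same tweets get similarity 1 and are merged; triggers with disjoint tweet sets get similarity 0 and stay apart.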
the process is repeated for all event triggers that are not assigned to any group, and the algorithm terminates when there are no further merges. algorithm 1 shows pseudocode of the adapted sfpm.

algorithm 1: adapted soft frequent pattern mining
input: list of event triggers (et)
output: sets of grouped event triggers

    for i = 1 to et.count
        calculate et[i].ds
    end for
    events = et
    isrepeat = true
    while (isrepeat) do
        temp = events
        isassigned = integer array of size temp.count
        newevents = empty set
        isrepeat = false
        for i = 1 to temp.count - 1
            if isassigned[i] = 1 then
                skip
            end if
            for j = i + 1 to temp.count
                if similarity(temp[i].ds, temp[j].ds) > θ(|temp[i].ds|) then
                    isassigned[j] = 1
                    temp[i] = merge(temp[i], temp[j])
                    isrepeat = true
                end if
            end for
            newevents.add(temp[i])
        end for
        events = newevents
    end while

iv experiments & evaluation
a dataset
to test the correctness of our event detection approach, we need a dataset suited to the task. evetar [18] is an arabic dataset targeted at event detection. it contains a total of 590,066,789 tweets and covers 66 significant events. it was collected over a one-month period using wikipedia's current events portal at that time. an event is represented by a set of tweets that are relevant to it during the time period in which it occurred. owing to privacy concerns, the dataset contains only the ids of the tweets, and we had to fetch the actual content from twitter. due to the large volume of the original dataset, we selected a sample of 134,069 tweets that covers the same number of events as the original dataset; this sample is provided by the dataset authors. we developed a small tool that uses the twitter api to fetch the content of each tweet. after iterating over the dataset, we were able to fetch only 59,732 tweets covering 50 significant events, as most of the tweets had been deleted or their authors' accounts suspended or deleted.
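algorithm 1 can be turned into runnable code along the following lines; note that the paper's threshold θ is a function of |ds|, whereas this sketch uses a fixed constant, and merging sums the ds vectors as described above:

```python
def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def adapted_sfpm(ds, threshold=0.5):
    """Greedily merge event triggers whose ds vectors are similar.
    ds maps each trigger to its binary presence vector over the window."""
    events = [({t}, vec[:]) for t, vec in ds.items()]
    repeat = True
    while repeat:                      # stop when a full pass makes no merge
        repeat = False
        merged, assigned = [], set()
        for i in range(len(events)):
            if i in assigned:
                continue
            group, vec = events[i]
            for j in range(i + 1, len(events)):
                if j in assigned:
                    continue
                g2, v2 = events[j]
                if cosine(vec, v2) > threshold:
                    assigned.add(j)
                    group = group | g2
                    vec = [a + b for a, b in zip(vec, v2)]  # vector summation
                    repeat = True
            merged.append((group, vec))
        events = merged
    return [group for group, _ in events]
```

each returned set of triggers corresponds to one detected event; its member triggers share the tweets in which they co-occur.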
Only 23,973 tweets are labeled; the remaining tweets do not represent an event or do not belong to any of the covered events. Figure 2 shows the retrieved labeled events and the corresponding number of tweets.

B Measurements
We used the same measurements introduced in [19], described as follows:

Event recall: in general, recall is a measurement used in text retrieval; it is the fraction of relevant instances that have been retrieved over the total number of relevant instances [20]. A detected event is a cluster of tweets, and a cluster is a true positive if the tweets it contains cover any of the reference events. As a cluster may contain hundreds or thousands of tweets, it is very likely to contain tweets that do not belong to the event. Thus, the proportion of tweets in the cluster that are part of a single reference event is computed; if the proportion is greater than a threshold, the cluster is counted as a true positive. We used a threshold value of 0.5, as recommended by [19]. Recall in event detection is therefore calculated as:

Recall = (number of reference events detected) / (total number of reference events)

Event precision: precision is the fraction of relevant instances among the retrieved instances [20]. Precision in event detection is calculated as:

Precision = (number of true-positive clusters) / (total number of detected clusters)

F-measure: the harmonic average of precision and recall, used to assess the accuracy of classification problems [20]:

F-measure = 2 * Precision * Recall / (Precision + Recall)

Figure 2: Distribution of events over the retrieved dataset

C Experiment Setup
We conducted our experiments on EveTAR to compare our results with other event detection approaches, and developed a system implementing our approach. In reality, Twitter data arrives as a stream of tweets in chronological order, so it is not feasible to apply the event detection approach to the whole dataset at once.
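The cluster-level evaluation described under Measurements can be computed as follows (a minimal sketch; the function and variable names are illustrative, not from the paper):

```python
def evaluate(clusters, reference_events, labels, threshold=0.5):
    """clusters: list of lists of tweet ids. reference_events: list of event ids.
    labels: dict mapping tweet id -> event id (or None for unlabeled tweets).
    A cluster is a true positive when more than `threshold` of its tweets
    belong to one reference event."""
    detected = set()
    tp = 0
    for cluster in clusters:
        counts = {}
        for t in cluster:
            e = labels.get(t)
            if e is not None:
                counts[e] = counts.get(e, 0) + 1
        if counts:
            best_event, best = max(counts.items(), key=lambda kv: kv[1])
            if best / len(cluster) > threshold:
                tp += 1
                detected.add(best_event)
    recall = len(detected) / len(reference_events) if reference_events else 0.0
    precision = tp / len(clusters) if clusters else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f

labels = {1: "a", 2: "a", 3: None, 4: "b"}
r, p, f = evaluate([[1, 2, 3], [4], [3]], ["a", "b", "c"], labels)
# here recall = precision = f = 2/3: two of three clusters pass the 0.5
# threshold and cover two of the three reference events
```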
To mimic this behavior, we split the tweets into chronologically ordered chunks (subsets). The number of subsets depends on the specified time interval. We chose two time intervals: a one-day interval and a 6-hour interval. Unlike other approaches, relatively long intervals were selected because of the small number of tweets retrieved from the dataset IDs.

D Results
After running our system on the resulting subsets of the 6-hour interval, we achieved the results shown in Figure 3. We can observe that the lowest F-measure occurs at window 15; this is due to the poor distribution of events in that window, where most events have a low number of tweets. Consequently, the frequencies of the event triggers belonging to these events are low too, and using the average as a threshold value eliminates them. This is a limitation of our threshold selection, but our objective in this work is to detect significant events, which are the ones with the highest frequencies. On the other hand, subset 19 produced the highest F-measure. The averages of recall, precision and F-measure are 0.607, 0.684, and 0.644, respectively.

The one-day interval achieved the results shown in Figure 4. In subset 1 we achieved the highest F-measure, with a value of 0.888. We were able to detect events with a lower number of tweets because events with a high number of tweets produced many different event triggers; their frequency was therefore distributed among these triggers, which made the average threshold value more effective. Table 5 shows a sample of tweets from the same event that produce different event triggers. The lowest F-measure, 0.533, was achieved in subset 11. The averages of recall, precision and F-measure are 0.654, 0.793, and 0.717, respectively. Table 6 shows the average measurements compared with three approaches implemented on EveTAR [10, 21, 22].
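The chronological chunking used in the experiment setup can be sketched as follows (an illustrative helper, not from the paper; empty intervals are simply skipped):

```python
from datetime import datetime, timedelta

def split_into_windows(tweets, hours):
    """tweets: chronologically ordered (timestamp, text) pairs.
    Returns consecutive chunks covering `hours`-long intervals."""
    if not tweets:
        return []
    windows, current = [], [tweets[0]]
    window_end = tweets[0][0] + timedelta(hours=hours)
    for ts, text in tweets[1:]:
        if ts >= window_end:
            windows.append(current)
            current = []
            # advance the boundary past any intervals containing no tweets
            while ts >= window_end:
                window_end += timedelta(hours=hours)
        current.append((ts, text))
    windows.append(current)
    return windows
```

Each returned chunk can then be fed independently to the detection pipeline, mimicking a stream processed one interval at a time.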
Our approach outperformed both EDCoW and Peaky Topics. Compared to MABED, we achieved better precision, but there was a wide difference in recall in favor of MABED. This is because the fetched dataset was incomplete: event-related tweets were lost during the fetch process, so our approach failed to detect events with lower frequencies. This is acceptable, as our objective is to detect significant events. Overall, our experiments show that wider time intervals produce better results, a finding that coincides with both [9, 23], since wide intervals cover enough tweets about an event for all of its event triggers to form more frequent patterns.

V Conclusion
In this paper, we have presented a soft frequent pattern mining based approach that uses event triggers for detecting significant Arabic events in microblog text. Our approach is based solely on the textual features of the text, without relying on any platform-specific features such as hashtags, mentions, or retweets; it depends only on the event triggers extracted from the text. Experimental results indicate that the approach outperformed some widely known approaches and is comparable with others. It also detects real events with good accuracy instead of general viral topics. In future work, we will enhance the extraction of frequent event triggers using a dynamic threshold value, and adapt our approach to make it applicable for real-time use.

References
[1] Alkhamees, N. and M. Fasli. Event detection from social network streams using frequent pattern mining with dynamic support values. In 2016 IEEE International Conference on Big Data. IEEE, 2016.
[2] Allan, J. Topic Detection and Tracking: Event-Based Information Organization. Vol. 12. Springer Science & Business Media, 2012.
[3] Dou, W., et al. Event detection in social media data.
In IEEE VisWeek Workshop on Interactive Visual Text Analytics: Task-Driven Analysis of Social Media Content, 2012.
[4] Allan, J. Introduction to topic detection and tracking. In Topic Detection and Tracking. Springer, 2002, pp. 1-16.
[5] Hurlock, J. and M. L. Wilson. Searching Twitter: separating the tweet from the chaff. In ICWSM, 2011.
[6] Atefeh, F. and W. Khreich. A survey of techniques for event detection in Twitter. Computational Intelligence, 2015, 31(1): pp. 132-164.
[7] Alsaedi, N. and P. Burnap. Arabic event detection in social media. In International Conference on Intelligent Text Processing and Computational Linguistics. Springer, 2015.
[8] Farghaly, A. and K. Shaalan. Arabic natural language processing: challenges and solutions. ACM Transactions on Asian Language Information Processing (TALIP), 2009, 8(4): p. 14.
[9] Doulamis, N. D., et al. Event detection in Twitter microblogging. IEEE Transactions on Cybernetics, 2016, 46(12): pp. 2810-2824.
[10] Weng, J. and B.-S. Lee. Event detection in Twitter. ICWSM, 2011, 11: pp. 401-408.
[11] Wang, X., L. Tokarchuk, and S. Poslad. Identifying relevant event content for real-time event detection. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 2014.
[12] Yılmaz, Y. and A. O. Hero. Multimodal event detection in Twitter hashtag networks. Journal of Signal Processing Systems, 2016, 90(2): pp. 185-200.
[13] Mohammad, A.-S. and O. Qawasmeh. Knowledge-based approach for event extraction from Arabic tweets. International Journal of Advanced Computer Science & Applications, 2016, 1: pp. 483-490.
[14] Shao, M., et al. An efficient approach to event detection and forecasting in dynamic multivariate social media networks. In Proceedings of the 26th International Conference on World Wide Web, 2017.
International World Wide Web Conferences Steering Committee, 2017.
[15] Petkos, G., et al. A soft frequent pattern mining approach for textual topic detection. In Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14). ACM, 2014.
[16] Katragadda, S., R. Benton, and V. Raghavan. Framework for real-time event detection using multiple social media sources. 2017.
[17] Aggarwal, C. C. and J. Han. Frequent Pattern Mining. Springer, 2014.
[18] Almerekhi, H., M. Hasanain, and T. Elsayed. EveTAR: a new test collection for event detection in Arabic tweets. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2016.
[19] Petrovic, S. Real-time event detection in massive streams. 2013.
[20] Sammut, C. and G. I. Webb. Encyclopedia of Machine Learning. Springer Science & Business Media, 2011.
[21] Guille, A. and C. Favre. Mention-anomaly-based event detection and tracking in Twitter. In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 2014.
[22] Shamma, D. A., L. Kennedy, and E. F. Churchill. Peaks and persistence: modeling the shape of microblog conversations. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work. ACM, 2011.
[23] Gaglio, S., G. L. Re, and M. Morana. Real-time detection of Twitter social events from the user's perspective. In 2015 IEEE International Conference on Communications (ICC). IEEE, 2015.

Figure 3: The results produced by dividing the dataset into 6-hour intervals
Figure 4: The results produced by dividing the dataset into one-day intervals

Table 5. Tweets belonging to the same event that produce different event triggers
Tweet: بي بي سي: غضب إسرائيلي وأمريكي عارم على توقيع عباس رئيس السلطة طلب الانضمام للمحكمة الجنائية الدولية ("BBC: sweeping Israeli and American anger over PA president Abbas signing the request to join the International Criminal Court")
Event triggers: توقيع رئيس، توقيع عباس
Tweet: نتنياهو في رده على توقيع الرئيس عباس على معاهدة روما يقول إن على السلطة أن تحالفها مع حركة حماس ("Netanyahu, in his response to president Abbas signing the Rome treaty, says the authority, with its alliance with the Hamas movement, should be the one before the International Criminal Court")
Event triggers: توقيع عباس، توقيع رد
Tweet: عباس يوقع اليوم طلب الانضمام إلى المحكمة الجنائية الدولية بعد رفض مشروع القرار الفلسطيني ("Abbas today signs the request to join the International Criminal Court after the rejection of the Palestinian draft resolution")
Event triggers: رفض مشروع، يوقع طلب

Table 6. Summary of the results achieved by our approach compared to other approaches applied to EveTAR

Approach | Recall | Precision | F-measure
EDCoW | 0.15 | 0.09 | 0.11
Peaky Topics | 0.80 | 0.11 | 0.19
MABED | 0.92 | 0.61 | 0.73
Our approach (6-hour interval) | 0.607 | 0.684 | 0.644
Our approach (1-day interval) | 0.654 | 0.793 | 0.717

Journal of Engineering Research and Technology, Volume 1, Issue 2, June 2014

A Robust PSS Based Advanced H2 Frequency Control to Improve Power System Stability - Implementation under GUI/MATLAB
Ghouraf Djamel Eddine and Naceri Abdellatif
Department of Electrical Engineering, University of SBA, IRECOM Laboratory, BP 98, 22000, Algeria. E-mail: jamelbel22@yahoo.fr

Abstract— This article presents a comparative study between two control strategies, a classical PID regulator and a robust H2 controller based on LQG control with a Kalman filter, applied to the automatic excitation control of powerful synchronous generators, to improve the transient stability and robustness of a single machine infinite bus (SMIB) system. The computer simulation results have proved the efficiency and robustness of the robust H2 approach in comparison with the classical PID regulator, showing stable system responses almost insensitive to large parameter variations.
This robust control possesses the capability to improve its performance over time through interaction with its environment. The results also proved good performance and more robustness in the face of uncertainties (robustness tests) with the linear robust H2 controller (an optimal LQG controller with Kalman filter) in comparison with the classical PID regulator. Our study was performed using a GUI realized under MATLAB.

Index Terms— powerful synchronous generators and excitation, AVR and PSS, LQG control, Kalman filter, stability and robustness.

I Introduction
Power system stability continues to be a subject of great interest for utility engineers and consumers alike, and remains one of the most challenging problems facing the power community. Power system oscillations are damped by introducing a supplementary signal into the excitation system of the power system, through a regulator called a power system stabilizer (PSS). Classical PSS rely on mathematical models that evolve quasi-continuously as load conditions vary. Conventional PSS based on simple design principles such as PI control and eigenvalue assignment techniques have been widely used in power systems [1, 2]. Such PSS ensure optimal performance only at their nominal operating point and do not guarantee good performance over the entire operating range of the power system, due to external disturbances such as changes in loading conditions and fluctuations in mechanical power. In practical power system networks, a priori information on these external disturbances is always available in the form of a certain frequency band in which their energy is concentrated. Remarkable efforts have been devoted to designing appropriate PSS with improved performance and robustness, leading to a variety of design methods using optimal control and output feedback [3, 5].
The shortcoming of these model-based control strategies is that uncertainties cannot be considered explicitly at the design stage. The stabilizer of this new generation for the AVR-PSS system, aimed at improving power system stability, was developed using the robust H2 controller based on LQG; it has the advantage of maintaining constant terminal voltage and frequency irrespective of variations in the conditions of the system under study. The H2 control design problem is described and formulated in standard form, with emphasis on the selection of the weighting functions that reflect the robustness and performance goals [6]. The proposed system has the advantages of robustness against model uncertainty and external disturbances (electrical and mechanical), fast response, and the ability to reject noise. Simulation results show the evaluation of the proposed linear control methods, based on these advanced frequency techniques, applied in the automatic excitation regulator of powerful synchronous generators: the robust H2 linear stabilizer and the conventional PID control scheme are tested against system variations in the SMIB power system, with a test of robustness against parametric uncertainties of the synchronous machine (electrical and mechanical), and a comparative study is made between these two control techniques for AVR-PSS systems.

II Dynamic Power System Model
A Power system description
In this paper, the dynamic model of an IEEE standard power system, namely a single machine connected to an infinite bus (SMIB), is considered [4]. It consists of a single synchronous generator (turbo-alternator) connected through a parallel transmission line to a very large network approximated by an infinite bus, as shown in Figure 1.
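The electromechanical core of such a machine-bus system is the swing equation; the following is a minimal Euler-integration sketch with illustrative per-unit values (the function and constants are assumptions for illustration, not the paper's full model):

```python
import math

def swing_step(delta, s, m_t, m_e, t_j, omega_s=2 * math.pi * 50, dt=1e-3):
    """One explicit Euler step of the swing dynamics:
    d(delta)/dt = s * omega_s  and  T_j * ds/dt = m_t - m_e,
    where s is the per-unit slip, delta the internal angle (rad),
    m_t the turbine torque and m_e the electromagnetic torque."""
    return delta + s * omega_s * dt, s + (m_t - m_e) / t_j * dt

# illustrative open-loop scenario: balanced torques, then a 15% turbine-torque step
delta, s = 1.3, 0.0
for k in range(1000):                        # 1 s of simulated time, dt = 1 ms
    m_t = 0.85 if k < 200 else 0.85 * 1.15   # step applied at t = 0.2 s
    delta, s = swing_step(delta, s, m_t, 0.85, t_j=7.0)
```

In the actual model the electromagnetic torque comes from the machine's electrical equations and the regulators close the loop; here it is held constant only to show the integration scheme, so the torque step simply accelerates the rotor.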
Figure 1. Standard IEEE-type SMIB system with excitation control of a powerful synchronous generator

B The Park-Gariov modeling of powerful synchronous generators
This paper is based on the Park-Gariov modeling of powerful synchronous generators, which eliminates simplifying hypotheses and allows testing of the control algorithm. The PSG model is defined by equations (1) to (5) and Figures 2 and 3 below [4].

Figure 2. Park transformation of the synchronous machine
Figure 3. Simplified equivalent diagrams of the synchronous machine with damping circuits (Park-Gariov model)

In compact form the model consists of: (1) the stator current equations, giving id and iq in terms of the subtransient EMFs e''d and e''q, the terminal voltage components ud and uq and the subtransient reactances x''d and x''q, together with the field and damper currents expressed through the corresponding flux linkages and the leakage reactances xsf, xsfd, xsf1q and xsf2q; (2)-(4) the flux-linkage equations, obtained by integrating the rotor-circuit voltage balances (for the field winding, dψf/dt = uf - rf·if, and analogously with zero applied voltage for the damper windings); and (5) the mechanical equations:

s = (ω - ωs)/ωs,  dδ/dt = s·ωs
Tj·(ds/dt) = mt - me,  me = ψad·iq - ψaq·id

where Tj is the inertia constant (proportional to the moment of inertia J), mt the turbine torque and me the electromagnetic torque.

C Models of the AVR and PSS regulators
The AVR (automatic voltage regulator) is a controller of the PSG voltage that acts on this voltage through the exciter; furthermore, the PSS was developed to absorb the generator output voltage oscillations [1]. In our study the synchronous machine is equipped with a voltage regulator of the "IEEE type-5" model [7, 8], as shown in Figure 4.

Figure 4.
A simplified "IEEE type-5" AVR. Its dynamics can be written as:

ve = vref - vt,  Ta·(dvr/dt) = Ka·ve - vr

with the exciter output efd limited between efdmin and efdmax.

In the PSS, considerable effort was expended on the development of the system. The main function of a PSS is to modulate the SG excitation [1, 2, 4].

Figure 5. Functional diagram of the PSS used [8]

The PSS signal used in this paper [9] is a washout stage followed by two lead-lag stages with intermediate signals v1, v2, v3:

v1 = Kpss · (pTw/(1 + pTw)) · input
v2 = v1 · (1 + pT1)/(1 + pT2)
v3 = v2 · (1 + pT2)/(1 + pT3)
vpss = v3, limited to ±vpssmax
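The washout and lead-lag stages of such a PSS can be approximated in discrete time with simple first-order states (a sketch; the time constants and gain are illustrative values, not the tuned parameters of the paper):

```python
class FirstOrder:
    """State x is a low-pass of the input: dx/dt = (u - x)/T (explicit Euler)."""
    def __init__(self, T, dt):
        self.T, self.dt, self.x = T, dt, 0.0
    def step(self, u):
        self.x += (u - self.x) / self.T * self.dt
        return self.x

class Washout:
    """y = u - lowpass(u), i.e. the transfer function p*Tw/(1 + p*Tw)."""
    def __init__(self, tw, dt):
        self.lp = FirstOrder(tw, dt)
    def step(self, u):
        return u - self.lp.step(u)

class LeadLag:
    """(1 + p*T1)/(1 + p*T2) realized as y = x + (T1/T2)*(u - x)."""
    def __init__(self, t1, t2, dt):
        self.t1, self.t2 = t1, t2
        self.lp = FirstOrder(t2, dt)
    def step(self, u):
        x = self.lp.step(u)
        return x + (self.t1 / self.t2) * (u - x)

# PSS chain: gain -> washout -> two lead-lag stages (illustrative constants)
dt, k_pss = 1e-3, 10.0
w = Washout(5.0, dt)
ll1 = LeadLag(0.2, 0.05, dt)
ll2 = LeadLag(0.2, 0.05, dt)
# constant slip input: the washout blocks the DC component, so the
# stabilizing signal decays toward zero as expected
out = [ll2.step(ll1.step(w.step(k_pss * 0.01))) for _ in range(5000)]
```

The decaying output for a constant input illustrates why the washout stage is needed: the PSS should act only on changes in its input, not on steady-state offsets.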
lqg design requires a state-space model of the plant:        ducxy buax dt dx where x, u, y is the vectors of state variables, control inputs and measurements, respectively. figure 7. optimal lqg regulated system with kalman filter. the goal is to regulate the output y around zero. the plant is driven by the process noise w and the controls u, and the regulator relies on the noisy measurements yv = y+v to generate these controls. the plant state and measurement equations are of the form: both w and v are modeled as white noise. in lqg control, the regulation performance is measured by a quadratic performance criterion of the form: the weighting matrices q, n and r are user specified and define the trade-off between regulation performance and control effort. the lq-optimal state feedback u=–kx is not implemented without full state measurement. however, a state estimate x̂ can b e d er ived such that u = −kx̂ remains optimal for the output-feedback problem. this state estimate is generated by the kalman filter: thus, the lqg regulator consists of an optimal statefeedback gain and a kalman state estimator (filter), as shown in figure 7. on the basis of investigation carried out, the main points of fuzzy pss automated design method were formulated [6]. the nonlinear model of power system can be represented by the set of different linearized model shown in equations (7). for such model, the linear compensator in the form of u = –kx can be calculated by means of lqg method. the family of test regulators is transformed into united fuzzy knowledge base with the help of hybrid learning procedure (based variable structure sliding mode). in order to solve the main problem of the rule base design, which is called “the curse of dimensionality”, and decrease the rule base size, the scatter partition method [13] is used. in this case, every rule from the knowledge base is associated with some optimal gain set. 
The advantage of this method is a practically unlimited expansion of the rule base, which may be needed for new operating conditions not provided during the learning process. Finally, the robust H2 stabilizer is obtained by minimizing the quadratic norm ||M(s)||2 of the integral of quality J(u) in (11), where z(s) = M(s)·x(s) with z = [Q^(1/2)·x, R^(1/2)·u]ᵀ and s = jω [6].

A Structure of the power system with robust H2 controller
The basic structure of the control system of a powerful synchronous generator with the robust controller is shown in Figure 8. Equation (7) forms the regulator input from the measured deviations and their derivatives: Δf, f', ΔU, U', ΔIf, If' and ΔP.
Figure 10. The realized GUI under MATLAB

As command object, we have a synchronous generator with the regulator AVR-FA (PID with conventional PSS), an excitation system (exciter), and an information and measurement block (BIM) for the output parameters to be regulated.

Figure 8. Structure of the power system with the robust H2 controller [3]

IV The Simulation Results under GUI/MATLAB
A Creation of a calculating code under MATLAB/Simulink
The SMIB system used in our study includes:
- a powerful synchronous generator (PSG);
- two voltage regulators, AVR and AVR-PSS, connected to it;
- an infinite-network power line.
The SMIB mathematical model based on the Park-Gariov model is used for simulation in this paper and is shown in Figure 9.

Figure 9. Structure of the synchronous generator (Park-Gariov model) with the excitation controller [10]

B A created GUI/MATLAB
To analyze and visualize the different dynamic behaviors, we created and developed a GUI (graphical user interface) under MATLAB. This GUI allows us to:
- perform control of the system from the PSS controller;
- view the system regulation and simulation results;
- calculate the system dynamic parameters;
- test the system stability and robustness;
- study the different operating regimes (under-excited, rated and over-excited).
The different operations are performed from the GUI realized under MATLAB and shown in Figure 10.
Table 1. The SMIB static and dynamic performances (BBC-720, xe = 0.3 p.u., P = 0.85 p.u.)

Damping coefficient α and static error ε (%):
Q | α: OL | AVR | PSS | H2-PSS | ε%: OL | AVR | PSS | H2-PSS
-0.1372 | unstable | -0.709 | -1.761 | -2.673 | unstable | 2.640 | 1.620 | negligible
-0.4571 | unstable | -0.708 | -1.731 | -2.593 | unstable | 2.673 | 1.629 | negligible
0.1896 | -0.0813 | -0.791 | -1.855 | -2.766 | 5.038 | 2.269 | 1.487 | negligible
0.3908 | -0.1271 | -0.634 | -1.759 | -2.695 | 5.202 | 1.807 | 1.235 | negligible
0.5078 | -0.1451 | -0.403 | -1.470 | -2.116 | 3.777 | 0.933 | 0.687 | negligible
0.6356 | -0.1588 | -0.396 | -1.442 | -2.099 | 3.597 | 0.900 | 0.656 | negligible

Setting time ts at 5% (s) and maximum overshoot d (%):
Q | ts: OL | AVR | PSS | H2-PSS | d%: OL | AVR | PSS | H2-PSS
-0.1372 | unstable | 4.231 | 1.704 | rapid | 9.572 | 9.053 | 7.892 | 3.682
-0.4571 | unstable | 4.237 | 1.713 | rapid | 9.487 | 9.036 | 7.847 | 3.482
0.1896 | - | 3.793 | 1.617 | rapid | 10.959 | 9.447 | 8.314 | 3.915
0.3908 | - | 4.732 | 1.706 | rapid | 10.564 | 8.778 | 7.883 | 3.737
0.5078 | 14.320 | 7.444 | 2.041 | rapid | 9.402 | 6.851 | 6.588 | 2.290
0.6356 | 14.423 | 7.576 | 2.080 | rapid | 9.335 | 6.732 | 6.463 | 2.012

C Simulation results and discussion
Study of the stability: we applied perturbations by abrupt variations of the turbine torque ΔTm of 15% at t = 0.2 s, and obtained the results in Table 1 and Figures 11 and 12 by studying the SMIB static and dynamic performances in the following cases:
1. SMIB in open loop (without regulation) (OL);
2. closed-loop system with the conventional stabilizer PSS-FA and with the robust H2-PSS [10].
We simulated three operating regimes: under-excited, rated and over-excited. Our study concerns powerful synchronous generators of the types TBB-200, TBB-500, BBC-720 and TBB-1000 (parameters in Appendix 1) [10]. Table 1 presents the BBC-720 static and dynamic performance results in open loop (OL) and in closed loop with PSS and H2-PSS, for an average line (xe = 0.3 p.u.) and an active power P = 0.85 p.u.; for more details about the calculated parameters see the GUI-MATLAB in Appendix 3. Here α is the damping coefficient, ε% the static error, d% the maximum overshoot and ts the 5% setting time.

Figures 11, 12 and 13 show simulation results for the variables 's' (speed deviation), 'delta' (the internal angle), 'Pe' (the electromagnetic power) and 'Ug' (the stator terminal voltage), for the powerful synchronous generator BBC-720 with P = 0.85, xe = 0.3 and Q1 = -0.1372 (p.u.).

Tests of robustness: in a first step we performed variations of the electrical parameters (an increase of 100% of R).
Then we performed variations of the mechanical parameters (a 50% reduction of the inertia J). The simulation time is 8 seconds. We present the electrical uncertainties in Figure 12 and the mechanical uncertainties in Figure 13.

Figure 11. System operating in the under-excited regime, BBC-720 connected to an average line, with PSS, H2-PSS and OL (study of the stability)
Figure 12. System operating in the under-excited regime, BBC-720 connected to an average line, with PSS, H2-PSS and OL (tests of robustness)
Figure 13. System operating in the under-excited regime, BBC-720 connected to an average line, with PSS, H2-PSS and OL (tests of robustness)

The electromechanical damping oscillations of the parameters of the powerful synchronous generator in the under-excited mode of the controllable power system, equipped with H2-PSS (black), PSS (blue) and open loop (green), are given in Figures 11-13. The results of the time-domain simulations, with robustness tests (electrical uncertainties in Figure 12 and mechanical uncertainties in Figure 13), confirm the high effectiveness of the robust H2-PSS regulator in comparison with the classical PID regulator and the open loop. For the study of the stability (Figure 11), it can be observed that the use of the H2-PSS considerably improves the dynamic performance: negligible static errors and hence better precision, and a very short setting time and hence a very fast system (Table 1). We found that after a few oscillations the system returns to its equilibrium state even in critical situations (especially the under-excited regime), which guarantees the stability and robustness of the studied system.
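The performance indices reported in Table 1 (maximum overshoot d% and 5% setting time ts) can be extracted from any simulated response as follows (a generic sketch with an illustrative damped oscillation, not tied to the paper's GUI):

```python
import math

def step_metrics(t, y, band=0.05):
    """Maximum overshoot (%) relative to the final value, and the time of the
    last excursion outside a +/- band around it (an estimate of ts)."""
    final = y[-1]
    overshoot = max(0.0, (max(y) - final) / abs(final) * 100.0)
    ts = t[0]
    for ti, yi in zip(t, y):
        if abs(yi - final) > band * abs(final):
            ts = ti  # last time the response leaves the band
    return overshoot, ts

# illustrative damped oscillation settling to about 1.0
t = [i * 0.01 for i in range(1000)]
y = [1.0 + 0.3 * math.exp(-0.5 * ti) * math.cos(6 * ti) for ti in t]
d_percent, ts = step_metrics(t, y)
```

Running the same computation on the responses of the OL, PSS and H2-PSS configurations yields directly comparable rows like those of Table 1.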
V Conclusion
This paper proposes an advanced control method based on advanced frequency techniques, the robust H2 approach (an optimal LQG controller with Kalman filter), applied to the AVR-PSS system of powerful synchronous generators, to improve the transient stability and robustness of a single machine infinite bus (SMIB) system. This concept allows transient stability studies of the power system and its controllers for voltage and speed stability analyses to be carried out accurately and reliably, and considerably increases the power transfer level via the improvement of the transient stability limit. The computer simulation results (with robustness tests against electrical and mechanical machine parameter uncertainties) have proved a high efficiency and more robustness with the robust H2-PSS, in comparison with a conventional stabilizer (with strong action) realized on PID schemes, showing stable system responses almost insensitive to the different operating modes of the station. This robust H2 generator voltage controller has the capability to improve its performance over time through interaction with its environment. As a perspective, to study the effectiveness of robust control, a comparative study between robust H∞ and H2 control applied to power system stability is planned.

References
[1] Grouzdev, L. A., A. A. Starodebsev and S. M. Oustinov. Conditions for the application of the best amortization of transient processes in energy systems with numerical optimization of the controller parameters AVR-FA. Energy, 1990, no. 11, pp. 21-25 (translated from Russian).
[2] DeMello, F. P., L. N. Flannett and J. M. Undrill. Practical approach to supplementary stabilizing from accelerating power. IEEE Trans., vol. PAS-97, pp. 1515-1522, 1978.
[3] DeMello, F. P. and C. Concordia. Concepts of synchronous machine stability as affected by excitation control. IEEE Trans. on PAS, vol. PAS-88, pp. 316-329, 1969.
[4] S. V.
Smolovik. Mathematical modeling methods of transient processes of synchronous generators, most usual and non-traditional, in electric energy systems. PhD thesis, Leningrad Polytechnic Institute, 1988 (translated from Russian).
[5] Stein, G. and M. Athans. The LQG/LTR procedure for multivariable feedback control design. IEEE Transactions on Automatic Control, vol. 32, no. 2, 1987.
[6] Naceri, A. Study and application of the advanced methods of the robust H2 and H∞ control theory in the AVR-PSS systems of synchronous machines. PhD thesis, SPbSPU, Saint Petersburg, Russia, 2002 (in Russian).
[7] Kundur, P. Definition and classification of power system stability. Draft 2, 14 January 2002.
[8] Anderson, P. M. and A. A. Fouad. Power System Control and Stability. IEEE Press, 1991.
[9] Hong, Y. Y. and W. C. Wu. A new approach using optimization for tuning parameters of power system stabilizers. IEEE Transactions on Energy Conversion, vol. 14, no. 3, pp. 780-786, Sept. 1999.
[10] Ghouraf, D. E. Study and application of the advanced frequency control techniques in the voltage automatic regulator of synchronous machines. Magister thesis, UDL-SBA, 2010 (in French).
[11] Kwakernaak, H. and R. Sivan. Linear Optimal Control Systems. Wiley-Interscience, 1972.
[12] Stein, G. and M. Athans. The LQG/LTR procedure for multivariable feedback control design. IEEE Transactions on Automatic Control, vol. 32, no. 2, 1987.
[13] Yurganov, A. A. and I. J. Shanbur. Fuzzy regulator of excitation with strong action. In Proceedings of the SPbSTU Scientific Conference "Fundamental Investigations in Technical Universities", Saint Petersburg, 1998 (in Russian).

Ghouraf Djamel Eddine graduated from the Faculty of Electrical Engineering, Djillali Liabes University of Sidi Bel-Abbes (UDL-SBA), Algeria, in 2003. He received the B.Sc. and M.Sc. degrees from UDL-SBA in 2008 and 2010, respectively. He is now a Ph.D.
student at UDL-SBA and a member of the IRECOM laboratory, Algeria. His research interests include robust and adaptive control of electric power systems and networks, optimization, power system stabilizers (PSS), stability and robustness, and modeling and simulation. E-mail: jamelbel22@yahoo.fr (corresponding author)

Abdellatif Naceri. Graduated from the Faculty of Electrical Engineering, Djillali Liabes University of Sidi Bel-Abbes (UDL-SBA), Algeria, in 1997. He received the M.Sc. and Ph.D. degrees from Saint Petersburg Polytechnic University (SPbSPU), Russia, in 1999 and 2002, respectively. He is currently an associate professor at UDL-SBA and a researcher at the IRECOM laboratory, Algeria. His research interests include intelligent control and applications, robust and adaptive control of electric power systems and networks, power system stabilizers (PSS) and flexible alternating current transmission systems (FACTS), stability and robustness, and modelling and simulation. E-mail: abdnaceri@yahoo.fr

A Robust PSS Based Advanced H2 Frequency Control to Improve Power System Stability - Implementation under GUI/MATLAB, Ghouraf Djamel Eddine and Naceri Abdellatif (2014)

Appendix
1. Parameters of the used turbo-alternators
2. The PSS-AVR model
3. Dynamics parameters calculated through GUI-MATLAB

Parameter | TBB 200 | TBB 500 | BBC 720 | TBB 1000 | Notation
Nominal power (MW) | 200 | 500 | 720 | 1000 |
Nominal power factor (p.u.) | 0.85 | 0.85 | 0.85 | 0.9 |
Xd | 2.56 | 1.869 | 2.67 | 2.35 | synchronous longitudinal reactance
Xq | 2.56 | 1.5 | 2.535 | 2.24 | synchronous transverse reactance
Xs | 0.222 | 0.194 | 0.22 | 0.32 | stator shunt inductive reactance
Xf | 2.458 | 1.79 | 2.587 | 2.173 | inductive reactance of the excitation circuit
Xsf | 0.12 | 0.115 | 0.137 | 0.143 | shunt inductive reactance of the excitation circuit
Xsfd | 0.0996 | 0.063 | 0.1114 | 0.148 | shunt inductive reactance of the damping circuit on the direct axis
Xsf1q | 0.131 | 0.0407 | 0.944 | 0.263 | shunt inductive reactance of the first damping circuit on the quadrature axis
Xsf2q | 0.9415 | 0.0407 | 0.104 | 0.104 | shunt inductive reactance of the second damping circuit on the quadrature axis
Ra | 0.0055 | 0.0055 | 0.0055 | 0.005 | stator active resistance
Rf | 0.000844 | 0.00084 | 0.00176 | 0.00132 | resistance of the excitation circuit (rotor)
R1d | 0.0481 | 0.0481 | 0.003688 | 0.002 | active resistance of the damping circuit on the direct axis
R1q | 0.061 | 0.061 | 0.00277 | 0.023 | active resistance of the damping circuit on the quadrature axis
R2q | 0.115 | 0.115 | 0.00277 | 0.023 | active resistance of the second damping circuit on the quadrature axis

Journal of Engineering Research and Technology, Volume 5, Issue 3, September 2018

Assessment of Educational Outputs of Low-Income Housing Project

Dr. Suheir M. S. Ammar, Assistant Professor, Department of Architecture, College of Engineering, the Islamic University of Gaza, Gaza, Palestine

Abstract—The provision of housing to meet the rapid increase in population across all economic classes is a target of most housing programs. This paper aims to suggest a new type of economic housing in Gaza, Palestine, through an architectural education studio and to analyze its acceptance through a questionnaire and interviews. The need for this type of multi-storey attached apartments in Gaza is essential, provided that it matches the socio-cultural values so that it is accepted by society members.
The results clarify the best characteristics of low-income housing. The degree of acceptance among the students was higher than among the interviewees from the responsible authorities.

Index Terms—Educational outputs; architectural studio; low-income housing; assessment.

I Introduction

Planning housing for all classes of society, from low-income to high-income, is an objective of most governments. Gaza is the main city in the Gaza Strip, with an area of 55.5 km² according to the Department of Urban Planning in Gaza Municipality. According to the 2016 Palestinian Central Bureau of Statistics [1], the population of Gaza is around 645,205 out of 1,881,135 for the whole Gaza Strip. The unemployment rate in the Gaza Strip (for persons 15 years and above) in 2015 was 35.9% among men and 59.6% among women. Based on the monthly consumption of a household, the rate of poverty among individuals in 2011 was 38.8% in the Gaza Strip and 29.8% in Gaza City [1]. Nevertheless, most government housing projects and the private-sector supply in Gaza are apartments with an area above 110 m², which address the middle-income class. The academic work at the Department of Architecture in the largest university in Gaza is not far from this. Hence, the idea of a new learning experience for the students of Design Studio 1 at the Department of Architecture in the College of Engineering at the Islamic University of Gaza emerged. The new project concept aimed to improve the students' ability to design for the low-income class. They had to design a multi-storey residential building for this class; each floor consists of 12-15 apartments, each with an area of 70-80 m². This type of attached apartments is not available in Gaza City, where most multi-storey residential buildings contain three to four apartments per floor.
Additionally, from the author's experience of more than 20 years, it is common for design problems in residential-building education to be a villa, row houses, detached houses, or a multi-storey residential building with three to four apartments per floor and apartment sizes above 150 m² for the medium-income class. These types of housing were appropriate many years ago, but there is a need for change as circumstances have changed. The project tries to address both housing need and housing demand to suit the local society better. Housing need is associated with social necessity, which is connected with the values and norms of the society, while housing demand is connected with residents' ability to pay. The students were asked to do their best to provide privacy, natural lighting and ventilation, and the needed spaces for the apartments. Privacy is more difficult to achieve in small-area apartments, and natural lighting and ventilation are not easy to achieve in attached apartments. This study aims to analyze and evaluate the outcomes of a new experience of multi-storey residential buildings for low-income households. It also examines the students' acceptance of the idea of small attached apartments via a questionnaire, which was filled in both by students who participated in the course and by others who did not. The final aim is to evaluate this type of apartments in the Gaza case through interviews with stakeholders from the local authorities. This study is important as it offers different plan solutions for low-income apartments, reducing the area and time of construction while solving the problems of privacy, natural lighting and ventilation. The students were asked to think about solutions that meet residents' needs. Its importance extends to the fact that urban sprawl cannot be the optimum solution for housing in countries that suffer from scarcity of land and overpopulation.
In addition, it assesses an architectural studio experience.

II Housing Development in Gaza, Palestine

Housing characteristics develop over the years depending on changes in the lifestyle of a society and changes in construction materials and techniques. In the Gaza case, a traditional house consists of a courtyard open to the sky and surrounded by rooms that mostly open towards the courtyard. The entrance is indirect to prevent visitors from seeing the different spaces of the house except for their room [2]. Small room areas, the use of multi-function spaces and attached houses were characteristics of these traditional houses (Figure 1) [3]. After the First World War (1917), the courtyard was replaced by a living room covered with a concrete slab, which is better for the rainy weather in winter. The rooms' windows were oriented towards the setbacks. The houses were mostly two to three storeys with one apartment per floor, and this continued during the Israeli occupation (1967-1994). Later, and gradually from 1980, house plans began to change towards separating the bedrooms into a private zone with doors opening onto a small lobby, and the number of storeys increased. This was associated with the increase in the number of architects at that time (Figure 2). Multi-storey residential buildings became commonplace after the arrival of the Palestinian National Authority in 1994, to cope with the increasing need for housing due to the natural increase in population and the return of the Palestinian people to their homeland. Jabareen and Carmon [4] believed that these buildings disrupt the cultural and social fabric because people were used to detached houses that facilitate better relationships with their neighbors.
In such buildings, each floor consists of three to four apartments, and the living room is a separate space rather than the central space towards which most spaces open (Figure 3). This type of housing became familiar to the new generation, who were born in these buildings, and more acceptable to new families. The change from one type to another followed the increase in the number of architects in the area, the appearance of new construction materials and methods, and the effect of colonialism. The increase in population and the limited area of land were the main reasons for the appearance of multi-storey residential buildings. Referring to a survey conducted in 2015 [5], 79% of the houses in the Gaza Strip are owned, 5.7% are rented and 15.2% are occupied without payment, which includes living in a house owned by parents. The percentage of owned houses was 89.42% in 2007. This indicates that residents prefer to own their houses as a social value, but the percentage decreased to 79% in 2015 in line with residents' affordability. The distribution of households in Palestine by type of housing unit in 2015 was 53.7% for apartments, 44.6% for the house locally called dar, which has 1-3 storeys with one apartment per floor for relatives, 1.1% for villas and 0.6% for others, which include tents and independent rooms [6, 7]. According to a survey in 2015, the Palestinian Central Bureau of Statistics [7] reported that the average area of a house in urban areas in Palestine is 129 m². The housing density in 2015 was 12.5% for less than 1 person per room, 46% for 1 to 1.99 persons per room, 28.3% for 2 to 2.99 persons per room and 13.2% for 3 or more persons per room [6]. By comparison, in a country like Ireland, the guidelines of the Government of Ireland define a minimum area of 45 m² for a one-bedroom apartment, 73 m² for a two-bedroom apartment and 90 m² for a three-bedroom apartment [8].
In a study about Palestine, Abdullah and Dudeen [9] stated that the government strategy should enable the low-income residents who cannot allocate at least 30% of their income to buying a 100 m² apartment over a period of 20 years.

Figure 1: A traditional house plan (source: [3])
Figure 2: An example of a house before 1980
Figure 3: An example of apartments of a multi-storey building after 1980

In a study about Jordan, which is to some extent similar to the Gaza case, Al-Homoud and Al-Oun [10] declared that the socio-cultural values of users in Jordan increased the unit area and cost; there are wasted areas, such as guest rooms, which in many cases are rarely used. In addition, users' ownership priority is for detached single-family homes first, followed by apartments in buildings with a limited number of units. They pointed out the need to change the social and cultural values of the low-income group, who required a bigger area than they can afford. The idea of the project was to design adequate low-income housing units to address some of the local housing problems, namely the gap between supply and demand, the limitation of land and its high cost, and the shortage of low-income housing, by suggesting economic units that respect the socio-cultural values. Accordingly, the supposed area for the attached apartments in the design studio was from 70 to 80 m².

A Provision of Housing for Low-Income

Locally, the government projects at the Ministry of Public Works and Housing for the poor can be divided into three groups. The first includes beneficiaries who are registered at the Ministry of Social Affairs. This one is semi-free. In case the economic situation of the family gets better, it moves to a low-income housing project with facilitation from the ministry. The apartment which the family left will be used by another poor family.
The second group includes low-income and medium-income families. This type is a long-term ownership project over a period of twenty years, and the land is given free by the government. The apartments' areas vary from 100 to 130 m². The third is housing association projects: the government contributes 40% of the land price, and the rest is paid in installments extending over 20 years. The apartments' areas vary from 150 to 190 m². There is another type implemented by UNRWA on land given free by the government. However, the private sector is still the main provider of housing, as the government projects cover only a low percentage of residents' demand, and it addresses the needs of the medium- and high-income classes, as in most developing countries. Despite all these programs, there is still a need for more small-size economic apartments for the poor and low-income.

III Apartments in Multi-Storey Residential Buildings

In a study about healthy natural lighting in apartments in Korea, Kim and Kim [11] stated that apartments in multi-storey residential buildings suffer from a shortage of natural light because of their depth. This was a challenge for the students in this study, particularly in a city like Gaza, which has suffered from a shortage of electricity for many years. In his study of space-saving homes, Romero [12] stated that the home with a small area is richer in possibilities than in limitations. He indicated that micro or minimal architecture disregards any useless space and decorative expression and uses the flexibility of spaces. The area of entrance halls and passageways should be minimized. In addition, he refers to the importance of benefiting from interior design principles, such as color, pattern and texture, to visually expand small rooms. The principles of minimal architecture are the most suitable for the low-income class, giving them good qualities in small-size houses.
Husin and Nawawi [13] and Zainal and Kaur [14] demonstrated that housing conditions play a noteworthy role in the quality of life of the poor and in low-cost housing. Zainal and Kaur [14] added that socio-economic indicators should be considered in urban issues for the poor and low-income to meet residents' needs and demands. Actually, this type of attached apartments is economic and widely used for both low-income and medium-income groups in countries such as Malaysia and Indonesia.

IV Methodology

To achieve the study aims, three tools are used. The first is a plan analysis of five different configurations from the students' works in Design Studio 1 to capture the advantages and disadvantages of each one. The second is a questionnaire for the second-year architecture students who participated in this course and for third-year students who did not try this type of project, to compare the effect of participating in the course experience. The third is interviews with governmental officials in the field. The project began by asking each student to look at similar projects all around the world and to criticize them in the light of the local lifestyle. Oh and Ishizaki [15] considered critiquing a fundamental practice in teaching the architecture design studio; it should take place between teachers and students and among the students themselves. While analyzing the projects, the students think about the problems included in each one and how to solve these problems in their own projects. Thinking about problems and how to solve them was defined by Ustaomeroglu [16] as the main aim of a design studio; he added that the instructor's guidance can help. This was followed by a group discussion in class of the positive and negative aspects of each project related to the local social and environmental aspects. The sites and the requirements of the project, including six different sites, were then given to the students.
The students were asked to design 12-15 apartments on each floor, each with an area of 70-80 m², and at least two stairs. The number of floors is three in addition to a ground floor. Each apartment consists of a living room, two to three bedrooms, a bath, a kitchen and a water closet. They were asked to consider natural lighting and ventilation for at least the bedrooms and living rooms, to minimize the running cost after residents move in. This was a big challenge in this type of attached apartments. The value of the flexibility of using a space for different activities, and of multi-use furniture, was explained to the students, for example, using a living room as a space for social interaction, studying, eating or even sleeping at night. The students had to adapt to the area and requirements to innovate distinctive solutions. After the students received the requirements for the project, they recognized the problem and then began thinking about solutions, using freehand sketching at the beginning to imagine the scale. They then minimized the negative aspects with the help of the instructors. Fifty projects were submitted. A questionnaire was given both to the students who attended the class and to those who did not; it aimed to assess their low-income projects. It consists of three parts: the first is general information about the respondent, and the second measures the preferred characteristics of economic housing and the degree of acceptance of this type of housing. The best five students' plans, with different configurations, were used in the third question to be evaluated. An electronic questionnaire was distributed through students' groups on Facebook, which is an easy way to get fast responses and a high response rate. There were 46 respondents out of 50 from the students who participated in Design Studio 1 and 44 respondents out of 50 from the other years.
The questionnaire was analyzed using SPSS software. The opinion of governmental officials in the field, including senior officials, was important as this type of housing project is not yet available. There are 13 interviewees: five from the Ministry of Public Works and Housing, five from the Ministry of Local Government, including architects and non-architects, two from municipalities and one from the Palestinian Housing Council. The two ministries are the most relevant authorities in housing provision. The interview consists of four open-ended questions: one is related to the government housing projects, two are to evaluate the positives and negatives of the outcomes of the five plans of the students' works and the possibility of applying such designs locally, and the fourth is to assess the importance of the educational process in forming the ideas of local architects to find creative solutions for housing projects in Gaza.

V Discussion and Analysis

A Discussion and Analysis of the Plans

The students' configurations of the apartments are diverse and include five major types: open corridors around a closed courtyard, two corridors around an open courtyard, an open corridor around a closed linear courtyard, an internal corridor, and three clusters around two semi-separated closed courtyards. The challenge was to achieve as much privacy, natural lighting and ventilation as possible. Natural lighting and ventilation are important in all countries, but they are of special importance in a city like Gaza, which has suffered from a lack of electricity for many years. Natural lighting and ventilation for all spaces is not easy. Hadid [2] argued that privacy is an important value in a society such as that of the Palestinian people, and privacy is more difficult to achieve in small-area apartments. However, all the students' plans have the three major zones of an apartment: public (the living room), semi-public (the kitchen) and private (the bedrooms).
Still, the living room in this type of economic housing can be used for both family and visitors; the kitchen is separate and can be used by all family members; the bedrooms are exclusively for family members. In many of the apartments, the kitchen belongs to the lobby of the bedrooms. According to the society's culture and values, this keeps the privacy of the family when there are visitors in the living room. In addition, the living room is a flexible place that can be used for different activities, such as a sitting area for family members or a sitting area for visitors. Asfour et al. [17] stressed the importance of flexibility in housing design as it increases housing utilization efficiency. According to family size in Gaza, there is a need for three bedrooms, but to save space the living room can be used as a third bedroom at night. Most of the apartments at the corners of the architectural plans have three bedrooms, as they have two outside facades (Figure 9). The five best projects' plans are shown in Figures 4 to 8. The students tried to avoid the negative aspects they saw in other projects, such as lack of privacy: many spaces in those projects have their windows looking towards the open corridors, which is not accepted locally. In the following, these plans are discussed using critical thinking, which is defined by Çakır and Yurtsever [18] as thinking about many things at the same time, including positive and negative aspects. In all configurations, to minimize the area used for corridors, the passageways are through the living rooms, and the entrance halls are part of the living rooms. Each of Figures 4 and 5 has a courtyard. However, in Case 1 the land is larger, the courtyard is closed, and the corridors and four apartments open onto it. In Case 2 the courtyard is open from one side, which is the direction of the pleasant summer wind, and the apartments and stairs benefit from the lighting and ventilation of the courtyard.
To allow the pleasant wind to get into the courtyard in Case 1, there is a window in the middle space between the two western apartments. In addition, ten of the apartments in both cases have ventilation for their kitchens from small inner courts. The apartments are divided into two groups separated from each other.

Figure 4: Case 1 - open corridors around a closed courtyard
Figure 5: Case 2 - two corridors around an open courtyard

In both Cases 3 and 4, the plots have the shape of a linear rectangle, so the solution was to gather the apartments around a linear courtyard surrounded by an open corridor that links all the apartments in Case 3, and around an internal corridor in Case 4. In Case 3, the two stairs are at the ends of the corridor (Figure 6); as a result, they are far from some apartments. In Case 4, there are three stairs distributed among the apartments (Figure 7). The corridor is closed and gets its natural ventilation and lighting from the stairs. Most of the apartments' kitchens have ventilation from small inner courts.

Figure 6: Case 3 - an open corridor around a closed linear courtyard
Figure 7: Case 4 - an internal corridor
Figure 8: Case 5 - three clusters around two closed courtyards

Case 5 in Figure 8 is different as it divides the apartments into three groups, and each group has a separate stair. There are two courtyards that give natural ventilation and lighting to eight apartments and to the corridors. Half of the apartments' kitchens get natural ventilation from small inner courts. In general, it can be seen that Cases 3 and 4 have a horizontal continuity of movement among all the apartments on one floor, Cases 1 and 2 are divided into two groups, and Case 5 has three groups.
Cases 2 and 5 benefit from the courts for natural ventilation of the apartments, while Cases 1 and 3 benefit from the court mostly for the corridors and partly for some apartments. Case 4, which does not have a courtyard, has the largest number of apartments, eight small inner courts and the highest percentage of built area; however, it is less efficient in natural light and ventilation (Table 1).

Table 1: Assessment of the five types of plan configuration
No. | Item | Case 1 | Case 2 | Case 3 | Case 4 | Case 5
1 | Area of plot | 2222 | 2523 | 2505 | 2505 | 6502
2 | Built area | 1311 | 1270 | 1204 | 0105 | 0655
3 | Percentage of built area | 25% | 61.8% | 59.9% | 32% | 05%
4 | Number of units | 01 | 01 | 13 | 02 | 06
5 | Number of small inner courts | 6 | 6 | 5 | 8 | 5
6 | Existence of a large court | yes | yes | yes | no | yes
7 | Number of stairs | 2 | 2 | 2 | 1 | 3
8 | Ventilation for kitchens | 10 towards small courts | 10 towards small courts | 9 towards small courts | 13 towards small courts | 8 towards small courts
9 | Continuity of a corridor | not available | not available | available | available | not available
10 | Opposite entrance doors | 8 apartments | 2 apartments | no apartment | 10 apartments | 4 apartments
11 | Linkage between apartments | 2 groups | 2 groups | 1 group | 1 group | 3 groups

When a large percentage of built area is used, a lower quality of natural ventilation and lighting is achieved. It is clear that the higher percentage of built area in Case 4 is associated with the largest number of apartments per floor and the absence of a large courtyard, which created the need for extra small inner courts among the apartments.

Figure 9: An enlarged corner apartment

Cases 1 and 2 have almost the same characteristics, but the percentage of built area is higher in Case 2 as the courtyard is smaller. Although the courtyard is smaller, the lighting and ventilation are better as it is open from one side. Case 5 has a courtyard divided into two parts by a passageway.
It has the same percentage of built area as Case 1; however, Case 1 has two more apartments as its plot area is larger. Table 1 shows a comparison between the five types.

B Discussion and Analysis of the Questionnaire

The first part of the questionnaire collects an optional name, the academic level, and the area of the house in which the student lives. 46 respondents are from the second year in architecture and 44 are from the third year, considering that there is a year of general engineering studies at the beginning of the program. The results show that 29.5% of the respondents live in a house with an area of more than 200 m², 44.3% in a house with an area between 150-200 m², 25.0% in a house with an area between 100-149 m² and 1.1% in a house with an area of less than 100 m². The second part of the questionnaire asks about the proper characteristics of housing for the low-income. To investigate the internal consistency reliability of the items, Cronbach's alpha was measured; Table 2 shows a very good value of 0.86.

Table 2: Reliability statistics
Cronbach's alpha | Cronbach's alpha based on standardized items | No. of items
.857 | .865 | 18

The five-point Likert scale was used, ranging from strongly agree (5 points) to strongly disagree (1 point). The items with the highest means are: "privacy is essential for bedrooms" with a mean of 4.24, "using a multi-storey building is a good solution for low-income" with a mean of 4.14, "flexibility in using one space for different uses is acceptable" with a mean of 4.01, "living room is the most important space" with a mean of 3.92, "it is enough to have a bathroom and a water closet" with a mean of 3.85, "living room can be used as a guest room" with a mean of 3.84, "70-80 m² area is proper for low-income families" with a mean of 3.82, and "living room can be used for eating" with a mean of 3.82. It is clear that the importance of living rooms has decreased in this generation.
It could be the effect of the wide spread of laptops and mobile phones. The lowest item is "it is acceptable to open a bedroom window on an inner small courtyard" with a mean of 1.61. In the social life of Gaza, it is important to have natural lighting and ventilation in bedrooms as they are used for studying, playing, reading and other uses in addition to sleeping. The other low-mean items are "opposite external doors are acceptable" with a mean of 2.89 and "it is acceptable to open a kitchen window on a courtyard" with a mean of 2.89; these answers are almost moderate. Opposite external doors can penetrate the privacy of others only when both apartments' doors are opened at the same time. In Gaza, the kitchen is very important for the whole family; it is a space to cook, eat, and talk while the mother is cooking. "This type of housing is essential in Gaza" is the last item, and it has a mean of 4.23 (Table 3). Ustaomeroglu [16] confirmed that a good design is not associated with larger areas or expensive cost. The item "passing through living room is better than using a corridor" has a mean of 2.94. This mean is low, and some students are still unconvinced about passing through the living room space to minimize the less important spaces in the apartment. There are no significant differences between the answers of the two groups. Eigbeonan [19] stated that there are many similarities in ideas among students since they share their society's culture, but if these ideas are not scientifically acceptable it is hard to change them. In the third part of the questionnaire, the respondents were asked to rank the plans from 1 to 5, with 1 being the best. The answers were reversed in the analysis. Table 4 shows that the best plan was Case 2 with a mean of 4.0, followed by Case 1 with a mean of 3.5. The last one was Case 4, which does not have a courtyard, with an SD of 1.6; about 48% chose it as the last choice.
Case 5 was chosen as the first, fourth, or fifth choice by about 23% of respondents each, and as the second and third choice by about 15% each. The following part discusses the reasons for these choices (Table 4).

Table 3. Descriptive statistics of the characteristics of the low-income housing

No.  Item                                                                      1st year  2nd year  Total
1    Living room is the most important space.                                  4.00      3.83      3.92
2    70-80 m² area is proper for low-income families.                          3.89      3.74      3.82
3    Living room can be used as a bedroom.                                     3.24      3.12      3.18
4    Living room can be used as a guest room.                                  3.74      3.95      3.84
5    Living room can be used for eating.                                       3.93      3.69      3.82
6    Opposite external doors are acceptable.                                   2.85      2.93      2.89
7    One balcony is enough.                                                    3.13      3.60      3.35
8    Flexibility in using one space for different uses is acceptable.          3.96      4.07      4.01
9    Passing through living room is better than using a corridor.             3.09      2.79      2.94
10   It is enough to have a bathroom and a water closet.                       3.93      3.76      3.85
11   Using multi-storey buildings is a good solution for low income.           4.02      4.26      4.14
12   Using many apartments in one storey is proper for low-income.             3.70      3.86      3.77
13   It is acceptable to open a kitchen window on an inner small courtyard.    2.98      2.79      2.89
14   It is acceptable to open a bathroom window on an inner small courtyard.   3.63      3.33      3.49
15   It is acceptable to open a bedroom window on an inner small courtyard.    1.54      1.69      1.61
16   There is no need for an entrance hall.                                    3.74      3.52      3.64
17   Privacy is essential for bedrooms.                                        4.22      4.26      4.24
18   This type of housing is essential in Gaza.                                4.15      4.31      4.23

Regarding the plans of Figures 4 and 5, the respondents who liked them stated that they have good natural lighting for the external corridors and easy accessibility to the apartments; each plan is divided into two parts, which reduces the number of neighbors on the same storey, reduces social problems and noise, and reduces crowding in the external corridors.

Table 4. Descriptive statistics of the five plans

Case    Min.  Max.  Mean  Std. deviation
Case 1  1     5     3.5   0.982
Case 2  1     5     4.0   1.088
Case 3  1     5     2.6   1.143
Case 4  1     5     2.4   1.609
Case 5  1     5     2.9   1.500

For Figure 5, they added that opening the courtyard towards the pleasant summer wind benefits the spaces oriented towards the internal courtyard. Another positive point is that the external doors are not opposite each other, and the internal courtyard gives the residents a good view. For both Figures 6 and 7, the external corridors are very long for reaching the apartments, which is not good visually. They added that the very long courtyard in Figure 6 is not good for natural lighting and sunshine. Some stated that Figure 7 is the best for economic housing as it has the highest percentage of built area; the layout of the building is very long, which helps give natural lighting to more spaces; and they admired the existence of a balcony for each apartment. However, this plan has opposite exterior doors, which is not good for privacy. For the last case, in Figure 9, some liked the irregular external corridors while others disliked them. Some admired dividing the plan into three groups to minimize social problems and to reduce the number of persons in the external corridors, which are short compared with the other figures.
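The means and standard deviations reported in Tables 3 and 4 are ordinary descriptive statistics. As a minimal illustration of how such per-item figures are computed (the response vector below is hypothetical, not the actual survey data):

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses for one questionnaire item
# (illustrative only -- not the study's actual survey data).
responses = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]

item_mean = mean(responses)   # average rating across respondents
item_sd = stdev(responses)    # sample standard deviation

print(f"mean = {item_mean:.2f}, sd = {item_sd:.2f}")
```

The same computation, repeated per item and per student cohort, yields the first-year, second-year and total columns of Table 3.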
In general, many appreciated the use of courtyards for this type of housing, which adds a natural view for residents, gives their children a space to play, and provides an external view, natural lighting and ventilation for the apartments and corridors. For me as a lecturer, I think this questionnaire gave the students the opportunity to scrutinize and inspect the five projects at the end of the semester, and most of them gave good and convincing reasons for their answers. The last question was to evaluate studying this experience. Most of the opinions were positive, except three. The most repeated expressions were: beneficial, interesting, realistic to the Gaza case, distinguished, functional, important, rich, proper, very satisfying, unique, and nice. Some referred to its importance for young spouses; others said: "I became more accepting of this type of houses". The students who did not attend the studio used the same expressions; some declared that they hoped to try it themselves, either as a design project or as a house to live in.

C Discussion and analysis of the interviews with stakeholders

The interview had four questions. The first question asked about the extent to which government projects have addressed low-income housing. All respondents replied that, in terms of numbers, the projects are not enough. Some explained that they are associated with the availability of external funding. In terms of quality and price, some considered that most of these projects cannot address the needs of the low-income group, because that group cannot afford the payments. Regarding the second question, about the proper apartment area for low-income housing, the respondents from the Ministry of Public Works and Housing stated that the area should be estimated according to the number of family members. They added that low-income families are large families, and that 120 m² is proper.
Two attendants considered 80-90 m² enough, another two respondents specified 145 m², while the others gave a wide range of 80-120 m². In general, it is clear that most of the attendants are against 80 m² as a proper area for low-income housing, and some considered that multi-storey attached housing is not socially accepted. The evaluation of the students' outputs in the third question supports this. Only a small number considered the students' output suitable for the category in question, while most of the attendants felt the designs do not fit the large number of family members and may be rejected by residents, since many apartments on the same floor reduce privacy and may cause social problems among residents. This makes Case 5, which divides the project into three separate groups, more appropriate, as it reduces the number of families sharing the hallways on each floor. Some added that there is a need for socially homogeneous categories in each building, to reduce the cultural and social gap. The same social characteristics are found in Jordan, which is close to Palestine, in the previously mentioned study [10]; its authors affirmed the need to change the social and cultural values of the low-income group to suit their financial capabilities. In this study, too, there is a need to change social and cultural values so as to follow the effective demand, which links demand to the financial capabilities of residents, rather than the potential demand, which represents their dreams. Everyone agreed that this type of apartment is suitable for young couples, whether owned or rented, to serve the first years of family life. In general, all the interviewees praised the advantages of the designs in terms of economy of space and cost and in providing a good level of natural ventilation and lighting.
However, they expressed concern about the possibility of social problems arising among neighbors because of the number of neighbors on the same floor; locally, three or four apartments per floor is the normal case. Problems related to the management and maintenance of the shared spaces, and to noise, were also identified. From the comments overall, we can conclude that there is a need to increase the area slightly, to about 100 m², by adding a third bedroom, and to reduce the number of apartments sharing the same common spaces on the same floor; this will be considered in the course in coming years. The concept is to reduce the area gradually so that residents can acclimate to these changes bit by bit. Since development in housing over the years is associated with changes in social life and in economic and technological potential, it is assumed that people have to adapt themselves to these changes. This will help to find proper solutions to the housing problems of developing countries, including Gaza, one of which is the gap between demand and supply: there is a lack of supply for the low-income class. One of the interviewees indicated a need to study the market needs of each class of residents to narrow the gap between demand and supply. Regarding the fourth question, all agreed on the importance of the educational process in forming the ideas of local architects to find creative solutions for housing projects in Gaza. One of the participants suggested the participation of officials from government authorities in the educational process, to link it with the local reality.

VI Conclusion and recommendations

This study presents a new experience in an architectural studio that addresses the needs of the low-income class by suggesting small attached apartments in multi-storey buildings.
The students were asked to evaluate such buildings, found on internet websites, to identify their disadvantages from the local perspective, and then to propose solutions that overcome these weaknesses. The study presents five different solutions: open corridors around a closed courtyard, two corridors around an open courtyard, an open corridor around a closed linear courtyard, an internal corridor, and three clusters around two closed courtyards. The architectural analysis compares the strengths and weaknesses of each one. A questionnaire was used to define the proper characteristics of apartments for low-income housing and to evaluate these five solutions. The sample size of the questionnaire was limited by the number of students who participated in the class; nevertheless, it yielded important and sufficient findings. The statistical analysis of the questionnaire showed the proper characteristics of low-income apartments from the students' perspective. The most important characteristics were, in order: 'privacy is essential for bedrooms', 'using a multi-storey building is a good solution for low income', 'flexibility in using one space for different uses is acceptable', 'living room is the most important space', 'it is enough to have a bathroom and a water closet for an apartment', 'living room can be used as a guest room', and '70-80 m² area is proper for low-income families'. Opening a bedroom window towards an inner small courtyard was not accepted. The students chose the solutions of 'two corridors around an open courtyard' and 'open corridors around a closed courtyard' as the best in their opinion, while the officials who participated through interviews preferred the solution of three clusters around two closed courtyards. The students appreciated architectural characteristics such as natural ventilation and lighting from a large courtyard, while the officials paid more attention to privacy and crowding.
They supposed that the fewer families on the same floor, the fewer the social problems. The interviews show that the government housing projects are not enough, and some added that they are not affordable for low-income families. Most interviewees were not satisfied with the suggested apartment area (70-80 m²); from their experience, they think the area should be proportional to the number of family members even for the low-income class. As such, they consider the five cases of students' projects suitable for young couples. From a different perspective, when the choice is between the less-than-ideal and nothing, the suggested areas of the studio output would appear sufficient and suitable. Indeed, there is a need to change the social and cultural values regarding housing size so as to follow the effective demand, which links demand to the affordability of residents, rather than the potential demand, which represents their dreams. All of the interviewees stressed the importance of the educational process in forming the ideas of local architects to find creative solutions. As the participants in the interviews were not satisfied with the suggested small apartments, a future study is needed on the needs of the youth regarding their future houses, since these youths will be the inhabitants of such houses in the near future.

Acknowledgment

I would like to thank all my students who participated with their design plans or answered the questionnaire. In addition, many thanks to the officials who contributed their opinions.

References

1. Palestinian Central Bureau of Statistics, 2016 [accessed 20 Nov. 2016]; available from: http://www.pcbs.gov.ps/site/lang__ar/881/default.aspx#housing.
2. Hadid, M., Architectural styles survey in Palestinian territories, in Energy Codes for Building (www.molg.pna.ps). 2002, Ministry of Local Government, Palestine.
3.
Architectural Centre for Heritage - IWAN. 2016, Architectural Centre for Heritage - IWAN.
4. Jabareen, Y. and N. Carmon, Community of trust: a socio-cultural approach for community planning and the case of Gaza. Habitat International, 2010. 34(4): p. 446-453.
5. Palestinian Central Bureau of Statistics, Gaza Strip Governorates Statistical Yearbook, 2014. 2015: Ramallah, Palestine.
6. Palestinian Central Bureau of Statistics, Palestine in Figures 2016. 2017, Palestinian Central Bureau of Statistics: Ramallah, Palestine.
7. Palestinian Central Bureau of Statistics, Urban planning is a guarantee of sustainable development of urban areas in Palestine for the year 2016. 2016.
8. Sustainable Urban Housing: Design Standards for New Apartments - Guidelines for Planning Authorities, Department of the Environment, Community and Local Government, editor. 2015, Government of Ireland.
9. Abdullah, S. and M. Dudeen, Towards preparing a national housing policy in the occupied Palestinian territories. 2015: Ramallah.
10. Al-Homoud, M., S. Al-Oun, and A.-M. Al-Hindawi, The low-income housing market in Jordan. International Journal of Housing Markets and Analysis, 2009. 2(3): p. 233-252.
11. Kim, G. and J.T. Kim, Healthy-daylighting design for the living environment in apartments in Korea. Building and Environment, 2010. 45(2): p. 287-294.
12. Romero, J., Development of a "space-saving model" for a one-family dwelling: case study of Japanese architecture with space limitations. Journal of Building Construction and Planning Research, 2015. 3(04): p. 196.
13. Husin, H.N., et al., Correlation analysis of occupants' satisfaction and safety performance level in low cost housing. Procedia - Social and Behavioral Sciences, 2015. 168: p. 238-248.
14. Zainal, N.R., et al., Housing conditions and quality of life of the urban poor in Malaysia. Procedia - Social and Behavioral Sciences, 2012. 50: p. 827-838.
15.
Oh, Y., et al., A theoretical framework of design critiquing in architecture studios. Design Studies, 2013. 34(3): p. 302-325.
16. Ustaomeroglu, A.A., Concept-interpretation-product in architectural design studios - Karadeniz Technical University 2nd semester sample. Procedia - Social and Behavioral Sciences, 2015. 197: p. 1897-1906.
17. Asfour, O.S. and R.M. Alsousi, Exploring residents' attitude towards implementing housing design flexibility in the Gaza Strip. Journal of Engineering Research and Technology, 2016. 3(2).
18. Çakır, G. and B. Yurtsever, An assessment of critical thinking skills based architectural project course in terms of students' outputs. Procedia - Social and Behavioral Sciences, 2013. 106: p. 348-355.
19. Eigbeonan, A.B., Effective constructivism for the arch-design studio. International Journal of Architecture and Urban Development, 2013. 3(4): p. 5-12.

Journal of Engineering Research and Technology, Volume 1, Issue 4, December 2014

Digital Simulation of an Interline Dynamic Voltage Restorer for Voltage Compensation

Dr. P. Usha Rani
R.M.D. Engineering College, Chennai, pusharani71@yahoo.com

Abstract - The dynamic voltage restorer (DVR) provides an advanced solution for voltage sag/swell problems. The voltage-restoration process involves real-power injection into the distribution system. The interline DVR (IDVR) proposed in this paper provides a way to compensate voltage sag/swell occurring in a feeder. The IDVR consists of several DVRs connected to different distribution feeders in the power system, sharing common energy storage, where one DVR in the IDVR system works in voltage-sag/swell compensation mode while the other DVR operates in power-flow control mode.
The modelling and simulation of a single-phase IDVR system using the sinusoidal pulse width modulation (SPWM) technique for voltage sag and swell conditions, and of a three-phase IDVR using the space vector pulse width modulation (SVPWM) technique for the voltage sag condition, are presented. Closed-loop control of voltage sag and swell for a simple IDVR system is modeled and simulated using MATLAB software. The simulation results are presented to demonstrate the effectiveness of the proposed IDVR system.

Index Terms - Interline dynamic voltage restorer (IDVR), interline power flow controller (IPFC), sinusoidal pulse width modulation (SPWM), space vector pulse width modulation (SVPWM), total harmonic distortion (THD).

I Introduction

The need for electrical power is increasing, and with it the problems encountered while transmitting power through the distribution system. Voltage fluctuations are considered among the most severe power quality disturbances to be dealt with; even a short-duration voltage fluctuation can cause a malfunction or failure of a continuous process. Several types of voltage fluctuation can cause systems to malfunction, including surges and spikes, sags, swells, harmonic distortion, and momentary disruptions. Among them, voltage sag and swell are the major power-quality problems. A voltage swell is a sudden increase of the voltage to more than about 110% of the supply-voltage amplitude, whereas a voltage sag is a sudden decrease of the voltage to below about 90% of the supply-voltage amplitude. This is caused by the sudden removal or addition of load on the particular feeder. The voltage change is compensated by injecting a voltage in series with the supply, drawn from another feeder at the time of the disturbance, using a DVR. The IDVR system is presently one of the most cost-effective and highly efficient methods to mitigate voltage sag/swell.
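Using the thresholds just stated, a measured RMS voltage can be labelled as sag, swell, or normal. A minimal sketch (the per-unit example values are illustrative, chosen to match the sag and swell depths reported later in the paper):

```python
def classify_voltage(v_rms_pu: float) -> str:
    """Classify a measured RMS voltage, expressed in per unit of nominal.

    Thresholds follow the convention in the text: below ~0.9 p.u. is a
    sag, above ~1.1 p.u. is a swell, anything in between is normal.
    """
    if v_rms_pu < 0.9:
        return "sag"
    if v_rms_pu > 1.1:
        return "swell"
    return "normal"

# A 32.6 % sag corresponds to 1 - 0.326 = 0.674 p.u.,
# and a 44.1 % swell to 1 + 0.441 = 1.441 p.u.
print(classify_voltage(0.674))   # sag
print(classify_voltage(1.441))   # swell
print(classify_voltage(1.0))     # normal
```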
The concept of the interline dynamic voltage restorer (IDVR), where two or more voltage restorers are connected such that they share a common dc link, is similar to the interline power flow controller (IPFC) concept. In this paper a two-line IDVR system is described, which employs two DVRs connected to a common dc link and to two different feeders originating from two grid substations, which may be of the same or different voltage levels. When one of the DVRs compensates a voltage swell/sag, the other DVR in the IDVR system operates in power-flow control mode. DVR principles and voltage-restoration methods at the point of common coupling have been presented in the literature. The problem of voltage sags and swells and their severe impact on sensitive loads is described in [1]. Voltage swell and overvoltage compensation in a diode-bridge-rectifier-supported, transformer-less coupled DVR is discussed, with simulation and experimental results for unbalanced voltage swell compensation, in [2]. The performance of a DVR in mitigating voltage sags/swells is demonstrated with the help of MATLAB in [3]; the DVR handles both balanced and unbalanced situations. The modeling and simulation of the IDVR is presented in [4] and [6]. Paper [5] proposed a new topology based on the Z-source inverter for the DVR, in order to enhance the voltage-restoration capability of the device. The modeling of the DVR acting against voltage sags, by simulation in PSCAD/EMTDC, has been presented in [7]. Modeling and simulation of a single-phase IDVR using the multiple PWM technique is presented in [8] and [10]. The modeling and simulation of a single-phase Z-source-impedance-based DVR and IDVR using the multiple PWM technique is presented in [9] and [13]. In this paper, modeling and simulation of a single-phase IDVR with the SPWM technique is used for voltage sag and swell compensation.

II.
Principle of the IDVR

The IDVR system consists of several DVRs in different feeders, sharing a common dc link. The two-line IDVR system shown in Fig. 1 employs two DVRs connected to two different feeders; when one of the DVRs compensates a voltage swell/sag, the other DVR in the IDVR system operates in power-flow control mode. The common capacitor connected between the two feeders acts as the common dc supply. Voltage swells/sags in a transmission system are likely to propagate over a larger electrical distance than in a distribution system. For this reason, the two feeders of the IDVR system in Fig. 1 are considered to be connected to two different grid substations, and it is assumed that a voltage distortion in feeder 1 has a lesser impact on feeder 2.

Fig. 1 Schematic diagram of an IDVR

The upstream generation-transmission system is modeled such that the two feeders can be considered as two independent sources. These two voltage sources Vs1 and Vs2 are connected in series with the line impedances Zl1 and Zl2, which are in turn connected to the buses B1 and B2, as in Fig. 1. Each DVR is connected in series with its feeder, and the DVRs across the different feeders are connected by a common dc link. The load on each feeder is connected in series with the DVR, with Vl1 and Vl2 being the voltages across the loads. The injection of an appropriate voltage requires a certain amount of real and reactive power, which must be supplied by the DVR. The real power is supplied by means of an energy-storage facility connected to the dc link; large capacitors are used as the energy store in most DVRs.
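The series-injection idea above can be sketched with phasors: the DVR must inject the complex difference between the desired load voltage and the sagged supply voltage, and the real part of the resulting injected power is what must be drawn from the dc-link store. A minimal illustration (the per-unit values and the 30° load power-factor angle are assumptions for the example, not parameters from the paper):

```python
import cmath

# Desired (pre-sag) load voltage and a sagged supply voltage, per unit.
v_load_ref = cmath.rect(1.0, 0.0)      # 1.0 p.u. at 0 degrees
v_supply = cmath.rect(0.674, 0.0)      # 32.6 % sag, in phase with the reference

# Assumed load current: 1.0 p.u. lagging by 30 degrees (example value).
i_load = cmath.rect(1.0, -cmath.pi / 6)

# The DVR injects the phasor difference in series with the feeder.
v_inject = v_load_ref - v_supply

# Complex power delivered by the DVR; its real part must come from
# the common dc-link energy storage.
s_inject = v_inject * i_load.conjugate()
print(f"|V_inj| = {abs(v_inject):.3f} p.u., P_inj = {s_inject.real:.3f} p.u.")
```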
Generally, capacitors are used to generate reactive power in an ac power system; in a dc system, however, capacitors can be used to store energy. When energy is drawn from the storage capacitors, the capacitor terminal voltage decreases. Hence, large capacitors are needed in the dc-link energy store to effectively mitigate voltage swells/sags of large depth and long duration. The gate pulses can be generated using various modulation techniques; in this paper, the pulses for the switches are generated using SPWM.

III. Model of the IDVR using the SPWM technique

The Simulink models of the closed-loop-controlled IDVR system with the H-bridge inverter using the SPWM technique, for sag and swell conditions, were developed, and the simulation results are presented. The IDVR system with two back-to-back connected DVR stations was implemented with closed-loop control of the inverter switches. Fig. 2 shows the Simulink model of the closed-loop-controlled IDVR; the rectifier-inverter system is shown as a subsystem.

[Fig. 2 Simulation circuit of the IDVR: two feeders with sources, line impedances and loads, the two DVR subsystems, the common dc link, and measurement blocks]

Subsystem 1 consists of a full-bridge inverter with a filter; subsystem 2 shows the rectifier output voltage. The SPWM control technique is used to reduce the harmonic content of the output voltage. The driving sine pulses for the switches are shown in Fig. 3. Fig. 4(a) shows a 32.6% voltage sag initiated at 300 ms and kept until 600 ms, for a total sag duration of 300 ms, in low-voltage feeder 1. Fig. 4(b) and (c) show the voltage injected by DVR 2 and the compensated load voltage, respectively. Due to the presence of the IDVR, the load voltage remains constant throughout the voltage sag period.
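The SPWM comparison described above, and the THD figure reported from the FFT analysis below, can both be sketched numerically. This is a generic illustration, not the paper's Simulink model: the 50 Hz reference, 1 kHz carrier and 0.8 modulation index are assumed example values, and the distortion figure here covers all non-fundamental content of the raw two-level waveform (before output filtering):

```python
import numpy as np

f_ref, f_carrier, fs = 50.0, 1000.0, 200_000.0   # Hz (illustrative values)
t = np.arange(0, 0.04, 1 / fs)                   # two fundamental cycles

reference = 0.8 * np.sin(2 * np.pi * f_ref * t)  # modulation index 0.8
# Triangular carrier in [-1, 1], built from a sawtooth phase.
phase = (f_carrier * t) % 1.0
carrier = 4 * np.abs(phase - 0.5) - 1.0

# SPWM: switch high whenever the sine reference exceeds the carrier.
gate = reference > carrier
v_out = np.where(gate, 1.0, -1.0)                # two-level inverter output

# Distortion estimate from the FFT: non-fundamental vs. fundamental energy.
spectrum = np.abs(np.fft.rfft(v_out)) / len(v_out)
freqs = np.fft.rfftfreq(len(v_out), 1 / fs)
k_fund = np.argmin(np.abs(freqs - f_ref))
fundamental = spectrum[k_fund]
harmonics = np.sqrt(np.sum(spectrum[1:] ** 2) - fundamental ** 2)
print(f"distortion ~ {100 * harmonics / fundamental:.1f} % of fundamental")
```

The unfiltered two-level waveform carries large carrier-frequency sidebands; the low THD values quoted in the paper are for the filtered, compensated load voltage.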
Fig. 5 shows the common dc-link voltage waveform, and Fig. 6 the FFT analysis of the closed-loop IDVR system for the sag; the total harmonic distortion (THD) value is 4.81%.

Fig. 3 Driving pulses of the inverter switches
Fig. 4 Response of the IDVR to a voltage sag
Fig. 5 Common dc-link voltage for the sag
Fig. 6 FFT analysis of the IDVR for the sag

Fig. 7(a) shows a 44.1% voltage swell initiated at 300 ms and kept until 600 ms, for a total swell duration of 300 ms, in low-voltage feeder 1. Fig. 7(b) and (c) show the voltage injected by DVR 2 and the compensated load voltage, respectively. Due to the presence of the IDVR, the load voltage remains constant throughout the voltage swell period. Fig. 8 shows the common dc-link voltage waveform, and Fig. 9 the FFT analysis of the closed-loop IDVR system for the swell; the THD value is 0.12%.

Fig. 7 Response of the IDVR to a voltage swell
Fig. 8 Common dc-link voltage for the swell
Fig. 9 FFT analysis of the IDVR for the swell

IV. IDVR using the SVPWM technique

The proposed IDVR circuit with space vector PWM is shown in Fig. 10. Here, the error voltage in the dq frame is used to calculate the resultant reference voltage and the angle α within the eight-state (six-sector) space-vector framework. Table 1 shows the parameters used for the simulation studies. Subsystem 1 consists of feeder 1 and DVR 1, as shown in Fig. 12.
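The dq-frame step described above can be sketched as follows: from the d- and q-axis error voltages, the reference-vector magnitude and angle α are obtained, and the angle selects the active 60° sector used by the switching-time calculation. This is a generic SVPWM front-end sketch under assumed example values, not code extracted from the Simulink model:

```python
import math

def reference_vector(v_d_err: float, v_q_err: float):
    """From dq-frame error voltages, return the reference-vector
    magnitude, its angle alpha in [0, 2*pi), and the SVPWM sector (1-6)."""
    magnitude = math.hypot(v_d_err, v_q_err)
    alpha = math.atan2(v_q_err, v_d_err) % (2 * math.pi)
    sector = int(alpha // (math.pi / 3)) + 1   # six 60-degree sectors
    return magnitude, alpha, sector

# Example: equal d- and q-axis errors place the vector at 45 degrees,
# i.e. in sector 1.
mag, alpha, sector = reference_vector(1.0, 1.0)
print(f"|V*| = {mag:.3f}, alpha = {math.degrees(alpha):.1f} deg, sector {sector}")
```

The sector index and angle within the sector are then used to compute the active-vector dwell times T1, T2 and the zero-vector time T0, as in the model's switching-time calculator.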
DVR 1 consists of an inverter, a switching-time calculator and a switching-pulse generator. Subsystem 2 consists of feeder 2 and DVR 2; DVR 2 likewise has an inverter, a switching-time calculator and a switching-pulse generator.

Fig. 10 Simulation circuit of the three-phase IDVR

Table 1. Parameters of the two-line IDVR

Supply voltage, feeder 1         240 V
Supply voltage, feeder 2         707 V
Source impedance                 (0.1 + j3.142e-4) Ω
Line impedance (for 100 km)      (1.6 + j0.34) Ω
Series transformer turns ratio   1:1
Injection transformer ratio      1:1
DC voltage                       260 V
Fixed load resistance            40 Ω
Fixed load inductance            60 mH
Filter inductance                10 mH
Filter capacitance               0.0177 µF
Line frequency                   50 Hz
Carrier frequency                12003 Hz

Fig. 12 Subsystem 1 of the three-phase IDVR

Fig. 13 shows a 27.42% voltage sag initiated at 300 ms and kept until 800 ms, for a total sag duration of 500 ms, in low-voltage feeder 1. Fig. 14(a) and (b) show the voltage injected by DVR 2 and the compensated load voltage, respectively. Due to the presence of the IDVR, the load voltage remains constant throughout the voltage sag period. Fig. 15 shows the FFT analysis of the three-phase IDVR with the SVPWM model; the THD of the SVPWM system is found to be 0.08%.

Fig. 13 Voltage sag of the three-phase IDVR
Fig. 14 Response of the three-phase IDVR to a voltage sag
Fig. 15 FFT analysis of the three-phase IDVR

V.
Conclusion

The modelling and simulation of the single-phase IDVR system using the SPWM technique for sag and swell conditions have been presented, together with the modelling and simulation results of the three-phase IDVR using the space vector PWM technique. With the single-phase IDVR model, a 44.1% voltage swell and a 32.6% voltage sag were compensated using closed-loop control; with the three-phase IDVR model, a 27.42% voltage sag was compensated using closed-loop control. The simulation results indicate that the implemented control strategy compensates voltage sags with high accuracy, and show that the control technique is an improved method for voltage sag and swell compensation.

References

1. Chellai Benachaiba and Brahim Ferdi, "Voltage quality improvement using DVR", Electrical Power Quality and Utilization Journal, vol. XIV, no. 1, pp. 39-45, 2008.
2. Chi-Seng Lam, Man-Chung Wong and Ying-Duo Han, "Voltage swell and overvoltage compensation with unidirectional power flow controlled dynamic voltage restorer", IEEE Transactions on Power Delivery, vol. 23, no. 4, pp. 2513-2521, 2008.
3.
Paisan Boonchiam and Nadarajah Mithulanathan, "Detailed analysis of load voltage compensation for dynamic voltage restorer", Thammasat International Journal of Science and Technology, vol. 11, no. 3, pp. 1-6, 2006.
4. Mahinda Vilathgamuwa D., Choi S. S. and Wijekoon H. M., "Interline dynamic voltage restorer: a novel and economical approach for multiline power quality compensation", IEEE Transactions on Industry Applications, vol. 40, no. 6, pp. 1678-1685, 2004.
5. Mahinda Vilathgamuwa D., C. J. Gaanayake, P. C. Loh and Y. W. Li, "Voltage sag compensation with Z source inverter based dynamic voltage restorer", IEEE Industrial Electronics Conference, pp. 2242-2248, 2006.
6. Mahinda Vilathgamuwa D., Choi S. S. and Wijekoon H. M., "A novel technique to compensate voltage sags in multiline distribution system - the interline dynamic voltage restorer", IEEE Transactions on Industrial Electronics, vol. 53, no. 5, pp. 1603-1611, 2006.
7. Nguyen P. T. and Tapan K. Saha, "DVR against balanced and unbalanced voltage sags: modelling and simulation", Proceedings of the IEEE Industrial Electronics Conference, pp. 1-6, 2004.
8. Usha Rani P. and Rama Reddy S., "Digital simulation of an interline dynamic voltage restorer for voltage compensation", International Conference on Computer, Communication and Electrical Technology (ICCCET), IEEE Xplore 978-1-4244-9391-3/11, pp. 133-139, 2011.
9. Usha Rani P. and Rama Reddy S., "Modeling and simulation of ZSI based DVR for voltage compensation", International Conference on Computer, Communication and Electrical Technology (ICCCET), IEEE Xplore 978-1-4244-9391-3/11, pp. 90-96, 2011.
10. Usha Rani P. and Rama Reddy S., "Voltage sag/swell compensation in an interline dynamic voltage restorer", International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT), IEEE Xplore 978-1-4244-7925-2/11, pp. 309-314, 2011.
11. Usha Rani P.
and Rama Reddy S., "Voltage sag/swell compensation using Z-source inverter based dynamic voltage restorer", International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT), IEEE Xplore 978-1-4244-7925-2/11, pp. 268-273, 2011.
12. Usha Rani P., "Voltage swell compensation in an interline dynamic voltage restorer", Journal of Scientific and Industrial Research, vol. 73, no. 1, pp. 29-32, 2014.
13. Usha Rani P., "Modeling and simulation of MLI based interline dynamic voltage restorer", Australian Journal of Basic and Applied Sciences, vol. 8, no. 2, pp. 162-167, 2014.

P. Usha Rani is a professor in the Electrical and Electronics Engineering Department, R.M.D. Engineering College, Chennai, India. She received her B.E. degree in electrical and electronics engineering from the Government College of Technology, Coimbatore, India, her M.E. degree in power systems from the College of Engineering, Anna University, Chennai, India, and her Ph.D. in the area of power electronics and drives from Anna University, Chennai, India. She has published over 26 technical papers in international and national journals and conference proceedings (five in IEEE Xplore). She has 17 years of teaching experience; her earlier industrial experience was with Chemin Controls, Pondicherry, India. Her research interests are the application of power electronics to power quality problems and FACTS. She is a life member of the Indian Society for Technical Education and a member of IEEE.

Journal of Engineering Research and Technology, Volume 1, Issue 3, September 2014

Fuzzy Optimal Control of a Poisoning-Pest Model by Using α-Cuts

Mohamad Hadi Farahi 1, Mansooreh Keshtegar 2, Marzieh Najariyan 3
1 Department of Applied Mathematics, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran, farahi@math.um.ac.ir
2 Department of Applied Mathematics, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad,
Mashhad, Iran, keshtegar_mino@hotmail.com
3 Department of Applied Mathematics, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran, marzieh.najariyan@gmail.com

Abstract— In this article, a dynamical system representing the poisoning-pest model is considered. First, a mathematical model of the poisoning-pest process is formulated. Since the exact number of pests is not known, it is natural to treat the variables as fuzzy variables, and hence to associate a fuzzy dynamical system with the poisoning-pest model. To solve such a fuzzy optimal control system, one can use α-cuts and Zadeh's extension principle to convert it into a non-fuzzy optimal control system. The resulting optimal control problem is solved by a discretization method.

Index Terms—Fuzzy optimal control, poisoning-pest model, Zadeh's extension principle, fuzzy solution, generalized differentiability

1 Introduction

Hukuhara differentiability (H-differentiability) for fuzzy functions was originally introduced by Puri and Ralescu in [1]. After that, Kaleva [2] discussed the properties of differentiable fuzzy functions using the Hukuhara derivative. Fuzzy differential equations are studied in several papers [3, 4], but the Hukuhara derivative has a disadvantage: the fuzziness of the solution increases as time goes on. Bede and Gal [5] introduced generalized differentiability, which does not suffer from this disadvantage. The drawback of generalized differentiability compared with H-differentiability is that a fuzzy differential equation need not have a unique solution. Whenever the differential equation and the control functions in an optimal control problem are fuzzy, we are facing a fuzzy optimal control problem. This problem has been considered by many authors; for example, Diamond and Kandel [4] showed the existence of the fuzzy optimal control for the system $\dot{\tilde{x}}(t) = a(t)\odot\tilde{x}(t) \oplus \tilde{u}(t)$, $\tilde{x}(0)=\tilde{x}_0$.
Najariyan and Farahi in [6, 7] proposed new techniques for solving, respectively, linear fuzzy controlled systems with fuzzy initial conditions and fuzzy optimal linear control systems with fuzzy coefficients, by using α-cuts. One application of optimal control is the problem of controlling pests, and many attempts have been made in this area (see [8, 9]). Often we do not know the exact number of pests when we want to control them; in such cases, researchers have used fuzzy theory (see [10]). This article is concerned with minimizing the number of pests. Because the exact number of pests is not known to us, we associate a fuzzy optimal control problem with the model.

This paper is organized as follows: in Section 2 we present basic definitions and theorems on fuzzy numbers and their operations; this section also covers Zadeh's extension principle and generalized differentiability. In Section 3 we define the fuzzy optimal control of a poisoning-pest model. In Section 4 we apply the technique to a concrete poisoning-pest model. Finally, Section 5 gives a brief conclusion.

2 Basic Concepts

Let $\Omega$ be a set in $\mathbb{R}$; a fuzzy subset $\tilde{\mu}$ of $\Omega$ is defined by its membership function $\tilde{\mu}(t)$, which produces values in $[0,1]$ for all $t$ in $\Omega$, i.e., $\tilde{\mu}: \Omega \to [0,1]$. A fuzzy number is a convex, normalized fuzzy set of the real line $\mathbb{R}$ whose membership function is piecewise continuous; the class of fuzzy numbers is denoted $\mathcal{F}(\Omega)$. A triangular fuzzy number $\tilde{\mu}$ is defined by three numbers $a < b < c$, where the base of the triangle is the interval $[a,c]$ and its vertex is at $t = b$. Triangular fuzzy numbers will be written as $\tilde{\mu} = (a,b,c)$ (see [11]).
If $\tilde{\mu}$ is a fuzzy number, then an α-cut of $\tilde{\mu}$, written $\tilde{\mu}_\alpha$, is defined as

$\tilde{\mu}_\alpha = [\tilde{\mu}]_\alpha = \begin{cases} \{x\in\Omega \mid \tilde{\mu}(x)\ge\alpha\}, & 0<\alpha\le 1,\\ \overline{\{x\in\Omega \mid \tilde{\mu}(x)>0\}}, & \alpha=0, \end{cases}$

where $\bar{A}$ denotes the closure of $A\subset\Omega$, so that $\tilde{\mu}_0$ is the support of $\tilde{\mu}$ (see [12]). In this paper we denote the lower bound of $\tilde{\mu}_\alpha$ by $\underline{\mu}_\alpha$ and its upper bound by $\bar{\mu}_\alpha$.

Definition 1 (Zadeh's extension principle). Let $Z$ be a Cartesian product of universes, $Z = Z_1\times Z_2\times\cdots\times Z_r$, let $\tilde{\mu}_1,\tilde{\mu}_2,\dots,\tilde{\mu}_r$ be $r$ fuzzy sets in $Z_1,Z_2,\dots,Z_r$ respectively, and let $Y$ be a given space. Each function $f: Z\to Y$ induces a corresponding function $\tilde{f}: \mathcal{F}(Z_1)\times\mathcal{F}(Z_2)\times\cdots\times\mathcal{F}(Z_r)\to\mathcal{F}(Y)$ (i.e., $\tilde{f}$ is a function mapping fuzzy sets in $Z$ to fuzzy sets in $Y$) defined by

$\tilde{f}(\tilde{\mu}_1,\tilde{\mu}_2,\dots,\tilde{\mu}_r)(y) = \begin{cases} \sup_{(z_1,z_2,\dots,z_r)\in f^{-1}(y)} \min\{\tilde{\mu}_1(z_1),\tilde{\mu}_2(z_2),\dots,\tilde{\mu}_r(z_r)\}, & f^{-1}(y)\ne\emptyset,\\ 0, & f^{-1}(y)=\emptyset, \end{cases}$

where $f^{-1}$ is the inverse of $f$. The function $\tilde{f}$ is said to be obtained from $f$ by the extension principle.

An important consequence of Zadeh's extension principle is the characterization of the α-level images of a fuzzy set through a continuous function $f$. If $f:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is continuous, then by Zadeh's extension principle one can extend it to $\tilde{f}:\mathcal{F}(\mathbb{R})\times\mathcal{F}(\mathbb{R})\to\mathcal{F}(\mathbb{R})$ by

$\tilde{f}(\tilde{\mu},\tilde{\nu})(z) = \sup_{z=f(s,t)} \min(\tilde{\mu}(s),\tilde{\nu}(t)). \quad (1)$

It is well known that

$\tilde{f}_\alpha(\tilde{\mu},\tilde{\nu}) = f(\tilde{\mu}_\alpha,\tilde{\nu}_\alpha), \quad \alpha\in[0,1],\ \tilde{\mu},\tilde{\nu}\in\mathcal{F}(\mathbb{R}). \quad (2)$

Using Zadeh's extension principle, the operations of addition, $\oplus$, multiplication, $\otimes$, and scalar multiplication, $\odot$, on $\mathcal{F}(\mathbb{R})$ are defined respectively by

$(\tilde{\mu}\oplus\tilde{\nu})(s) = \sup_{t\in\mathbb{R}} \min\{\tilde{\mu}(t),\tilde{\nu}(s-t)\},$

$(\tilde{\mu}\otimes\tilde{\nu})(s) = \sup_{t\in\mathbb{R}} \min\{\tilde{\mu}(t),\tilde{\nu}(s/t)\},$

and

$(\lambda\odot\tilde{\mu})(s) = \begin{cases} \tilde{\mu}(s/\lambda), & \lambda\ne 0,\\ \chi_{\{0\}}, & \lambda=0, \end{cases}$

where $\chi_{\{0\}}$ is the characteristic function of $0$. The following properties hold for all α-cuts:

$[\tilde{\mu}\oplus\tilde{\nu}]_\alpha = \tilde{\mu}_\alpha + \tilde{\nu}_\alpha, \quad [\lambda\odot\tilde{\mu}]_\alpha = \lambda\tilde{\mu}_\alpha, \quad \alpha\in[0,1],$

and

$[\tilde{\mu}\otimes\tilde{\nu}]_\alpha = [\min\{\underline{\mu}_\alpha\underline{\nu}_\alpha, \underline{\mu}_\alpha\bar{\nu}_\alpha, \bar{\mu}_\alpha\underline{\nu}_\alpha, \bar{\mu}_\alpha\bar{\nu}_\alpha\}, \max\{\underline{\mu}_\alpha\underline{\nu}_\alpha, \underline{\mu}_\alpha\bar{\nu}_\alpha, \bar{\mu}_\alpha\underline{\nu}_\alpha, \bar{\mu}_\alpha\bar{\nu}_\alpha\}].$
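On α-cuts, these fuzzy operations reduce to ordinary interval arithmetic on the cut endpoints, as in (2) and the product rule above. The following sketch is our own illustration (not code from the paper) applying them to triangular fuzzy numbers:

```python
# Interval arithmetic on alpha-cuts of triangular fuzzy numbers (a, b, c).
# Illustrative sketch only; function names are ours, not the paper's.

def alpha_cut(tri, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def add(cut1, cut2):
    # [mu (+) nu]_alpha = mu_alpha + nu_alpha  (interval addition)
    return (cut1[0] + cut2[0], cut1[1] + cut2[1])

def mul(cut1, cut2):
    # [mu (x) nu]_alpha = [min of endpoint products, max of endpoint products]
    p = [cut1[i] * cut2[j] for i in (0, 1) for j in (0, 1)]
    return (min(p), max(p))

def scale(lam, cut):
    # [lam (.) mu]_alpha = lam * mu_alpha, with endpoints reordered if lam < 0
    lo, hi = lam * cut[0], lam * cut[1]
    return (min(lo, hi), max(lo, hi))

if __name__ == "__main__":
    x = alpha_cut((4, 5, 6), 0.5)              # the paper's x(0) = (4,5,6) at alpha = 0.5
    print(x)                                   # (4.5, 5.5)
    print(add(x, alpha_cut((0, 1, 2), 0.5)))   # (5.0, 7.0)
    print(mul(x, (-1.0, 1.0)))                 # (-5.5, 5.5)
```

Note how the product rule takes the min and max over all four endpoint products, which is exactly the bracketed formula above.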
According to the definitions of addition and scalar multiplication, the operation of subtraction, $\ominus$, is defined similarly.

Definition 2 [5]. Let $\tilde{u},\tilde{v}\in\mathcal{F}(\mathbb{R})$. If there exists $\tilde{w}\in\mathcal{F}(\mathbb{R})$ such that $\tilde{u} = \tilde{v}\oplus\tilde{w}$, then $\tilde{w}$ is called the Hukuhara difference of $\tilde{u}$ and $\tilde{v}$, denoted $\tilde{u}\ominus_H\tilde{v}$.

Definition 3 [5]. Let $\tilde{x}: T\subseteq\mathbb{R}\to\mathcal{F}(\mathbb{R})$ and $t_0\in T$. We say that $\tilde{x}$ is differentiable at $t_0$ if:

(i) there exists an element $\dot{\tilde{x}}(t_0)\in\mathcal{F}(\mathbb{R})$ such that, for all $h>0$ sufficiently near $0$, the differences $\tilde{x}(t_0+h)\ominus_H\tilde{x}(t_0)$ and $\tilde{x}(t_0)\ominus_H\tilde{x}(t_0-h)$ exist and

$\lim_{h\to 0^+} \frac{\tilde{x}(t_0+h)\ominus_H\tilde{x}(t_0)}{h} = \lim_{h\to 0^+} \frac{\tilde{x}(t_0)\ominus_H\tilde{x}(t_0-h)}{h} = \dot{\tilde{x}}(t_0),$

or

(ii) there exists an element $\dot{\tilde{x}}(t_0)\in\mathcal{F}(\mathbb{R})$ such that, for all $h<0$ sufficiently near $0$, the differences $\tilde{x}(t_0+h)\ominus_H\tilde{x}(t_0)$ and $\tilde{x}(t_0)\ominus_H\tilde{x}(t_0-h)$ exist and

$\lim_{h\to 0^-} \frac{\tilde{x}(t_0+h)\ominus_H\tilde{x}(t_0)}{h} = \lim_{h\to 0^-} \frac{\tilde{x}(t_0)\ominus_H\tilde{x}(t_0-h)}{h} = \dot{\tilde{x}}(t_0).$

Theorem 1 [12]. Let $\tilde{x}: T\to\mathcal{F}(\mathbb{R})$ be a function and write $\tilde{x}_\alpha(t) = [\underline{x}_\alpha(t), \bar{x}_\alpha(t)]$ for each $\alpha\in[0,1]$. Then:

(i) if $\tilde{x}$ is differentiable in the first form (i), then $\underline{x}_\alpha$ and $\bar{x}_\alpha$ are differentiable functions and $\dot{\tilde{x}}_\alpha(t) = [\dot{\underline{x}}_\alpha(t), \dot{\bar{x}}_\alpha(t)]$;

(ii) if $\tilde{x}$ is differentiable in the second form (ii), then $\underline{x}_\alpha$ and $\bar{x}_\alpha$ are differentiable functions and $\dot{\tilde{x}}_\alpha(t) = [\dot{\bar{x}}_\alpha(t), \dot{\underline{x}}_\alpha(t)]$.

Now consider the fuzzy initial value problem

$\dot{\tilde{x}}(t) = \tilde{f}(t,\tilde{x}(t)), \quad \tilde{x}(0)=\tilde{x}_0, \quad (3)$

where $\tilde{f}: [0,T]\times\mathcal{F}(\mathbb{R})\to\mathcal{F}(\mathbb{R})$ is obtained by Zadeh's extension principle from a continuous function $f: [0,T]\times\mathbb{R}\to\mathbb{R}$; note that $\tilde{f}$ is continuous because $f$ is continuous (see [13]), and by (2) we have $\tilde{f}_\alpha(t,\tilde{x}) = f(t,\tilde{x}_\alpha)$, where $f(t,A) = \{f(t,a)\mid a\in A\}$. Associated with (3) we can consider the crisp differential equation

$\dot{x}(t) = f(t,x(t)), \quad x(0)=x_0, \quad (4)$

where $\dot{x}(t)$ is the derivative of a crisp function $x: [0,T]\to\mathbb{R}$. For more details see [14].

Theorem 2. Let $\tilde{x}_0\in\mathcal{F}(\mathbb{R})$. Suppose that $f$ is a continuous function, that for each $x_0\in\mathbb{R}$ there exists a unique solution $x(\cdot, x_0)$ of (4), and that $x(t,\cdot)$ is continuous in $\mathbb{R}$ for each $t\in[0,T]$. Then:

(i) if $f$ is nondecreasing with respect to the second argument, then the fuzzy solution of (3) and the solution of (4) via the derivative in the first form (i) are identical;

(ii) if $f$ is nonincreasing with respect to the second argument, then the fuzzy solution of (3) and the solution of (4) via the derivative in the second form (ii), if it exists, are identical.

Proof. See [12].

3 Optimal Control of the Poisoning-Pest Model

In this section we present a fuzzy optimal control for the poisoning-pest model. We are going to determine the amount of poison sufficient to kill the approximate number of pests. Suppose that $\tilde{x}(t)$ is the pest density and $\tilde{u}(t)$ is the speed of poison insufflation at time $t\ge 0$, where $\tilde{x}$ and $\tilde{u}$ are fuzzy numbers. We hope that over the time interval $[0, t_f]$ the pest density is reduced to a desirable level. Let the pest density at $t=0$ be $\tilde{x}_1$. We want to minimize the cost of the poison and the harm done to the crop. The fuzzy control model of the poisoning-pest problem is as follows:

$\dot{\tilde{x}}(t) = \tilde{f}(\tilde{x}(t)) \ominus \tilde{u}(t)\otimes\tilde{x}(t), \quad (5)$

where the initial condition is $\tilde{x}(0)=\tilde{x}_1$ and the final condition is $\tilde{x}(t_f)=\tilde{x}_f$. The function $\tilde{f}(\tilde{x}(t))$ can be written as $r\odot(\tilde{x} \oplus (\tilde{x}\otimes\tilde{x})\odot\frac{-1}{k})$, where $r$ is the growth rate of the pest density and $k$ is the maximum pest density of the environment (see [15] for more details). The objective function is

$\min J(\tilde{x}(t),\tilde{u}(t)) = \int_0^{T} [\tilde{g}(\tilde{x}(t)) \oplus \tilde{h}(\tilde{u}(t))]\,dt, \quad (6)$

where the function $\tilde{g}(\tilde{x}(t))$ denotes the cost of pest harm and the function $\tilde{h}(\tilde{u}(t))$ denotes the expense of the poison at time $t\ge 0$.
4 Application

We consider the following fuzzy optimal control problem (see [15]):

$\dot{\tilde{x}}(t) = r\odot\left(\tilde{x} \oplus (\tilde{x}\otimes\tilde{x})\odot\frac{-1}{k}\right) \ominus \tilde{u}(t)\otimes\tilde{x}(t) \quad (7)$

with cost function

$J(\tilde{x}(t),\tilde{u}(t)) = \int_0^{t_f} [\tilde{g}(\tilde{x}(t)) \oplus \tilde{h}(\tilde{u}(t))]\,dt, \quad (8)$

where $r = 10/9$, $k=20$, $t_f=10$, $\tilde{x}(0) = \tilde{5} = (4,5,6)$, $\tilde{x}(t_f=10) = \tilde{1} = (0,1,2)$, $\tilde{g}(\tilde{x}(t)) = 10\odot\tilde{x}(t)$, and $\tilde{h}(\tilde{u}(t)) = 2\odot\tilde{u}(t)$. So the fuzzy optimal control problem can be written as:

$\min J(\tilde{x}(t),\tilde{u}(t)) = \int_0^{10} [10\odot\tilde{x}(t) \oplus 2\odot\tilde{u}(t)]\,dt$

$\dot{\tilde{x}}(t) = \frac{10}{9}\odot\left(\tilde{x} \oplus (\tilde{x}\otimes\tilde{x})\odot\frac{1}{-20}\right) \ominus \tilde{u}(t)\otimes\tilde{x}(t)$

$\tilde{x}(0) = (4,5,6), \quad \tilde{x}(T=10) = (0,1,2).$

We solve this problem using the α-cuts technique. Write $\tilde{x}_\alpha = [\underline{x}_\alpha, \bar{x}_\alpha]$ and $\tilde{u}_\alpha = [\underline{u}_\alpha, \bar{u}_\alpha]$. In this problem,

$\tilde{f}(\tilde{x},\tilde{u}) = \frac{10}{9}\odot\left(\tilde{x} \ominus \frac{1}{20}\odot(\tilde{x}\otimes\tilde{x})\right) \ominus \tilde{u}(t)\otimes\tilde{x}(t)$

is obtained by Zadeh's extension principle from the continuous function

$f(x,u) = \frac{10}{9}\left(x(t) - \frac{1}{20}x^2\right) - u(t)x(t).$

Since $u(\cdot)$ is a bounded function, it is not difficult to show that $f(x,u)$ is an increasing function with respect to $x(t)$ for all $|u(t)|\le 1$ on $[0,10]$, so we must use the derivative in the first form (i). Now $\dot{\underline{x}}_\alpha = f(\underline{x}_\alpha, \underline{u}_\alpha, \bar{u}_\alpha)$ and $\dot{\bar{x}}_\alpha = f(\bar{x}_\alpha, \underline{u}_\alpha, \bar{u}_\alpha)$. The objective function is the average of $10\underline{x}_\alpha + 2\underline{u}_\alpha$ and $10\bar{x}_\alpha + 2\bar{u}_\alpha$. So one obtains the following non-fuzzy optimal control problem:

$\min J = \frac{1}{2}\int_0^{10} \left(10\underline{x}_\alpha(t) + 10\bar{x}_\alpha(t) + 2\underline{u}_\alpha(t) + 2\bar{u}_\alpha(t)\right) dt \quad (9)$

subject to

$\dot{\underline{x}}_\alpha(t) = \frac{10}{9}\left(\underline{x}_\alpha(t) - \frac{\underline{x}_\alpha^2(t)}{20}\right) - \bar{u}_\alpha(t)\,\underline{x}_\alpha(t) \quad (10)$

$\dot{\bar{x}}_\alpha(t) = \frac{10}{9}\left(\bar{x}_\alpha(t) - \frac{\bar{x}_\alpha^2(t)}{20}\right) - \underline{u}_\alpha(t)\,\bar{x}_\alpha(t) \quad (11)$

where the initial conditions are $\underline{x}_\alpha(0) = 5\alpha + 4(1-\alpha)$ and $\bar{x}_\alpha(0) = 5\alpha + 6(1-\alpha)$, and the final conditions are $\underline{x}_\alpha(10) = \alpha$ and $\bar{x}_\alpha(10) = \alpha + 2(1-\alpha)$, for all $\alpha\in[0,1]$. Because $\underline{u}_\alpha\in[-1,1]$ and $\bar{u}_\alpha\in[-1,1]$, one can define $\underline{u}_\alpha = A_1\sin(\pi t/4)$ and $\bar{u}_\alpha = A_2\sin(\pi t/4)$, where $A_1\in[-1,1]$ and $A_2\in[-1,1]$. We solve this problem by a discretization method (see [16] for more details); solutions were obtained for $\alpha = 0, 0.25, 0.5, 0.75, 1$.
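To make the discretization step concrete, the sketch below (our own illustration, not the authors' code) forward-Euler-integrates the coupled α-cut system (10)-(11) under the parameterized controls $A_1\sin(\pi t/4)$ and $A_2\sin(\pi t/4)$, and does a crude grid search over the amplitudes to approach the terminal conditions:

```python
# Euler integration of the coupled alpha-cut system (10)-(11).
# Illustrative sketch of the discretization idea; not the paper's solver.

import math

R, K, TF = 10.0 / 9.0, 20.0, 10.0   # growth rate r, capacity k, final time t_f

def simulate(alpha, a1, a2, n=10000):
    """Euler-integrate the lower/upper alpha-cut states; return both trajectories."""
    dt = TF / n
    x_lo = 5.0 * alpha + 4.0 * (1.0 - alpha)   # lower initial condition
    x_up = 5.0 * alpha + 6.0 * (1.0 - alpha)   # upper initial condition
    lo, up = [x_lo], [x_up]
    for i in range(n):
        t = i * dt
        u_lo = a1 * math.sin(t * math.pi / 4.0)
        u_up = a2 * math.sin(t * math.pi / 4.0)
        # Eq. (10): the lower state is driven by the upper control, and vice versa (11)
        x_lo += dt * (R * (x_lo - x_lo**2 / K) - u_up * x_lo)
        x_up += dt * (R * (x_up - x_up**2 / K) - u_lo * x_up)
        lo.append(x_lo)
        up.append(x_up)
    return lo, up

def terminal_error(alpha, a1, a2):
    """Distance of the terminal states from the targets alpha and alpha + 2(1-alpha)."""
    lo, up = simulate(alpha, a1, a2)
    return abs(lo[-1] - alpha) + abs(up[-1] - (alpha + 2.0 * (1.0 - alpha)))

if __name__ == "__main__":
    # crude grid search over the control amplitudes A1, A2 in [-1, 1]
    best = min((terminal_error(0.5, a1 * 0.5, a2 * 0.5), a1 * 0.5, a2 * 0.5)
               for a1 in range(-2, 3) for a2 in range(-2, 3))
    print("best (terminal error, A1, A2):", best)
```

With $A_1 = A_2 = 0$ the two states decouple into plain logistic growth, which gives a convenient sanity check against the closed-form logistic solution.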
The solutions $\underline{x}_\alpha, \bar{x}_\alpha$ and $\underline{u}_\alpha, \bar{u}_\alpha$ are shown in Figure 1 and Figure 2, respectively.

5 Conclusion

Fuzzy optimal control theory is applied to a poisoning-pest problem. By applying α-cuts and using Zadeh's extension principle, the fuzzy optimal control of a poisoning-pest system is converted to a new form involving lower and upper states and controls. The resulting non-fuzzy optimal control problem is solved by a discretization method.

Acknowledgment

The authors would like to thank the anonymous reviewers for their careful reading, constructive comments, and helpful suggestions, which have greatly improved the paper. This research was supported by a grant from Ferdowsi University of Mashhad, No. MA89194FAR.

Figure 1: the number of pests, $\underline{x}_\alpha, \bar{x}_\alpha$. Figure 2: the speed of poison, $\underline{u}_\alpha, \bar{u}_\alpha$.

References

[1] Puri, M. and Ralescu, D., Differentials of fuzzy functions, J. Math. Anal. Appl., 91:552-558, 1983.
[2] Kaleva, O., Fuzzy differential equations, Fuzzy Sets and Systems, 24:301-317, 1987.
[3] Diamond, P., Brief note on the variation of constants formula for fuzzy differential equations, Fuzzy Sets and Systems, 129:65-217, 2002.
[4] Diamond, P. and Kandel, P., Metric Space of Fuzzy Sets: Theory and Application, World Scientific, Singapore, 1994.
[5] Bede, B. and Gal, S. G., Generalizations of the differentiability of fuzzy-number-valued functions with applications to fuzzy differential equations, Fuzzy Sets and Systems, 151:581-599, 2005.
[6] Najariyan, M. and Farahi, M. H., Optimal control of fuzzy linear controlled system with fuzzy initial conditions, Iranian Journal of Fuzzy Systems, 3:21-35, 2013.
[7] Najariyan, M. and Farahi, M. H., A new approach for optimal fuzzy linear time invariant controlled system with fuzzy coefficients, Journal of Computational and Applied Mathematics, 259:682-694, 2014.
[8] Kar, T. K., Ghorai, A. and Jana, S., Dynamics of pest and its predator model with disease in the pest and optimal use of pesticide, Journal of Theoretical Biology, 310:187-197, 2012.
[9] Ghosh, S. and Bhattacharya, D. K., Optimization in microbial pest control: an integrated approach, Applied Mathematical Modelling, 34:1382-1395, 2010.
[10] Scherm, H., Simulating uncertainty in climate-pest models with fuzzy numbers, Environmental Pollution, 108:373-379, 2000.
[11] Buckley, J. J. and Jowers, L. J., Monte Carlo Methods in Fuzzy Optimization, Studies in Fuzziness and Soft Computing, Vol. 222.
[12] Chalco-Cano, Y. and Roman-Flores, H., Comparation between some approaches to solve fuzzy differential equations, Fuzzy Sets and Systems, 160:1517-1527, 2009.
[13] Roman-Flores, H., Barros, L. and Bassanezi, R., A note on Zadeh's extensions; fuzzy sets on Banach spaces, Inform. Sci., 144:227-247, 2002.
[14] Chalco-Cano, Y. and Roman-Flores, H., On the new solution of fuzzy differential equations, Chaos Solitons Fractals, 38:112-119, 2006.
[15] Huang, Z. and Chen, S., Optimal fuzzy control of a poisoning-pest model, Applied Mathematics and Computation, 171:730-737, 2005.
[16] Najariyan, M., Farahi, M. H. and Alavian, M., Optimal control of HIV infection by using fuzzy dynamical systems, The Journal of Mathematics and Computer Science, 2:639-649, 2011.

Mohammad Hadi Farahi is a full professor at the Department of Applied Mathematics, School of Mathematics, Ferdowsi University of Mashhad, Iran. He obtained his B.Sc., M.Sc. and Ph.D., respectively, from Ferdowsi University of Mashhad, Iran, in 1972, Brunel University, UK, in 1978, and Leeds University, UK, in 1996. He has published more than 70 technical papers in international journals, as well as five textbooks. His scientific interests include optimal control, optimization, sliding mode control, bio-mathematics, ODEs, PDEs and approximation theory.
Mansooreh Keshtegar received her B.Sc. and M.Sc. from Ferdowsi University of Mashhad, Iran, in 2011 and 2013, respectively. She is now doing research in the area of control theory.

Marzieh Najariyan received her B.Sc. degree from Payam Noor University of Torbat-e-Heydarieh in 2005, and her M.Sc. and Ph.D. degrees from the Department of Applied Mathematics, Ferdowsi University of Mashhad, Iran, in July 2008 and December 2013, respectively. She has published nine papers in international journals and has given oral presentations at several national and international conferences. She is currently working on fuzzy differential equations and fuzzy control theory.

Journal of Engineering Research and Technology, Volume 5, Issue 4, December 2018

Identification of Factors Affecting Special Hardship Cases Shelters' Interventions at Gaza Strip – UNRWA

Nabil I. El-Sawalhi and Belal N. Alyazji

Abstract—The sheltering and housing of special hardship case families is one of the main objectives of UNRWA. It contributes to alleviating the suffering of poor refugee families. There are many problems beyond non-interference in the process of housing, which negatively affect human dignity and the right to housing. This paper aims to improve the housing and intervention plans in UNRWA for families classified under special hardship cases. The factors affecting the intervention process were identified. A questionnaire was administered to a group of engineers and social workers from ICIP and RSSP who are specialists working in shelter at UNRWA in the Gaza Strip. 105 questionnaires were distributed, and a total of 85 were received, a response rate of 81%. The results showed a breakdown of the social, technical, political, legal and economic factors, challenges and constraints that affect the intervention process by UNRWA. The most important social factors were the population density and the number of families within the shelter.
Severe defects affecting shelter stability and the number of existing rooms in the shelter were the most important technical factors. The type of shelter property held by the family and the poverty level of the household are considered the two most important economic factors. Moreover, the most important challenges and constraints were the availability of funds and budget for the intervention process.

Index Terms—Hardship cases, factors, UNRWA, intervention.

I Introduction

The United Nations Relief and Works Agency (UNRWA) was mandated by the UN in 1949 to provide support to Palestinian refugees. UNRWA maintains a dedicated special support programme, the Infrastructure and Camp Improvement Programme (ICIP), which provides support to the relief and emergency departments. The agency's Special Hardship Case (SHC) programme focuses on providing cushion support to the poorest families among the refugee population. Upon implementation, the SHC programme increased the amount of shelter, economic and social assistance to needy families. The agency is aware that shelter needs are best addressed within an integrated approach to the human development of refugees. Over the years, the agency has intended to enhance its capacity for implementation by building its own strategies and standards. Several objectives will be pursued within the framework of a comprehensive and integrated approach to shelter interventions, such as shelter rehabilitation, reconstruction and re-housing for SHCs. The SHC survey represents the first comprehensive attempt by the agency to describe the socio-economic conditions of SHC families in the five fields of UNRWA operations [1]. Although most refugees have been able to make improvements and additions to their shelters over the years, the very poorest refugees often live in shelters that are now in extremely bad condition; wet, crumbling walls, leaking zinc roofs and rodent infestation cause additional social and health problems.
UNRWA may be able to repair or reconstruct hundreds of shelters in the coming years for the beneficiaries who join its waiting list each year for shelter rehabilitation. Social, economic, technical, political and other factors interact in the prototype and housing schemes of SHC intervention or assistance. In order to provide sound evaluation procedures, equity and transparency for beneficiaries (SHCs), unified criteria should be applied to all cases. The parameters, and their influence on interventions and shelter assistance, need to be determined; most often, four main factors (economic, social, technical and political) intervene in shelter assessment. It is important to identify the important parameters in order to make beneficiary selection consistent. This research aims to improve shelter and housing schemes and interventions for SHCs with regard to their reliability and application. The specific objectives of this research are: to identify the factors affecting shelter assessment, to identify the parameters most affecting shelter interventions, and to check the logic and plausibility of the means-to-ends applied to existing interventions for beneficiaries.

UNRWA believes that decent living conditions for refugees are fundamental to their human dignity and do not compromise their right of return, so improving critically substandard shelters, especially for the most vulnerable refugees, remains one of its goals. Priority will be given to the special hardship cases (SHCs). As a strategy, ICIP developed an agency-wide shelter rehabilitation strategy as one of the approaches that provides a decent standard to refugees [2]. The United Nations [3] reported that UNRWA aims to achieve the human development goal of ensuring that Palestinian refugees enjoy a decent standard of living through interventions by its Relief and Social Services Programme (RSSP), microfinance programme and ICIP, in collaboration with host countries and national and international partners. Interventions under the ICIP prioritized families classified as absolutely or abjectly poor in the Gaza Strip, Jordan, Lebanon and the West Bank. In addition, work was initiated on facilities, including schools, health centers and vocational training and community development centers, while work on solid waste disposal, drainage and water and sewerage systems was carried out to prevent the spread of diseases, without prejudice to the agency's position concerning the responsibility of host authorities to administer the camps. In the Gaza Strip, under emergency assistance, many families benefited from UNRWA shelter repair, construction and reconstruction programming. According to the ICIP plan for 2010-2011 [4], expanding the shelter rehabilitation sub-programme beyond special hardship cases in assessment and planning to include other vulnerable groups living in unsafe shelters will be nullified, and may actually further contribute to deteriorating the macro situation, if minimum programme capacity is not created at the field level. The programme's approach to shelter, housing and re-housing is guided by the right to adequate housing, taking into account affordability, appropriateness and acceptability.

II Literature Review

A Shelter Rights

"Housing rights are seen as an integral part of economic, social and cultural rights within the United Nations, European, Inter-American, and African human rights instruments" [5].
Article 25 of the Universal Declaration of Human Rights states that "everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control" [3]. UNRWA [6] defined a shelter as a single-family dwelling consisting of one or more rooms, including a kitchen and toilet; the shelter may be part of a shared dwelling (kitchens and sanitary units may also be shared). A cottage / agricultural shelter is a small shelter not exceeding 80 square meters in size, located in a rural area, not used as a permanent residence and used primarily for leisure; a makeshift shelter is a tent or a shelter made out of corrugated iron sheets, wood or other scrap materials. Many international human rights instruments, such as the 1948 Universal Declaration of Human Rights, recognize housing as one of the rights to be granted to human beings. The UN-Habitat agenda [7] mentions that one of its main objectives is a collaborative global movement towards adequate housing for all and improving housing for, and the living conditions of, slum dwellers. Its main objective is to assist member states in working towards the realization of the right to adequate housing. Upgrading of sub-standard shelters is an integrated approach which addresses multiple household-level needs faced by vulnerable families living in sub-standard buildings. It involves the provision of assistance to support permanent shelter and household-level WASH upgrades in exchange for security of tenure and rent reduction. The intervention addresses the physical aspects of poor living conditions whilst reducing the household's rent burden, reducing their economic vulnerability and providing them with more stability.
It contributes towards an increase in the adequate housing stock in Jordan, the local economy and social cohesion through clear investment in the host community [8]. Pothiawala [9] emphasizes that shelter is a critical determinant of survival of the affected population in the initial stages of a disaster. It is essential to provide security, personal safety and protection from the climate, and to prevent disease outbreaks. It is also important for individual human dignity and to enable the affected population to recover from the impact of disaster. Eventually, the appropriate response will also be determined by the ability of the displaced population to return to the site of their original dwelling and start the recovery process. UNHSP [10] illustrated that adequate housing must provide more than four walls and a roof; a number of conditions must be met before particular forms of shelter can be considered to constitute "adequate housing". These elements are just as fundamental as the basic supply and availability of housing. For housing to be adequate, it must, at a minimum, meet the following criteria: security of tenure; availability of services, materials, facilities and infrastructure; affordability; habitability; accessibility; location; and cultural adequacy.
B Sub-Standard Shelter

The State of California [11] stated that any building or structure, or portion thereof, including any dwelling unit, shall be deemed and declared a substandard building to the extent that it endangers the life, limb, health, property, or welfare of the public or of its occupants, including: a) inadequate sanitation, which shall include, but not be limited to, the following; b) structural hazards; c) dangerous buildings and structures.

III Factors Affecting Shelter Interventions

Bashawria et al. [12] stated that the design factors define the performance of shelters and should be developed through consultation with the people affected by a disaster, government sectors, private sectors, and any other players involved in disaster recovery, such as volunteers and insurance organizations, in order to address environmental factors (climate variations, recycling, upgrading, disposal, hygiene (water and air)), economic factors (type of shelters, lifetime, livelihood), technical factors (ease of erection and dismantling, materials and insulation, classification of hazards and performance, physical and psychological effects), and sociocultural factors (cultural differences, dignity and security, and communication).

A Social Factors

In general, where stigmatization remains unaddressed and social or community services, including social housing, are unavailable, persons with disabilities continue to face discrimination when seeking housing, or more general challenges in securing the resources necessary for obtaining adequate housing. Such challenges inevitably make them more vulnerable to forced evictions, homelessness and inadequate housing conditions [10]. Adaptation activities can be assessed through awareness and knowledge of adaptation activities and the expectation of future benefits [13].
As for social obsolescence, it is defined as fashion or behavioral change in society that demands building adaptation [14].

B Technical and Physical Factors

Patt [15] and Ting [16] proposed that occupants' satisfaction with existing public buildings can be measured using attributes like satisfaction with building qualities (interior design or function), building conditions (structural defects or surface defects), building facilities, the surrounding environment and building services. Compared to general construction, adapting existing buildings involves high levels of risk and uncertainty [17]. "Obsolescence is the process of an asset going out of use." It determines the timing of building adaptation, as housing obsolescence indicates the tendency of a building to become out of date [18]. In the building adaptation context, Langston et al. [14] comprehensively classified housing obsolescence into six categories: physical, economic, functional, technological, social and legal obsolescence. Buildings' rental levels drop as buildings age without continuous refurbishment; therefore, building age can be a good indicator of physical obsolescence. Physical obsolescence can also be detected from building conditions expressed in the form of structural defects or surface defects [19].

C Economic Factors

Economic obsolescence can be assessed through attributes like rental income level, rate of return, and depreciation. A change in occupants' requirements "leads to possible functional change from the purpose for which a building was originally designed"; the severity of functional obsolescence can therefore be assessed by studying building services, like lifts, and examining the flexibility of the original design [20]. The economic development of a region is an important driving force for urban development. Economically growing regions have migration surpluses, which increase the demand for built-up areas.
Additionally, the demand for industrial areas and infrastructure increases [21].

D Political and Legal Factors

Langston et al. [14] proposed that the attribute of compliance with statutory requirements, such as revised safety regulations, fire regulations, building ordinances or environmental controls, is an effective means of indicating the level of legal obsolescence. The critical elements of the process by which housing and communities are constructed and reconstructed are considered to include local governance, land administration, the housing construction system and practices, housing finance, and local infrastructure construction and operation [22].

IV Shelter Architectural Guideline

A covered floor area in excess of 3.5 m² per person will often be required to meet these considerations [23]. The Norwegian Refugee Council, Lebanon (NRC) [24] mentioned the following minimum shelter standards:
1. 3.5 m² per person of living space (excluding kitchen and toilet) should be created when possible.
2. Electrical works should provide at least 1 light fitting per room.
3. Toilets should have a ratio of one per 15 people or better.
4. Waste water and sewerage disposal should be by connection to a septic tank, mains sewerage, improved pit latrine or other recognised means.
5. Water storage tanks should hold a minimum of 70 L and a maximum of 400 L per person.
6. Water fittings should be specified.

The Guidelines for Individual Shelter Rehabilitation on Grant Basis (UNRWA) [25] state the following architectural space guidelines, where the space requirements are based on the size of the family:
- 1-2 persons: 1 room + kitchen + sanitary facilities
- 3-5 persons: 2 rooms + kitchen + sanitary facilities
- 6+ persons: 3 rooms + kitchen + sanitary facilities

Space requirements should be provided by implementation of any one, or a combination of more than one, of the specified interventions.
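The NRC thresholds above are simple enough to encode as a checklist. The helper below is our own illustration (the function and its argument names are ours, not an NRC tool); it returns which of the listed minimum standards a given shelter fails:

```python
# Hypothetical checker encoding the NRC minimum shelter standards listed above.
# Illustrative sketch only; the thresholds come from the list, not from NRC code.

def check_minimum_standards(persons, living_area_m2, lights_per_room, rooms,
                            toilets, water_storage_l):
    """Return the list of violated minimum-standard items (empty list = compliant)."""
    violations = []
    if living_area_m2 < 3.5 * persons:          # 3.5 m^2 of living space per person
        violations.append("living space")
    if rooms > 0 and lights_per_room < 1:       # at least one light fitting per room
        violations.append("electrical works")
    if toilets * 15 < persons:                  # one toilet per 15 people or better
        violations.append("toilets")
    per_person = water_storage_l / persons      # storage between 70 and 400 L/person
    if not (70 <= per_person <= 400):
        violations.append("water storage")
    return violations

if __name__ == "__main__":
    # a family of 6 in 20 m^2 of living space with one toilet and a 500 L tank
    print(check_minimum_standards(6, 20.0, 1, 3, 1, 500.0))  # ['living space']
```

Here the family of six needs 6 × 3.5 = 21 m² of living space, so only the living-space item is flagged.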
in the case of reconstruction, space requirements are based on the number of rooms, with suggested areas as follows:
nabil i. el-sawalhi & belal n. alyazji/ identification of factors affecting special hardship cases shelters' interventions at gaza strip – unrwa/2018 71
 1-2 persons: 1 room + kitchen + sanitary facilities + 15 % (for circulation and flexibility) = 32.2 m²
 3-5 persons: 2 rooms + kitchen + sanitary facilities + 15 % (for circulation and flexibility) = 46 m²
 6+ persons: 3 rooms + kitchen + sanitary facilities + 15 % (for circulation and flexibility) = 59.8 m²
these calculations are based on a room size of 14 m² for the first room (main room) and 12 m² for the second and third rooms (secondary rooms), a kitchen of 9 m², and sanitary facilities of 5 m² (6 m² in the case of three-room shelters). an allowance of 15 % for circulation space has been included in the above calculations to give the total net area entitlements. as every shelter is provided with an allowance for circulation, room sizes can be increased if circulation is minimized. additionally, it was mentioned that all developments should meet the following minimum space standards, as shown in table 1.

table 1 minimum space standards (bedrooms/persons, area in m²)
single story dwelling: 1b2p 50 | 2b3p 61 | 2b4p 70 | 3b4p 74 | 3b5p 86 | 3b6p 95 | 4b5p 90 | 4b6p 99
two story dwelling: 2b4p 83 | 3b4p 87 | 3b5p 96 | 4b5p 100 | 4b6p 107
three story dwelling: 3b5p 102 | 4b5p 106 | 4b6p 113

a intervention scenarios

johnson and wilson attempted to describe various building adaptation strategies in a map; these strategies range from minor maintenance through renovation to restoration [20]. building adaptation refers to "any intervention to adjust, reuse or upgrade a building to suit new conditions or requirements". thus, building adaptation potential can be defined as an indicator reflecting the potential that a building ought to be adapted [18].
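the net area entitlements above reduce to a simple calculation: sum the room, kitchen and sanitary areas, then add the 15 % circulation allowance. a minimal python sketch (the function name is ours, for illustration; note that the published 59.8 m² three-room figure corresponds to a 5 m² sanitary allowance, since 6 m² would give 60.95 m²):

```python
def net_area(room_areas, kitchen, sanitary, circulation=0.15):
    """total net area entitlement: built area plus a circulation allowance."""
    return round((sum(room_areas) + kitchen + sanitary) * (1 + circulation), 1)

# 1-2 persons: one 14 m2 main room, 9 m2 kitchen, 5 m2 sanitary
print(net_area([14], 9, 5))          # 32.2 m2
# 3-5 persons: 14 m2 main room plus one 12 m2 secondary room
print(net_area([14, 12], 9, 5))      # 46.0 m2
# 6+ persons: three rooms; 5 m2 sanitary reproduces the published total
print(net_area([14, 12, 12], 9, 5))  # 59.8 m2
```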
guidelines for individual shelter rehabilitation on grant basis-unrwa [25] identified the types of interventions as follows:
a) reconstruction: requires demolition of all of the existing shelter and the construction of a new shelter;
b) expansion/extension: the construction of a horizontal or vertical extension to the existing shelter, entailing additional spaces;
c) partial reconstruction: demolition of part of a shelter and reconstruction of one or more spaces, including structural works;
d) major repair and supplementary structure: comprehensive upgrading repairs (such as new windows, plumbing, and plastering) and installation of secondary structure such as columns to support a new concrete roof slab;
e) minor repair: routine maintenance repairs (such as repair of windows or roof) with no structural works or demolition;
f) adaptation: adaptation of spaces to suit special needs of the family (disability or age) but no reconstruction, repair or expansion.
shelter wg jordan [8] clarified that any intervention should target the most vulnerable families living in substandard accommodation that lacks a combination of any of the following:
a) adequate privacy, dignity and protection from climatic exposure (i.e. wet and cold);
b) adequate access to safe water and sanitation (the lack of which results in unhygienic conditions);
c) adequate connection to municipal infrastructure and services (e.g. electricity, water supply, waste-water collection, solid waste collection); or
d) protection from avoidable health and safety risks.
unhcr [26] mentioned that one of the priority recommendations for the shelter response is to improve conditions of sub-standard shelters through repairs, weather-proofing interventions and safety standards. many refugees rent accommodation types that are considered sub-standard, such as unfinished houses with poor sanitation, ventilation or lighting, or shelters that lack minimum safety standards and put adults and/or children at risk; very often, shelters are not insulated to protect families against the elements. upgrading refugee accommodation to reach basic standards and permit decent living conditions for refugee families is the minimum goal of any shelter intervention.

v methodology

the literature on the factors affecting shelter assessment, the parameters most affecting shelter interventions, and their improvements was reviewed. according to the literature review, and after interviewing experts who deal with the subject at different levels, all the information that could help in achieving the study objectives was collected, reviewed and formalized to be suitable for the study survey. after many stages of brainstorming, consulting, amending, and reviewing, a questionnaire was developed with closed questions. the questionnaire included multiple choice questions. the variety in these questions aims first to meet the research objectives, and then to collect all the necessary data that can support the discussion, results and recommendations of the research. several previous studies were used to select the factors, such as: [2]; [4]; [5]; [7]; [10]; [12]; [14]; [19]; [15]; [16]; [18]; [22]; [23]. a field survey was conducted with a target population of 40 social workers, 40 site engineers, 5 social services officers, 8 area engineers and 8 managers, distributed over all zones of gaza.
all the members of the target group work in shelter and housing schemes for shcs at unrwa. a pilot study for the questionnaire was conducted by distributing the prepared questionnaire to one (1) manager, two (2) area engineers, two (2) social services officers, one (1) site engineer and one (1) social worker; those selected all have good experience in the field of social and relief services programs and projects. the seven experts were asked to review the questionnaire, verify the validity of the questionnaire topics and their relevance to the research objective, and give their advice. in general, they agreed that the questionnaire is suitable to achieve the goals of the study. important comments and suggestions were collected and evaluated carefully. all the suggested comments and modifications were discussed with the study's supervisor before being taken into consideration, and the questionnaire was modified based on the results of the pilot study. the questionnaire was used to collect the required data in order to achieve the research objective. the response rate was 82 % for social workers, 80 % for site engineers, 83 % for social services officers, 88 % for area engineers, and 75 % for managers. the relative importance index (rii) was used in the analysis, in addition to other approaches such as one-way anova, frequencies and percentiles. likert scaling was used for ranking questions that have agreement levels. the respondents were asked to give their perceptions in groups of questions on a ten-point scale (1 for the least important to 10 for the most important), which reflects their assessment regarding the factors affecting shelter interventions.
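on a ten-point scale, the relative importance index described above reduces to the mean rating expressed as a percentage of the maximum rating. a short python sketch (the response values are invented for illustration):

```python
def rii_percent(scores, max_score=10):
    """relative importance index: sum of ratings over (max rating x n), as a %."""
    return 100 * sum(scores) / (max_score * len(scores))

# invented example: five respondents rating one factor on the 1-10 scale
ratings = [9, 8, 10, 7, 9]
print(rii_percent(ratings))  # 86.0, i.e. a mean rating of 8.6 out of 10
```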
vi results and discussion

table 2 shows that the social worker and site engineer respondents are more than 70 % of the population; in addition, 62.4 % of respondents have more than 10 years of experience at unrwa. moreover, 95.3 % of the respondents did the field survey by themselves, which means that the target group working in the field has more than enough experience, practice and awareness to provide the best opinion about the needs of shelter and housing scheme improvement.

table 2 demographic data (frequency, percent %)
age: less than 40 years 35 (41.2) | 40-less than 50 years 29 (34.1) | 50 years and more 21 (24.7)
gender: male 67 (78.8) | female 18 (21.2)
nature of work: social worker 29 (34.1) | site engineer 32 (37.6) | social services officer 5 (5.9) | area engineer 7 (8.2) | manager 6 (7.1) | others 6 (7.1)
academic education level: diploma/bachelor 63 (74.1) | master 20 (23.5) | ph.d. 2 (2.4)
general experience (years): less than 5: 4 (4.7) | 5 to less than 10: 16 (18.8) | 10 to less than 15: 25 (29.4) | 15 or more: 40 (47.1)
years of work in the organization (unrwa): less than 5: 10 (11.8) | 5 to less than 10: 22 (25.9) | 10 to less than 15: 23 (27.1) | 15 or more: 30 (35.3)
area: north 15 (17.6) | gaza 28 (32.9) | middle 19 (22.4) | khan younis 9 (10.6) | rafah 14 (16.5)
did you join any assessment for special hardship cases: yes 81 (95.3) | no 4 (4.7)

a social factors

table 3 shows that the mean of item #5 "population density within the shelter" equals 8.45 (84.47 %), test-value = 10.51, and p-value less than 0.05. the sign of the test is positive, so the mean of this item is significantly greater than the hypothesized value 6; it is concluded that the respondents agreed with this item. the mean of item #3 "the age of the shelter owner" equals 4.26 (42.59 %), test-value = -5.98, and p-value less than 0.05; the respondents disagreed with this item. the mean of the field "social factors" equals 6.95 (69.47 %), test-value = 5.24, and p-value less than 0.05.
the sign of the test is positive, and the respondents agreed with the field "social factors". from the above points and the analysis of the data results shown in table 3, the respondents pointed out and agreed that the social factors in general can affect unrwa intervention; in addition, what was mentioned in the literature review about the density of the population inside the shelter was endorsed as an important factor affecting the human standard. this result matches the results of [18], where shelter density is the most important criterion making the beneficiary eligible for intervention. meanwhile, the age of the shelter owner has less importance, since the intervention can be provided to needy people regardless of their ages, which seems logical.

table 3 means and test values for "social factors"
item | mean | s.d. | rii (%) | test value | p-value | rank
the number of family members | 7.81 | 2.82 | 78.12 | 5.92 | <0.001 | 4
the difference in gender of the shelter owner | 4.73 | 2.89 | 47.29 | -4.05 | <0.001 | 9
age of the shelter owner | 4.26 | 2.69 | 42.59 | -5.98 | <0.001 | 10
the gender-specific considerations | 7.66 | 2.64 | 76.59 | 5.78 | <0.001 | 5
population density within the shelter | 8.45 | 2.15 | 84.47 | 10.51 | <0.001 | 1
family members suffering from various chronic diseases | 8.32 | 1.97 | 83.21 | 10.80 | <0.001 | 2
the number of married persons within the shelter | 6.92 | 2.75 | 69.17 | 3.05 | 0.002 | 6
the social status of the shelter owner | 6.29 | 2.99 | 62.94 | 0.91 | 0.184 | 8
the presence of a person with special needs, considered a "disable", within the family | 8.28 | 1.96 | 82.82 | 10.73 | <0.001 | 3
demographic changes (a shortage or increase in the family in terms of marriage, death, birth ...)
| 6.79 | 2.71 | 67.88 | 2.68 | 0.004 | 7
all items of the field | 6.95 | 1.67 | 69.47 | 5.24 | <0.001 |

b economic factors

table 4 shows that the mean of item #4 "the type of shelter property for the family" equals 8.62 (86.19 %), test-value = 10.33, and p-value less than 0.05; it is concluded that the respondents agreed with this item. the mean of item #8 "the amount of cash assistance provided to the family" equals 6.08 (60.83 %), test-value = 0.25, and p-value = 0.400, which is greater than the significance level α = 0.05; it is concluded that the respondents were neutral (do not know) on this item. the mean of the field "economic factors" equals 7.09 (70.94 %), test-value = 5.30, and p-value less than 0.05. the sign of the test is positive, so the mean of this field is significantly greater than the hypothesized value 6; it is concluded that the respondents agreed with the field "economic factors". from the above points and the analysis of the data results in table 4, the respondents pointed out and agreed that the economic factors in general can affect unrwa intervention, in addition to what was mentioned in the literature review about whom the shelter intervention should be prioritized for and provided to. the analysis emphasizes the type of shelter property as the most important factor affecting the intervention: having a deteriorated shelter justifies giving assistance and is an indicator of poverty. this result matches the result of [7]. on the other hand, the amount of cash assistance provided to the family has less importance in shelter intervention, since this assistance is provided for necessities of life such as food and clothes.
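the test values reported above are one-sample t statistics against the hypothesized neutral mean of 6 on the ten-point scale. a python sketch (assuming all n = 85 respondents rated the item) reproduces the reported value for "population density within the shelter" in table 3:

```python
import math

def one_sample_t(mean, sd, n, mu0=6.0):
    """one-sample t statistic: (sample mean - hypothesized mean) / standard error."""
    return (mean - mu0) / (sd / math.sqrt(n))

# 'population density within the shelter': mean 8.45, s.d. 2.15, n = 85
t = one_sample_t(8.45, 2.15, 85)
print(round(t, 2))  # 10.51, matching the reported test value
```

items with missing responses would have a smaller n, which explains small discrepancies when the same check is applied to other rows.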
table 4 means and test values for "economic factors"
item | mean | s.d. | rii (%) | test value | p-value | rank
the household income level | 7.04 | 2.55 | 70.36 | 3.72 | <0.001 | 4
the working condition of the head of the family | 6.94 | 2.57 | 69.40 | 3.36 | 0.001 | 5
the poverty level of the household by service classification | 8.50 | 2.45 | 85.00 | 9.26 | <0.001 | 2
the type of shelter property for the family | 8.62 | 2.32 | 86.19 | 10.33 | <0.001 | 1
the construction costs | 6.29 | 2.84 | 62.89 | 0.93 | 0.178 | 6
ownership of a piece of land by the family | 7.18 | 2.69 | 71.79 | 4.01 | <0.001 | 3
receiving assistance from other parties | 6.23 | 2.83 | 62.26 | 0.73 | 0.233 | 7
the amount of cash assistance provided to a family | 6.08 | 3.01 | 60.83 | 0.25 | 0.400 | 8
all items of the field | 7.09 | 1.89 | 70.94 | 5.30 | <0.001 |

c technical factors

table 5 shows that the mean of item #6 "severe defect in the existing shelter affects its stability" equals 8.32 (83.21 %), test-value = 9.48, and p-value less than 0.05. the sign of the test is positive, so the mean of this item is significantly greater than the hypothesized value 6; it is concluded that the respondents agreed with this item. the mean of item #9 "the location of shelter from the street, its accessibility and the presence of services in the area are factors influencing the intervention" equals 5.73 (57.29 %), test-value = -0.87, and p-value = 0.194, which is greater than the significance level α = 0.05; it is concluded that the respondents were neutral (do not know) on this item. the mean of the field "technical factors" equals 7.49 (74.92 %), test-value = 8.31, and p-value less than 0.05; it is concluded that the respondents agreed with the field "technical factors".
from the above points and the analysis of the data results in table 5, the respondents pointed out and agreed that the technical factors in general have an impact on unrwa intervention, in addition to what was mentioned in the literature review about whom the shelter intervention should be prioritized for and provided to. the analysis emphasizes that the contribution of unrwa is affected in the event of a severe defect in the existing shelter that affects its stability; this is an important factor affecting the intervention, and is well known from safety considerations. this result matches the research of [15] and [16]. meanwhile, the location of the shelter from the street, its accessibility and the presence of services in the area has less importance in shelter intervention, as the location and accessibility of the shelters for most refugees in the gaza strip are almost the same inside the camps.

table 5 means and test values for "technical factors"
item | mean | s.d. | rii (%) | test value | p-value | rank
land area | 7.87 | 2.45 | 78.71 | 7.03 | <0.001 | 5
the existing building area | 7.68 | 2.63 | 76.79 | 5.86 | <0.001 | 6
existing shelter type | 7.94 | 2.25 | 79.40 | 7.84 | <0.001 | 4
the chronological age of shelter | 7.00 | 2.62 | 70.00 | 3.51 | <0.001 | 10
the location of shelter | 7.33 | 2.50 | 73.33 | 4.88 | <0.001 | 8
severe defect in the existing shelter affects its stability | 8.32 | 2.24 | 83.21 | 9.48 | <0.001 | 1
slight defect in the existing shelter | 7.02 | 2.37 | 70.24 | 3.95 | <0.001 | 9
key element is not appropriate in the shelter | 7.96 | 2.21 | 79.65 | 8.19 | <0.001 | 3
the location of shelter from the street, its accessibility and the presence of services in the area | 5.73 | 2.88 | 57.29 | -0.87 | 0.194 | 11
the exposure of shelter to floods contributes | 7.49 | 2.45 | 74.94 | 5.63 | <0.001 | 7
the number of rooms in the shelter | 8.07 | 2.19 | 80.71 | 8.73 | <0.001 | 2
all items of the field | 7.49 | 1.66 | 74.92 | 8.31 | <0.001 |

d legal and political
factors table 6 shows that the mean of item #8 ―the sheltering process is an unrwa objective‖ equals 8.53 (85.29%), testvalue = 9.87, and p-value less than 0.05. the sign of the test is positive, so the mean of this item is significantly greater than the hypothesized value 6. it is concluded that the respondents agreed to this item. the mean of item #3 ―land authority laws‖ equals 7.20 (72.00%), test-value = 4.23, and p-value less than 0.05. it is concluded that the respondents agreed to this item. the mean of the field ―legal and political factors‖ equals 7.96 (79.62%), test-value = 10.09, and pvalue less than 0.05. the sign of the test is positive, so the mean of this field is significantly greater than the hypothesized value 6. it is concluded that the respondents agreed to field of ―legal and political factors ". from the above points and the analysis of the data results in table 6 the respondents point out and agreed that the legal and political factors in general has impact on unrwa intervention, in addition to that what was mentioned in the literature review about to whom the shelter intervention can be prioritized and unnerved, the analysis emphasize that the sheltering process in unrwa is one of its objective and has high important factor where this is one of un role, while land authority laws affect unrwa's contribution has less importance in shelter intervention as the land authority regulation inside the camps are not followed by unrwa. the reached result is fully agreed as what was mentioned in the previously studied such as united nations [3] and unrwa [2]. 
table 6 means and test values for "legal and political factors"
item | mean | s.d. | rii (%) | test value | p-value | rank
laws and strategies of unrwa | 8.51 | 2.19 | 85.06 | 10.54 | <0.001 | 2
political circumstances | 7.72 | 2.84 | 77.18 | 5.59 | <0.001 | 7
land authority laws | 7.20 | 2.62 | 72.00 | 4.23 | <0.001 | 9
the municipal building laws and regulations | 7.89 | 2.22 | 78.94 | 7.87 | <0.001 | 6
the laws of human safety | 7.45 | 2.57 | 74.47 | 5.20 | <0.001 | 8
the closure of the crossing borders | 8.02 | 2.46 | 80.24 | 7.57 | <0.001 | 5
the availability of land title documents | 8.34 | 2.23 | 83.41 | 9.49 | <0.001 | 3
the sheltering process | 8.53 | 2.36 | 85.29 | 9.87 | <0.001 | 1
the human right to housing | 8.04 | 2.55 | 80.35 | 7.35 | <0.001 | 4
all items of the field | 7.96 | 1.79 | 79.62 | 10.09 | <0.001 |

e challenges and obstacles

table 7 shows that the mean of item #7 "the availability of the budget" equals 8.86 (88.55 %), test-value = 12.28, and p-value less than 0.05; it is concluded that the respondents agreed with this item. the mean of item #10 "the case of weakness and perhaps the absence of participation from eligible families in the preparation and implementation of the intervention" equals 5.95 (59.52 %), test-value = -0.14, and p-value = 0.443, which is greater than the significance level α = 0.05; it is concluded that the respondents were neutral (do not know) on this item. the mean of the field "challenges and obstacles" equals 7.49 (74.94 %), test-value = 7.89, and p-value less than 0.05. the sign of the test is positive, so the mean of this field is significantly greater than the hypothesized value 6; it is concluded that the respondents agreed with the field "challenges and obstacles".
from the above points and the analysis of the data results in table 7, the respondents pointed out and agreed that the challenges and obstacles factors in general have an impact on unrwa intervention and can affect whom the shelter intervention is prioritized for and provided to. the analysis emphasizes that the most important factor of this group is the availability of funds, and as a matter of fact this is applicable. on the contrary, the case of weakness, and perhaps the absence of participation from eligible families in the preparation and implementation of the intervention, has less importance in shelter intervention, as this case hardly arises: all the families are waiting for their intervention and willing to participate.

table 7 means and test values for "challenges and obstacles"
item | mean | s.d. | rii (%) | test value | p-value | rank
the time taken to prepare families in need of assistance | 7.09 | 2.38 | 70.94 | 4.24 | <0.001 | 8
donation terms | 8.20 | 2.14 | 82.00 | 9.47 | <0.001 | 2
the criteria for selection according to the eligibility of families | 8.05 | 2.08 | 80.47 | 9.07 | <0.001 | 5
the total number of families eligible for intervention | 8.14 | 2.32 | 81.41 | 8.51 | <0.001 | 4
the satisfaction and acceptance of families due to the type of the intervention | 7.92 | 2.33 | 79.17 | 7.52 | <0.001 | 6
completion of licensing procedures | 8.19 | 2.14 | 81.90 | 9.40 | <0.001 | 3
the availability of the budget | 8.86 | 2.12 | 88.55 | 12.28 | <0.001 | 1
the lack of structural and urban planning | 6.92 | 2.46 | 69.17 | 3.41 | <0.001 | 10
the lack of land granted by the government | 7.34 | 2.59 | 73.41 | 4.77 | <0.001 | 7
the case of weakness and perhaps the absence of participation from eligible families in the preparation and implementation of the intervention | 5.95 | 3.06 | 59.52 | -0.14 | 0.443 | 12
weakness and lack of information | 7.04 | 2.52 | 70.35 | 3.78 | <0.001 | 9
the lack of availability of
qualified technical and scientific personnel in the evaluation process | 6.41 | 3.19 | 64.12 | 1.19 | 0.119 | 11
all items of the field | 7.49 | 1.75 | 74.94 | 7.89 | <0.001 |

f all groups

table 8 shows that the mean of all items equals 7.41 (74.05 %), test-value = 8.62 and p-value less than 0.05. the mean of all items is significantly greater than the hypothesized value 6; it is concluded that the respondents agreed with all items of the questionnaire. also from table 8, "legal and political factors" was ranked in the first position among the fields with an rii of 79.62 %, and "challenges and obstacles" was ranked in the second position with an rii of 74.94 %. on the other hand, the last position among the main groups went to "social factors" with an rii of 69.47 %, while "economic factors" was ranked in the fourth position with an rii of 70.94 % and "technical factors" was ranked in the middle position with an rii of 74.92 %. this means that legal and political factors are considered the most important factors affecting unrwa intervention for shcs, as they are the first step of any intervention done by unrwa. in the same way, challenges and obstacles factors and technical factors are considered important, since no intervention can be provided in the presence of obstacles; moreover, any shelter intervention needs technical evaluation more than social evaluation.
table 8 means and test values for "all items of questionnaire"
field | mean | s.d. | rii (%) | test value | p-value | rank
social factors | 6.95 | 1.67 | 69.47 | 5.24 | <0.001 | 5
economic factors | 7.09 | 1.89 | 70.94 | 5.30 | <0.001 | 4
technical factors | 7.49 | 1.66 | 74.92 | 8.31 | <0.001 | 3
legal and political factors | 7.96 | 1.79 | 79.62 | 10.09 | <0.001 | 1
challenges and obstacles | 7.49 | 1.75 | 74.94 | 7.89 | <0.001 | 2
all items of the questionnaire | 7.41 | 1.50 | 74.05 | 8.62 | <0.001 |

vii conclusion

the main factors affecting special hardship cases shelters' interventions in unrwa-gaza strip were: economic, social, technical, political and legal, and obstacles and challenges factors. in order to provide high-level evaluation procedures, equity and transparency for beneficiaries (shcs), unified criteria should be applied to all cases. the main factor groups involved in shelter assessment have been identified, in addition to sub-factors; this leads to transparent beneficiary selection. this research concluded the most effective criteria that should be implemented by unrwa to improve shelter intervention and transparency in serving the neediest palestinian refugees; the most important criteria are introduced below. the most important social factors affecting unrwa intervention were: population density and the number of family members within the shelter, the existence of family members suffering from various chronic diseases or the presence of a person with special needs, in addition to the number of married persons within the shelter. the type of shelter property for the family, the poverty level of the household by service classification, ownership of a piece of land by the family and the household income level are considered important economic factors affecting special hardship cases shelters' interventions in unrwa-gaza strip.
a severe defect in the existing shelter that affects its stability, the number of rooms existing in the shelter, the kitchen or bathroom conditions or their availability in the shelter, and the shelter type are considered important technical factors affecting special hardship cases shelters' interventions in unrwa-gaza strip. that the sheltering process is in accordance with unrwa objectives, laws and strategies, the availability of land documents, and the human right to housing are considered important legal and political factors in unrwa's intervention. also, several challenges and obstacles facing unrwa staff in providing shelter assistance to beneficiaries were identified. the availability of the budget, donation terms, completion of licensing procedures, the total number of families eligible for intervention, the criteria for selection according to the eligibility of families, the satisfaction and acceptance of families due to the type of the intervention, the lack of land granted by the government, the time taken to prepare families in need of assistance, and weakness and lack of information were the most important obstacles and challenges noted.

viii recommendations

the study recommended that there should be a determination by unrwa to request the necessary funds, and to work effectively with the local governments and ask them to provide the necessary land for housing projects. local government should accelerate the development plans for the camps and create a computerized program based on the priorities of the important factors. finally, the study recommended that the staff should attend training courses to improve their experience in the intervention evaluation process.

references

[1] i. hejoj and a.
badran, "a socio-economic analysis of special hardship case families in the five fields of unrwa operations", 2006. http://www.un.org/unrwa/publications/pubs07/shc_analysis.pdf [accessed 15 may 2017]
[2] unrwa, public information office, unrwa 1950-1990 serving palestinian refugees, 1990.
[3] united nations, universal declaration of human rights, 2015.
[4] unrwa, infrastructure and camp improvement implementation plan for 2010-2011, 2009.
[5] p. kenna, "globalization and housing rights", indiana journal of global legal studies, vol. 15, no. 2, pp. 397-469, 2008.
[6] unrwa, shelter technical instructions, 2010.
[7] un-habitat, united nations human settlements programme, the habitat agenda: goals and principles, commitments and the global plan of action, 2013.
[8] unhcr and nrc, shelter wg jordan, 2013.
[9] s. pothiawala, "food and shelter standards in humanitarian action", turkish journal of emergency medicine, 2015; 15 (suppl 1): pp. 34-39.
[10] unhsp, planning sustainable cities: global report on human settlements. london: unhsp and earthscan, 2009.
[11] state of california, state housing law, 2010.
[12] a. bashawri, s. garrity, and k. moodley, "an overview of the design of disaster relief shelters", procedia economics and finance, pp. 924-931, 2014.
[13] c. yiu and y. leung, "a cost and benefit evaluation of housing rehabilitation", structural survey, vol. 23, no. 13, pp. 8-51, 2005.
[14] c. langston, k. wong, c. hui, and l.y. shen, "strategic assessment of building adaptive reuse opportunities in hong kong", building and environment, pp. 1709-1718, 2008.
[15] c. patt, "satisfaction level of the elderly in housing & development board main upgrading programme". singapore: national university of singapore, 2004.
[16] s.l. ting, "satisfaction level of the lift upgrading works in public housing". singapore: national university of singapore, 2002.
[17] d. boyd and l. jankovic, "the limits of intelligent office refurbishment", property management, vol. 11, pp.
102-11, 1992.
[18] j. hoymann, "quantifying demand for built-up area", journal of land use science, 2010.
[19] s.j. wilkinson, k. james, and r. reed, "using building adaptation to deliver sustainability in australia", structural survey, vol. 27, pp. 46-61, 2009.
[20] e. teo and g. lin, "building adaption model in assessing adaption potential of public housing in singapore", building and environment, vol. 46, no. 13, pp. 70-79, 2011.
[21] g. arlt, j. gössel, b. heber, j. hennersdorf, i. lehmann, and n. x. hini, impact of urban use patterns on soil sealing and land price. dresden, 2001.
[22] world bank, safer homes, stronger communities: a handbook for reconstructing after natural disasters, january 2010.
[23] zutphen and damerell, the sphere project 2011: humanitarian charter and minimum standards in humanitarian response, 2011.
[24] norwegian refugee council, lebanon, shelter minimum standards guidelines, 2014.
[25] unrwa, guidelines for individual shelter rehabilitation on grant basis, 2011.
[26] unhcr, global trends report, united nations, 2012.

journal of engineering research and technology, volume 6, issue 2, october 2019

critical factors causing contractor's business failure in gaza strip

khalid al hallaq

abstract— the construction industry remains a major player in the palestinian economy. business failure is an important issue for companies in the gaza strip's uncertain environment. this paper aimed to explore the critical factors that have the potential to cause contractor business failure and to determine their level of severity from the contractor's viewpoint. critical factors were listed under five groups: financial and political, contractual, managerial, organizational, and economical causes.
contractors have been advised to consider the most critical factors that have the potential to cause business failure. the most critical factors include: cost of materials, lack of resources, delay in collecting debts from clients, monopoly, changing funding sources, dealing with suppliers and traders, and israeli attacks.

index terms— business failures, contractors, construction industry, contracting failure, construction in palestine, gaza strip, factor analysis.

1 introduction

the pattern of palestinian economic activity in palestine is uncertain and unusual. while economic activity and growth stimulators in conventional economies are largely related to internal economic variables and policies, the palestinian economy operates in an environment rife with different internal and external risks and challenges, which significantly affect and change the economic situation. the main external challenges facing the palestinian economy include the israeli occupation and closure (economic forecast report, 2018). organizations need to be well prepared, organized, and to plan appropriate strategies to stay relevant, competent, and active in the industry (abu bakar et al., 2016). in the gaza strip, construction positively affects the economic, social, educational and vocational sectors, among others. it makes a large direct contribution to the gross domestic product, in addition to an indirect contribution through related activities such as manufacturing, electricity, water and other economic activities. it used to employ an average of 14.4 % of the palestinian labor force (pcu, 2017). the contribution of construction to real gdp at the end of 2018 was 6 % in the gaza strip and 6.7 % in the west bank. the rate of change of real value added by construction between 2017 and 2018 recorded a significant contraction of -22.5 % in the gaza strip.
during the same period, a significant expansion of (+6.0%) was noticed in the west bank (unsco socio-economic report, 2019). among other unique characteristics of life in palestine, the construction industry stands out from other parts of the economy. construction is heavily affected by economic cycles and the political environment, which change frequently and dramatically in palestine in general and in the gaza strip in particular. the construction industry has a significantly high rate of business failure due to high operating risks and uncertain conditions. all over the world, contractors compete fiercely in the marketplace, exposing themselves to the risk of failure as well as the prospect of success, and palestine is not an exception. in the last two years, more than 50 contracting companies were exposed to failure as a result of the unnatural environment in the gaza strip. currently, there are about 70 contractors facing business failure due to their inability to cope with environmental, subjective and competitive conditions. this research identifies the most important and critical factors that lead to the failure of contractors in order to enhance the ability of these companies to survive, compete, and overcome the abnormal conditions.

2 literature review
2.1 definition of failure
there are many definitions of business failure. altman (1968) defined failure from an economic viewpoint: a company is considered to have failed if the realized rate of return on invested capital, with allowances for risk considerations, is significantly and continually lower than prevailing rates on similar investments. berryman (1983) recognized it as the condition of a firm that is unable to meet its financial obligations to its creditors in full; it is deemed to be legally bankrupt and is usually forced into insolvency liquidation.
another definition of failure was given by watson & everett (1996), who attributed business failure to four different situations: discontinuance for any reason; ceasing to trade with creditor loss; sale to prevent further losses; and failure to "make a go of it". shepherd (2003) showed that failure occurs when a fall in revenues and/or a rise in expenses is of such a magnitude that the firm becomes insolvent and is unable to attract new debt or equity funding; consequently, it cannot continue to operate under the current ownership and management. ucbasaran et al. (2010) described failure as not only the sale or closure of a business due to bankruptcy, liquidation, or receivership, but also the sale or closure of a business because it has failed to meet the entrepreneur's expectations, which reflects the varying personal thresholds of performance among entrepreneurs. mbat & eyo (2013) said that failure could be seen in terms of the inability of a corporate organization to conform to its strategic path of growth and development and to attain its economic and financial objectives as well as its legal obligations.

2.2 causes of failure
a number of researchers have studied the causes of contracting business failure. the dun and bradstreet corporation (1986) identified the major causes of business failures in the construction industry as: economic factors, inexperience, poor sales, expenses, customer-related issues, fraud and neglect, assets and capital, and disaster. they found the most significant cause of failure to be economic factors. within the economic factors category, there were five subcategories: bad profits, high interest rates, loss of market, no customer spending and no future. schleifer (1989) also identified ten causes as the bane of the construction industry.
the first five of the identified factors are related to business strategies and the second five are related to accounting considerations. the factors were: increasing project size, expanding into unfamiliar locations, replacing key personnel, moving into new types of construction, not maturing in management as the business expands, using poor accounting systems, evaluating project profit incorrectly or not in time, not controlling equipment costs, not billing or collecting effectively, and jumping between computerized accounting systems. the findings indicated that over 80% of the failures were caused by five factors, namely insufficient profits (27%), industry weakness (23%), heavy operating expenses (18%), insufficient capital (8%) and burdensome institutional debt (6%). all these factors, except for industry weakness, are budgetary issues and should therefore be handled by companies that are cognizant of the effects of these factors on their survivability (donkor, 2011). argenti (1976), in his book 'corporate collapse', summarized the literature on failure and concluded that there are six main causes: top management, accounting information, change, accounting manipulation, rapid expansion, and the economic cycle. hartigan (1973) listed seven main causes of failure: lack of capital, under-costing, lack of control, lack of advice, the government, trade fluctuations, and fraud. jannadi (1997) had previously presented a study of the factors that contribute to the failure of construction contractors in saudi arabia and found that the most important factors were: difficulty in acquiring work, bad judgment, lack of experience in the firm's line of work, difficulty with cash flow, lack of managerial experience, and low profit margins.
davidson and maguire (2003), based on their accountancy experience, identified the ten most common causes of contractor failures as: growing too fast, obtaining work in a new geographic region, a dramatic increase in single job size, obtaining new types of work, high employee turnover, inadequate capitalization, poor estimating and job costing, a poor accounting system, poor cash flow, and buying useless stuff. enshassi et al. (2006) identified the main factors that cause business failure based on contractors' viewpoint in palestine: delay in collecting debts from clients (donors), border closure, heavy dependence on bank loans and payment of high interest on these loans, lack of capital, absence of industry regulations, low profit margin due to high competition, awarding of contracts by clients to the lowest bidder, and lack of experience in contract management. kivrak and arslan (2008) examined the critical factors causing the failure of construction companies through a survey conducted among 40 small to medium-sized turkish construction companies. a lack of business experience and the country's economic conditions were found to be the most influential factors in company failure. mahamid (2011) ranked the following factors as highly influential, with huge potential to cause contractor's business failure, based on contractors' viewpoint in palestine: fluctuation in construction material costs; delay in collecting debts from clients; lack of experience in contracts; low margin of profit due to competition; and closure and limitation of movement between west bank areas. mbat and eyo (2013) concluded that many factors, both internal and external to the firm, could be responsible for corporate failure, and that firms should consider the relative influence of management, the board of directors, employees, external auditors, regulatory bodies, and government to avert failure.
holt (2013) aimed to synthesize published knowledge on construction business failure in order to explore the failure agents. he concluded that the broad practical propositions to help negate the potential negative effects fall under managerial, financial, company-characteristic, and macroeconomic-environment categories. wang and wu (2017) adopted a modified two-stage learning algorithm to predict business failure. the modified learning model can utilize the geometric features of the data to discover the low-dimensional manifold embedded in the high-dimensional space via coordinate representations, and is more suitable for selecting feature values from financial data. in the first stage, the stepwise forward selection approach is easy to understand and implement, and can enhance the efficiency of the selective ensemble model. in the second stage, different selective ensemble models are integrated according to normal or failed firms, which exploits the respective advantages of the ensemble models for the suitable firms. doumpos et al. (2017) examined the development of corporate failure prediction models for european firms in the energy sector, using a large dataset from 18 countries. the construction of the models is based on a multiple criteria decision aid approach taking into consideration both ordinal criteria and nominal country-sector effects. the results confirmed the importance of incorporating energy-related data into the analysis of distress risk for firms in the energy sector. it was found that data related to the quality and reliability of energy networks, energy sustainability factors, as well as the size and openness of a country's internal energy market, can provide valuable additional information compared to firm-specific attributes and the economic/business environment.
venugopal (2018) examined the persistent discourse of failure in development as a point of departure to understand what it signifies, how it is structured, and what consequences it bears. he framed failure as a socially constructed category. he also concluded that, with changing sets of beneficiaries, definitions, goals, and indicators of success, and with outcomes that are multi-layered, evolve over time, are hard to measure, and generate unpredictable externalities, every successful project can also be reinterpreted as a failure. cui et al. (2019) concluded that the inability of a company's business capacity to adapt to the company's development is the primary factor in green business failure, while a "short-term investor mind-set and less investment" had the strongest effect on green business failure.

3 methodology
a total of 73 factors that might affect contractors' business failure were defined through a detailed literature review of relevant research studies (hartigan, 1973; ross & rami, 1973; cohen, 1973; argenti, 1976; dun and bradstreet, 1986; kangari, 1988; schleifer, 1989; abidali and harris, 1995; osama, 1997; assaf, 2004; peterson, 2005; enshassi et al., 2006; strischek and mcintyre, 2008; donkor, 2011). the factors were tabulated in a questionnaire, and the questionnaire was reviewed by three groups of experts to test its content validity. the target population in this study is all contractors of the first, second and third categories for building works that hold valid registration with the palestinian contractors union, a total of 203 contractors.
the following statistical equation was used to determine the sample size:

x = z^2 * p * (1 - p)
n = (N * x) / ((N - 1) * e^2 + x)

where:
z = 1.96 (for a 95% confidence interval)
p = 0.50
n = sample size
N = population size = 203
e = maximum error of estimation = 0.05

substituting gives x = (1.96)^2 * 0.5 * (1 - 0.5) = 0.9604 and n = (203 * 0.9604) / ((202)(0.05)^2 + 0.9604) ≈ 133. therefore, the calculated sample size is 133 contractors based on a 95% confidence level. the questionnaire was sent out to a total of 133 contractors, asking for their contribution in ranking the identified 73 factors in terms of severity on an ordinal scale: 1 = very low influence, 2 = low influence, 3 = moderate influence, 4 = high influence, and 5 = very high influence. a total of 101 completed questionnaires were returned, representing a good response rate of 75.93%. factor analysis was employed to reduce the large number of variables (factors of business failure) to a smaller set of underlying factors that summarize the essential information contained in these variables. using spss v.22, principal component analysis with varimax rotation was performed to establish which items capture aspects of the same dimension of the proposed determinant causes of business failure and to examine the underlying structure of interrelationships among these causes. in order to perform the factor analysis on the proposed items, all appropriate checks, requirements and procedures were fulfilled, as mentioned in table (1). factor analysis proceeded in three main phases: preliminary analysis, factors extraction, and factors naming and interpretation.
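the sample-size computation above can be reproduced with a short script (a minimal sketch; the function name `sample_size` is ours, not from the paper):

```python
def sample_size(N, z=1.96, p=0.5, e=0.05):
    """finite-population sample size: n = N*x / ((N-1)*e^2 + x), x = z^2*p*(1-p)."""
    x = z ** 2 * p * (1 - p)              # 1.96^2 * 0.5 * 0.5 = 0.9604
    n = (N * x) / ((N - 1) * e ** 2 + x)  # finite-population correction
    return round(n)

print(sample_size(203))  # 203 registered contractors -> 133
```

with N = 203 and e = 0.05, as in the worked computation, this yields the 133 contractors used in the survey.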
table (1): factor analysis process requirements and criteria

preliminary analysis (first phase):
- type of the study data (variables): subjective variables (yong and pearce, 2013).
- distribution of the data: normal distribution; satisfied when the sample size is larger than 30 (hair et al., 2010; field, 2009).
- sample size: more than 50 (winter et al., 2009; sapnas and zeller, 2002).
- data reliability test (internal consistency): cronbach coefficient alpha > 0.7 (pallant, 2005).
- factorability of the correlation matrix (visual inspection): each item (variable) should correlate with several other variables with correlation coefficients greater than 0.30, and none of the correlation coefficients should exceed 0.9 (field, 2009; tabachnick and fidell, 2007).
- anti-image correlation matrix: the diagonals should have an overall measure of sampling adequacy (msa) of 0.50 or above (hair et al., 2010).
- items correlation matrix adequacy (kaiser-meyer-olkin (kmo) measure of sampling adequacy / bartlett's test of sphericity): bartlett's test of sphericity is significant when the p-value < 0.05, and the kmo index should be above 0.5 (mane and nagesha, 2014; hair et al., 2010).

factors extraction (second phase):
- communality values: each item's communality value should be more than 0.5 (field, 2009).
- cumulative percentage of variance explained by the extracted factor solution: greater than 50% of total variance explained (meyers et al., 2006; mane and nagesha, 2014).
- loaded items and extracted factors properties: each item should have at least one factor loading value equal to or more than 0.5 (pallant, 2005; mane and nagesha, 2014); each extracted factor should include at least three items to be acceptable (costello and osborne, 2005); any item loaded on more than one factor with a factor loading greater than 0.5 should be removed ("no cross-loading items") (henson and roberts, 2006; hair et al., 2010).
- reliability measure of the extracted factors: the variables forming each factor should explain the measure within that factor based on a cronbach's alpha (cα) value of more than 0.7 (pallant, 2005).

factors naming and interpretation (third phase):
- arrangement of extracted factors: extracted factors should be arranged and numbered in descending order of the amount of variance explained by each one (hart, 2008; henson and roberts, 2006; williams et al., 2010).
- factor naming: each factor is subjectively labeled in accordance with the factor loading values and the correlation between the individual items loaded on it.
- interpretation of the principal factors: an interpretation of each factor should be provided based on the labeling and the items included in it.

4 result and discussion
factor analysis was used to examine the pattern of intercorrelations between the 73 items/variables of business failure causes in an attempt to reduce their number. it was also used to group items/variables with similar characteristics together; in other words, it identified subsets of items/variables that correlate highly with each other, called factors or components. factor analysis was conducted for this study using principal component analysis (pca).

appropriateness of factor analysis
the data was first assessed for its suitability for factor analysis. there were several stages to this assessment: 1.
distribution of the data: on the basis of the central limit theorem, the collected data can be considered normally distributed because the sample size for this study was 101, larger than the 30 proposed by hair et al. (2010). therefore, the normal distribution requirement for factor analysis has been satisfied as stipulated by field (2009). 2. validity of sample size: the reliability of factor analysis depends on the sample size. factor analysis/pca can be conducted on a sample with fewer than 100 respondents but more than 50; the sample size for this study was 101. 3. data reliability test: the first stage of the quantitative analysis was the reliability test, where the reliability of the questionnaire was tested using cronbach's alpha. the alpha reliability of the scale of 73 items (factors) in this study was 0.94, indicating that 94% of the variance of the total scores of all factors can be attributed to systematic variance. since the result was above 0.7, all items showed internal consistency and achieved high reliability as proposed by pallant (2005). 4. kaiser-meyer-olkin (kmo) and bartlett's test: the kmo sampling adequacy test and bartlett's test of sphericity were carried out; the results of these tests are reported in table (1). the value of the kmo measure of sampling adequacy was 0.792, which exceeds the minimum requirement of 0.50 and is therefore considered acceptable (kaiser, 1974; field, 2009; zaiontz, 2014). moreover, bartlett's test of sphericity gives another indication of the strength of the relationship among items/variables. the bartlett test statistic was 1417.778, and the associated significance level was 0.000. the probability value (sig.)
associated with the bartlett test is less than 0.05, which satisfies the pca requirement. this result indicated that the correlation matrix was not an identity matrix and that the items/variables are correlated (field, 2009; zaiontz, 2014).

table (1): kmo and bartlett's test for business failure factors
kaiser-meyer-olkin measure of sampling adequacy: 0.792
bartlett's test of sphericity: approx. chi-square = 1417.778; df = 378; p-value = 0.000
cronbach's alpha (cα): 0.90

after all the appropriate checks were performed and indicated that all 73 variables should be retained, an initial extraction of factors was carried out using the principal component analysis approach with exploratory factor analysis in spss v.22. several criteria should be satisfied in order to accept the solution extracted in any run and to consider it a suitable final solution for the involved variables. the following sections explain these criteria and the process of investigating the final solution (reached after the sixteenth run).

1. communalities (common variance)
communality is the first criterion to be checked in the extracted solution. it reveals the percentage of variance in a particular variable that is explained by the factors (williams et al., 2010). larose (2006) has also claimed that communalities less than 0.5 are considered too low, since this would mean that the variable shares less than half of its variability with the other variables. a higher communality value means higher importance of the variable. after the sixteenth run of factor analysis, 28 items remained whose communality values satisfy this criterion, all being larger than 0.5.
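bartlett's test of sphericity reported above follows a standard formula; a minimal sketch (the function name `bartlett_sphericity` is ours, not the spss routine) is:

```python
import math
import numpy as np

def bartlett_sphericity(data):
    """bartlett's test of sphericity for an (n respondents x p items) matrix:
    chi2 = -((n - 1) - (2p + 5) / 6) * ln(det(R)), df = p(p - 1)/2,
    where R is the item correlation matrix."""
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)            # item correlation matrix
    chi2 = -((n - 1) - (2 * p + 5) / 6) * math.log(np.linalg.det(R))
    df = p * (p - 1) // 2                          # one df per off-diagonal pair
    return chi2, df
```

for the 28 retained items, df = 28 × 27 / 2 = 378, matching the degrees of freedom reported alongside the chi-square of 1417.778.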
table (2): communality values of business failure factors (final run)

item: communality value (sixteenth run)
a4: 0.676; a5: 0.576; a6: 0.626; a7: 0.667; a8: 0.598; a11: 0.620; a18: 0.568; a28: 0.601; a30: 0.651; a32: 0.509; a43: 0.577; a44: 0.632; a45: 0.701; a46: 0.686; a49: 0.649; a50: 0.560; a54: 0.547; a55: 0.619; a59: 0.680; a63: 0.598; a64: 0.667; a65: 0.856; a66: 0.813; a67: 0.672; a68: 0.575; a70: 0.673; a71: 0.730; a73: 0.732.
extraction method: principal component analysis.

2. total variance explained
using the output from iteration 1, there were five eigenvalues greater than 1 (figure 1). the eigenvalue criterion states that each retained component should explain at least one item's/variable's worth of the variability, and therefore only components with eigenvalues greater than one should be retained (larose, 2006; field, 2009). the latent root criterion indicated that there were five components (factors) to be extracted for these items/variables; the results are tabulated in table (3). the five-component solution explained the variance with component 1 contributing 28.894%, component 2 contributing 11.105%, component 3 contributing 8.597%, component 4 contributing 7.010%, and component 5 contributing 5.306%. all the remaining components were not significant. the five components were then rotated via the varimax (orthogonal) rotation approach. this approach does not change the underlying solution or the relationships among the items/variables; rather, it presents the pattern of loadings in a manner that makes the factors (components) easier to interpret (reinard, 2006; field, 2009; zaiontz, 2014).
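the eigenvalue-greater-than-one rule described above can be sketched as follows (a minimal illustration; `kaiser_retained` is our name, and spss's iterated extraction is omitted):

```python
import numpy as np

def kaiser_retained(data):
    """eigenvalues of the item correlation matrix; retain only components
    whose eigenvalue exceeds 1 (the latent root / kaiser criterion)."""
    R = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    pct = 100 * eigvals / eigvals.sum()             # % of total variance each
    return eigvals, pct, int((eigvals > 1).sum())
```

in the study, five of the 28 eigenvalues exceed 1 (8.090, 3.109, 2.407, 1.963 and 1.486, with the sixth at 0.998), so five components are retained.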
the rotated solution revealed that the five-component solution explained the variance with component 1 contributing 14.405%, component 2 contributing 14.047%, component 3 contributing 11.436%, component 4 contributing 11.325%, and component 5 contributing 9.522%. these five components (factors) explained 60.734% of total variance after the varimax rotation.

table (3): total variance explained by factor analysis for the final run of business failure factors

component | initial eigenvalues (total; % of variance; cumulative %) | extraction sums of squared loadings (total; % of variance; cumulative %) | rotation sums of squared loadings (total; % of variance; cumulative %)
1 | 8.090; 28.894; 28.894 | 8.090; 28.894; 28.894 | 4.033; 14.405; 14.405
2 | 3.109; 11.105; 39.999 | 3.109; 11.105; 39.999 | 3.933; 14.047; 28.451
3 | 2.407; 8.597; 48.596 | 2.407; 8.597; 48.596 | 3.202; 11.436; 39.887
4 | 1.963; 7.010; 55.607 | 1.963; 7.010; 55.607 | 3.171; 11.325; 51.212
5 | 1.486; 5.306; 60.912 | 1.486; 5.306; 60.912 | 2.666; 9.522; 60.734
6 | 0.998; 3.577; 64.489 | — | —
... | ... | — | —
28 | 0.068; 0.243; 100.00 | — | —
extraction method: principal component analysis.

3. scree plot
the scree plot in figure (1) graphs the eigenvalues against all the factors. this graph can also be used to decide on the number of factors to be derived; the point of interest is where the curve starts to flatten. it can be seen that the curve begins to flatten between factors 1 and 5. note also that factor 6 has an eigenvalue of less than 1, so only five factors have been retained.
figure (1): scree plot of business failure factors

4. rotated component (factor) matrix
table (4) shows the factor loadings after rotation of the 28 items/variables on the five factors extracted and rotated.
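the varimax rotation mentioned above can be sketched with the classical algorithm via repeated svd (kaiser, 1958); this is a generic implementation, not the spss routine:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """classical varimax: find an orthogonal rotation R that maximizes the
    variance of the squared loadings, iterating an svd-based update."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # gradient-like target matrix for the varimax objective
        target = Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(L.T @ target)
        R = u @ vt                     # nearest orthogonal matrix
        if s.sum() < objective * (1 + tol):
            break                      # converged
        objective = s.sum()
    return L @ R
```

because the rotation is orthogonal, each item's communality (row sum of squared loadings) is unchanged, which is why the rotated solution explains the same total variance as the unrotated one.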
the pattern of factor loadings should be examined to identify items/variables that have complex structures (a complex structure occurs when one item/variable has high loadings or correlations (0.50 or greater) on more than one factor/component). if an item/variable has a complex structure, it should be removed from the analysis (reinard, 2006; field, 2009; zaiontz, 2014). on the basis of this restriction, seven items loaded on the first factor, six items on the second factor, five items on the third factor, five items on the fourth factor, and five items on the fifth factor (table (4)). it is worth noting that the rotated component matrix should be checked only after satisfying all the requirements mentioned above, such as the msa values, communalities, kmo, and the p-value for bartlett's test of sphericity.

table (4): rotated component matrix for the final run of business failure factors

component/factor one: financial and political causes (eigenvalue 8.10; 14.41% variance explained; cronbach's alpha 0.87)
- a66 high cost of materials (factor loading 0.754)
- a67 lack of resources (0.734)
- a70 delay in collecting debts from clients (0.729)
- a65 monopoly (0.718)
- a73 changing funding sources (0.713)
- a68 dealing with suppliers and traders (0.688)
- a71 israeli attacks (0.604)

component/factor two: contractual causes (eigenvalue 3.11; 14.05% variance explained; cronbach's alpha 0.82)
- a11 owner absence from the company (0.760)
- a28 low margin of profit due to competition (0.723)
- a55 owner involvement in construction phase (0.712)
- a30 estimating practices (0.706)
- a54 award of contracts to the lowest price (0.651)
- a59 monopoly of some important construction materials (0.649)

component/factor three: managerial causes (eigenvalue 2.41; 11.44% variance explained; cronbach's alpha 0.79)
- a6 use of project management techniques (0.741)
- a4 bad decisions in regulating company policy (0.733)
- a7 company organization (0.710)
- a5 labor productivity and improvement (0.656)
- a8 procurement practices (0.654)

component/factor four: organizational causes (eigenvalue 1.96; 11.33% variance explained; cronbach's alpha 0.82)
- a45 increase in number of projects (0.792)
- a46 increase in size of projects (0.750)
- a49 increase in number of employees (0.721)
- a44 contractor's difficulties in obtaining bank facilities (0.589)
- a43 problems arising due to temporary items in the contract (0.573)

component/factor five: economical causes (eigenvalue 1.49; 9.52% variance explained; cronbach's alpha 0.75)
- a64 banks policy (0.754)
- a50 change of work from private to public or vice versa (0.635)
- a63 general government restriction (0.635)
- a18 inflation (0.610)
- a32 billing and collecting effectively (0.558)

extraction method: principal component analysis. rotation method: varimax with kaiser normalization.

once the factors had been extracted and rotated, it was necessary to cross-check whether the items/variables forming each factor collectively explain the same measure within the target dimension (doloi, 2009). if items/variables indeed form the identified factor (component), they should reasonably correlate with one another, though not perfectly. a cronbach's alpha (cα) test was conducted for each component (factor); a higher value of cα denotes greater internal consistency and vice versa. an alpha of 0.60 or higher is the minimum acceptable level; preferably, alpha will be 0.70 or higher (field, 2009; weiers, 2011; garson, 2013).
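the cronbach's alpha check applied to each extracted factor uses the standard formula; a minimal sketch (`cronbach_alpha` is our name, and the toy score matrix is illustrative only):

```python
import numpy as np

def cronbach_alpha(items):
    """cronbach's alpha for an (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the factor
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = [[1, 1], [2, 2], [3, 3], [4, 4]]      # perfectly consistent responses
print(cronbach_alpha(scores))                  # approaches 1.0
```

given the survey responses for the items loading on a factor, this computation yields cα values of the kind reported in table (4).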
according to the results tabulated in table (4), cα for each factor is higher than 0.7, so the factors are considered to be excellent.

financial and political factors
it is clear that the seven items that loaded on this group are related to financial and political factors that can cause business failure according to the viewpoint of local contractors. this group accounts for 14.41% of the total variance explained, with a reliability score (cronbach's α) of 0.87. according to factor analysis theory, the first factor accounts for the largest part of the total variance of the data. hence, high cost of materials is considered the most important factor causing business failure in the gaza strip. it is closely followed by lack of resources, delay in collecting debts from clients, monopoly, changing funding sources, and dealing with suppliers and traders. most of these factors are financial but are associated with political conditions. according to the statistical analysis, there is no weighted difference among the financial and political factors except for the lowest-ranking factor, israeli attacks. this may be interpreted as follows: most companies in the gaza strip are not exposed to israeli attacks directly, and those that are exposed are compensated by local authorities.

contractual factors
it can be seen from table (4) that six items/variables loaded on this group. the total variance explained was 14.05%, with a reliability score (cronbach's α) of 0.82. table (4) illustrates that owner absence from the company, low margin of profit due to competition, owner involvement in the construction phase, and estimating practices are the top four ranked factors. these are closely followed by award of contracts to the lowest price and monopoly of some important construction materials.
owner absence from the company is the factor most affecting company failure, because the resulting loss of experience among the staff and poor follow-up of the work lead to failure. the second factor reflects that stronger competition leads to lower profit in the contract from the contractor's side, while monopoly is an important failure factor in gaza because of the closure of the ports.

managerial factors
there are five factors listed under this group, as shown in table (4). the three highest-ranked business failure causes are the use of project management techniques, bad decisions in regulating company policy, and company organization. these are closely followed by two factors: labor productivity and improvement, and procurement practices. it is quite interesting to note that the use of project management techniques heavily affects failure: good techniques lead to good management of the project and reduce failure, while bad decisions at company level disrupt the work and create problems at the sites that lead to failure. good company organization and the selection of competent engineers and employees reduce failure, whereas poor hiring choices contribute to it, and good employee productivity is an important factor in reducing failure. the factor with the least effect on failure in this group is procurement practices.

organizational factors
table (4) illustrates the ranking of the five factors under this group. the top-ranked factors are increase in number of projects, increase in size of projects, and increase in number of employees. these three factors are the most influential because more skilled employees are needed to carry out additional projects without problems, and successfully completed projects reduce failure.
according to the contractors, there is a significant difference between the three top factors and the two lowest factors, which were contractor's difficulties in obtaining bank facilities and problems arising due to temporary items in the contract. the former leads to failure because the absence of bank facilities can stop the project, which may lead to financial failure; the latter, the lowest-ranked factor, creates more problems between the owner's staff and the contractor's staff, which may also lead to failure.

economic factors
it is obvious that the five factors that loaded on this group are related to economic factors that can cause business failure according to the viewpoint of local contractors. the first factor accounts for the largest part of the total variance of the data. hence, banks policy is considered the most important economic factor in the gaza strip and is a heavily affecting factor. the other factors, respectively, are change of work from private to public or vice versa, general government restriction, inflation, and billing and collecting effectively. a change in the type of work requires skilled employees and a more experienced project manager to carry out a new type of project without failure, while inflation is a worldwide cause of failure from the financial and the contractor's viewpoint.

5 conclusion
business failure has become an increasingly important issue in the gaza strip construction industry due to the ongoing closure that causes business instability. the failure of a company may cause considerable losses to all parties in the construction industry. in particular, it may affect various stakeholders, such as clients, contractors, subcontractors, suppliers, consultants, investors, and employees. there are many factors that could be responsible for contractors' failure, which impacts the local economic environment negatively.
The main objective of this paper was to identify the critical factors that have the potential to cause contractors' business failure in the Gaza Strip and to determine their level of severity from the contractors' viewpoint. Seventy-three factors were considered in this research and then reduced to twenty-eight factors using factor analysis. They were listed under the following five groups: (1) financial and political, (2) contractual, (3) managerial, (4) organizational, and (5) economic. The most critical factors that highly affect contractors' business failure are: (1) high cost of materials, (2) lack of resources, (3) delay in collecting dues from clients, (4) monopoly, (5) changing funding sources, (6) dealing with suppliers and traders, and (7) Israeli attacks. It is recommended that contracting companies consider the influence of these factors in order to avert failure. They should also focus on the remedies of failure by using a blend of managerial and organizational actions to overcome the financial and political implications of failure.

Khalid Al Hallaq / Critical Factors Causing Contractor's Business Failure in Gaza Strip
Journal of Engineering Research and Technology, Volume 1, Issue 4, December 2014

Digital Sensorless Speed Direct Current Motor Control by the Aid of a Static Speed Estimator

Constantin Pavlitov1, Yassen Gorbounov2, Radoslav Rusinov2, Anton Dimitrov
1 C. Pavlitov is with the Automation of Electrical Drives Department, Technical University of Sofia, Bulgaria.
2 Y. Gorbounov and R. Rusinov are with the Department of Automation of Mining Production, University of Mining and Geology "St. Ivan Rilski", Sofia, Bulgaria.

Abstract— This paper deals with the speed control of a DC motor without an angular speed measuring device. Such a device can prove rather expensive and unaffordable in everyday speed controllers. Since speed feedback must nevertheless exist, measuring the anchor current and applying a mathematical speed estimator [1] is suggested instead of a speedometer. The synthesis of the sensorless speed controller is performed in three major steps: the first is the identification of the motor; the second is a MATLAB Simulink model of the whole system, motor and regulator; and the third is the microcontroller implementation. The implementation has been made with an MSP430 16-bit MCU, and most of the functions have been performed by programmable means. This article should be helpful to those who want to make the transition from MATLAB simulation towards a digital controller implementation.

Index Terms— Sensorless control, DC motor control, speed estimator, speed control.

I. Introduction
This article does not claim new scientific principles of control. Its purpose is to give a teaching example of the transition from the mathematical model of an electrical drive to its practical implementation. That is why the control program is discussed in detail and the model peculiarities are given as crucial guidelines. The example is dedicated to the sensorless speed control of a direct current motor. We hope that the authors' extensive experience in the area will benefit young digital electrical drive designers in their learning process.

II. Mathematical Model
The mathematical model consists of several consecutive parts: motor identification; the regulator model; the speed estimator and angular speed filter; and the mathematical model of the whole system.
A. Motor Identification
The MATLAB Simulink model [2], [3] is presented in Fig. 1. In the figure: Uz is the power bridge supply voltage; U = γ·Uz is the anchor voltage, whose maximal value is 12 V at duty cycle γ; Ia is the anchor current, with a maximal value of 1.1 A; Ic is the static anchor current, proportional to the static torque; R is the active anchor resistance, measured with a stalled anchor: R = Uz/Ia = 11 Ω; Te is the electrical time constant of the motor, measured with an L-meter several times and averaged: Te = 2.94·10⁻⁴ s; ω is the angular speed in rad/s; k is the motor constant, k = (Uz − R·Ia)/ω = 0.02; Tm is the mechanical time constant, measured from the self-stopping time of the motor, here 138 ms; and J is the moment of inertia, given by (1):

J = k²·Tm/R = (0.02² · 0.1375)/11 ≈ 5.1·10⁻⁶ kg·m²   (1)

[Figure 1: Direct current motor MATLAB Simulink model.]
[Figure 2: Starting motor transition process; Uz = 12 V, Ic = 116 mA, ωmax = 534 rad/s.]

The starting transition process of the motor is given in Fig. 2.

B. Regulator Device
The regulator device in classical electrical drives [2], [4], [5] of this kind is given in Fig. 3. It is an analogue PI regulator with two time constants, T1 = R1·C and T2 = R2·C; Uin and Uout are its input and output. The transfer function is easily obtained from the circuit diagram; the corresponding equation is (2):

Uout(s) = (1/R1)·(R2 + 1/(sC))·Uin(s) = ((1 + s·C·R2)/(s·C·R1))·Uin(s) = ((1 + s·T2)/(s·T1))·Uin(s)   (2)

The digital recursive equation [6] is (3).
           11 2 .1 1 t kuh kuku t t kuku in inin outout   (3) the matlab implementation of this regulator is demonstrated in fig.4. the input h represents the sample time of the system. c the speed estimator and angular speed filter device since the device is sensorless [1] the angular speed doesn’t have to be measured. but for the sake of speed feedback, angular speed has to be estimated. this estimation has been done by (4). it gives static relationship between the angular speed and anchor current and voltage. k riu a .  (4) it will be seen further in the final model that the lack of dynamical relation is not so badly reflected to the quality of the whole system. the matlab simulink model has been presented in fig.5. the estimator device is built in the feedback of the electrical drive. the speed value is a function of the anchor voltage and the anchor current. every fluctuation of the arguments has made influence on the angular speed output especially the current which is rapidly changing. high harmonics presented in the current will be propagated freely towards the angular speed output. that is why an aperiodic digital filter [7] is suggested at the output of the estimator. it has to cut high harmonics and noises coming from the current hall measuring device. the transient function of this device is pointed out by (5):    su st su inout . 1. 1   (5) the recurrent equation which represents the digital form [7] of the filter is (6): figure 3 analogue variant of the pi regulator figure 5 the matlab simulink model of the speed estimator device figure 4 matlab implementation of the pi regulator constantin pavlitov, yassen gorbounov, radoslav rusinov, anton dimitrov/ digital sensorless speed direct current motor control by the aid of static speed estimator (2014) 128      ku ht h ku ht t ku inoutout .1.     (6) this final equation will be used for the practical implementation. 
D. The Mathematical Model of the Whole System
The MATLAB Simulink model of the whole system is pictured in Fig. 6. The identified DC motor is included as the subsystem DCM, the speed PI digital regulator is built into a corresponding subsystem with sample time h, and the speed estimator forms a third subsystem. Because the estimator is an algebraic expression, its output is filtered by a low-pass filter so that fast current fluctuations have less influence on the angular speed value. The electrical drive load is modelled by two elements [8], [9], permanent and dynamic, representing the friction of the bearings and the influence of the stator magnets. A current quantizer models the error of the Hall measuring device [10]. The simulations were run under the following conditions: regulator time constants T2 = 0.05 s, T1 = 0.005 s; sample time h = 1 ms; Uzmax = 12 V; Iamax = 1.1 A; ωmax = 350 rad/s; load current Icmax = 200 mA. The curves in Fig. 7 show that, in spite of the disturbances in the load, the regulated speed remains almost smooth and constant. The values of h, T1 and T2 were obtained from many simulations, and these results agree with the practical implementation.

[Figure 6: MATLAB Simulink whole system model.]
[Figure 7: Speed regulation process simulation: (a) anchor voltage ×1/70 [V], (b) anchor current ×1/500 [A], (c) regulated angular speed [rad/s], (d) estimated speed [rad/s], (e) load ×1/500 [N·m].]

III. Practical Implementation
The practical implementation of the electrical drive system consists of two major parts, hardware and software.

A. Hardware of the System
The hardware unites several scaled inputs/outputs and the power electronic circuit in Fig. 8. The inputs are organized as three 10-bit ADC channels and the outputs as two 8-bit DACs; in fact the DACs are realized as PWM converters, whose frequency is constant and whose pulse width is set by an 8-bit binary word. Every input pin is scaled with a coefficient so that the physical input parameter enters the processor directly, e.g. ωz [rad/s] = wcoef·wpot; Uz [V] = uzcoef·uzpot, etc.

[Figure 8: Hardware of the sensorless DC motor speed control (MSP430G2553; 256-level PWM on P1.4, soft-chopping direction on P1.5; speed assignment wcoef = 534/1023 rad/s per count; bridge voltage uzcoef = 12/1023 V per count; Hall sensor scaling 0.86 V/A, range ±2.09 A, Iamax = 1.1 A).]
The output value for the PWM is obtained from γ (the duty cycle) by converting it from float to byte and writing it to the PWM output:

outGamma = 255 * gamma;
analogWrite(analogOutPin, outGamma); // out as PWM

The power circuit, based on the L293B, is presented in Fig. 9. The DC motor can be switched in two ways: hard chopping, controlling Q1 and Q2 (the two signals inverted with respect to each other), and soft chopping, controlling Q1 and DIR. The DIR signal inverts only when the direction of motion changes, and the EN signal enables the power circuit. It should be mentioned that when the direction of movement (DIR) changes, the code supplied to the PWM must be complemented so that the absolute speed value is kept intact.

[Figure 9: Power electronic circuit (L293B); the transistors at the output of the bridge are bipolar.]

B. Software of the System
The sensorless DC motor control program looks like this:

const int analogInPinWz = A4;    // speed-assignment potentiometer Wz, pin 6
const int analogInPinHall = A3;  // Hall detector: current-proportional readings, pin 7
const int analogInPinUz = A1;    // power supply voltage from the ADC; reflects bridge supply, pin 3
const float wcoef = 0.5215;      // = (534 rad/s)/1024
const float hall = 245;          // = 512/2.09 A; sensitivity 0.317 V/A determined by the Hall detector
const float uzcoef = 0.0130;     // = 12 V/1024; 12 V max bridge supply
const int analogOutPin = 14;     // analog (PWM) output pin
int dir = LOW;                   // soft-chopping switch variable: direction of movement
const int digitalOutPin = 15;    // soft-chopping switch pin
float gamma = 0;                 // duty cycle of the regulator
int outGamma = 0;                // duty cycle as PWM value
int hallPot = 0;                 // value read from the Hall sensor
int wPot = 0;                    // value read from the Wz potentiometer
int uzPot = 0;                   // value supplied by the bridge divider
float wz = 0;                    // assigned speed, normalized [rad/s]
float i = 0;                     // normalized current [A]
float uz = 0;                    // normalized power supply voltage [V]
float w0 = 0;                    // estimator output
float a = 0;                     // filter coefficient T/(T+h)
float b = 0;                     // filter coefficient h/(T+h)
float wold = 0;                  // filter output
float wnew = 0;                  // filter input
float w = 0;                     // speed variable
float u = 0;                     // anchor voltage variable
float unew = 0;                  // regulator output at moment k
float uold = 0;                  // regulator output at moment k-1
const float k = 0.020;           // motor constant
const float r = 11.3;            // anchor resistance
float dw = 0;                    // speed error in a step
float dwold = 0;                 // previous speed error, for the difference term of Eq. (3)
const float t = 0.001;           // LPF time constant
const float t1 = 0.05;           // bigger regulator time constant
const float t2 = 0.005;          // smaller regulator time constant
float ki = 0;                    // = t2/t1, gain on the error difference
float kp = 0;                    // = h/t1, integral gain
float h = 0.001;                 // sample time

void setup() {
  ki = t2/t1;                    // regulator coefficients calculation
  kp = h/t1;
  pinMode(digitalOutPin, OUTPUT);
  Serial.begin(9600);
  a = t/(t + h);
  b = h/(t + h);
}

void loop() {
  wPot = analogRead(analogInPinWz);      // read speed assignment
  wz = wcoef*wPot + 50;                  // assignment normalized
  hallPot = analogRead(analogInPinHall); // read current feedback
  hallPot = hallPot - 512;               // remove the mid-rail offset
  i = hallPot/hall;                      // current normalized
  uzPot = analogRead(analogInPinUz);     // read bridge power supply
  uz = uzcoef*uzPot;                     // supply normalized
  w0 = (u - i*r)/k;                      // estimate speed, Eq. (4)
  wnew = a*wold + b*w0;                  // w0 filtered, Eq. (6)
  w = wnew;
  dw = wz - w;
  wold = wnew;
  uold = unew;                           // refresh the regulator
  unew = uold + ki*(dw - dwold) + kp*dw; // calculate new value, per Eq. (3)
  dwold = dw;
  if (unew > uz) unew = uz;
  if (unew < -uz) unew = -uz;
  gamma = unew/uz;                       // calculate duty cycle
  u = gamma*uz;
  if (gamma >= 0) dir = LOW; else dir = HIGH;
  digitalWrite(digitalOutPin, dir);
  if (gamma < 0) gamma = 1 + gamma;      // complement the code on direction change
  outGamma = 255*gamma;                  // transfer gamma from float to integer
  analogWrite(analogOutPin, outGamma);   // out gamma as PWM
  delay(0);                              // extend the sample time here if needed (delay() takes whole milliseconds)
}

IV. Experimental Results
The speed results are estimated with an oscilloscope (see Figures 10 to 15), which measures the sinusoidal signal generated by a measuring winding embedded inside the PIT 14/5 motor. The loading torque is 0.004 N·m.

[Figure 10: Highest motor speed: f/4 gives rot/s.]
[Figure 11: Lowest motor speed: f/4 gives rot/s.]
[Figure 12: Processor PWM: it is rapidly changing.]
[Figure 13: Power-side soft-chopping PWM; the upper spike corresponds to saturation of the upper bridge transistor, the lower spike to saturation of the lower bridge transistor.]
[Figure 14: Anchor current at high motor speed; the variations of the anchor current are smaller at high speed.]
[Figure 15: Anchor current at low motor speed; the variations of the anchor current are bigger than at high speed.]
V. Conclusion
The implemented DC motor speed control system supports the following conclusions. Owing to the high precision and adequacy of the mathematical model, the simulation gives highly predictable results; it was not a serious problem to predict the best sample time and regulator time constants. One of the most significant factors for speed smoothness is accurate measurement of the anchor current, and this accuracy depends strongly on the range of the Hall detector. Since the range of most low-current industrial Hall detectors is about ±5 A, it is hard to reach high accuracy in the milliampere range, which leads to a comparatively low ratio between the highest and lowest speeds. In this case the current measuring range was reduced (by rewinding the sensor) to ±2.09 A for a 1.1 A maximal current, with an accuracy of ±20 mA; the ratio between the highest and lowest speed then reached a factor of 7 at maximal load. Since the measured current is very rich in high harmonics, it needs to be filtered, but this filter should be a low-pass with a very small time constant so as not to lose significant information in the anchor current. In spite of this filtering, the current fluctuations still significantly influence the angular speed after the estimator, which shows the need for a low-pass angular speed filter. Its time constant has to be much higher than the current filter's, but the bigger the low-pass time constant, the more unstable the system becomes because of the phase deviation of the angular speed; the best time constant turns out to be around 1-2 ms. This filter is best realized digitally.
The MATLAB model predicted the filter time constant and its significance for the given design quite sharply. The optimal value of the sample time was determined by several MATLAB model experiments searching for the best regulator performance; usually this value coincides with the value determined by the maximal computational power of the DSP being applied. A satisfactory sample time was predicted to be 1 ms, which means the processor must be fast enough to meet this requirement; the MSP430 is at the edge of its limits but succeeds. The task is easier if the controlled motor is of greater power, since it then has a bigger time constant. Finally, it can be concluded that the computational power of this microcontroller is fully sufficient for the sensorless control of DC motors with a power of more than 5-10 watts.

References
[1] C. Pavlitov, H. Chen, I. Colak, T. Tashev, Y. Gorbounov, "Sensorless control of SRM by the aid of artificial neural network adaptive reference model," in EPE 2011, Birmingham, United Kingdom, 2011.
[2] C. Pavlitov, L. Spirov, Y. Gorbounov, N. Stefanov, "Parallel processing in electrical drives control," in Computer Science'2006, Istanbul, Turkey, 2006.
[3] Y. Gorbounov, C. Pavlitov, R. Rusinov, A. Alexandrov, K. Hadjov, D. Dontchev, "CPLD design of parallel processing algorithms for electromechanical system control," in Computer Science 2008, Kavala, Greece, 2008 (best paper award).
[4] A. Lewis, A Mathematical Approach to Classical Control, 2012.
[5] J. Polderman, J. Willems, Introduction to the Mathematical Theory of Systems and Control, 2013.
[6] I. Landau, G. Zito, Digital Control Systems: Design, Identification and Implementation, Communications and Control Engineering, ASIN B001QFZ9FY, 2006.
[7] R. Allred, Digital Filters for Everyone, 2nd edition, ISBN-10 1481084739, 2013.
[8] M. Chilikin, Obshchii kurs elektroprivoda (in Russian), Energia, Moscow, 1971.
[9] P. Childs, Mechanical Design Engineering Handbook, ISBN-10 0080977596, 2013.
[10] K. Sozanski, Digital Signal Processing in Power Electronics Control Circuits, ISBN-10 1447152662, 2013.

Constantin Pavlitov. The author is an associate professor in the Department of Electrical Drives Automation at the Faculty of Automation of the Technical University of Sofia. He delivers lectures in "Logical Control of Electromechanical Systems", "Computer Monitoring and Control of Electromechanical Systems" and "Microprocessor Control of Electrical Drives", and also lectures in the English Language Department of Engineering on computer architecture within the subject "Computing II". His scientific interests are in the field of fuzzy and neuro-fuzzy control of electrical drives and their identification with the aid of artificial neural networks.

Yassen Gorbounov. The author received a PhD degree from the Faculty of Automation at the Technical University of Sofia, Department of Electrical Drives Automation, in 2013. He is an assistant professor in the University of Mining and Geology "St. Ivan Rilski", Sofia, and delivers lectures in "Measurement of Non-Electric Quantities" and "Microprocessor Systems". He is a member of the IEEE, Region 8, the Federation of the Scientific and Engineering Unions in Bulgaria (FNTS) and the John Atanasoff Union of Automation and Informatics (UAI). He is an author or co-author of over 20 journal and conference papers and co-author of one book. His research interests include automatic control of electrical drives, switched reluctance motor and generator control, application of neural networks and fuzzy logic for motor control, and parallel processing algorithms with programmable logic devices.

Radoslav Rusinov. The author is an assistant professor in the University of Mining and Geology "St. Ivan Rilski", Sofia. He received his MSc in 1999 from the Technical University of Sofia.
His research interests include control systems, electrical drives, fuzzy logic and neural networks.

Journal of Engineering Research and Technology, Volume 1, Issue 3, September 2014

Fuzzy Logic Based Flow Controller of Dam Gates
Muhammad Imran1, Muzammal Zulfqar1, Haroon Rasheed2, Shahzadi Tayyaba3, Muhammad Waseem Ashraf1,* and Zahoor Ahmad1
1 GC University Lahore, Pakistan
2 Bahria University Karachi, Pakistan
3 The University of Lahore, Pakistan
* Email: muhammad.waseem.ashraf@gmail.com

Abstract— This paper presents the design and simulation of a flow controller for dam gates using a fuzzy logic based control system. A new way of controlling the opening and closing of dam gates is proposed. The system was first developed in MATLAB using the fuzzy logic Mamdani model, and was then checked by developing a simple C language code for a microcontroller. Two input parameters were selected, water level and water speed, and three membership functions were assigned to each of them. One output parameter, the flow controller, was selected, with three membership functions assigned to it as well. The system works according to the rules defined in the fuzzy inference system. The simulation results are compared with results calculated with the Mamdani formula. A pseudo code was developed from the rules defined in the FIS and the code was written in C; this code was then burnt into an Atmel 89C51 microcontroller, and a simple control circuit was developed to check the software results in practice. The basic concept of the study is that the system shows an extreme response in extreme conditions and normal results otherwise: when both inputs are in the higher region (high water level and fast speed), the output is also in the higher region (dam gates fully open). The system showed great efficiency and robustness, as can easily be seen from the observations.
this system differs from conventional systems in that it involves only a simple microcontroller circuit. index terms— fuzzy logic, dam gates, flow controller, membership functions, fuzzy rule editor, fuzzy inference system. i introduction life on earth began long ago. human beings used to live in jungles and hide in caves. they fed themselves by hunting animals and covered themselves with their skins. when the first humans came to earth they did not even know how to cover themselves, but as time passed they discovered that they could cover themselves with the leaves of trees and feed themselves by hunting animals. in short, man tried to learn how to live on this earth and how to survive in extreme conditions. it was this fight for survival which made him the most intelligent and powerful creature on the earth. the term survival of the fittest reminds us of a very interesting creature, the dinosaurs, which vanished and could not survive on earth, while human beings always tried to find new ways to feed and cover themselves. in fact humans have always struggled for the betterment of their life. although humans were always passionate about finding new ways of making life comfortable, over the last few centuries dramatic changes have occurred in the life of every single human being due to the marvelous and miraculous inventions of science. the work of years was reduced to months, of months to days, of days to hours, and of hours to minutes, seconds, and even nanoseconds. this was all done through the hard work of human beings. new technologies have nowadays changed the entire course of human life, although many of them are not robust and leave much room for error. humans have built dams and artificial reservoirs to get water and energy from them. dams are the major sources of hydro energy.
fuzzy logic is basically a flexible technique and is a numerical representation of a system in which the answer is not only high or low, 0 or 1, on or off, true or false. it is a free technique which is not bounded by any specific states. consider the example of a thermally heated metal, whose state is not just hot or cold but also anything in between; such a system can easily be described using fuzzy logic, as it can express that some part of the metal is at normal temperature. the most common way of using fuzzy logic is through the matlab software. bagis et al. described a novel way to control the spillway gates of dams. the method they proposed was very different from the conventional way of controlling the spillway gates of dams. the controller they used was a fuzzy proportional derivative controller based on an evolutionary algorithm, and it was found to be a robust controller compared with a normal fuzzy logic based controller. the main purpose of the controller was to improve the control of the reservoir gates. hydrographs of different magnitudes were used to simulate the proposed control system. the simulation results indicated that the system is not only robust but also provides accurate and proficient results. in fact the system was totally different from the common conventional system [1]. lin et al. proposed an intelligent fuzzy logic system whose basis was an artificial neural network. it was a system which could easily be compared with commonly used fuzzy systems. the uniqueness of the system was its ability to take the right decision at the right time and place. this decision-taking fuzzy system could be based on machine learning techniques.
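the heated-metal example can be made concrete with a small membership function that returns a degree between 0 and 1 rather than a hot/cold verdict. this is an illustrative sketch only; the 20 and 40 degree thresholds are our assumptions, not values from the paper.

```c
/* fuzzy membership of a metal's temperature in the "hot" set: the
   answer is a degree between 0 and 1, not a yes/no verdict.
   the 20 and 40 degree thresholds are illustrative assumptions. */
double mu_hot(double temp_c) {
    if (temp_c <= 20.0) return 0.0;   /* definitely not hot */
    if (temp_c >= 40.0) return 1.0;   /* definitely hot */
    return (temp_c - 20.0) / 20.0;    /* partially hot in between */
}
```

at 30 degrees the metal is "hot" to degree 0.5, which captures the "some part of the metal is at normal temperature" idea the text mentions.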
the structure could easily be made to learn the rules of fuzzy logic and to identify the inputs and outputs. it was seen that by connecting the self-organized and supervised systems the results obtained were astonishing. the designed system was user friendly and could easily be understood by anyone. two examples were used to explain the system properly. although the system had some resemblance to the conventional fuzzy system, in some ways it was quite different from it, and it was seen to give strong and proficient results [2]. castro presented a work on how fuzzy logic is an approximation method and why it has an edge over other logics, and also described why fuzzy logic shows great performance where other logics cannot. people often criticize the performance of fuzzy controllers, but in this work it was proved that the fuzzy controller has a great impact on daily life problems. it is basically a novel method. fuzzy systems use a linguistics-based problem solving method and have shown promising results, being based on differentiable input fuzzification and output defuzzification. castro addressed these fundamental questions both quantitatively and qualitatively [3]. tang et al. proposed a fuzzy logic based system for creating a genetic algorithm, using a method in which genes were divided into two different inputs. fuzzy membership functions were used to define the parameters of the inputs. fuzzification was performed on the inputs and rules were defined; defuzzification was then performed on the output side. a fuzzy logic based system is used because of its unique way of tackling complex and non-linear problems [4]. sakthivel et al. described a new way to control the liquid level in a spherical tank.
the level of liquid in a spherical tank is an extremely non-linear quantity, as the cross sectional area of a spherical tank is also non-linear and shows a high level of variation. a black box model has been used to solve this problem. the parameters of a normal pi controller are used in this system and its servo response is compared with that of a fuzzy logic controller. matlab is used for the real time implementation of this system. the fl controller gives promising results and helps in solving this non-linear problem [5]. adhikary et al. 2012 studied hydro power plants and reported on their safe and efficient control using fuzzy logic. the control of water is very tough under improper conditions, and the fuzzy logic method is a very important tool for tackling such problems. fuzzy logic is a rule based technique, and the membership functions have a great impact on the performance and efficiency of a system. their paper describes a fuzzy logic based controller method for safe reservoir control of dams through spillway gates. for secure spillway gate control the fuzzy logic controller (flc) system used two input variables, i.e. dam lake level and water inflow rate. the output is the single variable “openness of the spillway gate”, which is controlled by the flc. the main purpose of this control system is to discharge excess water and maintain the desired water level. things that are too complex to be modeled mathematically are very easily described with a fuzzy logic controller (flc). another advantage of fuzzy logic is automation of control, because humans are forgetful. this work can be extended to develop a method for relating fuzzy logic linguistic variables [6]. graham et al. 1986 reported on fuzzy identification and control of a liquid level rig. setting the control rules is very important and necessary in a fuzzy logic controller (flc) for better results. at the end a comparison is made between the experimental and practical results.
the technique consists of a set of linguistic variables which can be expressed on a process control computer using fuzzy logic. by combining fuzzy model identification with a fuzzy controller, a fuzzy adaptive regulator was created. the fuzzy controller is utilized in multivariable control systems [7]. bagis 2002 proposed determining fuzzy membership functions with tabu search, as an application to control. that paper presents a new methodology for selecting the membership functions of a fuzzy logic controller with tabu search. a fuzzy logic controller is a quick system specified by linguistic statements based on a rule or data base. if the mathematical model is very complex, or the system is non-linear or time dependent, a fuzzy logic controller (flc) can solve such problems effortlessly and effectively. in this research a new algorithm is presented, i.e. the tabu algorithm, which is a knowledge based system consisting of a data and rule base. glover presented the tabu search method, which is a strong technique for solving dense problems. the results of the tabu search algorithm can be adapted by changing control parameters such as the initial solutions, tabu states, type of move and active actions of the system [8]. hasebe et al. 2002 reported on reservoir operation using a fuzzy logic system, in which the neural network technique is also applied. a dam is used for various purposes, such as storing water against the demand for water and energy, balancing the rise and fall of river water flow, and raising the water level. the fuzzy system is applied efficiently when only water use is considered, and the neural network is applied for flood control. dam water level, rainfall, inflow and estimated inflows are the input components used in that paper. storage and outflow are the output variables. the results show a distinct variation between the flood season and the non-flood season.
when the system is used for storage it shows better performance in the non-flood season, with composition detection for the flood season [9]. karaboga et al. 2007 studied controlling the spillway gates of dams and reported on making a fuzzy logic controller with optimum rules. the water tank process is a very difficult and non-linear process, so a fuzzy logic controller with a minimum number of rules is built to solve such problems. the fuzzy logic controller is made with the help of the tabu search algorithm; the ts technique is used to find an outstanding rule base for the fuzzy controller. by using the ts technique, the task of designing a rule arrangement with the least number of rules becomes consistent and better results are attained [10]. bagis et al. 2004 reported on artificial neural networks and fuzzy logic based control of the spillway gates of dams using a fuzzy logic controller, with the neural network technique also used. in that paper a very ample way is offered for controlling the spillway gates of dams during the flood season. artificial neural networks work well with non-linear conditions. a controlled system is made for dissimilar flood situations, i.e. what the water value is when the flood comes. a comparison is made between the practical and theoretical results. the performance shows that using neural network and fuzzy logic techniques gives better results for controlling the spillway gates of dams [11]. chang et al. 2005 presented an adaptive neuro-fuzzy inference system for prediction of the water level in a reservoir. it is very important for the operator to verify the exact level of water in the reservoir. a neuro-based fuzzy inference system was used to build the water level control system during floods and land sliding.
this case is discussed for the reservoir water level, and proficient outcomes are obtained. a reservoir is basically a place where we collect or store something; a water reservoir is used for storing water under normal or dangerous circumstances. when the level of the water is very high, as in the case of a flood, making the best use of the water is very important. a neuro-fuzzy inference system is used to control the level of the water reservoir; two inputs ‘a’ and ‘b’ and one output ‘c’ are used, and four rules are made since there are two inputs. in general, the anfis model presented accurate and efficient prediction of the water level for the next three time steps, with correlation coefficient (cc) values very close to unity, larger than 0.99. by using a fuzzy inference system to deal with complex inputs and outputs, a useful guide for flood control operations is obtained [12]. wu et al. 2004 studied a type-2 fuzzy logic controller for a liquid-level process. a genetic algorithm is used in a type-2 fuzzy logic controller to control a liquid-level process, and a two-step process is used to build the controller. in the first step, the parameters of a type-1 fuzzy logic controller are tuned using a genetic algorithm. in the second step, the footprint of uncertainty is introduced by blurring the fuzzy input sets. the results show that the type-2 fuzzy logic controller copes with the complexity of the plant and can handle uncertainty better than its type-1 counterpart. fuzzy logic is a flexible logic system whose values are approximate instead of exact. a fuzzy logic controller uses linguistic “if, then” rules that can be made with the knowledge of experts in the relevant field; flcs are knowledge based controllers [13]. ii design methodology a designing in matlab a fuzzy logic control system in the fis editor can be assigned any number of inputs, but here it has two inputs, and each input has three membership functions (mfs).
the ranges should be selected according to the desired values of the input and output mfs; here the values of the inputs and the output have both been taken as 0-1 units, as shown in table 1. figure 1 shows how the common regions have been differentiated: the first overlapped region, covering ranges 0 to 0.5 and 0 to 1, is called region 1, and the second overlapped region, covering ranges 0 to 1 and 0.5 to 1, is called region 2. the same holds for the water speed input. the calculations have been made according to this regional division. figure 1 division of the regions. in designing this system different rules have been established for better results; the rules involve simple if-then statements and the and logic.

table 1 ranges selected for inputs and outputs
mf1 (range 0-0.5): input(1) water level: high level (hl); input(2) water speed: slow (s); output flow controller: closed (c)
mf2 (range 0-1): input(1) water level: normal level (nl); input(2) water speed: normal (n); output flow controller: normal opened (n_o)
mf3 (range 0.5-1): input(1) water level: low level (ll); input(2) water speed: fast (f); output flow controller: full opened (f_o)

table 2 rules for the inputs and output membership functions
rule 1: if water level is hl and water speed is s then flow control is n_o
rule 2: if water level is hl and water speed is n then flow control is n_o
rule 3: if water level is hl and water speed is f then flow control is f_o
rule 4: if water level is nl and water speed is s then flow control is n_o
rule 5: if water level is nl and water speed is n then flow control is n_o
rule 6: if water level is nl and water speed is f then flow control is n_o
rule 7: if water level is ll and water speed is f then flow control is n_o
rule 8: if water level is ll and water speed is n then flow control is n_o
rule 9: if water level is ll and water speed is s then flow control is c

figures 2 and 3 show the rule viewer graphs of the designed system. figure 2 results when water level is in region 1 and water speed is in region 2. figure 3 results when water level is in region 2 and water speed is in region 1. figures 4 and 5 show the surface viewer graphs.
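the nine rules of table 2 can be sketched as a simple lookup table in c. this is an illustrative encoding under our own naming, not the authors' generated code.

```c
/* the nine rules of table 2 encoded as a lookup table.
   indices follow the paper's linguistic terms; names are ours. */
enum level { HL, NL, LL };             /* water level: high, normal, low */
enum speed { SLOW, NORM, FAST };       /* water speed */
enum gate  { CLOSED, N_OPEN, F_OPEN }; /* flow controller output */

int rule(int water_level, int water_speed) {
    static const int table[3][3] = {
        /*        slow    normal  fast   */
        /* hl */ { N_OPEN, N_OPEN, F_OPEN },
        /* nl */ { N_OPEN, N_OPEN, N_OPEN },
        /* ll */ { CLOSED, N_OPEN, N_OPEN },
    };
    return table[water_level][water_speed];
}
```

the table makes the paper's "extreme inputs, extreme output" design visible at a glance: only the (hl, fast) corner fully opens the gates and only the (ll, slow) corner closes them; every other combination yields the normal opening.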
figure 4 surface graph between inputs and output. figure 5 surface graph between inputs and output. b algorithm design for flow controller system the input and output values are given below. case 1: water level in region 1 = 0.344; water speed in region 2 = 0.632; the flow controller tendency is towards the full opened mf = 0.518. water level is the 1st input of the system, whose value lies in region 1 of the mf graphs; the active mfs are high level (hl) and normal level (nl), and their membership values f1 and f2 are: f1 = (0.5-0.344)/0.5 = 0.312, f2 = 1-f1 = 1-0.312 = 0.688. water speed is the 2nd input parameter of the system, whose value lies in region 2 of the mf graphs; the active mfs are normal (n) and fast (f), and their membership values f3 and f4 are: f3 = (1-0.632)/1 = 0.368, f4 = 1-f3 = 1-0.368 = 0.632. the singleton values for this case are given in table 3.

table 3 singleton values for case 1
rule ra: water level hl, water speed n, flow control n_o, singleton value 0.5
rule rb: water level hl, water speed f, flow control f_o, singleton value 1
rule rc: water level nl, water speed n, flow control n_o, singleton value 0.5
rule rd: water level nl, water speed f, flow control n_o, singleton value 0.5

case 2: water level in region 2 = 0.693; water speed in region 1 = 0.305; the flow controller tendency is towards the closed mf = 0.463. water level is the 1st input of the system, whose value lies in region 2 of the mf graphs; the active mfs are normal level (nl) and low level (ll), and their membership values f1 and f2 are: f1 = (1-0.693)/1 = 0.307, f2 = 1-f1 = 1-0.307 = 0.693. water speed is the 2nd input parameter of the system, whose value lies in region 1 of the mf graphs; the active mfs are slow (s) and normal (n), and their membership values f3 and f4 are: f3 = (0.5-0.305)/0.5 = 0.39, f4 = 1-f3 = 1-0.39 = 0.61. table 4 shows the singleton values for this case. iii results and discussions a fuzzy logic (fl) based control system is proposed here for the flow controller of water in dams.
the given system contains an fl controller which has two inputs (water level, water speed) and one output, the flow controller. the and logic has been used here. the mamdani model is used, and its results are given below. table 5 shows the rule membership values for case 1. a calculations using the mamdani formula using the mamdani formula the output is calculated for both cases. case 1: water level in region 1 = high level = 0.344; water speed in region 2 = fast = 0.632; the flow controller tendency is towards the full opened mf = 0.518, where ri are the rule membership values of table 5 and si are the singleton values of table 3. here the singleton values correspond to the 3 different variables of the flow controller, i.e. closed = 0, normal opened = 0.5 and full opened = 1. hence σ si*ri = s0*ra + s1*rb + s2*rc + s3*rd = 0.5*0.312 + 1*0.312 + 0.5*0.368 + 0.5*0.632 = 0.156+0.312+0.184+0.316 = 0.968. σri = ra + rb + rc + rd = 0.312+0.312+0.368+0.632 = 1.624. flow controller = σ(si*ri)/σri = 0.968/1.624 = 0.596. matlab simulation value = 0.518, calculated value = 0.596, difference = 0.596-0.518 = 0.078. the tendency of the system lies in the full opened region of the output mf. table 6 shows the rule membership values for case 2. case 2: water level in region 2 = low level = 0.693; water speed in region 1 = slow = 0.305; the flow controller tendency is towards the closed mf = 0.463, where ri are the rule membership values of table 6 and si are the singleton values of table 4. here the singleton values again correspond to the 3 different variables of the flow controller, i.e. closed = 0, normal opened = 0.5 and full opened = 1. hence σ si*ri = s0*ra + s1*rb + s2*rc + s3*rd = 0.5*0.307 + 0.5*0.307 + 0*0.39 + 0.5*0.61 = 0.1535+0.1535+0+0.305 = 0.612. σri = ra + rb + rc + rd = 0.307+0.307+0.39+0.61 = 1.614. flow controller = σ(si*ri)/σri = 0.612/1.614 = 0.379. matlab simulation value = 0.463, calculated value = 0.379, difference = 0.463-0.379 = 0.084. the tendency of the system lies in the closed region of the output mf.
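the hand calculation above can be reproduced end to end in a short c sketch. the region formulas and the min ("and") combination of memberships follow the paper's algorithm; the function names and code organization are ours.

```c
/* memberships of the two active mfs for a crisp input in the 0-1
   universe, following the paper's region formulas: region 1 (x < 0.5)
   uses (0.5 - x)/0.5, region 2 uses (1 - x)/1; the second active mf
   is the complement of the first. */
void active_mfs(double x, double *fa, double *fb) {
    *fa = (x < 0.5) ? (0.5 - x) / 0.5 : (1.0 - x) / 1.0;
    *fb = 1.0 - *fa;
}

static double fmin2(double a, double b) { return a < b ? a : b; }

/* mamdani-style output with singleton consequents:
   flow = sum(si*ri) / sum(ri), where each ri is the min of the two
   input memberships for one of the four active rules. */
double flow_controller(double level, double speed, const double s[4]) {
    double f1, f2, f3, f4;
    active_mfs(level, &f1, &f2);   /* water level memberships  */
    active_mfs(speed, &f3, &f4);   /* water speed memberships  */
    double r[4] = { fmin2(f1, f3), fmin2(f1, f4),
                    fmin2(f2, f3), fmin2(f2, f4) };
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 4; ++i) { num += s[i] * r[i]; den += r[i]; }
    return num / den;
}
```

with the table 3 singletons {0.5, 1, 0.5, 0.5} and inputs (0.344, 0.632) this yields 0.596, and with the table 4 singletons {0.5, 0.5, 0, 0.5} and inputs (0.693, 0.305) it yields 0.379, matching the calculated values above.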
b observations from all the simulated and calculated results the following observations have been made, which indicate the robustness of the system. first observation (table 7): water level in region 1 = high level = 0.344; water speed in region 2 = fast = 0.632; the flow controller tendency is towards the full opened mf = 0.518. second observation (table 8): water level in region 2 = low level = 0.693; water speed in region 1 = slow = 0.305; the flow controller tendency is towards the closed mf = 0.463.

table 4 singleton values for case 2
rule ra: water level nl, water speed s, flow control n_o, singleton value 0.5
rule rb: water level nl, water speed n, flow control n_o, singleton value 0.5
rule rc: water level ll, water speed s, flow control c, singleton value 0
rule rd: water level ll, water speed n, flow control n_o, singleton value 0.5

table 5 rule membership values for case 1
rule ra: f1^f3 = 0.312
rule rb: f1^f4 = 0.312
rule rc: f2^f3 = 0.368
rule rd: f2^f4 = 0.632

table 6 rule membership values for case 2
rule ra: f1^f3 = 0.307
rule rb: f1^f4 = 0.307
rule rc: f2^f3 = 0.39
rule rd: f2^f4 = 0.61

table 7 first observations
1. matlab simulation value: 0.518
2. design value: 0.596
3. difference: 0.078

iv embedding the fuzzy logic system in a microcontroller using pseudo code fuzzy logic rules themselves cannot be burnt into a microcontroller, as a fuzzy logic controller is nothing more than a software based controller. fuzzy itself means approximation, foggy or unclear, so a fuzzy program cannot be burnt into a microcontroller the way an assembly program is; we have to make a pseudo code to make the fuzzy program work in the microcontroller, as shown in figure 6.
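the reduced behavior the pseudo code implements, extreme input combinations driving extreme actions and everything else falling to the normal action, can be simulated in portable c. the led and pin mapping in the comments follows the paper's figure 6 assignments; the function and constant names are ours, and this is a sketch, not the actual 89c51 firmware.

```c
/* portable simulation of the microcontroller logic: extreme input
   combinations give extreme gate actions, everything else is normal.
   pin/led notes follow the paper's figure 6 mapping; names are ours. */
enum { GATE_FULL_OPEN, GATE_CLOSED, GATE_NORMAL_OPEN };

int gate_action(int high_level, int fast, int low_level, int slow) {
    if (high_level && fast) return GATE_FULL_OPEN;  /* red led, p2.0   */
    if (low_level && slow)  return GATE_CLOSED;     /* blue led, p2.1  */
    return GATE_NORMAL_OPEN;                        /* green led, p2.2 */
}
```

the flat if/else ladder is exactly what makes the fuzzy rule base burnable: after defuzzification only crisp input flags and three discrete outputs remain, which any small microcontroller can evaluate.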
figure 6 pseudo code. here p0 = 1st input = water level, p1 = 2nd input = water speed, p2 = output = flow controller. for water level: p0.0 = high level (hl), p0.1 = low level (ll), p0.2 = normal level (nl). for water speed: p1.0 = fast (f), p1.1 = slow (s), p1.2 = normal (n). for flow controller (dam gates): p2.0 = full opened (f_o), p2.1 = closed (c), p2.2 = normal opened (n_o). of course at real dam sites there are fixed speed and level sensors and large circuits for controlling the opening and closing of the dams. in this system we have developed a simple pseudo code based microcontroller circuit which is derived from the results of the fuzzy logic based controller. this system gives an overview of how the opening and closing of dam gates works. it is the best interpretation of the fuzzy system above: whenever there are extreme conditions the system will show its extreme action (f_o or c), otherwise it will work in the normal condition. any microcontroller can be used, pic or atmel, but here the atmel 89c51 microcontroller has been used. the three leds show the different mfs of the flow controller (dam gates) and the logic states show the different mfs of the two inputs. the proteus software has been used to develop the microcontroller circuit. led red = full opened mf, led blue = closed mf, led green = normal opened mf. p0.0 = 1 = water level = high level (hl), p1.0 = 1 = water speed = fast (f), p2.0 = led red glows = flow controller (dam gates) = full opened (f_o), as shown in figure 7. figure 7 full opened dam gates. p0.1 = 1 = water level = low level (ll), p1.1 = 1 = water speed = slow (s), p2.1 = led blue glows = flow controller (dam gates) = closed (c), as shown in figure 8. table 8 second observations: 1. matlab simulation value: 0.463; 2. design value: 0.379; 3.
difference: 0.084. figure 8 closed dam gates. there is also an else condition: between the extreme conditions, no matter what the condition is, the system will show results in the normal region, as shown in figure 9. figure 9 else condition or normal opened dam gates. v conclusion it has been seen that whenever the flood season comes a lot of water from the dams is wasted, which results in unbearable consequences. loss of human life and agricultural and power losses are the major outcomes of this destructive phenomenon; these losses can be due to landslides and floods. over the past few centuries many major dam failures have been seen, resulting in many human casualties. it has been noticed that the destruction was caused by negligence about the speed of the water coming into the dams from different sources and the water level in the dams, as water level and water speed are the two major factors in the flow controller system of dams. it is also noticed that if the dam gates are not controlled (opening, closing or halfway opening) this can result in the same losses mentioned above. this system provides an efficient way of controlling the dam gates, so that the gates are opened or closed in accordance with the water speed and water level. the system was checked for practical results using a microcontroller and showed promising results. of course practical conditions always differ from ideal conditions, but if this system is given a chance it will show better results. this system is novel and robust; the closeness of its results shows its importance and efficiency. this study can be improved to a more precise level and more can be done in this field.
references [1] aytekin bagis, dervis karaboga, “evolutionary algorithm-based fuzzy pd control of spillway gates of dams,” journal of the franklin institute 344 (2007) 1039–10. received 6 january 2006; received in revised form 5 october 2006; accepted 21 may 2007. [2] chin-teng lin, c. s. george lee, “neural-network-based fuzzy logic control and decision system,” ieee transactions on computers, vol. 40, no. 12, december 1991. [3] j. l. castro, “fuzzy logic controllers are universal approximators,” ieee transactions on systems, man, and cybernetics, vol. 25, no. 4, april 1995. [4] kit-sang tang, kim-fung man, zhi-feng liu, sam kwong, “minimal fuzzy memberships and rules using hierarchical genetic algorithms,” ieee transactions on industrial electronics, vol. 45, no. 1, february 1998. [5] g. sakthivel, t. s. anandhi, s. p. natarajan, international journal of engineering research and applications (ijera), issn: 2248-9622, vol. 1, issue 3, pp. 934-940. [6] priyabrata adhikary, pankaj kr roy, asis mazumdar, “safe and efficient control of hydro power plant by fuzzy logic,” volume-2, issue-5, 1270–1277, 2012. [7] bruce p. graham and robert b. newell, “fuzzy identification and control of a liquid level rig,” fuzzy sets and systems 26, 255-273, 1988. [8] aytekin bagis, “determining fuzzy membership functions with tabu search: an application to control,” elsevier, 10 october 2002. [9] m. hasebe, y. nagayama, “reservoir operation using the neural network and fuzzy systems for dam control and operation support,” elsevier, 245-260, 2002. [10] dervis karaboga, aytekin bagis, tefaruk haktanir, “controlling spillway gates of dams by using fuzzy logic controller with optimum rule number,” elsevier, 232-238, 2008.
[11] aytekin bagis, dervis karaboga, “artificial neural networks and fuzzy logic based control of spillway gates of dams,” wiley interscience, 2485–2501, 2004. [12] fi-john chang, ya-ting chang, “adaptive neuro-fuzzy inference system for prediction of water level in reservoir,” elsevier advances in water resources 29, 1–10, 2006. [13] dongrui wu, woei wan tan, “a type-2 fuzzy logic controller for the liquid-level process,” 25-29 july 2004. transactions template journal of engineering research and technology, volume 1, issue 4, december 2014 109 enhanced context-aware role-based access control framework for pervasive environment tawfiq s. barhoom1, mohammed o. al-akhras2 1associate professor at islamic university gaza, gaza, palestine, tbarhoom@iugaza.edu.ps 2researcher at palcore microsystems co., gaza, palestine, md.alakhras@palcore.com abstract— utilization of contextual information is considered very useful for improving the access decision making process for system resources, so as to be more effective in providing authorized service to a large number of end users. the selected model makes decisions based on context information sensed and collected from the user environment. we then enhanced context utilization and framework performance based on a theoretical idea previously published [14], by studying the process of making decisions based on context information validity. we focused on enhancing the distribution and management of context information over users by using a proxy, which works as an observer to enforce policy for short term context information. in case of any change that breaks access control policy rules, the proxy on the user device will automatically send a revocation/grant request based on the change made to context information related to the user in his local environment.
after a change is made to context information listed within the available policy rules, the proxy re-evaluates it on the user device, utilizing the available resources on the device, then grants or revokes permissions, and finally updates the web service to keep it up-to-date. such enhancement will highly increase system responsiveness and enhance authorization for end users. index terms— context aware, role base access control, performance, grant, revocation. i introduction pervasive environment means that the processing of information becomes integrated with everyday life activities, where computers involved in human life do not stop people from practicing daily life, unlike desktop computers which force users to stop any other daily activity in order to use them. this is the emergence of the pervasive environment, or ubiquitous computing as coined by m. weiser [1]. the surrounding ambience becomes smarter; our actions and existence become noticed and measured by computers and sensors which provide computer applications with information about us. such an environment poses new concerns that should be taken into account. one of the major concerns is security, where users' resources in such an environment are vulnerable to unauthorized access; data and service confidentiality and integrity are now more at risk, and the opportunity for unknown users to quickly join networks becomes larger. one of the concepts accompanying pervasive computing is the use of context awareness, with the emergence of context aware applications and services and an increasing demand for such applications. context aware applications utilize implicit information to adapt system or application behavior. researchers have produced several definitions of context. b. schilit [2] referred to it as location, the people one interacts with, objects, and any change to these objects. later b. schilit [3] added new parameters to the definition, such as network connectivity, communication costs, bandwidth, resources and social situation.
since the aspects and parameters of context cannot be enumerated, as situations change, a. dey [15] consequently defined context as follows: “any information that can be used to characterize the situation of an entity. an entity is a person, place or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves”. as stated before, security is a major concern, especially how to control access to resources and services, which is handled by access control mechanisms, also referred to as authorization. one of the recently widespread techniques is role based access control (rbac); this mechanism conforms to companies' hierarchical structure, which makes it more desirable. advancement in pervasive computing means that traditional rbac, as issued by the national institute of standards and technology (nist), has limitations, and an extension that takes context information into account has become an urgent need. many enhancement approaches have been adopted, whose model designs are based on: 1. the flat rbac model for enhancement [4]. 2. the constrained rbac model for enhancement [4]. 3. non-rbac models: models which built and implemented access control based on mechanisms other than role base, which could be access control lists (acl) or discretionary access control (dac). the different contributions provided and described do not give a complete picture of framework enhancement and development for pervasive computing, for example regarding context information validity, which means how much the measurement of context information complies with the real value, i.e. whether the decision made based on the value of a specific context information item is valid or not. as a result, a change of a context information value will lead to a change in permission state.
our work introduces an enhanced framework which combines some techniques from existing models and presents others; the framework enhances permission revocation and restoration based on context information validity. ii related works researchers use context awareness in their proposals for new application paradigms, and access control mechanisms are one of the disciplines affected by this new paradigm, with efforts made to improve access control mechanisms so that they are enriched with context awareness. from our research perspective we categorized these efforts into rbac based and non-rbac based, so we briefly review non-rbac and then rbac based research in a detailed manner. a non-rbac access control mechanisms • semantic based approach: a semantic context aware access control framework for pervasive computing environments [15] designed a semantic context-aware policy model that adopts ontology and rules to express context information and context-aware access control policies. it mainly focuses on non-organizational bodies and spontaneous interaction scenarios and on enhancing policy dynamicity and adaptation, leaving a need to focus on and solve access control for centralized businesses that have their own resources to be protected and controlled against unauthorized users and that are also considered pervasive environments. • web service based approach: such an approach depends mainly on using web services to control the access of users to objects and resources; claudio ardagna [8] presented an architecture that utilizes web services to enforce policies for controlling access. b rbac based access control mechanisms the first attempt to utilize rbac in a contextual manner was made by m. covington [9], who provided a model to create and access information about home residents and resources within the home, called generalized rbac (grbac). it depends on environmental roles in addition to the traditional roles provided by rbac for subject and object; the rbac notion of a role is generalized to capture the state of the environment.
GRBAC is a model for securing future applications that uses a generalized approach to handling context information, incorporating the notion of environment roles alongside subject roles. Homes equipped with advanced technologies demand a level of information-security knowledge that most home residents lack, so systems should make it very simple to define and manage security policies; a further challenge is that the security mechanism should be usable and non-intrusive. For example, the days of the week can be split into two environment roles, a holidays role and a weekdays role, or roles can be based on locations such as upstairs, downstairs and guestroom. Permission assignment in GRBAC is done based not only on subject roles but also on active environmental roles. Providing such roles for the environmental state, however, places a heavy load on pervasive-environment systems, and generalizing the home example to wider situations with far larger numbers of users would make such a system complicated to maintain with high measurement accuracy. Moreover, authentication of user access requests is not introduced, although it is a very important module for authenticating users interacting with the system and preventing deception or identity spoofing. Y. G. Kim [10] introduced a mechanism that uses a state checking matrix (SCM) to grant or deny access privileges based on context. The additions to traditional RBAC are:
• State checking matrix: handles context information such as location, time, and others.
• State checking agent: handles the subset of roles for each user.
• Context-aware agent: handles the subset of permissions for each role.
G.
Zhang [5] introduced a model as an extension of RBAC in which roles are dynamically assigned to users depending on their context. Two state machines are used for each user, one representing the assigned subset of roles and one the permission-assignment hierarchy; both subsets change dynamically as context changes, monitored and transferred to a central authority by a context agent. Generating such state machines for every user in a pervasive environment is costly, especially when the targeted resources or services do not reside on large servers or central computing power: if a resource or service lives on limited-power devices facing a large number of requests from many devices, this model comes at the expense of system responsiveness. W. Han [11] introduced a formal model for context-sensitive access control in which a reference monitor is responsible for making decisions. The proposed architecture does not address:
• how integration or extension of new context factors could be done;
• how convenient such a model would be for a pervasive environment.
J. Hwang [12] introduced a formalized definition for managing dynamic role and permission assignment, with three major components responsible for three major operations:
• Access control manager (ACM): responsible for processing access control requests.
• Context-aware user assignment manager (CAUAM): provides role assignment based on the context requirements defined in each table.
• Context-aware permission assignment manager (CAPAM): provides permission assignment based on context requirements, and provides personalized access control by utilizing user-preference information stored in the user profile repository.
T.
Devdatta [13] presented a model for context-aware RBAC in pervasive computing applications; the model uses context information in role admission policies and specifies how the application behaves when a context condition fails to hold. S. Sadat [14] introduced a context-aware access model based on RBAC for pervasive computing environments called CAP, in which context information is grouped into long-term (LTC) and short-term (STC) context information. CAP was introduced as a solution to RBAC's difficulty in handling unknown users who join a pervasive network, using dynamic user-role assignment and context information for dynamic permission activation for roles. However, this model has drawbacks, as stated by the authors themselves in their follow-up contribution [6]:
• It fetches many context information values to make a decision, some of which may not be used, causing overhead at execution time.
• It does not support role hierarchy.
• It uses only a limited combination of context conditions for assigning roles or activating permissions.
S. Sadat [6] introduced an enhanced version of CAP called iCAP that tries to overcome some of these limitations. Notably, the authors reduced the context description from the 4-tuple introduced in the earlier paper to a triple. iCAP also handles role hierarchy when assigning roles, including inheritance; this contribution allows the permission set to be transferred from a parent role to a child role. However, iCAP does not provide a mechanism for permission revocation when conditions change, and an authentication component is still needed to identify users in an environment where unknown or malicious users can join and try to access resources and services without the right to do so.

III Model Architecture

Our enhancement is made on S.
Emami's [14] CAP. CAP has two main parts, the domain authority and the session agent, as shown in Figure 1; a session agent is created when the user starts or enters a new session.

A Domain authority

The domain authority collects long-term contexts and is responsible for assigning roles to the user at the beginning of the session based on the long-term context conditions related to each role. This part fills the session-role store (S-R) according to the RPC, appoints the session permission assignment (SPA), and sends it to the session agent (SA) to manage access. The domain authority consists of the following components:
• Long-term context manager: collects long-term context information from sensors and converts it into predicate formulas to be stored.
• Session manager: handles session requests from users and assigns a session agent and session to each user.
• Dynamic user role assigner: assigns roles to the user session based on role-assignment conditions and fills the S-R storage.

B Session agent

The session agent collects short-term context information and evaluates the user's access requests according to the SPA; if the request-authorization function accepts a request, the permission is granted to the request issuer, otherwise it is rejected. Its main components are:
• Short-term context manager: collects short-term context information.
• Permission authorizer: decides on users' access requests based on the role permissions in the session.

IV Framework Design

A. Dey [15] defines a context-aware framework as follows: "the framework will allow application designers to expend less effort on the details that are common across all context-aware applications and focus their energies on the main goal of these applications."
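The dynamic user-role assignment performed by the domain authority described above can be sketched minimally. The role names, context items and condition encoding are illustrative assumptions, not taken from [14]: at session start, each role's long-term context condition is evaluated and the session-role (S-R) store is filled.

```python
# Sketch of CAP-style dynamic user-role assignment (illustrative names).
# Role-assignment conditions (RAC) over long-term context information (LTCI).
ROLE_ASSIGNMENT_CONDITIONS = {
    "doctor":  lambda ltc: ltc["department"] == "cardiology",
    "visitor": lambda ltc: True,                 # unconditional role
    "auditor": lambda ltc: ltc["clearance"] >= 2,
}

def fill_session_roles(long_term_context):
    """Dynamic user role assigner: evaluate each RAC against the LTCI
    and return the roles to place in the S-R store for this session."""
    return {role for role, cond in ROLE_ASSIGNMENT_CONDITIONS.items()
            if cond(long_term_context)}

roles = fill_session_roles({"department": "cardiology", "clearance": 1})
# A cardiology user with low clearance gets {"doctor", "visitor"}.
```

Because the conditions are evaluated once per session, this step depends only on long-term context, matching the design decision noted in the evaluation section.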
Our improved framework sends the client the required context information set together with a change factor for each item; the client then checks whether a previously reported context value has broken its threshold (factor) and, if so, notifies the web service to re-evaluate the permissions granted or denied on the basis of that context information.

A Domain authority

As shown in Figure 2, the improved context-aware framework, as described in the previous contributions and studies, is split into two main components, the domain authority and the session agent:
• The domain authority contains the session manager, which handles session requests issued by clients; the long-term context manager, which manages acquisition and distribution of long-term context information; and the dynamic user role assigner, which enforces assignment of roles to users according to long-term context information (LTCI) and role assignment conditions (RAC) and then fills the session-role (S-R) database.
• The session agent contains the short-term context manager, which manages acquisition and distribution of context information classified as short-term, and the permission authorizer, which makes decisions about user access requests based on the permissions granted for the session, stored in the dynamic session permission assignment (SPA) database. The session agent also contains a new component, the environment context information validity controller, which controls the validity of environment-related context information such as time; the permission authorizer checks this validity before making a decision.
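The change-factor check performed on the client side can be sketched as follows; the context items and threshold values are illustrative assumptions, not values from the framework.

```python
# Sketch of the proposed client-side change-factor check.
# Each required context item carries a change factor (threshold)
# sent by the server; exceeding it triggers re-evaluation.
REQUIRED_CONTEXT = {"temperature": 2.0, "location_m": 5.0}

def needs_reevaluation(previous, current):
    """Return the context items whose change exceeds their threshold.
    A non-empty result triggers a notification to the web service
    so that granted/denied permissions are re-evaluated."""
    return [item for item, threshold in REQUIRED_CONTEXT.items()
            if abs(current[item] - previous[item]) > threshold]

changed = needs_reevaluation({"temperature": 21.0, "location_m": 0.0},
                             {"temperature": 24.5, "location_m": 3.0})
# Temperature moved by 3.5 (> 2.0); location moved by 3.0 (<= 5.0).
```

This is the mechanism that lets the framework avoid re-evaluating policy rules on every request: only threshold-breaking changes reach the server.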
• Client proxy: responsible for enforcing and ensuring the integrity of the measured context information over a secure communication channel, to prevent fraud attempts by malicious users. The client agent contains a context information manager, responsible for acquisition and distribution of measured context information, and a user context information validity controller, which checks whether any context information change exceeds the change threshold and, if so, issues an event to the web service to re-evaluate the access decision made.

1. User context information validity controller: responsible for handling and monitoring context information validity based on value changes and how those changes affect the validity of the permission conditions being guarded. Guarding this number of conditions for each user individually on the server side would dramatically degrade the performance of the server-side framework, which is why the controller resides on the client.

2. User context information manager: collects context information and formats it to be processed by the validity controller. This module needs to be pluggable, enabling new tools and APIs to be added to broaden its ability to measure and sense user and environmental context information, with reasoning mechanisms.

Table 1: Web service execution time average, using JUnit testing.

Test case | Validate user | Session request | Loading granted permissions | Loading local conditions | Total response time
1 | 16 | 615 | 85 | 482 | 1301
2 | 10 | 560 | 65 | 417 | 1152
3 | 19 | 636 | 50 | 482 | 1308
4 | 19 | 585 | 43 | 474 | 1297
5 | 13 | 539 | 41 | 508 | 1212
6 | 14 | 526 | 43 | 472 | 1165
7 | 25 | 621 | 53 | 493 | 1308
8 | 11 | 495 | 72 | 413 | 1090
9 | 46 | 501 | 39 | 470 | 1214
10 | 20 | 594 | 55 | 581 | 1354
Average (run time) | 19.3 | 567.2 | 54.6 | 479.2 | 1240.1

Fig. 1: RBAC relational diagram [16]. Fig. 2: Enhanced RBAC relational diagram [16].

V Framework Testing and Evaluation

We now show the testing results for the improvements made.

A.
Experimental setup. This section describes what is needed to set up the experimental environment in order to evaluate the system from the perspectives under focus. Our framework includes a web service implemented in the Java programming language and deployed on GlassFish Server 4.0; the web service uses MySQL 5.1.37 as its back-end database management system. The platform hosting these server applications is an Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26 GHz with 3072 MB RAM. The experiment also includes a mobile device running Android with the following specification: 1 GHz dual-core Cortex-A9, 800 MB RAM, Wi-Fi 802.11 a/b/g/n dual band, Android OS v4.4.2 (KitKat). All devices are connected through a 150 Mbps wireless access point, model TL-WA701ND. The testing and evaluation measurements are shown on their own, due to the absence of similar implementations of such frameworks to compare against.

B Performance

The performance evaluation uses response time as its criterion: we measure the response time of critical operations and tasks on the web service, and then the response time of the proxy, which in turn calls the web service remotely to access services or resources, for the most expensive and important tasks.

1) Calculation of web service task execution time, using JUnit tests:
1. Validate user.
2. Loading granted roles (fillSR).
3. Loading granted permissions (validateAllPerms).
4. Loading the conditions needed for gaining role permissions (fillCondsByRole).

Table 1 presents the results of executing the web service operations that handle the main evaluation process for the access decision. We did not include loading granted roles, because it executes only once per session; our enhancement focuses on changes to short-term context, whereas granting roles, which is tied to long-term context, is done once per session.
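The paper times these tasks with JUnit; an analogous measurement harness, sketched here in Python rather than Java with illustrative helper names, would look like the following.

```python
import time

def timed(task, *args):
    """Run one task and measure its execution time in milliseconds,
    analogous to the per-task JUnit timing reported in Table 1."""
    start = time.perf_counter()
    result = task(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

def average_ms(task, runs=10):
    """Average the task's time over several runs, as was done
    across the 10 test cases of Table 1."""
    return sum(timed(task)[1] for _ in range(runs)) / runs
```

Averaging over repeated runs, as in the table's final row, smooths out scheduling noise on the test machine.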
2) Calculation of proxy execution time. We evaluate and statistically compare results on the device for:
1. Load local RPC.
2. Load local SPA.

As shown in Figure 3, based on the results collected in Table 1, the session request operation and the loading-local-conditions operation have high execution times. The session request operation breaks down as follows:
1. Establishing a session for the end user.
2. Evaluating the role assignment based on conditions from the preset policy, represented in a database table.
3. Evaluating the permission assignment based on conditions from the preset policy, represented in a database table: (a) loading granted permissions (validateAllPerms); (b) filling them into the SPA.

To demonstrate the enhancement gained by adopting the new approach, suppose a session with 100 access requests; the execution frequency for each user is then as shown in Table 2. As shown in Table 1, the validate task and fillSR run once at the beginning of the session; fillSR depends on long-term context information, which, as noted before, does not change during the session. The most frequent operation is the one triggered by every change to the context information provided by the proxy: each change forces the framework to re-evaluate condition validity in order to identify which permissions remain granted and which are revoked.

Fig. 3: Operations execution time scale chart.

C Security

We should take into account matching the security CIA triad.

1) Confidentiality and integrity. To address confidentiality we have two choices:
• transfer SOAP requests using SSL;
• use the SOAP extension WS-Security.
Since communication resources in a pervasive environment are expensive and limited, computation power is relatively low, and most communicating devices are low-power devices whose battery life must be preserved, we choose the less costly technique. We use SSL to transfer SOAP messages between the framework parts, through the third-party library kSOAP2-android, matching the security requirement while preserving the performance of the system parts, which is more convenient for a pervasive environment. Using the proxy side to control context information also fosters integrity: malicious users are prevented from injecting or providing false values to the framework's web service, which would otherwise lead to false access control decisions.

2) Availability. The most likely causes of the system failing or ceasing to function are the following:
• System failures: failures such as the inability to read data sources, make comparisons, or process business logic. We conduct unit testing to ensure that each piece of functionality works and outputs correct results; during programming we foster testability of the software, such as simplicity and independence of modules and components; and to improve fault tolerance we use exception handling.
• Denial of service attacks: another important threat to availability is service interruption. Our system architecture is distributed, which simplifies monitoring of each part and reduces service bottlenecks, and we use connection pooling to manage connections in order to alleviate overload caused by large numbers of users.

Table 2: Comparison of estimated task execution frequency and time during a session.
Operation | Estimated execution frequency per session | Execution time | Total execution time
Validate user | 1 | 19.3 | 19.3
Session request: establishing a session | 1 | 110 | 110
Session request: evaluating the role assignment based on conditions (fillSR) | 1 | 188 | 188
Session request: evaluating the permission assignment (validateAllPerms and fillSPA) | 100 | 215 | 21500
Loading local permissions | 100 | 54.6 | 5460

VI Conclusion

Context-aware applications have increasing significance in various environments and domains; one of these domains is authorization. Among the various authorization techniques, we selected the role-based technique as the basis for this research, since it fits organizational structures, where roles match a person's functional behavior within a specific system or domain. In this paper we attempt to investigate and enhance the model of S. Emami [6] to be applicable in a pervasive environment by adding the proxy module. This module works with end users as an interface that facilitates access to specific system resources based on a policy preset by system administrators; it is also responsible for monitoring the context information used by the framework to make access decisions, notifying the context manager whenever a change occurs so that related permissions or roles granted on the basis of that context information can be re-evaluated. The proxy also markedly enhances the model's performance: the framework no longer needs to re-evaluate the policy rules on every request; instead, each end-user device independently monitors context and notifies the framework only when a change occurs, conserving resource utilization. The enhanced framework also improves the system from the security perspective, since malicious changes to context information, intended to alter access decisions, become impossible.

VII.
Future Work

The enhanced framework can be expanded in multiple directions, since this research touches many areas and disciplines that need to be enhanced and covered: in particular, the dynamicity of mounting context information from available sensors, and a focused study on selecting encryption techniques for messaging that have a neutral effect on performance. The certainty of context information values should also be researched, to improve value measurement with lower battery impact. Further study is needed on taking user privacy preferences into account when applying the framework policy, and on making the framework zero-configuration to enhance usability in various domains and business environments.

Acknowledgment

We are grateful for the support provided by the Faculty of Information Technology at the Islamic University - Gaza, represented by its deanery; we also appreciate the help of Palcore Microsystems Co. for sponsoring the research activities and financial requirements.

References

[1] M. Weiser. "The Computer for the 21st Century". 1991.
[2] B. Schilit, M. Theimer. "Disseminating Active Map Information to Mobile Hosts". 1994.
[3] B. Schilit, N. Adams, R. Want. "Context-Aware Computing Applications". 1994.
[4] R. Sandhu, D. F. Ferraiolo, R. Kuhn. "The NIST Model for Role-Based Access Control: Toward a Unified Standard". 2000.

Table
3: Samsung device with Android OS running the framework.

Test case | Validate user | Session request | Loading granted permissions | Loading local conditions | Total response time
1 | 60 | 805 | 407 | 838 | 3136
2 | 116 | 852 | 315 | 859 | 3986
3 | 119 | 730 | 330 | 828 | 3207
4 | 156 | 948 | 416 | 584 | 3227
5 | 108 | 998 | 525 | 1277 | 4802
6 | 215 | 935 | 448 | 1013 | 4825
7 | 101 | 611 | 367 | 764 | 3230
8 | 93 | 572 | 318 | 694 | 3189
9 | 112 | 659 | 145 | 629 | 2951
10 | 99 | 805 | 197 | 776 | 3163
Average (run time) | 117.9 | 791.5 | 346.8 | 826.2 | 3571.6

[5] G. Zhang and M. Parashar. "Context-Aware Dynamic Access Control for Pervasive Applications". 2004.
[6] S. Emami, S. Zokaei. "Context-Sensitive Dynamic Role-Based Access Control Model". ISeCure, 2010.
[7] A. Toninelli, R. Montanari, L. Kagal and O. Lassila. "A Semantic Context-Aware Access Control Framework for Secure Collaborations in Pervasive Computing Environments". 5th International Semantic Web Conference, ISWC 2006. 2006.
[8] C. Ardagna, E. Damiani, S. De Capitani di Vimercati, P. Samarati. "A Web Service Architecture for Enforcing Access Control Policies". Proceedings of the First International Workshop on Views on Designing Complex Architectures, 2004.
[9] M. Covington, M. Moyer, and M. Ahamad. "Generalized Role-Based Access Control for Securing Future Applications". 2000.
[10] Y. G. Kim, C. J. Mon, D. Jeong, C. Y. Song and D. K. Baik.
"Context-Aware Access Control Mechanism for Ubiquitous Applications". 2003.
[11] W. Han, J. Zhang, X. Yao. "Context-Sensitive Access Control Model and Implementation". IEEE, 2005.
[12] J. H. Choi, H. Jang and Y. I. Eom. "CA-RBAC: Context Aware RBAC Scheme in Ubiquitous Computing Environments". 2010.
[13] D. Kulkarni and A. Tripathi. "Context-Aware Role-Based Access Control in Pervasive Computing Systems". 2008.
[14] S. Emami, M. Amini, S. Zokaei. "A Context-Aware Access Control Model for Pervasive Computing Environments". 2007.
[15] A. K. Dey, D. Salber and G. D. Abowd. "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications". Human-Computer Interaction 16, 2001.

Journal of Engineering Research and Technology, Volume 1, Issue 3, September 2014

Optimization of Calcium Alginate Preparation in Aqueous Solution by Response Surface Methodology

Kamaruddin, M.A.1, *Yusoff, M.S.1, Aziz, H.A.1, Alrozi, R.2 and Zawawi, M.H.3
1 School of Civil Engineering, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia, email: suffian@usm.my
2 Faculty of Chemical Engineering, Universiti Teknologi MARA Pulau Pinang, 13000 Penang, Malaysia
3 Department of Civil Engineering, College of Engineering, Universiti Tenaga Nasional, 43000 Kajang, Selangor, Malaysia

Abstract - In this study, a statistical software package and design of experiments were applied to the preparation of calcium alginate in aqueous solution. Alginate, a polysaccharide originating from brown algae with different types of guluronic and mannuronic chains, was used as an intermediate, blended with calcium carbonate powder to prepare a macro-sized adsorbent. Though adsorption has been an ideal choice in wastewater purification, the need to find alternative sources of adsorbents has received considerable interest recently.
In this study, a central composite design was used to develop a model to predict and optimize the preparation conditions of calcium alginate. Mathematical model equations were obtained from simulation programming. Analysis of variance (ANOVA) for viscosity and pH (the responses) indicated that the model was adequate to fit the experimental data (p-values, lack of fit, adjusted R2). The statistical parameters showed that the quadratic effects of both calcium carbonate and alginate powder were the most significant. Meanwhile, correlation coefficients R2 of 0.9974 and 0.9008 for the two responses implied that the developed models were adequate to navigate the design space. The optimum preparation condition was found by compromising between the independent factors and responses at different criteria: 2.00 g of calcium carbonate and 10% (w/v) alginate, resulting in a viscosity of 38 cP at pH 10.

Index Terms - Adsorbent, alginate, pH, statistical analysis, viscosity.

I Introduction

Managing water pollution is one of the crucial challenges of the current world, due to rapid changes in product manufacturing and technological advancement that result in wide variation in industrial effluents. For instance, heavy-metal by-products from electroplating, paint, textile, mining and steel-making activities carry significant amounts of copper, lead, manganese and cadmium, which pose a threat to humans and the environment. Generally, industrial wastewater discharges vary in the quantity and quality of heavy metals, organic and inorganic matter, and suspended particulates, to name a few. Therefore, the discharge of untreated wastewater has been a major concern to many stakeholders seeking to safeguard the environment and, particularly, human health.
If not properly and safely treated, wastewater from industrial activities can be a threatening source of pollution that is very costly to remediate. One available physico-chemical method is adsorption, which offers better removal of heavy metals, high efficiency, high resistance, a plentiful source of material and cost effectiveness [1]. To date, various types of adsorbents have been discovered and tested for their efficiency in adsorption processes, including mineral deposits [2], agricultural wastes [3-5] and industrial by-products [6, 7]; the selection of these materials depends mostly on an insoluble porous matrix and available active groups capable of reacting with polar and non-polar pollutants [8]. Alginate-based polysaccharides have been widely employed in the biomedical and pharmaceutical fields, mostly in drug delivery, wound dressings and dental implants; in nanoparticle applications, alginate has found use in biological devices [9]. In the environmental field, thanks to its gel-forming ability, biocompatibility, non-toxicity and biodegradability [10, 11], alginate has been used in catalysts and adsorbents for separation and purification in wastewater treatment. However, alginate's solubility in aqueous media limits its adsorption properties for ionic pollutants, specifically heavy-metal ions. Therefore, an experimental work was carried out to modify alginate in aqueous solution, blended with calcium carbonate powder, to improve the adsorption properties. The preparation conditions of the calcium alginate were tested against viscosity and pH based on the mathematical equations developed with statistical software. Kamaruddin, M.A., Yusoff, M.S., Aziz, H.A. and Alrozi, R.
Application of Response Surface Methodology for Calcium Alginate Preparation in Aqueous (2014)

II Experimental Setup

A Materials

Alginate is recognized as an excellent polysaccharide for gelling systems because of its unique physicochemical, thermal and rheological properties; its importance lies in its hydrocolloid properties, including viscosity, pH, solubility and mechanical behavior. In this work, sodium alginate (C6H7O6Na) powder of low molecular weight, obtained locally and supplied by R&M Chemicals (Malaysia), was preferred because it is widely used in many encapsulation processes [12]. Calcium carbonate powder was obtained from limestone quarry wastes, a by-product of quarrying activity. The compositions of the alginate and calcium carbonate are listed in Table 1.

Table 1: Alginate and calcium carbonate composition.

Alginate (C6H7O6Na):
Specification | Content
Assay | 91-106%
Moisture | max 15%
Matter insoluble in water | max 1%
Loss on ignition (LOI) at 1100 °C | max 25%
Molecular weight | 85000
Viscosity at 2 g/L (spindle no. 4), mPa·s | max 65

Calcium carbonate powder (CaCO3):
Element | Content
C | 20
CaO | 75
SiO2 | 3.4
Al2O3 | 1.1
Particle size | passing 75 µm sieve aperture

B Calcium alginate preparation

The aqueous solution was prepared by first adding a known volume of distilled water to a 250 mL beaker and stirring at 85 °C for 15 min. A known weight of alginate was then added slowly at a constant stirring rate of 150 rpm for 10 minutes until a homogeneous mixture was observed. Prior to the addition of calcium carbonate powder, the alginate solution was cooled to 50 °C to prevent thermal shock, which would initiate lump formation between the alginate and calcium carbonate. Subsequently, the mixture of alginate and calcium carbonate powder was stirred for 15 minutes, and the viscosity and pH were measured.
C Viscosity and pH measurement

The viscosity of the calcium alginate in aqueous solution was measured using a laboratory viscometer, model DV-II+Pro (Brookfield, USA), with spindle SC4-27 as recommended by the manufacturer. During measurement, the solution temperature was raised to 80 °C in 15-minute intervals and the spindle speed was maintained at 20 rpm throughout. pH was measured using a Eutech 2700 pH meter (Thermo Scientific, USA). All measurements were done in triplicate, and the average values were used in the statistical analysis.

D Design of experiment

To achieve adequate and reliable measurements of interest, response surface methodology (RSM) was used. RSM is a collection of mathematical and statistical techniques for developing, improving and optimizing relationships between independent and dependent variables [5]. It is normally used to identify the relative importance of several affecting factors in the presence of complex relationships. Above all, the application of RSM increases product yields, reduces process variability, brings the output response closer to nominal and target requirements, and reduces trials and overall cost [13]. In this work, a central composite design (CCD), an efficient tool for sequential experimentation, was used to incorporate information from a properly planned factorial experiment. A CCD consists of 2^k factorial or "cube" points, where k is the number of factors; 2k axial points fixed axially at a distance α from the center to generate the quadratic terms; and replicate tests at the center of the experimental region [14]. Replicates of the center tests are important because they provide an independent estimate of the experimental error. In this work, a CCD for 2 factors (alginate and calcium carbonate) with 5 replicates at the center results in a total of 2^2 + (2 × 2) + 5 = 13 runs. A value of α = [2^k]^(1/4) assures rotatability of the CCD and is equivalent to 1.414.
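The run count above can be checked by generating the coded design points; `ccd_points` is an illustrative helper, not part of the statistical package used in the study.

```python
from itertools import product

def ccd_points(k=2, center_replicates=5):
    """Coded-unit central composite design: 2^k factorial points,
    2k axial points at distance alpha = (2**k)**0.25, plus center runs."""
    alpha = (2 ** k) ** 0.25
    factorial = list(product([-1.0, 1.0], repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    center = [(0.0,) * k] * center_replicates
    return factorial + axial + center

runs = ccd_points()   # 4 factorial + 4 axial + 5 center = 13 runs
```

For k = 2 the axial distance is (2^2)^(1/4) = sqrt(2) ≈ 1.414, matching the ±1.414 levels that appear in Table 3.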
To narrow the ranges of the independent variables, a preliminary experiment was conducted prior to the designed experimental runs, to minimize the effects of uncontrolled factors. The effective range of calcium carbonate was found to lie between 2 and 10 g, while the effective alginate amount in the aqueous preparation was found to be between 5 and 10% (w/v). Generally, the viscosity and pH of the calcium alginate rely on these preparation conditions: increasing the calcium carbonate dosage leads to an increase of pH due to the precipitation of calcium ions, while a viscous solution tends to retard the formation of alginate beads when ejected from the injector nozzle. These two independent factors were therefore identified as the key variables in the preparation of calcium alginate in aqueous solution. A simplified design summary of the independent variables and responses in terms of coded factors is listed in Table 2, and the complete CCD, with 4 factorial points, 4 axial points and 5 replicates of the center point, is given in Table 3.
application of response surface methodology for calcium alginate preparation in aqueous (2014) 97

table 2: independent variables and responses in coded terms

  independent factors   code   unit      range
  caco3                 x1     g         2 - 10
  alginate              x2     % (w/v)   5 - 10

  responses             code   unit      range
  viscosity             y1     cp        24 - 49
  ph                    y2     -         9.7 - 11.8

table 3: experimental design and results

  run   caco3, x1   alginate, x2   viscosity, y1 (cp)        ph, y2
                                   experiment   predicted    experiment   predicted
  1     -1          -1             24.00        24.37        10.40        10.20
  2     +1          -1             40.00        40.63        9.70         10.03
  3     -1          +1             38.00        37.87        11.30        11.67
  4     +1          +1             45.00        45.13        10.40        10.37
  5     -1.414      0              27.00        26.93        11.80        11.71
  6     +1.414      0              44.00        43.57        9.00         9.34
  7     0           -1.414         33.00        32.39        10.90        10.43
  8     0           +1.414         45.00        45.11        11.30        11.54
  9     0           0              49.00        48.60        11.40        11.54
  10    0           0              49.00        48.60        11.80        11.54
  11    0           0              49.00        48.60        11.80        11.54
  12    0           0              48.00        48.60        11.40        11.54
  13    0           0              48.00        48.60        10.40        10.20

iii results and discussions

a model fitting

the most important parameters affecting the viscosity and ph of the calcium alginate are the calcium carbonate and alginate dosages. in order to investigate the combined effects of these factors, experiments were conducted at different combinations. thirteen experimental runs were carried out to evaluate the effects of these combinations, and the results were correlated with a second-order polynomial model. the suggested models for both responses were found to be quadratic, as shown in equations 1 and 2:

viscosity, y1 = +48.60 + 5.88x1 + 4.50x2 - 6.67x1² - 4.92x2² - 2.25x1x2   (1)

ph, y2 = +11.51 + 0.47x1 + 0.39x2 - 0.25x1² - 0.80x2² + 0.35x1x2   (2)

where x1 and x2 are the calcium carbonate and alginate dosages in coded form. the coefficient of a single factor is known as the effect of that particular factor, while the coefficient of the cross term and the coefficients of the second-order terms represent the interaction between the two factors and the quadratic effects, respectively. positive and negative signs represent synergistic and antagonistic effects, respectively (15).
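equations 1 and 2 can be evaluated directly at the coded design points to reproduce the "predicted" columns of table 3; small differences come from coefficient rounding. a minimal sketch:

```python
def viscosity(x1, x2):
    # fitted quadratic model for viscosity (equation 1), coded factors
    return 48.60 + 5.88*x1 + 4.50*x2 - 6.67*x1**2 - 4.92*x2**2 - 2.25*x1*x2

def ph(x1, x2):
    # fitted quadratic model for ph (equation 2), coded factors
    return 11.51 + 0.47*x1 + 0.39*x2 - 0.25*x1**2 - 0.80*x2**2 + 0.35*x1*x2

# run 1 of table 3 is the coded point (-1, -1); table 3 lists 24.37 cp,
# the rounded coefficients give 24.38
print(round(viscosity(-1, -1), 2))
```

the center-point prediction viscosity(0, 0) = 48.60 cp is simply the model constant, matching runs 9-13 of table 3.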
b models validation

to ensure satisfactory and adequate prediction of the real system by the fitted data, model validation was carried out. the fitted models were also validated to allow precise judgment and avoid misleading conclusions. in this work, graphical and numerical methods were used as tools for model examination. a residual is defined as the difference between an observed value and the estimated value. figure 1 shows the residuals against the fitted values, while figure 2 shows the residuals against the order of observation; the latter plot was drawn to evaluate any inconsistency or drift across the observations. it can be assumed that the residuals of both models (viscosity and ph) were randomly distributed and that no obvious drift was present. next, normal probability plots of the model data were drawn to check for normality. if the residual plot lies approximately along a straight line, the normality assumption is satisfied; any departure from a straight line indicates a departure from a normal distribution of the residuals. in this study, the residuals for both viscosity and ph were observed to be normally distributed, so the normality assumptions are satisfied and the response variables are normally distributed. plots of normal probability against residuals for viscosity and ph are shown in figure 3.
figure 1: plot of residuals against fitted values for a) viscosity and b) ph

figure 2: plot of residuals against order of observation for a) viscosity and b) ph

figure 3: normal probability plots of residuals for a) viscosity and b) ph

for the numerical analysis, the developed models were checked using the coefficient of determination (r²) and the adjusted r² (r²adj). r² indicates how well a set of data points fits a statistical model, while r²adj modifies r² by taking into account the number of explanatory terms in the model. using the sums of squares (ss), the number of experiments (n) and the number of predictor terms (p), they were calculated as follows:

r² = 1 - ss_residual / (ss_model + ss_residual)   (3)

r²adj = 1 - [(n - 1)/(n - p)](1 - r²)   (4)

the calculated r² values for the viscosity and ph models were higher than 90%, corresponding to 0.9974 and 0.9008, respectively. in addition, the r²adj values for both responses (0.9955 and 0.8299) were close to the respective r² values, confirming satisfactory adjustment of the quadratic models to the experimental data. analysis of variance (anova) was then carried out to analyze the differences between group means and the variation.
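with the experimental and predicted viscosities of table 3 and the model sum of squares from table 4, equations 3 and 4 reproduce the quoted r² = 0.9974 and r²adj = 0.9955. a sketch of the calculation:

```python
# viscosity data from table 3 (experiment vs. model prediction)
y     = [24, 40, 38, 45, 27, 44, 33, 45, 49, 49, 49, 48, 48]
y_hat = [24.37, 40.63, 37.87, 45.13, 26.93, 43.57, 32.39, 45.11,
         48.60, 48.60, 48.60, 48.60, 48.60]

ss_residual = sum((a - b) ** 2 for a, b in zip(y, y_hat))
ss_model = 884.88              # from the anova (table 4)
n, p = 13, 6                   # 13 runs, 6 model terms incl. intercept

r2 = 1 - ss_residual / (ss_model + ss_residual)        # equation 3
r2_adj = 1 - (n - 1) / (n - p) * (1 - r2)              # equation 4
print(round(r2, 4), round(r2_adj, 4))                  # 0.9974 0.9955
```

the agreement with the quoted values confirms that the "predicted" column of table 3 and the anova sum of squares are mutually consistent.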
as demonstrated in table 4, the low probability values of less than 0.0001 and 0.0021 for viscosity and ph, respectively, show that the regressions of the quadratic models were significant. to further investigate model adequacy, a lack-of-fit test was carried out for each model. principally, the lack of fit describes the variation of the data about the fitted model; if the model does not fit the data sufficiently, the lack of fit will be significant. as can be seen from table 4, the lack-of-fit p values of 0.3953 and 0.1084 for viscosity and ph indicate that the lack of fit was not significant, suggesting that the models describe the data well. moreover, the adequate precision values, which measure the signal-to-noise ratio, were 61.540 and 9.592 for viscosity and ph, respectively (data not shown); values greater than 4 imply that the models can be used to navigate the design space.

c optimization analysis

to obtain the optimum preparation conditions for calcium alginate, a response optimizer was employed. however, it is crucial to analyze the relationship between predictors and responses for each model prior to optimization, so this analysis was carried out first, by means of fisher's f-test and student's t-test. generally, an f-test is used to test more than one coefficient or joint hypotheses, whereas a t-test is used whenever the hypothesis concerns one coefficient at a time. the p values associated with the t-test explain the significance of each factor and of the interactions between them. as the magnitude of t increases, the p value becomes smaller, corresponding to greater significance of the coefficient term. as can be seen from table 5, the linear, quadratic and interaction terms of the predictors were found to be significant for the viscosity measurement, with p values reported as 0.000. it can be considered that all the coefficients have a substantial effect in determining the optimum condition for viscosity.
meanwhile, the quadratic effect of calcium carbonate had the least significant effect on ph, with a p value of 0.110, followed by the interaction between calcium carbonate and alginate (0.095). in addition, the linear effects of calcium carbonate (0.008) and alginate (0.020) and the quadratic effect of alginate (0.001) were found to have significant impacts on the regression for ph.

table 4: anova for viscosity and ph

  statistical parameter   degrees of freedom   sum of squares   mean square   prob. > f   remarks
  viscosity
  model                   5                    884.88           176.98        < 0.0001    significant
  x1                      1                    276.61           276.61        < 0.0001
  x2                      1                    161.74           161.74        < 0.0001
  x1²                     1                    309.95           309.95        < 0.0001
  x2²                     1                    168.73           168.73        < 0.0001
  x1x2                    1                    20.25            20.25         0.0001
  lack of fit                                  1.15             0.34          0.3953      not significant
  ph
  model                   5                    8.36             1.67          0.0021      significant
  x1                      1                    1.79             1.79          0.0078
  x2                      1                    1.19             1.19          0.0197
  x1²                     1                    0.44             0.44          0.1105
  x2²                     1                    4.75             4.75          0.0005
  x1x2                    1                    0.49             0.49          0.0950
  lack of fit                                                                 0.1084      not significant

figure 4 shows the three-dimensional mesh wire plots for viscosity and ph. the plots enable graphical visualization of the relationship between the independent variables and the responses. from the figure, increasing the amounts of alginate and calcium carbonate led to an increase in viscosity. in contrast, small dosings of alginate and calcium carbonate reduced the viscosity from 37 to 21 cp, attributed to the leaching of calcium carbonate and alginate relative to the density of water. this condition is undesirable because low viscosity retards the formation of calcium alginate in the crosslinking solution. therefore, the region of interest for viscosity was fixed at 37 cp to ensure pourability of the calcium alginate mixture. for the case of ph, it was shown that lower dosing of calcium carbonate and alginate results in low ph. generally, precipitation of alkaline ions from the calcium increases the hydroxide ion concentration. therefore, increasing the amount of calcium
carbonate results in an increase in the ph values. meanwhile, it is worth mentioning that alginate has the least effect on the ph of the solution, because the dissolution of the guluronic and mannuronic acids in alginate is limited by the amount of hydrophobic agents present in the solution. in this case, water, as the main dissolving mediator, was capable of overcoming the acidic condition of the alginate and produced a basic aqueous condition.

table 5: estimated regression coefficients for viscosity and ph

figure 4: three-dimensional wire plots for a) viscosity and b) ph

the optimum preparation conditions for calcium alginate in aqueous solution were determined by numerical optimization. to obtain this, the two responses were compromised jointly subject to the optimization parameters. principally, a viscous condition and a relatively high ph are desired for an optimum preparation. this process is crucial in order to reduce the number of experimental runs when the original design contains more points. the target goals for the independent variables (calcium carbonate and alginate) were fixed within their ranges, while the responses (viscosity and ph) were targeted at 37 cp and 10, respectively. the software searches for a combination of input variable levels that jointly optimizes the set of responses by satisfying the requirements for each response in the set. finally, after obtaining the composite desirability for each response, the global solution for the preparation conditions was obtained successfully at 2 g of calcium carbonate and 10% of alginate, resulting in a viscosity of 37 cp and ph 10. to confirm the preparation conditions suggested by the software, three replicate experiments were carried out at these calcium carbonate and alginate dosages.
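the paper does not state which desirability functions its response optimizer uses, so the following is only a minimal grid-search sketch of composite-desirability optimization over the coded design region, with a hypothetical linear desirability and assumed tolerances (12 cp for viscosity, 1.5 for ph):

```python
def viscosity(x1, x2):
    # equation 1, coded factors
    return 48.60 + 5.88*x1 + 4.50*x2 - 6.67*x1**2 - 4.92*x2**2 - 2.25*x1*x2

def ph(x1, x2):
    # equation 2, coded factors
    return 11.51 + 0.47*x1 + 0.39*x2 - 0.25*x1**2 - 0.80*x2**2 + 0.35*x1*x2

def desirability(value, target, tol):
    # 1 at the target, falling linearly to 0 at +/- tol (assumed form)
    return max(0.0, 1.0 - abs(value - target) / tol)

best_d, best_x = -1.0, (0.0, 0.0)
n = 200
for i in range(n + 1):
    for j in range(n + 1):
        x1 = -1.414 + 2.828 * i / n        # sweep the coded region
        x2 = -1.414 + 2.828 * j / n
        d1 = desirability(viscosity(x1, x2), 37.0, 12.0)
        d2 = desirability(ph(x1, x2), 10.0, 1.5)
        d = (d1 * d2) ** 0.5               # composite (geometric-mean) desirability
        if d > best_d:
            best_d, best_x = d, (x1, x2)
print(best_x, round(best_d, 3))
```

the exact optimum depends on the tolerances chosen; the sketch only illustrates the mechanism (individual desirabilities combined into a geometric mean and maximized over the coded factor space).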
as shown in table 6, the viscosity and ph obtained from the additional experiments are close to those predicted by the model, indicating that rsm was an ideal tool for optimizing the preparation conditions of the calcium alginate.

table 6: replication of the suggested preparation condition

                     viscosity (cp)   ph
  predicted values   37               10
  replicate 1        35               10
  replicate 2        36               11
  replicate 3        35               10
  error (% ave)      4.5              3.3

estimated regression coefficients (table 5):

  term       viscosity, y1                                      ph, y2
             coefficient   std. error   t-value    prob.>t      coefficient   std. error   t-value   prob.>t
  constant   48.600        0.2591       187.540    0.000        11.5400       0.1622       71.136    0.000
  x1         5.880         0.2049       28.702     0.000        0.4725        0.1283       3.684     0.008
  x2         4.496         0.2049       21.947     0.000        0.3859        0.1283       3.009     0.020
  x1²        -6.675        0.2197       -30.382    0.000        -0.2513       0.1375       -1.827    0.110
  x2²        -4.925        0.2197       -22.417    0.000        -0.8263       0.1375       -6.008    0.001
  x1x2       -2.250        0.2897       -7.766     0.000        0.3500        0.1814       1.930     0.095

v conclusion

this study has demonstrated that the utilization of rsm and design of experiments successfully obtained the optimum dosages of calcium carbonate and alginate for calcium alginate preparation. statistical modelling with a ccd at fixed levels of the independent variables produced high correlation coefficients for the two quadratic models. the normal probability plots for both responses were found to be normally distributed, and the responses were successfully explained by the anova. finally, the additional experiments showed relatively small errors across the replicates, indicating that rsm and ccd can be used for modelling and optimizing the calcium alginate preparation conditions.

acknowledgment

the authors would like to acknowledge universiti sains malaysia under the postgraduate research grant scheme (usmprgs/8045051) and the ministry of higher education (mybrain) for the financial support during the research period.
references

[1] kim, k.h., keller, a.a., yang, j.k. (2013). removal of heavy metals from aqueous solution using a novel composite of recycled materials. colloids surf. a, 425: 6-14.
[2] hussain, s., aziz, h.a., isa, m.h., adlan, m.n., asaari, f.a.h. (2007). physico-chemical method for ammonia removal from synthetic wastewater using limestone and gac in batch and column studies. bioresour. technol., 98(4): 874-880.
[3] kamaruddin, m.a., yusoff, m.s., ahmad, m.a. (2011). optimization of durian peel based activated carbon preparation conditions for ammoniacal nitrogen removal from semi-aerobic landfill leachate. j. sci. ind. res., 70: 554-560.
[4] kamaruddin, m.a., yusoff, m.s., aziz, h.a., alrozi, r. (2013). removal of direct blue 71 from aqueous solution by adsorption on rice husk carbon-clinoptilolite composite adsorbent. in: business engineering and industrial applications colloquium (beiac), ieee.
[5] kamaruddin, m.a., yusoff, m.s., aziz, h.a., alrozi, r. (2013). preparation of rice husk carbon-clinoptilolite composite adsorbent for color removal from real textile wastewater: effects of operating conditions. in: business engineering and industrial applications colloquium (beiac), ieee.
[6] kuncoro, e.p., fahmi, m.z. (2013). removal of hg and pb in aqueous solution using coal fly ash adsorbent. procedia earth planet. sci., 6: 377-382.
[7] oh, c., rhee, s., oh, m., park, j. (2012). removal characteristics of as(iii) and as(v) from acidic aqueous solution by steel making slag. j. hazard. mater., 213-214: 147-155.
[8] chen, j.h., xing, h.t., guo, h.x., li, g.p., weng, w., hu, s.r. (2013). preparation, characterization and adsorption properties of a novel 3-aminopropyltriethoxysilane functionalized sodium alginate porous membrane adsorbent for cr(iii) ions. j. hazard. mater., 248-249: 285-294.
[9] lencina, m., andreucetti, m.s.n., gómez, c., villar, m. (2013). recent studies on alginates based blends, composites, and nanocomposites. in: advances in natural polymers, s. thomas, p.m. visakh, and a.p. mathew, editors. springer berlin heidelberg, 193-254.
[10] yang, l., ma, x., guo, n. (2012). sodium alginate/na+-rectorite composite microspheres: preparation, characterization, and dye adsorption. carbohydr. polym., 90(2): 853-858.
[11] ahmadkhani khari, f., khatibzadeh, m., mahmoodi, n.m., gharanjig, k. (2012). removal of anionic dyes from aqueous solution by modified alginate. desalination and water treatment, 51(10-12): 2253-2260.
[12] chan, e.s., lee, b.b., ravindra, p., poncelet, d. (2009). prediction models for shape and size of ca-alginate macrobeads produced through extrusion-dripping method. j. colloid interface sci., 338(1): 63-72.
[13] ravikumar, k., pakshirajan, k., swaminathan, t., balu, k. (2005). optimization of batch process parameters using response surface methodology for dye removal by a novel adsorbent. chem. eng. j., 105(3): 131-138.
[14] trinh, t.k. (2010). application of response surface method as an experimental design to optimize coagulation tests. environ. eng. res., 15(2): 63-70.
[15] tan, i.a.w., ahmad, a.l., hameed, b.h. (2008). optimization of preparation conditions for activated carbons from coconut husk using response surface methodology. chem. eng. j., 137(3): 462-470.

mohamad anuar kamaruddin is a ph.d candidate in wastewater engineering at universiti sains malaysia and a recipient of a ministry of higher education malaysia scholarship. he received his first degree from universiti sains malaysia in 2009, a bachelor of science in civil engineering. he obtained a master of science in civil engineering from universiti sains malaysia in 2011, with a major in landfill technology. his current research focuses on alleviating problems associated with wastewater and solid waste management. to date, he has published several scientific articles in the environmental engineering field.
associate professor dr mohd suffian yusoff obtained his first degree from universiti putra malaysia in agricultural science in 1995. he later pursued a master's degree in mineral resources engineering at universiti sains malaysia and graduated in 2000. dr. yusoff received his doctorate from universiti sains malaysia in 2006 with a major in solid waste management. currently, dr yusoff serves the school of civil engineering, universiti sains malaysia, as an academic programme chairperson (environmental and sustainability). he has published numerous refereed articles in professional journals. dr yusoff's fields of expertise are solid waste management, landfill technology and leachate treatment. he has also conducted numerous consultancies and research works at national and international level, and his vast experience in landfill operation and management has enabled him to conduct numerous talks and seminars at national and international level.

hamidi abdul aziz is a professor in environmental engineering at the school of civil engineering, universiti sains malaysia. dr. aziz received his ph.d in civil engineering (environmental engineering) from the university of strathclyde, scotland, in 1992. he is the editor-in-chief of cjasr and ijses and the managing editor of ijewm and ijee. he has published over 200 refereed articles in professional journals/proceedings and currently sits on the editorial boards of 8 international journals. dr aziz's research has focused on alleviating problems associated with water pollution from industrial wastewater discharge and from solid waste management via landfilling, especially leachate pollution. he is also interested in biodegradation and bioremediation of oil spills.

rasyidah
alrozi received her first degree from universiti sains malaysia in 2009, a bachelor of science in chemical engineering. she obtained a master of science in chemical engineering from universiti sains malaysia in 2010. currently, she serves as a lecturer at the faculty of chemical engineering, universiti teknologi mara, pulau pinang. her research interests lie in activated carbon, adsorption and wastewater treatment.

mohd hafiz zawawi received his ph.d in civil engineering from universiti sains malaysia. his major is groundwater study, isotopes and hydrochemistry, as well as landfill management. he has published numerous scientific journal articles and has been involved in various consultancy works in the field of environmental engineering. currently he serves as a senior lecturer at a reputable private university in malaysia.

journal of engineering research and technology, volume 1, issue 4, december 2014

bidirectional wdm access architecture employing cascaded awgs and rsoas

fady i. el-nahal, department of electrical engineering, the islamic university of gaza, gaza, p.o. box 108, palestine, email: fnahal@iugaza.edu.ps

abstract—here we propose a bidirectional wavelength division multiplexing (wdm) access architecture employing cascaded cyclic arrayed waveguide gratings (awgs) and reflective semiconductor optical amplifiers (rsoas) for system applications, mainly in wavelength-routed fiber-to-the-home (ftth) networks. these architectures can address multiples of n² customers using n wavelengths by employing multiple n×n awgs at the central office (co) and multiple 1×n awgs at the distribution points (dps). a 20 km range colorless wdm passive optical network (wdm-pon) was demonstrated for 4 gbit/s downstream and 2.5 gbit/s upstream signals. the ber performance demonstrates that our scheme is a practical solution for simultaneously meeting the data-rate and cost-efficiency requirements of optical links in future access networks.
index terms—wavelength-division-multiplexing passive optical networks (wdm/pons), fiber-to-the-home (ftth), arrayed waveguide gratings (awgs), reflective semiconductor optical amplifiers (rsoas).

i introduction

the rapid growth in internet traffic and bandwidth-intensive applications is continuing to fuel the penetration of fiber networks into the access network segment [1]. wavelength-division-multiplexing passive optical networks (wdm/pons) are an attractive option due to their high capacity, easy management, network security, protocol transparency, and easy upgradability [2]. pon systems in fiber-to-the-home (ftth) networks have been widely deployed to fully support "triple-play" services including data, voice, and video. among various pon technologies, wdm/pons, which offer point-to-point connectivity via a dedicated wavelength to each customer, are believed to eventually provide an optimal ftth architecture [3]. the rapid deployment of wdm into the core and metropolitan networks has cut wdm component costs and pushed wdm technology closer to the end-user. the awg is a key wdm technology, offering high wavelength selectivity, low insertion loss, small size, high channel count and potentially low cost [4-6]. several wdm/pon systems in which a reflective semiconductor optical amplifier (rsoa) plays an important role have been studied recently [7-15]. the rsoa replaces the high-cost wdm source at the onu and can be used as both a modulator and an amplifier [16-18]. this gives additional gain, making it possible to avoid the use of an erbium-doped fiber amplifier (edfa) in the system. in these systems, both the upstream and downstream channels use the same wavelength to improve the wavelength utilization efficiency. cascaded arrayed-waveguide grating (awg) access architectures are discussed here [19-24].
these architectures can route to a large number of customers using a minimal set of wavelengths by employing awgs [25], while offering scalability and improved crosstalk. in this work a cascaded awg-based access architecture that consists of multiple n×n awgs at the co and multiple 1×n awgs at the distribution points (dps) is investigated. n wavelengths are multiplexed and transmitted from the co, offering a unique optical path for each end user for both downstream and upstream transmission. each awg at the dp addresses n end users, where the end users transmit on the same wavelength that they receive. a commercially available simulation tool was used for the calculations reported in this work [26].

ii proposed architecture

space-wavelength switching can be attained by utilizing the periodic spectral response of the awg and its latin-square routing capability [4]. this is achieved by setting the awg free spectral range (fsr) to match the product of the number of output ports and the channel spacing. in addition, the number of wavelengths must equal the number of output ports. consequently, for a symmetric n×n awg, the output port from which a wavelength appears depends on the input port at which the wavelength was launched. the minimum number of wavelength channels needed for n² connections is n. an example of the wavelength assignment is shown in table 1, where the same wavelengths are used for different connections; for example, λ4 is used for n connections such as input 4 to output 1 and input 3 to output 2 [4].

f. i. el-nahal / bidirectional wdm access architecture employing cascaded awgs and rsoas (2014) 157

table 1: an example of the wavelength assignment in an n×n awg router.

the same functionality can be achieved by employing an active awg, i.e. an awg with an array of phase modulators on its arrayed-waveguide section, where a programmable linear phase-profile or a phase hologram is applied across the arrayed-waveguide section.
following the second free propagation region, this results in a wavelength shift at the output section of the awg. this effect can be used to tune the device, and also allows space-wavelength switching functionality [27,28]. thus, just one input port of the n×n awg can be used for downstream transmission, whilst the remaining ports are used for upstream reception. since an active awg's cost is relatively higher than that of passive awgs, a passive n×n awg, circulators, and an opto-mechanical switch are used to achieve the desired wavelength routing in this work [19]. fig. 1 shows the proposed access architecture. packets are modulated onto one of r wavelengths from a fast tunable laser depending on the destination optical network unit (onu) [22]. the cells reach the first stage s1, which consists of a p-way passive optical splitter. the cells are routed into the appropriate arm and progress to the second stage s2, consisting of a q-way power splitter, where each arm of the splitter is connected to one of the upstream ports of the pth n×n awg. space-wavelength switching is achieved by using an awg in conjunction with a power splitter and an opto-mechanical switch. for the proposed wdm access architecture shown in fig. 1, there would be r = 24 wavelengths spaced by 100 ghz (0.8 nm), requiring a q = 24 port awg. the awgs at the second and third stages (s2 and s3) are matched to have the same fsr (2400 ghz), passband width and number of input/output ports, so that q = r. the packets are routed out of the co and onto stage s3, located at the distribution points (dps), consisting of awgs with r = 24 ports. each awg acts as a fixed optical wavelength router directing the wavelength-multiplexed cells to the destination onu located in the customer premises. at the onu, using an optical splitter/coupler, a portion of the signal is fed to a receiver.
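the cyclic latin-square routing on which this architecture relies can be sketched with the common modulo port-to-wavelength mapping; the exact port convention varies between devices, so the mapping below is an assumption for illustration:

```python
def awg_routing_table(n):
    """wavelength index routed from input port i to output port o of a
    cyclic n x n awg, assuming wavelength = (i + o) mod n."""
    return [[(i + o) % n for o in range(n)] for i in range(n)]

table = awg_routing_table(4)
for row in table:
    print(row)
# every row and every column is a permutation of the n wavelengths,
# i.e. a latin square: n wavelengths suffice for all n^2 connections
```

with this mapping, each wavelength appears exactly once per input port and once per output port, which is why the minimum number of wavelength channels needed for n² connections is n, as stated above.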
for the up-link, the other portion of the downstream signal from the splitter/coupler is re-modulated with 2.5 gb/s nrz upstream data by the rsoa in the onu. the re-modulated on-off keying (ook) signals are sent back over the fiber to the co, where they are de-multiplexed by the awg. the reflected optical signal is detected by a pin-photodiode. uplink optical sidebands produce crosstalk when the uplink data is detected at the co; this crosstalk can be reduced by using a bessel filter. the total number of onus served is the product pqr. for p = 12, q = r = 24 this access network can provide service to 6912 customers, thus allowing a graceful evolution from current passive optical network (pon) infrastructures [29]. a factor-of-p increase in downstream bandwidth capacity can be achieved by replacing stage 1, consisting of a p-way splitter, with a bank of p tunable lasers and modulators, so that the network consists of p = 12 sub-networks, each serving 576 onus [22].

iii simulation issues and results

the laser source is externally modulated by a mach-zehnder modulator. the data stream, which modulates the continuous-wave laser source, was generated using an nrz-shaped pulse train. the average output power of the source is 20 dbm and the linewidth of the laser is 10 mhz. a gaussian approximation of the modal field in the waveguides was used in the awg model, and the insertion loss is assumed to be 4 db, a typical value for standard awgs. polarization-dependent loss has been ignored. the waveguide is assumed to be symmetrical and only the fundamental mode was taken into account. these assumptions can be justified as they have minimal effect on the simulation results. polarization-independent rsoas that can handle signals with an arbitrary polarization state were used. limitations of the rsoa model used here include neglecting the gain dispersion.
neglecting the gain dispersion is acceptable as long as the bandwidth of the optical signal is significantly smaller than the amplification bandwidth, which is typically of the order of several tens of nm. a 20 km length of standard single-mode fibre (smf) with 0.2 db/km loss was used for the fibre transmission simulations. this module solves the nonlinear schrödinger equation describing the propagation of linearly polarized optical waves in fibers using the split-step fourier method, allowing nonlinear (spm, fwm, xpm) and raman effects in wdm systems to be simulated. ber simulations were carried out at bit rates of 4 gb/s downstream and 2.5 gb/s upstream for a 1.7 ghz bandwidth receiver. the received eye diagrams of the downstream and upstream signals were measured at the onu and co respectively; eye diagrams of arbitrary downstream and upstream signals are shown in fig. 2 and fig. 3 respectively. the eye-diagrams are widely open, indicating that the system is stable and that crosstalk is negligible.

figure 1: cascaded awgs based access architecture

the ber versus input optical power pin curves for the downlink and uplink are shown in fig. 4. it is clear from the results that both downlink and uplink provide good ber performance. it is noted from the figure that the ber for the downlink and uplink goes down with increasing pin from 10 dbm to 20 dbm. for the downlink, when pin = 10 dbm, the ber = 1.3×10⁻⁹ and the quality factor q = 5.9; when pin = 20 dbm, the ber = 3.2×10⁻³² and q = 11.8. for the uplink, the ber goes down slowly with increasing pin from 10 dbm to less than 15 dbm. when pin = 10 dbm, the ber = 2.3×10⁻⁹ and q = 5.9. for pin ≥ 15 dbm, the ber is nearly constant; when pin = 20 dbm, the ber = 1.4×10⁻³⁹ and q = 13.1. this can be explained by the fact that the rsoa is operating in the gain saturation region.
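the quoted (q, ber) pairs follow the standard gaussian-noise relation ber = ½·erfc(q/√2) for ook receivers; a quick consistency check (not taken from the paper's simulator):

```python
import math

def ber_from_q(q):
    """gaussian-noise estimate ber = 0.5 * erfc(q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# q = 6 corresponds to ber ~ 1e-9, consistent with the q ~ 5.9,
# ber ~ 1e-9 operating points reported above; q = 11.8 gives a ber
# of order 1e-32, matching the quoted downlink figure
print(ber_from_q(6.0), ber_from_q(11.8))
```

the small differences between this closed-form estimate and the simulated values are expected, since the simulator accounts for non-gaussian noise and pattern effects.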
figure 2: eye diagram of downlink

the variation of the gain of the rsoa with the optical input power pin is shown in fig. 5. it is clear that the maximum gain appears at pin = 10 dbm, and the gain then goes down to reach its lowest value at pin = 20 dbm.

figure 3: eye diagram of uplink

v conclusion

this paper has presented novel simulation results evaluating awgs and rsoas for system applications, mainly in new bidirectional access architectures employing cascaded arrayed-waveguide grating technology. space-wavelength switching was achieved in these networks using a passive awg in conjunction with an opto-mechanical switch. these architectures can address up to 6912 customers employing only 24 wavelengths, separated by 0.8 nm. rsoas are used as low-cost colorless transmitters for high-speed optical access exploiting wdm technology. the results obtained demonstrate that cascaded awg access architectures have great potential in future ftth networks.

figure 4: ber versus input power

figure 5: the variation of rsoa gain with input power

references

[1] elaine wong, "next-generation broadband access networks and technologies", j. lightwave technol., vol. 30, no. 4, pp. 597-608, 2012.
[2] f.t. an, d. gutierrez, k.s. kim, j.w. lee, l.g. kazovsky, "success-hpon: a next generation optical access architecture for smooth migration from tdm-pon to wdm-pon", ieee commun. mag., vol. 43, no. 11, pp. 540-547, 2005.
[3] s.-j. park, c.-h. lee, k.-t. jeong, h.-j. park, j.-g. ahn, k.-h. song, "fiber-to-the-home services based on wavelength-division-multiplexing passive optical network", j. lightwave technol., vol. 22, pp. 2582-2591, 2004.
[4] m.k. smit, c. van dam, "phasar-based wdm devices: principles, design and applications", ieee j. select. top. quant., vol. 2, no. 2, pp. 236-250, 1996.
[5] m.k. smit, "new focusing and dispersive planar component based on an optical phased array", electron. lett., vol. 7, no. 24, pp. 385-386, 1988.
Journal of Engineering Research and Technology, Volume 5, Issue 4, December 2018

Developing a Framework for Implementing Green-Lean Construction Techniques

Samah Abu Musameh (1), Mamoun A. Alqedra (2), Mohammed Arafa (3) and Salah R. Agha (4)
(1) Faculty of Engineering, The Islamic University of Gaza, Palestine.
(2) Associate Professor of Civil Engineering, The Islamic University of Gaza, Palestine.
(3) Associate Professor of Civil Engineering, The Islamic University of Gaza, Palestine.
(4) Professor of Industrial Engineering, The Islamic University of Gaza, Palestine.

Abstract: The traditional construction life cycle is known to produce waste in time, materials and cost, as well as polluting materials. The main objective of this study is to propose a framework that integrates traditional construction techniques with green-lean construction techniques to improve the economic and environmental performance of the construction industry. The study was carried out using the Analytical Network Process (ANP) and Zero-One Goal Programming (ZOGP). The results suggested an optimum arrangement for reaching an effective green-lean framework, involving a wide range of interventions that could be implemented to move traditional construction towards green-lean construction. The study concluded that going green-lean is not necessarily expensive if the right approach is applied.
Further, going green-lean can save time and money while keeping staff and buildings clean and safe. The study also recommended devoting more effort to the design phase in order to integrate innovative solar and water energy alternatives, and finding new value-added activities and materials to replace non-value-adding ones. In addition, analyzing the required activities early in the project helps release the right work at the right time, with the right people, at an optimum price.

Index Terms: green techniques, lean construction, analytical network process, ANP, ZOGP

I. Introduction

Despite management efforts, the construction industry faces many issues related to performance, productivity and environmental impact [1]. The industry consumes a significant amount of resources annually, generates significant waste and produces a host of emissions [2], both of which could be decreased using green-lean techniques to meet the estimated budget and time and to reduce the negative environmental impacts of construction activities [3]. Currently, 100 million tons of construction waste, including 13 million tons of unused materials, is generated each year, with only 20% currently capable of being recycled; the majority of this waste ends up in landfill, contributing to further pollution of the biosphere [4]. Construction activities require prudent planning and efficient management, owing to the high volume of construction activity, the creation of poor-quality products and the harmful environmental impact [3]. There is therefore an urgent need to improve the efficiency and effectiveness of the management strategy during the construction cycle, to smartly balance time, cost, quality and resources against their influence on the environment. The optimum solution achieves high quality at low cost within the time constraints, using resources without harming the environment.
The lean construction process is a derivative of lean manufacturing, a concept popularized since the early 1980s in the manufacturing sector. It is concerned with eliminating waste activities and processes that create no benefit; it is about doing more for less [5]. Lean techniques can help improve the economic impact of a project and reduce waste in the construction process: studies from various countries have shown that waste in the construction field amounts to approximately 47% of the total construction process [6]. Green techniques, in turn, mitigate the significant impacts of construction on the economy, society and environment [7]; according to the World Business Council (WBC) for Sustainable Development, blocks in construction consume 40% of total construction energy [7]. This study therefore attempts to provide a better understanding of green-lean techniques and their concepts, which will increase productivity and reduce waste. As an output, the study suggests a framework that integrates the traditional construction process with green-lean construction techniques to promote the economic and environmental performance of construction, using the Analytical Network Process (ANP) and Zero-One Goal Programming (ZOGP) as analytical methods.

S. Abu Musameh, M. Alqedra, M. Arafa and S. Agha / Developing a Framework for Implementing Green-Lean Construction Techniques (2018)

II. Literature Review

The construction industry has changed all over the world, including the Gaza Strip, over the past years; companies face real issues regarding performance, productivity and the construction impact on the environment. The following paragraphs discuss previous studies that recommended green building and the use of lean principles, and introduce the ANP and ZOGP principles.
Kibert [8] defined green building as "a healthy facility designed and built in a resource-efficient manner and using ecologically based principles". Green principles focus on environmental issues; they include eco-product design, environmental design, design for reuse, re-manufacturing and recyclability, and the use of environmentally friendlier materials [9]. There are many international tools to assess the green performance of construction and its effect on the environment, for example the Australian Building Greenhouse Rating (ABGR), Green Star, Leadership in Energy and Environmental Design (LEED) and Life Cycle Assessment (LCA). The green tool applied in this study is Life Cycle Assessment (LCA), which Rebitzer et al. [10] defined as a tool that systematically assesses and manages the environmental impact of a product, process or service through its entire life cycle: from the material and energy used in raw-material extraction and production, through acquisition and product use, to final product disposal. As for lean, Ohno [11] defined it as a business system whose fundamental objective is eliminating waste, where waste is "anything that does not add value". Value-added activities are those the client is interested in paying for, those that help convert the product or service into a new product, and those that must be done correctly. Issa [12] defined lean as a new concept in construction production management that provides a control tool aimed at reducing losses throughout the process. Another definition, by Lim [13], describes lean as attaining a balanced use of work, materials and resources, allowing contractors to minimize costs, decrease construction waste and deliver projects on time. Lean has five principles, namely: value, value stream map (VSM), flow, pull and perfection.
The value principle emphasizes that the customer is the main person in charge of determining the needed value of the project [14]. The value stream map is a well-known lean tool that analyzes materials and information through a process flow diagram, taking into consideration the time, resources and cost needed for each step in the process flow. With regard to the pull principle, Womack and Jones [14] noted that pull implies the capability to design and make precisely what the client needs, quickly and efficiently. Finally, perfection means delivering a product that meets the client's requirements, with optimum quality, without mistakes or defects, and within the agreed time.

As for the relationship between green and lean: adopting green techniques leads to sustainability, but the relationship does not necessarily hold in reverse [15]. Lean production is a systemic approach to meeting customer expectations, whatever they value, by reducing waste; at first glance, lean can only contribute to sustainability, while sustainability is achieved only if the customer values it (Bae, 2008). Many countries have gained great benefits from applying lean methods to their construction industries; China, as a major construction country, has also advocated the implementation of lean construction technologies in recent years [16]. The direct relationships between overall green and lean practice and environmental performance are promising, suggesting great 'win-win' opportunities; however, there must be total long-term commitment to green and lean practice to achieve better performance [17].

III. Methodology

A combination of quantitative and qualitative methods was applied in this study. Qualitative data were collected through a semi-structured questionnaire with project managers and experts having more than 10 years of experience.
The purpose was to understand the advantages of the traditional techniques and to prioritize the criteria affecting the construction process, to be analyzed through the Analytical Network Process (ANP) and Zero-One Goal Programming (ZOGP).

Study strategy: ANP and ZOGP are used as analytical tools to propose the optimum framework for integrating traditional construction with green-lean techniques.

Questionnaire design: The questionnaire comprised six tables. The first consisted of a pairwise comparison of the main criteria; the second, third, fourth and fifth consisted of sub-criteria pairwise comparisons; and the sixth was designed to compare the feedback connections between these sub-criteria. Figure (1) shows the ANP model structure.

Figure 1: Analytical network analysis framework.
Figure 3: ANP model framework.

Developing the framework: The Analytical Network Process (ANP) was used to prioritize the sub-criteria that generally affect the construction process, and then Zero-One Goal Programming (ZOGP) was applied to propose a realistic green-lean framework under different scenarios, as in Table (1).

Analytical Network Process (ANP) model: ANP is a multi-criteria decision-making tool used to derive relative priority scales of absolute numbers from individual judgments [18]. Its advantage is the use of ratio scales to make accurate predictions and wiser decisions.
Table 1: Applied criteria and sub-criteria
Environment: polluting materials, polluting activities, water systems, renewable energy, renewable material, energy systems
Quality: material reliability, customer satisfaction, defects, concurrent drawings, material waste, activities waste
Cost: design, material, labor, machine, operational
Time: time wasters (activities), project duration, adhere to deadline

This model has proven effective in many fields, such as predicting sports results, economic fluctuations, business and other events. The main feature that makes ANP unique is its ability to deal systematically with feedback and to define precisely the value from a customer's point of view on a ratio numerical scale [19]. The ANP method consists of two parts. The first is to build the hierarchy of criteria, sub-criteria and alternatives; the second is to build the links and connections between these elements, then define the weight of each element and its rank among the others (in this study, the weights were defined based on a specialized questionnaire filled in by 10 experts, each with more than 10 years of experience). The main idea behind the ANP approach is not to force human creativity into a mathematical shape; rather, it resembles a natural flow of thinking. ANP provides a mathematical approach that is more effective than the probabilistic approach [20], using the Super Decisions software, which supports decision-making with feedback. An ANP model consists of a clustered network of elements: goal, criteria, sub-criteria and alternatives. These clusters contain nodes, which are linked together so that the software can prioritize them; ANP allows for all possible and potential dependencies [19]. The prioritizing process depends on a series of pairwise comparisons between the criteria, sub-criteria and alternative clusters. The model consists of four levels.
The first level is the goal cluster, where the goal (traditional vs. green-lean comparison) is the single node. The second level is the criteria cluster, with four nodes (environment, quality, cost and time), as shown in Figure (1). The third level contains the sub-criteria clusters: the environment cluster contains the polluting materials, polluting activities, efficient water systems, renewable energy, renewable material use and energy systems nodes; the quality cluster contains the material reliability, customer satisfaction, construction defects, concurrent design drawings (taking into consideration the right design principles and resolving conflicts among the architectural, civil, mechanical and electrical specializations in the design phase), material waste and activities waste nodes; the cost cluster contains the design, materials, labor, machines and operational cost nodes; and the time cluster contains the time wasters, extending the project duration in case of need, and adhering to the deadline nodes. The fourth level is the alternatives cluster, which contains two nodes: the traditional and green-lean construction alternatives. Figure (2) shows a screenshot of the ANP model framework in the Super Decisions software. To complete the model, connections between the nodes must be developed according to the relations between these sub-criteria and nodes. The next step is to insert the averages of the experts' responses to the pairwise comparisons.

Pairwise comparison process: The prioritizing process depends on a series of pairwise comparisons between the criteria, sub-criteria and alternative clusters. Each pairwise comparison rests on two questions asked while interviewing the experts (during questionnaire filling) to differentiate between elements: first, which criterion is more significant than the other; and second, what is the score of this importance? [21].

Figure 2: ANP model framework.
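The "limit supermatrix" values reported in the results tables come from raising the weighted (column-stochastic) supermatrix to successive powers until it converges, a step the Super Decisions software performs internally. The following is a minimal sketch of that computation, assuming a hypothetical 3x3 supermatrix (the real model has far more nodes and cluster weighting):

```python
# Sketch of the ANP limit-supermatrix step. W is a hypothetical weighted
# supermatrix: each column is a priority vector, so columns sum to 1.

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise the column-stochastic supermatrix W to successive powers
    until it converges; the columns of the limit give the global priorities."""
    n = len(W)
    current = [row[:] for row in W]
    for _ in range(max_iter):
        # Matrix product: next power = current . W
        nxt = [[sum(current[i][k] * W[k][j] for k in range(n))
                for j in range(n)] for i in range(n)]
        if max(abs(nxt[i][j] - current[i][j])
               for i in range(n) for j in range(n)) < tol:
            return nxt
        current = nxt
    return current

# Hypothetical weighted supermatrix (columns sum to 1).
W = [
    [0.2, 0.5, 0.3],
    [0.5, 0.2, 0.4],
    [0.3, 0.3, 0.3],
]

L = limit_supermatrix(W)
# In the limit, every column converges to the same global priority vector.
priorities = [round(L[i][0], 3) for i in range(3)]
print(priorities)
```

At convergence all columns of the limit matrix are identical, which is why a single column can be read off as the global priority vector.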
This process was applied to make trade-offs among the criteria and sub-criteria; the judgments are made numerically as scores, forming a reciprocal pairwise comparison in a carefully designed scientific way. The first step in this phase was to discuss the preliminary questionnaire with a pilot sample, inviting 2 professionals with more than 10 years of experience to identify the criteria affecting the construction process and to refine the final questionnaire design. This helped greatly in the next step of designing the final questionnaire, which was answered by 10 different experts. The ANP model was constructed and the pairwise comparisons were added based on the experts' responses to calculate the priorities of the criteria and sub-criteria; analysis of these results yielded the weights of the criteria and sub-criteria and their relations to each other. Table (2) shows the fundamental judgment scale adopted from [19]; in this process the judgments are first given verbally as indicated in the scale. The vector of priorities is the principal eigenvector of the comparison matrix; this vector gives the relative priorities of the criteria measured on a ratio scale.

Table 2: Fundamental scale [19]
1 - equal importance
3 - moderate importance of one over another
5 - strong or essential importance
7 - very strong or demonstrated importance
9 - extreme importance
2, 4, 6, 8 - intermediate values
(use reciprocals for inverse comparisons)

The experts' answers were collected in a matrix A = [aij], and the following steps were performed, as explained in the next example. In the first step, the entries in each column are summed (a vertical summation over the answers):

Cj = Σi aij   (1)

Then each value is divided by its corresponding column total to obtain the normalized entries of Equation (2):

sij = aij / Cj   (2)
The arithmetic mean of every horizontal row of this normalized matrix, called the synthesized matrix, gives the priority vector wi. In the third step, the original matrix of Equation (1) is multiplied by the priority vector obtained from Equation (2):

(A·w)i = Σj aij wj   (3)

In the fourth step, each resulting value from Equation (3) is divided by its corresponding arithmetic mean from Equation (2):

(0.49 ÷ 0.16 = 3.02), (1.62 ÷ 0.54 = 3.02), (0.89 ÷ 0.30 = 2.98)   (4)

The fifth step is to calculate λmax, the principal eigenvalue of the comparison matrix, by summing the values from Equation (4) and dividing by the number of analyzed variables:

λmax ≈ 3.01   (5)

In the sixth step, the consistency index is calculated by subtracting the number of variables n from λmax and dividing by (n − 1):

CI = (λmax − n) / (n − 1) ≈ 0.0041   (6)

The final step is to calculate the consistency ratio as in Equation (7), where RI is the random consistency index; the result must be less than 0.1 for an acceptable consistency level:

CR = CI / RI ≈ 0.007 < 0.1   (7)

The study model consists of four criteria, namely cost, quality, duration and environment, and 19 sub-criteria, as shown in Table 3. The pairwise comparisons were added to the questionnaire panel; all the experts' responses were entered into an Excel sheet to calculate the averages of the answers, which form the supermatrix, which was then entered into the data-entry panel.

Table 3: Criteria and sub-criteria
Cost: design cost, materials cost, operational cost, labor cost, machine cost
Quality: reliability of the used material, customer satisfaction, construction defects, concurrent drawings relationship, material waste, activities waste
Time: time wasters, project duration, adhere to deadline
Environment: polluting materials, polluting activities, water systems, renewable energy tools, renewable material use

Zero-One Goal Programming (ZOGP)

The Analytical Network Process (ANP) is the first analysis tool applied in this study; the following paragraphs discuss the second tool, weighted Zero-One Goal Programming (ZOGP), implemented in the LINDO program, with additional constraints added to make the model more dynamic and realistic. The LINDO platform is designed solely for solving optimization problems, whether linear, integer or non-linear; its applications include, but are not limited to, business and governmental problems. Such optimization helps obtain the optimum value of profit, production, or even happiness, through the best utilization of funds, time and labor (LINDO Systems, Inc., 2010). The weight results of the ANP were fed into the LINDO program as coefficients for the objective function of the main model. Several scenarios were suggested to simulate real construction cases, in which one of the five cost sub-criteria (operational, design, labor, machine and material cost) is fixed, or all of them are fixed, as in a real construction process. Table (4) shows that every sub-criterion is linked to a variable that indicates its condition, whether it is to be minimized or maximized.
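The synthesis and consistency check of Equations (1) through (7) can be sketched in a few lines. The 3x3 reciprocal judgment matrix below is hypothetical, chosen only to illustrate the steps; it does not reproduce the paper's exact expert data:

```python
# Sketch of the AHP/ANP synthesis and consistency computation (Eqs. (1)-(7))
# for a hypothetical 3x3 reciprocal judgment matrix on Saaty's 1-9 scale.

n = 3
A = [
    [1.0, 1/3, 1/2],
    [3.0, 1.0, 2.0],
    [2.0, 1/2, 1.0],
]

# Step 1 (Eq. (1)): column sums.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]

# Step 2 (Eq. (2)): normalize each entry by its column sum (synthesized
# matrix), then take row means to get the priority vector w.
S = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
w = [sum(S[i]) / n for i in range(n)]

# Step 3 (Eq. (3)): multiply the original matrix by the priority vector.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]

# Steps 4-5 (Eqs. (4)-(5)): element-wise ratios, averaged to lambda_max.
lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n

# Steps 6-7 (Eqs. (6)-(7)): consistency index and ratio (RI = 0.58 for n = 3).
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58
print(round(lambda_max, 3), round(CR, 4))  # CR must stay below 0.1
```

For a perfectly consistent matrix, lambda_max equals n exactly and CR is zero; the small positive CR here reflects mild inconsistency in the hypothetical judgments.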
Table 4: Variable definitions (sub-criterion, condition, sign)
x1 - operational cost - minimize (−)
x2 - machine cost - minimize (−)
x3 - material cost - minimize (−)
x4 - design cost - minimize (−)
x5 - labor cost - minimize (−)
x6 - renewable material - maximize (+)
x7 - energy systems - maximize (+)
x8 - polluting materials - minimize (−)
x9 - polluting activities - minimize (−)
x10 - water systems - maximize (+)
x11 - concurrent design drawings - maximize (+)
x12 - activities waste - minimize (−)
x13 - materials waste - minimize (−)
x14 - construction defects - minimize (−)
x15 - customer satisfaction - maximize (+)
x16 - materials reliability - maximize (+)
x17 - adhere to deadline - maximize (+)
x18 - project duration - minimize (−)
x19 - time wasters in construction - minimize (−)

The formulation of the ZOGP: The weights obtained from the ANP are set as the coefficients of an objective function to be maximized or minimized as indicated in Table (4); maximized sub-criteria are assigned a positive sign and minimized ones a negative sign. The objective function is subject to the environment, quality, cost and time constraints. The formulation is:

Minimize Σj (wj dj⁺ + wj dj⁻)

subject to:
1. dj⁺ · dj⁻ = 0
2. Σj xj = b,  j = 6, 7, …, 19;  b = 5, 6, …, 19

where:
- xj are the zero-one (integer) variables of the sub-criteria, excluding the fixed sub-criteria given in Table (4);
- j is the index of the sub-criteria, excluding the fixed sub-criteria corresponding to Table (4);
- dj⁺ and dj⁻ are the positive and negative deviation variables of the sub-criteria, excluding the fixed ones being analyzed in the scenario according to their condition in Table (4), for j = 1, 2, …, 19 except the fixed sub-criteria;
- b represents the number of sub-criteria required to be active, from 1 to 19; b is the control point that prioritizes the sub-criteria in order to produce the logical framework targeted in this study.

The five cost sub-criteria (operational, design, labor, machine and material cost) were assigned fixed values to determine the priority of the other sub-criteria and to analyze the influence on the order in which the sub-criteria should be implemented in the green-lean framework.

IV. Results and Analysis

A. Main criteria weights

This study analyzes construction performance from the four aspects that form the main criteria: environment, quality, cost and time. Every criterion contains several sub-criteria; through the analysis, the relations between these sub-criteria and their influence on each other were discussed and analyzed using the Analytical Network Process (ANP), as shown in the next paragraphs. The results indicated that the environmental criterion is the most significant, with a weight of 0.34; the quality and cost criteria weigh 0.27 and 0.21 respectively; finally, the time criterion weighs 0.18, as shown in Table (5). This indicates a tendency among the experts to encourage making traditional construction greener and to put more effort into the environmental side of the project and the high quality of the product, even at a slightly higher cost.

Table 5: Main criteria weights (normal value / limit supermatrix)
Environment: 0.342 / 0.055
Quality: 0.271 / 0.043
Cost: 0.208 / 0.033
Time: 0.178 / 0.028
Total: 1

B. Sub-criteria weights
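Before examining the individual weights, the ZOGP selection step formulated above can be illustrated with a toy instance. Instead of LINDO, brute-force enumeration is used here; the four sub-criteria and their weights are drawn from the limit values reported in the results tables, and b = 2 is an arbitrary choice for illustration:

```python
# Toy sketch of the weighted zero-one goal programming stage: choose the
# 0/1 vector x with sum(x) = b that minimizes the weighted deviations
# d_j- = 1 - x_j from the goal x_j = 1 for each sub-criterion.

from itertools import product

weights = {"renewable material": 0.068, "energy systems": 0.059,
           "water systems": 0.022, "adhere to deadline": 0.080}
names = list(weights)
b = 2  # number of sub-criteria allowed to be active in this scenario

best_x, best_cost = None, float("inf")
for x in product((0, 1), repeat=len(names)):
    if sum(x) != b:                 # structural constraint: sum(x_j) = b
        continue
    # Weighted negative deviations; d_j+ is zero in this toy instance.
    cost = sum(weights[names[j]] * (1 - x[j]) for j in range(len(names)))
    if cost < best_cost:
        best_x, best_cost = x, cost

chosen = [names[j] for j in range(len(names)) if best_x[j]]
print(chosen)
```

As expected, the model activates the sub-criteria with the largest weights first, which is exactly how the control point b induces an implementation order over the sub-criteria.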
1. Weights of environment sub-criteria

The environmental criterion weighs 0.34, which indicates that it is significant to adopt new changes and make the construction process more sustainable; this would surely affect the construction environment. Table (6) shows the weights of the environment sub-criteria.

Table 6: Environment sub-criteria weights (normal value / limit supermatrix)
Renewable material use: 0.301 / 0.068
Energy systems: 0.260 / 0.059
Polluting materials: 0.211 / 0.048
Polluting activities: 0.128 / 0.029
Water systems: 0.099 / 0.022
Total: 1

The results indicate that renewable material use is an important sub-criterion, with a weight of 0.068, as shown in Table (6); this spotlights the effects of the building's energy consumption and its performance with respect to the environment. As for the energy systems sub-criterion, the results show that appropriate energy systems (photovoltaic, thermal, biomass and wind) weigh 0.058, which means this sub-criterion should be strongly taken into consideration, especially since developing countries suffer from energy problems, increasing pollution, and poor and inefficiently used energy resources [22]. The reason behind this high rank can be attributed to the absence of fossil fuel resources: Palestine imports all the petroleum products it needs from the Israeli market, and about 92% of its electrical energy from the Israeli Electric Corporation (IEC). Indigenous energy resources are largely limited to solar energy for photovoltaic and thermal applications (mainly water heating) and biomass (wood and agricultural waste) for cooking and heating in rural areas, while the potential of wind energy is relatively small and not yet utilized in Palestine.

2. Weights of cost sub-criteria

The study analyzed the cost sub-criteria that influence the proposed framework for integrating traditional construction with green-lean techniques, as shown in Table (7).
The budget of a project imposes many serious limitations, for example on the area of the building, the types of materials used and the types of equipment, which significantly affect the decision whether or not to use green-lean techniques, as they require a higher initial cost.

Table 7: Cost sub-criteria weights (normal value / limit supermatrix)
Operational cost: 0.306 / 0.043
Machine cost: 0.260 / 0.037
Material cost: 0.172 / 0.024
Design cost: 0.129 / 0.018
Labor cost: 0.132 / 0.019
Total: 1

The operational cost sub-criterion ranked first among the cost sub-criteria, with a weight of 0.043. Operational cost is the largest cost over the life cycle of a building, so carefully studying the value the customer wants in the final product helps the designer prepare the building for those values in the design phase, with the greatest possible cost savings and a high ability to predict and control expected defects, thereby minimizing the total cost of the project. Changes initiated by the client and end-user, together with errors and omissions in contract documentation, were found to be the primary causes of rework and of increases in operational cost [23]. The machine cost sub-criterion ranks second, with a weight of 0.036; this high rank can be justified by the fact that machines save time and money and deliver consistent, reliable performance.

3. Weights of quality sub-criteria

The quality sub-criteria (concurrent design drawings, activities waste, material waste, construction defects, customer satisfaction and reliability of the used material) were analyzed. The results showed that resolving contradictions between the concurrent drawings in the design phase, before moving to the next step (the real implementation), has the highest rank, with a weight of 0.044.
This was followed by decreasing the waste in the construction activities, with a weight of 0.042, as shown in Table (8).

Table 8: Quality sub-criteria weights (normal value / limit supermatrix)
Concurrent drawings: 0.248 / 0.045
Activities waste: 0.230 / 0.042
Material waste: 0.224 / 0.040
Defects: 0.116 / 0.021
Satisfaction: 0.103 / 0.019
Material reliability: 0.077 / 0.014
Total: 1

The concurrent drawings sub-criterion has the highest weight, 0.044. This is attributed to the fact that resolving the contradictions between the drawings (architectural, civil, mechanical and electrical) in the design phase greatly reduces defects in the drawings and their implementation, which directly affects the cost and duration of the construction project.

4. Weights of time sub-criteria

Construction projects rarely end at the scheduled time; therefore, there is a need to improve the efficiency and effectiveness of the management strategy to smartly balance time, cost, quality and resources against their influence on the environment. Table (9) presents the weights of the time sub-criteria.

Table 9: Time sub-criteria weights (normal value / limit supermatrix)
Adhere to deadline: 0.615 / 0.080
Project duration: 0.238 / 0.031
Time wasters: 0.147 / 0.019
Total: 1

The results show that meeting the deadline is the most important time sub-criterion, with a weight of 0.080. This can be justified by several factors, for example construction penalty payments and resource overload in case of delay.

C. Comparison of alternatives

The results showed that for the green-lean alternative, the environment criterion has the highest rank, with a weight of 0.33. The second highest rank was the quality criterion, which weighed 0.27, followed by the time criterion with a weight of 0.23; the cost criterion comes last with a weight of 0.17, as shown in Table (10).
On the other hand, for the traditional alternative, the time criteria have the highest rank with a weight of 0.35, and the cost criteria are in the second rank with a weight of 0.32, as shown in Table 10. Quality is in the third rank with a weight of 0.19, while the environmental criteria are in the lowest rank with a weight of 0.13.

Table 10: Alternatives performance with respect to the main criteria
  Main criteria          Green-lean   Traditional
  Environment criteria   0.33         0.13
  Quality criteria       0.27         0.19
  Time criteria          0.23         0.35
  Cost criteria          0.17         0.32

The next paragraphs compare the performance of the two alternatives (traditional vs. green-lean) in the construction process, as shown in Figure 4. With respect to the environment criteria, the renewable materials sub-criterion has a weight of 0.197 in the green-lean alternative, while in the traditional alternative it weighs 0.145. For the energy systems sub-criterion (photovoltaic, thermal, biomass and wind), green-lean has a weight of 0.206 while the traditional alternative weighs 0.227. The polluting materials, polluting activities and efficient water systems (use of gray water, wastewater treatment and conservation) sub-criteria each have the same weight of 0.199 in the green-lean alternative; the case is different in the traditional alternative, where the same sub-criteria have weights of 0.264, 0.20 and 0.164 respectively, as shown in Figure 3. For the quality criteria, green-lean clearly focuses on decreasing non-value-adding activities and materials, so it is not surprising that decreasing material waste had the highest weight, 0.184, among the quality sub-criteria in green-lean. Decreasing this sub-criterion was also very important in the traditional alternative, with a weight of 0.161, as shown in Figure 4.
This can be explained by the fact that decreasing waste directly guarantees a decrease in the total cost of the project, an increase in its quality and a decrease in the total project duration.

Figure 5: Alternatives performance with respect to quality sub-criteria

As for the cost sub-criteria, operational cost showed the highest rank; it weighed 0.283 in the green-lean alternative, while it had the lowest rank, with a weight of 0.121, in the traditional alternative, as shown in Figure 5.

Figure 6: Alternatives performance with respect to cost sub-criteria

As for the time sub-criteria, the weights were close in both alternatives; the adherence-to-deadline sub-criterion had the highest rank, with a weight of 0.341, as shown in Figure 6.

D. Sensitivity analysis
In order to gain more insight into the problem and make the model more dynamic, a sensitivity analysis was performed for the main criteria: environment, quality, cost and time. Figure 7 shows the trend of the traditional and green-lean alternatives. It indicates that increasing the parameter value α (the horizontal axis) increases the performance of green-lean and decreases the performance of the traditional alternative, as in equation (8):

Parameter value = α × Alternative(1) + (1 − α) × Alternative(2)   (8)

where Alternative(1) is the green-lean alternative, Alternative(2) is the traditional (conventional) alternative, and α is a parameter value computed by the Super Decisions program based on the weights of each sub-criterion.
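The convex combination in equation (8) can be sketched in a few lines; the function name and the sample scores below are illustrative, not taken from the paper (only the 0.88/0.12 end values are quoted from the text):

```python
def blended_performance(alpha, green_lean, traditional):
    """Eq. (8): parameter value = alpha * Alternative(1) + (1 - alpha) * Alternative(2)."""
    return alpha * green_lean + (1 - alpha) * traditional

# At alpha = 0 only the traditional alternative counts; at alpha = 1 only green-lean does.
# With the end values reported in the text (0.88 vs 0.12), the blend moves linearly
# from 0.12 to 0.88 as alpha grows.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(alpha, round(blended_performance(alpha, green_lean=0.88, traditional=0.12), 3))
```

This mirrors the sensitivity curves: the two alternatives trade off linearly in α, which is why neither curve reaches 1 or falls to 0 inside the plotted range.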
Figure 7: Alternatives performance with respect to time sub-criteria
Figure 8: Sensitivity analysis of the alternatives performance
Figure 4: Alternatives performance with respect to environment sub-criteria

Alternative performance (the vertical axis) represents numerically the integrated relation between the two alternatives and the sub-criteria. This means that at the start of the project, the decision maker faces two situations: whether or not to take any of the criteria and sub-criteria into consideration.

a. When the decision maker does not implement the green-lean criteria and sub-criteria, then α × Alternative(1) = 0: the parameter value of the green-lean alternative is zero, and the traditional alternative, (1 − α) × Alternative(2), is at its highest value, 1. This also means that the green-lean alternative is at its lowest value, zero, as shown in Figure 9. As more sub-criteria are taken into consideration, the preference for the traditional practices decreases, while the green-lean preference increases. It is worth mentioning that α is the percentage of the sub-criteria included in the overall scenario.

b.
As the demand for fulfilling more green-lean sub-criteria increases (horizontal axis, parameter value), the traditional alternative fails to satisfy the sub-criteria at an optimum value as green-lean does. The traditional normalized value decreases, while the green-lean one increases, as more sub-criteria are considered.

c. When the decision maker implements all the green-lean criteria and sub-criteria, the parameter value is approximately 0.88, while the traditional alternative is approximately 0.12. It is obvious from Figure 7 that neither the green-lean trend can reach the value 1 nor the traditional one fall to zero.

E. Suggested framework using zero-one goal programming
ZOGP determines near-optimum and realistic frameworks in different scenarios. The model considers all the goals simultaneously by forming an achievement function that minimizes the total weighted deviation from all the goals stated in the model. The weights reflect the decision makers' preferences regarding the relative importance of each goal. The main idea of the ZOGP model is to answer the question: "What is the near-optimum framework to work on, for a given amount of cost?" The framework assigned the five cost sub-criteria (operational, machine, material, design and labor costs) fixed values to determine the priority of the other sub-criteria in the ZOGP model; the sub-criteria were ranked according to their contribution to the total objective function value, as shown in Figure 8.

Figure 9: Framework for the first scenario

It is worth mentioning that the objective function remains consistent as more sub-criteria are added, up to eight sub-criteria.
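As a rough illustration of the ZOGP idea described above (binary selection variables, an achievement function that minimizes the total weighted deviation from the goals, and a cost ceiling), a brute-force sketch could look as follows. All names, weights and costs here are hypothetical placeholders, not the paper's model:

```python
from itertools import product

# Hypothetical goals: sub-criterion -> (priority weight, cost of including it).
GOALS = {
    "concurrent drawings": (0.045, 2.0),
    "energy systems":      (0.041, 3.0),
    "water systems":       (0.040, 2.5),
    "renewable material":  (0.039, 1.5),
}

def best_selection(budget):
    """Minimize the weighted deviation (the weight of every unmet goal) under a cost budget."""
    names = list(GOALS)
    best_dev, best_bits = float("inf"), None
    for bits in product((0, 1), repeat=len(names)):
        cost = sum(GOALS[n][1] for n, b in zip(names, bits) if b)
        if cost > budget:
            continue
        deviation = sum(GOALS[n][0] for n, b in zip(names, bits) if not b)
        if deviation < best_dev:
            best_dev, best_bits = deviation, bits
    return best_dev, dict(zip(names, best_bits))

dev, picks = best_selection(budget=5.0)
# With these toy numbers, "concurrent drawings" and "energy systems" fit the budget
# and leave the smallest unmet weight.
```

A real ZOGP formulation would use an integer-programming solver rather than enumeration, but the structure of the achievement function is the same.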
This means, for example, that if the decision maker decides to use just one sub-criterion to improve his work, or uses eight sub-criteria (concurrent drawings, increasing the energy systems, efficient water use, renewable material, reliable material use, adherence to deadline, satisfaction, and defect reduction), he will get the same objective function value and the same result. However, including more sub-criteria will increase the objective function value, as shown in Figure 9. Beyond the eighth sub-criterion, the total cost of the project would increase beyond that of traditional construction. It would still be recommended to implement the next sub-criteria as in Figure 9; however, the initial cost would increase gradually. This situation encourages the decision maker to expand his plan to be more green-lean without adding cost or penalty. Beyond the eighth sub-criterion, the objective function value starts to increase.

Figure 10: Relation between sub-criterion rank and objective function value, showing the effect of fixing the cost sub-criteria on the objective function

For the cases with fixed costs x1, x2, x3 and x5, Figure 10 shows that the objective function remains consistent for the first eight sub-criteria and would not affect the regular construction cost. This encourages the decision maker to promote the construction process based on at least eight sub-criteria: satisfaction, material reliability, adherence to deadline, concurrent drawings, water systems, energy systems, renewable material and defect reduction.
If the decision maker takes these sub-criteria into consideration, a wide range of modifications will appear in the construction process and tangible effects will be felt.

Figure 11: Relation between sub-criterion rank and objective function value where all cost sub-criteria are fixed

V. Conclusions and recommendations
The overall goal of this study was to propose an integrated green-lean framework to improve the performance, efficiency and greenness of construction processes during the construction phase. In order to accomplish this goal, several techniques for green-lean construction were discussed, as follows. When simulating a real construction project, it is usual to have fixed values for the project costs (operational, design, labor, machine and material). This affects the ranking of the sub-criteria (which were drawn from the literature and refined by experts with more than 10 years' experience). In the resulting framework, taking the right design principles into consideration and solving the conflicts among the different specializations (architectural, civil, mechanical, electrical) in the design phase ranked first as the most critical sub-criterion to ensure a smooth flow in the construction process, thus affecting the project duration and cost. Energy systems (for example, installing photovoltaic solar cells, replacing regular windows with double-glazed ones, and replacing incandescent bulbs with compact fluorescent bulbs) come in the second rank. The use of efficient water systems, such as gray water reuse, should also be considered at an early stage of the project. These two sub-criteria are classified as green techniques, in addition to renewable material use and on-site material reuse. Renewable material ranked fourth; for example, using Green Cake blocks reduces the harmful effect of traditional blocks on the environment.
This sub-criterion is strongly related to the next one, in the fifth rank, which is the reliability of the used material, mainly when it is used for the first time in construction. The adherence-to-deadline sub-criterion comes in the sixth rank; almost all projects are delayed and do not respect the scheduled time plan, which forces the contractor to pay a delay penalty and affects the value that the owner tries to accomplish. The lean concept concentrates on the idea that the project must be studied precisely, to come up with a detailed implementation plan that guarantees the ability to finish the project on time. The satisfaction sub-criterion is in the seventh rank; it is taken into consideration during the design process, so it is anticipated that satisfaction would be guaranteed in the construction phase. The defects sub-criterion is in the eighth rank; defects might appear due to lack of experience of the labor force, misunderstanding of the design drawings, inclement weather or accidents, and all of these must be considered and prepared for from the very start of the project. The current study recommends paying more effort in the design phase, to give the designer the optimum duration to improve the designs and integrate them periodically with innovative solar and water energy alternatives, in line with Dhingra et al. [9]. From the very beginning, planning with the environment in mind cannot be excluded from the design process or considered a luxury in lean thinking. On the other hand, applying green would be beneficial, as it is automatically constrained by economic aspects. When thinking sustainably, applying lean cannot be separated from applying green. This means that understanding the economic, environmental and social aspects is inevitable to correctly apply sustainability. It is also recommended to find new value-adding activities and materials to replace non-value-adding ones.
For example, replacing Portland cement blocks with more environmentally friendly blocks, such as Green Cake blocks made from recycled rubble and ash. Polystyrene (PS) foam can also be used as a sustainable insulation material that replaces asphalt. During the heating process, asphalt vapor does not condense all at once, so workers are exposed not only to asphalt fumes but also to its vapors. Many studies show that cancer risk increases among workers who are exposed to asphalt vapors (Wess, 2005). Polystyrene foam, in contrast, does not need special handling: there is no dust during installation and use, no chemical binders, and it is safe for consumers, with no exposure to harmful substances during its service life. It is also recommended to use modern analysis tools, mainly in the design phase, to study the environmental impact of the construction and improve it from the early start with innovative alternatives such as shading elements, panels, skylights and water treatment strategies. It is not essential to use expensive tools to guarantee thermal satisfaction; for example, it can be achieved with simple modifications in the ventilation, orientation and location of the building. Furthermore, analyzing the needed activities at an early stage of the project helps to release the right work at the right time, with the right people, at an optimum price.

References
[1] G.
Ofori, "Challenges of construction industries in developing countries: lessons from various countries," in 2nd International Conference on Construction in Developing Countries, Gaborone, 2000.
[2] A. Banawi and M. M. Bilec, "A framework to improve construction processes: integrating lean, green and six sigma," International Journal of Construction Management, vol. 14, no. 1, pp. 45-55, 2014.
[3] L. Koskela, "Application of the new production philosophy to construction," Technical Report No. 72, CIFE, Stanford University, 1992.
[4] Z. Alwan, P. Jones, and P. Holgate, "Strategic sustainable development in the UK construction industry, through the framework for strategic sustainable development, using building information modelling," Journal of Cleaner Production, pp. 349-358, 2017.
[5] A. Ashworth and S. Perera, Contractual Procedures in the Construction Industry, Routledge, pp. 15-17, 2018.
[6] R. F. Aziz and S. M. Hafez, "Applying lean thinking in construction and performance improvement," Alexandria Engineering Journal, vol. 52, no. 4, pp. 679-695, 2013.
[7] J. Zuo and Z.-Y. Zhao, "Green building research - current status and future agenda: a review," Renewable and Sustainable Energy Reviews, vol. 30, pp. 271-281, 2014.
[8] C. J. Kibert, Sustainable Construction: Green Building Design and Delivery. John Wiley & Sons, 2016.
[9] R. Dhingra, R. Kress, and G. Upreti, "Does lean mean green?," Journal of Cleaner Production, vol. 85, pp. 1-7, 2014.
[10] G. Rebitzer et al., "Life cycle assessment: Part 1: framework, goal and scope definition, inventory analysis, and applications," Environment International, vol. 30, no. 5, pp. 701-720, 2004.
[11] T. Ohno, Toyota Production System: Beyond Large-Scale Production. CRC Press, 1988.
[12] U. H. Issa, "Implementation of lean construction techniques for minimizing the risks effect on project construction time," Alexandria Engineering Journal, vol. 52, no. 4, pp. 697-704, 2013.
[13] V. A. J.
Lim, "Lean construction: knowledge and barriers in implementing into Malaysia construction industry," Universiti Teknologi Malaysia, 2008.
[14] J. P. Womack and D. T. Jones, "Lean thinking - banish waste and create wealth in your corporation," Journal of the Operational Research Society, vol. 48, no. 11, p. 1148, 1997.
[15] B. Nunes and D. Bennett, "Green operations initiatives in the automotive industry: an environmental reports analysis and benchmarking study," Benchmarking: An International Journal, pp. 396-420, 2010.
[16] S. Li, X. Wu, Y. Zhou, and X. Liu, "A study on the evaluation of implementation level of lean construction in two Chinese firms," Renewable and Sustainable Energy Reviews, pp. 846-851, 2017.
[17] Y. Zhan, K. H. Tan, G. Ji, L. Chung, and A. S. Chiu, "Green and lean sustainable development path in China: guanxi, practices and performance," Resources, Conservation and Recycling, pp. 240-249, 2018.
[18] E. Yazgan and A. K. Üstün, "Application of analytic network process: weighting of selection criteria for civil pilots," Journal of Aeronautics and Space Technologies, vol. 5, no. 2, pp. 1-12, 2011.
[19] T. L. Saaty, Theory and Applications of the Analytic Network Process: Decision Making with Benefits, Opportunities, Costs, and Risks. RWS Publications, 2005.
[20] A. Jayant, V. Paul, and U. Kumar, "Application of analytic network process (ANP) in business environment: a comprehensive literature review," International Journal of Research in Mechanical Engineering & Technology, vol. 4, no. 3, pp. 29-37, 2014.
[21] E. W. Cheng and H. Li, "Contractor selection using the analytic network process," Construction Management and Economics, vol. 22, no. 10, pp. 1021-1032, 2004.
[22] J. Alazraque-Cherni, "Renewable energy for rural sustainability in developing countries," Bulletin of Science, Technology & Society, vol. 28, no. 2, pp. 105-114, 2008.
[23] P. E. Love and H.
Li, "Quantifying the causes and costs of rework in construction," Construction Management & Economics, vol. 18, no. 4, pp. 479-490, 2000.

Journal of Engineering Research and Technology, Volume 1, Issue 4, December 2014

Diachronic Monitoring of Surface Energy Fluxes by Remote Detection in the North-East of Niger W National Park

1,2 Arouna Saley Hamidou, 1 Oumar Diop, and 1 Amadou Seidou Maiga
1 Laboratory of Electronics, Computer Science, Telecommunication and Renewable Energy, Department SAT, Gaston Berger University, P.O. Box 234, Saint-Louis, Senegal
2 Department of Physics, Faculty of Science and Technology, University of Maradi, Niger

Abstract: The general objective of the present work is to contribute to the setup of an operational prototype for monitoring surface energy fluxes inside Niger's W Park, using Landsat data and few field data. The SEBAL/METRIC model is used to estimate the main surface fluxes. The diachronic study of the obtained fluxes reveals constant daily mean values for a given season. During autumn 2002, the mean values of the daily evapotranspiration were almost 4 mm/day. Humidity indicators are then deduced from the obtained fluxes. Their diachronic study makes it possible to identify areas with cold pixels as being less stressed than areas with dry pixels. This study shows that Landsat imagery can be used, at a large scale, to monitor the main biophysical processes occurring at the soil-vegetation-atmosphere interface. It also allows identifying areas at risk inside the park that need an adequate plan of management and conservation.

Index Terms: energy fluxes; remote detection; soil-vegetation-atmosphere interface; diachronic

I. Introduction
Over the last few decades, numerous studies have addressed the processes of mass and energy transfer at the ground level, through the estimation and diachronic study of surface energy fluxes by remote detection.
Such a large-scale study is essential for a good understanding of the physical processes occurring at the soil-vegetation-atmosphere interface. It helps to better understand the combined impacts of the natural variability of the climate and of anthropogenic actions, observable in recent decades at the global scale. The diachronic study of surface energy fluxes allows the identification of degraded forest areas, or areas subject to severe hydrous stress, as shown by earlier studies [1-2]. It also helps to prevent the risk of wild forest fires, since the hydrous state of plants is inversely linked to the flammability of forest resources, as shown by Viegas et al. [3]. It is therefore necessary to obtain, at a large scale, reliable information on land surface energy fluxes and evapotranspiration. Many methods using remote detection data to calculate surface fluxes have been developed, as shown in earlier works [4-7]. The most used algorithms are SEBAL [8], TSEB [9], SEBI [10], S-SEBI [5], SEBS [11] and METRIC [6]. In Niger, studies using remote sensing data have been conducted in order to improve the management of the park, as shown in earlier studies [12-15]. Still, no study on surface energy fluxes and their relationship with the soil's state has yet been conducted. It is therefore necessary to fill this gap. The general objective of the present work is to contribute to the development of an operational prototype for monitoring those fluxes, using several Landsat data sets. This prototype is based on simplified procedures, to make it easily operational and reproducible for field managers under Sahelian conditions, where field data are rare and inaccessible.

II. Characteristics of the study area
The study area is located in the Republic of Niger (West Africa), Fig. 1. It lies between longitudes 2°25'E and 2°45'E and latitudes 12°25'N and 12°40'N.
It covers a surface area of 63,000 ha. It is composed of a protected area in the south, inside the park, and a non-protected area in the north, outside the park. The two areas are separated by a natural border, the Niger River.

Fig. 1: Geographic position of the study area

The climate is tropical, of the Sudano-Sahelian type. Four types of geomorphology are identified and mapped in the area: rocky plateaux, pediments and drains, lateritic (ironpan) plateaux and intermediary forms, as shown by Benoit [13].

III. Materials and methods
The data used in this study come from six Landsat TM and ETM+ scenes, path 192 row 051, acquired during autumn in Niger under almost clear-sky conditions, which minimizes the effects of clouds on the reflectance detected by the satellite. The solar conditions on the day of acquisition of each image are calculated in this study. The obtained values and the dates of acquisition are presented in Table 1. They are used during the atmospheric correction of the reflectance detected by the satellite (using the MODTRAN 4/FLAASH model according to Hoke [16]) and during the correction of the effects of relief on the reflectance (using a digital elevation model of the study area). They are also used during the parameterisation of the surface energy balance equation, given as:

Rn − H − G − LE = 0   (1)

where Rn (W m⁻²) is the net incident solar radiation flux, H (W m⁻²) the sensible heat flux, G (W m⁻²) the soil heat flux and LE (W m⁻²) the latent heat flux. The raw Landsat images used in this study are level 1 products delivered by USGS (UTM, WGS 84, zone 31). The pixel size is 30 m × 30 m. Supervised classification, by the maximum likelihood method, is applied to classify each image. The use of this method is motivated by our thorough knowledge of the study area and because, through experience, supervised classification proves easier and more accurate. The six images were therefore classified using this method.
The results of this classification, for the image acquired on 1 February 1990, are presented in Fig. 2.

Fig. 2: Land use/occupation on 1 February 1990

The maps from this classification are essential when executing SEBAL/METRIC, precisely when choosing the dry and cold pixels. These pixels (called anchor pixels) are the pixels on which the thermal gradient dT and the sensible heat flux H are calculated. The luminances of the optical domains (visible, near and mid infrared) were converted into reflectances before mapping the surface energy fluxes. The obtained reflectances are then used to calculate the following input parameters: surface albedo (α), vegetation index (NDVI) and surface temperature (Ts). The theoretical basis of mapping evapotranspiration from remote detection data is nowadays well documented [17, 5, 6]. The steps given by Allen et al. [6] were used in this study to map the surface energy fluxes and evapotranspiration. The basic equation (1) is that of the surface energy balance. Thus, the energy equivalent of evapotranspiration, LE, has been estimated as a residual of Eq. (1), applied to each pixel. It is calculated according to:

LE = Rn − H − G   (2)

where Rn is given by:

Rn = (1 − α) Rglobal + Ratm↓ − Rsurf↑   (3)

with Rglobal the incident global solar radiation (W/m²), partially reflected by the surface as a function of the surface albedo; Ratm↓ the incident atmospheric longwave radiation (W/m²); and Rsurf↑ the longwave radiation emitted by the Earth's surface (W/m²).
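The residual step of equations (1)-(3) can be sketched directly per pixel; the numeric values below are illustrative, not measurements from the study:

```python
def net_radiation(albedo, r_global, r_atm_down, r_surf_up):
    """Eq. (3): Rn = (1 - albedo) * Rglobal + Ratm_down - Rsurf_up."""
    return (1.0 - albedo) * r_global + r_atm_down - r_surf_up

def latent_heat_flux(rn, h, g):
    """Eq. (2): LE = Rn - H - G, the residual of the surface energy balance (1)."""
    return rn - h - g

# Illustrative midday values for a single pixel, in W/m^2.
rn = net_radiation(albedo=0.20, r_global=900.0, r_atm_down=350.0, r_surf_up=450.0)
le = latent_heat_flux(rn, h=180.0, g=90.0)  # energy remaining for evapotranspiration
```

In SEBAL/METRIC this residual computation is applied to every pixel of the scene once Rn, H and G have been mapped.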
H is the sensible heat flux (W/m²), obtained by an iterative approach from the aerodynamic equation, given by:

H = (ρair cp dT) / rah   (4)

with ρair the air density in kg m⁻³; cp = 1004 J kg⁻¹ K⁻¹ the specific heat capacity of air; dT (K) the thermal gradient of air (between z1 = 0.1 m and z2 = 2 m above the ground); and rah the aerodynamic resistance to heat transfer in s m⁻¹ between the two heights, separated by the distance z2 − z1. G (W/m²) is the soil conduction flux, calculated according to Bastiaanssen [17]:

G = [(Ts − 273.16)(0.0038 + 0.0074 α)(1 − 0.98 NDVI⁴)] Rn   (5)

In Eq. (3), the surface albedo α is calculated according to Liang et al.:

α = 0.356 r1 + 0.13 r3 + 0.373 r4 + 0.085 r5 + 0.072 r7 − 0.0018   (6)

where ri is the reflectance in channel i (1, 3, 4, 5 and 7) of the Landsat satellite, corrected for atmospheric and relief effects. These reflectances are deduced from the corresponding luminances Lλi. The global solar radiance, or incoming shortwave radiation, is calculated using the formula:

Rglobal = (Gsc × cosθimg × τsw) / d²   (7)

with Gsc = 1367 W m⁻² the solar constant; cosθimg the spatial distribution of the solar incidence angle (integrating the solar declination, the latitude, the slope, the surface aspect angle and the solar hour angle of the study area); d the relative mean distance between the Earth and the Sun; and τsw the transmissivity of the atmosphere, calculated as a function of the air effective emissivity. The atmospheric radiation Ratm↓ is calculated according to the Stefan-Boltzmann formula:

Ratm↓ = εs εa σ Ta⁴   (8)

with εs the surface emissivity (the conversion factor from thermodynamic energy to radiative energy), expressed as a function of NDVI; εa the air effective emissivity; and σ the Stefan-Boltzmann constant.
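Equations (4)-(6) translate almost line for line into code; this is a per-pixel sketch in which the input values (reflectances, Ts, dT, Rn) are made-up illustrative numbers:

```python
def sensible_heat_flux(rho_air, dt, r_ah, cp=1004.0):
    """Eq. (4): H = rho_air * cp * dT / r_ah  (W/m^2)."""
    return rho_air * cp * dt / r_ah

def soil_heat_flux(ts, albedo, ndvi, rn):
    """Eq. (5), Bastiaanssen: G = (Ts - 273.16)(0.0038 + 0.0074*albedo)(1 - 0.98*NDVI^4) * Rn."""
    return (ts - 273.16) * (0.0038 + 0.0074 * albedo) * (1.0 - 0.98 * ndvi**4) * rn

def liang_albedo(r1, r3, r4, r5, r7):
    """Eq. (6), Liang: broadband albedo from corrected reflectances of bands 1, 3, 4, 5, 7."""
    return 0.356*r1 + 0.13*r3 + 0.373*r4 + 0.085*r5 + 0.072*r7 - 0.0018

# Illustrative pixel: a warm, moderately vegetated surface.
alpha = liang_albedo(0.12, 0.15, 0.35, 0.25, 0.18)
g = soil_heat_flux(ts=313.0, albedo=alpha, ndvi=0.30, rn=620.0)
h = sensible_heat_flux(rho_air=1.15, dt=5.0, r_ah=40.0)
```

In practice these functions would be applied to whole raster bands (e.g. NumPy arrays) rather than scalars, but the arithmetic is identical.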
The radiation emitted by the Earth's surface, Rsurf↑, is calculated according to the Stefan-Boltzmann formula:

Rsurf↑ = εs σ Ts⁴   (9)

with Ts calculated from the radiative surface temperature Trs (Ts = Trs/εs^(1/4), i.e. by simple inversion of the Stefan-Boltzmann law). Trs is given by the following formula:

Trs = K2 / ln[(K1/Rc(6)) + 1]   (10)

K1 and K2 are calibration constants specific to each type of Landsat satellite. The values of the constants are given inside the header files of each image, downloadable at the same time as the image. Rc(6) is the real radiance emitted by the surface, corrected for atmospheric and relief effects. The calculation of H from formula (4) requires the simultaneous existence of dry pixels and cold pixels on the study site, as shown by Allen et al. [6]. The supervised classification has permitted the identification of such pixels: the dry pixels are rocky levelling and burned areas, and the cold pixels are meadows and aquatic vegetation. To spatialize dT, we first determined the values of H on the dry pixels (Hdry) and then on the cold pixels (Hcold). The obtained values are then used to estimate the thermal gradient dT through an iterative process, starting from neutral atmospheric stability conditions and applying successive stability corrections (precisely, on the aerodynamic resistance) until dT converges. The mapping of dT is made possible by assuming a linear relation with Ts, according to Allen et al. [6]:

dT = a + b Ts   (11)

where a and b are constants estimated on the anchor pixels (dry/cold pixels) chosen on each image. The spatial distribution of dT is used in another iterative process based on Eq. (4), thus allowing the mapping of H.
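Equations (10) and (11) can be sketched as follows. The K1/K2 values shown are the published Landsat 5 TM band 6 calibration constants, quoted only as an example (in practice they are read from the image header, as the text says), and the anchor-pixel numbers are invented:

```python
import math

def radiative_temperature(radiance, k1, k2):
    """Eq. (10): Trs = K2 / ln(K1/Rc + 1); K1, K2 come from the image header files."""
    return k2 / math.log(k1 / radiance + 1.0)

def calibrate_dt(ts_dry, dt_dry, ts_cold, dt_cold):
    """Eq. (11): fit dT = a + b*Ts through the dry and cold anchor pixels."""
    b = (dt_dry - dt_cold) / (ts_dry - ts_cold)
    a = dt_dry - b * ts_dry
    return a, b

# Example: Landsat 5 TM band 6 constants (K1 = 607.76, K2 = 1260.56) and invented
# anchor values: dT = 0 K on the cold pixel, 8 K on the dry one.
trs = radiative_temperature(radiance=10.5, k1=607.76, k2=1260.56)
a, b = calibrate_dt(ts_dry=320.0, dt_dry=8.0, ts_cold=300.0, dt_cold=0.0)
```

The fitted a and b are then applied to the Ts map of the whole scene, which is what makes the iterative spatialization of dT (and hence H) possible.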
The spatial distribution of the other instantaneous fluxes allows mapping the latent heat flux LE, and then the daily evapotranspiration ETRday, which is calculated according to the following equation:

ETRday = Fe × Rnday   (12)

where Fe is the evaporative fraction, considered constant for a given day, as suggested by Bastiaanssen et al. [4]:

Fe = LEinst / (Rn − G)   (13)

LEinst is the instantaneous latent heat flux and (Rn − G) is the available energy at the Earth's surface. Rnday is the net daily radiation, given by:

Rnday = (1 − α0) Rgday − 110 τday   (14)

where Rgday is the global daily radiation and τday the daily transmissivity of the atmosphere (expressed as a function of the sunshine fraction n/N), given by:

τday = 0.25 + 0.50 n/N   (15)

Rgday is estimated from the daily exo-atmospheric radiation Kexo and τday:

Rgday = Kexo × τday   (16)

Knowing that evaporating 1 kg of water requires 2.45 × 10⁶ J (latent heat of vaporization), ETRday in mm day⁻¹ is calculated as:

ETRday = ETR (Joule) / (2.45 × 10⁶)   (17)

IV. Results and discussion
A. Spatial and diachronic analyses of the input parameters and the obtained fluxes
The input parameters of the model, i.e. surface temperature, surface albedo and NDVI, are estimated in space at the pixel scale and in time at the different dates of acquisition, Table 1. This table and the figures of Appendix A (Fig. 1A, Fig. 2A and Fig. 3A) show a strong spatial variability of these inputs. This variability can be explained by the heterogeneous character of the study area, observable in Fig. 2. From Table 1 we can observe that when the NDVI is high, the corresponding temperature is low, and vice versa. This result is general: a surface whose vegetation cover increases sees its surface temperature decrease.
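Equations (12)-(17) chain together into the daily ET estimate. A minimal per-pixel sketch follows, with illustrative instantaneous fluxes and a made-up daily exo-atmospheric radiation Kexo (taken here as a daily mean in W/m²):

```python
def daily_et_mm(le_inst, rn_inst, g_inst, albedo, k_exo, sunshine_fraction):
    """Eqs. (12)-(17): evaporative fraction scaled to daily evapotranspiration in mm/day."""
    fe = le_inst / (rn_inst - g_inst)                   # eq. (13): evaporative fraction
    tau_day = 0.25 + 0.50 * sunshine_fraction           # eq. (15): daily transmissivity
    rg_day = k_exo * tau_day                            # eq. (16): daily global radiation
    rn_day = (1.0 - albedo) * rg_day - 110.0 * tau_day  # eq. (14): daily net radiation
    etr_joules = fe * rn_day * 86400.0                  # eq. (12), integrated over 24 h (J/m^2)
    return etr_joules / 2.45e6                          # eq. (17): 1 kg/m^2 of water = 1 mm

# Illustrative values: LEinst = 350, Rn = 620, G = 90 W/m^2; albedo 0.20; n/N = 0.8.
et = daily_et_mm(le_inst=350.0, rn_inst=620.0, g_inst=90.0,
                 albedo=0.20, k_exo=400.0, sunshine_fraction=0.8)
```

With these toy inputs the result lands in the few-mm/day range, the same order of magnitude as the roughly 4 mm/day reported for autumn in the study.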
This could be due to the fact that vegetation reduces the aerodynamic resistance to evapotranspiration. A complementary study is necessary to verify this hypothesis.

Table 1: Values of the inputs for each day of image acquisition

Surface temperature is among the key parameters that control the whole set of physical processes occurring at the soil-vegetation-atmosphere interface. It is therefore important to get reliable information on this parameter, and its spatial and temporal distributions were analysed. The spatial distribution shows that the surface temperature varies between 296.93 K and 328.75 K (Fig. 1A), with a mean value of 313.82 K. These values are of the same order of magnitude as those obtained by remote detection in areas with almost the same type of climate as our study area [20-21]. Minimum values correspond to cold pixels (water, meadow and aquatic vegetation) and high values to hot pixels (rocky levelling and burned areas). The evolution of surface temperature in terms of vegetation abundance (through the vegetation index NDVI) was also analysed. On 4 October 1992, when the vegetation was abundant (mean NDVI = 0.28, with maxima reaching up to 0.67), the lowest surface temperature was obtained (Ts = 307.11 K). This could be due to the fact that vegetation reduces the resistance to surface evapotranspiration, inducing a decrease in surface temperature. The temporal comparison between the mean daily values of evapotranspiration on 04/10/1992, 30/11/1998, 02/02/2002 and 17/11/2002 shows that these values are practically constant (4 mm/day), as shown in Table 2. Indeed, except for precipitation and wind, all the other biophysical parameters are generally constant for a given season, such as autumn, the season during which the study images were acquired. On the other hand, the values of the remaining fluxes, i.e.
the sensible heat flux H and the conduction flux G, vary both in space (figures of Appendix B: Fig. 1B, Fig. 2B, Fig. 3B) and in time (Table 2), due to the variability of the phenomenon of convection.

Table 2: values of surface energy fluxes and evapotranspiration

B. Characterization of the soil's state

Before characterizing the soil's state, the diagrams defined by the relation between Ts and NDVI were used to locate the dry/cold pixels, using the triangle method [2, 19]. The relation between Ts and albedo was then used to confirm the positions of such pixels. For the image acquired on 5 February 2003, a threshold albedo of 0.2905 was obtained for a corresponding Ts of 313.6 K (Fig. 3).

Fig. 3: surface temperature as a function of albedo

Cold pixels are pixels having cold temperatures with albedo lower than the threshold albedo. Dry pixels are pixels having high temperatures with albedo greater than the threshold albedo. After locating the dry/cold pixels, we analysed the spatial and temporal variability of the surface energy fluxes of two different areas extracted from the same image, named A and B: A has more cold pixels (well-watered and fully vegetated) than B, and B has more dry pixels (almost bare soil, not much covered). Mean values of the humidity indicators over area A (Table 3a) show the highest values of evaporation fraction and daily evapotranspiration. On the other hand, these values are lowest over B (Table 3b). This is explained by the fact that an increase of albedo induces a decrease of the energy absorbed by the surface and thus a lower temperature, as regulation by the latent heat flux is no longer possible. Covered surfaces have the highest values of the fraction of evaporation.
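The dry/cold pixel screening described above can be sketched as a simple per-pixel rule. This is our own sketch, not the authors' implementation; the thresholds are the ones quoted for the 5 February 2003 image, and the function name is ours.

```python
def classify_pixel(ts_kelvin, albedo,
                   ts_threshold=313.6, albedo_threshold=0.2905):
    """Sketch of the dry/cold pixel screening: cold pixels combine a low
    surface temperature with an albedo below the threshold; dry pixels
    combine a high temperature with an albedo above the threshold.
    Thresholds are those quoted for the 5 February 2003 image."""
    if ts_kelvin < ts_threshold and albedo < albedo_threshold:
        return "cold"
    if ts_kelvin > ts_threshold and albedo > albedo_threshold:
        return "dry"
    return "intermediate"
```

Applied to every pixel of a scene, a rule of this kind yields the cold/dry masks from which the area A and area B statistics of Tables 3a and 3b can be compiled.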
This has serious consequences, expressed as a decrease of soil humidity and a drying of the vegetation, more marked in case of lack of water. During the months of February 1990 and 2003, the humidity indicators of zone B have the lowest mean values compared to those of zone A (Tables 3a and 3b). Hydrous stress is thus more marked over zone B. According to Thierry et al. [22], it was during the 1990s and 2000s that the negative impacts of climate change were observed in Sahelian regions. It is therefore highly probable that these observed lowest values are linked with the impacts of climate change.

Table 3a: mean daily values of humidity indicators on zone A
Table 3b: mean daily values of humidity indicators on zone B

V Conclusion

This study has permitted a diachronic monitoring of the main surface energy fluxes and humidity indicators. The obtained maps have reflected the dynamics of the study area for the different inputs of the model used. The same dynamics were observed for the main resulting fluxes, i.e. the conduction flux, the sensible heat flux and the latent heat flux. The monitoring has also allowed the characterization of the soil's state and the identification of areas that can be subject to severe hydrous stress. For lack of sufficient and pertinent field data, we were not able to verify some of the hypotheses made in this study. More images and field data are necessary to get interpolated daily, monthly and seasonal values, but the obtained values are of the same order of magnitude as those encountered in the literature for regions having almost the same climatic characteristics as our study area. We are planning to conduct a campaign of field data collection and real-time satellite image downloading, over a long period, through an important research project in collaboration with some partners.
It will contribute to setting up an operational prototype for monitoring surface energy fluxes inside the park. This will help to better apprehend, inside the park, the various biophysical processes occurring at the soil-vegetation-atmosphere interface. It will then allow identifying areas at risk, needing an adequate plan of management and conservation.

VI Recommendations

As shown through this study, remote sensing can be a powerful means of studying surface energy fluxes at a large scale. New generations of satellites with a temporal resolution of 3 days, like Formosat and Venus, offer the possibility of applying the same approaches developed in this study. Much field data is also necessary to validate the surface energy fluxes obtained by remote sensing. Then, to conduct a better diachronic monitoring of surface energy fluxes at a large scale, we highly recommend the following field work:
1. to carry out a large campaign of field data collection, at pixel scale and in real time, corresponding to the passage of the satellite over the pixel;
2. to build and test an operational prototype for continuous mapping of surface energy fluxes using satellite information;
3. to develop algorithms permitting the interpolation of the surface energy fluxes on a daily, seasonal and annual basis between the dates of image acquisition.
Taking the above three aspects into account in the monitoring process is a necessary condition for a better use of the obtained surface energy fluxes in hydrological and environmental applications.

VII Acknowledgment

The authors wish to thank Prof. Saadou Mahamane and Prof. Ali Mahamane, respectively rector and vice-rector of the University of Maradi, for their helpful contributions. This work was supported in part by the budget of the University of Maradi.

References

[1] Hamimed A., Mederbal K., Khaldi A.
"Utilisation des données satellitaires TM de Landsat pour le suivi de l'état hydrique d'un couvert végétal dans les conditions semi-arides en Algérie". Télédétection, 2: 29-38, 2001.
[2] Mehor, M., Hamimed, A., Khaldi, A., Seddini, A., Abdesselam, B. "Spatialisation de la température et des flux énergétiques de surface à partir des données satellitaires Landsat ETM+". Revue Française de Photogrammétrie et de Télédétection, n° 190, pp. 15-17, 2008.
[3] Viegas, D.X., Viegas, T.P., Ferreira, A.D. "Moisture content of fine forest fuels and fire occurrence in Portugal". The International Journal of Wildland Fire, vol. 2 (2): 69-85, 1992.
[4] Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes and A.A.M. Holtslag. "Remote sensing surface energy balance algorithm for land (SEBAL): 1. Formulation". J. Hydrol., 212-213: 198-212, 1998.
[5] Roerink, G.J., Z. Su and M. Menenti. "S-SEBI: a simple remote sensing algorithm to estimate the surface energy balance". Phys. Chem. Earth, 2000.
[6] Allen, R.G., T. Masahiro and T. Ricardo. "Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) model". J. Irrigat. Drain. Eng., 133(4): 395-406, 2007.
[7] Hamimed, A., Z. Souidi and K. Mederbal. "Spatial evapotranspiration and surface energy fluxes from Landsat ETM+ data: application to a mountain forest region in Algeria". JAS AUF Alger, November 2009.
[8] Bastiaanssen, W.G.M. "Regionalization of surface flux densities and moisture indicators in composite terrain". Ph.D. thesis, Agricultural University Wageningen, 273 p., 1995.
[9] Norman, J.M., Kustas, W.P. and Humes, K.S. "Source approach for estimating soil and vegetation energy fluxes in observations of directional radiometric surface temperature". Agricultural and Forest Meteorology, 77: 263-293, 1995.
[10] Menenti, M. and Choudhury. "Parameterization of land surface evaporation by means of location dependent potential evaporation and surface temperature range".
Proceedings of the IAHS Conference on Land Surface Processes, 1993.
[11] Su, Z. "The surface energy balance system (SEBS) for estimation of turbulent heat fluxes at scales ranging from a point to a continent". Hydrol. Earth Syst. Sci., 6(1): 85-99, 2002.
[12] Couteron, P. "Reflection on spatial models for land in the Sudano-Sahelian area". DEA thesis, Structures and Spatial Dynamics, University of Avignon, pp. 61, 1992.
[13] Benoit, M. "Status and use of land on the outskirts of the 'W' national park in Niger. Contribution to the study of natural and plant resources of Tamou's township and the 'W' park". Office of Scientific and Technical Research Overseas (ORSTOM), Niamey, Niger (in French), 1998.
[14] Inoussa, M.M., A. Mahamane, C. Mbow, M. Saadou and B. Yvonne. "Spatiotemporal dynamics of woodland in the W national park of Niger (West Africa)". Drought, 2011.
[15] Diouf, A. "Influence des régimes des feux d'aménagement sur la structure ligneuse des savanes nord-soudaniennes dans le parc du W (sud-ouest Niger)". Thèse de doctorat, École Interfacultaire de Bioingénieurs, Université Libre de Bruxelles, 2013.
[16] Hoke, T. "MODTRAN4: radiative transfer modelling for atmospheric correction". Proceedings of Optical Spectroscopic Techniques and Instrumentation for Atmospheric and Space Research III, SPIE, July 1999.
[17] Bastiaanssen, W. "SEBAL-based sensible and latent heat fluxes in the irrigated Gediz basin, Turkey". J. Hydrol., 229(1-2): 87-100, 2000.
[18] Liang, S., C.R. Shuey and C. Daughtry. "Narrowband to broadband conversions of land surface albedo: II. Validation". Remote Sensing Env., 84: 25-41.
[19] Arouna, S.H., Oumar, D., Amadou, S.M.
"A spatial analysis of surface energy fluxes and evapotranspiration in the northern-east of Niger W national park". Research Journal of Environmental and Earth Sciences, 5(3): 123-130, 2013. ISSN: 2041-0484; e-ISSN: 2041-0492.
[20] Bashir, M.A., Takeshi, H., Haruya, T., Abdelhadib, A.W., Akio, T. "The spatial analysis of surface temperature and evapotranspiration for some land use/cover types in the Gezira area, Sudan". Research project supported by Grants-in-Aid (No. 16405031) from the Japan Society for the Promotion of Science, 2007.
[21] Peñuelas, C. "The reflectance at the 950-970 nm region as an indicator of water status". Int. J. Remote Sensing, 14: 1887-1905, 1993.
[22] Thierry, M., Erwann, F., Joan, B. "Évaluation des risques liés aux variations spatiotemporelles de la pluviométrie au Sahel", 2007.

Abbreviations list:
ETR_day: daily evapotranspiration.
ETM+: Enhanced Thematic Mapper Plus.
NDVI: Normalized Difference Vegetation Index.
PNWN: Parc National du W du Niger (French).
METRIC: Mapping Evapotranspiration with high Resolution and Internalized Calibration.
SEBAL: Surface Energy Balance Algorithm for Land.
SEBI: Surface Energy Balance Index.
S-SEBI: Simplified Surface Energy Balance Index.
SEBS: Surface Energy Balance System.
TM: Thematic Mapper.
TSEB: Two-Source Energy Balance algorithm.

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015

Figure 1: MLA antenna shape in [10]. Figure 2: the E-shape MLA antenna.

Analysis and Design of E-Shape Meander Line Antenna for LTE Mobile Communications

Ahmed Hamdi Abo Absa 1, Mohamed Ouda 2, Ammar Abu Hudrouss 3
1 Tech. Department, University of Palestine, Palestine, a.absah@gmail.com.
2 Electrical Engineering Department, Engineering College, Islamic University of Gaza, mouda@iugaza.edu.ps
3 Electrical Engineering Department, Engineering College, Islamic University of Gaza, ahdrouss@iugaza.edu.ps

Abstract - The meander line antenna (MLA) is an electrically small antenna. Electrically small antennas pose several performance-related issues such as narrow bandwidth, low gain and high cross-polarization levels. In this paper, we analyze and design an E-shape MLA as a new shape to achieve wider bandwidth and a modest gain improvement at 2.5 GHz compared to the classical MLA. A parametric study has been carried out on the effect of changing each variable in the antenna structure and on how each change affects the antenna performance. The best performance of the separate variables is combined at the end, which gives a suboptimal design. Professional design software (HFSS) was used to design and optimize the antenna, and MATLAB codes were written to determine the resonant frequency and the bandwidth for each study in this paper.

Index Terms - E-shape; bandwidth; gain; meander line; parametric.

I Introduction

The bandwidth of a microstrip antenna may be increased using an air substrate [1]; however, a dielectric substrate must be used if a compact antenna size is required [2]. A few approaches exist in the literature that can be applied to improve the microstrip antenna bandwidth. These include increasing the substrate thickness, introducing a parasitic element in either a coplanar or a stacked configuration, and modifying the shape of a common radiator patch by incorporating slots. The last approach is particularly attractive because it can provide excellent bandwidth improvement and maintain a single-layer radiating structure, preserving the antenna's thin-profile characteristic. Successful examples include E-shaped patch antennas [3-7], U-slot patch antennas [8], and V-slot patch antennas [9].
The authors in [10] proposed a meander-line structure for PCMCIA cards operating at 2.4 GHz, as shown in Fig. 1. The maximum gain and return loss of the antenna at the resonant frequency are 2.76 dB and -17 dB, respectively. The substrate material used is FR-4 with εr = 4.5, tan δ = 0.0150, and dielectric height h = 1.5 mm, as shown in Fig. 1.

In this paper, the modified E-shape MLA (Fig. 2) is designed at a resonant frequency of 2.5 GHz. The analysis of the E-shape MLA is described in Section II. A comprehensive parametric study has been carried out to understand the effects of the various dimensional parameters, as shown in Section III. Finally, the conclusion is given in Section IV.

II E-Shape MLA Analysis

Figure 3: the E-shape MLA.

The width and length of the microstrip antenna are determined as follows [13]:

W = (v / 2f) * sqrt(2 / (εr + 1)) (1)

where v is the free-space velocity of light, εr is the dielectric constant of the substrate and f is the resonant frequency. The effective dielectric constant is given as [13]:

εeff = (εr + 1)/2 + ((εr - 1)/2) * [1 + 12h/W]^(-1/2) (2)

The dimensions of the patch along its length are extended on each end by a distance ΔL, which is a function of the effective dielectric constant εeff and the width-to-height ratio (W/h); the normalized extension of the length is:

ΔL/h = 0.412 * ((εeff + 0.3)(W/h + 0.264)) / ((εeff - 0.258)(W/h + 0.8)) (3)

The actual length of the patch (L) can then be determined as:

L = v / (2f * sqrt(εeff)) - 2ΔL (4)

The E-shaped rectangular microstrip antenna consists of two symmetrical parallel slots, incorporated as shown in Fig. 3. The two slots are designed in this shape to disturb the surface current path and to introduce a local inductive effect that is responsible for the excitation of the second resonant mode. The slot length (L2), slot width (W2), and the center arm dimensions of the E-shape control the frequency of the second resonant mode and the achievable bandwidth [14].
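Equations (1)-(4) can be evaluated numerically; the sketch below is ours (not the paper's code) and simply plugs in the quoted substrate parameters (εr = 4.5, h = 1.5 mm) at 2.5 GHz.

```python
import math

C = 3.0e8  # free-space velocity of light, v (m/s)

def patch_dimensions(f, eps_r, h):
    """Sketch of equations (1)-(4): width W and length L of a rectangular
    microstrip patch. f in Hz, h in m; returns (W, L) in m."""
    # Eq. (1): patch width
    w = C / (2 * f) * math.sqrt(2 / (eps_r + 1))
    # Eq. (2): effective dielectric constant
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # Eq. (3): length extension on each end due to fringing fields
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)
                      / ((eps_eff - 0.258) * (w / h + 0.8)))
    # Eq. (4): actual patch length
    l = C / (2 * f * math.sqrt(eps_eff)) - 2 * dl
    return w, l
```

With f = 2.5 GHz, εr = 4.5 and h = 1.5 mm this gives W ≈ 36.2 mm and L ≈ 28.0 mm, a plain rectangular-patch starting point before the E-shape slots and meandering are introduced.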
The second resonant frequency is outside our area of concern, as it is located at 5.1 GHz. A common rectangular patch antenna can be represented by the equivalent circuit of Fig. 4(a) [14]. The resonant frequency is determined by L1C1 [14]. At the resonant frequency, the antenna input impedance is given by the resistance R. For the modified shape, the equivalent circuit is modified into the form shown in Fig. 4(b) [14]. The second resonant frequency is determined by L2C2. The analysis of the circuit network shows that the antenna input impedance is given by [14]:

Z = R + (jωL1 + 1/(jωC1)) || (jωL2 + 1/(jωC2)) (5)

where || denotes the parallel combination. The imaginary part of the input impedance is zero at the two series resonant frequencies determined by L1C1 and L2C2, respectively [14]. This is not the exact model of the E-shaped antenna, because of the parallel-resonant mode that the equations show between the two series-resonant frequencies. Nevertheless, the model is considered sufficient to study the operating principle of the antenna design [14]. If the two series resonant frequencies are too far apart, this may lead to an unsatisfactory reflection coefficient at the antenna input [14]. Meanwhile, if the resonant frequencies are set too near to each other, the parallel-resonant mode may affect the overall frequency response, and this in turn may degrade the reflection coefficient near each of the series-resonant frequencies [14]. Therefore, each dimension of the E-shape antenna is important and should be carefully optimized.

Figure 4: equivalent circuits of (a) the rectangular patch and (b) the E-shape antenna [14].

III Parametric Study

A substrate with a relative dielectric permittivity of 4.5 and thickness of 1.5 mm is selected to obtain a compact radiating structure that, at the same time, meets the required bandwidth specification. It is fed by a 50-Ω SMA connector.
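The equivalent-circuit impedance of equation (5) can be evaluated numerically to check the stated behaviour, i.e. that Z reduces to the real resistance R at either series resonance. This is a sketch under our own assumptions: the component values in the usage example are illustrative, not extracted from the paper.

```python
def input_impedance(omega, r, l1, c1, l2, c2):
    """Sketch of equation (5): R in series with the parallel combination
    of the two series branches (jwL1 + 1/jwC1) and (jwL2 + 1/jwC2)."""
    z1 = 1j * omega * l1 + 1 / (1j * omega * c1)  # first series branch
    z2 = 1j * omega * l2 + 1 / (1j * omega * c2)  # second series branch
    return r + z1 * z2 / (z1 + z2)                # parallel combination plus R
```

At ω1 = 1/sqrt(L1·C1) the first branch impedance vanishes, the parallel combination shorts out, and the result is purely resistive (Z ≈ R), matching the statement that the imaginary part is zero at both series resonances.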
The values of the parameters for the resonant frequency are set step by step. The first consideration is to design the dimensions of the antenna as shown in Fig. 3. The parameters W1, W2, W3, W4, W5 and W6 are set as variables to show their effects on the bandwidth and the gain of the E-shape MLA.

A. Step 1: The length of the ground is changed from 29 mm to 32 mm in steps of 0.1 mm, with all other parameters fixed. The simulated return loss S11 as a function of frequency is shown in Fig. 5, which shows the increase of the resonant frequency with the increase of the ground length. The bandwidth for each step is determined using MATLAB software. A bandwidth of 0.95 GHz was obtained at a ground length of 30.2 mm, a significant increase over the 0.6 GHz of the design given in [12]. The maximum gains are shown in Fig. 6, where the values are between 2.7 and 2.8 dB at the resonant frequency of 2.5 GHz.

B. Step 2: Choosing the optimum result of S11 from step 1 (ground length of 30.2 mm), the width W3 is varied in steps of 0.1 mm from 0.5 mm to 4.5 mm while fixing the other parameters. Similar to step 1, it has been found that when the width W3 is increased, the resonant frequency also increases. In this case, increasing the width W3 affects the resonant frequency and bandwidth, and the best value for the bandwidth is obtained at W3 = 2.2 mm.

C. Step 3: The optimum parameters from step 2 were chosen (ground length of 30.2 mm and W3 = 2.2 mm), and the widths W1, W5 and W6 were varied in steps of 0.2 mm from 0.6 mm to 2.5 mm with all other parameters fixed. It has been found that as the widths W1, W5 and W6 increase, the resonant frequency increases. The best values are a bandwidth of 1.3 GHz and S11 = -19.6 dB, achieved at W1 = W5 = W6 = 1 mm. The maximum gains for the best values are the same as in step 1.
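The MATLAB post-processing used at each step (extracting the resonant frequency and bandwidth from the simulated S11 curve) can be sketched as follows. This is a hedged Python equivalent, not the authors' code; the paper does not state its bandwidth criterion, so the common -10 dB convention is assumed here.

```python
def bandwidth_from_s11(freqs_ghz, s11_db, threshold_db=-10.0):
    """Sketch of the per-step post-processing: resonant frequency is the
    S11 minimum; bandwidth is the span of points at or below the
    (assumed) -10 dB threshold. Assumes a single matched band."""
    pairs = sorted(zip(freqs_ghz, s11_db))
    f_res = min(pairs, key=lambda p: p[1])[0]          # deepest S11 dip
    below = [f for f, s in pairs if s <= threshold_db]  # matched region
    bandwidth = (max(below) - min(below)) if below else 0.0
    return f_res, bandwidth
```

Run on a dense frequency sweep per parameter value, a routine of this kind produces the bandwidth-versus-ground-length and bandwidth-versus-width curves that steps 1-5 compare.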
D. Step 4: The optimum parameters from step 3 were selected, and the width W2 was varied in steps of 0.1 mm from 0.6 mm to 2.5 mm with all other parameters fixed. It is shown that increasing W2 has no effect on the resonant frequency or the bandwidth, so in this step we select the width W2 = 1 mm, which gives a gain of 2.96 dB.

E. Step 5: The optimum parameters were chosen from step 4, and the width W4 was varied in steps of 0.1 mm from 0.6 mm to 2.5 mm while fixing the other parameters. It has been found that as the width W4 increases, the resonant frequency shifts to the right. In this case, increasing the width W4 affects the resonant frequency and bandwidth. The best values are a bandwidth of 1.2 GHz and S11 = -18.9 dB, achieved at W4 = 1.9 mm. The maximum gain is 3.02 dB, which also occurs at W4 = 1.9 mm.

The final design of the E-shape MLA is shown in Fig. 7; the return loss is shown in Fig. 8, the radiation pattern in Fig. 9, and the gain for the final design in Fig. 10.

Figure 5: return loss for the different heights of the ground.
Figure 6: maximum gain for each frequency input at phi = 0.
Figure 7: the final design of the E-shape MLA.
Figure 8: the return loss for the final design of the E-shape MLA.

IV Conclusion

An electrically small E-shape MLA operating at 2.5 GHz was designed and studied in this paper using the HFSS software package. A parametric study was applied to achieve the optimum antenna design for the standard LTE mobile handset. The antenna provided a significant bandwidth enhancement and small gain enhancements. The E-shape MLA shows an overall fair performance, and it could be a promising candidate to overcome the deficiencies of low-profile small antennas.

References

[1] A. F. A.
Ayoub, "Analysis of rectangular microstrip antennas with air substrates," Journal of Electromagnetic Waves and Applications, vol. 17, no. 12, pp. 1755-1766, 2003.
[2] G. Vetharatnam, B. K. Chung, and H. T. Chuah, "Design of a microstrip patch antenna array for airborne SAR applications," Journal of Electromagnetic Waves and Applications, vol. 19, no. 12, pp. 1687-1701, 2005.
[3] F. Yang, X. X. Zhang, X. Ye, and Y. Rahmat-Samii, "Wide-band E-shaped patch antennas for wireless communications," IEEE Trans. Antennas Propagat., vol. 49, no. 7, pp. 1094-1100, July 2001.
[4] K. L. Wong and W. H. Hsu, "A broad-band rectangular patch antenna with a pair of wide slits," IEEE Trans. Antennas Propagat., vol. 49, no. 9, pp. 1345-1347, Sep. 2001.
[5] Y. Ge, K. P. Esselle, and T. S. Bird, "E-shaped patch antennas for high-speed wireless networks," IEEE Trans. Antennas Propagat., vol. 52, no. 12, pp. 3213-3219, Dec. 2004.
[6] Y. Ge, K. P. Esselle, and T. S. Bird, "A compact E-shaped patch antenna with corrugated wings," IEEE Trans. Antennas Propagat., vol. 54, no. 8, pp. 2411-2413, Aug. 2006.
[7] A. Yu and X. X. Zhang, "A method to enhance the bandwidth of microstrip antennas using a modified E-shaped patch," Proceedings of Radio and Wireless Conference, pp. 261-264, Aug. 10-13, 2003.
[8] K. F. Lee, K. M. Luk, K. F. Tong, S. M. Shum, T. Huynh and R. Q. Lee, "Experimental and simulation studies of the coaxially fed U-slot rectangular patch antenna," IEE Proc. Microw. Antennas Propag., vol. 144, no. 5, pp. 354-358, Oct. 1997.
[9] G. Rafi and L. Shafai, "Broadband microstrip patch antenna with V-slot," IEE Proc. Microw. Antennas Propag., vol. 151, no. 5, pp. 435-440, Oct. 2004.
[10] C.-C. Lin, S.-W. Kuo, and H.-R. Chuang, "A 2.4-GHz printed meander-line antenna for USB WLAN with notebook-PC housing," IEEE Microw. Wireless Compon. Lett., vol. 15, no. 9, pp. 546-548, Sept. 2005.
[11] H. Choo, H.
Ling, "Design of broadband and dual-band microstrip antennas on a high-dielectric substrate using the genetic algorithm," IEE Proc. Microwaves Antennas Propagat., vol. 150, no. 3, pp. 137-142, June 2003.
[12] K. Michael, A. Kucharski, "Genetic algorithm optimization for broadband patch antenna design," Proc. of the 16th International Conference on Microwaves, Radar and Wireless Communications, MIKON-2006, Kraków, vol. 2, pp. 748-751, May 22-24, 2006.
[13] M. K. Verma, Sapna Verma and D. C. Dhubkarya, "Analysis and designing of E-shape microstrip patch antenna for the wireless communication systems," International Conference on Emerging Trends in Electronic and Photonic Devices & Systems, 2009.
[14] B.-K. Ang and B.-K. Chung, "A wideband E-shaped microstrip patch antenna for 5-6 GHz wireless communications," Progress in Electromagnetics Research, PIER 75, pp. 397-407, 2007.
[15] M. S. Sharawi, RF Planning and Optimization for LTE Networks, Taylor and Francis Group, LLC, 2011.

Figure 9: the radiation pattern for the final design of the E-shape MLA.
Figure 10: the gain for the final design of the E-shape MLA.

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015

Characterization of Raw Egg Shell Powder (ESP) as a Good Bio-Filler

Amal S.M. Bashir 1, Yamuna Manusamy 2
1 Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, amaal_989@hotmail.com
2 Associate Professor, Universiti Tunku Abdul Rahman, yamunam@utar.edu.my

Abstract - Chicken eggshell (ES) is an aviculture byproduct that has been listed worldwide as one of the worst environmental problems. It is constituted by a three-layered structure, namely the cuticle on the outer surface, a spongy (calcareous) layer and an inner lamellar (or mammillary) layer.
The chemical composition (by weight) of by-product eggshell consists of calcium carbonate (94%), magnesium carbonate (1%), calcium phosphate (1%) and organic matter (4%) such as type X collagen, sulfated polysaccharides, and other proteins. This study was conducted to investigate the various characteristics of ESP, including scanning electron microscopy (SEM), particle size, surface morphology, FTIR, X-ray fluorescence (XRF), and thermogravimetric analysis (TGA). Based on its unique characteristics, the potential use of ESP as a natural filler prepared from food waste and incorporated with natural rubber latex foam (NRLF) was investigated.

Index Terms - composite, eggshell waste, natural filler, characterization.

I Introduction

Chicken eggshell (ES) is an aviculture byproduct that has been listed worldwide as one of the worst environmental problems. It is constituted by a three-layered structure, namely the cuticle on the outer surface, a spongy (calcareous) layer and an inner lamellar (or mammillary) layer [1]. The chemical composition (by weight) of by-product eggshell has been reported as follows: calcium carbonate (94%), magnesium carbonate (1%), calcium phosphate (1%) and organic matter (4%) such as type X collagen, sulfated polysaccharides, and other proteins [2]. Yi et al. [3] reported that the chemical composition and availability of ES make it a potential source of filler for bulk-quantity, inexpensive, lightweight and low load-bearing composite applications. There have been several attempts to use eggshell components for different applications: adding ES to food supplements for people and animals, and art projects galore that include eggshells as an ingredient, such as mosaics, paints, paper making, dyeing and carving [4]. Moreover, ES is reused as a fertilizer or soil conditioner because of its high nutrient content, such as calcium, magnesium and phosphorus [5]. ES can be used as a natural filler for making lightweight polymer composites, which provides an effective means of reuse.
Various researchers have investigated the potential use of natural filler (NF) from ES waste. Eggshell is popular as a bio-filler in materials, especially in plastics and polymers. Studies have shown that eggshell is able to replace up to 75% of commercial CaCO3 and talc as a new bio-filler in polypropylene composites. A study of eggshell as a bio-filler also proved that it performs better than all the different particle sizes of CaCO3 fillers used [6]. In rubber production, eggshell is used as a bio-filler in epoxidized natural rubber (ENR) composites. It was indicated that ES-filled materials showed superior vulcanization characteristics, with an increase of the maximum torque and cure rate index (CRI) and a reduction of the cure time and scorch time. Morphological study revealed that ES had greater interfacial adhesion than the other fillers [7]. Consequently, this study was carried out to investigate the various characteristics of ESP, including particle size, surface morphology, FTIR, X-ray fluorescence (XRF), surface area and thermal behaviour. Based on its characteristics, the potential use of ESP as a natural filler incorporated with natural rubber latex foam (NRLF) was highlighted.

II Material and Methods

Sample preparation

The raw ES used in this study was collected from a cafeteria located at Universiti Tunku Abdul Rahman (UTAR), Kampar, Perak, Malaysia. The samples were rinsed with clean water to remove the residual egg contents attached to the eggshells. The water content was removed by drying under the hot sun. The membrane was removed, and the shells were then ground using a Retsch ZM 200 grinder. The ESP was then sieved. A Mastersizer 2000 particle size analyzer was used to determine the particle size of the ESP.
Particle size distribution

The particle size distribution (PSD) is an indication of the different sizes of particles, which are presented as proportions. The measurement was carried out by expressing the relative particle amount as a percentage, where the total amount of particles in the sample particle group is 100%. In PSD tests, various kinds of standards such as volume, area, length and quantity are normally used to determine the particle amount [8]. The frequency distribution shows, in percentage, the amount of particles appearing in each particle size interval, after the range of target particle sizes has been divided into separate intervals. The cumulative distribution (particles passing the sieve) expresses the percentage of particles at or below a specific particle size. In this study, a Mastersizer 2000, Hydro 2000MU (A) was used to determine the particle size distribution.

Field emission scanning electron microscopy (FESEM)

A FESEM JEOL 6701-F scanning electron microscope was used to investigate the surface morphology and texture of the ESP. The test sample was cut and placed onto the specimen stub with double-sided carbon tape. The specimen was then prepared for examination by coating with platinum.

Fourier transform infrared spectroscopy (FT-IR)

The infrared spectra were measured using a Spectrum RX1 to determine the content and impurity of the ESP.

X-ray diffraction studies (XRD)

X-ray diffraction (XRD) is a fast analytical method mainly applied for the identification of a crystalline material. Besides, it is also able to provide information on the unit cell [9]. The analysed material, which was finely ground, homogenized, and of average bulk composition, was tested via XRD. XRD spectra were recorded with a Shimadzu XRD-600 in step-scan mode using Ni-filtered Cu Kα radiation, which has a wavelength of 0.1542 nm. ESP samples were scanned in reflection, whereas the moulded composites were scanned in transmission mode, in the angle interval of 2θ = 1-12°.
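The frequency-to-cumulative conversion described above under "particle size distribution" can be sketched as a running sum. This is a minimal sketch of the general computation (the Mastersizer performs it internally); the function name and the example bins are ours.

```python
def cumulative_distribution(volume_pct):
    """Sketch of the cumulative (undersize) distribution: given the
    frequency distribution per size interval (volume percentages),
    return the cumulative percentage passing each interval's upper edge."""
    total = sum(volume_pct)          # normalize in case inputs don't sum to 100
    running, cumulative = 0.0, []
    for pct in volume_pct:
        running += pct
        cumulative.append(100.0 * running / total)
    return cumulative
```

The percentile readings d0.1, d0.5 and d0.9 quoted in the results are then simply the sizes at which this cumulative curve crosses 10%, 50% and 90%.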
The interlayer spacing (d-spacing) of the powder-form ESP was derived from the peak position (d001 reflection) in the XRD diffractograms according to the Bragg equation, equation (1):

λ = 2d sin θ (1)

where λ is the X-ray wavelength, d is the interlayer spacing and θ is the angle of diffraction [10].

X-ray fluorescence (XRF)

The chemical composition of the eggshell waste powder sample was determined by an X-ray fluorescence machine.

Thermogravimetric analysis (TGA)

The thermogravimetric analysis of the ESP sample was carried out using a Mettler-Toledo thermogravimetric analyzer TGA/SDTA851e. The ESP sample was tested at a heating rate of 20 °C/min from 30 °C to 800 °C under nitrogen gas flow.

III Results and Discussion

Particle size analysis

Fig. 1 demonstrates the particle size distribution curve of ESP. The particle sizes of the ESP at d0.5, d0.1, and d0.9 were obtained as 7 µm, 1.106 µm, and 24.019 µm, respectively. This indicates that finely ground eggshell powder can be expected to react similarly to commercial calcium carbonate in strengthening the physical properties of rubber composites at the same loading and the same particle size. In summary, eggshell powder is feasible to test as a bio-filler, since its particle size is highly similar to that of commercial calcium carbonates.

Field emission scanning electron microscopy (FESEM)

The morphology of ESP is illustrated in Fig. 2(a) and Fig. 2(b). From the figures, it can be observed that the ESP does not have an exact shape, size or length, as a result of the grinding process used. In addition, a wide particle size range was detected, which is in accordance with the results obtained from the particle size analysis (Fig. 1). The scanning electron micrograph of Fig. 2(b) shows the high porosity of the eggshell powder particles. In fact, the eggshell contains a significant amount of gas-exchange pores [11].

Fourier transform infrared spectroscopy (FT-IR)

Fig.
3 shows the ft-ir spectra for esp, with numerous bands from 4000 cm-1 to 400 cm-1. a prominent absorption peak at 1431 cm-1 was observed, attributed to the carbonate group. besides, the ft-ir result also showed the absorption peak of calcite at 876 cm-1, of co3 2-. this agrees well with the result reported by islam et al. [12], who observed the absorption peak of calcite at 875 cm-1, of co3 2-.
bashir and munusamy / characterization of raw egg shell powder (esp) as a good bio-filler (2015)
x-ray diffraction studies (xrd)
x-ray diffraction patterns of the esp sample are shown in fig. 4. the sample presented all the diffraction peaks characteristic of calcite (caco3). calcite is the thermodynamically most stable form of caco3 at room temperature [13].
x-ray fluorescence (xrf)
as presented in table 1, the chemical composition of the esp shows that calcium oxide (cao) was the most abundant component. the high amount of calcium oxide is associated with the presence of calcium carbonate, which is the main component of avian eggshell [14]. thus, from a chemical viewpoint, the eggshell waste sample can be considered a relatively pure natural carbonate-based material, and its composition is very similar to that of calcitic limestone [14].

table 1: chemical composition of the esp

chemical composition    wt.%
c                      21.1286
na2o                    0.1046
mgo                     0.9261
p2o5                    0.4149
so3                     0.3264
k2o                     0.0542
cao                    76.9922
fe2o3                   0.0132
sro                     0.0396

thermogravimetric analyses (tga)
the thermal behaviour of the eggshell waste sample was analyzed by the tga test. the results show the presence of three thermal events. as shown in fig.
5, the first stage (~65 ºc) is endothermic and is attributed to the removal of physically adsorbed water on the particles of the waste powder. the second stage (~544.73 ºc) is exothermic, and is related to decomposition of organic matter. the third stage (~699.14 ºc) is endothermic, and is caused by decomposition of calcium carbonate [14].
iv conclusion
in this study, the chemical, physical, and morphological characteristics of es waste were investigated using a particle size analyser, sem, ft-ir, xrd, xrf, and tga. the characteristics of es (similar to commercial calcium carbonate) and its availability make es a potential source of filler for bulk-quantity, inexpensive, lightweight and low load-bearing composite applications.
references
[1] d.j. kang, k. pal, s.j. park, d.s. bang, j.k. kim, "effect of eggshell and silk fibroin on styrene–ethylene/butylene–styrene as bio-filler". materials and design, 31, 2216-2219, 2010.
[2] w.t. tsai, j.m. yang, c.w. lai, y.h. cheng, c.c. lin, c.w. yeh, "characterization and adsorption properties of eggshells and eggshell membrane". bioresource technology, 97, 488-493, 2006.
[3] f. yi, z.x. guo, l.x. zhang, h. yu, q. li, "soluble eggshell membrane protein: preparation, characterization and biocompatibility". biomaterials, 25, 4591-4599, 2004.
[4] squidoo, the very many uses for eggshells, http://www.squidoo.com/using-egg-shells, 2013.
[5] a.g.j. tacon, "utilisation of chick hatchery waste: the nutritional characteristics of day-old chicks and egg shells". agric. wastes, 4, 335-343, 1982.
[6] p. toro, r. quijada, m. yazdani-pedram, j. luis arias, mater. lett., 61, 4347-4350, 2007.
[7] p. intharapat, a. kongnoo, k. kateungngan, "the potential of chicken eggshell waste as a bio-filler for epoxidized natural rubber (enr) composite and its properties". journal of polymer material, 21, 245-258, 2013.
[8] shimadzu corporation, "particle size distribution dependent on principle of measurement".
[online] available from: http://www.shimadzu.com/an/powder/support/practice/p01/lesson02.html [accessed 3rd april 2013].
[9] b. engin, h. demirtas, m. eken, "temperature effects on egg shells investigated by xrd, ir and esr techniques". radiation physics and chemistry, 75, 268-277, 2006.
[10] y. munusamy, h. ismail, m. mariatti, c.t. ratnam, "ethylene vinyl acetate/natural rubber/organoclay nanocomposites: effect of sulfur and peroxide vulcanization". journal of reinforced plastics and composites, 27, 1925-1945, 2008.
[11] a.c. fraser, m. cusak, the am. microsc. and anal., 53, 23-24, 2002.
[12] m.s. islam, s.r. hamdan, m.d. rahman, i. jusoh, a. ahmed, "treated light tropical hardwoods". bioresources, 6, 737-750, 2011.
[13] c.s. gopinath, s.g. hegde, a.v. ramaswany, s. mahapatra, mater. res. bull., 37, 1324, 2002.
[14] m.n. freire, j.n.f. holanda, "characterization of avian eggshell waste aiming its use in a ceramic wall tile paste".
cerâmica, 52, 240-244, 200.
journal of engineering research and technology, volume 2, issue 1, march 2015
microscopic observation of anaerobic microorganism in a modified anaerobic hybrid baffled (mahb) reactor in treating recycled paper mill effluent (rpme)
siti roshayu hassan 1, nastaein qamaruz zaman 2, irvan dahlan 3
1 school of civil engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, pulau pinang, malaysia, ct_ayu87@yahoo.com
2 school of civil engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, pulau pinang, malaysia, cenastaein@eng.usm.my
3 school of chemical engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, pulau pinang, malaysia, chirvan@usm.my
abstract— the activities of various kinds of microorganisms are the key factor for anaerobic digestion, which produces methane gas. anaerobic digestion is therefore known as an alternative method to convert waste into methane. in the present study, a modified anaerobic hybrid baffled (mahb) reactor with a working volume of 58 l was used to treat recycled paper mill effluent (rpme) wastewater. a mixture of 90% rpme wastewater and 10% anaerobic sludge was used as substrate. the morphologies of the anaerobic microorganisms involved in anaerobic digestion of rpme in the mahb reactor were observed using scanning electron microscopy (sem) and fluorescence microscopy. only small amounts of protozoa and fungi were observed in the mahb reactor system. it was found that the bacteria responsible for biodegradation of biomass inside the mahb reactor were dominant. it was also demonstrated that fast growing bacteria, capable of growing under high substrate levels and reduced ph, were dominant in the front compartments (acidification zone).
towards the end of the reactor, slower growing scavenging bacteria that grow better at higher ph were found. in addition, the populations of methanococcus, methanosaeta and methanosarcina were higher compared to other species of methane-forming bacteria.
index terms— anaerobic microorganism identification, recycled paper mill effluent, biogas
i introduction
anaerobic digestion is the most common process for dealing with wastewater containing sludge. it is widely used as a source of renewable energy. the process produces a biogas consisting of methane, carbon dioxide and traces of other 'contaminant' gases [1]. this biogas can be used directly as fuel in combined heat and power gas engines [2] or upgraded to natural gas-quality biomethane. literature surveys suggest that the anaerobic baffled reactor (abr) is an effective option for onsite sanitation in low-income, waterborne and peri-urban communities [3]. an abr consists of vertical baffles, through and over which the wastewater is forced as it moves from inlet to outlet [4]. although abrs have been extensively used to treat different types of industrial waste, the literature shows a lack of work on the anaerobic treatment of recycled paper mill effluent (rpme) wastewater by a novel modified anaerobic hybrid baffled (mahb) reactor. spatial nutrient concentration gradients resulting from the compartmentalised design of the mahb reactor give rise to microbial consortia that are optimally adapted to the specific conditions in each compartment [3, 5]. previous research demonstrates that fast growing bacteria capable of growing at reduced ph and high substrate levels should dominate the first two compartments, whereas slower growing bacteria dominate near the end of the reactor at higher ph [5]. phase separation provides greater stability to environmental parameters, increased conversion of suspended solids and enhanced efficiency [6].
fluorescence microscopy and scanning electron microscopy (sem) were used to observe the specific microorganism types responsible for converting the biomass of rpme
siti r. hassan, nastaein q. zaman, and irvan dahlan / microscopic observation of anaerobic microorganism (2015)
wastewater, and to provide insight into the mechanisms of anaerobic digestion in the mahb reactor [7]. a previous study indicates that hydrolytic and acidogenic bacteria existed in all compartments of the reactor [8]. using sem, hydrogenotrophic, methanogen-like archaea were observed at the front part of the reactor, including morphotypes resembling methanobacterium sp., methanococcus sp. and methanospirillum sp. the present study was done to observe the microorganisms responsible for converting the biomass of recycled paper mill effluent to biogas in a laboratory-scale mahb reactor, using scanning electron microscopy (sem) and fluorescence microscopy.
ii materials and methods
a reactor design
a modified anaerobic hybrid baffled (mahb) reactor was used to determine the generation of biogas from recycled paper mill effluent (rpme). vertical baffles in each compartment of the mahb reactor enhance solids retention for better substrate accessibility to methanogens. the laboratory-scale unit is shown in fig 1, with a total working volume of 58 l. feeding and effluent tanks were designed for feeding influent to and removing wastewater from the reactor. a gas collection system was used for collection and analysis of the amount of biogas produced. for the purpose of this study, recycled paper mill effluent (rpme) was collected from muda paper mill bhd, bandar tasek mutiara, penang, malaysia and refrigerated at 4 ̊c. prior to analysis, the samples were warmed to room temperature (25 ± 2 ̊c). the seeding and start-up process of the mahb reactor has been reported previously [9].
the mahb reactor was started at a hydraulic retention time (hrt) of 4 days and an organic loading rate (olr) of 0.14 g/l/day.
b microbiological examinations
during steady state, microbial examinations were done to observe the most active and important species in each compartment of the mahb reactor as an anaerobic biological reactor. a scanning electron microscope (sem) and fluorescence microscopy were used to examine the microbial populations involved. sludge samples from each compartment were taken and examined by sem according to standard protocols [10], and images were produced at magnifications between 10x and 20x.
iii result and discussion
observation using the fluorescence microscope shows the existence of protozoa and fungi in the system. however, most of the system consisted of a microorganism population of bacteria, comprising hydrolytic, acetogenic and methanogenic bacteria. figure 2 illustrates the hydrolytic, acetogenic and methanogenic bacteria found in the mahb reactor system. the microscopic images of the hydrolytic, acetogenic and methanogenic bacteria observed were similar to the images previously presented by malakahmad et al. [11]. generally, anaerobic digestion occurs in three stages: hydrolysis, acetogenesis and methanogenesis. from observation through the fluorescence microscope, it is thought that organic polymers of the rpme wastewater, such as carbohydrates, are broken down by extracellular enzymes produced by hydrolytic bacteria (fig 2a). then, during the second stage, the acetogenesis process, monomeric compounds are further converted by acetogenic bacteria (fig 2b) into volatile fatty acids, h2 and co2. lastly, methanogenic bacteria (fig 2c), obligate anaerobes whose growth rate is slower than that of the bacteria in the first and second stages, convert the products of the second stage into methane and other end products.
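as a back-of-envelope check, the quoted reactor volume (58 l), hrt (4 days) and olr (0.14 g/l/day) can be tied together with the standard textbook definitions hrt = v/q and olr = q·s/v; these definitions are an assumption here, since the paper does not state which convention it uses:

```python
# consistency check (assumed definitions: hrt = v/q, olr = q*s/v = s/hrt)
# using the reactor parameters quoted in the text.

volume_l = 58.0        # working volume of the mahb reactor
hrt_d = 4.0            # hydraulic retention time
olr_g_per_l_d = 0.14   # organic loading rate

flow_l_per_d = volume_l / hrt_d            # implied feed flow q = v / hrt
influent_g_per_l = olr_g_per_l_d * hrt_d   # implied feed strength s = olr * hrt
daily_load_g = olr_g_per_l_d * volume_l    # substrate fed to the reactor per day

print(f"feed flow          : {flow_l_per_d:.1f} l/d")
print(f"implied feed conc. : {influent_g_per_l:.2f} g/l")
print(f"daily substrate    : {daily_load_g:.2f} g/d")
```

under these assumed definitions the reactor is fed about 14.5 l/d of wastewater carrying roughly 0.56 g/l of substrate.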
the results indicate that in the front compartments of the mahb reactor (acidification zone), fast growing bacteria capable of growing at reduced ph and high substrate levels were dominant. however, towards the end of the reactor, a shift to slower growing scavenging bacteria that grow better at higher ph was found. similar results were reported by malakahmad et al [11].
figure 1: laboratory scale modified anaerobic hybrid baffled (mahb) reactor
figure 2: three main categories of anaerobic bacteria in the abr system (a) hydrolytic bacteria (b) acetogenic bacteria and (c) methanogenic bacteria
methanogenic bacteria consisted of both gram negative and gram positive types with a diversity of shapes. they grew slowly inside the reactor, with generation times from 2 days at room temperature [11]. methane was derived in two ways: one third of the methane was derived from carbon dioxide reduction by hydrogen, and the other two thirds from acetate conversion by methanogens [12]. figure 3 shows images of the methanogens observed in the biodegradation of rpme. the images of the observed methanogens were compared with previous findings by macario et al [13, 14] by comparing the shapes of the microbes.
iv bacterial population development in the abr system
owing to the unique construction of the mahb reactor, each compartment may comprise a variety of microbial communities. the microbial ecology within each compartment is influenced by external parameters such as temperature and ph, as well as the amount and type of substrate present. figure 4 shows images of two microorganisms, methanosaeta and methanosarcina, which use the same substrate and can coexist in the mahb reactor treating rpme wastewater.
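the competition between these two acetoclastic genera is commonly rationalised with monod kinetics, μ = μmax·s/(ks + s). a small sketch of that picture; the parameter values below are hypothetical placeholders chosen only to reproduce the qualitative trend (methanosarcina: high μmax and ks; methanosaeta: low μmax and ks), not measured values from this study:

```python
# illustrative monod-kinetics sketch of methanosarcina vs methanosaeta
# competition for acetate: mu = mu_max * s / (ks + s).

def monod(mu_max, ks, s):
    """specific growth rate (1/d) at acetate concentration s (mg/l)."""
    return mu_max * s / (ks + s)

# hypothetical parameter sets (assumed for illustration only)
SARCINA = dict(mu_max=0.6, ks=300.0)   # fast grower, poor substrate affinity
SAETA   = dict(mu_max=0.2, ks=30.0)    # slow grower, high substrate affinity

for s in (6.5, 20.0, 500.0):           # low acetate (rear) vs high acetate (front)
    mu_sar = monod(s=s, **SARCINA)
    mu_sae = monod(s=s, **SAETA)
    winner = "methanosarcina" if mu_sar > mu_sae else "methanosaeta"
    print(f"acetate {s:6.1f} mg/l -> {winner} grows faster")
```

with any parameters of this shape, the high-affinity organism wins at the low acetate concentrations reported for the rear compartments, while the high-μmax organism wins at the acetate-rich front, matching the compartment-wise dominance described in the text.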
the images of methanosarcina and methanosaeta were identified using previous findings by pillay et al and lima et al, for the fluorescent and sem images respectively, by comparing the shapes of both microbes: methanosarcina has a coccus shape while methanosaeta has bamboo-shaped rods [8, 15]. methanosarcina-like bacteria were found in the wastewater sample at the front compartment, while methanosaeta was mainly found towards the rear compartments. the bacterial population of the mahb reactor was compared to previous work by yang et al [16], which indicates that the acetate loading in the front compartment favoured growth of methanosarcina. as the methane concentration was high, the concentrations of acetate were also relatively high. this provides the best environmental conditions for methanosarcina, which can grow well at ph as low as 6. subsequent compartments show domination by methanosaeta, as the acetate concentration was as low as 6.5 mg/l. similar results were recorded by polprasert et al. [17], who reported that acetate concentrations as low as 20 mg/l enabled the domination of methanosaeta-like bacteria throughout a four-compartment reactor. in addition, with its high k and ks values, methanosarcina will dominate when acetate concentrations are high, while methanosaeta will dominate at lower acetate concentrations, as its k and ks values are lower [18]. for hydrogen scavenging bacteria, such as methanobrevibacter and methanobacterium, higher hydrogen concentrations will stimulate methane formation.
v conclusion
it is thought that the uniqueness of the mahb reactor design influences the predominance of acetoclastic methanogenic bacteria. observation showed that bacteria were dominant compared to other microbes. the configuration of the abr causes partial delinking of acidogenesis and methanogenesis. theoretically, about two thirds of the methane was derived from conversion of acetate by methanogenic bacteria.
methanospirillum, methanobacterium, methanococcus, methanosarcina and methanosaeta were observed in the biodegradation of rpme wastewater. owing to their capability of activity in an acetate environment, methanosarcina and methanosaeta were more dominant than the other kinds of methane formers in the modified anaerobic hybrid baffled (mahb) reactor.
figure 3: fluorescence image of (a) methanococcus (small cocci), (b) methanobacterium (chain forming rod) and (c) methanospirillum (filamentous)
figure 4: fluorescent image and sem micrograph showing: a) methanosarcina (small coccus microorganism) and b) methanosaeta (bamboo shape microorganism)
acknowledgment
the authors acknowledge the financial support from the universiti sains malaysia (ru-i a/c.1001/pjkimia/814148).
references
[1] nnfcc and t.a. centre, national non-food crops centre, nnfcc renewable fuels and energy factsheet: anaerobic digestion, 2011 [cited 2014-06-26]; available from: http://www.nnfcc.co.uk/publications/nnfcc-renewablefuels-and-energy-factsheet-anaerobic-digestion.
[2] jenbacher, g., e. biogas engines. [cited 2011 26.04]; available from: www.clarke-energy.com.
[3] j. bell, "treatment of dye wastewaters in the anaerobic baffled reactor and characterisation of the associated microbial populations," ph.d thesis, school of chemical engineering, university of natal, 2002.
[4] skiadas, i.v., g. h.n., and g. lyberatos, "modelling of the periodic anaerobic baffled reactor (pabr) based on the retaining factor concept". water research, vol. 34, no. 15, 3725-3736, 2000.
[5] barber, w.p. and d.c. stuckey, "the use of the anaerobic baffled reactor (abr) for wastewater treatment: a review". water research, vol. 33, no. 7, 1559-1578, 1999.
[6] barber, w.p. and d.c. stuckey, "the influence of start-up strategies on the performance of an anaerobic baffled reactor". environmental technology, vol. 19, no.
5, 489-501, 1998.
[7] foxon, k.m., c.a. buckley, c.j. brouckaert, p. dama, z. mtembu, n. rodda, m.t. smith, s. pillay, n. arjun, t. lalbahadur, and f. bux, "evaluation of the anaerobic baffled reactor for dense peri-urban settlements", wrc report k, 2005.
[8] pillay, s., k. foxon, n. rodda, m. smith, and c. buckley, "microbiological studies of an anaerobic baffled reactor (abr)," unpublished.
[9] hassan, s.r., h.m. zwain, n.q. zaman, and i. dahlan, "recycled paper mill effluent treatment in a modified anaerobic baffled reactor: start-up and steady-state performance", environmental technology, vol. 35, no. 3, 294-299, 2013.
[10] mepham, b.l., theory and practice of histological techniques, in the journal of pathology, j.d. bancroft and a. stevens, editors, 1991, john wiley & sons, ltd.: churchill livingstone, edinburgh, p. 281.
[11] malakahmad, a., s.m. zain, n.e. ahmad basri, s.r. mohamed kutty, and m.h. isa, "identification of anaerobic microorganisms for converting kitchen waste to biogas", world academy of science, engineering and technology, vol. 60, 2009.
[12] cavinato, c., "anaerobic digestion fundamentals," unpublished.
[13] macario, e.c.d. and a.j.l. macario, methanogen photo gallery: methanococcus. [cited 2014 25.06.14]; available from: http://www.wadsworth.org/resnres/bios/macario/slide03_m-oltae_i_10_05.html.
[14] macario, e.c.d. and a.j.l. macario, methanogen photo gallery: methanobacterium. [cited 2014 25.06.14]; available from: http://www.wadsworth.org/resnres/bios/macario/methanobacterium_bryantii.html.
[15] lima, c.a.a., r. ribeiro, e. foresti, and m. zaiat, "morphological study of biomass during the start-up period of a fixed-bed anaerobic reactor treating domestic sewage". brazilian archives of biology and technology, vol. 48, 841-849, 2005.
[16] yang, x., g. garuti, r. farina, v. parisi, and a.
tilche, "process differences between a sludge bed filter and an anaerobic baffled reactor treating soluble wastes," proceedings of the 5th international symposium on anaerobic digestion, bologna, italy, 1988.
[17] polprasert, c., ed., organic waste recycling, technology and management, chichester: john wiley and sons publication, 1996.
[18] conklin, a., h.d. stensel, and j. ferguson, "growth kinetics and competition between methanosarcina and methanosaeta in mesophilic anaerobic digestion". water environment research, vol. 78, no. 5, 486-496, 2006.
siti r. hassan. phd student, school of civil engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, seberang perai selatan, pulau pinang, malaysia. h/p: +6013-9987142 e-mail: ct_ayu87@yahoo.com
nastaein q. zaman. senior lecturer, phd, school of civil engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, seberang perai selatan, pulau pinang, malaysia. tel: 04-5995999 ext. 6287 fax: 04-5941009 e-mail: cenastaein@usm.my
irvan dahlan. phd, amicheme, senior lecturer, school of chemical engineering, universiti sains malaysia, engineering campus, seri ampangan, 14300 nibong tebal, seberang perai selatan, pulau pinang, malaysia. tel: +604-5996463 h/p: +6012-5754660 fax: +604-5941013 e-mail: chirvan@usm.my
ices5 proceeding-pp. 1-3 gaza, 4-5 november 2014
journal of engineering research and technology, volume 2, issue 1, march 2015
monte carlo potts simulation of grain growth of solid grains dispersed in a liquid matrix
rifa j. el-khozondar 1, hala j. el-khozondar 2
1 physics department, al-aqsa university, p.o. box 4051, gaza, palestine, email: rifa20012002@yahoo.com
2 electrical engineering department, islamic university of gaza, p.o. box 108, email: hkhozondar@iugaza.edu
abstract— liquid phase sintering is a process in which solid grains coexist with a liquid matrix. this process has important applications in the processing of several engineering materials.
examples of these applications are high-speed metal cutting tools, alumina substrates for packaging silicon chips, and barium titanate electrical capacitors. grain growth in liquid phase sintered materials occurs by ostwald ripening. the purpose of this paper is to develop a monte carlo potts model to simulate ostwald ripening in liquid-phase sintered materials. ostwald ripening is simulated by treating the two phases, solid grains dispersed in a liquid matrix, as a two-dimensional square array of sites. each site of the solid-phase grains (phase a) is given a random positive number between 1 and q, where q=100 for all the simulations. the sites of the liquid phase (phase b) are assigned only one negative number, qb = -1. it is found that the grain growth is controlled by volume diffusion for volume fractions of the solid grains ranging from 40% to 90%. the grain growth exponent has the value n=3, in agreement with the theoretical value for ostwald ripening.
index terms— monte carlo, potts model, grain growth, grain size, sintering, ostwald ripening.
i. introduction
practically all engineering ceramics are sintered with a liquid phase, which is a viscous glass. in this state, the solid grains solubilize in the liquid, causing wetting of the solid. consequently, an emergent capillary force attracts the grains together. thus, liquid phase sintering has essential applications in materials processing and in enhancing material properties for commercial applications. the microstructure of liquid phase sintered materials evolves with time by ostwald ripening. ostwald ripening is a process in which small grains in a liquid matrix decrease in size until they vanish while large grains grow; the mean size increases with time while the number of grains decreases. in such a process, the reduction of interfacial area drives diffusional mass transport from areas of high interfacial energy to areas of low interfacial energy.
lifshitz, slyozov and wagner [1,2] theoretically investigated the ostwald ripening process, in what is named lsw theory, for volume fractions close to zero. they assumed that the second phase grains are spherical and that the mean distance between the grains is large compared with their dimensions. this implies that the interaction between the grains may be ignored and that the volume fraction of the dispersed phase is very small. the lsw theory predicted that in the long time limit the average grain radius asymptotically changes with time as t^(1/3). the lsw theory has provided a way to find the interfacial energy between the matrix and the dispersed phase. the lsw approach is difficult to test by experiment or computer simulation because it is limited to zero volume fraction of the growing phase, while experiments investigate finite volume fractions. several authors [3-6] have made efforts to modify the lsw theory to study the effect of volume fraction on the growth behavior. they found that the variation of grain size with time does not depend on the volume fraction. the mass transfer driven by ostwald ripening may significantly change the microstructure of two-phase materials. this change in the microstructure happens by means of small grains shrinking and transferring their mass to large grains. consequently, the average grain size increases with time and the number of second phase grains drops with time. the microstructure of a material affects its mechanical properties; therefore, it is very important to understand the change of materials' microstructure to improve their performance in industrial applications. because ostwald ripening controls materials' microstructure, we have to study the ostwald ripening process to understand the microstructural evolution of materials. the theoretical methods used to model grain growth [7] and ostwald ripening [1,2] made many simplifying assumptions to make the problem easy to understand.
to diminish these simplifying assumptions, several analytical models have been developed [8-20]. the main purpose of this paper is to develop a numerical approach for simulating ostwald ripening in a system with conserved volume fractions. it is based on our monte carlo potts computer simulation model for grain growth in two-phase systems [10-11]. not only does this monte carlo potts model permit one to display the microstructural evolution during ostwald ripening, but it also gives information about average grain size and grain size distribution. the emphasis will be on the microstructural evolutions and on their comparison to experimental observations. furthermore, the variation of grain size with time during ostwald ripening will be discussed. for ostwald ripening of second phase grains embedded in a matrix, it is shown that the mean grain size varies with time as t^(1/n), where n is the grain growth exponent. the growth exponent for ostwald ripening is n=3 in the case of volume diffusion controlled growth [1,2]. based on the computer simulation results, the growth exponent will be determined. the next section covers the simulation method. section iii is devoted to the ostwald ripening simulation, followed by the conclusion.
ii. simulation method
the monte carlo potts model implemented by solomatov et al. [11] to study grain growth in two-phase systems was developed further to study grain growth in a liquid phase. the structure of the solid phase in the liquid phase is initialized using a 400×400 square array of sites. a site represents a domain with a specific orientation of the crystalline lattice. the orientation of an individual grain is described with the help of a spin. the initial microstructure used in this study was generated by randomly occupying the lattice with the desired volume fractions of phase a and phase b.
phase a represents the solid polycrystalline material and phase b is the liquid matrix. each site of phase a is given a random number between 1 and q, while the sites of phase b are all given one state represented by the negative number -1. the two phases were produced by defining boundary energies between sites of phase a and sites of phase b so that the components can separate into two phases. eab is the solid-liquid (a-b) interfacial energy, and eaa is the grain boundary energy between two solid grains (a-a) of different orientation. the liquid phase cannot have liquid-liquid interfaces in the simulations, so ebb=0. the total interfacial energy of the system is given by

e_total = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{n} e(i,j) [1 − δ(q_i, q_j)]   (1)

where e(i,j) is the boundary energy between sites i and j, q_i is the spin of the i-th site (q_i = -1 for liquid sites), N is the total number of lattice sites, j runs over the n nearest neighbours of site i, and δ is the kronecker delta function. the interfacial energy eab is higher than eaa or ebb. the total numbers of sites of phase a and phase b are kept constant. the microstructure evolves as a result of the ostwald ripening process, which transfers mass from one grain of phase a to another grain of the same phase through the liquid matrix (phase b). this is done numerically as follows. a site is randomly selected, and one of its eight nearest neighbors is randomly selected. if the two sites belong to different phases, the site of phase a is flipped to a new orientation, randomly selected from 1 to q. then, the a- and b-sites are allowed to exchange their spins. to check whether the new orientation is accepted or rejected, the exchange probability function determined by boltzmann statistics is used:

p(δe) = 1 for δe ≤ 0; p(δe) = exp(−δe / kT) for δe > 0   (2)

where k is the boltzmann constant and t is the temperature. a random number between 0 and 1 is selected. if the number is less than p(δe), the move will be accepted; otherwise, it will be rejected.
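a minimal, self-contained sketch of the energy bookkeeping and metropolis acceptance rule just described (not the authors' code): it attempts a reorientation flip of a randomly chosen solid site over an 8-neighbour periodic lattice, and for brevity it omits the a-b spin exchange step; lattice size, seed and the demonstration volume fraction are chosen here only for illustration.

```python
# sketch of one monte carlo step of the two-phase potts model: pairwise
# boundary energies as in equation (1), acceptance rule as in equation (2).

import math
import random

Q = 100          # solid orientations 1..q, as in the paper
LIQUID = -1      # phase-b spin
E_AA, E_AB, E_BB = 2.5, 1.0, 0.0   # boundary energies from the simulation section
KT = 1.3         # temperature parameter t = 1.3 quoted in the text (units of k)

def pair_energy(qi, qj):
    if qi == qj:
        return 0.0                   # same orientation: no boundary
    if qi == LIQUID and qj == LIQUID:
        return E_BB                  # no liquid-liquid interfaces
    if qi == LIQUID or qj == LIQUID:
        return E_AB                  # solid-liquid interface
    return E_AA                      # solid-solid grain boundary

def site_energy(lattice, n, i, j):
    """sum of boundary energies between site (i,j) and its 8 periodic neighbours."""
    e = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            e += pair_energy(lattice[i][j], lattice[(i + di) % n][(j + dj) % n])
    return e

def mc_step(lattice, n, rng):
    """attempt one orientation flip of a random solid site; metropolis rule."""
    i, j = rng.randrange(n), rng.randrange(n)
    if lattice[i][j] == LIQUID:
        return False                 # liquid spins are never reoriented
    old_spin = lattice[i][j]
    lattice[i][j] = rng.randint(1, Q)
    delta_e = site_energy(lattice, n, i, j)
    lattice[i][j], new_spin = old_spin, lattice[i][j]
    delta_e = delta_e - site_energy(lattice, n, i, j)
    if delta_e <= 0 or rng.random() < math.exp(-delta_e / KT):
        lattice[i][j] = new_spin     # accept the flip
        return True
    return False                     # reject: old spin already restored

# tiny demonstration lattice: roughly 60% solid, 40% liquid on a 20x20 grid
rng = random.Random(1)
N = 20
grid = [[rng.randint(1, Q) if rng.random() < 0.6 else LIQUID for _ in range(N)]
        for _ in range(N)]
accepted = sum(mc_step(grid, N, rng) for _ in range(N * N))
print(f"accepted {accepted} of {N*N} attempted flips")
```

note that only the selected site's eight pair energies enter δe, so each attempt is o(1) regardless of lattice size, which is what makes 400×400 runs over many monte carlo steps tractable.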
after each attempt, the time is incremented by 1/n, where n is the total number of lattice sites. the time is given in units of monte carlo steps (mcs); one mcs represents n attempted reorientations.
iii. simulation of ostwald ripening
the microstructural evolution of ostwald ripening is controlled by the ratio of the solid-solid and solid-liquid boundary energies, eaa and eab. therefore, ostwald ripening was simulated using the following values: eab=1.0, ebb=0.0, eaa=2.5, q=100, and grain volume fractions ranging from 40% to 90%. figure 1 displays the simulated microstructures at several times for different volume fractions of solid grains (phase a), at a value of t equal to 1.3. the solid grains are white and the liquid phase is gray. these microstructures are quite similar to experimental results [21]. evidence of ostwald ripening is the decrease of the number of grains and the increase of the mean grain size with time. this evolution in microstructure occurs because small particles eventually shrink and disappear, and their atoms transfer to large grains by volume diffusion. figure 2 exhibits the dependence of the mean grain size on time. the mean grain size and the standard deviation are calculated after running each case ten times. the standard deviation is not indicated because it is found to be very small. as can be seen from figure 2, the system goes through a transitional regime and eventually approaches the self-similar regime after a time between t ≈ 10^3 and 10^4, depending on the solid fraction. we can find the value of the growth exponent by taking the slope of the curves shown in figure 2. the growth exponent is very sensitive to changes in time; therefore, the slope is calculated in a time window of width equal to 5. the slope is plotted as a function of time in figure 3.
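the growth-exponent estimate just described amounts to fitting the slope of log(mean grain size) against log(time); for ostwald ripening, r ~ t^(1/3), so the slope should come out near 1/3. a short sketch of that fit on synthetic self-similar data (the prefactor and time window here are illustrative, not the paper's values):

```python
# least-squares slope of log(size) vs log(time): for r = c * t^(1/n)
# the fitted slope is 1/n, the growth exponent of the text.

import math

def log_log_slope(times, sizes):
    """ordinary least-squares slope of log(size) against log(time)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic self-similar regime: r = 2 * t^(1/3) over t ~ 10^3..10^4 mcs
times = [1000 * k for k in range(1, 11)]
sizes = [2.0 * t ** (1.0 / 3.0) for t in times]
slope = log_log_slope(times, sizes)
print(f"estimated growth exponent 1/n = {slope:.3f}")
```

on real simulation data the slope is taken within a sliding window, as the text notes, because early transient times would otherwise bias the fit away from the self-similar value.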
It can be seen from Figure 3 that the slope in the self-similar regime is close to 1/3, which is the value predicted for Ostwald ripening of solid grains in a liquid matrix, where grain growth is controlled by long-range diffusion and the typical diffusion distance is the separation distance between the solid grains.

IV. Conclusion

The Monte Carlo Potts model has been developed to simulate Ostwald ripening by choosing suitable parameters. A two-phase system composed of solid grains in a liquid matrix was simulated by specifying the values of the solid-solid and liquid-solid boundary energies. It is found that the microstructural evolution of the two phases reaches a self-similar regime in which a power-law relationship is obtained. The grain growth exponent is close to n = 3 (i.e. a slope of 1/3 in Figure 3), in agreement with the theoretical value for Ostwald ripening.

References

[1] I. M. Lifshitz, V. V. Slyozov, "The kinetics of precipitation from supersaturated solid solutions," J. Phys. Chem. Solids, vol. 19, pp. 35-50, 1961.
[2] C. Wagner, "Theorie der Alterung von Niederschlägen durch Umlösen (Ostwald-Reifung)," Z. Elektrochem., vol. 65, pp. 581-591, 1961.
[3] A. J. Ardell, "On the coarsening of grain boundary precipitates," Acta Metall., vol. 20, pp. 601-609, 1972.
[4] P. W. Voorhees, M. E. Glicksman, "Solution to the multi-particle diffusion problem with applications to Ostwald ripening II: Computer simulations," Acta Metall., vol. 32, pp. 2013-2030, 1984.
[5] J. A. Marqusee, J. Ross, "Theory of Ostwald ripening: Competitive growth and its dependence on volume fraction," J. Chem. Phys., vol. 80, pp. 536-543, 1984.
[6] M. Tokuyama, K. Kawasaki, "Statistical-mechanical theory of coarsening of spherical droplets," Physica A, vol. 123, pp. 386-411, 1984.
[7] M. Hillert, "On the theory of normal and abnormal grain growth," Acta Metall., vol. 13, pp. 227-231, 1965.
[8] R. El-Khozondar, H. El-Khozondar, "Numerical modeling of microstructural evolution in three-phase polycrystalline materials," Arabian Journal for Science and Engineering, vol. 34, pp. 241-252, 2009.
[9] R. El-Khozondar, H. El-Khozondar, G. Gottstein, A. Rollett, "Microstructural simulation of grain growth in two-phase polycrystalline materials," Egypt. J. Solids, vol. 29, pp. 35-47, 2006.
[10] R. El-Khozondar, H. El-Khozondar, "Numerical simulations of coarsening of lamellar structures: Applications to metallic alloys," Egypt. J. Solids, vol. 27, pp. 189-199, 2004.
[11] V. S. Solomatov, R. El-Khozondar, V. Tikare, "Grain size in the lower mantle: Constraints from numerical modeling of grain growth in two-phase systems," Phys. Earth Planet. Inter., vol. 129, pp. 265-282, 2002.
[12] V. Tikare, J. D. Cawley, "Application of the Potts model to simulation of Ostwald ripening," J. Am. Ceram. Soc., vol. 81, pp. 485-491, 1998.
[13] K. W. Mahin, K. Hanson, J. W. Morris Jr., "Comparative analysis of cellular and Johnson-Mehl microstructures through computer simulation," Acta Metall., vol. 28, pp. 443-453, 1980.
[14] H. J. Frost, C. V. Thompson, "The effect of nucleation conditions on the topology and geometry of two-dimensional grain structures," Acta Metall., vol. 35, pp. 529-540, 1987.
[15] O. Ito, E. R. Fuller Jr., "Computer modelling of anisotropic grain microstructure in two dimensions," Acta Metall., vol. 41, pp. 191-198, 1993.
[16] E. Schüle, "A justification of the Hillert distribution by spatial grain growth simulation performed by modifications of Laguerre tessellations," Comput. Mater. Sci., vol. 5, pp. 277-285, 1996.
[17] H. J. Frost, C. V. Thompson, C. L. Howe, J. Whang, "A two-dimensional computer simulation of capillarity-driven grain growth: Preliminary results," Scr. Metall., vol. 22, pp. 65-70, 1988.
[18] E. A. Ceppi, O. B. Nasello, "Computer simulation of bidimensional grain growth," Scr. Metall., vol. 12, pp. 1221-1225, 1978.
[19] D. Fan, L.-Q. Chen, "On the possibility of spinodal decomposition in zirconia-yttria alloys: A theoretical investigation," J. Am. Ceram. Soc., vol. 78, pp. 1680-1686, 1995.
[20] Z. Nikolic, W. Huppmann, "Computer simulation of chemically driven grain growth during liquid phase sintering," Acta Metall., vol. 28, pp. 475-479, 1980.
[21] P. W. Voorhees, "Ostwald ripening of two-phase mixtures," Annu. Rev. Mater. Sci., vol. 22, pp. 197-215, 1992.

Figure 1. Simulated microstructures of solid-liquid systems with solid fractions of 40%, 50%, 60%, 70%, 80%, and 90%, shown at t = 9000, 90000, and 900000 MCS. Solid grains (phase A) are white and the liquid (phase B) is gray.

Figure 2. The change of the mean grain size with time, for volume fractions of solid grains from 40% to 90% as indicated.

Figure 3. The grain growth exponent as a function of time for various fractions of the solid grains.

Rifa J. El-Khozondar is an associate professor at the Physics Department of Al-Aqsa University. Her main research interests are materials science, numerical methods, and waveguide sensors. She has attended several national and international conferences and serves on the editorial boards of several international journals.
She has also been awarded several scholarships, including the DAAD and the Alexander von Humboldt-Stiftung scholarships.

Hala J. El-Khozondar is a professor at the Electrical Engineering Department of the Islamic University of Gaza, Palestinian Territory. She held a postdoctoral award at the Max Planck Institute in Heidelberg, Germany, in 1999. She is a fellow of the World Academy of Sciences (FTWAS) and of its Arab regional office (FTWAS-ARO), and serves on the editorial boards of national and international journals. Her research covers a broad spectrum including optical fibres, wireless communication, optical communication, nonlinear optics, optical fiber sensors, magneto-optical isolators, optical filters, MTM devices, biophysics, electro-optical waveguides, and numerical simulation of microstructural evolution of polycrystalline materials. She is a recipient of international awards and recognitions, including a Fulbright scholarship, a DAAD short study visit, an Alexander von Humboldt-Stiftung scholarship, Erasmus Mundus, the Islamic University Deanery Prize for applied sciences, the TWAS Medal Lecture (2014), and the ISESCO Prize (2014).

Journal of Engineering Research and Technology, Volume 1, Issue 4, December 2014

The Potential of Thermal Insulation as an Energy-Efficient Design Strategy in the Gaza Strip

Omar S. Asfour (1), Emad Kandeel (2)
(1) Associate Professor, Dept. of Architecture, IUG, Palestine, oasfour@iugaza.edu, o.asfour@hotmail.com
(2) Master Graduate, Dept. of Architecture, IUG, Palestine, kandeele2012@hotmail.com

Abstract—Global consumption of energy is increasing over time as a result of the continuous increase in population and urbanism. This includes energy consumed in buildings in both the construction and operation stages, where a significant amount of energy is consumed in heating and cooling. In fact, most buildings in the Gaza Strip are constructed without thermal insulation.
This has resulted in an increasing reliance on mechanical means of cooling and heating to maintain the thermal comfort of building occupants. Thus, this study carries out a numerical assessment of thermal insulation as an energy-efficient design strategy, considering the case of Gaza. This has been done through computerised thermal modelling of a typical residential building in Gaza. The study concluded that the good use of thermal insulation in walls and roofs can effectively reduce undesired heat gains and losses through the building fabric, which helps reduce occupants' thermal discomfort throughout the year by about 17%. In this regard, the use of an air cavity as thermal insulation in a double wall was found to be more effective and feasible than the use of polystyrene insulation.

Index Terms—thermal insulation; thermal comfort; energy; residential buildings; Gaza.

I. Introduction

The recent climatic changes are believed to be directly related to the increasing consumption of fossil fuels. The role of buildings is fundamental here, as 50% of global resources go into construction and 45% of energy production is used in buildings [1]. Thus, buildings have great potential for energy savings through the implementation of passive design techniques. In this context, there are several design approaches: for instance, it is possible to reduce the rate of energy consumption to a reasonable level, or to partially or fully replace external energy sources by self-reliance on energy. Unfortunately, passive design techniques in buildings are not exploited effectively. For example, Gaza is a highly populated region that suffers from a severe shortage of conventional energy sources; nevertheless, an increasing number of buildings in Gaza have started to depend on air conditioning to maintain acceptable indoor conditions.
This consumes a great deal of energy and pollutes the environment with noise and harmful emissions such as CO2 and CFCs. One passive design technique that could limit this problem is thermal insulation. The thermal behaviour of a building is highly affected by the design of its envelope, i.e. all external elements that are in contact with the external environment, such as roofs, walls, and windows. The use of thermal insulation helps reduce unwanted heat losses and gains; this limits heating and cooling loads and provides a healthy and comfortable indoor environment.

There are numerous alternatives when it comes to choosing insulation materials. In general, insulation materials fall into three main categories [2]:
- Inorganic/mineral: products based on silicon and calcium, such as glass and rock.
- Synthetic organic: materials derived from organic feedstocks based on polymers.
- Natural organic: vegetation-based materials like hemp and lamb's wool.

This requires a good selection of building materials with appropriate thermal properties. These properties include thermal transmittance (U-value), which measures how readily a material allows heat to pass through; thermal insulators are usually characterized by relatively low thermal transmittance. They also include thermal lag, measured in hours, which is the time taken for a heat wave to pass from one side of a building's external element to the other; thermal insulators are usually characterized by relatively high thermal lag.

Several studies based on different methodologies and tools have been carried out to assess the effect of thermal insulation on buildings' operational energy demand. The main aim of these studies is either to quantify the effect of thermal insulation in a specific climate or location, or to assess the efficiency of a specific material or building component as a thermal insulator. In this regard, Shi and Yang [3] discussed the importance of integrated and comprehensive performance evaluation, to give more value to the conventional architectural design method based on space, form, and function, which also needs to be based on scientifically sound performance analysis. Ozel [4] investigated the optimum insulation thickness of building walls, considering five different structure materials and the climatic conditions of Elazığ, Turkey, determined before and after the use of thermal insulation. Extruded and expanded polystyrene were examined using the yearly cooling and heating transmission loads as performance indicators, calculated using an implicit finite difference method under steady periodic conditions. Depending on the structure material and the insulation material, results showed that the optimum insulation thickness varies between 2 and 8.2 cm, and the payback period between 1.32 and 10.33 years. Fang et al. [5] investigated the effect of building envelope insulation on cooling energy consumption during the Chinese summer using two experimental chambers, constructed to evaluate the effects of external wall insulation on energy consumption and the indoor thermal environment. Air conditioning power consumption was recorded using a power meter. Results showed that the indoor thermal environment of the insulated chamber was less affected by the outdoor environment than that of the basic chamber; the insulated chamber offered savings of up to 23.5% in air conditioning energy consumption during summer. Shoubi et al. [6] carried out a study to assess various combinations of materials that help reduce energy demand in buildings.
This was done with reference to the bungalow house type and the tropical climatic conditions prevailing in Malaysia. Ecotect software was used to estimate the annual energy consumption of the baseline and improved designs. Results indicated that the use of alternative insulation materials in the building envelope, such as double-brick walls, could reduce this consumption by about 28%. One important reference in this regard is the Palestinian code of low-energy buildings. According to this code, the maximum total U-value should be 1.8-2.2 W/m2K for ceilings and floors, and 2.5 W/m2K for walls [7]. The code also offers several design options for the building envelope in which thermal insulation is used; however, these options are not numerically justified. Thus, the use of thermal insulation as an energy-efficient design strategy in Gaza, and its effect on thermal comfort, is numerically examined in the following modelling study.

II. Methodology

In this study, computer simulation of several cases has been implemented using Ecotect 5.5 as a thermal simulation tool. Ecotect is user-friendly software that is widely used for thermal modelling of buildings. It has a CAD interface and several thermal analysis tools that allow assessing a building's thermal performance during the design stage. Thermal performance modelling has been carried out first for the reference case as a benchmark, and then for the thermally insulated cases. All factors except the thermal insulation of the building's walls and final roof have been kept fixed, to ensure a fair comparison that isolates the effect of this insulation. Discomfort status, measured in degree hours, has been used as the indicator of thermal performance, since improving the thermal comfort of building users should be the ultimate aim of the design. The prototype chosen is a five-storey residential building that represents a common residential building type in Gaza.
This type is common as a response to the extended family culture and the increasing price of urban land. Moreover, most electricity consumption in Gaza (about 70%) goes to the domestic sector [8], which means that a great deal of energy could be saved by implementing passive design strategies. Each floor of the modelled case accommodates four flats with different orientations. Simplicity of building design has been observed, and a sufficient area of 160 m2 has been provided to accommodate six members, reflecting the average Palestinian family size [9]. Each flat consists of three bedrooms, a living room, a dining room, two toilets, and a kitchen. The flats are vertically linked by a common staircase. As shown in Figure 1, the building has been three-dimensionally modelled. Its long axis has been oriented north-south, which reduces its exposure to the southern sun. Ecotect reads a building as thermal (or non-thermal) zones, which should be observed in the modelling process. Thus, each of the twenty flats in the building has been assumed to be a single thermal zone, with the internal partitions neglected due to their relatively low thermal mass. This results in twenty thermal zones, named according to their position; for example, the ground floor includes four flats: GF-A, GF-B, GF-C, and GF-D, and so on.

Figure 1. Three-dimensional view of the modelling case

III. Modelling Cases Setup

A. Building Materials

Ecotect thermally reads building components according to their type: floors, walls, roofs, ceilings, windows, etc. For each element, several building material options are available. These materials, along with their thermal properties, are available in the program's material library. The relevant properties include the U-value, solar absorption, and thermal lag.
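The U-value introduced above translates directly into steady-state heat flow through an envelope element, Q = U x A x dT. The sketch below illustrates this; the wall U-values (2.3 W/m2K uninsulated, 1.5 W/m2K for a cavity double wall) are the values this paper quotes later, while the wall area and temperatures are invented for illustration.

```python
# Steady-state conductive heat flow through an envelope element:
# Q = U * A * (T_out - T_in), in watts (positive = heat gain).
def heat_flow_w(u_value, area_m2, t_in, t_out):
    """U-value in W/m2K, area in m2, temperatures in degrees C."""
    return u_value * area_m2 * (t_out - t_in)

wall_area = 30.0           # m2 of exposed wall (assumed for illustration)
t_in, t_out = 26.0, 36.8   # comfort upper limit vs the paper's hottest-day peak

q_plain = heat_flow_w(2.3, wall_area, t_in, t_out)    # typical uninsulated wall
q_cavity = heat_flow_w(1.5, wall_area, t_in, t_out)   # double wall with air cavity
print(round(q_plain, 1), round(q_cavity, 1))          # 745.2 486.0
```

Lowering the U-value from 2.3 to 1.5 W/m2K cuts the steady-state gain through this wall by about a third, which is the mechanism the modelling study quantifies. Note that this steady-state picture ignores thermal lag, which is why Ecotect's dynamic simulation is used in the study itself.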
The most common construction system in the residential buildings of Gaza is the structural system (reinforced concrete foundations, columns, and ceilings). In this study, building materials are defined to match the most common ones in Gaza, in order to explore the thermal performance of the reference building and decide whether it needs improvement. As mentioned above, the Ecotect material library offers a wide range of building materials; however, additional materials have been created in this study to match the common building materials in Gaza. The thermal properties of these additional materials have been obtained using the thermal properties calculator integrated in Ecotect and the Palestinian code of energy-efficient buildings [7]. The materials used in the reference modelling case, also illustrated in Figure 2, are described below.

1. External walls
Most commonly, walls in Gaza are made of hollow concrete blocks with thin layers of cement plastering applied internally and externally. A typical section of an external wall shows 20 cm hollow concrete blocks, with 1-1.5 cm of internal plaster and 2-3 cm of external plaster. Thermal properties: U-value: 2.3 W/m2K; admittance: 4.4 W/m2K; decrement factor: 0.3; time lag: 7.4 hrs.

2. Ceiling
The typical ceiling section shows three parts: an 8 cm layer of reinforced concrete, a 17 cm layer of hollow concrete blocks, and a 1 cm layer of plastering. In Ecotect, the internal floors between flats are assumed to be the same, but with these layers in reverse order. Thermal properties: U-value: 2.6 W/m2K; admittance: 4.9 W/m2K; decrement factor: 0.4; time lag: 6.8 hrs.

3. Glazing
Windows are important parts of the building envelope since they provide both lighting and ventilation. A typical single-glazed window with an aluminium frame is assumed here.
Thermal properties: U-value: 5.5 W/m2K; admittance: 5.5 W/m2K; solar heat gain coefficient: 0.9.

B. Zone Thermal Settings

The second step after defining building materials is to define the thermal settings of each thermal zone. This includes the following:

1. Estimation of internal heat gains
Internal heat gains in a thermal zone may result from its occupants, lighting, or the appliances used in it; gains due to lighting and appliances are called the sensible gains. As for occupants, six occupants are assumed in each zone to reflect the average Palestinian family size [9], all in "sedentary" mode. Thus, the total heat gain due to occupants is 70 W x 6 persons = 420 W. Note that in Ecotect it is only required to specify the number of occupants; the software then calculates the resulting heat gain. To specify the sensible gains, the heat gains due to lighting and appliances should be estimated. For lighting, the heat gain due to energy-efficient lighting in residential buildings is about 11 W/m2 [10]. For appliances, Table 1 shows an assumption of the appliances usually used in residential buildings, together with their operation times. The total heat emission due to this electric equipment is 620 W; given that each thermal zone (flat) has a floor area of 160 m2, the heat gain due to equipment is about 4 W/m2. Thus, the sensible heat gains in the thermal zone settings are defined as: gains due to lighting + gains due to appliances = 11 + 4 = 15 W/m2.

Figure 2. Main building materials used in the reference modelling case

2. Estimation of ventilation rate
The HVAC system for all zones is assumed to be natural ventilation.
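The internal-gain arithmetic above can be checked with a short script. All inputs are the paper's stated assumptions: six sedentary occupants at 70 W each, 11 W/m2 for efficient lighting, the Table 1 appliance list, and a 160 m2 flat.

```python
# Appliance heat emission: rated watts scaled by the assumed duty cycle.
appliances = {                     # name: (rated watts, operation-time fraction)
    "refrigerator":    (60, 0.75),
    "washing machine": (1000, 0.10),
    "oven":            (2000, 0.05),
    "microwave":       (2000, 0.05),
    "kettle":          (2000, 0.05),
    "tv":              (150, 0.50),
    "pc":              (200, 0.50),
}

occupant_gain_w = 6 * 70                                      # sedentary occupants
appliance_gain_w = sum(w * duty for w, duty in appliances.values())
floor_area_m2 = 160.0
# lighting (11 W/m2) + appliances per unit floor area, rounded as in the paper
sensible_gain_wm2 = 11 + round(appliance_gain_w / floor_area_m2)

print(occupant_gain_w, round(appliance_gain_w), sensible_gain_wm2)  # 420 620 15
```

This reproduces the paper's figures: 420 W of occupant gains, 620 W of appliance emission (about 4 W/m2 over 160 m2), and a total sensible gain of 15 W/m2.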
A ventilation rate of 18 litres of fresh air per second is required for every person in residential buildings [11]. With six persons per zone, the required volumetric flow rate is 108 l/s, i.e. 108 x 3600 / 1000 = 388.8 m3/h. Given that the volume of the zone is 160 m2 x 3 m = 480 m3, the required air change rate is 388.8 / 480 = 0.81 ach, or approximately 1 ach. In Ecotect, this value represents the average uncontrolled air leakage (infiltration) in typical construction, which provides a sufficient ventilation rate in winter when windows are closed. In summer, the software assumes that windows are opened to provide natural ventilation whenever the external conditions are thermally acceptable.

3. Schedules management
The schedules feature is used in Ecotect to control some of the variables in the thermal settings of zones; for example, it can switch off the lights overnight, which reduces the resulting sensible heat. In this study, two schedules are assumed:
- The occupancy schedule: on weekdays, full occupancy from 12 am to 8 am, 30% from 8 am to 2 pm (the working time), and 70% from 2 pm to 12 am. On weekends, 70% occupancy is assumed from 2 pm to 12 am, and full occupancy for the rest of the day.
- The sensible gains schedule: 70% of the reference value of sensible gains (15 W/m2) from 8 am to 8 pm, and 30% for the rest of the day, i.e. overnight.

C. Climatic Data
To carry out any thermal analysis using Ecotect, it is essential to specify the city in which the building is located by loading that city's climatic data file from the program directory. In fact, only a limited number of climate data files are available in the program. As there is no climatic data file for Gaza, the Al-Arish file has been relied upon, due to the similarity between the two cities.
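The air-change arithmetic in the ventilation subsection above can be spelled out as follows, using the paper's figures: 18 l/s of fresh air per person [11], six persons per zone, and a 160 m2 x 3 m flat.

```python
# Required air change rate (ach) from a per-person fresh-air requirement.
persons = 6
fresh_air_lps = 18 * persons                  # litres per second for the zone
flow_m3_per_h = fresh_air_lps * 3600 / 1000   # l/s -> m3/h
zone_volume_m3 = 160 * 3.0                    # floor area x ceiling height

ach = flow_m3_per_h / zone_volume_m3
print(fresh_air_lps, round(flow_m3_per_h, 1), round(ach, 2))   # 108 388.8 0.81
```

The result, 0.81 ach, is what the paper rounds up to the 1 ach infiltration value used in the Ecotect zone settings.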
Al-Arish is a coastal city in Egypt close to Gaza; both cities are located at latitude 31 N. Table 2 compares the monthly average temperatures of the two cities. It is now possible to start the thermal analysis of the building, given that thermal insulation is not used yet. Due to the limited size of the study, the thermal analysis is limited to two zones (flats): zone Second-A (flat "A" on the second floor) and zone Fourth-A (flat "A" on the top floor). Both zones face south-west, which represents high exposure to the sun, where thermal insulation may be required. Moreover, zone Fourth-A is located on the top floor and receives additional solar radiation through the roof, where thermal insulation may be required as well.

Table 1. Heat emission due to appliances [11], adapted by the authors

Equipment         No.  Watts  Operation time  Total (W)
Refrigerator      1    60     75%             45
Washing machine   1    1000   10%             100
Oven              1    2000   5%              100
Microwave         1    2000   5%              100
Kettle            1    2000   5%              100
TV                1    150    50%             75
PC                1    200    50%             100
Total (W)                                     620

Table 2. Average monthly temperature in Al-Arish, Egypt, and Gaza, Palestine [12]

Month           1   2   3   4   5   6   7   8   9   10  11  12
Al-Arish (°C)   13  14  16  18  21  24  25  26  25  22  20  16
Gaza (°C)       13  14  15  18  20  23  25  26  25  22  19  15

IV. Modelling Study Results

A. Thermal Behaviour of the Reference Case

Ecotect offers several thermal performance analysis features and indicators, including the hourly temperature profile, the hourly heat gains and losses, and the monthly loads/discomfort.

1. The hourly temperature profile
This displays the internal and external temperature profiles together for a selected day and thermal zone, which facilitates comparing the two profiles and understanding the building's thermal behaviour. Temperature values have been obtained for the average hottest day to represent summer conditions, and for the average coldest day to represent winter conditions. Figure 3 shows the average summer and winter temperature profiles for zones Second-A and Fourth-A. In summer, zone Fourth-A has a higher internal temperature than zone Second-A, mainly in the afternoon period; this can be attributed to the solar radiation acting on the roof of zone Fourth-A. Another observation is that the internal temperature in both zones swings less than the external one, which shows the effect of the building envelope in passively moderating the external temperature. Thermal comfort limits have been defined in Ecotect as 18 to 26 °C. As shown in Figure 3, both zones stay above the lower limit but exceed the upper one starting from noon, when the external temperature reaches 36.8 °C. In winter, Figure 3 shows that both zones are outside the comfort range (18-26 °C) with no significant swing; however, zone Fourth-A has a lower internal temperature than zone Second-A during the whole day, which can be attributed to heat loss through its roof.

2. The hourly heat gains and losses
This displays the magnitudes of the several heat flow paths acting on the examined thermal zone on a specified day, in watts, including fabric, zonal, and solar heat gains or losses. It is useful for understanding what happens inside a building as a result of changing, for example, its building materials, ventilation system, or window orientation. Table 3 shows the hourly heat gains and losses for zones Second-A and Fourth-A on the average hottest and coldest days. In summer, zone Fourth-A clearly gains more heat through the building fabric, as it has more exposed surface area (241 m2 compared to 81 m2 for zone Second-A): fabric heat gains in zone Fourth-A (63253 Wh) are higher by about four times than those in zone Second-A (15501 Wh).
Solar heat gains are the same, as there is no difference between the two zones in terms of window area and orientation. Ventilation gains are significant, where gains occur as a result of hot air leaking into the space. The zonal gains or losses depend on the internal temperature of the adjacent zones: heat moves from the hotter zones to the colder ones. Zone Fourth-A clearly loses more heat (-1917 Wh) to the adjacent zones than zone Second-A (-37 Wh). Similar observations can be made in winter, but in a reversed manner; however, zone Fourth-A gains some heat (3942 Wh) from the zones underneath it, as it is colder than them.

3. The monthly discomfort degree hours
If a thermal zone is naturally ventilated, Ecotect can perform a thermal comfort assessment with reference to the occupants inside the space, considering the amount of time the internal temperature of the zone spends outside the specified comfort conditions. In addition, thermal discomfort in Ecotect can be estimated using degree hours, which measure discomfort by the number of degrees spent outside the comfort band. Figure 4 shows the discomfort intervals, measured in degree hours, for each month of the year for zones Second-A (the wide bars) and Fourth-A (the narrow ones), given that the comfort band is 18-26 °C; the upper columns indicate the "too hot" state and the bottom ones the "too cool" state. It is clear that a significant amount of discomfort is recorded in both zones in both summer and winter; however, the problem is more significant in summer for zone Second-A, and in winter for zone Fourth-A. This analysis shows that the proposed building requires improvements to its thermal performance.
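The degree-hours metric described above can be sketched in a few lines: for each hour, the discomfort is the number of degrees the internal temperature lies outside the 18-26 °C comfort band, accumulated separately for "too hot" and "too cool". This is a minimal sketch of the metric, not Ecotect's implementation, and the hourly temperatures are invented.

```python
# Discomfort degree hours relative to the paper's 18-26 C comfort band.
COMFORT_LOW, COMFORT_HIGH = 18.0, 26.0

def discomfort_degree_hours(hourly_temps):
    """Return (too_hot, too_cool) degree-hour totals for a temperature series."""
    too_hot = sum(t - COMFORT_HIGH for t in hourly_temps if t > COMFORT_HIGH)
    too_cool = sum(COMFORT_LOW - t for t in hourly_temps if t < COMFORT_LOW)
    return too_hot, too_cool

# A toy five-hour series: two comfortable hours, two hot hours, one cool hour.
hot, cool = discomfort_degree_hours([22, 25, 28.5, 30, 16])
print(hot, cool)   # 6.5 2.0
```

Summing such values over all 8760 hours of the year gives the annual totals reported in Tables 5 and 7 of the paper.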
One option is to examine the effect of thermal insulation, which is the main passive design strategy targeted in this study. Thermal insulation usually protects the building from undesired climatic conditions and reduces its need for heating and cooling. This will be done for the external walls of zone Second-A, and for the external walls and the roof of zone Fourth-A, as discussed below.

Figure 3. Hourly temperature profiles for zones Second-A and Fourth-A on the average hottest and coldest days

Table 3. Estimated hourly gains and losses (Wh) for zones Second-A and Fourth-A on the average hottest and coldest days

Summer gains (Wh)
Zone       Zonal   Internal  Vent.    Solar  Fabric  HVAC
Second-A   -37     36648     55690    2821   15501   0
Fourth-A   -1917   36648     55690    2821   63253   0

Winter gains (Wh)
Zone       Zonal   Internal  Vent.    Solar  Fabric  HVAC
Second-A   250     38580     -83351   504    -21745  0
Fourth-A   3942    38580     -83351   504    -47372  0

B. Thermal Behaviour of the Thermally Insulated Case

Two thermal insulators recommended by the Palestinian code of energy-efficient buildings [7] are examined here:
- Wall insulation-A: a 35 cm double wall with a middle air cavity and cement plastering on both sides. Thermal properties: U-value: 1.5 W/m2K; admittance: 5.6 W/m2K; decrement factor: 0.13; time lag: 7.4 hrs.
- Wall insulation-B: similar to the above, but with polystyrene insulation in the middle instead of the air gap. Thermal properties: U-value: 0.4 W/m2K; admittance: 5.6 W/m2K; decrement factor: 0.2; time lag: 12 hrs.

1. Effect of thermal insulation on zone Second-A
The use of both insulation-A and insulation-B is examined for zone Second-A here. The building material of the external walls in the whole building has been changed to these two insulated walls. The hourly heat gains and losses are a useful tool to demonstrate the effect of thermal insulation, as they display heat movement through the building fabric. Table 4 shows the resulting heat gains and losses through the building fabric. The results show that the use of thermal insulation has successfully reduced heat gains and losses through the building fabric, with wall insulation-B (polystyrene in a 35 cm double wall) more effective than wall insulation-A (air cavity in a 35 cm double wall). However, a look at the resulting thermal comfort conditions, presented in Table 5, reveals that these conditions have improved in the cold months but not in the hot ones. For the "too cool" case, thermal discomfort has been reduced by 6% and 24% as a result of using wall insulation-A and wall insulation-B, respectively. For the "too hot" case, thermal discomfort has increased by 6% and 22%, respectively. This indicates that despite the positive effect of thermal insulation in protecting the building from the undesired hot weather in summer, it also prevents internal gains from leaving the space, causing some overheating and increased discomfort. Thus, thermal insulation has a positive effect in the cold months but not in the hot ones. This confirms the need for a comprehensive passive design strategy in which several passive techniques are integrated to improve thermal performance over the whole year, including:
- reducing internal heat gains;
- using shading devices and energy-efficient windows;
- using high thermal mass in internal partitions;
- using night-time ventilation.
one strategy will be examined here: the use of night-time ventilation in the relatively hot months (april to september). this strategy aims to cool the building fabric overnight by increasing the ventilation rate during night-time, so that the building becomes able to absorb more internal heat gains during the daytime. to do so, a ventilation schedule has been assumed: an infiltration rate of 1 ach (air change per hour) during winter and during summer daytime, and 2 ach overnight in summer. this practically means that windows will be slightly opened overnight, when the external air temperature is low. the effect of night-time ventilation on thermal discomfort is presented in table 6. it shows that the night-time ventilation strategy, along with thermal insulation, helped improve the thermal performance of the building.

table 5  total annual thermal discomfort degree hours in zone second-a before and after the use of thermal insulation

wall insulation-a:
type (deg. hrs.)   before insulation   after insulation   diff. (%)
too hot            8249                8749.6             +6
too cool           3809.3              3589.5             -6

wall insulation-b:
type (deg. hrs.)   before insulation   after insulation   diff. (%)
too hot            8249                10066.3            +22
too cool           3809.3              2895.3             -24

table 4  estimated heat gains and losses through building fabric in zone second-a for the average hottest and coldest days, before and after using thermal insulation

wall insulation-a:
type                 before insulation   after insulation   diff.
summer gains (wh)    15501               10411              -33%
winter losses (wh)   -21745              -16542             -24%

wall insulation-b:
type                 before insulation   after insulation   diff.
summer gains (wh)    15501               9692               -38%
winter losses (wh)   -21745              -15408             -29%

figure 4  measured monthly discomfort degree hours for zones second-a (the wide columns) and fourth-a (the narrow ones)
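the assumed ventilation schedule can be sketched as a small rule. the exact night-time hours used below (22:00-06:00) are an illustrative assumption, since the paper only says "overnight":

```python
HOT_MONTHS = range(4, 10)  # april to september, as in the paper

def infiltration_ach(month, hour):
    """assumed night-time ventilation schedule: 2 ach overnight in the hot
    months, 1 ach otherwise (winter, and summer daytime).
    night-time is taken here as 22:00-06:00, an illustrative assumption."""
    is_night = hour >= 22 or hour < 6
    if month in HOT_MONTHS and is_night:
        return 2.0
    return 1.0
```

for example, a july night (`infiltration_ach(7, 23)`) gives 2 ach, while a january night or a july noon gives 1 ach.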
this strategy reduced the total discomfort degree hours by 18% for both wall insulation-a and wall insulation-b. however, wall insulation-a is more effective in summer, which has the priority in the thermal design of buildings located in hot climates.

2. effect of thermal insulation on zone "fourth-a"

thermal insulation in this zone is similar to zone second-a, but with additional thermal insulation used in the top roof. an insulated roof section recommended by the palestinian code of energy efficient buildings [7] is examined here (see figure 5). this roof is a 25 cm ribbed concrete roof covered, from bottom to top, with 5 cm of thermal insulation (polystyrene), 5 cm of foam concrete, 2 cm of moisture insulation, 2.5 cm of sand, and 1 cm of tiles. thermal properties of this insulated roof: u-value 0.73 w/m²k, admittance 5.3 w/m²k, decrement factor 0.1, time lag 11 hrs. this means that the insulated roof has a lower u-value (0.73 compared to 2.6 w/m²k) and a higher time lag (11 compared to 6.8 hrs) than the non-insulated one. the previous analysis showed that thermal insulation works more effectively when integrated with night-time ventilation, and that wall insulation-a offers better insulation in summer; these two findings are considered here. figure 6 and table 7 show the total annual discomfort, measured in degree hours, for zone fourth-a under three scenarios:
- before using thermal insulation in the walls and the roof.
- after using the insulation (wall insulation-a).
- after using thermal insulation along with night-time ventilation.
the findings here are consistent with those obtained in the case of zone second-a. the positive effect of using thermal insulation can be noticed in winter, where it reduces thermal discomfort by about 16%. however, it causes some overheating in summer. this can be overcome using night-time ventilation, with which the total discomfort has been reduced by 17%.
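the effect of the lower roof u-value can be illustrated with the steady-state fabric conduction formula q = u · a · δt. the u-values (2.6 and 0.73 w/m²k) are from the paper; the roof area and temperature difference below are assumed values for illustration only:

```python
def fabric_heat_flow(u_value_w_m2k, area_m2, delta_t_k):
    """steady-state conductive heat flow through an envelope element (watts):
    q = u * a * dT."""
    return u_value_w_m2k * area_m2 * delta_t_k

# illustrative comparison: 100 m² roof, 10 k indoor-outdoor difference (assumed)
q_bare = fabric_heat_flow(2.6, 100.0, 10.0)        # non-insulated roof: 2600 w
q_insulated = fabric_heat_flow(0.73, 100.0, 10.0)  # insulated roof: ~730 w
reduction = 1 - q_insulated / q_bare               # ~72% less conductive heat flow
```

note that this steady-state figure ignores the dynamic effects (admittance, decrement factor and time lag) that the paper also reports, which is why the annual simulation results differ from this simple ratio.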
figure 5  the proposed insulated roof section

table 6  total annual discomfort degree hours for zone second-a before and after the use of night-time ventilation beside thermal insulation

wall insulation-a:
type (deg. hrs.)   before insulation & night vent.   after insulation & night vent.   diff. (%)
too hot            8249                              6598.4                           -20
too cool           3809.3                            3251.8                           -15
total              12058.3                           9850.2                           -18

wall insulation-b:
type (deg. hrs.)   before insulation & night vent.   after insulation & night vent.   diff. (%)
too hot            8249                              7363.9                           -11
too cool           3809.3                            2519.6                           -34
total              12058.3                           9883.5                           -18

figure 6  monthly discomfort degree hours for zone fourth-a before and after insulating the walls and the roof

table 7  total annual discomfort degree hours for zone fourth-a before and after insulating the walls and the roof

status (deg. hrs.)   a. no insulation   b. wall & roof ins.   diff. (a&b)   c. ins. & night-time vent.   diff. (a&c)
too hot              6700.5             7382.7                +10           5785.1                       -14
too cool             5293.4             4469                  -16           4219.3                       -20
total                11993.9            11851.7               -1            10004.4                      -17

v energy savings and payback period

to estimate the amount of energy that would be saved as a result of using thermal insulation, the monthly space load graph in ecotect can be used as an approximate method. this graph displays the total heating and cooling loads required to maintain thermal comfort in a specified thermal zone. to do so, the building is assumed to be air conditioned whenever the internal temperature is above or below the thermal comfort limits (18-26 °c). the fact that the building is air conditioned implies two changes in the zonal thermal settings:
- the natural ventilation rate is set to the minimum, assumed 1.0 ach of air infiltration, as windows are closed.
- night-time ventilation is not used, as windows are closed to allow for air conditioning.
table 8 shows the expected energy savings as a result of using thermal insulation for zones second-a and fourth-a. both wall insulation types assumed in this study (wall insulation-a and wall insulation-b) have been considered, in addition to the roof insulation in the case of zone fourth-a. heating and cooling loads are estimated in kwh. according to the local price, each kwh costs about ils 0.5, equivalent to usd 0.13 at 2014 rates. the results show that it is possible to save an annual amount of usd 86 and usd 498 for zones second-a and fourth-a, respectively, when comparing the total heating and cooling loads before and after using wall insulation-a. in the case of wall insulation-b, it is possible to save an annual amount of usd 101 and usd 500 for zones second-a and fourth-a, respectively. there is no significant difference between wall insulation-a and wall insulation-b in terms of total heating and cooling loads. however, it can be noticed that wall insulation-b saves more money in heating at the expense of cooling, while wall insulation-a offers more balanced savings, especially in zone second-a. it can also be noticed that the effect of roof insulation is significant, as more money can be saved in zone fourth-a. in order to estimate the payback period, the construction costs of the external walls and top roof have been estimated before and after using thermal insulation. the following sections summarize the findings:

a. wall insulation-a

given that the additional cost of using wall insulation-a (35 cm double wall with a middle air cavity) in zone second-a is usd 390 per housing unit, and that the annual saving in heating and cooling as a result of using thermal insulation is usd 86, it is possible to get back the insulation cost in zone second-a in 4.5 years. as for zone fourth-a, the additional cost of insulation includes the walls (usd 390) and the roof (usd 5520).
this equals usd 5910 per housing unit. given that the annual saving in heating and cooling as a result of using thermal insulation is usd 498, it is possible to get back the insulation cost in zone fourth-a in 11.9 years.

b. wall insulation-b

given that the additional cost of using wall insulation-b (35 cm double wall with polystyrene in the middle) in zone second-a is usd 813, and that the annual saving in heating and cooling is usd 101, it is possible to get back the insulation cost in zone second-a in 8 years. as for zone fourth-a, the additional cost of insulation includes the walls (usd 813) and the roof (usd 5520), which equals usd 6333. given that the annual saving in heating and cooling is usd 500, it is possible to get back the insulation cost in zone fourth-a in 12.7 years.

table 8  annual heating and cooling loads and money savings for zones second-a and fourth-a before and after using thermal insulation

a. wall insulation-a and roof insulation

zone second-a (kwh)   heating   cooling   total
before insulation     1959      9711      11670
after insulation      1507      9500      11006
diff. (kwh)           452       211       664
money saved ($)       59        27        86

zone fourth-a (kwh)   heating   cooling   total
before insulation     3447      11710     15157
after insulation      2139      9184      11323
diff. (kwh)           1308      2526      3834
money saved ($)       170       328       498

b. wall insulation-b and roof insulation

zone second-a (kwh)   heating   cooling   total
before insulation     1959      9711      11670
after insulation      671       10224     10895
diff. (kwh)           1288      -513      775
money saved ($)       167       -67       101

zone fourth-a (kwh)   heating   cooling   total
before insulation     3447      11710     15157
after insulation      1781      9528      11309
diff. (kwh)           1666      2182      3848
money saved ($)       217       284       500
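the savings and payback arithmetic above can be reproduced with a minimal sketch; the costs, loads and the ils 0.5/kwh ≈ usd 0.13 price are the paper's figures:

```python
PRICE_USD_PER_KWH = 0.13  # local price: ils 0.5/kwh, about usd 0.13 at 2014 rates

def annual_saving_usd(load_before_kwh, load_after_kwh, price=PRICE_USD_PER_KWH):
    """annual money saved from the reduction in total heating + cooling loads."""
    return (load_before_kwh - load_after_kwh) * price

def payback_years(extra_insulation_cost_usd, annual_saving):
    """simple (undiscounted) payback period in years."""
    return extra_insulation_cost_usd / annual_saving

# zone second-a, wall insulation-a: usd 390 extra cost, usd 86/yr saving -> ~4.5 yrs
payback_second_a = payback_years(390, 86)
# zone fourth-a, walls (usd 390) + roof (usd 5520), usd 498/yr saving -> ~11.9 yrs
payback_fourth_a = payback_years(390 + 5520, 498)
```

this simple payback ignores discounting and energy-price changes, which is consistent with how the paper reports the 4.5, 8, 11.9 and 12.7 year figures.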
vi conclusion

thermal behaviour modelling of buildings is a complicated process in which several aspects interact and several assumptions must be treated reasonably. the use of computer simulation, however, significantly facilitates this process. this study showed the great potential of passive design techniques to save energy and improve thermal comfort in buildings under the climatic conditions of the gaza strip, palestine. with the focus on thermal insulation, it has been found that heat gains and losses through the building fabric can be significantly reduced by using thermal insulation in the walls and the top roof. this can reduce thermal discomfort by about 17% over the year. the use of an air cavity in a double wall has been found more effective than the use of polystyrene in the same double wall. the positive effect of thermal insulation in summer can be exploited when it is coupled with other passive design strategies that reduce the adverse effect of internal heat gains; this includes night-time ventilation, which has been examined in this study. although only the thermal insulation of walls and the top roof has been explored in this study, the potential of other design techniques cannot be neglected in a comprehensive perspective of energy-efficient design strategies. this includes shading, the use of double-glazed windows, the use of other insulating materials available in gaza such as natural stone, and the use of landscaping. as for the economic benefits, the use of an air cavity in a double wall has been found more feasible than the use of polystyrene in the same double wall.
it is possible to get back the insulation cost in 4.5 years in the middle floor (compared to 8 years in the case of polystyrene insulation), and in 11.9 years in the top floor (compared to 12.7 years in the case of polystyrene insulation). however, the observed payback period in both cases may be reduced by optimizing the thickness of the thermal insulation used. the shortage of conventional energy sources in the gaza strip necessitates effective investment in the available passive design strategies. in this regard, all parties involved in the construction sector are required to adopt effective energy-efficiency strategies and integrate them into building design, taking into account their economic and environmental benefits. this requires that the concerned official bodies take action to regulate this issue through appropriate building codes and policies that put the energy efficiency of buildings into practice.

references

[1] b. edwards and p. hyett, rough guide to sustainability. london: riba, 2002.
[2] p. smith, architecture in a climate of change: a guide to sustainable design, 2nd ed. oxford: architectural press, 2005.
[3] x. shi and w. yang, "performance-driven architectural design and optimization technique from a perspective of architects," automation in construction, vol. 32, pp. 125-135, 2013.
[4] m. ozel, "thermal performance and optimum insulation thickness of building walls with different structure materials," applied thermal engineering, vol. 31, pp. 3854-3863, 2011.
[5] z. fang, n. li, b. li, g. luo, and y. huang, "the effect of building envelope insulation on cooling energy consumption in summer," energy and buildings, vol. 77, pp. 197-205, 2014.
[6] mo. shoubi, ma. shoubi, a. bagchi, and a. barough, "reducing the operational energy demand in buildings using building information modeling tools and sustainability approaches," ain shams engineering journal, 2014, doi.org/10.1016/j.asej.2014.09.006.
[7] ministry of local government, the palestinian code of energy efficient buildings. ramallah: ministry of local government, 2004.
[8] a. muhaisen, "the energy problem in gaza strip and its potential solution," proceedings of energy and environmental protection in sustainable development (iceep), palestine polytechnic university, hebron, pp. 145-153, 2007.
[9] pcbs (palestinian central bureau of statistics), census 2007 [online]. available at: , accessed 30 jan. 2012.
[10] cibse (chartered institution of building services engineers), energy efficiency in buildings. london: cibse, 1998.
[11] cibse (chartered institution of building services engineers), cibse guide, vols. a & b. london: cibse, 1988.
[12] weatherbase.com, records and averages for middle east [online]. available at: , accessed 29 jan. 2012.

journal of engineering research and technology, volume 2, issue 1, march 2015

comprehensive decision approach for sustainable wastewater reuse using multicriteria decision analysis-gis

abdelmajid nassar1, husam al najar2, osama dawood3, ziyad abunada4
1 associate professor, the islamic university of gaza, anassar@iugaza.edu.ps
2 associate professor, the islamic university of gaza, halnajar@iugaza.edu.ps
3 phd student, department of engineering, university of cambridge, omfd2@cam.ac.uk
4 phd student, department of engineering, university of cambridge, za242@cam.ac.uk

abstract— the evolving water scarcity in the gaza strip adds extra pressure on the limited available groundwater source and leads to water quality deterioration. there is a high need for a sustainable non-conventional water resource. wastewater reuse (wwr) is one of the main water strategies in the gaza strip due to the increasing amount of generated wastewater and the rehabilitation of treatment facilities. however, wwr can generate negative impacts on the ecological and socio-economic system if poorly applied.
the current study investigates the most sustainable wwr schemes by accounting for the impacts of wwr based on a real field study. multi-criteria analysis and experts' judgment were used to identify the impacts of wwr and to prioritise the severity of these impacts, respectively. wwr impacts were weighed using the multi-criteria analysis software definite according to their relative importance. the results indicated that public health and the cost of treatment are of most concern. spatial analysis using a geographic information system (gis 9.2) was conducted to investigate the areas with high potential for wwr over the gaza strip based on the multi-criteria results. the resulting map showed that the southern area of the gaza strip has the highest potential for wwr with the least possible generated impacts.

index terms— gaza strip, gis, multi-criteria analysis, wastewater, reuse.

i introduction

groundwater is the only fresh water source in the gaza strip. it is highly overexploited and heavily contaminated due to agricultural activities and seawater intrusion (qahman et al. 2009; al-juaidi et al. 2011; al-najar and ashour, 2013). the average annual water deficit in the gaza strip is estimated at 60-70 mm³ (pwa 2013). recent reports showed that the groundwater aquifer in the gaza strip will become unusable by 2020, when the deterioration will become irreversible (unrwa 2012). the expanding urbanization, the lack of sufficient water harvesting facilities and the poor aquifer recharge have worsened the water problem. the current water demand is estimated at 180 mm³, 70% of which is consumed by the agriculture sector (pwa, 2013). in fact, this sector accounts for the high nitrate concentrations in the aquifer resulting from the intensive use of fertilizers and pesticides under low recharge conditions. the average nitrate concentration in agricultural areas is five times the who standard limit (50 mg/l).
moreover, the over-pumping from illegal agricultural wells has accelerated seawater intrusion and resulted in high chloride levels (an average of 800 mg/l) (al najar, 2011). water resource planners therefore have to find non-conventional alternative sources of water to bridge the deficits (al-agha & mortaja 2005). possible management options, including the use of treated wastewater (tww) and desalination, are at the forefront of water management plans (al-yaqubi et al. 2007; al-juaidi et al. 2011). there is a high potential for wwr due to the increased generated wastewater quantities. afifi (2006) estimated that about 92 mm³ of wastewater will be generated in the gaza strip by the year 2020. this amount, if properly used, can provide an adequate amount for the agricultural sector and save the aquifer from further deterioration. the lack of a proper wastewater collection system creates the need to dispose of partially treated wastewater onto open lands, and hence significant environmental pollution and public health concerns are encountered. wwr can not only reduce the water deficit in the gaza strip, but it can also minimize the environmental deterioration, which is one of the main aspects considered by the policy makers in the gaza strip (al-juaidi et al. 2010). however, wwr can generate negative impacts due to the different governing conditions and the current strategies for wwr (anane et al, 2012). to our knowledge, there is no specifically designed framework that accounts for all the possible generated impacts. in addition, there is a knowledge gap in investigating the impacts of wwr at a spatial scale based on criteria analysis. al-juaidi et al. (2010) studied the optimisation and decision analysis of the proper allocation of fresh water and tww based on crop types
with a cost-effectiveness study. however, that study only accounts for cost-benefit criteria, without accounting for the social and ecological impacts. the impacts of wwr are site-specific and depend on water quality, crop and soil types, and other factors (abunada & nassar 2014). the impacts of wwr have to be accounted for, the severity of these impacts has to be addressed, and their relative importance and weights have to be determined. several mca techniques have been used to identify the most suitable locations for wastewater reuse, such as electre, promethee, ahp, topsis and aim (behzadian et al, 2010; conté et al., 2008; zhong-wu et al., 2006). however, only a few have been integrated with gis (al-adamat et al., 2010; kallali et al., 2007). the analytic hierarchy process (ahp) was established by thomas l. saaty in the 1970s, and has been used to prioritise the different decision alternatives regarding wwr using pair-wise comparisons (anane et al., 2012). the current work aims at defining the relative importance of the possible generated impacts of wwr using expert judgments. the mca is used to prioritise these impacts and to account for the possible influences by assigning either positive or negative effects. based on that, a spatial analysis using gis is conducted to identify the most suitable areas for wwr. for this reason, multi-criteria analysis (mca) and the gis spatial analysis tool were integrated to evaluate these impacts based on real judgments.

ii methodology

multi-criteria analysis (mca)

the mca evaluates the problem under consideration in terms of an evaluation trade-off matrix. the matrix columns represent the different alternatives under consideration, while the rows represent the evaluation criteria on which these alternatives have their impacts.
the decision on a finite set of alternatives (definite) software, developed by the institute for environmental studies at vu university amsterdam, was used to conduct the mca analysis. in definite, the matrix elements quantify the performance of each alternative with respect to each criterion, as follows:

1. i (i = 1,…,I) indexes the alternatives and j (j = 1,…,J) the criteria. s_ji denotes the effect of alternative i according to criterion j. the matrix s (J × I) includes all data about the performance of the desired alternatives and is called the effects table.
2. the priorities assigned to the decision criteria are expressed in weights w_j (j = 1,…,J). these weights were assigned by experts' judgment and are site-specific, based on the ecological and socio-economic conditions. each element's weight reflects its importance. the weights of the elements within each criterion sum to one hundred, to account for their relative weight within the same criterion; similarly, the relative weights of the criteria were assigned so that they also sum to one hundred.

the current study considered three main alternatives simulating three different wastewater treatment conditions: no reuse, reuse under the current situation with partially treated wastewater, and reuse under improved treatment. these cases were expressed as the no project, current situation and extended treatment scenarios, respectively. the mca contained nine main criteria: total cost, crop production, public health, soil contamination, groundwater contamination, groundwater recharge, ecology, social impacts and environmental impacts, as shown in table 1. to our knowledge, the literature has not reported such a comprehensive set of criteria in which all possible wwr impacts are combined and evaluated in one analysis. the main criteria were subdivided into sub-elements (e.g.,
cost was subdivided into the cost of treatment and the cost savings resulting from reduced fertilizer use). the weights for the main and sub-criteria were assigned by thirty-eight experts (those who responded positively) from the water and wastewater field in the gaza strip, representing the senior level at their institutions. the experts represented a wide spectrum of institutions, including academics, professionals, consultants, industry, and governmental and non-governmental organizations. there was high agreement between the experts, and the analysis showed good agreement among the scores assigned to single criteria, where the standard deviation between the highest and the lowest weight was ± 0.07. following the determination of the weights and values of the elements and criteria, all effect scores were standardized. the scores of the main alternatives were calculated by multiplying the standardized effect scores by their assigned weights. these weights are then used for the subsequent work with gis, where criteria and sub-criteria are represented by spatial data based on the concept of the impact of each. each criterion can be represented by a data grid, where every cell holds a standardized value according to its influence, depending on the status of the wastewater source (i.e. the treatment condition). each grid is assigned a weight based on those identified by the stakeholders in the first part. generally, mca includes main steps starting from problem definition to decision taking and final conclusion, as shown in figure 1. in the current study, the mca has been carried out according to the following steps:

definition of the problem; the analysis aims at clarifying the significant impacts of wwr under different scenarios. the process is then extended to specify the areas best suited for wwr for irrigation using gis and spatial analysis.
involvement of stakeholders; stakeholders from the different sectors that can be affected by wwr and can be part of the planning and decision-making process were invited to contribute. this included decision makers, ministries, the palestinian water authority, universities, municipalities, the private sector, international and non-governmental organizations, and others. in the current study, a sample of 38 experts in the water and wastewater sector, including 16 academics, 14 managerial staff from different institutions concerned with the water sector, 3 professionals from non-governmental organizations and 5 technicians, was consulted to set up the main alternatives and to address the possible generated impacts.

figure 1  schematic diagram showing the main steps of multi-criteria analysis for decision analysis (mcda)

definition of options; three options were identified based on the current strategies for wwt. the first option is to do nothing, the second is to use the current tww, and the third is to establish reuse with extended treatment that enhances the current tww quality. these scenarios represent the existing situation and the planned strategies for wwt in the gaza strip.

identification of criteria; stakeholders were provided with a long list of possible impacts on different themes. the stakeholders eliminated the trivial items and highlighted or added new items. working closely with all of them helped build a decision hierarchy structure of nine criteria with nineteen sub-criteria, as listed in table 1.

weighting of criteria; stakeholders were consulted once again to rank the criteria and sub-criteria according to their significance out of 100, as shown in table 1, based on their own judgments. this represented the cornerstone of the mca.
standardization of alternatives; in this process, the wwr impacts for each of the three alternatives defined above are set to a common domain of measurement. this domain ranges from extreme negative (---) to extreme positive (+++), representing values from 0 to 1. this gives a seven-point scale, including the middle value, which represents the neutral case (no effect); the impact ranges from a large negative effect (0) to a large positive effect (1). the standardization of the options is presented in table 1. the unique feature of definite is that it systematically leads the researcher through a number of interactive assessment rounds and uses an optimization approach to integrate all the information provided by the experts into a full set of value functions (fahmy et al., 2001).

table 1  main criteria and sub-criteria with the assigned weights under different reuse scenarios and their standardized values

iii criteria

the main evaluation criteria of the generated impacts included the following evaluation effects:

total cost: wwr has different economic implications in terms of cost and benefit. the total cost includes the cost of developing the treatment facilities to allow for a specific effluent quality, operation, maintenance, and supplying the treated wastewater to the desired locations. to determine the development cost of a treatment facility, a fixed treatment capacity was assumed. this allows the cost per cubic metre to be calculated and eliminates economies of scale, keeping the calculations consistent. the benefits are generated from the reduction in fertilizer use and the possible increase in crop production as a result of wastewater reuse. this cost criterion is expressed in gis by a constant layer for the construction and the operation and maintenance costs, while the supply cost is presented as a function of the distance to the wastewater treatment plant. the weights of the sub-criteria are 0.5 each for construction and for operation and maintenance.
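the standardization and weighted aggregation described in the methodology can be sketched together as follows. the even spacing of the seven levels, and the weights and effect levels used, are illustrative assumptions, not the study's actual table 1 entries (the paper fixes only the endpoints at 0 and 1):

```python
# definite-style evaluation sketch: qualitative effect levels are standardized
# onto [0, 1], then aggregated as overall_i = sum_j w_j * s_ji.
# even spacing of the seven levels is assumed here for illustration.
LEVELS = ["---", "--", "-", "0", "+", "++", "+++"]

def standardize(level):
    """map a qualitative effect level onto the [0, 1] measurement domain."""
    return LEVELS.index(level) / (len(LEVELS) - 1)

def mca_scores(effects, weights):
    """effects: {criterion: {alternative: qualitative level}},
    weights: {criterion: weight out of 100}.
    returns {alternative: weighted overall score}."""
    totals = {}
    for criterion, weight in weights.items():
        for alternative, level in effects[criterion].items():
            totals[alternative] = totals.get(alternative, 0.0) + weight * standardize(level)
    return totals

# illustrative weights and levels only, not the study's elicited values
weights = {"public health": 40, "total cost": 35, "crop production": 25}
effects = {
    "public health":   {"no project": "--", "current": "-", "extended": "+++"},
    "total cost":      {"no project": "+++", "current": "+", "extended": "-"},
    "crop production": {"no project": "---", "current": "+++", "extended": "++"},
}
scores = mca_scores(effects, weights)
best = max(scores, key=scores.get)
```

with these placeholder values the "extended" alternative wins overall despite losing on total cost, mirroring the kind of trade-off the paper reports in its results.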
crop production: organic matter provides the main nutrients for crops and enhances crop production. this results in cost savings due to the limited addition of fertilizers, and enhances the crop yield; this is converted into a benefit. fertilizer savings were calculated by multiplying the annual crop water quantities needed by the nutrient concentration under each alternative. the no project option, for example, provides the greatest amount of nutrients with no need for any fertilizers. for gis, this criterion was expressed by the crop type. crop rotation may become necessary in the case of wwr, so a gis layer was prepared in which areas dedicated to non-restricted irrigation schemes were standardized as having a "large negative effect".

public health: public health was evaluated based on the level at which contaminants can affect human health and the extent to which the resulting impacts can spread. this is highly dependent on the irrigation method and the crop types; however, this criterion carries less risk under extended treatment, as viewed by the stakeholders. this criterion was therefore expressed by the morbidity and mortality rates, and in gis in terms of the distance of the reuse areas from the current residential areas.

soil contamination: the sub-criteria identified under this category are the impact on soil erosion and contamination, the degradation of land value, and the potential for soil reclamation. it is highly influenced by the soil type, the irrigation method and the irrigated water quality. the level of treatment is a key factor in determining the severity of this criterion.

groundwater contamination: the leaching of wastewater constituents into both the soil and the aquifer is expressed as groundwater contamination. pathogenic contamination and nitrate build-up in the soil are the main contamination events.
this effect is crucial in the case of shallow groundwater. the gis indicator was expressed in terms of the depth of the water table and the soil type: the deeper the water table, the more suitable the option. sandy soil has a greater capacity to leach contaminants beyond the root zone, resulting in more severe impacts.

groundwater recharge: wwr provides an alternative water source and relieves the stress on the existing water resources. to some extent, this can be viewed as groundwater recharge, based on the aquifer depth, soil type and other parameters, where the saved amounts act as new recharge rather than abstraction. this was expressed in terms of the savings generated from reduced water abstraction and the benefit of an unrestricted water resource. this effect was therefore presented in gis in terms of the distance from the fresh water resource, since the cost of irrigation depends on the location of the water well.

ecology: ecology was expressed in terms of the impact on biodiversity and aquatic life. this effect was viewed in terms of the distance between the tww disposal sites and the residential areas. it was also expressed as the extent of the effect on aquatic life in the case of disposing of the effluent into the sea, and the benefits generated from reusing the wastewater for agricultural purposes.

environmental and social impacts: the impact on the overall environmental quality was expressed as a measure of environmental quality and quality of life. this was presented as a constant grid over the whole gaza strip, assuming a uniform effect under the same treatment condition, with better environmental quality under the extended scenario. the social impact was based on previous studies and an assumed public acceptance of wwr, according to the weighting and standardization specified in table 1.

iv results and discussion

according to the mca results (figure 2), the option of extended wastewater treatment seems to be the most feasible option.
although this option does not win in terms of total cost, which is a trade-off of the different generated costs, it stands as the best option for all other aspects. it also comes second in the crop production effect, as more nutrients can be provided by the current situation. [fahmy et al., 2001] indicated that in mca the most feasible alternative may not get the highest score in every single evaluation effect, but its overall performance is much better than all the others. the significance of the extended treatment alternative appears clearly in the case of public health, which is one of the top criteria of concern to stakeholders. high variation is present between the current and the extended options in this aspect, as it is believed that the current ww quality is the main reason for the visible environmental deterioration, which results in bad odor and other nuisances. one interesting note relates to the effect on crop production, where the current quality of wastewater is believed to save more nutrients than the extended treatment alternative; however, this effect, which mainly reflects one of the cost themes, carries little weight compared with other critical aspects such as public health and gw contamination. this might also be due to the fact that the extended treatment option may provide the required nutrients for certain crops. hence, the gain from fertilizer savings is minor compared with other criteria, especially public health. it was obvious that the do-nothing option is the worst case that can be considered. it has a very negative impact in terms of public health and environmental and social impacts. this confirms that there is a high demand for wwr, especially regarding these concerns. in figure 3, it can be noticed that the spatial analysis has shown interesting findings regarding the difference in the suitability of land for irrigation with tww under the two conditions (current and extended option).
the extending urbanization is limiting the area suitable for agriculture in general. given that the social impact is higher on those who live next to the irrigated lands, an additional zone of less suitable lands for wwr is formed as a buffer around the urbanized areas. this can be seen clearly as the white colour in both maps. areas where tww can be used are concentrated in the eastern parts of the gaza strip. this might be due to the fact that the water table there is slightly deeper than in other places, and that these areas are still largely unurbanized. these results agree with those obtained by [sogreah, 1999]. the extent of the suitable area for irrigation doesn't change significantly with the newly planned extended wwt option. this is logically feasible since none of the spatial characteristics of the spatial data or their attributes are changed. however, it can be visibly noticed that the extended treatment systems are going to enhance the suitability of land. this significant enhancement can be explained by the criteria measures described earlier; however, spatial factors contributed to this result significantly. the new location of the wwtps falls in the eastern regions, which results in a significant minimization of transportation costs, as well as minimization of the impact on social life and public health.

v conclusions

the reuse of treated wastewater for agricultural purposes is a good option accounting for a new water resource. the need for the nonconventional water source is of great concern, especially in arid areas.
the generated impacts of wwr were determined based on literature, and the impacts' relative importance was assigned based on the level of impacts and their frequency. this study indicated that public health and cost are of great concern from the expert judgment perspective. the decision making process using mca and gis showed that more care has to be taken in selecting the areas of wwr. the decision has to account for the crop type, soil type, water quality and geographic location. the southern part of the gaza strip seems to be the most suitable, satisfying all the criteria for site selection. under current conditions, it seems that extended treatment could provide the best wwr quality and hence increase the opportunity for wwr.

references

[1] al-najar h. and ashour e. (2013). the impact of climate change and soil salinity in irrigation water demand on the gaza strip. journal of water and climate change, 4(2): 118–130.
[2] qahman k., a. larabi, d. ouazar, a. naji and h.-d. cheng alexander (2009). optimal extraction of groundwater in gaza coastal aquifer. j. water resource and protection, 4, 249–259.
[3] unrwa (2012). gaza in 2020: a livable place?. united nations relief and works agency, palestinian territories.
[4] abunada, z. & nassar, a. (2014). impacts of wastewater irrigation on soil and alfalfa crop: case study from gaza strip. environmental progress and sustainability, 00(00), pp. 1–7.
[5] afifi s. (2006). wastewater reuse status in the gaza strip, palestine. international journal of environment and pollution, 28(1-2), pp. 76–86.
[6] al-adamat r., diabat a., s.g. (2010). combining gis with multicriteria decision making for siting water harvesting ponds in northern jordan. journal of arid environments, 74(7), pp. 1471–7.
[7] al-agha, m.r. & mortaja, r.s. (2005). desalination in the gaza strip: drinking water supply and environmental impact. desalination, 173(2), pp. 157–171.
available at: http://linkinghub.elsevier.com/retrieve/pii/s0011916404007064 [accessed january 26, 2014].
[8] al-juaidi, a.e., kaluarachchi, j.j. & kim, u. (2010). multi-criteria decision analysis of treated wastewater use for agriculture in water deficit regions. jawra journal of the american water resources association, 46(2), pp. 395–411.
[9] al-juaidi, a.e., rosenberg, d.e. & kaluarachchi, j.j. (2011). water management with wastewater treatment and reuse, desalination, and conveyance to counteract future water shortages in the gaza strip. international journal of water resources and environmental engineering, 3(12), pp. 266–282.
[10] al-najar, h. (2011). the integration of fao-cropwat model and gis techniques for estimating irrigation water requirement and its application in the gaza strip. natural resources, 2, pp. 146–154.
[11] al-yaqubi, a. et al. (2007). bridging the domestic water demand gap in gaza strip-palestine. 32(2).
[12] anane, m. et al. (2012). ranking suitable sites for irrigation with reclaimed water in the nabeul-hammamet region (tunisia) using gis and ahp-multicriteria decision analysis. resources, conservation & recycling, 65, pp. 36–46. available at: http://dx.doi.org/10.1016/j.resconrec.2012.05.006.
[13] behzadian m., kazemzadeh r.b., albadvi a., a.m.p. (2010). a comprehensive literature review on methodologies and applications. european journal of operational research, 200, pp. 198–215.
[14] conté g., anane m., goltara a., principi i., k.h. (2008). multicriteria analysis for water and wastewater management in small rural areas. sustainable water management, 2(20).
[15] fahmy, h., tawfik, m. & hamdy, a. (2001). evaluation of alternative use strategies of treated wastewater in agriculture. (icid international workshop on wastewater reuse management).
[16] kallali hamadi, anane makram, jellali salah, tarhouni jamila (2007). gis-based multi-criteria analysis for potential wastewater aquifer recharge sites. desalination, 215, pp. 111–119.
[17] pwa (2013). sustainable management of the west bank and gaza aquifer (susmaq). management options report: water security and links with water policy in palestine (final draft), version 3(12), p. 90.
[18] un-unicef (2012). gaza in 2020: a livable place? a report by the united nations country team in the occupied palestinian territory. united nations, office of the united nations special coordinator for the middle east peace process (unsco), (august), pp. 1–20.
[19] zhong-wu l., guang-ming z., hua z., bin y., s.j. (2006). the integrated eco-environment assessment of the red soil hilly region based on gis: a case study in changsha city, china. ecological modelling, 202(540).

figure 2: results of mca
figure 3: suitable areas for irrigation with tww under current condition of wwtps and extended condition

abdelmajid nassar: dr. nassar has a phd in environmental engineering from loughborough university, uk, and is an associate professor at the faculty of engineering, islamic university of gaza. dr. nassar has wide experience in wastewater treatment and reuse, storm water harvesting and sludge management.
hussam al najar: dr. al najar has a phd in environmental engineering and is an associate professor at the faculty of engineering, islamic university.
osama dawood: department of engineering, university of cambridge, cambridge.
ziyad abunada: department of engineering, university of cambridge, cambridge.

transactions template journal of engineering research and technology, volume 1, issue 4, december 2014

singular value decomposition-based arma model parameter estimation of non-gaussian processes

adnan m. al-smadi, yarmouk university, irbid, jordan, smadi98@yahoo.com

abstract—autoregressive moving average (arma) modeling has been used in many fields.
this paper presents an approach to time series analysis for estimating the parameters of a general arma model. the proposed technique is based on the singular value decomposition (svd) of a covariance matrix of third-order cumulants from only the output sequence. the observed data sequence is corrupted by additive gaussian noise. the system is driven by a zero-mean, independent and identically distributed (i.i.d.) non-gaussian sequence. simulations verify the performance of the proposed method.

index terms—time series forecasting, singular value decomposition, arma model, non-gaussian process, parameter estimation.

i introduction

in statistical signal processing, parametric modeling of non-gaussian processes experiencing noise interference has been a very important research area. time series models are very effective techniques for modeling the parameters of non-gaussian systems. among the different time series methods, one of the most often used is the arma model. arma model identification has been applied in several areas such as seismic data processing, adaptive filtering, and communication systems [1]. several papers in the literature estimate the parameters of a general arma process using a variety of second- and higher-order statistics. second-order statistics (sos) measures work fine if the signal under study has a gaussian probability density function (pdf), because then all of its statistical properties are completely determined by the first and second moments. the gaussian distribution is tractable and a fairly realistic model [2]. however, many real-life signals are non-gaussian. for example, the electromagnetic environment encountered by receiver systems is often non-gaussian in nature, yet the receiving systems are designed to perform in white gaussian noise [3]. also, acoustic noise is in many cases highly non-gaussian.
hence, in practice, there are situations where we must look beyond the autocorrelation of the available data to suppress additive noise and extract phase information. while the gaussian random process still plays a significant role in stochastic signal processing, non-gaussian random processes and higher-order statistics (hos), or cumulants, are of increasing importance to researchers. hos is currently an area of intense research, and new results are constantly being reported. in non-gaussian process identification, the cumulants of the corrupted output can be used as important information. this is because cumulants are generally asymmetric functions of their arguments and as such carry phase information about the arma transfer function. therefore, cumulant statistics are capable of determining the order of an arma model [4, 5]. in addition, they are suitable for order selection when the arma process is corrupted by gaussian noise of unknown covariance function [6]. furthermore, hos can capture the non-minimum phase information in the available signal. singular value decomposition (svd) plays an extremely important role in engineering and science problems from both a theoretical and a practical point of view. for example, svd is one of the most important tools of numerical signal processing. it is employed in a variety of applications in scientific computing, signal processing, automatic control, and many other areas. for example, svd techniques have been used in spectrum analysis, filter design, system identification, and for solving linear equations. in linear systems of equations, svd provides robust solutions of both overdetermined and underdetermined least-squares problems. it allows one to diagnose the problem in a given matrix and provides numerical answers as well.
it also works for singular matrices; for example, svd can be used to find a solution of a set of linear equations corresponding to a singular matrix that has no exact solution, by locating the closest possible solution in the least-squares sense. in addition, some systems of equations are sensitive to small changes in the values. in such systems, the svd can help with the solution of ill-conditioned equations by identifying the direction of sensitivity and discarding that portion of the problem. in system identification, svd methods have been used for autoregressive (ar) and moving average (ma) model order determination of general autoregressive moving average (arma) models. cadzow [7] proposed an algorithm that uses the svd of an extended autocorrelation matrix for extracting the ar model order. giannakis and mendel [8] proposed a method that uses the svd of a cumulant matrix for non-gaussian processes. zhang and zhang [9, 10] proposed two techniques for ma model order determination. the first approach [9] uses the svd of an autocorrelation matrix, while the second approach [10] uses the svd of a cumulant matrix of a non-gaussian process. reddy and biradar [11] proposed an information-theoretic approach to model selection using svd. al-smadi [17] used svd in arma model parameter estimation. several methods have been proposed to estimate the arma model parameters using hos, such as the methods in [12, 13]. in [12] giannakis and mendel (gm) developed a residual time series (rts) method for estimating the arma parameters using second- and third-order cumulants. the method starts by estimating the ar parameters using a 1-d slice of the third-order cumulants. then, using the ar parameters and the observation measurements, a residual time series is computed.
finally, the ma parameters are estimated using the residual time series, in which case the residual satisfies an ma model. they developed a specially structured matrix that contains both second- and third-order statistics of the output to satisfy this condition. the rts method was described in [14] as one of the best-known methods for estimating the coefficients of arma models. however, the estimation goes through three stages, and any error in the estimation will carry over to the next stage, resulting in an inaccurate estimation. in [13] swami and mendel developed a method for estimating the arma parameters using q 1-d slices of the cumulants of the observation measurements. hence, this algorithm is called the "q-slice" (qs) solution. the method assumes that the ar parameters are available. it starts by obtaining the impulse response of the system; it requires q slices of the cumulants to estimate the first q coefficients of the impulse response. then, the ma parameters are estimated using the ar parameters and the impulse response. swami and mendel stated that their method usually works for moderate snr. in this paper, we present a novel technique to estimate the parameters of a general arma(p,q) process using the svd of a special covariance matrix formed from the third-order cumulants of the output sequence only. section 2 presents the formulation of the problem. simulation examples are discussed in section 3. section 4 draws some concluding remarks.

ii problem formulation

let {x(t)} denote a stationary arma(p,q) system given by

x(t) = -\sum_{i=1}^{p} a_i x(t-i) + \sum_{i=0}^{q} b_i w(t-i), \quad a_0 = b_0 = 1    (1)

where x(t) is the observed time series, w(t) is the input sequence, which is not observed, a_i and b_i are the parameters, and p and q are the orders of the model. by expressing the transfer function of the model in the z-domain, while the model is assumed free of pole-zero cancellations and exponentially stable, it can be formulated as follows.
h(z) = \sum_{k=0}^{\infty} h(k) z^{-k} = \frac{b(z)}{a(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}    (2)

now, for identifying the time series with a non-gaussian process, the driving noise sequence w(t) of the arma model is assumed to be a zero-mean, stationary, non-gaussian independent identically distributed (i.i.d.) noise. the output signal, y(t), becomes

y(t) = x(t) + v(t)    (3)

where v(t) is a gaussian noise independent of the input w(t), and hence of the output x(t). with this model, cumulants can be used. under the assumption that x(t) is stationary, the third-order cumulant of the noisy output y(t) is

c_{3y}(n, m) = E[y(t)\, y(t+n)\, y(t+m)]    (4)

as the cumulants of order higher than two of a gaussian process are identically zero, then

c_{3y}(n, m) = c_{3x}(n, m)    (5)

that is, the third-order (and higher) cumulant is insensitive to additive gaussian noise of unknown covariance function. the cross-cumulant between x(t) and w(t) is

c_{wxx}(n, m) = E[w(t)\, x(t+n)\, x(t+m)]    (6)

now, multiplying both sides of equation (1) by x(t+n) x(t+m) yields

x(t) x(t+n) x(t+m) = -a_1 x(t-1) x(t+n) x(t+m) - \cdots - a_p x(t-p) x(t+n) x(t+m) + w(t) x(t+n) x(t+m) + b_1 w(t-1) x(t+n) x(t+m) + \cdots + b_q w(t-q) x(t+n) x(t+m)    (7)

taking the expected value of equation (7), we obtain

c_{3x}(n, m) = -a_1 c_{3x}(n+1, m+1) - \cdots - a_p c_{3x}(n+p, m+p) + c_{wxx}(n, m) + b_1 c_{wxx}(n+1, m+1) + \cdots + b_q c_{wxx}(n+q, m+q)    (8)

by stacking equation (8) for several values of n and m ranging from -\tau to \tau, where \tau denotes the range of the third-order cumulants to be used, the system in (8) can be expressed in matrix form as

c = -C_{3x}\, a_p + C_{wxx}\, b_q    (9)

the vector c contains the third-order cumulants at n = m = 0, and the vectors a_p and b_q contain the parameters of the process in equation (1).
the matrix C_{3x} contains the cumulants of the output sequence, and the matrix C_{wxx} contains the cross-cumulants of the input and output sequences; each row corresponds to one stacked lag pair (n, m), with entries c_{3x}(n+i, m+i) and c_{wxx}(n+i, m+i) as in equation (8):

C_{3x} = \begin{bmatrix} c_{3x}(1,1) & c_{3x}(2,2) & \cdots & c_{3x}(p+1,p+1) \\ c_{3x}(1,2) & c_{3x}(2,3) & \cdots & c_{3x}(p+1,p+2) \\ \vdots & \vdots & & \vdots \end{bmatrix}    (10)

C_{wxx} = \begin{bmatrix} c_{wxx}(1,1) & c_{wxx}(2,2) & \cdots & c_{wxx}(q+1,q+1) \\ c_{wxx}(1,2) & c_{wxx}(2,3) & \cdots & c_{wxx}(q+1,q+2) \\ \vdots & \vdots & & \vdots \end{bmatrix}    (11)

in these equations, \tau = \max(p, q). equation (9) can be written as

c = C\theta + d    (12)

where

C = [C_{3x} \quad C_{wxx}]    (13)

and \theta is a vector representing the parameters,

\theta = [1, a_1, \ldots, a_p, 1, b_1, \ldots, b_q]^T    (14)

the vector d represents the residual errors in fitting the model C\theta to the data c. for a given estimate of \theta, the squared error between c and the model C\theta is

e^2 = \mathrm{tr}[(c - C\theta)(c - C\theta)^T] = (c - C\theta)^T (c - C\theta) = d^T d    (15)

where tr denotes the trace. to obtain the least-squares estimate, we minimize (15) by taking the gradient of e^2 with respect to \theta:

\frac{\partial e^2}{\partial \theta} = -2\, C^T (c - C\theta)    (16)

the least-squares estimate equates the gradient to zero. then

C^T C \theta = C^T c    (17)

now, we can write the matrix C in terms of its singular value decomposition:

C = U \Sigma V^T    (18)

where U and V are orthogonal matrices, and \Sigma is a matrix whose elements are zero except possibly along the main diagonal (the singular values of C). that is, the singular values of C are the diagonal elements of \Sigma:

\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)    (19)

such that

\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0    (20)

notice that the \sigma_i are called the singular values and are nonnegative numbers. the matrix U contains the left singular vectors of C; the matrix V contains the right singular vectors of C. then, the normal equations in (17) may be written as

V \Sigma^T \Sigma V^T \hat{\theta} = V \Sigma^T U^T c    (21)

these equations can be solved as follows.
\hat{\theta} = V \Sigma^{-1} U^T c = \sum_{i=1}^{r} \frac{u_i^T c}{\sigma_i}\, v_i    (22)

where \Sigma^{-1} is the inverse of \Sigma and is found as

\Sigma^{-1} = \mathrm{diag}(1/\sigma_1, \ldots, 1/\sigma_r)    (23)

to test the optimality of the proposed algorithm, the mean and the variance measures were considered. the signal-to-noise ratio (snr) was computed as follows:

\mathrm{snr(db)} = 10 \log_{10}\left(\frac{\sigma_x^2}{\sigma_v^2}\right)    (24)

where \sigma_x^2 is the signal power and \sigma_v^2 is the measurement noise power. it should be noted that the only available data in arma modeling is the output sequence. however, the input sequence is necessary to compute the cross-cumulants, which is an intermediate step in the identification process. hence, the observed output data was modeled by a high-order ar model [15]. thus, the system in (1) can be rewritten as

\hat{w}(n) = \sum_{i=0}^{m} \gamma_i x(n-i)    (25)

where the \gamma_i are the parameters of the high-order ar model and are estimated as follows:

\hat{\gamma} = \left[\frac{1}{N-1}\sum_{k=0}^{N} \phi(k)\,\phi^T(k)\right]^{-1} \left[\frac{1}{N-1}\sum_{k=0}^{N} \phi(k)\, x(k)\right]    (26)

with

\phi(k) = [-x(k-1), -x(k-2), \ldots, -x(k-m)]^T    (27)

where m is the order of the high-order ar model. in this estimation, \gamma_0 has been assumed to be one without loss of generality. now, using \hat{w}(n) in place of w(n), the identification procedure developed in this paper can be used.

iii simulation examples

simulation studies are presented to test the proposed arma model parameter estimation approach using svd. several examples were simulated at different levels of signal-to-noise ratio (snr) on the output. the input sequence was generated as a zero-mean, i.i.d., exponentially distributed random process. the length of the signal is n = 1500. a comparison of the performance of the proposed algorithm with the gm and the qs methods was made at different snrs on the output signal. to guarantee statistical independence, all the results are a mean of 100 monte carlo runs, using a different seed in each case.
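the paper's computations were done in matlab with the hosa toolbox; purely as an illustration, the pipeline of section ii — equations (1), (4), (6), (8) and the svd solution (22) — can be sketched in python. the helper name `cum3`, the lag grid, the signal length, and the use of the known input w(t) in the cross-cumulants (instead of the high-order-ar estimate of equation (25)) are assumptions of this sketch, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cum3(a, b, c, n, m):
    """sample estimate of E[a(t) b(t+n) c(t+m)] for zero-mean sequences."""
    t0 = max(0, -n, -m)
    t1 = min(len(a), len(b) - n, len(c) - m)
    return float(np.mean(a[t0:t1] * b[t0 + n:t1 + n] * c[t0 + m:t1 + m]))

# arma(3,2) of example 2: x(t) = -sum a_i x(t-i) + sum b_i w(t-i), eq (1)
a_true = [-1.5, 1.2, -0.455]
b_true = [0.2, 0.9]
p, q, N = 3, 2, 20000          # longer record than the paper's n = 1500

w = rng.exponential(1.0, N) - 1.0      # zero-mean, skewed i.i.d. input
x = np.zeros(N)
for t in range(N):
    x[t] = -sum(a_true[i] * x[t - 1 - i] for i in range(p) if t > i) \
           + w[t] + sum(b_true[i] * w[t - 1 - i] for i in range(q) if t > i)
snr_db = 20                            # eq (24)
v = rng.normal(0.0, np.sqrt(x.var() / 10 ** (snr_db / 10)), N)
y = x + v                              # eq (3); cumulants ignore v, eq (5)

# stack eq (8) over a grid of lags (n, m) into c = C @ theta
tau = max(p, q)
rows, rhs = [], []
for n in range(-tau, tau + 1):
    for m in range(-tau, tau + 1):
        c3 = [-cum3(y, y, y, n + i, m + i) for i in range(1, p + 1)]
        cw = [cum3(w, y, y, n + i, m + i) for i in range(0, q + 1)]
        rows.append(c3 + cw)
        rhs.append(cum3(y, y, y, n, m))
C, c = np.array(rows), np.array(rhs)

# least-squares normal equations (17) solved via the svd, eqs (18)-(22)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
theta = Vt.T @ ((U.T @ c) / s)   # [a_1..a_p, b_0, b_1..b_q] in this stacking
# theta should approach [-1.5, 1.2, -0.455, 1, 0.2, 0.9] as N grows
```

note that the ar and ma blocks are estimated jointly in the single solve, which is the one-step property the paper contrasts with the gm and qs methods.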
the computations were performed in matlab. in addition, the armaqs and armarts commands from the higher-order spectral analysis toolbox [16] were used to estimate the arma parameters using the qs and gm methods, respectively.

a example 1

the time series to be considered is given by

x(t) + 0.7907x(t-1) + 0.042x(t-2) - 0.5556x(t-3) - 0.0247x(t-4) + 0.3846x(t-5) + 0.3026x(t-6) = w(t) + 0.3452w(t-1) + 0.53w(t-2) + 0.3985w(t-3) + 0.8138w(t-4)    (28)

this is an arma(6,4) model. it has six poles and four zeros. the poles are located at 0.7102 ± j0.41, -0.43 ± j0.7448, and 0.6755 ± j0.39. the zeros are located at 0.485 ± j0.84 and 0.6576 ± j0.6576. the signal x(t) is observed in additive gaussian noise, y(t) = x(t) + v(t). the input sequence was drawn from a zero-mean non-gaussian distribution; namely, an exponential distribution. the input signal was then passed through the filter in equation (28). after that, the output of the filter was corrupted with additive gaussian noise at an snr of 20 db on the output sequence. it is assumed that the only available data is the output measurements. to estimate the arma model parameters, the cumulant matrix c in equation (13) must be formulated. as can be seen from (13), the matrix c consists of two matrices concatenated together. the matrix c3x contains the cumulants of the observed output sequence, whereas cwxx contains the cross-cumulants of the unobservable input and the observed output sequences. to estimate the input signal, durbin's method [15] was used. the arma parameters were estimated using the qs, the gm, and the proposed svd methods. table 1 shows the average estimated results of 100 monte carlo simulations at an snr of 20 db on the output sequence. the performance measures considered for estimating the parameters are the arithmetic mean and the variance.

table 1: true and estimated arma(6,4) parameters (mean ± variance).
parameter | true    | qs              | gm               | proposed
a(1)      | 0.791   | 0.728 ± 0.173   | 0.687 ± 0.019    | 0.764 ± 0.007
a(2)      | 0.042   | -0.029 ± 0.102  | -0.027 ± 0.022   | 0.028 ± 0.005
a(3)      | -0.551  | -0.599 ± 0.030  | -0.516 ± 0.009   | -0.554 ± 0.004
a(4)      | -0.025  | -0.022 ± 0.053  | -0.035 ± 0.007   | -0.029 ± 0.003
a(5)      | 0.385   | 0.388 ± 0.017   | 0.366 ± 0.011    | 0.391 ± 0.004
a(6)      | 0.306   | 0.291 ± 0.023   | 0.293 ± 0.011    | 0.308 ± 0.003
b(1)      | 0.3452  | 0.728 ± 0.297   | 0.282 ± 0.066    | 0.382 ± 0.008
b(2)      | 0.53    | -0.029 ± 0.208  | 0.630 ± 0.309    | 0.474 ± 0.005
b(3)      | 0.3985  | -0.599 ± 0.155  | 0.429 ± 0.033    | 0.266 ± 0.006
b(4)      | 0.8138  | -0.022 ± 0.843  | 0.703 ± 0.031    | 0.685 ± 0.005

b example 2

the time series to be considered is given by

x(t) - 1.5x(t-1) + 1.2x(t-2) - 0.455x(t-3) = w(t) + 0.2w(t-1) + 0.9w(t-2)    (29)

this is an arma(3,2) model. it has three poles and two zeros. the poles are located at 0.3939 ± j0.6955 and 0.7121. the zeros are located at -0.1 ± j0.9434. the data was generated and analyzed as in example 1. the arma parameters were estimated using the qs, the gm, and the proposed svd methods. table 2 shows the average estimated results of 100 monte carlo simulations at an snr of 20 db on the output sequence.

table 2: true and estimated arma(3,2) parameters (mean ± variance).

parameter | true    | qs              | gm               | proposed
a(1)      | -1.5    | -1.489 ± 0.015  | -1.483 ± 0.015   | -1.495 ± 0.011
a(2)      | 1.2     | 1.197 ± 0.027   | 1.191 ± 0.028    | 1.195 ± 0.015
a(3)      | -0.455  | -0.451 ± 0.009  | -0.449 ± 0.009   | -0.449 ± 0.003
b(1)      | 0.2     | 0.201 ± 0.084   | 0.195 ± 0.028    | 0.186 ± 0.011
b(2)      | 0.9     | 0.901 ± 0.111   | 0.866 ± 0.003    | 0.857 ± 0.043

c example 3

the time series to be considered is given by

x(t) + 0.39x(t-1) + 0.3x(t-2) + 0.2x(t-3) = w(t) - 1.85w(t-1) + 0.825w(t-2)    (30)

this is an arma(3,2) model. it has three poles and two zeros. the poles are located at 0.0711 ± j0.6088 and -0.5323. the zeros are located at 1.1 and 0.75. notice that this is a nonminimum-phase model, since one of its zeros lies outside the unit circle. the data was generated as in the previous examples. the arma parameters were estimated using the qs, the gm, and the proposed svd methods. table 3 shows the average estimated results of 100 monte carlo simulations at an snr of 20 db on the output sequence.

table 3: true and estimated arma(3,2) parameters (mean ± variance).

parameter | true    | qs              | gm               | proposed
a(1)      | 0.39    | 0.408 ± 0.015   | 0.395 ± 0.014    | 0.340 ± 0.006
a(2)      | 0.3     | 0.290 ± 0.009   | 0.284 ± 0.009    | 0.270 ± 0.004
a(3)      | 0.2     | 0.197 ± 0.005   | 0.193 ± 0.005    | 0.179 ± 0.002
b(1)      | -1.85   | -2.124 ± 7.542  | -3.956 ± 42.061  | -1.4794 ± 0.005
b(2)      | 0.825   | 0.943 ± 1.521   | 0.768 ± 0.149    | 0.600 ± 0.006

iv discussion

in the above examples, the arma parameters were estimated using the proposed svd method and compared with two well-known methods; namely, the gm and qs methods. the computations were performed in matlab. in addition, the armaqs and armarts commands from the higher-order spectral analysis toolbox [16] were used to estimate the arma parameters for the qs and gm methods, respectively. tables 1, 2, and 3 show the average estimated results of 100 monte carlo simulations at an snr of 20 db on the output sequence. the performance measures considered for estimating the parameters are the arithmetic mean and the variance. it can be seen that the proposed arma parameter estimation technique performs better than the gm and qs methods. the proposed svd method was also more computationally efficient than the other two methods, since the ar and ma parameters are estimated in one step only; that is, equation (22) yields both the ar and ma coefficients. in the other two methods, however, the ar coefficients must be estimated first, and only then can the ma coefficients be estimated. it should be emphasized that the main reason for using hos for arma parameter estimation is the fact that additive gaussian noise of unknown variance does not affect the theoretical cumulant statistics. however, when dealing with finite-length data observations, the computed cumulants do not vanish but will be very small.
from the author's experience with higher-order cumulants, these cumulants have little effect on the results at moderate snr. however, the performance deteriorates at low snr.

v conclusion

this paper presented an approach to estimate the arma parameters of a given system using the singular value decomposition of a covariance matrix of third-order cumulants of the observed output sequence only. a comparison of the performance of the proposed algorithm with the qs and the gm methods was made at 20 db snr on the output signal. the presented simulation results demonstrate the effectiveness of the proposed method.

references

[1] j. m. mendel, "tutorial on higher-order statistics (spectra) in signal processing and system theory: theoretical results and some applications," proceedings of the ieee, vol. 79, no. 3, pp. 278-305, march 1991.
[2] a. al-smadi and m. smadi, "study of the reliability of a binary symmetric channel under non-gaussian disturbances," international journal of communication systems, vol. 16, no. 10, pp. 865-973, 2003.
[3] s. zabin and d. furbeck, "efficient identification of non-gaussian mixtures," ieee trans. comm., vol. 48, pp. 106-117, 2000.
[4] a. al-smadi and d.m. wilkes, "robust and accurate arx and arma model order estimation of non-gaussian processes," ieee trans. signal processing, vol. 50, pp. 759-763, 2002.
[5] a. al-smadi, "cumulant-based order selection of non-gaussian autoregressive moving average models: the corner method," signal processing, vol. 85, pp. 449-456, 2005.
[6] a. swami and j. mendel, "arma parameter estimation using only output cumulants," ieee trans. signal processing, vol. 38, no. 7, pp. 1257-1265, 1990.
[7] j. a. cadzow, "spectral estimation: an overdetermined rational model equation approach," proceedings of the ieee, vol. 70, pp. 907-939, september 1982.
[8] g. b. giannakis and j. m. mendel, "cumulant-based order determination of non-gaussian arma models," ieee trans. signal processing, vol. 38, pp. 1411-1423, august 1990.
[9] x. zhang and y. zhang, "determination of the ma order of an arma process using sample correlation," ieee trans. signal processing, vol. 41, no. 6, pp. 2277-2280, june 1993.
[10] x. zhang and y. zhang, "singular value decomposition-based ma order determination of non-gaussian arma models," ieee trans. signal processing, vol. 41, pp. 2657-2664, august 1993.
[11] v. reddy and l. biradar, "svd-based information theoretic criteria for detection of the number of damped/undamped sinusoids and their performance analysis," ieee trans. signal processing, vol. 41, no. 9, pp. 2872-2881, september 1993.
[12] g.b. giannakis and j.m. mendel, "identification of nonminimum phase systems using higher order statistics," ieee trans. acoust., speech, signal processing, vol. 37, pp. 360-377, 1989.
[13] a. swami and j. mendel, "arma parameter estimation using only output cumulants," ieee trans. signal processing, vol. 38, no. 7, pp. 1257-1265, 1990.
[14] g. stoglou and s. mclaughlin, "ma parameter estimation and cumulant enhancement," ieee trans. signal processing, vol. 44, no. 7, pp. 1704-1718, 1996.
[15] j. durbin, "the fitting of time series models," rev. int. statist. inst., vol. 28, pp. 233-243, 1960.
[16] a. swami, j. mendel, and c. nikias, higher-order spectral analysis toolbox — user's guide. natick, ma: the math works, inc., 1998.
[17] a. al-smadi, "arma model parameters estimation using svd," the 6th ieee international conference on science of electronics, technologies of information and telecommunications (setit), sousse, tunisia, pp. 814-816, march 2012.

adnan m. al-smadi: prof. adnan m. al-smadi received the b.s. degree (magna cum laude) and the m.s. degree in electrical engineering from tennessee state university, nashville, tn, usa in 1987 and 1990, respectively. he received the ph.d.
degree in electrical and computer engineering from vanderbilt university, nashville in 1995. from 1989 to 1991, he was an instructor of mathematics at tennessee state university, and from 1991 to 1992, he was an instructor of mathematics at fisk university, nashville. from 1992 to 1995, he was an instructor of aeronautical and industrial technology at tennessee state university. in addition, from 1995 to 1997, he was an adjunct assistant professor of electrical and computer engineering at vanderbilt university. from 1995 to 1997, he served as interim department head of the aeronautical and industrial technology department at tennessee state university. from 1997 to 2006, he was a faculty member of electronics engineering at hijjawi college for engineering technology, yarmouk university, jordan. from 2002 to 2004, he served as the chairman of the department of electronics engineering at yarmouk university. from 2004 to 2006, professor al-smadi served as the dean of hijjawi college for engineering technology. from 2006 to 2010, he was on sabbatical leave as a professor of computer science and the dean of prince hussein bin abdullah college for information technology at al al-bayt university in jordan. from may 2012 to september 2012, he was the head of the quality assurance & accreditation department at yarmouk university. from 2009 to 2014, prof. al-smadi served as a member of the "recognition of non-jordanian universities & certificates' equivalency" committee at the ministry of higher education and scientific research, jordan. since 2013, he has been serving as a member of the board of trustees of jerash private university, jordan. in addition, professor al-smadi is currently serving as the dean of hijjawi college for engineering technology. professor al-smadi is a senior member of the ieee and a member of eta kappa nu in the usa.
transactions template journal of engineering research and technology, volume 2, issue 1, march 2015 34 wavelength division multiplexing passive optical network (wdm-pon) technologies for future access networks fady i. el-nahal 1, mahmoud alhalabi 2, abdel hakeim m. husein 3 1 electrical engineering department, islamic university of gaza, gaza, palestine, fnahal@iugaza.edu.ps 2 electrical engineering department, islamic university of gaza, gaza, palestine, eng.halabi@hotmail.com 3 physics department, al aqsa university, gaza, palestine, am.husein@alaqsa.edu.ps abstract— wavelength division multiplexing passive optical network (wdm-pon) offers high data rates and large bandwidth. a bidirectional wdm-pon system based on a fabry-perot laser diode (fp-ld) with two cascaded arrayed waveguide gratings (awgs) has been demonstrated. the downstream data rate is 10 gbps and the upstream data rate is 2.5 gbps, so the network falls under the 10g-pon standard. the fp-ld is used at the optical network unit (onu) as a transmitter, so it can re-modulate the downstream signal with upstream data and re-send it upstream towards the central office (co). the main motivation for using awgs in the system is to increase capacity, security and privacy. the awgs multiplex and demultiplex the different wavelengths in the wdm-pon. the proposed system is an effective low-cost system, and the injection-locked fp-ld is used as a low-cost colourless transmitter for high-speed optical access exploiting wdm technology. index terms— wavelength division multiplexing passive optical network (wdm-pon), fabry-perot laser diode (fp-ld), arrayed waveguide gratings (awgs). 1 introduction passive optical network (pon) systems have been studied extensively in recent years. for initial deployment, a simple and low-cost optical network unit (onu) design is desirable.
in addition, a variety of wavelength division multiplexing passive optical network (wdm-pon) systems has been studied to increase the channel capacity of existing optical fibers. a bidirectional subcarrier multiplexing wdm pon (scm-wdm pon) has been demonstrated using a reflective filter and a cyclic arrayed waveguide grating (awg), where up/downlink data could be provided using a single optical source. in that scheme, the downstream signal was modulated by a single continuous wave (cw) laser diode and re-modulated in the onu as the upstream signal; the scheme can also offer an scm signal for broadcasting service. signals of 1 gbps for both upstream and downstream were demonstrated over a 10 km bidirectional optical fiber link [1]. designs of low-cost onus for wdm-pon have been presented and evaluated. reflective semiconductor optical amplifiers (rsoas) are proposed as the core of the onu in a bidirectional single-fiber single-wavelength topology, with forward error correction (fec) employed to mitigate crosstalk effects [2]. a wavelength re-use model has been exploited with the rsoa for wdm-pon transmission. among the various solutions for realizing the optical subscriber network, the wdm-pon has been considered the ultimate next-generation solution, and the wavelength re-use model with the rsoa has recently been developed for application to the wdm-pon. the wavelength re-use schemes share a common feature: the optical signal modulated with downstream data is re-used to carry the upstream data through the rsoa in the subscriber-side equipment by a series of processes, namely being flattened out, reflected at the rear facet of the rsoa, and then re-modulated with upstream data. the major advantage of the wavelength re-use scheme is the possibility of realizing the simplest wdm-pon optical link structure, which translates directly into cost-effectiveness of the network in both equipment and maintenance costs. a gain saturation scheme has also been presented.
it uses the fact that the optical gain of an rsoa declines as the power injected into the rsoa increases. experimental results show that it is possible to achieve error-free bidirectional transmission at 1.25 gbps upstream and 2.5 gbps downstream over a 20 km transmission distance [3]. an upstream-traffic transmitter based on a fabry-perot laser diode (fp-ld) as modulator has been proposed and demonstrated for wdm access networks. by injection-locking the fp-ld with the downstream wavelength at the onu, the original downstream data can be largely suppressed while the upstream data are transmitted on the same injection-locked wavelength by simultaneously directly modulating the fp-ld [4]. a 10 gbps upstream transmission has been demonstrated using an fp-ld remotely injection-locked by coherent feed light from the co; experimental results show that transmission over a 10 km single mode feeder fiber incurs a power penalty of 1.1 db, and up to 16 cavity modes of the fp-ld can be injection-locked [5]. in this article, we use an fp-ld in the onu as an upstream source, together with cascaded awgs in the system. the distance between the co and the onus is 10 km. the downstream data rate is 10 gbps and the upstream data rate is 2.5 gbps. this article also includes an important comparison between using an rsoa and an fp-ld as the upstream source in the onu. 2 theory a. wdm pons figure 1 illustrates a typical wdm pon architecture that consists of a co, two cyclic awgs, a trunk (feeder) fiber, a series of distribution fibers, and onus at the end users. figure 1: wdm-pon architecture. inset: allocation of upstream and downstream wavelength channels into two separate wavebands [6]. the first periodic awg, located at the co, multiplexes downstream wavelengths to the onus and demultiplexes upstream wavelengths from the onus. the trunk fiber carries the multiplexed downstream wavelengths to a second periodic awg located at the remote node (rn).
the second awg demultiplexes the downstream wavelengths and guides each into a distribution fiber for transmission to the onus. the downstream and upstream wavelengths allocated to each onu are separated by a multiple of the free spectral range (fsr) of the awg, allowing both wavelengths to be directed in and out of the same awg port that is connected to the destination onu. in figure 1, the downstream wavelengths assigned to onu1, onu2, … and onun are denoted λ1, λ2, … and λn respectively. likewise, the upstream wavelengths from onu1, onu2, … and onun that are destined for the co are denoted λ1`, λ2`, … and λn` respectively. in a wdm pon, wavelength channels are spaced 100 ghz (0.8 nm) apart; in systems classified as dense wdm-pon (dwdm), a channel spacing of 50 ghz or less is used. although a wdm pon has a physical p2mp topology, logical p2p connections are facilitated between the co and each onu. in the example shown in figure 1, onun receives downstream signals on λn and transmits upstream signals on λn`; the capacity on these wavelengths is exclusively assigned to that onu. the benefits of wdm pon include protocol and bit-rate transparency, security and privacy, and ease of upgradeability and network management. b. awg router the awg router is an important element in many wdm-pon architectures. a conventional n-wavelength wdm coupler is a 1×n device, as shown in figure 2(a). figure 2: conventional wdm coupler versus awg. awgs have multiple inputs and multiple outputs, as indicated in figure 2(b), whereas the conventional wdm coupler has one input and multiple outputs, as shown in figure 2(a). a general awg router comprises two star couplers joined together by an array of waveguides of unequal lengths, as shown in figure 2(b) [7]. each arm differs from the adjacent arm by a constant length difference. these waveguides function as an optical grating to disperse signals of different wavelengths.
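the arrayed waveguides just described act as a grating whose order is fixed by the constant path-length difference between adjacent arms, and the cyclic routing property shifts channels across output ports modulo the port count. a minimal sketch of both ideas follows; the grating order m, the effective index n_eff and the port-mapping convention are illustrative assumptions, not values taken from the paper:

```python
# illustrative sketch of an awg's arm-length difference and cyclic routing.
# m = 30 and n_eff = 1.45 are assumed values; the port mapping below is one
# common convention, not necessarily the simulated device's behaviour.

def awg_path_difference(m, centre_wavelength_m, n_eff):
    """path-length difference between adjacent arrayed waveguides:
    delta_l = m * lambda0 / n_eff."""
    return m * centre_wavelength_m / n_eff

def cyclic_output_port(input_port, channel_index, n_ports):
    """cyclic (colourless) routing: shifting the input port or the channel
    index shifts the output port by the same amount, modulo the port count."""
    return (input_port + channel_index) % n_ports

dl = awg_path_difference(m=30, centre_wavelength_m=1550e-9, n_eff=1.45)
print(f"delta l = {dl * 1e6:.2f} um")   # ~32.07 um for these assumed values
print(cyclic_output_port(0, 5, 4))      # channel 5 from port 0 exits port 1
```

with a 4-port device, channels 1 and 5 launched into the same input land on the same output port, which is exactly the fsr-periodic reuse of ports described above.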
the optical path length difference ∆L between adjacent array waveguides is set to ∆L = m·λ0 / neff (1), where m is an integer, λ0 is the central wavelength, and neff is the effective refractive index of each single mode waveguide [8]. an awg is also known as a wavelength grating router (wgr), and it has a very useful characteristic called the cyclic wavelength routing property, illustrated by the table in figure 2(b) [9]. with a normal wdm multiplexer, as in figure 2(a), if an "out-of-range" wavelength (e.g. λ-1 or λ4, λ5) is sent to the input port, that wavelength is simply lost or "blocked" from reaching any output port. an awg device, however, can be designed so that its wavelength demultiplexing property repeats over periods of optical spectral range called the fsr. moreover, if the multi-wavelength input is shifted to the next input port, the demultiplexed output wavelengths also shift to the next output ports accordingly. cyclic awgs are also called colorless awgs. c. fp-lds the fp-ld can be regarded as a light emitting diode (led) with a pair of end mirrors; the mirrors are needed to create the right conditions for lasing to occur. the fp-ld is also called a "fabry-perot resonator" [10]. figure 3: fabry-perot filter structure. the input light enters the cavity through the mirror on the left and leaves it through the mirror on the right. some wavelengths resonate within the cavity and can pass through the mirror on the right, while the other wavelengths are strongly attenuated, as shown in figure 3. the operation of the fp-ld is similar to that of the fabry-perot filter. as the distance between the mirrors increases, more wavelengths are produced within the cavity.
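the resonance picture can be put in numbers with the standard fabry-perot condition cl = x·λ/(2n). the cavity length and refractive index below are typical illustrative values for a semiconductor cavity, not the paper's device parameters:

```python
# fabry-perot resonances: cl = x * lambda / (2 n)  =>  lambda = 2 n cl / x.
# l = 300 um and n = 3.5 are generic assumptions, not the paper's device.

def resonant_wavelength(cavity_length_m, n, mode_number):
    """wavelength of the mode with integer index `mode_number`."""
    return 2.0 * n * cavity_length_m / mode_number

l_cav, n_idx = 300e-6, 3.5
for x in (1354, 1355, 1356):   # mode numbers that land near 1550 nm
    lam_nm = resonant_wavelength(l_cav, n_idx, x) * 1e9
    print(f"x = {x}: {lam_nm:.2f} nm")
```

for these assumed values adjacent modes come out roughly 1.1 nm apart, of the same order as the multi-line spectrum of a few nm width quoted later for index guided fp-lds.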
the wavelengths produced are related to the distance between the mirrors by the following formula: cl = x·λ / (2·n) (2), where λ is the wavelength, cl is the length of the cavity, x is an arbitrary integer 1, 2, 3…, and n is the refractive index of the active medium. in its basic form, an fp-ld is similar to an edge-emitting led with mirrors on the ends of the cavity, and it should be easier to construct than an led. in an led, a lot of attention is devoted to collecting and guiding the light within the device towards the exit aperture. in an ideal laser, the problem of guiding the light does not arise: lasing happens only between the mirrors and the light produced is exactly guided, although in practice it is not quite as simple as this. all types of fp-ld have electrical contacts on the top and on the bottom to supply the injection current. a simple double hetero-structure laser is shown in figure 4; mirrors are formed at the ends of the cavity by the "cleaved facets" of the crystal from which it is made. figure 4: index guided fp-ld. the operational principle of an index guided fp-ld differs from that of a gain guided fp-ld. if strips of semiconductor material are put beside the active region, as shown in figure 4, an index guided fp-ld is created: the active region is surrounded on all sides by material of lower refractive index. mirrored surfaces are formed, which guide the light much better than gain guidance alone. any light that strikes the edges of the cavity is captured and guided back into the cavity, and additional modes reflecting from the sides of the cavity are eliminated. this is not much of a power loss, since lasing cannot occur in these modes and only spontaneous emissions leave the cavity this way. an index guided fp-ld produces a spectral width of between 1 nm and 3 nm, usually with between 1 and 5 lines, and a linewidth generally around 0.001 nm, which is better than the gain guided fp-ld. 3 system analysis a.
system models in this article, the proposed system model is discussed. it contains a modulator, awgs, the pon link, demodulators and the fp-ld. the proposed pon architecture is shown in figure 5. in the downstream direction, a cw laser at 193.1 thz is modulated by an mzm with 10 gbps nrz downstream data to generate the desired downstream signal. the generated signal is sent to the first awg at the co, which multiplexes it, and it is then sent over the bidirectional optical fiber. it passes through the second awg at the rn, which demultiplexes it, and the demultiplexed signal is sent to the onu. at the onu, using an optical splitter/coupler, a portion of the signal is fed to a balanced receiver. for the upstream, the other portion of the downstream signal from the splitter/coupler is re-modulated with 2.5 gbps nrz upstream data by the fp-ld in the onu. the re-modulated ook signal passes back through the awg, which multiplexes the upstream signal, and it is then sent over the bidirectional optical fiber. the upstream signal passes through the first awg and is received at the co; a circulator is used to avoid influencing the downstream signal, and the upstream signal is sent to a pd that receives it in the co. figure 5: block diagram of the proposed bidirectional pon system model. the system model is categorized into three main parts: the co, the single mode fiber channel and the onu. the parameters of the proposed system are listed in table 1. table 1: simulation parameters used in the proposed system. layout parameters: bit rate (downstream) 10 gbps; bit rate (upstream) 2.5 gbps; sequence length 128 bits; samples per bit 64; number of samples 8192. optical transmitter (cw laser): laser input power, pin, 1 mw (0 dbm); frequency/wavelength 193.1 thz / 1550 nm; laser linewidth 10 mhz. optical link: length 10 km; attenuation 0.2 db/km; dispersion 16.75 ps/(nm×km).
optical attenuator: attenuation 10 db. optical receiver (pin pd): responsivity 1 a/w; dark current 10 na. filter type: low pass bessel filter (lpbf), 4 ghz for downstream and 1.7 ghz for upstream. b. bidirectional wdm-pon system based on fp-ld with two cascaded awgs: i. co part: the transceiver at the co is shown in figure 6(b). this model includes an awg after the circulator as shown in the figure; it operates as a multiplexer in the downstream direction and as a demultiplexer in the upstream direction. the min. ber equals 3.6 × 10 2. the eye diagram of the upstream received signal at the co is shown in figure 6(a). (a) (b) figure 6: (a) eye diagram for the upstream signal at the co in wdm-pon with awg at the rn, (b) the transceiver at the co. ii. bidirectional channel part the channel includes the bidirectional optical fiber and an awg. a bidirectional single mode fiber of 10 km is used to carry the signal in both directions, with an optical delay of 1 unit in order to separate the upstream and downstream signals. table 2 shows the main parameters of the bidirectional optical fiber. table 2: bidirectional optical fiber parameters in gpon. reference wavelength 1550 nm; length 10 km; attenuation 0.2 db/km; dispersion 16.75 ps/(nm×km); dispersion slope 0.075 ps/(nm²×km). the downstream optical signal passes through the awg, which is used as a demultiplexer in the downstream direction and as a multiplexer in the upstream direction. the main parameters of the awg are listed in table 3. table 3: awg parameters at the rn. size 2 (two input ports and two output ports); frequency 193.1 thz; bandwidth 25 ghz; frequency spacing 100 ghz; insertion loss 0 db; return loss 65 db; depth 100 db; filter type gaussian; filter order 2. iii. onu part the transceiver at the onu includes two parts: the first part receives the signal from the co and the second part sends the signal to the co. figure 7 illustrates the onu part.
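the layout and link parameters in tables 1-3 can be sanity-checked with a few lines of arithmetic. the power budget below ignores splitter, connector and circulator losses, so it is an idealised sketch rather than the simulated received power:

```python
# consistency checks on table 1 plus a back-of-the-envelope power budget
# from tables 1-3 (0 dbm launch, 0.2 db/km over 10 km, a 10 db attenuator,
# 0 db awg insertion loss). other losses are deliberately ignored here.

seq_len, samples_per_bit = 128, 64
total_samples = seq_len * samples_per_bit   # should match table 1
time_window_ns = seq_len / 10e9 * 1e9       # 128 bits at 10 gbps downstream

def received_power_dbm(launch_dbm, length_km, atten_db_per_km, extra_losses_db):
    """launch power minus fiber attenuation and discrete losses, in db terms."""
    return launch_dbm - length_km * atten_db_per_km - sum(extra_losses_db)

p_rx = received_power_dbm(0.0, 10.0, 0.2, [10.0, 0.0, 0.0])

print(total_samples)               # 8192, as listed in table 1
print(f"{time_window_ns:.1f} ns")  # 12.8 ns simulated window
print(f"{p_rx:.1f} dbm")           # -12.0 dbm idealised received power
```

the -12 dbm figure sits inside the -18 to -8 dbm received-power range swept in the ber curves later, which is consistent with the tables.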
figure 7: onu part in wdm-pon with two cascaded awgs. the onu includes several components for receiving the downstream signal and for transmitting the upstream signal. the receive path includes a splitter, an optical attenuator, a pin pd, an lpbf, a 3r regenerator and a ber analyzer; the transmit path includes the fp-ld, an nrz generator and a prbs generator. the splitter splits the downstream signal into two parts: one is received by the pin pd and the other is passed to the fp-ld. the ber analyzer measures the ber of the downstream signal; the min. ber equals 1 × 10 . the eye diagram of the downstream signal is shown in figure 8. figure 8: eye diagram of the downstream signal at the onu in wdm-pon with two cascaded awgs. iv. ber versus received power for the proposed system with two cascaded awgs: in this section, we show the influence of received power variation on the ber of both the upstream and downstream signals. as described in the previous section, the system includes three main parts: the co part, the bidirectional smf and the onu part. the ber versus downstream received power pd curves for the downstream and upstream signals are shown in figure 9. figure 9: min. log of ber versus downstream received power at the onu for downstream and upstream in wdm-pon with two cascaded awgs. it can be seen from figure 9 that the ber versus the downstream received power pd (injected power) at the onu for the upstream signal goes down as pd increases from -18 dbm to -8 dbm: when pd = -18 dbm, the ber = 6×10⁻¹¹; when pd = -8 dbm, the ber = 2.7×10⁻¹⁸. for the downstream signal, the ber curve likewise goes down with pd from -18 dbm to -8 dbm: when pd = -18 dbm, the ber = 1×10⁻¹³; when pd = -8 dbm, the ber = 1×10⁻¹⁶. figure 10 illustrates the ber versus upstream received power pu at the co. figure 10: min.
log of ber versus upstream received power at the co for downstream and upstream signals in wdm-pon with two cascaded awgs. it can be seen from figure 10 that the ber versus the upstream received power pu at the co for the upstream signal goes down as pu increases from -13.89 dbm to -13.835 dbm: when pu = -13.89 dbm, the ber = 6×10⁻¹¹; when pu = -13.835 dbm, the ber = 2.7×10⁻¹⁸. for the downstream signal, the ber curve goes down with pu from -13.89 dbm to -13.835 dbm: when pu = -13.89 dbm, the ber = 1×10⁻¹³; when pu = -13.835 dbm, the ber = 1×10⁻¹⁶. v. upstream ber versus fp-ld bias current for the proposed system with two cascaded awgs: in this section, we examine the effect of the fp-ld bias current on the upstream ber at the co for the proposed model. the input power of the cw laser at the co is fixed at 0 dbm. figure 11 shows the upstream ber versus the bias current of the fp-ld. figure 11: upstream ber versus bias current of fp-ld. table 4: upstream ber versus bias current (ib). ib = 30, 37.5, 45, 52.5 and 60 ma give upstream ber = 5.7×10⁻¹⁰, 1×10⁻¹⁰, 8×10⁻¹², 1×10⁻¹⁴ and 1.5×10⁻¹⁷ respectively. we can conclude from table 4 that the upstream ber decreases, i.e. improves, as the bias current of the fp-ld is increased. vi. wdm-pon based on fp-ld versus wdm-pon based on rsoa with two cascaded awgs: we study the effect of the cw laser power on the ber at the co for the two systems when the input power is fixed at 0 dbm. figure 12 shows the architecture of these colorless systems. figure 12: architecture of wdm-pon showing colorless sources based on rsoa or fp-ld with two cascaded awgs. the fp-ld and the rsoa are used at the onu to re-modulate the downstream signal with the upstream data (2.5 gbps), which is sent to the co. considering the effect of using the fp-ld on the upstream signal at the co, the min. ber of the upstream signal equals 6×10⁻¹¹, while the min. ber of the upstream signal when using the rsoa equals 1×10⁻⁶.
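the ber figures above come from eye-diagram analysis in the simulator; for an ook receiver limited by gaussian noise, the textbook relation ber = 0.5·erfc(q/√2) shows why the ber falls so steeply once the received power (and hence the q-factor) rises. this generic helper is a sanity check, not the paper's simulator:

```python
# textbook q-factor to ber relation for ook with gaussian noise:
# ber = 0.5 * erfc(q / sqrt(2)). not the simulator used in the paper.
import math

def ber_from_q(q):
    """bit error rate for a given linear q-factor."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

for q in (6.0, 7.0, 8.0):
    print(f"q = {q}: ber = {ber_from_q(q):.2e}")   # q = 6 gives ~1e-9
```

a one-unit increase in q already buys several decades of ber, which matches the very steep ber-versus-power slopes reported in figures 9 and 10.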
the upstream received power when using the fp-ld equals -13.89 dbm, while the upstream received power when using the rsoa equals -3 dbm. the differences between the rsoa and the fp-ld are summarized in table 5. table 5: comparison between using fp-ld and rsoa in wdm-pon. min. ber for the upstream signal at the co: 6×10⁻¹¹ (fp-ld) versus 1×10⁻⁶ (rsoa); received power at the co for the upstream signal: -13.89 dbm (fp-ld) versus -3 dbm (rsoa); cost: low (fp-ld) versus high (rsoa); amplifies the incoming signal: no (fp-ld) versus yes (rsoa). figure 13 shows the comparison between the upstream ber for wdm-pon based on rsoa and on fp-ld when the input power of the cw laser is increased from 0 dbm to 10 dbm. figure 13: wdm pon based on rsoa versus wdm pon based on fp-ld: upstream ber as the input power of the cw laser is increased. the wdm pon based on the fp-ld outperforms the wdm pon based on the rsoa, since the upstream ber values of the proposed system are better, as illustrated in figure 13. 4. conclusion the proposed model includes two cascaded awgs and an fp-ld. the model contains two awgs in order to increase the number of onus and the number of multiplexing and demultiplexing channels, and to support more security and privacy. the fp-ld is a very effective device in this model because it is a low-cost optical source. the results of this model were compared with a model that uses an rsoa at the onu, and the fp-ld proves better than the rsoa because the upstream ber with the fp-ld is lower than the upstream ber with the rsoa. 5. references [1] f. el-nahal and a. husein, "bidirectional wdm-pon architecture using a reflective filter and cyclic awg," optik - int. j. light electron opt., vol. 122, issue 19, pp. 1776-1778, october 2011. [2] c. arellano, c. bock, and j. prat, "rsoa-based optical network units for wdm-pon," optical society of america, pp. 1-3, 2005. [3] j. yu, b. kim, n. kim, "wavelength re-use scheme with reflective soa for wdm-pon link," vol. 3, pp. 1704-1710, 2008. [4] l. y. chan, c. k. chan, d. t. k. tong, e. tong and l. k.
chen, "upstream traffic transmitter using injection-locked fabry-perot laser diode as modulator for wdm access networks," electronics letters, vol. 38, no. 1, 3rd january 2002. [5] z. xu, y. wen, c. chae, y. wang, and c. lu, "10 gb/s wdm-pon upstream transmission using injection-locked fabry-perot laser diodes," lightwave department, institute for infocomm research, singapore 119613, 2006. [6] e. wong, "next-generation broadband access networks and technologies," journal of lightwave technology, vol. 30, no. 4, february 15, 2012. [7] c. dragone, "an n×n optical multiplexer using a planar arrangement of two star couplers," ieee photon. technol. lett., vol. 3, pp. 812-815, 1991. [8] m. cen, "study on supervision of wavelength division multiplexing passive optical network systems," master of science thesis, kth information and communication technology, pp. 9-10, 2011. [9] n. frigo, "a survey of fiber optics in local access architectures," in optical fiber telecommunications iiia, edited by i. p. kaminow and t. l. koch, academic press, pp. 461-522, 1997. [10] h. dutton, "understanding optical communications," international technical support organization, pp. 102-113, september 1998. transactions template journal of engineering research and technology, volume 2, issue 1, march 2015 41 a comparative study of the thermal comfort by using different building materials in gaza city (jert) husameddin m. dawoud college of applied engineering & urban planning, university of palestine, gaza strip, palestine, hdawod@gmail.com abstract—this study compares different alternatives of construction in gaza city, proposing a new approach that uses available construction materials to improve the thermal resistance of the building and to minimize energy losses. using available materials with different detailed techniques, the focus was on three systems applied to residential construction in gaza city.
common materials used in the building envelope, such as stone, hollow block and plaster, are combined in different ways to form three building envelope systems. after thorough on-site investigation and data collection, the information, along with regional weather data, was input into the ecotect energy simulation software for thermal performance evaluation. the breakdown analysis of passive gains indicates that the majority of heat losses occur via conduction heat transfer (building fabric). this study found that using a 5 cm air gap in exterior walls saves 50% of the energy required to maintain a comfortable temperature inside the home. the study demonstrates through graphic illustration how a building envelope reacts significantly to outdoor conditions. in addition, it shows ways for the research to be extended through simulations using the ecotect software. this research contributes to the promotion of passive and low energy architecture towards a sustainable future. index terms— thermal comfort, envelope systems, air gap, wall cavity, ecotect. i introduction the construction and design of buildings in palestinian areas have changed considerably over the last century. flat-roofed, thin-walled buildings of relatively low thermal insulation have replaced the old dome-roofed, thick, high-walled houses, which were characterized by good thermal insulation and ventilation. the new buildings are characterized by more efficient use of construction materials, but, being in dire need of heating, cooling and ventilation systems, their energy consumption has increased ever since. the building sector accounts for about 40% of the total energy consumption and 38% of the co2 emissions in the u.s. [1]. in palestine, however, local homes still suffer an energy loss in winter that exceeds six times the energy loss of buildings in the u.s. [2].
therefore, this study focuses mainly on the thermal performance of the available alternatives of materials and construction in palestine, comparing their environmental performance. such buildings are expected to save energy and to be environment-friendly in the long term. selecting suitable construction materials is sufficient to improve the thermal resistance of a building dramatically, and hence to minimize energy losses. the focus was on the envelope of the building because of its vast effect on the thermal behaviour inside the building. in order to test their thermal properties, certain building materials have been investigated for constructing the walls of a residential private house in gaza city (figure 1). this building actually exists in gaza city and has two floors: the ground floor has an area of 190 m², and the first floor has an area of 132 m² (figure 2). the main elevation faces south and is not shaded by any vegetation or structure that could alter the results. the house is located in the middle part of gaza strip, which has a warm steppe climate that changes into a warm desert climate in the southern part [3]. figure 3 displays the climate characteristics of the study area. they were adapted from el arish city in egypt because gaza strip does not have weather data up to the moment. el arish city is located 70 km from gaza strip and has the same climate characteristics and geographic conditions. ii wall materials in palestine buildings in palestine consist of concrete structures with flat roofs and hollow-block walls, with stone used for cladding and a total wall thickness exceeding 25 cm. the use of stone for cladding is not always affordable because of its cost and availability. another common alternative is rendering the inner and outer sides of the building with cemented plaster only; in this case, the thermal transmittance as well as the energy loss will be high, causing discomfort to the inhabitants.
the need for a comfortable indoor climate has led to the development of other construction techniques to overcome the drawback of the scarcity or high cost of insulation materials. these new techniques are based on using an air gap or polystyrene boards 2 to 5 cm thick placed inside the hollow-block concrete wall. iii research material exterior wall thermal insulation can effectively reduce both the annual energy consumption and the peak loads of cooling and heating systems. it is well known that most modern palestinian buildings consist of walls constructed from stone, concrete, hollow block and plaster [2]. stone is used only as a cover material because of its fancy appearance in facades rather than for its thermal properties. stone is obtained by taking rock from the earth and reducing it to the required shapes and sizes [4]. the majority of stone quarries in palestine are concentrated in the west bank area due to its rocky land. the limitations of the stone industry as well as the obstacles to importing stone into gaza city make it costly to use. therefore, hollow blocks made from cement and aggregates, plastered from both sides, are used as the main building material. hollow block has a standard height (20 cm) and length (40 cm), and comes in different widths (20 cm, 15 cm and 10 cm). external plaster, made from cement, sand and lime, is a common choice for residential buildings. its colour is variable and can be chosen to absorb or reject solar radiation [5]. however, the choice of paint colour in palestine is not based on calculations or scientific methods; it is commonly known that light colours reflect light and can help reduce heat gain in summer [6]. the thermal performance of the selected materials was evaluated solely on the basis of computer simulations. the colour of external materials was set to 4de7d3, while the internal colour was fixed to a3f8f8.
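the wall assemblies described in this section are compared later through their simplified u-values, computed as u = 1/(r_si + Σ d/k + r_se). the sketch below uses generic handbook-style conductivities and an assumed air-gap resistance, so it only approximates the figures reported for the three alternatives rather than reproducing them:

```python
# simplified u-value of a layered wall: u = 1 / (r_si + sum(d / k) + r_se).
# conductivities and the air-gap resistance are assumed handbook values,
# not the exact material data used in the ecotect model.

R_SI, R_SE = 0.13, 0.04   # standard internal/external surface resistances, m2k/w

def u_value(layer_resistances_m2kw):
    """overall heat transfer coefficient of the assembly, w/m2k."""
    return 1.0 / (R_SI + sum(layer_resistances_m2kw) + R_SE)

# sketch of a cavity wall: plaster / 15 cm block / 5 cm air gap / 10 cm block / plaster
cavity_wall = [
    0.015 / 0.80,   # external plaster, k ~ 0.8 w/mk (assumed)
    0.150 / 0.90,   # 15 cm hollow block, k ~ 0.9 w/mk (assumed)
    0.18,           # unventilated 5 cm air gap, r ~ 0.18 m2k/w (assumed)
    0.100 / 0.90,   # 10 cm hollow block (assumed)
    0.015 / 0.70,   # internal plaster, k ~ 0.7 w/mk (assumed)
]
print(f"u ~ {u_value(cavity_wall):.2f} w/m2k")   # ~1.50 with these assumptions
```

dropping the air-gap term from the list shows how much the cavity alone lowers the u-value, which is exactly the effect the three wall alternatives are designed to compare.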
iv simulation the simulations were run on a computer model using autodesk ecotect 2011. in order to ascertain the direct effect of the wall materials on the thermal behaviour of the building, only the material properties and details of the walls were altered for each run; in other words, the materials and dimensions of the doors, windows, roof and floor constructions remained the same. doors were set to solid core pine timber. windows were single glazed with timber frames. roofs were flat, made from concrete and hollow block with a thickness of 25 cm. floors were 10 cm concrete slabs. the simulation was analyzed for all zones of the house together. the active system for heating and cooling in all zones was set to a mixed-mode system with 95% efficiency. stairs, bathrooms and toilets were designed to have natural ventilation. the scope is limited to three alternatives for the walls. the first alternative consists of stone cladding 4 cm thick, followed by 5 cm of mortar to fix the stone. the next layer is hollow block with a thickness of 20 cm, covered by 1.5 cm of internal plaster, as shown in table 1a. the time lag for these layers is 2.41 hours; it was calculated using the dynamic thermal properties calculator (ver 1.0) [7], as ecotect does not provide it directly. the simplified u-value for alternative a (based on the admittance method) is 2.33 w/m²k. the u-value was calculated and assigned to the relevant wall components in ecotect. the second alternative (b) consists of three layers. the first layer is the external plaster with a thickness of 1.5 cm. figure 2: analyzed building plans, ground and first floors (image courtesy of zawaya co. [11]). figure 1: the case study residential private house in gaza city (image courtesy of zawaya co. [11]). the density of the external plaster is higher than that of the internal one to withstand the fluctuations of the weather. the second layer is hollow block with a thickness of 20 cm.
The internal layer is plaster with a thickness of 1.5 cm, as shown in Table 2A. The time lag is 1.87 hours and the simplified U-value for these three layers is 2.51 W/m2K. The third alternative (C) consists of five layers: 1.5 cm external plaster, 15 cm hollow block, 5 cm air gap, 10 cm hollow block and 1.5 cm internal plaster (see Table 3A). The time lag is 2.02 hours and the simplified U-value is 1.60 W/m2K. Computer simulations make it possible to analyse conditions that have not yet been tested in reality, and to draw conclusions from comparisons of different building systems before construction work begins. Although simulation studies with Ecotect were carried out for different months of the year, only the results for average temperature are presented here for brevity.

In order to compare the behaviour of the different materials, the simulation was first run on a computer model of alternative A. Figure 4 shows the monthly loads required to maintain the temperature between 18.0 C and 26.0 C throughout the year. Red bars above the horizontal centre line are heating loads during autumn and winter, while blue bars are cooling loads during summer and spring. When the house is enveloped using alternative A, the energy consumption is 23,381 kWh per year. The system providing heating and cooling was fixed to a mixed-mode system, a combination of air conditioning and natural ventilation in which the HVAC system shuts down whenever outside conditions are within the defined thermostat range. Adaptive methods were chosen for the calculation because adaptive comfort models add a little more human behaviour to the mix: they assume that if changes in the thermal environment produce discomfort, people will generally change their behaviour and act in a way that restores comfort.
Such actions could include taking off clothing, reducing activity levels or even opening the windows. The main effect of such models is to widen the range of conditions that designers can consider comfortable, especially in naturally ventilated buildings where the occupants have a greater degree of control over their thermal environment.

V Results and Discussion

The simulations were run on a computer model for the three alternatives using the Ecotect software, and the results are summarized in a graph covering the entire year for the given building with all the conditions applied in the analysis. The loads for the three envelope alternatives are presented together in Table 4. The loads of alternative B are the highest (26,737.334 kWh), while the loads of alternative C are the lowest (19,207.45 kWh). Alternative C saves about 30% of the loads compared with alternative B, owing to the efficiency of its walls in reducing gains and losses. Gains and losses occur via the various heat transfer mechanisms within a zone: conduction, sol-air, direct solar, ventilation, internal and inter-zonal gains and losses, as indicated by the colours in the legends below Figures 5, 6 and 7. Values above the horizontal axis indicate heat gain; values below it indicate heat loss. On the left of these figures, the passive gains breakdown is measured in watts per hour per square metre, while on the right the gains are presented as percentages.

Figure 3: Diurnal averages of outdoor air temperature and solar radiation for the Gaza Strip

The passive gains and losses breakdown analyses indicate that the majority of heat losses during winter, and heat gains during summer, occur via conduction through the building envelope. The gains and losses analysis for alternative A shows that 54% is caused by the use of stone in the exterior wall cladding (Figure 5a).
Using an air gap in the exterior walls of the building could save more energy than using stone for cladding, as in alternative A (Figure 5a), or than using plaster only for the exterior walls, as in alternative B (Figure 6a). Building conduction accounts for 54% and 60% of gains and losses in the breakdown analyses for alternatives A and B respectively. Alternative C is the most efficient option: only 41% of its heat gains and losses is due to building conduction (Figure 7a). This study therefore suggests that a building envelope in general, and the building's walls in particular, with low U-values should reduce heat gains and losses. According to the Palestinian code for thermal insulation, the overall heat transfer coefficients ("U" factors) should not exceed 1.8 W/m2K. Alternative C fulfilled this code and provided the lowest wall U-value (1.60 W/m2K) compared with alternatives A and B (2.33 and 2.51 W/m2K respectively). In conclusion, conduction heat gains and losses are reduced from around 54% and 60% to around 41%. It should be noted that these values are relative to the total amount of heat gains and losses. Figure 8 compares these values to the totals and shows the significant difference between exterior walls with an air gap (alternative C) and the other alternatives. The current study found that the peak percentages of heat gains and losses in alternative C, with an air gap for insulation, are roughly halved. In accordance with the present results, a previous study for the Palestinian Ministry of Local Government [8] demonstrated that homes with insulation save 50% of the energy required to maintain a comfortable temperature inside the home. It is true that installing insulation materials costs more.
However, the study of the Ministry of Local Government [8] shows that within at most two years of running the heating or cooling system, the savings in consumed energy will compensate for the money spent installing the insulation materials. This finding supports previous research in this area linking air gaps for insulation with cost savings. Mahlia, Ng, Olofsson and Andriyana [9] found that additional life-cycle cost savings of 0.64%/m2 of wall can be achieved by applying a 6 cm air gap with the selected insulation at optimal thickness. Moreover, Sadrzadehrafiei, Mat, Sopian and Lim [10] found that adding a 2 cm air gap in a brick wall decreases fuel consumption and emissions; introducing insulation of optimal thickness between 3 and 5 cm together with a 2 cm air gap reduced energy consumption cost by 24-26% compared with a wall without insulation and air gap. Heat transfer through the walls is minimized while economic and environmental advantages are also attained.

Table 4: Comparison between the three alternatives of wall materials

Month | Loads of alternative A (kWh) | Loads of alternative B (kWh) | Loads of alternative C (kWh)
Jan   | 1395.981  | 1761.632  | 866.866
Feb   | 1238.777  | 1560.258  | 784.666
Mar   | 194.416   | 273.902   | 104.918
Apr   | 592.126   | 717.526   | 474.25
May   | 1015.314  | 1210.707  | 843.143
Jun   | 3474.519  | 3944.548  | 2930.56
Jul   | 4957.645  | 5466.524  | 4270.838
Aug   | 5189.35   | 5675.152  | 4539.039
Sep   | 3324.176  | 3635.027  | 2919.985
Oct   | 913.854   | 1068.838  | 853.13
Nov   | 166.602   | 227.688   | 92.497
Dec   | 917.779   | 1195.531  | 527.562
Total | 23380.537 | 26737.334 | 19207.45

Figure 4: Alternative A monthly heating/cooling loads; heating load = 4,112 kWh, cooling load = 19,269 kWh; total loads = 23,381 kWh.

Figure 8: Passive gains and losses comparison for the three alternatives (losses/gains: A 34.82%/34.26%, B 43.75%/47.95%, C 21.43%/17.79%)
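The totals in Table 4, and the saving of alternative C over alternative B, can be checked directly from the monthly figures. A quick sketch (the exact saving comes out at roughly 28%, which the paper rounds to about 30%):

```python
# Monthly heating+cooling loads [kWh] for the three wall alternatives (Table 4).
loads = {
    "A": [1395.981, 1238.777, 194.416, 592.126, 1015.314, 3474.519,
          4957.645, 5189.35, 3324.176, 913.854, 166.602, 917.779],
    "B": [1761.632, 1560.258, 273.902, 717.526, 1210.707, 3944.548,
          5466.524, 5675.152, 3635.027, 1068.838, 227.688, 1195.531],
    "C": [866.866, 784.666, 104.918, 474.25, 843.143, 2930.56,
          4270.838, 4539.039, 2919.985, 853.13, 92.497, 527.562],
}

# Annual totals and the relative saving of C compared with B.
totals = {alt: sum(months) for alt, months in loads.items()}
saving_c_vs_b = (totals["B"] - totals["C"]) / totals["B"]

print({alt: round(t, 2) for alt, t in totals.items()})
print(f"Alternative C saves {saving_c_vs_b:.1%} of alternative B's loads")
```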
VI Conclusion

We conclude that the building's envelope must have priority in thermal insulation works, especially in multi-storey buildings, because of its relatively large area compared with other building elements such as the roof and windows. An air gap inside the exterior walls acts as a moderator. It has the highest thermal resistivity of the compared materials, it performs well in both hot and cold climates, and it has the best R-value. It is the costliest of the compared options, and it is neither combustible nor perishable. External walls with an air gap should be handled carefully to minimize air flow and to stop any leakage that could ruin the insulation system. The efficiency improvements provide a platform for designers to account for thermal properties beforehand and to ensure that energy losses are minimized. The final results are interpreted from the total heat gains and losses computed with the Ecotect software. The focus was on the building envelope, more specifically the exterior walls, because of their significant effect. However, the characteristics of other building elements are also important factors in determining the energy efficiency of buildings. The orientation of individual buildings, which can enhance energy use considerably, could be investigated in future studies, bearing in mind that the crowdedness of the Gaza Strip and expensive land prices limit the flexibility to choose the best building orientation for energy efficiency.

References
[1] DOE (The United States Department of Energy), Buildings Energy Data Book, 2009.
[2] Ministry of Local Government, Construction Materials & Local Market Survey in Palestinian Territories, 2002.
[3] The Climate of Palestinian Territories, 2014 [cited 24 Jun 2014]; available from: http://www.whatstheweatherlike.org/palestinianterritories/.
[4] Muhsen, H.L.A.-A., Decision Making in the Selection of the Exterior Walls Techniques in Affordable Housing Buildings in Palestine, 2012, National University.
[5] Palme, M., J. Guerra and S. Alfaro, Thermal performance of traditional and new concept houses in the ancient village of San Pedro de Atacama and surroundings. Sustainability, 2014. 6(6): p. 3321-3337.
[6] Hadid, M., Establishing, Adoption, and Implementation of Energy Codes for Building: Architectural Styles Survey in Palestinian Territories, 2002.
[7] MPA The Concrete Centre, Dynamic Thermal Properties Calculator (ver 1.0), 2014; available from: http://www.concretecentre.com/.
[8] Ministry of Local Government, Cost Efficiency of Thermal Insulation, 2002.
[9] Mahlia, T., et al., Energy and cost savings of optimal thickness for selected insulation materials and air gaps for building walls in tropical climate. Energy Education Science and Technology, 2012. 29(1): p. 597-610.
[10] Sadrzadehrafiei, S., K.S.S. Mat and C. Lim, Determining the cost saving and emission reduction of optimum insulation thickness and air gap for building walls. Australian Journal of Basic and Applied Sciences, 2011. 5(12): p. 2287-2294.
[11] Zawaya Company for Design and Consultation, Gaza Strip, Palestine (2014). Private House Design. Retrieved from http://www.zawaya.ps

Author
Dr Husameddin Dawoud is currently an assistant professor in the College of Applied Engineering & Urban Planning at the University of Palestine in Gaza. He received his Ph.D. in theory in design from University Science Malaysia in 2014. He held the post of director of architectural heritage at the Islamic University of Gaza in 2009, and has been chief engineer at Zawaya Company for Design and Consultation since 2007. His research interests include CAD and AAD in architectural design, flow theory and creativity in design, rehabilitation of historic buildings, and green buildings. He is a member of the Association of Engineers (Gaza Governorates).
He has published 6 scientific articles in his fields of interest. He can be contacted at hdawod@gmail.com.

Appendix

Table 1A: Material properties for alternative A
Admittance [W/m2K]: 5.15; time lead [hours]: 1.24; time lag (decrement delay) [hours]: 9.47; time lag [hours]: 2.41; simplified U-value (based on admittance method) [W/m2K]: 2.33

Material     | Thickness [mm] | Density [kg/m3] | Specific heat capacity [J/kg/K] | Thermal conductivity [W/m/K]
Stone        | 40  | 2300 | 1000  | 1.8
Mortar       | 50  | 2300 | 1000  | 1.75
Hollow block | 200 | 2000 | 836.8 | 1.1
Plaster      | 15  | 1300 | 1000  | 0.57

Table 2A: Material properties for alternative B
Admittance [W/m2K]: 4.99; time lead [hours]: 1.23; time lag (decrement delay) [hours]: 6.60; time lag [hours]: 1.87; simplified U-value (based on admittance method) [W/m2K]: 2.51

Material                       | Thickness [mm] | Density [kg/m3] | Specific heat capacity [J/kg/K] | Thermal conductivity [W/m/K]
External render (cement, sand) | 15  | 1800 | 1000  | 1
Hollow block                   | 200 | 2000 | 836.8 | 1.1
Plaster                        | 15  | 1300 | 1000  | 0.57

Table 3A: Material properties for alternative C
Admittance [W/m2K]: 5.15; time lead [hours]: 1.24; time lag (decrement delay) [hours]: 9.47; time lag [hours]: 2.02; simplified U-value (based on admittance method) [W/m2K]: 1.60

Material                       | Thickness [mm] | Density [kg/m3] | Specific heat capacity [J/kg/K] | Thermal conductivity [W/m/K]
External render (cement, sand) | 15  | 1800 | 1000  | 1
Hollow block                   | 150 | 2000 | 836.8 | 1.1
Air gap                        | 50  |      |       |
Hollow block                   | 100 | 2000 | 836.8 | 1.1
Plaster                        | 15  | 1300 | 1000  | 0.57
Figure 5A: Passive gains and losses breakdown graph for alternative A
Figure 6A: Passive gains and losses breakdown graph for alternative B
Figure 7A: Passive gains and losses breakdown graph for alternative C

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015

Spatio-Temporal Analysis of Land Use and Land Cover Using GIS. Case Study: Gaza City (Period 1999-2007)

Maher A. El-Hallaq
Assistant Professor of Surveying and Geodesy, Civil Eng. Department, The Islamic University of Gaza, Palestine, mhallaq@iugaza.edu.ps

Abstract: In recent years, Gaza City has been exposed to a large amount of land use and land cover change as a result of the lack of planning and monitoring programs. This leads to complex, serious problems such as lack of storm-water infiltration, the impact of global warming, potential agricultural failures, and soil erosion. Because of the increasing changes in land use, caused mainly by human activities, detecting such changes and assessing their trends and environmental effects is necessary for future planning and resource management. This study aims to detect the changes in land use and land cover that occurred in Gaza City between 1999 and 2007 using GIS techniques. It shows that within this period the built-up areas recorded the highest increase (8.06%). On the other hand, both green and dry lands decreased: green land fell from 41.79% in 1999 to 38.80% in 2007, and dry land from 24.16% to 18.80%. The wet land area increased to 0.96% of the total in 2007. Based on these numbers, the study expects that built-up areas will become dominant at the expense of the other categories as a result of continuous population growth, in accordance with the proposed 2025 master plan of Gaza City.
The study strongly recommends giving the local community a real opportunity to share in awareness campaigns that introduce the scope of this study to all community sectors, so that coming generations are aware of LULC.

Index Terms: change detection, Gaza City, land use and land cover, GIS.

I Introduction

Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times [1]. Reference [2] defines change detection as the comparison and differencing of multi-temporal images of the same geographical area, achieved by using image-handling techniques to analyse the changed areas of the landscape over different times. Change detection is key to monitoring the globe's natural resources through analysis of the spatial distribution of the population of interest. The aspects of change detection that are necessary for monitoring natural resources are: detecting the changes that have occurred, identifying the nature of the change, and measuring its extent [3]. Land use and land cover (LULC) change is the result of complex interactions between biophysical and socio-economic conditions that may occur at different temporal and spatial scales [4]. Land cover refers to the physical conditions on the ground or the natural cover of the land, for example forests and grasslands, while land use refers to human activities such as residential areas, industrial areas and agricultural fields [5]. LULC change detection is required for updating LULC maps and for the management of natural resources. Change detection for land use and land cover is an active topic, and a variety of new techniques have been developed over recent years, so it is not easy to choose a suitable technique for a specific change detection task.
In general, good change detection research should give information such as the area and rate of change, the spatial distribution of the changed types, the change trajectories of land cover types, and an accuracy assessment of the change detection results [6]. For this reason, a review of the change detection techniques used in previous research is useful for understanding how these techniques can best be used. Reference [7] classifies change detection techniques for land use and land cover into seven categories: (1) algebra, (2) transformation, (3) classification, (4) advanced models, (5) geographic information systems (GIS), (6) visual analysis and (7) other techniques.

II The Study Area

Gaza City is a Palestinian city in the Gaza Strip. It is considered the second capital of Palestine because of its strategic location, its economic importance and the presence of most of the headquarters of the Palestinian National Authority [8]. After the establishment of the Palestinian National Authority in 1994, Gaza City witnessed extraordinary expansion, growth and development activities such as the construction of buildings, roads and many other human activities. This led to increased land use and rapid changes in the status of its land use and land cover over time, without any action to monitor and evaluate this status. The area of the city is estimated to be 55.6 square kilometres [9]. Figure 1 shows the geographic location of Gaza City. The city is located on a low hill with an elevation of about 45 metres above sea level. Much of its urban expansion runs parallel to the coast and below the hill, especially to the north and east, forming the neighbourhoods and borders of the city. The port of Gaza is located three kilometres west of the city core [9].
Gaza City shares borders with the towns of Jabalya, Beit Lahiya and Beit Hanoun in the north, is enclosed by the Mediterranean Sea in the west and by Al-Zahra City in the south, while the 1978 border restricts the city from the east. Gaza City is divided into seventeen neighbourhoods: El Daraj, Sheikh Radwan, El Awda City, Northern Remal, Southern Remal, Sabra, Nassr, Tuffah, Ijdaida, East Ijdaida, Old City, Shiekh Ejleen, Zaytoon, Tal El-Hawa, Beach Camp, Turkman and East Turkman (see Figure 2). Nowadays, Gaza City is the biggest population centre, with about 496,410 inhabitants and an average population density of almost 6,913 persons/km2 [10]. Table 1 shows Gaza's population for each neighbourhood in 2009 [9]. Aerial images of Gaza City were acquired for two epochs, 1999 and 2007, with resolutions of 50 cm and 20 cm respectively. Unfortunately, only these two images are available. However, studying the change of LULC between them may meet the required needs, since no significant changes have occurred after 2007 up to now, owing to the siege of the Gaza Strip. Figures 3 and 4 show the aerial photographs of Gaza City as well as the city neighbourhoods.

Figure 1: The geographic location of Gaza City
Figure 2: Neighbourhoods in Gaza City

Table 1: The population in the neighbourhoods of Gaza City
SN | Neighbourhood  | Population | %     | SN | Neighbourhood  | Population | %
1  | Al Awda City   | 8250       | 1.40  | 10 | Shiekh Ejleen  | 20350      | 3.46
2  | Al Nassr       | 33000      | 5.61  | 11 | Sheikh Radwan  | 36000      | 6.12
3  | Al Sabra       | 27500      | 4.68  | 12 | Zaytoon        | 66000      | 11.23
4  | Turkman        | 48000      | 8.16  | 13 | East Turkman   | 42000      | 7.14
5  | East Ijdaida   | 1000       | 0.17  | 14 | Old City       | 27500      | 4.68
6  | Ijdaida        | 35750      | 6.08  | 15 | Northern Remal | 22000      | 3.74
7  | Southern Remal | 30250      | 5.15  | 16 | Tuffah         | 41500      | 7.06
8  | Tal El Hawa    | 8800       | 1.50  | 17 | Beach Camp     | 90000      | 15.31
9  | El Daraj       | 50000      | 8.50  |    |                |            |
III LULC Classification

Many land use and land cover classification systems are used around the world, such as the classification of the U.S. Geological Survey and the Ecological Land Classification system. It should be noted that these systems contain categories, such as forests and wetlands, that are not present or used in Palestine, or in the Gaza Strip in particular. In 2007, the Palestinian Central Bureau of Statistics (PCBS) [10] developed a classification system for land use based on the classification system of the Economic Commission for Europe (ECE). This classification contains thirteen different patterns of land use, but it includes some classes that do not exist in the Gaza Strip, such as jungle, settlement territory and natural reserves, although they exist in the West Bank; moreover, it is not comprehensive for all land uses and land cover [11]. In 1997, the Municipality of Gaza, in cooperation with the Ministry of Local Government, developed a structural plan of the city consisting of the following classifications (Figure 5): residential zone classes A, B, C; freeze development zone; agriculture residential zone; beach zone; main commercial centre; old town; commercial facades; tourism and recreation zone; public buildings; green area; archaeological site; public cemeteries; sport zone; existing roads; ring roads; railway land; storm-water collection area; regional transportation centre; industrial area; and agricultural area.

Figure 3: Aerial image (1999)
Figure 4: Aerial image (2007)
Figure 5: Structural plan of Gaza City
Because of the difficulty of obtaining maps with high accuracy, and because the maps available from multiple sources have low resolution, it is difficult to use remote sensing technology here. Instead, GIS technology is used, which also suits the small size of the study area for classifying land use and land cover. For these reasons, a special classification was established in this research to reflect the variety of LULC categories, so that it contains all of them. It was observed that these classes can be collected into groups that suit the nature of the research and cover all land uses and land cover in the city. Therefore, the land use and land cover categories established for Gaza City in this research are:

- Built-up land: residential areas, main commercial centre, old town, commercial facades, tourism and recreation zone, public buildings, public cemeteries, existing roads, ring roads, railway land, regional transportation centre, sport zone, and industrial areas.
- Green land: green and agricultural areas.
- Wet land: storm-water collection areas.
- Dry land: all areas that do not fall under the three previous categories (land covered with gravel).

IV Methodology

In this research, GIS is used as the change detection technique. It is considered an effective technique for studying LULC change, as it helps in tracing, analysing and surveying lands and in calculating areas for the studied categories. In addition, this technique is recognized for its ability to display the two epochs of the study categories on maps.

A Data Preprocessing

Aerial images of Gaza City taken in 1999 and 2007 are used to detect changes in LULC. Several steps were taken to prepare the needed data.
The work starts by preparing the layers required for the various processes in the study: building the database in ArcCatalog, exporting data files of different extensions and converting them to the required formats, loading the files into ArcMap, and preparing all the statistical data in Microsoft Excel to simplify handling them throughout the study. Georeferencing is the first and fundamental step in building the vector spatial model. It was applied to overlay the aerial image of 1999 on that of 2007 using the local coordinate system (Palestine Grid). High accuracy was needed during implementation to achieve the greatest possible congruence between the two images and to align the distortion in the aerial images. Twenty control points were used to rectify and georeference the two images. Point selection considered their distribution over the neighbourhoods of Gaza City, including its border, to reach the maximum agreement between the two aerial maps. This is necessary to achieve accurate area results and to prevent any loss or duplication of area during the study. The RMS error was noted to equal ±0.005 m.

B Digitizing Process

The processing in this project was based mainly on the existing LULC classification of Gaza City. The aim is to build a vector spatial model that describes the spatial data type of the LULC, because the subsequent operations depend on the spatial database produced by this process. All the issues mentioned previously were taken into account during implementation. The primary goal of this process is the spatial representation of the LULC items in Gaza City for the years 1999 and 2007 separately, in order to apply the change detection process to these data.
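The georeferencing step can be illustrated numerically: an affine transformation is fitted to the control points by least squares, and the RMS of the residuals measures how well the two images agree. A minimal sketch with made-up control points (the paper's twenty actual points and its ±0.005 m RMS are not reproduced here):

```python
import numpy as np

# Hypothetical control points: image coordinates -> Palestine Grid coordinates.
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 50], [25, 75]], float)
dst = np.array([[10, 20], [110.1, 19.8], [110.0, 120.2],
                [9.9, 120.1], [60.0, 70.1], [35.0, 95.0]], float)

# Fit x' = a*x + b*y + c and y' = d*x + e*y + f by least squares.
A = np.column_stack([src, np.ones(len(src))])
coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)

# Residuals of the fitted transform at the control points give the RMS error.
residuals = A @ coeffs - dst
rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(f"RMS error = {rms:.4f} map units")
```

With more control points than the six unknowns, the fit is overdetermined, which is why a residual RMS is meaningful as a quality check.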
Digitizing was carried out for each neighbourhood, and its database was built with ArcCatalog to make the data easier to handle in any application following the digitizing process. To guarantee that no areas were mistakenly decreased or increased during digitizing, the snapping function was used for all points (start, end and vertex) between polygons during the process.

C Topologic Model

Topology is one of the most important auditing processes for data accuracy and underlies many of the analytical processes of the project, especially the digitizing stage. Through it, all problems of overlaps and intersections between the vector spatial data are corrected. The large areas digitized require a topology process to check for errors resulting from each district.

D Editing Functions

Editing functions are used throughout all project phases to add, delete or manipulate the geographic position of features. Sliver or splinter polygons are thin polygons that occur along the borders of polygons after digitizing and the topological overlay of two or more coverages. In other words, editing is the detection of errors in text records or spatial database features and the implementation of the needed corrections. Corrections can include additions, deletions and rearrangements, as well as changes of size, font, style, colour, orientation, alignment, scale and rotation. Editing techniques specific to spatial features include changing elevation, thickness and width, attribute assignments, surface textures, dimensioning and others [8].
E Development of a Classification Scheme

Based on the main LULC classification of the Gaza Municipality for Gaza City (built-up, dry land, green land and wet land), a classification scheme was set up to develop the study approach. It is necessary for identifying and interpreting the LULC by linking attribute data with spatial data for each area. Figure 6 shows the classification method.

F Change Detection Tools

The GIS software provides an erase tool, which is useful for showing changes of places and areas in general. To determine the direction of any increase or decrease in the studied classification, the intersect tool can be used. Figure 7 illustrates an example of using the intersect tool.

V Results and Analysis

The results obtained after completing the digitizing process were mainly the calculated areas of the classified LULC regions and the percentage each represents in 1999 and 2007. The built-up class increased from 33.38% in 1999 to 41.44% in 2007, while the dry land area decreased from 11,075.75 dunums in 1999 to 8,617.43 dunums in 2007. Table 2 presents the areas and percentages of the study classification in 1999 and 2007. In general, there is an increase in the built-up class, a decrease in the dry land and green land classes, and a small increase in the wet land class. The percentage changes between the different LULC classes for the period 1999-2007 can be derived from Table 3. According to the results, the highest change is the increase of the built-up class, estimated at 8.06%, an annual increase rate of almost 1%. In addition, an increase of about 0.30% in the wet land class was noticed. Conversely, the dry land class changed by -5.36% and the green land class decreased by about -3.00%.
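The percentage changes above follow directly from the class areas: the net change of each class is divided by the total city area of 45,841.5955 dunums. A quick sketch reproducing them from the 1999 and 2007 areas:

```python
# Class areas [dunum] in 1999 and 2007, from Table 2.
areas = {
    "built up":   (15303.9912, 18998.6425),
    "dry land":   (11075.7577, 8617.4293),
    "green land": (19158.6333, 17784.6436),
    "wet land":   (303.2133, 440.8802),
}
total_area = 45841.5955  # total city area [dunum]

# Net change of each class as a percentage of the total city area.
pct_change = {cls: 100 * (a2007 - a1999) / total_area
              for cls, (a1999, a2007) in areas.items()}

for cls, pct in pct_change.items():
    print(f"{cls:>10}: {pct:+.2f}%")
```

The four percentages sum to zero, since the total city area is fixed and any gain in one class is a loss in another.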
In addition, Table 3 presents a summary of the area change for the LULC types in dunums. It is very important to evaluate the current land use situation and to know the direction of increase or decrease of each LULC class. Generally, change has been noticed across the classification classes in the study area. The intersect tool was used to obtain these results, shown in Figure 8 and Figure 9, which explain the locations of all increases and decreases for built-up, green land, dry land and wet land.

Figure 6: Classification scheme
Figure 7: Example of the use of the intersect tool

Table 2: Areas and percentages of the LULC classes in 1999 and 2007
Classification | Area 1999 (dunum) | % of area | Area 2007 (dunum) | % of area
Built up   | 15303.9912 | 33.38% | 18998.6425 | 41.44%
Dry land   | 11075.7577 | 24.16% | 8617.4293  | 18.80%
Green land | 19158.6333 | 41.79% | 17784.6436 | 38.80%
Wet land   | 303.2133   | 0.66%  | 440.8802   | 0.96%
Total area | 45841.5955 | 100%   | 45841.5955 | 100%

Table 3: Summary of LULC change detection
Classification | Increase (dunum) | Decrease (dunum) | Change area (dunum) | Area 2007 (dunum) | % of change
Built up     | 3963.7120 | 269.0608  | 3694.6512  | 18998.6425 | 8.06%
Dry land     | 2042.5844 | 4500.9129 | -2458.3285 | 8617.4293  | -5.36%
Green land   | 1682.5316 | 3056.5213 | -1373.9896 | 17784.6436 | -3.00%
Wet land     | 137.6669  | 0.0000    | 137.6669   | 440.8802   | 0.30%
Total change | 7826.4949 | 7826.4949 | 0.0000     | 45841.5955 | 0.00%

Figure 9: LULC decreases of classes (1999-2007)

It is noticed that the built-up area in the north-west of the city increased because of Al Awda City, which was constructed at the expense of dry land. In the south of the city, the dry land also decreased because it was transformed into green land. In addition, the built-up class increased in the Tal El Hawa neighbourhood to face the extension of the Netsareem settlement during that time.
The political situation in the Gaza Strip contributed to the decrease of green land in the east of the city, where the area has become dry land because frequent army attacks on the region damaged its agricultural areas. The changes in the LULC over Gaza City neighborhoods can be observed from Table 4, which displays each neighborhood's proportion of the structural plan area, its population, and the change detection in the built up, green land, dry land and wet land classes.

Figure 8: LULC increases of classes (1999-2007)

Table 4: Percent change in LULC of neighborhoods of Gaza City

SN | Neighborhood | Area as structural plan (%) | Population 2009 (%) | Built up (%) | Dry land (%) | Green land (%) | Wet land (%)
1 | Al Awda City | 1.40 | 1.40 | 40.91 | -43.61 | 2.70 | 0.00
2 | Al Nassr | 4.46 | 5.61 | 21.06 | -21.06 | 0.00 | 0.00
3 | Al Sabra | 3.31 | 4.68 | 5.33 | -4.49 | -0.84 | 0.00
4 | Turkman | 6.33 | 8.16 | 8.92 | -4.58 | -4.34 | 0.00
5 | East Ijdaida | 10.78 | 0.17 | 2.74 | 1.45 | -6.98 | 2.79
6 | Ijdaida | 6.01 | 6.08 | 6.79 | -1.34 | -5.44 | 0.00
7 | Southern Remal | 5.53 | 5.15 | 10.58 | -9.27 | -1.31 | 0.00
8 | Tal El Hawa | 1.73 | 1.50 | 26.32 | -28.55 | 2.23 | 0.00
9 | El Daraj | 5.30 | 8.50 | 11.77 | -4.45 | -7.31 | 0.00
10 | Shiekh Ejleen | 4.62 | 3.46 | 10.73 | 3.94 | -14.68 | 0.00
11 | Sheikh Radwan | 2.24 | 6.12 | 8.85 | -8.85 | 0.00 | 0.00
12 | Zaytoon | 24.72 | 11.23 | 6.02 | -6.18 | 0.16 | 0.00
13 | East Turkman | 8.66 | 7.14 | 2.34 | 1.32 | -3.66 | 0.00
14 | Old City | 1.53 | 4.68 | 2.07 | -1.81 | -0.26 | 0.00
15 | Northern Remal | 5.09 | 3.74 | 8.38 | -5.98 | -2.39 | 0.00
16 | Tuffah | 6.33 | 7.06 | 7.99 | -5.42 | -2.57 | 0.00
17 | Beach Camp | 1.96 | 15.31 | 4.42 | -5.03 | 0.61 | 0.00

In general, all LULC classes in the neighborhoods have decreased except the built up class. In addition, the green land class increased by about 2.70% in the Al Awda City neighborhood and by 2.23% in Tal El Hawa, basically because of good planning. Otherwise, there is a notable increase of dry land in the southern and eastern parts of the city, particularly in the East Ijdaida and East Turkman neighborhoods and the Shiekh Ejleen district. This refers especially to the security status and the continuation of Israeli invasions during the study period, which converted many green land regions into dry land. Moreover, no increase in wet land area was noticed except in East Ijdaida, where the construction of a sewage treatment station caused a 2.79% change.

Maher A. El-Hallaq / Spatio-temporal analysis in land use and land cover using GIS, case study: Gaza City (period 1999-2007)

According to the results illustrated in Table 4, the highest change detection in the built up class occurred in Al Awda City, estimated at 40.91%, so it can be considered a new neighborhood; it occupies 1.40% of the city area as residential towers and accommodates 1.40% of the population. Tal El Hawa comes after Al Awda City in built up change detection with 26.32%; it holds 1.50% of the population and constitutes 1.73% of the total city area. Al Nassr is the third largest in terms of change detection, with 21.06% while occupying 4.46% of the city area. This is a high rate compared to Northern Remal, which constitutes 5.09% of the area with a built up change of only 8.38%, and Southern Remal, which holds 5.53% of the total area with a built up increase of 10.58%. The population percentages of the three neighborhoods are close (Al Nassr 5.61%, Southern Remal 5.15%, Northern Remal 3.74%), which reflects the higher land purchase prices in Southern and Northern Remal compared with Al Nassr: these are among the most prestigious quarters, and many governmental buildings, educational institutions and commercial lots are located in them.
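The change-detection figures in Table 4 can be sanity-checked numerically: within each neighborhood the four class changes should sum to approximately zero, since area gained by one class is lost by the others. A minimal sketch of this check, using a few rows of Table 4:

```python
# Consistency check on Table 4: the built up, dry land, green land and
# wet land change-detection percentages of a neighborhood should sum
# to ~0, because the four classes partition the neighborhood's area.
rows = {
    "Al Awda City": (40.91, -43.61,  2.70, 0.00),
    "East Ijdaida": ( 2.74,   1.45, -6.98, 2.79),
    "Tal El Hawa":  (26.32, -28.55,  2.23, 0.00),
    "Beach Camp":   ( 4.42,  -5.03,  0.61, 0.00),
}
for name, (built, dry, green, wet) in rows.items():
    assert abs(built + dry + green + wet) < 0.05, name
print("all class changes balance")
```

Each tabulated row satisfies this balance to within rounding, which supports the internal consistency of the classification results.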
The structural plan of Gaza City, which is intended to adapt the city as a residential and tourist town, is clearly observed in the ratio of built up, which constitutes 81.98% of the planned city area, while green land covers 17.79%, dry land 0.00%, and wet land 0.23% (a sewage basin and treatment station). Table 5 presents the percentages of the area classes in the structural plan compared with the classification result of 2007. Projecting the LULC toward the structural plan highlights the problem of limited space and continuous population growth: the built up class is planned to reach 81.98%, which means urgent planning and revision are needed in order to find appropriate solutions for using the available areas. The projection relies on finding a correlation between the built up area and the population, in order to forecast the year in which the built up areas of the 1997 structural plan of Gaza City will be fully occupied, using the population growth equation P = P0(1 + r)^t [12]. The relation between the increasing use of built up land and the population growth of the city is strong, as clearly observed in the LULC results and data of this study. Table 6 shows the relationship between the built up rate and the assigned and predicted population. A trend equation developed in Excel for this relation is y = 12371x - 16404, where x is the built up rate (%) and y is the expected population. Applying this equation to the planned built up rate predicts that by the year 2025 the built up areas of the city will be completely occupied.
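The projection described above can be sketched as follows. The trend line y = 12371x - 16404 is an assumed reconstruction (the printed text lost the minus sign); it is adopted here because it reproduces every row of Table 6 to within about 0.2%:

```python
def population_from_builtup(x_percent):
    """Assumed reconstruction of the paper's Excel trend line
    y = 12371*x - 16404, where x is the built-up rate (%)."""
    return 12371 * x_percent - 16404

# Built-up rates vs. census/predicted population (Table 6):
for year, x, population in [(1997, 30.98, 367388), (1999, 33.38, 395840),
                            (2007, 41.44, 496410), (2025, 81.98, 997770)]:
    predicted = population_from_builtup(x)
    # Each prediction lands within ~0.2% of the tabulated figure.
    assert abs(predicted - population) / population < 0.002
    print(year, round(predicted))
```

Evaluating the line at the fully built-up rate of 81.98% gives roughly 997,770 inhabitants, matching the 2025 row of Table 6.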
VI Conclusion

The study has produced many accurate, computerized digital maps connected to databases. The most evident results indicate that the city witnessed continuous growth and change in many respects: political, governmental, educational, demographic and touristic. These changes are consequently reflected in the LULC. The study describes the areas, places, rates and change detection of the studied classification. The results show an increase for built up purposes with an average change of 8.06%, while the dry land and green land classes declined with average changes of -5.36% and -3.00% respectively. Noticeably, there was a slight increase in the wet land areas of 0.30%. Given this information, the study expects rapid growth of the built up class due to population growth, which will fill all the areas designated in the structural plan of the city by the year 2025.

Table 6: Predicted relationship between the built up rate and population

Year | % of built up | No. of population
1997 | 30.98 | 367388
1999 | 33.38 | 395840
2007 | 41.44 | 496410
2025 | 81.98 | 997770

Table 5: Percent of area classes as in the structural plan and in the 2007 image

Classes | Area as in structural plan | Area as in image 2007
Built up | 81.98% | 41.44%
Dry land | 0.00% | 18.80%
Green land | 17.79% | 38.80%
Wet land | 0.23% | 0.96%

The study identifies various transformations of the LULC during the study period, attributable to many causes: political, social, economic and administrative circumstances, apparently aggravated by the absence of controlling systems and by incomplete compliance with laws and construction regulations.
Acknowledgment

I would like to express my gratitude to all those who made it possible to complete this study. I am deeply indebted to Eng. Waheed Al Borsh and Eng. Hussam Al Borsh for their continuous, valuable assistance during data collection and data processing.

References
[1] A. Singh, "Digital change detection techniques using remotely sensed data", International Journal of Remote Sensing, vol. 10, no. 6, pp. 989-1003, 1989.
[2] H. Hsiung Huang and C. Ju Hsiao, "Post-classification and detection of simulated change for natural grass", ACRS, National Cheng-Chi University, 2000. URL: http://www.geospatialworld.net.
[3] Macleod and Congalton, "A quantitative comparison of change detection algorithms for monitoring eelgrass from remotely sensed data", Photogrammetric Engineering & Remote Sensing, vol. 64, no. 3, pp. 207-216, 1998.
[4] R.S. Reid, R.L. Kruska, N. Muthui, A. Taye, S. Wotton, C.J. Wilson and Woudyalew Mulatu, "Land-use and land-cover dynamics in response to changes in climatic, biological and socio-political forces: the case of southwestern Ethiopia", Landscape Ecology, vol. 15, pp. 339-355, 2000.
[5] C. Inglis-Smith, "Satellite imagery based classification mapping for spatially analyzing West Virginia Corridor H urban development", Master thesis, The Graduate College of Marshall University, 2006.
[6] Francesca Giordano, "A landscape approach for detecting and assessing changes in areas prone to desertification by means of remote sensing and GIS", Master thesis, University of Pushchino, 2008.
[7] D. Lu, P. Mausel, E. Brondizio and E. Moran, "Change detection techniques", International Journal of Remote Sensing, vol. 25, no. 12, pp. 2365-2407, 2004.
[8] Wikipedia, the free encyclopedia. Accessed on 10 July 2013. URL: http://www.wikipedia.org.
[9] Municipality of Gaza. Accessed on 20 July 2013. URL: http://www.mogaza.org.
[10] Palestinian Central Bureau of Statistics (PCBS), "Population, housing and establishment census 2007", the Gaza Strip
census final results, accessed on 01 April 2013. URL: http://www.pcbs.gov.ps.
[11] Saleh Abu Amrah, "Applications of geographic information system in the study of land use of the city of Deir al-Balah", Master thesis, The Islamic University of Gaza, Palestine, 2010.
[12] Morris H. DeGroot and Mark J. Schervish, "Probability and Statistics", third edition, Addison-Wesley, ISBN 0201524880, 2002.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015, 141

Dependency of Dry Density of Soil on Water Content in the Measurement of Electrical Resistivity of Soil

Chik, Z.1, Murad, O.F.2, Rahmad, M.3
1 Professor, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), irzamri@gmail.com
2 Student, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), murad5353@yahoo.com
3 Student, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), muhamadrahmad29@gmail.com

Abstract— Density is defined as the weight of soil per unit volume. In the construction of different types of structures, in-situ fill is often required, and in-situ measurement of density is vital for such projects. When soil is used as fill material, it is usually compacted to a dense state to obtain satisfactory engineering properties. The dry density of soil depends on many soil properties, and for this reason it is difficult to establish an empirical relationship between electrical resistivity and dry density. For a similar type of grain size distribution and dry density, electrical resistivity varies widely with different percentages of water content. In this study, the dry density of soil was determined using the standard Proctor test, and after compaction the electrical resistivity of the soil was measured for different dry densities.
From the graph of dry density versus water content, a slightly steep increase in dry density can be observed with increasing water content. In the graph of dry density versus electrical resistivity, on the other hand, electrical resistivity remains almost the same as dry density increases until the peak value is reached. After the maximum dry density is achieved, a downward slope can be observed in both graphs, although the electrical resistivity does not decrease to a remarkable extent after the maximum density. So, except for some dissimilarities, a common trend can be observed in both graphs: after a maximum value, dry density tends to reduce. In both cases the measured maximum dry densities of the soil samples were 1.79, 1.936, 1.792 and 1.821 g/cm3 respectively. It can be concluded that electrical resistivity depends mostly on the percentage of water in the soil rather than on dry density. It is difficult to measure the dry density of soil from electrical resistivity alone, but the maximum dry density can be found from the least resistivity value.

Index Terms— Density, electrical resistivity, percentages of water, maximum soil density

I Introduction

Density is the mass of solid particles divided by the volume of solid particles; the mass of soil excludes pore space and organic material. A high bulk density is indicative of either soil compaction or high sand content. Most soils have a density between 1 and 2 g/cm3. The dry density of soil represents well-defined properties of the material: it is an indicator of soil compaction and soil health [1], and it also provides valuable information such as the porosity and void ratio of the soil. In terms of agriculture, dry density indicates the structure of the soil and its suitability for plant growth.
Different physical, chemical and biological properties of soil, such as infiltration, available water capacity, soil porosity, plant nutrient availability and soil microorganism activity, are affected by dry density. Many geophysical methods have therefore been used to measure the degree of compaction, both on site and in the laboratory. Among them, electrical resistivity is a non-destructive and comparatively less time-consuming method, while other conventional methods for determining soil compaction are invasive as well as costly [2]. Since dry density is basically the ratio between the mass and volume of the soil, its value theoretically depends on properties that affect both mass and volume, such as grain size distribution and soil compaction. Water content, however, is one property of soil that directly affects dry density: for every soil type and grain size there is a specific moisture content at which the density of the soil is maximum. Yet properties such as grain size distribution and soil compaction have comparatively less effect than moisture content on the electrical resistivity of soil, and this is a major problem for determining soil dry density from soil resistivity.

II Previous Works on Dry Density Measurement Using Different Geophysical Methods

Among the many geophysical methods that have been used for the measurement of soil density, multichannel analysis of surface waves (MASW), soil conductivity and soil resistivity are the most popular. Kalinski and Vemuri studied soil compaction using electrical conductivity measurements in 2005, proposing a new construction quality assurance (CQA) method based on electrical conductivity measurements [3].
In 2011, Laloy and Javaux used electrical resistivity to measure the moisture content and bulk density of soil in combination with "pedo-electrical" functions. In that study, five pedo-electrical models were used to reproduce the electrical resistivity measured by ERT in silt loam soil samples within a specific range of moisture and bulk density. For this purpose, the Waxman and Smits model, the Revil model, the volume-averaging (VA) model, the Rhoades model and the Mojid model were inverted within a Bayesian framework to identify the optimal parameters, the parameter uncertainty and its effect on model prediction. The sensitivity of the electrical resistivity was studied using the calibrated VA model, and the resistivity was found to be approximately 1.5 times more sensitive to soil moisture content than to soil bulk density. In addition, the sensitivity of electrical resistivity to soil moisture and bulk density was found to increase as soil moisture and bulk density decreased [4]. In the same year, Chik and Islam studied soil compaction estimation using electrical resistivity, including chemical characterization of the soil. Four different types of soil were considered; standard Proctor compaction tests were carried out for all soil samples, and the electrical resistivity of the compacted samples was measured for different percentages of water content (Figure 1). Recently, Lin and Sun evaluated the model-based relationship between cone index, soil water content and bulk density using dual-sensor penetrometer data [6].

III Methodology

Soil samples were collected from a slope side near the new FKAB building, UKM, Malaysia. Four samples were collected from four boreholes at different slopes. Soil grain size distribution was then determined using sieve analysis, which is important because soil resistivity differs with the type of soil.
According to the Unified Soil Classification System (USCS), the average soil type taken from the four boreholes is basically clayey sand (SC). The portion of sand in this type of soil is large, so the resistivity value was low, because the clay fills up the voids between the sand particles, which eases the transfer of electricity between soil grains. The soil sample was mixed with different percentages of water within the range of 5% to 25% (Figure 2). The exact percentage of water mixed with the soil sample was measured using Equation 1:

u = ((m_w - m_d) / m_d) x 100%   (1)

where u = percentage of moisture content, m_w = weight of soil + water, and m_d = weight of dry soil.

The sample was then compacted using the standard Proctor test, after which the dry density of the soil was measured. As the percentage of water increased, the dry density of the soil also increased gradually, but just after the maximum dry densities of 1.79, 1.936, 1.792 and 1.821 g/cm3 respectively, it started to decrease with higher percentages of water content. The density of the soil was calculated using Equation 2:

p = m / V   (2)

where p = bulk density of the soil, m = mass of the soil, and V = volume of the soil.

Table 1: Grain size analysis

Sieve no | Sieve opening (mm) | Wt. of soil retained (g) | Percent retained | Cumulative percent retained | Percent finer
4 | 4.750 | 117.15 | 6.5 | 6.5 | 93.5
8 | 2.360 | 595.19 | 33.2 | 39.7 | 60.3
16 | 1.180 | 383.62 | 21.4 | 61.1 | 38.9
30 | 0.600 | 235.77 | 13.2 | 74.3 | 25.7
40 | 0.420 | 181.87 | 10.1 | 84.4 | 15.6
50 | 0.300 | 208.58 | 11.6 | 96.0 | 4.0
100 | 0.150 | 66.48 | 3.7 | 99.7 | 0.3
200 | 0.075 | 3.31 | 0.2 | 99.9 | 0.1
Pan | - | 12.20 | 0.7 | - | -

Figure 1: Dry density and soil resistivity for different water contents in various soil samples [5].
Figure 2: Location of soil sample collection.

Just after compaction, the resistance of the soil was measured using a two-probe electrical resistivity meter (Kyoritsu) (Figure 4).
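The conversions used in this methodology can be sketched as follows. The numbers are illustrative only, not measurements from the study; the dry-density relation (bulk density divided by 1 + water content) is the standard soil-mechanics identity, which the paper does not state explicitly; and the resistivity conversion anticipates the two-probe formula of Equation 3, with the 4 cm probe spacing used later in the study.

```python
import math

def moisture_content(m_wet, m_dry):
    """Equation 1: water content u (%) from wet and oven-dry weights."""
    return (m_wet - m_dry) / m_dry * 100.0

def bulk_density(mass_g, volume_cm3):
    """Equation 2: density (g/cm^3) of the compacted sample."""
    return mass_g / volume_cm3

def dry_density(rho_bulk, u_percent):
    """Standard soil-mechanics identity (assumed here, not stated in the
    paper): dry density = bulk density / (1 + water content)."""
    return rho_bulk / (1.0 + u_percent / 100.0)

def soil_resistivity(a_m, resistance):
    """Two-probe conversion of Equation 3: resistivity = 2 * pi * a * R."""
    return 2.0 * math.pi * a_m * resistance

# Illustrative values only (not measurements from the study):
u = moisture_content(m_wet=230.0, m_dry=200.0)        # 15.0 %
rho = bulk_density(mass_g=2059.0, volume_cm3=1000.0)  # 2.059 g/cm^3
print(round(dry_density(rho, u), 3))                  # 1.79 g/cm^3

# With a 4 cm probe spacing, a measured resistance of 10 kOhm gives a
# resistivity close to the 2.51328 kOhm-m tabulated for BH 1:
print(round(soil_resistivity(0.04, 10.0), 5))         # 2.51327 kOhm-m
```

Note that the hypothetical 15% water content and 1.79 g/cm3 dry density are chosen to sit near the optimum reported for borehole 1, so the two conversions can be cross-checked against Table 2.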
A distance of 4 cm was maintained between the probes. Finally, the resistivity of the soil was calculated using Equation 3:

p = 2(pi)aR   (3)

where p = resistivity of the soil, a = distance between the probes, and R = resistance of the soil between the probes.

Figure 3: Graph of dry density vs. water content.
Figure 4: Measurement of electrical resistance after compaction of soil.

Table 2: Dry density and electrical resistivity of soil for different percentages of water content

Borehole | Water content (%) | Dry density (g/cm3) | Electrical resistivity (kΩ-m)
BH 1 | 7.7 | 1.62 | 108.071
BH 1 | 9.94 | 1.662 | 20.86022
BH 1 | 11.78 | 1.68 | 5.02656
BH 1 | 14.18 | 1.724 | 3.76992
BH 1 | 15.96 | 1.79 | 2.51328
BH 1 | 19.42 | 1.676 | 2.261952
BH 1 | 23.68 | 1.564 | 2.261952
BH 2 | 7.723 | 1.685 | 123.2451
BH 2 | 9.95 | 1.812 | 40.36958
BH 2 | 12.36 | 1.868 | 13.365916
BH 2 | 14.88 | 1.936 | 3.3211
BH 2 | 16.12 | 1.8345 | 3.293211
BH 2 | 18.98 | 1.652 | 3.2995
BH 2 | 23.87 | 1.523 | 3.2855
BH 3 | 6.6 | 1.523 | 196.211
BH 3 | 9.65 | 1.632 | 22.3311
BH 3 | 11.65 | 1.698 | 11.321
BH 3 | 14.962 | 1.792 | 4.31544
BH 3 | 16.33 | 1.6932 | 4.325
BH 3 | 19.97 | 1.6236 | 4.3111
BH 3 | 24.102 | 1.523 | 4.3211
BH 4 | 8.44 | 1.526 | 160.2113
BH 4 | 10.95 | 1.695 | 63.2525
BH 4 | 12.3669 | 1.725 | 9.255
BH 4 | 14.978 | 1.821 | 5.361
BH 4 | 16.321 | 1.654 | 5.321
BH 4 | 18.966 | 1.5622 | 5.35111
BH 4 | 24.654 | 1.459 | 5.3569

IV Results and Discussion

Electrical resistivity is the inverse electrical property of conductivity. As water is a covalent liquid, its atoms are held together by shared electrons; most covalently bonded liquids are not electrically conductive because the electrons, which essentially act as the medium by which the electricity travels, are tied up between the atoms they belong to. However, a small amount of water does ionize, which increases the electrical conductivity of soil, so the electrical resistivity of soil tends to decrease with increasing water content. For borehole 1, from 7.7% to 11.78% water content, the electrical resistivity of the soil sample dropped by about 104 kΩ-m. After a certain level, however, the resistivity does not
decrease with increasing water content, because at that certain percentage of water content the soil density is maximum, so the electrical conductivity is also maximum at that specific water content (Figure 5). Similar trends can be observed for the other boreholes. From the graph between dry density and electrical resistivity, it can be observed that electrical resistivity gradually decreases as soil dry density increases (Figure 6), but just after a certain point the density increases dramatically with only a slight reduction in electrical resistivity. The electrical resistivity almost stops changing at approximately 2.51 kΩ-m; beyond the corresponding dry density, the resistivity remains constant even at lower dry densities of the soil sample. From Figures 3 and 6 it can be observed that in both graphs the trend line changes direction at the dry densities of 1.79, 1.936, 1.792 and 1.821 g/cm3. Water content decreases gradually with decreasing dry density, while after the maximum density of the soil sample the electrical resistivity does not change with dry density. It can thus be understood that at the optimum moisture content of the soil sample the electrical resistivity is minimum: within this specific percentage of water, the electrical conductivity of the soil is maximum, and even if more water is added to the soil it has no further effect on the conductivity. Also, at maximum dry density the soil grains are supposed to be packed as closely as possible, which is why the maximum dry density provides the minimum resistivity of the soil sample.

V Conclusion

Compaction of soil is one of the vital parameters in geotechnical engineering, and the dry density of soil is directly related to its compaction. It is expensive and time consuming to collect soil samples from the site and measure their dry density in the laboratory.
For this reason, it is much more convenient to measure the dry density at the construction site using the electrical resistivity of the soil. It is, however, very difficult to measure the exact soil density because of the water content of the soil; for this purpose, the resistivity ratio between different soil layers can be considered to avoid the effect of water content on resistivity. Although resistivity depends mostly on the water content of the soil, at least the maximum dry density of soil can be determined using soil electrical resistivity.

Acknowledgment

The authors wish to thank all the laboratory assistants of the Geotechnical Engineering Laboratory (UKM). This work was supported in part by grants ERGS/1/2012/TK03/UKM/01/3 and GUP2012-031.

Figure 5: Graph between electrical resistivity and water content.
Figure 6: Graph between dry density and electrical resistivity.

References
[1] I. D. Lestariningsih and K. Hairiah, "Assessing soil compaction with two different methods of soil bulk density measurement in oil palm plantation soil", Procedia Environ. Sci., vol. 17, pp. 172-178, Jan. 2013.
[2] F. I. Siddiqui and S. B. A. B. S. Osman, "Electrical resistivity based non-destructive testing method for determination of soil's strength properties", in Advanced Materials Research, 2012, vol. 488-489, pp. 1553-1557.
[3] M. E. S. C. V. Kalinski, "A geophysical approach to construction quality assurance testing of compacted soil using electrical conductivity measurements (ASCE)", in Earthquake Engineering and Soil Dynamics, 2005, pp. 1-10.
[4] E. Laloy, M. Javaux, M. Vanclooster, C. Roisin, and C. L. Bielders, "Electrical resistivity in a loamy soil: identification of the appropriate pedo-electrical model", Vadose Zone J., vol. 10, no. 3, p. 1023, Aug. 2011.
[5] Z. Chik and T. Islam, "Study of chemical effects on soil compaction characterizations through electrical
conductivity", Int. J. Electrochem. Sci., vol. 6, pp. 6733-6740, 2011.
[6] J. Lin, Y. Sun, and P. Schulze Lammers, "Evaluating model-based relationship of cone index, soil water content and bulk density using dual-sensor penetrometer data", Soil Tillage Res., vol. 138, pp. 9-16, May 2014.

Prof. Zamri bin Chik: Dip. Civil Engg. (UTM), B.Sc. (Aberdeen), M.S.C.E., Ph.D. (Pittsburgh), P.Eng., MIEM, CPESC (geotechnical engineering).

Mohammad Omar Faruk Murad: completed a B.Sc. in civil engineering at Ahsanullah University of Science and Technology with Dean's List honors; currently pursuing a master's by research in geotechnical engineering at Universiti Kebangsaan Malaysia (UKM), where he works as a graduate research assistant (GRA).

Muhamad Rahmad: B.Sc. in civil engineering, Universiti Kebangsaan Malaysia (UKM).

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015, 87

Landfill Leachate Treatment by Low Cost Activated Carbon Prepared from Agriculture Waste

1 Nurshazwani bt. Azmi, 2 Alexanderrayar Singarayah, 3 Mohammed J.K. Bashir*, 4 Sumathi Sethupathi
1,2,3,4 Department of Environmental Engineering, Faculty of Engineering and Green Technology (FEGT), University Tunku Abdul Rahman, 31900 Kampar, Perak, Malaysia.
1 shazwani103@gmail.com (N.B. Azmi), 2 alex_rxz89@yahoo.com (A. Singarayah), 3 jkbashir@utar.edu.my (M.J.K. Bashir), 4 sumathi@utar.edu.my (S. Sethupathi)

Abstract— Adsorption via activated carbon (AC) is one of the superior treatments for stabilized landfill leachate, but the expense and limited availability of AC precursors (bituminous coal and lignite) limit the application of this technique to landfill leachate treatment. Previous studies have shown that agricultural waste has excellent potential as an AC precursor.
Thus, the present study evaluates sugarcane bagasse derived activated carbon (SBAC) for the adsorptive removal of ammoniacal nitrogen, COD and color from an old anaerobic landfill leachate located in Perak, Malaysia. The chemical and physical properties of the adsorbent were examined by Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). The effect of AC dosage (g) on adsorption performance was investigated in a batch mode study. Equilibrium data were favorably described by the Langmuir isotherm model, with maximum monolayer adsorption capacities for NH3-N, COD and color of 14.62 mg/g, 126.58 mg/g and 555.56 Pt/Co, respectively. The results illustrate the potential usability of SBAC for the treatment of anaerobic landfill leachate.

Index Terms: Landfill, landfill leachate, sugarcane bagasse, adsorption, activated carbon.

I. Introduction

Landfilling is a well-known municipal solid waste (MSW) disposal method, as up to 95% of the MSW collected worldwide is buried in landfills [1]. Due to advantages such as low cost, a simple disposal procedure [2] and the ability to deal with large amounts of waste [3], landfilling is still a widely used option for MSW disposal. However, its major drawback is the generation of hazardous landfill leachate, which can cause serious environmental and aesthetic problems. Landfill leachate is generally defined as a dark-colored liquid with a strong odor, produced by the percolation of excess water through the mixture of organic and inorganic pollutants deposited within the waste layers of the landfill [4]. As a consequence, landfill leachate is characterized by high concentrations of pollutants, including organic matter, BOD, COD, ammoniacal nitrogen, heavy metals and inorganic salts. Moreover, stabilized landfill leachate contains humic and fulvic substances that are hard to treat by biological treatment.
Thus, a physicochemical method such as activated carbon (AC) adsorption could be a considerable option for stabilized landfill leachate treatment. Due to its unique properties, such as a large surface area, high micropore volume, rapid adsorption, low acid-base reactivity and a favorable pore size distribution [5], AC has become one of the best filtration media in the world. However, high manufacturing costs and expensive carbonaceous raw materials [6] limit the application of AC adsorption in stabilized landfill leachate treatment. Currently, there is great interest in finding low-cost and effective alternatives to existing commercial activated carbons [7]. The production cost of AC from cellulosic waste material is very low compared to that of commercial AC. Furthermore, low-cost activated carbon may contribute to environmental sustainability and offer benefits for future commercial applications [8]. Thus, the present work focuses on the production of sugarcane bagasse derived activated carbon (SBAC) for the adsorptive removal of ammoniacal nitrogen, COD and color from stabilized landfill leachate. The structural, functional and surface chemistry of the prepared AC was evaluated, and the adsorptive removal of pollutants was studied through adsorption equilibrium and isotherm modelling.

II Material and Methods

Landfill leachate sampling

Landfill leachate was collected from an anaerobic landfill known as Sahom Landfill, located at Kampar, Perak, Malaysia.
The landfill is equipped with a leachate collection system; however, there is no leachate treatment system prior to discharge. Leachate samples were collected from the leachate collection pond and characterized according to the standard methods for water and wastewater [9].

Preparation of sugarcane bagasse derived activated carbon (SBAC)

Sugarcane bagasse (SB), obtained from a neighboring shop, was used as the AC precursor in this study. The SB was cut into small pieces, boiled and washed exhaustively to eliminate impurities from the surface, followed by drying at 105 °C overnight to remove unwanted moisture. The dried bagasse was ground using a grinding machine (ZM2000, Germany) with a 0.1 mm blade and sieved to retain particles ranging from 1.4 mm to 0.5 mm. The prepared bagasse was used for char production in a carbonization unit: it was placed in a muffle furnace and carbonized at 700 °C. The char produced was mixed with potassium hydroxide (KOH) solution at a char:KOH impregnation ratio of 1:2.73 (wt%), and the wet bagasse was dried at 105 °C for three days before being activated in a muffle furnace for 3 hours at 600 °C with a ramping rate of 10 °C/min [10]. The resultant activated carbon was washed with 0.1 M HCl and rinsed repeatedly with deionized water until the pH of the filtrate reached 6.5-7, in order to remove residual organic matter and alkalis. Finally, the prepared SBAC was dried at 105 °C for 24 hours prior to the leachate treatment process.

Characterization of SBAC

The textural morphology of SBAC and the chemical characterization of its surface functional groups were carried out by scanning electron microscopy (FESEM JEOL 6701-F) and Fourier transform infrared spectrometry (Perkin-Elmer Spectrum RXI). Surface functional groups were detected using pressed potassium bromide (KBr) pellets containing 5% of the carbon sample. The FTIR spectra were recorded between 4000 and 400 cm-1.
Batch study

This part of the study concentrated on identifying the optimum operational conditions. The experiments were conducted in a series of 250 mL Erlenmeyer flasks containing a mixture of 100 mL of raw leachate and SBAC, with an agitation speed of 200 rpm and a contact time of 180 min. After each run, the media were filtered and the filtrates were kept for adsorptive uptake analysis of color, COD and NH3-N. The color concentration was measured at 455 nm wavelength using HACH color method 8025; the COD concentration was measured using HR+ COD vials with a DR 6000 spectrophotometer; and NH3-N was measured with the DR 6000 spectrophotometer at 640 nm. All tests were conducted in accordance with the standard methods for the examination of water and wastewater [9].

Equilibrium study

The performance of the experiment was studied using adsorption isotherms, which describe the relationship between the concentration of adsorbate accumulated on the adsorbent and the concentration of dissolved adsorbate at equilibrium [11]. The amount of adsorbate accumulated on the adsorbent was measured from the difference between the initial adsorbate concentration and the adsorbate concentration at equilibrium in the solution, expressed as the following equation:

qe = (Co - Ce)V / m   (1)

where qe is the amount adsorbed at equilibrium (mg/g), Co is the initial concentration of the adsorbate, Ce is the concentration of adsorbate at equilibrium, V (L) is the volume of the solution and m (g) is the mass of the dry sorbent used. The pollutant removal percentage is calculated by the following equation:

Removal (%) = ((Co - Ce) / Co) x 100   (2)

where Co and Ce are the initial and equilibrium liquid-phase concentrations of the adsorbate, in terms of color, COD and NH3-N.
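The equilibrium bookkeeping of Equations 1 and 2, together with the Langmuir isotherm used to describe the data (the paper reports the fitted capacities, not the fitting code), can be sketched as follows. All numeric values below are hypothetical illustrations, not measurements from the study, except the 126.58 mg/g COD capacity quoted from the abstract.

```python
def equilibrium_uptake(c0, ce, volume_l, mass_g):
    """Equation 1: uptake at equilibrium, qe = (Co - Ce) * V / m (mg/g)."""
    return (c0 - ce) * volume_l / mass_g

def removal_percent(c0, ce):
    """Equation 2: pollutant removal percentage."""
    return (c0 - ce) / c0 * 100.0

def langmuir(ce, q_max, k_l):
    """Langmuir isotherm qe = q_max * K_L * Ce / (1 + K_L * Ce),
    the model the paper fits to its equilibrium data."""
    return q_max * k_l * ce / (1.0 + k_l * ce)

# Hypothetical example: 100 mL of leachate, initial COD 1500 mg/L,
# treated with 2 g of SBAC, leaving 600 mg/L at equilibrium.
print(equilibrium_uptake(1500.0, 600.0, volume_l=0.1, mass_g=2.0))  # 45.0
print(removal_percent(1500.0, 600.0))                               # 60.0
# At large Ce the Langmuir curve approaches the monolayer capacity
# q_max (e.g. the 126.58 mg/g reported for COD); K_L is hypothetical:
print(round(langmuir(1e6, q_max=126.58, k_l=0.01), 1))
```

The asymptotic behavior of the last call is why the fitted q_max is read as the maximum monolayer adsorption capacity.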
III. Results and Discussion
Leachate characteristics
Table 1 shows the leachate characteristics of Sahom landfill, located in Kampar, Perak, Malaysia. The leachate has high concentrations of color, COD and NH3-N, with a low BOD5:COD ratio (< 0.1). Sahom landfill leachate can therefore be categorized as stabilized landfill leachate [12]. As stabilized landfill leachate contains refractory organic compounds [13], the effectiveness of biological processes decreases, and physico-chemical processes, in particular AC adsorption, may become one of the appropriate options.

Table 1: Sahom landfill leachate characteristics
Parameter               Unit    Average value
Temperature             °C      26.9-27.1
pH                      -       8.60-8.75
Conductivity            mS      10.93-11.02
Resistivity             Ω       90.04-90.08
Turbidity               NTU     105.6-126.0
Color                   Pt/Co   3300-3500
COD                     mg/L    1490-1570
NH3-N                   mg/L    1860-1950
BOD5                    mg/L    106-120
BOD5/COD                -       0.071-0.076
Total suspended solids  mg/L    203-227

SBAC characterization
FTIR analysis
The FTIR analyses of SBAC before and after treatment are illustrated in Figure 1. The spectra of the adsorbents were plotted to determine the vibration frequency changes in the functional groups of the adsorbent. The absorption peak around 3426 cm-1 indicates free and intermolecular bonded hydroxyl groups [14]. The peak around 1562 cm-1 may be attributed to the aromatic groups of lignin compounds. The peak observed at 1084 cm-1 can be assigned to the C-O band due to the OCH3 group, which also confirms the presence of the lignin structure in SBAC [15]. Meanwhile, the peak at 1384 cm-1 may involve C-H deformation. After the adsorption treatment, it was found that the oxygen-containing -OH groups were affected by the uptake process, as judged from the shift of this peak to a lower frequency (from 3426 to 3423 cm-1). Another remarkable shift was that of the C-O band from 1084 to 1090 cm-1. These results indicate the participation of these groups, via oxygen, in binding pollutants in the leachate to SBAC, in agreement with Pearson's principle of hard and soft acids and bases [16].

SEM analysis
Scanning electron microscope (SEM) analysis of SBAC is presented in Figure 2. The micrograph shows the development of a micropore structure on the SBAC.
As non-carbon elements such as hydrogen, oxygen and nitrogen are released in the form of tars and gases during the pyrolysis process, a rigid carbon skeleton with a rudimentary pore structure, known as char, is formed from the aromatic compounds [17]. Pretreatment of the char with a dehydrating agent (KOH) inhibits the formation of tar and other undesired products [17]. Consequently, CO2 activation creates activated carbon with a larger micropore volume and a narrower micropore size distribution [17], which leads to a higher adsorption capacity for the pollutants.

Experimental performance
Adsorbent dosage
The effect of AC dosage on the percentage removal of color, COD and NH3-N is illustrated in Figure 3. The shaking speed (200 rpm) and contact time (180 minutes) were kept constant while the AC dosage was varied (0-9 g) throughout the experiment. Based on Figure 3, it is apparent that the adsorptive removal of color, COD and NH3-N increased considerably as the adsorbent dosage was increased from 0 g/100 mL to 2 g/100 mL. However, further increases in adsorbent dosage, up to 9 g/100 mL, produced only a steady increase in pollutant uptake. According to Moodley et al. (2011), further increases in adsorbent dosage lead to overlapping of surface sites due to overcrowding of the adsorbent particles [18]. As a result, the accessible surface area of the adsorbent decreases, lowering the pollutant removal per unit of adsorbent. Moreover, the adsorbent dosage has a profound effect on the adsorption process because it determines the cost of adsorbent per unit of pollutant to be treated [19].
Thus, the optimum SBAC dosage for pollutant removal is 7 g/100 mL, with percentage removals of 94.7% for color, 83.6% for COD and 46.6% for NH3-N.

Figure 1: FTIR analysis of (a) SBAC before treatment and (b) SBAC after treatment.

Figure 2: SEM micrograph of sugarcane bagasse derived activated carbon at 900x magnification.

Figure 3: Removal efficiency in terms of COD, NH3-N and colour vs activated carbon dosage (removal efficiency (%) and effluent concentration plotted against AC dosage (g) for COD, NH3-N and colour).

Equilibrium study
The adsorption properties of SBAC were studied using the Langmuir and Freundlich isotherms, which are the most common models for describing the adsorption characteristics of adsorbents used in water and wastewater treatment [20]. The Langmuir isotherm describes monolayer adsorption of the adsorbate on specific homogeneous sites of the adsorbent, whereas the Freundlich isotherm assumes multilayer coverage of the adsorbate over a heterogeneous adsorbent surface. The equations of both isotherms are expressed as:

Langmuir isotherm: 1/qe = 1/(Q b Ce) + 1/Q     (3)

Freundlich isotherm: log qe = log K + (1/n) log Ce     (4)

where Ce is the equilibrium liquid-phase concentration and qe is the equilibrium uptake capacity (mg/g), while Q (mg/g), b (L/mg), 1/n and K are constants. Based on Table 2, the adsorption of colour, COD and NH3-N was rationally explained by both the Langmuir and Freundlich isotherms.
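The linearized forms in Eqs. (3) and (4) are ordinary straight-line fits: for Langmuir, 1/qe against 1/Ce gives slope 1/(Qb) and intercept 1/Q; for Freundlich, log qe against log Ce gives intercept log K and slope 1/n. A self-contained sketch of such a fit, assuming hypothetical equilibrium data (none of these values come from the study):

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept of y vs x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def langmuir_params(ce, qe):
    """Fit 1/qe = 1/(Q*b*Ce) + 1/Q (Eq. 3); returns (Q, b)."""
    slope, intercept = linfit([1 / c for c in ce], [1 / q for q in qe])
    Q = 1 / intercept
    b = intercept / slope  # slope = 1/(Q*b)  =>  b = intercept/slope
    return Q, b

def freundlich_params(ce, qe):
    """Fit log qe = log K + (1/n) log Ce (Eq. 4); returns (K, 1/n)."""
    slope, intercept = linfit([math.log10(c) for c in ce],
                              [math.log10(q) for q in qe])
    return 10 ** intercept, slope

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
ce = [50.0, 120.0, 300.0, 600.0, 900.0]
qe = [20.0, 38.0, 65.0, 90.0, 105.0]
print(langmuir_params(ce, qe))
print(freundlich_params(ce, qe))
```

Comparing the R-squared of the two linear fits is what underlies the model selection reported in Table 2.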
The R2 values of the Langmuir isotherm model for colour, COD and NH3-N were 0.9818, 0.9653 and 0.9728, while those of the Freundlich isotherm model were 0.9524, 0.9085 and 0.923, respectively. As the Langmuir model yielded relatively high R2 values close to unity, the adsorption of pollutants onto SBAC takes place as monolayer adsorption, with maximum adsorption capacities of 555.56, 126.58 and 14.62 mg/g for color, COD and NH3-N removal, respectively.

Table 2: Isotherm equation parameters for colour, COD and NH3-N adsorption onto activated carbon
Parameter   Langmuir isotherm coefficients       Freundlich isotherm coefficients
            Q (mg/g)   b (L/mg)     R2           K (mg/g (L/mg)^(1/n))   1/n      R2
Colour      555.556    0.0005224    0.9818       0.678734617             0.8199   0.9524
COD         126.582    0.0005521    0.9653       0.002761849             1.5929   0.9085
NH3-N       14.6199    0.0004753    0.9728       0.000000037             2.847    0.923

(IV) Recommendation
The present study determined the optimum treatment conditions for stabilized landfill leachate in terms of adsorbent dosage, as well as the best-fitted isotherm models for color, COD and NH3-N removal. There are many more research gaps yet to be explored in landfill leachate treatment. The following research areas are suggested for future studies:
 Comparison of treatment efficiency under various experimental conditions, such as shaking speed, contact time and pH of the adsorbate.
 Removal of different types of pollutants, such as heavy metals (manganese, zinc, chromium, lead, copper and cadmium) and organic and inorganic compounds.
 Activated carbon derived from other sources of agricultural waste, such as fruit peel, fruit seed and cellulosic waste materials.

(V) Conclusion
The potential of sugarcane bagasse derived activated carbon (SBAC) for the adsorptive removal of colour, COD and NH3-N from a stabilized landfill leachate was examined.
Based on the batch adsorption study, the optimum percentage removals of color (94.71%), COD (83.61%) and NH3-N (46.65%) were obtained at an optimum adsorbent dosage of 7 g/100 mL of leachate. The adsorptive removal was well fitted by the Langmuir isotherm model, with maximum adsorption capacities for color, COD and NH3-N of 555.56, 126.58 and 14.62 mg/g, respectively.

(VI) References
[1] T.A. Kurniawan, W.H. Lo, G.Y.S. Chan, "Degradation of recalcitrant compounds from stabilized landfill leachate using a combination of ozone-GAC adsorption treatment," Journal of Hazardous Materials, vol. 137, no. 1, pp. 443-455, Sept. 2006.
[2] M.J.K. Bashir, H.A. Aziz, M.S. Yusoff, M.N. Adlan, "Application of response surface methodology (RSM) for optimization of ammoniacal nitrogen removal from semi-aerobic landfill leachate using ion exchange resin," Desalination, vol. 254, no. 1-3, pp. 154-161, May 2010.
[3] U.N. Ngoc and H. Schnitzer, "Sustainable solutions for solid waste management in East Asian countries," Waste Management, vol. 29, no. 9, pp. 1982-1995, 2009.
[4] P. Yao, "Perspectives on technology for landfill leachate treatment," Arabian Journal of Chemistry, doi: 10.1016/j.arabjc.2013.09.031, Sept. 2013.
[5] H.S. Li, S.Q. Zhou, Y.B. Sun, P. Feng, J.D. Li, "Advanced treatment of landfill leachate by a new combination process in a full-scale plant," Journal of Hazardous Materials, vol. 172, no. 1, pp. 408-415, Dec. 2009.
[6] D. Mohan, C.U.J. Pitman, "Activated carbon and low cost adsorbent remediation of tri- and hexavalent chromium from water," Journal of Hazardous Materials, vol. 137, no. 2, pp. 762-811, Sept. 2006.
[7] Z.A. Alothman, M.A. Habila, R. Ali, M.S. Eldin Hassouna, "Valorization of two waste streams into activated carbon and studying its adsorption kinetics, equilibrium isotherms and thermodynamics for methylene blue removal," Arabian Journal of Chemistry, vol. 2, pp. 212, 2013.
[8] M.A. Aseel, N.A. Abbas, F.A.
Ayad, "Kinetics and equilibrium study for adsorption of textile dyes on coconut shell activated carbon," Arabian Journal of Chemistry, doi: 10.1016/j.arabjc.2014.01.02.
[9] APHA, Standard Methods for the Examination of Water and Wastewater, 18th ed., American Public Health Association, Washington, United States, 2005.
[10] K.Y. Foo, B.H. Hameed, "Microwave-assisted preparation and adsorption performance of activated carbon from biodiesel industry solid residue: influence of operational parameters," Bioresource Technology, vol. 103, no. 1, pp. 398-404, January 2012.
[11] R. Droste, Theory and Practice of Water and Wastewater Treatment, John Wiley and Sons, Inc., USA, 1997.
[12] H. Alvarez-Vazquez, B. Jefferson, and S.J. Judd, "Membrane bioreactors vs conventional biological treatment of landfill leachate: a brief review," Journal of Chemical Technology and Biotechnology, vol. 79, no. 10, pp. 1043-1049, 2004.
[13] Huo Shoulian, Xi Beidou, Yu Haichan, He Liansheng, Fan Shilei, and Liu Hongliang, "Characteristics of dissolved organic matter (DOM) in leachate with different landfill ages," Journal of Environmental Sciences, vol. 20, pp. 492-498, Sept. 2007.
[14] X. Colom, F. Carillo, F. Nagues, P. Garriga, "Structural analysis of photodegraded wood by means of FTIR spectroscopy," Polymer Degradation and Stability, vol. 80, pp. 543-549, 2003.
[15] U. Garg, M.P. Kaur, G.K. Jawa, D. Sud, V.K. Garg, "Removal of cadmium (II) from aqueous solutions by adsorption on agricultural waste biomass," Journal of Hazardous Materials, vol. 154, pp. 1149-1157, 2008.
[16] R.G. Pearson, "Hard and soft acids and bases," Journal of the American Chemical Society, vol. 85, pp. 3533-3539, 1963.
[17] A.R. Mohamed, M. Mohammadi, G.N. Darzi, "Preparation of carbon molecular sieve from lignocellulosic biomass: a review," Renewable and Sustainable Energy Reviews, vol.
14, pp. 1591-1599, 2010.
[18] K. Moodley, R. Singh, E.T. Musapatika, M.S. Onyango, A. Ochieng, "Removal of nickel from wastewater using an agricultural adsorbent," Water SA, vol. 37, pp. 41-46, 2011.
[19] S. Kushawa, B. Sreedhar, P.P. Sudhakar, "A spectroscopic study for understanding the speciation of Cr on palm shell based adsorbents and their application for the remediation of chrome plating effluents," Bioresource Technology, vol. 116, pp. 15-23, 2010.
[20] T.L. Eberhardt, S.H. Min, "Biosorbents prepared from wood particles treated with anionic polymer and iron salt: effect of particle size on phosphate adsorption," Bioresource Technology, vol. 99, pp. 626-630, 2008.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015

A Review of Double Layer Rubberized Concrete Paving Blocks
Euniza Jusli 1, Hasanan Md Nor 2, Ramadhansyah Putra Jaya 3, Zaiton Haron 4, Azman Mohamed 5
1 Department of Geotechnics and Transportation, Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, eunizajusli@gmail.com
2 Professor, Department of Geotechnics and Transportation, Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, hasanan@utm.my
3 Senior Lecturer, Department of Geotechnics and Transportation, Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, ramadhansyah@utm.my
4 Senior Lecturer, Department of Structure and Materials, Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, zaitonharon@utm.my
5 Lecturer, Department of Structure and Materials, Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, az_man@ic.utm.my

Abstract— The objective of this paper is to present a review of the use of a waste material, i.e.
rubber granules, as a partial aggregate replacement, at different percentages and sizes of rubber granules and different thicknesses of the block facing layer. Tyres are designed to be very tough, and owing to this technological and economical advantage, both the strength and toughness of concrete can be increased. Incorporating rubber granules as aggregate in a concrete mixture not only improves the toughness of the concrete, but also improves its acoustic behavior through an increase in sound absorption level. Previous studies on low-noise concrete paving blocks are very limited. Physical, chemical and mechanical tests will be carried out to evaluate the properties of double layer rubberized concrete paving blocks (DL-RCPBs) with 10, 20, 30 and 40% replacement of RG by weight of aggregate; the blocks are designed with facing layer thicknesses of 10 mm, 20 mm, 30 mm and 40 mm. The sound absorption level and noise reduction coefficient of DL-RCPBs with different facing layer thicknesses will be studied.

Index Terms— waste tyre rubber; rubber granule; rubberized concrete; concrete paving block

I Introduction
In various countries, concrete block pavement (CBP) has become an attractive engineering and economical alternative to both flexible and rigid pavements. CBP was developed in the early fifties in the Netherlands, from where its potential usage started to become known worldwide [1]. In general, CBP is suitable for aircraft hard standings, car parks, cycle paths, domestic drives, factory floors, industrial pavements, paving for exceptional loads, pedestrian areas, roads for low and medium speed traffic, and service areas. CBP provides a durable surface that is comfortable to walk on, pleasant to look at, easy to maintain and ready for immediate use. Furthermore, CBP is used in areas subjected to large point loads due to its durability against heavy loading.
CBP is also used extensively for traffic calming, where the intention is to improve safety by reducing traffic speeds. Although CBP offers many advantages, this type of pavement is not suitable for high speed traffic. This is due to the generation of tyre-road interaction noise, which contributes to increased traffic noise and disturbance for residents living near roads and highways. Implementing noise barriers would be costly compared with using a low-noise pavement. In order to exploit the existing advantages of CBP in terms of strength and durability, this study is conducted to add another advantage: the development of a low-noise CBP. According to previous research, rubberized concrete shows positive results for sound absorption and has been suggested for use as a sound insulator. In this study, the purpose of replacing natural aggregate with rubber granules (RG) is to increase the concrete's flexibility, elasticity and capacity to absorb sound. It is believed that concrete acting as a binder mixed with rubber aggregate can make blocks more flexible and provide softness to the block surface. The increasing demand for new concrete material technologies leads to the application of sustainable and green technology. Blending waste tyre rubber into concrete mixtures is one of the best ways to reuse this type of waste and has become a common recommendation in concrete technology research.

II Waste Tyre Rubber Classification
According to Ganjian et al. [2] and Siddique and Naik [3], three broad categories of waste tyre rubber have been considered in most research: shredded or chipped tyres, ground rubber and crumb rubber. Processing of used tyres also involves two stages of magnetic separation and screening.
Shredded or chipped tyres are used as a coarse aggregate replacement. Irregularly shaped torn tyre shreds, 300-460 mm long and 100-230 mm wide, are produced in the first shredding stage. In the second stage, the tyre shreds are torn apart by passing between rotating corrugated steel drums, producing pieces with dimensions of 100-150 mm. By the end of this stage, particles of about 13-76 mm are produced, known as "shredded particles". Rubber granules are produced by reducing larger tyre particles into smaller particles through a granulation process, in which the tyre particles are sheared apart by revolving steel plates. Different sizes of rubber particles may be produced depending on the kind of mills used and the temperature generated at this stage. The particle size produced varies from 4.75 mm to less than 0.075 mm, and such granules are normally used to replace sand. Various size fractions of rubber are recovered in more complex procedures. In the micromilling process, the particle sizes (crumb rubber) produced are in the range of 0.075-0.475 mm. Crumb rubber finer than 0.075 mm may be used to replace binder or cement, depending on the equipment used for size reduction.

III Waste Tyre Rubber Properties
The rubber granules (RG) used in this study were obtained from a continuous shredding process of waste tyres. The particle size of the RG ranges from 1-4 mm and 5-8 mm (see Fig. 1). The physical and chemical properties of waste tyre rubber from previous studies are shown in Table 1 and Table 2. The properties in Tables 1 and 2 vary, possibly due to the rubber origin as well as the tyre type, namely car, truck or motorcycle tyres.

Figure 1 Waste tyre rubber (rubber granules)

Table 1 Physical properties of waste tyre rubber
Researcher             Specific gravity                         Fineness modulus
Khatib & Bayomy [9]    1.18 (tyre chips), 1.12 (crumb rubber)   -
Topcu [6]              0.65                                     1.58, 1.91
Sukontasukkul [4]      0.77-0.96                                3.77-4.93
Khaloo et al. [8]      1.16                                     -

Table 2 Chemical composition of waste tyre rubber, percentage (%) by reference
Composition                                        [5]     [8]    [3]
Natural rubber                                     23.1    -      14
Synthetic rubber                                   17.9    -      27
Carbon black                                       28      29     28
Steel                                              14.5    -      14-15
Fabric, fillers, accelerators, antiozonants, etc.  16.5    -      16-17
Ash content                                        5.1     -      5
Plasticizer                                        -       10     -
Polymer                                            -       50     -

IV Rubberized Concrete
Eldin and Senouci [7] studied the variation in strength of Portland cement concrete incorporating waste tyre rubber. Aggregates (fine or coarse) were partially replaced with rubber aggregate in increments of 25 percent by volume. A larger reduction in compressive strength (85%) and tensile strength (50%) was observed for concrete with coarse rubber aggregate, while a smaller reduction in compressive strength (50%) occurred when sand was replaced by crumb rubber. The tested specimens did not exhibit brittle failure under compression or split tension, showing that rubberized concrete is able to absorb a higher amount of plastic energy under both compression and tension loading. Khaloo et al. [8] included two types of scrap tyre rubber in their study, consisting of crumb rubber and coarse tyre chips with maximum sizes of 4.75 and 20 mm, respectively. Toutanji [10] demonstrated that incorporating rubber tyre chips in a concrete mixture results in a reduction of compressive and flexural strength, the reduction in compressive strength being double that in flexural strength. However, a higher toughness was found for concrete incorporating rubber tyre chips. Li et al. [5] investigated the effect of using different forms of waste tyre rubber on hardened concrete characteristics; waste tyre rubber chips and fibers were used to evaluate the characteristics of waste-tyre-modified concrete.
As a result, the waste tyre rubber fibers performed better than the waste tyre chips. Rubberized concrete was found to have higher post-crack toughness than normal concrete without waste tyre rubber. Ling [11] reported that the density of rubberized concrete blocks decreased with increasing rubber content. The density was reduced by about 8% when 50% of the total sand was replaced by rubber, irrespective of the w/c ratio. This is mainly attributed to the low specific gravity of rubber particles compared with natural river sand. Similar findings were also reported by Sukontasukkul and Chaikaew [12]. Flocculation of the rubber particles during concrete mixing creates large voids inside the block and leads to higher porosity. Siddique and Naik [3] mentioned that the non-polar nature of rubber particles may tend to entrap air on their rough surfaces, which in turn increases the air content and reduces the density of the concrete mixtures. Sukontasukkul and Chaikaew [12] and Ling and Nor [13] reported that rubberized concrete blocks exhibit better skid resistance than control blocks (Portland cement concrete blocks). This is mainly due to the higher elasticity of rubber, which allows the block surface to deform more and create more friction. Sukontasukkul [4] claimed that crumb rubber concrete exhibits superior sound properties compared with normal concrete, as measured by the increase in sound absorption coefficient and noise reduction coefficient (NRC). Owing to the advantage of its lower density, crumb rubber concrete appears suitable for application as a sound insulator, especially in highway construction.

V Ongoing Studies
The authors of this paper are conducting research on the engineering and sound absorption properties of double layer rubberized concrete paving blocks (DL-RCPBs) (Fig.
2) incorporating RG as a partial replacement of aggregates (fine and coarse). The rubber granules were produced by Yong Fong Rubber Industries, Malaysia. The experimental work includes the properties of hardened concrete containing RG and the sound absorption level of DL-RCPBs with dimensions of 100 x 200 x 80 mm. The concrete mixes contain 10%, 20%, 30% and 40% of RG as a substitute for fine and coarse aggregates, with a water/cement ratio of 0.47. The sound absorption coefficient (α) and noise reduction coefficient (NRC) of DL-RCPBs with layer 1 thicknesses of 10 mm, 20 mm, 30 mm and 40 mm will be measured. For layer 1, the coarse aggregate will be replaced with 5-8 mm RG, whereas 1-4 mm RG will be used to replace the fine aggregate in layer 2. In order to maximize the absorption of sound, especially tyre-road interaction noise, the coarser RG is used in the facing layer (layer 1) of the DL-RCPBs. This experimental work is still ongoing and is expected to be completed by the end of August 2014.

Figure 2 Double layer rubberized concrete paving block (DL-RCPB), 100 mm x 200 mm x 80 mm, showing layer 1 and layer 2

VI Experimental Program
A Hardened concrete characteristics
The tests on hardened concrete can be classified into destructive and non-destructive tests. In this study, the destructive tests will cover the compressive strength test, flexural strength test and tensile splitting strength test. The non-destructive tests will cover density, porosity, water absorption, weight loss and skid resistance.

B Field emission scanning electron microscopy
The morphology of the RG and the rubberized concrete will also be explored. Field emission scanning electron microscopy (FESEM) is an instrument designed primarily for studying the surfaces of solids at high magnification. Information about the sample, including external morphology (texture), chemical composition, crystalline structure and the orientation of the materials making up the sample, will be determined using FESEM (Fig. 3).
Figure 3 Field emission scanning electron microscopy (FESEM)

C Thermogravimetric analysis (TGA/DTA)
The amount and rate of change in the weight of a material as a function of temperature or time in a controlled atmosphere will be determined by thermogravimetric analysis (TGA), as shown in Fig. 4. The composition of the materials and their thermal stability at temperatures up to 1000 °C will be analyzed. Characterization of materials that exhibit weight loss or gain due to decomposition, oxidation or dehydration can be performed by TGA.

Figure 4 Thermogravimetry analysis (TGA/DTA)

D X-ray fluorescence (XRF)
The chemical compositions of the RG and the rubberized concrete were determined using XRF, which is generally used to identify the elements or components present in a sample by irradiating the test sample with monochromatic X-rays. A Rigaku RIX3000 wavelength XRF will be used to analyze the samples (Fig. 5).

Figure 5 X-ray fluorescence (XRF)

E Fourier transform infra-red (FTIR)
Fourier transform infra-red (FTIR) spectroscopy, as shown in Fig. 6, is a widely used qualitative technique to characterize raw materials. The FTIR analysis is carried out using the potassium bromide (KBr) pellet method (1 mg sample per 100 mg KBr) on a spectrometer, with 32 scans per sample collected from 4000 to 650 cm-1 at 32 cm-1 resolution.

Figure 6 Fourier transform infra-red (FTIR)

F Sound absorption
The acoustic measurement is limited to the sound absorption coefficient. The method used to measure the sound absorption coefficient is described in ASTM E1050 [14]. An impedance tube, as shown in Fig. 7, will be used to measure the sound absorption coefficient of the double layer RCPBs. The sound absorption coefficient is determined for frequencies from 60 Hz to 1600 Hz.
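Once absorption coefficients are measured across frequency, the NRC mentioned above is a single-number summary. The text does not give the formula; a common definition (per ASTM C423, an assumption here) is the mean of the coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05. A minimal sketch with hypothetical coefficients:

```python
def noise_reduction_coefficient(alpha):
    """NRC: mean of the absorption coefficients at 250, 500, 1000 and
    2000 Hz, rounded to the nearest 0.05 (definition per ASTM C423)."""
    mean = sum(alpha[f] for f in (250, 500, 1000, 2000)) / 4.0
    return round(mean / 0.05) * 0.05

# Hypothetical absorption coefficients for a rubberized block sample
alpha = {250: 0.12, 500: 0.25, 1000: 0.38, 2000: 0.31}
print(noise_reduction_coefficient(alpha))  # 0.25
```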
Figure 7 Impedance tube

VII Conclusion
Different replacement percentages of rubber granules (0%, 10%, 20%, 30% and 40%) will be tested to determine the optimum level of rubber granules in concrete mixes. Microstructure analysis, covering FESEM, TGA/DTA, FTIR and XRF, will be related to the ability of the double layer RCPBs to absorb sound. The highest sound absorption level of double layer RCPBs for the different layer thicknesses will also be evaluated.

Acknowledgment
The support provided by the Malaysian Ministry of Education and Universiti Teknologi Malaysia through research grant (RUG 03H47) for this study is very much appreciated.

References
[1] A.A. van der Vlist, "The development of CBP in the Netherlands", First International Conference on Concrete Block Paving, pp. 14-22, 1980.
[2] E. Ganjian, M. Khorami, A.A. Maghsoudi, "Scrap-tyre-rubber replacement for aggregate and filler in concrete", Construction and Building Materials, vol. 23, no. 5, 1828-1836, 2009.
[3] R. Siddique and T.R. Naik, "Properties of concrete containing scrap-tire rubber - an overview", Waste Management, vol. 24, 563-569, 2004.
[4] P. Sukontasukkul, "Use of crumb rubber to improve thermal and sound properties of pre-cast concrete panel", Construction and Building Materials, vol. 23, 1084-1092, 2009.
[5] G. Li, M.A. Stubblefield, G. Garrick, J. Eggers, C. Abadie and B. Huang, "Development of waste tire modified concrete", Cement and Concrete Research, vol. 34, 2283-2289, 2004.
[6] I.B. Topcu, "The properties of rubberized concrete", Cement and Concrete Research, vol. 25, no. 2, 304-310, 1995.
[7] N.N. Eldin and A.B. Senouci, "Rubber-tire particles as concrete aggregate", Journal of Materials in Civil Engineering, vol. 5, no. 4, 478-496, 1994.
[8] A.R. Khaloo, M. Dehestani and P.
Rahmatabadi, "Mechanical properties of concrete containing a high volume of tire-rubber particles", Waste Management, vol. 28, no. 12, 2472-2482, 2008.
[9] Z.K. Khatib and F.M. Bayomy, "Rubberized Portland cement concrete", Journal of Materials in Civil Engineering, vol. 11, no. 3, 206-213, 1999.
[10] H.A. Toutanji, "The use of rubber tire particles in concrete to replace mineral aggregates", Cement and Concrete Composites, vol. 18, 135-139, 1996.
[11] T.C. Ling, "Prediction of density and compressive strength for rubberized concrete blocks", Construction and Building Materials, vol. 25, 4303-4306, 2011.
[12] P. Sukontasukkul and C. Chaikaew, "Properties of concrete pedestrian block mixed with crumb rubber", Construction and Building Materials, vol. 20, no. 7, 450-457, 2006.
[13] T.C. Ling and H. Nor, "Granulated waste tyres in concrete paving", 6th Asia-Pacific Structural Engineering and Construction Conference, pp. 65-70, Kuala Lumpur, 2006.
[14] American Society for Testing and Materials, "Standard test method for impedance and absorption of acoustical materials using a tube, two microphones and a digital frequency analysis system", United States: ASTM E1050, 2010.

Euniza Jusli. PhD candidate in highway materials, Faculty of Civil Engineering, Universiti Teknologi Malaysia. She graduated with a Bachelor in Civil Engineering at the same university in 2011. Her main research areas are concrete block pavement and highway materials.
Hasanan Md. Nor. Professor in highway materials and construction, Faculty of Civil Engineering, Universiti Teknologi Malaysia. He graduated in civil engineering at the University of Leeds, United Kingdom. His main research at the moment is concrete block pavement.
Ramadhansyah Putra Jaya. Senior lecturer and researcher, Faculty of Civil Engineering, Universiti Teknologi Malaysia. He graduated in civil engineering at Universiti Sains Malaysia. His main research areas are nanotechnology and materials in civil engineering.
Zaiton Haron.
Senior lecturer and researcher, Faculty of Civil Engineering, Universiti Teknologi Malaysia. She graduated in civil engineering at the University of Liverpool, United Kingdom. Her main research area is concrete technology.
Azman Mohamed. Lecturer, Faculty of Civil Engineering, Universiti Teknologi Malaysia. His main research areas are highway materials and construction.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015

Coupled Resonator Diplexer for LTE-Advanced System
H. Abukaresh 1, T. Skaik 2
1 UNRWA, Gaza, bh2003azem@hotmail.com
2 Islamic University of Gaza, Gaza, tskaik@iugaza.edu.ps

Abstract— This paper presents the design of a microstrip hairpin diplexer. The design is based on a coupled-resonator structure using U-shaped resonators. It is designed to meet the Long Term Evolution-Advanced (LTE-A) system band 7, operating at uplink (UL) 2.50-2.57 GHz and downlink (DL) 2.62-2.69 GHz, for a base transceiver station antenna. The structure has three ports and 10 coupled resonators with direct coupling, producing a diplexer with a Chebyshev filtering response. The diplexer does not involve any external junctions for distribution of energy, so it can be miniaturized in comparison to conventional diplexers.

Index Terms— coupled-resonator; coupling matrix; diplexer; optimization.

I Introduction
A diplexer is a device that isolates the receiver from the transmitter while permitting them to share a common antenna. It is often the key microwave component that allows two-way radios to operate in a full duplex manner. An ideal diplexer provides perfect isolation, with no insertion loss, to and from the antenna. Conventional diplexers consist of two channel filters connected to an energy distribution network.
the channel filters pass frequencies within a specified range and reject frequencies outside the specified boundaries, while the distribution network divides the signal going into the filters, or combines the signals coming from the filters [1]. the most commonly used distribution configurations are e- or h-plane n-furcated power dividers [2,3], circulators [4], manifold structures [5-8], y-junctions [9] and t-junctions [10]. in [11] a coupled-resonator diplexer has been implemented at x-band with waveguide cavity resonators. the work presented here differs from [11] in that the diplexer is designed for a different frequency band, used by the full duplex lte-a system, and is implemented in microstrip technology. the synthesis procedure of the proposed diplexer is based on elimination of the additional common junction; this approach to diplexer design can achieve reductions in the size and volume of the circuit. coupled-resonator diplexers without an additional common junction are presented in [11-12]. this method for synthesizing coupled-resonator diplexers is based on optimization of the coupling matrix of multiple coupled resonators representing a three-port network, and it is performed in the normalized frequency domain.

ii diplexer synthesis

there are many possible topologies for coupled resonators that can achieve a chebyshev response. one example is illustrated in figure 1; it is a schematic of a diplexer with n resonators (figure 1: n-resonator based diplexer). each circle represents a resonator, and the lines between resonators are internal couplings. the arrowed lines between resonators and ports represent external couplings. the coupling matrix of a multiport circuit with multiple coupled resonators has been used in the synthesis. a unified solution for the coupling matrix [a] has been utilized, generalized for both types of magnetic and electric couplings [13,14].
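the coupling-matrix description above can be turned into numbers directly: for a three-port coupled-resonator network, the matrix [a] = [q] + p[u] − j[m] is inverted and the scattering parameters are read off its first column. the sketch below is our own minimal numerical illustration (the 3-resonator topology, port-to-resonator mapping and quality-factor values are made-up, not the paper's design):

```python
import numpy as np

def diplexer_sparams(m, q_ext, p):
    """s11, s21, s31 of a three-port coupled-resonator network.

    m     : n x n normalized coupling matrix [m]
    q_ext : {port: (resonator_index, scaled_external_q)} for ports 1..3
    p     : complex lowpass frequency variable (p = j*omega)
    """
    n = m.shape[0]
    q = np.zeros((n, n), dtype=complex)
    for res, qe in q_ext.values():
        q[res, res] = 1.0 / qe          # [q] has entries only at port resonators
    a = q + p * np.eye(n) - 1j * m      # [a] = [q] + p[u] - j[m]
    a_inv = np.linalg.inv(a)
    r1, qe1 = q_ext[1]
    rx, qex = q_ext[2]
    ry, qey = q_ext[3]
    s11 = 1 - (2.0 / qe1) * a_inv[r1, r1]
    s21 = (2.0 / np.sqrt(qe1 * qex)) * a_inv[rx, r1]
    s31 = (2.0 / np.sqrt(qe1 * qey)) * a_inv[ry, r1]
    return s11, s21, s31

# toy 3-resonator network: resonator 0 feeds resonators 1 and 2
m = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
q_ext = {1: (0, 1.0), 2: (1, 1.0), 3: (2, 1.0)}
s11, s21, s31 = diplexer_sparams(m, q_ext, 0j)
```

since the resonators themselves are lossless in this model, the three output powers satisfy |s11|² + |s21|² + |s31|² = 1 at any real normalized frequency, which is a convenient sanity check on an implementation.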
transmission (s21, s31) and reflection (s11) scattering parameters of a three-port coupled-resonator circuit can be expressed in terms of the general coupling matrix as [13]:

s11 = 1 − (2/q_e1)·[a^−1]_11
s21 = (2/√(q_e1·q_ex))·[a^−1]_x1   (1)
s31 = (2/√(q_e1·q_ey))·[a^−1]_y1

h. abukaresh and t. skaik / coupled resonator diplexer for lte-advanced system (2015) 132

it is assumed that port 1 is connected to resonator 1, and ports 2 and 3 are connected to resonators x and y respectively. a general normalized coupling matrix [a], in terms of coupling coefficients and external quality factors, is given by [13]:

[a] = [q] + p[u] − j[m]   (2)

where [q] is an n×n matrix whose only nonzero entries are [q]_11 = 1/q_e1, [q]_xx = 1/q_ex and [q]_yy = 1/q_ey   (3)

here q_ei is the scaled external quality factor q_ei = Q_ei·fbw of resonator i, fbw = (ω2 − ω1)/ω0 is the fractional bandwidth, [u] is the n×n identity matrix, n is the number of resonators, p is the complex lowpass frequency variable, and [m] is the coupling matrix whose entry m_ij = M_ij/fbw is the normalized coupling coefficient between resonators i and j; the diagonal entries m_ii account for asynchronous tuning, so that resonators can have different self-resonant frequencies [13]. the optimization of the coupling matrix [m] is based on minimization of a cost function that is evaluated at the frequency locations of the reflection and transmission zeros. the cost function used here is given as [13,15]:
Ω = Σ_{i=1..t1} |(2/√(q_e1·q_ea))·cof_a1[a(s_ti)]|² + Σ_{k=1..t2} |(2/√(q_e1·q_eb))·cof_b1[a(s_tk)]|² + Σ_{j=1..r} |1 − (2/q_e1)·cof_11[a(s_rj)]/det[a(s_rj)]|² + Σ_v (|1 − (2/q_e1)·cof_11[a(s_pv)]/det[a(s_pv)]| − 10^(−rl/20))²   (4)

where q_e1, q_ea and q_eb are the external quality factors at ports 1, 2 and 3 respectively, det[a(x)] is the determinant of the matrix [a] evaluated at the frequency variable x, and cof_mn[a(y)] is the cofactor of [a] obtained by removing the m-th row and the n-th column of [a] and taking the determinant of the resulting matrix at the frequency variable y. s_ti and s_tk are the frequency locations of the transmission zeros of s21 and s31 respectively, t1 and t2 are the numbers of transmission zeros of s21 and s31 respectively, r is the total number of reflection zeros, rl is the specified return loss in db (rl < 0), and s_rj and s_pv are the frequency locations of the reflection zeros and of the in-band peaks of s11. the last term in the cost function sets the peaks of s11 to the required return loss level; it is assumed here that both channels of the diplexer have the same return loss level.

iii diplexer design

an lte-advanced band 7 diplexer operating at uplink (ul) 2.50–2.57 ghz and downlink (dl) 2.62–2.69 ghz with symmetrical channels has been designed using microstrip hairpin resonators. the diplexer has a chebyshev response with passband centre frequencies of 2.535 ghz for channel 1 and 2.655 ghz for channel 2, a minimum isolation of 60 db, and a desired return loss of 20 db in the passband of each channel. the diplexer topology is shown in figure 2 (figure 2: topology of diplexer with 10 resonators). the total number of resonators is n = 10 and the fractional bandwidth is fbw = 0.073267. a formula in [13] has been used to calculate the normalized external quality factors of diplexers with symmetrical channels, and their values are found to be q_e6 = q_e10 = 2.636 and q_e1 = 1.318.
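the cost-function minimization of section ii can be sketched numerically. the following is our own toy example, not the paper's matlab code: a hypothetical 3-resonator, three-port network with assumed external quality factors and channel centres at ±0.5 in the normalized frequency domain, synthesized with a finite-difference gradient and a backtracking step:

```python
import numpy as np

def s11(x, omega):
    """reflection at port 1 of a toy 3-resonator, three-port network:
    resonator 0 feeds resonators 1 and 2, each carrying one channel.
    x = (m01, m02, m11, m22): the inter-resonator couplings and the two
    self-couplings (asynchronous tuning) being synthesized."""
    m01, m02, m11, m22 = x
    m = np.array([[0.0, m01, m02],
                  [m01, m11, 0.0],
                  [m02, 0.0, m22]])
    qe = np.array([1.0, 2.0, 2.0])  # assumed scaled external quality factors
    a = np.diag(1.0 / qe) + 1j * omega * np.eye(3) - 1j * m
    return 1.0 - (2.0 / qe[0]) * np.linalg.inv(a)[0, 0]

def cost(x):
    # reflection zeros requested at the two assumed channel centres
    return sum(abs(s11(x, w)) ** 2 for w in (-0.5, 0.5))

x = np.array([0.5, 0.5, 0.5, -0.5])  # couplings initialized as in the paper
c0 = cost(x)
for _ in range(100):
    g = np.array([(cost(x + 1e-6 * e) - cost(x)) / 1e-6 for e in np.eye(4)])
    step = 0.25
    while step > 1e-12 and cost(x - step * g) >= cost(x):
        step *= 0.5  # backtracking: shrink the step until the cost decreases
    x = x - step * g
```

the full synthesis in the paper uses the complete cost function (4), evaluated at all transmission and reflection zeros; the loop above only illustrates the descend-on-a-cost mechanism.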
the normalized coupling coefficients between adjacent resonators are initially set to 0.5 in the initial coupling matrix, as are the self-coupling coefficients m_33, m_44, m_55, m_66, m_77, m_88, m_99 and m_10,10. the coefficients m_11 and m_22, and the couplings between non-adjacent resonators, are set to zero. a gradient-based optimization technique has been utilized to synthesize the coupling coefficients, using the cost function in equation (4). the optimization has been carried out in matlab. the optimized normalized coupling matrix is shown in table 1, and the response of the diplexer is shown in figure 3 (figure 3: diplexer prototype response with 10 resonators from the optimization process).

the lte-advanced 10-resonator diplexer has been designed using hairpin microstrip coupled resonators. the electromagnetic computer simulation technology (cst) simulator has been used to extract the desired design dimensions according to the prescribed general coupling matrix and external quality factors. the em-simulated performance of the diplexer is shown in figure 4 (figure 4: the em-simulated performance of the diplexer). the simulation results show that the return loss is better than 12 db in the transmit and receive bands, the insertion loss is only about 0.3 db in the transmit and receive bands, and the isolation is greater than 60 db in the uplink channel and greater than 35 db in the downlink channel. the top view of the diplexer structure is shown in figure 5.

iv conclusions

an lte-advanced band 7 coupled-resonator diplexer has been presented, and its synthesis is based on coupling matrix optimization. the diplexer structure consists of resonators coupled together, and it does not involve any external junctions for distribution of energy.
this enables miniaturization in comparison to conventional diplexers. the diplexer has been designed with microstrip hairpin resonators, and the simulation results showed acceptable return loss and isolation.

table 1: normalized coupling matrix of the diplexer with 10 resonators from the optimization process.

      1      2      3      4      5      6      7      8      9      10
1     0      0.794  0      0      0      0      0      0      0      0
2     0.794  0      0.379  0      0      0      0.379  0      0      0
3     0      0.379  0.543  0.258  0      0      0      0      0      0
4     0      0      0.258  0.606  0.242  0      0      0      0      0
5     0      0      0      0.242  0.626  0.331  0      0      0      0
6     0      0      0      0      0.331  0.613  0      0      0      0
7     0      0.379  0      0      0      0      -0.543 0.258  0      0
8     0      0      0      0      0      0      0.258  -0.606 0.242  0
9     0      0      0      0      0      0      0      0.242  -0.626 0.331
10    0      0      0      0      0      0      0      0      0.331  -0.613

references

[1] i. carpintero, m. padilla-cruz, a. garcia-lamperez, and m. salazar-palma, "generalized multiplexing network," u.s. patent 0114082 a1, jun. 1, 2006.
[2] j. ruiz-cruz, j. montejo-garai, j. m. rebollar, and s. sobrino, "compact full ku-band triplexer with improved e-plane power divider," progress in electromagnetics research, vol. 86, 2008, pp. 39-51.
[3] j. dittloff, j. bornemann, and f. arndt, "computer aided design of optimum e- or h-plane n-furcated waveguide power dividers," in proc. european microwave conference, sept. 1987, pp. 181-186.
[4] r. mansour, et al., "design considerations of superconductive input multiplexers for satellite applications," ieee transactions on microwave theory and techniques, vol. 44, no. 7, pt. 2, 1996, pp. 1213-1229.
[5] j. rhodes and r. levy, "design of general manifold multiplexers," ieee transactions on microwave theory and techniques, vol. 27, no. 2, 1979, pp. 111-123.
[6] d. rosowsky and d. wolk, "a 450-w output multiplexer for direct broadcasting satellites," ieee transactions on microwave theory and techniques, vol. 30, no. 9, sept. 1982, pp. 1317-1323.
[7] m. uhm, j. lee, j. park, and j.
kim, "an efficient optimization design of a manifold multiplexer using an accurate equivalent circuit model of coupling irises of channel filters," in proc. ieee mtt-s int. microwave symp., long beach, ca, 2005, pp. 1263-1266.
[8] r. cameron and m. yu, "design of manifold coupled multiplexers," ieee microwave mag., vol. 8, no. 5, oct. 2007, pp. 46-59.
[9] t. king, a. ying ying, and s. tiong, "a microstrip diplexer using folded hairpins," in proc. ieee international rf and microwave conference (rfm), seremban, malaysia, 12-14 dec. 2011, pp. 226-229.
[10] h. zhang and g. james, "a broadband t-junction diplexer with integrated iris filters," microwave and optical technology letters, vol. 17, no. 1, 1998, pp. 69-72.
[11] t. skaik and m. lancaster, "coupled resonator diplexer without external junctions," journal of electromagnetic analysis and applications, vol. 3, 2011, pp. 238-241.
[12] t. skaik, "a synthesis of coupled resonator circuits with multiple outputs using coupling matrix optimisation," phd thesis, school of electronic, electrical and computer engineering, the university of birmingham, march 2011.
[13] t. skaik, m. lancaster, and f. huang, "synthesis of multiple output coupled resonator microwave circuits using coupling matrix optimization," iet journal of microwaves, antennas & propagation, vol. 5, no. 9, june 2011, pp. 1081-1088.
[14] j. hong, microstrip filters for rf/microwave applications. new york, ny: john wiley, 2011.
[15] a. jayyousi and m. lancaster, "a gradient-based optimization technique employing determinants for the synthesis of microwave coupled filters," ieee mtt-s international microwave symposium, usa, 2004, pp. 1369-1372.

figure 5: the layout of the lte-a 10-resonator diplexer design.

h. abukaresh received the b.sc. degree in 2007 from the islamic university of gaza and the m.sc.
degree in communications engineering in 2013 from the islamic university of gaza. he is currently an it & information assistant at the united nations relief & works agency for palestine refugees in the near east, gaza field office. t. skaik received the b.sc. degree in 2004 from the islamic university of gaza, where he worked as a teaching assistant until 2006. he was awarded the hani qaddumi scholarship and received the m.sc. degree in communications engineering with distinction in 2007 from the university of birmingham, uk. he was awarded the orsas scholarship for doctoral study at the university of birmingham, uk, and received the phd degree in microwave engineering in 2011. throughout his phd study, he worked as a teaching assistant, and also as a research associate on micromachined microwave circuits. he is currently an assistant professor at the islamic university of gaza.

transactions template journal of engineering research and technology, volume 1, issue 4, december 2014 132

pmsm sensorless speed control drive

youakim kalaani1, rami haddad2, adel el shahat3
1 department of electrical engineering, georgia southern university, usa
2 department of electrical engineering, georgia southern university, usa
3 department of electrical engineering, georgia southern university, usa

abstract— permanent magnet synchronous machines (pmsm) are very popular in many industrial applications such as mechatronics, automotive, energy storage flywheels, centrifugal compressors, vacuum pumps, and robotics. this paper proposes sensorless control for a pmsm speed drive based on a closed-loop control system using a proportional and integral (pi) controller that is designed to operate in flux-weakening regions under a constant torque angle. this sensorless element was adopted for best estimating the pmsm rotor position based on its performance characteristics, eliminating the need for the speed sensors which are usually required in such control applications.
to achieve this goal, a pulse width modulation (pwm) control scheme was developed to work in conjunction with a field oriented motor control drive using simulink. this control system was simulated assuming realistic circuit components to maximize the accuracy of the proposed model. finally, simulation results obtained under different operating conditions at below and above the rated speed of the motor are presented and discussed in this paper.

index terms— permanent magnet, synchronous machine, control, sensorless, simulink, field oriented.

i introduction

the vector control of ac machines was introduced in the late 1960s by blaschke, hasse, and leonhard in germany. following their pioneering work, this technique, allowing a quick torque response of ac machines similar to that of dc machines, has achieved a high degree of maturity and become popular in a broad variety of applications. for many years, pmsm have been the subject of intense study, and various speed control schemes have been proposed in the literature. for instance, c. bowen et al. [1] addressed the modeling and simulation of a pmsm supplied from a six-step continuous inverter based on the state-space method. furthermore, c. mademlis et al. [2] presented an efficiency optimization method for a vector-controlled interior pm drive, and a modular control approach was applied by x. jian-xin et al. [3]. in motor drive applications, a shaft encoder or a hall sensor is typically used to measure the rotor position [4-8]. with flux-weakening technology, the operating speed range can be extended by applying a negative magnetizing current component to weaken the air-gap flux [9, 10]. this has led to a new design concept of permanent magnet (pm) machines for flux-weakening operation, proposed by l. xu et al. [11]. for their part, tapia et al. explored a magnetic structure termed the consequent-pole pm (cppm) machine, which has inherent field-weakening capability [12].
soong and miller proved that maximum-torque field-weakening control can be achieved through optimal high-saliency interior pm motor design [13], and two control techniques to enhance the performance of pm drives over an extended speed range were presented by macminn and jahns [14]. the technique of maximum torque per ampere (mtpa) operation at a break-point speed was first investigated by sebastian and slemon [15], and a current-regulated flux-weakening method for reduced air-gap flux was introduced by dhaoudi and mohan [16]. although current vector control and feedforward decoupling compensation appeared in work by morimoto et al. [17, 18], it was not until sudhoff et al. [19] that a flux-weakening control scheme was set forth that is relatively simple and does not require prior knowledge of the machine and inverter parameters. along these lines, sozer and torrey [20] presented adaptive control over the entire speed range of a pm motor. several flux-weakening control methods based on voltage regulation were proposed by y. s. kim et al. [21], j. m. kim et al. [22], and j. h. song et al. [23], in which a voltage error signal is generated between the maximum output voltage and the voltage command. in vector control of pm motors, the output of the voltage regulator is used to determine the demagnetizing current needed to prevent saturation. however, the added controller could only operate properly under well-tuned conditions, which are not easily reached [24], and the d-q axis currents cannot be independently controlled due to the cross-coupling effects which become dominant at high speeds. as a result, the dynamic performance of pm motors is degraded without a decoupling control scheme, and effective control with fast dynamic response requires accurate rotor position [21-27].
youakim kalaani, rami haddad, adel el shahat / pmsm sensorless speed control drive (2014) 133

adaptive control methods seem to be the most promising modern control strategy [28], [29], and a model reference adaptive control (mrac) scheme characterized by reduced computation was proposed by cerruto et al. [28]. this model was further refined by baik et al. [30] by estimating the values of slowly varying parameters using lyapunov stability criteria. the use of sensors to measure motor speed can result in increased cost and reduced control robustness and/or reliability. the first breakthrough in sensorless control theory was reported by a. rostami and b. asaei [31], who developed a method for estimating the rotor position, as did other proposed methods [32-35]. however, many challenges remain in the design of sensorless control operating over a wide speed range of pm motors. improved position-sensorless control schemes were developed in the last decade [36-40], especially in the area of direct drive, which achieved higher dynamic response, increased efficiency, and low acoustic noise. in modern applications, the pmsm machine is designed to operate in constant-torque and constant-power modes at below and above the rated speed, which can significantly reduce the cost and size of the overall drive system. constant-torque operation can easily be achieved by conventional vector control, but the motor will not be able to operate in constant-torque mode above the rated speed. this problem was alleviated by the introduction of flux-weakening techniques, which extended the operating speed range by applying a negative magnetizing current component to weaken the air-gap flux [41], [42]. in this paper, a sensorless vector control of pmsm drives using flux-weakening techniques is presented. a pi controller operating under a constant torque angle is implemented using a novel pwm control scheme for a field oriented motor control drive.
this controller was tested using simulink, and different operating conditions under variable speed are presented and discussed in this paper. this sensorless drive system is also useful in electric vehicle (ev) applications.

ii pmsm dynamic modeling

the pmsm drive system with and without a speed sensor is described in this section. it includes components such as permanent magnet motors, position sensors, an inverter, and a current controller, with a sensor or with a speed estimation unit for sensorless control. both configurations are presented in fig. 1 and fig. 2 respectively (fig. 1: drive system schematic with position sensor; fig. 2: drive system schematic without position sensor). the pmsm equivalent circuit used to derive the dynamic equations in the d-q axis is presented in fig. 3 (fig. 3: pmsm equivalent circuit). the stator windings are assumed to have equal turns per phase in the d-q axis. the rotor flux is assumed to be concentrated along the d-axis, while there is zero flux along the q-axis. in addition, it is assumed that the machine core losses are negligible. variations in rotor temperature can alter the magnet flux, but its variation with respect to time is considered negligible.

iii pmsm stator flux linkage

the equations for the stator voltages along the d-q axis are given by:

v_q = r_q·i_q + ρ(λ_q) + ω_r·λ_d   (1)
v_d = r_d·i_d + ρ(λ_d) − ω_r·λ_q   (2)

where ρ is the d/dt differential operator, and r_q, r_d are the winding resistances, referred to as r_s when equal. the q-d axis stator flux linkages referred to the rotor reference frame can be written as:

λ_q = l_s·i_q + l_af·i_q   (3)
λ_d = l_s·i_d + l_af·i_d   (4)

theoretically, the self-inductances of the stator q-d axes are equal to l_s only when the rotor magnets are 180 electrical degrees apart, but this is hardly the case in practice.
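the stator-voltage and flux-linkage relations above, combined with the torque and mechanical equations of the machine that follow, can be integrated numerically. below is a minimal forward-euler sketch we add for illustration; all machine parameter values are placeholders of ours, not the paper's:

```python
# assumed placeholder machine parameters (not taken from the paper)
RS, LD, LQ = 1.0, 0.01, 0.01   # stator resistance, d- and q-axis inductances
LAM_AF, POLES = 0.1, 4         # magnet flux linkage, number of poles
J, B = 1e-3, 1e-4              # shaft inertia, viscous friction coefficient

def dq_step(i_d, i_q, w_rm, v_d, v_q, t_load, dt):
    """one forward-euler step of the pmsm dq dynamic model:
    v_q = rs*i_q + d(lam_q)/dt + w_r*lam_d,
    v_d = rs*i_d + d(lam_d)/dt - w_r*lam_q."""
    w_r = (POLES / 2) * i_d * 0 + (POLES / 2) * w_rm   # electrical speed
    lam_d = LD * i_d + LAM_AF       # d-axis flux includes the magnet flux
    lam_q = LQ * i_q
    did = (v_d - RS * i_d + w_r * lam_q) / LD
    diq = (v_q - RS * i_q - w_r * lam_d) / LQ
    t_e = 1.5 * (POLES / 2) * (lam_d * i_q - lam_q * i_d)  # torque
    dw = (t_e - t_load - B * w_rm) / J                     # mechanics
    return i_d + did * dt, i_q + diq * dt, w_rm + dw * dt, t_e

id2, iq2, w2, te = dq_step(0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1e-4)
```

with i_d = 0 the torque reduces to (3/2)(p/2)·λ_af·i_q, which is the constant-torque-angle operating point discussed later in the paper.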
when the stator winding is aligned with the rotor, the inductance l_d (d-axis) is the lowest, while the winding facing the interpolar path gives the higher inductance l_q (q-axis) [43]. the excitation of the permanent magnet is modeled as a constant current source i_fr along the d-axis. since there is no flux along the q-axis, the rotor current is assumed to be zero. therefore, the voltage and flux-linkage equations become:

v_q = r_s·i_q + ρ(λ_q) + ω_r·λ_d
v_d = r_s·i_d + ρ(λ_d) − ω_r·λ_q
λ_q = l_q·i_q
λ_d = l_d·i_d + l_m·i_fr = l_d·i_d + λ_af

where l_m is the mutual inductance between the stator and rotor windings, ω_r is the electrical velocity of the rotor, λ_af is the flux linkage due to the rotor magnet (λ_af = l_m·i_fr and ρ(λ_af) = 0), and ρ is the d/dt operator.

iv pmsm torque equations

the electromagnetic torque is given by:

t_e = (3/2)(p/2)·(λ_d·i_q − λ_q·i_d)   (5)

this torque is derived from the input power as follows:

p_in = v_a·i_a + v_b·i_b + v_c·i_c   (6)

equation (6) has three parts: 1) power loss in the conductors; 2) rate of change of energy stored in the magnetic field; and 3) conversion to mechanical energy. the electromechanical power is given by:

p_em = ω_rm·t_e = (3/2)·ω_r·(λ_d·i_q − λ_q·i_d)   (7)
ω_r = (p/2)·ω_rm   (8)

where p is the number of poles and ω_rm is the mechanical velocity of the rotor. therefore, the torque can be written as:

t_e = (3/2)(p/2)·[λ_af·i_q + (l_d − l_q)·i_d·i_q]   (9)

where the first term of equation (9) represents the magnet alignment torque and the second term represents the reluctance torque. the general mechanical equation for the motor is:

t_e = t_l + t_d + b·ω_rm + j·ρ(ω_rm)   (10)

where b is the viscous friction coefficient, j the inertia of the shaft and load system, t_d the dry friction, and t_l the load torque.

v pmsm dynamic simulation

the dynamic simulation presented in this paper was performed using simulink in the matlab package. a pmsm block is shown in fig.
4, where the voltage and load torque are presented as inputs while the motor speed and current are presented as outputs (fig. 4: model block of pmsm dynamics). a more detailed model [44-46] is provided in fig. 5 (fig. 5: detailed model of pmsm).

vi pmsm current control

high-performance drives utilize control strategies which develop command signals for the ac machine currents. current controls eliminate the stator dynamics (effects of stator resistance, stator inductance, and induced emf), and thus, to the extent that the current regulator functions as an ideal current supply, the order of the system can be significantly reduced. however, ac current regulators, which form the inner loop of the drive system, are complex, since both the amplitude and the phase of the stator currents must be controlled. they must provide minimum steady-state error and also require the widest bandwidth in the system. both current source inverters (csi) and voltage source inverters (vsi) can be operated in controlled-current modes. pwm current controllers [47] are widely used since they implement a control scheme based on comparing a triangular carrier wave of the desired switching frequency with the error of the controlled signal. the error is the difference between the reference signal generated in the controller and the actual motor current. if the error command is above the triangle waveform, the vsi leg is held switched to the positive polarity (upper switch on). conversely, if the error command is below the triangle waveform, the inverter leg is switched to the negative polarity (lower switch on). in this study, a pwm current controller is used with generated signals as shown in fig. 6.
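the carrier-comparison rule just described can be written in a few lines. this is our own simplified sketch; the carrier normalization to [−1, 1] and the default switching frequency are assumptions, not values from the paper:

```python
def triangle(t, f_sw):
    """triangular carrier in [-1, 1] at switching frequency f_sw (hz)."""
    x = (t * f_sw) % 1.0
    return 2.0 * abs(2.0 * x - 1.0) - 1.0

def vsi_leg_upper_on(i_ref, i_meas, t, f_sw=10e3):
    """pwm current control for one vsi leg: the upper switch is on while
    the current error lies above the triangular carrier; otherwise the
    lower switch conducts (negative polarity)."""
    return (i_ref - i_meas) > triangle(t, f_sw)
```

averaged over one carrier period, the leg duty cycle rises linearly with the current error (for errors within the carrier range), which is what closes the inner current loop around the motor.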
fig. 6: pwm current controller.

vii pmsm field oriented control

pmsm field oriented (vector) control is derived from the machine dynamic model and is based on the decoupling of the torque components. the 3-phase currents flowing in the stator windings can be transformed to the rotor reference frame using park's transformation; the stator phase currents are:

i_a = i_s·sin(ω_r·t + α)
i_b = i_s·sin(ω_r·t + α − 2π/3)   (11)
i_c = i_s·sin(ω_r·t + α + 2π/3)

where α is the angle between the rotor field and the stator current, and ω_r is the electrical rotor speed. in the rotor reference frame, the q-axis current (i_q) and the d-axis current (i_d) are usually constant since α is fixed for a given load torque. under this condition, i_q and i_d are called respectively the torque- and flux-producing components of the stator current. they can be written as:

i_q = i_s·sin α,  i_d = i_s·cos α   (12)

and the electromagnetic torque is given by:

t_e = (3/2)(p/2)·[(1/2)(l_d − l_q)·i_s²·sin 2α + λ_af·i_s·sin α]   (13)

the field oriented or vector control can be utilized under two modes of operation:

a constant flux operation

in this mode of operation, it is possible to produce maximum torque by setting the angle α in equation (12) to 90°, which makes i_d zero and i_q equal to i_s. therefore, torque equation (13) can be rewritten as a function of the motor current:

t_e = k_t·i_q   (14)
k_t = (3/2)(p/2)·λ_af   (15)

b flux-weakening operation

flux weakening is the process of reducing the flux in the d-axis, which yields a higher speed range. the weakening of the field flux is required for operation above the rated speed or base frequency.
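the resulting operating envelope — constant torque up to the rated speed, constant power (torque falling as 1/ω) in the flux-weakening region — can be sketched as follows. these helper functions and the ratings used in the test are our own illustration, not values from the paper:

```python
def torque_limit(w_r, w_rated, t_rated):
    """available torque over speed: constant torque up to the rated
    speed, then constant power (t ~ 1/w) in the flux-weakening region."""
    if w_r <= w_rated:
        return t_rated
    return t_rated * w_rated / w_r

def power(w_r, w_rated, t_rated):
    """mechanical power corresponding to the torque envelope above."""
    return w_r * torque_limit(w_r, w_rated, t_rated)
```

above the rated speed the product ω·t stays constant, which is the constant-power region referred to in the text.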
under this mode, the motor drive is operated at a constant voltage-over-frequency (v/f) ratio, which results in a reduction of the torque proportional to the change in frequency; under this condition, the motor operates in the constant power region [48]. when permanent magnets are used, flux weakening is achieved by increasing the negative i_d current, using the armature reaction to reduce the air-gap flux [49]. the torque can be varied by altering the angle between the stator mmf and the rotor d-axis. in the flux-weakening region, where ω_r > ω_rated, it is possible to change the value of α by adjusting i_d and i_q as shown below:

α = tan⁻¹(i_q / i_d)   (16)

since the torque is a function of the i_q current, the torque will also be reduced. the generated reference signals are used by the current controller to drive the inverter, and the load torque given by equation (17) can be adjusted for different reference speeds ω_r:

t_l = t_e(rated)·(ω_rated / ω_r)   (17)

viii implementing the speed control loop

precise control of speed and position is required in many applications, such as robotics and factory automation. a typical control system consists of a speed feedback system, a motor, an inverter, a controller, and a speed setting device. a properly designed feedback controller makes the system insensitive to disturbances and parameter changes. closed-loop control systems have a fast response but are expensive due to the need for feedback components such as speed sensors. a block diagram of a typical pmsm drive system with a full speed range is shown in fig. 7.
the system consists of a motor, an inverter, and a controller (constant flux and flux-weakening operation, and reference signals) (fig. 7: block diagram of the original drive system).

fig. 8: block diagram of the sensorless drive system.

a pmsm speed sensorless drive system is shown in fig. 8, in which the speed sensor is replaced by position estimation and its derivative. the speed controller calculates the difference between the reference speed and the actual speed, producing an error which is fed to the pi controller. pi controllers are widely used in motion control systems. they consist of a proportional gain that produces an output proportional to the input error, and an integrator to eliminate the steady-state error due to a step input. a block diagram of a typical pi controller is shown in fig. 9 (fig. 9: block diagram of a pi controller).

motor speed controllers consist of an inner loop for the current and an outer loop for the speed. depending on the response of the system, the current loop is at least 10 times faster than the speed loop. the current control is performed by comparing the reference currents with the actual motor currents. a simplified control system may be obtained by setting the gain of the current loop to unity, as displayed in fig. 10 (fig. 10: simplified speed controller block diagram).

viii inverter-motor equivalent circuit

the equivalent circuit of an inverter used for a pmsm speed drive is provided in fig. 11. the line-to-line voltages are:

v_ab(t) = v_an(t) − v_bn(t)
v_bc(t) = v_bn(t) − v_cn(t)   (18)
v_ca(t) = v_cn(t) − v_an(t)

fig.
11: inverter-motor equivalent circuit.

the motor voltages provided by the inverter are equivalent to a 3-phase voltage source [50, 51] that can be written with a modified expression as:

v_ao(t) = v_an(t) + v_on(t)
v_bo(t) = v_bn(t) + v_on(t)   (19)
v_co(t) = v_cn(t) + v_on(t)

for a star-connected system, the following relationship must be satisfied at all times:

v_ao + v_bo + v_co = 0   (20)

using equations (19) and (20), the null voltage is derived as:

v_on = −(v_an + v_bn + v_cn)/3   (21)

the phase voltages at the inverter legs are a function of the dc source and the switching duty cycles (d_a, d_b, d_c) as follows:

v_an = v_dc·d_a,  v_bn = v_dc·d_b,  v_cn = v_dc·d_c   (22)

from which the line voltages can be derived as:

v_ab = v_dc·(d_a − d_b),  v_bc = v_dc·(d_b − d_c),  v_ca = v_dc·(d_c − d_a)   (23)

with further derivation, the phase voltages can be written as:
similarly, voltage (ѵ)and flux linkage (λ)can also be transfered from (a-b-c) frame to (α-β) frame by the following transformations: (28) where t ssss iiii         00  (29) t ssss         00   thflux linkage is transformed as ms s s il 00. 0 0       (30) where             0 sin cos 0 0 r r m m      (31) furthermore, the induced back emf in the windings of the fictitious quadrature-phase machine can be written as a function of the flux linkages and rotor position (angle) as:               r r mr s s s e e e       cos sin (32) finally, the stator iabccurrents can readily be obtained from the idq0currents by the following reverse transformation: (33) x simulink simulation of pmsm drive simulink was chosen from several simulation tools because of its flexibility in working with analog and digital devices. the pmsm drive systempresented in this paper was made of several block diagrams as shown in the following figures using sumilinkand then connected together to build the whole system. for instance, idq0 to iabcreverse transformation block is shown in fig 12, the vector control reference current block with pi speed controller depicted in fig.13, the voltage source inverter shown in fig. 14, and the sensorless rotor position estimation block is given in fig 16. the block diagram for the complete pmsm drive system is presented in fig. 17.for simulation purposes, the voltages are assumed to be the system inputs and the current are the outputs. clark transformation blocks with the flux linkages block were simulated to estimate the rotor position and parks transformation were used for converting vabc to vdqo.also as shown, vector control requires a block for the calculation of the reference current using angle α, rotor position, and the magnitude of current is.inverter action is implemented using reference currents to generate the gate pulses for the igbts. 
Fig. 12 i_dq0-to-i_abc block
Fig. 13 Vector-control reference-current block with PI speed controller
Fig. 14 Voltage source inverter
Fig. 15 PWM current-controller block
Fig. 16 Rotor position (speed-sensorless) estimation block
Fig. 17 Complete speed-sensorless drive system

XI. Simulation Results

Simulation results of the PMSM drive system using the proposed PWM current-control scheme are presented in this section. The motor was run in constant-torque mode below its rated speed and in flux-weakening mode above rated speed. Currents, torques, and speeds were all plotted under these two operation modes. Simulation results are given at motor speeds of 2000 rpm and 2400 rpm, respectively. As shown in Fig. 18 and Fig. 26, the motor speed reached the desired levels in less than 0.01 s, with all oscillations dying out within 0.02 s. The steady-state error due to a step input (reference speed) was shown to be zero.
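The rotor-position estimation block (Fig. 16) recovers the angle from the stationary-frame flux linkages. A minimal sketch of the estimation implied by equations (30)-(31) follows; the function name and sample values are mine, not from the paper:

```python
# Sketch (assumed, following eqs. (30)-(31)): the magnet flux components are
# isolated from the total flux linkages, and the rotor angle is their argument.
import math

def rotor_angle(lam_alpha, lam_beta, i_alpha, i_beta, Ls):
    # eq. (30): lambda = Ls*i + lambda_m', so subtract the Ls*i term
    m_alpha = lam_alpha - Ls * i_alpha
    m_beta = lam_beta - Ls * i_beta
    # eq. (31): magnet flux = lambda_m * [cos(theta_r), sin(theta_r)]
    return math.atan2(m_beta, m_alpha)

# Synthetic check: build flux linkages for a known angle and recover it.
theta, lam_m, Ls = 1.2, 0.1, 0.002
i_a, i_b = 3.0, -1.0
la = Ls * i_a + lam_m * math.cos(theta)
lb = Ls * i_b + lam_m * math.sin(theta)
assert abs(rotor_angle(la, lb, i_a, i_b, Ls) - theta) < 1e-9
```

In the Simulink model, the flux linkages themselves come from integrating v - r_s i (the stator voltage equation), after which the speed is obtained by differentiating the estimated angle.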
Fig. 18 Motor speed vs. time at 2000 rpm

[Figure content: Simulink block internals for Figs. 12-17, including the function blocks for the coordinate transformations, the PI speed controller, relay (hysteresis) blocks generating the gate pulses, the inverter leg-voltage functions of the duty ratios and v_dc, and the flux-linkage-based position estimator; visible parameter values include a stator resistance of 6.8 and a DC-link voltage of 302.]
Fig. 19 i_abc currents vs. time at 2000 rpm
Fig. 20 i_dq currents vs. time at 2000 rpm
Fig. 21 Torque vs. time at 2000 rpm

The three-phase i_abc currents drawn by the motor, obtained by Park's reverse transformation, are shown for the two speeds in Fig. 19 and Fig. 27, respectively. The corresponding i_dq currents are displayed in Fig. 20 and Fig. 28, in which the value of i_d in Fig. 20 is zero since field-oriented control is used. The torques developed by the motor are shown in Fig. 21 and Fig. 29, where the starting torque is almost twice the steady-state (rated) torque value.

Fig. 22 i_abc reference currents vs. time at 2000 rpm
Fig. 23 Inverter phase (a) pulses vs. time at 2000 rpm
Fig. 24 Speed error vs. time at 2000 rpm
Fig. 25 Phase (a) voltage vs. time at 2000 rpm

The reference currents obtained by this type of control are shown in Fig. 22 and Fig. 30. The phase (a) inverter pulses, the speed error, and the phase (a) inverter voltage at 2000 rpm are presented in Fig. 23, 24, and 25, respectively, and those at 2400 rpm are displayed in Fig. 31, 32, and 33.

Fig. 26 Motor speed vs. time at 2400 rpm
Fig. 27 i_abc currents vs. time at 2400 rpm
Fig. 28 i_dq currents vs. time at 2400 rpm
Fig. 29 Torque vs. time at 2400 rpm
Fig. 30 i_abc reference currents vs. time at 2400 rpm
Fig. 31 Inverter phase (a) pulses vs. time at 2400 rpm
Fig. 32 Speed error vs. time at 2400 rpm
Fig. 33 Phase (a) voltage vs. time at 2400 rpm

It should be noted that a negative speed was observed in Fig. 26 due to acceleration effects, which make the machine run as a generator at first before running as a motor. Without flux weakening, the torque was also observed to decrease rapidly to zero with increasing speed above the rated speed, and it briefly turned negative in response to sudden variations in the DC bus voltage. This mode of operation is unstable, since the machine drive is out of control at that time. This can be resolved by flux weakening, which ensures proper control over the whole speed and voltage range. Furthermore, the negative effect of the pure feedback control could be avoided by torque-setpoint rate limitation, which is necessary to limit the increase in acceleration anyway.

references [1] b. cui, j. zhou, and z. ren, "modeling and simulation of permanent magnet synchronous motor drives," 2001. [2] c. mademlis and n. margaris, "loss minimization in vector-controlled interior permanent-magnet synchronous motor drives," industrial electronics, ieee transactions on, vol. 49, pp. 1344-1347, 2002. [3] x. jian-xin, s. k. panda, p. ya-jun, l. tong heng, and b. h.
lam, "a modular control scheme for pmsm speed control with pulsating torque minimization," industrial electronics, ieee transactions on, vol. 51, pp. 526-536, 2004. [4] r. gabriel, w. leonhard, and c. nordby, “field oriented control of standard ac motor using microprocessor,” ieee trans. ind. applicat., vol. ia-16, pp. 186–192, 1980. [5] l. harnefors, “design and analysis of general rotor-flux oriented vector control systems,” ieee trans. ind. electron., vol. 48, pp. 383–389, apr. 2001. [6] m. schroedl, “sensorless control of ac machines at low speed and standstill based on the “inform” method,” in conf. rec. ieee-ias annu. meeting, vol. 1, 1996, pp. 270–277. [7] p. l. jansen and r. d. lorentz, “transducerless position and velocity estimation in induction and salient ac machines,” ieee trans. ind. applicat., vol. 31, pp. 240–247, mar./apr. 1995. [8] p. l. jansen, r. d. lorenz, and d. w. novotny, “observer-based direct field orientation: analysis and comparison of alternative methods,” ieee trans. ind. applicat., vol. 30, pp. 945–953, july/aug. 1994. [9] t. m. jahns and v. blasko, “recent advances in power electronics technology for industrial and traction machine drives,” proc. ieee, vol. 89, pp. 963–975, june 2001. [10] thomas m. jahns, “motion control with permanent-magnet ac machines,” in proc. ieee, vol. 82, aug. 1994, pp. 1241-1252. [11] l. xu, l. ye, l. zhen and a. el-antably, “a new design concept of permanent magnet machine for flux weakening operation,” ieee trans. ind. applicat., vol. 31, pp. 373-378, march/april, 1995. [12] j. a. tapia, f. leonardi, and t. a.
lipo, “consequent-pole permanent-magnet machine with extended field-weakening capability,” ieee trans. ind. applicat., vol. 39, pp. 1704-1709, nov./dec., 2003. [13] w. l. soong and t. j. miller, “field-weakening performance of brushless synchronous ac motor drives,” proc. iee—elect. power applicat., vol. 141, no. 6, pp. 331–340, nov. 1994. [14] s. r. macminn and t. m. jahns, “control techniques for improved high-speed performance of interior pm synchronous motor drives,” ieee trans. ind. applicat., vol. 2, pp. 997-1004, sept./oct. 1991. [15] t. sebastian and g. r. slemon, “operating limits of inverter-driven permanent magnet motor drives,” ieee ch2272-3/86, pp. 800-805, 1986. [16] r. dhaouadi and n. mohan, “analysis of current-regulated voltage-source inverters for permanent magnet synchronous motor drives in normal and extended speed ranges,” ieee trans. energy conv., vol. 5, pp. 137-144, mar. 1990. [17] s. morimoto, m. sanada and k. takeda, “wide-speed operation of interior permanent magnet synchronous motors with high-performance current regulator,” ieee trans. ind. applicat., vol. 30, pp. 920-926, july/aug. 1994. [18] s. morimoto, y. takeda, t. hirasa, and k. taniguchi, “expansion of operating limits for permanent magnet by current vector control considering inverter capacity,” ieee trans. ind. applicat., vol. 26, pp. 866-871, sept./oct. 1990. [19] s. d. sudhoff, k. a. corzine and h. j. hegner, “a flux-weakening strategy for current-regulated surface-mounted permanent-magnet machine drives,” ieee trans. energy conv., vol. 10, pp. 431-437, sept. 1995. [20] y. sozer and d. a. torrey, “adaptive flux weakening control of permanent magnet synchronous motors,” in conf. rec. ieee-ias annu. meeting, vol. 1, st. louis, mo, 1998, pp. 475–482. [21] y. s. kim, y. k. choi and j. h. lee, “speed-sensorless vector control for permanent-magnet synchronous motors based on instantaneous reactive power in the wide-speed region,” iee proc-electr. power appl., vol. 152, no. 5, pp.
1343-1349, sept. 2005. [22] j. m. kim and s. k. sul, “speed control of interior permanent magnet synchronous motor drive for the flux weakening operation,” ieee trans. ind. applicat., vol. 33, pp. 43-48, jan./feb. 1997. [23] j. h. song, j. m. kim, and s. k. sul, “a new robust spmsm control to parameter variations in flux weakening region,” ieee iecon, vol. 2, pp. 1193-1198, 1996. [24] j. j. chen and k. p. chin, “automatic flux-weakening control of permanent magnet synchronous motors using a reduced-order controller,” ieee trans. power electron., vol. 15, pp. 881-890, sept. 2000. [25] a. consoli, g. scarcella and a. testa, “industry application of zero-speed sensorless control techniques for pm synchronous motors,” ieee trans. ind. applicat., vol. 37, pp. 513-521, march/april, 2001. [26] m. tursini, r. petrella and f. parasiliti, “initial rotor position estimation method for pm motors,” ieee trans. ind. applicat., vol. 39, pp. 1630-1640, nov./dec., 2003. [27] f. j. lin and s. l. chiu, “adaptive fuzzy sliding mode control for pm synchronous servo motor drives,” proc. iee—contr. theory applicat., vol. 145, no. 1, pp. 63–72, 1998. [28] e. cerruto, a. consoli, a. raciti, and a. testa, “a robust adaptive controller for pm motor drives in robotic applications,” ieee trans. power electron., vol. 10, pp. 62-71, jan. 1995. [29] k. j. åström and b. wittenmark, “a survey of adaptive control applications,” in proc. 34th ieee conf. decision and control, new orleans, la, 1995, pp. 649-654. [30] i. c. baik, k. h. kim, and m. j. youn, “robust nonlinear speed control of pm synchronous motor using adaptive and sliding mode control techniques,” proc. iee—elect. power applicat., vol. 145, no. 4, pp. 369–376, 1998. [31] alireza rostami and behzad asaei, “a novel method for estimating the initial rotor position of pm motors without the position sensor,” energy conversion and management, vol. 50, (2009), pp. 1879–1883. [32] m.s. merzoug and h.
benalla, “nonlinear backstepping control of permanent magnet synchronous motor (pmsm),” international journal of systems control, vol. 1, iss. 1, 2010, pp. 30-34. [33] jinpeng yu, junwei gao, yumei ma, and haisheng yu, “adaptive fuzzy tracking control for a permanent magnet synchronous motor via backstepping approach,” mathematical problems in engineering, hindawi publishing corporation, volume 2010, article id 391846. [34] h.m. hasanien, “torque ripple minimization of permanent magnet synchronous motor using digital observer controller,” energy conversion and management, volume 51, issue 1 (january, 2010), pp. 98-104. [35] li dong, wang shi-long, zhang xiao-hong and yang dan, “impulsive control for permanent magnet synchronous motors with uncertainties: lmi approach,” chinese physics b, vol. 19, issue 1, pp. 010506-7 (2010). [36] r. gabriel, w. leonhard, and c. nordby, “field oriented control of standard ac motor using microprocessor,” ieee trans. ind. applicat., vol. ia-16, pp. 186–192, 1980. [37] l. harnefors, “design and analysis of general rotor-flux oriented vector control systems,” ieee trans. ind. electron., vol. 48, pp. 383–389, apr. 2001. [38] m. schroedl, “sensorless control of ac machines at low speed and standstill based on the “inform” method,” in conf. rec. ieee-ias annu. meeting, vol. 1, 1996, pp. 270–277. [39] p. l. jansen and r. d. lorentz, “transducerless position and velocity estimation in induction and salient ac machines,” ieee trans. ind. applicat., vol. 31, pp. 240–247, mar./apr. 1995. [40] p. l. jansen, r. d. lorenz, and d. w. novotny, “observer-based direct field orientation: analysis and comparison of alternative methods,” ieee trans. ind. applicat., vol. 30, pp. 945–953, july/aug. 1994. [41] t. m. jahns and v.
blasko, “recent advances in power electronics technology for industrial and traction machine drives,” proc. ieee, vol. 89, pp. 963–975, june 2001. [42] thomas m. jahns, “motion control with permanent-magnet ac machines,” in proc. ieee, vol. 82, aug. 1994, pp. 1241-1252. [43] r. krishnan, electric motor drives: modeling, analysis & control, prentice hall, 2006. [44] h. m. el shewy, f. e. abd al kader, m. el kholy, and a. el shahat, “dynamic modeling of permanent magnet synchronous motor using matlab simulink,” ee108, 6th international conference on electrical engineering iceeng 6, 27-29 may 2008, military technical college, egypt. [45] adel el shahat and hamed el shewy, “permanent magnet synchronous motor dynamic modeling,” paper id: x305, 2nd international conference on computer and electrical engineering (iccee 2009), dubai, uae, december 28-30, 2009. [46] adel el shahat, hamed el shewy, “pm synchronous motor dynamic modeling with genetic algorithm performance improvement,” international journal of engineering, science and technology, issn 2141-2839 (online), issn 2141-2820 (print), vol. 2, no. 2, 2010, pp. 93-106. [47] b. k. bose, power electronics and variable frequency drives, 1 ed: wiley, john & sons, 1996. [48] r. krishnan, electric motor drives modeling, analysis, and control, pearson education, 2001. [49] x. junfeng, w. fengyan, f. jianghua, and x. jianping, "flux-weakening control of permanent magnet synchronous motor with direct torque control consideration variation of parameters," industrial electronics society, iecon 2004, 30th annual conference of ieee, vol. 2, pp. 1323-1326, 2004. [50] kazmierkowski m.p., tunia h.: automatic control of converter-fed drives, elsevier science & technology (united kingdom), 1994. [51] ned mohan, tore m. undeland and william p. robbins, power electronics, converters, applications and design, third edition, usa isbn 0-47122693-9, john wiley & sons, inc. [52] a. munoz-garcia and d. w.
novotny, “utilization of third harmonic-induced-voltages in pm generators,” industry applications conference, 1996, thirty-first ias annual meeting, ias '96, vol. 1, 6-10 oct 1996, pp. 525–532.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015 105

Time Management in Engineering Consulting Firms

Nasreddin Elmezaini, Ph.D., P.Eng., Assoc. Professor, The Islamic University of Gaza

Abstract— Human resources and manpower are the main assets of engineering consulting firms. The success of the firm depends on how well its managers can make use of their employees' time. To maximize profit, it is necessary to maximize the utilization and billability rates of the employees. Lack of work in the firm will leave some employees with an inadequate amount of work; hence, their utilization and billability rates drop and the company starts to lose cash. Sometimes, however, billability rates drop not due to a lack of work but due to inefficient utilization of the existing manpower. This usually results from poor managerial practice and/or a lack of efficiency of the employees. This paper discusses the efficiency and productivity of practicing engineers in consulting engineering firms: how can we maximize the efficiency of practicing engineers in our firms? The use of a time sheet system is demonstrated, along with how the time sheet can be used to maximize production and minimize overhead.

Index Terms— Time sheets, consulting firms, chargeable time, flexitime schedule.

I. Introduction

"Time is money" was first coined by Benjamin Franklin, referring to the notion that time is a valuable asset and that money is wasted when a person's time is not used productively and efficiently.
(Quote Investigator, 2010). This statement is virtually valid for all types of business, but it is particularly accurate for engineering consulting firms. In engineering consulting firms, human resources and manpower are the main assets and, hence, the main source of income for the firm. A consulting firm is a business with mostly fixed costs: one hires a group of professionals, and how much profit they make depends on how well they make use of their time on client work. To maximize profit, it is necessary to maximize the utilization and billability rates of the employees (billability rate = ratio of chargeable time to total working time). In other words, a consulting firm will be losing cash in either of the following situations:
(a) when there are not sufficient jobs/projects for the available staff; or
(b) when the utilization or billing rates of the staff are not cost-effective.
Lack of sufficient work could be the result of poor marketing or of a slow economy. In any case, lack of work will leave some employees with an inadequate amount of work; hence, their utilization and billability rates drop and the company starts to lose cash. Sometimes, however, billability rates drop not due to lack of work but due to inefficient utilization of the existing manpower. This usually results from poor managerial practice and/or a lack of efficiency of the employees. In the traditional working system, employees usually spend about 8 hours a day in the office. They sign in in the morning and sign out in the evening. The amount of work produced in this period of time depends on the efficiency and loyalty of the employee. At the end of the day, all employees are paid for their time in the office irrespective of their productivity. An accurate judgment and/or evaluation of an employee's efficiency will hence be subjective and open for debate.
The modern working system, which is widely used these days in most international companies, is based on a flexible working-hour system using time sheet forms. In this system, all employees fill out a daily report specifying the number of hours spent on each job or project. This system is not only about reporting working hours; it is actually a complete management system for consulting firms. This article is devoted to presenting the time sheet and the flexible time system used in consulting engineering firms. The main source of the information presented in this article is the author's experience gained during more than 20 years of work with international companies in Canada, the USA, and the Middle East.

Nasreddin Elmezaini / Time Management in Engineering Consulting Firms (2015) 106

The motivation to write this article came from the fact that most local consulting companies in the Middle East still follow the old traditional system, which is believed to be inefficient. This article aims to help local consulting firms and practicing engineers upgrade their working system to international standards.

II. Flexible Time System

Alternative work schedules, such as flexitime and compressed workweeks, have been adopted by an increasing number of organizations over the past several decades. Organizations have also begun introducing flexibility measures to increase the responsiveness of their products and services to market needs (Eldridge & Nisar, 2011). In the flexible time system, employees are given some choice over the actual times at which they work their contracted hours (Hill et al., 2001). Most flexible working-hour schemes have a period during the day when employees must be present. This is known as "core time". A typical core time would be 10:00 a.m. to 4:00 p.m. Other than the core time, employees may choose when they start and finish work within flexible bands at the beginning and at the end of each day.
These bands are typically 08:00-10:00 and 16:00-18:00 (Al-Rajudi, 2012). By the end of the week, each employee reports how he/she spent his/her contracted hours using the "time sheet" (contracted hours usually range between 38 and 42 hours/week). Several researchers have investigated the impact of a flexible time schedule in comparison with the traditional working schedule. Baltes et al. (1999) studied the effect of flexible and compressed-workweek schedules on work-related criteria (productivity, performance, job satisfaction, and absenteeism). K. M. Shockley and T. D. Allen (2007) studied the relationship between the availability of flexible work arrangements and work-family conflict; they found that family responsibility significantly moderated these relationships. Russell et al. (2007) investigated the relationship between different flexible working arrangements (flexitime, part-time, and working from home) and two key employee outcomes: work pressure and work-life conflict. Barry A. T. Brown (2001) investigated the use and representations of a flexible time sheet system in a large British oil company. Most previous studies indicated that a flexible time system helps both the employer and the employee in several ways. The following points summarize the benefits of the flexible time system that were addressed by several researchers. A flexible time system:
- increases performance and productivity;
- decreases administration workloads;
- increases employees' job satisfaction;
- increases organizational commitment;
- increases responsiveness to market needs;
- increases applicant attraction to organizations offering flexitime.

III. Time Sheets (TS)

Time sheets, also known as time-tracking forms, are used in engineering consulting firms for tracking the time spent by each employee on each job. Usually, a practicing engineer is involved in more than one project at the same time.
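The core-time and flexible-band rules described above can be expressed as a small schedule check. This helper is my own illustration (the function and its whole-hour simplification are assumptions; the band limits come from the text):

```python
# Hypothetical helper: validate a daily sign-in/sign-out pair against a core
# time of 10:00-16:00 and flexible bands of 08:00-10:00 and 16:00-18:00.
# Times are whole hours for simplicity.
def valid_flexitime_day(start_hour, end_hour):
    covers_core = start_hour <= 10 and end_hour >= 16   # present for all of core time
    within_bands = 8 <= start_hour <= 10 and 16 <= end_hour <= 18
    return covers_core and within_bands

assert valid_flexitime_day(9, 17)       # starts in the morning band, covers core
assert not valid_flexitime_day(11, 17)  # arrives after core time begins
```

A real system would also check that the week's hours sum to the contracted 38-42 hours.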
In the time sheet, each engineer fills out the hours he/she spent on each project every day. The time sheet is usually submitted at the end of each week, and it needs to be approved by the employee's direct supervisor. In most companies, senior professionals do not necessarily need approval for their time sheets; they just submit them to the accounting department so that their time can be properly charged against the projects in which they are involved. In the past, employees used to fill out their time sheets on paper forms (hard copies). Recently, most companies have switched to time sheet software, some of which provides online access and many other capabilities. Figure-1 shows a typical time sheet form. Since most engineering projects involve different phases and functions/tasks, the employee indicates, on his/her TS, the project, phase, and function numbers on which he/she spends his/her time. The use of project and phase numbers is explained in the subsequent sections of this paper. In addition to the actual project chargeable time, which is time spent on active projects, the employee also reports the hours spent on all other activities, such as professional or business development. Vacation and sickness times are also reported on the time sheet as overhead expenses, called non-chargeable time (NCT). The target for each employee is to maximize his chargeable (or billable) time and to minimize his non-chargeable time (NCT).

Consulting Engineering Firm - Time Sheet
Employee: (your name)            Week ending: 26-May-2014
Days covered: Sat. 5/20 through Fri. 5/26

Description               PIN    PN   FN    Hours recorded
Project 1                 12601  50   160   4.0, 3.5
Project 2                 13223  20   220   3.5, 2.0, 2.5
Project 3                 14103  20   320   5.5, 3.5, 4.0
Professional development  2014   10   80    2.0, 3.0
Business development      2014   10   70    -
Office closed (holiday)   2014   10   60    -
Sick                      2014   10   50    7.5
NCT                       2014   10   10    2.0

Hours charged above certified correct.
Total/day: 7.5, 7.5, 7.5, 7.5, 7.5, 5.5      Total/week: 43.0
Signature: ________   Date: 8/7/2014   (For administration use only)
PIN = project initiation number, PN = phase number, FN = function number

Figure-1: Typical time sheet

IV. Benefits of the Time Sheet

With the right time sheet solution, engineering consulting firms can dramatically improve their revenue. The information collected from the TS can be used in several ways. It:
- assists the company in properly charging clients for the time spent on their jobs;
- helps to monitor budgets and compare them with the progress of the work;
- accurately calculates how much each project actually cost;
- determines which projects were profitable and which were not;
- helps to quantifiably compare and evaluate the productivity and efficiency of the employees;
- promotes self-control among employees;
- helps project managers scope and price projects more accurately;
- reduces the overhead and administration cost associated with managing time.

V. How the System Works

To explain how the TS system works, we need to define the following terminology:
- Project initiation number (PIN)
- Phase number (PN)
- Function number (FN)
- Chargeable time (CT)
- Non-chargeable time (NCT)
- Business development (BD)
- Professional development (PD)
- Billing rate (BR)

A. Project Initiation Number (PIN)

When a new project is started, a unique number is assigned to the project, called the project initiation number (PIN). This number is used as the reference in all matters and correspondence related to the project.
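The aggregation that the time sheet enables can be sketched with the Figure-1 example entries. The grouping below is my own illustration (the classification of FN 60, "office closed", as non-chargeable is an assumption based on the overhead description in the text):

```python
# Sketch: total the Figure-1 rows, split chargeable vs non-chargeable time,
# and compute the billability rate defined in the introduction.
entries = {               # description: (FN, [recorded hours])
    "project 1":  (160, [4.0, 3.5]),
    "project 2":  (220, [3.5, 2.0, 2.5]),
    "project 3":  (320, [5.5, 3.5, 4.0]),
    "prof. dev.": (80,  [2.0, 3.0]),
    "sick":       (50,  [7.5]),
    "nct":        (10,  [2.0]),
}
NON_CHARGEABLE_FNS = {10, 50, 60}   # NCT, sick, office closed (assumed split)

total = sum(sum(hours) for _, hours in entries.values())
chargeable = sum(sum(hours) for fn, hours in entries.values()
                 if fn not in NON_CHARGEABLE_FNS)
billability = chargeable / total
print(f"total {total} h, chargeable {chargeable} h, billability {billability:.0%}")
```

With the Figure-1 numbers this reproduces the 43.0-hour weekly total, with 33.5 chargeable hours (project time plus professional development, which the paper treats as chargeable).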
The PIN usually consists of 5 or more digits (an example of a PIN is 13031). The first two digits refer to the year in which the project started. The last three digits refer to the project serial number. Depending on the company's size and policy, more digits may be added to include an area code defining the project zone or site location. The PIN is assigned using a special form known as the project initiation form, which is usually prepared by the project manager. This form typically includes the following information:
- Project title and description.
- Project phases.
- Budget allocated for each phase.
- Names of the project manager, project director, and team leaders.
- Names of the quality assurance team.
- Client contact details and billing information.

B. Phase Number (PN)

During the tender stage (tender proposal), the project is usually divided into phases such as preliminary design, detailed design, and contract administration. The project manager (PM) and his team, based on their experience, estimate the number of people and the amount of work (manpower) necessary to complete the required tasks in each phase. This estimate is necessary to price the project for tender. Usually, each phase of the project is assigned a specific number of hours and an allocated budget. The phase number (PN) is used by all staff members when preparing their time sheets. Examples of project phases:
Preliminary design: phase 10
Detailed design: phase 20
Contract administration: phase 50
The data collected from the time sheets help project managers to monitor the progress of work in each phase of each project. They also assist them in controlling and monitoring the budget.

C. Function Number (FN)

Each phase usually includes several tasks or functions in which engineers from different departments are involved (e.g., structural, architectural, and electro-mechanical).
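The 5-digit PIN layout described above can be illustrated with a small parsing helper (the helper itself, and the assumption that the two-digit year means 20xx, are mine, not from the paper):

```python
# Illustration of the 5-digit PIN layout: first two digits = start year,
# last three digits = project serial number. The 2000s century is assumed.
def parse_pin(pin):
    """Split a PIN such as '13031' into (start_year, serial_number)."""
    year = 2000 + int(pin[:2])    # first two digits: project start year
    serial = int(pin[-3:])        # last three digits: project serial number
    return year, serial

assert parse_pin("13031") == (2013, 31)
```

A company adding an area code would simply extend the format with extra digits between the year and the serial number.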
The amount of work for each group is also estimated, and hence the PM distributes the budget of the active phases between the different groups. Therefore, each function is given a specific number to help track the budget consumed by each group. The following is an example of commonly used function numbers:
Project management: 100
Architectural design: 200
Structural design: 300
Electrical design: 400
Mechanical design: 500
Inside each function, different tasks can be assigned. For example, FN = 310 refers to structural engineering time, whereas FN = 320 refers to structural drafting time, and so on. The level of elaboration in this detail depends on the size of the projects and on company policy. The diagram shown in Figure-2 shows the sequence of project phases and functions.

Figure-2: Project phases, functions and tasks. [The diagram breaks a PIN down into phase 10 (preliminary design), phase 20 (detailed design), and phase 50 (contract administration); each phase branches into functions such as FN 100 (admin; management: 110, overheads: 100), FN 200 (architectural; engineering: 210, drafting: 220), FN 300 (structural; engineering: 310, drafting: 320), FN 400 (electrical; engineering: 410, drafting: 420), and FN 500 (mechanical; engineering: 510, drafting: 520).]

D. Chargeable Time (CT)

Chargeable time is the actual working time spent by employees on active projects or on other activities with an available budget. Using the time sheet system, chargeable time can be categorized by phase, function, and task numbers. By the end of the project, the project manager will know exactly how much each discipline and each individual used of the allocated budget.

E.
Non-Chargeable Time (NCT)
Non-chargeable time is any time spent by an employee that cannot be charged to a project or that has no available budget. This may include, but is not limited to, the following:
1. Sick and vacation time.
2. Time spent in the office without a job in hand (idle time).
3. Time spent on a project beyond its allocated budget.
NCT is an overhead expense to the company that needs to be limited or minimized as much as possible.

F. Business Development (BD)
Business development (BD) activities are essential for winning new projects. Companies encourage their employees, especially senior associates, to spend part of their time on business development. BD time is the time spent on proposal writing as well as on other marketing activities, such as communicating with potential clients. Although this time is a direct overhead for the company, most companies allocate part of their budget for BD. Therefore, BD time is considered chargeable time for the employee.

G. Professional Development (PD)
It is important for the company to invest in its employees by providing training and helping them stay up to date on every aspect of their profession. Attending training courses or participating in seminars, workshops, exhibitions, or conferences is considered professional development (PD). Therefore, most consulting companies allocate some of their budget for PD, and employees can charge their PD time to this allocated budget.

H. Billing Rates (BR)
Billing rates are the hourly cost of engineering service charged to the project. Staff members and professional personnel have different billing rates depending on their expertise and seniority level. BR is calculated as follows:
BR = Basic Salary (BS) + Overhead + Profit
Usually, BR is 2.0 – 3.0 times BS. Generally, each company submits a copy of its staff billing rates to the client along with the tender documents.

VI. How These Numbers Are Used
The process starts with the proposal for tender. When the PM prepares a tender for a specific job, he/she needs to define the following:
- The scope of the work needed for the project.
- The stages and phases included, such as:
  - Preliminary design
  - Detailed design
  - Contract administration
The PM also needs to indicate the total hours required to complete each function and the staff member who is going to perform each of those functions or tasks. Usually, the PM discusses these issues with the engineers involved in the different disciplines. Knowing the billing rate for each staff member, the budget for each phase can be calculated. The target for the project manager and his team is to meet the project deadline and stay within the allocated budget.

A. Performance and Efficiency
The performance of the PM and his team is measured by how well they meet the specified targets (budget and deadlines). Usually, each employee is assigned specific tasks to complete within a given time period. Efficient employees finish their tasks properly and on time; others may lack the skills required to complete the assigned task efficiently. If an employee fails to finish his/her job on time, he/she could be in trouble. The efficiency of an employee can then be evaluated by comparing the time he/she needs to complete a job with the originally allocated time. Based on one's efficiency, an employee will be rewarded, penalized, or sent for extra training.

B. Efficiency Index (EI)
EI = (Chargeable Time / Total Time) × 100
No one is expected to achieve 100% efficiency. There will always be some NCT and overheads that cannot be charged to any job. NCT also includes vacation and sick leave time.

C. Problems
Problems arise when the expenses charged to a project exceed the originally allocated budget. Typically, this results from one or more of the following reasons:
- Lack of coordination between disciplines.
- Underestimating the scope of work.
(PM problems)
- Inefficient staff members (staff problem).
Reasons for not meeting the budget can be identified by examining the data collected from the time sheets. The TS will tell us how many hours each engineer spent on each task. If an engineer charges more hours than the allocated budget, it means that:
- His/her budget was underestimated, or
- He/she was not efficient.

D. Inefficient Engineer
If an engineer proves inefficient in most of their work, they may be given further chances. If they cannot improve their performance, PMs may not consider them for new projects. They will then be unable to put enough chargeable time on their TS; accordingly, their NCT ratio will increase and their efficiency will drop. They will then be given warnings, and if they still cannot improve, they may lose their job.

E. Efficient Engineer
Those who always meet the schedule and the allocated budget will be in demand among PMs. Accordingly, they will get more work, will be busy all the time, and will score a high efficiency rate. As a result, they will be promoted and will receive more incentives.

VII. Summary and Recommendations
The modern working system, based on flexible work arrangements and the time sheet format, appears to have several advantages over the traditional fixed-time arrangement. It helps increase productivity and promotes self-control. Time sheets help engineering consulting firms track and bill their time, as well as quantifiably compare and evaluate their employees. The system also helps consulting firms reduce their overheads and maximize their profit. It is recommended that this system be implemented in all consulting firms. Seminars and training sessions can be arranged through the engineering association to raise awareness of this efficient system.

References:
[1] Quote Investigator (May 14, 2010), Time is money.
Benjamin Franklin? Retrieved from http://quoteinvestigator.com/2010/05/14/time-is-money/ (accessed Nov. 2014).
[2] Eldridge, D. and Nisar, T. M., "Employee and organizational impacts of flexitime work arrangements," Relations industrielles / Industrial Relations, vol. 66, no. 2 (2011).
[3] Hill, E. J., Hawkins, A. J., Ferris, M., and Weitzman, M., "Finding an extra day a week: The positive influence of perceived job flexibility on work and family life balance," Family Relations, 50, 49–58 (2001).
[4] Al-Rajudi, K. and Al-Habil, W., "Impact of flexible work arrangements on workers' productivity in information and communication technology sector: An empirical study of the Gaza Strip ICT firms," Master's thesis in Business Administration, IUG (2012).
[5] Baltes, B. B., Briggs, T. E., Huff, J. W., Wright, J. A., and Neuman, G. A., "Flexible and compressed workweek schedules: A meta-analysis of their effects on work-related criteria," Journal of Applied Psychology, 84, 496–513 (1999).
[6] Shockley, K. M. and Allen, T. D., "When flexibility helps: Another look at the availability of flexible work arrangements and work–family conflict," Journal of Vocational Behavior, 71 (2007) 479–493.
[7] Russell, H., O'Connell, P. J., and McGinnity, F., "The impact of flexible working arrangements on work-life conflict and work pressure in Ireland," The Economic and Social Research Institute (ESRI), Dublin, no. 189 (2007).
[8] Brown, B. A. T., "Unpacking a timesheet: Formalisation and representation," Computer Supported Cooperative Work (CSCW), 2001, vol. 10, issue 3-4, pp. 293–315.

Nasreddin Elmezaini, Ph.D., P.Eng., is an associate professor of civil engineering with over 29 years of academic and professional experience. His research interests include finite element analysis, behavior of buildings under abnormal loading conditions, soil–structure interactions, and repair and strengthening of buildings.
During his work at the university, he occupied several managerial positions and chaired several educational and scientific committees. Besides his academic experience, Elmezaini has also been involved in the industrial sector as a professional engineer with local and international consulting firms in Canada and the Middle East. His professional experience covers a wide variety of engineering projects.

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015

Groundwater Quality Assessment Using Water Quality Index (WQI) Approach: Gaza Coastal Aquifer Case Study

Khalil M. Alastal 1, Jawad S. Alagha 2, Azzam A. Abuhabib 3, Rachid Ababou 4
1 Department of Civil Engineering, Islamic University of Gaza (IUG), Palestine, kastal@iugaza.edu.ps
2 Ministry of Public Works and Housing (MPWH), Palestine, jawad_s78@yahoo.com
3 Department of Environmental Engineering, Islamic University of Gaza (IUG), Palestine, azz200@hotmail.com
4 Institut de Mécanique des Fluides de Toulouse (IMFT), University of Toulouse, France, ababou@imft.fr

Abstract—Water resources in arid and semi-arid regions, such as the Gaza Strip (GS), are generally under increasing stress in terms of water quality and quantity. Management of these valuable resources is therefore one of the crucial concerns and challenges facing researchers and specialists worldwide. In these areas, there is a pressing need to evaluate the water situation in terms of its quality using the limited available data. The water quality index (WQI) helps managers and planners working in the water sector to qualitatively map water quality, which in turn enables them to propose possible management options and to prioritize capital investment for the water sector.
Being the only source of water for the GS population of more than 1.8 million, the Gaza coastal aquifer (GCA) is in a disastrous quality situation. It represents an extreme example of how several negative factors (unstable political environment, disastrous economic situation, decaying environmental conditions, and unplanned, disorganized human activities) combine to further deteriorate groundwater quality. The objective of this paper is to assess and map the spatial distribution of groundwater quality of the GCA using the WQI approach. The results indicate that severe water quality deterioration has occurred in the GS: the area fraction classified as "not good" based on the WQI jumped from about 30% to 55% over 10 years (between 2000 and 2010). The WQI maps developed in this work help form a simple yet comprehensive view of groundwater quality in the GS. This, in turn, helps in spotting critical locations in terms of water quality and in setting management priorities accordingly. In summary, the space-time analyses of the WQI support the decision-making process, i.e., drawing policies and proposing remediation measures to restore and maintain water resources.

Index Terms—Coastal aquifers, contour maps, Gaza Strip, groundwater, water quality index (WQI).

I. Introduction
Groundwater (GW) constitutes about 89% of the freshwater on Earth and is considered an important source for sustainable economic growth in any community, especially in arid and semi-arid regions [1]. GW is the unique source of freshwater for more than one third of the world's population, for whom surface water resources are limited or contaminated. Worldwide, the demand on GW is continuously increasing [2, 3]. However, this valuable resource is not completely isolated from the surrounding environment: it is affected by both natural and anthropogenic contamination sources.
An assessment of GW quality is therefore of great importance for society, particularly from a public-health perspective [4, 5]. The water quality index (WQI) is frequently used to assess the suitability of surface water as well as groundwater for drinking and agricultural purposes. The construction of WQIs for different purposes has been described in the literature by various authors. Singh and Khan [6] used the WQI and geographic information system (GIS) to assess and map the spatial distribution of groundwater quality of the Dhankawadi ward of Pune, India. Ganeshkumar and Jaideep [7] emphasized the use of the WQI approach in the assessment of groundwater quality of Vedaranyam Taluk, India. Adhikari et al. [8] created WQI iso-maps to evaluate water quality parameters with respect to irrigation potential and to mark differences in water quality between seasons. Stigter et al. [9] created a groundwater quality index (GWQI) with the aim of monitoring the influence of agricultural practices on several key parameters of groundwater chemistry and potability. Ramakrishnaiah et al. [4] evaluated the suitability of groundwater for human consumption in Tumkur Taluk, Karnataka state (India) based on computed WQI values. Srinivas et al. [10] analyzed groundwater samples from twenty-five locations in the Kurmapalli Vagu basin, India, for various physico-chemical parameters in terms of WQI to determine their suitability for drinking purposes. Gebrehiwot et al. [11] investigated the groundwater quality of the Hantebet watershed (Ethiopia) for human consumption through WQI investigation of the different hand-dug wells in the watershed.
This paper aims at assessing and mapping the spatial and temporal distribution of groundwater quality (for drinking purposes) in the Gaza Strip using the WQI method.

II. Study Area
The Gaza Strip is part of the Palestinian occupied territories, as shown in Figure 1. It is a narrow, low-lying stretch of sand dunes located on the eastern coast of the Mediterranean Sea between longitudes 34°20′ and 34°25′ East, and latitudes 31°16′ and 31°45′ North [12]. The GS is one of the most densely populated areas in the world, with an average density of about 4400 inhabitants/km² [13]. The climate of the GS is semi-arid; the mean annual rainfall ranges from 400 mm/year in the north to about 200 mm/year in the south [12, 14]. The annual average relative humidity is about 72%, and the average mean daily temperature ranges from 25°C in summer to 13°C in winter [15]. Regarding land use, heavy agricultural activity takes place in the GS, where agricultural land occupies about 65% of the land surface and is the dominant economic sector [12].

Figure 1. Location of the Gaza Strip [16]

The Gaza coastal aquifer (GCA) is the only natural water resource for different purposes in the GS and is extensively exploited to satisfy agricultural, domestic, and industrial water demands [14, 17]. Water is pumped from the GCA by more than 4000 municipal and agricultural wells [15]. The GCA is part of the coastal aquifer that extends from the GS in the south to about 120 km in the north along the Mediterranean coastline, as shown in Figure 2. The GCA thickness varies from about 120 m in the west (at the shoreline) to a few meters in the east [18]. The depth to water, on the other hand, ranges from about 60 m below ground surface in the east to a few meters near the coastline in the west [14].
The GCA is composed of layers of sand dunes, sandstone, calcareous sandstone, and silt. It also contains several silty-clayey impermeable layers that partially intercalate and subdivide it into sub-aquifers, as shown in Figure 2, section A-A [18, 19].

Figure 2. Layout of the Gaza coastal aquifer showing a typical geological cross-section (section A-A) [14].

Regional groundwater flow in the GCA is from east to west toward the Mediterranean Sea. However, intense pumping (abstraction) disturbs the natural regional flow patterns. Consequently, large cones of depression have formed over the past years within the major, highly populated urban centers of the GS [12]. The GCA experiences such severe water quality deterioration that up to 90% of the GW in the GS is currently not safe for drinking without adequate treatment. Furthermore, it has been reported that, based on the current water and sanitation situation, the GW in the GS could become unusable as early as 2016, and that the damage to the GW in the GS would become irreversible by 2020 [20]. The concentrations of many chemical parameters, particularly nitrate (NO3−) and chloride (Cl−), have reached dangerous levels in many locations within the GS [21]. The Cl− concentration is continually increasing, such that less than 5% of water wells in the GS meet the Cl− standards of the World Health Organization (WHO); this is due to seawater intrusion and excessive water withdrawals [16, 21]. Similarly, NO3− concentrations have reached levels that threaten public health. The primary sources of the elevated NO3− levels are septic effluents, followed by agricultural applications of sludge and synthetic fertilizers [22].
The concentrations of other physico-chemical parameters, such as TDS, EC, Ca, and Mg, also reach elevated levels.

III. Methodology
Water quality data for GS municipal wells between 2000 and 2010 were collected from the databases of the related institutions working in the water sector, particularly the Palestinian Water Authority (PWA) and the Ministry of Health (MoH). In this study, the WQI was calculated based on six water quality parameters, namely chloride (Cl−), nitrate (NO3−), calcium (Ca2+), magnesium (Mg2+), sulphate (SO42−), and alkalinity. The selection of these parameters depends on several factors, such as the purpose of the index, the importance of the parameter, and the availability of data [9]. Chloride (Cl−) was selected because the majority of the GW in the GS suffers from high Cl− levels; additionally, chloride is an indicator of salinity and has direct effects on human health. Nitrate (NO3−) constitutes the main problem in the study area due to its direct effects on human health [23]. Finally, Ca, Mg, and SO4 are typically associated with agricultural activities. The Ca and Mg cations are indicators of GW hardness, and high concentrations of these cations may affect the water's acceptability to consumers in terms of taste and scale deposition. High levels of SO42− can cause dehydration and gastrointestinal irritation, and may also contribute to the corrosion of distribution systems. Alkalinity is introduced into water by the dissolution of carbonate-containing minerals [24]. High concentrations of sewage and industrial waste may be the cause of high alkalinity in polluted water [25]. Alkalinity control is important in boiler feed water, cooling tower water, and the beverage industry [24]. Excessive alkalinity may cause eye irritation in humans and chlorosis in plants [26].
Other parameters, such as TDS, EC, PO4, NO2, and hardness, were deliberately left out of this analysis because they are indirectly included through their correlation with the selected parameters. For example, TDS, EC, and Cl− are all indicators of water salinity; therefore, to avoid redundancy, only Cl− was retained for constructing the quality index. The same justification holds for hardness, which is strongly correlated with Mg and Ca. The Palestinian standards for drinking water (Table 1) were used in the calculation of the WQI, following the procedure presented in detail in the next section (steps A to G). After computing the WQI, groundwater quality maps were created using a geostatistical kriging interpolation algorithm in the Surfer 12 software (Golden Software).

Table 1. Palestinian drinking water standards and weights for the selected parameters

Parameter    Palestinian standard (mg/l)    Weight
Cl−          600                            4
NO3−         70                             5
Ca           200                            2
Mg           150                            2
SO4          400                            3
Alkalinity   400                            2

IV. Water Quality Index
The water quality index is one of the most effective tools for communicating information on the quality of water to concerned citizens and policy makers. It has become an important parameter for the assessment and management of groundwater [4]. Traditionally, water resource professionals communicated drinking water quality status by comparing individual parameters with guideline values. However, this language is too technical and does not provide a global picture of drinking water quality. To resolve this decision-making problem, Horton (1965) made a pioneering attempt to describe water quality as an index, the WQI [27]. Horton defined the water quality index as a reflection of the composite influence of individual quality characteristics on the overall quality of water [10].
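The standards and weights of Table 1 map directly onto a small data structure. A minimal Python sketch (values taken from Table 1; dividing each weight by the sum of all weights follows the standard relative-weight step of the WQI procedure):

```python
# Palestinian drinking water standards (mg/l) and weights from Table 1.
STANDARDS = {"Cl": 600, "NO3": 70, "Ca": 200, "Mg": 150, "SO4": 400, "Alkalinity": 400}
WEIGHTS   = {"Cl": 4,   "NO3": 5,  "Ca": 2,   "Mg": 2,   "SO4": 3,   "Alkalinity": 2}

# Relative weight of each parameter: its weight divided by the sum of all weights.
total_weight = sum(WEIGHTS.values())  # 4 + 5 + 2 + 2 + 3 + 2 = 18
relative_weights = {p: w / total_weight for p, w in WEIGHTS.items()}

# Nitrate carries the largest relative weight, 5/18 ≈ 0.278.
print(relative_weights["NO3"])
```

Encoding the table once keeps the standards, weights, and relative weights consistent everywhere they are used downstream.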
The WQI concept is based on comparing water quality parameters with their respective regulatory standards, and it provides a single number that expresses the overall water quality at a given location based on several parameters [6]. The WQI summarizes a large amount of water quality data into simple terms (e.g., excellent, good, bad) that are easily understood and usable by the public [25]. By combining multiple parameters into a single index, a more comprehensive picture of the pollution state is therefore provided, and when the index is mapped, areas of high and low water quality can easily be distinguished [9]. The advantages of a WQI include its ability to represent measurements of a variety of variables in a single number, its ability to combine various measurements having different physical dimensions (units) into a single metric, and its effectiveness as a communication tool [28]. Generally, the construction of a WQI involves three steps: (1) selection, (2) standardization, and (3) aggregation of the parameters to be included [9]. In the standardization step, the raw analytical results for the selected water quality parameters, which have different units of measurement, are transformed into unitless sub-index values [27]. The resulting values are then aggregated using some type of sum or mean (e.g., arithmetic, harmonic, geometric) [9, 27, 29]. In what follows, the methodology used to calculate the WQI is developed step by step.

A. Parameter selection
The selection of the parameters that make up the index depends on several factors, such as the purpose of the index, the importance of the parameter, and the availability of data [9].
In drinking water quality assessment, priority should be given to those substances which are known to be important to health (potability) and which are known to be present in significant concentrations in the water source (World Health Organization [27]). As stated previously in the methodology section, the WQI in this paper is calculated based on six water quality parameters: chloride, nitrate, calcium, magnesium, sulphate, and alkalinity.

B. Weight assignment
The purpose of assigning weights to water quality parameters is to denote each parameter's importance to the overall water quality. A larger weight implies greater importance of the variable with respect to public health [29]. Each of the selected parameters has therefore been assigned a weight (Wi) on a scale of 1 to 5, where 5 means high importance. The chosen weights, shown in Table 1, are based on the prevailing weights used in previous studies [4, 11, 12, 24].

C. Relative weight calculation
The relative weight (wi) is determined by dividing the individual weight of each parameter (Wi) by the sum of the weights of all selected parameters:

wi = Wi / Σ Wi

where wi is the relative weight and Wi is the weight of the parameter under consideration.

D. Quality rating calculation
The fourth step is the calculation of a quality rating (qi) for each parameter:

qi = (Ci / Si) × 100

where qi is the quality rating, Ci (mg/l) is the concentration of the parameter in the water sample, and Si (mg/l) is the Palestinian drinking water standard for that parameter.

E. Sub-index calculation
The sub-index for each chemical parameter is determined using the following equation:

SIi = wi × qi

where SIi is the sub-index of the i-th parameter; it combines the parameter's quality rating with its assigned weight.

F. Calculation of the water quality index (sub-index aggregation)
Aggregation is a most important aspect of the WQI concept, and an important step in its construction.
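Steps B through G reduce to a few lines of arithmetic. A minimal Python sketch, assuming the Table 1 standards and weights, the additive aggregation used in this study, and the Table 2 classification bands (the sample concentrations are hypothetical, and the handling of boundary values such as exactly 50 or 100 is an assumption):

```python
# Standards (Si, mg/l) and weights (Wi) from Table 1.
STANDARDS = {"Cl": 600, "NO3": 70, "Ca": 200, "Mg": 150, "SO4": 400, "Alkalinity": 400}
WEIGHTS   = {"Cl": 4,   "NO3": 5,  "Ca": 2,   "Mg": 2,   "SO4": 3,   "Alkalinity": 2}

def wqi(sample: dict) -> float:
    """Additive-aggregation WQI: the sum of sub-indices SIi = wi * qi."""
    total_w = sum(WEIGHTS.values())
    score = 0.0
    for p, conc in sample.items():
        w_rel = WEIGHTS[p] / total_w        # step C: relative weight wi
        q = (conc / STANDARDS[p]) * 100.0   # step D: quality rating qi
        score += w_rel * q                  # steps E-F: sub-index, summed
    return score

def classify(score: float) -> str:
    """Step G: classification of WQI scores per Table 2 (boundary handling assumed)."""
    if score < 50:
        return "excellent water"
    if score <= 100:
        return "good water"
    if score <= 200:
        return "poor water"
    if score <= 300:
        return "very poor water"
    return "water unsuitable for drinking"

# Hypothetical sample: every parameter at 40% of its standard, so WQI ≈ 40.
sample = {p: 0.4 * s for p, s in STANDARDS.items()}
print(round(wqi(sample), 6), classify(wqi(sample)))  # 40.0 excellent water
```

Because the relative weights sum to one, a sample in which every concentration sits at the same fraction of its standard yields a WQI equal to that fraction times 100, which is a convenient sanity check on any implementation.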
The sub-index aggregation of a WQI mathematically combines the sub-indices into an overall index [29]. The multiplicative and additive aggregation functions are the most popular aggregation techniques in the WQI approach, although some researchers have also adopted other techniques [27]. In the present study, additive aggregation was applied. Accordingly, the index is aggregated as follows, which finally yields the WQI:

WQI = Σ SIi

G. Classification of water quality index scores
The aggregation equation generates a WQI score, with higher scores indicating worse water quality and lower scores indicating excellent water quality. The computed values are classified into five types, from "excellent water" to "water unsuitable for drinking," according to Table 2.

Table 2. Classification of WQI scores

WQI value    Water quality
<50          Excellent water
50-100       Good water
100-200      Poor water
200-300      Very poor water
>300         Water unsuitable for drinking

V. Results and Discussion
The WQI contour maps for the GS area are presented in Figures 4, 5, and 6 for the years 2000, 2005, and 2010, respectively. In the year 2000, the area classified as "excellent" and "good" occupied 70.4% of the overall GS area. A slight decline in this category was noticed in 2005 (68.5% classified as "excellent" and "good"). Between 2005 and 2010, however, a steep deterioration in water quality occurred: the area of "excellent" and "good" water quality dropped to only 45.2%. Equivalently, the areas classified as "not good" based on the WQI jumped from 29.6% to 54.8% within 10 years. In other words, the "not good" area increased by about 85% from 2000 to 2010, as depicted in Figure 3.

Figure 3.
Percentage of the GS area (relative to the total GS area) classified as "not good" for GW quality according to the WQI: evolution over ten years.

With regard to the spatial distribution of the WQI, and particularly the geographical locations of the low-quality zones: in 2000, the zones classified as "not good" in terms of the WQI were limited to two main regions. The first was located in Gaza City (area A in Figure 4), the central and most highly populated city of the GS, characterized by large commercial and economic activities. The poor water quality in this area is mainly due to elevated concentrations of nitrate and chloride; the former results from deficiencies in the wastewater collection and treatment system, while the latter results from seawater intrusion caused by high GW abstraction rates. The second area of high WQI was located beneath the south-eastern part of the GS (area B in Figure 4), whose poor WQI is also related to elevated concentrations of nitrate and chloride. Agricultural activity, the dominant economic activity in this area, including excessive use of fertilizers and manures, is the main source of the elevated nitrate. As for chloride, lateral flow from the neighboring eastern Eocene aquifer (characterized by high chloride levels) is the main source of the high chloride concentrations in that area [30-32].

Figure 4. Water quality index contour map for the year 2000

In 2005, a new major zone of low-quality GW ("not good" according to the WQI) appeared in the middle region of the GS (area C in Figure 5). This low quality resulted from the same causes as in Gaza City (area A). However, there was an unexpected improvement of the WQI in the south-eastern part of the GS (area D in Figure 5). This unexpected trend resulted from the improvement of several water quality parameters, namely nitrate, calcium, sulfate, and magnesium.
This trend is actually due to a decline in agricultural activity as a result of the continuous Israeli military incursions into agricultural lands of the Gaza Strip between 2001 and 2005; one of the effects, for example, was the uprooting of plants, especially near the eastern borders of the GS [33].

Figure 5. Water quality index contour map for the year 2005

Another potential reason for the above-mentioned positive WQI trend may be an artefact of data processing: the wells used to develop the WQI contour maps (2000, 2005, 2010) differed from one map to another (i.e., the wells change over the years), because only wells with available data records were utilized for developing each map. This in turn affects, to some degree, the interpolation accuracy of the WQI contour maps and of their evolution with time. In the year 2010, the southern and middle "not good" WQI areas had extended and merged, forming a very large zone (area E in Figure 6). This low-quality zone in 2010 includes the major part of the southern and middle GS, except for a portion of the south-western coastal area. Elevated concentrations of several water quality parameters, notably chloride and nitrate, led to the observed deterioration in WQI in this zone. It is also noticeable that the effect of seawater intrusion is less clear in the south than in Gaza City.
The lack of seawater intrusion effects there may be related to several factors: comparatively low GW abstraction, and the existence of sand dunes and open areas in the south-western GS (these features favor higher GW recharge, which in turn improves water quality by limiting seawater intrusion).

Figure 6. Water quality index contour map for the year 2010

VI. Conclusion
Despite the simplicity of the WQI concept and its theoretical background, it is still widely utilized as a preliminary tool to give an overview of GW quality worldwide. In this study, six water quality parameters (chloride, nitrate, sulfate, calcium, magnesium, and alkalinity) were used to develop and map the WQI in the Gaza coastal aquifer, Gaza Strip, Palestine. Although decision makers cannot depend on the WQI alone for groundwater management, this tool can help focus attention on the hot spots and priority areas. Tracking the water quality situation in the GS over time using contour maps showed that the GCA is subject to severe and rapid quality deterioration that threatens the sustainability of water supply and ecosystems. This deterioration is related to the fact that the aquifer is affected by multiple influencing variables, such as seawater intrusion, lateral flow from the adjacent aquifer, contamination from mixed and uncontrolled land use, and other factors. The deterioration of water quality in the GCA necessitates sustainable and wise management practices to alleviate stress on the aquifer and to improve water quality. Developing contour maps of the WQI is an effective way to present the situation and illustrate the extent of water quality problems to non-specialists and to the community. However, the study showed that using identical wells (the same group of wells) to develop the WQI contour maps is a necessary measure: the fact that different wells appear in the dataset in different years may significantly affect the accuracy of the developed WQI maps.
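The recommendation to build each year's map from the same group of wells can be enforced programmatically before interpolation. A minimal Python sketch (well IDs and WQI values are illustrative only, not taken from the study's dataset):

```python
# WQI records per year, keyed by well ID (illustrative values only).
records = {
    2000: {"W-01": 45.0, "W-02": 120.0, "W-03": 80.0},
    2005: {"W-01": 60.0, "W-02": 150.0, "W-04": 95.0},
    2010: {"W-01": 90.0, "W-02": 210.0, "W-03": 130.0, "W-04": 140.0},
}

# Keep only wells observed in every year, so that the space-time comparison
# of the resulting maps is not biased by a changing observation network.
common = set.intersection(*(set(wells) for wells in records.values()))

consistent = {year: {w: v for w, v in wells.items() if w in common}
              for year, wells in records.items()}

print(sorted(common))  # ['W-01', 'W-02']
```

Restricting every map to the common well set trades some spatial coverage for temporal comparability, which is exactly the trade-off discussed in the conclusion.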
on the one hand, the time evolution of the resulting wqi maps can be somewhat misleading if a different group of observation wells is used at each time (or year). on the other hand, it would be better to use all the available data if possible. in the future, new data processing procedures could be implemented to improve the resulting space-time maps, e.g., combining space-time interpolations, using interwell correlations, and augmenting the water quality datasets with hydrometeorological data.
acknowledgment
the authors wish to thank the iug students ibrahim abu zohri, ahmed alssaqa, mohammed alnakhala and farooq alkhateeb for their help in collecting data and generating the wqi contour maps with the surfer software. the authors are also grateful to the bilateral cooperation program al maqdisi for financial support of this work.
references
[1] sheng, z., "impacts of groundwater pumping and climate variability on groundwater availability in the rio grande basin". ecosphere, 2013. 4(1): p. art5.
[2] koundouri, p., "current issues in the economics of groundwater resource management". journal of economic surveys, 2004. 18(5): p. 703-740.
[3] morris, b.l., a.r.l. lawrence, p.j.c. chilton, b. adams, r.c. calow, and b.a. klinck, "groundwater and its susceptibility to degradation: a global assessment of the problem and options for management", in early warning and assessment report series, 2003, united nations environment programme (unep): nairobi, kenya.
[4] ramakrishnaiah, c.r., c. sadashivaiah, and g. ranganna, "assessment of water quality index for the groundwater in tumkur taluk, karnataka state, india". e-journal of chemistry, 2009. 6(2): p. 523-530.
[5] sener, e., s. sener, and a.
davraz, "assessment of aquifer vulnerability based on gis and drastic methods: a case study of the senirkent-uluborlu basin (isparta, turkey)". hydrogeology journal, 2009. 17(8): p. 2023-2035.
[6] singh, p. and i.a. khan, "ground water quality assessment of dhankawadi ward of pune by using gis". international journal of geomatics and geosciences, 2011. 2(2): p. 688-703.
[7] ganeshkumar, b. and c. jaideep, "groundwater quality assessment using water quality index (wqi) approach: case study in a coastal region of tamil nadu, india". international journal of environmental sciences and research, 2011. 1(2): p. 50-55.
[8] adhikari, k., b. chakraborty, and a. gangopadhyay, "assessment of irrigation potential of ground water using water quality index tool". asian journal of water, environment and pollution, 2012. 10(3): p. 11-21.
[9] stigter, t.y., l. ribeiro, and a.m.m. carvalho dill, "application of a groundwater quality index as an assessment and communication tool in agro-environmental policies – two portuguese case studies". journal of hydrology, 2006. 327(3-4): p. 578-591.
[10] srinivas, p., g.n. pradeep kumar, a. srinivasa prasad, and t. hemalatha, "generation of groundwater quality index map: a case study". civil and environmental research, 2011. 1(2): p. 9-20.
[11] gebrehiwot, a.b., n. tadesse, and e. jigar, "application of water quality index to assess suitability of groundwater quality for drinking purposes in hantebet watershed, tigray, northern ethiopia". isabb journal of food and agriculture science, 2011. 1(1): p. 22-30.
[12] almasri, m.n. and s.m.s. ghabayen, "analysis of nitrate contamination of gaza coastal aquifer, palestine". journal of hydrologic engineering, 2008. 13: p. 132.
[13] pcbs, "statistical yearbook of palestine", 2012, palestinian central bureau of statistics: ramallah, palestine.
[14] unep, "desk study on the environment in the occupied palestinian territories", 2003, united nations environment programme: geneva.
[15] qahman, k. and a.
larabi, "evaluation and numerical modeling of seawater intrusion in the gaza aquifer (palestine)". hydrogeology journal, 2006. 14(5): p. 713-728.
[16] al-khatib, i.a. and h.a. arafat, "chemical and microbiological quality of desalinated water, groundwater and rain-fed cisterns in the gaza strip, palestine". desalination, 2009. 249(3): p. 1165-1170.
[17] metcalf and eddy, "coastal aquifer management plan (camp). final model report (task 7)", 2000, usaid.
[18] baalousha, h., "vulnerability assessment for the gaza strip, palestine using drastic". environmental geology, 2006. 50(3): p. 405-414.
[19] melloul, a. and m. collin, "sustainable groundwater management of the stressed coastal aquifer in the gaza region". hydrological sciences journal, 2000. 45(1): p. 147-159.
[20] unct, "gaza in 2020: a liveable place?", 2012, office of the united nations special coordinator for the middle east peace process (unsco), a report by the united nations country team in the occupied palestinian territory: jerusalem.
[21] shomar, b., s. fkher, and a. yahya, "assessment of groundwater quality in the gaza strip, palestine using gis mapping". journal of water resource and protection, 2010. 2(2): p. 93-104.
[22] shomar, b., k. osenbruck, and a. yahya, "elevated nitrate levels in the groundwater of the gaza strip: distribution and sources". science of the total environment, 2008. 398(1-3): p. 164-174.
[23] cissé, i.a. and x. mao, "nitrate: health effect in drinking water and management for water quality". environ. res, 2008. 2: p. 311-316.
[24] kalpana, g.r., d.p. nagarajappa, k.m. sham sundar, and b. suresh, "determination of groundwater quality index in vidyanagar, davanagere city, karnataka state, india". international journal of engineering and innovative technology (ijeit), 2014. 3(12): p. 90-99.
[25] gangwar, r.k., j. singh, a.p. singh, and d.p. singh, "assessment of water quality index: a case study of river ramganga at bareilly u.p. india".
international journal of scientific & engineering research (ijser), 2013. 4(9): p. 2325-2329.
[26] sisodia, r. and c. moundiotiya, "assessment of the water quality index of wetland kalakho lake, rajasthan, india". journal of environmental hydrology, 2006. 14(23): p. 1-11.
[27] ramesh, s., n. sukumaran, a.g. murugesan, and m.p. rajan, "an innovative approach of drinking water quality index: a case study from southern tamil nadu, india". ecological indicators, 2010. 10(4): p. 857-868.
[28] tambekar, d.h. and b.b. neware, "water quality index and multivariate analysis for groundwater quality assessment of villages of rural india". science research reporter, 2012. 2(3): p. 229-235.
[29] song, t. and k. kim, "development of a water quality loading index based on water quality modeling". journal of environmental management, 2009. 90(3): p. 1534-1543.
[30] yakirevich, a., a. melloul, s. sorek, s. shaath, and v. borisov, "simulation of seawater intrusion into the khan yunis area of the gaza strip coastal aquifer". hydrogeology journal, 1998. 6(4): p. 549-559.
[31] zoller, u., l.c. goldenberg, and a.j. melloul, "the “short-cut” enhanced contamination of the gaza strip coastal aquifer". water research, 1998. 32(6): p. 1779-1788.
[32] al-agha, m.r. and h.a. el-nakhal, "hydrochemical facies of groundwater in the gaza strip, palestine / faciès hydrochimiques de l’eau souterraine dans la bande de gaza, palestine". hydrological sciences journal, 2004. 49(3).
[33] pchr, "the effects of closure on the agricultural exports in gaza strip", 2011, palestinian center for human rights: gaza.
ices5 proceeding, pp. 1-3, gaza, 4-5 november 2014
journal of engineering research and technology, volume 1, issue 4, december 2014 24
influencing cost factors in road projects in gaza strip using ann
hasan kh. abujamous 1, rifat n. rustom 2, and mahmoud y. abukmail 3
1h.
abujamous, civil engineering department, islamic university-gaza, palestine, e-mail: mhmod_85@hotmail.com
2r. rustom, ph.d. in civil engineering from drexel university in the u.s.a., palestine, e-mail: rustom@iugaza.edu.ps
3m. abukmail, civil engineering department, islamic university-gaza, palestine, e-mail: mhmod_85@hotmail.com
abstract— a conceptual cost estimate can serve as the owner's feasibility estimate and assists in the establishment of the owner's funding, which aids the engineers in designing to a specific budget. conceptual estimating exhibits a low accuracy level due to the lack of project information and the high level of uncertainty at the early stage of project development. the purpose of this paper is to determine the most influencing cost factors in road projects using the delphi technique and artificial neural networks. these factors were employed in a neural network (nn) for building a multi-layer perceptron (mlp) model to estimate road project cost. historical data of gaza strip road projects were used to train and test the mlp model. the developed model showed an error rate of 6.3%, which demonstrates the ability to estimate the cost of road projects at an early stage with higher accuracy.
index terms— cost factors; conceptual cost estimate; artificial neural networks.
i. introduction
early stage cost estimation plays a significant role in initial road project decisions, even though the project scope has not yet been finalized. the major problems faced are the lack of preliminary information, of a database of road works costs, and of appropriate cost estimation methods [2, 3]. project managers in the gaza strip often need to estimate the cost of road projects at an early stage, quickly and approximately, to secure funding or to obtain budget approval from decision-makers. therefore, it is important to estimate the cost of road projects in a short time with acceptable accuracy.
artificial neural networks (ann) are well suited to model complex problems where the relationship between the model variables is unknown [4]. the main objective of this research is to develop an ann model to estimate the cost of road projects in the gaza strip at an early stage and to reduce the estimation error. to achieve this, the factors affecting the cost of road projects that are available at an early stage were identified and modeled.
ii. literature review
cost is on the mind of every business. every business is expected to do more with less; the objective is to minimize cost, maximize profit, and maintain the competitive edge. methods for cost estimation vary as the project evolves from the early stages of conception to the construction phase. a conceptual cost estimate is made at the early stage of the project, when budgets are to be decided and available information is limited. it is conducted without working drawings or detailed specifications; the estimator may have to make such an estimate from rough design sketches, without dimensions or details, and from an outline specification and schedule of the owner's space requirements [5]. the conceptual cost estimate can serve several purposes:
 it supplements or serves as the owner's feasibility estimate.
 it aids the architect/engineer in designing to a specific budget.
 it assists in the establishment of the owner's funding.
a. parameters affecting cost of road projects
many parameters affect the cost of road projects. hegazy and ayed [2] relied on ten parameters to determine highway construction cost in canada: project type, project scope, year, construction season, location, duration, size, capacity, water body, and soil condition.
wilmot and mei [6] considered price of labor, price of material, price of equipment, pay item quantity, contract duration, contract location, quarter in which the contract was let, annual bid volume, bid volume variance, number of plan changes, and changes in standards or specifications for estimating the escalation of highway construction costs over time in the state of louisiana. for estimating the cost at the conceptual phase of highway projects in poland and thailand, sodikov [7] picked out construction factors: predominant work activity (asphalt or concrete), work duration, pavement width, shoulder width, ground rise/fall, average site clear/grub, earthwork volume, surface class (asphalt or concrete), and base material (crushed stone or cement stab). likewise, in the west bank, palestine, mahamid and bruland [8] adopted construction factors, namely road length, pavement width, pavement thickness after compaction, asphalt haul distance, pavement area, base course thickness after compaction, base course haul distance, base course area, terrain condition (semi even, hilly), soil drillability (good, poor), and soil suitability (good, poor), to predict the cost of road construction activities. pewdum et al. [9] used traffic volume, topography, weather condition, evaluation date, construction budget, contract duration, % of as-planned completion, and % of actual completion to forecast the final budget and duration of a highway construction project during the construction stage. attal [10] employed norm cost estimate, location of the project, area location (rural, urban, etc.), loops and ramps, new signal counts, construction length, sidewalks, curb and gutter, crossover count, average daily traffic, and geometric design standard for predicting highway construction cost in virginia. as shown above, the factors vary widely.
this variation is attributed to several reasons, such as the location of the study and the tools used to determine the parameters. a number of techniques are available to determine the influencing factors on road project costs. the delphi technique is one of these methods: it provides the opportunity to evaluate knowledge based on the experience of individual practitioners, and it is suitable for this research [11]. this research focuses on the "implementation factors" that affect the budget of road projects in the gaza strip, and adopts the nine most influential factors on road budgets, determined by using the delphi technique.
b. artificial neural networks (ann)
neural networks are a preferred tool for many predictive data mining applications because of their power, flexibility, and ease of use [12]. a neural network is an adaptable system that can learn relationships through repeated presentation of data and is capable of generalizing to new, previously unseen data [13]. during training, both the inputs (representing problem parameters) and outputs (representing the solutions) are presented to the network, normally for thousands of cycles. at the end of each cycle, or iteration, the network evaluates the error between the desired output and the actual output, and then uses this error to modify the connection weights according to the training algorithm used [14]. over the last few years, the use of artificial neural networks (ann) has increased in many areas of engineering, and much research in construction management has been carried out using ann in various topics. moselhi et al. [15] were among the first scholars to research ann as a promising management tool in construction. following on from their work, moselhi and hegazy [16] applied neural network methodology to markup estimation. anns have since become widely used in construction management.
recently, hola and schabowicz [17] estimated earthworks execution time and cost by means of artificial neural networks. wang and gibson [18] studied pre-project planning and project success by using anns and regression models. chen [19] developed a hybrid ann-case based reasoning (cbr) model for disputed change orders in construction projects, and oral et al. [20] predicted the productivity of construction crews by using nns with supervised versus unsupervised learning. many researchers focused on predicting construction cost by using nns, like arafa and alqedra [4], who developed an ann model to predict the early stage cost of buildings; the analysis of the training data revealed seven key parameters. kim et al. [21] developed a hybrid conceptual model for estimating the cost of large building projects. gunaydın and dogan [22] also built a neural network model to estimate the cost in the early phases of the building design process. cost and design data were used for training and testing the neural network methodology, with eight design parameters utilized in estimating the square meter cost of reinforced concrete structural systems of 4-8 storey residential buildings in turkey; an average cost estimation accuracy of 93% was achieved. likewise, in korea, kim et al. [23] used construction cost data for residential buildings constructed between 1997 and 2000; a back-propagation network (bpn) model incorporating genetic algorithms (gas) was used to improve the accuracy of construction cost estimation. the ann technique was also used in highway projects. pewdum, rujirayanyong and sooksatra [9] presented a study of back-propagation neural networks for predicting the final budget and duration of highway construction projects using actual data collected from the progress reports of 51 highway construction projects in thailand between 2002 and 2007. sodikov [7] focused on the development of a more accurate estimation technique for highway projects in developing countries at the conceptual phase using artificial neural networks. he used a database of road works cost data from two developing countries, poland and thailand, which have a relatively large number of projects, and investigated the relationship between project cost and other variables such as work activity, terrain type, road parameters, etc. the ann model was developed as a multilayer perceptron (mlp) with the back-propagation algorithm. wilmot and mei [6] developed an artificial neural network model relating overall highway construction costs to input variables in order to estimate the escalation of highway construction costs over time. the model was able to replicate past highway construction cost trends in louisiana with reasonable accuracy; the multilayer feed-forward network structure was chosen for that study, with the backpropagation learning algorithm used for training. this research used the multi-layer perceptron architecture of ann to introduce a model for estimating the cost of road projects in the gaza strip at the early stage.
iii. methodology
the research was carried out to achieve the objective of the study. in the first step, the delphi technique was used to determine the influencing implementation factors on the cost of roads. secondly, historical data of road projects implemented between 2011 and 2012 in gaza was collected. in the third step, this data was used to develop the proposed neural network models. the model was then tested on separate data to find the best-possible architecture. these steps are explained in the following sections.
iv.
influencing factors and data collection
to obtain the factors that have the most effect on the cost of road projects, the delphi method was utilized and the following steps were followed [1]:
 seven exploratory interviews were conducted with experts. the experts worked in various positions (cost engineers, managers and site engineers) with consulting offices, municipalities and contractors.
 the factors affecting the cost of road projects, drawn from previous studies, were presented to them.
 the experts' opinions then showed consensus on nine factors that have the greatest impact on road project cost in the gaza strip.
neural network models require a large amount of data; therefore, eighty-six (86) projects were collected from municipalities, the ministry of public works and housing, contractors and consultants. as a result of using the delphi technique, the following nine factors influencing the cost of road projects were recommended:
project scope: the collected data includes 26 projects with a "new project with good soil" scope, 27 projects with a "new project with poor soil" scope and 33 projects with a "rehabilitation" scope. this is a clear indication that the collected projects are distributed across project scopes, so the sample was considered representative and can be used in modeling.
pavement type: the sample was representative of two types: 48 projects with asphalt pavement and 38 with interlock pavement.
pavement area: there is no data for projects with a pavement area less than 2,000 m2, while 65% of projects in the data set have a pavement area between 2,000 and 10,000 m2. the low proportion of projects with an area of more than 20,000 m2 may reduce the accuracy of the model in estimating the cost for this category.
length of road: gives an indication of the size of works in the project.
the shortest length in the data set is 250 m, and the number of projects with road lengths of more than 2,250 m is less than five percent, while 77% of the projects in the data set have road lengths between 250 and 1,250 m.
sewage, water and lighting networks: the cost of projects in the data set includes the cost of implementing one or more of the networks listed in table 1. the data represent all feasible combinations of the presence or absence of the three networks, which supports the adequacy of this data for building the model.
curbstone length: a quarter of the projects in the data set do not include the cost of curbstones, which means this case is well represented. projects that include more than 5,000 m of curbstone are few; this may reduce the accuracy of the model in estimating the cost of this category of projects.
pavement area of (sidewalk + island): forty-two percent of the projects in the data set did not include the cost of paving sidewalks and islands.
project budgets for the gathered cases are presented in figure 2. more than 67 percent of the projects in the data set have a budget of less than 400,000 dollars, which means that the accuracy of the model will be good for projects whose cost falls within this range.
v. model development
to obtain the best models with minimum error, the research followed the procedures explained in figure 1.
a. model structure design
the choice of ann architecture depends on a number of factors such as the nature of the problem, data characteristics and complexity, the number of sample data, etc. [7]. the models were designed to include an input layer of nine processing elements (neurons), corresponding to the nine input parameters, and an output layer of one processing element (neuron) as the target.
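the architecture described above (nine inputs, one output; the paper later reports the best structure as one hidden layer of five tanh neurons) can be sketched in numpy as a minimal gradient-descent backpropagation loop. this is an illustrative stand-in trained on synthetic data, not the paper's neurosolutions/levenberg-marquardt setup.

```python
import numpy as np

# hedged sketch of a 9-5-1 multilayer perceptron with tanh hidden units,
# trained by plain gradient-descent backpropagation on synthetic data.
# the paper used neurosolutions 6.07 with the levenberg-marquardt rule;
# this stand-in only illustrates the network structure and training idea.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 9, 5, 1           # nine inputs, five hidden, one output

X = rng.uniform(0.0, 1.0, (60, n_in))  # stand-in for 60 normalized exemplars
d = X.mean(axis=1, keepdims=True)      # toy budget proxy target in [0, 1]

W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def forward(X):
    h = np.tanh(X @ W1 + b1)           # hidden layer, tanh activation
    return h, h @ W2 + b2              # linear output processing element

mse0 = float(((forward(X)[1] - d) ** 2).mean())

lr = 0.05
for _ in range(2000):
    h, y = forward(X)
    err = y - d                        # output error for this epoch
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2) # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[1] - d) ** 2).mean())
print(f"mse before: {mse0:.4f}, after: {mse:.4f}")
```

in scikit-learn, a roughly comparable model would be `MLPRegressor(hidden_layer_sizes=(5,), activation='tanh')`, though its solvers also differ from levenberg-marquardt.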
in this research, the data is textual and numeric, so it is encoded to be purely numeric or integer according to table 2.

table 1: number of projects including different networks
networks | no. of projects | %
sewage only | 7 | 8.1%
water only | 3 | 3.5%
lighting only | 14 | 16.3%
no networks | 35 | 40.7%
sewage & water | 6 | 7.0%
sewage & lighting | 4 | 4.7%
water & lighting | 6 | 7.0%
sewage, water & lighting | 11 | 12.8%

table 2: inputs/output encoding
no. | input parameter | code
1 | project scope | new with good soil = 1; new with bad soil = 2; rehabilitation = 3
2 | pavement type | interlock = 1; asphalt = 2
3 | pavement area | in m2
4 | sidewalk & island pavement area | in m2
5 | road length | length in meters
6 | curbstone length | length in meters
7 | water networks | exist = 1; not exist = 0
8 | lighting networks | exist = 1; not exist = 0
9 | sewage networks | exist = 1; not exist = 0
output parameter | code
1 | project budget | in thousand dollars

figure 2: number of projects according to their budget.
figure 1: modelling procedures flowchart [1].

the design of the neural network architecture is a complex and dynamic process that requires the determination of the internal structure and rules (i.e., the number of hidden layers and neurons, the weight update method, and the type of activation function) [22]. this research depended on the back-propagation algorithm, a type of supervised learning algorithm that is widely used in civil engineering applications; the levenberg-marquardt learning rule was also selected. the choice of ann in this study is based on optimum design and prediction using multilayer perceptron neural network architectures.
neurosolution 6.07 and microsoft excel 2007 were selected to build the models. there are many types of activation functions used to transform an input signal into an output; the hyperbolic tangent (tanh) was used here.
b. model implementation
the problem at hand requires identifying and tagging the data as input or output, so the data was organized in the preliminary stage of neural network modeling. three processes were then followed in modeling:
data sets: any model selection strategy requires validation by the process data. traditionally, available data is divided into three sets [24]: a training set (in-sample data), a cross-validation set and a test set (out-of-sample). learning is performed on the training set, which is used for estimating the arc weights, while the cross-validation set is used for generalization, that is, to produce better output for unseen examples [7]. the test set is used for measuring the generalization ability of the network and for network performance evaluation [25]. the total available data is 86 exemplars, divided randomly into three sets:
 training set (60 exemplars ≈ 70%),
 cross-validation set (16 exemplars ≈ 18%) and
 test set (10 exemplars ≈ 12%).
normalizing data: data is generally normalized for confidentiality and for effective training of the model being developed; the normalization of training data is recognized to improve the performance of trained networks [22]. the input/output data is scaled with zero as the lower bound and one as the upper bound to suit neural network processing. neurosolution 6.07 automatically scales input values to {lowerbound, upperbound} according to equations (1), (2) and (3).
(data_i)_nor = amp_i × data_i + off_i (1)
where:
(data_i)_nor: value of one input of one sample after normalization.
data_i: value of one input of one sample.
amp_i = (upperbound − lowerbound) / (max_i − min_i) (2)
off_i = upperbound − amp_i × max_i (3)
where max_i and min_i are the maximum and minimum values found within channel i, and upperbound and lowerbound are equal to 1 and 0, respectively.
initial network building: the modeling started with small networks whose size was increased until the performance on the test set was appropriate. this method of growing neural topologies ensures a minimal number of weights, but the training can be fairly long [13].
c. training models and testing
training a nn is an iterative process of feeding the network with the training examples and changing the values of its weights in a manner that is mathematically guaranteed to successively reduce the error between the network's own results and the desired output. neural networks are able to generalize solutions to problems by learning from pairs of input patterns and their associated output patterns [16]. after building a small topology as described above, training with cross-validation and the testing phase begin. the optimum architecture is often achieved by trial and error according to the complexity of the respective problem, and by testing a few proposed designs to select the one that gives the best performance [26]. figure 3 explains the series of processes used to get the best weights, which give the minimum percentage error.
vi. performance measures
performance measures are important to evaluate the models. there are five values that can be used to measure the performance of the network for a particular data set.
mean square error (mse): according to principe et al. [13], the mse formula is:
mse = ( Σ_j Σ_i (d_ij − y_ij)² ) / (n × p) (4)
where:
n = number of exemplars in the data set.
p = number of output processing elements (pes).
y_ij = network output for exemplar i at pe j.
d_ij = desired output for exemplar i at pe j.
correlation coefficient (r): according to principe et al. [13], the correlation coefficient between a network output x and a desired output d is:
r = Σ_i (x_i − x̄)(d_i − d̄) / ( √(Σ_i (x_i − x̄)²) × √(Σ_i (d_i − d̄)²) ) (5)
mean absolute error (mae): according to willmott and matsuura [27], the mae is defined by the following formula:
mae = ( Σ_i |dy_ij − dd_ij| ) / n (6)
where:
n = number of exemplars in the data set.
dy_ij = denormalized network output for exemplar i at pe j.
dd_ij = denormalized desired output for exemplar i at pe j.
mean absolute percentage error (mape): according to principe et al. [13], the mape is defined by the following formula:
mape = (100 / n) × Σ_i |(dd_ij − dy_ij) / dd_ij| (7)
this research considered the hegazy and ayed [2] methodology in determining the total mape: the training phase carries fifty percent of the weight of the total mape, while the test set carries the remaining fifty percent. the total mape is calculated by the following formula:
total mape = ( (mape_t × n_t + mape_c × n_c) / (n_t + n_c) + mape_s ) / 2 (8)
where:
mape_t = mape for the training data set.
n_t = number of exemplars in the training data set.
mape_c = mape for the cross-validation data set.
n_c = number of exemplars in the cross-validation data set.
mape_s = mape for the test data set.
total accuracy performance (tap): according to wilmot and mei [6], the accuracy performance is defined as (100 − mape)%. total accuracy performance (tap) is calculated by the following formula:
tap = 100 − total mape (9)
vii. results and discussion
following the training and testing procedure described above, many multilayer perceptron topologies were trained over several trials. the best structure has one hidden layer with five neurons, although topologies with two hidden layers were also trained; see figure 4. the models were trained on sixty exemplars, while the sixteen exemplars of the cross-validation set were used for generalization to produce better output for unseen examples. the models were tested on ten exemplars. the results are summarized in table 3.
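the performance measures of equations (4)-(9) can be sketched for the single-output case (p = 1) as follows. the set sizes and set-wise mape values plugged in at the end are those reported by the paper and reproduce its 6.3% total error.

```python
# sketch of the performance measures in eqs. (4)-(9) for a network with a
# single output processing element (p = 1); d = desired, y = network output.

import math

def mse(d, y):                                                       # eq. (4)
    return sum((di - yi) ** 2 for di, yi in zip(d, y)) / len(d)

def corr(d, y):                                                      # eq. (5)
    n = len(d)
    db, yb = sum(d) / n, sum(y) / n
    num = sum((yi - yb) * (di - db) for di, yi in zip(d, y))
    return num / (math.sqrt(sum((yi - yb) ** 2 for yi in y))
                  * math.sqrt(sum((di - db) ** 2 for di in d)))

def mae(d, y):                                                       # eq. (6)
    return sum(abs(di - yi) for di, yi in zip(d, y)) / len(d)

def mape(d, y):                                                      # eq. (7)
    return 100.0 * sum(abs((di - yi) / di) for di, yi in zip(d, y)) / len(d)

def total_mape(mape_t, n_t, mape_c, n_c, mape_s):                    # eq. (8)
    return ((mape_t * n_t + mape_c * n_c) / (n_t + n_c) + mape_s) / 2

def tap(total):                                                      # eq. (9)
    return 100.0 - total

# the paper's set-wise mapes (3.49%, 9.59%, 7.80%) over 60/16/10 exemplars
total = total_mape(3.49, 60, 9.59, 16, 7.80)
print(round(total, 1), round(tap(total), 1))  # -> 6.3 93.7
```

note how eq. (8) weights the training and cross-validation sets together at fifty percent and the test set at the remaining fifty percent, as described above.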
figure 3: training and testing model flowchart [1].
figure 4: the architecture of the mlp model.

table 3: performance measurements for the model.
     | training set | c.v. set | test set
mse  | 152.2        | 1446     | 2819
r    | 0.999        | 0.992    | 0.99
mae  | 8.812        | 30.97    | 43.3
mape | 3.49%        | 9.59%    | 7.80%
ap   | 96.52%       | 90.41%   | 92.20%

the back-propagation algorithm involves the gradual reduction of the error between the model output and the target output. it develops the input-to-output mapping by minimizing a mean square error cost function measured over a set of training examples [28]. the mlp model has mse values of 152, 1446 and 2819 on the training, cross-validation and testing sets respectively. the size of the mean square error (mse) can be used to determine how well the network output fits the desired output, but it does not necessarily reflect whether the two sets of data move in the same direction: for instance, by simply scaling the network output, we can change the mse without changing the directionality of the data. the correlation coefficient (r) solves this problem [13]. as shown in table 3, the correlation coefficient (r) for every data set is not less than 0.989, which means that the fit of the model to the data is reasonably good. mean absolute error is another factor for measuring model performance; the mlp model has mae values of 8.8, 31 and 43.3 on the training, cross-validation and testing sets respectively. note that the mae factor alone is not enough because its value can easily be misleading. for example, say that the output data is in the range of 0 to 10.
For one exemplar, the desired output is one and the actual output is two. Even though the two values are quite close and the MAE for this exemplar is one, the mean absolute percentage error is 100%. Therefore, this research used the MAPE. The values of the MAPE for the MLP model were 3.5, 9.6 and 7.8 on the training, cross validation and testing sets respectively. The accuracy of the best model developed by the multilayer perceptron compares very favorably with the data from the test set. It can be seen from the results that the model performs well, and no significant difference could be discerned between the estimated output and the desired budget value. Results for the training, cross validation and test sets are shown in Figure 5, Figure 6 and Figure 7 respectively. An average accuracy of 93.7% was achieved, which means that the total MAPE equals 6.3%. These results show that the MLP model has excellent performance with minor error. In Figure 5, Figure 6 and Figure 7, perfect agreement between the actual and predicted values would lie along a 45-degree line, on which the actual cost values equal the predicted ones. Figure 5, Figure 6 and Figure 7 indicate reasonable concentration of the predicted values around the 45-degree line.

Figure 5: Desired output and actual network output for the training set exemplars.

Figure 6: Desired output and actual network output for the cross validation set exemplars.

Figure 7: Desired output and actual network output for the test set exemplars.

The coefficients of determination between the actual and the predicted cost values were 0.998, 0.992 and 0.99 for the training, cross validation and test sets respectively.

VIII. Sensitivity Analysis

Sensitivity analysis is a method that discovers the cause and effect relationship between the input and output variables of the network [13].
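The arithmetic of this single-exemplar example can be checked directly (plain Python, using the values from the text):

```python
desired, actual = [1.0], [2.0]

# MAE for this exemplar: |2 - 1| = 1, a small absolute error
mae = sum(abs(a - d) for a, d in zip(actual, desired)) / len(desired)

# MAPE for the same exemplar: |2 - 1| / 1 = 100%, a large relative error
mape = 100 * sum(abs((a - d) / d) for a, d in zip(actual, desired)) / len(desired)

print(mae, mape)  # 1.0 100.0
```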
The NeuroSolutions program provides a useful tool to identify sensitive input variables called ‘‘sensitivity about the mean’’. The sensitivity analysis was run by batch testing on the MLP model after fixing the best weights. It started by varying the first input between its mean ± one standard deviation, while all other inputs were fixed at their respective means. The network output was computed for 50 steps above and below the mean, and this process was then repeated for each input. Figure 8 summarizes the variation of the output with respect to the variation of each input. As shown in Figure 8, the pavement area parameter, with a value of 56.6, has the greatest effect on the budget output. The second parameter affecting the total budget is the pavement type, with a value of 41.15. These results are logical when compared to actual practice. On the other hand, project scope has a weak impact; likewise, road length has the weakest impact, which may be due to the presence of the pavement area parameter.

IX. Conclusion

This research demonstrated the ability to estimate road project costs at an early stage with high accuracy and minor error. The MLP model had a total error rate of 6.3%, and MAPE values of 3.49%, 9.59% and 7.8% for the training, cross validation and test sets respectively. In addition, the correlation coefficient is not less than 0.989 for any set. This research focused on the "implementation factors" that affect the budget of road projects in the Gaza Strip. The research adopted nine factors, which were determined using the Delphi technique. Remarkably, the sensitivity analysis results were very logical and showed the impact of each parameter on the cost: the pavement area parameter had the greatest effect on the budget output, while project scope and road length had low impact. ANNs are well suited to model complex problems where the relationship between the model variables is unknown.
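The ‘‘sensitivity about the mean’’ procedure can be sketched as follows. This is an illustrative reimplementation under our own assumptions (the `model` callable and the helper name are ours, not NeuroSolutions' API); it reports, per input, the spread of the network output as that input sweeps mean ± one standard deviation while all other inputs sit at their means:

```python
import statistics

def sensitivity_about_mean(model, data, steps=50):
    """Vary each input between mean - std and mean + std (50 steps either
    side of the mean by default), holding the other inputs at their means,
    and return the output spread (max - min) per input.
    `model` maps a list of input values to a scalar output;
    `data` is a list of input rows."""
    cols = list(zip(*data))
    means = [statistics.mean(c) for c in cols]
    stds = [statistics.stdev(c) for c in cols]
    sensitivities = []
    for k in range(len(cols)):
        outputs = []
        for step in range(-steps, steps + 1):
            x = means[:]                       # all inputs at their means
            x[k] = means[k] + stds[k] * step / steps  # sweep input k
            outputs.append(model(x))
        sensitivities.append(max(outputs) - min(outputs))
    return sensitivities
```

Ranking the returned spreads gives the kind of ordering reported in Figure 8 (largest spread = most sensitive input).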
Also, an ANN does not need any prior knowledge about the nature of the relationship between the input and output variables, which is one of the benefits ANNs have compared with most empirical and statistical methods. Although ANNs have these advantages, they also have disadvantages. The principal disadvantage is that they give results without being able to explain how they arrived at their solutions. Their accuracy depends on the quality of the training data and the ability of the developer to choose truly representative sample inputs. In addition, trial and error is the usual way to decide which ANN architecture should be used to solve a given problem and which training algorithm to use: one looks at the problem and starts with simple networks, going on to more complex ones, until the optimum solution is within the acceptable limits of error.

Figure 8: Sensitivity output.

References

[1] H. K. Abujamous, "Parametric cost estimation of road projects using artificial neural networks," M.Sc. thesis, Civil Engineering, Islamic University, Gaza, 2013.
[2] T. Hegazy and A. Ayed, "Neural network model for parametric cost estimation of highway projects," Journal of Construction Engineering and Management, vol. 124, pp. 210-218, 1998.
[3] Y. A. E.-R. Al-Shanti and K. Sha'at, "A cost estimate system for Gaza Strip construction contractors," M.Sc. thesis, Faculty of Engineering, The Islamic University, Gaza, 2003.
[4] M. Arafa and M. Alqedra, "Early stage cost estimation of buildings construction projects using artificial neural networks," Journal of Artificial Intelligence, vol. 4, pp. 63-75, 2011.
[5] A. S. Ayed, "Parametric cost estimating of highway projects using neural networks," M.Sc. thesis, Faculty of Engineering & Applied Sciences, Memorial University, Newfoundland, 1997.
[6] C. G. Wilmot and B.
Mei, "Neural network modeling of highway construction costs," Journal of Construction Engineering and Management, vol. 131, pp. 765-771, 2005.
[7] J. Sodikov, "Cost estimation of highway projects in developing countries: artificial neural network approach," Journal of the Eastern Asia Society for Transportation Studies, vol. 6, pp. 1036-1047, 2005.
[8] I. Mahamid and A. Bruland, "Preliminary cost estimating models for road construction activities," in Facing the Challenges – Building the Capacity, Sydney, 2010.
[9] W. Pewdum, T. Rujirayanyong, and V. Sooksatra, "Forecasting final budget and duration of highway construction projects," Engineering, Construction and Architectural Management, vol. 16, pp. 544-557, 2009.
[10] A. Attal and M. Tatari, "Development of neural network models for prediction of highway construction cost and project duration," M.Sc. thesis, Department of Civil Engineering, Ohio University, Ohio, 2010.
[11] G. D. Creedy, M. Skitmore, and T. Sidwell, "Risk factors leading to cost overrun in the delivery of highway construction projects," Ph.D. thesis, Faculty of Built Environment and Engineering, Queensland University of Technology, Queensland, 2006.
[12] M. J. Noruis, SPSS Neural Networks™ 17.0. Chicago: SPSS Incorporated, 2007.
[13] J. Principe, W. Lefebvre, G. Lynn, C. Fancourt, and D. Wooten, "NeuroSolutions – documentation, the manual and on-line help," 2010.
[14] J. L. McClelland and D. E. Rumelhart, "Parallel distributed processing," Psychological and Biological Models, vol. 2, 1987.
[15] O. Moselhi, T. Hegazy, and P. Fazio, "Neural networks as tools in construction," Journal of Construction Engineering and Management, vol. 117, pp. 606-625, 1991.
[16] O. Moselhi and T. Hegazy, "Markup estimation using neural network methodology," Computing Systems in Engineering, vol. 4, pp. 135-145, 1993.
[17] B. Hola and K. Schabowicz, "Estimation of earthworks execution time cost by means of artificial neural networks," Automation in Construction, vol. 19, pp.
570-579, 2010.
[18] Y.-R. Wang and G. E. Gibson, "A study of preproject planning and project success using ANNs and regression models," Automation in Construction, vol. 19, pp. 341-346, 2010.
[19] J.-H. Chen, "Hybrid ANN-CBR model for disputed change orders in construction projects," Automation in Construction, vol. 17, pp. 56-64, 2007.
[20] M. Oral, E. L. Oral, and A. Aydın, "Supervised vs. unsupervised learning for construction crew productivity prediction," Automation in Construction, vol. 22, pp. 271-276, 2012.
[21] H.-J. Kim, Y.-C. Seo, and C.-T. Hyun, "A hybrid conceptual cost estimating model for large building projects," Automation in Construction, vol. 25, pp. 72-81, 2012.
[22] H. Murat Günaydın and S. Zeynep Doğan, "A neural network approach for early cost estimation of structural systems of buildings," International Journal of Project Management, vol. 22, pp. 595-602, 2004.
[23] G.-H. Kim, J.-E. Yoon, S.-H. An, H.-H. Cho, and K.-I. Kang, "Neural network model incorporating a genetic algorithm in estimating construction costs," Building and Environment, vol. 39, pp. 1333-1340, 2004.
[24] M. Ghiassi, H. Saidane, and D. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, pp. 341-362, 2005.
[25] G. Zhang, B. Eddy Patuwo, and M. Y. Hu, "Forecasting with artificial neural networks: the state of the art," International Journal of Forecasting, vol. 14, pp. 35-62, 1998.
[26] N. Bakhary, K. Yahya, and N. Ng Chin, "Univariate artificial neural network in forecasting demand of low cost house in Petaling Jaya," Jurnal Teknologi B, pp. 67-75, 2004.
[27] C. J. Willmott and K. Matsuura, "Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance," Climate Research, vol. 30, p. 79, 2005.
[28] M. Bouabaz and M.
Hamami, "A cost estimation model for repair bridges based on artificial neural network," American Journal of Applied Sciences, vol. 5, pp. 334-339, 2008.

Hasan Kh. Abujamous received his M.Sc. in civil engineering from the Islamic University of Gaza, Palestine, in 2013. He is a project manager in El-Jazeera Construction Company for Consulting (JCEC). His primary research interests include neural networks, simulation, construction management, construction analysis and concrete technology.

Prof. Rifat Rustom received his M.Sc. and Ph.D. in civil engineering from Drexel University in the U.S.A. in 1993. He is the rector of the University College of Applied Sciences (UCAS). Prof. Rustom is a former vice president for external affairs and IT at the Islamic University of Gaza (IUG). His research interests include construction management, institutional management and development, geosynthetics, and concrete technology.

Mahmoud Y. Abukmail received his M.Sc. in civil engineering from the Islamic University of Gaza, Palestine, in 2013. He is a project manager in El-Jazeera Construction Company for Consulting (JCEC). His primary research interests include simulation, neural networks, construction management, and computer applications in construction.

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015 15

Integration of Sustainability in Engineering Education in Palestine

Ahmed M. Abu Hanieh 1, Afif A. Hasan 2, Sadiq A. Abdelall 3
1 Mechanical Engineering Department, Birzeit University, Palestine, ahanieh@birzeit.edu
2 Mechanical Engineering Department, Birzeit University, Palestine, ahasan@birzeit.edu
3 Industrial Engineering Department, Islamic University of Gaza, Palestine, sabdelall@iugaza.edu.ps

Abstract— Supporting engineering education is considered one of the main goals that lead to a strong Palestinian economy, due to the strong interactions and synergy effects between education and the economy.
Engineering education is considered a mid-point connecting natural resources on one side to industrial products on the other. To keep this connection sustainable, in a manner that guarantees maintaining these resources for the longest possible time, engineering courses and programs are designed by integrating sustainability aspects into engineering education in order to increase productivity and resource efficiency without damaging the environment. The focal point of this paper is the academic faculties of engineering, where education and training courses are designed and delivered on one hand, and innovation and research are fostered on the other. This paper presents an overview of the potential contributions of academia in altering the attitude of industries toward more sustainable resource consumption and in capacity building for implementing sustainable engineering. Bachelor and master engineering programs are considered in the paper. Cooperation and partnership between higher educational institutions and industry are addressed in the context of sustainability, taking into account national and international indicators for this partnership.

Index Terms— engineering education; sustainability; development; partnership.

I. Introduction

Palestine has a highly renewable human capital resource: 60.3% of the population in the West Bank and Gaza are under the age of twenty [1]. This resource has its impact on the incentive for more investment in education and the transfer of knowledge. According to the Palestinian Ministry of Education and Higher Education, there are 49 Palestinian universities and colleges working in the West Bank and Gaza [2]. Education has been of great importance to the Palestinian people. Studies indicate that the ratio of university students to the total population is considerably higher for Palestinians than for all other Arab nations and many advanced European nations [3].
Palestinian students have exceptionally high educational aspirations in spite of the disruptive influence of the Israeli occupation and dire poverty. Students work hard in school and are supported by their parents. Palestinians of all social and economic origins and all political persuasions agree on the necessity of high-quality education for their youth. This is perhaps the highest priority of every Palestinian family [4]. The overall number of Palestinian graduates increased by 18.5% between the academic years 2003/2004 and 2011/2012 [5]. Engineering programs are a favorite choice of Palestinian society; engineering programs, along with the health sciences, attract the best students. It is very common that students with grades above 90% in the high school diploma enroll in engineering programs. In 2010/2011 around 11.2% of the students accepted into bachelor programs at the universities were accepted into engineering disciplines, and 8.8% of bachelor degree university graduates were from engineering disciplines [3]. However, as in other countries in the region, an educated workforce is not correlated with economic productivity. The mismatch between qualifications demand and supply is a major challenge for the educational institutions [5], [6]. Preparing students based on real-time job market qualification demands will increase their chances of getting and retaining a job, equipping them in the best way for a rewarding career and with the most relevant skills for their chosen field. Employers will have the skilled individuals they need to enhance their competitive advantage.

A. Energy Problems

Energy acts as a main indicator of industrial activity and an improved standard of living. GNP correlates closely with per capita energy consumption. For non-fossil-fuel-producing countries in the region, such as Palestine, energy supply can be a limiting factor on growth and prosperity.
Maximizing the use of the available energy resources on one hand and utilizing renewable energy resources on the other will be the wise option for such countries. In Palestine the electricity grid reaches 99% of the population, unlike other countries in the region; even in the oil-rich countries the electricity grid and supply do not reach 100%. Palestinians in the West Bank do not generate their own electrical power: around 98% of the total power is purchased, the bulk supplied by Israel, with Jordan providing around 6%. In the Gaza Strip, Israel supplies 50% and Egypt supplies 7%; the rest is supposed to be generated by the Gaza 140 MW power station (GPS). The main problems in Palestine are the quality of the electricity supply and its continuity: interruptions and breakdowns are very frequent in winter time. In the Gaza Strip the situation is much worse due to the unavailability of fuel for the GPS, such that it operates only a few hours a day. People depend most of the time on diesel generators, generating electricity at a high price and polluting the environment. Another energy constraint is price and cost. Electricity prices in Palestine are very high because energy is imported from Israel at a relatively high cost and then taxed by the Palestinian Authority. The average selling price of electricity is 0.115 €/kWh. There are no subsidies; energy therefore takes a large part of the household income of Palestinians. The average annual income per capita in Palestine is €1,030; the electricity bill amounts to about 10% of the family income [7].

B. Water Problems

In general Palestine suffers from a shortage of water, and in particular from a continuous increase in water scarcity. Climate change and environmental issues are adding to already present political concerns over the water problem [9].
Palestinian water abstractions have declined over the last ten years as the combined result of dropping water tables, Israeli-restricted drilling, and the deepening and rehabilitation of wells. Water withdrawals per capita for Palestinians in the West Bank are about one quarter of those of the Israelis, and have continuously declined over the last decade. By regional standards, Palestinians have the lowest access to fresh water resources, as shown in Table 1 below [10]. Domestic water supply in Palestine is variable and discontinuous. The nominal daily supply rates to a quarter of the connected population are less than 50 liter/capita per day, with some network services providing as little as 10-15 liter/capita per day, a rate below the minimum international humanitarian disaster levels. Actual household use in the West Bank is estimated to average 50 liter/capita per day. In addition, about 50% of households report quality problems in their drinking water supply [10].

Table 1: Per capita availability of renewable water resources in the Jordan basin (sources: World Bank, 2007; PWA [11]).

Country      m³ per capita per annum
West Bank    75
Gaza         125
Jordan       200
Israel       240
Lebanon      1200
Syria        1500

C. Material Problems

Material management is an engineering technique concerned with the planning, organizing and control of the flow of materials from their initial purchase to their destination. Material management aims at getting the right quality and quantity of supplies at the right time and place and at the right cost. The objectives of material management cover material planning, purchasing, procuring, storing and inventory control. On the other hand, material management helps in organizing the supply, distribution and quality assurance of materials. In general, the best procedure for material management flow is outlined in Figure 1, which represents a material management cycle [12].
Palestine suffers from shortages of different types of materials that are considered vital for the development of the region. It is therefore all the more important to use adequate management techniques for the most important material resources, like petroleum and construction materials, to sustain these resources for a longer period. Stone, marble and aggregate make up to 50% of the materials used in construction; managing the life cycle of this material will improve the efficiency of this resource and the regional economy as well [13]. The rest of the materials used in construction are divided into metals and non-metals. The most widely used metal is steel, followed by aluminium, while the non-metals are mainly rubber, plastic and wood. Petroleum-producing countries in MENA depend on oil and gas, responsible for 90% of their GDP [14].

Figure 1: Material management flow diagram.

II. Engineering Programmes in Palestine

The Palestinian education sector strategy 2011-2013 was built on four core pillars: enrolment, quality of education, management, and linkage with the needs of the market and society.

A. Existing Programs

Engineering programs, along with IT, attract the best students in Palestine. In 2010/2011, 11.2% of the 40,000 students accepted into bachelor programs at the universities were accepted into engineering disciplines, and among the 103,000 university students 18.1% are in engineering bachelor programs. Figure 2 shows the existing engineering programs at Palestinian universities [15]. Engineering programs are 5-year programs that include basic science, basic engineering science, specialized courses, labs and practical training, in addition to a capstone or graduation project. These programs are based on credit hour systems.
However, because of the difficulties imposed on importing lab equipment by the Israeli occupation, added to the limited funding available for such equipment, some labs and practical aspects are not covered properly in classes. This situation underlines the need for a stronger relationship between academic programs and industry in Palestine, to give students better opportunities to get hands-on experience in direct contact with local industry [16].

Figure 2: Engineering programs at Palestinian universities.

B. Proposed Model for Engineering Education

A new model for engineering education has been developed based on the integrated definition function (IDEF) modelling technique. The engineering education model shown in Figure 3 is based on considering "engineering education" as the main function to be modelled. This function is supported mainly by six variables: inputs, outputs, controls, mechanisms, information and dynamics. These variables are connected as follows.

Outputs: the main outputs of the engineering education process are:
• Knowledge – knowledge is the most important output, on which most of the other outputs depend.
• Graduates – graduate engineers from all disciplines should obtain the level of education that fits the needs of the local society.
• Opportunities – a good education process leads to the opening of new opportunities, jobs and businesses.
• Development – this educational technique helps in the development of local industry and society, improving people's standard of living.

Inputs, information, controls and mechanisms: the main inputs to the engineering education process are:
• Curriculum – preparing the curriculum for an engineering program requires taking inputs from the existing literature. To support traditional educational techniques, cooperative education should be applied to assure sustainability.
• Students – students are the centre of the education process.
Besides theoretical information, students need to be aware of the technological issues related to their disciplines.
• Assessment – assessment is used to control and evaluate the level and adequacy of these students. Students need to attend lifelong learning courses during and after their study period to stay up to date with all recent advances.

Figure 3: Developed model for engineering education.

• Facilitators – in modern educational theories, teachers are called facilitators because their main job is facilitating ways for students rather than lecturing. Facilitators get their experience from global contacts with higher educational institutions and industrial partners. They transfer the knowledge obtained through these contacts to the education process. The level and appropriateness of this knowledge is measured and controlled by taking feedback from the local society.
• Resources – all the previous items require resources to be accomplished. These resources are decided with reference to the experimentation requirements and should be related directly to the existing resources of the local society. Resource efficiency methods should be followed here, including the preparation of an open knowledge platform for resource efficiency (OKPRE). This platform plays the role of intermediary between universities and society. It can be either in personal form or just an electronic platform with public access.
• Dynamics – the dynamics of the engineering education process take their input from market needs. The variables affecting these dynamics are theory, research and practice, used to vary the awareness of students. The output of these dynamics is the knowledge given to students. In order to obtain a balanced knowledge level, all the previous variables must be balanced.
C. Proposed Model for Industry-Academia Partnership

Figure 4 depicts a proposed model for industry-academia partnership, represented as an integrated definition function schematic diagram. The figure shows that academia requires at least four inputs to fulfill the learning process on one side and the partnership with industry on the other. The curriculum is considered the main input: courses and teaching plans should be prepared carefully to qualify graduates to compete in the local market and worldwide. Moreover, the curriculum must take into account the needs of the local market in order to contribute to its development. The second input is the students; they are considered the core of the academic process, where modern learning methods are student-based learning techniques. Other resources are required to support the academic institutes in their partnership with industry: laboratories, ICT and technical facilities form a significant part of these resources, besides the necessity of libraries, books and search engines. In order to establish a serious partnership between an academic university and industry, the university must study carefully the needs of the market and feed them into its curriculum and learning techniques. This information helps in building awareness of the necessity for real cooperation that leads to the social and economic development of the local society.

Figure 4: Integrated definition model for industry-academia partnership.
Industry-academia partnership can take the following forms of activities:

1. Cooperative education: cooperative (co-op) education is considered one of the most important learning methods for the engineering, information technology and business educational disciplines. Co-op can be divided into two main techniques: in-class cooperative learning and in-market cooperative learning. It helps students to share ideas and opinions, ask for reasoning, work in teams, encourage everyone to participate and energize groups. On the personal level, it leads students to learn monitoring, observing, intervening and processing. In-market co-op learning aims at developing partnership with the local market and industry and opening new opportunities for students and graduates in their future career and business. On the other hand, it helps in bridging the gap between theory and practice and qualifying students to be ready for work challenges from the first working day. It improves the level of education in Palestine and encourages students to continue their higher education in the region, avoiding brain drain and leading to better development of their countries.

2. Lifelong learning: lifelong learning (LLL) is a very wide concept and has been defined in different ways depending on the national context. It can hold the following definitions:
• Adult learning.
• Non-traditional students in a formal and informal environment.
• Supplementary (non-degree) study programmes.
The activities carried out under LLL can vary from part-time, distance, adult, mixed-mode, electronic and open learning. Lifelong learning can be monitored either by HEIs or by the private sector associations providing the topic. Nevertheless, governments are required to lay out rules and measures for the implementation of lifelong learning in the frame of cooperation between higher education and industry.
In Palestine, LLL contributes to educating and updating the knowledge of engineers and technicians working in the local industry, aiming to provide these people with the state of the art in modern developments and innovations arising all over the world. In order to enhance the LLL process it is necessary to enable student, staff and technician mobility between Palestine and European countries; this mobility aims at transferring knowledge and know-how about recent developments.

3. Scientific research: research is considered one of the main building blocks used in the development of societies. This research requires a serious study of the requirements of local industry, tackling practical problems in this industry to be solved. University professors must work together with their students on solving technical problems specified by the industry. Working on these problems requires deep knowledge of scientific theories and experimental processes to attach theory to practice.

4. Practical training: in scientific and practical faculties every student needs to complete practical training after the fourth year of study to fulfil the graduation requirements. Trainees are supervised by a senior engineer working in the training company and followed up by a university professor from their faculty. A daily report should be written by the trainee, signed by the supervising engineer and submitted to the supervising university professor. This training equips the student with practical skills and tightens relations and cooperation between the university and industry.

5. Open knowledge platforms: platforms for disseminating knowledge should be established, formed by people from universities and others from industry. Each platform will handle the open distribution of knowledge in a specific topic. Forums, websites and social media can be used to build these platforms.
The existence of these platforms contributes to increasing sustainability and improving resource efficiency in Palestine. Partnership between industry and academia contributes to increasing the added value of the participating industrial sectors, improving the contribution of these sectors to the local economy. The productivity of the partner companies will be significantly improved by the scientific research conducted by the partner university to solve the different problems of the production lines. On the societal scale, this cooperation leads to developing the social and economic situation of Palestinian society. Despite the hard political situation in Palestine, which hampers the development and continuity of the industrial sector, this industry-academia partnership increases the sustainability of the sector and helps in saving resources and products.

III. Sustainability Education in Palestine

A. Bachelor Programs Related to Sustainability

A working definition of sustainable engineering includes the topics of energy, water, natural resources, solid waste, quality, management and related issues. A sustainable engineering program does not exist at Palestinian universities at either the B.S. or M.S. level. However, Table 2 presents some of the related programs, or those which include some elements of sustainable engineering. Two bachelor programs in environment engineering exist in Palestine, one in the West Bank and another in the Gaza Strip, and one program was recently introduced at An-Najah University. Water-related graduate programs exist at three universities in the West Bank and one in the Gaza Strip. Only one graduate program in sustainable development exists, at Al-Quds University. Most B.S. engineering programs have courses related to energy, water, solid waste and the environment; Table 3 presents an example of such courses at some Palestinian universities.
Table 2: Sustainable engineering related programs in Palestinian universities

  Environment Engineering Technology (B.S.), Palestine Polytechnic University
  Environment Engineering (B.S.), Islamic University of Gaza
  Energy and Environmental Engineering (B.S.), An-Najah University
  Water Resources Engineering (M.S.), Islamic University of Gaza
  Water and Environmental Sciences (M.S.), Al-Azhar University
  Water and Environmental Engineering (M.S.), Birzeit University
  Water and Environmental Sciences (M.S.), Birzeit University
  Water and Environmental Engineering (M.S.), An-Najah University
  Clean Energy and Energy Conservation Engineering (M.S.), An-Najah University
  Rural Sustainable Development (M.S.), Al-Quds University

Table 3: Sustainable engineering related courses in some B.S. engineering programs (course counts across the energy, water, solid waste and environment areas)

  Birzeit University: Electrical Engineering (1); Mechanical Engineering (2, 1); Civil Engineering (6, 1, 1); Architectural Engineering (1)
  An-Najah University: Electrical Engineering (1); Mechanical Engineering (2); Civil Engineering (4, 7); Architectural Engineering (1, 2); Chemical Engineering (1, 1, 4); Industrial Engineering (1)

Figure 5: Industry need for sustainable courses based on market survey [17].

B. MS in Sustainable Engineering at Birzeit University

The Middle Eastern Partnership in Sustainable Engineering (ME-ENG) Tempus project addresses the sustainability challenges discussed above. The master program in sustainable engineering will be established in the Faculty of Engineering at Birzeit University with joint resources from An-Najah National University. This program will teach graduate courses related directly to the needs of local industry and sustainable engineering. A market study and survey of industries showed their interest in sustainable engineering courses, as given in Figure 5.
The sustainable engineering program at Birzeit University aims at meeting the economic development needs of Palestine by raising the national production level and providing for environmental needs; it is based on the market survey results depicted in Figure 5. This is consistent with international trends toward conserving natural resources and utilizing renewable energy, taking into account water conservation, pollution reduction and the implementation of remanufacture, reuse and recycling processes. The program therefore conforms to the principles of sustainability in manufacturing, production and building processes in all industrial sectors in Palestine and abroad. It aspires to implement the sustainability principle as a foundation for building and development, maintaining human life on this globe without causing harm to future generations. The main objective of the program is to build Palestinian human resources in sustainable engineering. Graduates of this program will have a comprehensive overview of sustainable production. They will be able to integrate sustainability through efficient use of materials, water and energy while decreasing their impact on the environment, and they will gain analytical tools for evaluating and assessing the effect of sustainability on the product life cycle. The program aims to achieve the following specific objectives:
- Qualifying local human resources to manage and operate local industrial establishments.
- Developing production processes and quality control in national industry.
- Providing engineers with analytical tools in the fields of sustainability and cleaner production.
- Increasing the competitive capabilities of local products.
- Enhancing the skills required for the best resource efficiency and utilization of local resources.
- Preserving the environment and avoiding pollution of air, water and soil.
- Establishing scientific research in sustainable production and its applications.
- Spreading awareness of quality and sustainability.
- Exploring technical and engineering aspects of sustainable development.

Table 4 presents some of the courses to be delivered in this program and their type.

Table 4: Courses in the sustainable engineering program
  1. Sustainable Engineering (obligatory)
  2. Energy Efficiency and Renewable Energy (obligatory)
  3. Life Cycle Analysis (elective)
  4. Clean Production (elective)
  5. Water Efficiency and Water & Wastewater Treatment Technologies in Industry (elective)
  6. Special Topics in Sustainable Engineering (elective)
  7. Thesis/Seminar (obligatory)

IV. Role of Education in Sustainable Development

Sustainable development is most often defined as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [18]. Sustainability involves the integration of economic, environmental and social dimensions. The economic aspect defines the framework for making decisions. The environmental aspect recognizes the diversity and interdependence within living systems, the goods and services produced by the world's ecosystems, and human impacts on those ecosystems. The social aspect refers to interactions between institutions and people, human values, aspirations and well-being, ethical issues, and the decision-making process. The three main elements of the sustainability paradigm are usually treated as equally important, and trade-offs among them are possible. Strong sustainability implies that trade-offs among natural, human and social capital are not allowed or are very restricted, while weak sustainability implies that trade-offs are unrestricted or have few limits. Three important findings were reported by the Millennium Ecosystem Assessment (MEA) [19]. Firstly, approximately 60% of the ecosystem services examined are being degraded or used unsustainably.
Secondly, there is established but incomplete evidence that human-caused changes are increasing the likelihood of nonlinear changes in ecosystems, such as disease emergence, abrupt alterations in water quality, the creation of dead zones in coastal waters, the collapse of fisheries, and shifts in regional climate. Thirdly, the harmful effects of the degradation of ecosystem services are being borne disproportionately by the poor, are contributing to growing inequities and disparities across groups of people, and are sometimes the principal factor causing poverty and social conflict. Water, air and food are the most important natural resources to people: humans can live only a few minutes without air, about a week without water, and about a month without food. Water is also essential for our oxygen and food supply; plants, which require water to survive, provide oxygen through photosynthesis and form the base of our food supply chain. Conservation of water, efficient water devices, water recycling and wastewater treatment are some of the aspects to be considered when implementing sustainable engineering. Sustainable engineering principles must be implemented for any sustainable development to succeed. Such principles include the efficient use of natural resources, among them water, soil, rock, and metal and non-metal resources. They also involve efficiency in the use of energy, the use of efficient equipment, and the use of renewable energy resources. Sustainable engineering requires life cycle analysis of products and the use of efficient management tools. In addition, common sustainability metrics need to be studied; such metrics are generally based within certain disciplines such as ecology, economics and physics, and may reflect on other disciplines [20].
On the other hand, there are dozens of environmental performance indicators (EPIs) that can be used to evaluate sustainability. Examples of multi-component methods that allow comparisons at a national level, which is necessary for promoting many types of systemic change, include the ESI and the EMPI. The Environmental Sustainability Index (ESI) uses 76 variables to create 21 indicators of sustainability. The emergy performance index (EMPI; the term "emergy" is a contraction of "embodied energy") differs in omitting the social variables, instead creating a single unit that can be used to describe the production and use of any natural or anthropogenic resource.

V. Conclusions and Recommendations

The foregoing discussion covered the existing engineering programs at Palestinian universities. The diversity of these programs fulfills most of the needs of the Palestinian people, but the absence of concentrated specializations makes it difficult to meet global sustainability objectives. The main problems faced in Palestine are related to energy, water and materials; the acute shortage of these resources makes it necessary to include sustainable engineering in most engineering educational programs. The second part of the paper discussed the existing engineering programs according to the statistics of the Palestinian Ministry of Higher Education, and presented two newly developed models for education. The first is a model for engineering education emphasizing the inputs, outputs, dynamics and mechanisms of this system. The second model connects engineering education to industry, forming an academia-industry partnership; this partnership is one of the main engines capable of driving development in industry and the economy. The third part of the paper handled the integration of sustainability in engineering education, showing some mechanisms and results.
The fourth section explored the impact of this integration with some indicators. It is recommended to distribute the results of this study to higher education institutions and production companies in Palestine for further discussion and feedback toward further improvements.

Acknowledgment

The authors wish to acknowledge the team of the Tempus project "Middle Eastern Partnership for Sustainable Engineering", whose results have been widely referenced in this paper.

References

[1] CIA (2013). "The World Factbook, Middle East: Saudi Arabia". Retrieved November 2013, from www.cia.gov
[2] MOHE (2011). Palestinian Higher Education Statistics. Ministry of Education and Higher Education.
[3] WBG (2006). West Bank and Gaza Education Sector Analysis: Impressive Achievements Under Harsh Conditions and the Way Forward to Consolidate a Quality Education System. World Bank Group, Middle East and North Africa, Human Development Group.
[4] Fronk, C., Huntington, R. L., et al. (1999). "Educational attitudes and trends in Palestine." Educational Studies 25(2): 217-243.
[5] MAS (2012). Employment Opportunities and Market Assessment for University Graduates in Palestine: The Relevance of HQSF Fields of Support. Palestine Economic Policy Research Institute.
[6] UNESCO (2003). Higher Education in the Arab Region 1998-2003. Document prepared by the UNESCO Regional Bureau for Education in the Arab States, Paris, France.
[7] Palestinian National Authority; Palestine Energy Authority (2009). The Power Sector: Letter of Sector Policy, Ramallah. Economical, Technological and Environmental Impact Assessment of National Regulations and Incentives for RE and EE: Country Report Palestine, RCREEE.
[8] Palestinian National Authority; Palestinian Central Bureau of Statistics (2008). Energy Consumption in the Palestinian Territory. Annual report, Ramallah.
[9] Palestinian Energy Authority (2009). Energy Efficiency Improvement & Greenhouse Gas Reduction Project (EIGER), implemented by the Palestinian Energy Authority in cooperation with UNDP & GEF. Renewable energy projects implemented in Palestine.
[10] The World Bank (2009). Middle East and North Africa Region, Sustainable Development.
[11] Shuval, H. and Dweik, H. (eds.) (2007). Water Resources in the Middle East. Springer, Berlin. 454 pp.
[12] Patel, K. and Vyas, C. (2011). Construction materials management on project sites. National Conference on Recent Trends in Engineering & Technology, Gujarat, India.
[13] Abu Hanieh, A., Abdelall, S. and Hasan, A. (2012). Stone and marble sector in Palestine: model for sustainability. 10th Global Conference on Sustainable Manufacturing, Istanbul, Turkey.
[14] Al Khalifa, A.-J. (July 2011). Economic development of petroleum producing countries in MENA region. Drillers and Dealers, special article.
[15] Ministry of Education (2010). Education Sector and Cross-Section Strategy 2013. January 10, 2010, preliminary draft.
[16] Abu Hanieh, A., Abdelall, S. and Hasan, A. (2012). Stone and marble sector in Palestine: model for sustainability. 10th Global Conference on Sustainable Manufacturing, Istanbul, Turkey.
[17] Survey of local market needs in Palestine. Middle Eastern Partnership in Sustainable Engineering. Tempus IV project number 517065-tempus-12011-1-si-tmpus-jpcr, 2012.
[18] Theis, T. and Tomkin, J. (eds.). Sustainability: A Comprehensive Foundation. http://cnx.org/content/col11325/1.38/ Rice University, Houston, Texas. U of I Open Source Textbook Initiative. PDF generated July 19, 2012.
[19] http://www.maweb.org/en/index.aspx, retrieved 18/4/2013.
[20] Environmental, Health, and Safety (EHS) Guidelines. General EHS Guidelines: Environmental Energy Conservation.
Ahmed M. Abu Hanieh is an assistant professor in the Faculty of Engineering at Birzeit University. He obtained his PhD in mechanical engineering from the Free University of Brussels. He has worked on developing new curricula for mechanical engineering, mechatronics and the master program in sustainable engineering, besides developing new community-based learning methods. His skills include active control of vibrations; operations and technology management; robotics; and analog and digital control of mechanical systems. Abu Hanieh is the author of two books and eight journal papers and has participated in more than 25 international conferences. He participated in establishing several specialized societies and forums.

Afif A. Hasan is dean of the Faculty of Engineering and a professor in the Mechanical Engineering Department at Birzeit University. He obtained his PhD in chemical engineering from the University of Utah in the USA. He worked as a faculty member at the University of Reading in England, and in Palestine he has worked at Al-Quds University, An-Najah University and Birzeit University. Afif Hasan is a member of the International Solar Energy Society (ISES) and the World Renewable Energy Network (WREN). He has participated in several research projects and international conferences. Prof. Hasan is the author of 17 journal papers and more than 25 conference papers.

Sadiq A. Abdelall is an assistant professor in the Faculty of Engineering at the Islamic University of Gaza. He obtained his PhD at the Institute of Machine Tools and Factory Management at the Technical University of Berlin, Germany. In 2011 he was appointed TU Berlin's coordinator of the EU Tempus project titled Middle Eastern Partnership in Sustainable Engineering, and in 2012 he was appointed to another EU Tempus project, Modernising Undergraduate Renewable Energy Education: EU Experience for Jordan. Since 2010 he has organized the annual Global Conference on Sustainable Manufacturing, which is sponsored by the CIRP academy.
Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015

Odour Assessment Decision Tree for Odour Sampling and Measurement

Ros Nadiah Rosli
School of Civil Engineering, Universiti Sains Malaysia, 143000 Nibong Tebal, Pulau Pinang, Malaysia, nadiahros@yahoo.com

Abstract: There are various methods in the world for sampling and analyzing odour. Whatever method or technique is used, it should follow the relevant standard. New researchers or people involved in odour management may lack knowledge of how to choose a proper or suitable technique to assess odour. In Malaysia, there is no specific method for handling odour problems; the country currently follows the European standard, which uses the olfactometer to analyze odour. Since the olfactometer is expensive to install, a cost-effective odour threshold test developed in Japan has been introduced, and a new method from Canada, the SM100 olfactometer, is also available in the laboratory. Comparisons between these methods are studied and their suitability for use is presented. For odour sampling, three types of source need to be considered: point, area and volume. Proper techniques should be applied when sampling at the various sources. This paper provides guidance on the sampling methods, test procedures and data analysis of several methods, so that newcomers can choose a technique based on the available instruments and environmental conditions.

Index Terms: guideline, odour sampling, odour measurement

I. Introduction

Each country has its own legislation and regulations on odour emission and permissible standards [1], [2]. A review paper on odour legislation by [2] summarized the legislation in Canada, the United States, Australia, New Zealand, Europe and Asia.
There have been various legislative responses in industrialized nations to the need to protect air quality from industrial odour emissions [3]. Legislation is important for each country to make sure its citizens are free from unpleasant odours and that each industry, farm or factory controls its emissions. To fulfill the legislation, the odour produced by each source must be measured to ensure it is within the limit, and agreement on the techniques for odour measurement in each country should be reached before the legislation is implemented. Until now there have been no specific guidelines on the equipment and techniques used for odour sampling, as mentioned by [4]. However, sampling and measurement of odour generally refer to the European standard EN 13725:2003, Air quality - Determination of odour concentration by dynamic olfactometry [5], the Offensive Odour Control Law of Japan [6] or the VDI guideline from Germany [7], as these standards provide information about odour sampling. The standards describe the sampling materials and the possible sampling methods and procedures needed to keep the olfactometric characteristics of the sample as constant as possible from the moment of sampling to analysis. The indications given are not sufficient, however, leaving much debate on the choice of sampling procedure and equipment [4]. In Malaysia, the Malaysian standard MS 1963 [8], which follows the European standard EN 13725, is used for the determination of odour concentration. Odours are a common problem around the world, so it is not surprising that many techniques are used to measure them. The purpose of odour sampling is to obtain information on the typical characteristics of the odour sources by collecting a suitable volume fraction of the effluent [4]. Before sampling the odour, several characteristics of the odour source need to be considered.
These are the geometrical configuration (point, area or volume source), the equipment suitable for that source, and the duration of sample storage before analysis. The aim of this study is to provide information about the techniques of odour sampling and measurement, illustrated by a decision tree. The decision tree can be used by engineers, researchers and other individuals interested in the field.

II. Static and Dynamic Odour Sampling

There are two types of sampling method, static and dynamic [4]. In static sampling the sample is held in a suitable enclosed container (either a canister or a sampling bag), which is connected to the measurement device at a later moment. An example of static sampling is when the odour is sampled on site and brought to the laboratory for analysis: the odour sample is drawn into a container, for example an eco-drum, brought to the laboratory, and analyzed using the olfactometer [1] or the triangular odour bag method [9]. Some techniques measure the odour directly from the emission without collecting a sample; this is usually implemented with an in-field olfactometer. The in-field olfactometer is an example of dynamic sampling, as the dilutions needed to determine the odour threshold are prepared directly by the equipment. Examples of in-field olfactometers are the Nasal Ranger [10] and the Scentroid SM100 [11], a newly developed in-field olfactometer from Canada. Dynamic sampling has the advantage of minimizing sample modification due to adsorption on the sampling equipment or chemical reactions between the compounds contained inside a sampling bag. This method provides an air flow to be analyzed, drawn directly from the source to the measurement device.

III.
Sampling Technique

A study by Jiang and Kaye [12] mentioned two different types of emission: those from point sources and those from area sources. The authors state that point source emissions are typically from a stack with a known flow rate or from a vent of a processing building [12], while area source emissions typically come from a liquid or solid surface of large area. Another type of source, discussed by Zarra [4], is the volume source: typically a building from which odours escape through naturally ventilated ducts as well as through windows, doors or other openings [4].

A. Odour sampling at point sources

During odour sampling at hot emission sources, condensation may occur inside the sampling bag if the odour sample and the sampling container are at different temperatures [13]. Condensation affects the odour concentration by reducing its value [13]. This phenomenon usually occurs at factory stacks, where the emission is hot (>50 °C). To avoid condensation, proper sampling equipment should be used, for example a dilution sampler [13]. The dilution sampler operates dynamically, diluting the odour sample with odourless gas during sampling [14].

B. Odour sampling at area sources

Odour sampling is usually conducted using the eco-drum [8] or other sampling equipment such as a vacuum bottle, handy pump or diaphragm pump [6]. A sampling bag, for example a Nalophan bag, is inserted inside the eco-drum before sample collection starts. The Nalophan bag is recommended as a satisfactory storage material [5], and sample loss during storage is minimal [12]. After sampling, the sample should be analyzed within 12 hours, and analysis after more than 30 hours should be avoided [15].

C. Odour sampling at volume sources

Odour from volume sources is typically sampled from emissions from a building.
The characteristics of such odour emissions make it challenging to measure a representative odour concentration. Sampling of ambient odour inside the building can be done using a depression pump or any other equipment used to sample at point sources [16]. If the odour emission is collected at the boundary or in the surrounding environment, the same odour sampling procedure is conducted, using the eco-drum or other odour sampling equipment.

D. Problems related to odour sampling

Researchers [15], [17] and [4] have found that different odour bag materials behave differently with respect to storage time. Bokowa [13] also advised evaluating samples as soon as possible, especially samples collected from sources where hydrogen sulphide is expected to be present. The storage age of the odorous sample inside the sampling bag affects the odour concentration, as reported by van Harreveld [15]. He reported that the odour concentration in a Nalophan bag remains unchanged from 4 to 12 hours after sampling [15]; after 30 hours, however, it decays to about half of its value at age 4 hours. The same author also compared bag materials, metal versus Nalophan. The results show that the odour concentration in a metal bag at age 4 hours is significantly lower than in a Nalophan bag, by a factor of 6; the decay in the metal bag seems to occur shortly after sampling, approximately between ages 0 and 4 hours. Van Harreveld [15] also measured the effect of different neutral gas types used for pre-dilution and found no significant effect on the odour concentration or decay characteristics. Iwasaki [17] investigated the stability of samples (from a fish meal plant, chocolate plant, incinerator and printing plant) taken from the site, with six panelists examining the samples. The results show that the odour concentration remained almost the same for 10 days, after which it started to decrease.
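The reported decay figures can be turned into a rough rule of thumb for storage time. A minimal sketch, assuming (purely as an illustration, not a claim from [15]) first-order exponential decay calibrated so that the Nalophan-bag concentration holds its value up to 4 hours and falls to half of that value by 30 hours:

```python
import math

def decay_fraction(hours_elapsed, t_stable=4.0, t_half=30.0):
    """Fraction of the age-4 h odour concentration assumed to remain
    after `hours_elapsed` hours, under an illustrative first-order
    decay model fitted to the half-loss at 30 h reported for Nalophan
    bags. Before `t_stable` hours the concentration is taken as
    unchanged, matching the reported 4-12 h stable window only loosely."""
    k = math.log(2) / (t_half - t_stable)  # decay constant per hour
    return math.exp(-k * max(0.0, hours_elapsed - t_stable))

# Analyzing within the 12 h deadline keeps the modelled loss modest,
# while waiting 30 h loses half the signal by construction.
print(round(decay_fraction(12), 2))
print(round(decay_fraction(30), 2))  # 0.5
```

Under this assumed model, analysis within the recommended 12 hours retains roughly 80% of the age-4 h concentration, which is consistent with the paper's advice to analyze early and avoid the 30-hour mark.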
Unfortunately, the detailed procedure of that study is not available in the paper [17]. In the study conducted by Zarra [4], the odour concentration determined by dynamic olfactometry in air samples of odorous compounds decreased significantly with elapsed time, especially beyond the 30 hours permitted by the European standard [5]. The European standard [5] concluded that storage in Teflon bags is the most stable, while Nalophan bags are less reliable. In that study, the highest repeatability and accuracy of the sensory measurements was found when using Teflon bags and always carrying out the analysis at the same elapsed time after the sampling phase, specifically within 14 hours [4]. There is also the possibility of adsorption during sample storage; therefore specific sampling materials are encouraged, for example an odourless sampling container, together with reduced storage time [4]. This minimizes interaction between the sample gas and the sampling container.

IV. Odour Measurement Methods

There are two types of odour measurement method: sensory (the human nose) or instrumental, using equipment such as the electronic nose, gas chromatograph or diffusion tube [10]. The sensory method uses a panel of assessors to obtain the odour concentration value, while instrumental measurement requires a column or standard to determine the chemical composition of the odorous compound. This study focuses on the sensory method because it is user friendly and sensitive, relying on the human nose.

A. Olfactometer

A sensorial technique that uses a dilution instrument is called olfactometry; the olfactometer presents the odour at different concentration levels. Nowadays the dynamic olfactometer is widely used, following the European standard [4], [5].
It is called dynamic because the sample is diluted and mixed automatically inside the olfactometer before flowing out through the sniffing port [4]. Generally, the olfactometer has two standard operating modes, the "yes/no" method and the "forced-choice" method [4], [18], [19]. In the first, the panelists sniff from a single port and report whether an odour is detected or not; the port randomly presents either the odour sample diluted with odourless air or odourless air alone. In the second, two or more sniffing ports are used: the odour sample is presented at one sniffing port and odourless air at the others, so the panelist has to compare the ports and choose the one at which the odour is present. The differences between the two types of olfactometer are illustrated in Figures 1(a) and (b). The initial set-up cost of an olfactometer is approximately $50,000, including all the equipment required for odour assessment [10], [20]. The maintenance cost can also be high, because other equipment (for example a flux hood and an air supply unit) must be coupled to the olfactometer during odour measurement [13].

Figure 1: Types of dynamic olfactometer: (a) yes/no method, (b) forced-choice method.

B. Triangular odour bag

In Japan, the work leading to the triangular odour bag method was published in 1960 by N. A. Huey [17], using the A.S.T.M. syringe method. In its early development the method had several disadvantages, such as the small volume of the syringe (100 ml), adsorption of odour on the syringe surface, the long preparation time for highly diluted samples, an unnatural feeling when sniffing odour from a syringe, and the influence of panel members' preconceptions [17]. Therefore, further research was done to improve the method [17], [21].
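Both olfactometer presentation modes described in Section IV.A ultimately yield, for each panelist, the dilution factor at which the odour is just detectable. A minimal sketch of how those individual thresholds are combined into a single odour concentration, assuming the EN 13725 convention of taking their geometric mean (the panel screening and retrospective rejection steps the standard also requires are omitted):

```python
import math

def odour_concentration(panel_thresholds):
    """Odour concentration (in European odour units per m3) as the
    geometric mean of each panelist's individual threshold estimate,
    i.e. the dilution factor at which that panelist just detects the
    odour, in the spirit of EN 13725 dynamic olfactometry.
    Panelist screening/rejection is deliberately left out."""
    logs = [math.log10(z) for z in panel_thresholds]
    return 10 ** (sum(logs) / len(logs))

# Example: four panelists detect the odour at dilutions 128, 256, 512, 256.
print(round(odour_concentration([128, 256, 512, 256])))  # 256
```

The geometric (rather than arithmetic) mean is the natural choice here because dilution series are presented in multiplicative steps, so panelist thresholds are roughly log-normally distributed.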
In 1972 the triangular odour bag was developed, and it was incorporated into the Offensive Odour Control Law in 1995 [17], [21], [22]. To reduce the disadvantages of the earlier A.S.T.M. syringe method, the triangular odour bag method uses 3 L plastic bags instead of syringes [17]. The method requires three sampling bags per dilution per panelist. The three bags are labeled and filled with odourless air, and the odour sample is randomly injected into one of them using an air-tight glass syringe [21]. The panelists are asked to identify which bag contains the odour; a total of six panelists is usually involved in the assessment. In this way the odour bag method eliminates the A.S.T.M. syringe method's disadvantage of having the syringe as the dilution medium, with panelists instead required to choose the one bag out of three that contains the odour [17].

C. Odour threshold test

In Malaysia, a new method similar to the triangular odour bag has been developed as an alternative to the olfactometer [23]. The odour threshold test is an easy and simple odour measurement with lower cost compared to the olfactometer. It differs from the triangular odour bag method in using materials and equipment available in Malaysia; panelists are selected from local students and technical staff of the university, whose odour sensitivity differs from that of Japanese panelists, since each person has a different sense of smell. The set-up cost of the odour threshold test method is approximately $9,000, including sampling and odour bags and equipment, namely an air supply unit, air-tight glass syringes and activated carbon. The estimate was based on the preliminary experiments [23] conducted to develop the method.

D. In-field measurement

A field olfactometer is usually used to determine the odour concentration in the ambient environment.
A paper by Bokowa [13] mentioned that ambient air odour measurement is not suitable for the dynamic olfactometer, because the presence of a variable odour background in the field may strongly affect panel response; the field olfactometer is therefore suitable and effective for ambient odour. This method nevertheless has a disadvantage: records are often not filled in correctly, and the community can easily lose interest in observing the odour. The cost of an in-field olfactometer is approximately $550 [10].

V. Decision Tree for Sampling and Measuring Odour

Sampling and measuring of odour are done to check the concentration of odour from sources. Many techniques are used around the world, either to sample or to measure odour, and some of them are still under development. Figure 2 shows the decision tree for sampling and current odour measuring tools, especially the tools highlighted in this study. Generally there are two types of analysis: carrying out the odour measurement on site (in-field analysis) or bringing the odour sample to the laboratory for measurement. If an assessor requires a low-budget (<$50,000) in-field analysis, the Nasal Ranger is suggested, since its price is between $5,000 and $6,000. The triangular odour bag, or the newly developed odour threshold test introduced in this research, can be applied if the assessor wishes to analyse the odour concentration at a lower price; these methods require simple laboratory apparatus, such as an air-tight syringe and 18 odour bags for each odour assessment session. The sampling apparatus and sample storage technique are decided according to previous researchers [9], [24] and the preliminary experiments conducted [23].

VI. Conclusion

Odour can be measured using various methods, each with a different procedure and equipment.
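The budget-driven branching of the decision tree described above can be sketched as a small helper. `suggest_method` is a hypothetical function, the thresholds are the approximate costs quoted in this paper (not vendor pricing), and the ordering of the two low-cost laboratory methods is an assumption for illustration:

```python
def suggest_method(in_field, budget_usd):
    """Illustrative encoding of the decision tree's budget branch.

    Costs assumed from the text: dynamic olfactometer ~$50,000,
    odour threshold test ~$9,000, Nasal Ranger ~$5,000-6,000,
    and a low-cost in-field olfactometer ~$550."""
    if in_field:
        # In-field analysis: dilution is performed directly at the source.
        return "nasal ranger" if budget_usd >= 5000 else "low-cost field olfactometer"
    # Laboratory analysis of a collected (static) sample.
    if budget_usd >= 50000:
        return "dynamic olfactometer (EN 13725)"
    if budget_usd >= 9000:
        return "odour threshold test"
    return "triangular odour bag"

print(suggest_method(False, 60000))  # dynamic olfactometer (EN 13725)
print(suggest_method(True, 5500))    # nasal ranger
```

Such an encoding also makes the decision tree easy to extend, for example with the sample-storage deadline or source-type branches discussed in Sections II and III.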
This study highlighted sensory analysis, which uses the human nose to measure odour concentration. Instrumental odour measurement methods are also available around the world, but they are not discussed in the present study. Odour measurement is usually done by human olfactometry rather than by instrument, because the nose is more sensitive than instrumental techniques. The decision tree provided in this study can be used by newcomers to this field, especially in Malaysia, where information and knowledge on odour measurement are very scarce.

Acknowledgment
The author wishes to thank everyone involved in this project, especially my teammates and supervisors. This work was supported under the Exploratory Research Grant Scheme (ERGS) by a grant from the Ministry of Higher Education, Malaysia (MOHE).

References
[1] Van Harreveld, A. P. (2003). "Odor regulation and the history of odor measurement in Europe". Odor Measurement Review, Japan Ministry of the Environment, pp. 54-61.
[2] Bokowa, A. H. (2010). Review of odour legislation. Chemical Engineering Transactions, vol. 23, 2010.
[3] Sironi, S., Capelli, L., Dentoni, L. and Del Rosso, R. (2013). Chapter 6: Odor regulation and policies, in Odour Impact Assessment Handbook. John Wiley & Sons, Ltd.
[4] Zarra, T., Naddeo, V., and Belgiorno, V. (2013). Chapter 3: Instruments and methods for odour sampling and measurement, in Odour Impact Assessment Handbook. John Wiley & Sons, Ltd., UK, pp. 33-83.
[5] CEN, Committee for European Normalization. (1999). prEN 13725: Proposed draft standard: Air quality – determination of odour concentration by dynamic olfactometry. Brussels, Belgium.
[6] MOE, Ministry of Environment, Government of Japan. (2003). The Offensive Odour Control Law in Japan. Available online: http://www.env.go.jp/en/laws/air/offensive_odor/all.pdf.
[Accessed on 12 May 2014].
[7] Frechen, F. B. (1997). Odour measurement and odour policy in Germany. Available online: http://www.orea.or.jp/en/pdf/g_sydney_1997_sd.pdf. [Accessed on 12 May 2014].
[8] Malaysian Standard, MS. (2007). MS 1963:2007 Air quality: Determination of odour concentration by dynamic olfactometry. Department of Standards Malaysia.
[9] Saiki, K. (2003). Standard odors for selection of panel members. Odor Measurement Review, Japan Ministry of the Environment, pp. 102-105.
[10] Sheffield, R., and Ndegwa, P. (2008). Sampling agricultural odors. University of Idaho, January 2008. A Pacific Northwest Extension publication.
[11] IDES Canada Inc. (2014). Scentroid SF450 flux chamber [Brochure]. Canada.
[12] Jiang, J. and Kaye, R. (2001). Chapter 5: Sampling techniques for odour measurement, in Odours in Wastewater Treatment: Measurement, Modelling and Control. IWA Publishing.
[13] Bokowa, A. H. (2010). The effect of sampling on the measured odor concentration. Chemical Engineering Transactions, 23:43-48.
[14] MEF, Ministry of Environment and Forest. (2008). Guidelines on odour pollution and its control. Central Pollution Control Board, Government of India.
[15] Van Harreveld, A. P. (2003). Odor concentration decay and stability in gas sampling bags. Journal of the Air & Waste Management Association, 53(1):51-60.
[16] Capelli, L., Sironi, S. and Del Rosso, R. (2013). Odor sampling: Techniques and strategies for the estimation of odor emission rates from different source types: a review. Sensors, 13, 938-955.
[17] Iwasaki, Y. (2003). The history of odor measurement in Japan and triangle odor bag method. Odor Measurement Review, Japan Ministry of the Environment, pp. 37-47. Available online: http://www.env.go.jp/en/air/odor/measure/02_1_1.pdf. [Accessed on 17 April 2013].
[18] Sneath, R. W. (2003). Quality control of olfactometry at SRI and Europe. Odor Measurement Review, Japan Ministry of the Environment, pp. 82-87.
[19] Ueno, H., Amano, S., Merecka, B., and Kosmider, J. (2009). Difference in the odor concentrations measured by the triangle odor method and dynamic olfactometry. Water Science & Technology, 59(7).
[20] Brattoli, M., De Gennaro, G., De Pinto, V., Loiotile, A. D., Lovascio, S., and Penza, M. (2011). Odour detection methods: Olfactometry and chemical sensors. Sensors, 11, 5290-5322.
[21] Higuchi, T. (2003). Odour measurement in Japan. East Asia Workshop on Odor Measurement and Control Review. Office of Odor, Noise and Vibration, Environmental Management Bureau, Ministry of the Environment, Government of Japan.
[22] Kamigawara, K. (2003). Odor regulation and odor measurement in Japan. Odor Measurement Review, Japan Ministry of the Environment, pp. 48-53.
[23] Zaman, N. K., Rosli, R. N. and Yaacof, N. (2013). The odour threshold test, a tool for odour assessment: preliminary observations. Proceedings of Current Research in Malaysia (CREAM), Universiti Utara Malaysia.
[24] Heber, A. J., Lim, T. T., Ni, J. and Sutton, A. L. (2000). Odor and gas emission from anaerobic treatment of swine manure. Final report, November 2000.

Rosli, R. N. is an MSc candidate in environmental engineering at the School of Civil Engineering, University of Science Malaysia. In 2012 she received a Bachelor in Civil Engineering with honours from the same university. Since her undergraduate studies in 2010 she has been involved in many research projects and has worked as a part-time research assistant. Her current interests include the sampling and measurement of odour.
Figure 2: Odour assessment decision tree

Journal of Engineering Research and Technology, Volume 2, Issue 4, December 2015

Accurate Quadrature Encoder Decoding Using Programmable Logic
Yassen Gorbounov
Y. Gorbounov is with the Department of Automation of Mining Production, University of Mining and Geology "St. Ivan Rilski", Sofia, Bulgaria, e-mail: y.gorbounov@mgu.bg

Abstract— The quadrature encoder (or incremental detector) is among the most widely used positional feedback devices for both rotational and linear motion machines. It is well known that in conventional circuits for quadrature signal decoding, an error emerges when the direction of the sensor movement changes. In some situations this error can accumulate, but in any case it produces a position error equal to the resolution (one pulse) of the sensor. A corrective algorithm is proposed here that fully eliminates this type of error. It improves on previous research by the author, being much simpler and more resource-efficient without compromising the performance of the device. A Xilinx CPLD platform has been chosen for the experiments. The inherent parallelism of programmable logic devices permits a multi-channel CNC machine to be fully served by a single chip. This approach outperforms the capabilities of any conventional microcontroller available on the market.

Index Terms— Quadrature encoder, incremental encoder, angular position measurement, motion control, programmable logic, parallel algorithms.

I. Introduction
In motion control, the quadrature encoder can be of rotary or linear type, with an optical or magnetic sensing principle, and it gives information about both the relative position and the direction of movement of a shaft. This device is directly connected to the numerical sub-system which controls the motor.
Since it lies in the feedback path, it plays a very important role in keeping the system up and running. A missing pulse can lead to a major miscalculation and can even damage the mechanical drive system. The basic principle of operation is as follows: the two phases A and B of the encoder are displaced by 90 degrees with respect to each other, which allows the phase difference to be used as a sign indicating the direction of movement. The time diagram of these two pulse sequences is given in Fig. 1. If the upper sequence conditionally represents movement in the positive direction, the pulse sequence in Fig. 2 visualizes the outputs A and B for the negative direction. A case of direction reversal is illustrated by the upper two rows of Fig. 3. To augment the resolution of the sensor, the pulse frequency is quadrupled by generating a pulse on every rising or falling edge, and these higher-frequency pulses are divided into two separate channels indicating positive and negative direction of movement (the lower two rows). The transformation of the signals given in Fig. 3 allows easy position measurement using a simple up/down counter, and easy speed measurement using an ordinary low-pass filter.

Figure 1: A time diagram of a positive-direction pulse sequence
Figure 2: A time diagram of a negative-direction pulse sequence
Figure 3: A case of direction change

II. The Essence of the Problem
The conventional electrical circuit element commonly used to determine the direction of movement is the D-type flip-flop. The way it works is depicted in Fig. 4. When DIR = 1 the direction of movement is positive, and when DIR = 0 it is negative. As can be seen in the figure, there is some delay between the actual direction change and the time of sensor activation. It is obvious from the time diagram that the two circled pulses are on the wrong channel.
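The x4 counting principle described above can be modelled in a few lines of software. This is a minimal sketch, not the paper's hardware design: it walks the standard Gray-code sequence of the AB states (00, 01, 11, 10), counting one step for every edge of either channel, i.e. four counts per electrical cycle.

```python
# Gray-code order of the AB states for one electrical cycle, positive direction
SEQ = [0b00, 0b01, 0b11, 0b10]

def step(prev, curr):
    """Return +1, -1 or 0 for one pair of consecutive AB samples
    (x4 decoding: every edge of A or B contributes one count)."""
    if prev == curr:
        return 0
    i, j = SEQ.index(prev), SEQ.index(curr)
    if (i + 1) % 4 == j:
        return +1
    if (i - 1) % 4 == j:
        return -1
    return 0  # illegal two-edge jump (noise or a missed sample)

def count(samples):
    """Accumulate the position from a list of sampled AB states."""
    pos = 0
    for prev, curr in zip(samples, samples[1:]):
        pos += step(prev, curr)
    return pos
```

For example, one full cycle 00→01→11→10→00 yields +4 counts, the reverse order yields −4, and a reversal halfway through cancels out to 0.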
If the two channels are wired to an up/down counter, this will provoke an error of 2 count pulses. This equals the resolution of the incremental encoder, which in some cases may be of significant importance. The general block diagram commonly used for the detector implementation is given in Fig. 5. A corrective circuit is discussed further below that updates the D-type flip-flop appropriately and fully compensates for the erroneous situations. Many solutions exist on the market for decoding incremental encoders. Some are processor-based [4], [8], [14], others are FPGA-based [9], [10], [12], and there are also dedicated integrated circuits [6], [7], to mention just a few. Although the problem has been analyzed deeply and discussed broadly in the literature, none of these sources fully handles all the reversal situations and the arising errors, and thus none of them fully compensates the error. Based on previous research by the author, the aim of this work is to offer an optimized solution to the problem, i.e. full elimination of the error, which is compact, resource-saving and suitable for parallel implementation using programmable logic devices (PLD).

III. Definition of Erroneous Situations
There are 8 situations that can lead to an error during direction reversal. Four of them happen on the negative clock transition (see signal B in Fig. 4) and are shown in Fig. 6. The other four happen on the positive clock transition, shown in Fig. 7.

Figure 4: A conventional direction identification circuit and the emerging error
Figure 5: The conventional device block diagram
Figure 6: The reversal situations and emerging errors during the negative edge
Figure 7: The reversal situations and emerging errors during the positive edge

Each of them is assigned a letter from the Greek alphabet, namely α, β, γ, δ, ε, ζ, η and θ.
All the situations can be described as follows:
α – the original movement is forward; the system reverses after the second positive edge. The performed transition is forward-backward.
β – the original movement is backward; the system reverses after the second positive edge. The performed transition is backward-forward.
γ – the original movement is forward; the system reverses after the first positive edge. The performed transition is forward-backward.
δ – the original movement is backward; the system reverses after the first positive edge. The performed transition is backward-forward.
ε – the original movement is forward; the system reverses after the second negative edge. The performed transition is forward-backward.
ζ – the original movement is backward; the system reverses after the second negative edge. The performed transition is backward-forward.
η – the original movement is forward; the system reverses after the first negative edge. The performed transition is forward-backward.
θ – the original movement is backward; the system reverses after the first negative edge. The performed transition is backward-forward.
In some situations the error emerging during direction reversal is accumulative. This is the situation when one of the η or θ cases combines with one of the other six. The probability of this happening is 0.43, as given in Eq. 1, where P denotes the probability and C denotes the combination:

P = ( C(2,1) · C(6,1) ) / C(8,2) = 12/28 ≈ 0.43    (1)

IV. Operation of the Error Correction Circuit
The entire device structure is implemented in Verilog HDL using the behavioral modeling approach [2], [3], [5] and represents in fact a simple finite state machine. For the sake of readability it is presented here in schematic form, which is much easier to follow. The schematic is separated into three major parts. First, the input frequency obtained from the encoder channels A and B is quadrupled in order to increase the sensor resolution (see Fig. 8).
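The stated probability of 0.43 is consistent with reading the accumulative case as a pair of one of the two first-negative-edge situations with one of the remaining six, out of all unordered pairs of the 8 situations; this reading is an assumption, since the original equation is not printed. A quick check with Python's `math.comb`:

```python
from math import comb

# 8 reversal situations; the error accumulates when one of the two
# first-negative-edge cases pairs with one of the remaining six
# (assumed reading of Eq. 1)
favourable = comb(2, 1) * comb(6, 1)   # 12 such pairs
total = comb(8, 2)                     # 28 unordered pairs in total
p = favourable / total                 # 12/28, which rounds to 0.43
```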
This is done using one D flip-flop per channel (a delay element) and a very simple XOR method. In this way a short pulse (AP and BP) is produced on every rising and falling edge. Both channels are combined with an AND logic gate. At the same time a conventional direction detector algorithm works in parallel; its signal contains the error and is processed further. The second-stage schematic is shown in Fig. 9. It comprises one shift register per channel, clocked by the quadrupled frequency of the inputs. The AND gate filters out any two consecutive pulses that may appear on a channel; the presence of such a sequence indicates the need for direction correction. The output of this circuit serves as a "reset direction" signal, which emerges precisely at the time of reversal. The "erroneous direction" and "reset direction" signals are both used in the schematic shown in Fig. 10, which represents the corrective circuit. Its purpose is to set or reset the direction signal at the exact times of reversal. Finally, the quadrupled frequency and the corrected direction signals drive a multiplexer circuit that outputs the up and down signals, which may be used further by an up/down counter.

Figure 8: Frequency quadrupling and erroneous direction detector circuit
Figure 9: Direction reset circuit
Figure 10: Corrective and output forming circuit

The entire device was built and tested using a Xilinx CoolRunner-II CPLD programmable logic device [17]. Simulation results are shown in Fig. 11 to Fig. 18, each corresponding to a situation represented in Fig. 6 and Fig. 7 respectively. Not discussed here is the problem of synchronization of the inputs A and B, which are asynchronous with respect to the internal clock domain of the device and thus may lead to metastability issues. These problems are very well covered in [13], [15], [16].
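The delay-plus-XOR edge detector of the first stage can be sketched as a clocked software model. This is only an illustration of the principle (a registered copy of each channel XORed with the live sample fires for one clock on every edge), not the paper's Verilog source; the sample vectors are invented for the demonstration.

```python
def quadruple(a_samples, b_samples):
    """Model of the first stage in Fig. 8: one delay element (D flip-flop)
    per channel and an XOR, producing a one-clock pulse AP/BP on every
    rising and falling edge of A and B."""
    ap, bp = [], []
    a_d, b_d = a_samples[0], b_samples[0]   # delayed (registered) copies
    for a, b in zip(a_samples, b_samples):
        ap.append(a ^ a_d)   # 1 exactly on the clocks where channel A changed
        bp.append(b ^ b_d)
        a_d, b_d = a, b      # the flip-flops capture the current inputs
    return ap, bp

# One electrical cycle, sampled twice per quadrature state (illustrative)
a = [0, 0, 1, 1, 1, 1, 0, 0, 0]
b = [0, 0, 0, 0, 1, 1, 1, 1, 0]
ap, bp = quadruple(a, b)
```

Merging AP and BP yields four pulses per input cycle, i.e. the quadrupled frequency described in the text.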
A convenient digital filtering method for ignoring glitches is proposed in [9], [14]. The corrective algorithm implementation occupies 14 macrocells and 27 product terms, compared with 17 macrocells and 28 product terms occupied by the design proposed in [1]. That is an improvement of 17.65% in macrocells, and it permits the design to fit in a single function block, in terms of Xilinx CPLD technology. The input clock frequency is required to be at least 4 times higher than the quadrupled frequency, while the upper limit is bounded by the resources of the chosen PLD. The ability for parallel algorithm processing inherent to programmable logic devices allows a multi-axis CNC machine to be fully served by a single chip.

Figure 11: Case alpha, forward-backward transition
Figure 12: Case beta, backward-forward transition
Figure 13: Case gamma, forward-backward transition
Figure 14: Case delta, backward-forward transition
Figure 15: Case epsilon, forward-backward transition
Figure 16: Case zeta, backward-forward transition
Figure 17: Case eta, forward-backward transition
Figure 18: Case theta, backward-forward transition

V. Conclusion
For a high-precision servo drive or motion system with quadrature encoder-based positional feedback, a fast and expensive sensor and decoding logic may be required. The proposed method offers an error-free algorithm that handles all the erroneous situations emerging during direction reversal. The implementation of such a device is cheap and easy using a reconfigurable programmable logic device. Due to the capability for parallel algorithm execution, multiple devices of the proposed type can be implemented on a single chip, be it of CPLD or FPGA type. This approach outperforms the capabilities of any conventional microcontroller available on the market.

References
[1] C. Pavlitov, Y.
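The quoted 17.65% is easy to verify, assuming (as the figures imply) that the saving is measured relative to the 17 macrocells of the earlier design [1]:

```python
# Macrocell usage: 14 for the proposed design vs 17 for the design in [1]
improvement = (17 - 14) / 17 * 100   # percentage saving relative to [1]
```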
Gorbounov: Precise incremental decoding device. In: EDPE 2005, Dubrovnik, Croatia, ISBN-10 9536037432, 2005.
[2] M. Mano: Digital Design, 3rd ed., Prentice Hall, ISBN-10: 8120320956, 2002.
[3] S. Palnitkar: Verilog HDL, 2nd ed., Prentice Hall, ISBN-10: 0132599708, 2003.
[4] H. Toliyat, S. Campbell: DSP-Based Electromechanical Motion Control, ISBN 0-8493-1918-8, CRC Press, 2004.
[5] C. Pavlitov, Y. Gorbounov: Programmable Logic Devices in Electromechanics (book in Bulgarian), Technical University of Sofia, ISBN 978-954-438-645-0, 2007.
[6] LS7184N Quadrature Clock Converter and Decoder, LSI Computer Systems Inc., 1235 Walt Whitman Road, Melville, NY 11747-3010, USA, 2015.
[7] LS7266R1 24-bit x 2 Axis Dual-Axis Quadrature Counter, LSI Computer Systems Inc., 1235 Walt Whitman Road, Melville, NY 11747-3010, USA, 2009.
[8] P. Panchal, B. Land: "Incremental encoder signal analyzer", project report, School of Electrical and Computer Engineering, Cornell University, 2012.
[9] K. Chapman: Rotary Encoder Interface for Spartan-3E Starter Kit, application note, Xilinx Ltd., Feb. 2006.
[10] J. Nicolle: Quadrature Decoder, fpga4fun web site, http://www.fpga4fun.com/quadraturedecoder.html, 2013.
[11] F. Jacob: Handbook of Modern Sensors: Physics, Designs and Applications, Springer, ISBN-10: 8181284321, 2006.
[12] P. Alfke, B. New: Quadrature Phase Decoder, application note XAPP012, Xilinx Ltd., Nov. 1995. Available at: www.xilinx.com/support/documentation/application_notes/xapp012.pdf
[13] J. Stephenson, D. Chen, R. Fung, J. Chromczak: Understanding Metastability in FPGAs, white paper WP-01082-1.2, Altera Ltd., Jul. 2009.
[14] I. Clark: Intelligent Incremental Encoder System Development, May 2009. Available at: http://www.secondvalleysoftware.com/documentation/pdfs/encoder.pdf
[15] D. Kinniment, K. Heron, G. Russell: Measuring Deep Metastability, Newcastle University, UK. Available at: http://www.async.org.uk/david.kinniment/research/papers/async2006.pdf
[16] A. Cantoni, J. Walker, T.
Tomlin: Characterization of a flip-flop metastability measurement method, IEEE Transactions on Circuits and Systems, vol. 54, no. 5, DOI: 10.1109/TCSI.2007.895514, May 2007.
[17] CoolRunner-II CPLD Family, application note DS090 (v3.1), Xilinx Ltd., Sep. 2008. Available at: http://www.xilinx.com/support/documentation/data_sheets/ds090.pdf

Yassen Gorbounov received a PhD degree from the Faculty of Automation at the Technical University of Sofia, Department of Electrical Drives Automation, in 2013. He is a chief assistant professor at the University of Mining and Geology "St. Ivan Rilski", Sofia, and delivers lectures in "Measurement of Non-electric Quantities", "Microprocessor Systems" and "Digital Systems". He is a member of the IEEE, Region 8, the Federation of the Scientific and Engineering Unions in Bulgaria (FNTS) and the John Atanasoff Union of Automation and Informatics (UAI). He is an author or co-author of over 20 journal and conference papers and a co-author of 1 book. His research interests include automatic control of electrical drives, switched reluctance motor and generator control, application of neural networks and fuzzy logic for motor control, parallel processing algorithms with programmable logic devices, and digital control of electromechanical systems.

Journal of Engineering Research and Technology, Volume 2, Issue 1, March 2015

Strategies for Safety and Productivity Improvement
Lina Ahmed Abuhamra¹ and Adnan Ali Enshassi²
¹Researcher, MSc in Construction Management, Department of Civil Engineering, Islamic University, Gaza, Palestine. E-mail: lina.a.abuhamra@gmail.com
²Professor, Faculty of Engineering, Islamic University of Gaza, Palestine. E-mail: enshassi@iugaza.edu.ps

Abstract— The aim of this paper is to study the connection between occupational health and safety (OHS) and increased employee productivity in the construction industry from the point of view of contractors in the Gaza Strip.
This was done by identifying strategies that effectively promote both safety and productivity during a construction task. In this study, a quantitative research approach was adopted as the main statistical component. The survey approach (descriptive and analytical survey) was chosen and conducted through face-to-face interviews and a written closed and semi-closed questionnaire for a chosen sample of forty-three people from the construction population. After validity and reliability tests and descriptive analysis, the relative importance index (RII) test was conducted to determine the relative importance of the various factors. Strategies that can be followed to effectively affect safety and productivity resulted from the survey feedback. These strategies fall under five major groups: planning, training, monitoring, communication skills, and inspection. Results indicated that the factor "training workers to carry out works properly, especially in new types of work" was ranked in 1st position with regard to its importance in sustaining the safety and productivity of a project. This factor belongs to the training group. The research is confined to the safety-productivity relationship in the public construction environment in the Gaza Strip, not the West Bank. The small sample size of the survey is probably not representative of the general population of contractors in the Gaza Strip, although most of the results were reasonably acceptable and related to the literature review. More research is needed to understand the topic, since the literature on safety and productivity in Palestine and the surrounding region is very limited. Construction companies need to substantially improve OHS as well as improving construction productivity and reducing costs. Contractors need to plan a strategy to achieve that, and need to move from strategies to implementation.
In other words, contractors are recommended to act strategically to protect workers by continuously identifying hazardous conditions and by training and monitoring.

Index Terms— Construction, safety, productivity, strategy, safety plans.

I. Introduction
To successfully complete a modern construction project, managers must ensure that the facility is delivered on time and under budget while meeting specified quality requirements and acceptable safety standards [1]. Productivity is one of the most important factors affecting the overall performance of any organization, large or small [2]. At the same time, the productivity of the various trades in construction is the basis for estimating the time and cost required to complete a construction process [3]. Siriwardana and Ruwanpura [4] said that improving labor productivity is an effective approach to improving the overall productivity of the industry. It is therefore vital for construction managers and engineers to understand how safety and productivity are interrelated [1]. A manager in Turkey with long and extensive international experience expressed the view that workers in Mediterranean countries are especially unlikely to take the required safety measures, even if management insists that they do so. In contrast, a site manager of a company with a joint venture with an American firm stated that their site engineers successfully control the safety situation by random stops and checks of work operations throughout the day [2]. It must be recognized that construction accidents are a major element of many human tragedies; they demotivate workers, disrupt site activities, delay project progress and adversely affect the overall cost, productivity and reputation of the construction industry [5]. The relationship between safety and productivity is clear [6]. If the workplace is poor in health and safety, it will affect the individual, the workplace and the community.
It will reduce productivity [7]. It is therefore important to develop working cultures in a direction which supports health and safety at work, promotes a positive social climate and smooth operation, and enhances productivity [8]. The objective of this paper is to identify strategies that effectively promote both safety and productivity during a construction task.

II. Literature Review
The project objectives are to achieve production of high quality, on time, on budget and with zero accidents. These objectives are not easy, as construction sites are busy places where time pressures are always present and the work environment is ever changing [9]. Hallowell [1] said that cost, schedule, quality and safety are in conflict most of the time. Leaders around the world increasingly recognize that a well-managed safety system provides an operational strategy to improve overall management. In recent years a significant number of major organizations have discovered that applying the tools and techniques of good safety management gives them not only reduced injuries and illnesses but also measurable improvements in efficiency, quality, and productivity [10]. Roberts [11] argued that there must be a strategy that makes significant increases in productivity and efficiency while reducing accidents and creating strong awareness of safety in the workplace. It also drives cost reduction and overall greater profitability. Some factors, such as distractions in the work environment and human error, have a negative impact on safety and productivity, while other factors, such as planning, communication and teamwork, have a positive impact on both [1].
Chapman and Butry [12] said that management practices affect productivity over the life cycle of a construction project in a number of ways, including planning; resource supply and control; and supply of information and feedback. Human resources practices are important to project and safety management. These include: giving incentives based on an individual's safety performance; meting out punishment; providing safety training; maintaining close communication and feedback; allowing workers to participate in safety matters; management commitment; evaluating workers based on their safety performance; and providing welfare benefits [13]. Altabtabai studied the general concept of safety culture and indicated that the most important factors in construction site accidents were management policies (such as safety meetings, training, and supervisory attitudes and messages) and risk acceptance by workers [14]. The weekly work plan meeting promotes two-way communication and team planning to share information on a project in an efficient and accurate way. It can improve safety, quality, work flow, material flow, productivity, and the relationships among team members [15]. HSE [7] insisted that health and safety should be treated as an integral part of productivity, competitiveness and profitability. Hammad et al. [16] concluded that the most effective ways to improve productivity and safety are analyzing the entire construction process in detail; providing better planning to mitigate the impact of work changes and to eliminate the loss of time that results from imprecise planning; training for supervisors and the crew; regular meetings; and safety planning. Each construction project has unique problems and challenges, so planning should lead to improved safety performance to ensure high production.
To achieve that, managers should identify in advance any special equipment, tools, or safety devices needed to do the job efficiently and safely. In addition, detailed planning helps to reduce accidents by eliminating the crisis situations which can occur when a crew is suddenly confronted with an unplanned situation [17].

III. Methodology
Forty-three questionnaires were distributed to randomly selected contractors to obtain their opinion, on a five-point Likert scale, about the strategy that should be taken to improve safety and productivity in the Gaza Strip. All questionnaires were returned and completed for quantitative analysis. A well-designed questionnaire was developed for the study with mainly closed-ended questions and some open questions. The questionnaire was built in three sections covering the main questions of the study. The first section relates to demographic information about the respondents and the company profile. The second section relates to the importance of the safety topic in the company and also includes some questions about labor productivity. The third section relates to strategies that can be followed to effectively affect both occupational safety and productivity; it includes 5 main groups with 26 factors. The five groups are planning, training, monitoring, communication skills and inspection. These were developed from the interviews and from the factors mentioned by Hallowell [1]; Volkman [18]; Chapman and Butry [12]; Lai et al. [13]; Whiting and Bennett [10]; Walsh and Sawhney [14]; Salem et al. [15]; Levitt and Samelson [17]; HSE [7]; Roberts [11]; Hammad et al. [16]; National Business Group on Health [19]; and Peng et al. [20]. Before distribution of the questionnaire, its face validity was checked by discussing a draft with a group of professors and experts as well as a statistician.
After that, a pre-test of the questionnaire was conducted with five respondents drawn from colleagues and key decision-makers (site engineers and project managers). At its core, pre-testing was conducted to make sure that people could understand the questions and to verify the completeness of the questionnaire. Many improvements were implemented after the pre-test feedback: the respondents recommended changing the answer options in the questions of section two, besides modifying the wording of some questions in sections two and three to remove confusion and ambiguity they reported. After the pre-test, a pilot study was conducted with fifteen respondents to discard questions not providing useful data and to make final revisions of the questionnaire. The results of the pilot study were reliable; accordingly, the pilot study sample was included in the full main sample. The research population comprises contractors in the public construction sector as the target group. They hold a valid registration with the Palestinian Contractors Union (PCU) in the Gaza Strip and are classified in the first class. Contractors were selected from the first class because they usually work on large projects, and thus occupational safety is supposed to be part of their work plan. The sample was selected randomly, and the sample size was chosen to provide adequate information on reliability and a certain degree of validity. Forty-three respondents were included in this study. Although the sample size of the survey is probably not representative of the general population of the construction industry in the Gaza Strip, most of the results were reasonably acceptable and related to the literature review, as will be shown.
the relative importance index (rii) test has been adopted in similar studies to determine the relative importance of various factors. in this study, the rii was used to determine the relative importance of the factors in part three based on the contractors' responses. the five-point scale, ranging from 1 (very low importance) to 5 (very high importance), was transformed into a relative importance index. the rii was then used to rank the strategies that would improve both productivity and occupational safety in construction from the point of view of contractors in gaza strip.

iv results and discussion

a. respondent's general information

the respondents' general information is shown in table (1).

b. occupational safety practice in construction sites

1. occupational safety and company's policy

figure 1: occupational safety and the company's policy

table 1: profile of the respondent and the company (frequency; percentage %)
the title position: executive director 3 (6.9); project manager 11 (25.6); site engineer 22 (51.2); office engineer 1 (2.3); supervisor engineer 1 (2.3); safety engineer 1 (2.3); foreman 3 (6.9); procurement engineer 1 (2.3).
years of experience in the field of construction: less than 1 year 6 (14.0); 1 to 3 years 14 (32.6); 4 to 6 years 2 (4.7); 7 to 11 years 6 (14.0); 11 to 15 years 4 (9.3); more than 15 years 11 (25.6).
number of projects implemented over the past three years: less than 10 22 (51.2); 10 – 20 19 (44.2); 21 – 30 2 (4.6); more than 30 0 (0.0).
value of the projects implemented during the last three years ($):
less than 40000 2 (4.7); 40000 – 100000 4 (9.3); more than 250000 0 (0.0); 1 to 3 million 3 (7.0); 4 to 6 million 8 (18.6); more than 6 million 26 (60.5).

figure (1) shows that the majority of respondents (93%) mentioned that occupational safety forms a part of company policies, while the rest of the study sample (7%) believed otherwise and expressed that occupational safety does not form a part of company policy. the result is a good sign for the construction companies classified as first class in gaza strip, as it is evidence of increased awareness of the importance of occupational safety in construction. even so, many companies whose policies include safety do not actually apply that safety policy. a written health and safety policy helps to promote an effective occupational safety and health (osh) program. such a policy should reflect the special needs of the company in terms of safety and should be regularly reviewed and updated.

2. safety program

as shown in figure (2), when asked whether the company has a safety program for each project, (62.8 %) of respondents stated that their companies design a safety program for each project, while (37.2 %) replied negatively. a safety program should be written in a manner that takes into account both safety and productivity. companies that do not have a safety program said that the availability of a safety program for each project depends on the request of the supervision or the owner (the financier of the project); others said that each project manager is responsible for the safety of the site and therefore there is no need for a special safety program. it was also said that all construction projects are similar, so each project would not require a special safety program. furthermore, they considered a safety program useless and costly in view of worker compensation and injury treatment.
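the percentages reported in table (1) and in the figures are simple frequency tabulations over the 43 returned questionnaires. as an illustration, the yes/no split below mirrors the 93%/7% of figure (1), but the helper name and data layout are our own:

```python
from collections import Counter

def tabulate(answers):
    """frequency and percentage per answer category (percentage of all answers, 1 decimal)."""
    counts = Counter(answers)
    n = len(answers)
    return {cat: (freq, round(100 * freq / n, 1)) for cat, freq in counts.items()}

# 40 of 43 respondents answering "yes" and 3 answering "no", as in figure (1)
answers = ["yes"] * 40 + ["no"] * 3
print(tabulate(answers))  # {'yes': (40, 93.0), 'no': (3, 7.0)}
```

the same tabulation, applied per question, yields every frequency/percentage pair in table (1).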
results showed that most companies realize that a safety program is not only beneficial for the employees; it is also a way to gain a competitive edge over the competition.

figure 2: safety program for each project

the reason why many companies in gaza strip have started to consider safety programs is that most projects are funded by international or regional donors. the international donors come from regions where construction safety occupies a top priority in the construction industry. in developed countries it is usual and obligatory for contracting companies to provide safety programs according to project and company size. thus, when donors started to fund construction projects in palestine, they required contractors to provide a safety program for the projects.

3. safety training

figure (3) shows whether the company provides the employees in each project with safety training courses: (60.5 %) replied positively, while (39.5 %) mentioned that project employees did not join any kind of safety training. this result shows that the companies which provide safety training to project employees outnumber those which do not, and it reflects that companies appreciate the important role of safety training in construction. all employees, from manager to worker, are required to attend safety training.

figure 3: companies' safety training

safety training gives employees the opportunity to identify hazards and the best practices to avoid such hazards at the workplace. safety training programs should be offered to meet the current demand of the construction industry. the companies that do not provide a safety training program for staff explained that most projects do not pose a threat to the lives of workers.
in addition, they think that safety training is costly and takes time from the project.

4. planning activities in accordance with the standards of occupational safety

figure (4) describes respondents' answers when asked whether the project is planned and implemented according to safety measures. (55.8 %) of respondents replied yes, while only (4.7 %) replied no. (32.6%) of respondents said that projects are sometimes planned and implemented according to safety measures, and (7%) said that it depends on the request of donors. projects that are planned with safety taken into account cost less and are performed well. in other words, when safety is included in project planning, compensation is reduced while productivity and quality increase. compensation is reduced because planning for safety means that project employees and workers are less exposed to expected hazards, so accidents and their inherent compensations decrease. as found in the literature review, hammad et al. [16] stated that safety planning is an important element for increasing productivity at construction sites. also, saurin et al. said that effective planning for health and safety is essential if projects are to be delivered on time, without cost overrun, and without accidents or damage to the health of site personnel [9]. respondents who said that the project is sometimes planned and implemented according to safety measures may not have enough experience to know the meaning of safety planning. the word "sometimes" could have different meanings in this questionnaire: some respondents may refer to projects planned with safety measures taken into consideration in accordance with the conditions of the contract, while others may have thought that all projects are planned with regard to safety measures, but not 100%.
they might also have thought that the company applies safety measures during planning at different levels of importance according to the size, cost, and importance of the project.

figure 4: planning activities in accordance with the standards

5. safety meetings between owner and contractor

figure 5: safety meetings between owner and contractor

figure (5) shows that (9.3%) of the study sample hold safety meetings with the owner of the project every week, and (18.6%) hold safety meetings with the owner every month. the majority of respondents (58.1%) hold safety meetings with the owner only when they need to, and (7%) of the respondents said that holding safety meetings with the owner depends on the occurrence of serious accidents. figure (5) also shows that (7%) of the respondents never hold meetings with the owner. the safety engineer is responsible for conducting safety meetings periodically with the owner to discuss different topics such as safety rules, expected hazards, corrective actions, accident prevention, and reviews of accidents that have occurred recently. such meetings should be held at least once a month.

6. inspection on occupational safety by the ministry of labor

figure (6) shows that (7%) of the study sample mentioned that there is always inspection of the sites by the ministry of labor. in contrast, (25.6 %) of respondents said that the construction sites are never inspected by the ministry of labor, while (39.5%) said that on-site inspection is not completely absent but occurs only when a report about a particular accident needs to be written. also, (27.9%) of respondents said that the inspection visits occur intermittently.
hassouna [9] explained in the analysis of his questionnaire results that (83%) of the respondents said that there was no governmental institution that follows up safety in construction, enlightens construction employees, applies safety legislation, or helps to improve safety performance on construction sites in gaza strip.

figure 6: inspection on occupational safety by the ministry of labor

the other (17%) of his respondents noted that a representative of the ministry of labor visits their sites, but at widely separated intervals and without serious actions. the role of the government towards construction safety in gaza strip seems poor; there is an inherent need to activate the role of the government to enforce safety in the local construction industry.

7. actions against the contractor in the case of non-compliance with health and safety procedures

figure 7: actions against the contractor in the case of non-compliance with health and safety procedures

figure (7) shows that (62.8%) of the study sample stated that there are strict actions against their companies in the case of non-compliance with health and safety procedures during project implementation, such as receiving a warning message or the imposition of a penalty; the punishment may even reach suspension of work at the site until the contracting company commits to the standards of safety. on the other hand, (37.2%) of respondents said that there are no actions against them if they do not work according to safety standards.
there may be a great need to adopt the approach of imposing sanctions in the case of non-commitment to safety standards, especially when the need to commit to safety standards does not represent an essential part of the company's vision.

8. accidents rate

figure 8: accidents rate in the projects of the company

figure (8) shows that (53.5%) of the study sample believed that the accident rate is decreasing in the projects of their company, while (46.5%) of respondents did not notice whether the accident rate had increased or decreased. the second result gives a serious indicator of occupational safety at construction sites: not observing the accident rate in the workplace means that the issue of worker safety has no importance and is not taken seriously. as reviewed previously in the literature, incidents which lead to accidents and disasters require time and resources to overcome, and even near-miss incidents will usually hurt productivity. moreover, occupational injuries can harm the reputation of a company, decrease productivity, and result in huge costs [2].

9. measures taken to avoid the recurrence of incidents

figure 9: measures taken to avoid the recurrence of incidents

figure (9) shows that (90.7%) of the respondents say that their companies have taken measures to avoid a repetition of the incidents that occurred at construction sites, such as searching for gaps in occupational safety measures and trying to treat the problem, in addition to raising workers' awareness of safety standards, ensuring the availability of all the necessary safety tools, as well as following up and monitoring workers and imposing sanctions on those who do not adhere to safety standards.
on the other hand, (9.3%) of the respondents said that there are no procedures followed by their companies to prevent the repetition of the incidents which occurred at construction sites. they say that there is no need for that because the number of injuries is too small and does not affect the workflow.

c. strategies that can be followed to effectively affect safety and productivity

table (2) presents the rii results from the survey feedback according to all respondents. it concerns strategies that can be followed to effectively affect safety and productivity; these strategies fall under five major groups: planning, training, monitoring, communication skills, and inspection. results indicated that the factor "training workers to carry out works properly, especially in the new types of work" (rii = 88.15) was ranked in the 1st position with regard to its importance in sustaining the safety and productivity of the project. this factor belongs to the training group. in terms of productivity, a skillful worker is a productive one because he performs his tasks on time and with quality. research results prove that orientation of either newly hired workers or regular workers is essential, especially for irregular job tasks, and it helps to avoid discrepancies with safety regulations.
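the rii values in table (2) are obtained by transforming the five-point responses. the paper does not print the formula, but the standard form is rii = Σw / (a × n) × 100, where w is the weight each respondent gives a factor, a = 5 is the highest weight, and n is the number of respondents. a sketch under that assumption (the ratings below are hypothetical, not the study's data):

```python
def rii(ratings, highest=5):
    """relative importance index as a percentage: 100 * sum(ratings) / (highest * n)."""
    return 100 * sum(ratings) / (highest * len(ratings))

# hypothetical five-point ratings for two strategies; ranking follows descending rii
strategies = {
    "drug test for workers": [5, 5, 4, 4, 3],
    "first aid training": [4, 4, 4, 3, 3],
}
ranked = sorted(strategies, key=lambda s: rii(strategies[s]), reverse=True)
print([(s, rii(strategies[s])) for s in ranked])
# [('drug test for workers', 84.0), ('first aid training', 72.0)]
```

applying the same computation per factor over the 43 responses and sorting in descending order produces the ranking shown in table (2).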
table 2: relative importance index (rii) and ranking for each item of the field "strategies that can be followed to effectively affect safety and productivity" (strategy; rii %; group), ordered by rank:
1. training workers to carry out works properly, especially in the new types of work; 88.15; training
2. supervisor should be firm with the contractor in safety conditions because it will positively affect productivity; 86.80; inspection
3. foreman should put daily and weekly work plans and define tools that should be used; this will increase productivity and ensure safety; 86.53; monitoring
4. drug test for workers; 85.60; monitoring
5. scheduling adequate number of workers to complete the heavy tasks, which helps to decrease injuries, as well as to foster a spirit of teamwork and increase productivity; 85.20; monitoring
6. workers should be trained about dealing with changes in working conditions, such as extreme heat, rain and slippery surfaces, to prevent injuries and to get excellent productivity; 83.87; monitoring
7. necessity of coordination between the contractor and the ministry of labor to apply occupational safety standards; 82.27; inspection
8. workplace safety signs maintain the facility and keep workers safe, healthy, and productive; 81.48; communication skills
9. giving workers break time, urging workers to take a rest when they feel tired and fatigued, and not depriving them of holidays; 81.07; monitoring
10. first aid training; 80.20; training
11. a safety engineer at site is necessary to prevent accidents and increase productivity; 80.13; planning
12. managers, engineers and supervisors must be a good example for workers in compliance with the safety standards, such as wearing safety shoes, hats, etc., as this is considered an indirect message to workers to abide by safety standards; 79.73; communication skills
13. owners have to assess contractors before awarding the tender on the basis of the commitment to safety standards, where it affects the productivity and profit later; 78.93; planning
14. it is necessary to allocate a portion of the project budget for the application of health and safety standards perfectly; 78.93; planning
15. housekeeping is important in the workplace to get effective results with zero accidents; 78.23; planning
16. planning each stage of work will help to adhere to the schedule while ensuring safety and productivity; 77.87; planning
17. it is important to assess workers in terms of commitment to safety standards and doing work properly, in addition to giving incentives; 77.87; communication skills
18. define any special equipment, tools, and safety devices to perform work efficiently and safely; 77.87; planning
19. foreman or supervisor should have communication skills with workers to manage safety and to obtain higher productivity; 77.60; communication skills
20. detailed planning for facing crisis situations that can occur helps to increase safety and productivity; 77.47; planning
21. periodic safety meetings for managers, engineers and workers for discussing risks of activities to avoid accidents and to increase productivity; 77.20; communication skills
22. a safety program must be written to include all safety matters such as expected hazards and techniques to avoid hazards, training, equipment, tools and recording of injuries; 76.40; planning
23. workers should not use broken tools or equipment, in addition to the need for tool maintenance; 76.00; monitoring
24. workers should be trained to select and use appropriate tools; 75.12; training
25. training of managers and supervisors to define responsibilities, to cover any shortfall in awareness of occupational safety, and to illustrate how important it is to be a good example for workers; 70.80; training
26. workers should be trained on occupational safety techniques and wear appropriate clothing; 69.47; training

results also indicated that the factor "supervisor should be firm with the contractor in safety conditions because it will positively affect productivity" was ranked in the 2nd position with (rii = 86.80 %). this factor belongs to the inspection group; this is due to the culture in gaza strip. also, results show that the factor "foreman should put daily and weekly work plans and define tools that should be used; this will increase productivity and ensure safety" was ranked in the 3rd position with (rii = 86.53 %). this factor belongs to the monitoring group, which indicates that worker and task allocation is a major component of good safety and productivity management. "drug test for workers" is an important strategy which was ranked in the 4th position with (rii = 85.60 %); it belongs to the monitoring group. a substance abuse program is defined as a program that includes both pre- and post-hiring testing for illicit drug use. the cii report on the zero accident techniques research program pointed out that studies showed that when random drug tests are conducted, better safety performance results are obtained [21]. also, results showed that the factor "scheduling adequate number of workers to complete the heavy tasks, which helps to decrease injuries, as well as to foster a spirit of teamwork and increase productivity" was ranked in the 5th position with (rii = 85.20 %). this factor belongs to the monitoring group: when there are enough workers to help each other with heavy tasks, the chances of exposing crew members to injuries are reduced. the factor "workers should be trained on occupational safety techniques and wear appropriate clothing" was ranked in the 26th position with (rii = 69.47 %). this factor belongs to the training group. this result is consistent with results obtained from a previous study which showed that engineers in the arab region receive almost no training.
in general, it can be seen that the first five strategies, which have been selected based on the experiences of respondents, already reflect the people's awareness of the importance of increasing productivity and ensuring occupational safety at the same time. although the topic of safety and productivity improvement is not embedded deeply in the minds of those who work in construction in gaza strip, there are good signs reflecting this aspect. furthermore, after visiting several construction sites, interviewing a number of experts, and referring to some previous studies related to the same subject in gaza strip, it was observed that the application of occupational safety only comes from fear of punishment and fear of the supervisor, especially if the project is huge and is funded by a foreign donor. only then is there commitment to safety standards, because of accurate monitoring and strict supervision; otherwise the contractor is punished according to the conditions in the contract of the project. it may seem unacceptable that a company which committed to occupational safety standards in a project funded by a foreign donor did not commit to the same standards in another project. this is due to the presence of a clause in the first contract about safety and the punishment if the company does not commit to it, and the lack of that clause in the contract of the other project. this confirms that occupational safety standards do not represent an essential part of the culture of workers in the construction industry in gaza strip. this matches what abo mustafa found in his thesis research: safety is a new topic in the construction sector in gaza strip, so contracting companies have little awareness of the impact of safety factors on labor productivity.
this is in line with the study results of kazaz and ulubeyli [2], who found that workers in mediterranean countries are especially unlikely to take the required safety measures. in contrast, when a company had a joint venture with an american firm, it was stated that the site engineers had successfully controlled the safety situation by random stops and checks of work operations throughout the day. thus, training and then monitoring strategies are acceptable to be adopted first to instill the concept of a safety culture and its importance for increasing productivity. this can be done through strict supervision, accurate monitoring, and the use of incentives and sanctions, while always dealing respectfully with workers. on the contrary, it has been observed from hse [7]; roberts [11]; hammad et al. [16]; levitt and samelson [17] that planning is the first strategy followed in the united states, european countries, and other developed nations in the construction industry to increase productivity and ensure safety at the same time, where planners and managers put a plan in place and then individuals can easily commit to that plan.

v conclusion

this research studied the connection between ohs and increasing employee productivity in the construction industry from the point of view of contractors in gaza strip. to achieve this aim, one main objective was outlined: identifying strategies that effectively promote both safety and productivity during a construction task. the study concluded that the integration of safety management and productivity improvement is very important for achieving the strategies developed by the company in construction work.
after reviewing the literature on the research topic and using the questionnaire survey approach, many important results were obtained from the respondents of the target group, the first-class contractors in gaza strip. for example, the strategies that can be followed to effectively affect safety and productivity fall under five major groups: planning, training, monitoring, communication skills, and inspection. the top strategies, in descending order, are: training workers to carry out works properly, especially in the new types of work (training group); supervisor should be firm with the contractor in safety conditions (inspection group); foreman should put daily and weekly work plans and define tools that should be used (monitoring group); drug test for workers (monitoring group); and scheduling an adequate number of workers to complete the heavy tasks, which helps to decrease injuries, as well as to foster a spirit of teamwork and increase productivity (monitoring group). training and monitoring strategies are acceptable to be adopted first for instilling the concept of a safety culture and its importance for increasing productivity. this is because occupational safety standards do not represent an essential part of the culture of workers in the construction industry in gaza strip. it can be done through: training new workers on the company's safety policies and procedures before they start work; encouraging the buddy system by having new workers learn from experienced workers; training workers to select and use the right tool for the job and correcting them when necessary; and alerting workers about changed working conditions such as extreme heat, rain, or slippery surfaces.
in addition, there is a real need for strict supervision, accurate monitoring, and the use of incentives and sanctions, while always dealing respectfully with workers, besides some necessary elements, such as a good level of cooperation between management and employees, to ensure the success of an ohs intervention and the subsequent increases in productivity. in contrast, planning is the first strategy followed in the united states, european countries, and other developed nations in the construction industry to increase productivity and ensure safety at the same time, where planners and managers put a plan in place and then individuals can easily commit to that plan.

vi recommendation

safety and productivity are interdependent: achieving good safety is also important for achieving good productivity. all stakeholders of the project, including the contractor, should come together to look into ways to enhance safety and productivity together. accordingly, the study recommends that companies plan a strategy to achieve that. it is important to develop working cultures in a direction which supports health and safety at work, promotes a positive social climate and smooth operation, and thus enhances productivity. after that, companies need to move from the planning phase to the implementation phase for the strategies. in other words, contractors are recommended to act strategically to protect workers by continuously identifying, evaluating, and mitigating hazardous conditions as activities, work locations, and other conditions change in the workplace. they should talk about safety in the same manner as about cost and schedule, use incentives with caution, and conduct regular safety meetings to discuss safety issues on construction sites. also, pre-planning and organizing each phase of a job can help in meeting schedules while making work safer and smoother.
a detailed work plan will give an opportunity to deliver all the materials and equipment which are necessary to perform each task safely. the plan also identifies all the dangerous tasks, which helps to take all the safety precautions while performing these tasks. in addition, contractors should prepare safety training programs which help personnel to carry out various preventive activities effectively. they should focus on training the workers and teaching them the significance of using safety equipment, the proper use of construction equipment, and cooperation to identify hazards and the costs and consequences of injuries. a foreman, a site engineer, or any employee in a key position in the workplace should help to increase the company's work production while reducing injuries. to achieve that, training and orientation must be applied together with accurate monitoring, while maintaining respect in dealing with the workers. furthermore, contractors should utilize a self-inspection program even if the ministry of labor does not inspect construction sites periodically.

references
[1] hallowell, m. (2011). understanding the link between construction safety & productivity: an active learning simulation exercise. journal of safety, health & environmental research, vol. 7:1, pp: 19.
[2] kazaz, a., & ulubeyli, s. (2007). drivers of productivity among construction workers: a study in a developing country. building and environment, vol. 42:5, pp: 2132-2140.
[3] ailabouni, n., gidado, k., & painting, n. (2007). factors affecting employee productivity. retrieved on october 20, 2012, from http://www.arcom.ac.uk/-docs/proceedings/ar2009-0555-0564_ailabouni_painting_and_ashton.pdf
[4] siriwardana, c. s., & ruwanpura, j. y. (2012). a conceptual model to develop a worker performance measurement tool to improve construction productivity.
retrieved on october 30, 2012, from http://rebar.ecn.purdue.edu/crc2012/papers/pdfs/225.pdf
[5] mohamed, s. (1999). empirical investigation of construction safety management activities and performance in australia. journal of safety science, vol. 33:3, pp: 129-142.
[6] berg, g., & dutmer, r. (1998). productivity, quality and safety, a relationship for success. construction dimensions conference (pp. 96-105). u.s.a.: official publication of awci.
[7] hse. (n.d.). managing for health and safety: guidance for regulatory staff on the practice of assessing health and safety management. retrieved on november 1, 2012, from health and safety executive: http://www.hse.gov.uk/managing/regulators/regulators.pdf
[8] world health organization. (2002). good practice in occupational health services: a contribution to workplace health. retrieved on october 15, 2012, from who regional office for europe: http://www.euro.who.int/__data/assets/pdf_file/0007/115486/e77650.pdf
[9] hassouna, a. m. (2005). improving safety performance in construction projects in the gaza strip. gaza, palestine: unpublished thesis in construction management submitted to the islamic university of gaza.
[10] whiting, m. a., & bennett, c. j. (2003). driving toward "0": best practices in corporate safety and health. retrieved on september 20, 2012, from the conference board, inc.: http://www.osha.gov/dcsp/compliance_assistance/conf_board_report_2003.pdf
[11] roberts, j. (2012, september 27). productivity-driven safety management. retrieved on november 1, 2012, from liquid learning, inc.: http://liquidlearning.com.au/documents/psd0912m/pds0912m_i.pdf
[12] chapman, r. e., & butry, d. t. (2008, july 17). measuring and improving the productivity of the u.s. construction industry: issues, challenges, and opportunities. retrieved on november 2, 2012, from http://www.cpwr.com/pdfs/measuring%20productivity%20of%20the%20us%20const%20industry
[13] lai, d. n., liu, m., & ling, f. y. (2011).
a comparative study on adopting human resource practices for safety management on construction projects in the united states and singapore. journal of international journal of project management, vol.29:8, pp:1018–1032. [14] walsh, k. d., & sawhney, a. (2004). agent-based modeling of worker safety behavior at the construction workface . retrieved october 20, 2012, from http://www.iglc2004.dk/_root/media/13101_103-walshsawhney-final.pdf [15] salem, o., solomon, j., genaidy, a., & luegring, m. (2005). site implementation and assessment of lean construction techniques. lean construction journal, vo.l 2 : 2, pp: 1555-1369. [16] hammad, m. s., omran, a., & pakir, a. h. (2011). identifying ways to improve productivity at the construction industry. journal of acta technica corvininesis bulletin of engineering, vol.4:4, pp: 47. [17] levitt, r. e., & samelson, n. m. (1994). construction safety management. new york: john wiley & sons, [18] volkman, b. (2011, december 6). a culture for high productivity. retrieved october 30, 2012, from 101 ways to improve construction productivity: http://www.fewerhours.com/ [19] national business group on health. (2010). the health and productivity advantage. north america: towers watson. [20] peng, g. j. (2009). improving construction productivity on alberta oil and gas capital projects. alberta, canada: alberta finance and enterprise. [21] benzekri, m. (2010). identification and analysis of practices that positively impact construction productivity. unpublished thesis (msc) submitted to the university of texas. lina ahmed abuhamra is an architect engineer and a researcher in msc of construction management, department of civil engineering, islamic university, gaza, palestine. adnan ali enshassi is a distinguished professor at the civil engineering department at the islamic university of gaza. he has research and teaching experience over 20 years. he has published several papers in refereed journals and conferences. 
He is a member of several professional international organizations and a member of the editorial boards of refereed international journals.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015

Sound Visualization for Deaf Assistance Using Mobile Computing

Aiman A. Abu Samra (1), Mahmoud S. Alhabbash (2)
(1) Associate Professor, Faculty of Computer Engineering, Islamic University of Gaza, Palestine. (2) Master of Computer Engineering, Faculty of Computer Engineering, Islamic University of Gaza, Palestine.

Abstract— This paper presents a new approach to the visualization of sound for deaf assistance that simultaneously illustrates important dynamic sound properties and recognized sound icons in an easily readable view.
In order to visualize general sounds efficiently, MFCC sound features were utilized to represent robust discriminant properties of the sound. The problem of visualizing an MFCC vector of 39 dimensions was simplified to visualizing a one-dimensional value, obtained by comparing a single reference MFCC vector with the input MFCC vector. A new similarity measure for comparing MFCC feature vectors was proposed that outperforms existing local similarity measures, which suffer from one-to-one attribute value comparison that leads to incorrect similarity decisions. Classification of the input sound was performed and attached to the visualization system to make the system more usable. Each time frame of sound is passed to a k-NN classification algorithm to detect short sound events. In addition, every second the input sound is buffered and forwarded to a dynamic time warping (DTW) classification algorithm, which is designed for dynamic time-series classification. Both classifiers work at the same time and deliver their classification results to the visualization model. The application was implemented in the Java programming language to run on smartphones with Android OS, so many considerations related to the complexity of the algorithms were taken into account. The system utilizes the smartphone's GPU to guarantee smooth and fast rendering. The system design was based on interviews with five deaf persons, taking into account their preferred visualization scheme. The same deaf persons then tested the system, and its evaluation was carried out based on their interaction with it. Our approach yields illustrations of sound that are more accessible and more suitable for casual and inexperienced users.
Index Terms— Android, mobile computing, MFCC, sound visualization

I. Introduction

1.1 Sound awareness
People use sound mainly to gain awareness of the state of the world around them, for example through everyday devices such as mobile phones and doorbells. On the street, one might hear a car horn and infer that a passing car is getting closer. According to the Palestinian Central Bureau of Statistics, more than 43,617 people in Gaza and the West Bank are deaf, and 95% of them suffer from illiteracy [1], as they need special equipment and learning criteria.

1.2 Assistive listening devices based on vision
Vision can help a hearing-impaired individual extract (or assign) meaning from sound events, provided the visualization describes the sound properly; for instance, sign language can be an extremely useful tool for those who cannot hear. The rapid development of video technology has inspired much research on expressing sound on visual displays. Our proposed system picks the most suitable sound features, similarity measures, classifiers, and rendering framework to achieve the main goals of the system.

1.3 Sound features
Many different types of sound features have been proposed to describe sound, originating in the speech recognition community [2][3][4][5][6]:
a. temporal shape features
b. temporal features
c. energy features
d. spectral shape features
e. harmonic features
f. perceptual features
In this paper we focus on spectral shape features, as they have shown higher discrimination results than other features [7].

1.4 Similarity measures
There are many methods that can be used to compare two vectors and derive the differences between them. They are grouped into main categories according to their functionality:
a) Local dissimilarity/distance measures, such as the Euclidean [8] and cosine [9] distances.
b) Statistical similarity measures, such as the Kullback-Leibler distance [10] and the Hotelling T2-statistic distance [11].
Local similarity measures are more suitable for our proposed system because it is hard to obtain a full dataset for all environmental sounds, so statistical measures would be biased toward the dataset.

1.5 Methodology
Our method started with interviews with deaf persons living in different environments, considering their professions, capabilities, and ages. We then tried to pick the most discriminative sound features, taking into consideration the computation power of the device on which the system would be implemented. Finally, after combining the results of the previous steps, we noticed that it is hard for a deaf person to use the proposed system directly without continuous help, so we added a recognition module that classifies some previously known sounds to assist the user.

2. Related Works
Audio visualization for the hearing impaired and deaf has been proposed in [12][13][14]. In [12] the authors analyzed the techniques used by deaf people for sound awareness; they interviewed deaf persons and, based on the results, presented two sound displays: one based on a spectrograph and the other on positional ripples. In the spectrograph scheme, height is mapped to pitch and color is mapped to intensity (red, yellow, green, blue, etc.). In the positional ripple prototype, the background displays an overhead map of the room. Sounds are depicted as rings, and the center of the rings denotes the position of the sound source in the room. As shown in figure (1), the size of the rings represents the amplitude of the loudest pitch at a particular point in time. Each ring persists for three seconds before disappearing.

Figure 1: Speech visualized by positional ripples [12].

This architecture, however, is impractical since it requires prior knowledge of the surrounding place (e.g.
an office); it is also expensive in terms of equipment setup (an array of microphones placed at certain corners of the room) and is not portable (bound to the workplace environment). In [13], new models were proposed based on the system in [12]. The authors proposed two models. The first, based on a single-icon scheme, displays recognized sounds as icons located in the upper right corner of the user's computer monitor. It was used throughout the analysis and was shown to give good results. According to the survey performed in [13], all participants liked it because it identified each sound event. The disadvantage of this method, however, is the need for prior knowledge of the type of sound to be detected, which is very hard for a person who cannot hear well.

3. Proposed System

3.1 Gathering design requirements using interviews
A good sound visualization system must answer the following questions:
- What sounds are important to people who are deaf?
- What display size is preferred (e.g. mobile, PC monitor, or large wall screen)?
- What information about sounds is important (e.g. sound classes, location, or characteristics like volume and pitch)?
- How can the person who is deaf be made aware of the visualization?
The initial data was gathered by interviewing five deaf persons, chosen across different ages and jobs.

3.2 Sound input
The sound is sampled from the portable device's microphone at 44100 samples per second, 16 bits per sample, mono. Further processing requires framing the sound so it can be processed in real time. A good solution was found using a Hamming window of size N = 1024 samples with 50% overlap at a sampling rate of 44100 samples/second, which corresponds to approximately 23.2 ms of sound input.
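The framing scheme just described can be written in a few lines. This is an illustrative pure-Python sketch (the paper's actual implementation is in Java on Android; the function name and signature here are our own):

```python
import math

def frame_signal(samples, frame_size=1024, overlap=0.5):
    """Split a list of samples into Hamming-windowed frames with 50% overlap."""
    hop = int(frame_size * (1 - overlap))
    # Hamming window coefficients for one frame
    window = [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_size - 1))
              for n in range(frame_size)]
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        frames.append([s * w for s, w in zip(frame, window)])
    return frames

# At 44100 Hz, a 1024-sample frame covers roughly 23.2 ms of sound.
frame_duration_ms = 1024 / 44100 * 1000
```

With 50% overlap a new frame becomes available every 512 samples, i.e. about every 11.6 ms, which is what makes per-frame real-time classification feasible.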
3.3 Feature extraction
The most well-known state-of-the-art feature extraction methods are MFCC and LPC, considering their popularity in sound recognition systems [15]. The widespread use of MFCCs is due to their low computational complexity and better performance in most ASR systems [16]. MFCCs are used for speech data in most cases, but they can be generalized to environmental sound as in [16]. The characteristics that make MFCCs preferable for our system are their lower computational complexity compared with many other algorithms, their discrimination rate, and their simplicity of implementation. The seven computational steps for generating MFCC vectors are summarized in figure (2).

Figure 2: MFCC block diagram

3.4 The proposed similarity measure
In this subsection, we introduce an efficient algorithm for measuring the similarity between two MFCC vectors, taking into account the "shuffling property" and the different impact of the coefficients of the MFCC vector. Algorithm (1) handles the shuffling property of the MFCCs, as it differentiates between two MFCC vectors according to the positions of the dominating coefficients. We use the Manhattan distance [17] as it complements the proposed distance measure's idea of counting the steps required to convert one MFCC vector into another.

Algorithm (1): the proposed distance measure

Finally, equation (1) summarizes the modified proposed distance:

D = D_M + m * D_A    (1)

where D_M is the Manhattan distance, D_A is the distance calculated by Algorithm 1, and m is the next power of ten after the maximum value of D_M.

3.5 Visualization system framework
In this section, we present our proposed visualization system, based on the interviews we conducted. Figure (3) illustrates the overall system separated into modules (audio processing, similarity measure, classification, and visualization).
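As an illustration of the seven-step MFCC pipeline mentioned above (framing and windowing assumed done upstream), here is a compact NumPy sketch of the per-frame computation. The function name, parameter defaults, and filterbank details are our assumptions for illustration, not taken from the paper:

```python
import numpy as np

def mfcc_frame(frame, sr=44100, n_mels=26, n_ceps=13):
    """Compute MFCCs for one windowed frame: FFT -> mel filterbank -> log -> DCT."""
    nfft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2            # power spectrum
    # Triangular filters spaced evenly on the mel scale
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor(pts * nfft / sr).astype(int)
    fbank = np.zeros((n_mels, nfft // 2 + 1))
    for i in range(n_mels):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, c):
            fbank[i, k] = (k - lo) / max(c - lo, 1)    # rising slope
        for k in range(c, hi):
            fbank[i, k] = (hi - k) / max(hi - c, 1)    # falling slope
    logmel = np.log(fbank @ power + 1e-10)             # log mel energies
    # DCT-II of the log energies gives the cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return basis @ logmel
```

Appending first- and second-order deltas to 13 such coefficients yields the 39-dimensional vectors the abstract refers to.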
Figure 3: The overall system design

4. Experiments and Results
The experiments performed are described and discussed below. Different groups of sound features, similarity measures, and classifiers were tested and compared in order to choose the best of each group to build the proposed visualization system.

4.1 Data set
We built a database of sound samples by collecting the preferred samples from well-known datasets [18][19][20]. The dataset contains 430 samples (80 door bells, 100 cars, 130 speech, 70 crowds, and 50 explosions). All signals in the database have 16-bit resolution and are sampled at 44100 Hz, mono channel. In this way, all possible sound spectrum components can be introduced for experimentation purposes. This point is very important for environmental sounds, because some sounds show significant energy content at the highest frequencies, like glass breaking for example. The sample duration is fixed at four seconds, but the samples have different loudness levels. Each sound sample is assigned to exactly one of the five classes.

4.2 Algorithm choosing phase
The system used during this phase was MATLAB version 7.8.0.347 (R2009A).

Algorithm (1): the proposed distance measure
  Purpose: measure the distance between two MFCC vectors
  Input: MFCC vectors A, B of length n
  Output: distance between A and B
  Procedure:
  1. Create two vectors AI, BI with the same length as A, B to store the indices of the elements of A, B
  2. distance = 0
  3. Sort the elements of both A and B in descending order, with the corresponding indices in AI, BI
  4. for i = 0 to n-1
  5.   distance += w_i * |AI(i) - BI(i)|, where w_i is the weight of each attribute
  6. return distance

We used a platform with an Intel Core i5 and 4 GB RAM during the experiments. The goal of this framework is to choose the most suitable sound features, similarity measure, and classifier for the proposed system to be implemented on a smartphone.
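Algorithm (1), its combination with the Manhattan term in equation (1), and the DTW matching applied to one-second buffers can be sketched as follows. This is an illustrative Python sketch (the paper's implementations are in MATLAB and Java); the function names are ours, and the default scale factor m, a power of ten chosen above the largest Manhattan distance seen in the data, is our assumption:

```python
def algorithm1_distance(a, b, weights=None):
    """Algorithm (1): compare the positions of the dominating coefficients.
    ai[i] / bi[i] hold the original index of the i-th largest element."""
    n = len(a)
    w = weights or [1.0] * n
    ai = sorted(range(n), key=lambda i: a[i], reverse=True)
    bi = sorted(range(n), key=lambda i: b[i], reverse=True)
    return sum(w[i] * abs(ai[i] - bi[i]) for i in range(n))

def manhattan(a, b):
    """Manhattan distance [17], the fine-grained term of equation (1)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def combined_distance(a, b, m=100.0):
    """Combined distance of equation (1): Manhattan term plus the scaled
    index-based term (m is our illustrative choice of scale)."""
    return manhattan(a, b) + m * algorithm1_distance(a, b)

def dtw(seq_a, seq_b, dist=combined_distance):
    """Dynamic time warping over two sequences of MFCC vectors, as used
    for classifying one-second buffers against class templates."""
    n, q = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (q + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, q + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][q]
```

A per-frame k-NN classifier would apply combined_distance directly to single MFCC vectors, while dtw compares whole buffered sequences, tolerating differences in timing between the input and the templates.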
Thus, challenges arise when considering the smartphone's computational power and the real-time performance of complex algorithms.

4.3 Distance measure algorithms
The evaluated distance measures are local distance measures, so the evaluation criterion we used is the recognition rate of a classifier that uses a local distance measure for classification. We used a k-NN classifier for classifying every time frame in real time, and dynamic time warping (DTW) for classifying input sounds of longer duration to preserve the perceptual properties of the sound. Figure (4) shows the recognition rates for the k-NN classifier and DTW using the mentioned distances. We can see that the proposed distance measure increases the recognition rate for both classifiers more than any other distance measure.

Figure 4: Similarity measures evaluation using k-NN and DTW

4.4 System implementation phase
The overall system was implemented on a Samsung Galaxy Ace 2 smartphone, which has a dual-core 800 MHz processor, 4 GB storage, 768 MB RAM, and runs the Android operating system. Android is a free, open-source operating system for mobile devices, running on a Linux kernel and owned by Google. Android applications are written in the Java programming language, and the operating system includes a set of core libraries [21] that provide most of the functionality available in the core libraries of Java. To develop Android applications, developers use the Android software development kit (SDK). It provides all the necessary tools to write, compile, and run an Android application with or without a connected mobile device, as the emulator emulates an Android phone.
Once the SDK is installed, it is easy and simple to use with the Eclipse IDE. For fast rendering of the visualized sound we used the OpenGL ES 2 framework on Android [22], which uses the phone's GPU and provides a simple API to the native interfaces implemented underneath.

4.5 Visualization
The visualization is drawn on a rectangular canvas whose size adapts to fit any Android device's display. It consists of two parts: the first is the 3D colored visualization of the acquired sound, while the second is a series of images displaying the symbols of recognized sounds. The visualized sound flows from left to right, as do the additional icons that appear when the classifiers recognize a sound.

4.6 Speech
Figure (5) shows the visualizations of a number of different voiced Arabic vowels (ا, و, ي). Since vowels show a fairly constant representation of the sound while voiced, the visualized sound can be noticed clearly even if it is not recognized by the classifiers.

Figure 5: The visualized Arabic vowels

The reference sound used for the similarity measure, and hence for the visualization system, is picked from the third vowel, so the third vowel has the lowest height in the 3D mesh. A further observation from figure (5) is that the three different vowels show related colors (except the third, because the reference was taken from one of its frames) since they all belong to the speech class. The third vowel is shown in blue only to illustrate our point; in the real-time application we use a reference sound that represents silence.

4.7 Door bells
Figure (6) shows the visualization result for a doorbell sound. The interesting thing about this visualized doorbell is that it displays the doorbell icon in yellow (a warning color) above a bird icon. This happened because the doorbell is in fact designed to output bird sounds.
One of the classifiers detected that this sound is likely a bird sound, and the other that it is a doorbell sound, so the visualization system displays the icons of both classes.

Figure 6: Visualized doorbell tone

4.8 Explosions
This class represents the most severe case among all the classes, so the mobile phone vibrates in addition to showing the visualization. The explosions include gunshots, heavy falling masses, and actual explosions. Figure (7) shows the visualization of an explosion sound; the system showed the explosion icon together with vibration on the test smartphone.

Figure 7: Explosion sound visualization

4.9 Testing phase
For every sound, the trainees' answers were collected, and the response time for every answer was recorded for analysis. Because deaf users cannot draw on previous experience of analyzing sound, the tests were repeated, and results were taken only from the last two sessions and only when the correct-answer rate was 90% or above. Figure (10) shows the average duration of correct answers for the test users. As we can note, the users at first had some difficulty giving correct answers for new sounds during session 1. In the following sessions, the users showed improvements in response time. The interesting point about the final results is that the response time dropped to a few seconds, which indicates that they can use the program in real time with little difficulty.

Figure 10: Testing sessions for users and their response time

5. Conclusion
We proposed a new visualization system to help deaf persons experience surrounding sounds. The system relies on the deaf user's sense of vision to understand the sound visualization. Technically, the system extracts robust sound features and compares them with a reference sound feature, using the comparison result to visualize the sound as a 3D curve with different colors.
Building the system involved feature extraction, similarity measures, classification, and rendering frameworks. The sound feature used to represent sound is the MFCC, chosen by evaluating many sound features and picking the feature vector with the highest recognition rate. Since a wide range of feature vectors has been proposed previously, our evaluation covered the most well-known features in the open literature. We formed a sound database from three other databases to obtain different sound classes that fit the target application environment, and used it to evaluate many sound features, similarity measures, and classification algorithms. The visualization system renders the frames of sound as a 3D dynamic mesh changing over time to give the user a real-time feeling for the sound. The dynamic color and height of the visualized sound can be read easily even by inexperienced users.

References
[1] Palestinian Central Bureau of Statistics. (2012). [Online]. Available at: [Accessed 10 June 2013].
[2] Foote, J. (1997). Content-based retrieval of music and audio. In Proc. of SPIE, Multimedia Storage and Archiving Systems II, vol. 3229, pp. 138-147.
[3] Scheirer, E. and Slaney, M. (1997). Construction and evaluation of a robust multifeature speech/music discriminator. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 1331-1334.
[4] Brown, J. (1998). Music instrument identification using autocorrelation coefficients. Proceedings of the International Symposium on Musical Acoustics (ISMA 1998), Leavenworth, Washington, pp. 291-295.
[5] Martin, K. and Kim, Y. (1998). Instrument identification: a pattern recognition approach. 136th meeting, USA.
[6] Rabiner, L.
and Juang, B. (1993). Fundamentals of Speech Recognition. Prentice-Hall, USA.
[7] Peeters, G. (2004). A large set of audio features for sound description (similarity and classification). IRCAM project report, France.
[8] Foote, J. (1999). Visualizing music and audio using self-similarity. In Proc. of the 7th ACM Int. Conf. on Multimedia (Part 1), Orlando, pp. 77-80.
[9] Foote, J. (1999). Visualizing music and audio using self-similarity. In Proc. of the 7th ACM Int. Conf. on Multimedia (Part 1), Orlando, pp. 77-80.
[10] Siegler, M., Jain, U., Raj, B. and Stern, R. (1997). Automatic segmentation, classification and clustering of broadcast news audio. DARPA Speech Recognition Workshop, USA, pp. 97-99.
[11] Zhou, B. and Hansen, J. (2000). Unsupervised audio stream segmentation and clustering via the Bayesian information criterion. Proc. ICSLP'00, China, vol. 3, pp. 714-717.
[12] Ho-Ching, F. W.-L., Mankoff, J. and Landay, J. A. (2003). From data to display: the design and evaluation of a peripheral sound display for the deaf. In Proc. of CHI, p. 8.
[13] Matthews, T., Fong, J. and Mankoff, J. (2005). Visualizing non-speech sounds for the deaf. In Proc. of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), Baltimore, pp. 52-59.
[14] Yeo, W. and Berger, J. (2006). A new approach to image sonification, sound visualization, sound analysis and synthesis. In Proc. of the International Computer Music Conference, New Orleans, pp. 34-50.
[15] Temko, A. (2007). Acoustic Event Detection and Classification. PhD thesis, Universitat Politècnica de Catalunya, Spain.
[16] Zhang, X. (2009). Audio Segmentation, Classification and Visualization. PhD thesis, Auckland University of Technology, New Zealand.
[17] Deza, M. M. and Deza, E. (2009). Encyclopedia of Distances. pp. 94-236.
[18] DeWolfe sound effect database. (2013). [Online]. Available at: <www.dewolfe.co.uk> [Accessed 5 January 2013].
[19] EFX guns library. (2013). [Online]. Available at: [Accessed January 2013].
[20] Sound Spaces environmental sound library. (2013). [Online]. Available at: [Accessed 5 June 2013].
[21] Saha, A. (2008). A developer's first look at Android. Linux For You, pp. 48-50.
[22] Android framework samples. (2013). [Online]. Available at: [Accessed 13 June 2013].

Dr. Aiman A. Abu Samra is an associate professor at the Computer Engineering Department at the Islamic University of Gaza. His research interests include computer networks, computer architecture, and mobile computing. He received his PhD degree from the National Technical University of Ukraine in 1996. He has managed several funded projects in cooperation with industry. Mahmoud S. Alhabbash is a computer science researcher at the Computer Engineering Department of the Islamic University of Gaza. His research interests include sound recognition, data visualization, human-computer interfaces, and mobile computing. He received his MSc degree from the Islamic University of Gaza in 2014.

Journal of Engineering Research and Technology, Volume 2, Issue 4, December 2015

Nano Generator Simulation Using Fuzzy Logic

Muhammad Faisal Wasim (1), Muhammad Waseem Ashraf (1), Shahzadi Tayyaba (2), Basit Ali (1) and Nitin Afzulpurkar (3)
(1) Department of Physics (Electronics), GC University Lahore, Pakistan. (2) Department of Computer Engineering, The University of Lahore, Pakistan. (3) School of Engineering and Technology, Asian Institute of Technology, Bangkok, Thailand. Email: muhammad.waseem.ashraf@gmail.com

Abstract: This paper presents the design, modeling and simulation of a nanogenerator. A nanostructure-based energy generator is used to convert small physical motions into electrical energy, such as the motion of body parts, heartbeats, vibrating parts of factory machines, walls and floors, and vibrating parts of vehicles.
In the present study, two input parameters, force and thickness, and two output parameters, voltage and current, have been considered, and simulation was performed using the fuzzy logic technique. This approach helps to optimize a durable, accurate and efficient nanogenerator. The basic Mamdani model of fuzzy logic is used for the calculations. The input and output parameters have each been assigned three membership functions (MFs). The device works according to the rules defined in the fuzzy inference system (FIS). On the basis of the simulation we observed the operational diagrams; the surface viewer was used to observe the curves and to analyse the results of all defined MFs. Diverse rules, formed from various combinations, were defined in the MATLAB rule editor, and AND logic was adopted for the simulation. Mamdani's expression is used for the calculation of the outputs. The calculated and simulated results show very little variation for the fuzzy logic (FL) nanogenerator. There is a 1% error between the simulated and theoretical values, which shows that the FL-based nanogenerator controller is very efficient at harvesting energy from a nanostructure-based device.

Key points – nano generator, fuzzy logic, fuzzy inference system, membership function.

I. Introduction
Nanomaterials are widely used in modern technologies due to their very effective functionality. The piezoelectric properties of nanostructured materials are used to convert mechanical vibrations into electrical signals, which is the basis of microelectromechanical system (MEMS) based nanogenerators. Small mechanical vibrations are one of the most important aspects of our physical environment; they exist all the time, everywhere, and in every field of life. The basic idea of a MEMS-based nanogenerator is to detect small mechanical motions and vibrations. If a piezoelectric material is coated on a flexible surface, then energy can easily be produced from applied mechanical motion.
The most common natural vibrations are body motions such as the heartbeat, lung motion, chest expansion and contraction, walking, and running, which can be converted into electrical energy using these nanogenerators. In industry, mechanical movements such as the vibrating parts of machine surfaces can generate energy. The movement, running, and jumping of vehicles can be converted into electricity to meet their own energy requirements; in this way, fuel consumption can be reduced and energy saved. If a piezoelectric generator is placed under a road, then electricity can be generated from the vibration and motion of the vehicles. In short, these MEMS devices can be used as a new source of energy production, and the energy they produce may be instantly available for wearable devices. The applications of the nanogenerator are shown schematically in fig. 1.

Figure 1: Schematic diagram of MEMS-based nanogenerator applications

Various types of nanogenerators have been developed by different researchers. A thin film of ZnO-CuO was deposited by the sol-gel method on a silicon substrate and the surface was treated at temperatures of 100-800 °C. The spin-coating technique was used, and the thickness was increased by increasing the number of coatings. The X-ray diffraction (XRD) technique was used for the surface study [1]. MEMS-based nanogenerators are popular due to their light weight and low cost. MEMS-based nanogenerator devices are useful in reducing the need for electric wiring, wet and dry chemical batteries, or other power sources that are massive and expensive. Piezoelectric nanodevices are very effective for energy generation. The harvesting of energy from such devices is easy and time-saving in several respects.
These nanogenerators can produce a power of 1 µW simply by adjusting the device dimensions to 1 cm × 1 cm, which is a reasonable amount of energy production at small scale [2]. A nanoelectromechanical system (NEMS) based energy device has produced energy by applying force and varying the thickness: if the force of the natural vibration is on the order of newtons and the thickness of the device is in the nm range, then the device can produce voltage in the range of volts and current in the range of amperes [3]. A fuzzy logic based intelligent system was developed in which the input parameters were adjusted to obtain outputs in a specific range [4]. A highly flexible and efficient piezoelectric nanodevice for energy harvesting was developed; two fabrication modes were used, for forward and reverse bias, and the sol-gel method was used for thin-film deposition. The device showed better performance [5]. It was reported that the accuracy and broad impact of fuzzy logic systems on day-to-day problems is of significant importance [6]. Tang et al. reported a fuzzy logic based system for developing a genetic algorithm [7]. A MEMS-based energy harvester was developed using a ZnO piezoelectric thin film. Two different strategies, a sputtering method and a wet chemical method, were adopted to fabricate the MEMS device. The results showed that the maximum load frequency, power and voltage were 1300 Hz, 1.25 µW and 2.06 V respectively [8]. Piezoelectric nanomaterials have been used to develop nanodevices [9, 10]. A device was developed using ZnO thin-film deposition on a silicon wafer after coating with Pt/Ti [11, 12]. A thin film of piezoelectric material was used for energy harvesting [13]. It was reported that as the length of a piezoelectric device increases, the stress level also increases [14]. Different computational techniques using fuzzy logic controllers have been reported [15].
II Simulation

First of all, the input and output parameters for the proposed system are selected. The simulation has been performed for a MEMS-based nanogenerator for energy harvesting. Pressure and thickness have been considered as the inputs, while voltage and current are considered as the outputs, as shown in Fig. 2.

Figure 2: Circuit representation scheme

A schematic of the complete system is shown in Fig. 3. Each input and output has been assigned three membership functions, with ranges defined on the order of nanometers.

Figure 3: Fuzzy logic nanogenerator

The input and output values of the nanogenerator were defined in the fuzzy logic inference system. Each input and output has been given three membership functions with specific ranges. The range of the limits for MF1 is 0-50, for MF2 it is 0-100, and for MF3 it is 50-100, as shown in Table 1.

Table 1: Ranges in percentage

MFs   %Range   Force   Thickness   Voltage   Current
MF1   0-50     L       L           L         L
MF2   0-100    M       M           M         M
MF3   50-100   H       H           H         H

L = low, M = medium, H = high

Fig. 4 and Fig. 5 show the input membership functions, and Fig. 6 and Fig. 7 show the output membership functions.

Figure 4: Input membership function plot for force
Figure 5: Input membership function plot for thickness
Figure 6: Output membership function plot for current
Figure 7: Output membership function plot for voltage

Fig. 4 is the graph between the degree of membership and the force exerted on the FL nanogenerator. It shows the membership functions of the input "force" and their distribution over the different sections. The range of the first overlapping unit is 0-50, which is called the negative region or region 1. The second overlapped region, covering the ranges 0-100 and 50-100, is called region 2. These distributions are the same for all inputs and outputs.
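The three overlapping membership functions of Table 1 are ordinary triangular functions. The sketch below is an illustrative Python rendering, not the authors' code (the paper builds these in MATLAB's FIS editor); the function names are chosen here for clarity.

```python
# Illustrative triangular membership functions matching Table 1:
# MF1 (low) on 0-50, MF2 (medium) on 0-100 with peak at 50,
# MF3 (high) on 50-100. Hypothetical Python, not the paper's MATLAB FIS.

def mf1(x):
    """Low: full membership at 0, falling to zero at 50."""
    return max(0.0, (50 - x) / 50) if x <= 50 else 0.0

def mf2(x):
    """Medium: rises over 0-50, peaks at 50, falls over 50-100."""
    if 0 <= x <= 50:
        return x / 50
    if 50 < x <= 100:
        return (100 - x) / 50
    return 0.0

def mf3(x):
    """High: zero at 50, full membership at 100."""
    return max(0.0, (x - 50) / 50) if x >= 50 else 0.0

# A crisp input splits its membership between two adjacent sets:
for x in (5.42, 23.5, 75.3):
    print(x, round(mf1(x), 3), round(mf2(x), 3), round(mf3(x), 3))
```

For an input of 5.42 this reproduces the paper's region-1 values (MF1 = 0.892, MF2 = 0.108), since MF1 and MF2 sum to 1 on 0-50.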
On the basis of the possible combinations of the physical input and output parameters, different rules have been established for better performance and accurate simulation results. The rules involve simple if-then statements and AND logic. The 3D simulated results in the surface viewer are shown in Fig. 8 and Fig. 9.

Figure 8: Graphical representation of the collective behaviour of thickness versus force and output voltage
Figure 9: Graphical representation of the collective behaviour of thickness versus force and output current

The graphs show that when the applied force on the device is increased, the output voltage also increases, with the thickness and area kept constant. From the 3D graph it is observed that if the force or pressure on the piezo thin film is increased up to an extent, the compression increases directly; hence, with more expansion and compression of the piezo thin film, more voltage can be produced. In the same way, if the thickness of the thin film is increased to a certain extent while keeping it within the nano range, the resulting output voltage will also increase directly with the applied force or pressure.

III Design Algorithm

An algorithm of the fuzzy logic system for the nanogenerator has been designed for energy harvesting. Fig. 10 represents the collective behaviour of the adjusted inputs and outputs for the proper solution of the algorithm. The yellow areas represent the inputs (force and thickness), while the blue areas represent the outputs (voltage and current).

Figure 10: Collective representation of all possible rules for the assigned inputs and their respective outputs

For calculation, the values of the parameters are: force = 5.42, thickness = 10.2, voltage = 50. The value of the force in the simulation (5.42 N) lies in region 1, as shown in Fig. 11.
The related membership functions are low (L) and medium (M), as indicated in region 1. The values of MF1 and MF2 are:

MF1 = (50 − 5.42)/50 = 0.892
MF2 = 1 − MF1 = 1 − 0.892 = 0.108

Figure 11: Graph showing the values of the membership functions MF1 and MF2 for the pressure

Similarly, the thickness of the thin film (10.2 nm) lies in region 1, as shown in Fig. 12. The membership functions of region 1 are low (L) and medium (M). The values of MF3 and MF4 are:

MF3 = (50 − 10.2)/50 = 0.796
MF4 = 1 − MF3 = 1 − 0.796 = 0.204

Figure 12: Graph showing the values of the membership functions MF3 and MF4 for the thickness

The rules suitable for high performance of the nanogenerator are selected for the calculation of the design algorithm. The input parameters force = 5.42 and thickness = 10.2 have been used for the calculation; the value of the input parameter force lies in region 1. The inputs (force and thickness), the voltage, and the singleton values are shown in Table 2.

Table 2: Selected rules

The values of the membership functions for force are MF1 = 0.892 and MF2 = 0.108.

A. Voltage and Current Calculation Using Mamdani's Formula

Mamdani's formula is used to calculate the output under the following conditions. The input value of the force, F = 5.42, lies in region 1, and the value of the thickness is T = 10.2. The calculated values of the membership functions are MF1 = 0.892, MF2 = 0.108, MF3 = 0.796 and MF4 = 0.204. The singleton values used in the calculation are 0.5 for medium (M) and 1 for high (H). Hence Σ(Si × Ri) = 0.846 and ΣRi = 1.692, giving

Output = Σ(Si × Ri) / ΣRi = 0.846 / 1.692 = 0.51

Simulated value from MATLAB = 0.50
Mamdani formula value = 0.510
Difference = 0.510 − 0.50 = 0.010

The percentage error in the output voltage is only 1%, which is very small. Similarly, the value of the current has been calculated as 50 A, with an error even smaller than that of the voltage. Thus, the presented simulation of the nanogenerator is accomplished well.
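The region-1 arithmetic above reduces to a couple of lines of code. The sketch below is a hypothetical Python check mirroring the hand calculation (the paper works by hand and in MATLAB); the helper name `low` is invented here.

```python
# Worked check of the region-1 membership arithmetic for the nanogenerator
# example (force = 5.42, thickness = 10.2). Illustrative only.

def low(x, upper=50.0):
    """Region-1 'low' membership: (upper - x) / upper for 0 <= x <= upper."""
    return (upper - x) / upper

force, thickness = 5.42, 10.2

mf1 = low(force)       # 0.8916 -> rounds to the paper's 0.892
mf2 = 1.0 - mf1        # 0.1084 -> rounds to the paper's 0.108
mf3 = low(thickness)   # 0.796
mf4 = 1.0 - mf3        # 0.204

print(round(mf1, 3), round(mf2, 3), round(mf3, 3), round(mf4, 3))
```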
IV Conclusion

A fuzzy-logic-based analysis of a nanogenerator for energy harvesting has been presented. The values of the inputs, force and thickness, are 5.42 N and 10.2 nm, respectively. Three membership functions are used for each input and each output. The output voltage and current have values of 50 V and 50 A, respectively. The fuzzy inference system works according to the defined rules. The results were analysed in the surface viewer, and AND logic was used for the simulation. The presented nanogenerator controller has a 1% error and can be used for harvesting energy.

References

[1] H. Y. Bae et al., "Electrical and reducing gas sensing properties of ZnO and ZnO-CuO composite thin films fabricated by spin coating method", Sensors and Actuators B, 55, 47-54, 2001.
[2] Z. L. Wang, "Simulation of the nanogenerator for the energy", Journal of Materials, pp. 24-29.
[3] B. P. Nabar, Z. Celik-Butler, D. P. Butler, "Piezoelectric ZnO nanorod carpet as a vibrational energy harvester", Nano Energy, 10, 71-82, 2014.
[4] J. L. Castro, "Fuzzy logic controllers are universal approximators", IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 4, 1995.
[5] K. Park, J. H. Son, G. Hwang, C. K. Jeong, J. Ryu, M. Koo, I. Choi, S. H. Lee, M. Byun, Z. L. Wang, and K. J. Lee, "Highly-efficient, flexible piezoelectric PZT thin film nanogenerator on plastic substrates", doi: 10.1002/adma.201305659, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2014.
[6] C. Y. Sue, N. Chyuan, "Human powered MEMS-based energy harvest devices", Applied Energy, vol. 93, pp. 390-403, 2012.
[7] H. Liu, C. Lee, T. Kobayashi, C. J. Tay and C. Quan, "Investigation of a MEMS piezoelectric energy harvester system with a frequency-widened-bandwidth mechanism introduced by mechanical", Microsystem Technologies, vol. 18, issue 4, pp. 497-506, 2012.
[8] L. A. Zadeh, R. R. Yager et al.,
"Fuzzy sets and applications: selected papers by L. A. Zadeh", John Wiley, New York, 1987.
[9] P. Wang et al., "ZnO film piezoelectric MEMS vibration energy harvester with two piezoelectric elements for higher output performance", Review of Scientific Instruments, 86(7), AIP Publishing LLC, 2015.
[10] L. Dhakar, F. E. H. Tay and C. Lee, "Investigation of contact electrification based broadband energy harvesting mechanism using elastic PDMS microstructures", vol. 24, no. 10, 2014.
[11] Y. Jia, A. A. Seshia, "White noise responsiveness of an AlN piezoelectric MEMS cantilever vibration energy harvester", Journal of Physics: Conference Series, vol. 557, issue 1, 2014.
[12] T. Omori et al., "Preparation of the piezoelectric PZT micro-discs by the sol-gel method", IEEE, Japan, vol. 121-E, no. 9, 2001.
[13] U. B. Bala, C. Krantz, M. Gerken, "Electrode position optimization in magnetoelectric sensors based on magnetostrictive-piezoelectric bilayers on cantilever substrates", IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 61, issue 3, 2014.
[14] R. A. Kellogg, H. Sumali, "Piezoelectric energy harvester having planform-tapered interdigitated beams", United States Patent No. US 7,948,153 B1, 2011.
[15] S. Leadenham, A. Erturk, "Global nonlinear electroelastic dynamics of a bimorph piezoelectric cantilever for energy harvesting, sensing, and actuation", SPIE Active and Passive Smart Structures and Integrated Systems, vol. 9057, issue 02, 2014.
Journal of Engineering Research and Technology, Volume 2, Issue 3, September 2015, p. 175

MEMS Based Energy Harvesting Controller Using Fuzzy Logic

Basit Ali¹, Muhammad Waseem Ashraf¹, Shahzadi Tayyaba, Muhammad Faisal Wasim¹
¹ GC University Lahore, Pakistan
² The University of Lahore, Pakistan
*Email: muhammad.waseem.ashraf@gmail.com

Abstract—This paper presents the design and simulation of a MEMS-based energy harvester controller. Energy harvesting from physical motion such as finger motion, heart beating, walking and running is becoming very important nowadays. In this study, the output voltage and current of the MEMS-based energy harvester are controlled using a fuzzy-logic-based control system. A new way of controlling the outputs by changing the inputs has been proposed. The system has been developed in MATLAB using the fuzzy logic Mamdani model. Two inputs, pressure and area, have been selected, and three membership functions have been assigned to the input parameters. Two outputs, voltage and current, are selected for the output power; three membership functions are also assigned to the outputs. The system works according to the rules defined in the fuzzy inference system (FIS). The results corresponding to the suggested rules and the assigned MFs have been displayed in the surface viewer. Different rule combinations were defined in the MATLAB rule editor, and AND logic was used for the simulation. The results obtained from the fuzzy logic controller have been verified using Mamdani's formula for specific values of the inputs and outputs.
As there is only a 1% error for both outputs, current and voltage, the system performs very well.

Index Terms—Fuzzy logic, fuzzy rule editor, fuzzy inference system, MEMS, membership functions, MATLAB.

I Introduction

Vibrations are among the most common phenomena, existing at all times, everywhere, and in every field of life. Some common and simple natural acts that produce vibrations are finger motion, heart beating, walking, running, and any mechanical instrument in action. Mechanical actions such as running cars, or any function of the human body, are good sources of vibrations, which can easily be converted into electrical energy through MEMS-based energy harvesters. This ambient energy would be useful for wearable devices, household applications, sensors for medical implants, and computing or communicating practically everywhere. Very low power, 10-100 µW, is required for the normal operation of a VLSI design; lithium-ion batteries can provide 160 W·h/kg, but they are large in size and have a limited life [1]. MEMS-based energy harvesting devices are being adopted in different fields, eliminating the need for wiring, chemical batteries, or power sources that are bulkier and higher in cost; MEMS-based energy harvesters therefore become more appealing, or even essential. MEMS-based energy harvesters are so good that a power of 1 µW can be produced using a device of dimensions 1 cm × 1 cm, and this power can easily be converted into 1 mW if it is fed to a small capacitor [2]. Lin et al. proposed an intelligent fuzzy logic system based on an artificial neural network. It was seen that by linking the self-organized and supervised systems, the results obtained were astonishing. The system was user friendly and could be easily understood by anyone; it was seen to be reliable and solid, and to provide the best results [3].
Castro presented work on how fuzzy logic is an approximation method and why it has an edge over other logics, and described why fuzzy logic shows great performance where other logics cannot. People often criticize the performance of fuzzy controllers, but in his work he proved that the fuzzy controller has great impact on daily-life problems. Castro demonstrated both quantitative and qualitative approaches to the fundamental problems [4]. Tang et al. proposed a fuzzy-logic-based system for creating a genetic algorithm. They used a method in which the system was divided into two different inputs; fuzzy membership functions were used to define the parameters of the inputs. Fuzzification was performed on the inputs, rules were defined, and then defuzzification was performed on the outputs. The fuzzy-logic-based system was used for its unique way of tackling complex and nonlinear problems [5]. Sue et al. [6] (2011) investigated and characterized the energy harvesters that can be used to produce power from the human body. They also reviewed the currently available MEMS-based energy harvesters, evaluated and briefly described the power gain and different methods of tuning the frequency, and showed that micro-energy harvesters are biologically harmless. Liu et al. [7] (2012) fabricated a new piezoelectric cantilever at the micro level for harvesting vibrational energy at very small frequencies and low accelerations; they obtained a maximum potential difference of 42 mV and a power of 0.31 µW g⁻² at an acceleration of 0.06 g. Elizabet et al. [8] (2012) studied the enhancement of the mechanical response of a MEMS-based energy harvester by optical excitation, thereby providing a pathway for the moving structures to respond with a heating effect.
They showed how a MEMS-based connected structure can respond mechanically to a temperature change of the element; for the heating, they used infrared radiation. Liu et al. [9] (2012) presented a piezoelectric energy harvester (PEH) system with a large operating bandwidth and showed that their device could produce an output power of 34 to 100 nW. Dhakar et al. [10] (2014) presented a triboelectric energy harvesting device; the maximum output power measured from the device was 0.69 µW. Jia et al. [11] (2014) reported a piezoelectric MEMS cantilever vibrational energy harvester able to produce 0.7 µW at 7 g and 2.56 µW at 3 m s⁻². Cao et al. [12] (2011) designed a piezoelectric cantilever, using the finite element method (FEM) for simulation. They verified their results and showed that the optimized cantilevered piezoelectric energy harvester could produce a 56 V peak open-circuit voltage; the proposed method is suitable for the optimization design of piezoelectric energy harvesters. Bala et al. [13] (2014) described electrode position optimization in magnetoelectric sensors based on piezoelectric bilayer cantilever substrates, applying the finite element method (FEM) for the simulations. A 15% higher signal voltage across the piezoelectric layer was obtained for optimally positioned electrodes with a simple layered cantilever and an insulating magnetostrictive material, and the signal voltage increased by 25% for a trenched cantilever. Kellogg et al. [14] (2011) studied a piezoelectric energy harvester and reported that increasing the length of the cantilever increased its stress level, and in this way the power output of each piezoelectric element increased. Leadenham et al. [15] (2014) described a piezoelectric cantilever for sensing, actuation, and energy harvesting, and found that the proposed model and the experimental investigation were in close agreement.
Muthalif et al. [16] (2015) presented mathematical derivations for a piezoelectric energy harvester, using MATLAB and COMSOL Multiphysics software for simulation, and also studied the effect of the length and shape of the cantilever beam on the output voltage. Rivadeneyra et al. [17] (2015) reported a low-frequency (<300 Hz) vibrational energy harvester, motivated by the fact that many industrial and commercial devices operate at these frequencies. In their paper they investigated, by numerical simulation, the influence that perforating sections of the Si beam has on the resonant frequencies of the cantilever. Kim et al. [18] (2013) fabricated dual-beam cantilevers on the microelectromechanical system (MEMS) scale with an integrated Si proof mass, using the finite element method (FEM) with parametric analysis in the design process. According to the simulations, the resonant frequency, voltage, and average power of a dual-beam cantilever were 69.1 Hz, 113.9 mV, and 0.303 µW, respectively, at optimal resistance and 0.5 g. The harvested power density of the dual-beam cantilever compared favorably with the simulation; the experimental results for the resonant frequency, voltage, and average power density were 78.7 Hz, 118.5 mV, and 0.34 µW, and the error between the measured and simulated results was about 10%. The maximum average power and power density of the fabricated dual-beam cantilever at 1 g were 0.803 µW and 1322.80 µW cm⁻³, respectively.

Fuzzy logic is basically a flexible technique, a numerical representation of a system in which the answer is not just high or low, 0 or 1, on or off, true or false. It is a free technique that is not bounded to specific states. For example, in a thermally heated metal the value is not only hot or cold but also in between; such a system can easily be modeled using fuzzy logic, which can express that some part of the metal is at a normal temperature.
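The heated-metal example can be made concrete with a toy membership computation. The temperature ranges below are invented purely for illustration and do not come from the paper:

```python
# Toy illustration of the metal-temperature example: in fuzzy logic a
# temperature belongs to "cold", "normal" and "hot" each to a degree in
# [0, 1], rather than to exactly one crisp state. Ranges are invented.

def degree(x, left, peak, right):
    """Triangular membership: 0 outside (left, right), 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

temp = 45.0  # hypothetical reading in degrees Celsius
cold = degree(temp, -20, 0, 50)
normal = degree(temp, 0, 50, 100)
hot = degree(temp, 50, 100, 170)
print(cold, normal, hot)  # partial membership in two sets, none in "hot"
```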
The most common way of applying fuzzy logic is through MATLAB software. In this paper, the simulation of an efficient MEMS-based energy harvester that converts mechanical energy into electricity has been carried out. The results obtained from the fuzzy logic controller have been verified using Mamdani's formula for specific values of the inputs and outputs. As there is only a 1% error for both outputs, current and voltage, the system performs very well. Conclusions are drawn from the analysis of the simulation results.

II Design Methodology

A. Designing in MATLAB

The FLC comprises two inputs with three membership functions each and two outputs, also with three membership functions each. Fig. 1 shows the two input variables, pressure and area, and the outputs, voltage and current.

Figure 1: Fuzzy logic controller (FLC)

An FLC system in the fuzzy logic inference system (FIS) editor can be assigned any number of inputs, but here it has two inputs, each with three membership functions (MFs). The ranges should be selected according to the desired values of the input and output MFs. The ranges of the inputs and outputs have been taken as 0-100, as shown in Table 1.

Table 1: Ranges selected for inputs and outputs

Fig. 2 shows how the common regions have been differentiated.
The first overlapped region, covering the ranges 0 to 50 and 0 to 100, is called region 1, and the second overlapped region, covering the ranges 0 to 100 and 50 to 100, is called region 2. The same division applies to the inputs and the outputs, and the calculations have been made according to this regional division.

Figure 2: Division of the regions

In designing this system, different rules have been established for better results. The rules involve simple if-then statements and AND logic.

Table 2: Rules for the inputs and outputs

If                 Then
Pressure   Area    Voltage   Current
L          V       L         L
L          S       M         M
L          LA      M         M
M          V       M         M
M          S       M         M
M          LA      M         M
H          V       H         H
H          S       M         M

Fig. 3 and Fig. 4 show the membership functions of the input variables, and Fig. 5 and Fig. 6 show the membership functions of the output variables, voltage and current.

Figure 3: MFs graph for area
Figure 4: MFs graph for pressure
Figure 5: MFs graph for the output voltage
Figure 6: MFs graph for the output current
Figure 7: Surface viewer graph among area, pressure and output voltage

This graph shows that increasing the pressure increases the output voltage, and for medium to high values of the pressure the output voltage is high. Similarly, increasing the area increases the output voltage, but for large values of the area the output voltage remains medium.

Figure 8: Surface viewer graph among area, pressure and output current

This graph shows that increasing the pressure increases the output current, and for medium and high values of the pressure the current remains high. Similarly, increasing the area increases the output current, but for large values of the area the output current remains medium.
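The if-then rules of Table 2 can be written down as a plain lookup. The sketch below is a hypothetical encoding (the paper enters the rules in MATLAB's rule editor) showing how AND pairs the two antecedents; the names `RULES` and `consequent` are chosen here:

```python
# Hypothetical encoding of the Table 2 rule base. AND combines the two
# antecedents, so each rule keys on a (pressure, area) label pair.
# Labels: L/M/H = low/medium/high pressure; V/S/LA = very small/small/large area.

RULES = {
    ("L", "V"):  ("L", "L"),
    ("L", "S"):  ("M", "M"),
    ("L", "LA"): ("M", "M"),
    ("M", "V"):  ("M", "M"),
    ("M", "S"):  ("M", "M"),
    ("M", "LA"): ("M", "M"),
    ("H", "V"):  ("H", "H"),
    ("H", "S"):  ("M", "M"),
}

def consequent(pressure, area):
    """Return the (voltage, current) labels for a crisp rule lookup."""
    return RULES[(pressure, area)]

print(consequent("H", "V"))  # high pressure, very small area -> ('H', 'H')
```

In the actual FIS each rule fires to a degree given by the minimum of its antecedent memberships, rather than as a crisp lookup; the crisp table is only the skeleton of the rule base.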
B. Algorithm Design for the Flow Controller System

For the design algorithm of the fuzzy logic controller, the percentage values of the input and output parameters are: pressure = 23.5, area = 75.3, voltage = 55.6.

Figure 9: Graphs showing the percentage values of the input parameters pressure and area and the corresponding MATLAB-simulated values of the output voltage and current

The value of the pressure (23.5) lies in region 1, as shown in Fig. 10. The membership functions for region 1 are low (L) and medium (M). The values of MF1 and MF2 are:

MF1 = (50 − 23.5)/50 = 0.53
MF2 = 1 − MF1 = 1 − 0.53 = 0.47

Figure 10: Graph showing the values of the membership functions MF1 and MF2 for the pressure

The value of the area (75.3) lies in region 2, as shown in Fig. 11. The membership functions for region 2 are small (S) and large (LA). The values of MF3 and MF4 are:

MF3 = (100 − 75.3)/50 = 0.494
MF4 = 1 − MF3 = 1 − 0.494 = 0.506

Figure 11: Graph showing the values of the membership functions MF3 and MF4 for the area

The value of the voltage (55.6) lies in region 2, as shown in Fig. 12. The membership functions for region 2 are medium (M) and high (H). The values of MF5 and MF6 are:

MF5 = (100 − 55.6)/50 = 0.888
MF6 = 1 − MF5 = 1 − 0.888 = 0.112

Figure 12: Graph showing the values of the membership functions MF5 and MF6 for the voltage

The rules selected for the fuzzy logic controller according to the values of the input parameters (pressure = 23.5, area = 75.3) are listed in Table 3.

Table 3: Selected rules

The pressure (23.5) lies in region 1, the area (75.3) in region 2, the voltage (55.6) in region 2, and the current (55.6) in region 2. Pressure is the first input of the system; its value lies in region 1 of the MF graphs, whose membership functions are low (L) and medium (M), giving MF1 = 0.53 and MF2 = 0.47. The second input parameter is the area, whose value lies in region 2 of the MF graphs.
The membership functions are small (S) and large (LA), with MF3 = 0.494 and MF4 = 0.506. The first output parameter is the voltage, whose value lies in region 2 of the MF graphs; its membership functions are medium (M) and high (H), with MF5 = 0.888 and MF6 = 0.112. Table 4 shows the singleton values for this system, and Table 5 shows the rules corresponding to the MFs.

Table 4: Singleton values
Table 5: Rules corresponding to the MFs

C. Calculations Using Mamdani's Formula

Using Mamdani's formula, the output is calculated for both conditions. Here the Ri are the rules of Table 5 and the Si are the singleton values of Table 4; the singleton values are 0.5 for medium (M) and 1 for high (H). For the voltage:

Σ(Si × Ri) = S1×R1 + S2×R2 + S3×R3 + S4×R4
           = 0.5×0.494 + 0.5×0.112 + 0.5×0.506 + 1×0.112
           = 0.247 + 0.056 + 0.253 + 0.112 = 0.668
ΣRi = R1 + R2 + R3 + R4 = 0.494 + 0.112 + 0.506 + 0.112 = 1.224

Output = Σ(Si × Ri) / ΣRi = 0.668 / 1.224 = 0.546

MATLAB simulation value = 0.556
Calculated value = 0.546
Difference = 0.556 − 0.546 = 0.010

The percentage error is only 1%, which is very small; therefore, the proposed system performs well. The same input values are used for the current calculation: the pressure P = 23.5 lies in region 1 of the MF graphs.
The membership functions are low (L) and medium (M), giving MF1 = 0.53 and MF2 = 0.47. The second input parameter is the area, whose value lies in region 2 of the MF graphs; its membership functions are small (S) and large (LA), with MF3 = 0.494 and MF4 = 0.506. The second output parameter is the current, whose value lies in region 2 of the MF graphs; its membership functions are medium (M) and high (H), with

MF5 = (100 − 55.6)/50 = 0.888
MF6 = 1 − MF5 = 1 − 0.888 = 0.112

Hence, for the current:

Σ(Si × Ri) = S1×R1 + S2×R2 + S3×R3 + S4×R4
           = 0.5×0.494 + 0.5×0.112 + 0.5×0.506 + 1×0.112
           = 0.247 + 0.056 + 0.253 + 0.112 = 0.668
ΣRi = R1 + R2 + R3 + R4 = 0.494 + 0.112 + 0.506 + 0.112 = 1.224

Output = Σ(Si × Ri) / ΣRi = 0.668 / 1.224 = 0.546

MATLAB simulation value = 0.556
Calculated value = 0.546
Difference = 0.556 − 0.546 = 0.010

The percentage error is only 1%, which is very small; therefore, the proposed system performs well.

III Results and Discussion

A fuzzy-logic-based control of a MEMS energy harvester is proposed here to control the output voltage and current of the MEMS-based energy harvester. The system contains an FL controller with two inputs (pressure and area) and two outputs (voltage and current). AND logic and the Mamdani model have been used; the results are given below:

MATLAB simulation value = 0.556
Calculated value = 0.546
Difference = 0.556 − 0.546 = 0.010

The selected rules and their singleton values are:

Rules   Pressure   Area   Voltage   Singleton value
R1      L          S      M         0.5
R2      L          LA     M         0.5
R3      M          S      M         0.5
R4      M          LA     H         1

The corresponding rule firing strengths (AND is the minimum of the MFs) are:

R1 = MF1 ∧ MF3 ∧ MF5 = 0.53 ∧ 0.494 ∧ 0.888 = 0.494
R2 = MF1 ∧ MF3 ∧ MF6 = 0.53 ∧ 0.494 ∧ 0.112 = 0.112
R3 = MF1 ∧ MF4 ∧ MF5 = 0.53 ∧ 0.506 ∧ 0.888 = 0.506
R4 = MF1 ∧ MF4 ∧ MF6 = 0.53 ∧ 0.506 ∧ 0.112 = 0.112

The percentage error is only 1% for both outputs, voltage and current, which is very small; therefore, the proposed system performs well.
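The hand calculation amounts to a weighted average of the rule singletons by their firing strengths. The sketch below reproduces it in Python under the paper's stated assumptions (min() for AND, singletons 0.5 for medium and 1 for high); the variable names are chosen here, and the paper itself works by hand and with MATLAB's Fuzzy Logic Toolbox.

```python
# Weighted-singleton (Mamdani-style) defuzzification for the worked example
# (pressure = 23.5, area = 75.3, output reference = 55.6, all in percent).

mf1 = (50 - 23.5) / 50      # pressure "low"   -> 0.53
mf3 = (100 - 75.3) / 50     # area "small"     -> 0.494
mf4 = 1 - mf3               # area "large"     -> 0.506
mf5 = (100 - 55.6) / 50     # output "medium"  -> 0.888
mf6 = 1 - mf5               # output "high"    -> 0.112

# Rule firing strengths: AND is the minimum of the memberships.
r = [min(mf1, mf3, mf5),    # R1 -> medium (singleton 0.5)
     min(mf1, mf3, mf6),    # R2 -> medium (singleton 0.5)
     min(mf1, mf4, mf5),    # R3 -> medium (singleton 0.5)
     min(mf1, mf4, mf6)]    # R4 -> high   (singleton 1.0)
s = [0.5, 0.5, 0.5, 1.0]

output = sum(si * ri for si, ri in zip(s, r)) / sum(r)
print(round(output, 3))     # 0.546; the MATLAB surface viewer gave 0.556
```

The 0.010 gap between this value and the MATLAB result is the 1% error reported in the paper.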
IV Conclusion

In this study, the output voltage and current of a MEMS-based energy harvester are controlled using a fuzzy-logic-based control system, and a new way of controlling the outputs by changing the inputs has been proposed. The system has been developed in MATLAB using the fuzzy logic Mamdani model. Two inputs, pressure (23.5) and area (75.3), have been selected, and three membership functions have been assigned to the input parameters. The two outputs, voltage (55.6) and current (55.6), are selected for the output power; three membership functions are also assigned to the outputs. The system works according to the rules defined in the fuzzy inference system (FIS), and the results have been displayed in the surface viewer. Different rule combinations were defined in the MATLAB rule editor, and AND logic was used for the simulation. The results obtained from the fuzzy logic controller have been verified using Mamdani's formula for specific values of the inputs and outputs. As there is only a 1% error for both outputs, current and voltage, the system performs very well.

References

[1] S. J. Roundy, "Energy scavenging for wireless sensor nodes with a focus on vibration to electricity conversion", PhD thesis, University of California, Berkeley, 2003.
[2] Z. L. Wang, YouTube video on piezoelectricity and energy harvesting at the nano level.
[3] C.-T. Lin, C. S. G. Lee, "Neural-network-based fuzzy logic control and decision system", IEEE Transactions on Computers, vol. 40, no. 12, December 1991.
[4] J. L. Castro, "Fuzzy logic controllers are universal approximators", IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 4, April 1995.
[5] K. S. Tang, K. F. Man, Z. F. Liu, S. Kwong, "Minimal fuzzy memberships and rules using hierarchical genetic algorithms", IEEE Transactions on Industrial Electronics, vol. 45, no. 1, February 1998.
[6] C. Y. Sue, N. Chyuan, "Human powered MEMS-based energy harvest devices", Applied Energy, vol. 93, pp. 390-403, 2012.
journal of engineering research and technology, volume 7, issue 2, september 2020

table of contents
4. maximum power point tracking control for grid-connected photovoltaic system under partial shading conditions (mohammed s. ibbini, areen g. al-obeidallah), 1-11
5. investigation of power quality indices in jordan university of science and technology grid-tie photovoltaic plant (mohammed s. ibbini, abdullah h. adawi), 12-16
6. integrated land use and transportation modeling within data-poor contexts (emad b. dawwas), 17-22
7. blockchain-based quality of service for healthcare system in the gaza strip (abdelkhalek i. alastal, raed a. salha, maher a. el-hallaq), 23-34

journal of engineering research and technology, volume 2, issue 2, june 2015

comparison and optimization of ozone-based advanced oxidation processes in the treatment of stabilized landfill leachate

salem s. abu amr*1, hamidi abdul aziz1,2, mohammed j.k. bashir3, shuokr qarani aziz4, tamer m. alslaibi5
1 school of civil engineering, engineering campus, universiti sains malaysia, 14300 nibong tebal, penang, malaysia
2 solid waste management cluster, engineering campus, universiti sains malaysia, 14300 penang, malaysia
3 department of environmental engineering, faculty of engineering and green technology, universiti tunku abdul rahman, 31900 kampar, perak, malaysia.
4 department of civil engineering, college of engineering, university of salahaddin-erbil, iraq.
5 environment quality authority, general administrator of natural resources, water resource department, gaza strip, palestine
*corresponding author: dr. salem s. abu amr, tel: +60-45996215; fax: +60-45941009, e-mail: sabuamr@hotmail.com

abstract— leachate pollution is one of the main problems in landfilling. among the most problematic parameters in stabilized leachate are cod, ammonia, and colour. the treatment technology that can be used may differ based on the type of leachate produced, and even after treatment the effluent characteristics are often hard to bring into compliance with the discharge standard. ozonation is one of the chemical processes that can be used in the treatment of landfill leachate; however, its performance when used alone is low, and its effectiveness can be improved using advanced oxidants. to date, the application of fenton and persulfate reagents separately to improve the ozonation process in a single ozone reactor has not been well established. this study aimed to evaluate and compare the performance of three treatment processes, namely ozone, ozone/fenton and ozone/persulfate, in treating stabilized leachate separately at different experimental conditions. according to the results, the performance of ozone alone was poor, and utilizing a new advanced oxidation material during ozonation of such leachate was required to improve leachate treatability. the ozone/fenton process is a viable choice for degrading and decolourizing stabilized leachate. furthermore, the ozone/persulfate process has higher performance in ammonia removal as well as good removal efficiency of cod and colour from stabilized leachate. suitable data for establishing a full treatment plant for stabilized leachate using ozone/fenton and ozone/persulfate are suggested.
the final effluent of the ozone/fenton process complied with the discharge standard for cod and colour.

index terms— advanced oxidation process, ozonation, fenton, persulfate, treatment efficiency.

abu amr et al. / comparison and optimization of ozone-based advanced oxidation processes in the treatment of stabilized landfill leachate (2015)

i. introduction

growing population and industrial development have increased the waste generated in urban areas and elsewhere. in most countries, sanitary landfilling is the most common way of eliminating municipal solid waste (msw) (renou et al., 2008)[2]. msw is waste from domestic, commercial, and industrial activities in urban areas (bartone, 1990)[1]. sanitary landfilling is the most economical and environment-friendly method for disposing of municipal and industrial solid waste (tengrui et al., 2007)[3]. malaysia generates about 6.2 million tons of solid waste per year, which equals approximately 25,000 tons per day. this amount is expected to increase to more than 31,000 tons per day by 2020 because of increasing population and per capita waste generation (yahya, 2012)[4]. food, paper, and plastic constitute 80% of the overall weight of malaysian waste (manaf et al., 2009)[5]. the average amount of msw generated in malaysia is 0.5 to 0.8 kg/capita/day, and that in major cities is as high as 1.7 kg/capita/day (kathirvale et al., 2003)[6]. despite the many advantages of landfilling, the resulting highly polluted leachate has been a cause of significant concern, especially because landfilling is the most common technique of solid waste disposal (ghafari et al., 2005)[7]. landfill leachate is liquid that has seeped through solid waste in a landfill and extracted dissolved or suspended materials in the process. the environmental impact of leachate depends on leachate strength, proper leachate collection, and the efficiency of leachate treatment.
leachate contains high amounts of organic compounds, ammonia, and heavy metals and sometimes contaminates ground and surface water (christensen et al., 2001)[8]. landfill leachate usually contains a complex variety of materials and organic compounds, such as humic substances, fatty acids, heavy metals, and many other hazardous chemicals (schrab et al., 1993)[9]. researchers worldwide are still searching for a complete solution to the leachate problem. multiple-stage treatments are still required to remove leachate pollution thoroughly; no single method can effectively remove all pollutants simultaneously. treatment by a conventional water treatment system (i.e., a combination of sedimentation, biological treatment, filtration, and carbon adsorption) cannot remove salts or recalcitrant organics. this research aims to establish new technology and knowledge in stabilized leachate treatment by using ozone-based advanced oxidation processes (ozone, ozone/fenton, and ozone/persulfate) to reduce treatment time and improve treatment efficiency by increasing the oxidation potential.

ii. materials and methods

a. ozone oxidation

ozone experiments were conducted on 2 l samples using an ozone reactor with a height of 65 cm and an inner diameter of 16.5 cm. the reactor was supported by a cross-column ozone chamber to enhance ozone gas diffusion (figure 3.8 a and b). a water bath and cooling system kept the internal reaction temperature at <15 °c, which is optimal for the half-life of dissolved ozone in water (30 min) (lenntech, water treatment solution, 2012). ozone was produced by a bmt 803 generator (bmt messtechnik, germany) fed with pure dry oxygen at a recommended gas flow rate of 200-1000 ml/min ± 10% under 1 bar pressure. the input ozone gas concentration (30-80 g/m3 ntp ± 0.5%) was measured by an ultraviolet gas ozone analyzer (bmt 964).
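as a rough illustration of the dosing implied by these generator settings, the applied ozone dose for a batch sample can be estimated as gas concentration × gas flow × time / sample volume. the sketch below uses mid-range assumed values, not the paper's actual operating point:

```python
# rough illustration (not from the paper): the applied ozone dose for a
# batch sample follows from the generator settings quoted above. the
# specific numbers used below are mid-range assumptions.

def applied_ozone_dose(conc_g_m3, flow_ml_min, time_min, sample_l):
    """applied ozone dose in mg per litre of sample (assumes all
    generated ozone reaches the sample; the transferred dose is
    lower in practice)."""
    flow_l_min = flow_ml_min / 1000.0             # ml/min -> l/min
    ozone_mg = conc_g_m3 * flow_l_min * time_min  # g/m3 == mg/l of gas
    return ozone_mg / sample_l

# e.g. 50 g/m3 at 500 ml/min for 60 min into the 2 l sample
print(applied_ozone_dose(50, 500, 60, 2.0))  # 750.0 (mg/l)
```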
the initial ph of the leachate samples was adjusted to values between 3 and 11 in order to find the optimal initial ph for treating stabilized leachate by ozone. the reaction time was varied between 10 and 120 min to determine the optimal ozonation time (tizaoui et al., 2007)[10].

b. ozone/fenton in the advanced oxidation process

fenton reagent (h2o2/fe2+) was employed for advanced oxidation during the ozonation of stabilized leachate. h2o2 (30%) and ferrous sulfate heptahydrate (feso4·7h2o, 278.02 g/mol) were used in preparing the fenton reagent, which was then added to the leachate sample in the ozone reactor as one reaction process.

c. ozone/persulfate in the advanced oxidation process

persulfate (s2o8 2−) as sodium persulfate (na2s2o8, m = 238 g/mol) was employed for advanced oxidation during the ozonation of stabilized leachate; it was added to the sample in the ozone reactor as one reaction process.

d. biodegradable and soluble cod fractions

the effects of the three ozonation treatment processes, namely ozone alone, ozone/fenton and ozone/persulfate, on the biodegradable and soluble characteristics of stabilized solid waste leachate were investigated in this research. the fractions of biodegradable cod (bi), non-biodegradable cod (ubi), soluble cod (s), biodegradable soluble cod (bsi), non-biodegradable soluble cod (ubsi), and particulate cod (pcod) were examined and calculated before and after each ozonation treatment process.

iii. results and discussion

a. comparison of the three oxidation processes

comparing the different ozone oxidation processes is of interest to determine the best removal performance for cod, colour and ammonia, as well as the enhancement of biodegradability and the effects on the cod fractions of stabilized leachate.
the aim of this study was to evaluate the above-mentioned approaches in terms of reducing the organic load and ammonia, decreasing colour, and enhancing the biodegradable characteristics of stabilized leachate. to investigate the performance of combined ozone application and two advanced oxidant reagents, stabilized leachate was treated with ozone oxidation alone, ozone/fenton and ozone/persulfate in the aops, respectively.

1. comparison of cod, colour and nh3-n removal

the three ozone oxidation processes are compared in figure 1 in terms of cod, colour, and ammonia reduction under optimal operational conditions. in the o3/h2o2/fe2+ system, the ferrous ions react with h2o2, resulting in the formation of hydroxyl radicals (•oh) (eq. (1)). •oh has the potential to destroy and degrade organic pollutants (hermosilla et al., 2009)[11].

fe2+ + h2o2 → fe3+ + oh− + •oh    (1)

the reaction of ozone with h2o2 also generates •oh radicals: h2o2 dissolves in water and dissociates into the hydroperoxide ion (ho2−), which rapidly reacts with ozone to initiate a radical chain mechanism that generates hydroxyl radicals (staehelin et al., 1982; glaze et al., 1987)[12,13]. the removal efficiency of the target parameters generally decreased with increasing fenton molar ratio. in the ozone/persulfate reaction, the na2s2o8 dosage was fixed as a cod/s2o8 2− ratio (g/g) from 1/1 to 1/10 during 60 min ozonation of leachate (fig. 3) to evaluate the role of s2o8 2− in improving ozonation. persulfate oxidation can be enhanced by the release of sulfate radicals, which have powerful effects on the oxidation of organics (watts, 2011)[14]. the generation of sulfate radicals during oxidation can be significantly enhanced by catalysts such as heat and uv radiation (eqs. (2) and (3)), which were found to improve the persulfate oxidation potential (gao et al., 2012; abu amr and hamidi, 2012)[15, 16]:

s2o8 2− + heat/uv → 2 so4•−    (2)
so4•− + h2o → •oh + so4 2− + h+    (3)

3. comparison of the effect on biodegradability (bod5/cod)

the bod5/cod ratio of young (<5 years), intermediate (5-10 years) and stabilized (>10 years) leachate is >0.3, 0.1 to 0.3, and <0.1, respectively (schiopu et al., 2010).
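the age-based classification above reduces to two thresholds on the bod5/cod ratio; a minimal helper (illustrative, not from the paper) makes the boundaries explicit:

```python
# small helper (illustrative, not from the paper) encoding the
# bod5/cod thresholds cited above: >0.3 young, 0.1-0.3 intermediate,
# <0.1 stabilized leachate.

def leachate_age_class(bod5, cod):
    ratio = bod5 / cod
    if ratio > 0.3:
        return "young"
    if ratio >= 0.1:
        return "intermediate"
    return "stabilized"

# the raw leachate in this study (bod5/cod = 0.034-0.05) is stabilized;
# the ozone/persulfate effluent (0.29) reaches the intermediate range
print(leachate_age_class(34, 1000))   # stabilized
print(leachate_age_class(290, 1000))  # intermediate
```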
stabilized leachate with very low biodegradability (bod5/cod = 0.034 to 0.05) and very strong organics makes biological treatment difficult. different ozone applications have been used to enhance the biodegradability of landfill leachate (tizaoui et al., 2007; bila et al., 2004; cortez et al., 2011b; cortez et al., 2011a)[10, 27-29]. however, the performance of ozone alone in improving the ratio was still very low. based on the results, the ozone/persulfate process is an efficient method for enhancing the biodegradability of stabilized leachate (table 1).

table 1: comparison of the effect of the three ozone applications on biodegradability

process              bod5/cod
raw leachate         0.034 - 0.05
ozone alone          0.06
ozone/fenton         0.14
ozone/persulfate     0.29

4. comparison of the effect on cod fractions

cod fractionation is one of the most important indicators of leachate quality; however, the effects of ozone applications on these fractions have not previously been evaluated. the effects of the three ozone applications on the cod fractions in stabilized leachate are compared in this section (table 2). the quantity of biodegradable and soluble cod fractions in the raw leachate was relatively low, whereas that of the non-biodegradable, non-soluble, and particulate fractions was high. as shown in table 2, each treatment process improved the biodegradable, soluble, and biodegradable soluble cod fractions but reduced the non-biodegradable, particulate, and non-biodegradable soluble fractions. based on the results, ozone/persulfate is an efficient process to improve the biodegradability and solubility of organics in stabilized leachate.
table 2: comparison of the effect of the three ozone applications on cod fractions

fraction                            raw leachate   ozone only   ozone/fenton   ozone/persulfate
biodegradable cod (%)               24             28           36             39
non-biodegradable cod (%)           76             72           68             61
soluble cod (%)                     59             65           72             72
particulate cod (%)                 41             35           28             28
biodegradable soluble cod (%)       40             43           51             55
non-biodegradable soluble cod (%)   60             57           49             45

[figure: sulfate residual concentration (mg/l) after each treatment process]

5. comparison of ammonia removal during aeration as a post-treatment stage

the removal of ammonia from leachate before and after ozonation during batch aeration as a post-treatment stage was documented to evaluate and compare the performance of the three ozonation processes. in raw stabilized leachate, full removal of ammonia was obtained after 7 days of aeration, but ammonia removal did not significantly improve after the ozone-alone process (figure 4). in the ozone/fenton process, ammonia was completely removed after 4 days of aeration, and removal significantly improved during the first day of aeration (69%) (figure 5); by comparison, the performance of fenton oxidation alone in ammonia removal was much poorer. in the ozone/persulfate process, ammonia was completely removed after only 2 days of aeration (figure 6), and around 92% ammonia removal was achieved after the first 24 h of aeration, compared with 52% and 69% after the ozone-alone and ozone/fenton processes, respectively. the performance of persulfate oxidation alone in ammonia removal was likewise much poorer. different applications have been reported for ammonia removal during aerobic and anaerobic biological processes of leachate. gotvajn et al.
(2009)[30] obtained an 86% removal rate for ammonia after 50 h of aeration at a ph of 11.

figure 4: ammonia removal from leachate before and after ozone alone by the aeration process
figure 5: ammonia removal from leachate before and after ozone/fenton treatment by the aeration process
figure 6: ammonia removal from leachate before and after ozone/persulfate treatment by the aeration process

leachate is formed when water, mainly from rain, infiltrates deposited waste. as the liquid moves through the landfill, many organic and inorganic compounds, such as ammonia and heavy metals, are transported into the leachate. the leachate then moves to the surface or base of the landfill cell and may pollute surface water and groundwater, which may affect human health and the aquatic environment. many factors affect the quality and quantity of leachate, such as seasonal weather variation, landfilling technique, waste type and composition, and landfill structure (mohajeri, 2010)[20].
leachate pollution in malaysia is very serious, and the high generation of landfill leachate in tropical areas such as malaysia is mainly attributed to the high amount of rainfall (lema et al., 1988)[31].

iv. conclusion

the performance of the three ozonation techniques in the aop, namely ozone alone, ozone/fenton, and ozone/persulfate, in treating stabilized leachate was investigated and compared. according to the results, the performance of ozone alone was poor, and utilizing a new advanced oxidation material during ozonation of such leachate was required to improve leachate treatability. ozone/fenton in the aop is a viable choice for degrading and decolourizing stabilized leachate. this process was found to be ideal because it can achieve 99% colour removal, 79% cod removal and up to a 50% reduction in treatment time compared with the classical combination of fenton and ozone processes, with higher removal efficiency. the process achieved a desirable oc value for cod removal (0.29 kg/kg cod) compared with other methods. furthermore, the process reduced iron ions (3.5 mg/l) to lower than the maximum acceptable level (5 mg/l). moreover, biodegradability (the bod5/cod ratio) was significantly improved, as were the biodegradable and soluble organic fractions. ozone/persulfate in the aop significantly removed nh3-n, cod, and colour. the process achieved high biodegradability (bod5/cod = 0.29) compared with the other treatment methods, which suggests further organic degradation via a biological process as a post-treatment. the performance of the ozone/persulfate process in improving the biodegradable and soluble organic fractions in stabilized leachate was better than that of the other processes. furthermore, the process completely removed iron ions from stabilized leachate and produced no undesirable sludge. sulfate ions are not harmful to the environment, and sulfates can decompose in further biological processes.
biodegradability (bod5/cod) was enhanced from 0.034-0.05 to 0.06, 0.14 and 0.29 after ozone alone, ozone/fenton and ozone/persulfate, respectively. the results reveal that ozone/persulfate in the aop achieved the highest ratio among the three treatment processes, which suggests a biological process as a post-treatment for further organic degradation and ammonia removal. the effect of the three ozonation processes on the cod fractions in stabilized leachate was also documented in this research. based on the results, the ozone/persulfate process is the best choice for improving the biodegradable and soluble fractions in stabilized leachate.

acknowledgment

this work is funded by universiti sains malaysia under the iconic grant scheme (grant no. 1001/ckt/870023) for research associated with the solid waste management cluster, engineering campus, universiti sains malaysia.

references

[1] bartone, c.r. (1990). economic and policy issues in resource recovery from municipal solid wastes. resources, conservation and recycling, 4(1-2), 7-23.
[2] renou, s., givaudan, j.g., poulain, s., dirassouyan, f., moulin, p. (2008). landfill leachate treatment: review and opportunity. journal of hazardous materials, 150, 468-493.
[3] tengrui, l., al-harbawi, a.f., bo, l.m., jun, z. (2007). characteristics of nitrogen removal from old landfill leachate by sequencing batch biofilm reactor. journal of applied sciences, 4, 211-214.
[4] yahaya, n. (2012). solid waste management in malaysia: the way forward. national solid waste management department, ministry of housing and local government. retrieved 8 july 2013 at http://ensearch.org/wpcontent/uploads/2012/07/paper-13.pdf
[5] manaf, l.a., samah, m.a.a., zukki, n.i.m. (2009). municipal solid waste management in malaysia: practices and challenges. waste management, 29(11), 2902-2906.
[6] kathirvale, s., muhd yunus, m.n., sopian, k., samsudding, a.h. (2003). energy potential from municipal solid waste in malaysia. renewable energy, 29, 559-567.
[7] ghafari, s., aziz, h.a., isa, m.h. (2005). coagulation process for semi-aerobic leachate treatment using polyaluminum chloride. in: the aeeseap international conference "engineering a better environment for mankind", kuala lumpur, malaysia, june 7-9.
[8] christensen, t.h., kjeldsen, p., bjerg, p.l., jensen, d.l., christensen, j.b., baum, a., albrechtsen, h., heron, g. (2001). biogeochemistry of landfill leachate plumes. applied geochemistry, 16, 659-718.
[9] schrab, g.e., brown, k.w., donnelly, k.c. (1993). acute and genetic toxicity of municipal landfill leachate. water, air, and soil pollution, 69, 99-112.
[10] tizaoui, c., bouselmi, l., mansouri, l., ghrabi, a. (2007). landfill leachate treatment with ozone and ozone/hydrogen peroxide systems. journal of hazardous materials, 140, 316-324.
[11] hermosilla, d., cortijo, m., huang, m.c.p. (2009). optimization of the treatment of landfill leachate by conventional fenton and photo-fenton processes. science of the total environment, 407, 3473-3481.
[12] staehelin, j., hoigne, j. (1982). decomposition of ozone in water: rate of initiation by hydroxide ions and hydrogen peroxide. environmental science and technology, 16, 676-681.
[13] glaze, w.h., kang, j.w., chapin, d.h. (1987). the chemistry of water-treatment processes involving ozone, hydrogen peroxide and ultraviolet radiation. ozone: science and engineering, 9, 335-352.
[14] watts, r.j. (2011). enhanced reactant-contaminant contact through the use of persulfate in situ chemical oxidation (isco). serdp project er-1489, washington state university.
[15] gao, y., gao, n., deng, y., yang, y., ma, y. (2012). ultraviolet (uv) light-activated persulfate oxidation of sulfamethazine in water. chemical engineering journal, 195-196, 248-253.
[16] abu amr, s.s., aziz, h.a. (2012).
new treatment of stabilized leachate by ozone/fenton in the advanced oxidation process. waste management, 32, 1693-1698.
[17] shiying, y., ping, w., xin, y., guang, w., wenyi, z., liang, s. (2009). a novel advanced oxidation process to degrade organic pollutants in wastewater: microwave-activated persulfate oxidation. journal of environmental sciences, 21, 1175-1180.
[18] abu amr, s.s., aziz, h.a. (2013). optimization of stabilized leachate treatment using ozone/persulfate in the advanced oxidation process. waste management, 33(6), 1434-1441.
[19] environmental quality (control of pollution from solid waste transfer station and landfill) regulations (2009). under the laws of malaysia, environmental quality act 1974, 2009.
[20] mohajeri, s. (2010). treatment of landfill leachate using electro-fenton processes. phd thesis, school of civil engineering, universiti sains malaysia.
[21] arslan, v., bayat, o. (2009). iron removal from turkish quartz sand by chemical leaching and bioleaching. minerals and metallurgical processing, 26(1), 35-40.
[22] aziz, h.a., adlan, m.n., zahari, m.s.m., alias, s. (2004). removal of ammoniacal nitrogen (n-nh3) from municipal solid waste leachate by using activated carbon and limestone. waste management and research, 22, 371-375.
[23] mohajeri, s., aziz, h.a., isa, m.h., zahed, m.a., adlan, m.n. (2010b). statistical optimization of process parameters for landfill leachate treatment using electro-fenton technique. journal of hazardous materials, 176, 749-758.
[24] eren, z., acar, f.n. (2006). effect of fenton's reagent on the degradability of ci reactive yellow 15. coloration technology, 122(5), 259-263.
[25] monheimer, r.h. (1975). effects of three environmental variables on sulfate uptake by aerobic bacteria. applied microbiology, 30, 975-981.
[26] meulepas, r.w., jagersma, c.g., khadem, a.f., buisman, c.a., stams, a.m., lens, p.l. (2009).
effect of environmental conditions on sulfate reduction with methane as electron donor by an eckernförde bay enrichment. environmental science and technology, 43, 6553-6559.
[27] bila, d.m., montalvao, a.f., silva, a.c., dezotti, m. (2005). ozonation of landfill leachate: evaluation of toxicity removal and biodegradability improvement. journal of hazardous materials, b117, 235-242.
[28] cortez, s., teixeira, p., oliveira, r. (2011a). mature landfill leachate treatment by denitrification and ozonation. process biochemistry, 46, 148-153.
[29] cortez, s., teixeira, p., oliveira, r., mota, m. (2011b). evaluation of fenton and ozone-based advanced oxidation processes as mature landfill leachate pre-treatment. journal of environmental management, 92, 749-755.
[30] gotvajn, a.z., derco, j., tisler, t., cotman, m., zagorc-koncan, j. (2009). removal of organics from different types of landfill leachate by ozonation. water science and technology, 3, 597-603.
[31] imai, a., onuma, k., inamori, y., sudo, r. (1998). effects of preozonation in refractory leachate treatment by the biological activated carbon fluidized bed process. environmental technology, 19, 221-73.

dr. salem abu amr is a postdoctoral researcher in civil/environmental engineering at universiti sains malaysia (usm). he received his ph.d. in environmental engineering from usm in 2013. he obtained his b.sc. in environment and earth sciences in 2001 and his m.sc.
in water resources management from the faculty of civil engineering, islamic university, gaza in 2005. he acquired practical experience working on various environmental engineering aspects including water/wastewater treatment and management, drinking water and sanitary sewer distribution system monitoring, and the development of advanced water/wastewater treatment technologies. he has over 35 publications in international conferences and isi journals: 20 articles in refereed isi & scopus indexed journals, 5 international articles, and 10 publications in international conference proceedings in this field.

dr. hamidi abdul aziz is a professor in environmental engineering in the school of civil engineering of universiti sains malaysia. professor aziz received his phd degree in civil engineering (environmental engineering) from the university of strathclyde in scotland in 1992. to date, he has published over 200 refereed articles in professional journals and proceedings, 16 chapters in refereed international books, and 8 chapters in refereed national books. he has also published 9 research books. dr. aziz continues to serve as a peer reviewer for more than 40 international journals and has reviewed 400 international papers to date. he also serves as a guest editor of the special issue on landfill leachate management and control of the international journal of environment and waste management (ijewm). professor aziz currently serves as the editor-in-chief of the international journal of scientific research in environmental sciences (ijsres). he also serves as the managing editor of the international journal of environment and waste management (ijewm) and the international journal of environmental engineering (ijee). aside from these, he is a member of the editorial boards of 10 other international journals in the environmental discipline.
Professor Aziz's research focuses on alleviating water pollution problems arising from industrial wastewater discharges and from solid waste management via landfilling, such as landfill leachate. Advanced oxidation processes are one of his research focuses.

Dr. Mohammed J. K. Bashir is an assistant professor of environmental engineering at the Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman (UTAR), Malaysia. Dr. Bashir received his B.Sc. in civil engineering from the Islamic University of Gaza, Palestine, and his M.Sc. and Ph.D. in environmental engineering from the School of Civil Engineering, Universiti Sains Malaysia. He has received several awards and has published many refereed articles in professional journals and proceedings. Dr. Bashir's research focuses on wastewater treatment, solid and hazardous waste management, and environmental sustainability.

Dr. Shuokr Qarani Aziz is an assistant professor in the Civil Engineering Department, College of Engineering, Salahaddin University-Erbil, Iraq, and is currently head of the department. He received his B.Sc. in civil engineering and M.Sc. in sanitary engineering from Salahaddin University-Erbil, and his Ph.D. in environmental engineering from Universiti Sains Malaysia (USM). He has more than 40 published works in the fields of water and wastewater treatment, solid waste management and noise pollution.

Dr. Tamer M. Alslaibi holds a Ph.D. in water resources engineering from Universiti Sains Malaysia under The World Academy of Sciences (TWAS) fellowship, and an M.Sc. in water resources engineering from the Islamic University of Gaza. He was one of the winners of the Energy Globe World Award as the best practical project in the water category in 2014. Marquis Who's Who selected his biographical profile for inclusion in Who's Who in the World 2015 (32nd edition).
Water and wastewater treatment, activated carbon production, adsorption and solid waste management are his primary research interests. He has authored or co-authored several papers in ISI journals and international conference proceedings in these fields. He currently works as director of the water quality department in the Palestinian Environment Quality Authority and as a reviewer for more than ten high-impact-factor (ISI) journals. He is a member of the International Water Association (IWA) and the American Institute of Chemical Engineers (AIChE).

Journal of Engineering Research and Technology, Volume 2, Issue 3, September 2015 197

A Backpropagation Feedforward NN for Fault Detection and Classifying of Overhead Bipolar HVDC TL Using DC Measurements

Assad Abu-Jasser 1, Mahmoud Ashour 2
1 Associate Professor, Electrical Engineering Department, The Islamic University of Gaza, Palestine, P.O. Box 108, ajasser@iugaza.edu.ps
2 M.Sc. of Electrical Engineering, Gaza Electricity Distribution Company, Palestine, mymar2500@hotmail.com

Abstract—This paper suggests the use of back-propagation feed-forward artificial neural networks (NN) for fault detection and classification in high voltage direct current (HVDC) transmission lines (TL). To achieve these tasks, post-fault measurements of the DC voltages and currents at the rectifier station, related to the pre-fault measurements, are used as inputs to the neural network. A bipolar HVDC TL model, 940 km long at ±500 kV, is chosen for the study. This paper handles the most frequent kinds of overhead bipolar HVDC TL power faults, and the results obtained are completely satisfactory.

Index Terms—HVDC transmission lines, fault detection, fault classification, NN, feedforward, backpropagation, bipolar, overhead, DC measurements, Three Gorges-Changzhou (3GC).

I. Introduction

Electrical power is generated as an alternating current (AC).
It is also transmitted and distributed as AC and, apart from certain traction and industrial drives and processes, it is consumed as AC. But high-voltage AC transmission links have disadvantages that may compel a change to DC technology. HVDC is preferred in the following categories [1]:
1. Transmission of bulk power where AC would be uneconomical, impracticable or subject to environmental restrictions.
2. Interconnection between systems which operate at different frequencies, or between non-synchronized or isolated systems.

HVDC TLs, which are growing in size and complexity, will always be exposed to failures of their components. In the case of a failure, the faulty element should be disconnected from the rest of the sound system in order to minimize the damage to the faulty element and to remove the emergency situation for the entire system. This action should be taken fast and accurately, and it is accomplished by a set of automatic protective relaying devices [2]. A bipolar HVDC TL is one type of HVDC TL configuration, consisting of two poles, each of which includes one or more twelve-pulse converter units, in series or parallel. There are two conductors, one with positive and the other with negative polarity to ground [3]. Figure (1) shows a bipolar HVDC system. This type of HVDC TL can be exposed to different kinds of faults, the most frequent of which are:
1. Positive line to ground fault (+ve/gnd).
2. Negative line to ground fault (-ve/gnd).
3. Positive line to negative line fault (+ve/-ve).
4. Positive line open circuit fault (+ve o.c).
5. Negative line open circuit fault (-ve o.c).

The identification and classification of faults is important for safe and optimal operation of power systems [4].
Neural networks are a useful technique for detecting and classifying faults in an HVDC TL, since the functional relationship producing the outputs (the faults and their classes) is neither well defined nor easily computable. Furthermore, neural networks can compute the answer quickly by using associations learned from previous experience, gained either from time-domain simulations or from practical experience [5]. The direct currents and voltages at the rectifier station are the quantities most affected when a fault occurs on the HVDC TL and when the fault type changes. This characteristic can be used to detect and classify HVDC TL faults by using the measurements of these voltages and currents as inputs to a NN.

Figure (1): Bipolar HVDC system [3]

Figure (2): Basic 3-layer feed-forward NN [11]

In 1992, the use of a NN to identify faults in an AC-DC system with a back-to-back HVDC link was studied [6]. That paper focused on identifying faults on an HVAC TL with the possibility of a fault in the back-to-back HVDC section. The researchers found a way to detect and classify faults, but they did not handle DC faults and only referred to them as "DC fault" without classification. In 1993, the use of a NN for HVDC system fault diagnosis was studied [7]. A 20-12-4 NN structure was used to classify 16 different fault types for a six-pulse HVDC system. That paper focused on detecting and classifying faults in the AC-DC section and faults that may occur in the converter. In 1998, a radial basis function NN was used for fault diagnosis in an HVDC system. The researchers used eight different inputs to classify five types of faults (four of them designated to classify AC faults), and reduced the number of inputs by using the ground current instead of the three AC system currents [8].
As in the earlier papers, these researchers did not handle the HVDC TL section and only referred to faults in the HVDC part as "DC fault" without classification. In 2000, a new method was proposed to reduce the required training data, at the cost of a time delay, by combining expert systems with a NN to classify HVDC faults only. The same approach as in the 1998 work was used to reduce the inputs by using the ground current, and expert knowledge was used to reduce the training data. In that paper, the inputs were taken as patterns with a window length covering both the pre-fault and post-fault regions [9]. In 2014, researchers used an ANN for fault classification on HVDC systems and succeeded in diagnosing HVDC faults. There, the NN output predicts the change in the firing angle required for the HVDC rectifier unit, where each value of the firing angle refers to a specific type of fault [10]. In this paper, the work focuses on the faults of an HVDC TL link, not on the AC grid of the power system nor on the rectifier or inverter stations. Depending on only four DC measurements, used as inputs to a dedicated neural network, the five most frequent types of faults to which a twelve-pulse bipolar overhead HVDC TL can be exposed can be detected and classified precisely.

II. Artificial Neural Networks

A neural network (NN) can be described as a set of elementary neurons that are usually connected in biologically inspired architectures and organized in several layers [11]. The topology of a three-layer feed-forward ANN is shown in Figure (2). There are Ni neurons in each i-th layer, and the inputs to these neurons are connected to the neurons of the previous layer. The input layer is fed with the excitation signals. Simply put, an elementary neuron is like a processor that produces an output by performing a simple non-linear operation on its inputs [12]. A weight is attached to each neuron, and training an ANN is the process of adjusting the different weights tailored to the training set.
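The elementary neuron just described can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the log-sigmoid activation is an assumption (it is a common choice in backpropagation feed-forward networks).

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through a
    # log-sigmoid activation: the "simple non-linear operation".
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# With zero weights and zero bias the sigmoid returns its midpoint:
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # prints 0.5
```

Training adjusts `weights` and `bias` for every neuron so that the network's outputs match the targets of the training set.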
An artificial neural network learns to produce a response to given inputs by adjusting the node weights. Hence, we need a set of data, referred to as the training data set, which is used to train the neural network.

III. HVDC System Model

The HVDC system model used here was constructed in China in 2003 to connect Three Gorges with Changzhou (3GC). It is a 940-kilometre (580 mi) long, 3000 MW, 12-pulse, bipolar HVDC transmission line. The 3GC ±500 kV DC transmission project is an integral part of the Three Gorges hydroelectric power project. The DC transmission is used to transmit the bulk power generated by this project to the Shanghai area in east China, interconnecting the central and eastern power regions of China. The 3000 MW rated power is transmitted over a distance of 940 km on a single bipolar DC line at ±500 kV. Figure (3) shows the single line diagram of 3GC [13]. The HVDC link is designed for a continuous rating of 2x1500 MW under relatively conservative conditions specified for the system, ambient and outage. It has overload capability when the temperature is lower than the extreme value, redundant cooling equipment is in service, and allowance exists in the equipment design. The bipolar link has a continuous overload capability of 3480 MW and a 5-second overload capability of 4500 MW. The nominal reverse power transfer capability is 90% of the rated power. The HVDC link is designed to operate continuously down to a DC voltage of 70% of rated voltage [14]. The main component specifications of the HVDC system are summarized in Table (1). Figure (4) shows the HVDC network used; it was simulated with the MATLAB SimPowerSystems toolbox and developed from [15].

IV. Studying Model Outputs

Many outputs can be extracted from the studied model, but the quantities of interest are the DC voltages and currents at the rectifier terminals.
The network is simulated with a sampling time of 5×10⁻⁵ s for the five fault types, applied at 3- and 5-km steps along the line from the 15th km to the 925th km, and it is also simulated many times for no-fault cases. In each simulation the needed data are recorded for use as neural network inputs. The voltages and currents at the instant of the fault do not give a clear enough picture for the neural network to work properly. Instead, the output data must relate the post-fault voltages and currents to their pre-fault values: the voltages and currents one cycle after the fault, related to the corresponding values one cycle before the fault, are used to represent the output data. This period is taken as the reference for all measurements in this paper. The reference time period must be shorter than the protection system's set time, so that all measurements can be taken in an online environment.

Figure (3): Three Gorges-Changzhou single line diagram [13]

Figure (4): HVDC MATLAB model

Table (1): Summarized specifications of the system

Rectifier side:
- AC source: ph-ph voltage 210.4 kV, internal resistance 0 Ω, internal inductance 98.03 mH, frequency 50 Hz
- Converter transformers: single phase, 297.5 MVA, 525/210.4 kV
- Smoothing reactor: 290 mH

Transmission line: length 940 km, resistance 0.015 Ω/km, inductance 0.792 mH/km, capacitance 14.4 µF/km

Inverter side:
- AC source: ph-ph voltage 200.4 kV, internal resistance 0 Ω, internal inductance 28 mH, frequency 50 Hz
- Converter transformers: single phase, 283.7 MVA, 500/200.4 kV
- Smoothing reactor: 270 mH
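The post-fault/pre-fault ratio described above can be sketched directly. This is a hypothetical illustration: the function names and the synthetic waveform are invented for the example, while the 0.26 per-unit value echoes the +ve/gnd column of Table (2).

```python
def samples_per_cycle(f_hz, t_s):
    # One AC cycle lasts 1/f_hz seconds; divide by the sampling period.
    return round(1.0 / (f_hz * t_s))

def post_pre_ratio(samples, fault_idx, n):
    # Value one cycle (n samples) after the fault instant, divided by
    # the value one cycle before it.
    return samples[fault_idx + n] / samples[fault_idx - n]

n = samples_per_cycle(50, 5e-5)
print(n)  # 400 samples per 50 Hz cycle at a 5e-5 s sampling time

# Hypothetical per-unit Vdc1 trace: 1.0 before the fault, 0.26 after.
vdc1 = [1.0] * 400 + [0.26] * 401
print(post_pre_ratio(vdc1, 400, n))  # 0.26
```

The same ratio is formed for all four DC signals, giving the four NN inputs.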
A cycle time (an AC period) is used even though the measurements are in DC form, because of the effect of harmonics on those measurements. The sampling time is 5×10⁻⁵ s and the network frequency is 50 Hz, so each cycle is represented by 400 samples. If the frequencies on the two sides were different, the frequency used in the simulation would have to be a multiple of both. The network outputs, which are the NN inputs, are:

outputs = [Vdc1*  Vdc2*  Idc1*  Idc2*]ᵀ = [Vdc1(+400)/Vdc1(−400)  Vdc2(+400)/Vdc2(−400)  Idc1(+400)/Idc1(−400)  Idc2(+400)/Idc2(−400)]ᵀ

where x(+400) represents the value of x 400 samples after the instant of the fault and x(−400) the value of x 400 samples before it. The 400 samples represent a complete cycle at 50 Hz frequency and 5×10⁻⁵ s sampling time.
- Vdc1: DC voltage of the positive line at the rectifier DC side.
- Vdc2: DC voltage of the negative line at the rectifier DC side.
- Idc1: DC current of the positive line.
- Idc2: DC current of the negative line.

V. HVDC TL Fault Detection

To detect faults, we must study the differences between the outputs of the simulated model when a fault occurs and when there is no fault. Table (2) shows the model outputs for each type of fault at distances of 190, 375, 560 and 755 km from the rectifier side, together with the no-fault data. From Table (2), each type of fault has a distinctive data pattern; the no-fault case is the normal case, with values of approximately one. By studying the DC data we can distinguish between the fault types and detect the fault.

VI. Training and Testing of the Fault Detection Neural Network

To train the neural network to detect the existence of a fault, 952 different cases covering the fault types and the no-fault condition were used as inputs. The output of the neural network is '1' when a fault exists and '0' when there is no fault. Many NN topologies were trained to get the best performance; Figure (5) shows the chosen 4-4-8-1 NN topology and its performance when using the conjugate gradient with Powell-Beale restarts training function.
A best validation mean square error (MSE) of 1×10⁻¹¹ and a gradient of 6.73×10⁻¹¹ were achieved. To test the chosen neural network, 393 model-simulation outputs for conditions different from the training conditions were used as inputs. The maximum error between the NN outputs and the targets is 7.3×10⁻⁷ (approximately no error).

Table (2): Model outputs for various fault types and no-fault conditions

Fault:           +ve/gnd      -ve/gnd      +ve/-ve      +ve o.c      -ve o.c    no fault
Distance (km):   190    375   190    375   190    375   190    375   190    375
Vdc1*            0.26   0.23  0.72   0.68  0.58   0.34  2.52   2.30  1.02   0.99   0.99
Vdc2*            0.74   0.70  0.27   0.23  0.58   0.34  1.05   1.02  2.52   2.30   0.99
Idc1*            3.60   3.60  0.12   0.17  2.81   2.62  0.00   0.01  1.27   1.29   1.00
Idc2*            0.03   0.08  3.61   3.60  2.81   2.62  1.16   1.18  0.00   0.01   1.00

Distance (km):   560    755   560    755   560    755   560    755   560    755
Vdc1*           -0.25   0.07  0.72   0.69 -0.07   0.06  1.83   1.77  0.98   0.95   0.99
Vdc2*            0.75   0.71 -0.25   0.07 -0.07   0.06  1.00   0.97  1.83   1.77   0.99
Idc1*            3.56   3.56  0.20   0.27  2.68   2.54  0.00   0.00  1.31   1.32   1.00
Idc2*            0.11   0.19  3.56   3.56  2.68   2.54  1.21   1.23  0.00   0.00   1.00

Figure (5): Fault detection NN performance

VII. HVDC TL Fault Classifier

After a fault is detected, the next stage classifies it. Five types of faults are studied, each represented by a special code. Table (3) shows the fault types and their codes. Each digit in the code represents one output of the NN, so the classification NN has 4 inputs and 3 outputs.
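Under the coding of Table (3), mapping the three continuous NN outputs back to a fault label is just a rounding step followed by a table lookup; a minimal sketch (names and the 0.5 threshold are illustrative, not from the paper):

```python
# Fault codes from Table (3); each key holds the three NN output digits.
FAULT_CODES = {
    (0, 0, 1): "+ve/gnd",
    (0, 1, 0): "-ve/gnd",
    (0, 1, 1): "+ve/-ve",
    (1, 0, 0): "+ve o.c",
    (1, 0, 1): "-ve o.c",
}

def classify(nn_outputs, threshold=0.5):
    # Round each continuous output to a binary digit, then look the
    # resulting 3-digit code up in the table.
    code = tuple(1 if o >= threshold else 0 for o in nn_outputs)
    return FAULT_CODES.get(code, "unknown code")

print(classify([0.02, 0.97, 0.99]))  # prints +ve/-ve
```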
VIII. Training and Testing of the Fault Classification NN

For classifying HVDC TL faults, a NN of 4-10-20-10-3 topology with the conjugate gradient with Powell-Beale restarts training function was chosen; a best validation MSE of 8.8×10⁻¹² was found. Figure (6) shows the 4-10-20-10-3 NN topology and its performance. To confirm that the constructed NN is robust, 355 fault cases different from those used in training were used to test it. The maximum error in the digit values of the NN outputs relative to the targets was 2.8×10⁻⁶.

IX. Conclusion

A feed-forward back-propagation neural network has been constructed and trained for detecting and classifying the most frequent types of faults on a bipolar HVDC transmission line. The model employed in this research is a 940 km long, ±500 kV overhead bipolar HVDC line. Measurements of the DC voltages and currents at the rectifier side of the HVDC system were used to form the four NN inputs; these quantities are normally monitored at the sending station and require no special hardware. A four-layer NN of 4-4-8-1 topology was used to detect the existence of faults, with an MSE of 1×10⁻¹¹ in training and approximately no error when tested. Five types of HVDC TL faults were classified using a five-layer network of 4-10-20-10-3 topology, with an MSE of 8.8×10⁻¹² in training and no error when tested. Using only four inputs to obtain these results may seem insufficient, but all trained and tested cases were simulated without error. The technique is simple, reliable and gives satisfactory results. The method is very fast, works online, and needs no more than one cycle after fault occurrence to record the required data. This is well below the operating time of a practical switch-off relay, which is approximately 3-5 cycles.
Table (3): Fault types and their codes

Fault      Code
+ve/gnd    001
-ve/gnd    010
+ve/-ve    011
+ve o.c    100
-ve o.c    101

References

[1] C. Barker, HVDC for Beginners and Beyond, Alstom Grid Worldwide Magazine, 2010.
[2] M. M. Saha, J. Izykowski, E. Rosolowski, Fault Location on Power Networks, Springer, 2010.
[3] J. L. Haddock, F. G. Goodrich, Se Il Kim, "Design aspects of Korean mainland to Cheju island HVDC transmission," Power Technology International, Sterling Publications Ltd., p. 125, 1993.
[4] K. Satyanarayana, Saheb Hussain MD, B. Ramesh, "Identification of faults in HVDC system using wavelet analysis," International Journal of Electrical and Computer Engineering (IJECE), April 2012.
[5] N. Kandil, V. K. Sood, "Fault identification in an AC-DC transmission system using neural networks," IEEE Transactions on Power Systems, vol. 7, no. 2, May 1992.
[6] N. Kandil, V. K. Sood, "Fault identification in an AC-DC transmission system using neural networks," IEEE Transactions on Power Systems, vol. 7, no. 2, May 1992.
[7] L. L. Lai, F. Ndeh-Che, Tejedo Chari, "HVDC systems fault diagnosis with neural networks," The European Power Electronics Association, 1993.
[8] K. G. Narendra, V. K. Sood, "Application of a radial basis function (RBF) neural network for fault diagnosis in a HVDC system," IEEE Transactions on Power Systems, vol. 13, no. 1, February 1998.
[9] H. Etemadi, V. K. Sood, K. Khorasani, R. V. Patel, "Neural network based fault diagnosis in an HVDC system," International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, April 2000.
[10] P. Sanjeevikumar, B. Paily, M. A. Basu, M. Conlon, "Classification of fault analysis of HVDC systems using ANN," IEEE, 2014.
[11] S. B. Ayyagari, "Artificial neural network based fault location for transmission lines," University of Kentucky, 2011.
[12] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley & Sons, New York, 1993.
[13] A. Kumar, Yuan Qingyun, "Role of Three Gorges-Changzhou HVDC in interconnecting central and east China," Shanghai Symposium, 2003.
[14] "Three Gorges-Changzhou HVDC: ready to bring bulk power to east," 4th International Conference on Power Transmission & Distribution Technology, 14-16 October 2003.
[15] J. Arrillaga, High Voltage Direct Current Transmission, IEE Power Engineering Series 6, Peter Peregrinus Ltd., 1983.

Journal of Engineering Research and Technology, Volume 8, Issue 1, September 2021

Table of Contents
1. A Stress-State Based Peridynamics Model for Elasto-Plastic Material Modeling, Mahmoud M. Jahjouh, pp. 1-9
2. Enhancing the Documentation Process of Traffic Accidents Registry in Gaza City Using GIS, Maher A. El-Hallaq, pp. 10-19
3. Text File Privacy on the Cloud Based on Diagonal Fragmentation and Encryption, Tawfiq S. Barhoom, Mahmoud Y. Abu Shawish, pp. 20-26

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015 112

Effect of Building Proportions on the Thermal Performance in the Mediterranean Climate of the Gaza Strip

Ahmed S. Muhaisen 1, Huda M. Abed 2
1 Associate Professor, Architecture Department, The Islamic University of Gaza, Palestine, P.O. Box 108, amuhaisen@iugaza.edu.ps
2 Architecture Department, The Islamic University of Gaza, Palestine, P.O.
Box 108, arch_huda@hotmail.com

Abstract—This paper examines the effect of building proportions and orientations on the thermal performance of housing units located in the Mediterranean climate of the Gaza Strip. The study is carried out using the computer programs ECOTECT and IES. It concluded that the surface-to-volume ratio of a building is the main geometrical parameter affecting the thermal performance of different geometric shapes. About 39% of energy consumption can be saved by choosing the optimum building width-to-length ratio (W/L), which is 0.8. The roof-to-walls ratio has a considerable influence on the thermal response of buildings; (roof/walls) ratios between 0.4 and 0.6 are preferable for both cooling and heating requirements. Horizontal arrangements of residential apartments are thermally better than vertical arrangements with the same (S/V) ratio. The study therefore recommends applying passive solar design strategies, especially with regard to the geometric shape and orientation of buildings, in the first stage of the design process.

Index Terms—Surface to volume ratio, thermal performance, energy saving, efficient building design.

I. Introduction

The building form is one of the main parameters that determine the building envelope and its relationship with the outdoor environment. Hence, it can affect the received amounts of solar radiation, the rate of air infiltration and, as a result, the indoor thermal conditions. Some forms, such as H-type or L-type, can provide self-shading of surfaces, which decreases the direct solar radiation [1]. The building form also affects wind channeling and air flow patterns, and the opportunities for enhancing the use of natural daylight [2]. Generally, geometry variables including length, height, and depth control the area and volume of the building [3].
The amount of heat coming through the building envelope is proportional to the total gross exterior wall area [4]. The main proportions affecting the geometric shape are the surface-to-volume ratio and the width-to-length ratio. The surface-to-volume ratio is a rough indicator of urban size, representing the amount of exposed 'skin' of the buildings, and therefore their potential for interacting with the climate through natural ventilation, daylighting, etc. [5]. However, the counter-indication of a high surface-to-volume ratio is the increase in heat loss during the winter season and in heat gain due to exposure to solar radiation during the summer season [6]. Ling et al. (2007) [7] noted that the exposed surface-to-volume (S/V) ratio of a geometric shape depends on the width-to-length (W/L) ratio: shapes with a higher W/L ratio have a lower S/V ratio. They indicated that the main factors determining the relationship between solar insolation level and building shape are the W/L ratio and the building orientation [7]. Different studies have dealt with form aspects. AlAnzi et al. (2008) [8] developed a simplified method to predict the impact of shape on the annual energy use of office buildings in Kuwait. The study relies on the relative compactness (RC) of the building and correlates it with the annual energy use. The relative compactness is based on the ratio between the volume of a built form and the surface area of its enclosure, compared to that of the most compact shape with the same volume. The results indicated that the effect of building shape on total building energy use depends on the relative compactness (RC), the window-to-wall ratio (WWR) and the glazing type. It was also found that the total energy use is inversely proportional to the building's relative compactness, independent of its form.
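The relative compactness just described can be computed directly, assuming a cube is taken as the most compact reference shape of equal volume (a common convention; the function name is illustrative):

```python
def relative_compactness(volume, envelope_area):
    # RC = (V/A of the shape) / (V/A of a cube of the same volume).
    # A cube of side a has V/A = a/6 = volume**(1/3) / 6, which gives
    # RC = 6 * volume**(2/3) / envelope_area. A cube itself yields RC = 1.
    return 6.0 * volume ** (2.0 / 3.0) / envelope_area

# A 10 m cube (V = 1000 m^3, A = 600 m^2) is maximally compact:
print(round(relative_compactness(1000.0, 600.0), 6))  # 1.0
# An elongated 5 x 10 x 20 m block of equal volume (A = 700 m^2) is less so:
print(round(relative_compactness(1000.0, 700.0), 3))  # 0.857
```

Lower RC means more exposed envelope per unit volume, which is why energy use rises as RC falls.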
Pessenlehner and Mahdavi (2003) [9] criticized the use of relative compactness for evaluating energy efficiency, as it does not capture the specific three-dimensional massing of a building's shape, which can affect the thermal performance via self-shading, for example. Also, changing the orientation and distribution of glazing changes the building morphology, shading potential and thermal performance without changing the relative compactness. They examined the annual heating load and overheating index for 12 different shapes with 3 glazing area options, 5 glazing distribution options and 4 orientations, as a function of the relative compactness (RC). The results indicated a significant association between the compactness indicator RC and the simulated heating loads of buildings with various shapes, orientations, glazing percentages, and glazing distributions [9]. However, these indicators do not appear to capture the geometry of a building to the extent necessary for a predictive assessment of the overheating risk. Ling et al. (2007) [7] studied the effect of geometric shapes on the total solar insolation received by high-rise buildings in Malaysia. The study was based on variations in the width-to-length ratio (W/L) and orientation for two generic building shapes (square and circular). It did not correlate the percentage increase in the width ratio with the percentage decrease in the surface-to-volume ratio (S/V) or in the total solar insolation. Behsh (2002) [5] suggested that the relation between the roof area and the walls area, and the relation between the walls areas according to their orientation, are effective in evaluating the thermal response of different forms.
Nevertheless, he simulated complex shapes and multistorey shapes with different (S/V) ratios, which makes this ratio the dominant factor in the thermal response. Catalina et al. (2011) [10] studied the impact of building form on energy consumption. Their study was based on the building shape factor (Lb), also called the building characteristic length, defined as the ratio between the heated volume of the building (V) and the sum of all heat-loss surfaces in contact with the exterior, the ground or adjacent non-heated spaces. They examined the heating demand of several shapes with various building shape factors and in different climates. All the previous studies found that the surface-to-volume ratio is the main factor responsible for the thermal response of different geometric shapes. However, the impact of building geometries with the same (S/V) ratio has not been discussed extensively, so the effect of the self-shading obtained by these geometries on the thermal performance remains open. Generally, any specific shape can have different (S/V) ratios depending on its proportions, such as the width-to-length ratio (W/L), also called the aspect ratio, and the roof-to-walls ratio. Building height is another important factor in determining the thermal response of buildings with the same (S/V) ratio. The relation between the building geometry, proportions, ratios and the thermal performance can be understood by investigating the main parameters that define the building form. These integrated parameters (the surface-to-volume ratio, the width-to-length ratio, the roof-to-walls ratio and the building height) were handled in three cases as follows:
1. The first case: the effect of the width-to-length ratio (W/L) with constant volume.
2. The second case: the effect of the (W/L) ratio and the (roof/walls) ratio on the thermal performance.
3. The third case: the effect of height with constant surface-to-volume ratio on the energy consumption.

II.
Simulation Tools

ECOTECT is a software package with a unique approach to conceptual building design. It offers a wide range of internal analysis functions, which can be used at any time while modeling. These provide almost instantaneous feedback on parameters such as sun penetration, potential solar gains, thermal performance, internal light levels, reverberation times and even fabric costs [11]. ECOTECT is based on the CIBSE steady-state method, which uses idealized (sinusoidal) weather and thermal response factors (admittance, decrement factor and surface factor) based on a 24-hour frequency [12]. The Integrated Environmental Solutions (IES) software is an integrated suite of applications linked by a common user interface (CUI) and a single integrated data model (IDM). This means that all the applications have a consistent "look and feel" and that data input for one application can be used by the others [13]. Simulations were performed using the ECOTECT software, and the IES Virtual Environment software was used to validate the simulation results. The 3D models were created using ModelIT, the solar shading analysis was performed using SunCast, and a dynamic thermal simulation was carried out using ApacheSim. The simulation results are expressed in terms of annual total loads (in MWh).

A. Study Assumptions

Simulations were carried out for the months of January-December. The internal spaces were assumed to be fully air conditioned, with heating and cooling set points of 18.0 °C and 26.0 °C respectively. Building use (hours of operation) was assumed to be continuous. As the study focuses on incident solar radiation as one of the most important variables affecting heating and cooling energy consumption in the Mediterranean climate, the internal heat gains from occupancy and appliances, as well as the ventilation heat gain, were not considered in the study.
other environmental parameters, including natural ventilation and daylight, are also considered out of the research scope. external walls have u-values of 1.77 w/m²·k in ecotect and 1.9487 w/m²·k in ies. the roof u-values are 0.896 w/m²·k in ecotect and 0.9165 w/m²·k in ies. glazing u-values are 6 w/m²·k in ecotect and 5.5617 w/m²·k in ies. the values of thermal transmittance (u-value) for walls, roof and floor were assumed to achieve the minimum requirements of the u-values recommended by the palestinian code for energy efficient building (2004) [14]. for solar radiation calculations, ecotect uses hourly recorded direct and diffuse radiation data from the weather file.
b. climate
the gaza strip is a coastal area in the west-southern part of palestine, with an area of 365 km² [15].
ahmed s. muhaisen, huda m. abed / effect of building proportions on the thermal performance in the mediterranean climate of the gaza strip (2015)
the geographical coordinates of the gaza strip are 31° north and 34° east [16]. according to arij (2003), the gaza strip forms a transitional zone between the sub-humid coastal zone of palestine in the north, the semiarid loess plains of the northern negev desert in the east and the arid sinai desert of egypt in the south [15]. according to the koppen system for climatic zoning, gaza has a mediterranean subtropical climate with dry summers and mild winters. this climate is classified as csa, indicating that the warmest month has a mean temperature above 22 °c. the average daily mean temperature ranges from 25 °c in summer to 13 °c in winter [15], see appendix 1.
iii. the first case: studying the effect of width to length ratio (w/l) with a constant volume
a. the study parameters
the study correlated the percentage of increase in the width to length ratio (w/l) with the percentage of decrease in the surface to volume ratio (s/v) and the percentage of decrease in the total solar insolation.
ten width to length ratios were adopted for the rectangular shape, ranging from 0.1 to 1 in steps of 0.1. the area, height and volume for all ten cases were kept constant. the area was taken to be 500 m², which represents one of the common options in multi-storey residential buildings in gaza. also, the building height was taken to be 20 m (6 storeys) and the volume was taken to be 10000 m³. table 1 illustrates the ten cases. combinations of parameter values analyzed in this study are summarized in table 2. ten values of orientation were considered, namely 0°e, 10°e, 20°e, 30°e, 40°e, 50°e, 60°e, 70°e, 80°e and 90°e, as shown in figure 1.

table 1: parameters of the investigated cases
w/l:        0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1
s/v ratio:  0.36  0.29  0.26  0.24  0.23  0.23  0.23  0.23  0.23  0.23
(the perspective views shown in the original table are omitted)

table 2: combination of parameters investigated in the study
shape:        rectangular
w/l ratio:    0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1
orientation:  0°e, 10°e, 20°e, 30°e, 40°e, 50°e, 60°e, 70°e, 80°e, 90°e

figure 1. the ten values of building orientation considered in the study
b. results
effect of width to length ratio (w/l)
figures 2 and 3 show the effect of changing the (w/l) ratio at different orientations on the total loads throughout the year, using ecotect and ies. the results indicate that the total loads for the simulated shapes are reduced by 39.6% with increasing the width to length ratio (w/l) from 0.1 to 1 at the east-west orientation (0°e) in ecotect. it is noticed that the reduction in the total loads is more remarkable when increasing the (w/l) ratio from 0.1 to 0.5. about 37.4% of the reduction in the total loads occurs with increasing the (w/l) ratio from 0.1 to 0.5, while only 3.5% of the reduction occurs with increasing the (w/l) ratio from 0.5 to 1. it is noticed that the optimum width to length ratio is 0.9, with only a slight effect of changing the ratio from 0.5 to 1.
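the s/v values in table 1 follow directly from the fixed plan area, height and volume; a minimal sketch (assuming a rectangular plan, a flat roof and four fully exposed walls, which matches the stated geometry) reproduces them to within rounding:

```python
import math

AREA, HEIGHT, VOLUME = 500.0, 20.0, 10000.0  # fixed plan area (m^2), height (m), volume (m^3)

def sv_ratio(wl):
    """surface-to-volume ratio of a rectangular block with plan area AREA,
    height HEIGHT and plan aspect ratio w/l = wl (flat roof, four exposed walls)."""
    length = math.sqrt(AREA / wl)  # from w * l = AREA and w = wl * l
    width = wl * length
    envelope = AREA + 2 * (width + length) * HEIGHT  # roof + four walls
    return envelope / VOLUME

for wl in (0.1, 0.2, 0.3, 1.0):
    print(f"w/l = {wl}: s/v = {sv_ratio(wl):.2f}")
```

for w/l = 0.1 this gives an envelope of about 3611 m² and s/v ≈ 0.36, matching the first column of table 1.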
so, it is advisable to select the building's (w/l) ratio in the range of 0.5 to 1 in order to reduce the energy consumption. the same trend can be observed using ies, as about 31.8% of reduction in the total loads occurs as a result of increasing the width to length ratio (w/l) from 0.1 to 1 at the same orientation.
figure 2. effect of (w/l) ratio on the annual loads, using ecotect
figure 3. effect of (w/l) ratio on the annual loads, using ies
changing the building orientation from the east-west orientation (0°e) to the north-south orientation (90°e) can increase the effect of the width to length ratio. the total loads are reduced by 45.7% with increasing the width to length ratio (w/l) from 0.1 to 1 at the north-south orientation (90°e) in ecotect. also, increasing the width to length ratio (w/l) from 0.5 to 1 reduced the total loads by about 7.9% and 7.5% in ecotect and ies respectively in the north-south orientation, compared with only 3.5% and 1.5% of reduction in the case of the east-west orientation in ecotect and ies respectively. therefore, more attention must be paid to the width ratio in the north-south orientation, even among shapes with (w/l) ratios between 0.5 and 1. it is noticed that changing the (w/l) ratio affects the total exposed surface and the relation between its two main components, the roof and the walls. as the (w/l) ratio increases and the building approaches the square shape (w/l = 1), the exposed surface decreases with the same trend as the total loads. taking a fixed roof area in all cases, it is reasonable that the (roof/walls) ratio increases with increasing the (w/l) ratio. the square shape (w/l = 1) was taken as a reference shape.
the percentage of difference between the other nine shapes and the reference shape in four variables, namely the (w/l) ratio, the (s/v) ratio, the (roof/walls) ratio and the total loads, was evaluated. figure 4 summarizes the relation between the percentage of change in the (w/l) ratio and the consequent changes in the (s/v) ratio, the (roof/walls) ratio and the total loads. it can be noted that decreasing the (w/l) ratio by 90%, from the reference shape (w/l = 1) to the worst ratio (w/l = 0.1), increases the (s/v) ratio by about 57.7%, decreases the (roof/walls) ratio by 42.5% and increases the total loads by 65.7%. so it is recommended to decrease the (s/v) ratio and to increase both the (roof/walls) ratio and the (w/l) ratio.
figure 4. effect of changing the (w/l), (s/v) and (r/w) ratios on the total loads
effect of orientation
figures 5 and 6 illustrate the effect of changing the form's orientation on the total loads for various width ratios, using ecotect and ies respectively. changing the orientation of the simulated shapes with different width to length ratios (w/l) is seen to have the ability to change the required energy, as it affects the amounts of solar radiation falling on the various components of the building surface. the results indicate that the total loads for the simulated shapes are increased by 11% with changing the orientation from the east-west orientation (0°e) to the north-south orientation (90°e) for the shape with a width to length ratio (w/l) of 0.1 in ecotect. this ratio decreases to 9.1% for the shape with a width ratio (w/l) of 0.2 and 7.6% for the shape with a width to length ratio (w/l) of 0.3. as the shape approaches a square, the effect of orientation in changing the total loads decreases. this is due to the four equal sides of the square shape, which make the east-west orientation (0°e) and the north-south orientation (90°e) have the same performance.
in contrast, the worst orientation in this case is (45°e), with an unnoticeable difference in the total loads that reaches 1.8%. in the ies results, changing the orientation from the east-west orientation (0°e) to the north-south orientation (90°e) increased the total loads by about 17.3%, 13.6% and 10.7% in the case of the shapes with width to length ratios (w/l) equal to 0.1, 0.2 and 0.3 respectively. the ratio decreased to reach about 1.9% between the east-west orientation (0°e) and the (45°e) orientation in the case of the square shape. it should be mentioned that the trends of the ecotect and ies results are almost identical. the small variations in the values of the results are attributed to the difference in the thermal properties of the building materials used in the two programs. this clearly validates the results and indicates high reliability of the obtained building performance figures.
figure 5. effect of orientation on the total loads, using ecotect
figure 6. effect of orientation on the total loads, using ies
incident solar radiation
the results indicate that the shape with a (w/l) ratio of 0.1 receives the highest amount of incident solar radiation on the south façade, as shown in figure 7. this shape has the largest south façade area, which exceeds that of the shape with a (w/l) ratio of 1 by about 216%. this explains the worst thermal performance of this shape from the energy consumption point of view. it is observed that the shape with (w/l) equal to 0.1 receives about 56.7% of its total solar radiation on its south façade, compared with 27.3% and 19.8% for the shapes with (w/l) equal to 0.5 and 1 respectively. the south façade forms about 39.2% of the total exposed surface area of the shape with (w/l) equal to 0.1.
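the 39.2% and 216% figures quoted above can be checked from the first-case geometry (plan area 500 m², height 20 m); a quick sketch, assuming the long side faces south with a flat roof and four exposed walls:

```python
import math

AREA, HEIGHT = 500.0, 20.0  # first-case plan area (m^2) and building height (m)

def south_facade(wl):
    """south facade area and its share of the exposed envelope for aspect
    ratio w/l, assuming the long side faces south."""
    length = math.sqrt(AREA / wl)
    width = wl * length
    south = length * HEIGHT
    envelope = AREA + 2 * (width + length) * HEIGHT  # roof + four walls
    return south, south / envelope

south_01, share_01 = south_facade(0.1)
south_1, _ = south_facade(1.0)
print(f"south facade share at w/l = 0.1: {share_01:.1%}")            # ~39.2%
print(f"excess over the square plan: {south_01 / south_1 - 1:.0%}")  # ~216%
```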
it is evident that the percentage of incident solar radiation on the south façade is the main factor affecting the energy consumption of the three considered simulated shapes with (w/l) ratios of 0.1, 0.5 and 1. for more illustration, figure 8 shows the same trend for the percentage of incident solar radiation on the south façade and the total required energy for the three simulated shapes.
figure 7. incident solar radiation on the forms' surfaces
figure 8. the relationship between the solar radiation on the south elevation of the form and the total loads
iv. the second case: effect of (w/l) ratio and (roof/walls) ratio on the thermal performance
a. the study parameters
the study introduces the main relations affecting the form morphology. building morphology can be determined through the relationships between its components. the main relation in this case is that between the roof area and the walls area, which affects the building height. the second relation is the (w/l) ratio, which affects the building elongation. to investigate the effect of these ratios, 10 (w/l) ratios ranging between 0.1 and 1 combined with 5 (roof/walls) ratios ranging between 0.2 and 1 were examined. the volume of the base case was obtained from the assumption that the minimum width of the rectangular form is 4 m, as it represents the average width of a room. the maximum length can be obtained from the smallest (w/l) ratio, which equals 0.1. this means that the rectangular length is 40 m and the area (a) is 160 m², which represents the average area of residential units in gaza. the maximum height can be obtained from a (roof/walls) ratio of 0.1, which means that the walls area is 1600 m² and the total exposed surface area is 1760 m². the perimeter of the assumed base case equals 88 m and the height equals 18.18 m (6 storeys), and thus the volume equals 2909 m³. all the forms investigated in this study have the same volume; table 3 illustrates this set of forms.
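the base-case numbers above (length 40 m, area 160 m², walls 1600 m², perimeter 88 m, height 18.18 m, volume 2909 m³) all follow from the stated assumptions; a short derivation sketch:

```python
# derive the second-case base geometry from the stated assumptions:
# minimum width 4 m, smallest w/l ratio 0.1, and roof/walls ratio 0.1
width = 4.0
length = width / 0.1              # smallest w/l ratio -> 40 m
roof = width * length             # plan (roof) area -> 160 m^2
walls = roof / 0.1                # roof/walls = 0.1 -> 1600 m^2
envelope = roof + walls           # total exposed surface -> 1760 m^2
perimeter = 2 * (width + length)  # -> 88 m
height = walls / perimeter        # -> 18.18 m
volume = roof * height            # -> ~2909 m^3
print(length, roof, walls, envelope, perimeter, round(height, 2), round(volume))
```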
table 3: the simulated cases in the study
w/l ratios:        0.1, 0.5, 1
roof/walls ratios: 0.2, 0.4, 0.6, 0.8, 1
(perspective views of each combination are omitted)

b. results
effect of width to length ratio (w/l)
it can be noticed that with increasing the width to length ratio (w/l), the required loads gradually reduce at all values of the (roof/walls) ratio, as shown in figure 9. with increasing the width to length ratio (w/l) from 0.1 to 1 at the east-west orientation (0°e), the total loads for the simulated shapes are reduced by 31.6%, 27%, 27%, 27.2% and 27.5% for the shapes with (roof/walls) ratios of 0.2, 0.4, 0.6, 0.8 and 1 respectively. this means that the effect of the (w/l) ratio in changing the total loads reduces with increasing the (roof/walls) ratio.
figure 9. effect of (w/l) ratio at various (r/w) ratios on the total loads
effect of (roof/walls) ratio
increasing the (roof/walls) ratio, which means decreasing the building height for the same volume, has considerable effects on the required energy, as shown in figures 10 and 11. increasing the (roof/walls) ratio from 0.2 to 1 at the east-west orientation (0°e) reduced the total energy by 30.9%, 29% and 28.8% for the shapes with width to length ratios (w/l) of 0.1, 0.5 and 1 respectively. this means that varying the width ratio has a small effect (about 2%) on the impact of the (roof/walls) ratio on the total loads. the same trend can be observed in the ies results, as increasing the (roof/walls) ratio from 0.1 to 1 reduced the total energy by 22.4%, 24.9% and 26.4% for the shapes with width to length ratios (w/l) of 0.1, 0.5 and 1 respectively, as shown in figure 10. the important point to be mentioned about the ies results is that the total loads decrease with increasing the (roof/walls) ratio until the ratio equals 0.6.
beyond that, the total loads increase by a slight percentage. for more explanation, increasing the (roof/walls) ratio from 0.1 to 0.6 reduced the total loads by about 27.3%, 29.1% and 30.1% for the shapes with width to length ratios (w/l) of 0.1, 0.5 and 1 respectively. however, increasing the (roof/walls) ratio from 0.6 to 1 increased the total loads by about 4%, 3.3% and 2.9% for the shapes with width to length ratios (w/l) of 0.1, 0.5 and 1 respectively.
figure 10. effect of (r/w) ratio on the total loads, using ecotect
figure 11. effect of (r/w) ratio on the total loads, using ies
in order to explain this behavior, figure 12 shows the relationship between the (r/w) ratio and the (s/v) ratio for the form with (w/l) equal to 0.5. it can be seen that the (s/v) ratios for the simulated cases follow the same trend as the total loads. increasing the (roof/walls) ratio from 0.1 to 0.6 reduced the (s/v) ratio by about 24.9%, which is compatible with the percentage of reduction in the total loads (29.1%). increasing the (roof/walls) ratio from 0.6 to 1 increased the (s/v) ratio by about 5.4%. hence, the thermal behavior of the simulated cases can be explained as a consequence of changing the (s/v) ratio. determining the fabric heat gain for the same cases can also explain their behavior. as shown in figure 13, the heat loss during the winter period (december-february) decreases by about 31% with increasing the (roof/walls) ratio from 0.2 to 1, which decreases the heating loads in the shapes with higher (roof/walls) ratios. however, the heat gain during the summer period decreases by about 11% with increasing the (roof/walls) ratio from 0.2 to 0.6, which decreases the cooling loads. increasing the (roof/walls) ratio from 0.6 to 1 increased the heat gain by about 3%. figure 12.
the relationship between (r/w) and (s/v) ratio for the form with (w/l) equal to 0.5
figure 13. fabric gain for the simulated cases
it can be concluded that a (roof/walls) ratio of 0.6 is preferable for both cooling and heating requirements. taking into consideration the unnoticeable difference in the total loads between the (roof/walls) ratios of 0.4 and 0.6, there is flexibility in selecting the (roof/walls) ratio within the range of 0.4 to 0.6. also, a width to length ratio (w/l) of 0.8 is advisable from the energy saving point of view.
v. the third case: effect of height with constant surface to volume ratio on the energy consumption
a. the study parameters
the study investigated one of the main parameters of building form, which is height. in order to compare the performance of buildings with different heights, the building volume was kept constant. it is evident that increasing the height would decrease the area, and thus the (roof/walls) ratio would change in each case. nine heights were adopted for the rectangular shape, namely 6, 9, 12, 15, 18, 21, 24, 27 and 30 m. the storey height was taken to be 3 m, which means that each simulated case increases by one storey from the previous case. the smallest area was assumed to be 200 m² and the maximum height was assumed to be 30 m (10 storeys); thus, the assumed volume was taken to be 6000 m³. the (w/l) ratio in the base case was assumed to be 1 (square shape) and the exposed surface area was considered to be 1897 m²; thus, the (s/v) ratio was taken to be 0.316. as the purpose of this study is to investigate the height effect, the (s/v) ratio is assumed to be fixed for all the simulated cases. in order to achieve this purpose, the area increased as the height reduced, and the (w/l) ratio also changed. combinations of the parameter values analyzed in this study are summarized in table 4.
the studied forms were simulated at different orientations ranging from 0°e to 90°e in steps of ten degrees.

table 4: parameter combinations of forms investigated in the study
height (m):  6     9      12    15    18      21      24    27      30
area (m²):   1000  666.6  500   400   333.33  285.71  250   222.22  200
(r/w):       1.11  0.54   0.35  0.26  0.21    0.17    0.15  0.13    0.11
(w/l):       0.30  0.20   0.21  0.25  0.29    0.35    0.44  0.56    1
(the perspective views shown in the original table are omitted)

b. results
effect of height
the results indicate that the total loads for the simulated shapes increase by 62.5% with increasing the building height from 6 m to 30 m at the east-west orientation (0°e), as shown in figure 14. the increases are 20.6%, 33.1%, 41.7%, 47.7%, 55.5%, 58.7% and 62.5% as the building height increases from 6 m through 9 m, 12 m, 15 m, 18 m, 21 m, 24 m, 27 m and 30 m. it can be noticed that there is a nonlinear relationship between the building height and the total loads: as the building height increases, the rate of increase in the total loads diminishes.
figure 14. effect of height on the required load
in order to determine the main factor affecting the total loads when increasing the building height, the shape with 6 m height was taken as a reference shape, because it requires the lowest energy load. the percentage of increase in the total loads, decrease in the (roof/walls) ratio and increase in the (w/l) ratio between the other eight shapes and the reference shape was evaluated, as shown in figure 15. it is observed that the trend of the curve of the percentage of increase in the total loads is similar to the trend of the curve of the percentage of decrease in the (roof/walls) ratio.
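the (r/w) and (w/l) columns of table 4 can be recovered from the constant volume (6000 m³) and constant exposed surface (1897 m²); a sketch assuming a rectangular plan (small discrepancies against the published table come from rounding in the stated envelope area):

```python
import math

VOLUME, ENVELOPE = 6000.0, 1897.0  # constant volume (m^3) and exposed surface (m^2)

def form_for_height(h):
    """roof/walls and w/l ratios of a rectangular block of height h whose
    volume and exposed surface area are held constant."""
    area = VOLUME / h                  # plan area shrinks as the block grows taller
    walls = ENVELOPE - area
    rw = area / walls                  # roof-to-walls ratio
    half_perim = walls / h / 2         # w + l, from walls = perimeter * h
    disc = half_perim ** 2 - 4 * area  # discriminant of x^2 - (w + l) x + w * l = 0
    w = (half_perim - math.sqrt(max(disc, 0.0))) / 2  # clamp tiny negatives from rounding
    l = half_perim - w
    return round(rw, 2), round(w / l, 2)

for h in (6, 12, 24, 30):
    print(h, form_for_height(h))
```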
it can be concluded that the increase in the total loads required by building geometries with the same (s/v) ratio as a result of increasing the height is mainly related to the decrease in the (roof/walls) ratio, which increases the vertical wall surfaces.
figure 15. the relation between the percentage of increase in the total loads and the decrease in the (roof/walls) ratio
three options of building height (6 m, 12 m and 24 m), which involve the same volume and exposed surface areas, were considered, as shown in table 5. each of them was divided into the same number of residential apartments (16 apartments), where each apartment has the same area (125 m²), as this is considered one of the common options in apartment buildings in the gaza strip. as stated above, the total loads of the geometries with 12 m and 24 m heights increase by 33% and 55.5% respectively with reference to the load required by the 6 m high geometry. this means that horizontal arrangements of residential apartments are thermally better than vertical arrangements with the same (s/v) ratio.

table 5: configuration of three building forms
height (m):                       6    12    24
increase in the total loads (%):  0    33    55.5
(the perspective views shown in the original table are omitted)

effect of orientation
the east-west orientation (0°e) was taken as a reference case, as it requires the lowest amount of energy. the percentage of difference between the other nine orientations and the reference case was evaluated for four heights (12 m, 18 m, 24 m and 30 m). as illustrated in figure 16, changing the orientation from (0°e) to (90°e) can increase the required heating and cooling loads by 6.8%, 5% and 3.5% for the cases of 12 m, 18 m and 24 m height respectively.
figure 16. effect of orientation on the total loads
vi. conclusion
it is concluded that the surface to volume ratio is one of the most important aspects affecting the thermal performance of geometric shapes. the other form parameters, including
(w/l) and (r/w) ratios, also have a considerable effect on the energy requirements of buildings. the incident solar radiation falling on the building surfaces has a significant effect on the thermal response. compact forms, which contain the same volume with the smallest (s/v) ratio, are recommended in the climate of the gaza strip. more attention must be paid to the width to length ratio in the north-south orientation, even for shapes with width to length ratios between 0.5 and 1. the cooling loads can increase by about 20.5% when the orientation changes from the east-west orientation (0°e) to the north-south orientation (90°e) for the shape with a width to length ratio (w/l) of 0.1. so, it is recommended to pay more attention to selecting orientations, especially for shapes with small width to length ratios. it is recommended to use shapes with (roof/walls) ratios in the range of 0.4 to 0.6, which are preferable for both cooling and heating requirements. it is recommended to use horizontal arrangements for residential apartments, which were found to be thermally better than vertical arrangements with the same (s/v) ratio.
references
[1] nayak, j.k. and prajapati, j.a. (2006). handbook on energy conscious buildings. indian institute of technology, bombay and solar energy centre, ministry of non-conventional energy sources, government of india.
[2] goulding, john; lewis, owen and steemers, theo (1992). energy in architecture: the european passive solar handbook. b.t. batsford for the commission of the european communities, directorate general xii for science, research and development, london.
[3] yi, yun kyu and malkawi, ali (2009).
optimizing building form for energy performance based on hierarchical geometry relation, automation in construction.
[4] nikpour, mansour; zin kandar, mohd; ghomeshi, mohammad; moeinzadeh, nima and ghasemi, mohsen (2011). investigating the effectiveness of self-shading strategy on overall thermal transfer value and window size in high rise buildings, world academy of science, engineering and technology, vol. 74, pp. 165-170.
[5] b. basam, "building form as an option for enhancing the indoor thermal conditions", building physics 2002 - 6th nordic symposium, session 18: indoor environment, vol. 2, pp. 759-766, 2002.
[6] ratti, carlo; raydan, dana and steemers, koen (2003). building form and environmental performance: archetypes, analysis and an arid climate, energy and buildings, vol. 35, pp. 49-59.
[7] ling, chia sok; ahmad, mohd. hamdan and ossen, dilshan remaz (2007). the effect of geometric shape and building orientation on minimizing solar insolation on high-rise buildings in hot humid climate, journal of construction in developing countries, vol. 12, no. 1, pp. 27-38.
[8] a. adnan; s. donghyun and k. moncef, "impact of building shape on thermal performance of office buildings in kuwait", energy conversion and management, vol. 50, pp. 822-828, 2008.
[9] pessenlehner, werner and mahdavi, ardeshir (2003). building morphology, transparence, and energy performance, eighth international ibpsa conference, eindhoven, netherlands.
[10] catalina, tiberiu; virgone, joseph and iordache, vlad (2011). study on the impact of the building form on the energy consumption, proceedings of building simulation 2011: 12th conference of international building performance simulation association, sydney.
[11] marsh, andrew (2003). ecotect and energyplus, the building energy simulation user news, vol. 24, no. 6.
[12] beattie and ward (2012).
the advantages of building simulation for building design engineers, available at: http://www.ibpsa.org/proceedings/bs1999/bs99_pb-16.pdf
[13] ve-pro user guide, ies virtual environment 6.4 (2011).
[14] ministry of local government (2004). the palestinian code for energy efficient building.
[15] applied research institute (arij) (2003). climatic zoning for energy efficient buildings in the palestinian territories (the west bank and gaza), technical report submitted to united nations development program / program of assistance to the palestinian people (undp/papp), jerusalem, palestine.
[16] ministry of local government (2004). the palestinian guidelines for energy efficient building design.
dr. ahmed muhaisen is an associate professor at the architecture department in the islamic university of gaza (iug). he is specialized in energy efficient building design, with more than ten years of academic and professional experience in this field. he teaches modules to bsc and msc students related mainly to building design and energy performance. he obtained his msc and phd degrees from nottingham university (uk) in the field of energy efficiency of buildings. he has special interests in subjects related to energy efficiency of buildings, passive solar design and architectural heritage preservation.
huda abed, m.sc. (architectural engineering), faculty of engineering, the islamic university of gaza (iug), gaza, palestine. lecturer at the architecture department, the islamic university of gaza, gaza, palestine.
appendix 1: climatic data of gaza city
elevation: 16 meters; latitude: 31 30n; longitude: 034 27e

average temperature (°c)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       19      13    14    15    18    20    23    25    26    25    22    19    15

average precipitation (mm)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       300     76    49    37    6     3     --    --    --    --    14    46    70

average length of day (hours)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       12.6    10.8  11.5  12.4  13.4  14.2  14.6  14.4  13.7  12.7  11.8  11    10.6

average daily global solar radiation (mj/m²)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       20.6    14.2  24.1  30.4  26.9  17.6  10.2  10.9  19.3  27.9  29.1  23.2  12.8

maximum daily global solar radiation (mj/m²)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       20.6    14.2  24.1  30.4  26.9  17.6  10.2  10.9  19.3  27.9  29.1  23.2  12.8

minimum daily global solar radiation (mj/m²)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
       18.7    11.1  21.9  29.4  25.8  16.4  8.3   9.9   16.2  25.3  27.1  22.4  10.8

maximum and mean values of hourly wind speed at 50 m height (m/s)
       annual  jan   feb   mar   apr   may   jun   jul   aug   sep   oct   nov   dec
max    23.9    24.4  22.7  23.9  19.6  20    15.1  23.7  17.2  16.6  16.5  16.4  17.3
mean   4.2     4.9   5     4.8   4     3.9   3.5   3.4   3.5   4.5   4.8   4.8   5.1

source of data: http://www.weatherbase.com/

journal of engineering research and technology, volume 2, issue 3, september 2015
nonlinear analysis of concrete beams strengthened with steel fiber-reinforced concrete layer
nasreddin elmezaini (1) and mohammed ashour (2)
(1) assoc. professor of civil engineering, the islamic university of gaza
(2) m.sc. in civil engineering
abstract—the behavior of concrete beams strengthened with a steel fiber-reinforced concrete (sfrc) layer was studied by nonlinear finite element analysis using ansys software. four beams that were experimentally tested in a previous research were considered.
beam b-1 is made of ordinary reinforced concrete, b-2 is made of sfrc material, b-3 is made of two parts, an rc beam with an sfrc overlay, and b-4 is made of an rc beam with an sfrc underlay. ordinary concrete as well as sfrc were modeled using the multilinear isotropic hardening constants, where they are assumed to behave linearly up to 30% of the compressive strength. afterwards, a multilinear stress-strain curve was defined. for reinforcing steel, a linear-elastic perfectly-plastic material model was used. steel fiber-reinforced concrete was modeled by the smeared modeling technique. the results obtained by fea showed good agreement with those obtained by the experimental program. this research demonstrates the capability of fea in predicting the behavior of beams strengthened with an sfrc layer. it will help researchers in studying beams with different configurations without the need to go through lengthy experimental testing programs.
index terms— nonlinear fea, sfrc, rc beams with sfrc overlay.
i introduction
the use of steel fiber-reinforced concrete (sfrc) has become widespread in several structural applications such as tunnel shells, concrete sewer pipes, and slabs of large industrial buildings. usage of sfrc in load-carrying members of buildings having conventional reinforced concrete (rc) frames is also gaining popularity because of its positive contribution to both the energy absorption capacity and the concrete strength. recently, sfrc has started to make its way into strengthening techniques, such as an additional layer (underlay or overlay) on existing beams, and jackets for beams, columns, and other structural members. members fabricated from sfrc exhibit a remarkable improvement in crack behavior, ductility, compressive and tensile strength, and durability. the effectiveness of using sfrc underlay or overlay techniques depends on the ability of the strengthened beam to act monolithically as one unit while being loaded.
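the abstract's concrete model (linear to 30% of the compressive strength, then a multilinear stress-strain curve) can be sketched as below. the curve choice (the desayi-krishnan relation, sigma = Ec*eps / (1 + (eps/eps0)^2), a common choice for multilinear input to ansys) and the numbers (fc = 30 mpa, the aci Ec estimate) are illustrative assumptions; the paper does not state them here:

```python
import math

# illustrative assumptions (not values taken from the paper):
fc = 30.0                    # compressive strength, MPa
Ec = 4700.0 * math.sqrt(fc)  # ACI 318 elastic modulus estimate, MPa
eps0 = 2.0 * fc / Ec         # strain at peak stress for this curve shape

def sigma(eps):
    """desayi-krishnan compression curve; sigma(eps0) equals fc by construction."""
    return Ec * eps / (1.0 + (eps / eps0) ** 2)

# first point: end of the linear branch at 0.30 * fc
points = [(0.30 * fc / Ec, 0.30 * fc)]
# further multilinear points up to the peak strain
for eps in (0.4 * eps0, 0.6 * eps0, 0.8 * eps0, eps0):
    points.append((eps, sigma(eps)))

for eps, s in points:
    print(f"strain {eps:.5f}  stress {s:.2f} MPa")
```

the resulting list of (strain, stress) pairs is the kind of table a multilinear isotropic hardening model expects.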
several studies have been conducted to examine the effectiveness of adding sfrc layers on the behavior of existing rc beams. most of these studies are based on costly and time-consuming experimental programs. in this study, the validity of numerical analysis is demonstrated in predicting the behavior of rc beams with added sfrc layers. four (4) beams which were previously tested in an experimental program are considered. the beams were strengthened with sfrc overlays, which were mechanically bonded to the original beams. numerical analysis is carried out using the well-known fe code ansys apdl v13.0. the importance of this study comes from the fact that if a numerical approach can be validated, it will help researchers in predicting the behavior of different rc/sfrc beams without the need to go through lengthy and costly experimental programs.
n. elmezaini & m. ashour / nonlinear analysis of concrete beams strengthened with sfrc layer (2015)
ii literature review
steel fibers drew the attention of many researchers in the last decade, mainly in the strengthening field, due to their ability to enhance the strength, ductility, and durability of rc members. most of the research was devoted to studying the mechanical properties of sfrc and its influence on the flexural, shear, and torsional behavior of beams. few studies were found on rc beams with sfrc layers. the following is a brief summary of some related publications. tiberti et al. [2] conducted an experimental study to investigate the ability of fibers to control crack spacing and width. they concluded that the addition of fibers in concrete resulted in narrower and more closely spaced cracks compared to similar members without fibers. they also concluded that added fibers may significantly improve the tension stiffening. fibers also provided noticeable residual stresses in the crack area due to the bridging effect enabled by their enhanced toughness.
zhang et al. [3] conducted three-point bending tests on notched sfrc beams. the results show that the fracture energy and the peak load increase as the loading rate increases; the gain in fracture energy and peak load is around 10% compared with the quasi-static values. altun et al. [4] studied the mechanical properties of concrete with different dosages of steel fibers. experimental tests indicated that beams with a steel fiber dosage of 30 kg/m³ exhibited a remarkable increase in strength compared to rc beams without steel fibers. the same study also showed that increasing the fiber dosage to 60 kg/m³ adds only a small improvement to the beam toughness. kim et al. [5] studied the shear behavior of sfrc members without transverse reinforcement. they proposed a model based on a smeared crack assumption; the model was verified against experimental results and showed good agreement. deluce & vecchio [6] conducted an experimental study on 4 large-scale sfrc specimens containing conventional reinforcement to study their cracking and tension-stiffening behavior. it was found that the cracking behavior of sfrc was significantly altered by the presence of conventional reinforcement: crack spacing and crack widths were influenced by the reinforcement ratio and bar diameter of the conventional reinforcing bars, as well as by the volume fraction and aspect ratio of the steel fibers. ziara [7] conducted an experimental test program consisting of nine beams to study the influence of steel fiber overlays on rc beams. he found that beams strengthened with steel fiber overlays exhibited a remarkable increase in load carrying capacity; mechanically bonded overlays showed better performance than chemically bonded overlays, which exhibited inter-laminar shear failure. from the aforementioned literature review, it can be seen that several experimental studies have been conducted on sfrc; as a result, the behavior and mechanical properties of sfrc are reasonably well identified. 
however, to the best knowledge of the authors, no research was found on the numerical analysis of rc beams strengthened with sfrc layers. iii case study to study the behavior of rc beams strengthened with sfrc layers, four beams with different configurations were considered, as shown in figure 1. beams b1 & b2 are used as control beams: b1 is made entirely of ordinary reinforced concrete (rc) while b2 is made entirely of sfrc material. b3-ol consists of two parts, the bottom part made of ordinary concrete and the top part (overlay) made of sfrc. b4-ul also consists of two parts, but with the sfrc part on the bottom (underlay). the first three beams (b1, b2 and b3-ol) were previously tested in an experimental program conducted by ziara [7]. in this study, nonlinear finite element analysis is conducted for the 4 beams, and the validity of the numerical analysis is verified by comparing the obtained results with the experimental program results. note: b4-ul was not experimentally tested, but was added to complete the parametric study. a. geometric properties as shown in figure 1, all beams have the same geometrical properties, with an overall length of 2000mm and width to depth dimensions of 150mm×240mm. all beams were reinforced with 4 longitudinal bars (2ø14mm at the bottom and 2ø8mm at the top). the stirrups are ø8mm@55mm as shown. b. material properties for the sake of this study, the same mechanical properties for rc and sfrc that were used in the experimental program are used in the numerical analysis. in the sfrc, the steel fiber dosage used is 1.5% by volume. according to song and hwang [8], for this dosage the sfrc tensile strength can be taken as 16% of the compressive strength. 
the material properties used for the beams are as follows.
for ordinary concrete: compressive strength fc' = 25 mpa; tensile strength ft = 2.52 mpa.
for sfrc material: compressive strength fc' = 26 mpa; tensile strength ft = 4.16 mpa.
yield strength of main steel fy = 410 mpa; yield strength of stirrups fy = 280 mpa.
c. loading all beams were loaded with two symmetrical loads on two simple supports as shown in figure 1. the loads were gradually increased until failure.
figure 1: beams configurations (1-a: beam b1-rc, ordinary concrete; 1-b: beam b2-sf, sfrc; 1-c: beam b3-ol, sfrc overlay; 1-d: beam b4-ul, sfrc underlay)
d. finite element modeling nonlinear finite element analysis was conducted on the four beams to predict their behavior during all loading stages up until failure. the well-known fe package ansys apdl v13.0 has been used. the ansys program includes a library of elements for different applications. solid65 is used for the 3d modeling of concrete with or without reinforcing bars; it is capable of cracking in tension and crushing in compression, and can also consider plastic deformation and creep. for steel rebar, ansys provides link180, which is simply a pin-jointed one-dimensional element. the geometry and the coordinate system of the solid65 and link180 elements are shown in figure 2.
figure 2: solid65 and link180 geometry
e. concrete material description concrete is a quasi-brittle material and has a highly nonlinear and ductile stress-strain relationship [9]. the nonlinear behavior is attributed to the formation and gradual growth of micro-cracks under loading. figure 3 shows a typical stress-strain curve for normal weight concrete. 
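the 16% rule attributed to song and hwang [8] can be checked against the listed sfrc values with a short sketch. this is an illustrative python snippet, not part of the original study; the function name is mine.

```python
# sketch: estimate sfrc tensile strength as 16% of the compressive strength,
# per the rule the text attributes to song and hwang [8] for a 1.5% fiber dosage
def sfrc_tensile_strength(fc_mpa, ratio=0.16):
    """return the estimated tensile strength (mpa) for a given fc' (mpa)."""
    return round(ratio * fc_mpa, 2)

print(sfrc_tensile_strength(26))  # 4.16, matching the ft used for the sfrc material
```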
in compression, the stress-strain curve for concrete is linearly elastic up to about 30% of the maximum compressive strength. above this point, the stress increases gradually up to the maximum compressive strength. after it reaches the maximum compressive strength fcu, the curve descends into a softening region, and eventually crushing failure occurs at an ultimate strain εcu. in tension, the stress-strain curve for concrete is approximately linearly elastic up to the maximum tensile strength; after this point, the concrete cracks and the strength decreases gradually to zero.
figure 3: typical stress-strain curve for concrete (compression, tension and softening branches)
in ansys, a multi-linear isotropic hardening model is used for concrete. the slope of the first segment of the curve corresponds to the elastic modulus of the material and no segment slope should be larger; no segment can have a slope less than zero. the slope of the stress-strain curve is assumed to be zero beyond the last user-defined stress-strain data point, i.e., the apex of the curve. linear material properties of normal weight concrete include the modulus of elasticity ec and poisson's ratio ʋc; ec can be evaluated according to the aci 318-08 design code as per the following empirical equation [10]:
ec = 4700 √fc' (mpa)
for the nonlinear part of the material properties, the popular stress-strain model of hognestad [11] can be used. this model consists of a second-degree parabola with an apex at a strain of ε0 = 1.8fc''/ec, where fc'' = 0.9fc', followed by a downward-sloping line terminating at a stress of 0.85fc' and a limiting strain of 0.0038. the stress at any point on the parabola can be evaluated using the formula [11]:
fc = fc'' [2(εc/ε0) − (εc/ε0)²]
where ε0 equals 1.8fc''/ec and εc represents the strain at different stress values. in order to obtain a realistic response of concrete elements and determine the failure load accurately, a failure criterion should be defined. 
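as a minimal sketch (python, mpa units; function names are mine, not from the paper), the aci modulus and the hognestad parabola above can be combined to generate the multi-linear stress-strain points of the kind fed to the ansys material model:

```python
import math

def elastic_modulus(fc_mpa):
    # aci 318-08 empirical modulus for normal-weight concrete: ec = 4700*sqrt(fc')
    return 4700.0 * math.sqrt(fc_mpa)

def hognestad_stress(eps, fc_mpa):
    # hognestad parabola: fc'' = 0.9 fc', apex strain e0 = 1.8 fc''/ec
    fc2 = 0.9 * fc_mpa
    e0 = 1.8 * fc2 / elastic_modulus(fc_mpa)
    return fc2 * (2.0 * (eps / e0) - (eps / e0) ** 2)

def multilinear_points(fc_mpa, n=6):
    """(strain, stress) pairs: linear up to 0.3 fc', then along the parabola."""
    ec = elastic_modulus(fc_mpa)
    e0 = 1.8 * (0.9 * fc_mpa) / ec
    eps1 = 0.3 * fc_mpa / ec                 # end of the assumed linear range
    pts = [(eps1, 0.3 * fc_mpa)]             # first segment slope equals ec
    for i in range(1, n):
        eps = eps1 + (e0 - eps1) * i / (n - 1)
        pts.append((eps, hognestad_stress(eps, fc_mpa)))
    return pts

points = multilinear_points(25)              # ordinary-concrete fc' from this study
```

by construction the first segment's slope is exactly ec and the last point sits at the apex stress fc'' = 0.9 fc', consistent with the segment-slope rules quoted above.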
ansys uses the willam-warnke [12] failure criterion, figure 4, with the following five parameters to define the failure surface: 1. the uniaxial compressive strength, fc. 2. the uniaxial tensile strength, ft. 3. the equal biaxial compressive strength, fcb. 4. the high-compression-stress point on the tensile meridian, f1. 5. the high-compression-stress point on the compressive meridian, f2.
figure 4: willam-warnke failure surface [12]
however, the failure surface can be specified with a minimum of two constants, ft and fc; the other three constants default to the willam and warnke values: fcb = 1.2 fc, f1 = 1.45 fc and f2 = 1.725 fc. along with the five parameters, open- and closed-crack shear transfer coefficients must be defined. this transfer coefficient βt represents a shear-strength reduction factor for subsequent loads which induce sliding (shear) across the crack face. a multiplier to account for the amount of tensile stress relaxation shall be defined as well. the following table summarizes the failure surface, shear retention, and stress relaxation factors for the four beams:
table 1: numerical parameters used in the fe models
parameter    b1 (oc)   f1 (sfrc)   s4 (oc)   s4 (sfrc ol/ul)
βt (open)    0.3       0.15        0.3       0.1
βt (close)   0.8       0.3         0.8       0.2
ft (mpa)     2.52      4.16        2.52      3.44
fc (mpa)     25.2      26          25.2      -1
tc           0.6       0.6         0.6       0.6
typical shear transfer coefficients range from 0.0 to 1.0, with 0.0 representing a smooth crack (complete loss of shear transfer) and 1.0 representing a rough crack (no loss of shear transfer). when an element is cracked or crushed, a small amount of stiffness is added to it for numerical stability. this stiffness multiplier cstif is used across a cracked face or for a crushed element and defaults to 1.0e-6. f. 
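the three defaulted surface constants follow directly from ft and fc; a small sketch (illustrative python, names mine) of the relations quoted above:

```python
# sketch: the default willam-warnke constants derived when only ft and fc are
# supplied, per the relations quoted in the text (illustrative only)
def willam_warnke_defaults(ft, fc):
    return {
        "ft": ft,            # uniaxial tensile strength
        "fc": fc,            # uniaxial compressive strength
        "fcb": 1.2 * fc,     # equal biaxial compressive strength
        "f1": 1.45 * fc,     # tensile-meridian high-compression point
        "f2": 1.725 * fc,    # compressive-meridian high-compression point
    }

surface = willam_warnke_defaults(ft=2.52, fc=25.2)  # beam b1 values from table 1
```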
steel material description for steel reinforcement, a linear-elastic perfectly-plastic material model was adopted. for practical reasons, steel is assumed to exhibit the same stress-strain curve in compression as in tension. poisson's ratio for steel is set to 0.3, the modulus of elasticity to 200 gpa, the yield strength of flexural reinforcement to 420 mpa, and that of secondary reinforcement and stirrups to 280 mpa. the tangent modulus for the flexural and secondary reinforcement and the stirrups is set to 2000 mpa. g. model structure establishing an fe model with a proper mesh and proper boundary conditions can be a tedious task; therefore, a special code for automatic mesh generation was developed. assuming perfect bond, concrete and reinforcement elements were connected at common nodes to ensure displacement compatibility. a mesh convergence study indicated that elements with dimensions of 27.5mm×20mm×27.5mm in the x, y, and z directions, respectively, would yield satisfactory results. concrete elements were densified at the locations of contact with the loading and supporting plates to account for stress concentration (figures 5 & 6). considering the symmetry about mid-span, only one half of the beam was modeled; this required applying additional boundary constraints perpendicular to the axis of symmetry.
figure 5: fe mesh for b1/b2 (ordinary rc/sfrc beam)
figure 6: fe mesh for b3 (rc with sfrc overlay)
h. bonding between the two layers in the experimental program, the inter-laminar shear (bonding) between the ordinary concrete and the sfrc layer was resisted in two different ways: a) by a chemical bonding agent, and b) by mechanical dowels. in the experiments, mechanical bonding was shown to be more efficient. 
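the bilinear steel law described above can be sketched in a few lines; this is an illustrative python version with my own function name, not the actual ansys input:

```python
# sketch: bilinear steel law for the rebar -- linear-elastic up to fy, then a
# small tangent modulus et beyond yield; units are mpa
def bilinear_steel(strain, fy=420.0, es=200_000.0, et=2000.0):
    """stress for a given strain, symmetric in tension and compression."""
    eps_y = fy / es                      # yield strain = 420/200000 = 0.0021
    s = abs(strain)
    stress = es * s if s <= eps_y else fy + et * (s - eps_y)
    return stress if strain >= 0 else -stress

sigma = bilinear_steel(0.01)  # well past yield: fy plus a small et increment
```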
in the numerical analysis of beams b3 & b4, mechanical bonding was represented by merging common nodes at the crossing steel (stirrup) locations. i. numerical solution parameters setting the numerical solution parameters involves defining the analysis type, specifying load step options and defining convergence criteria. all parameters were set to default, as the analysis is of the "small displacement static" type. two convergence criteria were used, i.e. force and displacement, with tolerance values of 0.005 and 0.05, respectively. iv. results the results obtained from the numerical analysis by ansys for the four beams include extensive information describing the behavior of the beams during all load stages up until failure. results include first crack load, yield load, failure load, failure deflection, plastic strain, crack pattern and load-deflection curve. the following table summarizes the results obtained from the numerical analysis of the four beams:
table 2: results of numerical analysis for the four beams
description             b1        b2        b3-ol     b4-ul
first crack load (kn)   15.65     15.29     15.05     23.52
yield load (kn)         103.24    110.39    105.33    109.20
strain at failure       0.0092    0.0132    0.0062    0.0345
failure load (kn)       113.5     125.0     130.5     141.1
deflection (mm)         7.16      11.57     7.78      18.80
failure mode            flexural  flexural  flexural  flexural
a. numerical versus experimental results in general, the results of the numerical solution by ansys compare very well with the experimental results obtained by ziara [7] for the same beams with regard to load carrying capacity, ductility and failure mode. minor differences in the load-deflection curves between the numerical and experimental models can be attributed to shortcomings in the numerical material description and constitutive models, and to numerical instability in modeling the cracks. 
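the two convergence checks can be illustrated with a toy newton-raphson driver. this is a schematic 1-dof sketch, not the actual ansys solver; all names and the toy spring problem are mine.

```python
# toy newton-raphson step with the study's two convergence tolerances:
# the residual force within 0.005 of the applied force, and the displacement
# correction within 0.05 of the total displacement (1-dof illustration)
def solve_step(u, f_ext, stiffness, f_int, tol_f=0.005, tol_u=0.05, max_iter=50):
    for _ in range(max_iter):
        r = f_ext - f_int(u)            # out-of-balance (residual) force
        du = r / stiffness(u)           # correction from the tangent stiffness
        u += du
        if abs(r) <= tol_f * abs(f_ext) and abs(du) <= tol_u * abs(u):
            return u, True              # both criteria satisfied
    return u, False                     # failed to converge within max_iter

# linear spring k = 2 under a load of 10 converges to u = 5
u, ok = solve_step(0.0, 10.0, lambda u: 2.0, lambda u: 2.0 * u)
```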
table 3: comparison of numerical and experimental results
type                                b1      b2      b3-ol   b4-ul
failure load (kn), numerical        113.48  125.02  130.55  141.12
failure load (kn), experimental(1)  113.50  126.80  133.50  na
deflection (mm), numerical          7.16    11.57   7.78    18.80
deflection (mm), experimental(1)    7.58    11.62   7.40    na
(1) experimental results obtained by ziara [7]
the load-deflection curves obtained from ansys for the four beams were plotted against the experimental results as shown in figures 7, 8 & 9.
figure 7: load-deflection curve for beam b1-rc (ansys vs experimental)
figure 8: load-deflection curve for beam b2-sfrc (ansys vs experimental)
figure 9: load-deflection curve for beam b3-ol (ansys vs experimental)
figures 10 & 11 show the crack patterns obtained by ansys for beams b1 and b3. b. discussion of the results referring to the load-deflection curves for the four beams, it can be seen that the fea was able to predict the behavior of the different beams fairly well. it captured the softening phenomena at first crack, major crack proliferation, the yield point, and the state just before complete failure, which is not clear in the experimental results. both the crack patterns and failure modes of the numerical models compared very well to those of the experimental models; this is mainly because failure of the models is controlled by the proliferation of cracks in the concrete matrix rather than by crushing of concrete. due to the presence of steel fibers in b2, this beam shows improved ultimate capacity and ductility compared to b1. this result is consistent with previous experimental results. 
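the agreement claimed above can be quantified with a short sketch over the table 3 failure loads (illustrative python; the dictionaries simply restate the reported numbers):

```python
# sketch: percent difference between the numerical (ansys) and experimental
# failure loads reported in table 3 (b4-ul has no experimental counterpart)
numerical = {"b1": 113.48, "b2": 125.02, "b3-ol": 130.55}
experimental = {"b1": 113.50, "b2": 126.80, "b3-ol": 133.50}

pct_diff = {k: round(100.0 * (numerical[k] - experimental[k]) / experimental[k], 2)
            for k in numerical}
# the largest deviation is about -2.2% (b3-ol), consistent with the text's
# claim of good agreement
```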
beam b3-ol, which was strengthened with an sfrc overlay containing stirrups in the shear span, reached its full flexural capacity of 130.55 kn, i.e. 1.15 times that of beam b1. the 15% increase in flexural capacity compared to the control beam can be attributed to the presence of steel fibers and to the prevention of inter-laminar shear failure by simulating a welded connection between the stirrups in the sfrc overlay and the existing stirrups in the ordinary concrete part. however, the ductility of the beam was similar to that of the control beam: the absence of steel fibers from the section below the neutral axis, where tensile stresses develop, led to significant proliferation of cracks, and hence the beam behavior in terms of ductility was similar to the control beam modeled with ordinary concrete properties. beam b4-ul, which was strengthened with an sfrc underlay containing stirrups in the shear span, reached its full flexural capacity of 141.12 kn, i.e. 1.24 times that of beam b1. this beam exhibited the highest flexural capacity of all four beams due to the prevention of inter-laminar shear failure by simulating a welded connection between the stirrups in the shear span of the sfrc underlay and the existing stirrups in the ordinary concrete. moreover, the presence of steel fibers in the concrete below the neutral axis, where tensile stresses develop, delayed the first crack, prevented sudden cracks from developing and spreading under small load increments, and enhanced stress redistribution between the concrete and the steel reinforcement. crack widths for the aforementioned beams were not detectable. ansys is not equipped with discrete crack modeling technology, in which crack width can be evaluated from the separation in the mesh; smeared crack technology can predict the crack location and orientation, but not the crack width. 
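the 1.15 and 1.24 capacity ratios quoted above follow directly from the numerical failure loads; a short sketch (illustrative python restating the reported numbers):

```python
# sketch: capacity gain of the strengthened beams relative to control beam b1,
# computed from the numerical failure loads quoted in the text
control = 113.48                                    # kn, beam b1
strengthened = {"b3-ol": 130.55, "b4-ul": 141.12}   # kn

gain_pct = {name: round((load / control - 1.0) * 100.0, 1)
            for name, load in strengthened.items()}
# reproduces the roughly 15% (overlay) and 24% (underlay) gains in the text
```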
the sudden increase in deflection at the first crack for all of the beams is a reflection of the stress redistribution phenomenon: the concrete can no longer withstand the tensile stresses, and the steel reinforcement takes over resisting the tensile stresses formed by loading of the beams. the presence of steel fibers can delay this stress redistribution, as in beam f1, up to a certain load at which the developed tensile stresses can no longer be resisted by the steel fibers either. the same applies to the "jumps" in deflection under small load increments throughout the loading process. v. parametric study to further demonstrate the validity of the numerical analysis, a parametric study was carried out on beam b3-ol. in this study, the influence of the sfrc compressive strength and fiber volume fraction on the overall behavior of the beam was examined. the following is a summary of the obtained results. a. effect of compressive strength the influence of the compressive strength of the sfrc on the beam load carrying capacity was examined using three different values (31.5 mpa, 41.5 mpa and 51.5 mpa). the obtained results indicated a significant improvement in the load carrying capacity of the beam as well as in the overall ductility, as shown in figure 12.
figure 10: crack pattern for b1-rc
figure 11: crack pattern for b3-ol
figure 12: effect of compressive strength on s4-ol (load-deflection curves for 31.5, 41.5 and 51.5 mpa)
b. effect of volume fraction beam b3-ol was tested using 0.5%, 1%, and 2% volume fractions of steel fibers. the results indicated an enhancement in the ductility of the beam at a volume fraction of 2.0%, while the load carrying capacity maintained its original level. 
(see figure 13) increasing or decreasing the volume fraction did not affect the overall behavior of the beam significantly. this is mainly because the sfrc overlay lies above the neutral axis, where compressive stresses are formed, while the tensile stresses to be resisted by the steel fibers form below the neutral axis. vi. summary and conclusion experimental studies of reinforced concrete beams can be costly and time consuming. advancements in computer capabilities and progress in developing sophisticated constitutive models provide a suitable approach that saves time and cost compared to an actual experimental testing program. this study is intended to demonstrate the capability of a numerical approach in predicting the behavior of beams strengthened with layers of sfrc. the use of the ansys apdl program was demonstrated on four beams, which were previously tested in the lab. the results obtained from the numerical analysis were found to be in good agreement with those obtained from the experimental models; the differences between the results are within an acceptable range. the study indicated that the load carrying capacity of rc beams strengthened with sfrc overlays can be improved by about 15%; for beams strengthened with sfrc underlays, the improvement reaches about 24%. using sfrc as an underlay shows a remarkable improvement in load carrying capacity and ductility, and the ductility provided by the sfrc underlay seems to control the overall ductility of the beam. the existence of steel fibers on the tension side of the beam helps improve ductility and ensures better stress redistribution. references [1] ansys® apdl, release 13.0, help system, coupled field analysis guide, ansys, inc. [2] g. tiberti, f. minelli and g. plizzari, "cracking behavior in reinforced concrete members with steel fibers: a comprehensive experimental study," cement and concrete research, vol. 68, pp. 24–34, 2014. [3] x. zhang, g. ruiz and a. m. 
abd elazim, "loading rate effect on crack velocities in steel fiber-reinforced concrete," international journal of impact engineering, vol. 76, pp. 60–66, 2014. [4] f. altun, t. haktanir and k. ari, "effects of steel fiber addition on mechanical properties of concrete and rc beams," construction and building materials, vol. 21, pp. 654–661, 2007. [5] k. s. kim, d. h. lee, j. h. hwang and d. a. kuchma, "shear behavior model for steel fiber-reinforced concrete members without transverse reinforcement," composites, vol. 43, pp. 2324–2334, 2012. [6] j. r. deluce and f. j. vecchio, "cracking behavior of steel fiber-reinforced concrete members containing conventional reinforcement," aci structural journal, vol. 110, pp. 481–491, 2013. [7] m. m. ziara, "behavior of beams strengthened with steel fiber rc overlays," journal of advanced concrete technology, vol. 7, no. 1, pp. 111–121, 2009. [8] p. song and s. hwang, "mechanical properties of high-strength steel fiber-reinforced concrete," construction and building materials, vol. 18, pp. 669–673, 2004. [9] v. b. dawari and g. r. vesmawala, "application of nonlinear concrete model for finite element analysis of reinforced concrete beams," international journal of scientific & engineering research, vol. 5, issue 9, september 2014. [10] aci committee 318, "building code requirements for structural concrete (aci 318m-08) and commentary," aci, michigan, 2008. [11] e. hognestad, n. w. hanson and d. mchenry, "concrete stress distribution in ultimate strength design," aci journal, vol. 52, no. 4, pp. 475–479, 1955. [12] k. j. willam and e. d. warnke, "constitutive model for the triaxial behavior of concrete," international association for bridge and structural engineering, vol. 19, pp. 174–195, 1974. nasreddin elmezaini, ph.d., p.eng., is an associate professor of civil engineering with over 29 years of academic and professional experience. 
his research interests include finite element analysis, behavior of buildings under abnormal loading conditions, soil-structure interaction, and repair and strengthening of buildings. mohammed ashour, b.sc., m.sc., is a practicing engineer with 3 years of experience. he completed his master's degree and is continuing toward a ph.d. his main interest is finite element analysis of structures.
figure 13: effect of volume fraction on s4-ol (load-deflection curves for 0.5%, 1.0%, 1.5% and 2.0% steel fiber volume fractions)
journal of engineering research and technology, volume 2, issue 3, september 2015
effect of galleries on the thermal performance of buildings in the gaza strip
ahmed s. muhaisen 1, nidal abu mustafa 2
1 associate professor, architecture department, the islamic university of gaza, palestine, p.o. box 108, amuhaisen@iugaza.edu.ps
2 architecture department, the islamic university of gaza, palestine, p.o. box 108, arch.nidal@hotmail.com
abstract— this study examines the effect of gallery design in symmetrical street canyons on the thermal performance of buildings located in the mediterranean climate of the gaza strip. it was carried out using two computer simulation tools, namely ecotect and ida ice. the gallery design parameters and street orientation were investigated to find out the extent to which they affect external and internal thermal comfort. the study concluded that thermal stress can effectively be mitigated if galleries are appropriately configured in relation to the solar orientation. galleries on (e-w) oriented streets are more protected from the sun than those on (n-s) oriented streets. a reduction of about 39.4% in the incident solar radiation falling on an (e-w) street occurs when increasing the gallery width to street width ratio (w/w) from 0.2 to 1.4, 
and a reduction of about 22.8% occurs when increasing the gallery height to building height ratio (h/h) from 0.2 to 0.9. on the other hand, higher galleries effectively reduce the total energy requirement of the covered spaces, especially on the upper floors. therefore, more attention must be paid to the gallery dimensions in order to reduce the energy consumption of buildings. index terms— galleries, thermal performance, energy, orientation. i introduction the street shape influences both indoor and outdoor environments, as the street consists of "shared" active facets between the building envelope and the open spaces. thus, street design is an important issue in the global approach to passive urban design 1. different studies have dealt with aspects of urban street design. the use of galleries at street level as a shading device is common and was already known in ancient times 2. the gallery is an element of bioclimatic architecture; it is an intermediate space or passage located on the ground floor (and/or higher floors) of the southern façade. a glazed gallery in most old buildings enables the energy obtained to be used in cold seasons to support the building's air conditioning system, and thus minimize energy consumption 3. the effects of galleries on thermal comfort in urban street canyons were investigated by toudert & mayer (2007) 4 by means of numerical modeling using the three-dimensional microclimate model envi-met 3.0. the results revealed that galleries have a strong effect on the heat gained by a human body and hence on the resulting thermal sensation; consequently, a noticeable decrease in the area of thermal discomfort is achieved. wang et al. (2014) 5 studied the thermal performance of a gallery in a hygrothermal environment using both measured and modelled data. the results of the simulations of the bms system and purposely fitted monitoring instruments showed that the use of galleries can contribute effectively to reducing energy consumption. 
generally, the thermal situation in the area of the galleries depends on the orientation of the street canyon along with the dimensions of the gallery, i.e. height and width 6. thomas (2003) studied street galleries as an environmental approach to achieving sustainable urban design 7. the study showed that strategies of asymmetrical street shape, together with the use of galleries and vegetation, have an effective impact on outdoor comfort. covered streets are common in hot-dry climates in order to provide self-shading façades and protect pedestrians from undesirable solar radiation 8. according to the previous studies and others, it is clear that the configuration of street galleries affects outdoor thermal comfort. however, these studies do not pay sufficient attention to the impact of galleries and their orientation on indoor thermal conditions, which may be part of the solution to the problem of excessive energy consumption. therefore, this study is an attempt to propose suggestions for improving the environmental design of buildings and to find out the extent to which the solar and thermal performance of buildings is affected by the use of galleries. the study aims to find the optimum street configurations with relation to galleries that ensure minimum use of energy to provide thermal comfort in buildings in the mediterranean climate of gaza. although there are many climatic factors that affect the thermal performance of buildings, this study focuses particularly on solar radiation; other climatic factors, such as ventilation and humidity, may be covered by future research. ii study tools and assumptions a simulation tools two simulation tools, namely ecotect and ida ice, were used to carry out the investigations. 
the following are brief descriptions of the two computer programs. ecotect is a software package with a unique approach to conceptual building design, coupling an intuitive 3d design interface with a comprehensive set of environmental performance analysis functions and interactive information displays. ecotect provides its own fast and intuitive modelling interface for generating even the most complex building geometry 9. ecotect visualizes incident solar radiation on windows and surfaces over any period, and displays the sun's position and path relative to the model at any date 10. ida indoor climate and energy (ida ice) is a detailed, dynamic whole-year multi-zone simulation application for the study of the indoor climate of individual zones within a building as well as the energy consumption of the entire building 11. ida ice is an extension of the general ida simulation environment. weather data is supplied by weather data files, or is artificially created by a model for a given 24-hour period. wind- and temperature-driven airflow can be considered by a bulk air flow model 12. b study assumptions simulations were carried out during the summer and winter months. the hvac system was assumed to be fully air conditioned, with a lower band of 18.0°c and an upper band of 26.0°c. the internal heat gains from occupancy and appliances and the ventilation heat gain were considered constant in the simulation, as the study concerns the incident solar radiation on the façades overlooking the street and on the street ground. external walls have u-values of 2.25 w/m²k in ecotect and 2.24 w/m²k in ida ice; the roof u-value is 2.35 w/m²k in both programs; glazing u-values are 6 w/m²k in ecotect and 5.8 w/m²k in ida ice. these values were assumed to meet the u-value requirements recommended by the palestinian code for energy efficient buildings 13. 
c location and climate of the gaza strip the gaza strip is a narrow strip of land in the south-western part of palestine, extending along the eastern mediterranean coast 14. it has a total area of about 365 km² 15 and is located at longitude 34° 26' east and latitude 31° 10' north 16. according to the köppen system of climatic zoning, winter in the gaza strip area is rainy and mild while summer is hot and dry. the average daily mean temperature ranges from 25°c in summer to 13°c in winter 17. the gaza strip generally includes two climatic zones. the first is located in the western part along the coast of the mediterranean sea, which has semi-humid mediterranean climatic conditions; the main cities, housing about 97% of the inhabitants of the gaza strip, are located in this zone. the second zone is the semi-arid loess plains located to the east of the gaza strip, considered an extension of the negev desert. according to the application of olgyay's bioclimatic chart to gaza's climatic conditions, during the summer months (june to september) ventilation is most recommended to minimize the adverse effect of humid and hot air and consequently achieve comfort. in winter (december to march), solar radiation is advantageous to achieve comfort and minimize heating loads. in the rest of the year (april, may, october and november) the climatic conditions are within the comfort zone 16. iii the first case: effect of gallery design on the incident solar radiation a the study parameters an urban canyon of h/w = 2, considered an average profile between shallow and deep streets, with different gallery depths and heights, was simulated, see figure 1. a segment of the street consisting of six buildings (three on each side) with a constant height of 20m, separated by the street width, was considered representative of the whole street length. the setback distance between adjacent buildings is taken to be 4m. the investigated gallery width to street width ratios (w/w) are 0.2, 0.4, 0.6, 0.8, 1.0, 1.2 and 1.4, see figure 2. in addition, a gallery canyon of (w/w) = 1.0 was simulated with variable gallery heights; the investigated gallery height to building height ratios (h/h) are 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, see figure 3. they were simulated at east-west and north-south orientations, see figure 4. the simulation results are expressed in terms of the incident solar radiation on the façades of the buildings overlooking the street and on the street ground (in kwh/m²).
figure 1: the gallery parameters
figure 2: gallery width to street width ratios simulated in the study
b results the first case study is concerned with the impact of gallery width on the solar radiation received on the façade of the central building during the summer and winter months. figure 5 presents the percentages of reduction in incident solar radiation on the wall facing east in the (n-s) street and the wall facing south in the (e-w) street. the results indicate that the incident solar radiation decreases with increasing gallery width to street width ratio (w/w). the shallowest gallery (w/w = 0.2) receives the largest amount of solar radiation, whereas the least amount is received in the deepest gallery (w/w = 1.4). 
increasing the gallery width from 0.2w to 1.4w in the summer months decreases the incident solar radiation on the wall facing east, which overlooks the north-south oriented street axis, in ecotect by about 13.1%, 20.83%, 24.57%, 26.75%, 27.8%, 28.5% and 28.6%, whereas the percentages of decrease are about 23.6%, 30.0%, 32.7%, 33.48%, 34.04%, 34.04% and 34.04% in the winter months at the corresponding width ratios, respectively. this indicates that the deepest gallery with (w/w = 1.4) is the most advisable in summer, since it is the most protected from undesirable solar radiation. in winter the opposite is true, as the shallowest gallery with (w/w = 0.2) would be the most recommended to receive maximum solar radiation when it is welcome. therefore, it is recommended to keep the gallery width to street width ratio close to an intermediate value, so as to admit a large amount of solar radiation in winter and a smaller amount in summer.
figure 3 gallery height to building height ratio considered in the study.
figure 4 incident solar radiation on façades overlooking (n-s) and (e-w) streets as a result of varying the gallery width by ecotect.
figure 5 percentage of reduction in incident solar radiation on façades overlooking (n-s) and (e-w) streets as a result of varying the gallery width by ecotect.
figure 6 the two main street orientations considered in the study.
the same trend can be observed in the results of the ida ice program for the same cases, although there are slight differences in the quantitative amounts of solar radiation, figure 6.
the solar radiation decreases by about 18.15%, 25.83%, 31.57%, 34.75%, 36.85%, 37.91% and 38.6% for the same ratios in the summer months, and by about 33.88%, 40.25%, 42.42%, 43.48%, 44.04%, 44.05% and 44.05% in the winter months, see figure 6. the discrepancy between the results of ecotect and ida ice can be attributed to the different calculation algorithms and slight variations in the specifications of the building materials. overall, the general agreement between the results of the two programs indicates high reliability and confirms the validity of the simulation outcomes. with respect to the street orientation, figure 5 reveals that changing the street orientation from n-s (90°) to e-w (0°) in the summer months results in an average decrease in the solar radiation received on the building façades by about 30.78%, 34.37%, 39.55%, 44.73%, 46.64%, 46.74% and 46.74% at the corresponding width ratios, respectively. the corresponding average decreases in the ida ice results are 35.78%, 40.37%, 45.95%, 51.73%, 54.64%, 55.94% and 56.64%, see figure 6. this means that building façades with galleries overlooking the (e-w) street orientation are more shaded than those overlooking the n-s street. in contrast, the effect of galleries in the east-west oriented street axis in the winter period is not remarkable. this is attributable to the low altitude of the sun in the winter period and the lack of solar radiation intensity. hence, e-w streets are warmer than n-s streets, especially in winter. accordingly, the e-w orientation of streets seems to be the most desirable for both summer and winter months. figure 7 shows the impact of gallery height on the solar radiation received on the building façades. it is clear that the incident solar radiation increases with increasing gallery height to building height ratio (h/h) from 0.2 to 0.9, in both summer and winter.
as the gallery height increases from 0.2h to 0.9h, the reduction in incident solar radiation on the wall facing east, which overlooks the north-south oriented street axis, in the summer months is about 39.88%, 33.49%, 28.1%, 23.8%, 18.67%, 13.31%, 7.63% and 1.22%, whereas the reduction on the façade in the winter months is about 54.26%, 47.78%, 40.6%, 36.04%, 33.98%, 29.78%, 18.54% and 3.84%. this indicates that the amount of solar radiation falling on facades can be reduced effectively by using deeper and lower galleries. hence, attention is drawn here to the relevance of gallery dimensions to the thermal comfort in buildings and within galleries.
figure 7 incident solar radiation on façades overlooking (n-s) and (e-w) streets as a result of varying the gallery height by ecotect: (a) in summer, (b) in winter.
figure 6 incident solar radiation on façades overlooking (n-s) and (e-w) streets as a result of varying the gallery width by ida ice: (a) in summer, (b) in winter.
with regard to the impact of orientation along with the gallery height, figure 7 reveals that changing the street orientation from n-s (90°) to e-w (0°) in the summer months results in an average decrease in the solar radiation received on the building façades by about 58.22%, 52.99%, 49.61%, 46.44%, 42.08%, 36.17%, 29.49% and 25.99% at the corresponding height ratios, respectively, and decreases the solar radiation in the winter months by about 22.82%, 21.61%, 20.41%, 19.83%, 18.57%, 16.78%, 15.03% and 15.19%.
accordingly, gallery design on the southern façades overlooking e-w oriented streets is more effective than on the eastern or western façades overlooking streets with a north-south orientation, since such galleries allow an acceptable degree of protection from undesirable solar radiation in summer and, at the same time, allow a reasonable amount of solar radiation to hit the building facades in winter. it should be noted that gallery design also affects the amount of incident solar radiation falling on the street ground. the east-west street orientation with different gallery dimensions was simulated to assess the impact on the amount of radiation falling on the street ground in the summer months. figure 8 shows that the incident solar radiation received on the street horizontal space decreases with increasing gallery width to street width ratio. the shallowest gallery, with (w/w) = 0.2, receives the highest amount of solar radiation. in contrast, the deepest gallery, with (w/w = 1.4), achieves the best thermal behavior due to its high degree of protection from the sun's rays. on the other hand, the highest gallery, with (h/h) = 0.9, receives the highest amount of solar radiation, whereas the lowest gallery, with (h/h = 0.2), achieves the best thermal behavior because of the short period of exposure to the sun, see figure 9. thus, the thermal situation in streets with galleries is better than that in streets that are completely irradiated, and galleries are considered a good way to protect pedestrian spaces. it is worthy of note that the dimensions of the gallery in combination with the street orientation are decisive. this emphasises the need for galleries to be configured properly to ensure optimum solar performance for buildings and outdoor spaces.
iv the second case: effect of gallery design on the thermal performance of buildings
a the study parameters
the thermal performance of the central building in the examined segment of the street was investigated taking into consideration various gallery depths and heights. the examined buildings were assumed to have 5 storeys in addition to the ground level, with a constant height of 20 m. the percentage of window to wall area was taken to be 10%, and the setbacks between adjacent buildings are 4 m. these configurations represent the most common case of multi-storey buildings in gaza [18]. the investigated gallery widths are 0.5 m, 1.0 m, 1.5 m and 2.0 m, see figure 10. in addition, a gallery width of 2.0 m was simulated with variable gallery heights. the investigated gallery heights are 1 floor (3.3 m), 2 floors, 3 floors, 4 floors and 5 floors, see figure 11. the simulation results were expressed in terms of the heating and cooling energy (in kwh/m³) required to achieve comfort.
figure 8 incident solar radiation on the (e-w) street as a result of varying the gallery width by ecotect.
figure 9 incident solar radiation on the (e-w) street as a result of varying the gallery height by ecotect.
figure 7 gallery widths simulated in the study.
b results
the first examined case concerns the impact of gallery width on the cooling and heating loads in the ground floor of the building facing west in the (n-s) oriented street. figure 12 shows that the cooling energy decreases as the gallery width increases; this is attributable to the effectiveness of the horizontal shading of the deeper galleries. it is worthy of note that the minimum amount of cooling energy is required by the ground floor at the deepest gallery (2.0 m), with the requirement rising gradually as the gallery width approaches 1 m.
increasing the gallery width from 0.5 m to 2.0 m in summer results in a decrease of 3.92% in the energy required to achieve comfort. in contrast, increasing the gallery width from 0.5 m to 2.0 m in winter increases the heating energy by about 2.0%. so, the shallower the gallery is, the more desirable it will be for reducing the heating energy required to achieve comfort in winter. it should be noted that changing the orientation of the n-s street to lie along the e-w axis results in a decrease of about 6.32% in the energy required by the ground floor. accordingly, buildings with galleries overlooking e-w oriented streets are the most preferable throughout the year, since they require the minimum amount of energy to provide thermal comfort. figure 13 shows the effect of gallery width on the total heating and cooling energy required throughout the year. it is clear that the trend of the total energy is the same as that of the cooling energy explained in figure 12(a). this is attributed to the significant effect of gallery depth on reducing the energy required to achieve the best thermal comfort. hence, it is advisable to increase the width of the gallery to achieve the maximum area of shading, and consequently reduce the energy required to achieve thermal comfort. the second examined case concerns the impact of gallery height on the cooling and heating loads. the thermal performance of all floors that include the gallery in the examined building was analysed to find out the extent to which each floor is affected by changing the gallery height. the results indicate that the reduction in the cooling loads in summer is noticeable in the higher floors, which are located directly under the gallery, whereas the heating loads in winter are slightly increased, see figure 14. however, the covered spaces also face periods of low thermal stress.
this is due to the exposure of the lower floors' facades and the ground surface to direct solar beams, as well as the outgoing heat from the ground, in the case of the higher galleries. for more clarity, gallery heights equal to 4 and 5 floors can increase the cooling loads in the ground floor by about 1.05% and 1.53%, respectively. therefore, the upper floors, which are exposed to intense solar radiation, should be carefully provided with appropriate galleries to ensure minimum penetration of solar radiation, and consequently less energy requirement.
figure 8 gallery heights considered in the study.
figure 9 required energy (cooling and heating loads) to achieve comfort in the building facing west in a (n-s) street and the building facing south in an (e-w) street as a result of varying the gallery width.
figure 13 total energy required to achieve comfort in the examined building as a result of varying the gallery width by ida ice.
figure 15 shows that changing the street orientation from n-s to e-w decreases the required cooling load in the summer period by about 6.32% in the ground floor in the case of a gallery height equal to 1 floor, by about 14.08% in the first floor in the case of a gallery height equal to 2 floors, by about 19.24% in the second floor in the case of a gallery height equal to 3 floors, by about 23.48% in the third floor in the case of a gallery height equal to 4 floors, and by about 29.57% in the fourth floor in the case of a gallery height equal to 5 floors. it is also shown that the increase in heating loads in the (e-w) street is less than the increase in the (n-s) street, which confirms that the galleries on an (e-w) street are well protected and the extent of discomfort is very limited.
it is concluded that an east-west oriented street axis with galleries effectively reduces the adverse effect of undesirable radiation falling on the southern apartments and thus mitigates the thermal stress, especially in summer, while at the same time keeping the desirable heat in winter.
v conclusion
this study discussed the impact of gallery design on the incident solar radiation and thermal performance of buildings. the study emphasized that using galleries is useful for reducing thermal stress, due to the effectiveness of their horizontal shading, which reduces undesirable intense solar radiation. it was concluded that deeper and lower galleries can play an important role in reducing the incident solar radiation, especially on the walls facing south, which are the most important to shade. it was found that the deepest gallery with width ratio (w/w) = 1.4 can reduce undesirable radiation falling on the façade by about 28.6% and 46.74% for n-s and e-w oriented streets, respectively. moreover, about 22.8% of reduction occurs with increasing the gallery height to building height ratio (h/h) from 0.2 to 0.9. it was found that orienting the gallery axis along the e-w direction is advisable to ensure a short duration of exposure to the sun's rays in summer and a long duration in winter. the deeper the gallery is, the less energy will be required to achieve thermal comfort throughout the year. the optimum gallery width in the lower floors for both east-west and north-south oriented streets is 2.0 m, as it offers a reduction in energy consumption per cubic metre by about 6.3% and 3.92%, respectively. it is recommended to use high galleries to protect the upper floors' facades from intense solar radiation, taking into consideration that lower floors are usually shaded by opposite buildings. the use of galleries as a horizontal shading device is advisable to achieve indoor thermal comfort with minimum use of energy during the year.
figure 10 required energy (cooling and heating loads) to achieve comfort in the building facing west in the (n-s) street as a result of varying the gallery height by ida ice.
figure 15 required energy (cooling and heating loads) to achieve comfort in the building facing north in the (e-w) street as a result of varying the gallery height by ida ice.
references
[1] f.a. toudert, "dependence of outdoor thermal comfort on street design in hot and dry climate," freiburg, november 2005.
[2] n. lechner, "heating, cooling, lighting: design methods for architects," john wiley & sons, new york, 1991.
[3] m.j. suarez, a.j. gutiérrez, j. pistono, and e. blanco, "cfd analysis of heat collection in a glazed gallery," energy and buildings, vol. 43, pp. 108–116, 2011, doi:10.1016/j.enbuild.2010.08.023.
[4] f.a. toudert and h. mayer, "effects of asymmetry, galleries, overhanging facades and vegetation on thermal comfort in urban street canyons," solar energy, vol. 81, pp. 742–754, 2007, doi:10.1016/j.solener.2006.10.007.
[5] f. wang, k. pichetwattana, r. hendry, and r. galbraith, "thermal performance of a gallery and refurbishment solutions," energy and buildings, vol. 71, pp. 38–52, 2014, doi:10.1016/j.enbuild.2013.11.059.
[6] p.j. littlefair, m. santamouris, s. alvarez, a. dupagne, d. hall, j. teller, j.f. coronel, and n. papanikolaou, "environmental site layout planning: solar access, microclimate and passive cooling in urban areas," crc, london, 2001.
[7] r. thomas, "sustainable urban design, an environmental approach," spon, london, 2003.
[8] a. krishan, "the habitat of two deserts in india: hot–dry desert of jaisailmer (rajasthan) and the cold–dry high altitude mountainous desert of leh (ladakh)," energy and buildings, vol. 23, pp. 217–229, 1996.
[9] a.
marsh, "ecotect and energy plus," the building energy simulation user news, vol. 24, no. 6, 2003.
[10] autodesk ecotect analysis, available at: http://usa.autodesk.com/ecotect-analysis/, 2010.
[11] equa simulation ab, "getting started with ida indoor climate and energy version 4.5," 2013.
[12] equa simulation ab, "user manual, ida indoor climate and energy version 4.5," 2013.
[13] ministry of local government, "the palestinian code for energy efficient building," 2004.
[14] applied research institute (arij), "climatic zoning for energy efficient buildings in the palestinian territories (the west bank and gaza)," technical report submitted to united nations development program / program of assistance to the palestinian people (undp/papp), jerusalem, palestine, 2003.
[15] r. bashitialshaaer, "analysis and modelling of desalination brines," phd dissertation, water resources engineering, lund, sweden, 2011.
[16] ministry of local government, "the palestinian code of energy efficient buildings," ramallah, palestine, 2004.
[17] m. kottek, j. grieser, ch. beck, b. rudolf, and f. rubel, "world maps of köppen-geiger climate classification updated," meteorologische zeitschrift, vol. 15, no. 3, pp. 259-263, 2006.
[18] a. muhaisen, h. dabboor, "studying the impact of orientation, size, and glass material of windows on heating and cooling energy demand of buildings in the gaza strip," journal of king saud university, architecture & planning, vol. 27, pp. 1-15, 2015.
transactions template journal of engineering research and technology, volume 2, issue 2, june 2015 95
design optimization of semi-rigidly connected steel frames using harmony search algorithm
mohammed arafa 1, ashraf khalifa 2, mamoun alqedra 3
1 associate professor, the islamic university of gaza, p. o.
box 108, palestine, marafa@iugaza.edu.ps
2 m.sc., the islamic university of gaza, palestine, khalifa.tec@gmail.com
3 associate professor, the islamic university of gaza, palestine, malqedra@iugaza.edu.ps
abstract—in this paper, a design optimization algorithm is presented for non-linear steel frames with semi-rigid beam-column connections using the harmony search algorithm. the design algorithm obtains the minimum steel weight by selecting from a standard set of steel sections. strength constraints of the american institute of steel construction load and resistance factor design (aisc-lrfd) specification, together with displacement, deflection, size and lateral torsional buckling constraints, are imposed on the frames. harmony search (hs) is a recently developed meta-heuristic search algorithm based on the analogy between natural musical performance and searching for solutions to optimization problems. the design algorithm accounts for the effect of connection flexibility and the geometric non-linearity of the members. the frye–morris polynomial model is used for modeling the semi-rigid connections. two design examples with extended end plates without column stiffeners are presented to demonstrate the application and validity of the algorithm.
index terms—optimum design; harmony search algorithm; genetic algorithm; semi-rigid connections; frye and morris model.
i introduction
structural design optimization of steel frames generally requires the selection of steel sections for the beams and columns from a discrete set of practically available steel section tables. this selection is carried out in such a way that the steel frame has the minimum weight, while the design is limited by constraints such as the choice of material, the feasible strength, displacements, deflection, size constraints, lateral torsional buckling and the true behavior of beam-to-column connections.
the design algorithm aims at obtaining minimum-weight steel frames by selecting from a standard set of steel sections such as the aisc wide-flange shapes [1]. computer-aided optimization has been used to obtain more economical designs since the 1970s [2], [3], [4]. numerous algorithms have been developed for accomplishing optimization tasks over the last four decades. presently, engineers and designers are compelled to achieve more economical designs and to search for or develop more effective optimization techniques; this is why heuristic search methods emerged in the first half of the 1990s [5], [6], [7]. in recent years, structural optimization has witnessed the emergence of novel and innovative stochastic search techniques. these stochastic search techniques make use of ideas taken from nature and do not suffer from the shortcomings of mathematical-programming-based optimum design methods. meta-heuristic algorithms typically intend to find a suitable solution to an optimization problem by 'trial and error' in a reasonable amount of computational time. during the last few decades, several meta-heuristic algorithms have been proposed. these algorithms include genetic algorithms (gas), which are search algorithms based on natural selection and the mechanisms of population genetics; the theory was proposed by holland [8] and further developed by goldberg [9] and others. genetic programming, an extension of genetic algorithms, was developed by koza [10], who suggested that the desired program should itself evolve during the evolution process. evolutionary programming, originally developed by fogel et al. [11], described the evolution of finite state machines to solve prediction tasks. evolution strategies were developed to solve parameter optimization problems by schwefel et al. [12], in which a deterministic ranking is used to select a basic set of solutions for a new trial [13].
ant colony optimization (aco) was first formulated by dorigo [14] and further developed by other pioneers [15], [16]; this algorithm was inspired by the pheromone trails of ants. particle swarm optimization (pso) was developed by kennedy and eberhart [17], inspired by the swarm behavior of fish and bird schooling in nature [18], [19], [20]. the artificial bee colony algorithm (abc) was developed by karaboga and basturk [21] for solving optimization problems. recently, a large number of optimum structural design algorithms relying on these effective, powerful and novel techniques have been developed, such as genetic algorithm based optimum design of nonlinear planar steel frames with various semi-rigid connections by kameshki and saka [22], design of steel frames using ant colony optimization by camp et al. [23], and optimum design of cellular beams using harmony search and particle swarm optimizers by erdal et al. [24]. geem and kim [25] developed the harmony search (hs) meta-heuristic algorithm, which was conceptualized using the musical process of searching for a perfect state of harmony. the harmony in music is analogous to the optimization solution vector, and the musician's improvisations are analogous to local and global search schemes in optimization techniques. the hs algorithm does not require initial values for the decision variables. furthermore, instead of a gradient search, the hs algorithm uses a stochastic random search based on the harmony memory considering rate and the pitch adjusting rate (defined in the harmony search meta-heuristic algorithm section), so that derivative information is unnecessary.
compared to earlier meta-heuristic optimization algorithms, the hs algorithm imposes fewer mathematical requirements and can easily be adapted to various types of engineering optimization problems. the main differences between hs and ga can be summarized as follows: (i) hs generates a new design considering all existing designs, while ga generates a new design from a couple of chosen parents by exchanging artificial genes; (ii) hs considers each design variable independently, whereas ga considers design variables depending upon building block theory; (iii) hs does not code the parameters, whereas ga does; that is, hs uses a real-value scheme, while ga uses a binary scheme (0 and 1). the current study develops an algorithm to obtain the optimum design of steel frames with semi-rigid beam-column connections using the harmony search (hs) technique. the design optimization problem was formulated to obtain the minimum steel frame weight. the strength, displacement, deflection and size constraints of the aisc-lrfd specification [1] were imposed. the frye and morris polynomial model is introduced to model the semi-rigid connections. to demonstrate the application of the developed algorithm, two steel frames with extended end plate moment connections are presented.
ii design optimization problem
the formulation of the current problem as an optimization problem is carried out by identifying the design variables, the objective function, the penalized objective function and the penalty function.
a design variables
structural design optimization of steel frames generally requires selection of steel sections for the beams and columns from a discrete set of practically available steel section tables. the design algorithm aims to obtain the minimum steel weight of frames by selecting from a standard set of steel sections. the current study utilizes the aisc wide-flange shapes from w40 to w8 as the design variables of the optimization problem.
these sections are considered the most practical sections for steel beams and columns.
b the objective function
the adopted optimization problem for the design of steel frames is to minimize the overall steel weight. the objective function of the minimization problem is formulated as follows:

minimize W(x) = Σ_{i=1}^{ng} A_i ρ_i L_i   (1)

in equation 1, W(x) is the total weight of the members, ng is the total number of groups in the frame, A_i is the cross-sectional area of member i, and ρ_i and L_i are the density and length of member i.
c penalized objective function
in order to assess the fitness of a trial design and determine its distance from the global optimum, the eventual constraint violation should be computed by means of a penalty function. the penalty function consists of a series of geometric constraints corresponding to the dimensions and shape of the cross sections, and a series of constraints related to the deflection and internal forces of the members of the structure. thus, the penalty will be proportional to the constraint violations, and the best design will have the minimum weight with no penalty. there are several studies devoted to the selection of penalty functions [26], [27], [28]. in this study, the penalized objective function φ(x) is applied and written for the american institute of steel construction load and resistance factor design (aisc-lrfd) code as follows [1]:

φ(x) = W(x) · (1 + γ·V)^ε   (2)

where φ(x) is the penalized objective function, γ is the penalty constant, V is the constraint violation function and ε is the penalty function exponent. in this study γ = 1.0 and ε = 2.0 are considered [23].
d penalty function
the constraints of the current optimization problem comprise displacement constraints, size constraints, deflection constraints and strength constraints.
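the weight and penalized objective of eqs. (1) and (2) can be sketched as follows. the member list and the violation value v are hypothetical, and the exponent form (1 + γ·V)^ε follows the penalty exponent ε defined in the text; γ = 1.0 and ε = 2.0 match the values adopted in the study.

```python
# sketch of eqs. (1)-(2): frame weight and penalized objective.
# member data and the violation value V below are illustrative only.

def frame_weight(members):
    """W(x) = sum of A_i * rho_i * L_i over all members (eq. 1)."""
    return sum(a * rho * l for a, rho, l in members)

def penalized_objective(weight, v, gamma=1.0, eps=2.0):
    """phi(x) = W(x) * (1 + gamma*V)**eps (eq. 2)."""
    return weight * (1.0 + gamma * v) ** eps

# hypothetical members: (area m^2, density kg/m^3, length m)
members = [(0.0076, 7850.0, 6.0), (0.0110, 7850.0, 3.5)]
w = frame_weight(members)            # total steel weight in kg
phi = penalized_objective(w, v=0.1)  # penalized weight for V = 0.1
```

a feasible design (V = 0) keeps φ(x) = W(x), while any violation inflates the objective, steering the search back toward feasibility.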
therefore, the constraint violation function of the optimization problem is expressed as:

V = Σ_{i=1}^{njt} V_i^td + Σ_{i=1}^{ns} V_i^id + Σ_{i=1}^{ncl} V_i^sc + Σ V_i^sb + Σ_{i=1}^{nf} V_i^db + Σ_{i=1}^{nc} V_i^i   (3)

where V_i^td is the constraint violation for top-storey displacement, V_i^id is the constraint violation for inter-storey displacement, V_i^sc and V_i^sb are the constraint violations for the size constraints of columns and beams, respectively, V_i^db is the constraint violation for beam deflection, and V_i^i is for the interaction formulas of the lrfd specification; njt is the number of joints in the top storey, ns and nc are the number of storeys except the top storey and the number of beam-columns, respectively, ncl is the total number of columns in the frame except the ones at the bottom floor, and nf is the number of storeys. the computation of the penalty function for these constraints is illustrated below. the penalty may be expressed as

V_i = 0 if λ_i ≤ 0;  V_i = λ_i if λ_i > 0   (4)

the displacement constraints are

λ_i^td = d_t / d_t^u − 1.0 ≤ 0, where i = 1, …, njt   (5)

λ_i^id = d_i / d_i^u − 1.0 ≤ 0, where i = 1, …, ns   (6)

where d_t is the maximum displacement in the top storey, d_t^u is the allowable top-storey displacement (total height/300), d_i is the inter-storey displacement in storey i, and d_i^u is the allowable inter-storey displacement (storey height/300). the size constraints are given as follows:

λ_i^sc = d_un / d_bn − 1.0 ≤ 0, where i = 1, …, ncl   (7)

λ_i^sb = d_bf / d_bc − 1.0 ≤ 0, where i = 1, …, N   (8)

where d_un and d_bn are the depths of the steel sections selected for the upper and lower floor columns, and d_bf and d_bc are the widths of the beam flange and column flange, respectively. the deflection control for each beam is given as follows:

λ_i^db = d_db / d_du − 1.0 ≤ 0, where i = 1, …, nf   (9)

where d_db is the maximum deflection of each beam and d_du is the allowable floor girder deflection for unfactored imposed load, ≤ l/360.
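the displacement penalties of eqs. (4)-(6) can be sketched as follows, assuming the height/300 drift limits stated in the text; the storey heights and drift values themselves are illustrative.

```python
# sketch of eqs. (4)-(6): displacement ratios turned into penalty terms.
# storey height and the drift values are illustrative assumptions.

def violation(lam):
    """eq. 4: zero when the constraint holds, lambda otherwise."""
    return lam if lam > 0.0 else 0.0

def drift_lambda(d, height_mm, limit_ratio=300.0):
    """eqs. 5-6: lambda = d/d_u - 1.0 with d_u = height/300."""
    return d / (height_mm / limit_ratio) - 1.0

# hypothetical 3-storey frame, storey height 3300 mm each
storey_drifts = [8.0, 12.5, 9.0]    # inter-storey drifts in mm
v_id = sum(violation(drift_lambda(d, 3300.0)) for d in storey_drifts)

top_displacement = 28.0             # top-storey displacement in mm
v_td = violation(drift_lambda(top_displacement, 3 * 3300.0))
```

here only the 12.5 mm drift exceeds its 11 mm limit (3300/300) and contributes to V; the top-storey check passes, so its term is zero.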
for members subjected to bending moment and axial force, the adopted strength constraints based on [1] are expressed as follows:

for P_u / (φ_c P_n) ≥ 0.20:
λ_i = P_u / (φ_c P_n) + (8/9) · (M_ux / (φ_b M_nx) + M_uy / (φ_b M_ny)) − 1.0 ≤ 0, where i = 1, …, nc   (10)

for P_u / (φ_c P_n) < 0.20:
λ_i = P_u / (2 φ_c P_n) + (M_ux / (φ_b M_nx) + M_uy / (φ_b M_ny)) − 1.0 ≤ 0, where i = 1, …, nc   (11)

where P_u is the factored applied compression load, P_n is the nominal axial strength (compression), M_ux is the factored applied flexural moment about the major axis, M_uy is the factored applied flexural moment about the minor axis, M_nx is the nominal flexural strength about the major axis, M_ny is the nominal flexural strength about the minor axis (for two-dimensional frames, M_uy = 0), φ_c is the resistance factor for compression (equal to 0.90), and φ_b is the flexural resistance factor (equal to 0.90).
a the nominal compressive strength of a member

P_n = A_g · F_cr   (12)

F_e = π² E / (K L / r)²   (13)

F_cr = (0.658^(F_y/F_e)) · F_y  when F_y/F_e ≤ 2.25   (14)

F_cr = 0.877 · F_e  when F_y/F_e > 2.25   (15)

where P_n is the nominal axial strength (compression), A_g is the cross-sectional area of the member, F_cr is the critical compressive stress, F_e is the euler stress, F_y is the yield strength of the steel, E is the modulus of elasticity, K is the effective-length factor, L is the member length, and r is the governing radius of gyration. the effective length factor K for an unbraced frame is calculated from the following approximate equation taken from [29]:

K = sqrt((1.6 G_A G_B + 4.0 (G_A + G_B) + 7.50) / (G_A + G_B + 7.50))   (16)

the out-of-plane effective length factor for each column member is specified to be K_y = 1.0, while the unbraced length for each beam member is taken as l/6 of the span (i.e., floor stringers at the l/6 points of the span). the length of the unbraced compression flange for each column member is calculated during the design process, while that for each beam member is specified to be l/6 of the span length. in equation 16, subscripts A and B denote the two ends of the column under consideration.
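a minimal sketch of eqs. (12)-(16), computing the nominal compressive strength of one column; the section properties, lengths and restraint factors G_A, G_B below are illustrative assumptions, not values taken from the design examples in the paper.

```python
import math

# sketch of eqs. (12)-(16): nominal compressive strength per the
# lrfd formulas quoted in the text. all numeric inputs are illustrative.

def effective_length_k(ga, gb):
    """eq. 16: approximate unbraced-frame effective length factor."""
    return math.sqrt((1.6 * ga * gb + 4.0 * (ga + gb) + 7.5)
                     / (ga + gb + 7.5))

def nominal_compressive_strength(ag, fy, e, k, l, r):
    """eqs. 12-15: Pn = Ag*Fcr with inelastic/elastic buckling branches."""
    fe = math.pi ** 2 * e / (k * l / r) ** 2      # euler stress, eq. 13
    if fy / fe <= 2.25:                           # inelastic buckling
        fcr = 0.658 ** (fy / fe) * fy             # eq. 14
    else:                                         # elastic buckling
        fcr = 0.877 * fe                          # eq. 15
    return ag * fcr

k = effective_length_k(ga=1.2, gb=1.5)            # hypothetical G factors
pn = nominal_compressive_strength(ag=9130.0,      # gross area, mm^2
                                  fy=345.0,       # yield stress, MPa
                                  e=200000.0,     # elastic modulus, MPa
                                  k=k, l=3600.0, r=94.0)  # length/radius, mm
```

for these inputs the slenderness K·L/r is about 55, which falls in the inelastic branch of eq. (14); the design strength would then be φ_c·P_n with φ_c = 0.90.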
The restraint factor G is stated as

G = Σ(Ic/Lc) / Σ(Ib/Lb)   (17)

where Ic is the moment of inertia and Lc the unsupported length of a column section, and Ib is the moment of inertia and Lb the unsupported length of a beam section. Σ indicates a summation over all members connected to the joint considered (A or B) and lying in the plane of buckling of the column under consideration.

B. The nominal flexural strength of a member:

The design strength of beams is φb Mn. As long as λ ≤ λp, Mn is equal to Mp and the shape is compact. The plastic moment Mp is calculated from the following equation:

Mn = Mp = Fy Z   (18)

where Mn = nominal flexural strength; Mp = plastic moment; Fy = yield stress of steel; Z = plastic section modulus; λp = slenderness parameter to attain Mp; φb = flexural resistance factor (0.90). Details of the formulations are given in [1].

III Connection modeling and analysis of steel frames

The beam-column connections and steel frame members were modeled in the ANSYS software. The beams and columns of the frame were modeled using BEAM3 from the ANSYS element library; BEAM3 is a uniaxial element with tension, compression and bending capabilities. The extended end plate connections without column stiffeners were simulated using a non-linear spring element, COMBIN39, a unidirectional element with non-linear generalized force (moment)-deflection (rotation) capabilities that can be used in any analysis. In the present study, the non-linear properties of the extended end plate connections were modeled using the Frye-Morris polynomial model [30] because of its simple implementation. This model has the general form:

θr = C1 (KM)^1 + C2 (KM)^3 + C3 (KM)^5   (19)

where θr is the connection rotation (rad × 10^-3), M is the connection moment (kip·in), K is a standardization constant which depends on the connection type and geometry, and C1, C2, C3 are curve fitting constants. The values of these constants are given in Table 1 [31].
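Eq. (19) and the Table 1 constants can be sketched as below. The curve-fitting constants are the extended-end-plate values quoted in Table 1; the connection geometry (dg, tp, db) and the moment used in the example are illustrative assumptions.

```python
def standardization_constant(dg, tp, db):
    """Table 1: K = dg^-2.4 * tp^-0.4 * db^-1.5 (dimensions in inches)."""
    return dg**-2.4 * tp**-0.4 * db**-1.5

def frye_morris_rotation(M, K, C1=1.83e-3, C2=1.04e-4, C3=6.38e-6):
    """Eq. (19): connection rotation (rad x 1e-3) for a moment M (kip-in)."""
    kM = K * M
    return C1 * kM + C2 * kM**3 + C3 * kM**5

# Illustrative geometry: 20-in connection depth, 0.685-in plate, 1-in bolts
K = standardization_constant(dg=20.0, tp=0.685, db=1.0)
theta = frye_morris_rotation(M=500.0, K=K)
```

Because all three constants are positive, the model gives a monotonically increasing (softening) moment-rotation curve, which is why it is convenient for the COMBIN39 spring definition.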
IV Design optimization using harmony search algorithm

The harmony search (HS) technique was proposed by Geem et al. [25], [32], [33], [34] and [35] for solving combinatorial optimization problems. The approach is based on the musical performance process that takes place when a musician searches for a better state of harmony. A brief description of the implementation steps of the HS technique is presented in the following subsections.

A. Initialize the harmony search parameters:

The HS technique comprises several parameters that specify the algorithm for a given problem: the harmony memory (HM) matrix, the harmony memory size (HMS), the harmony memory consideration rate (HMCR), the pitch adjusting rate (PAR), a uniformly distributed random number (rand), the design variables (Xsl) and the maximum iteration count (MAXITER).

B. Initialize the harmony memory matrix:

In this step the harmony memory (HM) matrix is initialized by random selection of design variables from the adopted steel section list; the random selection is performed using random numbers in the interval [0, 1]. The HM matrix can be represented as:

HM = | x1^1       x2^1       …  xng^1       |   φ(x^1)
     | x1^2       x2^2       …  xng^2       |   φ(x^2)
     | …          …          …  …           |   …
     | x1^hms-1   x2^hms-1   …  xng^hms-1   |   φ(x^hms-1)
     | x1^hms     x2^hms     …  xng^hms     |   φ(x^hms)      (20)

where xi^1, xi^2, …, xi^hms-1, xi^hms are the design variables and φ(x^1), φ(x^2), …, φ(x^hms-1), φ(x^hms) are the corresponding unconstrained objective function values.

C. Generating a new harmony vector:

A new harmony xi^new is improvised from either the HM or the design variables (Xsl). Three rules are applied for the generation of the new harmony: HMCR, PAR and rand.

xi^new ← { xi^new ∈ {xi^1, xi^2, …, xi^hms-1, xi^hms}   if rand ≤ HMCR
         { xi^new ∈ Xsl                                 if rand > HMCR     (21)

First, a random number (rand) is generated; if it is less than or equal to the HMCR value, xi^new is selected from the current values stored in the i-th column of the HM. If rand is higher than HMCR, xi^new is selected from the design variables (Xsl). Any design variable of the new harmony obtained by memory consideration is then examined to determine whether it should be pitch-adjusted. The pitch adjusting rate (PAR) searches for better designs in the neighborhood of the current design as follows:

pitch adjustment of xi^new ← { Yes   if rand ≤ PAR
                             { No    if rand > PAR     (22)

A random number (rand) is generated for xi^new; if it is less than or equal to PAR, xi^new is replaced with a neighboring section in the design variables (Xsl). If rand is higher than PAR, xi^new remains the same. The HMCR and PAR parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm [25] and [32].

D. Update the harmony memory:

If the new harmony vector is better than the worst harmony in the HM, judged by the objective function value, the new harmony is included in the HM and the existing worst harmony is excluded from it.

E. Termination criteria:

If the termination criterion (MAXITER) is reached, the computation is stopped; otherwise, steps C and D are repeated.

V Harmony search based structural optimization and design procedure

Figure 1 shows the detailed procedure of the HS algorithm-based method for determining the optimal design of steel frame structures. The procedure can be divided into the following two steps:

Step 1: Initialization. HS algorithm parameters such as HMS, HMCR, PAR, the maximum number of searches and the design variables are initialized. Harmonies (i.e., solution vectors) equal in number to the size of the HM are then randomly generated from the possible variable bounds.
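The improvisation step of Eqs. (21)-(22) can be sketched as below. This is an illustrative implementation, not the paper's code: the section list, the two-harmony memory and the offset set {-2, -1, +1, +2} (matching the "four neighboring sections" example given later) are assumptions for the example.

```python
import random

def improvise(HM, section_list, hmcr=0.90, par=0.45):
    """One HS improvisation step over the design variables (Eqs. 21-22)."""
    new_harmony = []
    for i in range(len(HM[0])):                     # one design variable per column
        if random.random() <= hmcr:
            x = random.choice([row[i] for row in HM])   # Eq. (21): memory consideration
            if random.random() <= par:                  # Eq. (22): pitch adjustment
                j = section_list.index(x) + random.choice([-2, -1, 1, 2])
                j = min(max(j, 0), len(section_list) - 1)   # clamp to the catalogue
                x = section_list[j]
        else:
            x = random.choice(section_list)             # random selection from Xsl
        new_harmony.append(x)
    return new_harmony

sections = ["W14X90", "W14X82", "W14X74", "W14X68", "W14X61", "W14X53", "W14X48"]
HM = [["W14X74", "W14X68"], ["W14X82", "W14X61"]]   # 2 harmonies, 2 variables
new = improvise(HM, sections)
```

In the full algorithm, `new` would then be analyzed (here, by the ANSYS finite element model), penalized via Eq. (3), and accepted into the HM if it beats the worst stored harmony (step D).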
Here, the initial HM is generated based on a finite element analysis (ANSYS) subject to the objective function and the penalized objective function.

Step 2: Search. A new harmony is improvised from the initially generated HM or from the possible variable values using the HMCR and PAR parameters. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm. The new harmony is analyzed using the finite element analysis (ANSYS), and its fitness is evaluated using the constraint functions. If these are satisfied, the weight of the structure is calculated using the objective function. If the new harmony is better than the previous worst harmony, the new harmony is included in the HM and the previous worst harmony is excluded from it. The HM is then sorted by objective function value. The computations terminate when the maximum-number-of-searches criterion is satisfied; if not, this step is repeated.

Table 1 Curve fitting constants and standardization constant for the Frye-Morris polynomial model (extended end plate without column stiffeners):

| Curve fitting constants (inch) | Standardization constant (inch)  |
| C1 = 1.83 × 10^-3              | K = dg^-2.4 · tp^-0.4 · db^-1.5  |
| C2 = 1.04 × 10^-4              |                                  |
| C3 = 6.38 × 10^-6              |                                  |

VI Benchmark design examples

Two design problems have been examined in the present study to exercise the developed optimum design algorithm. The designs of steel frames with semi-rigid connections were compared with those of frames with rigid connections under similar design requirements. These semi-rigid and rigid connection frames were analyzed linearly and non-linearly, including the P-Δ effect of beam-column members. In addition, two catalogs were used: the full catalog section (FCS), which contains all beam-column members from W40 to W8.
These sections are considered the most practical sections for steel beams and columns. The selected catalog section (SCS), which contains two separate section lists, was used to search for economic solutions: the first is a column catalog with height/width ratio less than 2; the second is a beam section list with height/width ratio greater than 2. The design algorithm aims at obtaining the minimum steel weight of frames by selecting a standard set of steel W-sections from the AISC standard sections. The AISC strength, displacement, deflection and size constraints for all members, as well as lateral torsional buckling, were imposed on the frames [1]. A comparative study was carried out between the HS optimization results and the results obtained for similar frames optimized using genetic algorithm (GA) techniques published by Kameshki and Saka [22].

A. Design of a three-storey, two-bay steel frame:

The geometry and loading of the three-storey, two-bay frame are shown in Figure 2. The modulus of elasticity and yield stress of the steel sections are 29,000 ksi and 36 ksi, respectively. The top-storey and inter-storey sway (h/300) are limited to 1.44 in. and 0.48 in., respectively. The allowable deflection for service imposed load (L/360) is taken as 0.66 in. The out-of-plane effective length factor for each column (Ky) is taken as 1.0, and the out-of-plane unbraced length (L/6) for beams is 40 in. The bolt diameter and end plate thickness are taken as 1 in. and 0.685 in., respectively. The following tuning parameters are applied in the HS algorithm: the harmony memory size (HMS) and the harmony memory consideration rate (HMCR) are selected as 15 and 0.90, respectively; the pitch adjusting rate (PAR) is selected as 0.45, with a neighboring bandwidth (bw) index of -2 or +2. For example, if xi^new is W14X68, the bandwidth forms a list of W14X90, W14X82, W14X74, W14X68, W14X61, W14X53 and W14X48.
The algorithm then selects a random neighboring section from the four sections W14X82, W14X74, W14X61 and W14X53. The maximum iteration count (MAXITER) is 2500 [36].

Figure 1 Harmony search algorithm optimization and design procedure (flowchart: initialization of HMS, HMCR, PAR and the maximum number of searches; generation of the initial HM via FEM analysis in ANSYS; improvisation of a new harmony from the HM or the possible variable values using HMCR and PAR; FEM analysis of the new harmony; updating of the HM; termination when the maximum number of searches is satisfied).

Figure 2 Three-storey, two-bay steel frame (two 240-in. bays, 144-in. storey heights; lateral point loads of 8, 8 and 4 kips; distributed beam loads of 0.22 and 0.17 kips/in).

The results of ten independent runs of the HS steel design optimization algorithm are presented in Table 2. It is observed that the non-linear analysis, including the P-Δ effect, of semi-rigid connection frames yielded 3.50% less steel weight than that of frames with rigid connections. Table 2 also reveals that the optimum weight of the semi-rigid connection frame with non-linear analysis is the same for the full catalog section (FCS) and the selected catalog section (SCS), but with SCS convergence was obtained in only 65% of the iterations required with FCS. This means that the SCS offers added flexibility in choosing beams and columns over the FCS.
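The neighboring-bandwidth selection described above (bw = ±2 around W14X68) can be sketched as follows; the W14 list is the one quoted in the text.

```python
import random

w14_list = ["W14X90", "W14X82", "W14X74", "W14X68", "W14X61", "W14X53", "W14X48"]

def neighbours(section, catalogue, bw=2):
    """Catalogue sections at index offsets -bw..+bw (excluding 0) of `section`."""
    j = catalogue.index(section)
    return [catalogue[j + k] for k in range(-bw, bw + 1)
            if k != 0 and 0 <= j + k < len(catalogue)]

cands = neighbours("W14X68", w14_list)   # the four neighbouring sections
replacement = random.choice(cands)       # pitch-adjusted design variable
```

For W14X68 this yields exactly the four candidates named in the text (W14X82, W14X74, W14X61, W14X53); near the ends of the catalogue the list is simply truncated.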
In addition, the solution with linear analysis of semi-rigid connections yielded a 2.58% lighter frame than that with rigid connections, and the optimum weight, which converged at iteration 1579, was obtained using only 63% of the maximum iteration count (MAXITER = 2500).

Table 2 Minimum steel frame weight of the three-storey, two-bay steel frame based on HS (weight in lb / iteration of convergence):

| Run | Rigid, FCS, linear | Rigid, FCS, non-linear | Semi-rigid, FCS, linear | Semi-rigid, FCS, non-linear | Semi-rigid, SCS, non-linear |
| 1  | 6504 / 1746 | 6528 / 2235  | 6336 / 1579  | 6300 / 2366  | 6300 / 1547  |
| 2  | 6528 / 2199 | 6576 / 1392  | 6348 / 1802  | 6348 / 1255  | 6336 / 1513  |
| 3  | 6552 / 1578 | 6768 / 1621  | 6372 / 2488  | 6372 / 1135  | 6432* / 1546 |
| 4  | 6624 / 1624 | 6792* / 2235 | 6396 / 1615  | 6396* / 1496 | 6432* / 1249 |
| 5  | 6672 / 1125 | 6792* / 1586 | 6456 / 1748  | 6396* / 1490 | 6492 / 2005  |
| 6  | 6720 / 2303 | 6792* / 1803 | 6816 / 2110  | 6504 / 1751  | 6504* / 1558 |
| 7  | 6792 / 1542 | 6864 / 2130  | 6852 / 1657  | 6516 / 1622  | 6504* / 1558 |
| 8  | 6804 / 1872 | 6888 / 2133  | 6876 / 1681  | 6648 / 1204  | 6744 / 1490  |
| 9  | 7116 / 2442 | 6924 / 1600  | 6912 / 1642  | 6696 / 2091  | 6756 / 1674  |
| 10 | 7248 / 2386 | 7272 / 1155  | 7020 / 2206  | 7128 / 1622  | 6852 / 1452  |
| Min weight (lb) | 6504 | 6528 | 6336 | 6300 | 6300 |
| MAPE (%)        | 3.61 | 4.2  | 4.4  | 3.41 | 3.52 |
| Time (min)      | 40   | 65   | 50   | 75   | 65   |

Notes: (1) the * symbol marks runs with the same weight but different sections; (2) mean absolute percentage error (MAPE) = (100%/n) Σ |actual value − minimum value| / actual value, where n = number of frame analyses.
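The MAPE figure reported in Table 2 can be reproduced from the run weights; the list below is the rigid/FCS/linear column of Table 2, and the formula follows the note to the table.

```python
def mape(weights):
    """Mean absolute percentage error of run weights vs the minimum found."""
    best = min(weights)
    return 100.0 / len(weights) * sum(abs((w - best) / w) for w in weights)

runs = [6504, 6528, 6552, 6624, 6672, 6720, 6792, 6804, 7116, 7248]
m = mape(runs)   # approximately 3.6%, consistent with the 3.61% in Table 2
```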
Table 3 Optimum design results of the three-storey, two-bay steel frame. GA columns (Kameshki and Saka, 2003) use the full catalog (FCS); HS columns (Khalifa, 2011) use FCS except the last, which uses SCS:

| Group | Member type | GA, rigid, linear | GA, rigid, non-linear | GA, semi-rigid, linear | GA, semi-rigid, non-linear | HS, rigid, linear | HS, rigid, non-linear | HS, semi-rigid, FCS, linear | HS, semi-rigid, FCS, non-linear | HS, semi-rigid, SCS, non-linear |
| 1 | column | W24X55  | W24X55 | W21X50 | W18X36 | W21X44 | W21X48 | W12X30 | W18X40 | W12X35 |
| 2 | column | W21X44  | W16X31 | W18X35 | W14X26 | W14X30 | W12X26 | W12X26 | W12X26 | W12X26 |
| 3 | column | W12X26  | W12X40 | W18X35 | W8X18  | W10X22 | W10X22 | W8X24  | W8X21  | W8X24  |
| 4 | column | W30X108 | W18X35 | W27X84 | W24X68 | W14X38 | W16X40 | W14X48 | W16X40 | W14X43 |
| 5 | column | W24X55  | W18X35 | W24X55 | W24X68 | W14X30 | W12X30 | W12X30 | W12X30 | W12X30 |
| 6 | column | W18X35  | W12X35 | W18X46 | W18X35 | W10X22 | W10X22 | W12X30 | W8X21  | W10X22 |
| 7 | beam   | W14X26  | W16X26 | W18X35 | W16X26 | W16X26 | W16X26 | W16X26 | W14X26 | W16X26 |
| Total weight (lb)     | | 8496  | 7404  | 9300  | 7092  | 6504 | 6528 | 6336 | 6300 | 6300 |
| % weight decrease     | | 25.42 | 14.91 | 31.87 | 11.16 | 2.58 | 3.50 | 0    | 0    | 0    |
| Top-storey sway (in.) | | 0.48  | 0.64  | 0.39  | 0.61  | 0.78 | 0.63 | 1.13 | 0.93 | 0.92 |

Note: allowable top-storey sway = 1.44 in.

Figure 3 Optimum design history of the three-storey, two-bay frame (SCS).

Moreover, Table 2 reveals that in all cases the MAPE of the HS runs lay between 3.40% and 4.40%, which reflects the accuracy of the algorithm. Figure 3 shows a typical convergence history for an HS design of the three-storey, two-bay steel frame with semi-rigid connections and SCS analysis. The figure illustrates that the optimization process decreased gradually and then fine-tuned, because the values of the pitch adjusting rate (PAR) and neighboring bandwidth (bw) decrease with time to prevent overshoot and oscillation, forcing the algorithm to focus more on intensification in the final iterations.
Furthermore, the figure reveals that the convergence rate was high up to iteration 500; the optimum value was obtained at iteration 1547 and remained unchanged until the maximum iteration (MAXITER = 2500) was reached.

B. Comparison of HS with the genetic algorithm for the three-storey, two-bay steel frame:

The optimum steel section designations obtained by the harmony search (HS) method are given in Table 3. The optimum weight of frames with semi-rigid connections is generally less than that of frames with rigid connections. In addition, the optimum weight of the semi-rigid connection frame with non-linear analysis is the same for FCS and SCS, but the sections are completely different. When comparing the HS results with the corresponding frames optimized using the genetic algorithm (GA) technique, HS gave frames at least 11.16% lighter than those optimized using GAs; indeed, Table 3 shows that across all cases HS yielded frames between 11.16% and 31.87% lighter than those obtained by GAs. The results also show that the lateral displacement at the top storey was 0.92 in. for the non-linear semi-rigid frame with SCS analysis, which is higher than the values obtained by GAs but within the allowable AISC-LRFD limit (1.44 in.). This can be attributed to the fact that lighter sections sway more than heavier ones.

C. Ten-storey, one-bay steel frame:

The geometry and loading of the ten-storey, one-bay frame are shown in Figure 4. The modulus of elasticity and yield stress of the steel sections are 29,000 ksi and 36 ksi, respectively. The top-storey and inter-storey sway (h/300) are limited to 4.92 in. and 0.48 in., respectively. The allowable deflection for service imposed load (L/360) is taken as 1.00 in. The out-of-plane effective length factor for each column (Ky) is taken as 1.0, and the out-of-plane unbraced length (L/6) for beams is 60 in.
The bolt diameter and end plate thickness are taken as 1.125 in. and 1.00 in., respectively. The following tuning parameters are applied in the HS algorithm: the harmony memory size (HMS) and the harmony memory consideration rate (HMCR) are selected as 20 and 0.90, respectively; the pitch adjusting rate (PAR) and neighboring bandwidth (bw) are selected as 0.45 and ±2, respectively. The maximum iteration count (MAXITER) is 5000 [36]. The results of ten independent runs of the HS steel design optimization algorithm are presented in Table 4. It is observed that the linear analysis of semi-rigid connection frames yielded 2.03% less steel weight than that of frames with rigid connections. Table 4 reveals that the optimum weight of the semi-rigid connection frame with linear analysis using SCS is 1.87% less than that with FCS; again, the SCS offers more flexibility in choosing beams and columns than the FCS. Furthermore, the solution with non-linear analysis of semi-rigid connections resulted in a frame 1.40% heavier than that with rigid connections, due to the magnitude of the loading and the frame configuration. Moreover, Table 4 reveals that in all cases the MAPE of the HS runs lay between 1.78% and 6.21%, which reflects the accuracy of the algorithm.

Figure 4 Ten-storey, one-bay steel frame (one 360-in. bay; nine 144-in. storeys over a 180-in. bottom storey; lateral point loads of 2.5 kips at nine levels and 1.25 kips at one level; distributed beam loads of 0.5 and 0.25 kips/in).
Figure 5 shows a typical convergence history for an HS design of the ten-storey, one-bay steel frame with semi-rigid connections and SCS analysis. As shown in this figure, the optimization process decreased gradually and then fine-tuned, because the values of the pitch adjusting rate (PAR) and neighboring bandwidth (bw) decrease with time to prevent overshoot and oscillation, forcing the algorithm to focus more on intensification in the final iterations. The figure also reveals that the convergence rate was high up to iteration 1000; the optimum value was obtained at iteration 4050 and remained unchanged until the maximum iteration (MAXITER = 5000).

D. Comparison of HS with the genetic algorithm for the ten-storey, one-bay steel frame:

The optimum steel section designations obtained by the harmony search (HS) method are given in Table 5.

Table 4 Minimum steel frame weight of the ten-storey, one-bay steel frame based on HS (weight in lb / iteration of convergence):

| Run | Rigid, FCS, linear | Rigid, FCS, non-linear | Semi-rigid, FCS, linear | Semi-rigid, SCS, linear | Semi-rigid, FCS, non-linear |
| 1  | 48828 / 4122  | 48420 / 4421 | 48744 / 3492  | 47832 / 4050  | 49110 / 4864 |
| 2  | 48972 / 4919  | 48654 / 3727 | 49068 / 4994  | 48216 / 3916  | 49146 / 4274 |
| 3  | 48984* / 4244 | 48750 / 3815 | 49134 / 3627  | 49956 / 3186  | 49158 / 4175 |
| 4  | 48984* / 4244 | 48972 / 4255 | 49248 / 2954  | 50082 / 4250  | 49302 / 3492 |
| 5  | 49086 / 3105  | 49242 / 4009 | 49734 / 4690  | 50484 / 2933  | 49464 / 4772 |
| 6  | 49230 / 4908  | 49596 / 4671 | 50316* / 3323 | 51996* / 2670 | 50748 / 3590 |
| 7  | 49242 / 2468  | 49668 / 4226 | 50316* / 3323 | 51996* / 2670 | 50508 / 4894 |
| 8  | 50106 / 3674  | 49908 / 3479 | 51792 / 2948  | 52206 / 4190  | 51132 / 3489 |
| 9  | 51600 / 2071  | 50166 / 4872 | 51810 / 4908  | 54000 / 3074  | 51168 / 4442 |
| 10 | 52368 / 2982  | 52254 / 4326 | 52428 / 4395  | 54042 / 4155  | 51912 / 3814 |
| Min weight (lb) | 48828 | 48420 | 48744 | 47832 | 49110 |
| MAPE (%)        | 1.78  | 2.26  | 2.95  | 6.21  | 2.06  |
| Time (min)      | 75    | 160   | 90    | 100   | 175   |

Note: the * symbol marks runs with the same weight but different sections.

Figure 5 Optimum design history of the ten-storey, one-bay frame (SCS).

Table 5 Optimum design results of the ten-storey, one-bay steel frame. GA columns (Kameshki and Saka, 2003) use the full catalog (FCS); HS columns (Khalifa, 2011) use FCS except where marked SCS:

| Group | Member type | GA, rigid, linear | GA, rigid, non-linear | GA, semi-rigid, linear | GA, semi-rigid, non-linear | HS, rigid, linear | HS, rigid, non-linear | HS, semi-rigid, FCS, linear | HS, semi-rigid, SCS, linear | HS, semi-rigid, FCS, non-linear |
| 1 | column | W36X135 | W36X182 | W36X160 | W36X182 | W36X150 | W36X150 | W24X162 | W27X146 | W33X152 |
| 2 | column | W33X141 | W36X135 | W36X135 | W36X135 | W30X132 | W33X130 | W24X131 | W21X122 | W30X132 |
| 3 | column | W30X108 | W30X108 | W36X135 | W33X118 | W27X114 | W33X118 | W21X101 | W21X101 | W30X108 |
| 4 | column | W27X102 | W24X68  | W33X118 | W27X102 | W24X84  | W27X84  | W14X82  | W18X76  | W30X90  |
| 5 | column | W14X90  | W21X111 | W30X108 | W14X99  | W18X76  | W24X68  | W14X68  | W14X82  | W27X84  |
| 6 | beam   | W24X68  | W24X68  | W24X68  | W33X118 | W24X76  | W24X76  | W24X68  | W24X68  | W24X68  |
| 7 | beam   | W24X68  | W24X68  | W24X68  | W24X76  | W24X76  | W24X76  | W24X68  | W24X68  | W24X68  |
| 8 | beam   | W27X84  | W24X68  | W24X68  | W21X93  | W24X68  | W24X68  | W27X84  | W27X84  | W24X76  |
| 9 | beam   | W30X108 | W21X44  | W18X35  | W18X50  | W21X48  | W21X44  | W21X62  | W21X62  | W18X65  |
| Total weight (lb)     | | 51498 | 49764 | 51858 | 58950 | 48828 | 48420 | 48744 | 47832 | 49110 |
| % weight decrease     | | 7.11  | 1.31  | 7.76  | 16.70 | 2.03  | 1.87  | 0     | 0     |       |
| Top-storey sway (in.) | | 0.93  | 1.35  | 1.21  | 1.43  | 0.91  | 1.28  | 1.45  | 1.43  | 1.96  |

Note: allowable top-storey sway = 4.92 in.

The optimum weight of frames with semi-rigid connections is generally less than that of frames with rigid connections. When comparing the HS results with the corresponding frames optimized using the genetic algorithm (GA) technique, HS gave frames up to 7.76% lighter than those optimized using GAs; Table 5 shows that across all cases HS yielded frames between 1.31% and 16.7% lighter than those obtained by GAs. The results also show that the lateral displacement at the top storey was 1.43 in. for the linear semi-rigid frame with SCS analysis, which is higher than the values obtained by GAs but within the allowable AISC-LRFD limit (4.92 in.). This can be attributed to the fact that lighter sections sway more than heavier ones.

VII Conclusion

An optimum design algorithm has been developed for semi-rigid steel frames based on the harmony search method, a stochastic random search approach that simulates the musical process of searching for a perfect state of harmony. The benchmark design examples presented in this study revealed that designs with semi-rigid connections resulted in lighter frames than those with rigid connections. In addition, it was observed that non-linear semi-rigid frames are lighter in some cases and heavier in others than linear semi-rigid frames, depending on the magnitude of the loading and the frame configuration. The results showed that the harmony search method is an efficient and robust technique, reaching frame sections 1.31%-31.87% lighter than those of GAs. Moreover, optimization using the selected catalog section (SCS) resulted in frame sections about 1.87% lighter than those obtained using the full catalog section (FCS).
It is further noticed that the mean absolute percentage error (MAPE) lay between 1.78% and 6.21%. Furthermore, the maximum sway obtained at the optimum design increased smoothly for the semi-rigid frames in comparison with the rigid frames; this can be attributed to the fact that lighter sections sway more than heavier ones, yet the sway remains within the allowable AISC-LRFD limit.

References

[1] AISC, Manual of Steel Construction, Load and Resistance Factor Design. Chicago: American Institute of Steel Construction, 2010.
[2] M. Pincus, "A Monte Carlo method for the approximate solution of certain types of constrained optimization problems," Operations Research, vol. 18, pp. 1225-1228, 1970.
[3] S. S. Rao, Engineering Optimization: Theory and Practice, 4th ed. Hoboken, NJ, USA: John Wiley & Sons, 2009.
[4] E.-G. Talbi, Metaheuristics: From Design to Implementation. Hoboken, NJ, USA: John Wiley & Sons, 2009.
[5] W. M. Jenkins, "Towards structural optimization via the genetic algorithm," Computers and Structures, vol. 40, pp. 1321-1327, 1991.
[6] K. Tang and X. Yao, Information Sciences, vol. 178, special issue on "Nature inspired problem-solving," pp. 2983-2984, 2008.
[7] X. S. Yang, Nature-Inspired Metaheuristic Algorithms. Frome, UK: Luniver Press, 2008.
[8] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press, 1975.
[9] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[10] J. R. Koza, "Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems," 1990.
[11] L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through Simulated Evolution. Chichester: John Wiley, 1966.
[12] H.-P. Schwefel, J. Zurada, R. Marks, and C. Robinson, "On the evolution of evolutionary computation," in Computational Intelligence: Imitating Life, pp. 116-124, 1994.
[13] D. B. Fogel, "A comparison of evolutionary programming and genetic algorithms on selected constrained optimization problems," Simulation, vol. 64, no. 6, pp. 399-406, 1995.
[14] M. Dorigo and T. Stützle, Ant Colony Optimization. Cambridge, MA: MIT Press, 2004.
[15] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, 1999.
[16] M. Dorigo and C. Blum, "Ant colony optimization theory: a survey," Theoretical Computer Science, vol. 344, pp. 243-278, 2005.
[17] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[18] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence. Morgan Kaufmann Publishers, 2001.
[19] Y. Shi, H. Liu, L. Gao, and G. Zhang, "Cellular particle swarm optimization," Information Sciences, vol. 181, pp. 4460-4493, 2011.
[20] Y. Wang et al., "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, pp. 4515-4538, 2011.
[21] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, pp. 687-697, 2008.
[22] E. S. Kameshki and M. P. Saka, "Genetic algorithm based optimum design of nonlinear planar steel frames with various semi-rigid connections," Journal of Constructional Steel Research, vol. 59, pp. 109-134, 2003.
[23] C. V. Camp, J. B. Barron, and P. S. Scott, "Design of steel frames using ant colony optimization," Journal of Structural Engineering, ASCE, vol. 131, pp. 369-379, 2005.
[24] F. Erdal, E. Dogan, and M. P. Saka, "Optimum design of cellular beams using harmony search and particle swarm optimizers," Journal of Constructional Steel Research, vol. 67, pp. 237-247, 2011.
[25] Z. W. Geem and J. H. Kim, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, pp. 60-68, 2001.
[26] J. T. Richardson, M. R. Palmer, G. Liepins, and M. Hilliard, "Some guidelines for genetic algorithms with penalty
functions," Morgan Kaufmann, San Mateo, CA, pp. 191-197, 1989.
[27] A. Homaifar, C. X. Qi, and S. H. Lai, "Constrained optimization via genetic algorithms," Simulation, pp. 242-254, April 1994.
[28] C. V. Camp, S. Pezeshk, and G. Cao, "Design of 3-D structures using a genetic algorithm," Structural Optimization, Chicago, April 1996.
[29] P. Dumonteil, "Simple equations for effective length factors," Engineering Journal, AISC, vol. 29, pp. 111-115, 1992.
[30] M. J. Frye and G. A. Morris, "Analysis of flexibly connected steel frames," Canadian Journal of Civil Engineering, vol. 2, no. 3, pp. 280-291, 1975.
[31] W. F. Chen and E. M. Lui, Stability Design of Steel Frames. Boca Raton, FL: CRC Press, 1991.
[32] K. S. Lee and Z. W. Geem, "A new structural optimization method based on the harmony search algorithm," Computers and Structures, vol. 82, pp. 781-798, 2004.
[33] K. S. Lee and Z. W. Geem, "A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
[34] Z. W. Geem, "Optimal cost design of water distribution networks using harmony search," Engineering Optimization, vol. 38, pp. 259-280, 2006.
[35] Z. W. Geem, Harmony Search Algorithms for Structural Design Optimization. Berlin, Heidelberg: Springer, 2009.
[36] A. J. Khalifa, "Design optimization of semi-rigid steel framed structures to AISC-LRFD using harmony search algorithm," M.Sc. thesis, Civil Engineering Department, Islamic University of Gaza, Gaza, Palestine, 2011.

Dr. Mohammed Arafa is an associate professor in structural engineering at the Faculty of Engineering, Islamic University of Gaza. Dr. Arafa has wide experience in structural analysis, the finite element method, computational mechanics and optimization of structures.

Ashraf Khalifa has an M.Sc. degree in civil engineering from the Islamic University of Gaza, Palestine.

Dr.
Mamoun Alqedra is an associate professor in civil engineering at the Faculty of Engineering, Islamic University of Gaza. Dr. Alqedra has wide experience in the design and analysis of steel structures, geotechnical engineering, computational mechanics and optimization of structures.

Journal of Engineering Research and Technology, Volume 2, Issue 2, June 2015, p. 152

Investigating the efficiency of WordNet as background knowledge for document clustering

Iyad Alagha 1, Rami Nafee 2
1 Faculty of Information Technology, The Islamic University of Gaza, ialagha@hotmail.com
2 Faculty of Information Technology, The Islamic University of Gaza, raminafe2002@hotmail.com

Abstract—Traditional techniques of document clustering do not consider the semantic relationships between words when assigning documents to clusters. For instance, if two documents discuss the same topic using different words, these techniques may assign them to different clusters. Many efforts have approached this problem by enriching the document representation with background knowledge from WordNet. These efforts, however, have often produced conflicting results: while some studies claimed that WordNet had the potential to improve clustering performance through its ability to capture and estimate similarities between words, others claimed that WordNet provided little or no enhancement to the obtained clusters. This work aims to experimentally resolve this contradiction, explain why WordNet can be useful in some cases but not in others, and identify the factors that influence the use of WordNet for document clustering. We conducted a set of experiments in which WordNet was used for document clustering under various settings, including different datasets, different ways of incorporating semantics into the document representation, and different similarity measures.
Results showed that different experimental settings may yield different clusters; for example, the influence of WordNet's semantic features varies according to the dataset being used. Results also revealed that WordNet-based similarity measures do not seem to improve clustering, and that no single measure guarantees the best clustering results.

Index Terms—document clustering, WordNet, similarity measure, ontology

I Introduction

Document clustering is a technique that aims at grouping document collections into meaningful groups. In traditional techniques of document clustering, documents are represented as bags of words and are assigned to clusters according to the similarity scores obtained from a document similarity measure. These techniques ignore the semantic relationships between the document words, and thus cannot accurately group documents based on similarity of meaning. For example, a document that only contains the word "plane" and another that only contains the word "jet" are assigned to different clusters because the two words are considered different. Existing research has tried to overcome this limitation by proposing clustering techniques based on meaning similarity. Similarity in meaning can be measured by exploiting background knowledge in the form of domain ontologies or lexical resources such as WordNet. Similarity scores obtained from WordNet can be used to enhance the document representation by giving more weight to words that are semantically related [1]. With the enhanced document representation, the clustering algorithm can better assign documents to clusters based on their semantic similarities to each other. Several efforts have investigated different approaches to incorporating the semantic features of ontologies to improve document clustering, and have shown that information semantics have the potential to improve the quality of the obtained clusters [2-5].
WordNet [6] is one of the most popular semantic networks for determining semantic relations between words. WordNet has an ontology-like structure: words are represented as having several meanings (each meaning forming a synset, the atomic structure of WordNet), and relations between words (hyponymy, hypernymy, antonymy, and others) are represented as links in a graph. Many similarity measures use the relations defined in WordNet to determine the semantic relatedness between words. Due to its wide coverage compared with restricted domain ontologies, many efforts have used it as background knowledge for document clustering [7-9]. Despite the significant amount of research on WordNet-based clustering, existing approaches have often produced conflicting results: while some showed that WordNet could enhance the document representation with semantic features, yielding better clustering [9-11], others claimed that WordNet resulted in little or no improvement, or might even degrade the clustering results due to the introduced noise [7, 12, 13]. Given this contradiction, the objective of this research is to resolve the issue by seeking answers to the following questions:

Iyad Alagha and Rami Nafee / Investigating the Efficiency of WordNet as Background Knowledge for Document Clustering (2015) 153

- What potential factors could make WordNet useful for document clustering in particular situations but not in others?
- Do different experimental settings, i.e. different datasets, document representations and similarity measures, affect the potential of WordNet to improve clustering?
- What is the best similarity measure to use with WordNet-based clustering?

II. Related Work

The idea of incorporating semantic features from WordNet has been widely explored to improve document clustering techniques.
However, there were major differences among the findings of these efforts: while some affirmed the value of WordNet in improving document clustering, others indicated the opposite. For example, Hotho et al. [1] discussed different strategies for representing text documents that take background knowledge from WordNet into account; their evaluations indicated improved clustering results. Gharib, Fouad, and Aref [14] matched stemmed keywords to terms in WordNet for word sense disambiguation. Their approach outperformed traditional clustering techniques; however, it tended to overgeneralize the affected keywords [15]. Fodeh et al. [13] addressed the effect of incorporating polysemous and synonymous nouns into document clustering, and showed that they play an important role in clustering. Chen et al. [16] proposed a document clustering approach that combines fuzzy association rule mining with WordNet for grouping documents; they also proposed a technique to filter out noise when adding hypernyms to documents. Wei et al. [17] presented a WordNet-based semantic similarity measure for word sense disambiguation, in which lexical chains are employed to extract core semantic features that express the topics of documents. In contrast, some studies indicated that the use of WordNet as background knowledge does not necessarily lead to better clusters, and may even produce noise that degrades the clustering performance. For example, Dave et al. [18] used synsets as features for document representation and subsequent clustering, and reported that WordNet synsets actually decreased or added no value to clustering performance. Amine et al. [19] found that mapping document words to concepts in WordNet might increase ambiguity and induce loss of information. Passos and
Wainer [20] showed that many word similarity measures derived from WordNet are worse than the baseline for the purposes of text clustering, and indicated that WordNet does not provide good word similarity data. However, they worked on a single dataset and did not examine other approaches to incorporating WordNet's features into the document representation. Sedding and Kazakov [7] showed that synonyms and hypernyms disambiguated only by part-of-speech tags are not successful in improving clustering effectiveness. This could be attributed to the noise introduced by the incorrect senses retrieved from WordNet. The above discussion reveals inconsistent results regarding the ability of WordNet to serve as background knowledge for document clustering, and demands further investigation into the factors and circumstances causing this inconsistency. Approaches that exploit WordNet or any other ontology for clustering often rely on semantic similarity measures to estimate the similarity between document words. These measures can be classified into four groups: path-length-based measures, information-content-based measures, feature-based measures, and hybrid measures. An exhaustive overview of these approaches can be found in [21]. A previous study [22] compared the use of different similarity measures with medical ontologies for document clustering, and indicated that no certain type of similarity measure significantly outperforms the others. Our study also compares the use of similarity measures for clustering, but with WordNet rather than domain-specific ontologies. We also examine the effect of WordNet's semantics with different datasets and document representations. Amine et al. [19] compared three different clustering algorithms which were all based on WordNet synsets as terms for the representation of documents.
While their study aimed to determine the best clustering algorithm to use with WordNet, this study aims to explain the opposing findings regarding the efficiency of WordNet for document clustering.

III. Using WordNet to Enhance the Document Representation

Clustering of a document collection typically starts by representing each document as a bag of words. The simple bag-of-words representation may be enhanced by weighting the terms according to their information content, for example with tf-idf. Subsequently, a similarity measure, such as cosine similarity, is used to assign a score to each pair of documents, and similar documents are accordingly assigned to the same cluster. Two approaches are commonly used to enhance the document representation with WordNet, explained in what follows:

A. Enhancing the document representation by replacing synonyms

One limitation of the traditional bag-of-words representation is that words are weighted separately without considering the similarity between them; in particular, synonymous terms are weighted independently of each other. This leads to information loss, as the importance of a single concept is distributed among different components of the document representation. Existing approaches [1, 13] addressed this issue by referring to lexical databases such as WordNet to identify synonyms. Subsequently, the document's bag of words is modified by replacing all synonyms with a single descriptor term. Afterwards, the document is represented using the tf-idf scheme, so that the replacing term has a cumulative weight equal to the sum of the tf-idf weights of the replaced synonyms. Finally, a clustering algorithm, such as k-means, is applied.
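The synonym-replacement step can be sketched as follows. This is a minimal sketch: the synonym map and tf-idf weights are invented for illustration, whereas the paper derives them from WordNet and from the document collection.

```python
def replace_synonyms(weights: dict, syn_map: dict) -> dict:
    """Collapse synonymous terms into a single descriptor whose weight is
    the sum of the tf-idf weights of the replaced synonyms."""
    merged = {}
    for term, w in weights.items():
        descriptor = syn_map.get(term, term)  # terms with no synonyms stay as-is
        merged[descriptor] = merged.get(descriptor, 0.0) + w
    return merged

# Hypothetical tf-idf weights and a hand-made synonym map (illustrative only):
tfidf = {"plane": 0.5, "jet": 0.25, "airport": 0.2}
syn_map = {"plane": "aircraft", "jet": "aircraft"}
print(replace_synonyms(tfidf, syn_map))  # {'aircraft': 0.75, 'airport': 0.2}
```

The descriptor "aircraft" accumulates the weight of both synonyms, so a single concept no longer has its importance split across components of the representation.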
B. Enhancing the document representation by using similarity measures

Having documents with different term sets does not necessarily mean that the documents are unrelated: document terms can be semantically related even though they are syntactically different. For example, the terms "cow", "meat", "milk" and "farm" are all connected by relations that cannot be captured without background knowledge. As discussed earlier, the document representation can be enhanced by identifying and replacing synonyms of the same term. However, this approach considers only synonyms, while terms that are not synonyms but are semantically related are ignored: words such as "cow" and "milk" are related in meaning, yet this relation will not be captured by simply identifying and replacing synonyms. To overcome this limitation, the document must be represented in a way that reflects the relatedness in meaning between its terms. Similarity measures are commonly used to quantify the semantic relatedness between document words, and the resulting relatedness scores are incorporated into the document representation. Similarity measures exploit knowledge retrieved from a semantic network (e.g., WordNet) to estimate the similarity between term pairs according to the topological structure of the network. Similarity scores are then incorporated into the document's tf-idf representation so that terms that are related gain more weight. Reweighting terms according to their semantic relatedness may help discount the effects of class-independent general terms and amplify the effects of class-specific "core" terms [22]. This can eventually help to cluster documents based on their meanings. Employing similarity measures on WordNet is an idea that has been explored by several efforts [3, 17, 23] for the purpose of improving document clustering.
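This reweighting idea can be sketched as below, following the augmentation equation given in Section IV (each term's weight is increased by the weights of related terms scaled by their pairwise similarity). The weights and similarity scores here are invented for illustration; in the paper they come from tf-idf and from WordNet-based measures.

```python
def reweight(weights: dict, sim: dict) -> dict:
    """Augment each tf-idf weight by related terms' weights scaled by
    their pairwise semantic similarity (scores in [0, 1])."""
    def s(a, b):
        # similarity is symmetric; unknown pairs default to 0 (unrelated)
        return sim.get((a, b), sim.get((b, a), 0.0))
    return {
        ti: wi + sum(wj * s(ti, tj) for tj, wj in weights.items() if tj != ti)
        for ti, wi in weights.items()
    }

# Hypothetical tf-idf weights and one WordNet-style similarity score:
w = {"cow": 0.5, "milk": 0.25, "stock": 0.25}
s = {("cow", "milk"): 0.75}   # "stock" is unrelated, so its weight is unchanged
print(reweight(w, s))  # {'cow': 0.6875, 'milk': 0.625, 'stock': 0.25}
```

Related terms ("cow", "milk") reinforce each other's weight, while the unrelated term keeps its original weight, matching the behavior described in the text.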
IV. Experimental Study

Having presented the approaches for enhancing the document representation with knowledge from WordNet, the following subsections report on the experimental study we conducted with these objectives in mind: 1) compare the approaches previously explained and examine their influence on document clustering by testing with different datasets; 2) examine the use of different ontology-based similarity measures in order to identify the best measure(s) to use with WordNet; 3) explain, in light of the results obtained from 1 and 2, the contradiction between existing works regarding the value of WordNet's semantics for document clustering.

A. Datasets

Two datasets were used for the study: Reuters-21578 and OHSUMED. Details of each dataset are given below, together with the rationale for using these particular datasets.

Reuters-21578 [24]: the documents in the Reuters-21578 collection appeared on the Reuters newswire in 1987. The documents were assembled and indexed with categories by personnel from Reuters Ltd [24]. The Reuters-21578 dataset has been widely used for evaluating document clustering algorithms. Its domain is not specific, so it can be understood by a non-expert [7].

OHSUMED [25]: the OHSUMED test collection is a set of 348,566 references from MEDLINE, the online medical information database, consisting of titles and/or abstracts from 270 medical journals over a five-year period (1987-1991). The available fields are title, abstract, MeSH indexing terms, author, source, and publication type. The National Library of Medicine has agreed to make the MEDLINE references in the test database available for experimentation [26].

The above datasets were chosen because they have different characteristics that might lead to different clustering performance: Reuters-21578 is a heterogeneous dataset with no specific domain, covering a wide variety of dissimilar topics from the newswire.
In contrast, OHSUMED is a domain-specific dataset strictly covering the domain of medicine. The intention was to explore how datasets of different homogeneity could yield different clustering performance.

B. Experiments

We conducted three experiments, each using a different approach to representing documents. The k-means clustering algorithm was applied in all three experiments. The experiments were as follows:

Traditional clustering without background knowledge: this represented the baseline case. Documents were preprocessed by applying tokenization, stemming and stop-word removal. Documents were then represented in tf-idf prior to applying k-means for clustering. Note that the conceptual relations between document terms were ignored, and terms were weighted only according to their frequency of occurrence in the document collection.

Enhancing the document representation by identifying and replacing synonyms: before finding synonyms in documents, the following preprocessing steps were applied. First, all documents were broken down into sentences, which underwent part-of-speech tagging. Part-of-speech tags were essential to correctly identify synonyms, which should share the same POS tag. After tagging the document words, other preprocessing steps including tokenization, stemming and stop-word removal were applied. The next step was to search the document collection for terms that are synonyms, with the help of WordNet. Synonyms of a particular concept were replaced by a unique term in the document's bag of words. The modified bag of words of each document was then represented in tf-idf.

Enhancing the document representation by using ontology-based similarity measures: first, preprocessing steps consisting of tokenization, stop-word removal and stemming were applied to the document collection.
Documents were then represented in the tf-idf scheme. Ontology-based similarity measures were used to measure the WordNet-based similarity between each pair of words in the document collection. Similarity scores were then used to augment the tf-idf weights so that terms gained more weight according to their similarity to each other. This process is formally represented as follows: let d = {w_1, w_2, w_3, ..., w_n} be the document representation, where w_i is the weight of term t_i in document d, computed using the tf-idf scheme. The similarity between each pair of terms in the document was calculated using each similarity measure shown in Table 1. Afterwards, the tf-idf weights were reweighted using the following equation [27]:

w'_i = w_i + Σ_{j≠i} w_j · sim(i, j)

where w'_i stands for the augmented tf-idf weight of term t_i, w_j is the tf-idf weight of term t_j of the same document, and sim(i, j) is the semantic similarity score between terms t_i and t_j, ranging from 0 to 1, where 1 represents the highest similarity. This equation assigns more weight to terms that are semantically related. Weights of terms that are not related to any other terms, or that are not included in WordNet, remain unchanged. After augmenting the tf-idf document representation with similarity scores, the k-means algorithm was applied. Since one of our objectives was to assess different similarity measures, the above process was repeated for every similarity measure shown in Table 1. These measures have been widely used for semantically-enhanced text clustering; short descriptions are given in the table.

Table 1: Similarity measures used in the study.

LCH - Leacock and Chodorow [28]: relies on the length of the shortest path between two terms. It is limited to is-a links, and the path length is scaled by the overall depth of the taxonomy.
WUP - Wu and Palmer [29]: calculates similarity by considering the depths of the two terms in WordNet, along with the depth of their least common subsumer.
JCN - Jiang and Conrath [30]: uses the notion of information content, in the form of the conditional probability of encountering an instance of a child synset given an instance of a parent synset.
LIN - Lin [31]: a slight modification of Jiang and Conrath: 2 * IC(LCS) / (IC(synset1) + IC(synset2)), where IC(x) is the information content of x.
RES - Resnik [32]: defines the similarity between two terms as the information content of their most specific common subsumer.
LESK - Banerjee and Pedersen [33]: the relatedness of two terms is proportional to the extent of overlap of their dictionary definitions.
HSO - Hirst and St-Onge [34]: two terms are semantically related if their WordNet synsets are connected by a path that is not too long and that does not change direction too often.

V. Results and Discussion

The clustering performance was evaluated using F-measure [35] and purity [36]. Table 2 summarizes the results, where rows indicate the three experiments and columns indicate the two datasets. When using similarity measures with WordNet, the clustering process was repeated several times while varying the similarity measure, and the best result was considered for comparison. The ID of the similarity measure giving the best result is shown in brackets alongside the result (IDs are listed in Table 1).

Table 2: Clustering results of the three experiments.

Experiment                               Reuters-21578            OHSUMED
                                         Purity      F-measure    Purity      F-measure
Without background knowledge             0.57        0.64         0.36        0.47
With replacing synonyms                  0.64        0.77         0.49        0.65
With WordNet-based similarity measures   0.59 (LCH)  0.70 (LCH)   0.43 (RES)  0.60 (RES)

Table 3 lists the different similarity measures used for the third experiment, i.e.
clustering with WordNet-based similarity measures, and the clustering performance for each measure. Results are discussed in the following subsections, and related efforts are revisited, where appropriate, in light of our results.

Table 3: Clustering results for each similarity measure.

Similarity measure   Reuters-21578        OHSUMED
                     Purity  F-measure    Purity  F-measure
LCH                  0.59    0.70         0.39    0.49
WUP                  0.56    0.64         0.41    0.55
JCN                  0.40    0.48         0.30    0.39
LIN                  0.48    0.55         0.41    0.55
RES                  0.48    0.61         0.43    0.65
LESK                 0.54    0.67         0.42    0.57
HSO                  0.46    0.58         0.42    0.62

A. Comparing the document representation techniques

In this subsection, we compare results across the three experiments, i.e. clustering without background knowledge (baseline), clustering with replacing synonyms, and clustering with similarity measures. In the case of the Reuters dataset, clustering with replacing synonyms outperformed the other approaches (F-measure = 0.77, purity = 0.64), followed by clustering with similarity measures (F-measure = 0.70, purity = 0.59). When using the OHSUMED dataset, the best results were also achieved by replacing synonyms (F-measure = 0.65, purity = 0.49), followed by clustering with similarity measures (F-measure = 0.60, purity = 0.43). In general, this result shows the potential of WordNet to improve the clustering results, either by replacing synonyms or by using similarity scores, compared with clustering without background knowledge. This result conforms to other studies which indicated the value of WordNet semantics for document clustering [1, 7, 11, 17]. However, the use of similarity measures with WordNet unexpectedly produced results worse than those produced by replacing synonyms, though slightly better than the baseline case, i.e. clustering without background knowledge. Note that Table 2 shows the top result obtained from all seven similarity measures.
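The purity figures reported in Tables 2 and 3 can be computed from a clustering result as follows. This is a minimal sketch with invented toy labels, not the actual datasets:

```python
from collections import Counter

def purity(clusters, gold_labels):
    """Purity: for each cluster take the count of its dominant gold label,
    sum over clusters, and divide by the total number of documents."""
    total = sum(len(c) for c in clusters)
    dominant = sum(
        Counter(gold_labels[doc] for doc in c).most_common(1)[0][1]
        for c in clusters
    )
    return dominant / total

# Toy example: 6 documents with gold topic labels and a 2-cluster result.
gold = {0: "trade", 1: "trade", 2: "trade", 3: "crude", 4: "crude", 5: "grain"}
clusters = [[0, 1, 3], [2, 4, 5]]
print(purity(clusters, gold))  # (2 + 1) / 6 = 0.5
```

A purity of 1.0 would mean every cluster contains documents of a single gold category; the toy clustering above is only half pure.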
This result concurs with studies which indicated that the use of similarity measures with WordNet had little impact on text clustering and may produce worse results [20, 23]. It suggests that WordNet-based similarity measures do not seem to improve the clustering results. We think this can be attributed to the structure of the WordNet taxonomy, which is designed to represent specific relations (e.g. hyponymy, hypernymy) rather than to capture general similarity between words. For example, when measuring the similarity between the words "camel" and "desert", or between the verb "sit" and the noun "chair", the similarity scores were close to 0.

B. Comparing the influence of datasets

Comparing results across the two datasets, we can see that the improvement resulting from the semantic-based approaches (synonyms and similarity measures) was more obvious for the OHSUMED dataset than for the Reuters dataset. We think this difference can be explained by the nature of the dataset in terms of the disparity of its content: the Reuters dataset is heterogeneous in the sense that it covers news from unrelated domains, which makes it difficult to identify semantic relations between the document words. It was noticed experimentally that the scores obtained by applying the similarity measures to the Reuters dataset were often low. Considering that most similarity measures rely, mainly or partially, on taxonomical distances within WordNet, the similarity scores decrease as the differences between terms increase. In contrast, OHSUMED is a domain-specific dataset whose document classes all belong to the medical domain. This makes it easier to identify terms that belong to a specific domain and to measure similarities between them, which explains the better results obtained from the OHSUMED dataset as compared to the Reuters dataset.
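The behavior described above, where taxonomically distant pairs such as "camel" and "desert" score near zero, can be reproduced with a path-based measure like Wu-Palmer on a toy is-a hierarchy. The taxonomy below is hand-built for illustration and vastly smaller than WordNet:

```python
IS_A = {  # child -> parent; a tiny hand-built is-a taxonomy (illustrative only)
    "animal": "entity", "mammal": "animal", "cow": "mammal", "camel": "mammal",
    "location": "entity", "desert": "location",
}

def path_to_root(node):
    """All nodes from `node` up to the taxonomy root, inclusive."""
    path = [node]
    while node in IS_A:
        node = IS_A[node]
        path.append(node)
    return path

def wup(a, b):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), set(path_to_root(b))
    lcs = next(n for n in pa if n in pb)   # deepest common ancestor of a and b
    depth = lambda n: len(path_to_root(n))  # node count from n up to the root
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wup("cow", "camel"))               # 0.75: neighbors under "mammal"
print(round(wup("camel", "desert"), 2))  # 0.29: related in the world, distant via is-a links
```

Because the only link between "camel" and "desert" runs through the root, their score stays low no matter how strongly the two words co-occur in real text, which is exactly the weakness discussed above.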
The above discussion reveals that different datasets can yield different results: the more homogeneous and domain-specific the dataset, the easier it becomes to capture similarities between its terms, and hence the more influence WordNet has on the clustering results. We should also bear in mind that WordNet is a general-purpose lexical database of English terms and does not provide thorough coverage of every domain of knowledge. Although its use improved the clustering performance in our experiment, WordNet is not meant to be used with domain-specific applications; it is generally recommended to use domain-specific ontologies for domain-specific datasets.

C. Comparing the WordNet-based similarity measures

Comparing the use of different similarity measures, results vary as shown in Table 3: for the Reuters dataset, the LCH measure achieved the best results, followed by the path and WUP measures. For OHSUMED, the RES measure gave the best results, followed by HSO and LESK. However, the improvement in the results was not significant (t-test, p > 0.05). In addition, the clustering performance with some similarity measures was even lower than that of the baseline case with no background knowledge (e.g. the JCN measure in Table 3). These results indicate that no single measure ensures the best clustering results. They also support our conclusion about the inadequacy of WordNet for use with ontology-based similarity measures. However, this result does not generalize to other types of ontologies, as our study focused strictly on WordNet.

VI. Conclusions and Recommendations

The differing outcomes of existing approaches regarding the influence of WordNet on document clustering motivated this work. Multiple document clustering experiments were conducted with multiple datasets, document representations and similarity measures.
In summary, our study found that the characteristics of the dataset being clustered, in terms of the disparity of its topics, may reduce the ability to capture semantic relations between terms from the WordNet taxonomy. Results also indicated that augmenting the document representation by replacing synonyms may achieve better results than using similarity measures or the baseline case, i.e. clustering without background knowledge. Based on these findings, we draw some recommendations to be considered when using WordNet as background knowledge for text clustering. First, experimenters should consider the nature of the dataset at hand and the diversity of its topics before deciding to use WordNet for measuring similarities. Second, the WordNet structure does not seem to support the application of similarity measures; alternatively, WordNet can be better exploited by capturing specific types of relations, such as is-a (hyponymy and hypernymy), and then using them to enhance the document representation. For example, capturing and replacing synonyms in the document collection outperformed the other approaches in our experiments.

References

[1] A. Hotho, S. Staab, and G. Stumme (2003). Ontologies improve text document clustering. In Third IEEE International Conference on Data Mining (ICDM 2003), IEEE, pp. 541-544.
[2] A. Charola and S. Machchhar (2013). Comparative study on ontology based text documents clustering techniques. Data Mining and Knowledge Engineering, 5(12), 426.
[3] H. H. Tar and T. T. S. Nyunt (2011). Ontology-based concept weighting for text documents. World Academy of Science, Engineering and Technology, 81, 249-253.
[4] G. Bharathi and D. Venkatesan (2012). Study of ontology or thesaurus based document clustering and information retrieval. Journal of Engineering and Applied Sciences, 7(4), 342-347.
[5] Q. Dang, J. Zhang, Y. Lu, and K. Zhang (2013). WordNet-based suffix tree clustering algorithm. In 2013 International Conference on Information Science and Computer Applications (ISCA 2013), Atlantis Press.
[6] G. Miller and C. Fellbaum (1998). WordNet: An Electronic Lexical Database. MIT Press, Cambridge.
[7] J. Sedding and D. Kazakov (2004). WordNet-based text document clustering. In Proceedings of the 3rd Workshop on Robust Methods in Analysis of Natural Language Data, Association for Computational Linguistics, pp. 104-113.
[8] H.-T. Zheng, B.-Y. Kang, and H.-G. Kim (2009). Exploiting noun phrases and semantic relationships for text document clustering. Information Sciences, 179(13), 2249-2262.
[9] D. R. Recupero (2007). A new unsupervised method for document clustering by using WordNet lexical and conceptual relations. Information Retrieval, 10(6), 563-579.
[10] A. Hotho, S. Staab, and G. Stumme (2003). WordNet improves text document clustering. In Proceedings of the Semantic Web Workshop at the 26th Annual International ACM SIGIR Conference, Toronto, Canada.
[11] Y. Wang and J. Hodges (2006). Document clustering with semantic analysis. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), IEEE, pp. 54c.
[12] L. Jing, L. Zhou, M. K. Ng, and J. Z. Huang (2006). Ontology-based distance measure for text clustering. In Proceedings of the Text Mining Workshop, SIAM International Conference on Data Mining.
[13] S. Fodeh, B. Punch, and P.-N. Tan (2011). On ontology-driven document clustering using core semantic features. Knowledge and Information Systems, 28(2), 395-421.
[14] T. F. Gharib, M. M. Fouad, and M. M. Aref (2010). Fuzzy document clustering approach using WordNet lexical categories. In Advanced Techniques in Computing Sciences and Software Engineering, Springer, pp. 181-186.
[15] C. Bouras and V. Tsogkas (2012). A clustering technique for news articles using WordNet. Knowledge-Based Systems, 36, 115-128.
[16] C.-L. Chen, F. S. Tseng, and T. Liang (2011).
An integration of fuzzy association rules and WordNet for document clustering. Knowledge and Information Systems, 28(3), 687-708.
[17] T. Wei, Y. Lu, H. Chang, Q. Zhou, and X. Bao (2015). A semantic approach for text clustering using WordNet and lexical chains. Expert Systems with Applications, 42(4), 2264-2275.
[18] K. Dave, S. Lawrence, and D. M. Pennock (2003). Mining the peanut gallery: opinion extraction and semantic classification of product reviews. In Proceedings of the 12th International Conference on World Wide Web, ACM, pp. 519-528.
[19] A. Amine, Z. Elberrichi, and M. Simonet (2010). Evaluation of text clustering methods using WordNet. International Arab Journal of Information Technology, 7(4), 349-357.
[20] A. Passos and J. Wainer (2009). WordNet-based metrics do not seem to help document clustering. In Proceedings of the II Workshop on Web and Text Intelligence, São Carlos, Brazil.
[21] L. Meng, R. Huang, and J. Gu (2013). A review of semantic similarity measures in WordNet. International Journal of Hybrid Information Technology, 6(1), 1-12.
[22] X. Zhang, L. Jing, X. Hu, M. Ng, and X. Zhou (2007). A comparative study of ontology based term similarity measures on PubMed document clustering. In Advances in Databases: Concepts, Systems and Applications, Springer, pp. 115-126.
[23] L. Jing, L. Zhou, M. K. Ng, and J. Z. Huang (2006). Ontology-based distance measure for text clustering. In Proceedings of the Text Mining Workshop, SIAM International Conference on Data Mining, pp. 537-541.
[24] D. D. Lewis (1997). Reuters-21578 text categorization test collection, distribution 1.0. http://www.research.att.com/~lewis/reuters21578.html
[25] W. Hersh, C. Buckley, T. Leone, and D. Hickam (1994). OHSUMED: an interactive retrieval evaluation and new large test collection for research. In SIGIR '94, Springer, pp. 192-201.
[26] OHSUMED dataset [online], 07/15/2005 [cited 1/12/2014]; available from: http://davis.wpi.edu/xmdv/datasets/ohsumed.html
[27] G. Varelas, E. Voutsakis, P. Raftopoulou, E. G. Petrakis, and E. E. Milios (2005). Semantic similarity methods in WordNet and their application to information retrieval on the web. In Proceedings of the 7th Annual ACM International Workshop on Web Information and Data Management, ACM, New York, pp. 10-16.
[28] C. Leacock and M. Chodorow (1998). Combining local context and WordNet similarity for word sense identification. In WordNet: An Electronic Lexical Database, 49(2), 265-283.
[29] Z. Wu and M. Palmer (1994). Verb semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pp. 133-138.
[30] J. J. Jiang and D. W. Conrath (1997). Semantic similarity based on corpus statistics and lexical taxonomy. arXiv preprint cmp-lg/9709008.
[31] D. Lin (1998). An information-theoretic definition of similarity. In ICML, pp. 296-304.
[32] P. Resnik (1995). Using information content to evaluate semantic similarity in a taxonomy. arXiv preprint cmp-lg/9511007.
[33] S. Banerjee and T. Pedersen (2002). An adapted Lesk algorithm for word sense disambiguation using WordNet. In Computational Linguistics and Intelligent Text Processing, Springer, pp. 136-145.
[34] G. Hirst and D. St-Onge (1998). Lexical chains as representations of context for the detection and correction of malapropisms. In WordNet: An Electronic Lexical Database, 305-332.
[35] B. Larsen and C. Aone (1999). Fast and effective text mining using linear-time document clustering. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, pp. 16-22.
[36] Y. Zhao and G. Karypis (2001). Criterion functions for document clustering: experiments and analysis. Machine Learning.

Iyad M.
Alagha received his MSc and PhD in computer science from the University of Durham, UK. He worked as a research associate in the Centre for Technology Enhanced Learning at the University of Durham, investigating the use of multi-touch devices for learning and teaching. He is currently an assistant professor at the Faculty of Information Technology at the Islamic University of Gaza, Palestine. His research interests are semantic web technology, adaptive hypermedia, human-computer interaction and technology-enhanced learning.

Rami H. Nafee received his BSc from Al-Azhar University-Gaza and his MSc degree in information technology from the Islamic University of Gaza. He works as a web programmer in the Information Technology Unit at Al-Azhar University-Gaza. He is also a lecturer at the Faculty of Intermediate Studies at Al-Azhar University. His research interests include data mining and the semantic web.

Journal of Engineering Research and Technology, Volume 3, Issue 1, March 2016, p. 1

Temperature Dependency of Ytterbium-Doped Fiber Laser (YDFL) Based on Fabry-Perot Design Operating at 915 nm and 970 nm High Power Pumping Configuration

Fady I. El-Nahal 1, Abdel Hakeim M. Husein 2
1 Electrical Engineering Department, Islamic University of Gaza, Gaza, Palestine, fnahal@iugaza.edu.ps
2 Physics Department, Al Aqsa University, Gaza, Palestine.

Abstract—The variation of the output power of ytterbium-doped fiber lasers (YDFLs) with temperature has been evaluated. Temperature-dependent rate equations of an ytterbium fiber laser based on a Fabry-Perot design are discussed. The results demonstrate that the output power decreases as the temperature increases, and that the effect of temperature on the output performance grows with increasing pump power; the effect of temperature can be ignored only at lower pump powers. The theoretical results agree with published experimental results.
Index Terms - Ytterbium-doped fiber laser, temperature-dependent rate equation.

1. Introduction

Laser action in ytterbium-ion (Yb3+) doped silicate glass was first reported in 1962 [1]. Since then, ytterbium-doped fiber lasers (YDFLs) have attracted great interest because they offer compact size and structure, high gain, guided-mode propagation, highly stable operation, outstanding thermo-optical properties and the possibility of high doping levels [1-6]. Moreover, Yb3+ avoids some of the drawbacks associated with other rare-earth-doped fibers, such as excited-state absorption, which can reduce the pump efficiency, and concentration quenching by interionic energy transfer. It therefore offers high output power (or gain) with a shorter fiber length. YDFAs have a simple energy-level structure and provide amplification over a broad wavelength range from 915 to 1200 nm; they can also offer high output power and excellent power conversion efficiency [1,7,8]. Lately, interest has been shown in Yb3+ as a laser ion in the form of Yb3+-doped silica and fluoride fiber lasers [9]. YDFLs have been widely used in advanced manufacturing, high-energy physics and military defense [10]. Several theoretical analyses of the YDFL exist, based on rate equations and power-transmission differential equations with fixed or varying fiber-laser parameters, whose results are important for the optimization of fiber lasers. The numerical analysis of the thermal distribution and its effects on high-power YDFLs - thermal damage, refractive-index variation of the gain fiber and output-wavelength shift - has been studied [11]. Several papers have examined temperature effects on the output performance of YDFLs: it is reported that the central wavelength of the output laser shifts to longer wavelengths and the output power decreases as the temperature rises [12, 13]. Brilliant et al.
reported experimental results on temperature tuning in a dual-clad ytterbium fiber laser; varying the fiber temperature from 0 to 100 °C, they found significant changes in operating wavelength, power and threshold [14]. Today the wavelength shift can be controlled by using fiber gratings, so a temperature-dependence study at a fixed wavelength is required. The effect of temperature on the optical properties of Yb-doped fibers lasing at different wavelengths has been analyzed [15]. Nevertheless, to our knowledge, no article has so far discussed theoretically the effect of temperature on the optimal operating conditions of YDFLs. In this article, a temperature-dependency model for YDFLs with two-end Fabry-Perot mirrors, based on the redistribution of ions caused by temperature variation, is presented. In this model the output performance (slope efficiency and output power) can be studied as a function of temperature. In addition, the heat distribution along the laser cavity and the numerical results for the slope efficiency are combined, and the optimal fiber length is obtained by taking the temperature variation into account.

2. Theoretical Model

The energy-level system of Yb with its possible transitions is shown in Figure 1 [16]. The effect of temperature on the ion distribution between the upper and lower energy levels within a manifold is considered, whereas the redistribution of ions between the excited manifold 2F5/2 and the ground manifold 2F7/2 is ignored. This is justified by the large energy gap between these two manifolds, on the basis of the Boltzmann distribution and the energy-level diagram. The standard rate equations for two-level systems are used to describe the gain and propagation characteristics of the fiber laser, because the ASE power is negligible for a high-power amplifier with sufficient input signal (about 1 mW).
After the overlap factors are introduced and the fiber loss is ignored, the simplified two-level rate equation and transition probabilities are given as follows [17]:

$$\frac{dn_2}{dt} = \left(W_{12}^{p} f_{lp} + W_{12}^{s} f_{ls}\right) n_1 - \left(W_{21}^{p} f_{up} + W_{21}^{s} f_{us} + A_{21} f_{us}\right) n_2 \qquad (1)$$

where W12 and W21 are the stimulated-absorption and stimulated-emission transition probabilities, respectively. They are given by

$$W_{12}^{p} = \frac{\Gamma_p \sigma_{ap} P_p}{A h \nu_p}, \qquad W_{12}^{s} = \frac{\Gamma_s \sigma_{as} P_s}{A h \nu_s} \qquad (2)$$

$$W_{21}^{p} = \frac{\Gamma_p \sigma_{ep} P_p}{A h \nu_p}, \qquad W_{21}^{s} = \frac{\Gamma_s \sigma_{es} P_s}{A h \nu_s}, \qquad A_{21} = \frac{1}{\tau} \qquad (3)$$

where $P_s(z,t)$ and $P_p(z,t)$ are the signal and pump powers; $\sigma_{as}$ and $\sigma_{es}$ ($\sigma_{ap}$ and $\sigma_{ep}$) are the signal (pump) absorption and emission cross sections; $\nu_s$ and $\nu_p$ are the frequencies of the signal and pump light, respectively; $h$ is Planck's constant; $A$ is the doped area of the fiber; $A_{21}$ is the spontaneous-radiation transition probability; and $\tau$ is the upper-state lifetime. $\Gamma_s$ and $\Gamma_p$ are the overlap factors of the laser power and the pump power, respectively. $\Gamma_s$ is given by $1 - e^{1-V}$, where $V = 2\pi a\,\mathrm{NA}/\lambda_s$ and NA is the numerical aperture. $\Gamma_p$ is approximately $(a/b)^2$, where $a$ and $b$ are the radii of the fiber core and of the inner cladding of the YDF [18].

Figure 1: Energy-level diagram of Yb in silica with the 976 nm, 1040 nm and 1064 nm transitions labeled. Source: [24].

The Yb dopant concentration $n_t$ is given by [19]

$$n_t = n_1 + n_2 \qquad (4)$$

where $n_1$ and $n_2$ are the ground- and upper-level populations. The energy-level diagram of Yb in silica may vary from fiber to fiber, because the splitting of the levels depends on the glass composition, the concentration of dopants and co-dopants, and the degree of structural disorder of the glass network. The absorption and emission cross sections of Yb in silica are related to the temperature and to the energies of the levels [20]. Saturation of the ytterbium absorbing transition occurs when the populations of the two Stark levels involved in the transition are matched.
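As a numerical illustration, the transition rates of Eqs. (2)-(3) and the overlap factors can be evaluated directly. The sketch below uses Table 1 values where available; the inner-cladding radius b and the powers passed to `rates` are illustrative assumptions, not values from the paper.

```python
import math

# Physical constants
h = 6.626e-34           # Planck constant (J s)
c = 3.0e8               # speed of light (m/s)

# Fiber parameters (core radius, lifetime, NA, cross sections from Table 1;
# the inner-cladding radius b is an assumed value)
a, b = 2.5e-6, 62.5e-6
A = math.pi * a ** 2    # doped (core) area (m^2)
tau = 0.8e-3
A21 = 1.0 / tau         # spontaneous-emission rate A21 = 1/tau
NA = 0.2

lam_p, lam_s = 915e-9, 1100e-9
nu_p, nu_s = c / lam_p, c / lam_s

# Overlap factors, as defined in the text
V = 2 * math.pi * a * NA / lam_s
Gamma_s = 1.0 - math.exp(1.0 - V)   # signal overlap factor
Gamma_p = (a / b) ** 2              # pump overlap factor

sigma_ap, sigma_ep = 2.5e-24, 2.5e-24
sigma_as, sigma_es = 1.5e-26, 3.2e-25

def rates(Pp, Ps):
    """Stimulated transition rates W12, W21 of Eqs. (2)-(3), powers in W."""
    W12p = Gamma_p * sigma_ap * Pp / (A * h * nu_p)
    W12s = Gamma_s * sigma_as * Ps / (A * h * nu_s)
    W21p = Gamma_p * sigma_ep * Pp / (A * h * nu_p)
    W21s = Gamma_s * sigma_es * Ps / (A * h * nu_s)
    return W12p, W12s, W21p, W21s

# Example: 100 W of pump, 1 W of circulating signal (illustrative values)
W12p, W12s, W21p, W21s = rates(100.0, 1.0)
```

Note that with the Table 1 cross sections, the pump absorption and emission rates coincide because sigma_ap = sigma_ep.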
The photon energy is the energy difference between the highest Stark level of the ground state 2F7/2 (level 4) and the lowest Stark level of the excited state 2F5/2 (level 1) of the Yb3+ ion in phosphate glass. We therefore consider just these two sublevels in the model and calculate the Boltzmann occupation factors $f_{li}$ and $f_{ui}$ of the lower and upper manifolds from the measured Stark splittings $E_i$ and $E_j$; they can be expressed as [21-23]

$$f_{li} = \frac{e^{-E_i/k_B T}}{\sum_{j=1}^{4} e^{-E_j/k_B T}} \quad \text{for the Yb ground state} \qquad (5)$$

$$f_{ui} = \frac{e^{-E_i/k_B T}}{\sum_{j=1}^{3} e^{-E_j/k_B T}} \quad \text{for the Yb excited state} \qquad (6)$$

where $k_B$ is Boltzmann's constant, equal to $1.38\times10^{-23}$ J/K. At steady state, $dn_2/dt = 0$, so

$$\frac{n_2}{n_1} = \frac{W_{12}^{p} f_{lp} + W_{12}^{s} f_{ls}}{W_{21}^{p} f_{up} + W_{21}^{s} f_{us} + A_{21} f_{us}} \qquad (7)$$

$$n_2 = \frac{\left(W_{12}^{p} f_{lp} + W_{12}^{s} f_{ls}\right) n_t}{W_{12}^{p} f_{lp} + W_{12}^{s} f_{ls} + W_{21}^{p} f_{up} + W_{21}^{s} f_{us} + A_{21} f_{us}} \qquad (8)$$

where the subscripts s and p refer to the laser and the pump, respectively. Considering the scattering losses for both pump and laser, and ignoring amplified spontaneous emission (ASE), the time-independent power-propagation equations including temperature can be written as

$$\pm\frac{dP_p^{\pm}(z)}{dz} = \Gamma_p\left[\left(\sigma_{ap} f_{lp} + \sigma_{ep} f_{up}\right) n_2 - \sigma_{ap} f_{lp}\, n_t\right] P_p^{\pm}(z) - \alpha_p P_p^{\pm}(z) \qquad (9)$$

$$\pm\frac{dP_s^{\pm}(z)}{dz} = \Gamma_s\left[\left(\sigma_{as} f_{ls} + \sigma_{es} f_{us}\right) n_2 - \sigma_{as} f_{ls}\, n_t\right] P_s^{\pm}(z) - \alpha_s P_s^{\pm}(z) \qquad (10)$$

with the boundary conditions

$$P_s^{+}(0) = R_1 P_s^{-}(0) \qquad (11)$$

$$P_s^{-}(L) = R_2 P_s^{+}(L) \qquad (12)$$

where the superscripts of $P_s^{\pm}$ and $P_p^{\pm}$ denote the propagation direction of the power along the fiber (positive for the forward direction and negative for the backward direction of the laser beam) and $L$ is the fiber length. $\alpha_p$ and $\alpha_s$ are the scattering-loss coefficients of the pump light and the laser light, respectively.
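The occupation factors of Eqs. (5)-(6) and the steady-state population of Eq. (8) can be sketched as follows. The Stark-level energies below are illustrative assumptions (the paper takes them from measured Stark splittings), as are the sublevel assignments and the rate values in the example call.

```python
import math

kB = 1.38e-23                  # Boltzmann constant (J/K)
HC = 6.626e-34 * 3.0e8         # h*c (J m), to convert cm^-1 to J

# Assumed Stark-level energies (cm^-1): 4 sublevels in the 2F7/2 ground
# manifold, 3 in the 2F5/2 excited manifold (illustrative values only)
E_GROUND = [0.0, 300.0, 600.0, 1000.0]
E_EXCITED = [0.0, 600.0, 1200.0]

def occupation(levels, i, T):
    """Boltzmann factor of Eqs. (5)-(6): exp(-Ei/kBT) / sum_j exp(-Ej/kBT)."""
    weights = [math.exp(-E * 100.0 * HC / (kB * T)) for E in levels]
    return weights[i] / sum(weights)

def n2_steady(nt, W12p, W12s, W21p, W21s, A21, f):
    """Upper-manifold population of Eq. (8) at steady state (dn2/dt = 0)."""
    up = W12p * f['lp'] + W12s * f['ls']
    down = W21p * f['up'] + W21s * f['us'] + A21 * f['us']
    return nt * up / (up + down)

T = 293.0
# Assumed sublevel assignments: pumping from the top Stark level of the
# ground manifold, emission from the bottom Stark level of the excited one
f = {'lp': occupation(E_GROUND, 3, T), 'ls': occupation(E_GROUND, 3, T),
     'up': occupation(E_EXCITED, 0, T), 'us': occupation(E_EXCITED, 0, T)}

# Example with illustrative rates (s^-1); nt normalized to 1
n2 = n2_steady(1.0, 1e5, 1e3, 1e5, 1e4, 1.0 / 0.8e-3, f)
```

The temperature dependence enters only through the occupation factors: heating depopulates the lowest ground sublevel and populates the higher ones, which is the mechanism behind the reduced pump absorption discussed in the results section.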
$R_1$ and $R_2$ are the power reflectivities of the Fabry-Perot reflectors at the laser wavelength at $z = 0$ and $z = L$, respectively. From the above equations the power distribution along the fiber laser can be calculated numerically. The temperature distributions of the fiber core, $T_1$, and of the fiber cladding, $T_2$, as functions of the fiber radius and fiber length, are given by [24]:

$$T_1(r,z) = T_c + \frac{Q(z)}{4\kappa}\left(a^2 - r^2\right) + \frac{Q(z)\,a^2}{2\kappa}\ln\frac{d}{a} + \frac{Q(z)\,a^2}{2 d h_c} \quad (0 \le r \le a) \qquad (13)$$

$$T_2(r,z) = T_c + \frac{Q(z)\,a^2}{2\kappa}\ln\frac{d}{r} + \frac{Q(z)\,a^2}{2 d h_c} \quad (a \le r \le d) \qquad (14)$$

where $Q(z)$ is the heat power density, $\kappa$ the thermal conductivity, $d$ the cladding radius, $T_c$ the ambient temperature and $h_c$ the heat-transfer coefficient of the fiber surface.

3. Results and Discussion

Numerical simulations were carried out by solving the rate equations to study the effects of temperature variation on the performance of high-power YDFLs; the parameters used are listed in Table 1. The variation of output power with pump power for the 915 nm and 970 nm signals at temperatures of 253 K, 293 K and 333 K is shown in Figures 2 and 3, respectively. The results show that increasing the pump power increases the output power, while the output power decreases as the temperature rises from 253 K to 333 K. This agrees with previous results and can be explained by the fact that the population of sublevel "a" decreases with increasing temperature while the population of sublevel "c" increases, i.e. the pump absorption declines and the laser absorption rises [13,14]. The variation of output power with fiber length for the 915 nm and 970 nm signals at 243 K and 363 K, with a pump power of 1500 W, is shown in Figures 4 and 5, respectively. The results indicate an optimal fiber length of about 12 m. According to Figs.
4 and 5, as the fiber length increases, the difference in output power at different temperatures becomes smaller and smaller. This indicates that the effect of temperature is smallest when the pump power is absorbed entirely.

Table 1: Parameters used in the simulation.
Signal wavelength (λs): 1100 nm
Pump wavelength (λp): 915, 970 nm
Yb ion density (n): 80 × 10^24 m^-3
Numerical aperture (NA): 0.2
Excited-state lifetime (τ): 0.8 ms
Core radius: 2.5 µm
Environment temperature (Tc): 298 K
Heat-transfer coefficient (hc): 17 W/(m^2 K)
Thermal conductivity (κ): 1.38 W/(m K)
Scattering-loss coefficient of laser light (αs): 5 × 10^-3
Scattering-loss coefficient of pump light (αp): 3 × 10^-3
Absorption cross section at pump wavelength (σap): 2.5 × 10^-24 m^2
Absorption cross section at laser wavelength (σas): 1.5 × 10^-26 m^2
Emission cross section at pump wavelength (σep): 2.5 × 10^-24 m^2
Emission cross section at laser wavelength (σes): 3.2 × 10^-25 m^2

Figure 2: Output power as a function of pump power (915 nm) for different temperatures when L = 3 m.
Figure 3: Output power as a function of pump power (970 nm) for different temperatures when L = 3 m.
Figure 4: Output power as a function of fiber length for different temperatures at a pump power of 1500 W (915 nm signal).
Figure 5: Output power as a function of fiber length for different temperatures at a pump power of 1500 W (970 nm signal).

The variation of output power with fiber length for different pump powers, for the 915 nm and 970 nm signals, is shown in Figures 6 and 7, respectively. The results show that the output power increases with increasing pump power, and the optimal length is around 12 m for all pump powers.

Figure 6: Output power as a function of fiber length at 273 K for the 915 nm signal.
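The radial temperature profile of Eqs. (13)-(14) can be sketched directly with the Table 1 parameters. The form used here is a standard Brown-Hoffman-type solution consistent with those equations; the cladding radius d and the heat power density Q are assumed values, not figures from the paper.

```python
import math

kappa = 1.38        # thermal conductivity, W/(m K) (Table 1)
h_c = 17.0          # surface heat-transfer coefficient, W/(m^2 K) (Table 1)
Tc = 298.0          # ambient temperature, K (Table 1)
a = 2.5e-6          # core radius, m (Table 1)
d = 62.5e-6         # cladding radius, m (assumed value)

def T_core(r, Q):
    """Core temperature, 0 <= r <= a (Eq. (13) form)."""
    return (Tc + Q * (a**2 - r**2) / (4.0 * kappa)
               + Q * a**2 / (2.0 * kappa) * math.log(d / a)
               + Q * a**2 / (2.0 * d * h_c))

def T_clad(r, Q):
    """Cladding temperature, a <= r <= d (Eq. (14) form)."""
    return (Tc + Q * a**2 / (2.0 * kappa) * math.log(d / r)
               + Q * a**2 / (2.0 * d * h_c))

Q = 1.0e12          # assumed heat power density, W/m^3
```

By construction the two branches are continuous at the core-cladding boundary r = a, and the temperature decreases monotonically from the fiber axis to the outer surface.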
Figure 7: Output power as a function of fiber length at 273 K for the 970 nm signal.

4. Conclusion

This paper has described in detail a temperature-dependent model for a YDFL based on a Fabry-Perot design. The variation with temperature of the output performance, mainly the output power, has been investigated. The results show that the output power decreases as the temperature increases, and that the variation of output power with temperature grows with increasing pump power. The theoretical results agree with published experimental results, and show that the temperature effect must be considered especially when the laser is operated at high pump power.

References:
[1] H. W. Etzel, H. W. Gandy and R. J. Ginther, "Stimulated emission of infrared radiation from ytterbium-activated silicate glass," Appl. Opt., vol. 1, p. 534, 1962.
[2] R. Paschotta, J. Nilsson, A. C. Tropper and D. C. Hanna, "Ytterbium-doped fiber amplifiers," IEEE Journal of Quantum Electronics, vol. 33, no. 7, pp. 1049-1056, July 1997.
[3] J. Chen, Z. Sui, F. Chen and J. Wang, "Output characteristics of Yb3+-doped fiber laser at different temperatures," Chin. Opt. Lett., vol. 4, no. 3, pp. 173-174, 2006.
[4] M. J. F. Digonnet, Rare-Earth-Doped Fiber Lasers and Amplifiers, CRC Press, 2nd edition, 2001.
[5] J. Yi, Y. Fan and S. Huang, "Study of short-wavelength Yb fiber laser," Photonics Journal, vol. 4, no. 6, pp. 2278-2284, 2012.
[6] B. Zhang, R. Zhang, Y. Xue, Y. Ding and W. Gong, "Temperature dependence of ytterbium-doped tandem-pumped fiber amplifiers," Photonics Technology Letters, vol. 28, no. 2, pp. 159-162, 2016.
[7] H. M. Pask, R. J. Carman, D. C. Hanna, A. C. Tropper, C. J. Mackechnie, P. R. Barber and J. M. Dawes, IEEE Sel. Top. Quantum Electron. 1, 2, 1995.
[8] L. Yan, W. Chunyu and L.
Yutian, "A four-passed ytterbium-doped fiber amplifier," Optics & Laser Technology, vol. 39, pp. 1111-1114, 2007.
[9] J. Y. Allain, J. F. Bayon, M. Monerie, P. Bernage and P. Niay, "Ytterbium-doped silica fiber laser with intracore Bragg gratings operating at 1.02 µm," Electron. Lett., vol. 29, no. 3, pp. 309-310, 1993.
[10] Y. Jeong, J. K. Sahu, D. N. Payne and J. Nilsson, "Ytterbium-doped large-core fiber laser with 1 kW of continuous-wave output power," Electronics Letters, vol. 40, no. 8, pp. 470-472, 2004.
[11] Y. Wang, "Thermal effects in kilowatt fiber lasers," IEEE Photonics Tech. Lett. 16, pp. 63-65, 2004.
[12] J. X. Chen, Z. Sui and F. S. Chen, Opto-Electron. Eng. 32, pp. 77-79, 2005.
[13] Q. J. Jiang, P. Yan, J. G. Zhang and M. L. Gong, "Analysis on thermal characteristic of ytterbium-doped fiber laser," Chin. J. Laser 35, pp. 827-829, 2008.
[14] N. A. Brilliant and K. Lagonik, "Thermal effects in a dual-clad ytterbium fiber laser," Opt. Lett. 26, pp. 1669-1671, 2001.
[15] D. A. Grukh, A. S. Kurkov, V. M. Paramonov and E. M. Dianov, "Effect of heating on the optical properties of Yb3+-doped fibers and fiber lasers," Quant. Elect. 34, pp. 579-582, 2004.
[16] L. J. Henry, T. M. Shay, D. W. Hult and K. B. Rowland Jr., "Thermal effects in narrow linewidth single and two tone fiber lasers," Optics Express, vol. 19, no. 7, p. 6165, 2011.
[17] B. K. Zhou, Principle of Laser, 6th ed., National Defense Industry Press, China, 2008.
[18] X. Peng and L. Dong, "Temperature dependence of ytterbium-doped fiber amplifiers," J. Opt. Soc. Am. B 25, pp. 126-130, 2008.
[19] A. H. M. Husein, A. H. El-Astal and F. I. El-Nahal, "The gain and noise figure of Yb-Er-codoped fiber amplifiers based on the temperature-dependent model," Opt. Mater. 33, pp. 543-548, 2011.
[20] A. H. M. Husein and F. I. El-Nahal, "Model of temperature dependence shape of ytterbium-doped fiber amplifier operating at 915 nm pumping configuration," (IJACSA) International Journal of Advanced Computer Science and Applications, vol. 2, no. 10, pp.
10-13, 2011.
[21] T. Y. Fan, "Optimizing the efficiency and stored energy in quasi-three-level lasers," IEEE J. Quant. Elect. 28, pp. 2692-2697, 1992.
[22] B. Majaron, H. Lukac and M. Copic, "Population dynamics in Yb:Er:phosphate glass under neodymium laser pumping," IEEE J. Quant. Elect. 31, pp. 301-308, 1995.
[23] J. T. Fournier and R. H. Bartram, "Inhomogeneous broadening of the optical spectra of Yb3+ in phosphate glass," J. Phys. Chem. Solids, vol. 31, pp. 2615-2624, 1970.
[24] D. C. Brown and H. J. Hoffman, "Thermal, stress, and thermo-optic effects in high average power double-clad silica fiber lasers," IEEE J. Quant. Electron. 37, pp. 207-217, 2001.

Journal of Engineering Research and Technology, Volume 2, Issue 4, December 2015

Coherent Phonons Scattering by Interstitial Impurities in Quasi-Planar Crystals

Mohammed Said Rabia
Mouloud Mammeri University, BP 17 RP Hasnaoua II, Tizi-Ouzou 15000, Algeria, m2msr@yahoo.fr

Abstract - Using the matching-method formalism, this work presents the transmission and reflection coefficients of coherent phonons in a quasi-2D quantum waveguide perturbed by reticular defects in the form of interstitial impurities. The waveguide is modelled by two infinite atomic chains, and the interactions involved refer only to the bonding strengths between nearest and next-nearest neighbours. Numerical results show that the transmission spectra exhibit Fano-like resonance features, which result from the degeneracy of localized impurity states and propagating continuum modes. In addition, scattering by multiple impurities induces interference between diffused and reflected waves in the defect region, giving rise to Fabry-Pérot oscillations. This interference phenomenon could provide an interesting alternative route to investigating the structural properties of materials. The results could also be useful for the design of phonon devices.
Index Terms - Mesoscopic disordered systems; reticular dynamics; phonon scattering; defects in nanostructures; matching method; numerical simulation.

I. Introduction

The study of scattering and localization phenomena in disordered mesoscopic systems has always interested researchers [1-3] because of the numerous applications found in classical metallurgy, electrochemistry, catalysis and electronics. Much of our present knowledge of the related phenomena stems from the work of Landauer [4], in which the studied sample is represented by a set of scatterers (reticular defects) inserted in the bulk or on the surface of a crystalline structure. He showed that the conductance of a quantum wire is directly linked to the scattering properties of such a system, considered as a waveguide perturbed by defects. His approach stimulated many researchers [5-10] to look for quantum-coherence effects, mostly by numerical methods, particularly in dc transport. These phenomena are now of renewed interest owing to advances in nanotechnology, the basic motivation being to understand the limitations that reticular disorder may impose on the mechanical and vibrational properties of crystalline materials. In the present work we study phonon scattering by an interstitial impurity localized in an infinite double atomic chain. We analyze the behaviour of a plane wave propagating through this crystal, treated as a quasi-planar crystallographic waveguide. We concentrate on calculating the reflected and transmitted parts of the incident wave, the phononic conductance, and the displacements of the irreducible atoms composing the perturbed region. We are also interested in determining the localized impurity-induced states, which are especially important for the interpretation of the transmission spectra. Different defect configurations are considered.
The mathematical treatment of the problem resorts to the matching method [7,11] within the harmonic-approximation framework [12-14], using scattering boundary conditions.

II. Structural Model

The considered model consists of two linear parallel periodic chains of masses, treated as a quasi-one-dimensional planar waveguide in which interstitial impurities are incorporated. The parallel chains are composed of specific masses aligned along the direction of propagation (the x axis); the situation is depicted in Figure 1. Each mass is linked to its nearest and next-nearest neighbours by harmonic springs of stiffness constants k1 and k2; the additional constants klv, kl and kv are shown in the figure. For simplicity, the distances between adjacent masses are taken to be equal in the two Cartesian directions x and y of the plane. Also, to account for the modification of the bonding-strength field in the perturbed region (grey area M), we introduce a proportionality factor λ giving the ratio between the force constants of the defect-zone masses and those of the perfect-lattice areas G (left) and D (right) located at sites separated by equivalent distances.

III. Matching Method

Initiated by Feuchtwang in the sixties and revisited by Szeftel et al. in the eighties, the matching method accounts satisfactorily for phonon dispersion curves [7-9] and for surface resonances. It also gives a more general definition of the resonance concept and allows a more transparent analysis of the displacement behaviour in the vicinity of the van Hove singularities [15]. However, its implementation requires subdividing the crystal into three distinct regions, all having the same periodicity along the surface. The procedure is described in detail in reference [8].
We present here only the stages necessary for understanding the analysis of the results.

A. Perfect lattice dynamics

For an atom occupying site $l$ and vibrating at frequency $\omega$, the equations of motion can be written, in the harmonic-approximation framework [14], in the following form:

$$m(l)\,\omega^{2} u_{\alpha}(l) = \sum_{l'} \sum_{\beta} k(l,l')\,\frac{r_{\alpha} r_{\beta}}{d^{2}}\left[u_{\beta}(l) - u_{\beta}(l')\right] \qquad (1)$$

where $\alpha$ and $\beta$ denote the $x, y$ directions of the plane; $m(l) = m$ is the mass of the atom located at site $l$; $r_{\alpha}$ is the component of the relative position vector between sites $l$ and $l'$; $d$ is the distance separating them; and $k(l,l')$ is the bonding-strength constant between the two atomic sites. Taking the symmetry of the problem into account and applying the scattering boundary conditions, for which plane-wave solutions are obtained, the equation of motion (1) of the perfect lattice can be rewritten in the matrix form

$$\left[\Omega^{2} I - D(z, r_{2})\right] \vec{u} = 0 \qquad (2)$$

where $\Omega^{2} = m\omega^{2}/k_{1}$ is the dimensionless frequency, $I$ the identity matrix, $D(z, r_{2})$ the dynamical matrix of the perfect lattice and $\vec{u}$ the displacement vector. The parameter $r_{2}$ denotes the ratio of the force constants between nearest and next-nearest neighbours. The scattering problem in the presence of defects requires knowledge of both the propagating modes ($|z| = 1$) and the evanescent ones ($|z| < 1$) of the perfect waveguide. In other words, for a given frequency, all solutions are necessary, even those whose modulus is lower than unity. These solutions can be obtained by augmenting the eigenvector basis:

$$v(l) = \frac{1}{z}\,u(l), \qquad l = 5,\ldots,8. \qquad (3)$$

Equation (2) can then be rewritten as a generalized eigenvalue problem in $z$,

$$A\,\vec{w} = z\,B\,\vec{w}, \qquad \vec{w} = \begin{pmatrix} u(l) \\ v(l) \end{pmatrix} \qquad (4)$$

where $A$ and $B$ are (4×4) matrices arising from the change of basis. Note that the dimension of this generalized eigenvalue problem is twice as large as that of the original problem.
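The z-eigenvalue rewriting of Eq. (4) can be illustrated on the simplest possible waveguide, a monatomic chain. This toy example is our own illustration under that simplification, not the paper's (4×4) double-chain matrices, but it shows the same mechanism: propagating modes appear as |z| = 1 and evanescent ones as |z| different from 1.

```python
import numpy as np

def chain_modes(omega2, k=1.0, m=1.0):
    """Solve the z-eigenvalue problem for a monatomic chain.

    The bulk equation k*u(i-1) + (m*omega^2 - 2k)*u(i) + k*u(i+1) = 0,
    with u(i) ~ z^i, becomes A w = z B w for w = (u(i), u(i-1)).
    Since B is invertible here, we solve the ordinary problem
    inv(B) A w = z w.
    """
    c = m * omega2 - 2.0 * k
    A = np.array([[-c, -k],
                  [1.0, 0.0]])
    B = np.array([[k, 0.0],
                  [0.0, 1.0]])
    z, _ = np.linalg.eig(np.linalg.solve(B, A))
    return z

# Inside the band (0 < m*omega^2/k < 4): both solutions propagate, |z| = 1
z_band = chain_modes(1.0)
# Above the band: one decaying (|z| < 1) and one growing (|z| > 1) solution
z_gap = chain_modes(5.0)
```

As in the text, the doubled problem returns the full set of solutions at a given frequency, including the evanescent ones that a direct dispersion-relation solve would miss.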
B. Coherent phonon scattering at defects

Since the perfect waveguide does not couple different eigenmodes, the scattering problem can be treated separately for each vibrational eigenmode; generalization to any combination of these modes poses no particular problem. For an incoming wave arriving from the left of Figure 1 in eigenmode $\sigma$,

$$\vec{u}_{in}(i) = z_{\sigma}^{\,i}\,\vec{u}_{\sigma}(\omega), \qquad i \le -1 \qquad (5)$$

where $z_{\sigma}$ is the attenuation factor of the entering mode and $\vec{u}_{\sigma}$ its eigenvector; the superscript $i$ ($\le -1$) indicates the site occupied by the atom with respect to the direction of propagation.

Figure 1: Schematic representation of a planar quasi-1D waveguide made up of two linear infinite chains perturbed by interstitial defects. The grey area M indicates the defect region; G and D are two semi-infinite perfect waveguides. The columns are labelled ... -2, -1, 0, ..., n, n+1, n+2, ...; the force constants k1, k2, kl, kv, klv, kl1 and the incident, reflected and transmitted waves are indicated.

The resulting scattered waves are composed of reflected and transmitted parts, which can be expressed as superpositions of the eigenmodes of the perfect waveguide at the same frequency, i.e.

$$\vec{u}_{r}(i) = \sum_{\nu} r_{\sigma\nu}\, z_{\nu}^{-i}\,\vec{u}_{\nu}(\omega), \qquad i \le -1 \qquad (6)$$

$$\vec{u}_{t}(i) = \sum_{\nu} t_{\sigma\nu}\, z_{\nu}^{\,i}\,\vec{u}_{\nu}(\omega), \qquad i \ge 2 \qquad (7)$$

where $r$ and $t$ denote the reflection and transmission coefficients, normalized beforehand by the group velocities (slopes of the dispersion curves) of the plane wave and set equal to zero for the evanescent modes. The evanescent modes are needed for a complete description of the scattering in the presence of a defect, although they do not contribute at all to the energy transport. With the definitions (6) and (7), we can rewrite the dynamical equations for the perturbed double chain. Since regions G and D are perfect waveguides, we only need to solve Eqs.
(1) for the masses inside the perturbed zone M and in the boundary columns (-1) and (2), which are matched to the rest of the perfect waveguide by Eqs. (6) and (7). Isolating the inhomogeneous terms describing the incident wave, we obtain an inhomogeneous system of linear equations

$$\left[D_{f}(\omega^{2}, r_{2}, \lambda)\cdot R\right]\vec{X} = -\vec{V}_{in} \qquad (8)$$

where $D_{f}(\omega^{2}, r_{2}, \lambda)$ denotes the dynamical defect matrix, $\vec{X}$ the vector gathering all the unknowns of the problem, $\vec{V}_{in}$ the incident vector and $R$ the matching matrix. For example, for an isolated defect we obtain a dynamical matrix $D_{f}$ of dimension (18×26), from which a matching matrix $R$ of dimension (26×18) is deduced. The vector $\vec{X}$ then consists of eighteen unknowns: the ten displacements $u(l)$ of the irreducible atoms, the four transmission coefficients and the four reflection coefficients.

IV. Irreducible Atoms Vibration Spectra

Combining the matching procedure with Green's functions, for a given wave vector parallel to the direction of propagation [16], the phonon spectral density matrix reads

$$\sigma_{\alpha\beta}(l, l'; \omega^{2}) = 2\omega \sum_{m} p_{\alpha}^{l}(\omega_{m})\, p_{\beta}^{l'}(\omega_{m})\,\delta(\omega^{2} - \omega_{m}^{2}) \qquad (9)$$

where $l$ and $l'$ are two atomic sites, $\alpha$ and $\beta$ designate two Cartesian directions, and $p_{\alpha}^{l}$ is the component in direction $\alpha$ of the polarization vector of atom $l$ for the mode of frequency $\omega_{m}$. The vibrational density of states (DOS) per atomic site, $n_{l}(\omega)$, in the perturbed defect region can be calculated by summing over the trace of the spectral density matrix, i.e. for $l = l'$,

$$n_{l}(\omega) = -\frac{2\omega}{\pi}\lim_{\varepsilon \to 0^{+}} \mathrm{Im}\, G_{ll}(\omega^{2} + j\varepsilon) \qquad (10)$$

where $G(\omega^{2} + j\varepsilon) = \left[(\omega^{2} + j\varepsilon) I - D_{f}(r_{2}, \lambda)\right]^{-1}$ is the Green operator.
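Eq. (10) can be evaluated numerically once a defect dynamical matrix is in hand. The sketch below uses a toy three-mass chain with a heavier central "impurity" as a stand-in for the paper's Df(r2, λ); the small finite ε plays the role of the limit ε → 0+, broadening each mode into a narrow Lorentzian.

```python
import numpy as np

def dos(Df, omega, eps=1e-3):
    """Eq. (10): n_l(w) = -(2w/pi) Im G_ll(w^2 + j*eps), per site l."""
    n = Df.shape[0]
    G = np.linalg.inv((omega**2 + 1j * eps) * np.eye(n) - Df)
    return -(2.0 * omega / np.pi) * np.imag(np.diag(G))

# Toy dynamical matrix: 3-mass chain, central mass m' = 1.5 m
# (an illustrative stand-in for the paper's Df(r2, lambda))
k, m, mp = 1.0, 1.0, 1.5
Df = np.array([[2*k/m,  -k/m,    0.0   ],
               [-k/mp,  2*k/mp, -k/mp  ],
               [0.0,    -k/m,    2*k/m ]])

freqs = np.linspace(0.05, 2.5, 500)
spectra = np.array([dos(Df, w) for w in freqs])   # shape (500, 3)
```

The peaks of each column of `spectra` sit at the squared eigenfrequencies of the toy matrix; with the matrix replaced by the actual (18×26)-derived defect system, this is the per-site DOS discussed in Section C below.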
V. Results and Discussion

The phonons scattered by the impurity are analyzed relative to an incident wave coming from the left of Figure 1, with unit amplitude and zero phase at the border atom (-1) located just at the beginning of the defect region M.

A. Single impurity scatterer

The numerical results for the transmission and reflection coefficients as functions of the incident phonon frequency are shown in Figure 2 for an impurity mass $m' = 1.5m$. We notice that the presence of the interstitial defect leads to a general decrease of the probability amplitude. As expected, the influence of the impurity is relatively small in the acoustic regime because of the low frequencies involved. For $\Omega \to 0$ we get $t_{\sigma} \to 1$; the subscript $\sigma$ (= 1 to 4) refers to the eigenmodes characterizing the double atomic chain [7-9]. Moreover, the transmission spectra are marked by pronounced Fano-like resonances (null transmission in Figure 2). These asymmetric resonances can be attributed to the presence of impurity-induced resonant states, whose frequency depends on the value of the bonding forces in the defect region M; consequently, these resonances occur at low frequencies for heavy defects and at high frequencies for light ones. These findings are in good agreement with those of Tekman and Bagwell [2], who used a two-mode approximation.

Figure 2: Transmission (full line) and reflection (dotted line) coefficients t1, r1, ..., t4, r4 vs. the phonon frequency ω for an isolated interstitial impurity of mass m' = 1.5m. The dashed curve shows the good complementarity between the two coefficients.

Lastly, the well-known theoretical relation expressing the principle of energy conservation,

$$\sum_{\nu}\left(r_{\nu} + t_{\nu}\right) = 1 \qquad (11)$$

is satisfied at every frequency (dashed lines in Figure 2). Besides, this condition constitutes an effective check of the results.
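The conservation relation (11) can be verified on the textbook reduction of the problem: a single mass defect m' in a monatomic chain, for which the scattering amplitudes are known in closed form. This 1D toy model is our own illustration, not the paper's double-chain system, but it obeys the same unitarity constraint.

```python
import math

def scatter(q, m=1.0, mp=1.5, k=1.0):
    """Transmission/reflection probabilities (T, R) for a plane wave of
    wavevector q hitting a single mass defect m' in a monatomic chain.

    From the equation of motion at the defect site, with
    m*omega^2 = 2k(1 - cos q):
        t = 2jk sin(q) / [(m' - m)*omega^2 + 2jk sin(q)],   r = t - 1.
    """
    omega2 = 2.0 * k / m * (1.0 - math.cos(q))
    t = 2j * k * math.sin(q) / ((mp - m) * omega2 + 2j * k * math.sin(q))
    r = t - 1.0
    return abs(t) ** 2, abs(r) ** 2

T, R = scatter(1.0)        # heavy defect, m' = 1.5 m
```

For m' = m the defect vanishes and T = 1 exactly; for any other mass, T + R = 1 holds at every wavevector, which is the 1D analogue of the dashed-line check in Figure 2.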
The results for the conductance $\Lambda(\Omega)$ are shown in Figure 3. In addition to the conductance curves for each impurity mass considered, we also plot that of the perfect lattice (dash-dotted histogram). In that case the entering wave is totally transmitted in each propagating mode, so the conductance of the system is largest where the modes overlap; for this reason its value exceeds unity in the corresponding frequency ranges. Otherwise, the conductance spectrum is much more affected in the case of a light impurity mass (dashed curve): in addition to resonances, this influence translates into a smaller amplitude at weak frequencies compared with larger masses.

B. Extended defect

Increasing the width of the defect region brings nothing qualitatively new with respect to the case of a single impurity. Adding impurities only increases the size of the linear system (8), while the matrix $D_{f}$ keeps its structure; the supplementary blocks have the same shape as those characterizing a single defect. We limited our study to ten interstitial impurities, which already generates a defect matrix of dimension (72×80). The effects described previously for the isolated impurity appear again, but they are more difficult to isolate because of the larger number of peak-dip structures close in frequency; for this reason we do not study these regions in detail. Instead, we present the more global change of the transmission curves, caused by the Fabry-Pérot oscillations arising from interference between the multiple scatterings of propagating states in the perturbed region. The phonon scattering for an extended defect composed of several interstitial impurities is presented in Figure 4 for a heavy impurity mass (m' = 1.5m) in the antisymmetric mode 1 [8]. It can be seen that the structure of the transmission curves becomes richer, with several peaks.
we observe a drastic dependence of the fabry-pérot oscillations on the number of impurities. however, the number of main dips remains the same, corresponding to the number of lattice parameters a contained in the width of the perturbed region. the fact that their number seems lower in the figures is simply related to a resolution problem in the frequency range involved. similar results were observed by v. pouthier et al. [17] for the transmittance spectrum of a nanowire containing a set of linear clusters separated by different spacings. moreover, the upper envelope of the fabry-pérot oscillations can merge with the fano-resonance peaks. it should be noted that, on average, the global shape of the transmission curves is quite similar to that obtained in the case of an isolated impurity (dashed line in the figures). the transmission curves are turned into a number of peak-dip structures because the modes interfere with each other through the multiple reflections of the phonon waves in the perturbed region. in general, multiple interference in the perturbed waveguide implies more complex transmission spectra. these interferences between multiply scattered waves result in fabry-pérot oscillations whose amplitude increases with frequency and whose number depends intimately on the number of impurities. similar results are obtained in the study of adatomic defects [8,9,17-20] and substitutional defect columns [8] in the perturbed double quantum chain, where defects are separated by different spacings in both configurations.

figure 3: the total transmission probability vs phonon frequency ω for impurity masses m'=0.5m (dashed line), m'=m (dotted line) and m'=1.5m (full line) in the case of a single impurity scatterer. the dashed-dotted histogram represents the total hypothetical phonon transmission capacity of the system. (vertical axis: conductance λ(ω).)

figure 4: transmission coefficient as a function of the phonon frequency ω for an extended defect composed of n defects of impurity mass m'=1.5m. the dashed curve refers to an isolated scatterer having the same mass. (vertical axis: transmission probability t.)

mohammed said rabia / phonons scattering by interstitial impurities (2015) 207

c phonons densities of states

figure 5 shows the phonon densities of states (dos) versus the non-dimensional frequency for the defect irreducible atoms (see fig. 1). the results were calculated, according to eq. (10), for two inhomogeneity masses, light (m'=0.5m, dashed line) and heavy (m'=2m, full line), in the case of stiffened force constants. due to obvious symmetry effects, analogous behaviours are observed for both columns (0) and (1); the density of states is therefore shown only for atom (0,0). the spectra for this kind of lattice atom are quite similar and present five main features, as in the overall transmission spectrum, mainly at the resonant frequencies ω≈1, 1.4, 1.7, 2.0 and 2.3 for the heavy impurity mass. for the light mass, the resonant peaks occur at the frequencies ω≈1.0, 1.4, 1.7 and 2.4. these resonances can be attributed to the presence of defect-induced resonant states. the phonon modes of the impurity atom reveal four significant resonant peaks (two for the light impurity mass) at high frequency. these strongly localized modes are due to interstitial-defect-induced states. they correspond mainly to the longitudinal modes near the brillouin zone boundary, while the low-frequency peaks are mainly contributed by the transverse modes. as previously, these resonant peaks shift to higher (lower) frequencies for smaller (larger) impurity mass, as expected.

v conclusion

in this work, we have analyzed the behaviour of elastic waves propagating through a quantum waveguide perturbed by interstitial impurities.
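the fabry-pérot picture for an extended defect can also be reproduced in a minimal 1d toy model (a monatomic chain with a block of consecutive heavier masses, not the paper's double-chain matching calculation): back-propagating the lattice equation of motion through the perturbed segment and matching plane waves on the left yields the transmission, whose oscillation count grows with the block width.

```python
import numpy as np

# hedged 1d toy model: monatomic chain, host mass m = 1, springs k = 1, with a
# block of consecutive impurity masses at sites 1..n. host dispersion:
# w^2 = 2*(1 - cos q). setting the transmitted amplitude to 1 on the right,
# we back-propagate the lattice equation of motion
#   u_{s-1} = (2 - m_s*w^2)*u_s - u_{s+1}
# through the block, then read off the incident amplitude A on the left,
# so the transmission probability is T = 1/|A|^2.
def transmission_block(w, block_masses):
    q = np.arccos(1.0 - 0.5 * w * w)          # host wavevector, valid for 0 < w < 2
    n = len(block_masses)
    u_hi = np.exp(1j * q * (n + 1))           # u_{n+1}: transmitted plane wave
    u_lo = np.exp(1j * q * n)                 # u_n, from the host EOM at site n+1
    for m_s in reversed(block_masses):        # sweep sites n .. 1
        u_lo, u_hi = (2.0 - m_s * w * w) * u_lo - u_hi, u_lo
    u0, u1 = u_lo, u_hi
    um1 = (2.0 - w * w) * u0 - u1             # host EOM at site 0 gives u_{-1}
    eiq = np.exp(1j * q)
    a_inc = (u0 * eiq - um1) / (eiq - 1.0 / eiq)   # u_0 = A + B, u_{-1} = A/eiq + B*eiq
    return 1.0 / abs(a_inc) ** 2

ws = np.linspace(0.1, 1.9, 600)
for n_imp in (1, 4, 10):
    T = np.array([transmission_block(w, [1.5] * n_imp) for w in ws])
    # interior local minima ~ fabry-pérot dips; their count grows with block width
    dips = int(np.sum((T[1:-1] < T[:-2]) & (T[1:-1] < T[2:])))
    print(f"{n_imp:2d} impurities: mean T = {T.mean():.3f}, interior dips = {dips}")
```

with a single impurity the curve reduces to the closed-form single-defect result, and widening the block multiplies the interference dips, mirroring the dependence on defect-region width described above.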
our calculation resorts to the matching procedure based on the landauer-büttiker approach. the scattering is considered for isolated and extended impurity defects. in both configurations, strong asymmetrical resonances are observed in the transmission spectra; these structures, identified as fano resonances, usually describe the interference between a propagating transmitted mode and a local defect mode. the resonance frequency depends closely on the impurity mass (or on the bonding force constants), in accordance with the relation ω² = k/m defining the harmonic oscillator frequency. for extended defects, the resonance peaks and their number are determined by the width of the perturbed region, given by the number of impurities. moreover, the transmission spectrum is also characterized by other oscillations, of fabry-pérot type, due to the interference between transmitted and reflected waves in the perturbed region; their number also depends closely on the width of the defect region. the transmission spectra can thus be used for identifying defects of specific structures and hence for their characterization. the interference effects are of interest for improvements in the design of transducers and noise control [21], whereas fano-type resonances are commonly used to build filters [22]. the results could also be useful for the design of phonon devices.

acknowledgment

i wish to thank prof. s. bouarab and prof. m. bennaki for their encouragement.

references

[1] b. kramer, quantum coherence in mesoscopic systems, plenum, new york, 1991.
[2] e. tekman and p. f. bagwell, fano resonances in quasi-one-dimensional electron waveguides, phys. rev. b 48, 18299, 1993.
[3] yu. a. kosevich, vibrations localized near surfaces and interfaces in nontraditional crystals, prog. surf. sci. 55, 1, 1997.
[4] r. landauer, electrical transport in open and closed systems, z. phys. b 68, 217, 1987; conductance determined by transmission: probes and quantised constriction resistance, j. phys.
condens. matter 1, 8099, 1989.
[5] m. büttiker, four-terminal phase-coherent conductance, phys. rev. lett. 57, 1761, 1986.
[6] e. s. syrkin, p. a. minaev, a. g. shkorbatov and a. feher, influence of interfacial layers on resonance phonon transport, microelec. eng. 81, 503, 2005.
[7] j. szeftel and a. khater, calculation of surface phonons and resonances: the matching method revisited i, j. phys. c 20, 4725, 1987.
[8] a. fellay, f. gagel, k. maschke, a. virlouvet and a. khater, scattering of vibrational waves in perturbed quasi-one-dimensional multichannel waveguides, phys. rev. b 55, 1707, 1997.
[9] m. s. rabia, surface defect characterization in quantum wires by acoustical phonons scattering, j. mol. struc.-theochem 777, 131-138, 2006.
[10] a. khater, n. auby and d. kechrakos, surface-surface phonon scattering by surface inhomogeneities, j. phys. condens. matter 4, 3743-3752, 1992.

figure 5: density of states (dos) of the irreducible atoms versus incident phonon frequency for a) light (m'= 0.5 m, dotted line), and b) heavy (m'= 2 m, full line) impurity masses. (panel labels in the figure: a) atom (0,0); b) impurity. vertical axis: density of states, arb. units.)

[11] t. e. feuchtwang, dynamics of a semi-infinite crystal in a quasiharmonic approximation. ii. the normal mode analysis of a semi-infinite lattice, phys. rev. 155, 731, 1967.
[12] r. f. wallis, surface effects on lattice vibrations, surf. sci. 2, 146, 1964.
[13] a. a. maradudin and j. melngailis, some dynamical properties of surface atoms, phys. rev. 133, a1188, 1964.
[14] a. a. maradudin, e. w. montroll, g. h. weiss and i. p. ipatova, theory of lattice dynamics in the harmonic approximation, academic press, new york and london, 1971.
[15] l. van hove, phys. rev. 89, 1189, 1953.
[16] s. m. grannan, s. e. labov, a. e. lange, b. sadoulet, b. a. young, j. emes, in: m. meissner, r. o.
pohl (eds.), phonon scattering in condensed matter vii, springer-verlag, 1993.
[17] m. s. rabia, h. aouchiche and o. lamrous, elastic waves scattering by extended defect surmounting a perfect lattice plan, eur. phys. j. a. p. 23, 95-102, 2003.
[18] v. pouthier and c. girardet, electronic transmission through a set of metallic clusters randomly attached to an adsorbed nanowire: localization-delocalization transition, phys. rev. b 66, 115322, 2002.
[19] w. kress and f. w. de wette (eds.), surface phonons, springer-verlag, berlin, 1991.
[20] m. s. rabia, coherent phonons scattering by geometric defects in a planar quantum waveguide, physica e 42, 1307-1318, 2010.
[21] m. guglielmi, f. montauti, l. pellegrini and p. arcioni, correction to "implementing transmission zeros in inductive window bandpass filters", ieee trans. microwave theory tech. 43, 1991, 1995.
[22] m. s. kushwaha, a. akjouj, b. djafari-rouhani, l. dobrzynski and j. o. vasseur, acoustic spectral gaps and discrete transmission in slender tubes, solid state commun. 106, 659, 1998.

mohammed said rabia graduated from université du québec à montréal (uqam), montréal, canada (b. sc. a. en technologie de la mécanique — b.a.sc. in mechanical technology) in 1980. he received the m.sc. and ph.d. degrees from mouloud mammeri university of tizi-ouzou (ummto), algeria, in 2000 and 2008, respectively. he is currently an associate professor at the ummto and a researcher at the lmse laboratory, algeria. his research interests include elastic wave scattering in mesoscopic disordered systems, guidance and confinement of elastic waves in phononic structures, and modelling and simulation. email: m2msr@yahoo.fr

journal of engineering research and technology, volume 2, issue 2, june 2015 167

strategic planning in construction companies in gaza strip

khalid el-hallaq 1 bassam a.
tayeh 2

1 civil engineering department, islamic university of gaza, gaza, palestine, khalaq@iugaza.edu.ps
2 civil engineering department, islamic university of gaza, gaza, palestine, btayeh@iugaza.edu.ps

abstract--this paper presents a study that aims to explore the reality of strategic planning in construction companies in the gaza strip. the study investigates the clarity of the scientific concept of strategic planning, in addition to its significance, its degree of implementation and use, the extent of participation in setting strategic plans, and the companies' ability to adapt to their changing internal and external environments. the study relied mainly on field study methods: a special questionnaire was designed for the population of 149 construction companies, so a comprehensive survey approach was adopted; 66 of the 90 distributed questionnaires were retrieved, processed and analyzed. the study recommends starting up the use of strategic planning as an administrative tool to help these companies adapt to their internal and external environments, providing more training courses on strategic management and planning for top management, and emphasizing the principle of participation, involving the different administrative levels, when setting strategic plans.

index terms: construction companies; gaza strip; strategic planning; strategy

1. introduction

palestine is a developing country in the asia region that suffers from economic and financial problems due to the current unstable political situation. the construction industry has played an important role in palestinian economic growth [1]; it has contributed approximately 5-8% of the national gross domestic product (gdp). technology has been continuously improving, causing high business pressures that affect organizations' current and future competitiveness.
these pressures cause common and rapid changes in all industries. the construction industry is also affected by these changes, and firms operating in it are challenged by increased global competition. in this global environment, it is clear that construction firms will have to be vigilant and forward-looking to survive [2]. tactical considerations will need to be replaced by, or at least put in the context of, strategic ones. the need to adopt a strategic perspective to business operations has been recognized in other industries of the economy for over four decades. more recently, frameworks and priorities have shifted to a greater extent from the short-term and tactical to the long-term and strategic [3]. however, this shift is relatively slow in the construction industry when compared with other industries. the concept of strategy is also important in the construction industry, especially in this globally competitive environment. when industry conditions and high competition are considered, the need for strategy and strategic planning in the construction industry in the gaza strip is obvious. this paper investigates the perspectives of contractor companies in the gaza strip on the concepts of strategy and strategic planning by describing the current strategic planning practices in the construction industry there.

2. literature review

2.1 the concept of strategy

in the literature, the concept of strategy is based on two different sources in terms of the origin of the word. one of them is "stratum" in latin, which means the path or line. the second is "strategos" in greek, which is defined as the art of the general. the concept of strategy was first recognized at the end of the 18th century, when war tactics became increasingly important. hence, the strategy concept owes its progress as a scientific discipline to the military field.
today, the concept of strategy is used to define achieving goals in various fields such as sports, politics and economics. moreover, it has recently started to have an important place especially in the field of management. in this context, johnson and scholes (1999) define strategy as follows: "strategy is the direction and scope of an organization over the long-term: which achieves advantage for the organization through its configuration of resources within a challenging environment, to meet the needs of markets and to fulfill stakeholder expectations". therefore, strategy needs to focus on how an organization competes, how it positions itself in the industry and how it turns its strengths into a strategic advantage. at this point, it is also important to mention the concept of strategic planning. if strategy is an overall approach and plan, strategic planning is the overall planning that facilitates the good management of a process. furthermore, strategic planning is an organization's process of defining its strategy, or direction, and making decisions on allocating its resources to pursue this strategy. in this context, there are some important concepts related to strategy and strategic planning: mission, vision, goals and objectives. these are the key components of strategic planning. the first component is the mission. a strategic plan starts with a clearly defined mission, which defines the fundamental purpose of an organization, briefly describing why it exists and what it does to achieve its vision. mintzberg (1994) defines a mission as follows: "a mission describes the organization's basic function in society, in terms of the products and services it produces for its customers".
the second component is the vision, which outlines what the organization wants to be. the vision is a long-term view and concentrates on the future. the other key components are goals and objectives. after determining the organization's mission and vision, it is also important to determine goals and objectives, which help the organization to guide, measure and evaluate its future strategies. goals are general guidelines that explain what the organization wants to achieve; they are usually long-term and represent global visions. objectives, on the other hand, form implementation steps to attain the identified goals. unlike goals, objectives are specific, measurable, and have a defined completion date.

2.2 strategic planning in the construction industry

the concepts of strategy and strategic planning are also very important in the construction industry, and several studies have been carried out to put forward their importance. one of them is a review of the application of strategic planning by enterprises in the construction industry [2]; it found that all construction enterprises would ultimately have to consider strategic concepts to be able to operate effectively in the emerging industry context. another study presents a methodological procedure for strategic planning in a construction company for the development of a competitive strategy [6]. the procedure consists of four steps: examination of the company's mission, surveying the company's business environment, analyzing the company's main resources, and development of a strategy. another methodology is described for analyzing construction firms' long-term strategies [7]; it provides a systematic approach to studying and analyzing the external and internal scenarios for a construction firm doing strategic planning.
the need for a strategic perspective has also been stressed by some country-specific studies, such as those in the uk construction industry. one study sought to evaluate business strategies adopted by construction engineering firms within the uk in order to ascertain how they are coping with evolving market conditions [8]. another aimed to review the current use of strategic management and examined how strategic management practices have changed within uk construction organizations [9]. a further study reviewed recent literature on the strategic management process and considered several paradoxes viewed from a construction perspective [9]; it found that for many construction organizations the key to success depends upon developing strategies that strike an optimal balance within these paradoxes. moreover, studies on this subject have taken different perspectives, such as frameworks for corporate strategy. for instance, a new conceptual model for corporate strategy in the construction industry has been developed which adopts an open, generic format to cater to the diversities of success and failure factors in construction and the different theories related to strategy development [10]. another study aimed to provide a structured and integrated framework of corporate strategy in order to help practitioners and researchers identify critical issues related to the chinese construction industry and analyze its dynamics from a holistic viewpoint [11]. furthermore, a survey was conducted to determine how widely strategic planning is used as a management tool by contractors in ghana [12]. there are also studies in turkey on the importance of strategy and strategic planning in the construction industry. one of them investigated the international competitiveness of turkish construction companies using porter's diamond framework [13].
another study proposed a conceptual framework for the analysis of a strategic perspective and presented the results of a questionnaire exploring the strategic perspectives of turkish contractors [14]. two more studies carried out in the turkish construction industry aim to define the present position of turkish construction companies in terms of strategic management [15,16].

3. methodology

to achieve the research objective, a questionnaire survey was used to collect factual profiles, perceptions and attitudes of the respondents [17,18]. the research focused on professionals from the palestinian contractors union (pcu) categories that are classified under the building categories in the gaza strip: the 1st, 2nd and 3rd building categories with valid registration. the small categories (4th and 5th) were not considered due to the low practical and administrative experience of these companies in construction works and the low experience of their subcontractors. based on the list of registered contractors at the pcu in january 2011, the size of the population for the 1st, 2nd and 3rd building categories was 149 companies. to determine the sample size, the kish (1965) equation was used:

n = n' / [1 + (n'/N)]

where n' is the sample size from an infinite population, calculated as n' = S²/V². the variables are defined as follows:
n: sample size from a finite population.
N: total population (149 contracting companies).
V: standard error of the sample population, equal to 0.05 for the 95% confidence level, t = 1.96.
S²: standard error variance of population elements, S² = p(1−p), taken at p = 0.2, giving S² = 0.16.

the sample size for the contracting companies' population can then be calculated as follows:

n' = S²/V² = 0.16/(0.05)² = 64
n = 64 / [1 + (64/149)] = 45

although the calculated sample size is 45, questionnaires were distributed to 90 contracting companies to overcome the risk of low participation and to ensure higher reliability and benefit of the study. the response rate was 73%, as shown in table 1.

table 1: sample size and response rate of the study population

population category | total population | calculated sample size | distributed questionnaires | respondents | response rate
contracting companies | 149 | 45 | 90 | 66 | 73%

moser and kalton (1971) showed that a response rate of less than 30% is likely to produce results subject to non-response bias. on this basis, the obtained response rate of 73% is reasonable and should reflect good results and outputs. good questionnaire design is key to obtaining good survey results and warranting a high rate of return. the questions of the research questionnaire were constructed based on:
- the literature review.
- several interviews with consultants to obtain basic important ideas useful for creating questions.

the questionnaire survey was undertaken to determine the factors affecting the use of strategic planning in construction companies in the gaza strip. a six-page questionnaire, accompanied by a covering letter, was sent to six consultants to review and help refine it. the questionnaire comprised two sections:
1) section one: general information about the companies.
2) section two: factors affecting the use of strategic planning in construction companies.
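the two-step kish calculation above is easy to verify directly; this is a small sketch using the values from the text (p = 0.2, V = 0.05, N = 149):

```python
# kish (1965) two-step sample-size calculation, as used in the text.
def kish_sample_size(population, p=0.2, v=0.05):
    s2 = p * (1 - p)                   # variance of population elements, S^2
    n_infinite = s2 / v**2             # n' = S^2 / V^2 (infinite population)
    n_finite = n_infinite / (1 + n_infinite / population)
    return n_infinite, round(n_finite)

n_inf, n = kish_sample_size(149)
print(n_inf, n)   # approximately 64 and 45, matching the paper
```

doubling the distributed questionnaires to 90, as the authors did, simply buffers the calculated minimum of 45 against non-response.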
four previous studies [21-24] were incorporated in this research to compile a comprehensive list of factors affecting strategic planning in construction companies. it was decided to divide the factors into six main groups:
1. factors related to the use of the company's strategic planning,
2. factors related to corporate structure,
3. factors related to the prevalence of a culture of strategic planning in the company,
4. factors related to the application of the strategy,
5. factors related to the optimal use of available resources, and
6. factors related to the monitoring and evaluation of strategy.

in order to fit the conditions of the palestinian construction industry, a pilot study was performed on the preliminary questionnaire. six experienced experts were involved. all were experienced industry professionals, with an average working experience in the construction industry of 25 years; their designations were managing directors, general managers, and senior project managers, so the data collected from them are expected to be reliable. the six respondents were asked to critically review the design and structure of the questionnaire, and their valuable comments and suggestions were used to revise it. all suggested comments and modifications were taken into consideration; minor changes, modifications and additions were accommodated based on the pilot study findings to develop the final questionnaire. the questionnaire was validated by the criterion-related validity test, which measures the correlation coefficients between the factors selected in each group and for all groups as one entity, and by the structure validity test (spearman test). cronbach's alpha coefficient of internal consistency reliability for the level of frequency responses was also used.

the relative index technique has been widely used in construction research for measuring attitudes with respect to surveyed variables; several researchers [24,25] used the relative importance index in their analysis. the respondents were asked to gauge the identified factors on a five-point likert scale (1 for strongly disagree to 5 for strongly agree). based on the survey responses, a relative importance index was tabulated using the following equation:

relative importance index = Σw / (a × n) = (5n5 + 4n4 + 3n3 + 2n2 + n1) / (5n)

where w is the weighting given to each factor by the respondent, ranging from 1 to 5 (n1 = number of respondents for strongly disagree, n2 = disagree, n3 = neutral, n4 = agree, n5 = strongly agree), "a" is the highest weight (5 in this study) and n is the total number of samples. the relative importance index ranges from 0 to 1.

4. results and discussion

table 2 shows that the average mean equals 3.82 and the weight mean equals 76.36%, which is greater than 60%; the t-test value of 14.152 is greater than the critical value of 2.00 and the p-value of 0.000 is less than 0.05. this means the companies use strategic planning to a large extent.

table 2: determining the use of the company's strategic planning

item | mean | std. dev. | weight mean (%) | t-value | p-value
1- the company provides adequate time to prepare its strategic plans. | 4.08 | 0.751 | 81.52 | 11.643 | 0.000
2- managers are able to provide the time needed for the strategic planning process. | 3.92 | 0.730 | 78.48 | 10.288 | 0.000
3- management believes in the importance of clarifying the concept of strategic planning for workers. | 3.85 | 0.864 | 76.97 | 7.981 | 0.000
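the relative importance index above is straightforward to compute from likert response counts. a small sketch follows; the example counts are hypothetical, chosen only to illustrate the formula:

```python
def relative_importance_index(counts):
    """counts = [n1, n2, n3, n4, n5]: respondents choosing 1 (strongly
    disagree) .. 5 (strongly agree). RII = sum(w * n_w) / (A * N), A = 5."""
    n_total = sum(counts)
    weighted = sum(w * n for w, n in zip(range(1, 6), counts))
    return weighted / (5 * n_total)

# hypothetical distribution of 66 responses for one factor
print(round(relative_importance_index([2, 4, 10, 30, 20]), 3))  # -> 0.788
```

note that the RII is simply the mean score divided by 5, so the "weight mean" percentages reported in the tables below equal 100 times the RII of each item.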
4- the company has a clear message in the mind of the director. | 4.00 | 0.911 | 80.00 | 8.913 | 0.000
5- the company has a future vision. | 3.98 | 0.920 | 79.70 | 8.699 | 0.000
6- the company has the flexibility to meet and adapt to changes that occur in the environment. | 3.95 | 0.812 | 79.09 | 9.550 | 0.000
7- management depends on a variety of information sources, including personal experience, in the preparation of the strategic plan. | 4.03 | 0.859 | 80.61 | 9.746 | 0.000
8- the company compares and chooses between strategic alternatives. | 3.77 | 0.675 | 75.45 | 9.304 | 0.000
9- the company evaluates and follows up the strategies in place. | 3.83 | 0.796 | 76.67 | 8.507 | 0.000
10- appropriate standards are developed in order to control performance in the implementation of strategic plans. | 3.89 | 0.844 | 77.88 | 8.609 | 0.000
11- the company moves to focus on projects with a very short term, rather than large projects that require a long period of time. | 3.23 | 1.174 | 64.55 | 1.573 | 0.121
12- the company has a strong tendency to diversify its investment activities in different areas. | 3.62 | 1.019 | 72.42 | 4.951 | 0.000
13- the company uses swot analysis in the preparation of the plan to identify strengths, weaknesses, risks and opportunities. | 3.52 | 1.099 | 70.30 | 3.809 | 0.000
14- the company has a strong willingness to invest in opportunities and carry the associated risk. | 3.77 | 0.856 | 75.45 | 7.337 | 0.000
all items | 3.82 | 0.470 | 76.36 | 14.152 | 0.000

*critical value of t at df 65 and significance level 0.05 equals 2.00.

from table 2, it is shown that the first factor, "the company provides adequate time to prepare its strategic plans", was ranked first, confirmed by a mean of 4.08 and a weight mean of 81.52%, indicating that contractors agree completely that they devote sufficient time to develop strategic plans.

table 3 shows that the average mean equals 3.72 and the weight mean equals 74.37%, which is greater than 60%; the t-test value of 12.080 is greater than the critical value of 2.00 and the p-value of 0.000 is less than 0.05. this means that, in developing the strategic plan, the organizational structure is reviewed and revised in line with the strategic plan.

table 3: corporate structure (organizational structure)

item | mean | std. dev. | weight mean (%) | t-value | p-value
15- in developing the strategic plan, the organizational structure is reviewed and revised in line with the strategic plan. | 3.89 | 0.787 | 77.88 | 9.228 | 0.000
16- aspects of activity are identified in the organizational structure according to the strategic planning. | 3.83 | 0.815 | 76.67 | 8.308 | 0.000
17- there is a broad administrative position to identify those points in the company. | 3.65 | 0.813 | 73.03 | 6.509 | 0.000
18- there are mechanisms to enable management to perform a monitoring role in the implementation of the strategic plan. | 3.80 | 0.728 | 76.06 | 8.963 | 0.000
19- the functional positions present in the company serve the strategic planning process. | 3.82 | 0.893 | 76.36 | 7.445 | 0.000
20- the staff turnover rate in the organizational structure is influenced by strategic planning. | 3.64 | 0.939 | 72.73 | 5.508 | 0.000
21- management is largely responsible for the failure to implement plans, programs and contracts for its projects. | 3.39 | 1.108 | 67.88 | 2.889 | 0.005
all items | 3.72 | 0.483 | 74.37 | 12.080 | 0.000

*critical value of t at df 65 and significance level 0.05 equals 2.00.
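the t-values reported in these tables are consistent with a one-sample t-test of each item mean against the neutral score of 3 (i.e. a 60% weight mean). a quick check, assuming n = 66 respondents as stated in the methodology:

```python
import math

# one-sample t statistic: t = (mean - mu0) / (sd / sqrt(n)), with mu0 = 3
# (the neutral likert score, corresponding to the 60% threshold in the text).
def t_value(mean, sd, n=66, mu0=3.0):
    return (mean - mu0) / (sd / math.sqrt(n))

# "all items" rows of tables 2 and 3
print(round(t_value(3.82, 0.470), 2))  # ~14.17, vs reported 14.152
print(round(t_value(3.72, 0.483), 2))  # ~12.11, vs reported 12.080
# the weight mean is just the mean score expressed as a percentage of 5
print(round(3.82 / 5 * 100, 2))        # 76.4, vs reported 76.36
```

the small discrepancies from the reported values are consistent with the means and standard deviations being rounded to two or three decimals in the tables.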
from table 3, it is shown that the first factor, "in developing the strategic plan, the organizational structure is reviewed and revised in line with the strategic plan", was ranked first, confirmed by a mean of 3.89 and a weight mean of 77.88%, indicating that most construction companies, in developing the strategic plan, review and modify the organizational structure in line with it.

table 4 shows that the average mean equals 3.55 and the weight mean equals 71.07%, which is greater than 60%; the t-test value of 8.371 is greater than the critical value of 2.00 and the p-value of 0.000 is less than 0.05. this means that company management encourages personnel at all organizational levels to participate in strategic planning and that staff work in the company as a team to prepare the strategic plan.

table 4: the prevalence of a culture of strategic planning in the company

item | mean | std. dev. | weight mean (%) | t-value | p-value
22- the company encourages management personnel at all organizational levels to participate in strategic planning. | 3.74 | 0.966 | 74.85 | 6.246 | 0.000
23- the company links its management policy to the strategic culture of the local community. | 3.68 | 0.826 | 73.64 | 6.708 | 0.000
24- the company's management discusses its strategic plan with the employees without imposing dictates. | 3.20 | 0.996 | 63.94 | 1.607 | 0.113
25- staff work in the company as a team to prepare a strategic plan. | 3.79 | 0.937 | 75.76 | 6.833 | 0.000
26- employees provide the information necessary for the implementation of the strategic planning activities. | 3.61 | 0.820 | 72.12 | 6.001 | 0.000
27- the company's management constantly orients its staff towards future plans. | 3.53 | 0.964 | 70.61 | 4.468 | 0.000
28- the company develops its management staff to study obstacles to strategic planning. | 3.44 | 1.040 | 68.79 | 3.433 | 0.001
29the plan focuses the company's strategy to raise the level of services provided by. 3.67 0.934 73.33 5.801 0.000 30company is committed to working in the decisions of senior management. 3.82 0.858 76.36 7.750 0.000 31the company's management develop a sense of the company's management to work in. 4.03 0.976 80.61 8.575 0.000 32the discussion of ideas about working in the company is transformed to personal differences. 2.59 1.240 51.82 -2.680 0.009 all items 3.55 0.537 71.07 8.371 0.000 *critical value of t at df "65" and significance level 0.05 equal 2.00. from table 4.4, it is shown that, the tenth factor also" the company's management develop a sense of the company's management to work in" was ranked first and confirmed by the mean reaching 4.03 and the weight mean 80.61%, this high percentage shows the interest of overall management of the development of a sense of belonging for the construction companies because of its importance to the success of the administration's policies in the strategic planning process, and coincided a almadhon (2007), lashway (1997), the study bliss (1999) this result with the result of schaffer & taylor (1984) and el jondi (1999). through the development of performance and the involvement of employees and study. table 5 shows that the average mean equal 3.84 and the weight mean equal 76.82% which is greater than “60%” and the value of t test equal 11.591which is greater than the critical value which is equal 2.00 and the pvalue equal 0.000 which is less than 0.05, that means company's specific strategy is implemented immediately and plans that are developed are applied to a significant. table 5: the application of the strategy items mean standard deviation weight mean t-value pvalue 33-when you use your company's specific strategy is implemented immediately. 3.89 0.767 77.88 9.466 0.000 34plans that are developed are applied to a significant (more than 75%). 
3.79 0.691 75.76 9.264 0.000 all items 3.84 0.589 76.82 11.591 0.000 *critical value of t at df "65" and significance level 0.05 equal 2.00. from table 4.5, it is shown that, the first factor also" when you use your company's specific strategy is implemented immediately " was ranked first and confirmed by the mean reaching 3.89 and the weight mean 77.88%, this high percentage shows that the construction companies, the implementation of strategic plans, which put them in as soon as possible so as to changes in the external environment of constantly changing. table 6 shows that the average mean equal 6.82 and the weight mean equal 76.49% which is greater than “60%” and the value of t test equal 12.155which is greater than the critical value which is equal 2.00 and the pvalue equal 0.000 which is less than 0.05, that means administrative leaders are trained are able to maximize the utilization of resources and possibilities available. khalid el-hallaq, bassam a. tayeh/ strategic planning in construction companies in gaza strip (2015) table 6: the optimal use of available resources items mean standard deviation weight mean t-value pvalue 35limited resources available to the company. 4.05 0.732 80.91 11.597 0.000 36-there is participation by the various levels involved in the inventory of resource requirements. 3.86 0.762 77.27 9.204 0.000 37providing the resources required for the exercise of strategic planning. 3.73 0.833 74.55 7.094 0.000 38-employees are informed of the availability of the resources available. 3.77 0.908 75.45 6.914 0.000 39-the company works to stimulate the use of available resources. 3.82 0.875 76.36 7.592 0.000 40-employees are trained on how to use the resources available. 3.82 0.783 76.36 8.493 0.000 41-administrative leaders are trained to be able to maximize the utilization of resources and possibilities available. 
3.73 0.937 74.55 6.304 0.000 all items 3.82 0.551 76.49 12.155 0.000 *critical value of t at df "65" and significance level 0.05 equal 2.00. from table 6, it is shown that, the first factor also" limited resources available to the company " was ranked first and confirmed by the mean reaching 4.05 and the weight mean 80.91%, this result coincided a attallah (2005) and also got the highest rank in her study. table 7 shows that the average mean equal 3.95 and the weight mean equal 79.09 % which is greater than “60%” and the value of t test equal 11.774which is greater than the critical value which is equal 2.00 and the pvalue equal 0.000 which is less than 0.05, that means there is a wide participation of management in the evaluation and review strategic plans. table 7: monitoring and evaluation strategy items mean standard deviation weight mean t-value pvalue 42-there is a wide participation of management in the evaluation and review strategic plans. 4.09 0.872 81.82 10.160 0.000 43-your company to pursue other companies that produce or provide the same services. 3.91 0.836 78.18 8.832 0.000 44-the plan is reviewed after putting them on a regular basis (every year six months -.....). 3.86 0.910 77.27 7.714 0.000 all items 3.95 0.659 79.09 11.774 0.000 *critical value of t at df "65" and significance level 0.05 equal 2.00. from table 7, it is shown that, the first factor also" limited resources available to the company " was ranked first and confirmed by the mean reaching 4.09 and the weight mean 81.82%, this refers that most construction companies are encouraged to participate in the evaluation and review of strategic plans through the work of a periodic review of plans. 5. conclusion the study showed that the percentage (76.36 %) of those surveyed agree that the strategic planning is used in the construction companies, which is a large use. 
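For reference, the t-values reported in Tables 4-7 above are one-sample t-tests of each mean against the 60% neutral point (3.0 on the 5-point scale), with df = 65, i.e. n = 66 respondents. A minimal check using the reported Table 4 totals (our reproduction; it differs slightly from the published t = 8.371 because the published mean and standard deviation are rounded):

```python
import math

# One-sample t-test against the 60% neutral point (3.0 on a 5-point
# scale), as used for the tables above. Values are the reported
# "all items" row of Table 4; n = 66 respondents (df = 65).
mean, sd, n = 3.55, 0.537, 66
neutral = 0.60 * 5  # 60% criterion -> 3.0

weight_mean = mean / 5 * 100
t = (mean - neutral) / (sd / math.sqrt(n))

print(round(weight_mean, 2))  # ~71.07%, as reported
print(round(t, 2))            # close to the reported t = 8.371
```

Since the computed t exceeds the critical value of 2.00, the null hypothesis of a neutral response is rejected, which is the decision rule applied throughout the tables.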
The study showed that 74.37% of the study sample support a relationship between strategic planning in construction companies and the organizational structure of the company, and that the structure fits with the strategic planning process. The study showed that 71.07% of the study sample support a relationship between strategic planning in construction companies and the prevalence of a culture of strategic planning, which serves the planning process and the development of the construction company. Construction companies have short- and long-term plans, and their vision, mission and goals are clear. The study showed that overall management has a clear understanding of, and conviction in, strategic planning, and that it seeks to achieve competitive advantage. Available resources are used to maximize returns and maintain the survival of companies, but to varying degrees that do not indicate the use of strategic planning in a fully effective way. Providing the resources required for the exercise of strategic planning was ranked last among the respondents' answers, as were the extent of participation by the various levels in the inventory of required resources and the availability of personnel and information resources. Construction companies have effective channels of communication and evaluation, with good monitoring between senior management and the other management levels.

References

[1] PASSIA, "The Palestinian Academy Society for the Study of International Affairs". Jerusalem, 2008.
[2] M. Betts and G. Ofori, "Strategic planning for competitive advantage in construction". Construction Management and Economics, vol. 10, pp. 511-532, 1992.
[3] M. Betts, Strategic Management of IT in Construction. London: Blackwell Science, 1999.
[4] G. Johnson and K. Scholes, Exploring Corporate Strategy. London: Prentice Hall Europe, 1999.
[5] H. Mintzberg, The Rise and Fall of Strategic Planning. New York: Prentice Hall, 1994.
[6] A. Warszawski, "Strategic planning in construction companies". Journal of Construction Engineering and Management, vol. 122, no. 2, pp. 133-140, 1996.
[7] P. Venegas and L. F. Alarcon, "Selecting long-term strategies for construction firms". Journal of Construction Engineering and Management, vol. 123, no. 4, pp. 388-398, 1997.
[8] S. Yisa and D. J. Edwards, "Evaluation of business strategies in the UK construction engineering consultancy". Measuring Business Excellence, vol. 6, no. 1, pp. 23-31, 2002.
[9] A. D. F. Price and E. Newson, "Strategic management: consideration of paradoxes, processes, and associated concepts as applied to construction". Journal of Management in Engineering, vol. 19, no. 4, pp. 183-192, 2003.
[10] C. Y. J. Cheah and M. J. Garvin, "An open framework for corporate strategy in construction". Engineering, Construction and Architectural Management, vol. 11, no. 3, pp. 176-188, 2004.
[11] C. Y. J. Cheah and D. A. S. Chew, "Dynamics of strategic management in the Chinese construction industry". Management Decision, vol. 43, no. 4, pp. 551-567, 2005.
[12] A. Dansoh, "Strategic planning practice of construction firms in Ghana". Construction Management and Economics, vol. 23, pp. 163-168, 2005.
[13] O. Oz, "Sources of competitive advantage of Turkish construction companies in international markets". Construction Management and Economics, vol. 19, pp. 135-144, 2001.
[14] I. Dikmen and M. T. Birgonul, "Strategic perspective of Turkish construction companies". Journal of Management in Engineering, vol. 19, no. 1, pp. 33-40, 2003.
[15] A. Kazaz and S. Ulubeyli, "Strategic management practices in Turkish construction firms". Journal of Management in Engineering, vol. 25, no. 4, pp. 185-194, 2009.
[16] P. I. Cakmaka and E. Tasb, "Strategic planning practices of contractor firms in Turkey".
Social and Behavioral Sciences, vol. 58, pp. 40-46, 2012.
[17] R. Fellows and A. Liu, Research Methods for Construction. Blackwell Science Ltd., Osney Mead, Oxford OX2 0EL, UK, 1997.
[18] G. D. Israel, Determining Sample Size. Department of Agriculture, Institution of Food and Agricultural Science, University of Florida, 2003.
[19] L. Kish, Survey Sampling. New York, NY: Wiley, 1965.
[20] C. A. Moser and G. Kalton, Survey Methods in Social Investigation. Heinemann Education, 1971.
[21] E. Al-Kharouby, A Developed Method Concerning Risks of Strategic Planning for Infrastructure. Master thesis, Islamic University, Gaza, 2004.
[22] S. Attalla, The Reality of the Strategic Planning in the Construction Sector: Field Study: The Construction Companies in the Gaza Strip. Master thesis, Islamic University, Gaza, 2005.
[23] M. El Mobaued, The Relationship between Strategic Planning and Growth in Small Industrial Businesses in Palestine. Master thesis, Islamic University, Gaza, 2006.
[24] M. Al-Farra, "Management characteristics in Gaza's manufacturing establishments: a comparative study". Islamic University Journal, vol. 12, no. 1, 2004.
[25] A. Enshassi, M. A. Faisal and B. Tayeh, "Major causes of problems between contractors and subcontractors in the Gaza Strip". Journal of Financial Management of Property and Construction, vol. 17, pp. 92-112, 2012.

Journal of Engineering Research and Technology, Volume 2, Issue 4, December 2015

An Efficient Approach for Supporting Multi-Tenancy Schema Inheritance in RDBMS for SaaS

Tawfiq S. Barhoom 1, Samir A. Hillis 2
1 Islamic University of Gaza, tbarhoom@iugaza.edu.ps
2 Islamic University of Gaza, samir.hillis@gmail.com

Abstract— Multi-tenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants).
Common practice is to map multiple single-tenant logical schemas in the application to one multi-tenant physical schema in the database. Such mappings are challenging to create because the base schema must be flexible enough to be extended by enterprise application tenants, which yields different, dynamically modified versions of the application. The fundamental limitation on the scalability of this approach is the number of tables the database can handle. Shared Tables Shared Instances (STSI) is a state-of-the-art approach to designing such schemas; however, it suffers from poor performance and high space overhead. In this paper, we propose an efficient approach for supporting multi-tenancy schema inheritance and compare it against STSI. Experimental results show that our method achieves good scalability and high performance with low space requirements, and outperforms STSI at different rates depending on the DML operations.

Index Terms— Cloud computing, Database as a Service (DBaaS), multi-tenant database, schema-mapping technique.

I Introduction

It is a clear trend that cloud data outsourcing is becoming a pervasive service. Along with the widespread enthusiasm for cloud computing, in addition to cloud infrastructure and platform providers such as Amazon, Google, IBM, Microsoft and Salesforce, more and more cloud application providers are emerging which are dedicated to offering more accessible and user-friendly data storage services to cloud customers. Cloud computing has become a natural and ideal choice for organizations and customers: it provides IT-related services over the network, on demand, at any time. The usual objectives and characteristics of a cloud are to be highly available, scalable, flexible, secure, and efficient. The most important characteristic is scalability, which means that applications scale to meet the demands of the workload automatically.
It is important to note that the cloud should not just scale up, but also scale down in times when demand is lower. Availability is another critical characteristic of a cloud: an application deployed in a cloud is up and running on a 24/7/365 basis. Reliability of the cloud means that applications cannot fail or lose data when the system goes down, and that users should not notice any degradation in service. The software industry is now adopting the Software-as-a-Service (SaaS) deployment model in many application domains. A special kind of SaaS offering is a multi-tenant software application [2,6,16]: it serves multiple tenants (e.g., companies or non-profit groups) from a single application instance, runs from the same code base, and can thus be maintained centrally [6]. Database as a Service (DaaS) attempts to move the operational burden of provisioning, access control, configuration, scaling, performance tuning, backup, and privacy away from database users to the service provider. DaaS is appealing because it promises scalability as well as an economical solution: by taking advantage of the lack of correlation between the workloads of different applications, the service can be run using fewer machines than if each workload were individually provisioned for its peak [18]. Cloud storage is a new business model for delivering virtualized storage to customers on demand. The formal term proposed by the Storage Networking Industry Association (SNIA) for cloud storage is Data Storage as a Service (DaaS): "delivery over a network of appropriately configured virtual storage and related data services, based on a request for a given service level" [1]. Cloud service providers (CSPs) provide many services, such as storage, platforms and applications.
The main benefit of multi-tenancy is to reduce the operating costs of running the software from the provider's perspective. Multi-tenancy is the main property of SaaS [7]: it allows vendors to serve multiple requests and configurations through a single instance of the application. In the same way, a single database is shared amongst customers to store all tenants' data; this is known as a "multi-tenant database". Multi-tenancy refers to a mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The tenants (application instances) can be representations of organizations that obtained access to the multi-tenant system; the tenants may also be multiple applications competing for shared underlying resources. All this is achieved without changes to the application code to support each customer's individual needs. In order to achieve this, individual metadata for each client has to be stored and has to affect the way the system behaves. A multi-tenant database allows a single instance of an application to handle several end users at the same time; this idea has been explored previously without any explicit connection with multi-tenancy [12].

II Multi-Tenant Data Storage Systems

The concept of multi-tenancy is not supported by traditional DBMSs; it appeared after the spread of cloud computing. However, despite its importance, multi-tenancy brings about several issues concerning security, implementation challenges, customization, configurability, scalability, and extensibility, which can be seen only upon deployment in a data center [14]. A well-designed SaaS application should be optimized to support multi-tenancy, scalability and configurability [15].
This leads to the implementation and adoption of an additional layer for the real data management. Application developers experience additional problems with multi-tenant database architectures: not knowing the semantics and the relationships between the data, they can no longer use them for optimization and consistency management. Scalability here refers to the ability of an application to support an increasing number of users without a significant performance overhead [5]. Customization is concerned with supporting user-specific features or meeting service level agreements by means of configuration. Due to the distributed and shared nature of multi-tenant applications, appropriate security policies should be devised to prevent unauthorized users from accessing other users' private data. There are three approaches to managing multi-tenant databases, as shown in Figure 1: shared machine, shared process and shared table [7]; these techniques are also called separate databases, shared database-separate schemas, and shared database-shared schema. The most interesting technique is the last one, which aims at creating the application schema only once and mapping all tenants directly to this schema using one of the available schema-mapping techniques. We review the existing multi-tenant database schema design methods:

(a) Separate database: a separate database is assigned to each tenant for data storage. Each database contains some metadata used to redirect each tenant to the correct database. This approach is considered expensive in both implementation and maintenance.

(b) Independent tables and shared instances: all tenants share the same physical database, but the schema differs for each tenant. This approach is relatively simple to implement.

(c) Shared tables and shared instances (STSI): all tenants share both the physical database and the schema. Tables are shared by all tenants.
Customers' information is separated using primary keys which are specified in the database design. This approach is relatively economical because it supports a large number of tenants per database server. Selecting the appropriate approach depends on different criteria. For example, the separate database approach is the appropriate solution for large organization tenants who need to store large amounts of data; the same approach is also suitable if security and legal requirements are of high concern. On the other hand, the shared database-shared schema approach is the appropriate solution for individual tenants who have small amounts of data to store, and it is also the optimum solution for frequently changed applications [15].

Figure 1: Types of multi-tenant data storage systems [22]

III Schema Requirements for Multi-Tenant Databases

Standard relational DBMSs have only very limited support for online schema evolution. Complex application updates require significant service downtime, and even small schema changes, like the ones individual tenants initiate, have a severe performance impact, as stated in [9]. In turn, a multi-tenant DBMS needs to provide schema evolution capabilities. On the one hand, tenants need the ability to tailor the SaaS application to their needs without affecting other tenants; this may require schema modifications of already existing relations. On the other hand, SaaS applications evolve constantly, as service providers are forced to integrate new features, and these new features may require changes to the database schema. Consider, for example, a situation where the service provider needs to deploy a new feature of the base application which requires changes to the schema of existing relations. These changes could be performed online, as long as they do not require changes in the application code, e.g., adding attributes or enlarging the value range of an attribute.
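The STSI layout described above can be illustrated with a small sketch: all tenants' rows live in one physical table, and a tenant_id column in the primary key keeps them apart. The table and column names are ours, chosen for illustration (the paper's experiments use the TPC-H schema), and SQLite stands in for the RDBMS:

```python
import sqlite3

# Sketch of shared tables / shared instances (STSI): one physical
# table serves every tenant; tenant_id is part of the primary key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE orders (
        tenant_id INTEGER NOT NULL,
        order_id  INTEGER NOT NULL,
        total     REAL,
        PRIMARY KEY (tenant_id, order_id)
    )
""")
# Two tenants share the same physical table.
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 10.0), (1, 2, 20.0), (2, 1, 99.0)])

# Every query must carry a tenant filter so tenants never see
# each other's rows.
rows = cur.execute("SELECT order_id, total FROM orders "
                   "WHERE tenant_id = ? ORDER BY order_id", (1,)).fetchall()
print(rows)  # [(1, 10.0), (2, 20.0)]
```

The economy of the approach comes from packing many tenants per server; its costs, as the paper notes, are the mandatory tenant predicate on every statement and, once per-tenant extension columns are folded in, wide and sparsely filled rows.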
A further requirement is scalability, namely the ability to serve an increasing number of tenants without too much query performance degradation. One way to achieve high scalability is to offer a single instance of the software which serves multiple clients/organizations: multi-tenancy. By consolidating multiple customers onto the same infrastructure, resources can be economized and used more efficiently [7,13]. Costs for third-party software licenses are therefore drastically reduced, allowing the saved money to be invested in bigger capacities for the existing infrastructure (e.g., more disk space, memory, etc.). Moreover, management processes can be enhanced while providing a uniform framework for system administration. In a multi-tenant setting we cannot assume that the number of tenants will remain the same, or that a tenant will not require more than one application and database server. Scalability implies that resources can be scaled up or scaled down dynamically without causing any interruption in the service [20].

IV Related Works

Recently, cloud computing has become a dominant field in the information technology world, prevailing over both academia and industry. Many studies have been done by companies and researchers on supporting outsourced databases as a service and on extending relational DBMSs. Cloud service providers (CSPs) provide many services, such as storage, platforms and applications. Companies like Force.com do their own mapping from logical tenant schemas to one universal physical database schema (Weissman & Bobrowski) to overcome the limitations of traditional DBMSs. However, this approach complicates development of the application because many DBMS features, such as query optimization, are lost. Instead, a next-generation multi-tenant DBMS should provide explicit support for extensibility [6].
Bigtable [2] was developed and deployed by Google as a structured data storage infrastructure for Google's different products. To scale the system up to thousands of machines and serve as many projects as possible, Bigtable employs a simple data model that presents data as a sorted map in which each value is an uninterpreted string. Although Google's Bigtable is a high-performance, distributed, proprietary storage system designed to easily manage structured data that scales across thousands of commodity servers, it is currently neither used nor distributed outside Google, although it can be accessed from Google App Engine. Since its release, several open source implementations have been reported in the literature, namely HBase and Hypertable.

Bezemer et al. [17] give a very clear introduction to multi-tenancy: they define the term, show its main characteristics, and compare multi-tenancy against multi-user and multi-instance. Curino et al. and Moon et al. show that schema evolution is still an important topic, especially in scenarios where information systems must be upgraded with little or no human intervention. In their view, multi-tenancy is efficient when, given a set of databases and workloads, it can be determined what the best way is to serve them from a given set of machines. Relational Cloud stores the data belonging to different tenants within the same database server, but does not mix data of two different tenants into a common database or table [11,3,6]. S. Aulbach et al. [4] presented chunk folding, a schema-mapping technique that vertically partitions logical tables into chunks which are in turn folded together into several physical multi-tenant tables and joined as needed. Franclin S. Foping et al. [10] contributed a new approach that focuses on devising a mechanism to handle data between the real physical tables and the tenant tables, including options for tenant schema extension, and that can be implemented in open source relational database products. In [7], Jacobs et al. discuss the trend towards multi-tenancy for hosted applications and some main requirements, comparing implementations and showing the different possibilities for implementing multi-tenant databases on standard relational databases. They identified three approaches: shared machine, shared process, and shared table; in the shared machine approach, each tenant gets its own database. In [9], Stefan Aulbach et al. introduce features like native schema flexibility, handled by a prototype data model called FlexScheme which is optimized for a multi-tenant workload, and describe a method for graceful online schema evolution without service outages. In [19], Schiller et al. propose the concept of a tenant context to isolate a tenant from other tenants. They present a schema inheritance concept that allows sharing a core application schema among tenants while enabling schema extensions per tenant: the tenant context determines the tenant's view of the database, and the tenant-aware schema inheritance shares the application's core schema, which is invariant among tenants, while allowing schema extensions for tenants according to their individual needs. Jiacai Ni et al. [22] build the physical tables from the attribute level instead of the tenant level by extracting the most important attributes from the tenants and building several base tables using these attributes, and they propose an adaptive method to build base tables and supplementary tables based on the database schemas of different tenants and the query workloads.

V Contributions

We used the TPC-H schema [21], which comprises 8 tables.
A database generator is used to populate the database with data.

• We propose a new virtual schema that inherits both shared data and metadata from a shared schema. Thereby, it allows extending tables and creating objects according to the parent schema of a multi-tenant database system based on a standard RDBMS.
• We enhance the TPC-H benchmark to suit cloud computing; we call the result SATBenchCloud.
• We contribute a tenant data dictionary that allows integration with a multi-tenant relational database.

Middleware for table/metadata sharing

Schema inheritance concept: schema inheritance allows deriving a schema from another schema. Thereby, a derived schema inherits the objects that are defined in the parent schema. It allows extending and creating objects according to a defined set of rules, and it defines three different schema types: shared schema, virtual schema and tenant schema.

Shared schema: multi-tenant applications use tables to store data that is specific to the application and invariant between tenants. In such a case, the tenants only read the table while the provider or an appropriate application maintains its contents.

Virtual schema: the schema hierarchy describes virtual schemas through which a core application may be customized based on individual tenants' needs. A virtual schema has no table instances; consequently, it is impossible to store data using a virtual schema.

Tenant schema: a tenant schema relates to a specific tenant. Each tenant possesses an associated tenant schema that represents a part of its context. A tenant schema must inherit from a virtual schema, includes table instances, and is final with respect to inheritance: another schema cannot inherit from a tenant schema.
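The three schema types and their inheritance rules can be modelled in a few lines. This is our own illustrative sketch (class, schema and table names are ours), not the paper's implementation:

```python
# Illustrative model of the shared / virtual / tenant schema hierarchy.
class Schema:
    def __init__(self, name, parent=None, tables=(), final=False):
        # A tenant schema is final: nothing may inherit from it.
        if parent is not None and parent.final:
            raise ValueError("cannot inherit from a tenant (final) schema")
        self.name, self.parent = name, parent
        self.tables = set(tables)
        self.final = final

    def all_tables(self):
        # A derived schema inherits every object defined along its
        # parent chain, plus its own extensions.
        inherited = self.parent.all_tables() if self.parent else set()
        return inherited | self.tables

shared = Schema("shared", tables={"customer", "orders"})   # invariant core
virtual = Schema("virtual", parent=shared)                 # structure only, no instances
tenant_a = Schema("tenant_a", parent=virtual,
                  tables={"invoices"}, final=True)         # per-tenant extension

print(sorted(tenant_a.all_tables()))  # ['customer', 'invoices', 'orders']
```

Tenant A's logical view is the inherited core plus its own extension, while the core itself is defined once and shared by all tenants.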
A tenant context is associated with a specific tenant and keeps all the information that allows determining the tenant's virtual database. In other words, the tenant context determines the tenant's view of the database by isolating a tenant from other tenants.

VI Experimental

This section describes the information needed to empirically evaluate the efficiency and scalability of SATBenchCloud. Scalability is defined as the system's ability to handle growing amounts of work in a graceful manner [20]. In our experiments, we consider the scalability of SATBenchCloud by measuring system throughput as the data scale increases. Two sets of experiments are evaluated in terms of different dimensions of data scale: the number of tenants and the number of columns in the shared table. We use the original shared table as the baseline in the experiments.

Benchmarking a database is the process of performing well-defined tests on that particular database for the purpose of evaluating its performance. In order to provide standards, the Transaction Processing Performance Council (TPC) defines transaction processing and database benchmarks that are widely used in industry and academia to measure the performance characteristics of database systems [21]. Today, the most important of these benchmarks is TPC-H. In order to make the benchmark suit our work, we introduced simple but important modifications. Some related work, such as that offered by salesforce.com, does not consider the extensibility issue of the shared table, which is at the heart of our work.

Setting up SATBenchCloud: our version of the benchmark, called SATBenchCloud, focuses on cloud environments with multi-tenancy support. As shown in Figure 2, SATBenchCloud comprises a configurable base database schema, a private schema generator, a data generator, a query workload generator, and a driver.
SATBenchCloud can be used with any generic relational database schema and SQL queries.

Figure 2: Complete process for running SATBenchCloud

SchemaGen: TPC-H provides a schema that should work with most databases with only minor modifications. We add a tenant_id column to every table; consequently, the primary key has to be a combination of tenant_id and the entity-specific id field. We use Oracle Database 12c, which comes with an innovative multitenant architecture and is designed for the cloud. We use the schema generator, called SchemaGen, to produce the schema for each tenant.

CloudDBGen: CloudDBGen is used to populate the database with data; it has a scaling factor that influences the amount of data.

QGen: QGen is a utility provided by the TPC to generate executable query text. The only difference is that the query optimizer adds a "restrict on tenant" clause to the query to indicate which tenant the tuples belong to.

Third-party driver: the mechanism used to submit queries and refresh functions to the system under test (SUT), report the execution time and throughput of the system, and measure execution time is called a driver.

Metadata-driven architectures: this section proposes metadata-driven architectures to build a multi-tenant database schema. This database schema integrates multi-tenant relational tables and virtual relational tables and makes them operate virtually as a single database schema for each tenant, making it suitable for a multi-tenant database environment that can host any business domain database. Figure 3 shows the details of the metadata-driven architecture, which is very significant for multi-tenant applications. Table 5.1 gives a brief description of the metadata-driven fields.
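The effect of the tenant restriction added to each generated query can be pictured with a naive query-rewriting helper. This is our sketch only: it assumes a single-table SELECT with no existing WHERE clause, unlike the real QGen utility:

```python
def restrict_to_tenant(sql, tenant_id):
    # Naively append a tenant predicate, mimicking the "restrict on
    # tenant" clause added to each generated query. Sketch only:
    # assumes a single-table SELECT without an existing WHERE clause.
    return f"{sql} WHERE tenant_id = {int(tenant_id)}"

q = restrict_to_tenant("SELECT o_orderkey, o_totalprice FROM orders", 42)
print(q)  # SELECT o_orderkey, o_totalprice FROM orders WHERE tenant_id = 42
```

Because tenant_id is the leading column of every composite primary key produced by SchemaGen, such a predicate lets the database confine each query to one tenant's slice of the shared table via the index.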
Figure 3: Metadata-driven database design

Integrated TPC-H schema and multi-tenant relational database: we assume that the service provider has three tenants. The first tenant was interested in using the original database schema; for simplicity we will use the orders table only, as shown in Figure 4-1. The second tenant found that, in addition to the columns predefined in the orders table, he needs to add new fields to fulfill his business requirements, including 'ship country' and 'required date'; Figure 4-2 represents this case. The third tenant found that he needs to add an extra table; thus, this tenant created a virtual database relationship between the already existing physical tables and his extra table, as shown in Figure 4-3.

Experimental settings and results: in this section we present the experimental settings and results for supporting multi-tenancy schema inheritance in an RDBMS for SaaS, and make a comparison with other techniques. In general there are two types of tests, the load test and the performance test, shown in Figure 2. The load test involves loading the database with data and running the queries; the performance test involves measuring the system's performance against a specific workload. We will customize the tests and discuss the exact steps that need to be taken and the values to be measured. We first present the settings for benchmark database generation, then the hardware and software settings. Two sets of experiments are examined to evaluate the scalability of the multi-tenant system, considering throughput and response time in relation to the number of tenants and the effect of the number of columns. First, we run SchemaGen to generate three groups of schemas for 100, 500, and 1,000 tenants. These schemas are then used for evaluating the scalability of storage and query processing under different schema variability.
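The second tenant's case (extra 'ship country' and 'required date' fields) can be sketched with a generic extension table, one common way to add per-tenant columns without altering the shared table. The layout below is our illustration, not necessarily the paper's physical design:

```python
import sqlite3

# Per-tenant column extension without ALTER TABLE on the shared table:
# extra values go into a generic extension table keyed by
# (tenant_id, order_id, column name). Names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (tenant_id INT, order_id INT, total REAL)")
cur.execute("""CREATE TABLE orders_ext (
                   tenant_id INT, order_id INT,
                   col_name TEXT, col_value TEXT)""")
cur.execute("INSERT INTO orders VALUES (2, 1, 50.0)")
# Tenant 2's extra 'ship_country' and 'required_date' columns:
cur.executemany("INSERT INTO orders_ext VALUES (?, ?, ?, ?)",
                [(2, 1, "ship_country", "PS"),
                 (2, 1, "required_date", "2015-06-01")])

# Reassemble tenant 2's logical row from base plus extension rows.
row = dict(cur.execute(
    "SELECT col_name, col_value FROM orders_ext "
    "WHERE tenant_id = 2 AND order_id = 1").fetchall())
print(row)  # {'ship_country': 'PS', 'required_date': '2015-06-01'}
```

Other tenants pay nothing for these columns, which is the source of the space savings over a wide shared table full of nulls; the price is the extra join when reassembling the logical row.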
Figure 4: Integrated TPC-H schema and multi-tenant relational database.

Next, CloudDBGen is run to generate data for three databases, named sat_10gb, sat_100gb, and sat_300gb, produced with TPC-H workloads of scale factor 10, 100, and 300 respectively. As required by the TPC-H specification, the three scale factors were selected in order to observe significant differences in query response between them.

Effect of Tenants. In this section, we present and evaluate the experimental results of SaTBenchCloud and STSI under different numbers of tenants. SaTBenchCloud implements schema inheritance, which allows deriving a schema from another schema; a derived schema inherits the objects defined in the parent schema and can extend them or create new objects according to a defined set of rules. It therefore defines three schema types: shared schema, virtual schema, and tenant schema.

Storage Capability. We compare the disk space usage of the shared-table approach and SaTBenchCloud under different numbers of tenants, as shown in Figure 5. It can be clearly seen that SaTBenchCloud outperforms STSI in storage requirements in all experiments when storing the same number of tuples. Our interpretation is that the shared table consumes large amounts of disk space to store null values. SaTBenchCloud, on the other hand, extracts from the overall data dictionary a data dictionary associated with each tenant, and exploits situations where data needs to be shared between tenants rather than migrated from one tenant to another, which consumes storage and may cause data duplication.

Figure 5: Disk space usage with different numbers of tenants.

Throughput Test. A throughput test measures the ability of the system to process the most queries in the least amount of time.
We now investigate the performance of SaTBenchCloud and STSI on concurrent operations. The throughput test must be executed under the same conditions for both approaches. The driver runs all queries against the multi-tenant database system in a "client/server" configuration to simulate a real multi-tenant environment, and all processes are executed in parallel against indexed attributes. To ensure the accuracy of the results, we execute the TPC-H query workload with its default settings and compare it with the SaTBenchCloud results. We then discuss the usability of our approach.

Data Manipulation Language (DML) Performance. Based on our proposal, we divide DML operations into three categories: DML against the original database schema, DML when the tenant adds new columns, and DML when the tenant adds new tables. For each workload we repeat the experiments five times and take the average time. As shown in Figure 4, we compare the operation costs of STSI and SaTBenchCloud according to the example explained above. The experiments are performed on the three databases with workloads of scale factor 10, 100, and 300 gigabytes. For short, we call the selection operations sel1, sel2, and sel3, and the insert operations ins1, ins2, and ins3, respectively; similarly for the deletion and update operations. When SF = 10, assuming 100 tenants, we see that sel1 and sel2 perform much better than STSI (see Figure 6). For sel3 the performance declines, because of the costly join operation required to create the virtual database relationship between the physical tables and the virtual table, but it still beats STSI. The same applies to insertions, deletions, and updates. When SF = 100 (Figure 7) and SF = 300 (Figure 8), the performance of SaTBenchCloud remains slower than at SF = 10, but it still outperforms STSI in system throughput.
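The repeat-five-times-and-average measurement protocol above can be sketched as follows; the workload function here is a stand-in for one DML run, not the paper's actual TPC-H-derived queries:

```python
import time

def run_workload():
    # Stand-in for one DML workload run (e.g. sel1 against the SUT);
    # the real experiments execute TPC-H-derived queries instead.
    return sum(i * i for i in range(100_000))

def average_runtime(workload, repeats=5):
    """Repeat the workload and return the mean wall-clock time,
    mirroring the five-repetition averaging used in the experiments."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

avg = average_runtime(run_workload)
print(f"average over 5 runs: {avg:.6f} s")
```

Averaging over repetitions damps per-run noise such as caching effects and OS scheduling jitter.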
We can conclude that SaTBenchCloud is not affected by increasing the number of tenants. Our interpretation of its efficiency is that SaTBenchCloud uses fewer disk I/Os than STSI to fetch the records touched by DML operations into memory, because it exposes the data of only one tenant at a time. In addition, the pivot-table index associated with a specific tenant, built on the tenant's identity column, improves and speeds up query execution when retrieving data. In contrast, STSI uses one big index over the records of all tenants, and the lookup becomes inefficient with a large number of tenants.

Figure 6: DML performance when scale factor = 10.
Figure 7: DML performance when scale factor = 100.
Figure 8: DML performance when scale factor = 300.

Effect of Columns. Database as a Service is designed to support a large number of tenants, each with different requirements but only a few common columns. We therefore need to handle the situation in which the base schema is very sparse and contains a large number of configurable columns owned by different tenants. One of the big challenges in the shared-table model is deciding the number of custom fields (table columns) offered to tenants: providing too few columns might restrict tenants who wish to use a multi-tenant database system and limit the flexibility of extending the table. We investigate the scalability of SaTBenchCloud versus STSI with an increasing number of columns, and its impact on system performance and storage usage.

Storage Capability. In this experiment, we examine the storage capability of SaTBenchCloud and STSI with an increasing number of columns. We assume the number of columns in the shared table is 10, 100, and 300 in our three databases, respectively.
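The effect of a tenant-identity index can be illustrated with SQLite's query planner; this is an approximation of the paper's Oracle setup, and the table and index names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pivot (
    tenant_id INTEGER, row_id INTEGER, col_name TEXT, value TEXT)""")
conn.executemany(
    "INSERT INTO pivot VALUES (?, ?, ?, ?)",
    [(t, r, "c", "v") for t in range(1, 51) for r in range(20)],
)
# Index on the tenant's identity column, so lookups touch only one
# tenant's records instead of scanning all tenants.
conn.execute("CREATE INDEX idx_pivot_tenant ON pivot(tenant_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pivot WHERE tenant_id = 7"
).fetchall()
# The plan reports an index search rather than a full table scan.
print(plan[0][3])
```

Without the index, the same query would scan all 1,000 rows; with it, only the 20 rows of tenant 7 are touched, which is the behavior contrasted with STSI's single big index above.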
Figure 9 illustrates the disk space usage of SaTBenchCloud and STSI. The figure shows that SaTBenchCloud requires less storage space than STSI. Our interpretation is that SaTBenchCloud operates according to the idea of a tenant context: there is a degree of integration between the multi-tenant relational tables and the virtual relational tables, so that stored data is associated with a particular tenant according to the columns defined by that user, without storing any values belonging to other tenants. This shows good scalability with respect to system storage, and the concept is already applied in column-oriented databases. Also, any schema modification by one tenant does not affect the logical schema of the other tenants.

Figure 9: Disk space usage with different numbers of columns.

Throughput Test. Our objective now is to evaluate the effect of an increasing number of columns on system throughput. We test the three databases under different workloads: QGen generates the executable queries, and we execute the same queries against these databases after extending the table with new columns, observing how the two approaches respond. Figure 10 displays the system throughput and response time for SaTBenchCloud and STSI. Clearly, both approaches decline in performance as the number of columns increases, but scalability is not affected. SaTBenchCloud offers the best performance because of its ability to perform selective I/O on columns, with a rate of improvement of about 30%.

Figure 10: Throughput test.

VIII Conclusion and Future Work

In this paper we have proposed SaTBenchCloud, an efficient approach for supporting multi-tenancy schema inheritance in RDBMS for SaaS. It offers different schema types for different situations.
We focused on metadata management to overcome the null values, to fetch data by the tenant's identity, and to build per-tenant indexes. Our experimental results show that our approach decreases main-memory consumption and data-dictionary lookup times compared to STSI. In future work, we intend to complete efficient support for multi-tenancy and to facilitate the migration of application features between cloud database service providers according to security requirements.
Authors Profile

Dr. Tawfiq S. Barhoom received his Ph.D. degree from Shanghai Jiao Tong University (SJTU) in 2004. He is the Dean of IT at the Islamic University-Gaza. His current research interests include secure software, modeling, XML security, web services and their applications, and information retrieval.

Samir A. Hillis received his B.Sc. in IT from Al-Quds Open University in 2000. He has worked at Palestine Technical College since 2000 as a lecturer and is now the head of the registration department. He is currently an M.Sc. candidate at the Islamic University of Gaza; his interests include cloud computing, database management systems, data mining, and mobile applications.
Journal of Engineering Research and Technology, Volume 9, Issue 1, March 2022

Study of the Readiness for Receiving Desalinated Seawater – Gaza City Case Study

Mazen Abualtayef1, Mohanad Salha2, Khalid Qahman3
1 Civil and Environmental Engineering Department, Islamic University of Gaza, Palestine
2 Municipality of Gaza, Palestine
3 Environment Quality Affairs, Palestine
https://doi.org/10.33976/jert.9.1/2022/1

Abstract— The readiness for receiving desalinated water in Gaza City was evaluated by assessing technical, economical, and social readiness, and by identifying the current obstacles and problems, in order to achieve the greatest possible benefit from the Gaza seawater desalination plant. The condition of the existing water networks in Gaza City was assessed through the age of the network, the age of the flowmeters, the locations of asbestos lines, the places of breakage and failure, and spatially generated thermal maps. Furthermore, the bill-collection rate was studied from social and economical aspects to determine the acceptance of a newly defined tariff as a consequence of improved water service. The results revealed that 68% of the water networks are older than 10 years, and 66% of the water flowmeters are older than 10 years. There is a close relationship between the use of 2" diameter pipes and the occurrence of breaks and failures in the networks. Collection does not exceed 4% of the total bill value, which represents 29% of the number of customers. The socio-economic results found that 65% of the customers do not pay the bills due to bad economic conditions and 35% due to poor water service.
However, 64% of the customers agreed to a tariff increase for receiving desalinated water, and 97% agreed provided the tariff does not exceed 2 shekels per cubic meter. The existing water network problems should be fixed to ensure optimal use of the desalinated water, and public awareness campaigns should be carried out to increase the level of social responsibility and bill payment, in order to obtain better service and sustainability.

Index Terms— readiness; desalination; water services; Gaza

I Introduction

Water demand in the Gaza Strip is increasing continuously due to economic development and population growth, while the water resources are constant or even decreasing due to urban development [2,10]. The Gaza Strip is classified as a semi-arid region and suffers from water scarcity. The renewable amount of water that replenishes the groundwater system is less than the demanded amount, which has caused deterioration of the groundwater system in both quantity and quality. Desalination of seawater has become a component of the strategic plan of the Palestinian Water Authority (PWA), since the domestic water demand by year 2020 was about 182 MCM/year [1]. The Gaza Strip is not only afflicted with a chronic shortage of drinking water but is also suffering from the deterioration of its water quality. To solve these problems, there is no choice but to construct a large-scale seawater desalination plant and to restrict groundwater abstraction. Under these conditions, the PWA has been planning to construct the Gaza Central Desalination Plant with an initial capacity of 55 MCM/year and a maximum capacity of 110 MCM/year, based on the Coastal Aquifer Management Plan [2]. This 55 MCM/year of production, together with about 14 MCM/year from short-term low-volume desalination plants established in different areas of the Gaza Strip, will cover about 40% of the water demand in 2025.
The associated works include the north–south carrier lines, local non-revenue water reduction, and construction of reservoirs. Despite the clear strategy of the PWA and the water utilities in the Gaza Strip to provide customers with healthy and safe drinking water from the seawater desalination plants and imported drinkable water from Mekorot, it is not clear how the water utilities will manage such an amount, knowing that the overall amount of drinkable water will cover 40% of domestic use and that the cost of desalinated water will increase the tariff rate. Pumping desalinated water into the existing water networks without considering their condition, such as pipe age, materials, and losses, will increase the cost, which will affect the sustainability of such strategic projects. In addition to the economical and social aspects, water utilities must change people's attitudes about the water that comes out of their taps and persuade them that it is drinkable water that can be used in everyday situations without causing any health problems, as people have developed a negative perception of this source [3, 4, 5, 6].

Mazen Abualtayef et al. / Households' Affordability and Willingness to Pay for Water Services in Khan Younis City, Palestine (2022) 2

In view of people's deteriorating economic status, the economic side is important, as the tariff rate may reach 1 $/m3 [7]. The aim of this study is to assess the readiness for receiving desalinated water by considering technical, economical, and social aspects to achieve sustainability in water service.

II Materials and Methods

2.1 Overview of the Study Area

The study area of Gaza City serves a population of about 700 thousand inhabitants [8]. Gaza City suffers from a scarcity of drinkable water resources, since more than 97% of the extracted water is not suitable for human use.
There are three main sources of drinking water in the city: Mekorot, private desalination plants, and Gaza seawater desalination. Gaza City has non-revenue water of about 38%, and water is pumped from wells directly into the networks. The existing water networks consist of uPVC, steel, asbestos, and polyethylene pipes. The distribution schedule varies from one neighbourhood to another, and the average water access for households is about 3 hours a day [9].

2.2 Methodology

The methodology includes a technical part that assesses the status of the water networks, and an economical and social part that examines the extent of readiness to receive desalinated water.

A. Technical Part

The data were collected from different departments at Gaza Municipality and mapped spatially on the city map [9]. Extensive efforts were made to verify the collected data with the operator. The collected data include:

- Water quality data, collected through the water department. The actual water quality for each area in Gaza City was obtained and mapped spatially on a complete plan of the city.
- The age of the existing water networks, obtained from the GIS department for all projects implemented in the city since 2010 and linked to coordinates. The data were then classified by age into networks younger than 10 years and older than 10 years.
- The material types of the water networks, collected from the maintenance and GIS departments; the material type (asbestos, iron, uPVC) was mapped spatially over Gaza City.
- The pipe diameters of the existing water networks, collected from the water and GIS departments, identifying undersized pipes and problems witnessed during operation.
- The breaks in the water network, collected from the operation and maintenance department through the complaints-receiving system at the Gaza Municipality.
The archived complaints of network failures in the existing water network, their locations, and the rehabilitation method for the period 2018–2020 were collected. Thermal maps of breaks were generated for all neighbourhoods of the city to demonstrate the intensity of failures in the network, identify areas that need intervention, and guide the development necessary to raise the network's efficiency.

- The ages of the existing flowmeters, obtained from the licensing department along with the installation dates, and classified into two groups: younger than 10 years or older than 10 years. The data were classified by city neighbourhood to determine how many meters have exceeded the age of 10 years.
- The revenue percentage, obtained from the collection/financial department for each neighbourhood.

The collected data were analyzed spatially using geographic information systems, and thermal maps were generated to illustrate the numerical values for easy reading and better decision making.

B. Socio-Economic Part

The economical and social aspects were taken into consideration to study the readiness for receiving desalinated water, through a questionnaire targeting the residents of Gaza City. The questionnaire was developed for the study area using previous studies [10, 11, 12, 13], and was conducted between August 2020 and January 2021 with a representative sample of 380 out of 47,264 subscribers. The questionnaire was divided into:

- General household socio-economic and water resources data.
- Household satisfaction with water quality and quantity.
- The citizen's ability to pay the cost of drinking water and willingness to pay in the event of improving the quality of the supplied municipal water.

The questionnaire was distributed electronically to the target groups, due to the COVID-19 pandemic, using a Google tool. The data were analyzed using SPSS and Excel.
III Results and Discussion

The results of the technical and socio-economic aspects are addressed here.

3.1 Results of the Technical Part

The following are the main findings of the evaluation of the technical aspects of the existing water networks in Gaza City:

Age of existing flowmeters: 66% of flowmeters are older than 10 years, with the Old City neighborhood having the highest percentage at 80%. This neighborhood should therefore be given priority in future rehabilitation projects.

Breaks in existing water networks: Several neighborhoods, such as Judaida, Turkman, Daraj, Remal, Sabraa, and Nasser, have the highest records of breaks during 2018–2020. A heat map was prepared showing the density of the break points, as shown in Figure 1.

Age of existing networks: 68% of the existing water networks are older than 10 years, covering the whole city, as shown in Figure 2.

Pipe diameters of existing networks: Most maintenance activities took place on pipes of 2-inch diameter, which also increased the chance of pipe breaks (see Figure 3).

Material types of existing networks: 63.19% of pipe materials are uPVC, 18.78% steel, 17.64% PE, and 0.39% asbestos. Figure 4 shows the locations of asbestos pipes in Gaza City.

Water quality of the provided municipal water: The city was classified by total dissolved solids (TDS) into 3 categories: less than 500, 500–1500, and higher than 1500 mg/l. Figure 5 shows the TDS of the municipal water provided to customers.

The collection percentage of bills: 29% of subscribers paid their bills in 2020, which represents about 4% of the total bill values. The highest percentages were paid in the Remal and Sheikh Ejleen areas (8–10%) and the lowest in the Turkman and Judaida areas (2%).
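The three-way TDS classification above can be expressed as a small helper; the thresholds are the ones stated in the text, while the function and label names are illustrative:

```python
def tds_category(tds_mg_per_l: float) -> str:
    """Classify municipal water by total dissolved solids (TDS) using
    the three bands from the study: <500, 500-1500, >1500 mg/l."""
    if tds_mg_per_l < 500:
        return "less than 500"
    elif tds_mg_per_l <= 1500:
        return "500-1500"
    return "higher than 1500"

print(tds_category(320))   # less than 500
print(tds_category(900))   # 500-1500
print(tds_category(2400))  # higher than 1500
```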
Figure 1: Water pipe breaks in 2018 (top), 2019 (middle), and 2020 (bottom) in Gaza City.
Figure 2: Age of existing water networks in Gaza City.
Figure 3: Heat map of 2" pipe-diameter locations in Gaza City.
Figure 4: Locations of asbestos pipes in Gaza City.
Figure 5: TDS values of municipal water.

3.2 Results of the Socio-Economic Part

The socio-economic readiness for receiving desalinated water in Gaza City was assessed; the following are the main findings.

Monthly household expenditures: 16.5% of households have monthly expenditures of less than 1000 NIS, 37.6% between 1001–2000 NIS, 28.2% between 2001–3000 NIS, and 17.6% more than 3000 NIS.

Main water supply: The majority of households (95.3%) use municipal water as their main source, while 4.7% use private wells.

Monthly bill values of municipal water: 24.5% of households pay less than 50 NIS per month, 56.9% pay between 51–100 NIS, 15.3% pay between 101–150 NIS, and 3.3% pay more than 150 NIS. The results revealed that only a minority of households pay bills of more than 150 NIS for municipal water.

Schedule of water distribution: The majority of households (68.7%) follow a 2-day water distribution schedule, 16% follow a 3-day schedule, 13.6% receive municipal water on a daily basis, and a very small percentage (1.6%) follow a once-a-week schedule.

Monthly cost of drinking water: 29.2% of households pay less than 20 NIS for drinking water per month, 59.8% pay between 20–40 NIS, and 11.1% pay more than 40 NIS.

Source of drinking water: Most households (97.3%) use drinking water purchased from vendors, 1.1% use small household RO units, 0.2% use municipal water, and 1.4% use other sources.
Satisfaction with municipal water quality and the water distribution schedule: 49.6% of households are moderately satisfied with the municipal water quality in terms of salinity, while 36.7% are not. As for the water distribution schedule, the results showed that 55.1% are moderately satisfied, while 27.1% have a low degree of satisfaction.

Reasons for non-compliance with paying bills: 12.9% of households do not pay the bills due to the high cost, 43.5% due to low income, 19.1% due to the lack of water and its low quality, and 24.5% due to their dissatisfaction with the level of municipal services.

Willingness to pay for service improvement: 64% of households are willing to pay and accept an increase in the tariff rate if they receive desalinated water.

Affordability to pay: 54.3% of households can afford to pay less than 1 NIS/m3, 58.3% between 1–2 NIS/m3, and a minority can pay between 2–3 NIS/m3, in case the municipal water quality is improved by receiving desalinated water.

The relationship between the main water supply and the source of drinking water shows that 87.7% of households relying on the municipal water source buy drinking water from vendors and 11.1% rely on a home RO unit. In addition, 80% of households relying on private wells buy drinking water from vendors and 10% rely on a household RO unit. 57.3% of households relying on the municipal water source do not want to pay the water bill for socio-economic reasons and 42.7% because of the low level of service quality. Likewise, 40% of households relying on private wells do not want to pay for economic reasons and 60% because of the low level of service quality. The relationship between unwillingness to pay the bill and agreement to increase the tariff rate for improving the service shows that 37.9% are against raising the tariff rate.
Meanwhile, 68.1% of households with poor municipal services agree to raise tariffs for improving service. The majority of customers who agree to raise the tariff rate for improving the water quality and services do not prefer an increase of more than 2 NIS: 54.5% accept a tariff of 1 NIS, 42.2% accept a tariff of 1–2 NIS, and 3.3% accept a tariff of 2–3 NIS.

IV Conclusions

This study assessed Gaza City's readiness to receive desalinated water by measuring the technical, economical, and social aspects. Infrastructure development of the water networks is required, as 66% of the residential water flowmeters and 68% of the networks are older than 10 years. Furthermore, there is a close relationship between the locations of 2" pipes and the break points, which requires replacement of undersized pipes. Collection of bill values does not exceed 4%, representing 29% of the subscribers, which indicates a significant decline in the collection rate. The socio-economic study revealed that 64% of households are willing to pay more for improved water quality and water services, with a tariff rate of up to 2 NIS/m3.

Acknowledgment

Great thanks go to the Middle East Desalination Research Center and the Palestinian Water Authority for the financial support of this work.

References

[1] Palestinian Water Authority, "Palestine Status Report of Water Resources", final report, PWA, Palestine, 2020.
[2] Palestinian Water Authority, "Coastal Aquifer Management Program, CAMP, Integrated Aquifer Management Plan (Task 7)", final report, PWA, Palestine, 2000.
[3] A. Akram and S. Olmstead, "The value of household water service quality in Lahore, Pakistan", Environmental and Resource Economics, vol. 49, no. 2, pp. 173-198, 2011.
[4] S. Behailu, A. Kume, and B.
Desalegn, "Household's willingness to pay for improved water service: a case study in Shebedino District, Southern Ethiopia", Water and Environment Journal, vol. 26, no. 3, pp. 429-434, 2012.
[5] M. Doria, "Factors influencing public perception of drinking water quality", Water Policy, vol. 12, no. 1, pp. 1-19, 2010.
[6] M. Genius, E. Hatzaki, E. Kouromichelaki, G. Kouvakis, S. Nikiforaki, and K. Tsagarakis, "Evaluating consumers' willingness to pay for improved potable water quality and quantity", Water Resources Management, vol. 22, no. 12, pp. 1825-1834, 2008.
[7] Palestinian Water Authority, "Gaza Water Status Report", Water Resources Directorate, final report, PWA, Palestine, 2018.
[8] Palestinian Central Bureau of Statistics, "Projected Population in the Palestinian Territory", final report, PCBS, Palestine, 2017.
[9] Municipality of Gaza, "Water Department Annual Report", final report, MoG, Palestine, 2020.
[10] S. Assaf, "Existing and the future planned desalination facilities in the Gaza Strip of Palestine and their socio-economic and environmental impact", Desalination, vol. 138, no. 1-3, pp. 17-28, 2001.
[11] A. Kaliba, D. Norman, and Y. Chang, "Willingness to pay to improve domestic water supply in rural areas of central Tanzania: implications for policy", The International Journal of Sustainable Development & World Ecology, vol. 10, no. 2, pp. 119-132, 2003.
[12] Z. Lema and F. Beyene, "Willingness to pay for improved rural water supply in Goro-Gutu District of eastern Ethiopia: an application of contingent valuation", Journal of Economics and Sustainable Development, vol. 3, no. 14, pp. 145-159, 2012.
[13] H. Wang, J. Xie, and H. Li, "Water pricing with household surveys: a study of acceptability and willingness to pay in Chongqing, China", China Economic Review, vol. 21, no. 1, pp. 136-149, 2010.

Mazen Abualtayef is a professor of coastal engineering in the Environmental Engineering Department at the Islamic University of Gaza.
he is a water engineering expert with 25 years of versatile experience in managing, designing, and supervising infrastructure projects, especially water and coastal projects. he teaches courses such as port and coastal engineering, brine management, renewable energy for desalination, o&m of water-sewerage-stormwater networks, engineering economics, surveying and gis, environmental modeling, fluid mechanics, hydraulics, and numerical analysis. mohanad salha is the head of the project preparation department in the municipality of gaza. he holds a master’s degree in civil engineering and has 13 years of practical experience in the design and supervision of infrastructure and development projects in gaza. he has completed several international trainings in planning tools and has participated in several specialized committees for the preparation of strategic plans and the identification of development projects in the city. khalid qahman is an expert in water resources engineering who began his professional career as a water engineer and planner and has contributed to several interdisciplinary projects. currently, he is working as assistant chairman to the palestinian minister of environment and as a guest lecturer in postgraduate studies in different universities. his experience includes hydrological and environmental investigations and monitoring, water and wastewater systems, stormwater drainage assessment, design and management, irrigation network design, and climate change adaptation. he has conducted hydrologic, hydraulic and water quality modeling for various areas of palestine and morocco, also utilizing gis capabilities.
journal of engineering research and technology, volume 1, issue 1, march 2014. a novel elliptically shaped compact planar ultra-wideband antenna. mohamed ouda. abstract— a low profile planar elliptically shaped antenna for ultra-wideband applications is presented.
the antenna consists of a conducting patch, a dielectric substrate and a partial conducting ground plane. the patch has the shape of modified elliptical rings and is excited using a rectangular edge-fed microstrip feed line. the antenna size is 45 mm x 23 mm. the impedance bandwidth of the antenna extends from 3.5 ghz to 10.6 ghz, thus meeting the uwb system requirement. index terms— elliptical rings, printed antenna, ultra-wideband antenna. i. introduction ultra-wideband radio systems use a bandwidth extending from 3.1 ghz to 10.6 ghz to transfer data consistent with the regulations of the federal communications commission (fcc) [1]. microstrip patch antennas (mpa) are attractive candidates for use in developing ultra-wideband (uwb) antennas for short-range high-speed wireless communication networks due to their interesting features such as low cost, small profile and conformability [2]. good uwb antennas should have low return loss, a suitable radiation pattern and high efficiency over the whole bandwidth [3]. one of the challenges facing the development of uwb radios is developing antennas that meet the bandwidth requirements. the most straightforward way to improve the mpa bandwidth is to increase the patch-ground plane separation by using a thicker substrate [4], [5]. thick substrates, however, support surface waves that can increase mutual coupling in antenna arrays and possibly degrade the radiation pattern [6]. the bandwidth of an mpa can also be improved by combining several resonant structures into one antenna, such as by increasing the metallization layers, increasing the number of patches or adding extra components [7]. many antenna configurations have been used in uwb antenna design, such as square, circular, elliptical, pentagonal and hexagonal shapes [8-11]. multiple ring monopole antennas were introduced in [12] and [13]. several multiple ring antennas with different ellipticity ratios and ring thicknesses were investigated in [14]. nazlı et al.
presented an enhanced elliptical slotted planar dipole antenna design for uwb communication and impulse radar systems [15]. in this paper, a low profile patch antenna for ultra-wideband applications is presented. the patch consists of modified elliptical rings and is excited using a rectangular edge-fed microstrip line. a partial conducting ground plane is used for the impedance bandwidth enhancement of the antenna. ii. antenna design the geometry of the proposed antenna is shown in figure 1. it consists of a printed modified elliptical ring excited by a rectangular edge-fed microstrip line, the substrate and a short ground plane. the partial conducting ground and the symmetrical antenna shape have been used for bandwidth improvement in [11]. the antenna was designed on rogers rt/duroid 5880lz substrate with a dielectric constant of εr = 1.96, a height of h = 1.27 mm, and a loss tangent of 0.0009. the substrate has a length of 45 mm and a width of 23 mm. the width of the partial conducting ground plane is 23 mm and its length is 10 mm. figure 1. antenna geometry. m. ouda is with the electrical engineering department, islamic university of gaza, gaza, palestine. figure 2. top view of the antenna configuration (dimensions in mm: l1 = 45, l2 = 3.97, l3 = 4.13, l4 = 2.9, l5 = 2, l6 = 2.65, l7 = 2.9, l8 = 4.9, l9 = 7.7, w1 = 23, w2 = 8.48, w3 = 3.92). the top view of the modified elliptical ring patch configuration is shown in figure 2, where the dimensions of all parts are given in millimeters. the antenna was designed and simulated using ansoft’s high frequency structure simulator (hfss v12) [16]. figure 3 shows the variation of the return loss with respect to the ground plane length. it is clear that changing the ground plane length affects the resonant frequency and the return loss of the antenna. the ground plane length was chosen to be 10 mm. figure 3.
simulated return loss for different values of the ground plane length. iii. results and discussion the simulated results of the return loss (|s11|) and the standing wave ratio (swr) of the antenna for the frequency range 2-12 ghz are shown in figures 4 and 5, respectively. the figures show that the antenna impedance bandwidth is more than 7.5 ghz, from 3.5 ghz to 10.6 ghz, thus meeting the fcc uwb requirement. clearly, the antenna is well matched across the frequency range without any need for a balun. also, for an impulse system, the swr level of the antenna is a critical parameter for avoiding ringing effects. in order to avoid undesired ringing in an impulse system, the antenna input and the rf generator impedance should be matched over the wide frequency band. therefore, the antenna is very useful for impulse systems due to its low swr level over the wide frequency band. figure 4. the return loss. figure 5. the simulated swr. the antenna 2d radiation patterns at 4, 6, 8, and 10 ghz in both the e- and h-planes are shown in figures 6 and 7, respectively. the radiation patterns show that the antenna has a quite stable radiation pattern over its entire frequency band. figure 6. the 2-d polar radiation pattern at 4, 6, 8 and 10 ghz, phi = 0°. figure 7. the 2-d polar radiation pattern at 4, 6, 8 and 10 ghz, phi = 90°. figure 8. the 3-d radiation pattern at 4, 6, 8 and 10 ghz. figure 9. simulated maximum antenna gain. figure 10. the antenna group delay. the 3d far-field radiation patterns of the antenna at 4, 6, 8 and 10 ghz are illustrated in figure 8. furthermore, the simulated maximum gain over the frequency range from 2 ghz to 12 ghz is shown in figure 9, which shows that the antenna has acceptable gain over all of its bandwidth.
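the matching criterion behind the quoted 3.5-10.6 ghz bandwidth can be sketched numerically. the relations below (reflection coefficient to swr, and fractional bandwidth) are standard transmission-line formulas, not code from the paper:

```python
import math

def vswr_from_s11_db(s11_db):
    """convert a reflection coefficient level in db to the voltage
    standing wave ratio: vswr = (1 + |gamma|) / (1 - |gamma|)."""
    gamma = 10 ** (s11_db / 20.0)  # |s11| in db is negative, so 0 < gamma < 1
    return (1 + gamma) / (1 - gamma)

# the conventional matching criterion |s11| <= -10 db corresponds to vswr <= ~2
print(round(vswr_from_s11_db(-10.0), 3))  # -> 1.925

# fractional bandwidth of the simulated 3.5-10.6 ghz band
f_lo, f_hi = 3.5, 10.6  # ghz
fbw = 2 * (f_hi - f_lo) / (f_hi + f_lo)
print(round(100 * fbw, 1))  # -> 100.7 (percent)
```

a fractional bandwidth above 100% is consistent with the paper's claim of covering the full fcc uwb band.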
group delay is an important parameter in uwb antenna design, as it represents the degree of distortion of a pulse signal. the antenna group delay is shown in figure 10. from the figure, it can be seen that the group delay variation is less than 0.25 ns for all frequencies above 5.8 ghz; however, it increases to 1.5 ns over a narrow segment of the bandwidth in the lower frequency region. iv. conclusion a compact 45 x 23 mm low profile planar patch antenna for ultra-wideband applications was proposed. the antenna was excited using a rectangular edge-fed microstrip line. a partial conducting ground plane was used to enhance the bandwidth of the antenna. the effect of the ground plane length was studied and a suitable length was selected for the antenna. the impedance bandwidth of the antenna is about 7.5 ghz, extending from 3.5 ghz to 10.6 ghz. references [1] fcc, “federal communications commission revision of part 15 of the commission's rules regarding ultra-wideband transmission systems,” first report and order, fcc 02-48, apr. 2002. [2] y. chen and y. p. zhang, “a planar antenna in ltcc for single-package ultrawide-band radio,” ieee trans. antennas propagat., vol. 53, no. 9, pp. 3089-3093, sept. 2005. [3] c. d. zhao, “analysis on the properties of a coupled planar dipole uwb antenna”, ieee antennas and wireless propagation letters, vol. 3, pp. 317-330, dec. 2004. [4] j. bahl and p. bhartia, microstrip antennas. artech house, inc., london, 1980. [5] s. a. hosseini, z. atlasbaf, and k. forooraghi, “a new compact ultra wide band (uwb) planar antenna using glass as substrate,” journal of electromagnetic waves and applications, vol. 22, no. 1, pp. 47-59, 2008. [6] k. bhattacharyya and l. shafai, “surface wave coupling between circular patch antennas,” electronics letters, vol. 22, no. 22, pp. 1198-1200, oct. 1986. [7] a. a. abdelaziz, “bandwidth enhancement of microstrip antenna”, progress in electromagnetics research, pier 63, pp. 311-317, 2006.
[8] h.g., “bottom fed planar elliptical element uwb antennas”. ieee conf. on ultra wideband systems and technologies, november 2003, pp. 219-223. [9] k. c. l. chan, y. huang, and x. zhu, “a planar elliptical monopole antenna for uwb applications”. ieee conf. on wireless communications and applied computational electromagnetics, april 2005, pp. 182-185. [10] c. y. huang and w. c. hsia, “planar elliptical antenna for ultra-wideband communications”, electron. lett., vol. 41, no. 6, pp. 296-297, mar. 2005. [11] y.-j. ren and k. chang, “ultra-wideband planar elliptical ring antenna”, electronics lett., vol. 42, no. 8, april 2006. [12] c. t. p. song, p. s. hall, h. ghafouri-shiraz, and d. wake, “multi-circular loop monopole antenna,” electronics lett., vol. 36, no. 5, pp. 391-393, mar. 2000. [13] c. t. p. song, p. s. hall, h. ghafouri-shiraz, “multiband multiple ring monopole antennas,” ieee trans. antennas propagat., vol. 51, no. 4, pp. 722-729, april 2003. [14] a. mirkamali, peter s. hall, and mohammad soleimani, “elliptical multiple ring monopole antennas”, iee conf. on wideband and multi-band antennas and arrays (ref. no. 2005/11059), 7 sept. 2005, pp. 123-127. [15] h. nazlı, e. bıçak, b. türetken, and m. sezgin, “an improved design of planar elliptical dipole antenna for uwb applications”, ieee antennas and wireless propagation letters, vol. 9, pp. 264-267, 2010. [16] ansoft corporation, hfss v.12 software based on the finite element method.
journal of engineering research and technology, volume 9, issue 1, march 2022. the durability test on the potential of single rattan fibres. yuhazri m.y.1, pazlin m.s.2,3*, amirhafizan m.h.2. 1 faculty of mechanical and manufacturing engineering technology, universiti teknikal malaysia melaka, 2 faculty of manufacturing engineering technology, universiti teknikal malaysia melaka, hang tuah jaya, 76100 durian tunggal, melaka, malaysia.
3 department of mechanical engineering, politeknik melaka, 75250 melaka, malaysia. *e-mail: p051910004@student.utem.edu.my. https://doi.org/10.33976/jert.9.1/2022/2. abstract— currently, rattan yarns are used in the furniture business because they are widely accessible, economical, non-hazardous to health, and biodegradable; hence, using them as a scattering fibre in composite materials can help solve environmental problems in the future. the purpose of this study was to provide a technical examination of the tensile strength of a rattan single fibre composite reinforced with unsaturated thermoset resin. the goal of this study is to determine the tensile strength of rattan single fibre composites with varied fibre diameters ranging from 1 mm to a maximum of 5 mm. the specimen trial results are reported as tensile strength and compared against the tensile strength authorised by astm as a standardisation test. the study found that the maximum tensile strength and maximum impact were obtained by the composite with 5 mm fibre diameter. the morphology of the surface composition was examined using optical microscopy (om). index terms— single fibre, fibre diameter, optical microscopy, mechanical properties, young’s modulus. i introduction natural fibres have strong mechanical qualities, are renewable, are environmentally friendly, and are economically viable. as a result, they have received increased attention in recent times as underpinnings for matrix composites [1,2]. nevertheless, natural fibres have disadvantages due to the local climate, conditions of growth and the nature of the recovery process (pruning, enzyme treatment, etc.) [3,4]. furthermore, thermoset reinforcing plastic filler is lightweight, has improved mechanical properties, and is free of health dangers, whereas synthetics are expensive and need a lot of energy to produce.
natural fibres such as rattan are developing as cost-effective and seemingly environmentally better alternatives to synthetic polymers in composites, owing to the need for renewable fibre reinforced composites [5-7]. the ability to accurately and reliably quantify the tensile strength of natural fibres using a practical approach is thus critical for comparing different types of fibres and predicting the mechanical characteristics of their compounds. the single fibre tensile test is the technique most frequently used to determine fibre tensile characteristics [8-10]. for synthetic and natural fibres, this technique offers adequate strength and modulus values. in this study, we discuss the problems and limits of the single fibre test for rattan fibres. the rattan yarns were supplied by afr craft enterprise, plot a, mile 7, bukit gedong, tg. kling, melaka, while the resin supplied by chemibond enterprise sdn. bhd., petaling jaya, selangor was utilised to authorize the procedure. this study emphasizes the objective of evaluating the strength of single rattan fibre for the purpose of turning it into woven fabric. although some progress has been made towards rattan fibre, further research is needed on the upgrading potential of rattan fibre, especially from the family of calamus caesius. customarily, calamus caesius has been utilized by rural communities for making baskets, mats, and crafted works. the circular cane, skin peel and hyperbolic shape give significantly critical high-quality materials for very advanced rattan furniture fabrication. it is commonly used for tying and reinforcing bigger diameter rattan canes. the tone of the rattan is influenced by factors such as age, moisture content and the light conditions during growth [11-19].
the prototype sample for this study includes the original structure from the natural composite of rattan fibre. this study is thus concurrent with the government policy to uphold handicraft as part of the national industry. the malaysian handicraft development corporation (pkkm) defines handicraft as the study of producing useful or decorative equipment fully by hand or with simple tools. given the current state of the environment and the requirements of natural fibres for improvements in polymer composites, it was decided to examine the use of natural fibres as strengthening in polymer mixes. because natural resources are used in a variety of industrial applications and manufacturing operations, this also serves as a method of promoting economic growth in rural regions, in line with the malaysian government's concern for the diversification of local woodland-based products beyond the craft and furniture sectors. according to the findings of this review [1], the characteristics of natural reinforced with synthetic fibres embedded in polymers are promising. ii single fibre test towards natural fibres a confines of single fibre test carried out on natural fibres the single fibre test was initially used to evaluate the tensile characteristics of man-made fibres (astm d 3822-01). in this approach, the fibre cross-section area is calculated supposing that the fibre is a rectangle, which is true for most man-made fibres. the tensile strength of a single fibre may be easily calculated from the force at which it fails in the tensile test. most synthetic fibres are homogeneous and almost spherical because they are manufactured in a well-controlled and optimised process. the conservative single fibre test using a universal tensile test machine, as shown in figure 2, provides good and reliable tensile properties for synthetic fibres. natural fibres, on the other hand, are not the same as synthetic fibres.
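the influence of the assumed cross-section shape on the computed strength can be sketched numerically. the dimensions and failure load below are illustrative placeholders, not measurements from the study:

```python
import math

def area_circular(diameter_mm):
    """cross-section area assuming a round fibre: a = pi * (d / 2) ** 2."""
    return math.pi * (diameter_mm / 2) ** 2

def area_rectangular(width_mm, thickness_mm):
    """cross-section area under the rectangular assumption used for
    man-made fibres in the single fibre test."""
    return width_mm * thickness_mm

# illustrative (not measured) dimensions for a nominally 2 mm fibre
a_round = area_circular(2.0)         # ~3.14 mm^2
a_rect = area_rectangular(2.0, 2.0)  # 4.00 mm^2

# the assumed shape directly scales the computed strength sigma = f / a
# (n / mm^2 == mpa); here with a hypothetical 100 n failure load
force_n = 100.0
print(round(force_n / a_round, 1), round(force_n / a_rect, 1))  # -> 31.8 25.0
```

for an irregular natural fibre, neither assumption is exact, which is part of the scatter discussed in the following paragraphs.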
a natural technical fibre is frequently composed of a bundle of primary fibres, resulting in an uneven form depending on the quantity of fundamental fibres and how they are packed together. furthermore, the cross-section of an elementary fibre is not completely round. as a result, the fibre diameter visible under a microscope might vary greatly depending on the perspective. moreover, as illustrated in figure 1, natural fibres can include a hollow structure termed a lumen that appears as a tiny open conduit in the centre of the cell and may impair the real fibre area under mechanical loading [20,21]. the average diameter measurement is used to evaluate the single fibre diameter due to its uneven shape and nonuniformity along the fibre axis. to address this issue, the average value of five or more apparent diameters recorded at different places along the fibre was recommended. the fibre, on the other hand, tends to fail at the point of greatest stress concentration. if no other significant flaws occur, this site has the smallest cross-section. as a result, such a recommendation would be ineffective in resolving the issue. furthermore, natural and manufactured flaws or faults always exist on natural fibres; fibre failure at the smallest cross-sectional site may not occur at all [22, 23], as the failure location during tensile testing may not coincide with it. figure 1: lumen existence in rattan cross section on sem. figure 2: tensile test for single fibre. figure 3: sample for tensile test of single fibre composite. figure 4: open mould for sample preparation. b sample selection for testing as previously mentioned, fibre breakage before and after testing has enormous effects on performance; the results thus do not reflect the consistency of the fibre well.
first, single technical rattan fibres with no splitting were hand-selected. in this step, even fibres with visible defects should be omitted if such defects are not representative of the fibre design. furthermore, infected fibres would lead to an obscured cross-section picture that would result in a mistake in the cross-sectional area assessment, and so they were removed. the rattan fibre was fitted to the mould, and the specimen was then fixed with epoxy resin in the ratio of 100:29 (weight/weight) of epoxamite resin and 102 medium hardener, and dried at room temperature for 15 hours. figure 4 shows the epoxy resin and figure 5 shows the hardener. the resin mix is then poured into an open mould with a rectangular cavity as shown in figure 6. these shapes are in accordance with astm d3822-01 standards. after around 15 hours of curing time, only the best samples are peeled out of the cavity and are ready for mechanical testing, as shown in figure 3. according to the supplier's specifications, the combined viscosity of this compound is 650 cps by astm 2393. c fibre tensile testing procedure the pre-selected fibres are measured at the desired temperature and condition in accordance with astm d3822-01. for fast handling and grasping, the two fibre ends were each bound to a piece of tape. during the test the stress-strain curve of the fibre is recorded, and the fibre properties are determined in the next step. d cross-section area determination an exact cross-section area must be acquired for estimating the tensile strength and the fibre's modulus of elasticity. to enhance the cross-section area accuracy, a plane and clean cross-section of the fibre rupture is needed at the end. consequently, the tested fibre was cautiously pinned on both ends of the long rattan fibre.
proper sizing of the rattan fibre could prevent the specimen from moving in the epoxy liquid, to avoid a tilted cross-section. from the sem image, the hollow structure (lumen) can be seen clearly in some elementary fibres, as shown in figure 1; nevertheless, this lumen can be discounted because it was at most 1.5% of the whole section. this lumen is a normal feature of every natural fibre, as it grows with hydrophilic characteristics. iii experiments six types of specimens were prepared, including 100% epoxy without rattan fibre (r0) and enzyme-retted rattan with diameters from 1 mm (r1) to 5 mm (r5). epoxy was kindly supplied by chemibond enterprise sdn. bhd., petaling jaya, selangor. the rattan yarns were kindly supplied by afr craft enterprise from bukit gedong, tg. kling, melaka. tensile testing was performed using an instron 5548 microtester. the single fibre test was performed at room temperature in accordance with astm d 3822-01. tensile tests were carried out on the carefully chosen rattan fibres (as described in the previous section). both rattan fibre ends were fastened with plastic cellotape for ease of handling and gripping. a grip length of 9 mm and a gauge length of 50 mm were utilised for testing in this study. in all testing, a translation speed of 100 mm/min was used. the tensile characteristics of the fibre may be readily assessed using astm d 3822-01 because the cross-section area of the fibre is calculated and the corresponding force-elongation curve is acquired from the test. the diameter of 5 specimens for each type of fibre was determined before testing. the diameter-determination method was to measure the average fibre diameter at five different random locations on the fibre. the obtained diameter, d, is utilized to calculate the cross-section area using the area equation a = πr², with r = d/2, for the final mechanical data analysis.
iv results and discussion figure 3 depicts typical stress-strain curves of rattan fibre from the tensile test. with the exception of slight slippage at the start of the experiment, the rattan fibres show a single linear elastic deformation until rupture, without plastic deformation. similar plot behaviour and form may be found for vegetable fibres described in other publications [24, 25]. because the natural fibre is so weak, the little slippage at the start of the test was difficult to manage. rattan fibres behave similarly to many organic fibres, which frequently exhibit plastic deformation following elastic deformation and have high ductility. by precisely defining the beginning of the curve, the zero-strain value was determined. all fibres show brittle failure with strain to failure of less than 2%. the average diameters, tensile strengths, moduli, and strains of the r0, r1, r2, r3, r4 and r5 fibres calculated from 30 specimens can be found in table 1. it has been reported that the tensile strength and modulus of rattan fibres are within the ranges of 10-30 mpa and 1000-2000 mpa, respectively [16-18]. the average tensile strength of the rattan fibre composite is 13.01 mpa. the flexural strength of rattan fibre is found to be 131.56 mpa [12]. in order to decrease the default property difference, the average diameter may be replaced by the lowest value, because the fibre will probably fail at the smallest transverse section. the findings based on the minimal fibre diameter are likewise presented in table 1. the standard deviations of these tests were still within acceptable bounds, perhaps due to the accuracy of measuring the smallest cross-section with the digital caliper. as mentioned above, in addition to the natural fibres' inhomogeneity, the significant standard diameter variance is largely attributable to the inaccuracy of the method employed to estimate the fibre diameter.
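the averaging procedure described above (mean of five apparent diameters, then a = πr² and σ = f / a) can be sketched as follows; the diameter readings and failure load are hypothetical, for illustration only:

```python
import math
from statistics import mean

def tensile_strength_mpa(diameters_mm, max_load_n):
    """strength from the averaging procedure described in the text:
    take the mean of several apparent diameters along the fibre,
    then a = pi * (d / 2) ** 2 and sigma = f / a (n / mm^2 == mpa)."""
    d = mean(diameters_mm)
    area = math.pi * (d / 2) ** 2
    return max_load_n / area

# hypothetical readings at five random locations along one fibre
d_readings = [1.9, 2.1, 2.0, 2.2, 1.8]  # mm, illustrative only
print(round(tensile_strength_mpa(d_readings, 100.0), 2))  # -> 31.83
```

because the fibre tends to fail at its smallest cross-section, substituting `min(diameters_mm)` for the mean gives the alternative lower-bound-area estimate mentioned in the text.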
as a result, the strength and modulus values for this specific fibre might vary considerably, as seen in figure 5 through figure 8. the growth area, climatic variables, and retting circumstances all contributed to the relatively lower tensile and specific strength values. despite the fact that only five specimens were examined for each sample in this approach, it gives sufficiently low standard deviations, which are less than 11%. furthermore, the approach of calculating the cross-section area using the average diameter value at five random sites along the fibre is appropriate.

table 1: rattan fibre parameters, reinforced unsaturated thermoset, obtained from the single fibre test method.

legend  specimen     max load [n]  young's modulus [mpa]  ultimate tensile strength [mpa]  extension at break [mm]  specific strength (mpa.m3/kg)
r0      100% epoxy   1153.114      1508.772               16.158                           7.229                    724.665
r1      1 mm         1508.687      1398.091               17.956                           11.671                   937.169
r2      2 mm         1731.700      1654.205               23.279                           9.310                    1314.849
r3      3 mm         1711.619      1420.181               20.633                           9.669                    1350.086
r4      4 mm         1635.040      1378.460               20.492                           11.679                   1505.678
r5      5 mm         1996.070      1761.444               26.972                           9.164                    2503.607
figure 5: maximum load for rattan fibres. figure 6: young's modulus for rattan fibres. figure 7: ultimate tensile strength for rattan fibres. figure 8: specific strength for rattan fibres. v conclusions the diameter of the rattan single fibre is used to compute the cross-sectional area for the tensile characteristics calculation, with a slightly different standard deviation for the natural fibre owing to the irregular shape, flaws and imperfections along the rattan fibres. in addition, efforts to determine the cross-sectional area at the fault site were concentrated on selecting rattan fibres with a clean fracture end and on the procedures. in this test, rattan yarn was sliced into different diameters and pinned at both ends. with this method, the maximum ultimate tensile strength, young's modulus, maximum load, and specific strength of the rattan yarn were measured to be 26.972 mpa, 1761.444 mpa, 1996.07 n and 2503.607 mpa.m3/kg, respectively, with a low standard deviation of less than 11%.
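the conclusion that the 5 mm specimen dominates can be checked directly against the table 1 values quoted in the paper:

```python
# ultimate tensile strength [mpa] and specific strength (mpa.m3/kg)
# per specimen, copied from table 1 of the paper
table1 = {
    "r0 (100% epoxy)": (16.158, 724.665),
    "r1 (1 mm)": (17.956, 937.169),
    "r2 (2 mm)": (23.279, 1314.849),
    "r3 (3 mm)": (20.633, 1350.086),
    "r4 (4 mm)": (20.492, 1505.678),
    "r5 (5 mm)": (26.972, 2503.607),
}

best_uts = max(table1, key=lambda k: table1[k][0])
best_specific = max(table1, key=lambda k: table1[k][1])
print(best_uts, best_specific)  # -> r5 (5 mm) r5 (5 mm)
```

both maxima fall on r5, matching the figures quoted in the conclusions.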
acknowledgement it is with gratitude that the authors acknowledge the support of universiti teknikal malaysia melaka (utem) for this research through the pjp-crg grant scheme: pjp/2019/ftkmp-amc/crg/s01702. the equipment and technical support provided by the faculty of mechanical and manufacturing engineering technology (ftkmp) at utem are appreciated. the corresponding author also acknowledges the scholarship provided by the jabatan pendidikan politeknik dan kolej komuniti of the kementerian pengajian tinggi. references [1] yaakob, m. y., m. p. saion, and m. a. husin, "potential of hybrid natural / synthetic fibers for ballistic resistance: a review", technology reports of kansai university, issn: 04532198, vol. 62, issue 07, august 2020. [2] yaakob, m. y., m. p. saion, and m. a. husin, "a review on potential of natural and synthetic composite for ballistic resistance", e-proceeding of the 10th national conference in education technical & vocational education and training (cie-tvet) 2020, e-isbn: 978-967-2258-29-2. [3] akin, d.e., foulk, j.a., dodd, r.b. and mcalister iii, d.d., 2001. enzyme-retting of flax and characterization of processed fibers. journal of biotechnology, 89(2-3), pp.193-203. [4] morrison iii, w.h., archibald, d.d., sharma, h.s.s. and akin, d.e., 2000. chemical and physical characterization of water- and dew-retted flax fibers. industrial crops and products, 12(1), pp.39-46. [5] n. malhotra et al., "a review on mechanical characterization of natural fibre reinforced polymer composites," journal of engineering research and studies, vol. 3, pp. 75-80, 2012. [6] h. ismail et al., "the effects of rattan filler loading on properties of rattan powder-filled polypropylene composites," bioresources, vol. 7(4), pp. 5677-5690, 2012. [7] h. y. wai et al., "particleboards derived from rattan fibre waste," universities research journal, vol. 4, p. 3, 2011. [8] simkins, v.r., alderson, a., davies, p.j.
and alderson, k.l., 2005. single fibre pullout tests on auxetic polymeric fibres. journal of materials science, 40(16), pp.4355-4364. [9] qian, h., bismarck, a., greenhalgh, e.s. and shaffer, m.s., 2010. carbon nanotube grafted silica fibres: characterising the interface at the single fibre level. composites science and technology, 70(2), pp.393-399. [10] boshoff, w.p., mechtcherine, v. and van zijl, g.p., 2009. characterising the time-dependant behaviour on the single fibre level of shcc: part 1: mechanism of fibre pull-out creep. cement and concrete research, 39(9), pp.779-786. [11] n.v. retch, p.s. ujeniya, r. k. misra, "mechanical characterization of rattan fibre polyester composite", procedia materials science, 6 (2014), pp. 1396-1404. [12] osoka, e.c., onukwuli, o.d., "optimum conditions for mercerization of rattan palm fiber", ijemr (2015), pp. 144-154. [13] h. ismail et al., "the effects of rattan filler loading on properties of rattan powder-filled polypropylene composites," bioresources, vol. 7(4), pp. 5677-5690, 2012. [14] h. y. wai et al., "particleboards derived from rattan fibre waste," universities research journal, vol. 4, p. 3, 2011. [15] kalkani, tushal, nikunh rachch, and prashant ujeniya, "comparative analysis of coir and rattan fiber composites for bulletproof jacket vest application", in proceedings of international conference on advancements in computing & management (icacm), 2019. [16] s. islam, s. azmy, and a. almamun, "comparative study on mechanical properties of banana and rattan fiber reinforced epoxy composites", am. j. eng. res. (ajer), vol. 8, no. 2, pp. 1-6, 2019. [17] obilade, i.o. and olutoge, f.a., "flexural characteristics of rattan cane reinforced concrete beams", international journal of engineering and science, 3, 38-42, 2014. [18] akinyele, j.o. and olutoge, f.a.
“properties of rattan cane reinforced concrete façade subjected to axial loading”. journal of civil engineering and architecture, 5, 1048-1052. 2011 [19] lucas, e.b. and dahunsi, b.i.o. “bond strength in concrete of canes from three rattans species”. journal of applied science, engineering and technology, 4, 1-5. 2004 [20] bos, h. l., and a. m. donald. "in situ esem study of the deformation of elementary flax fibres." journal of materials science 34.13: 3029-3034. 1999 [21] bos, h. l., müssig, j., & van den oever, m. j. “mechanical properties of short-flax-fibre reinforced compounds”. composites part a: applied science and manufacturing, 37(10), 1591-1604. 2006. [22] bruce, d.m. and davies, g.c., “effect of environmental relative humidity and damage on tensile properties of flax and nettle fibres”. text. res. j, 68(9), pp.623-629. 1998. [23] arbelaiz, a., cantero, g., fernandez, b., mondragon, i., ganan, p. and kenny, j.m., “flax fiber surface modifications: effects on fiber physico mechanical and flax/polypropylene interface properties”. polymer composites, 26(3), pp.324-332. 2005 [24] nechwatal, a., mieck, k.p. and reußmann, t., “developments in the characterization of natural fibre properties and in the use of natural fibres for composites”. composites science and technology, 63(9), pp.1273-1279. 2003. [25] baltazar-y-jimenez, a., bistritz, m., schulz, e. and bismarck, a., “atmospheric air pressure plasma treatment of lignocellulosic fibres: impact on mechanical properties and adhesion to cellulose acetate butyrate”. composites science and technology, 68(1), pp.215-227. 2008. yuhazri m.y is an associate professor working as the dean of industrial collaborative at the university of technical malaysia malacca. he is also working as a senior university lecturer in the field of manufacturing and mechanical engineering technology. his research interests are in the area of composite and green technology. 
he obtained his phd degree in mechanical engineering from the national defense university of malaysia in 2013.
pazlin m.s. is a phd candidate and researcher at the university of technical malaysia malacca. he obtained his msc degree in manufacturing engineering from the university of technical malaysia malacca in 2011.
amirhafizan m.h. is a researcher in the field of manufacturing and mechanical engineering technology. his research interests are in the area of composite and green technology. he obtained his phd degree in manufacturing engineering from the university of technical malaysia malacca in 2021.
nuraziyati s. is a senior lecturer at the polytechnic of malaysia malacca. she obtained her msc degree in structural civil engineering from the university of technology malaysia kuala lumpur in 2013. she is a referee for a number of scientific journals in the above areas. she has joined several research and building structure projects related to material stability and structural strength issues. she is also a member and organizer of several scientific, academic and social committees and events.

journal of engineering research and technology, volume 3, issue 1, march 2016

gaza metro network route site selection
shafik jendia 1, mohammed skaik 2
1 professor of highway engineering and infrastructure planning, sjendia@iugaza.edu.ps
2 m.sc. civil engineer - infrastructure and development, and mba

abstract—this paper aims at selecting the best route for a metro network in the gaza strip as a strategic solution to current and future transportation challenges. geographic information systems (gis), multi-criteria decision making methods (mcdm), and expert systems (es) have been used extensively in solving metro component site selection problems.
the planning process in this paper mainly involves identifying the optimal locations for the main components of the system, which are (1) stations (end and intermediate stations) and (2) the best optimized route. the locations of such components can be classified as the best ones if they satisfy certain criteria such as engineering, environmental, economic, social, and institutional requirements. the results show that 3 end stations have been chosen to be the origin and destination stations. they also show that the total proposed metro line length is 51.759 km, covering all the gaza strip governorates. 57 metro stations have been chosen and distributed along the proposed metro route, which branches into two routes starting from deir albalah and reaching the two destinations in south gaza.
index terms— gaza metro, public transportation, urban planning.

1. introduction
the growing number of vehicular trips by cars and other means, which results in traffic congestion, air pollution and traffic accidents, has become a major concern in urban areas. investments in high-capacity rail-based mass transit systems are being promoted to arrest this trend (advani and tiwari, 2005). a metro is to be understood as "a tracked, electrically driven local means of transport, which has an integral, continuous track bed of its own (large underground or elevated sections)". metro systems have been introduced in many international cities. they operate in higher-density cities in inner metropolitan areas and rely on interchanges with suburban rail systems to serve commuters from further afield (australian department of infrastructure, planning, and natural resources, 2004).

2. problem statement
according to the palestinian central bureau of statistics (2014), the total palestinian population is 4,550,368, of which 1,760,037 live in the gaza strip.
the gaza strip is a narrow strip on the mediterranean sea with a total area of 365 km2 and total land boundaries of 62 km. the boundary with egypt has a length of 11 km while the boundary with israel counts 51 km in total. the total coastal line is 40 km. on the other hand, the natural increase rate of the population in the gaza strip is 3.41%. the problem statement of this paper can be diagnosed in five major features that summarize the current and future problems in the transportation system and its infrastructure, including the existing and potential road networks, in palestine as a whole and in the gaza strip particularly, as shown in figure (1).
figure (1): features of thesis problem statement

3. methodology
a diagnostic of the current status of the various transportation sub-sectors has been prepared. an assessment of such a diagnostic study results in defining key transportation sectoral and sub-sector issues, on the basis of which the sectoral developmental framework can be well defined. a number of general sectoral key issues, as well as some sub-sector issues, are presented here. these include the following (abu-eisheh and al-sahili, 2006):
- several restrictions have been imposed on the movement of passengers and goods within the palestinian territories and with the outside world.
- there is considerable damage to the various components of the transportation sector as a result of israeli military actions, which has to be repaired or replaced.
- there is still no linkage or free passage between the west bank and the gaza strip. the two major palestinian gates to the outside world and markets, the airport and the seaport, both located in the gaza strip, are not being used due to military operations.
- inadequate road facilities restrict the accessibility of a considerable portion of the population, and access to reliable public transportation services is limited.
- the structure and organization of public transportation are poor.
- there are political constraints which retard the development of a sound and efficient transportation system.
- development activities in the transportation system have not been following a national transportation master plan.
- funding is a limiting factor for both the development of the transportation sector and the maintenance of the road network.

4. relevant effort
the road network in the gaza strip is considered the main foundation of its infrastructure. it is the only way for people and for light and heavy goods carriers. the road network should be developed, improved and continuously maintained to provide a safe and secure driving environment for the rapidly increasing driving population (lubbad, 2013). few studies have tried to investigate possible solutions to such a crisis. (el-turkmani et al., 2003) suggested that a railway track in the gaza strip would be a possible solution. (lubbad, 2013) took the road networks of the gaza strip as a case study: he divided the road networks into groups, collected representative road data, surveyed road information such as road widths, number of lanes and counting locations for the representative roads, and surveyed vehicle information such as the number of vehicles, number of axles and axle weights. that is very useful as a database for this research. a preliminary study of a possible track for a metro project in the gaza strip was undertaken by (jendia and hussein, 2011). according to al-yazori (2013), the majority of the gaza strip residents use taxis for their movement. relying completely on taxis and neglecting the availability of public transport could lead to more traffic congestion and negative impacts on the environment in the future.
in his thesis proposing a "metro" route as a public transportation mode in gaza city, al-yazori examined the community's approval of a metro network as public transportation through a questionnaire. a random sample was selected and 96 questionnaire papers were distributed among the potential users of a metro facility in gaza. the result was full satisfaction by the community with using a "metro" route as a public transportation mode in gaza city. al-yazori recommends extending the metro network to include other areas in the gaza strip to solve traffic congestion in the future.

5. metro systems characteristics
metro systems have been introduced in many international cities worldwide. these systems have limited seating and several doors on each side of the carriage. this design allows a high capacity with a larger number of standing passengers and faster times for passengers to move in and out of the carriages, reducing station dwell times. these metro systems operate in higher-density cities in inner metropolitan areas and rely on interchanges with suburban rail systems to serve commuters from further afield (australian department of infrastructure, planning, and natural resources, 2004). a metro system generally has the following characteristics:
- capacity to transport up to 20,000 people per hour;
- typical average operating speeds between 35 and 65 kilometers per hour, depending on geometry and other design parameters;
- exclusive rights of way and protected at-grade crossings, with grade separation preferred;
- a corridor width of 12 meters for track sections and 18 meters at stations;
- stations spaced between 1 and 2 kilometers apart, depending on the adjacent residential density or the location of employment and commercial land use; and
- a minimum radius of 50 meters with a maximum gradient of 6 to 8 percent.

6.
strategies for corridor planning
the planning process for a metro network includes three sets of parameters: short-term parameters, long-term parameters, and the service efficiency level. these parameters are shown in figure (2).
figure (2): metro network planning parameters. short-term parameters include design characteristics such as typical operating speeds and corridor dimensions; long-term parameters require the metro network to satisfy sustainable development aspects such as economic ones; service efficiency parameters require the planning and design process to serve the influence area.

7. route site selection of metro networks
building a new urban transportation facility is a major long-term investment for owners and investors. route/site selection for such a capital project (e.g. a corridor rapid transit project like a metro-rail system) is a crucial decision made by owners/investors that significantly affects their profit and loss. decisions related to the locations of the facilities (e.g. metro-rail routes, stations, depots, etc.) influence the economy of the metropolitan area. routes/sites that satisfy the screening criteria are subjected to detailed evaluation. geographic information systems (gis), multi-criteria decision making methods (mcdm), and expert systems (es) have been used extensively in solving site selection problems. however, each of these techniques has its own limitations in addressing spatial data, which is indispensable when dealing with spatial decision problems such as a route or site selection problem. an es is used to assist the decision makers in determining values for the screening criteria of the site screening phase, building the decision model, and assigning weights to the attributes used as evaluation criteria in the site evaluation phase (farkas, 2009).

8.
spatial multi-criteria evaluation (smce)
spatial multi-criteria decision analysis (smca) is a process that combines and transforms geographical data (the input) into a decision (the output). this process consists of procedures that involve the utilization of geographical data, the decision maker's preferences and the manipulation of data. in the context of route/site selection for urban transportation facilities, the value-focused approach has many advantages over the alternative-focused approach (sharifi and retsios, 2004). according to eldrandaly, eldin, and sui (2003), the screening criteria include multiple measures, such as engineering, economic, institutional, social, and environmental factors. the goal in a route/site selection project is to find the best location with desired conditions that satisfy predetermined selection criteria. the various elements of this criteria structure are briefly described as follows. goal and objectives: the goal of this framework is to identify an effective public mass transportation system for a metropolitan area, integrated with efficient land use, so that it meets the present and long-term socio-economic and environmental requirements of the residents of the target territory.

9. planning and designing a metro network for the gaza strip
the route site selection process mainly involves identifying the optimal locations for the main components of the system. the locations of such components should satisfy certain criteria such as engineering, environmental, and economic requirements. in other words, the favourable location is the one that reduces costs, has less impact on the environment, and can be used without any construction or engineering constraints. the researchers suggest a potential metro line that links the north of the gaza strip with its south. this configuration is not randomly selected, but is based on the analysis of future travel demand.
for example, the eastern part is characterized by agricultural and industrial regions, which could be a source of attraction for many workers and employees now and in the future. the western part is well known for tourist attractions. in addition, the operation of the sea port of gaza would attract more and more people. the design process went through the following steps:
1. at the first step, it is crucial to state the problem precisely. the problem can be formulated as determining the best locations for the proposed metro lines and their stations. this step also involves breaking the main problem down into sub-problems to make it easier to handle.
2. at the second step, the data needed for solving the problem is identified and collected from different sources. the data is identified based on the criteria defining the optimal sites.
3. the third step constitutes the core of this work, because here the data is geo-processed and spatially analysed to make decisions regarding the best locations for metro lines. this step involves a series of geo-processes expressed as a cartographic model. the model builder tool in arcgis is used for this purpose.
4. the fourth step includes the evaluation of the results. if the results are not satisfactory, the proposed methodology should be revised.

10.1 stating the problem and objective
the main problem is finding the best locations for the metro system components, i.e. identifying the optimal sites for stations and routes. that can be achieved by considering the criteria that satisfy the engineering, environmental, and economic requirements. on the one hand, the optimal site for a station should satisfy the following criteria:
1. it should be in close proximity to highly densely populated areas.
2. it should be near vital places, such as universities, schools, shopping centres, etc. those places play an important role in trip generation and attractiveness.
3.
it should be in close proximity to the existing important intersections. the most important intersection is one with the following characteristics:
a. it has a high number of available parking spaces.
b. it has a large space.
c. it has a high traffic volume.
4. the distance between stations should be 1 km.
5. the walking distance travelled by pedestrians to any proposed station should not exceed 0.5 km.
6. the station should be on suitable land.
7. for construction and environmental considerations, the site should be selected where the water table level is deep.
8. the corridor width at the station should be at least 18 m.
the aforementioned criteria are specified for intermediate stations; end stations need extra criteria, such as being in a large area/space for parking and maintenance works, and being far away from the borders for security purposes. on the other hand, the optimal site for a route is the one that keeps the cost at its least value. therefore, it should satisfy the following criteria:
- its intersection with existing buildings should be avoided as much as possible.
- the route should be far from the water table.
- the soil type should preferably be sandy.
- the slope should be as minimal as possible; the maximum gradient should be 6 to 8%.
- the corridor width should be 12 meters for track sections.

10.2 data collection and pre-processing
data is one of the main components of any problem-solving process. therefore, the required data should first be identified and collected from relevant sources. the data can be easily identified from the criteria mentioned in the previous section. when the data has a spatial context, it is referred to as layers. the data needed is listed in table 1.

table 1: data required for spatial analysis
| criteria | data required | data model |
| 1. close to the highly dense areas | population density | table |
| 2. near the vital places | vital places (e.g. universities, public facilities, ...) | vector layer: points |
| 3. close to the existing important intersections | important intersections; parking places | vector layer: points; vector layer: points |
| 4. walking distance not to exceed 0.5 km | existing road network | vector layer: polylines |
| 5. station should be on suitable land | land use | raster layer |
| 6. water table level should be deep | water table level | raster layer |
| 7. intersection of routes with existing buildings should be avoided | existing buildings | vector layer: polygons |
| 8. soil type preferably sandy | soil types | raster layer |
| 9. slope as minimal as possible | digital elevation model | raster layer |

once the data was collected, it was inspected and pre-processed to bring it into a suitable format for the next step, the analysis. the pre-processing includes converting all the data into suitable gis formats. for example, some data was collected as cad files (e.g. buildings) and excel files (e.g. water table elevations); cad and excel files were converted into environmental systems research institute (esri) geodatabase feature classes. furthermore, all layers should have the same coordinate system, which is the palestine 1923 palestine grid. the following sections describe samples of the collected data in more detail. the required data were collected from local sources such as municipalities, relevant ministries, and gis professionals. the collection process was one of the most difficult tasks in this work because of the lack of consistent and accurate data; even where data was available, it was difficult to obtain due to legal constraints. more information and a detailed description and analysis of these data are given in appendix (1).
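since all layers must share the palestine 1923 palestine grid before any overlay analysis, one pre-processing step is to verify each layer's coordinate system. the sketch below is a minimal python illustration only: the `Layer` record and `check_crs` helper are hypothetical names invented for this note (epsg:28191 is the commonly used code for the palestine 1923 grid), not part of the authors' esri workflow.

```python
from dataclasses import dataclass

# target projected coordinate system used throughout the study
# (palestine 1923 palestine grid, epsg:28191)
TARGET_CRS = "EPSG:28191"

@dataclass
class Layer:
    name: str
    crs: str
    kind: str  # "vector" or "raster"

def check_crs(layers, target=TARGET_CRS):
    """return the names of layers whose crs differs from the target;
    those layers would need re-projection before overlay analysis."""
    return [lyr.name for lyr in layers if lyr.crs != target]

layers = [
    Layer("buildings", TARGET_CRS, "vector"),     # converted from cad
    Layer("water_table", "EPSG:4326", "raster"),  # imported from excel as lat/lon
]
print(check_crs(layers))  # -> ['water_table']
```

in practice the flagged layers would then be re-projected with gis software before being loaded into the geodatabase.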
10.3 spatial analysis
after data collection and pre-processing, the data is analyzed spatially and non-spatially to come to the final solution of the problem. the data was analyzed using the spatial analysis techniques provided by arcgis desktop 10.1. the analysis concerned with selecting the best sites for facilities is known as suitability analysis; such analysis has been used extensively for selecting the optimal regions for metro stations. the procedure followed for identifying the optimal sites can be summarized in the following three steps:
1. select the optimal sites for the end stations (master origin and master destination).
2. select the least cost path between the master origin and master destination.
3. distribute the intermediate stations along that path according to the criteria specified previously.
the idea used here is to rank the regions of the gaza strip according to the criteria mentioned previously. the scale 1-10 is used for this purpose. if a location is ranked 10, it is the most suitable location for a metro station, meaning 100% of the criteria have been satisfied. on the other hand, if a location is ranked 1, it is the least suitable location for a metro station, which can be interpreted as 10% or less of the criteria having been satisfied. in addition, the value 0 means that the location is restricted and forbidden.

10.3.1 site selection of end stations
the optimal sites for end stations should satisfy the following criteria:
1. the station should be on suitable land.
2. the station should be far away from the borders, including the coastal line.
3. the station should be located in rafah or the northern governorates, since these are the border governorates, while the gaza strip is a narrow, 40-km long slice of land between the mediterranean to the west and the negev desert to the east.
4.
the area should be large enough that the station can also serve as a parking and maintenance area for trains.
after the data is collected and pre-processed, a sequence of geo-processes is performed to answer the basic question of where the optimal sites for end stations should be located. please refer to the cartographic model in appendix 2.

10.3.2 site selection of the least cost route
after the selection of the optimal sites for end stations, the major routes of the metro system can be constructed by searching for the least cost paths. the cost could be in terms of environmental, economic, or construction costs. the least cost path should satisfy the following criteria:
1. intersection with existing buildings should be avoided.
2. the route should be far from the water table.
3. the soil type should preferably be sandy.
4. the maximum gradient should be 6 to 8%.
5. the corridor width should be 12 meters for track sections.
6. the route should be close to vital regions (universities, parks, main stations, ...).
7. the route should be close to densely populated areas.
in order to solve the problem of finding the least cost path for the metro lines, some of the geo-processing tools provided by arcgis are used extensively to make the final decision. the steps can be summarized as follows:
1. the above criteria are combined and weighted using the simple linear combination method as explained previously. the output is a raster layer called the cost raster surface, with values from 1 to 10.
2. the cost distance tool is employed to compute the least cumulative cost distance of each pixel from the origin to the destination.
3. the output of the cost distance is input to the cost path tool, which computes the least cost path from the origin to the destination.

10.3.3 site selection of intermediate metro stations
criterion 1: population density
the final output of this step is a raster composed of cells. each cell represents a region of size 30 m x 30 m that could be the core of a station.
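the cost distance and cost path steps of section 10.3.2 can be imitated outside arcgis by a shortest-path search over the cost raster. the following is a deliberately simplified sketch (pure-python dijkstra, 4-neighbour moves, no cell-size or diagonal weighting), not the arcgis algorithm:

```python
import heapq

def least_cost_path(cost, start, goal):
    """dijkstra over a cost raster: accumulate per-cell costs from
    start to goal, then backtrack the cheapest route."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}  # start cell cost is included
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # backtrack from goal to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# toy 3x4 cost surface (1 = cheap, 9 = expensive, e.g. built-up cells)
cost = [[1, 9, 1, 1],
        [1, 9, 1, 9],
        [1, 1, 1, 9]]
path, total = least_cost_path(cost, (0, 0), (0, 3))
print(total)  # -> 8
```

on this toy surface the cheapest route detours around the high-cost cells rather than crossing them, which is exactly the behaviour the paper relies on to keep the corridor away from existing buildings.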
each cell has a value representing how suitable it is for a metro station. the regions with the highest population density receive a rank of 10, and those with the lowest density receive a rank of 1. the population data is obtained for each municipality in the gaza strip. the processes performed to get the raster suitability map are summarized as follows:
1. the population density was estimated for each region in the gaza strip. arcgis spatial analyst provides tools for this purpose; the kernel density tool is used in this work.
2. the output of the kernel density tool is a raster composed of cells, each with a value representing population density.
3. the density raster is reclassified to new values ranging from one to ten (1-10). the reclassify tool is employed for this purpose. the cells with the highest density values get the new value 10, and those with the lowest density values get the new value 1. the output raster of this step will be overlaid with the raster layers from the other criteria, and averaged to get the final suitability map.

criterion 2: near vital places
the final output of this step is a raster composed of cells. each cell represents a region of size 30 m x 30 m and has a value representing how suitable it is for a metro station. the regions closest to vital places receive a rank of 10, and those furthest away receive a rank of 1. the processes performed to get the raster suitability map of this criterion are as follows:
1. for each region in the gaza strip, the distance to the nearest vital place is computed using the euclidean distance tool. the output is a raster composed of cells, each with a value representing the distance to the nearest vital place.
2. the distances obtained in the previous step are reclassified. the cells with the highest values get a new value of 1 and those with the lowest values get 10.
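the reclassify step used in criteria 1 and 2 amounts to a linear rescale of raster values onto the 1-10 suitability scale, inverted when small input values are good (as with distance to vital places). below is a minimal sketch, with `reclassify` and its `low_is_good` flag as hypothetical names rather than the arcgis tool's interface:

```python
def reclassify(raster, low_is_good=False):
    """linearly rescale raster values to suitability ranks 1-10.
    with low_is_good=True, small inputs (e.g. distance to the nearest
    vital place) get high ranks, as in criterion 2."""
    flat = [v for row in raster for v in row]
    lo, hi = min(flat), max(flat)

    def rank(v):
        if hi == lo:
            return 10  # uniform raster: every cell equally suitable
        r = 1 + round(9 * (v - lo) / (hi - lo))
        return 11 - r if low_is_good else r

    return [[rank(v) for v in row] for row in raster]

# toy distance raster (metres to the nearest vital place)
dist = [[0, 500], [1200, 2000]]
print(reclassify(dist, low_is_good=True))  # -> [[10, 8], [5, 1]]
```

in the full workflow the per-criterion rank rasters produced this way are then overlaid and cell-wise averaged into the final suitability map.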
the output raster of this step will be overlaid with the raster layers from the other criteria, and averaged to get the final suitability map.

criterion 3: close to the existing important intersections
a procedure for reclassifying the intersections according to their importance is adopted. before introducing the procedure, we need to specify what makes an intersection important. an intersection is classified as important if:
- it is located on regional and primary roads, and
- it is located in a vital, active and dense area.
the procedure adopted for identifying the important intersections is summarized as follows:
1. all intersections are ranked according to their position on roads. the regional roads get the highest score and the local roads get the lowest.
2. all intersections are also ranked according to their proximity to the vital places.
3. the ranks are combined using the weighted linear combination, assuming equal influence for both ranks.
4. the intersection that gets the highest rank is classified as the most important.

10. the results of the gaza metro network planning process
the results shown in figure (3) highlight that 3 end stations have been chosen to be the origin and destination stations. they also show that the total proposed metro line length is 51.759 km, covering all the gaza strip governorates and serving the most vital places. 57 intermediate stations are distributed along the proposed metro route, which also serves the most important intersections and vital places in the gaza strip governorates. the proposed metro facility seems to be a promising public transportation mode in the gaza strip. it will facilitate the movement of goods and passengers for different usages across the gaza strip governorates.
in addition, it will facilitate the potential regional connection of the gaza strip, besides the physical connection of the gaza strip with the west bank, reaching towards a national framework for developing and improving the palestinian public transportation infrastructure and facilities. figure (3) also shows the best possible route and the intermediate stations of the metro network. it shows that the best origin metro station is located in the um-el naser area, with coordinates (34o 32' e, 31o 43' n), while two destination stations have been chosen in the east of khan yunis governorate, with coordinates (34o 20' e, 31o 17' n), and in rafah governorate, with coordinates (34o 15' e, 31o 19' n). this way, the gaza strip's two main border crossings, the rafah crossing and the beit hanoon (erez) crossing, can be served by these origin and destination stations. the other border crossings of the gaza strip can be served by the intermediate stations, assuming that goods movement will be served by the metro. the least cost route runs throughout the gaza strip governorates, as shown in figure (3), with a total length of 51,759 m. the route is integrated with the main vital places and roads of the gaza strip, as set out in the adopted methodology; further, deeper investigation of other structural design components should be carried out. the corridor minimizes the intervention on existing facilities and infrastructure as much as possible by taking into consideration all the criteria mentioned in the previous sections. figure (3) also shows that the metro network is divided into two branches starting from deir albalah governorate, with coordinates (34o 21' e, 31o 22' n) for the splitting point, reaching the final destinations in khan yunis and rafah governorates. the intermediate stations are located along the route, spaced at 500 m inside the governorates or wherever there are vital places that should be served by these stations.
on the other hand, the intermediate stations are spaced at 1.0 km between the gaza strip governorates, where there is a light duty on the metro stations. the total number of intermediate stations is 57, with each station located at a suitable place based on the criteria mentioned previously.
figure (3): gaza metro network components

11. recommendations
- the authors recommend adopting the proposed metro network for the gaza strip as a feasible solution to current and future challenges.
- the study recommends institutionalizing public transportation systems in order to provide better services for passengers in the gaza strip.
- environmental impacts should be investigated to minimize the ecological consequences of such a strategic infrastructure project.
- economic factors should be studied to highlight the economic impacts of establishing and running the metro network in the gaza strip.
- a customer profile should be developed for the metro facility to help improve the service and establish best practice for running the facilities, taking into consideration flexibility, convenience of reaching stops/stations, and speed.
- national strategies and actions are needed to ensure that the corridor can meet the long-term performance envisioned by corridor partners. these strategies include developing capital projects that will address significant capacity deficiencies and/or bottlenecks, other less costly improvements that address specific safety and/or operational issues, and policy-type directives that proactively promote development of the corridor vision through local ordinances and access guidelines; in short, long-range transportation funding solutions that are realistic, innovative, and focused.

references
[1] abu-eisheh, s., al-sahili, kh., 2006. "the framework for the development of a medium-term transportation program for an economy in transition: the palestinian case".
the palestinian conference for development and reconstruction in the west bank. march 14-15, 2006. [2] advani, m., tiwari, g., 2005. "evaluation of public transport systems: case study of delhi metro". transportation research & injury prevention programme. [3] australian department of infrastructure, planning, and natural resources. 2004. "f6 corridor public transport use assessment. draft final report". roads and traffic authority, australia. [4] eldrandaly, k., eldin, n., sui, d. (2003): "a combased spatial decision support system for industrial site selection". journal of geographical information and decision analysis, 7 (2), 72-92 [5] el-yazory, kh. 2013. "a proposal to select "metro" route as a public transportation mode in gaza city using (gis) and spatial multi criteria decision analysis (smcda)". [6] farkas, a. 2009. "route/site selection of urban transportation facilities: an integrated gis/mcdm approach". 7th international conference on management, enterprise and benchmarking june 5‐6, 2009. budapest, hungary [7] jendia, s., el-turkmani, t., hammouda, et al., 2003. "the railway track design in gaza strip". a graduation project, engineering faculty at the islamic university of gaza. [8] jendia, s., hussein, h., 2011. "a preliminary study for a metro project in the gaza strip". [9] lubbad, i. 2013. "classification of road network in the gaza strip according to heavy traffic loads". a master thesis in the islamic university of gaza. [10] palestinian central bureau of statistics (pcbs), 2014. “annual census palestinian book”, palestinian national authority, reached though www.pcbs.gov.ps at august 10th, 2015. [11] sharifi, m. a., retsios, v., 2004. "site selection for waste disposal through spatial multiple criteria decision analysis". journal of telecommunications and information technology, 3, 1-11 http://www.pcbs.gov.ps/ shafik jendia, mohammed skaik / gaza metro network route site selection (2016) 13 appendix 1. 
Data acquired for spatial analysis

Figure (a) Population density across the Gaza Strip.
Figure (b) Vital places in the Gaza Strip.
Figure (c) Existing road network in the Gaza Strip.
Figure (d) Important intersections in Gaza.
Figure (e) Population density across the Gaza Strip.
Figure (f) Water table depth in the Gaza Strip.
Figure (g) Digital elevation model for the Gaza Strip.
Figure (h) Soil types distribution in the Gaza Strip.

2. Cartographic model of end stations selection

Journal of Engineering Research and Technology, Volume 3, Issue 1, March 2016

Al-Ashwaiat Areas in Greater Cairo Region (GCR): A Challenge for the State

As. Prof. Dr. Abdelrehim Kenawy, Department of Urban Planning, Faculty of Engineering, Al Azhar University, Egypt

Abstract— Urban growth of Egyptian cities takes place primarily in informal areas, where informal settlements grow at such high rates that they have come to form connected cities where the poor and middle classes adjoin. Informal areas are a stain on urban communities in Egypt, as their environments and communities lack many basic elements of an acceptable human life. The informal areas issue has gained increasing concern over almost the last fifteen years, receiving more political and security attention. Concern with this issue has become the official direction announced by the state. Informal areas are of several, diverse, and extremely disparate types. Cairo alone includes 81 informal areas, with eight million people living among the 16 million inhabiting the capital. These informal areas house some 62 % of the GCR's population.
Two factors need to be considered with regard to land use in GCR: (1) half of the urban expansion takes place on agricultural land; and (2) most of the growth occurs on the agglomeration fringes. For the future, it will probably be difficult to avoid a substantial increase in the urban population, even assuming that very favorable conditions will exist in the rural areas. But whatever the case may be, decision makers and planners should take this figure as a target, because any failure to meet the demand for urban space would mean resorting to agricultural land for further development. This paper aims to address the issue of "al-ashwaiat areas in GCR as a challenge for the state".

Keywords— Al-ashwaiat areas, Egypt, Greater Cairo Region, informal areas.

Introduction:

Egypt's cities are growing fast and will continue to grow. About half of the population of GCR lives in informal areas: undersupplied and densely populated settlements with too little space and too few social services. A large share of the al-ashwaiat areas are built on valuable farmland without building permits; some are built illegally on government-owned land. Until now, former centrally steered attempts to solve Egypt's urban problems have not proven successful [1]. Egypt's cities are growing rapidly, often in the absence of any governmental or urban planning. Around 20 million people currently live in the GCR, the majority (around 60 per cent) in informal, underserved and densely built areas. There is a lack of basic social services and physical infrastructure such as health centers, schools, youth centers, access to drinking water, sewage and waste disposal, as well as access to job opportunities. The extreme population density results in high environmental pollution. The local population is mostly poor, with low levels of formal education.
Generally, the residents develop their habitat independently without obtaining building permits, which often results in their deprivation of public services and infrastructure [2]. The growth of al-ashwaiat areas is a major concern in many cities of Egypt. Therefore, one of the most challenging tasks of urban planners is to gain a comprehensive understanding of the complex characteristics of informal growth in order to develop integrated and sustainable solutions. In this regard, the GCR is an extreme showcase, with an estimated almost half of the built-up area being al-ashwaiat areas. Providing shelter for the growing urban population, informal areas have grown for decades as a consequence of the chronic lack of affordable housing. At the same time, al-ashwaiat areas in GCR hold complex problems: e.g. loss of valuable agricultural land, illegal tenure, unsafe building conditions, poverty and a lack of public infrastructure and services. Political decision makers and urban planners are under high pressure to deal with al-ashwaiat areas in a sustainable way in order to integrate them into the city. Finding the right balance between addressing problems and strengthening potentials in an integrated, efficient and sensitive manner is obviously most challenging [3].

Greater Cairo Region (GCR):

There are a number of definitions of what constitutes GCR. The most frequently used is the GCR defined by the General Organization for Physical Planning (GOPP) in 1982. This is a large area which includes all of Cairo governorate and most of Giza and Qaliubia governorates. Thus, a considerable number of distant rural areas and small towns in the latter two governorates which have little or no relation to Greater Cairo are included in the GCR definition. It was decided to use a conservative and more limited definition of Greater Cairo so as to reduce the rural bias which the inclusion of these areas (almost all of which could be considered informal) would introduce [4]. The GCR is a vibrant megalopolis. The population of GCR is currently around 18.3 million inhabitants, which represents almost a quarter of Egypt's population of 72.6 million and almost half of the country's urban population, making it the seventh-largest metropolitan area in the world and one of the most densely populated (40,000 inhabitants per square kilometer) [5]. Cairo is a "primate city" and has maintained its urban dominance over the last few decades. It comprises three governorates (Cairo, Giza and Qalyobiya). Over the past four decades, GCR has experienced rapid urban growth, during which the population more than tripled, at an average annual growth rate of over 2.5 percent. Now the population is growing only slightly faster than that of the nation (2.1 percent versus 2.03 percent per year) [6]. GCR is a rare phenomenon of a third-world mega-city where, since the 1980s, net in-migration has almost stopped. The metropolis's expansion is fuelled by natural increase and the incorporation of surrounding rural populations. This fact, clearly supported by census figures and various studies, seems however to be ignored by most Egyptian observers, and the view is commonly held that rural migrants continue to pour into the city and that most of the problems are due to them [4]. Cairo, as the capital and prime city of the Republic of Egypt, has a current population of 16 million people, and by 2050 it is estimated that Cairo will have 30 million inhabitants. The General Organization of Physical Planning (GOPP) therefore developed a vision for Cairo 2050, labeled "International – Green – Connected". To achieve this vision, a wide range of projects are proposed to overcome the major current urban development problems.
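The growth figures quoted above (population more than tripling over four decades at an average annual rate of over 2.5 percent) can be cross-checked with simple compound-growth arithmetic; the calculation below is ours, added for illustration, not part of the paper:

```python
# Illustrative compound-growth arithmetic for the GCR figures quoted above;
# the 40-year horizon and the 2.5 % rate come from the text, the math is ours.

years = 40

# Growth factor produced by exactly 2.5 % per year over 40 years
factor_at_2_5 = 1.025 ** years          # just short of tripling

# Average annual rate actually needed to triple in 40 years
rate_to_triple = 3 ** (1 / years) - 1

print(f"1.025^40 = {factor_at_2_5:.2f}")
print(f"rate needed to triple in 40 years = {rate_to_triple:.2%}")
```

Tripling in 40 years requires roughly 2.8 percent per year, so "more than tripled at over 2.5 percent per year" is internally consistent.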
Besides the rapid urbanization and the task of providing housing, jobs and facilities for the expected 14 million new inhabitants, Cairo faces a multitude of other challenges such as unplanned developments, especially on scarce agricultural land, lack of green space, polluting industries in the city center and traffic congestion. For the spatial analysis of GCR, GOPP divided the city into 23 working zones plus 3 zones for the new towns of 6th of October and New Cairo. These zones have been analyzed using several spatial data sets: land use at block level, the non-built-up areas, unplanned and unsafe areas (al-ashwaiat areas), ongoing and proposed projects, and also some basic statistics (population, population density, green space per person) [7].

The current situation for GCR: (1) The previous urban plan of 1997 expected that the population would reach 24 million by the year 2020, which means 9 million more than the current population, considering that the new cities can absorb about 5.6 million of this increase and the rest will be in the existing urban built-up areas. (2) The existing trend of unplanned development indicates that the GCR population will exceed 28 million by the year 2020, which necessitates controlling the GCR urban growth to guarantee that its population will not exceed the planned capacity within its boundaries, and channeling the expected development towards the planned areas to absorb this development. (3) GCR is composed of 3 governorates (Cairo 8 million, Giza 3.5 million, and Shubra El-Kheima and other areas from Qualiobya 2.5 million). It also includes 8 'new cities' which are not managed by the governorates but by NUCA. The density: 40,000 p/km2 (estimated 2000). (4) Most al-ashwaiat areas are on private agricultural land; over 50% of households live in al-ashwaiat areas, of which 82% are built on agricultural land. More than 90% have access to utilities.
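The 1997 plan's allocation described in point (1) above can be spelled out with simple arithmetic; this is an illustrative sketch of ours, and the split of the residual growth is inferred from the quoted figures rather than stated in the source:

```python
# Illustrative sketch of the 1997 plan's population allocation, using the
# figures quoted in the text; the residual split is our inference.

expected_2020 = 24.0        # million, 1997 plan target for GCR
increase = 9.0              # million more than the then-current population
new_cities_capacity = 5.6   # million to be absorbed by the new cities

implied_current = expected_2020 - increase            # population at the time
existing_area_growth = increase - new_cities_capacity # left for built-up areas

print(f"Implied current population: {implied_current:.1f} million")
print(f"Growth left for existing built-up areas: {existing_area_growth:.1f} million")
```

That is, the plan implicitly assumed a population of about 15 million at the time and left roughly 3.4 million of the projected growth to be absorbed by the existing urban built-up areas.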
In GCR, which houses 16 million inhabitants, there are 81 al-ashwaiat areas; some 62 % of the GCR's population lives in al-ashwaiat areas, comprising unlawful subdivision of agricultural land and squatting on desert land.

Governance and urban management in GCR:

The GCR is not yet a legal entity. It is a contiguous metropolitan area that is administratively under the jurisdiction of three governorates (Cairo, Giza, and Qalyobiya), in addition to a number of sectoral central government authorities. Since Cairo is the seat of national government, many other central government ministries and authorities have a direct role or indirect influence on urban management issues. As a result, development and public investment decisions affecting the GCR are typically taken at both the central and local government levels. The GCR is also a composite entity of several layers of local administration that vary by place. On the one hand, Giza and Qalyobiya follow a five-tier local administration system (governorate, district/markaz, city, village and quarter/hayy) on account of having a mix of urban and rural areas, whereas Cairo, a special case of an urban-only governorate, follows a two-tier system (governorate and quarter/hayy) [4]. The challenge is to coordinate among these entities. The overlapping jurisdiction between central and local government complicates local management and service delivery. The situation is further exacerbated by the entities in charge of land-use planning and service delivery, which use different boundaries [6].

Land use of GCR:

Figure (1) reflects the situation and the land use of Greater Cairo in 2009. The land use classes are aggregated into 7 main urban land use classes (residential, industrial, commercial, institutional, security/army, cemetery and green areas).
In addition to these 7 classes, the agricultural and vacant/desert areas are also calculated. The land use class 'others' covers the remaining areas, mainly used for roads and rail lines. The land use is summarized in Table (1) [7].

Figure 1: Land use of GCR. Source: (GOPP, 2009)
Figure 2: Non-built-up areas of GCR. Source: (GOPP, 2009)
Table 1: Land use & non-built-up areas of GCR. Source: (GOPP, 2009)

Governmental response towards al-ashwaiat areas in GCR:

In the 1960s, the government's response was demolition and resettlement; several areas in Cairo were demolished and developed for public housing projects. In the 1970s, under pressure from international organizations, the Egyptian government started some demonstration projects for upgrading informal settlements based on community participation. The 1980s saw tolerance towards these al-ashwaiat areas, rapid consolidation of al-ashwaiat areas, absence of services, urban utilities and control, and increasing social unrest and violence. In the 1990s came the national slums upgrading policy, whose stated goals were: improving the living standards of informal settlements, integrating slums within the formal city, providing slums with basic needs in terms of infrastructure and roads and sometimes services, and security control of slum areas [8].

GCR's previous planning efforts: (1) First master plan in 1970: (target year 1990; planning area 685,000 acres; population 6.1 million in 1966, expected to reach 14 million by 1990). (2) Long-term development master plan 1983: (creation of small urban settlements and new satellite cities connected through development corridors / dividing the existing built-up area into 16 homogeneous sectors in addition to 10 new urban settlements / population 10 million in 1982, expected to be 16 million by 2002). (3) First update of the master plan 1991.
(4) Second update in 1997: (population 13 million in 1993, expected to be 24 million in 2022 / combining several new urban settlements into one or more new cities or new urban communities: combining new settlements no. 1, 3 and 5 into New Cairo, and combining 6a and 6b into Sheikh Zaied City).

Public housing policies in GCR (1952–2011):

According to Doaa Abouelmagd [9] and others, there have been five political regimes; within this division one can document the evolution of the housing policies related to these periods, as follows: (1) Post Second World War and before 1952: the first publicly subsidized low-rent housing was constructed barely before the 1952 revolution. Some 1,100 dwelling units were constructed in Imbaba in the governorate of Giza. The project was named 'the Worker City' [10]. (2) National state economy (1952–1974): in order to achieve social equality, the state legalized a series of rent control laws in 1952, 1958, 1961 and 1962 to reduce and/or freeze rents. In the long term, these laws had an impact on the housing market: first, the withdrawal of the private sector from the rental housing market due to its limited profitability; secondly, the deterioration of housing stocks and the decline of building maintenance due to the declining income from rent [11]. (3) Capitalist turn (1974–1981): with the creation of the new open door policy, many private companies started to operate again. They invested in constructing upper-income housing to achieve large profits, but only a few of these companies invested in the construction of middle-income housing [12]. None of these companies had incentives to construct low-income housing. As a reaction to the new economic policy, the cities suffered from waves of internal migration.
In 1977, the state's strategy to face the increasing housing demand was to launch a policy called 'the new towns policy', since formalized as the law of new communities in 1979. The new towns movement quickly came to dominate Egypt's urban development as well as budgetary allocations, and it still dominates the current market. Currently, there are 39 new towns in Egypt [12]. (4) Neoliberal period (1981–2005): during the 1990s, the government continued a direct housing provision policy. This policy started in the 1950s, ensuring the state's responsibility to house the low-income classes. During 1982–2005 the overall production of housing was 1.26 million units, with an annual average production of 54,700 units. The government programs were under different authorities, but the housing models and payment conditions have remained similar over decades [11]. (5) Reformist period (2005–2011): in 2005, the state launched a new housing program. It aimed to construct 500,000 subsidized housing units over six years, spread throughout the country for low-income groups and located in new towns; the program was administered by the different governorates [11].

Al-ashwaiat areas definition:

Al-ashwaiat is the Arabic word used in Egypt for informal areas or slums. It literally means 'random' or 'haphazard'. The Egyptian government uses the terms al-ashwaiat, informal areas, and slums interchangeably, and U.N. Habitat uses slums [14]. There is a wide range of terminology for 'al-ashwaiat areas': unplanned, informal, spontaneous, popular, irregular. In Arabic, the term 'al-ashwaiat areas' is used to refer to them. In the context of GCR there exist different definitions of the informal areas established by the various political institutions involved. Thus, this section examines the existing definitions from different perspectives.
According to GOPP, there are two main criteria defining the al-ashwaiat areas: legal status and level of deterioration. Regarding legal status, an area that has been developed on unplanned land is considered informal. Regarding deterioration, physical degradation is not the only key issue; environmental and social aspects and the lack of basic services and infrastructure are also taken into consideration [15]. According to the new law of building and planning (no. 119), however, there are two definitions of the al-ashwaiat areas, comprising only physical factors: unplanned areas and redevelopment areas. The former refers to areas that were developed without a detailed plan on privately-owned agricultural land and were consolidated over time, fed with infrastructure and services. The latter refers to unsafe areas that need to be partially or completely redeveloped [15]. According to ISDF, it is a phenomenon that began in order to fulfill the need for housing due to the increase in population as well as the migration of inhabitants from villages to cities seeking job opportunities and good living conditions [16]. There have been different attempts to define informal areas. The following three examples show the variety of their content: (1) Informality is the citizen's holding of land and building on it privately, thereby violating the regulations for sound planning such as the building code. (2) Informal areas are unplanned, spontaneous, high-density expansions around existing cities or villages. (3) Informal settlements are traps of poverty and deprivation. Drawing a conclusion, there is no all-inclusive definition of informality in the literature; each definition focuses on a particular aspect of the issue.
The conclusion from the different definitions is that al-ashwaiat areas are not planned, have a high building and population density, have deficits in public services, and have legal problems (they occupy squatted land and/or are built without proper building permits) [1].

Figure 3: Al-ashwaiat areas on agricultural land in GCR
Figure 4: Al-ashwaiat areas in Cairo

The issue of al-ashwaiat areas:

Al-ashwaiat areas are considered an urban blight and the festering sore of urbanization. Accordingly, more people will choose, or will find themselves, living in al-ashwaiat areas, unable to afford any other accommodation and enjoying freedom from rent and civic obligations. Government solutions were sought as one of three physically oriented approaches: upgrading and merging within the city's formal districts, buffering, or eradication. It seems that social aspects were not considered at all. After three decades of trial and error, the approach sought was involving the communities through participatory development. However, the issue of the al-ashwaiat areas is not yet closed; most inhabitants refuse to leave their settlements, reject proposed rehabilitation projects, and vandalize proposed urban projects [16].

Al-ashwaiat areas and international covenants: (1) International Covenant on Economic, Social and Cultural Rights: Egypt ratified it in 1982; article 11 commits states to ensure the right of everyone to an adequate standard of living, including adequate food, clothing and housing, and to the continuous improvement of living conditions, with safe housing as a main human right and an adequate standard of living. (2) UN Habitat program: shelter focal point, responsible for monitoring slums worldwide, which outlined criteria for identifying and classifying slums.
(3) Millennium Development Goals (MDG): adopted by the UN member states in 2000; goal 7, target 11 aims "by 2020, to have achieved a significant improvement in the lives of at least 100 million slum dwellers". (4) Paris Declaration on Aid Effectiveness: adopted by governments and donors in 2005; places obligations on bilateral aid agencies to follow and align their programs with national strategies and priorities [17].

Al-ashwaiat areas criteria (ISDF, 2014):

Informal areas according to UN Habitat are: (1) areas lacking suitable housing (built on unsafe areas (geological formations); under threat of railway accidents; buildings made of makeshift materials; ruins; in the vicinity of industrial pollution; under high-voltage power cables); (2) areas lacking sufficient living space (more than two persons per room); (3) areas lacking accessibility to clean water (lack of accessibility to clean drinking water); (4) areas lacking accessibility to sanitation (lack of accessibility to improved sanitation); (5) areas lacking tenure (living areas that lack possession by documents of title, or squatters on state land or land of other entities). Informal areas according to ISDF fall into 4 grades. Grade (1): areas that threaten life (under sliding geological formations, in flood areas, and under threat of railway accidents). Grade (2): areas of unsuitable shelter conditions (buildings made of makeshift materials; sites unsuitable for building; solid waste dumps; ruins). Grade (3): areas of health risks (lacking accessibility to clean drinking water or improved sanitation; in the vicinity of industrial pollution; under high-voltage power cables). Grade (4): areas of instability of tenure (areas on state land; areas on territory dominated by a central organization; areas on the territory of endowments).
(Grades are ordered according to degree of risk; thus, higher risk overrules the lower.)

Dwelling units in al-ashwaiat areas:

A total of 2.63 million dwelling units were estimated to be found in al-ashwaiat areas of GCR. As expected, the large majority of units were 'on private agricultural land', representing 83.7% of the informal total. By far the largest type of dwelling unit found in informal areas is the 'apartment', representing 69.5% of total dwelling units.

Main characteristics of al-ashwaiat areas:

Common characteristics among al-ashwaiat areas include: (1) The progressive and incremental construction of housing by small contractors and the owners themselves, non-compliance with standards for street width and public open space, and the absence of architects. (2) The urban features of informal developments are therefore determined by existing street patterns and buildings, topography, and natural and man-made features; the lack of facilities, basic sewerage, amenities and infrastructure leads to a very low standard of living and to environmental deterioration. (3) Residents of these areas belong to the poorer segments of the population and are affected by unemployment, low levels of professional skills, low educational levels and spreading illiteracy, especially among girls and women [18].

Reasons for al-ashwaiat areas in GCR:

There are many reasons, but the most important are the following: (1) no formal access to land for the urban poor; (2) a long history of unsuccessful governmental housing policies; (3) long and difficult procedures to acquire land subdivision and building permits; (4) weak government control of land; (5) unsuitable laws and regulations concerning planning and building codes; (6) weak enforcement of laws and regulations by local government [8].
Al-ashwaiat areas and agricultural land:

The sprawl of buildings, projects and services and the non-agricultural utilization of agricultural land have led to the loss of huge areas of agricultural land in Egypt, reaching around a million feddan. Available figures clearly show significant disparity in the agricultural land depletion rates resulting from urban growth. However, they broadly agree that 53% of this depletion is owed to residential buildings, 26% to services, and 21% to other projects and infrastructure. It is noticed, however, that the expansion of villages, hamlets and satellites is horizontal, using low buildings, and thus with much lower utilization of land than informal urban growth, which is vertical with high utilization of land, owing to differences in living circumstances and lifestyles between urban and rural areas. Thus, informal growth in rural areas is much graver than in cities. This issue has not yet received due concern.

Negative results of al-ashwaiat areas:

The negative results are numerous: (1) decreased agricultural production; (2) environmental degradation resulting from many development activities implemented in urban growth areas, particularly unplanned industrial activities, and their impacts on the Egyptian citizen; (3) contribution to the spread of the informal pattern in all aspects of life, along with hard living circumstances, decreased societal productivity and values, and disturbance of the Egyptian urban system with its adverse impacts.

Main obstacles of al-ashwaiat areas:

One of the main obstacles to addressing the entire al-ashwaiat areas phenomenon in the GCR is the fast growth of peri-urban areas around the urban agglomeration of the GCR. The 2008 population of the nine peri-urban areas in GCR is estimated at 4.21 million inhabitants, representing 24.7 percent of GCR's 17 million inhabitants.
The population of these areas has been growing rapidly in an informal pattern over agricultural lands, averaging 3.27 percent per annum over the 1996–2006 period. Although officially the peri-urban areas of GCR are classified as rural, over the last few decades the role of agriculture has diminished significantly. By 1996, agriculture accounted for only 21 percent of the employment of the active population in GCR (compared to 47 percent for rural Egypt), and the largest single sector was manufacturing, with 22 percent of employment (higher even than the national urban average) [6].

Challenges for al-ashwaiat areas upgrading:

There are many challenges, but the most important are the following: the political will to proceed with an al-ashwaiat areas upgrading approach; the need for institutional reforms of local government (departments, staff, and decentralization); administrative culture constraints (civil service, representation, technocratic approach); and capacity development and orientation of civil society and the private sector.

Al-ashwaiat areas in GCR:

Many of the residential areas in Cairo are unplanned, and some areas are also classified as unsafe. The table below estimates the area occupied by unplanned residential areas in comparison to the planned residential areas. For zones A, B and C there are 65,670 feddan planned to be finally (2027) developed into residential areas. Unplanned areas have in general a high density and limited space reserved for green areas and services. Unplanned areas are in many cases multi-story buildings which will not be easy to convert at a later stage into other land uses. Avoiding further unplanned development is therefore a major spatial planning policy, which requires enforcement as well as alternatives and affordable housing options in the right locations. The foreseen population growth of Greater Cairo cannot be accommodated in the 23 zones but will have to be absorbed in 6th of October (zones A and B) and
New Cairo (zone C) or elsewhere.

Figure 5: Unplanned and unsafe areas in GCR. Source: (GOPP, 2009)
Table 2: Unplanned and unsafe areas in GCR. Source: (GOPP, 2009)

State reaction to al-ashwaiat areas:

The increasing loss of agricultural land around Cairo, plans to 'close' the city to immigration (also through physical means such as the ring road), the western consultants who arrived in the late 1970s to advise the government on how to deal with Cairo's growth, and also the inclusion of al-ashwaiat areas in the census in 1986, all show that the government was very well aware of the fast growth of al-ashwaiat areas but did little about it [19]. In 1992, two incidents increased the awareness of al-ashwaiat areas and started a public discourse. One was an earthquake in October of that year, which not only revealed the danger of unplanned and densely populated areas but also involved much faster emergency aid from Islamic organizations. Both incidents led to a policy of accepting the existence of al-ashwaiat areas. In consideration of their growth, their established situation and a housing stock of relatively good quality, the government gave up the idea of demolition in favor of development and upgrading [20]. In his May 1 speech in 1993, President Mubarak announced intensified efforts to rehabilitate al-ashwaiat areas in all of Egypt for stability and security reasons. In early 1993, the government launched a national programme for urban upgrading, with over LE 4.5 billion to be spent through the governorates until 2002. The Mubarak government sought to demonstrate that it had a policy for informal Cairo beyond coercion, and to counter its critics' accusations of neglect and indifference [19]. In the course of this programme, 16 areas in GCR were to be demolished, mainly because they were physically affected by the earthquake.
around 80 settlements, among them those in which the clashes took place, were announced to be serviced with basic infrastructure such as water, sanitary drainage and electricity, and streets were to be widened, paved and provided with street lighting. these exclusively physical upgrading measures were seen as raising control over areas difficult to control, but also as rehabilitating people who were thought to be uncivilized [20]. in gcr, great efforts have been made to enhance access to sanitation and drinking water, electricity and road paving, and le 971 million were spent (51% of the national budget). however, processes to get upgrading under way were slow and complicated. because the top-down programme was missing a participatory approach, and basic information on the needs of informal areas was lacking, many people were not aware of the programme and did not feel any improvements. also, in the absence of a clear monitoring system, only 60% of the budget has actually been spent on informal areas. in terms of sanitation and access to clean water the programme has shown great achievements. according to the latest human development report, sanitation coverage (on which 40% of the budget has been spent) improved nationally from 54% in 1990 to 70% in 2004, and water coverage from 94% to 98% (world bank 2008a:64). yet access to basic education, youth centers and health units still shows great deficits, and of 13 million people in need, fewer than 6 million have been targeted. "the overall impact has been less than expected with continued migration, unemployment and poverty which have outpaced government resources". until today the national programme for urban upgrading has been the only governmental effort to systematically address the issue of informal areas, besides some smaller donor-funded pilot projects of upgrading [20].
as an attempt to solve the problematic issue of what counts as al-ashwaiat areas, the informal settlement development facility (isdf), since its establishment by presidential decree no. 305/2008, has made a substantial change in the egyptian vocabulary by replacing the terms "slums", "informal settlements" and "al-ashwaiat areas" with two distinct terms: "unsafe areas" and "unplanned areas". unsafe areas are characterized by being subject to life threat, having inappropriate housing, or being exposed to health threats or tenure risks, while unplanned areas are principally characterized by their noncompliance with planning and building laws and regulations [21]. according to the isdf, the programme stages were: first stage (1994-2004), informal settlements development program: provide basic urban services (electricity, water, sanitation, road paving, etc.) for about 325 informal areas and eradicate 13 deteriorated areas, with a total expenditure of le 3.2 billion. second stage (2004-2008), informal settlements belting program: focus on supporting local governments in preparing detailed plans to enable development efforts and restrict the growth of informal areas in greater cairo and alexandria. third stage (2009-), informal settlements development facility (isdf): priority is given to the development of slum areas. fourth stage (2014-), ministry of urban renewal and informal settlements: development of informal settlements and management of solid waste and construction waste. conclusion:  gcr's "al-ashwaiat areas" have become a controversial issue on most local and global agendas. lessons learned from the world bank, united nations organizations and other bilateral donors emphasize the importance of community involvement in development projects. however, cairo society seems to evidence a paradoxical phenomenon in the relationship between subcultures.
it is clear that the urban remaking of cairo is taking place at the expense of visually excluding the mass of the unwanted poor, sharpening class distinctions [18].  as the urban poor do not have adequate housing options and formal access to building land, they are forced to live in the al-ashwaiat areas. legally converting agricultural land to building land and obtaining a building permit is an enormous bureaucratic procedure everywhere in egypt [20].  it is expected that al-ashwaiat areas will increase in size and number explosively. it is our impression that, if this dialectic is not resolved, the case will be turned upside down: a dominance of informality on cairo's urban face.  the government has focused on operational policies or programs which address the problems of al-ashwaiat areas. this aggravated situation requires quick action to develop an integrated urban policy that places all state efforts in building new cities and desert hinterland villages, and in developing informal settlements, within the framework of a clear general plan that implements priorities.  and finally, one would expect, given that gcr represents almost 50 percent of egypt's total urban population, that the capital metropolis would be a logical starting point for a focused approach to combating al-ashwaiat areas.  a realistic view: al-ashwaiat areas are a manifestation of wealth, not poverty, and the majority of al-ashwaiat areas are safe, not a security hazard. social and political exclusion is a society-wide phenomenon, although more acute in informal areas. poor urban governance and low quality of public services affect all urban areas, although more visibly in informal areas. in other words, al-ashwaiat areas are not particularly subject to exclusion, but receive less attention and resources than the formal parts of the city, so maintaining the status quo is part of ensuring control and manipulation.
 ultimately, the egyptian government must recognize the value of the lives of those who live in the al-ashwaiat areas and see them not as marginalized, terrorists, or backward, but as full citizens who participate and contribute socially and economically to egyptian society. the government can then make political decisions prioritizing their lives and the livelihoods they can lead over other issues on the political agenda.  in conclusion, there are some positive indicators that can be used in addressing the issue of al-ashwaiat areas, for example: (1) the strong political support. (2) a national fund has been created for comprehensive upgrading of informal areas. (3) recent planning law addresses informal areas classification and intervention strategies in a pragmatic way. (4) the institutional setting for dealing with informal areas is becoming clearer (governorates' units for dealing with informal areas, committees for participatory development, local initiatives, etc.). recommendations:  one of the greatest innovations is the way that officials are currently addressing the challenge of al-ashwaiat areas in the gcr in an integrated manner. ongoing upgrading schemes address existing challenges, while the affordable housing programs, the implementation of guided urban development around squatter settlements, and the changing legislative framework for urban development in egypt all aim to address the challenge in a sustainable manner in the long term [8].  strengthening management of urban growth boundaries: urban growth boundaries have been updated to control urbanization within an expected limit in order to conserve precious green areas and regulate urban growth.  increasing the information level of the public administration by creating gis maps for al-ashwaiat areas, e.g.
identifying locations and neighborhoods which are particularly vulnerable to hazards.  raising residents' awareness of the consequences of informal areas for their livelihoods and of ways to deal with these challenges adequately.  promoting the implementation of community-based small-scale adaptation measures in al-ashwaiat areas to strengthen the inhabitants' resilience to change.  back to the development talk: knowing the community, building trust and sharing knowledge as the main approach to facilitation and community participation is regarded as the only credible approach to the development of al-ashwaiat areas. if this is an accepted debate, then the issue of integration versus exclusion should be considered at the government's strategic planning levels.  it is argued that social integration takes priority over physical development. an educational shift is therefore paramount. emphasis on ebs (environment, behavior and society) should then be given more attention; linking physical settings to user groups to behavioral patterns, and identifying the particular culture of a particular locale in relation to the greater urban context of cairo society, will evidently help resolve the dialectic of the other in the perception of our future architects and decision makers. this in turn will help work on the communities' development agendas, within a new talk rooted in trust and respect. bridging gaps of inference and reinforcing communication channels will thus promote the application of community participation through shared visions and cooperation towards the betterment of our city victorious.  comprehensive reforms thus need to accompany any formalization program. again, these reforms generally require political will, and in this approach good will is one of the aspects that is rather more difficult to achieve.
 decision makers must take into account the previous studies, which confirmed the following: (1) a balanced view of informal areas can lead to practical strategies for dealing with them; therefore informal areas cannot be dealt with properly without tackling city-wide problems such as poor governance, neoliberal orientation, etc. (2) enhancing partnership and cooperation mechanisms, so that participatory upgrading of informal areas has the potential of achieving inclusiveness and equity. (3) consistent policies backed up with political support should guarantee no going back on institutionalizing participatory upgrading in local government. (4) development of an efficient information management system. (5) provision of effective development programs and projects, together with technical assistance and capacity building. (6) adjusting the legislative framework.  the main principles for improving the urban development process in gcr must be: improving legislation related to urban development by incorporating participatory planning; establishing urban planning and development departments in governorates, cities and markaz to promote decentralization; and adopting strategic planning and integrative development instead of master plans. in conclusion, dealing with al-ashwaiat areas requires using participatory development tools at three levels (local, regional and national) in an integrated manner, as follows: (1) ministries: requesting and coordinating support from the ministerial level to meet priority needs, such as providing finance for al-ashwaiat areas upgrading, providing technical support, and capacity development and training. (2) governorates: enabling local stakeholders to set priorities and apply participatory tools; governorates are mediators between the local and national levels. (3) districts and communities (local stakeholders).
the role of local administration (governorates, districts and communities) should be: (1) detailed knowledge of al-ashwaiat areas: their boundaries, types, development needs and required resources. (2) managing the upgrading processes: coordinating the efforts of local stakeholders and external support agencies. (3) monitoring the development of al-ashwaiat areas and its impact on their residents and on the city at large. final result: in addition to the previous results and recommendations in this research, the researcher believes that a way out of this excruciating and complicated crisis, which has faced successive governments, is for the egyptian government to commit to solving the problem of al-ashwaiat not only in the gcr but at the level of the republic, by establishing a ministry of al-ashwaiat and urban development with the following tasks: inventory and classification of the al-ashwaiat at the level of the republic; preparing the necessary policies to confront the al-ashwaiat at all levels (national, regional, local) within a specified period of time and in specified stages; giving the ministry decision-making power and coordination with other ministries; securing funding for this ministry; and decentralization and lowering densities to an acceptable ratio. abbreviations: gcr: greater cairo region. gc: greater cairo. gtz: gesellschaft für technische zusammenarbeit. gopp: the general organization for physical planning. capmas: central agency for public mobilization and statistics. isdf: informal settlement development facility. un-habitat: united nations human settlements programme. undp: united nations development programme. nuca: new urban communities authority. references and selected literature: [1] mohamed ibrahim (2009). towards reliable spatial database of informal areas in greater cairo region, international workshop on spatial information for sustainable management of urban areas, fig commission 3 workshop, mainz, germany, 2-4 february. [2] gtz (2004).
participatory development program in urban areas (pdp). [3] gtz egypt & tub (2010). improving informal areas in greater cairo: the cases of ezzbet al nasr & dayer el nahia. [4] sims, david (2003). "the case of cairo, egypt", in un-habitat (2008), global report on human settlements: slums and poverty in the city, london: university college. [5] capmas (2006). [6] the cities alliance (2008). slum upgrading up close, http://www.citiesalliance.org. [7] gopp (2009), vision for cairo 2050. [8] mostafa madbouly (2005). cities alliance public policy forum, marrakech, 7-9 november. [9] doaa abouelmagd, public housing and public housing policies in greater cairo, cosmopolis city, culture & society, vrije universiteit brussel, dggf, pleinlaan 2, 1050 brussels, belgium. [10] abu-lughod, j. (1971) '1001 years of the city victorious', princeton university press. [11] sims, d. (2007) 'review of egyptian subsidized housing programs and lessons learned', cairo: report prepared by the technical assistance policy reform project, processed. [12] el-batran, m. (2004) 'profitability versus responsibility: sustainable housing in focus', the 48th ifhp world congress: governance for urban change, oslo, norway, 5-8 september 2004. [13] nuca (2011) 'the new cities', url: http://www.urban-comm.gov.eg/english/index.asp, accessed on 1/05/2011. [14] shawn o'donnell, informal housing in cairo: are ashwa'iyyat really the problem? [15] gtz & tub (2010) 'identification of informal areas: classification studies & approaches', unpublished. [16] heba el deen & h. el mouelhi (2009) new glasses theory: new thinking theory and mind, department of architecture, faculty of engineering, cairo university, 5th international conference, 16th-17th december. [17] isdf (2014). [18] christian arandel & manal el batran (1997). the informal housing development process in egypt, july. [19] dorman, w. j. (2007) the politics of neglect: the egyptian state in cairo, 1974-98.
phd thesis, school of oriental and african studies (soas), university of london, london. https://eprints.soas.ac.uk/155/ (accessed 26.03.2008). [20] carolin runkel (2009). the role of urban land titling in slum improvement: the case of cairo; and unruh, j. d. (2007) urbanization in the developing world and the acutely tenure insecure, city, 11 (1), 115-120. [21] marwa a. khalifa (2011), redefining slums in egypt: unplanned versus unsafe areas, habitat international, www.elsevier.com/locate/habitatint. [22] gtz/mn (2004) progress report for the bmz no. 5 (7-12/03), participatory urban development of manshiet nasser, gesellschaft für technische zusammenarbeit, cairo. journal of engineering research and technology, volume 1, issue 1, march 2014. use of nanofiltration for nitrate removal from gaza strip groundwater. yunes mogheir, ahmed albahnasawi. abstract— due to excessive usage of nitrate fertilizer in agriculture, discharging of wastewater from treatment plants, and leakage of wastewater from cesspools, the nitrate level in the groundwater has increased. elevated nitrate in water resources can lead to serious problems including eutrophication and potential hazards for human and animal health. the aim of this study is to investigate the use of nanofiltration for nitrate removal, with the gaza strip as a case study. one commercial membrane (nf90) was used in this study, in a stirred dead-end flow cell. two types of water were used: aqueous solutions and real water. the performance of the tested membrane was measured in terms of flux rate and nitrate rejection under different operating conditions: nitrate concentration was varied between 50-400 mg/l, applied pressure between 6-12 bar and tds concentration between 500-3570 mg/l. the percentage of nitrate removal was in the range of 0.62% to 66.68% and the flux rate ranged between 2.61 and 30.12 l/m2.hr.
these values depend on operating conditions such as nitrate concentration, tds composition and operating pressure. in real water, the percentage of nitrate removal was influenced by the tds value in general; more specifically, it was found that the concentration of sulphate has a great effect on nitrate removal: as the sulphate concentration increased, the nitrate removal decreased. nf90 was observed to be an effective membrane for nitrate removal in the gaza strip at higher permeate flux and lower applied pressure, especially in the north gaza strip where low tds and sulphate concentrations were observed. index terms— nanofiltration, nitrate, rejection, flux rate, well, total dissolved solids and pressure. i introduction water is essential to sustain life, and a satisfactory (adequate, safe and accessible) supply must be available to all. improving access to safe drinking water can result in tangible benefits to health. the gaza strip is a highly populated, small area in which groundwater is the main water source. during the last few decades, groundwater quality has deteriorated to a point where the municipal tap water became brackish and unsuitable for human drinking consumption in most parts of the strip. the aquifer is intensively exploited through more than four thousand pumping wells. as a result of its intensive exploitation, the aquifer has been experiencing seawater intrusion in many locations in the gaza strip; in addition, high nitrate is measured in many places in the gaza strip aquifer [1]. nitrate in the groundwater of the gaza strip has become a serious problem in the last decade. as a result of extensive use of fertilizers, discharging of wastewater from treatment plants, and leakage of wastewater from cesspools, increased levels of nitrate, up to 400 mg/l, have been detected in groundwater. nitrate concentrations above 50 mg/l are very harmful to infants, fetuses, and people with health problems.
to overcome this serious situation, reverse osmosis (ro) technology is used to replace the tap water or to improve its quality. several private palestinian water investment companies established small-scale reverse osmosis (ro) desalination plants to cover the shortage of good-quality drinking water in the whole gaza strip. desalination is a considerable alternative for water supply in order to improve the quality of water in the area, so desalination plants began to be established in the gaza strip using the ro technique. the shortage of energy sources has become a big constraint facing desalination plants, which operate at limited hours; the need to find more options to develop the water sector in the gaza strip has therefore become an essential priority. thinking of innovative actions for the desalination sector needs balanced and acceptable decisions [2]. new technologies including nanofiltration (nf) membrane applications will be considered and experimentally investigated to measure the possibility of enhancing the performance of the desalination plants and increasing production in the near future. in addition, effluent brine treatment technology prior to disposal may be studied and recommended [3]. nanofiltration (nf) is a suitable method for the removal of a wide range of pollutants from groundwater or surface water. the major application of nf is softening, but nf is usually applied for the combined removal of nom (natural organic material), micropollutants, viruses and bacteria, nitrates and arsenic, or for partial desalination. industrial full-scale installations have proven the reliability of nf in these areas [4]. in the gaza strip there is no desalination plant using nanotechnology; the aim of this research is to test whether a nanofiltration membrane is suitable for nitrate removal from groundwater.
ii experimental setup a materials the nf90 (dow filmtec) nanofiltration element is a high-area, high-productivity element designed to remove a high percentage of salts, nitrate, iron and organic compounds such as pesticides, herbicides and thm precursors. the high active membrane area combined with the low net driving pressure of the membrane allows the removal of these compounds at low operating pressure. the system consists of an hp4750 stainless steel cylindrical cell purchased from sterlitech uk with a volume of 300 ml. the cell is pressurized via nitrogen gas supplied by a gas cylinder with a manual pressure regulator. the experiments are conducted at room temperature and at a pressure range of 6-12 bar; figure 1 shows the system components. figure 1 system components. b sampling the filtration experiments were carried out on different samples: 1) pure sample: deionized water with ec = 7 μs/cm. 2) synthetic standard solutions: (50-100-150-200-250-300-350-400) ppm as no3 solutions. 3) real samples: water samples were collected from different municipal wells distributed over all gaza strip governorates and divided based on the concentration of nitrates; the sample nitrate concentrations were chosen at intervals of about fifteen mg/l, and the concentrations of nitrates varied between 32-364 mg/l. the water samples were collected based on palestinian water authority (pwa) chemical test results in 2011. c methods after collecting the samples, major chemical analyses were performed for these samples, such as ph, tds, and no3. nitrate measurement: the 4500-no3 nitrogen (nitrate) method was used in nitrate measurement. nitrate concentration was determined by a ct2600 spectrophotometer. tds measurement: the concentration of tds was determined by a conductivity meter (microprocessor conductivity meter bodds-307w), which measures the ec; to get the approximate tds value we multiply the ec by 0.6. ph measurement: ph is a logarithmic notation used to measure hydrogen activity (i.e., whether a solution is acidic or basic).
ph = −log10[h+] (1). as a simplification, it is assumed that ph is a function of the hydrogen ion concentration [h+], when in reality it is related to the hydrogen ion activity. since pure water is slightly ionized, it is expressed as an equilibrium equation termed the ion product constant of water. the concentration of these two ions is relatively small and is expressed with a simple logarithmic notation. ph is the negative log of the hydrogen ion concentration [5]. the ph was measured with a ph meter (ph/orp/ise graphic lcd ph bench top meter, hanna instruments). d tested parameters flux rate: the flux rate represents the volume of liquid passing through a specific area of membrane at a certain operating pressure during a period of time. the flux rate of a filter is important in determining how rapidly filtration can be completed. if there is nothing in the sample stream to clog the pores, the flux rate should remain constant. flux rate = v/(a·t) (l/m2.hr) (2) where: v: volume of water permeated in time t (l). a: surface area of the membrane (0.00146 m2). t: time of filtration (hr). note that these tests were carried out at different pressures (6, 8, 10, 12 bar), because this pressure range lies within the operating pressure range of the nf membrane (filmtec membranes product information). rejection: rejection has the same meaning as removal efficiency and represents the ability of the membrane to reject salts and impurities from the feed water. this is one of the most important characteristics of a membrane; it depends on the feed water characteristics, the membrane characteristics and the applied pressure. the ability of the membrane to reject tds and no3 was measured using the following equation: %r = (1 − cp/cf) × 100 (3) where: cp: salt concentration in permeate (mg/l). cf: salt concentration in feed water (mg/l).
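the three working relations above (equations (1)-(3), plus the ec-to-tds factor of 0.6) can be collected into a short script. this is an illustrative python sketch, not code from the paper; the default membrane area of 0.00146 m2 is the value quoted in the text, and the example numbers at the bottom are invented for demonstration.

```python
import math

def ph_value(h_conc_mol_l):
    """equation (1): ph is the negative log10 of the hydrogen ion concentration (mol/l)."""
    return -math.log10(h_conc_mol_l)

def tds_from_ec(ec_us_cm, factor=0.6):
    """approximate tds (mg/l) from conductivity (uS/cm) using the 0.6 factor quoted in the text."""
    return factor * ec_us_cm

def flux_rate(volume_l, time_hr, area_m2=0.00146):
    """equation (2): flux = v / (a * t) in l/m2.hr; default area is the cell's membrane area."""
    return volume_l / (area_m2 * time_hr)

def rejection_percent(c_feed, c_permeate):
    """equation (3): %r = (1 - cp/cf) * 100."""
    return (1.0 - c_permeate / c_feed) * 100.0

# illustrative numbers (not measurements from the paper):
# neutral water has [h+] = 1e-7 mol/l, i.e. ph 7;
# a permeate of 50 mg/l no3 from a 100 mg/l feed is 50% rejection.
print(ph_value(1e-7))              # 7.0
print(rejection_percent(100, 50))  # 50.0
```

the functions are direct transcriptions of the equations, so a spreadsheet or any other language would serve equally well; the point is only that flux normalizes permeate volume by membrane area and time, while rejection compares permeate to feed concentration.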
iii results and discussion a flux rate 1 aqueous solutions many factors influence the flux rate, such as operating pressure and ionic concentration. figure 2 illustrates the relation between flux rate and operating pressure for the pure water sample. the flux rate does not depend only on the operating pressure but also on the influent concentration: as the ionic concentration increases, the flux rate decreases, as shown in figure 3, which gives the effect of operating pressure and ionic concentration on flux rate for the nitrate solution samples. for each pressure, a linear relation can be obtained for flux rate against the feed nitrate concentration, with high correlation coefficients ranging between 0.94 and 0.97. this reduction in flux increases when ions are added, probably due to the increasing osmotic pressure of the solution. figure 2 pure water flux rate at different pressures. figure 3 effect of feed nitrate concentration and operating pressure on flux rate (nitrate samples of 50, 100, 150, 200, 250, 300, 350 and 400 mg/l as no3). 2 real water samples as in the case of aqueous solutions, the flux rate increases linearly with increasing applied pressure. figure 4 and table 1 show the effect of tds concentration and operating pressure on flux rate; the general trend is that as the tds concentration increases, the flux rate decreases. b rejection of ionic components 1 aqueous solutions figure 4 effect of tds concentration on flux. the nitrate removal (rejection rate) of the solutions at different pressures was analyzed. figure 5 shows the effect of operating pressure and ionic concentration on nitrate rejection: as the pressure increases, nitrate rejection increases; on the contrary, as the nitrate feed concentration increases, nitrate rejection decreases.
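the linear flux-versus-feed-concentration relation described above can be checked numerically. the snippet below is an illustrative python sketch: the flux values are invented to mimic the reported linear decrease (the paper itself only reports correlation coefficients of 0.94-0.97, not the underlying points), and the pearson correlation is written out from its definition.

```python
# illustrative data: flux (l/m2.hr) at one fixed pressure for the eight feed
# nitrate concentrations used in the study; these flux numbers are invented,
# not taken from the paper.
conc = [50, 100, 150, 200, 250, 300, 350, 400]           # mg/l no3 in feed
flux = [29.8, 28.9, 28.1, 27.0, 26.3, 25.2, 24.6, 23.5]  # l/m2.hr

def pearson_r(xs, ys):
    """pearson correlation coefficient, computed from its textbook definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = pearson_r(conc, flux)
print(round(r, 3))  # strongly negative: flux falls almost linearly with feed concentration
```

a correlation coefficient near -1 is what "high correlation between 0.94 and 0.97" corresponds to in magnitude for a decreasing trend; the paper's own fits would use its measured flux points instead of these placeholders.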
this can be explained by considering salt transport through the membrane as a result of diffusion and convection, which are due respectively to a concentration gradient and a pressure gradient across the membrane. at low transmembrane pressure (tmp), diffusion contributes substantially to the salt transport, resulting in a lower retention. with increasing tmp, the salt transport by diffusion becomes relatively less important, so that salt retention is higher [6] [7]. figure 5 effect of feed nitrate concentration and operating pressure on nitrate rejection rate (nitrate samples of 50, 100, 150, 200, 250, 300, 350 and 400 mg/l as no3). table 1 flux rate and tds concentration.

well id.   tds (mg/l)   flux rate (l/m2.hr) at 6 / 8 / 10 / 12 bar
a211        500         12.82 / 18.45 / 22.29 / 30.12
d75         630         12.57 / 17.37 / 21.95 / 29.87
d60         950          7.94 / 12.01 / 17.24 / 28.48
w2          970          9.22 / 13.93 / 18.54 / 28.86
darage     1200          6.42 / 10.60 / 16.17 / 23.98
hera       1350          7.79 / 11.69 / 17.13 / 28.24
s69        1506          5.57 /  9.60 / 13.72 / 20.16
r306       1587          5.55 / 10.18 / 15.64 / 21.10
c79a       1600          6.36 / 10.37 / 15.94 / 22.80
p145       1650          5.40 /  8.98 / 13.95 / 18.92
r25a       1900          6.04 / 10.28 / 15.85 / 21.82
l127       1950          5.35 /  9.69 / 14.19 / 18.26
r25b       2020          5.35 /  9.07 / 13.59 / 19.65
l198       2100          5.18 /  7.27 / 11.82 / 17.96
r74        2200          4.78 /  7.06 / 10.56 / 15.62
l87        2450          4.61 /  6.53 /  9.73 / 14.70
h104       2454          3.11 /  6.40 /  8.70 / 14.12
r311       2570          3.35 /  6.70 /  9.73 / 14.40
shoot      2574          5.03 /  6.72 / 10.33 / 15.38
seka       2673          2.92 /  6.53 /  8.81 / 14.12
astath     2900          3.11 /  5.97 /  8.70 / 13.70
g49        3010          2.90 /  5.78 /  8.66 / 13.70
e124a      3140          2.61 /  4.65 /  7.55 / 10.71
l190       3570          3.61 /  6.49 /  8.53 / 13.61

2 real water samples as observed for aqueous solutions, the effect of operating pressure was evaluated. in real water there were many factors that influenced the rejection percentage, such as tds concentration and the concentrations of other chemicals. the results show that as the operating pressure increases, the removal of nitrate increases.
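the trend in table 1 (flux falling as feed tds rises at a fixed pressure) can be verified numerically with a subset of the 12-bar column. this python check uses values copied from table 1; since the trend is not strictly monotonic well-by-well, a correlation coefficient is a fairer check than pairwise ordering.

```python
# (well, tds mg/l, flux l/m2.hr at 12 bar) for a subset of wells from table 1
rows = [("a211", 500, 30.12), ("d60", 950, 28.48), ("s69", 1506, 20.16),
        ("l198", 2100, 17.96), ("astath", 2900, 13.70), ("l190", 3570, 13.61)]
tds = [row[1] for row in rows]
flux = [row[2] for row in rows]

def pearson_r(xs, ys):
    """pearson correlation coefficient from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

print(pearson_r(tds, flux))  # strongly negative: higher feed tds, lower flux
```

the strongly negative correlation is consistent with the osmotic-pressure explanation given earlier: a saltier feed raises the osmotic pressure that the applied pressure must overcome, leaving less net driving pressure for permeation.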
however, for other wells, the operating pressure was not the main influencing factor; tds concentration plays an important role. table 2 shows the results of nitrate removal at the different operating pressures: the maximum rejection percentage at 12 bar was 55.56% at well a211, and the minimum nitrate rejection was zero at many wells when the operating pressure was 6 bar, depending on the tds concentration and composition and on the nitrate concentration in the feed water. the results in figure 6 show that in general there is a relation between tds concentration and nitrate rejection when the nitrate concentration in the feed water is held fixed. as shown in figure 6 there are drops in the curve, but when the effect of nitrate concentration is fixed and the nitrate removal is plotted against sulphate concentration, a strong relation between sulphate concentration and nitrate rejection is found (figure 7). figure 6 and figure 7 show the nitrate rejection results against tds and sulphate concentration. to show this relation, the nitrate concentration must be fixed. for example, well e124a has a tds concentration of 3140 mg/l and well s69 has a tds concentration of 1506 mg/l, but the nitrate rejection in e124a is higher than in s69, although the nitrate concentration in e124a is higher than in s69. this is because the sulphate concentration in e124a was 149 mg/l but in s69 it was 240 mg/l. that means the sulphate concentration plays an important role in the nf90 nitrate rejection percentage. because of the high removal of sulphate, owing to its valence, nitrate is forced to pass through the membrane. the removal of monovalent ions such as nitrate was greatly decreased in the presence of sulphate ions: retention of the negative sulphate ion on the concentrate side disturbed the electrical equilibrium on both sides of the membrane, so that nitrate ions were forced through the membrane into the permeate water to maintain electrical equilibrium [8].
it was also observed that an increase in sulphate concentration generally decreases the chloride rejection. the retention of the chloride anion is lower for salt mixtures than in single-salt experiments. it seems that the presence of the higher-valence anion (so4) drives more chloride into the membrane, thus decreasing its retention [9]. the sequence of rejection of monovalent anions can be written as r(f) > r(cl) > r(no3); the observed retention of the three ions follows the ionic order and is opposite to the hydration energy order for the monovalent ions: f, which has the highest hydration energy, is better retained than cl and no3 [10] [11]. from the above two paragraphs it can be concluded that chloride is better rejected than nitrate according to the rejection sequence, and since sulphate has a negative effect on chloride rejection, sulphate also has a negative effect on nitrate rejection. figure 6 relation between tds concentration and nitrate rejection. table 2 nitrate rejection results with sulphate and tds concentrations.

well no.   tds (mg/l)   no3 (mg/l)   so4 (mg/l)   rejection (%) at 6 / 8 / 10 / 12 bar
a211        500          45           22          33.33 / 42.22 / 48.89 / 55.56
w2          970          71          108          18.31 / 28.17 / 39.44 / 42.85
e124a      3140          80          149           7.50 / 21.25 / 27.50 / 35.00
s69        1506          32          240           8.23 / 15.63 / 22.32 / 28.13
h104       2454          76          394           5.35 / 10.60 / 15.15 / 18.50
d75         630         133           41          42.87 / 48.12 / 50.38 / 52.63
r306       1587         136          155           8.09 / 14.71 / 19.85 / 23.53
r74        2200         120          219           5.00 / 10.83 / 15.00 / 18.33
r25a       1900         146          269           4.79 / 10.27 / 14.38 / 17.81
astath     2900         140          407           0.71 /  2.14 /  5.73 /  8.57
g49        3010         138          550           0.96 /  1.79 /  5.17 /  6.79
c79a       1600         190          105          16.84 / 19.47 / 24.21 / 36.63
darage     1200         178          111          16.85 / 23.60 / 27.53 / 34.27
l198       2100         185          375           1.62 /  4.86 /  8.11 / 10.81
l190       3570         193          628           0.52 /  1.04 /  1.55 /  2.07
d60         950         211           90          36.49 / 39.81 / 41.71 / 43.60
p145       1650         206          213           6.31 / 13.59 / 23.30 / 30.10
r25b       2020         226          280           7.52 / 12.83 / 16.81 / 21.24
seka       2673         230          359           2.17 / 10.00 / 12.61 / 15.22
r311       2570         217          444           1.38 /  3.23 /  8.76 / 12.44
hera       1350         273          135          21.61 / 31.87 / 36.26 / 45.42
l127       1950         364          157          18.68 / 23.90 / 29.67 / 32.97
l87        2450         304          271           4.61 /  9.21 / 14.47 / 17.76
shoot      2574         332          356           0.60 /  1.20 /  7.83 / 14.16

figure 7 relation between nitrate rejection and sulphate concentration. iv comparison between real water and aqueous solutions 1) flux rate the performance of the nf90 membrane varied in terms of flux rate: the pure water flux rate was higher than the real water flux rate. as the water contains more salts or other substances, the flux rate decreases; this pattern governs the membrane performance, which is why the pure water flux was higher than the real water flux. the complexity of the water character also plays a significant role in the membrane behavior, which is why the aqueous solution flux rate is higher than the real water flux rate. the maximum flux rate for the aqueous solutions was obtained at 12 bar (34.13 l/m2.hr) for pure water and the minimum flux rate was obtained at 6 bar (16.31 l/m2.hr).
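the sulphate effect discussed around table 2 can be isolated by selecting wells whose feed nitrate lies in a narrow band (about 120-146 mg/l), so the remaining variation between them is mostly sulphate. this short python check, with rows copied from table 2, confirms that the 12-bar nitrate rejection falls monotonically as sulphate rises within that band.

```python
# (well, so4 mg/l, nitrate rejection % at 12 bar) for wells whose feed nitrate
# lies in a narrow band (120-146 mg/l), values copied from table 2
rows = [("d75", 41, 52.63), ("r306", 155, 23.53), ("r74", 219, 18.33),
        ("r25a", 269, 17.81), ("astath", 407, 8.57), ("g49", 550, 6.79)]

rows.sort(key=lambda row: row[1])  # order by increasing sulphate
rej = [row[2] for row in rows]
# with feed nitrate held roughly constant, rejection strictly decreases
# as sulphate increases, as the donnan-exclusion argument predicts
monotone = all(a > b for a, b in zip(rej, rej[1:]))
print(monotone)  # True
```

holding feed nitrate roughly constant is the same "fixing" step the paper applies before plotting figure 7; without it, the nitrate-concentration effect and the sulphate effect are confounded.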
The maximum flux rate for real water was obtained at 12 bar (30.12 l/m2.hr, for well A211) and the minimum flux rate was obtained at 6 bar (2.61 l/m2.hr, for well E142A). 2) Nitrate rejection: Generally, the overall rejection percentages of the NF90 membrane for aqueous solutions were found to be higher than the rejection for real water. For the aqueous solutions the maximum and minimum nitrate rejections were 66.68% and 21.67% respectively, while for real water the maximum and minimum nitrate rejections were 55.56% and 0% respectively. The characteristics of the feed water, such as the content of sulphate and hardness, significantly affect the membrane rejection. This explains the difference in rejection between real water and the aqueous solutions. In addition, real water may contain colloids and many other substances that can negatively affect the membrane rejection.
Conclusion
The NF90 membrane showed good results for nitrate removal in real water, with rejection varying between 0.62% and 55% and flux rate between 2.61 and 30.12 l/m2.hr when the operating pressure varied between 6 and 12 bar. It can be concluded that sulphate has a negative effect on chloride rejection and on nitrate rejection. As real water contains more salts and other substances, its flux rate decreases; accordingly, the pure-water flux rate was higher than the real-water flux rate. The complexity of the water composition also plays a considerable role in membrane behaviour, which is why the flux rates of the nitrate solutions are higher than those of real water. NF90 was observed to be an effective method for nitrate removal in the Gaza Strip at high permeate flux and low applied pressure, especially in the north of the Gaza Strip, where low TDS and sulphate concentrations were observed. In other places in the Gaza Strip, TDS and sulphate should be removed before using nanofiltration for nitrate removal. The characteristics of the feed water, such as the content of sulphate and hardness, significantly affect the membrane rejection.
This explains the difference in rejection between real water and the aqueous solutions. The sensitivity of the system to circumstances such as temperature, the quality of the deionized water used in system flushing, regular assurance of zero pressure leakage, the service period of the membrane, and the use of tools washed with deionized water, all make the tests harder to carry out. The importance of testing nanofiltration membranes, as a newly emerging technology in the Gaza Strip, is to improve the overall desalination quality at an acceptable cost; carrying out such tests helps to understand the behaviour of NF90 for nitrate removal. Desalination of brackish water using the nanofiltration technique is seen as one of the promising solutions that can assist Gaza in filling the gap between the growing need for water, limited water resources, limited energy resources, the standard required of domestic water and the unacceptable water quality.
References
1. Aish, A., Water quality evaluation of small scale desalination plants in the Gaza Strip, Palestine. Department of Geology, Faculty of Science, Al Azhar University, P.O. Box 1277, Gaza, Palestine (2010).
2. Mogheir, Y., Abuhabib, A. A., Hasouna, A., Shehab, F., Shaat, S. A., Comparative approach for desalination of brackish water using low pressure nanofiltration membranes, Global Advanced Research Journals (2013).
3. Mogheir, Y., Foul, A., Abuhabib, A., Mohamad, A., Assessment of large scale brackish water desalination plants in the Gaza Strip (2013).
4. Van der Bruggen, B., Everaert, K., Wilms, D., Vandecasteele, C., Application of nanofiltration for removal of pesticides, nitrate and hardness from ground water: rejection properties and economic evaluation (2001).
5. Bailar, John C. Jr., Chemistry, Academy Press, New York, New York (1978).
6. Schaep, J., Van der Bruggen, B., Uytterhoeven, S., Croux, R., Vandecasteele, C., Wilms, D., Van Houtte, E.
and Vanlerberghe, F., Removal of hardness from groundwater by nanofiltration, Desalination, 119, 295-301 (1998).
7. Van Gestel, T., Vandecasteele, C., Buekenhoudt, A., Dotremont, C., Luyten, J., Leysen, R., Van der Bruggen, B. and Maes, G., Salt retention in nanofiltration with multilayer ceramic TiO2 membranes, Journal of Membrane Science, 209, 379-389 (2002).
8. Paugam, L., Taha, J., Dorange, G., Elimination of nitrate ions in drinking water by nanofiltration, Desalination, 152, 271-274 (2002).
9. Krieg, H. M., Modise, S. J., Keizer, K. and Neomagus, H. W. J. P., Salt rejection in nanofiltration for single and binary salt mixtures in view of sulphates removal, Desalination, 171, 205-215 (2004).
10. Diawara, C. K., Sidy Lô, M., Rumeau, M., Pontie, M. and Sarr, O., A phenomenological mass transfer approach in nanofiltration of halide ions for a selective defluorination of brackish drinking water, Journal of Membrane Science, 219, 103-112 (2003).
11. Wang, M., Su, Z. Y., Yu, X. L., Wang, M., Ando, M. and Shintani, T., Separation performance of a nanofiltration membrane influenced by species and concentration of ions, Desalination, 175, 219 (2005).
12. Schafer, A. I., Fane, A. G., Waite, T. D., Nanofiltration: Principles and Applications (2008).

Journal of Engineering Research and Technology, Volume 10, Issue 1, March 2023
Received on 12-06-2022, accepted on 11-11-2022
Luenberger Observer-Based Speed Sensor Fault Detection: Real Time Implementation to DC Motors
Moayed Almobaied1, Yousef Al-Mutayeb2
1 Department of Electrical Engineering, Islamic University of Gaza, Gaza, Palestine.
2 Department of Engineering Sciences, University College of Science and Technology, Khanyounis, Palestine.
https://doi.org/10.33976/jert.10.1/2023/3
Abstract— Fault tolerant control systems (FTCS) have emerged as a critical area of study for enhancing the safety, reliability, and efficiency of modern control systems. In general, the FTCS technique may use either active or passive control.
In this paper, fault detection and diagnosis (FDD), from the active control branch, is used to detect faults in DC motor speed sensors. FDD methodologies can be divided into two types, based on the process and the type of data available: model-based methods and data-based methods. The proposed method here investigates the use of the Luenberger observer technique, which belongs to the model-based approach. The selected method was implemented and experimentally evaluated. This observer depends on the residual signal, which serves as a fault indicator for the overall system and represents the difference between the measured and estimated speed signals from the plant. Due to the increasing demand for these motors, particularly in electro-mechanical applications such as robotics, elevators, and electric-driven railways, a DC motor was chosen as a benchmark to test the proposed method. The output speed of the motor was subjected to four sensor faults: sensor fault, abrupt fault, intermittent fault, and incipient fault. The effectiveness of the suggested approach is demonstrated using MATLAB simulations, and the results show that faults are detected as anticipated with a high-performing response. The proposed method was then also implemented experimentally in real time, and the obtained results showed a close match with those from simulation, thus proving the accuracy and reliability of the proposed methodology for fault detection in the DC motor speed sensor.
Index Terms— Luenberger observer controller; faults; DC motor.
I Introduction
High levels of system safety are provided by fault-tolerant control systems (FTCS), a class of extremely sophisticated control functions designed in a unified framework. There are two kinds of FTCS: passive and active [1]. This study looks into fault detection and diagnosis (FDD), a kind of fault-tolerant active control.
Fault tolerant control (FTC) has received a lot of attention in sensitive industrial applications over the last two decades. There is a growing need to improve the dependability and safety of electrical systems in many industrial applications; electric motors, aircraft, and robotic systems are examples, and such applications have a great need for speed control tasks. One of the most well-known electric motors, the DC motor, is commonly used in industrial appliances like electric vehicles, robot arms, cranes, and elevators [2]. DC motors are suitable for applications that require frequent and adjustable starting, good speed regulation, braking and reversing [3]. DC motors have several advantages over other types of motors, such as higher starting torque, speed variation, adjustable speed, applicability for low-speed torque, and generally easy maintenance [4]. FDD is a technique used by researchers interested in fault tolerant control to improve the efficiency and safety of electric motors [5]. A sudden failure of a motor while it is in operation can result in costly downtime, environmental damage, equipment faults or even human danger [1]. Because some faults deteriorate at a slow rate, they can be identified early on, enhancing safety, preventing system failures and product damage, and extending the useful life of the equipment [6]. Electrical motor faults are classified into two types: electrical and mechanical. A short circuit in the stator winding, rotor failure, an inverter fault, and a damaged end ring are all examples of electrical faults; gearbox failure, bearing damage and shaft bending are examples of mechanical faults [7], [8]. Sensors play a crucial role in the operation of motors, and when it comes to motor fault detection and diagnosis, sensor faults pose a significant challenge.
The two most common methods for monitoring and detecting faults in electric motors are model-based and data-based approaches. The model-based approach is based on comparing the behavior of the real plant with that of the system's mathematical model. It employs the residual signal, which is the difference between the real (measured) process and the model (estimated) signal. There are several techniques for generating residuals: parity equations, observer-based generation and parameter estimation [9]. The most popular fault detection techniques are parameter estimation and observer-based techniques [10]. Data-based methods, on the other hand, do not require a dynamic model of the real process: the entire fault detection method is based on physical quantities that are measured and examined in order to identify and diagnose faults [11]. There have been numerous contributions to the field of DC motor fault detection. Using a Luenberger observer to detect a speed sensor fault on a BLDC motor was simulated in MATLAB in [12]. In [13] a robust method for fault tolerant control (FTC) of a permanent magnet DC motor was created and tested in the presence of various actuator fault types. A Luenberger observer based on the bond graph model was suggested in [14] as a method of fault detection. The authors in [15] presented an online sensor FDD via the model-based method using a Luenberger observer applied to a DC motor. A robust design of an unknown inputs (UI) proportional–integral (PI) observer to estimate the state of the system with UI and detect the existing fault was presented in [16]. The authors in [17] presented an improvement of the Luenberger observer using fuzzy logic for detecting sensor faults.
In this paper, the main contribution is that a model-based sensor fault detection method is applied to a DC motor (YA-070) in a real-time scenario, where a Luenberger observer is proposed to detect a speed sensor fault on the motor. Fig. 1 illustrates the schematic diagram of the proposed method. In [12] we applied the proposed method to a BLDC motor in the MATLAB simulation environment.
Figure 1. Speed Luenberger observer for the YA-070 DC motor.
This paper is organized as follows: in Section II, the mathematical model of the classical DC motor is presented. Section III describes the background of the Luenberger observer. Section IV presents the design and implementation of the hardware and the algorithm used. Section V provides simulation and practical results of the proposed Luenberger observer method. Finally, conclusions are discussed in Section VI.
II DC Motor Mathematical Model
A DC motor
The DC motor was introduced as the speed control target. The parameters of the DC motor model type (YA-070) were calculated practically [18]. The equivalent circuit of the DC motor, which uses the armature voltage control method, is shown in Fig. 2.
Figure 2. DC motor equivalent circuit.
The specifications for the parameters are listed in Table 1 and were derived from the motor's datasheet [19].
Table 1. YA-070 DC motor specifications.
Motor specification           Value
Nominal drive voltage         24 V
Nominal rotational speed      3000 rpm
Nominal current               200 mA
Resistance Ra                 7 Ω
Inductance La                 0.008436 H
Rotor inertia J               2.2097e-04 kg.m^2
Friction coefficient B        1.65e-04 N.m/rad/sec
Back EMF constant Kb          0.094 V/rad/sec
Torque constant Kt            0.094 N.m/A
B Transfer function of DC motor
The final speed transfer function of the DC motor relative to the input voltage can be written as follows [18]:
ω(s)/Va(s) = Kt / (La J s^2 + (Ra J + La B) s + (Ra B + Kt Kb))    (1)
The transfer function of the DC motor in (1) is obtained in this study using the parameters in Table 1. As a result, the resulting transfer function is:
T.F = 0.094 / (1.864e-6 s^2 + 0.001548 s + 0.00991)    (2)
C State space model of the DC motor
State-space modeling was used in the design of the observer for fault detection in DC motors. The following is the general form of a state-space model [20]:
ẋ(t) = A x(t) + B u(t)    (3)
y(t) = C x(t)    (4)
where u(t) is the input, x(t) is the state, A is the state matrix, B is the input matrix, C is the output matrix and y(t) is the output. The state vector x(t) has been chosen so that [21]:
x1(t) = ω(t) = dθ(t)/dt    (5)
x2(t) = i(t)    (6)
ẋ(t) = (ω̇ ; i̇) = [ -B/J  Kt/J ; -Kb/La  -Ra/La ] x + [ 0 ; 1/La ] u    (7)
y(t) = ω(t) = (1  0) x    (8)
where ω(t) is the angular velocity and i(t) is the current of the motor. As a consequence, the A, B, and C matrices are as follows:
A = [ -B/J  Kt/J ; -Kb/La  -Ra/La ],  B = [ 0 ; 1/La ],  C = (1  0)    (9)
Substituting the parameter values into (9), the A, B, and C matrices are given by:
A = [ -0.7467  425.4 ; -11.14  -829.8 ],  B = [ 0 ; 118.5 ],  C = (1  0)
III Luenberger Observer Method
The primary principle behind the Luenberger observer method is to estimate the states of the real system using measured data [22]. The purpose of generating the residual signal is to compare the observed states with the measured states of the real system; the residual is calculated by subtracting the estimated output from the measured output [23]. Fig. 3 depicts the functional diagram of the actual system as well as the Luenberger observer model that was utilized for fault detection. The Luenberger observer design for fault detection was developed based on state-space theory.
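As a quick cross-check of Section II, the coefficients of the transfer function (2) and the numeric matrices of (9) can be reproduced directly from the Table 1 parameters. The sketch below is illustrative and not the authors' code; it only repeats the arithmetic implied by (1) and (9).

```python
# Reproducing the denominator of Eq. (2) and the state-space matrices of
# Eq. (9) from the YA-070 parameters in Table 1 (pure Python, no toolboxes).
Ra = 7.0          # armature resistance [ohm]
La = 0.008436     # armature inductance [H]
J  = 2.2097e-4    # rotor inertia [kg.m^2]
Bf = 1.65e-4      # friction coefficient [N.m/rad/sec]
Kt = 0.094        # torque constant [N.m/A]
Kb = 0.094        # back-EMF constant [V/rad/sec]

# Denominator of w(s)/Va(s) = Kt / (La*J*s^2 + (Ra*J + La*Bf)*s + (Ra*Bf + Kt*Kb))
a2 = La * J             # -> ~1.864e-6
a1 = Ra * J + La * Bf   # -> ~0.001548
a0 = Ra * Bf + Kt * Kb  # -> ~0.00999 (printed as 0.00991 in Eq. (2))

# State-space matrices for x = [omega, i]^T, Eq. (9)
A = [[-Bf / J,  Kt / J],
     [-Kb / La, -Ra / La]]   # -> approx [[-0.7467, 425.4], [-11.14, -829.8]]
B = [0.0, 1.0 / La]          # -> approx [0, 118.5]
C = [1.0, 0.0]

print(a2, a1, a0)
print(A, B, C)
```

Running this confirms the matrix entries quoted above to the printed precision.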
the following equations can be used to create the observer [22]: �̂̇� = 𝐴�̂�(𝑡) + 𝐵𝑢(𝑡) + 𝐿𝑒(𝑡) () �̂�(𝑡) = 𝐶�̂�(𝑡) () 𝑒(𝑡) = 𝑦(𝑡) − �̂�(𝑡) () where �̂�(𝑡)is the estimated system state, �̂�(𝑡) is the estimated output, l is the observer gain and e(t) is the output error (difference between the real measured output y(t) and its estimate �̂�(𝑡) ). replacing (12) in (10): �̂̇�(𝑡) = [𝐴 − 𝐿𝐶]�̂�(𝑡) + 𝐵𝑢(𝑡) + 𝐿𝑦(𝑡) () the estimated error is given by: 𝑒(𝑡) = 𝑥(𝑡) − �̂�(𝑡) () by using the first derivative of e(t) , we may obtain the following: �̇�(𝑡) = 𝑑 𝑑𝑡 (𝑥(𝑡) − �̂�(𝑡)) = 𝐴𝑥(𝑡) + 𝐵𝑢(𝑡) − 𝐴�̂�(𝑡) − 𝐵𝑢(𝑡) − 𝐿𝐶(𝑥(𝑡) − �̂�(𝑡)) () the dynamics of the error is given by: �̇�(𝑡) = (𝐴 − 𝐿𝐶)(𝑥(𝑡) − �̂�(𝑡)) () setting the eigenvalues of matrix (a-lc) to impose faster observer dynamics in comparison to real system dynamics defines the observer error dynamics [22]. figure 3. observer model and real system's functional diagram. the estimated and measured speeds are used to calculate the residual r(t). the residual r(t) is calculated as follows: 𝑟(𝑡) = 𝑦(𝑡) − �̂�(𝑡) () to detect faults, the residual value will be checked automatically. hence, if the residual signal vanishes or approaches to zero, the system will be fault-free; otherwise, a specific fault would arise. fig. 4 depicts the flowchart of the proposed technique for creating the luenberger observer for fault detection [24]. iv hardware and software design & implementation the experiments contain various hardware and software parts, which are described in this section. drive circuits for dc motor are designed using proteus software. luenberger observer was also designed using simulink-matlab. arduino mega 2560 board is used as the data acquisition card which can be programmed with the simulink environment. recently, the programmers can use the simulink support package for arduino hardware to generate and run simulink models on the arduino mega 2560 board [25]. moayed almobaied, yousef al-mutayeb. 
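The observer design of Eqs. (10)-(17) in Section III can be sketched numerically. The fragment below is a minimal pure-Python illustration, not the paper's Simulink model: it places the observer poles at -1000 and -1200 (our illustrative choice; the paper does not state its pole locations), uses Euler integration, and shows the residual of Eq. (17) decaying to zero in the fault-free case.

```python
# Luenberger observer sketch for the 2x2 DC-motor model of Section II.
A = [[-0.7467, 425.4],
     [-11.14, -829.8]]    # state matrix from Section II
B = [0.0, 118.5]
C = [1.0, 0.0]

# Pole placement for (A - L C) with C = [1, 0]: trace and determinant of
# (A - L C) must equal p1 + p2 and p1 * p2 respectively, which fixes L.
p1, p2 = -1000.0, -1200.0           # illustrative observer poles
l1 = A[0][0] + A[1][1] - (p1 + p2)
l2 = A[1][0] - ((A[0][0] - l1) * A[1][1] - p1 * p2) / A[0][1]

dt, T = 1e-5, 0.05                  # Euler step and horizon [s]
u = 1.0                             # constant input voltage
x  = [0.0, 0.0]                     # plant state [omega, i]
xh = [50.0, 0.0]                    # observer state, deliberately wrong init
res0 = None
for _ in range(int(T / dt)):
    y  = C[0] * x[0] + C[1] * x[1]      # measured output
    yh = C[0] * xh[0] + C[1] * xh[1]    # estimated output, Eq. (11)
    e  = y - yh                          # residual r(t), Eq. (17)
    if res0 is None:
        res0 = e                         # residual at t = 0
    # plant: x' = A x + B u (Eq. (3))
    dx  = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
           A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    # observer: xh' = A xh + B u + L e (Eq. (10))
    dxh = [A[0][0] * xh[0] + A[0][1] * xh[1] + B[0] * u + l1 * e,
           A[1][0] * xh[0] + A[1][1] * xh[1] + B[1] * u + l2 * e]
    x  = [x[0] + dt * dx[0],  x[1] + dt * dx[1]]
    xh = [xh[0] + dt * dxh[0], xh[1] + dt * dxh[1]]

print(abs(res0), abs(x[0] - xh[0]))  # large initial residual, then near zero
```

Because the observer poles are faster than the plant poles, the initial estimation error of 50 in the speed state is driven to a negligible value well within the 0.05 s horizon, which is exactly the fault-free behavior the residual check relies on.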
Figure 4. Luenberger observer flowchart for fault detection.
A Design and implementation for DC motor using Simulink
Based on the step response of an armature-controlled DC motor, we constructed the model in Simulink. The speed command module generates a step signal with a range of 0-255. This model explains how to use the Simulink blocks in order to generate pulse-width modulation (PWM) output signals to control the DC motor. The PWM signal is sent through pin 5 of the Arduino Mega 2560 board, as shown in Fig. 5. The DC motor speed is determined by an optical encoder of the A-B phase incremental type, which outputs 200 pulses per revolution. We used the pulse output of channel A to plot the speed curve and the output of channel B to measure the speed in rpm. The encoder pulses are counted on the Arduino Mega 2560 board via two of the board's digital inputs (pin 2, pin 3) [26]. Simulink is used to create the logic for estimating the motor's speed based on encoder counts. The Simulink Support Package for Arduino Hardware includes a tachometer sensor, which was used to measure the motor speed in rpm through digital pin 2 of the Arduino Mega 2560, as shown in Fig. 5.
Figure 5. Overall Simulink design for the DC motor.
B Experimental setup for DC motor
The control system consists of a DC motor with a sensor (optical encoder), an Arduino Mega 2560 controller, a DC motor drive (L298N H-bridge) and a DC power supply. A schematic diagram of the equipment setup for this experiment is shown in Fig. 6, and a corresponding photo of the experimental setup is shown in Fig. 7.
Figure 6. Schematic diagram of the equipment setup.
Figure 7. Photograph of equipment used.
C Details of hardware implementation
Motor controller (Arduino Mega 2560): the Arduino Mega 2560 is a well-known microcontroller board based on the
ATmega 2560, which is shown in Fig. 8. It has a 16 MHz crystal oscillator, 54 digital input/output pins (some of which can be programmed as PWM output signals), 16 analog inputs, 4 UARTs (hardware serial ports), a USB connection, a power jack, an ICSP header, and a reset button. Other specifications are given in [27].
Figure 8. Arduino Mega 2560 board (motor controller).
Motor drive: the L298N is a dual H-bridge motor driver board that is used to control the speed and direction of two DC motors with the required currents and voltages [28]. The motor drive circuit was designed using the Proteus software; the practical implementation of the PCB for the motor drive circuit is shown in Fig. 9.
Figure 9. Motor driver (L298N H-bridge).
Optical encoder: the feedback sensor used here is an optical incremental encoder, a linear or rotational electromechanical device with two output signals A and B that give out pulses when the device moves. More details about optical encoders can be found in [29]. Fig. 10 shows the output signals of optical encoders; the sensor provides incremental position feedback, which can be extrapolated into accurate speed or position information.
Figure 10. Optical encoder signals.
The optical incremental encoder used in the YA-070 DC motor is the H9700, as shown in Fig. 11.
Figure 11. H9700 optical incremental encoder.
A digital storage oscilloscope was used to store the waveform of the optical incremental encoder signal pulses; the sampled speed signal is shown in Fig. 12. In this paper, the speed sensor signal is sampled as an analog signal using a low-pass filter.
Figure 12. Optical sensor output (scale: 2 V/div).
V Simulation and Practical (Real-Time) Results
Both simulation and real-time experiments were used to verify the effectiveness and accuracy of the suggested Luenberger observer for the DC motor.
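The conversion from encoder pulses to rpm uses only the 200 pulses-per-revolution figure given for the encoder and the length of the counting window. The sketch below assumes a fixed 0.1 s window (an illustrative value, not stated in the paper):

```python
# Converting incremental-encoder pulse counts to rpm.
PULSES_PER_REV = 200  # pulses per revolution for the YA-070 encoder

def speed_rpm(pulse_count, window_s):
    """Speed in rpm from pulses counted over a window of window_s seconds."""
    revs = pulse_count / PULSES_PER_REV   # revolutions in the window
    return revs / window_s * 60.0         # rev/s -> rev/min

# e.g. 700 pulses in 0.1 s -> 3.5 rev in 0.1 s -> 2100 rpm, the speed the
# paper reports for a 1 V command
print(speed_rpm(700, 0.1))
```

In the actual setup this counting is done by the Simulink tachometer logic on the Arduino board; the arithmetic is the same.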
In the experimental system, all experiments were performed in real time and were carried out at a sampling time of Ts = 1 ms. For the DC motor (YA-070), all experiments are carried out with an input voltage of 1 V (which corresponds to 2100 rpm). The DC motor models are created in the MATLAB/Simulink environment. Several types of well-known faults, as defined in [24], were tested on the motor speed sensor:
• Additive fault: occurs as a result of a change in internal temperature or calibration issues, and appears as an additional constant value in the speed sensor output. There are two sorts of additive faults: abrupt (occurs instantly) and intermittent (appears and disappears repeatedly). Abrupt faults are frequently caused by hardware damage, whereas intermittent faults are caused by partial wire damage [5]. While the abrupt fault model can be created as a step function in Simulink, the intermittent fault model can be created as a series of pulses with varying amplitudes.
• Multiplicative fault: this type of fault can happen due to multiplier coefficients applied to the sensor. One such fault, known as an incipient fault, is characterized by gradual changes in its parameters and is frequently the outcome of the system's aging [5]. The incipient fault model can be produced in Simulink as a ramp function.
• Sensor fault: this catastrophic fault happens when a sensor malfunctions at a particular moment (possibly because of a disconnection) and generates a constant zero after the fault arises [5]. Simulink allows the speed signal simply to be multiplied by zero to simulate the sensor fault. This fault was practically implemented by separating the speed sensor from the motor for a short time period and then reconnecting it, or by cutting off the power to the motor.
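The fault models above can be sketched as simple transformations of the measured speed signal. The amplitudes and fault times in the sketch below are illustrative only (they are not the paper's exact injection values), and the function name is our own:

```python
# Illustrative models of the four speed-sensor faults applied to a
# nominal 2100 rpm reading.
def faulty_speed(t, speed=2100.0):
    """Measured speed at time t [s] with the four fault types injected."""
    if 3.0 <= t < 4.0:                     # sensor fault: output forced to zero
        return 0.0
    y = speed
    if t >= 14.0:                          # abrupt fault: additive step
        y += 200.0
    if 5.0 <= t < 6.0 or 8.0 <= t < 9.0:   # intermittent fault: pulses
        y += 100.0
    if t >= 11.0:                          # incipient fault: slow ramp (aging)
        y += 10.0 * (t - 11.0)
    return y

print(faulty_speed(2.0), faulty_speed(3.5), faulty_speed(5.5), faulty_speed(12.0))
```

Each branch mirrors one Simulink block named in the text: a gain of zero, a step, a pulse train, and a ramp, respectively.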
The fault detection thresholds are defined according to the maximum values reached by the residuals over several experiments. There are two types of thresholds: the fixed threshold and the adaptive threshold [30]. In this paper a fixed threshold was used, and many tests were performed on the motor in a fault-free state to determine the values of both the upper and lower thresholds. As soon as the residual exceeds a threshold, an alarm is triggered to denote a fault; a buzzer device was used as the alarm. The Luenberger observer simulation design for the DC motor, which includes the four types of faults mentioned above, is shown in Fig. 13. The Simulink design of the fault detection method based on the Luenberger observer (real time) for the DC motor is represented in Fig. 14.
Figure 13. Luenberger observer simulation design for the DC motor.
Figure 14. Luenberger observer simulation design for the DC motor (real time).
In this research, very close results have been demonstrated between simulation and practical implementation for every type of fault.
A. Simulation results without any faults in the DC motor
In this simulation no faults are present in the DC motor's speed. The output (measured) speed, estimated speed, and residual output are shown in Fig. 15, Fig. 16, and Fig. 17, respectively.
Figure 15. Simulation of faultless output speed.
Figure 16. Estimated speed simulation without fault.
Figure 17. Simulation of the residual output of a DC motor without a fault.
As illustrated in Fig. 17, the residual generation signal is nearly negligible, indicating that the system is within the fault-free threshold.
B. Experimental results without any faults in the DC motor
This experiment was performed with no fault in the speed sensor of the DC motor.
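The fixed-threshold residual check described above reduces to a band test on r(t). The sketch below uses the upper and lower threshold values determined experimentally for the fault-free case (+0.0157 and -0.009, as reported in the experimental results); the function name and sample values are our own illustration:

```python
# Fixed-threshold evaluation of the residual signal r(t): the alarm
# (buzzer) fires whenever r(t) leaves the fault-free band.
UPPER, LOWER = 0.0157, -0.009   # fault-free thresholds from the experiments

def alarm(residual):
    """True (trigger the buzzer) when the residual exceeds either threshold."""
    return residual > UPPER or residual < LOWER

# Illustrative residual samples: small noise stays inside the band,
# fault-induced excursions leave it.
samples = [0.001, -0.004, 0.012, 0.02, -0.03, 0.0]
print([alarm(r) for r in samples])   # -> [False, False, False, True, True, False]
```

A non-zero band is what prevents the sensor noise and observer-parameter errors mentioned in the results section from raising false alarms.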
It was implemented in real time. Fig. 18 and Fig. 19 show the measured and estimated output speeds.
Figure 18. The measured speed of the DC motor.
Figure 19. The estimated speed of the DC motor.
Experimentally, tests were applied to the DC motor while it was fault-free in order to determine the threshold values from the residual generation. The upper threshold value was 0.0157 while the lower threshold value was -0.009, as shown in Fig. 20. It is important to find these threshold values in order to raise an alarm when faults occur in the motor speed.
Figure 20. Residual output without any fault; upper threshold 0.0157, lower threshold -0.009.
The residual generation illustrated in Fig. 20 is clearly not equal to zero, due to the noise present in the DC motor and to errors in the parameters of the observer design. Hence, the upper threshold value of 0.0157 and the lower value of -0.009 were chosen in order to avoid false alarms on the residual value.
C. Simulation results of abrupt fault in the DC motor
An abrupt fault is applied to the DC motor speed sensor by adding a constant value to the sensor reading at the fourteenth second, as seen in Fig. 13. Fig. 21 depicts the simulated output and estimated DC motor speeds, whereas Fig. 22 depicts the residual output. The fault was clearly detected successfully at the time of its appearance.
Figure 21. Output and estimated speed simulation with an abrupt fault.
Figure 22. Simulated residual output with an abrupt fault.
From Fig. 22 it is obvious that the residual value at the fourteenth second exceeds the threshold value, which matches the simulation result in Fig. 21.
D. Experimental results of abrupt fault in the DC motor
This experiment was performed with an abrupt fault in the speed sensor of the DC motor, applied to the output speed at the third second.
The experimentally measured output and estimated DC motor speeds are presented in Fig. 23 and the residual output is presented in Fig. 24. Here the fault detection was carried out successfully at the time of the fault's appearance.
Figure 23. Measured and estimated speeds of the DC motor with an abrupt fault.
Figure 24. Residual output with an abrupt fault.
From Fig. 24 it can be noted that the residual value at the third second is greater than the threshold level; as a consequence, a fault in the speed sensor has occurred, and at this moment the alarm is triggered.
E. Simulation results of incipient fault in the DC motor
The incipient fault applied to the DC motor speed sensor is shown in Fig. 13. It was applied to the output speed at the eleventh second. Fig. 25 depicts the simulated output and estimated DC motor speeds, whereas Fig. 26 depicts the residual output. These figures show that the fault detection method worked at the time the fault first appeared.
Figure 25. Simulation of output and estimated speeds with an incipient fault.
Figure 26. Simulation of residual output with an incipient fault.
As shown in Fig. 26, at the start of the eleventh second the residual value exceeds the threshold value, indicating a fault with the sensor's speed signal.
F. Experimental results of incipient fault in the DC motor
This experiment was performed with an incipient fault in the speed sensor of the DC motor, applied to the output speed at the second second. Fig. 27 shows the experimental output and estimated DC motor speeds, while Fig. 28 shows the residual output. The fault detection was again carried out successfully at the time the fault occurred.
Figure 27. Measured and estimated speeds of the DC motor with an incipient fault.
Figure 28.
Residual output with an incipient fault.
The residual value in Fig. 28 gradually increases above the threshold value in the second second, at the time the fault occurred, which indicates a fault in the speed sensor.
G. Simulation results of intermittent fault in the DC motor
In this section the intermittent fault shown in Fig. 13 is applied to the DC motor speed sensor. It is implemented by periodically adding a constant value to the sensor reading. An intermittent fault was applied to the output speed at the fifth, eighth, and eleventh seconds, with amplitudes of 1, 1.5, and 2 respectively; the pulse width of each of these signals is one second. Fig. 29 depicts the simulated output and estimated DC motor speeds, while Fig. 30 depicts the residual output, where the residual value surpasses the threshold level.
Figure 29. Output and estimated speed simulation with an intermittent fault.
Figure 30. Simulation of residual output with an intermittent fault.
H. Experimental results of intermittent fault in the DC motor
This experiment was performed with an intermittent fault in the speed sensor of the DC motor, implemented in a real-time scenario. Fig. 31 presents the measured output signal and the estimated DC motor speeds, while Fig. 32 shows the residual output. The residual value exceeded the threshold value, and the fault detection process was successfully completed.
Figure 31. Measured and estimated speeds of the DC motor with an intermittent fault.
Figure 32. Residual output with an intermittent fault.
I. Simulation results of sensor fault in the DC motor
To apply this fault in the simulation, the speed signal is multiplied by zero. Fig. 33 displays the simulated output and the estimated DC motor speeds, while Fig. 34 shows the residual output.
Figure 33.
Simulation of DC motor output and estimated speeds with a sensor failure.
Figure 34. Residual output simulation in the presence of a sensor fault.
When the fault occurs, the residual value drops below the threshold value, as shown in Fig. 34.
J. Experimental results of sensor fault in the DC motor
A sensor fault occurs when the speed sensor is unplugged from the DC source. It was applied to the output speed at the third second, and the sensor was reconnected after one second. This fault was implemented in real time. Fig. 35 shows the response of the measured and estimated speeds, and Fig. 36 shows the residual generation output. Here the fault detection process was successfully performed.
Figure 35. Measured and estimated speed due to sensor failure.
Figure 36. Residual output due to sensor failure.
From Fig. 36 it can be noted that the residual value is lower than the threshold value at the time of applying the fault, which indicates a fault in the sensor; at this time the alarm is triggered to indicate the presence of the fault.
VI Conclusion and Future Work
In this research, model-based methods employing Luenberger observers were investigated in order to detect faults in a DC motor's speed sensor. Four well-known fault types were applied to the speed sensor of the YA-070 DC motor in order to test the validity of the proposed method, namely: sensor fault, abrupt fault, intermittent fault, and incipient fault. The threshold algorithm is used in the suggested strategy to evaluate the residual signal that indicates a sensor failure. The proposed method's effectiveness is demonstrated using both simulation and real-time experiments. A possible future direction for this research topic is to integrate the Kalman filter or particle filter algorithms into the observer design process.
References
[1] K. S.
gaeid, “fault tolerant control of induction motor,” modern applied science, vol. 5, no. 4, 2011. [2] q. a. f. m. sajidul, “electro-mechanical modeling of separately excited dc motor & performance improvement using different industrial controllers with active circuit realization,” international conference on mechanical, industrial and energy engineering, dec. 2014. [3] a. a. hassan, n. k. al-shamaa, and k. k. abdalla, “comparative study for dc motor speed control using pid controller,” international journal of engineering and technology, vol. 12, no. 24, pp. 15999-16007, 2017. [4] [online]. available: https://www.linquip.com/blog/dc-motor-working-principles/#advantages_and_disadvantages_of_dc_motors. [accessed: 15-jul-2021]. [5] a. alkaya, “novel data driven-based fault detection for electromechanical and process control systems,” phd thesis, 2012. [6] d. miljković, "fault detection methods: a literature survey," 2011 proceedings of the 34th international convention mipro, opatija, 2011, pp. 750-755. [7] w. abed, robust fault analysis for permanent magnet dc motor in safety critical applications, 2015. [8] l. li, “robust fault detection and diagnosis for permanent magnet synchronous motors,” 2006. [9] k. s. gaied and h. w. ping, “wavelet fault diagnosis and tolerant of induction motor: a review,” international journal of physical sciences, vol. 6, pp. 358–376, 2011. [10] r. isermann and p. ballé, “trends in the application of model-based fault detection and diagnosis of technical processes,” control engineering practice, vol. 5, no. 5, pp. 709–719, 1997. [11] r. isermann, fault-diagnosis systems: an introduction from fault detection to fault tolerance. berlin: springer, 2006. [12] y. al-mutayeb and m. almobaied, “luenberger observer-based speed sensor fault detection of bldc motors,” 2021 international conference on electric power engineering – palestine (icepep), 2021. [13] q. fazal, m. liaquat, and n.
naz, “robust fault tolerant control of a dc motor in the presence of actuator faults,” 2015 16th international conference on sciences and techniques of automatic control and computer engineering (sta), 2015. [14] g. saoudi, r. e. harabi, and m. n. abdelkrim, “graphical linear observers for fault detection: the dc motor case study,” 10th international multi-conferences on systems, signals & devices 2013 (ssd13), 2013. [15] a. alkaya and i. eker, "luenberger observer-based sensor fault detection: online application to dc motor," turkish journal of electrical engineering and computer sciences, vol. 22, no. 2, pp. 363-370, may 2014, doi:10.3906/elk-1203-84. [16] m. a. eissa, m. s. ahmed, r. r. darwish, and a. m. bassiuny, “unknown inputs pi observer-based sensor fault detection technique for bldc motor,” 2015 7th international conference on modelling, identification and control (icmic), 2015. [17] m. a. eissa, m. s. ahmed, r. r. darwish, and a. m. bassiuny, “improved fuzzy luenberger observer-based fault detection for bldc motor,” 2015 tenth international conference on computer engineering & systems (icces), 2015. [18] a. a. hassan, n. k. al-shamaa, and k. k. abdalla, “comparative study of conventional and optimal pid tunned methods for pmdcm speed control,” international journal of applied engineering research, vol. 9, no. 6, pp. 4181–4192, 2017. [19] “imported ydk servo dc motor ya-070 ya-071 dc24v 3000 rotary motor,” yoycart.com. [online]. available: https://www.yoycart.com/product/26695000175/. [accessed: 15-jul-2021]. [20] m. aghaee and a. a.
jalali, “bldc motor speed control based on mpc sliding mode multi-loop control strategy – implementation on matlab and arduino software,” electrical engineering (icee), iranian conference on, 2018. [21] m. ridwan, m. n. yuniarto, and soedibyo, “electrical equivalent circuit based modeling and analysis of brushless direct current (bldc) motor,” 2016 international seminar on intelligent technology and its applications (isitia), 2016. [22] z. horváth and g. molnárka, “design luenberger observer for an electromechanical actuator,” acta technica jaurinensis, vol. 7, no. 4, 2014. [23] l. sellami, “simulink model of a full state observer for a dc motor position, speed, and current,” in world congress in computer science, computer engineering, and applied computing, 2014. [24] m. a. eissa, m. s. ahmed, r. r. darwish, and a. m. bassiuny, “model-based sensor fault detection to brushless dc motor using luenberger observer,” 2015 7th international conference on modelling, identification and control (icmic), 2015. [25] a. s. sadun, j. jalani, and j. a. sukor, “a comparative study on the position control method of dc servo motor with position feedback by using arduino,” proceedings of engineering technology international conference (etic 2016), bali, 2016. [26] w. tang, z. liu and q. wang, "dc motor speed control based on system identification and pid auto tuning," 2017 36th chinese control conference (ccc), 2017, pp. 6420-6423, doi: 10.23919/chicc.2017.8028376. [27] “introduction to arduino mega 2560,” the engineering projects, 21-jun-2021. [online]. available: http://www.theengineeringprojects.com/2018/06/introduction-to-arduino-mega2560.html. [accessed: 02-aug-2021]. [28] dejan, “l298n motor driver arduino interface, how it works, codes, schematics,” howtomechatronics, 11-may-2021. [online]. available: https://howtomechatronics.com/tutorials/arduino/arduino-dc-motor-control-tutorial-l298n-pwm-h-bridge/. [accessed: 02-aug-2021].
[29] http://www.creative-robotics.com/. [30] p. m. frank, s. x. ding, and t. marcu, “model-based fault diagnosis in technical processes,” transactions of the institute of measurement and control, vol. 22, no. 1, pp. 57–101, 2000. moayed almobaied is an assistant professor of electrical engineering at the islamic university of gaza, palestine. he received his b.sc. and m.sc. degrees in control engineering from the islamic university of gaza in 2001 and 2008, respectively. in 2017, he received the ph.d. in control and automation systems from istanbul technical university (itu). his current research interests include robust control, optimal control, design of modern control systems, and robotics. yousef al-mutayeb is an assistant instructor at the engineering science department in the university college of science and technology, khan yunis. he is also working as a lab supervisor in the department of electronics engineering to date. his research interests are in the area of control systems. he obtained his m.sc. degree in electrical engineering from the islamic university of gaza in 2020. journal of engineering research and technology, volume 9, issue 2, october 2022 12 received on (15-03-2022) accepted on (15-08-2022) generating attractive advertisement text campaigns using deep neural networks atef ahmed, motaz saad, and basem alijla https://doi.org/10.33976/jert.9.2/2022/2 abstract— the text generation task has drawn increasing attention in recent years. recurrent neural networks (rnn) have achieved great results in this task. several parameters and factors may affect the performance of recurrent neural networks, which is why text generation is a challenging task that requires a lot of tuning. this study investigates the impact of three factors that affect the quality of generated text: 1) data source and domain, 2) rnn architecture, 3) named entity normalization.
we conduct several experiments using different rnn architectures (lstm and gru) and different datasets (hulu and booking). evaluating generated texts is a challenging task; there is no perfect metric to judge the quality and correctness of the generated texts. we use different evaluation metrics to evaluate the performance of the generation models. these metrics include the training loss, the perplexity, the readability, and the relevance of the generated texts. most related works do not consider all these evaluation metrics to evaluate text generation. the results suggest that the gru outperforms the lstm network, and models trained on the booking dataset are better than the ones trained on the hulu dataset. index terms— deep learning, recurrent neural network, advertisements campaigns, text generation. i. introduction online advertising is the process of marketing and advertising services and products over the internet [1]. it has attracted the interest of investors and business owners. for instance, 77% of eu businesses have a website and 26% of them use the internet to advertise. in addition, 86% of eu enterprises used at least one type of social media to build their image and to market their products [2]. the revenue of digital ads was worth $126 billion [3]. a successful advertising campaign is one that has attractive ads, which are delivered to relevant and interested consumers (the audience) with precise, meaningful, and relevant contents. generating an attractive and successful ad campaign is beneficial and worthwhile, and it is subject to reaching target customers at the right time [4]. institutional advertisers use targeted advertisement methods to generate attractive campaigns based on the requirements of the advertising exchange system [5]. the older methods of creating attractive contents for advertisement campaigns are either manual, by a content writer, or automatic, based on "fill-in-the-blank" templates [6].
however, generating successful advertising campaigns that meet the customer's needs is a very challenging, time-consuming and expensive task. significant advertising knowledge and a good understanding of customers' needs are required. machine learning and deep learning are successfully used in various applications, including machine translation [7, 8], text summarization [9, 10], text generation [11-15], and speech-to-text and text-to-speech [16]. deep learning has produced many network architectures such as recurrent neural networks (rnns) [17], long short-term memory networks (lstm) [18], and gated recurrent unit networks (gru) [14]. recent research showed impressive results of using deep learning techniques in nlp applications such as text generation and text summarization [11]. the work of [12] proposes a novel end-to-end model to generate the ad post. the authors split the ad post generation task into two subprocesses: (1) select a set of products via the selectnet (selection network); (2) generate a post including the selected products via the mgennet (multi-generator network). concretely, selectnet first captures the post topic and the relationship among the products to output the representative products. then, mgennet generates the description copywriting of each product. experiments conducted on a large-scale real-world ad post dataset demonstrate that their proposed model achieves impressive performance in terms of both automatic metrics and human evaluations. the work of [19] proposed to explore the possibility of collaboratively learning ad creative refinement via a/b tests of multiple advertisers.
for generating new ad text, the authors used an encoder-decoder architecture with a copy mechanism, which allows some words from the (inferior) input text to be copied to the output while incorporating new words associated with a higher click-through rate. in [20], the authors proposed a query-variant advertisement text generation method that aims to generate candidate advertisement texts for different web search queries with various needs based on queries and item keywords. to solve the problem of ignoring low-frequency needs, they proposed a dynamic association mechanism to expand the receptive field based on external knowledge, which can obtain associated words to be added to the input. these associated words can serve as bridges to transfer the ability of the model from the familiar high-frequency words to the unfamiliar low-frequency words. with association, the model can make use of various personalized needs in queries and generate query-variant advertisement texts. this paper proposes a method of using deep learning models (lstm and gru networks) to generate attractive text advertising campaigns that meet customer needs using pre-defined keywords. we investigate the generation of advertisement text campaigns mainly in two domains: hotel booking and tv streaming (hulu). in addition, two datasets in the domains mentioned earlier have been acquired and prepared for this research to train the neural networks to generate attractive ads based on given keywords fed as a seed to the neural network. besides the automatic evaluation metrics (perplexity and readability [21]), human annotators subjectively evaluated the readability and relevance of the generated ads. the rest of this manuscript is organized as follows.
section ii describes the methodology of advertisement text generation including: data acquisition, data integration, data pre-processing, ads generation, and evaluation. experimental studies and evaluation methods are presented in section iii. the discussion and experimental results are presented in section iv. finally, section v presents the summary and conclusions. ii. deep neural networks to generate advertisement campaigns figure 1 shows the methodology used in this work. the methodology consists of five main steps: data acquisition, data integration, data pre-processing, text generation, and evaluation for generating advertisement text campaigns using recurrent neural networks. these steps are described in detail in the following sub-sections. although we use deep learning techniques, data pre-processing is still needed because the data is noisy, as it is collected from the internet. a. data acquisition the data is collected using the semrush toolkit [22], which provides marketing information such as top ads, keyword analytics, search tracking, etc., for a particular website. semrush retrieves and ranks the ad campaigns for the top-ranked websites using the google and bing search engines. table 1 describes the main characteristics of the collected datasets. the collected data is limited to advertisement campaigns for hotel and flight reservations, collected from the expedia.com and booking.com websites, and tv and movie streaming, collected from the hulu.com website. the data includes 42k text lines (campaigns) from booking and 13k text lines (campaigns) from hulu. the average campaign length is 67.53 and 227.07 characters for the booking and hulu datasets respectively. the average number of words per line is 11.13 and 39.15 for the booking and hulu datasets respectively. it is remarkable that hulu campaigns are longer than booking campaigns, as shown in the table.
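the per-dataset statistics reported in table 1 (size, max/min/average character length, average words per line) reduce to a short helper over the campaign lines; the two sample lines below are hypothetical stand-ins for real campaign text.

```python
def dataset_stats(lines):
    """compute the table-1 style statistics for a list of campaign lines."""
    lengths = [len(line) for line in lines]        # character length per line
    words = [len(line.split()) for line in lines]  # word count per line
    n = len(lines)
    return {
        "size": n,
        "max length": max(lengths),
        "min length": min(lengths),
        "average length": sum(lengths) / n,
        "average # of words": sum(words) / n,
    }

sample = ["book cheap hotels and flights today",
          "best hotel reservations in top destinations"]
stats = dataset_stats(sample)
print(stats["size"], stats["max length"])   # 2 43
```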
table 1: the main properties of the collected datasets
dataset   size   max length   min length   average length   average # of words
booking   42k    85           19           67.53            11.13
hulu      13k    368          19           227.07           39.15
b. dataset integration and preprocessing the datasets were collected from two different sources, so they are integrated into a single and consistent representation. the dataset is then pre-processed to be suitable to be fed to the neural networks. figure 2 depicts the main steps of data pre-processing, including data cleaning, data normalization, and named entity (ne) normalization. data cleaning involves the processes of removing and correcting corrupt, unnecessary, or inaccurate records, so unnecessary html tags
are removed. moreover, all duplicated records in the data are deleted. data normalization involves converting text to lower case and removing special characters and punctuation. a named entity refers to object names such as person names, location names, and product names [23]. to further normalize the text, named entities are replaced with tag names using the geotext [24] library and a static ne list. geotext [25] is a python library used to extract countries and cities from a given text; it is trained on data taken from geonames.org to recognize city and country names in other datasets or texts. all cities and countries are replaced by the i-city and i-coun labels respectively. geotext may fail to recognize the names of some cities and countries because it depends on its training data, so a static ne list for cities and countries is proposed to overcome the limitation of geotext. two lists of 4144 city names and 206 country names are collected from geonames.org. consequently, the i-city and i-coun labels are used to replace city names and country names respectively. figure 1: general five-step methodology for ads generation (data acquisition, data integration, data pre-processing, text generation using rnn, evaluation). figure 2: pre-processing steps (data cleaning, data normalization, ne normalization). iii. experimental studies and evaluation methods this section presents the proposed methods for advertisement text generation. two implementations, denoted shakespeare tensorflow (tf)¹ and rnn tf² ³ (character and word levels), are adopted to implement the recurrent neural network (rnn) with gru and lstm encoding respectively. both are sequence-to-sequence models that take keywords (i.e., seed text) as input to generate relevant text.
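the ne-normalization step above amounts to a token lookup; the tiny city/country sets in this sketch are hypothetical stand-ins for geotext and the 4144-city / 206-country lists collected from geonames.org.

```python
# replace city names with i-city and country names with i-coun,
# mirroring the static-ne-list normalization described in the text.
CITIES = {"paris", "london", "dubai"}       # stand-in for the 4144-city list
COUNTRIES = {"france", "spain", "egypt"}    # stand-in for the 206-country list

def normalize_ne(text):
    tokens = []
    for tok in text.lower().split():
        if tok in CITIES:
            tokens.append("i-city")
        elif tok in COUNTRIES:
            tokens.append("i-coun")
        else:
            tokens.append(tok)
    return " ".join(tokens)

print(normalize_ne("book hotels in paris france"))   # book hotels in i-city i-coun
```

a set lookup keeps the pass linear in the number of tokens, which matters at the 42k-line scale of the booking dataset.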
for instance, hotel, reservations, flights, booking, and travel are general keywords that could be used for generating ads related to the booking domain. keywords such as series, tv, movies, channels, episode, and season could be supplied to the model for generating ads related to the movie domain. the shakespeare tf implementation only supports character-level encoding, while the rnn tf implementation supports both character-level and word-level encoding. the following factors are considered in a series of experiments to investigate their impact on the quality of the generated advertisement campaigns. • dataset domain: datasets in the booking/reservations and movies (hulu) domains are considered to train the nns. • neural network architecture: lstm and gru neural networks are investigated to generate the text. • named entity replacement: the impact of replacing named entities with tags using geotext and static lists is investigated. • input/output encoding level: character-level and word-level encoding sequences are also explored. the subsections present the experimental settings and the evaluation metrics. a. parameter settings table 2 describes the parameter settings of the lstm and gru neural networks that are used in the experiments. the parameter settings of the character-level gru and both the character-level and word-level lstm are presented. the parameter values are the most recommended values, tuned after a series of experiments.
table 2: parameter setting values for the lstm and gru nns
parameter          char-level lstm   word-level lstm   gru
rnn size           128               256               512
hidden layers      2                 2                 3
sequence length    50                25                30
number of epochs   2000              2000              10
learning rate      0.002             0.002             0.001
optimizer          adam              adam              adam
¹ https://github.com/martin-gorner/tensorflow-rnn-shakespeare
² https://github.com/sherjilozair/char-rnn-tensorflow
the datasets that are used in our experiments are described in table 1.
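the 70/15/15 train/validation/test split described below can be sketched as follows; the shuffle seed is an assumption, used only to make the split reproducible.

```python
import random

def split_dataset(lines, seed=13):
    """shuffle the campaign lines deterministically, then cut them into
    70% training, 15% validation and 15% test subsets."""
    rng = random.Random(seed)
    shuffled = lines[:]               # keep the caller's list intact
    rng.shuffle(shuffled)
    n_train = int(0.70 * len(shuffled))
    n_val = int(0.15 * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset([f"campaign {i}" for i in range(100)])
print(len(train), len(val), len(test))   # 70 15 15
```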
the datasets are split into three subsets: 70% for the training subset, 15% for validation, and 15% for testing. the experimental studies focus on character-level encoding over word-level encoding because character encoding does not suffer from out-of-vocabulary issues, is able to model different and rare morphological variants of a word, and does not require segmentation [7]. b. evaluation metrics the neural networks are trained on the aforementioned datasets, and the evaluation metrics are the loss error and the perplexity (ppl) criterion, in order to judge the performance of the learning models [26]. moreover, the readability and relevance of the generated text are subjectively assessed by human annotators, and readability is also objectively evaluated with statistical properties using a python tool called textstat [21]. text relevance refers to the match between the information inferred from the text and the reader's goal [27]. in other words, text relevance means the match between the generated text and the keywords/domains/campaigns used for generation. the more the inferred information matches the reader's goals, the more relevant the text is to the supplied keywords. in this study, a total of 54 human annotators are hired from amazon mechanical turk to evaluate the generated texts. the annotators are english native speakers eligible to do "human intelligence tasks" (hits)⁴. they are distributed into 18 groups of three participants each. each group is provided with the generated campaigns and the same keywords used in the generation process, and asked to assess the relevance of the text on a two-point rating scale: (r) for relevant and (i) for irrelevant. the majority answer of the three answers is considered the output of the evaluation. readability refers to the ease of understanding the intended meaning of a text. the less complex a text is, and the fewer grammatical and linguistic errors it has, the more readable it is [28].
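perplexity is conventionally the exponential of the average cross-entropy of the model on held-out text; the paper does not spell out its exact formula, so the sketch below shows the standard definition.

```python
import math

def perplexity(token_log_probs):
    """token_log_probs: natural-log probabilities the model assigned to
    each held-out token; lower perplexity means a better fit."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# a model that assigns probability 1/4 to every token has perplexity 4
print(round(perplexity([math.log(0.25)] * 10), 6))   # 4.0
```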
the groups of annotators are also asked to evaluate the generated campaigns and to assess readability on a four-point rating scale (easy, standard, difficult, and confusing). three annotators are asked to assess a given generated text, and the final readability label is determined by voting on their annotations, as shown in table 3.
table 3: votes to determine the readability evaluation result
vote                            result
easy, easy, easy                easy
standard, difficult, confusing  confusing
³ https://github.com/hunkim/word-rnn-tensorflow
⁴ selecting eligible workers, amazon mechanical turk: https://docs.aws.amazon.com/awsmechturk/latest/awsmechanicalturkrequester/selectingeligibleworkers.html
the textstat tool uses the flesch reading ease score (fres) test to assess the overall readability of text based on the flesch reading ease formula [29]. fres is a seven-point difficulty scale, while the human annotators in this study evaluate readability on a four-point scale; table 4 shows the mapping used to convert the fres measure to the four-point scale, to be consistent with the human annotators' evaluations.
table 4: normalizing fres to the corresponding four-point scale
score    difficulty        normalized 4-point scale
90-100   very easy         easy
80-89    easy              easy
70-79    fairly easy       easy
60-69    standard          standard
50-59    fairly difficult  difficult
30-49    difficult         difficult
0-29     very confusing    confusing
iv. experiments and evaluation results the experiments are conducted on a dedicated root server with a minimal ubuntu os. the server has 64 gb of ddr4 ram, a 500 gb ssd, a geforce gtx 1080 graphics card, an intel core i7-6700 quad-core cpu, and a 1 gbit/s network port.
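for reference in the result tables that follow, the table-4 normalization from fres scores to the annotators' four-point scale reduces to simple thresholds:

```python
def fres_to_scale(score):
    # thresholds follow table 4: 70 and above is easy, 60-69 standard,
    # 30-59 difficult, and below 30 confusing
    if score >= 70:
        return "easy"
    if score >= 60:
        return "standard"
    if score >= 30:
        return "difficult"
    return "confusing"

print(fres_to_scale(85), fres_to_scale(64), fres_to_scale(45), fres_to_scale(10))
# easy standard difficult confusing
```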
python 3.5 and tensorflow with gpu support enabled are used to implement the proposed rnn architectures. a series of experiments is conducted to investigate the factors mentioned in section iii (dataset domain, nn architecture, named entity normalization, and input/output sequence level), which affect the quality of the generated texts. the experimental results are presented in the next section. a. dataset domain experimental study to investigate the influence of the dataset domain on the generated text, a total of 102 ads were generated by two shakespeare tf character-level models. the first one is trained on the hulu dataset, and the second one is trained on the booking dataset. table 5 shows the ppl and training loss of the tf shakespeare character-level models trained on the hulu and booking datasets.
table 5: ppl and training loss of tf shakespeare character-level models trained on the hulu and booking datasets
dataset   loss error   ppl   relevance
hulu      0.115        80    99%
booking   0.503        45    99%
the results show that the loss error is 0.115 and 0.503 for the hulu and booking datasets respectively. the ppl values are 80 and 45 for hulu and booking respectively. the results imply that campaigns generated in the booking domain fit better than those generated in the hulu domain. human annotators fully agreed that 99% of the ads generated in the hulu and booking domains are relevant to the provided keywords. table 6 presents the evaluation results for the readability of ads generated by the gru in the hulu and booking dataset domains.
table 6: results of evaluating readability for ads generated by the gru in the hulu and booking domains
dataset   evaluator   easy   standard   difficult   confusing
hulu      human       89%    8%         2%          1%
          textstat    96%    4%         0%          0%
booking   human       98%    0%         0%          2%
          textstat    92%    5%         3%          0%
the results show the percentage readability ratings by human annotators and the textstat tool. in general, it can be noted from the results that the generated texts are mostly readable.
in hulu domain, 89% and 96% are rated as easy to read by human and textstat respectively. in the booking domain, 98% and 92% are rated easy to read by human and textstat respectively. a very small percentage of ads, 8%, 2%, and 1% are rated standard, difficult, and confused vote respectively. figure 3 compares human evaluation and textstat tool evaluation results of readability for the ads, which are generated by the gru neural networks in hulu and booking domains. the results imply that the two methods of evaluation (i.e., human annotators and textstat) are very compatible. also, the gru architecture extremely success in generating easy to read advertisement for both domains. b. neural networks experimental study this study investigates the performance of the gru and lstm neural networks architectures which are implemented by shake-spear tf rnn and tf gru respectively. table 7 presents the training loss and the evaluation relevance of ads, generated by both gru and lstm neural networks using full character level encoding. table 7: percentage of relevant ads and training results (i.e., loss error and ppl) of gru and sltm neural network on booking dataset nn loss ppl relevance gru 0.503 45 99% lstm 0.505 78 68% figure 3: readability text from booking and hulu dataset. atef ahmed, motaz saad, and basem alijla / generating attractive advertisement text campaigns using deep neural networks (2022) 16 we limit the dataset in this experiment to booking dataset (102 ads) because it showed betters results than the movie domain dataset as shown in table 5, and because the main point in this experiment is to compare lstm and gru for text generation. the results of this experiments are shown in table 7. the results show that the training loss of lstm and gru trained on the booking dataset are 0.505 and 0.503 respectively. the loss errors for both are very close to each other’s. 
on the other hand, the ppl results are significantly different (78 and 45 for the lstm and gru respectively). the results imply that ads generated by the gru fit better than the ads generated by the lstm. in addition, the results show that human annotators rated ads generated by the gru as more relevant than the ones generated by the lstm. it can be observed that there is a significant difference in the performance of the lstm and gru in terms of ppl and relevance. table 8 presents the results of evaluating the readability (human and textstat) of ads generated by the gru and lstm at character-level encoding in the booking dataset domain. the results show that the lstm neural network generates 43% and 45% easy-to-read ads as rated by humans and textstat respectively, while the gru generates 98% and 92% easy-to-read ads as rated by humans and textstat respectively. more than 55% of the ads generated by the lstm are rated as difficult or confusing.
table 8: readability of ads generated by the gru and lstm in the booking domain
nn     evaluator   easy   standard   difficult   confusing
gru    human       98%    0%         0%          2%
       textstat    92%    5%         3%          0%
lstm   human       43%    4%         26%         27%
       textstat    45%    25%        30%         0%
the results imply that the human evaluation is compatible with and supports the evaluation results of the textstat tool. in general, the results suggest that the gru network outperforms the lstm network in generating easy-to-read and more relevant ad texts. c. named entity experimental study this experiment investigates the impact of ne normalization on the quality of the generated texts, so ne normalization is applied to the training dataset (booking). three experimental studies are conducted: ne normalization is applied by two different tools, i.e., the geotext tool and a static list, and we compare their impact on the dl model in table 9, which presents the training loss, ppl, and relevance results using ne normalization by the geotext library, the static list, and without ne normalization, for 102 ads.
the texts are generated by the gru model trained on the booking data.
table 9: percentage of relevant ads and training results (i.e., loss error and ppl) of the gru on the booking dataset in three cases of ne normalization
ne library                 loss    ppl   relevance
geotext                    0.437   43    99%
static-ne list             0.468   40    99%
without ne normalization   0.503   45    99%
the results in table 9 suggest that ne normalization has no significant impact on the generated texts. table 10 presents the results of evaluating the readability level of ads generated by the gru nn trained on the booking dataset. the table includes the readability levels of three cases (geotext library, static list, and without ne normalization).
table 10: results of evaluating readability of ads generated by the gru in the booking domain using different normalization techniques
ne normalization      evaluator   easy   standard   difficult   confusing
geotext               human       98%    2%         0%          0%
                      textstat    77%    12%        11%         0%
static-list           human       95%    2%         1%          2%
                      textstat    77%    14%        9%          0%
no ne normalization   human       98%    0%         0%          2%
                      textstat    92%    5%         3%          0%
the results show that, in the case of applying ne normalization with the geotext tool, humans rated 98% of ads as easy to read and 2% as standard, while textstat rated 77%, 12%, and 11% of ads as easy to read, standard, and difficult respectively. in the case of performing ne normalization using the static list, humans rated 95% of ads as easy to read, 2% as standard, 1% as difficult, and 2% as confusing. textstat rated 77%, 14%, and 9% of the generated ads as easy to read, standard, and difficult respectively. the results in table 10 suggest that the application of ne normalization does not influence the human evaluation of either readability or relevance, while the textstat evaluation is negatively affected: the percentage of easy-to-read ads degrades from 92% to 77%, and the percentages of standard and difficult-to-read ads increase to 12% and 11% respectively. d.
input/output text sequence experimental study this experiment investigates the influence of the encoding level on the performance of the lstm neural network. the experiment is limited to the lstm because the shakespeare tf implementation only supports character encoding, while the rnn tf implementation supports both character-level and word-level encoding. table 11 presents the training loss, ppl, and relevance of ads generated by the lstm network trained on the booking dataset with both character-level and word-level encoding.
table 11: percentage of relevant ads and training results (i.e., loss error and ppl) of the lstm on the booking dataset for character-level and word-level encoding
encoding level   loss    ppl   relevance
character        0.505   78    83%
word             1.232   86    89%
it can be noted from the training loss and ppl results in the table that the results of the lstm with character-level encoding are better than those of the lstm with word-level encoding. on the other hand, the word level is more relevant than the character level, because the text generated using the character-level scheme has some words with typos. the results also suggest that the character-level model generates text that fits the target text, while the word-level model generates more relevant texts. table 12 presents the results of evaluating the readability of ads generated by the lstm with character-level and word-level encoding. the human and textstat tool evaluation results are presented. table 12: results of evaluating readability of ads generated by the lstm with character- and word-level encoding in the booking domain.
encoding level  evaluator  easy  standard  difficult  confused vote
character       human      47%   6%        12%        35%
                textstat   43%   29%       28%        0%
word            human      96%   4%        1%         9%
                textstat   67%   14%       19%        0%

the results show that 47% and 35% of the ads generated with character-level encoding are rated by humans as easy to read and confused vote respectively, while textstat evaluates 43%, 29% and 28% of them as easy to read, standard and difficult respectively. with word-level encoding, 96% of ads are rated by humans as easy to read and 9% as confused, while textstat evaluates 67% as easy to read, 14% as standard and 19% as difficult. the results show that word-level lstm performs better than character-level lstm in the booking domain. the results in table 12 also suggest that the word-level scheme is better than the character-level one because the texts generated at the character level contain typos / spelling errors. the results in tables 11 and 12 support this conclusion.

v. conclusion

this research proposed the application of gru and lstm deep neural networks to generating advertisement text campaigns. two dataset domains, i.e., hotel booking and tv and movie streaming, are included. preprocessing, including normalization and named entity processing, is performed to reduce the number of strange names and prepare the datasets for machine learning. the main contribution of this research is to investigate the influence of four factors, namely neural network architecture, dataset domain, ne normalization, and input encoding (character / word level), on generating ads. the readability of the generated ads is subjectively evaluated by human annotators and objectively assessed with the textstat tool, whereas relevancy is evaluated only by human annotators. the implication is that several factors could be tuned to improve the performance of neural networks in generating attractive ads. several experiments have been conducted to investigate the impact of the factors mentioned above.
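the readability evaluation used throughout these experiments can be approximated with a small flesch reading ease scorer. this is only a simplified sketch of what the textstat tool computes: the vowel-group syllable counter and the easy / standard / difficult thresholds below are assumptions for illustration, not the exact settings used by textstat or by the authors.

```python
import re

def count_syllables(word):
    # crude heuristic: one syllable per group of consecutive vowels
    # (textstat uses a more careful syllable counter)
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # classic flesch formula: higher score means easier to read
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def readability_band(score):
    # hypothetical banding used here to bucket ads, mirroring the
    # easy / standard / difficult labels in tables 10 and 12
    if score >= 70:
        return "easy"
    if score >= 50:
        return "standard"
    return "difficult"
```

short, common-word ad copy scores high and lands in the "easy" band, which matches the pattern of the human and textstat ratings reported above.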
an investigation has been conducted to determine the influence of each factor on the quality of the generated text. in general, the results indicate that the gru networks outperform the lstm networks in generating easy-to-read ad campaigns. in addition, training the gru nn on the booking domain gives better performance compared to the hulu dataset domain. it can be concluded from the results that the collected data, the dataset domain and the input/output encoding level are the factors that most influence the quality of the generated texts. for future work, generating advertisement campaigns in the arabic language will be investigated. more experiments on other dataset domains, including brands and shopping products, need to be conducted too. besides that, investigations pertaining to generating multiple ad campaigns for every keyword in diverse ads are required.

references

1. handayani, w., s. muljaningsih, and h. ardyanfitri, online marketing supports promotion and advertising sales in communities dolly localization. 2018. 2(1): p. 75-87.
2. eurostat, internet advertising of businesses - statistics on usage of ads. statistics explained, 2018.
3. iab, 2019 internet ad revenue report. 2020.
4. kong, s., et al., web advertisement effectiveness evaluation: attention and memory. 2019. 25(1): p. 130-146.
5. bhatia, v. and v. hasija, targeted advertising using behavioural data and social data mining. in 2016 eighth international conference on ubiquitous and future networks (icufn). 2016. ieee.
6. bartz, k., c. barr, and a. aijaz, natural language generation for sponsored-search advertisements. in proceedings of the 9th acm conference on electronic commerce. 2008.
7. lee, j., k. cho, and t. hofmann, fully character-level neural machine translation without explicit segmentation. 2017. 5: p. 365-378.
8. singh, s.p., et al., machine translation using deep learning: an overview. in 2017 international conference on computer, communications and electronics (comptelix). 2017. ieee.
9. allahyari, m., et al., text summarization techniques: a brief survey. 2017.
10. el-kassas, w.s., et al., automatic text summarization: a comprehensive survey. 2021. 165: p. 113679.
11. iqbal, t. and s. qureshi, the survey: text generation models in deep learning. 2020.
12. chan, z., et al., selection and generation: learning towards multi-product advertisement post generation. in proceedings of the 2020 conference on empirical methods in natural language processing (emnlp). 2020.
13. hughes, j.w., k.-h. chang, and r. zhang, generating better search engine text advertisements with deep reinforcement learning. in proceedings of the 25th acm sigkdd international conference on knowledge discovery & data mining. 2019.
14. taneja, p. and k.g. verma, text generation using different recurrent neural networks. 2017.
15. yang, x., et al., advertising keyword recommendation based on supervised link prediction in multi-relational network. in proceedings of the 26th international conference on world wide web companion. 2017.
16. hoobyar, t., t. dotz, and s. sanders, nlp: the essential guide to neuro-linguistic programming. 2013: william morrow.
17. yang, z., et al., review networks for caption generation. 2016. 29: p. 2361-2369.
18. xiang, l., et al., novel linguistic steganography based on character-level text generation. 2020. 8(9): p. 1558.
19. mishra, s., et al., learning to create better ads: generation and ranking approaches for ad creative refinement. in proceedings of the 29th acm international conference on information & knowledge management. 2020.
20. duan, s., et al., query-variant advertisement text generation with association knowledge. in proceedings of the 30th acm international conference on information & knowledge management. 2021.
21. bansal, s., textstat python tool.
2018 [cited 8/9/2021]; available from: https://github.com/shivam5992/textstat.
22. vinayak, s., semrush toolkit for bloggers: 8 tools to boost blog content for more traffic and revenue. 2021.
23. golikova, d.m., named entities for computational linguistics. 2018. 15(1): p. 207-215.
24. hu, y., geo-text data and data-driven geospatial semantics. 2018. 12(11): p. e12404.
25. palenzuela, y.m., geotext. 2014 [cited 6-9-2021]; available from: https://geotext.readthedocs.io/en/latest/readme.html.
26. klakow, d. and j. peters, testing the correlation of word error rate and perplexity. 2002. 38(1-2): p. 19-28.
27. mccrudden, m.t., g. schraw, and b. hoffman, text relevance, in encyclopedia of the sciences of learning, n.m. seel, editor. 2012, springer us: boston, ma. p. 3307-3310.
28. deutsch, t., m. jasbi, and s. shieber, linguistic features for readability assessment. 2020.
29. farr, j.n., j.j. jenkins, and d.g. paterson, simplification of flesch reading ease formula. 1951. 35(5): p. 333.

atef ahmed holds a master's degree in data science from the islamic university of gaza, and he is a software engineer.

motaz saad is a computer scientist; he holds a ph.d. degree in computer science from the university of lorraine, france. his research interests include ai, nlp, and machine learning, and he has published several papers in the field. he is currently working as an assistant professor at the islamic university of gaza, palestine.

basem o. alijla received the ph.d. degree in intelligent systems from universiti sains malaysia (usm) in 2015. he is currently an assistant professor in computer science and deputy dean of the faculty of information technology, islamic university of gaza. he has published several research papers in high-impact-factor journals and international conferences. his research interests include evolutionary computing, optimization, machine learning, data mining, and feature extraction and selection.
journal of engineering research and technology, volume 8, issue 2, september 2021

analysis of injury potential of three-year-old child occupants caused by inappropriate installation of enhanced child restraint system with top tether in frontal crash accidents

zhe wei

https://doi.org/10.33976/jert.8.2/2021/2

abstract—objective: frontal crash accidents remain a significant cause of preventable injury and fatality for child occupants aged 3 in china. despite increased public awareness and utilization of the child restraint system (crs), inappropriate installations still exist and can result in injuries to the head, thorax and abdomen of child occupants, especially in the case of the enhanced child restraint system (ecrs) with top tether. the current study focuses on the influence of the top tether upon the safety performance of ecrs with top tether in dynamic tests with different set-ups and explores the relationship between inappropriate installation of ecrs with top tether and the injury potential of child occupants aged 3 in a frontal crash. methods: a testing scheme comprising 4 dynamic tests was devised to ascertain the extent to which the top tether affected the thorax accelerations, the abdominal penetration and the head displacements. different kinds of acceleration curves were employed to conduct the tests and to simulate the real status and situation of child occupants aged 3 in the crs installed with and without the top tether respectively. parameters of accelerations, abdominal penetrations, and head displacements were measured to quantitatively analyze the influence of inappropriate installations of ecrs with top tether under different conditions.
results: the safety performance of ecrs with the top tether in use was found to be better than that of ecrs without it, both in the normal condition and in the extreme condition. in the test using the acceleration curves defined by the regulations, the thorax accelerations, abdominal penetrations, and head displacements of the p3 manikin in the ecrs with the top tether connected to the anchor point all met the requirements. in the test using acceleration curves of the same kind but with the top tether not connected, the measured parameters showed that the safety performance of the sample was worse than in the former test. as for the tests using the more severe, arbitrarily defined acceleration curves, it was even more obvious that the top tether could greatly affect the function and safety performance of the ecrs: functional failure and severe damage occurred to the ecrs without the top tether, while the ecrs with the top tether was partly qualified even under the more severe conditions. conclusions: inappropriate installation of ecrs, such as omitting the step of connecting the top tether to the anchor point, could cause severe injuries and fatalities in frontal crash accidents. effective measures should be taken to minimize the chances of inappropriate installation of ecrs.

index terms—child occupants; child restraint system; top tether; frontal crash; passive safety

i introduction

with the development of the economy and the construction of infrastructure such as highways and expressways, car ownership is increasing rapidly around the world, leading to more and more crash accidents, in which drivers and passengers suffer many kinds of injuries directly caused by crashes.
in order to reduce or minimize injuries and fatalities in automotive crash accidents, the seatbelt was invented to protect occupants about 60 years ago, and in the 1960s came another invention, the child restraint system (crs), a passive safety device of a different kind. child passenger safety is an integral part of child safety [1]. the main aim of the crs is to create a well-anchored seat like that of an adult, safeguarding children to the maximum degree possible in the event of a collision or of abrupt deceleration of the vehicle by limiting the mobility of the child's body. the crs is oriented to the good protection of child occupants, including babies, infants, toddlers and older children, in automotive crashes, since children are more vulnerable to injuries than adults during the collision process [2]. protecting and improving the safety of children is of fundamental importance. over the past several decades, dramatic progress has been made in improving the safety and reducing the mortality rate of young child occupants [3, 4]. however, it is still too early to conclude that the problem of child-related traffic safety has been solved. automotive crashes remain, to the present day, one of the leading causes of preventable injury and death for children from newborn to 10 years of age, despite advancements in legislation and public awareness [5, 6]. it is even more urgent to take effective measures to control infant and child mortality brought about by traffic accidents, especially in developing countries [7, 8, 9]. a lack of legislation and technical standards, accompanied by little knowledge about crs and child safety, has become a big threat to child occupants of vehicles in some places [10, 11].
in fact, crs legislation is proven effective in reducing motor-vehicle-related serious injuries and fatalities [12, 13, 14]. undoubtedly, what child occupants and their care-givers can rely upon to prevent possible injuries must be the crs they are using, because the uncertainty of risks in a severe traffic accident makes it impossible for an adult to react in time and take timely measures in most cases [15]. to a great extent, the effectiveness of a crs depends upon the correct installation of the crs in the vehicle, the appropriate restraining of the child in the crs, and the appropriate use of the crs [6]. among the factors that closely influence child occupant safety, the correctness of crs installation is often neglected or not achieved, because there are various types of crs and different installation methods involving seatbelts, isofix and latch — devices that seem easy to use but invite careless manipulation by the user, regardless of whether the devices designed to restrain the crs are used in the right way [16, 17]. when the crs is equipped with isofix and a top tether, the problem mentioned above is even more obvious because of many factors [18]. the top tether is not the only anti-rotation device used for crs, yet it is more inclined to be neglected than other devices aiming at the same goal, such as the support leg, because it is usually installed in an inconspicuous position and many drivers do not even know about it or how to use it when they install a crs with a top tether [19]. as to the enhanced child restraint system used on board motor vehicles (ecrs), the validity of the installation procedure is of great importance. the ecrs is one kind of crs and has salient features different from general crs's.
since a crs equipped with the isofix attachment seems easy and convenient to install and use, it has become more and more popular, promoting the modification and improvement of the product itself, accompanied by continuous revisions and amendments of technical regulations and standards. once installed inappropriately, a crs will not bring safety and reliability but actually becomes a source of danger [20]. accordingly, some research works oriented to the correct installation or use of crs have been carried out [21, 22, 23]. regulations and standards related to crs, nevertheless, serve as an important basis for judging whether or not the installation correctness of the safety device has been ensured. in europe, the united nations economic commission for europe (unece) is responsible for the development of vehicle regulations, including un regulation no. 44 and no. 129, which are specific to the requirements and testing methods of crs. in china, the national standard for crs is gb 27887-2011, whose requirements and testing methods for dynamic tests are the same as those of un regulation no. 44. however, un regulation no. 44 and gb 27887-2011 both undergo continuous and gradual adjustments. from 1st september 2020, no new approvals shall be granted under un regulation no. 44 to child restraint systems other than group 3, and from 1st september 2022, no extensions shall be granted under this regulation to child restraint systems other than group 3. obviously, more and more approvals will be granted under regulation no. 129. gb 27887-2011 will be revised similarly to incorporate the same requirements and testing methods for dynamic tests as regulation no. 129. therefore, whether in china or in europe, frontal impact, rear impact and lateral impact tests should all be carried out as part of crs dynamic tests in the near future.
the test procedure specified by the regulation provides an ideal methodology to reproduce or simulate the abrupt deceleration of a vehicle during a crash accident. this paper studies the injury potential of child occupants in a frontal crash, so the frontal-impact deceleration curves of the vehicle must be reproduced by means of a trolley system. trolley systems can be classified into the deceleration type and the acceleration type, producing deceleration pulses and acceleration pulses respectively. it is better to choose the acceleration type, for a trolley of this kind can produce the required acceleration curves with higher precision and repeatability [24, 25]. meanwhile, the deceleration or acceleration curves defined in the regulation can be adopted to carry out the frontal crash test. in order to ensure universality, other proper deceleration or acceleration curves should also be defined arbitrarily for the dynamic test. the parts of the child occupant's body that are prone to injuries include but are not limited to the head, the thorax and the abdomen, which contain all the important organs. most fatalities are caused by injuries to these parts. according to findlay, melucci, dombrovskiy, pierre and lee [27], head/neck injuries are most common for child occupants in all age groups after motor vehicle crashes. actually, the neck or whiplash injury could be avoided when the head is protected within the safety range, and the head injury is more fatal. if the child occupant's head moves too far from its original place in a crash accident, it may crash into the back of the front seat or other rigid objects and get severely wounded.
it is suitable to control the head's displacement because the displacement of the head is bigger than that of the thorax in a frontal crash. as to the thorax, accelerations in 3 dimensions could be the proper criteria to determine whether the child occupant is safe in the crash, for the internal organs in the thorax are quite sensitive to acceleration, and there are limits to the acceleration to which a human organism can be exposed. the abdomen, as a soft and weak part of the body, should not be subjected to excessive stresses. meanwhile, the "cushion effect," a phenomenon in which obesity protects against abdominal injury in adults in motor vehicle accidents, is not apparent among pediatric frontal motor vehicle crash victims [28]. moreover, the internal and external structures of the child abdomen facilitate penetration of rigid and sharp objects into the soft tissues and organs of the human body, so the abdominal penetration should be taken into consideration for safety evaluation. considerable experimental and numerical research has been conducted to increase the crs usage rate, to improve public awareness, to optimize the safety and effectiveness of crs and so on, with comparatively less emphasis placed on quantitatively assessing, by means of experiments, the relationship between the injury potential of child occupants and the installation of crs [29, 30, 31, 32]. hence, it is necessary to devise a proper scheme for the purpose of discerning and clarifying the relationship mentioned above. furthermore, how to verify the injury potential of child occupants caused by inappropriate installation of a crs of a certain kind, and how to explore ways to avoid injuries, deserve deep research. as computer science and computational technology develop rapidly, numerical simulation is more and more widely used in the field of traffic accident analysis [33, 34, 35].
but the methodology of numerical simulation has its own shortcomings and cannot reproduce the crash process exactly, for it is confined to algorithms and ideal conditions. instead, the experimental methodology can incorporate more factors into the process and be used for the validation of numerical simulation [36]. free from such limitations, the experimental methodology is the better way to reproduce the crash process. this paper aims to examine, by means of the experimental methodology, the extent to which the top tether, which is often neglected and leads to inappropriate installations of crs, affects the injuries of child occupants aged 3. in order to obtain the useful information, a testing scheme was devised and 4 tests with different set-ups were conducted. by comparison of the test results, the influence of the most common inappropriate installation on the ability of a crs to protect child occupants can be discerned. this ability is demonstrated by the safety performance of the crs and linked directly with the injury potential of child occupants.

ii methodology

in order to ensure objectivity and universality and to reflect the real status of crs quality, four crs samples were bought randomly from a local market in china according to the requirements that the product should be a forward-facing isofix crs with top tether and belong to the group i restraints suitable for 3-year-old child occupants. a testing scheme was devised to ascertain the relationship between injuries of child occupants aged 3 and the top tether of an isofix crs in a frontal crash accident, for the most common incorrect or inappropriate installation lies in forgetting to connect the top tether to the anchorage. the crash tests were conducted using the existing un regulation no. 44 fixture and the p series 3-year (p3) manikin by means of a trolley of the acceleration type.
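a trolley pulse stands in for the vehicle deceleration through its integrated value, i.e. the speed change it imposes. a minimal sketch of that check under simple assumptions (trapezoidal integration, pulse sampled in g at a fixed time step; the sample pulse in the usage note is hypothetical):

```python
G = 9.81  # standard gravity, m/s^2

def delta_v(accel_g, dt):
    """speed change (m/s) from an acceleration pulse.

    accel_g: pulse samples in units of g
    dt: sampling interval in seconds
    uses trapezoidal integration over the whole pulse.
    """
    a = [x * G for x in accel_g]  # convert to m/s^2
    return sum((a[i] + a[i + 1]) / 2.0 * dt for i in range(len(a) - 1))
```

for example, a constant 1 g pulse held for one second integrates to about 9.81 m/s of speed change; a regulation-style frontal pulse would be checked the same way against the corridor's prescribed delta-v range.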
compared with a trolley of the deceleration type, the acceleration type can ensure the consistency and precision of the pulses with higher efficiency. the acceleration curves were not limited to the regulation, i.e. both the frontal impact pulses defined by un regulation no. 44 and arbitrarily defined pulses were adopted. these pulses can be produced and adjusted by the trolley system to simulate and resemble the vehicle deceleration in a frontal crash. in total, two kinds of acceleration pulses produced by the trolley system were used to carry out the crash tests, one kind representing the typical abrupt deceleration of the vehicle in the form of the pulse specified by the regulation, and the other kind representing the deceleration in extreme conditions.

a test preparation

four independent tests were carried out, in which an accelerometer, a high-speed camera and modelling clay were used respectively to obtain the necessary information on acceleration, displacement and abdominal penetration. a sample of modelling clay was vertically attached to the front of the lumbar vertebrae by means of thin adhesive tape and gave an indication of abdominal penetration. in the tests, the modelling clay samples were of the same length and width as the lumbar spinal column, with a thickness of 25±2 mm. the accelerations of the trolley and the manikin underwent cfc 60 and cfc 180 signal filtering respectively [37, 38]. the channel frequency classes designated by the numbers 60 and 180, i.e., cfc 60 and cfc 180, indicate that the channel frequency responses lie within specified limits or are filtered using specified algorithms. the comparability of testing results and the signal-to-noise ratio were improved by processing the signals in this way.
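the cfc 60 / cfc 180 filtering mentioned above can be approximated in software. the sketch below assumes the common sae j211-style realization — a 2nd-order butterworth low-pass run forward and backward (phaseless, 4 poles overall) with a -3 db cutoff of about 5/3 of the cfc value, i.e. roughly 100 hz for cfc 60 and 300 hz for cfc 180 — rather than the exact algorithm used by the test laboratory.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cfc_filter(signal, cfc, fs):
    """approximate sae j211 channel-frequency-class filtering.

    signal: raw samples (e.g. chest accelerations in g)
    cfc: channel frequency class (60 for the trolley, 180 for the manikin)
    fs: sampling rate in hz
    """
    fc = cfc * 5.0 / 3.0              # -3 db cutoff in hz (assumed mapping)
    b, a = butter(2, fc / (fs / 2.0)) # 2nd-order butterworth, normalized cutoff
    return filtfilt(b, a, np.asarray(signal, dtype=float))  # zero-phase
```

running the filter forward and backward cancels the phase lag, so filtered peaks stay time-aligned with the raw signal, which matters when comparing channels across tests.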
as shown in fig.1, the test was set up by first placing the p3 manikin so that there was a gap between the rear of the manikin and the crs; the hinged board, 2.5 cm thick, 6 cm wide and of length equal to the shoulder height, was then placed to follow as closely as possible the curvature of the restraint. secondly, the belt used to restrain the manikin was pulled to a tension of 275n with a deflection angle of 45°. as tabulated in table 1, the devised testing scheme contains four tests under different conditions and set-ups. the crs was installed on the test seat of the un regulation no. 44 fixture in accordance with the user manual, and the hinged board was removed. after completing this procedure, the isofix and top tether were both connected to their anchorages in the first and third tests, while only the isofix was connected in the second and fourth tests. in the first and second tests, the acceleration pulse defined by the regulation was produced. in the third and fourth tests, an arbitrarily defined acceleration pulse, whose integrated value was set in a proper range, was used. the preparation procedure in each test was fixed except for the way in which the crs was connected to the test seat and the acceleration pulse adopted.

fig. 1. test setup: p3 manikin restrained in the crs ready for test. (a) installation finished; (b) the hinged board removed before a test

table 1 testing scheme, conditions and the set-ups

b injury requirements

according to the extent to which the human body can endure acceleration and the related regulation's specification, the resultant chest acceleration of the child shall not exceed 55g except during periods whose sum does not exceed 3 ms.
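the "except during periods whose sum does not exceed 3 ms" rule can be checked directly on the sampled chest channels. a minimal sketch, assuming a cumulative-exceedance reading of the rule and uniformly sampled data (function names are illustrative, not from the regulation text):

```python
import math

def resultant(ax, ay, az):
    # resultant acceleration from the three chest channels, sample by sample
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def passes_3ms_criterion(accel_g, dt, limit_g):
    """true if the signal exceeds limit_g for at most 3 ms in total.

    accel_g: filtered acceleration samples in g
    dt: sampling interval in seconds
    limit_g: e.g. 55 for the resultant chest acceleration,
             30 for the vertical (abdomen-to-head) component
    """
    time_over = sum(dt for a in accel_g if a > limit_g)
    return time_over <= 0.003
```

at a 10 khz sampling rate, for instance, more than 30 samples above the limit would already fail the criterion.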
the vertical component of the acceleration from the abdomen towards the head shall not exceed 30g except during periods whose sum does not exceed 3 ms [38, 39, 40]. once these requirements are met, the child occupant can, theoretically, be protected and injuries to the organs in the thorax avoided. as for the requirements on abdominal penetration, there shall be no visible signs of penetration of the modelling clay of the abdomen caused by any part of the restraining device. once penetration occurs, it means the abdomen of the child occupant would have suffered severe injury in a real crash accident. one example of such a penetration is illustrated in fig.2. the requirements on the manikin's head displacement are related to the type of crs and the group to which the crs belongs. actually, most regulations' specifications on the displacement of the manikin's head are almost the same, especially in the values of displacement, and these specifications can be used as the norms to evaluate whether the displacement values have exceeded the range of safety. as a summary of the specifications mentioned, a figure for fast evaluation of displacement was devised in this paper and is depicted in fig. 3 [38, 39]. as can be seen in fig.3, the heavy line represents the evaluation boundary between forward facing crs and rearward facing crs, and the displacement boundary as well. the horizontal and vertical continuous thin lines indicate the displacement boundary that the head of the manikin should not exceed. the slant line stands for the common boundary of displacement. line cr is defined in un regulation no. 44; it is the origin for measurement, and the measurement unit in the figure is mm. in space, line cr is coincident with the intersection line between the top plane of the seat on which the crs is put and restrained and the front plane of the seat back. in the four tests, the crs used belongs to group i, so the values of the front boundary and the upper boundary should be 500mm and 800mm respectively.

test no.  crs installation method  parameters measured                                          acceleration pulse
1         isofix + top tether      head displacement, thorax acceleration, abdomen penetration  frontal impact in un regulation no. 44
2         isofix                   ditto                                                        frontal impact in un regulation no. 44
3         isofix + top tether      ditto                                                        non-regulation
4         isofix                   ditto                                                        non-regulation

fig. 2. abdominal penetration of p3 manikin: (a) p3 manikin; (b) clay in the abdomen of p3 manikin

fig. 3. fast evaluation of displacement of manikin's head based on (wp.29, 2013; wp.29, 2014) regulations [38, 39]

iii results

after test preparation, the four tests were carried out sequentially using the acceleration-type trolley according to the testing scheme. given the target curve of the acceleration, the trolley produced the actual pulses in the tests. the actual pulse could be deemed the equivalent of the deceleration that the vehicle would undergo in a frontal crash accident. as depicted in fig.4, the two acceleration pulses produced by the trolley according to un regulation no. 44 in the dynamic tests met the requirement of the regulation. the pulses in fig.5 produced by the trolley did not necessarily meet any requirement of the regulations related to the safety of crs, but could be seen as a proper simulation of a frontal crash too. the target curves employed in the third and fourth tests were arbitrarily defined and used as the guide for the trolley to produce the actual pulses shown in fig.5. the peak accelerations can be 75g or greater in magnitude, and the pulses themselves simulate automotive crashes under extreme conditions.
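the group i head-excursion limits quoted above (500 mm forward and 800 mm upward, measured from line cr) can be screened with a small helper. this is an illustrative sketch of the pass/fail check only, not the regulation's full measurement procedure:

```python
def head_within_limits(x_mm, z_mm, front_limit_mm=500.0, up_limit_mm=800.0):
    """true if the peak head excursion stays inside the group i envelope.

    x_mm: forward head displacements from line cr over the test, in mm
    z_mm: upward head displacements from line cr over the test, in mm
    the default limits are the group i forward-facing values cited above.
    """
    return max(x_mm) <= front_limit_mm and max(z_mm) <= up_limit_mm
```

a test whose peak forward excursion reaches, say, 520 mm would fail on the front boundary even if the upward excursion stays well inside 800 mm.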
besides, a crs tends to fail to work when the worst happens, so to some extent it is more severe, representative and objective to conduct the testing scheme using pulses of this kind as the simulation of the real accelerations. the accelerations of the manikin's chest measured when the isofix attachment and the top tether are both connected, and when only the isofix attachment is connected, differ greatly in magnitude, for the lack of the upper fastening point easily causes an unstable status of the crs. the differences in the x, y and z directions between the comparative tests, and the comparison of the resultant accelerations, are illustrated in fig.6, fig.7, fig.8 and fig.9 respectively; these are the results of the dynamic tests carried out according to the related regulations. in addition to the differences between chest accelerations, it is also noteworthy that slight visible signs of abdominal penetration are found and the head displacements are very close to the limits in the second test; such phenomena are usually brought about by the unstable status of the crs in a dynamic test. however, the third and fourth tests, which were not conducted according to the related regulations and whose acceleration curves were arbitrarily set and more severe than the ones specified by the regulations, returned completely different results. in the third dynamic test, the resultant acceleration and the vertical acceleration both satisfy the requirements and the test results are partly qualified due to the good protection of the crs for the child occupant, as shown in fig.10, although the test was set under severe conditions. therefore, it can be inferred that the appropriate installation of the crs is important to improving safety. the top tether of the ecrs works as an important anti-rotation device and provides the third fastening point for ecrs installation, thus ensuring the stable status in which the child occupant can get the necessary protection in a crash.
Without the upper fastening point provided by the tether, damage almost inevitably occurs to the CRS under extreme or severe conditions and renders it ineffective. Fig. 11 displays the CRS damaged in the fourth dynamic test. In fact, the damage was caused by the lack of the upper fastening point provided by the top tether and the consequent large bending moment. The CRS did not fall off the trolley only because of the belts that had been fastened in advance to protect the manikin and the accelerometer. Detailed test results and values are given in Table 2. The results of the first and second tests are both largely qualified, while in the third test the head displacement exceeds the permitted range even though the accelerations meet the requirements. In the fourth test, however, the CRS fails completely and no criterion is satisfied.

Fig. 4. Acceleration pulses of the trolley employed in the first and second tests.
Fig. 5. Acceleration pulses of the trolley employed in the third and fourth tests.
Fig. 6. Chest accelerations of the manikin in the x direction in the first and second tests.
Fig. 7. Chest accelerations of the manikin in the y direction in the first and second tests.
Fig. 8. Chest accelerations of the manikin in the z direction in the first and second tests.
Fig. 9. Resultant accelerations of the manikin's chest in the first and second tests.
Fig. 10. Chest accelerations of the manikin in the third test.
Fig. 11.
Damaged ECRS in the fourth test.

Table 2. Testing results.

IV Discussion

The results of the dynamic tests show that the top tether, as the anti-rotation device, plays an important role in improving the safety performance of an ECRS in frontal crash accidents. Thorax accelerations, abdominal penetration and head displacements of child occupants are directly related to the installation method. According to this research, inappropriate installation, of which neglect or wrong use of the top tether is the most common form in daily use, can impair the ability of the ECRS to resist the abrupt deceleration and to absorb the dynamic energy that would otherwise be transmitted to the child occupant and cause injuries during a frontal vehicle crash. Peak accelerations of the child occupant's thorax tend to decrease with proper use of the top tether in a crash; as the additional fastening point, the tether effectively restrains the CRS from the rotation and deformation that usually subject the child to large accelerations and abdominal penetration and push head displacements beyond the permitted range, as displayed in Fig. 6 to Fig. 11. Hence, it is recommended that the top tether be used properly regardless of the magnitude of the vehicle deceleration in a crash accident. Further research is needed to stipulate preventive measures against inappropriate installation of any kind. Legislation, standards, specifications and technical regulations should be amended and revised in time to adapt to new patterns in CRS use. Being a significant factor in resisting the rotation of an ECRS during a frontal crash, the top tether connection should not be ignored. Steps could be taken to increase public awareness of these issues, thereby reducing the chances of inappropriate installation of an ECRS with top tether.
V Conclusion

By devising the testing scheme and comparing the results of four dynamic tests conducted in different experimental set-ups, it can be concluded that the top tether affects the thorax accelerations, abdominal penetration and head displacements of child occupants aged 3. Proper use of the top tether during installation can reduce the risk of functional failure of the CRS and enhance its ability to satisfy the safety criteria, thus improving its safety performance. The additional fastening point provided by the top tether is necessary and ensures that the CRS works normally and remains stable during a frontal crash. In theory, appropriate installation of an ECRS with top tether brings less injury potential, whereas inappropriate installation not only gives rise to injury potential but may also cause severe damage to the CRS itself under extreme conditions. Therefore, connecting the top tether to the anchor point should be regarded as an essential prerequisite before seating a child in the CRS. With the emergence of ECRS with anti-rotation devices such as the top tether, the convenience, comfort and safety benefits of CRS have become more apparent, giving products of this kind a growing tendency to replace traditional seatbelt-installed CRS. Recently, tremendous progress has been made in CRS design, manufacturing, utilization, standardization and testing, especially in Europe and North America, whose technical standards related to passive safety are in the lead and can serve as a good reference when formulating standards of the same kind [4, 38, 39, 41].

Test no. | Peak acceleration of thorax          | Abdominal penetration                | Head displacement
1        | resultant: 15.0 g; vertical: -14.7 g | no visible signs of penetration      | front: 382 mm; upper: 612 mm
2        | resultant: 18.5 g; vertical: -17.7 g | slight visible signs of penetration  | front: 495 mm; upper: 767 mm
3        | resultant: 27.3 g; vertical: -26.9 g | slight visible signs of penetration  | front: 596 mm; upper: 833 mm
4        | CRS failure                          | CRS failure                          | CRS failure

China has a larger number of children aged from newborn to 10 than most other countries, yet CRS usage remains low because of many factors, among which the complexity of CRS installation is a noteworthy aspect, related to some extent with wrong use of the CRS [42]. Meanwhile, inappropriate installation is also widespread in China and consequently leads to many injuries and fatalities that should have been avoided. Measures should be taken to tackle the problem of inappropriate CRS installation and to make it easy to check the status of a CRS in use at any time before and during driving. On the one hand, improving public awareness and the mechanisms for providing safety information to the public can help minimize inappropriate installation of CRS and fundamentally ameliorate the situation in which the step of connecting the top tether to the anchor is omitted, consciously or unconsciously. On the other hand, technical advances in industry support the emergence of special devices used to prevent neglect or inappropriate use of passive safety devices, which include the CRS as well as other mechanical and electrical mechanisms, e.g. seatbelt, airbag and seat.
Various devices, such as the seatbelt reminder used to remind occupants to fasten the seatbelt, have long been used and have proved effective in discerning and preventing incorrect use of safety devices, e.g. failure to connect the tongue to the buckle; similar devices can be designed and used to achieve the same effect for CRS installation [43]. Besides, optimization of the CRS structure and a proper layout of its parts can also be considered as ways to eliminate the risks of inappropriate installation [44, 45]. An efficient joint-action mechanism of industry, research institutions, universities, inspection organizations, government and end users should be established to build a system in which any risk related to CRS design, manufacturing, inspection and use can be considered, discerned and avoided at its source. Finally, in view of the injury potential for child occupants aged 3 brought about by inappropriate installation of an ECRS with top tether in a frontal crash accident, all aspects involved in the process should be emphasized. The influence of the top tether on the ability of the CRS to protect child occupants from injuries has been ascertained through a testing scheme comprising four comparison tests; targeted methods can then be devised to deal with follow-up issues.

References

[1] e. p. elliott, a. c. hariramani, and j. ansiaux, “child passenger safety,” physician assist. clin., vol. 1, no. 4, pp. 525–540, 2016, doi: 10.1016/j.cpha.2016.05.001. [2] s. s. m. chang, r. c. a. symons, and j. ozanne-smith, “child road traffic injury mortality in victoria, australia (0–14 years), the need for targeted action,” injury, vol. 49, no. 3, pp. 604–612, 2018, doi: 10.1016/j.injury.2017.12.018. [3] t. kuska and c. rush, “issues and trends in child passenger safety in the united states and canada,” int. j. trauma nurs., vol. 7, no. 4, pp. 137–141, 2001, doi: 10.1067/mtn.2001.118969. [4] m. roynard, p. silverans, y. casteels, and p.
lesire, “national roadside survey of child restraint system use in belgium,” accid. anal. prev., vol. 62, pp. 369–376, 2014, doi: 10.1016/j.aap.2013.08.021. [5] a. akhavan rezayat et al., “child injury mortality in iran: a systematic review and meta-analysis,” j. transp. heal., vol. 16, p. 100816, 2020, doi: 10.1016/j.jth.2019.100816. [6] g. lee, c. n. pope, a. nwosu, l. b. mckenzie, and m. zhu, “child passenger fatality: child restraint system usage and contributing factors among the youngest passengers from 2011 to 2015,” j. safety res., vol. 70, pp. 33– 38, 2019, doi: 10.1016/j.jsr.2019.04.001. [7] s. bendak and k. alkhaledi, “child restraint system use in the united arab emirates,” transp. res. part f traffic psychol. behav., vol. 51, pp. 65 – 72, 2017, doi: 10.1016/j.trf.2017.09.001. [8] a. chaudhry, i. sanaullah, b. z. malik, and a. a. klair, “an investigation of awareness, perceptions, and usage of child car seats in pakistan,” j. transp. heal., vol. 13, pp. 247 – 258, 2019, doi:10.1016/j.jth.2019.05.001. [9] j. ignacio nazif-muñoz, a. nandi, and m. ruiz-casares, “impact of child restraint policies on child occupant fatalities and injuries in chile and its regions: an interrupted time-series study,” accid. anal. prev., vol. 120, pp. 38–45, 2018, doi: 10.1016/j.aap.2018.07.028. [10] e. f. sam, “don׳t learn safety by accident: a survey of child safety restraint usage among drivers in dansoman, accra,” j. transp. heal., vol. 2, no. 2, pp. 160–165, 2015, doi: 10.1016/j.jth.2014.08.003. [11] e. n. aidoo, w. ackaah, s. k. appiah, e. k. appiah, j. addae, and h. alhassan, “a bivariate probit analysis of child passenger’s sitting behaviour and restraint use in motor vehicle,” accid. anal. prev., vol. 129, pp. 225– 229, 2019, doi: 10.1016/j.aap.2019.05.022. [12] j. i. nazif-muñoz, j. falconer, and a. 
gong, “are child passenger fatalities and child passenger severe injuries equally affected by child restraint legislation? the case of chile,” int. j. inj. contr. saf. promot., vol. 24, no. 4, pp. 501–509, oct. 2017, doi: 10.1080/17457300.2016.1278236. [13] j. i. nazif-munoz and n. nikolic, “the effectiveness of child restraint and seat belt legislation in reducing child injuries: the case of serbia,” traffic inj. prev., vol. 19, no. sup1, pp. s7–s14, feb. 2018, doi: 10.1080/15389588.2017.1387254. [14] j. shanthosh et al., “effectiveness of child restraint legislation to reduce motor vehicle related serious injuries and fatalities: a national interrupted time series analysis,” accid. anal. prev., vol. 142, p. 105553, 2020, doi: 10.1016/j.aap.2020.105553. [15] s. nakahara, m. ichikawa, and y. nakajima, “effects of increasing child restraint use in reducing occupant injuries among children aged 0–5 years in japan,” traffic inj. prev., vol. 16, no. 1, pp. 55–61, jan. 2015, doi: 10.1080/15389588.2014.897698. [16] s. l. bachman, g. a. salzman, r. v. burke, h. arbogast, p. ruiz, and j. s. upperman, “observed child restraint misuse in a large, urban community: results from three years of inspection events,” j. safety res., vol. 56, pp. 17–22, 2016, doi: 10.1016/j.jsr.2015.11.005. [17] k. d. klinich et al., “effects of child restraint system features on installation errors,” appl. ergon., vol. 45, no. 2, part b, pp. 270–277, 2014, doi: 10.1016/j.apergo.2013.04.005. [18] j. s. jermakian et al., “factors affecting tether use and correct use in child restraint installations,” j. safety res., vol. 51, pp. 99–108, 2014, doi: 10.1016/j.jsr.2014.09.011. [19] a. h. eichelberger, l. e. decina, j. s. jermakian, and a. t.
mccartt, “use of top tethers with forward-facing child restraints: observations and driver interviews,” j. safety res., vol. 48, pp. 71 – 76, 2014, doi: 10.1016/j.jsr.2013.11.002. [20] skjerven-martinsen, p. a. naess, t. b. hansen, t. staff, and a. stray-pedersen, “observational study of child restraining practice on norwegian high-speed roads: restraint misuse poses a major threat to child passenger safety,” accid. anal. prev., vol. 59, pp. 479–486, 2013, doi: 10.1016/j.aap.2013.07.023. [21] j. brown, c. f. finch, j. hatfield, and l. e. bilston, “child restraint fitting stations reduce incorrect restraint use among child occupants,” accid. anal. prev., vol. 43, no. 3, pp. 1128 – 1133, 2011, doi: 10.1016/j.aap.2010.12.021. [22] d. c. schwebel, m. a. tillman, m. crew, m. muller, and a. johnston, “using interactive virtual presence to support accurate installation of child restraints: efficacy and parental perceptions,” j. safety res., vol. 62, pp. 235–243, 2017, doi: 10.1016/j.jsr.2017.06.018. [23] k. tessier, “effectiveness of hands-on education for correct child restraint use by parents,” accid. anal. prev., vol. 42, no. 4, pp. 1041 – 1047, 2010, doi: 10.1016/j.aap.2009.12.011. [24] s. m. beeman, a. r. kemper, m. l. madigan, c. t. franck, and s. c. loftus, “occupant kinematics in lowspeed frontal sled tests: human volunteers, hybrid iii atd, and pmhs.” accid. anal. prev., vol. 47, 128–139, 2012. doi: 10.1016/j.aap.2012.01.016. [25] d. s. bhalerao, s. a. kale, k. d. sapate, and a. v. mannikar, “finite elemental analysis of pull and release trolley for conducting vehicle crash tests,” int. j. eng. tech., vol. 8, no. 5, pp. 2115-2120, 2016. [26] r. k. myers, l. lombardi, m. r. pfeiffer, m. r. zonfrillo, and a. e. curry, “child restraint use and hospitalreported injuries among crash-involved child passengers,” traffic inj. prev., pp. 1 – 2, nov. 2020, doi: 10.1080/15389588.2020.1829931. [27] b. l. findlay, a. melucci, v. dombrovskiy, j. pierre, and y.-h. 
lee, “children after motor vehicle crashes: restraint utilization and injury severity,” j. pediatr. surg., vol. 54, no. 7, pp. 1411–1415, 2019, doi: 10.1016/j.jpedsurg.2018.10.046. [28] c. m. harbaugh et al., “evaluating the ‘cushion effect’ among children in frontal motor vehicle crashes,” j. pediatr. surg., vol. 53, no. 5, pp. 1033–1036, 2018, doi: 10.1016/j.jpedsurg.2018.02.042. [29] a. hall et al., “barriers to correct child restraint use: a qualitative study of child restraint users and their needs,” saf. sci., vol. 109, pp. 186–194, 2018, doi: 10.1016/j.ssci.2018.05.017. [30] l. niu, y.-m. gao, y. tian, and s.-m. pan, “safety awareness and use of child safety seats among parents after the legislation in shanghai,” chinese j. traumatol., vol. 22, no. 2, pp. 85–87, 2019, doi: 10.1016/j.cjtee.2018.08.005. [31] d. s. usami, l. persia, and v. sgarra, “determinants of the use of safety restraint systems in italy,” transp. res. procedia, vol. 45, pp. 143–152, 2020, doi: 10.1016/j.trpro.2020.03.001. [32] s. yan et al., “assessing an app-based child restraint system use intervention in china: an rct,” am. j. prev. med., vol. 59, no. 3, pp. e141–e147, 2020, doi: 10.1016/j.amepre.2020.02.003. [33] t. kapoor et al., “a numerical investigation into the effect of crs misuse on the injury potential of children in frontal and side impact crashes,” accid. anal. prev., vol. 43, no. 4, pp. 1438–1450, 2011, doi: 10.1016/j.aap.2011.02.022. [34] y. meng and c. untaroiu, “numerical investigation of occupant injury risks in car-to-end terminal crashes using dummy-based injury criteria and vehicle-based crash severity metrics,” accid. anal. prev., vol. 145, p. 105700, 2020, doi: 10.1016/j.aap.2020.105700. [35] l. tang, j. zheng, and j.
hu, “a numerical investigation of factors affecting lumbar spine injuries in frontal crashes,” accid. anal. prev., vol. 136, p. 105400, 2020, doi: 10.1016/j.aap.2019.105400. [36] d. bruski et al., “experimental and numerical analysis of the modified tb32 crash tests of the cable barrier system,” eng. fail. anal., vol. 104, pp. 227–246, 2019, doi: 10.1016/j.engfailanal.2019.05.023. [37] x. r. zhang and y. h. shi, “simulation and experiment study on the deviation influence factors of dynamic collision test for child safety seat,” int. j. crash., vol.0:0, 2019, pp. 1-12. [38] world forum for harmonization of vehicle regulations (wp.29), “uniform provisions concerning the approval of restraining devices for child occupants of power-driven vehicles (‘child restraint systems’)”. 2014, available at: https://unece.org/transport/vehicle-regulations-wp29/standards/addenda-1958-agreement-regulations-121-140. [39] world forum for harmonization of vehicle regulations (wp.29), “uniform provisions concerning the approval of enhanced child restraint systems used on board of motor vehicles (ecrs)”. 2013, available at: https://unece.org/transport/vehicle-regulations-wp29/standards/addenda-1958-agreement-regulations-41-60. [40] b. d. graaf and w. v. weperen, “the retention of balance: an exploratory study into the limits of acceleration the human body can withstand without losing equilibrium,” human factors, vol. 39, no. 1, pp. 111-118, 1997. [41] t. kuska and c. rush, “issues and trends in child passenger safety in the united states and canada,” int. j. trauma nurs., vol. 7, no. 4, pp. 137–141, 2001, doi: 10.1067/mtn.2001.118969. [42] k. omari and o. baron-epel, “low rates of child restraint system use in cars may be due to fatalistic beliefs and other factors,” transp. res. part f traffic psychol. behav., vol. 16, pp. 53–59, 2013, doi: 10.1016/j.trf.2012.08.010. [43] w. a. good, “child seat restraint alarm system,” 2010, us. [44] j. a. mansfield, y. n. zaragoza-rivera, g. h.
baker, and j. h. bolte, “evaluation of interventions to make top tether hardware more visible during child restraint system (crs) installations,” traffic inj. prev., vol. 20, no. 5, pp. 534–539, jul. 2019, doi: 10.1080/15389588.2019.1618849. [45] k. stylidis, e. al-saidi, a. t. erinjery, l. lindkvist, c. wickman, and r. söderberg, “design of the top tether component for the premium car market segment: case study of volvo cars,” procedia cirp, vol. 91, pp. 146–151, 2020, doi: 10.1016/j.procir.2020.02.160.

Zhe Wei is a senior engineer at the National Quality Supervision and Inspection Centre for Engineering Machinery, affiliated to the China Academy of Machinery Science and Technology. He obtained his B.Sc. and M.Sc. in 2007 and 2010, respectively. His current research field is vehicle passive safety. As first author and first inventor, he has more than 20 publications and 5 authorized patents. He also holds premium membership of the Chinese Society of Agricultural Machinery and is a committee member of SAC/TC 240.

Journal of Engineering Research and Technology, Volume 1, Issue 1, March 2014

Rainwater Sequential Sampler: Assessing Intra-Event Water Composition Variability

Sílvia C.P. Carvalho, João L.M.P. de Lima, and M. Isabel P.
de Lima

Abstract—Rainwater sequential samplers can be very useful for characterizing the variability in rainwater composition, which can occur over relatively short time periods. The main aim of this study was to develop a low-cost, volume-based sequential rain sampler for assessing variations in the chemical composition of rainwater during individual rain events at a given location. To evaluate the performance of the apparatus, a few tests were conducted under field conditions in Coimbra (Portugal). Rainy periods were analysed with respect to the following physicochemical parameters: electrical conductivity, pH, turbidity, nitrates, sulphates and chloride. The results showed that the rainwater composition varied over time; moreover, some parameters were highest at the beginning of the rainy period, declined rapidly from the initial value, and then remained approximately constant. The findings suggest that the rainwater sequential sampler is a low-cost tool that can be useful for non-continuous assessment of intra-event variability in rainwater composition.

Index Terms—Rainwater sampler, rainwater composition, equipment design.

I Introduction

Rain is a scavenging agent for pollutants present in the atmosphere (e.g. [1]), creating a potential for contamination of terrestrial and aquatic ecosystems. Collecting rainwater sequentially is crucial to understanding the variability in rainwater composition during rain events. The rainwater composition is related to the atmospheric composition. For example, in rural areas that are located far from cities and industrial pollution, and are little affected by the transport of pollutants, the rainwater is expected to be only slightly polluted, as the air is mostly clean. In contrast, urban areas, typically marked by intense traffic and industry, can produce pollutants that are "washed out" from the atmosphere during rainfall events (e.g. [2]).
A wide variety of sequential rain samplers have been proposed: manual sampling (e.g. [3]); linked collection vessels (e.g. [4]); automatic sequential samplers (e.g. [5]); and continuous monitors (e.g. [6]). In addition, sequential rain samplers can be classified by the way the rain is fractionated, i.e. by volume (e.g. [7]) or at fixed time intervals (e.g. [8]). The method used to collect the rainwater may affect the results (e.g. [9]). The variability in different rainwater components has been explored in relation to, for example, the rainfall event intensity and depth, the season in which the event occurs, and the antecedent dry periods (e.g. [10], [11], [12]). Some studies on rainwater chemical composition use daily or lower time resolutions (e.g. [13], [14]). However, as pointed out by Raynor and Hayes [15] and Seymour and Stout [16], short-period samples (e.g. hourly) can provide crucial information, because the rainwater composition and the meteorological conditions (e.g. wind patterns, temperature, humidity) often change significantly over time during an event, and important relationships may be masked by inadequate temporal resolution of the observations. The aim of this research was to present a volume-based sequential rain sampler that can be adapted for low or high volume resolution (by using sampling bottles with different capacities). The equipment was designed for low manufacturing cost, ease of set-up and maintenance, and no power requirements. It was tested under field conditions in Coimbra (Portugal).

II Rainwater Sequential Sampler

A Design of the equipment

The intra-event variability of rainwater quality can be explored by collecting sequential samples of rainwater (with an appropriate resolution) during the event. In this study a volume-based sequential rain sampler was designed (Figure 1; see also the photograph of the equipment in Figure 2c).
The components of this equipment are: i) rainwater collection: a knife-edge collector ring; an aluminium funnel with an aperture diameter of 0.358 m; a flexible hose that connects the funnel to a flume; an adjustable support which keeps the funnel at a height of 1.5 m above the ground; and a support for the flume; ii) the sampler: an acrylic flume with 11 openings (regularly spaced at 100 mm) where bottles are attached; and 11 polypropylene bottles to collect/store the rainwater samples.

S.C.P. Carvalho, J.L.M.P. de Lima, and M.I.P. de Lima: Department of Civil Engineering, Faculty of Science and Technology, Campus II, University of Coimbra, Rua Luís Reis Santos, 3030-788 Coimbra, Portugal.

This study was set up to provide continuous storage of rainwater up to a maximum of 10 mm of cumulative rainfall depth. For that purpose, and taking into account the funnel aperture/collecting area of around 0.1 m2, a maximum of 1000 ml of rainwater was collected (in a total of 11 individual samples): 50 ml (0.5 mm) for each of the first two bottles and 100 ml (1 mm) for each of the other bottles (Figure 1b); after all the bottles are filled, the additional rain is disregarded. The first two bottles were used to better capture any stronger variation in rainwater composition at the beginning of the events. Because all the sampling bottles attached to the equipment have 100 ml capacity, the volume collected in each bottle was adjusted by placing small glass spheres in the bottle (Figure 1a); i.e. the first two bottles were filled with small glass spheres until the empty space left was just enough to store a maximum of 50 ml of rainwater. There was also some concern about mixing of the rainwater from earlier samples with that of the following ones.
Therefore, each bottle also contains a large polystyrene sphere which seals it once it is filled and prevents the inflow of additional rainwater; since entrapped air remained between the float sphere and the water level, the absence of mixing between individual samples was confirmed. This sealing also protects the samples from contamination.

Figure 1 (a) Setup of the rainwater sequential sampler (distances in metres); (b) hydraulic scheme of the rainwater sampling.
Figure 2 (a) Location of Coimbra in mainland Portugal; (b) location of the study site in the city of Coimbra (black triangle); (c) photograph of the equipment.

B Advantages and disadvantages of the equipment

The sequential rainwater samplers found in the literature vary in complexity, but manual collection of rainwater is the simplest and least expensive method in terms of equipment requirements (e.g. [3] and [17]). However, the need for the long-term availability of an operator to carry out the experiments makes this procedure difficult to implement. The sampler presented in this study has low manufacturing costs, and the bottles are filled in sequence by gravitational flow. Indeed, samplers based on linked collection vessels, such as the present one, have a simple construction (e.g. [4]). Since these samplers are designed to operate unattended, the number of samples that can be collected is an important specification to consider. The number of sampling bottles typically used in linked collection vessels is five or less [18]; this equipment is prepared to attach 11 bottles.
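As a sketch only (the function names are illustrative, not part of the equipment), the volume-based fill scheme described above (roughly 0.1 m2 of collecting area, so 1 mm of rain yields about 100 ml; two 50 ml bottles followed by nine 100 ml bottles) can be expressed as:

```python
# Sketch of the volume-based sampling scheme: map a cumulative rainfall
# depth to the bottle currently being filled. With a ~0.1 m^2 collector,
# 1 mm of rain corresponds to ~100 ml of water.
AREA_M2 = 0.1
# Cumulative depth [mm] at which each of the 11 bottles becomes full:
# two 50 ml bottles (0.5 mm each), then nine 100 ml bottles (1 mm each).
FULL_AT_MM = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]

def bottle_index(depth_mm):
    """1-based index of the bottle being filled at this cumulative depth,
    or None once all 11 bottles are full (extra rain is disregarded)."""
    for i, limit in enumerate(FULL_AT_MM):
        if depth_mm < limit:
            return i + 1
    return None

def collected_litres(depth_mm):
    """Total stored volume: depth (mm) x area (m^2) = litres, capped at 1 L."""
    return min(depth_mm, FULL_AT_MM[-1]) * AREA_M2
```

For example, at 5.5 mm of cumulative depth the first six bottles are full and the seventh is filling; beyond 10 mm the stored volume stays capped at 1 litre.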
Depending on the number of samples and the purpose of the analysis, the collection of samples can be based on time or on precipitation volume. Automatic sequential samplers are usually able to sample at fixed time intervals, but in that case some extra care should be taken to avoid an incomplete record of the event: for example, if a sample container is not big enough, the excess water will overflow before the next container is in position to fill. The total amount of rainfall collected by the present sampler can easily be adapted to different measuring schemes by using bottles with different volume capacities, i.e. by decreasing or increasing the sampling volume resolution. If, in addition to assessing the intra-event variability of water composition, one wants to register the intensity and duration of the rain event, the measurements must be complemented with a recording rain gauge (e.g. a tipping-bucket rain gauge), which obviously involves extra costs and power requirements. This equipment only provides unrefrigerated collection of samples; some other devices store the samples under refrigeration, see e.g. [19]. For studies aiming to analyse rainwater composition in terms of stable chemical species, the samples can be removed immediately after the end of the event, which guarantees their physicochemical integrity. Nevertheless, this sampler is easily transported to the field and does not require power. The equipment also has low maintenance requirements (it can easily be cleaned with distilled water).

III Testing the Sampler

A Measuring site

The sequential rain sampler was tested in the field in the city of Coimbra (Portugal), which is located in the valley of the Mondego River, approximately 50 km from the Atlantic coast (Figure 2). The sampler was installed on the flat roof of the building of the Department of Civil Engineering of the University of Coimbra (geographic coordinates 40º11'08"N, 08º24'52"W).
rainwater sequential sampler: assessing intra-event water composition variability, s.c.p. carvalho, j.l.m.p. de lima and m.i.p. de lima (2014)

b data acquisition

the dataset analysed comprises four rainy periods (table 1), which were selected based on the following criteria: i) a minimum rainfall amount of 6 mm (to provide at least 7 sampling bottles); ii) a minimum of 6 hours of dry period prior to sampling (dry period means here that the rain intensity was lower than 0.05 mm h-1).

table 1. description of the rainy periods sampled.

rainy period | date (day-month-year) | time (h.min) | antecedent dry period (h) | total rain amount (mm) | total duration (min) | shortest sampling duration* (min) | longest sampling duration** (min) | mean rain intensity (mm h-1) | maximum rain intensity (mm h-1) | mass-weighted mean drop diameter, dm (mm)
1 | 01-09-2011 | 17.23–20.20 | 8 | 10 | 178 | 7 | 51 | 3.4 | 10.2 | 1.76
2 | 02-11-2011 | 8.08–9.36 | 33.4 | 10 | 89 | 1 | 49 | 6.8 | 69.8 | 2.11
3 | 11-11-2011 | 6.31–10.30 | 43.5 | 10 | 240 | 7 | 58 | 2.5 | 13.6 | 1.48
4 | 23-09-2012 | 3.04–6.13 | 120 | 6 | 190 | 2 | 134 | 1.9 | 63 | 1.64

*the “shortest sampling duration” is the minimum interval needed to fill a sampling bottle.
**the “longest sampling duration” is the maximum interval needed to fill a sampling bottle.

table 2. description of rainwater parameters for the four rainy periods: i (rainfall intensity), dm (mass-weighted mean drop diameter), ec (electrical conductivity), tr (turbidity), cl (chloride), so4(2-) (sulphates), and no3(-) (nitrates), used to test the sampler.

rainy period 1 (01-09-2011):
sample | rainfall (mm) | i (mm h-1) | dm (mm) | tr (fnu) | ec (μs cm-1) | ph
1 | 0.5 | 0.5 | 1.38 | 1.76 | 48.0 | 6.73
2 | 0.5 | 3.4 | 1.70 | 1.14 | 30.4 | 6.62
3 | 1.0 | 3.7 | 1.96 | 0.74 | 8.4 | 6.82
4 | 1.0 | 7.8 | 2.21 | 0.48 | 5.5 | 7.20
5 | 1.0 | 7.7 | 1.91 | 0.47 | 5.2 | 7.26
6 | 1.0 | 3.9 | 1.69 | 0.57 | 5.4 | 7.14
7 | 1.0 | 2.4 | 1.57 | 0.39 | 5.2 | 7.06
8 | 1.0 | 4.6 | 1.69 | 0.46 | 5.1 | 6.96
9 | 1.0 | 6.2 | 1.77 | 0.22 | 9.9 | 6.51
10 | 1.0 | 8.0 | 1.78 | 0.39 | 7.7 | 6.43
11 | 1.0 | 4.6 | 1.63 | 0.53 | 11.8 | 6.50
mean | | 3.4 | | 0.65 | 12.96 | 6.84
coef. of variation | | 0.80 | | 0.67 | 1.06 | 0.04

rainy period 2 (02-11-2011):
sample | rainfall (mm) | i (mm h-1) | dm (mm) | tr (fnu) | ec (μs cm-1) | ph
1 | 0.5 | 0.5 | 2.06 | 3.79 | 35.2 | 7.05
2 | 0.5 | 1.4 | 1.88 | 3.02 | 33.0 | 7.06
3 | 1.0 | 19.2 | 1.92 | 1.70 | 9.6 | 6.98
4 | 1.0 | 49.6 | 2.25 | 1.07 | 6.6 | 6.91
5 | 1.0 | 39.8 | 2.09 | 0.68 | 5.4 | 7.00
6 | 1.0 | 35.6 | 1.79 | 0.67 | 4.2 | 7.17
7 | 1.0 | 69.1 | 2.53 | 0.61 | 3.6 | 7.10
8 | 1.0 | 69.8 | 2.56 | 0.65 | 3.1 | 6.98
9 | 1.0 | 32.9 | 2.14 | 0.61 | 2.9 | 7.10
10 | 1.0 | 33.3 | 1.95 | 0.49 | 2.7 | 6.93
11 | 1.0 | 14.2 | 1.70 | 0.42 | 3.1 | 7.10
mean | | 6.8 | | 1.25 | 9.95 | 7.03
coef. of variation | | 2.16 | | 0.91 | 1.22 | 0.01

rainy period 3 (11-11-2011):
sample | rainfall (mm) | i (mm h-1) | dm (mm) | tr (fnu) | ec (μs cm-1) | ph
1 | 0.5 | 0.5 | 1.02 | 2.90 | 21.2 | 6.41
2 | 0.5 | 3.8 | 1.62 | 1.49 | 11.9 | 6.79
3 | 1.0 | 4.0 | 2.20 | 0.74 | 5.8 | 6.88
4 | 1.0 | 3.6 | 1.18 | 0.90 | 6.0 | 6.84
5 | 1.0 | 2.8 | 1.22 | 0.88 | 6.3 | 6.75
6 | 1.0 | 2.4 | 1.18 | 0.83 | 4.5 | 6.84
7 | 1.0 | 2.1 | 1.27 | 0.76 | 3.3 | 7.02
8 | 1.0 | 1.4 | 0.95 | 0.59 | 2.8 | 7.30
9 | 1.0 | 8.5 | 1.53 | 0.47 | 2.2 | 6.98
10 | 1.0 | 7.9 | 2.10 | 0.53 | 2.2 | 6.86
11 | 1.0 | 7.6 | 2.02 | 0.74 | 3.3 | 6.74
mean | | 2.5 | | 0.98 | 6.32 | 6.86
coef. of variation | | 1.03 | | 0.70 | 0.90 | 0.03

rainy period 4 (23-09-2012):
sample | rainfall (mm) | i (mm h-1) | dm (mm) | tr (fnu) | ec (μs cm-1) | ph | cl (mg l-1) | so4(2-) (mg l-1) | no3(-) (mg l-1)
1 | 0.5 | 0.8 | 1.42 | 3.73 | 75.3 | 7.34 | 13.0 | 0.24 | 0.04
2 | 0.5 | 18.2 | 2.22 | 1.40 | 21.3 | 7.53 | 4.1 | 7.47 | 0.65
3 | 1.0 | 14.3 | 1.82 | 1.09 | 10.8 | 7.53 | 2.4 | 0.17 | 0.00
4 | 1.0 | 12.9 | 1.73 | 0.91 | 7.8 | 7.40 | 4.1 | 2.65 | 2.71
5 | 1.0 | 12.2 | 1.91 | 0.95 | 7.3 | 7.43 | 4.1 | 32.03* | 2.00
6 | 1.0 | 3.6 | 1.20 | 0.65 | 9.1 | 7.02 | 5.9 | 3.46 | 0.05
7 | 1.0 | 0.5 | 1.05 | 0.56 | 15.8 | 7.18 | 2.4 | 5.14 | 0.01
mean | | 1.9 | | 1.33 | 21.06 | 7.35 | 5.14 | 3.19 | 0.78
coef. of variation | | 2.27 | | 0.83 | 1.16 | 0.03 | 0.71 | 0.89 | 1.43

*apparent anomalous value; it was ignored in the analysis.

in relation to the procedure for collection and analysis of rainwater samples: all the components of the rain sampling equipment were pre-washed with distilled water, and the sampling bottles were removed immediately after being filled. the electrical conductivity (ec), ph and turbidity of rainwater were measured immediately upon completing the removal of samples, using the following portable instruments: the hi9033 multi-range ec meter, the hi8314 ph/orp/temperature meter, and the hi93125 turbidity meter, all manufactured by hanna instruments. in addition to the electrical conductivity, ph and turbidity measurements, the sampled rainwater from rainy period 4 (23-09-2012) was frozen and transported to an analytical laboratory for analysing nitrates, sulphates, and chloride. the concentrations of sulphates and nitrates were measured by ion chromatography, and the chloride was determined by the mohr method. although under certain conditions the chemical composition of the rainwater (e.g. ph) can change between the time of filling the first bottle and the time of sample collection, it is believed that this time period is not long enough to influence the concentrations of the major inorganic ions in the dissolved fraction (e.g. [20]). the rain intensity was measured by a laser disdrometer (“laser precipitation monitor” from thies clima) installed next to the sequential rain sampler. this instrument also yields the number of raindrops over 21 size classes and 20 fall speed classes. the precipitation data have a temporal resolution of one minute and a depth resolution of 0.001 mm. the variability in rainwater composition was explored in relation to the distribution of raindrop sizes.
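the two selection criteria (minimum event depth and minimum antecedent dry period) can be sketched programmatically. the function below is a hypothetical illustration, not the authors' code: it assumes a 1-minute intensity record (mm/h) in which the event starts at the first "wet" minute.

```python
# a minimal sketch of the selection criteria, under the stated assumptions.
DRY_MM_H = 0.05        # criterion ii: intensity below this counts as "dry"
MIN_DEPTH_MM = 6.0     # criterion i: minimum rainfall amount
MIN_DRY_MIN = 6 * 60   # criterion ii: minimum antecedent dry period (minutes)

def event_qualifies(intensity_mm_h):
    """intensity_mm_h: 1-minute intensities covering the antecedent period
    followed by the candidate event (assumed to start at the first wet minute)."""
    first_wet = next((k for k, i in enumerate(intensity_mm_h) if i >= DRY_MM_H), None)
    if first_wet is None or first_wet < MIN_DRY_MIN:
        return False
    # a 1-minute step at intensity i mm/h contributes i/60 mm of rain depth
    depth = sum(i / 60.0 for i in intensity_mm_h[first_wet:])
    return depth >= MIN_DEPTH_MM
```

for instance, a 60-minute burst at a steady 10 mm/h after a 400-minute dry spell yields 10 mm and qualifies, while the same burst after only 100 dry minutes does not.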
the laser disdrometer provided, each minute, a two-dimensional matrix with the count of drops in each size and fall speed class; the matrices were summed over the sampling period to obtain a single matrix, which was used for determining the mass-weighted mean drop diameter (dm). the dm allows the quantification of the overall distribution of raindrop sizes and is obtained by (e.g. [21]):

d_m = \frac{\sum_{i=1}^{21} d_i^{4}\, n(d_i)\, \Delta d_i}{\sum_{i=1}^{21} d_i^{3}\, n(d_i)\, \Delta d_i} \qquad (1)

where d_i [mm] is the central diameter of the size class i (21 classes) and n(d_i) [mm^-1 m^-3] is the expected number of drops, with diameters between d_i and d_i + \Delta d_i, present per unit volume of air. the n(d_i) defined in eq. (1) is obtained by (e.g. [22]):

n(d_i) = \frac{1}{a\, \Delta t\, \Delta d_i} \sum_{j=1}^{20} \frac{n_{ij}}{v_j} \qquad (2)

where n_{ij} is the number of raindrops detected in size class i and fall speed class j (20 classes), measured during the time period \Delta t [s] taken to fill the sampling bottles, v_j [m s^-1] is the fall speed at the middle of fall speed class j, a [m^2] is the detection area and \Delta d_i [mm] is the width of size class i.

c data analysis

figure 3 shows the hyetographs of the four rainy periods investigated (see also table 1). the total amount of rainwater collected in each rainy period was 10 mm, with the exception of rainy period 4 (23-09-2012), which accumulated 6 mm. the time taken to fill all the sampling bottles for the four rainy periods varied from 89 min (rainy period 2) to 240 min (rainy period 3). the time needed to fill each sampling bottle is also represented in figure 3; the largest difference in sampling duration was observed for rainy period 4 (23-09-2012), ranging from 2 to 134 min.

figure 3. hyetographs of the four rainy periods (see table 1). the time needed to fill each sampling bottle is represented on the hyetographs (time between two vertical dotted lines).
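equations (1) and (2) translate directly into code. the sketch below is illustrative (not the authors' processing chain); variable names mirror the symbols in the equations, and the sample values in the usage are made up.

```python
# a sketch of eqs. (1)-(2): number concentration n(d_i) from the summed
# disdrometer count matrix, then the mass-weighted mean drop diameter d_m.
def number_concentration(counts, v, area, dt, dD):
    """eq. (2): counts[i][j] = raindrops counted in size class i and fall
    speed class j; v[j] = class-centre fall speed [m/s]; area [m^2];
    dt = sampling duration [s]; dD[i] = width of size class i [mm]."""
    return [sum(row[j] / v[j] for j in range(len(v))) / (area * dt * dD[i])
            for i, row in enumerate(counts)]

def mass_weighted_mean_diameter(D, N, dD):
    """eq. (1): d_m = sum(D^4 * N * dD) / sum(D^3 * N * dD), in mm."""
    num = sum(D[i] ** 4 * N[i] * dD[i] for i in range(len(D)))
    den = sum(D[i] ** 3 * N[i] * dD[i] for i in range(len(D)))
    return num / den
```

with a single size class of 6 drops falling at 2 m/s over 3 s on a 1 m^2 area, eq. (2) gives n = 1 mm^-1 m^-3; with two classes the weighting by the fourth power of diameter pulls d_m towards the larger drops.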
the mean intensity for the four rainy periods varied between 1.9 mm h-1 (rainy period 4) and 6.8 mm h-1 (rainy period 2). the mass-weighted mean drop diameter (dm) ranged from 1.48 to 2.11 mm (table 1). the highest dm was observed in rainy period 2 (02-11-2011), which was expected since the highest mean and maximum intensities were found for this sampling period, and bigger drops are typically more abundant in high rain rate episodes.

the physical and chemical parameters that characterize the rainwater of the rainy periods investigated using the rainwater sequential sampler are given in table 2. the turbidity time variation is represented in figure 4a for the four rainy periods investigated. the turbidity of the rainwater decreased over time; a power law fitted the data well. the results suggest the suitability of high-resolution sampling – in particular, the first 2 bottles with 0.5 mm of rain each – to assess the intra-event rainwater composition; if, for example, 3-mm sampling bottles were used, the rapid decline of turbidity at the beginning of this particular event would go unnoticed. this variability in rain turbidity is consistent with several studies that report higher concentrations of suspended matter at the beginning of rainfall and a decrease throughout the rain event, which is likely to result from “washout” processes (e.g. [23]). the rainwater sequential sampling also made it possible to identify the ph fluctuations during the rainy periods (figure 4b). for the four rainy periods investigated, the samples' ph values ranged between 6.4 and 7.5. the mean (± standard deviation) for each rainy period was 6.8 (± 0.3), 7.0 (± 0.1), 6.8 (± 0.2), and 7.3 (± 0.2) for rainy periods 1 to 4, respectively. figure 5 shows the evolution of the electrical conductivity of the rainwater during the four rainy periods.
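a power law of the form tr = a * p**b (turbidity against cumulative rainfall) can be fitted by ordinary least squares in log-log space. the sketch below illustrates the fitting procedure only; the sample values in the usage are synthetic, not the measured data.

```python
import math

# least-squares fit of y = a * x**b, obtained by linear regression of
# log(y) on log(x); assumes all x and y are strictly positive.
def fit_power_law(x, y):
    lx, ly = [math.log(v) for v in x], [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (w - my) for u, w in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b
```

for data generated exactly by y = x**-1, the fit recovers a = 1 and b = -1.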
they all show higher electrical conductivity at the beginning of rainfall and a rapid decrease in the first millimetre of rain. nevertheless, it is possible to observe some differences between the studied rainy periods. for example, rainy period 4 (23-09-2012) reached the highest electrical conductivity, 75 μs cm-1, which might be explained by the corresponding longest antecedent dry period, ~5 days (see table 1). this relationship is commonly described in the literature; for example, nyika et al. [24] observed that rainwater samples had higher electrical conductivity when there was no rainfall on the days before the rainwater collection, in comparison with samples taken after a rainy day.

figure 4. (a) turbidity measured for each sample collected during the four rainy periods; power laws are fitted to the data. (b) ph measured in the rainwater samples collected during the four rainy periods.

figure 5. electrical conductivity (ec) measured in each sample during the four rainy periods. rainfall intensity was averaged over each sampling interval.

figure 6a shows the variations of the concentrations of chloride, sulphates and nitrates for rainy period 4 (23-09-2012). the chloride concentration was higher in the first 0.5 mm and, after that, the concentration fluctuated between 2 and 6 mg l-1 (see also table 2). the rainwater sequential samples have concentrations of sulphates between 0.2 and 7.5 mg l-1, with a mean value of 3.2 mg l-1. in relation to the nitrates, the mean concentration recorded was 0.8 mg l-1, and the maximum value detected was 2.71 mg l-1, which corresponds to the sample filled at the time of maximum rain intensity (~63 mm h-1), see figure 3. the relationship between the processes of removal of pollutants from the atmosphere and raindrop sizes has been studied for a long time.
for example, levine and schwartz [25] stated that the removal of hno3 vapour depends on raindrop size, with the smaller drops (< 1 mm) having the greatest contribution to washout scavenging. in addition, ebert et al. [26] established different relationships between scavenged particle sizes (0.19–1.8 μm) and the most effective raindrop diameter. in figure 6b the mass-weighted mean drop diameter (dm) is plotted against the concentrations of chloride, sulphates and nitrates measured in rainy period 4 (23-09-2012), apparently showing no clear relationship. the variation of dm, ranging from 1.05 to 2.22 mm, seems to have a different effect on rainwater composition depending on the ionic species. for example, the highest concentrations of sulphates (7.5 and 5.1 mg l-1) were observed for the extreme (highest and lowest) values of dm (2.22 and 1.05 mm, respectively) (see also table 2). however, the small sample size analysed does not allow inferring relations between the relevant variables, which in any case was not the main goal of this work.

iv conclusion

this study proposed a rainwater sequential sampler that can be a useful, low-cost solution to explore rainwater composition variations during a rainfall event. because of the simplicity of its design, the apparatus can be easily adapted to different sample volumes and total amounts of sampled rainfall. a drawback is that, in order to register the intensity and duration of the rain event, it is necessary to additionally use a recording rain gauge. sequential samples of rain were collected in coimbra (portugal) during four rainy periods to test the performance of the equipment. the higher resolution of the initial rainwater samples (2 bottles with 0.5 mm of rainwater each, whereas all other bottles hold 1 mm) allowed the detection of higher values of some parameters followed by a rapid decline.
results suggest that the volume resolution of the device is able to capture rainwater composition variability during a rain event; if necessary, this resolution can be easily adapted to specific requirements.

figure 6. (a) concentrations of chloride, sulphates and nitrates during rainy period 4 (23-09-2012); (b) concentrations of chloride, sulphates and nitrates plotted against the mass-weighted mean drop diameter (dm).

acknowledgment

the first author is grateful to the foundation for science and technology (fct) of the portuguese ministry of education and science for the financial support through the doctoral grant sfrh/bd/60213/2009.

references

[1] r. engelmann, “the calculation of precipitation scavenging,” in meteorology and atomic energy, springfield, virginia: us atomic energy commission, pp. 208–221, 1968.
[2] b. helmreich and h. horn, “opportunities in rainwater harvesting,” desalination, vol. 248, pp. 118–124, nov 2009.
[3] d. gatz and a. n. dingle, “trace substances in rain water: concentration variations during convective rains, and their interpretation,” tellus, vol. 23, no. 1, pp. 14–27, feb 1971.
[4] h. b. h. cooper, j. m. demo, and j. a. lopez, “chemical composition of acid precipitation in central texas,” water, air, and soil pollution, vol. 6, pp. 351–359, sep/oct/nov 1976.
[5] c. ronneau, j. cara, j. navarre, and p. priest, “an automatic sequential rain sampler,” water, air, & soil pollution, vol. 9, pp. 171–176, feb 1978.
[6] s. tomich and m. dana, “computer-controlled automated rain sampler (ccars) for rainfall measurement and sequential sampling,” journal of atmospheric and oceanic technology, vol. 7, pp. 541–549, aug 1990.
[7] m. mangoni, r. udisti, and g. piccardi, “sequential sampling of rain: construction and operation of an automatic wet-only apparatus,” international journal of environmental analytical chemistry, vol. 69, no. 1, pp. 53–66, 1998.
[8] j. gray, k. hage, and h.
mary, “an automatic sequential rainfall sampler,” review of scientific instruments, vol. 45, no. 12, pp. 1517–1519, dec 1974.
[9] f. laquer, “an intercomparison of continuous flow, and automatically segmenting rainwater collection methods for determining precipitation conductivity and ph,” atmospheric environment, vol. 24a, no. 9, pp. 2299–2306, 1990.
[10] f. huff and g. stout, “distribution of radioactive rainout in convective rainfall,” journal of applied meteorology, vol. 3, pp. 707–717, dec 1964.
[11] g. dawson, “ionic composition of rain during sixteen convective showers,” atmospheric environment, vol. 12, no. 10, pp. 1991–1999, 1978.
[12] b. lim, t. jickells, and t. davies, “sequential sampling of particles, major ions and total trace metals in wet deposition,” atmospheric environment, vol. 25a, no. 3/4, pp. 745–762, 1991.
[13] p. mantovan, a. pastorea, l. szpyrkowicz, and f. ziliograndi, “characterization of rainwater quality from the venice region network using multiway data analysis,” science of the total environment, vol. 164, pp. 27–43, mar 1995.
[14] t. okuda, t. iwase, h. ueda, y. suda, s. tanaka, y. dokiya, k. fushimi, and m. hosoe, “long-term trend of chemical constituents in precipitation in tokyo metropolitan area, japan, from 1990 to 2002,” science of the total environment, vol. 339, pp. 127–141, mar 2005.
[15] g. raynor and j. hayes, “acidity and conductivity of precipitation on central long island, new york in relation to meteorological variables,” water, air, and soil pollution, vol. 15, pp. 229–245, feb 1981.
[16] m. d. seymour and t. stout, “observation on the chemical composition of rain using short sampling times during a single event,” atmospheric environment, vol. 17, no. 8, pp. 1483–1487, 1983.
[17] r. castillo, g. lala, and j. e. jiusto, “the chemistry and microphysics of intrastorm sequential precipitation samples,” tellus, vol. 37b, pp. 160–165, jul 1985.
[18] f.
laquer, “sequential precipitation samplers: a literature review,” atmospheric environment, vol. 24a, no. 9, pp. 2289–2297, 1990.
[19] s. kawakubo, s. hashi, and m. iwatsuki, “physicochemical speciation of molybdenum in rain water,” water research, vol. 35, no. 10, pp. 2489–2495, jul 2001.
[20] s. v. krupa, “sampling and physico-chemical analysis of precipitation: a review,” environmental pollution, vol. 120, pp. 565–594, jan 2002.
[21] c. w. ulbrich, “natural variations in the analytical form of the raindrop size distribution,” journal of climate and applied meteorology, vol. 22, pp. 1764–1775, oct 1983.
[22] w. f. krajewski, a. kruger, c. caracciolo, p. golé, l. barthes, j.-d. creutin, j.-y. delahaye, e. i. nikolopoulos, f. ogden, and j.-p. vinson, “devex-disdrometer evaluation experiment: basic results and implications for hydrologic studies,” advances in water resources, vol. 29, pp. 311–325, feb 2006.
[23] w. jambers, v. dekov, and r. van grieken, “single particle and inorganic characterization of rainwater collected above the north sea,” the science of the total environment, vol. 256, pp. 133–150, jul 2000.
[24] d. nyika, e. zhande, and w. zhakata, “rainwater quality during 1991–1993 and constituents of milky rain (november 1992) in bulawayo, zimbabwe,” the science of the total environment, vol. 186, pp. 273–281, jul 1996.
[25] s. levine and s. schwartz, “in-cloud and below-cloud scavenging of nitric acid vapor,” atmospheric environment, vol. 16, no. 7, pp. 1725–1734, 1982.
[26] p. ebert, m. kibler, a. mainka, b. tenberken, k. baechmann, g. frank, and j. tschiersch, “a field study of particle scavenging by raindrops of different sizes using monodisperse trace aerosol,” journal of aerosol science, vol. 29, pp. 173–186, jan/feb 1998.

sílvia c.p. carvalho received the m.sc. degree in environmental engineering from the university of coimbra, coimbra, portugal, in 2008. she is currently working toward the ph.d. degree in environmental engineering at the same university.

joão l.m.p.
de lima, ph.d., is a professor at the department of civil engineering (laboratory of hydraulics, water resources and environment) of the university of coimbra, coimbra, portugal, and a researcher at the institute of marine research (imar-cma).

m. isabel p. de lima, ph.d., is an assistant professor at the department of civil engineering (laboratory of hydraulics, water resources and environment) of the university of coimbra, coimbra, portugal, and a researcher at the institute of marine research (imar-cma).

journal of engineering research and technology, volume 8, issue 2, september 2021

parsing arabic verbal sentence using grammar ontology

khaled m. almunirawi 1,a, rebhi s. baraka 2,b
1 university college of ability development, khan younis, gaza, palestine
2 faculty of information technology, islamic university of gaza, gaza, palestine
a khalediue@gmail.com, b rbaraka@iugaza.edu.ps
https://doi.org/10.33976/jert.8.2/2021/3

abstract—we build a model to parse the arabic verbal sentence based on an arabic grammar ontology. the ontology conceptualizes the arabic verbal sentence through the representation of grammar parsing classes, verb properties, and conjunction checking. by populating the ontology with verbal sentences and adding grammar rules, we form a verbal sentence knowledge base. the parsing model is supported by morphological analysis for sentence syntactic analysis and by an arabic synonyms extractor for deriving synonyms. we have implemented the model and provided it with a user interface where the user can enter a sentence to be parsed and obtain the parsing results. the interface has options to partially or fully add diacritics to the words of the sentence, and it allows removing ambiguity by choosing the most appropriate analysis from the lexicon results.
to evaluate the model, we have selected a representative set of arabic verbal sentences from arabic grammar books that represents all the possibilities of a verbal sentence. we have performed several parsing tests on these sentences, with and without diacritics. the results prove the ability of the model to parse the various forms of the verbal sentence. the accuracy increases when the sentence is diacritized, free word order is avoided, and the general form of the arabic verbal sentence is followed.

index terms— arabic parsing, arabic wordnet, arabic grammar ontology, morphological analysis, synonym extraction.

i introduction

parsing is necessary for distinguishing the meaning and understanding the intentions of a sentence and for ruling out any ambiguities. it is nearly impossible to misunderstand the meaning if the parsing is done correctly. parsing an arabic sentence is a difficult task due to the relatively free word order of arabic, besides the length of sentences and the omission of diacritics (vowels) in written arabic. parsing an arabic sentence is the analysis of an input sentence into its linguistic parts in the form of a parsing tree with syntactic relations among them; this parsing usually contains semantic information [1]. traditional sentence parsing is performed by understanding the exact (semantic) meaning of a sentence. when the ambiguity in the sentence is resolved, the range of possible interpretations is reduced and the sentence becomes more obvious. parsing such a morphologically rich, free word order language is a challenging task, requiring advanced nlp techniques for processing the words and requiring the machine to handle the syntactic and semantic analysis of the words.
an ontology, which is a semantic web technique, is used to conceptualize language grammar [2]. it presents language entities such as words, tenses, verbs, and phrases as an ontological class hierarchy with properties, instances and grammar rules. the ontology can then be used for various linguistic tasks such as lexical analysis and sentence parsing. we develop an ontology-based model for parsing the arabic verbal sentence. the model receives a sentence as input from the user, identifies whether it is verbal by checking whether the first word of the sentence is a verb (or whether the verb is preceded by a verb preposition), defines the syntax of each word, and then performs parsing of these words depending on an arabic grammar knowledge base. the knowledge base mainly contains the arabic grammar ontology, besides five components: grammar rules, verb properties, parsing classes, a conjunction checker and a word parser. the ontology contains classes, objects and relations for classifying the arabic sentence. the implemented model allows the user to interact with it and deals with the ontology through the owl api to extract the parsing result for the given sentence. based on this motivation, the research aims to: build a grammar ontology for the arabic verbal sentence and represent its characteristics and parsing rules as relations and properties within the ontology; populate the ontology with appropriate parsing instances of verbal sentences to form a grammar knowledge base; and build the model that uses the ontology and the grammar knowledge base, along with the needed morphological and synonym features, for parsing a verbal sentence. to achieve these objectives, we follow the following research methodology: 1.
arabic parsing domain review: study and analyze current approaches related to the arabic verbal sentence and its parsing alternatives. also, study the domain of arabic grammar and arabic parsing rules to extract the elements of the ontology, including objects, properties, relations and instances.
2. data collection: the dataset depends on formalized cases and rules applied to parsing arabic sentences. it includes a collection of verbal sentences with different structures that covers all cases related to the verbal sentence parsing rules.
3. parsing model development: the model is based on the knowledge base formed of the ontology, classes, instances, and parts for the conjunction checker, parsing classes, verb properties and grammar rules. the development process includes three main sub-phases: developing the ontology and the knowledge base, including extending the ontology with the needed classes, properties and instances; building the model, mainly including the grammar ontology for representing the arabic grammar pertaining to the verbal sentence, the morphological analyzer for identifying all syntactic possibilities for each word of the sentence, and the synonyms extractor for giving the synonym of each word in the sentence; and implementing a prototype of the model realizing the above components, where a given verbal sentence can be completely parsed and its parsing results returned. the implementation depends on tools such as the java programming language, owl and jena apis, and other tools and apis as needed.
4. model evaluation: evaluate the accuracy of the model as a whole, including the ontology, by comparing the model results to the actual results based on a pre-parsed set of sentences representing all possibilities of the verbal sentence. this is followed by briefly comparing the model to the xml semantic parser proposed by alrabiah & al-salman [3], which is used to resolve parsing problems based on defined parsing-related factors.
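the model's first step, deciding whether an input sentence is verbal by inspecting its first word, can be sketched as below. this is a hypothetical illustration, not the paper's implementation: `is_verb` stands in for the morphological analyzer's part-of-speech lookup, and the particle set is illustrative, not the ontology's actual list of verb prepositions.

```python
# a sentence is treated as verbal if its first word is a verb, or if a verb
# preposition (particle) precedes the verb; illustrative particle set only.
VERB_PREPOSITIONS = {"لم", "لن", "أن", "كي"}

def is_verbal_sentence(tokens, is_verb):
    """tokens: the sentence split into words; is_verb: a predicate standing
    in for the morphological analyzer's part-of-speech check."""
    if not tokens:
        return False
    if is_verb(tokens[0]):
        return True
    return (tokens[0] in VERB_PREPOSITIONS
            and len(tokens) > 1 and is_verb(tokens[1]))
```

so a sentence beginning with a verb (or with a particle followed by a verb) is routed to the verbal-sentence parsing pipeline, while a noun-initial sentence is not.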
the rest of this paper is organized as follows: section ii presents a review of related work. section iii presents the model for arabic verbal sentence parsing (avsp). section iv presents the implementation of the parsing model. section v presents an evaluation of the model. finally, section vi concludes the paper.

ii related work

parsing arabic sentences is one of the challenging arabic nlp tasks due to the distinct features of arabic, including its morphological richness. many researchers have tried to tackle arabic parsing problems through various techniques, approaches and algorithms. these approaches can be divided as follows:

a natural language processing methods

salloum et al. [4] focused on the arabic parsing issue and built a parser as part of a machine translation system. the parser has problems regarding ambiguity, since a huge number of parse trees are generated due to ambiguity issues. they defined lexical functional grammar (lfg) in the arabic context; lfg is a linguistic theory of grammar which concerns the nature of statement structure and provides a realistic framework for natural language processing [5]. lfg distinguishes two levels of representation for each sentence of the language, presented in two different formalisms: tree form or constituent structure (called c-structure), and functional structure representing grammatical functions like subject and object and the relations between them as attribute-value matrices.
othman et al. [6] stated that since written arabic omits diacritics (short vowels), the language becomes unclear, which slows down the development of arabic nlp (anlp). the way ambiguity is resolved is influenced by certain linguistic constraints while parsing an arabic sentence. they developed a syntactic analysis system for the language with three nlp elements: a lexicon, a morphological analyzer and a syntactic parser. when applying a disambiguation approach (based on the parser and analyzer), the morphological analyzer returns all the possible readings of the given arabic word; applying the grammar rules then ensures correct parsing and resolves ambiguity. however, they focused on limited categories of ambiguity and examined the performance only on small datasets.

alqrainy et al. [7] developed a simple parser which aims to check the correctness of a given arabic sentence through building a new context-free grammar (cfg), which makes top-down techniques much more valuable. many experiments were conducted and the results revealed efficient outcomes when analyzing nominal and verbal sentences. however, the system lacks the resolution of ambiguity issues, which gives different meanings to the sentence being parsed.

b transition network methods

augmented transition networks (atns) can analyze sentence structure; in practice, however, they are complicated, depending on the language's features, morphological richness and complexity. a chart-parser approach [8] uses modern standard arabic (msa) sentences with syntactic constraints to minimize parsing ambiguity using some features of lexical semantics. these constraints are mainly utilized to resolve the structure of ambiguous sentences.
the prolog language was used to implement the developed parser, which has the capability to enforce syntactic constraints. recent research [9] used a chart-parser methodology based on context-free grammar (cfg) to parse simple arabic sentences. bataineh & bataineh [10] employed a methodology that studies and analyzes the grammar of the arabic language with respect to gender and number. this approach has an accuracy of about 85%. the problem with ambiguity leads to badly formed sentences, which can be resolved semantically, stressing the need to use semantic-based approaches to give better results.

c machine learning methods

machine learning (ml) gives machines the ability to learn without being explicitly programmed; it has been used in nlp and parsing. a combined method [11] uses treebank-based parsers and automatic lfg f-structure annotation methodologies for parsing arabic. the arabic annotation algorithm (a3) exploits the functional annotations in the penn arabic treebank (atb) [12] to assign lfg f-structure equations to trees. for parsing, the researchers modified bikel's parser to learn atb functional tags and merge phrasal categories with functional tags in the training data. the results are low compared with the domain expert. mccord & cavalli-sforza [13] used the atb to build an arabic slot grammar (asg) parser, based on the same efforts done for european languages using slot grammar (sg). al-emran et al. [14] developed a system that parses msa sentences through the use of a treebank. it relies on the arabic statistical parser to produce a model through training on the penn arabic treebank and a standard arabic linguistic resource. as in previous models, ambiguity was a problem, and avoiding it makes sentences meaningless.

d semantic approaches

semantics, in linguistics, is a field concerned with the study of meaning at the levels of words, phrases, and sentences. some parsers have been developed for the english language [15].
the arabic language is a difficult language, which may delay the expansion of tools and applications for the semantic web in arabic [16]. it has many distinctive features such as complex morphology, diacritics and short vowels. alsalman et al. [17] followed the semantic technology approach and attempted to resolve word sense ambiguity by building a semantic parser using a semantic analyzer. a dependency parsing approach [18] for modern standard arabic (msa) verbal sentences uses a data-driven dependency parser. it utilizes the semantic information available in lexical arabic verbnet to complement the morphosyntactic information already available in the data. this complementing information is encoded as an additional semantic feature for data-driven parsing. they were able to build a dependency parser with an accuracy of 71.5% labeled attachment score (las) and 77.5% unlabeled attachment score (uas), a 2% increase in total accuracy compared to the case of not using semantic features. another semantic parsing approach [3] collected traditional arabic grammar rules and presented them in extended backus-naur form (ebnf) grammar to serve as a base for other arabic nlp research. they presented the architecture of an xml-based semantic parser. the parser was able to reduce parsing ambiguity by using the lexical, syntactic and semantic feature structures of the unification grammar. it still returns some inaccurate parsing results and a percentage of ambiguity.

e ontology-based approaches

an ontology-based parsing approach [19] analyzes turkish sentences using an interlingua representation called text meaning representation (tmr). it represents the relations between events and entities, the semantic properties and the pragmatic properties. the core definitions of word senses (without any modifications) were taken from the lexicon.
this approach bypassed the syntactic constraints and used the semantics provided by the ontology to achieve highly accurate parsing results. a formalization of arabic grammar and its entities using an ontology is proposed by elmalki [2]. the language phrases and grammar concepts are conceptualized into an arabic grammar ontology. it includes classes for the word, sentence, marks, gender, tense, count, verb and person, besides their sub-classes. in addition, grammar relations are added as object properties in the ontology, with their domains and ranges defined from the classes. the ontology is published online and is available for writing sparql queries or semantic web rule language (swrl) rules directly. recent applied research on representing the arabic language as an ontology is due to [20]. the arabic ontology is similar to wordnet: each concept in the ontology (the meaning of an arabic term) is given a unique resource identifier (uri), informally described by a gloss, and lexicalized by one or more synonymous lemma terms. some important individuals are included in the ontology, such as individual countries. the arabic ontology is in the process of linking and integrating all arabic lexicons; each meaning in every lexicon will be linked (as much as possible) with a concept in the arabic ontology. based on this, a large linguistic graph integrating arabic semantics and morphology can be built.

iii building the arabic verbal sentence parsing (avsp) model

the avsp model consists of five main components, as shown in fig. 1: the arabic grammar knowledge base, the morphological analyzer, the synonym extractor, the word parser, and the user interface. next, we describe each of them and then the flow of the model.

a arabic grammar knowledge base

the arabic grammar knowledge base consists of the arabic grammar ontology and the arabic grammar instances.
the ontology is a conceptualization of the arabic grammar phrases and concepts in the form of classes and related properties. the ontology is based on [2]; we have extended it with additional classes and populated it with the needed grammar instances related to the verbal sentence. 1. classes: the arabic grammar ontology contains nine classes, some of which contain sub-classes. fig. 2 shows the classes and their subclasses. 2. properties: the ontology contains two types of object properties: factor properties and functional properties. both of them contain sub-object properties (see fig. 2). the ontology does not contain any data properties, since all data types are text. we explain the object factor properties related to our model as follows: fig. 1. structure of the arabic verbal sentence parsing model fig. 2. arabic grammar ontology and its object properties 1. subject (فاعل): a relation that connects the active verb to a noun in one direction, with restricted domain and range as defined in the characteristics of the property. 2. subject deputy (نائب الفاعل): a relation that connects the passive verb to a noun in one direction when the subject is omitted. it has a restricted domain and range as defined in the characteristics of the property. 3. object (مفعول به): a relation that connects the active verb to a noun in one direction. it has a restricted domain and range as defined in the characteristics of the property. to make the ontology more compatible with our model and to obtain more accurate results with better quality, we have extended the ontology capabilities by adding some classes and instances. these additions include two classes, with their instances, under the pronoun class. these classes are called nominative pronoun (ضمير رفع) and accusative pronoun (ضمير نصب) as illustrated in fig. 3.
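the three object properties above can be pictured as typed relations with restricted domains and ranges. below is a toy python encoding of that idea (the class and property names are illustrative placeholders, not the ontology's actual identifiers) that checks whether a candidate triple respects the restrictions:

```python
# toy encoding of the object factor properties described above;
# class and property names are illustrative, not the real ontology identifiers
PROPERTIES = {
    "subject":        {"domain": "ActiveVerb",  "range": "Noun"},   # فاعل
    "subject_deputy": {"domain": "PassiveVerb", "range": "Noun"},   # نائب الفاعل
    "object":         {"domain": "ActiveVerb",  "range": "Noun"},   # مفعول به
}

def triple_is_valid(subject_class, prop, object_class):
    """check a candidate (domain, property, range) triple against the restrictions."""
    p = PROPERTIES.get(prop)
    return p is not None and p["domain"] == subject_class and p["range"] == object_class

# an active verb may take a noun as its subject; a passive verb may not
ok = triple_is_valid("ActiveVerb", "subject", "Noun")
bad = triple_is_valid("PassiveVerb", "subject", "Noun")
```

in the actual model these restrictions live in the owl ontology as domain and range axioms; the dictionary here only mimics how they constrain which relations a word may enter.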
the instances in the nominative pronoun (ضمير رفع) class are pronouns that are parsed in a nominative state as a subject. the instances in the accusative pronoun (ضمير نصب) class are pronouns parsed in an accusative state as an object. in addition, we added verb prepositions as instances to their respective classes, verb jussive and verb accusative, as shown in fig. 3(c)-(d). these additions of classes, instances, and properties enrich the ontology and form a grammatical knowledge base. they enable the model to deal with verbal sentences containing such constructs. they are used to define the parsing state of a word and to recognize the related parsing cases through the ontology properties in order to return the accurate parsing result. the defined classes and instances provide a better conceptualization of the personal pronouns and prepositions, which improves the parsing result. (a) nominative pronoun class and its pronoun instances (b) accusative pronoun class and its pronoun instances (c) verb jussive prepositions (d) verb accusative prepositions fig. 3. extending the grammar ontology with pronouns, prepositions and their instances b the morphological analyzer the morphological analyzer is based on alkhalil morpho sys [21] and identifies all possible syntactic features associated with a given word. for each possibility, the analyzer returns the voweled word (with diacritics), prefix, stem, type, pattern (weight), root, part of speech (pos) and suffix. if the given word is diacritized, the possibilities are reduced to the minimum number, and the analyzer excludes the analyses that do not match the added diacritics. in our model, we use the prefix, type, pos and suffix to form the final parsing result.
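the way diacritized input narrows the analyzer's candidate set can be sketched as follows. this is a minimal python illustration, not alkhalil's actual algorithm; the two-entry analyses list is a hypothetical analyzer output for the word "ذهب":

```python
# short vowels, tanween, shadda and sukun (u+064b..u+0652)
DIACRITICS = set("\u064b\u064c\u064d\u064e\u064f\u0650\u0651\u0652")

def compatible(user_word, voweled):
    """true if the fully voweled analysis can be reduced to the user's
    (possibly partially diacritized) word by deleting diacritics only."""
    i = 0
    for ch in user_word:
        # skip diacritics of the analysis that the user omitted
        while i < len(voweled) and voweled[i] != ch and voweled[i] in DIACRITICS:
            i += 1
        if i >= len(voweled) or voweled[i] != ch:
            return False
        i += 1
    # anything left over in the analysis must be diacritics only
    return all(c in DIACRITICS for c in voweled[i:])

def filter_analyses(user_word, voweled_analyses):
    return [a for a in voweled_analyses if compatible(user_word, a)]

# hypothetical analyses: the verb 'went' and the noun 'gold'
analyses = ["ذَهَبَ", "ذَهَب"]
```

an undiacritized "ذهب" keeps both analyses, while the fully diacritized verb form keeps only one, mirroring how added diacritics reduce the possibilities to the minimum.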
c the synonym extractor the synonym extractor is based on arabic wordnet (awn), a lexical resource for modern standard arabic (msa) that follows the design of princeton wordnet (pwn). it extracts the meanings and alternative synonyms of a given word in the context of a given sentence. for instance, for the word "ذهب" (went or gold), it extracts the synonyms of each of its meanings. d the word parser the word parser applies the grammar rules based on the parsing class returned from the knowledge base, particularly from the ontology, corresponding to each word of the sentence to be parsed. it also deals with the prefixes and suffixes of a verb. it thereby determines each word's final parsing result. the inputs to the word parser, which are needed to produce the parsing results, are the word type, the parsing class, and the prefixes and suffixes of each word. the output of the word parser is the final parsing result of each word. e the user interface the user interface accepts a sentence to be parsed from the user, and displays the results after performing the needed parsing steps based on the arabic grammar ontology and the grammar rules. the interface needs to be interactive and able to show the analysis of the sentence depending on the morphological analyzer. it also shows the synonyms of the analyzed word based on the synonym extractor. next, the flow of the model explains in detail how the model works and how its components interact. f flow of the model the flow of the model starts when a verbal sentence is given to be parsed; the sentence processing stage then performs two operations: 1. it checks if the sentence is verbal, i.e., if the sentence starts with a verb, or starts with a preposition which precedes the verb, such as an accusative or jussive preposition. if the first word includes possibilities other than a verb, then the model takes the verb form and eliminates all non-verbal forms of the first word, which relate to the subject, via-subject and object properties. 2.
the model refers to the ontology to extract the object properties that have the verb as their domain, and reduces the syntactic results of the rest of the sentence's words as follows: a. if the verb is active, then reduce the second word's syntactic results to the range of the subject property. b. if the verb is passive, then reduce the second word's syntactic results to the range of the via-subject property. c. if the verb is active and transitive, reduce the third word's syntactic results to the range of the object property. the results of awn are passed to the tokenization process, in which each word is tokenized into its relevant prefixes and suffixes. these prefixes and suffixes (if available) are passed to the ontology to extract their classes. the results are either one or both of the following possibilities: 1. the prefixes of a verb are prepositions (accusative or jussive). 2. the suffixes of a verb are personal pronouns of two types: nominative and accusative. the verb can be connected to two suffixes (both nominative and accusative) if the verb is active and transitive. at this stage, the words are completely processed in terms of synonym extraction, conjunction checking, and their corresponding parsing classes. finally, these words are passed to the word parser, which in turn applies the arabic grammar parsing rules to each corresponding parsing class returned from the grammar knowledge base, and passes the parsing results to the user interface. the sentence processing stage results in a set of reduced parsing states for each word in the sentence. these states are passed to the morphological analysis stage, which uses alkhalil morpho sys to morphologically analyze the word set with reduced parsing states. this set is then passed to the synonym extraction and morphological analysis stage, in which the model performs the synonym extraction and makes the syntactic analysis of the analyzed words using awn.
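the reduction step described above can be sketched as a toy python fragment. the candidate-state lists and the word keys are hypothetical; in the real model the admissible states come from the domains and ranges stored in the ontology:

```python
def reduce_states(verb_voice, verb_transitive, candidates):
    """mirror the flow above: an active verb forces the next word toward
    'subject', a passive verb toward 'subject deputy', and an active
    transitive verb forces the third word toward 'object'."""
    reduced = {word: list(states) for word, states in candidates.items()}
    if verb_voice == "active":
        reduced["word2"] = [s for s in reduced["word2"] if s == "subject"]
        if verb_transitive:
            reduced["word3"] = [s for s in reduced["word3"] if s == "object"]
    elif verb_voice == "passive":
        reduced["word2"] = [s for s in reduced["word2"] if s == "subject_deputy"]
    return reduced

# hypothetical candidate parsing states for the two words after the verb
candidates = {"word2": ["subject", "object", "genitive"],
              "word3": ["subject", "object", "genitive"]}
reduced = reduce_states("active", True, candidates)
```

after the call, word2 retains only the subject state and word3 only the object state, which is exactly the pruning that lets the word parser apply far fewer grammar rules later.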
the results are the syntactically analyzed words with their synonyms, so that the model has more information about each word's synonyms and alternate uses. based on the flow of the model and the roles of each component presented above, we next present the implementation of the model. iv implementing the avsp model in the implementation of the avsp model, we have implemented each component with all of its required functionalities and interactions with the other components as defined in the flow of the model. to achieve this, we have employed some tools and apis and have dealt with some issues imposed by the diversity of the tools used. the arabic grammar ontology, including our extension, is encoded in owl using the protégé ontology framework. as mentioned in the ontology component of the model, our extension includes classes related to accusative and nominative personal pronouns and prepositions, as required by the model to deal with all different forms of the verbal sentence. the ontology is then populated with the needed instances for each class, such as all kinds of accusative and nominative pronouns as well as all kinds of prepositions related to the verbal sentence. these changes to the ontology required checking the consistency and the integrity of the ontology in terms of the class hierarchy, object properties, type values, and instances belonging to their corresponding classes. the owl api is used to implement, manipulate and serialize the arabic grammar ontology. it enabled us to access each class in the ontology with its instances, object properties, property values and the arabic grammar rules embedded in the ontology, forming the grammar knowledge base (the ontology and its instances). the morphological analyzer in the model is implemented based on alkhalil morpho sys rather than implementing our own or using other morphological systems.
this is because we did not want to reinvent the wheel, and it is not our purpose to implement another lexicon. furthermore, it is up to date, open source, and comprehensive in that it can effectively deal with any word in a given sentence. it has a java interface that simplified interacting with it in the model through some programming modifications and refactoring to capture the morphological analysis of the words and return it as part of the parsing model. the synonym extractor is implemented based on awn, which is open source and comprehensive in dealing with any word in a given sentence. its java interface simplified interfacing with the model through some programming modifications to capture the synonyms of the words and return them as part of the parsing model. the implementation of the model is represented by the user interface, which interacts with the previously described components of the model. it accepts a sentence to be parsed from the user, and displays the results after performing the needed parsing steps. the interface shows the analysis of the sentence depending on the morphological analyzer. for each word in the analysis, the synonym extractor shows the synonym set and translation. the interface refers to the ontology to examine the desired rules and to process the needed steps to return the parsing result, as described in the flow of the model. fig. 4 shows the parts of the user interface labeled 1 to 7, as they illustrate the different interactions and results of the model applied to parsing the sentence "أكل الطفل الطعام في السوق" (the child ate the food in the market). this implementation is tested to be correct by ensuring that each component returns the expected output/result based on the model and that the final parsing result corresponds to the parsing performed by a human expert. this is verified through a parsing example, and more details are presented in the evaluation of the model (see section v). fig. 4.
the user interface of the avsp model example: parsing the sentence "ذهب الطالب إلى المدرسة" (the student went to school) 1. the word "ذهب" holds two meanings: the past active intransitive verb "went" and the noun "gold". the model uses the lexicon to reflect the sentence word's categories in arabic and returns the syntactic and lexical features of each category. the lexicon tells us that the word "ذهب" is a verb and returns the following syntactic feature: (tense: past, transitivity: no). in addition, the lexicon returns another type of the word "ذهب", which is a noun meaning "gold", and this result is excluded since it is a noun and the model is limited to parsing the arabic verbal sentence only. fig. 5 shows all available and returned options for the verb status of the word "ذهب", which has five forms with different meanings. 2. if the user has added a diacritic on a letter (the diacritics can be on any letter in the word, including the last letter), the lexicon returns fewer options, and the parsing ambiguity decreases as shown in fig. 5(a). here we add diacritics to the word "ذهب" such that it becomes "ذَهَبَ"; the returned morphological analysis is then only one, as shown. next, the model connects to the ontology, checks the appropriate object property related to the active verb, which is (subject) "فاعل", and gets the domain and range of the subject property. the results will be used to limit the lexicon results for the words after the verb. 3. since the verb is intransitive "لازم", the next word must be the subject, and it must be nominative "مرفوع", as arabic grammar rules state. 4. therefore, the model limits all the successive word's options to the nominative possibilities, as shown in fig. 5(b) for the word "الطالب". if the verb were transitive, then the second word after the verb would be considered accusative "منصوب".
therefore, it would limit its options to the accusative possibilities. 5. now, the results of the domain and range of the object properties have decreased the lexicon options, and the parsing process starts when the user selects the desired word from the lexicon results. the selected word enters another text-processing level to check its suffixes and prefixes, because some of them affect the parsing status and the parsing mark. the prefixes are the prepositions that are grammatically allowed to precede the verb, including accusative "النصب" and jussive "الجزم" prepositions. the suffixes include two types of personal pronouns that are connected to the verb, i.e., nominative "الرفع" and accusative "النصب" pronouns. after checking the domains and ranges, besides the verb transitivity, the nominative suffix is parsed as a subject and the accusative suffix is parsed as an object. 6. once the main objects in the sentence, i.e., the verb "ذهب" and the subject "الطالب", are parsed, the rest of the sentence is called a semi-sentence; the preposition's parsing is fixed, and the succeeding word is parsed as a genitive noun. (a) analysis with diacritics (b) analysis without diacritics fig. 5. analysis of the word "ذهب" (went) v evaluating the avsp model the evaluation of the model aims to prove its correctness and show its accuracy in parsing a verbal sentence in its different forms. this requires the results of the morphological analyzer, which is used in the model to return the true parsing for a word based on its properties and conjunctions within the sentence. the evaluation also requires the synonym extractor to return the synonyms of the selected word. the model's accuracy is evaluated using a set of representative sentences with different parsing rules as specified in arabic grammar references.
parsing results are compared with the results obtained from these references. the representative sentences are classified into six categories, presented as follows: 1. verb-subject-object (vso) sentences. 2. verb-object-subject (vos) sentences. 3. sentences starting with a verb having a nominative conjunction. 4. sentences starting with a verb having an accusative conjunction. 5. sentences starting with a verb having both nominative and accusative conjunctions. 6. sentences starting with a verb preceded by a verb preposition. for the verbal sentences shown in table 1, the avsp model checks the properties of the verb through the verb properties and returns an active/transitive verb. table 1 verb-subject-object (vso) sentences (#, sentence / sentence with diacritics): 1. درس الطالب الدرس / دَرس الطالبُ الدرسَ 2. درس الطالب المحاضرتين / دَرس الطالب المحاضرتين 3. درس الطالبان المحاضرتين / دَرس الطالبان المحاضرتين 4. غادر الركاب الحافلات / غادَر الركاب الحافلات 5. حضر المشرفون المناقشة / حَضَر المشرفون المناقشة 6. درست الطالبة الدرس / دَرَسَتْ الطالبة الدرس 7. دعا الطالب الله / دَعا الطالب الله the conjunction checker checks the verb and reports that it does not contain conjunctions, except in the sixth sentence, where the conjunction does not affect the parsing result. the parsing classes part of the model checks the diacritics on the ends of the words and returns the nominative class for the second word and the accusative class for the third word. then the grammar rules part applies the arabic grammar rules to the words. the verbs are checked for voweled/unvoweled forms, the words are checked for single/dual/plural forms, and the rules are applied. because the sentences follow the vso form, the model returns accurate parsing results after applying the parsing rules, based on the diacritics added to the words. this is not expected if the sentences follow the vos form, as shown in table 2. such a sentence follows the vos form, so the user must add diacritics to the words for it to be correctly parsed.
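the check that the parsing classes part performs on the final diacritic can be illustrated with a small python sketch. it is deliberately simplified: it looks only at the three plain case vowels and ignores tanween and sukun, which the real model would also have to handle:

```python
# arabic case-ending diacritics: damma (u+064f), fatha (u+064e), kasra (u+0650)
DAMMA, FATHA, KASRA = "\u064f", "\u064e", "\u0650"

def parsing_class(word):
    """assign a parsing class from the case-ending diacritic on the last letter."""
    if word.endswith(DAMMA):
        return "nominative"   # مرفوع
    if word.endswith(FATHA):
        return "accusative"   # منصوب
    if word.endswith(KASRA):
        return "genitive"     # مجرور
    return None               # undiacritized: the class stays ambiguous

# the subject and object of a diacritized vso sentence such as table 1, row 1
subject_class = parsing_class("الطالبُ")
object_class = parsing_class("الدرسَ")
```

with the final diacritics present the two words fall cleanly into the nominative and accusative classes; without them the function returns None, which is why undiacritized vos sentences are misparsed.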
the parsing classes part checks the diacritics on the words and adds the correct parsing class to each word depending on the diacritics added to it. if the user does not add diacritics to the sentences, the avsp model returns false results. table 2 verb-object-subject (vos) sentences (sentence / sentence with diacritics): درس الدرس الطالب / درس الدرسَ الطالبُ for the sentences shown in table 3, the conjunction checker part checks the verb and returns a conjunction that belongs to the nominative pronoun type. so, it parses the pronoun as subject "فاعل". the parsing classes part returns the accusative class for the following word. then the grammar rules part applies the arabic grammar rules to the words, parses the rest of the sentence, and returns the parsing result. table 3 verb with nominative conjunction sentences (#, sentence / sentence with diacritics): 1. درست الدرس / درسْتُ الدرس 2. درست الدرس / درسْتَ الدرس 3. درست الدرس / درسْتِ الدرس 4. درسا الدرس / دَرسا الدرس 5. درسوا الدرس / دَرسُوا الدرس 6. درسنا الدرس / دَرسْنا الدرس for the sentence shown in table 4, the conjunction checker checks the verb and returns a conjunction which belongs to the accusative pronouns; it parses the pronoun as object "مفعول به". table 4 verb with accusative conjunction sentences (sentence / sentence with diacritics): درسها المدرس / درَّسها المدرسُ for the sentence shown in table 5, the conjunction checker checks the verb and reports that it contains two conjunctions. table 5 verb with nominative and accusative conjunction (sentence / sentence with diacritics): ضربته / ضربتُه the first one belongs to the nominative pronoun type, and the second one belongs to the accusative pronoun type, so it parses the first pronoun as subject "فاعل" and the second pronoun as object "مفعول به".
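the behaviour of the conjunction checker on a verb such as "ضربته" can be sketched as follows. the suffix lists here are illustrative and incomplete; in the model they come from the nominative and accusative pronoun instances added to the ontology:

```python
# illustrative, incomplete attached-pronoun lists (not the full ontology instances)
NOMINATIVE_SUFFIXES = ["تُ", "تَ", "تِ", "نا", "وا"]   # parsed as subject (فاعل)
ACCUSATIVE_SUFFIXES = ["ني", "ها", "هم", "ه"]          # parsed as object (مفعول به)

def check_conjunctions(verb):
    """return the (subject_pronoun, object_pronoun) suffixes attached to a verb."""
    subj = obj = None
    # the object pronoun, if any, is the outermost suffix, so strip it first
    for s in ACCUSATIVE_SUFFIXES:
        if verb.endswith(s):
            obj, verb = s, verb[: -len(s)]
            break
    for s in NOMINATIVE_SUFFIXES:
        if verb.endswith(s):
            subj = s
            break
    return subj, obj

# ضربتُه: nominative تُ parsed as subject, accusative ه parsed as object
result = check_conjunctions("ضربتُه")
```

a verb carrying both suffix types, as in table 5, yields both a subject and an object pronoun in one step, which is the two-conjunction case described above.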
for the sentences shown in table 6, the conjunction checker checks the verb and finds that it is preceded by a verb preposition that belongs to the jussive type; the avsp model then parses the verb as jussive "مجزوم" and continues parsing the sentence as explained in the above cases. table 6 verb with jussive conjunction sentences (#, sentence / sentence with diacritics): 1. لما يحضر الطالب المحاضرة / لمَّا يحضرْ الطالب المحاضرة 2. ليتقن العامل العمل / ليتقِنْ العامل العمل next, we address the cases where the parsing results might be inaccurate or a sentence is falsely parsed: verbal sentence preceded with a preposition: the model checks if the sentence starts with a verb, or starts with a preposition that precedes the verb. the lexicon does not recognize some words, while other words are analyzed in many forms, such as the word "أكل" (eat), which can be analyzed as a verb or as the noun "كل" (all) preceded by "أ", which is a question preposition. our model cannot recognize such sentences, which are out of the model's scope. word order: arabic sentence formation tends to use the vso form. it can use vos, which is uncommon but can be used in cases where the user wants to emphasize the importance of the object. if another form is used, i.e., vos, the model will give false parsing results unless the user adds diacritics to the subject and object. if diacritics are added to the end of a word, the model can recognize the word's proper pos in the syntactic analysis stage. diacritics: adding diacritics to the words, partially or totally, reduces the results of the syntactic analysis and eliminates the ambiguity of the sentence. in most cases, the parsing process looks for the diacritics added to the last letter of the word. if the user does not add them, the parsing process returns many, and inaccurate, results.
the word "درست" has three meanings: 'i studied', 'you studied' and 'she studied'. without diacritics on the last letter, ambiguity appears, resulting in these three cases. if the user adds diacritics to the word so that it becomes "درسْتُ" or "درسْتَ" or "درسَتْ", this clarifies the meaning. the diacritics are necessary to remove ambiguity, at least on the last letter of the word. word order and diacritics: parsing a sentence like "درس الدرسَ الطالبُ" (the student studied the lesson) without diacritics results in a parsing that looks correct but does not match the intended meaning of the sentence. the user intends another semantics of the sentence, wanting to emphasize that the lesson is what the student studied. if the user does not add diacritics to the sentence, the model returns the regular parsing result. this is called "تقديم المفعول به" (forwarding the object) in arabic. if diacritics are added to the words, the model returns different parsing results: the word 'الدرس' will be parsed as an accusative object (مفعول به منصوب) and the word 'الطالب' will be parsed as a nominative subject (فاعل مرفوع). this case has been taken into consideration when the model parses words entered by the user with diacritics. compared to the xml-based semantic parser of al-rabiah & al-salman [3], our avsp model uses owl (which is more expressive than xml) as a formalism to represent the arabic grammar rules, while their parser uses xml only to represent the grammar rules. avsp depends on syntactic analysis using morphological analyzers to reduce each word's possibilities; then it interacts with the arabic grammar ontology to perform the parsing. their parser reduces the parsing ambiguity using the lexical, syntactic and semantic feature structures of the unification grammar. these feature structures make their parser less modular and less flexible than avsp.
avsp achieves more accurate parsing results than the xml-based parser (taking into consideration the conditions and parameters mentioned in section v) because it handles the free word order issues within the verbal sentence and processes words with diacritics, which are not taken into consideration by the xml-based parser. vi conclusion the main contribution of this research is building a parsing model for the arabic verbal sentence that uses a grammar ontology to conceptualize the arabic grammar rules. the conceptualization includes presenting grammar phrases as classes, defining their domains and ranges based on a strong understanding of the domain, and relating them together via object properties. additional classes and instances are defined in order to classify a word's type and then apply the correct parsing rules. the model is supported by morphological analysis and synonym extraction. the morphological analyzer performs the syntactic analysis of each word in the sentence and extracts all of its linguistic features, such as a verb with its tense and type, a noun, or a preposition. the synonym extractor extracts a word's features and its synonyms, which help the model to remove ambiguity by choosing the exact meaning of the word. the implementation of the model has realized and connected these components and has provided a user interface to facilitate using the model. various text processing and word tokenization steps have been performed between the components to handle and prepare the sentence to be applicable to the arabic grammar parsing rules. to evaluate the accuracy of the parsing model, we have performed parsing on a number of representative verbal sentences. they include sentences with different verbal forms and different word orders, with and without diacritics. the model demonstrated a high parsing accuracy for diacritized sentences. diacritics minimized the lexicon results, reduced the ambiguity of the parsing model and hence increased its accuracy.
verbal sentences preceded by a preposition or having vos order returned low accuracy or false results. there are still several improvements that can be addressed in future work. the ontology can be extended to include other classes, instances, lexical features, and synonyms related to a word. this might simplify the parsing process, making it more direct, more semantic, and more accurate. the model can also be extended with more features so that it will be able to handle verbal sentences preceded by a preposition or having vos order, as well as extended sentences with and without diacritics. in this case the model might cover other types of arabic sentences, such as nominal sentences. the implementation can cover these improvements, including web and mobile versions. references [1] s. green and c. manning, "better arabic parsing: baselines, evaluations, and analysis," in the 23rd int. conference on computational linguistics, 2010. [2] t. elmalki, a computer ontology of arabic grammar: toward a modern logical and linguistic description of the arabic language, 1st ed., al-nabigha house for publishing and distribution, 2015. [3] m. al-rabiah and a. al-salman, "an xml-based semantic parser for traditional arabic," in proc. of the 4th int. universal communication symposium (iucs), 2010. [4] s. salloum, m. al-emran and k. shaalan, "a survey of lexical functional grammar in the arabic context," int. journal of comm. network technology, vol. 4, no. 3, 2016. [5] l. tounsi and j. van genabith, "arabic parsing using grammar transforms," in proceedings of the seventh international conference on language resources and evaluation, malta, 2010. [6] e. othman, k. shaalan and a. rafea, "towards resolving ambiguity in understanding arabic sentence," in proceedings of the int. conference on arabic language resources and tools (nemlar 2004), egypt, 2004. [7] s. alqrainy, h. muaidi and m.
alkoffash, "context-free grammar analysis for arabic sentences," international journal of computer applications, vol. 53, no. 3, 2012. [8] e. othman, k. shaalan and a. rafea, "a chart parser for analyzing modern standard arabic sentence," in proc. of the mt summit ix workshop on machine translation for semitic languages: issues and approaches, 2003. [9] a. al-taani, m. msallam and s. wedian, "a top-down chart parser for analyzing arabic sentences," int. arab journal of information technology, vol. 9, no. 2, 2012. [10] b. bataineh and e. bataineh, "an efficient recursive transition network parser for arabic language," in proceedings of the world congress on engineering, 2009. [11] l. tounsi, m. attia and j. van genabith, "parsing arabic using treebank-based lfg resources," in proceedings of the lfg09 conference, 2009. [12] m. maamouri, a. bies, t. buckwalter and w. mekki, "the penn arabic treebank: building a large-scale annotated arabic corpus," in nemlar conference on arabic language resources and tools, 2004. [13] m. mccord and v. cavalli-sforza, "an arabic slot grammar parser," in workshop on computational approaches to semitic languages: common issues and resources, 2007. [14] m. al-emran, s. zaza and k. shaalan, "parsing modern standard arabic using treebank resources," in proceedings of the international conference on information and communication technology research, 2015. [15] y. wang, j. berant and p. liang, "building a semantic parser overnight," in proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing, 2015. [16] a. al-zoghby, a. ahmed and t. hamza, "arabic semantic web applications: a survey," journal of emerging technologies in web intelligence, vol. 5, no. 1, pp. 52-69, 2013. [17] a. al-salman, y. al-ohali and m. alrabiah, "an arabic semantic parser and meaning analyzer," egyptian computer science journal, vol. 28, no. 3, pp. 8-29, 2006. [18] h.
elnajjar and r. baraka, "improving dependency parsing of verbal arabic sentences using semantic features," in proceedings of the international conference on promising electronic technologies, 2018. [19] m. temizsoy and i. cicekli, "an ontology-based approach to parsing turkish sentences," in machine translation and the information soup, 1998. [20] m. jarrar, "building a formal arabic ontology," in proc. of the experts meeting on arabic ontologies and semantic networks, alecso, arab league, tunisia, 2011. [21] m. boudchiche, a. mazroui, m. bebah, a. lakhouaja and a. boudlal, "alkhalil morpho sys 2: a robust arabic morpho-syntactic analyzer," journal of king saud university-computer and information sciences, vol. 29, no. 2, pp. 141-146, 2017. khaled m. almunirawi is a computer engineer working as the head of the it department at the university college of ability development. almunirawi works as a university lecturer in the field of computer science and programming. he has also worked as a web applications developer since 2008, with an interest in arabic language issues. he obtained his master's degree in information technology and his bsc in computer engineering from the islamic university of gaza, palestine, in 2019 and 2007 respectively. his research interests are in the area of the semantic web and nlp with application to the arabic language. rebhi s. baraka is an associate professor of computer science and ex-dean of the faculty of information technology, the islamic university of gaza, palestine. he obtained his phd degree in computer science from johannes kepler university, austria in 2006. he obtained his msc degree in computer science from de la salle university, philippines in 1996. he obtained his bsc degree in electronics and communications engineering from the university of the east, philippines in 1991. dr. baraka has more than 18 years of teaching and research experience.
his research interests include the semantic web and ontology engineering with a focus on arabic-related issues, and parallel and distributed computing with a focus on cloud computing, big data and web services. he is a referee for a number of scientific journals in the above areas. he has joined several research and capacity-building projects related to scientific and social issues. he is also a member and organizer of several scientific, academic and social committees and events. journal of engineering research and technology, volume 10, issue 1, march 2023 received on (04-04-2022) accepted on (10-11-2022) analysis of the effect of infant carrier's webbing tension on 18-month-old child occupant's chest accelerations in frontal crash accidents based on experimental validations zhe wei https://doi.org/10.33976/jert.10.1/2023/1 abstract—infant carriers play an important role in protecting child occupants from severe injuries caused by collisions, but the tension of the harness webbing cannot be controlled properly most of the time. an infant carrier's user manual or instructions generally contain little information about the extent to which the adjusting belt should be pulled to produce the necessary webbing tension, and it is often neglected that infants should be restrained securely. in order to improve public awareness, it is important to ascertain the effect of an infant carrier's webbing tension on the occupant's chest accelerations. a testing scheme including 12 dynamic tests was devised and conducted, and the test conditions were controlled strictly to ensure the accuracy and objectivity of the results. the p1.5 dummy's resultant and vertical chest accelerations were collected and analyzed. both the isofix installation and seat belt installation methods were taken into consideration without loss of generality.
the sled's accelerations and velocities were set and acquired, which constituted the fundamental testing conditions of the dynamic tests and ensured their repeatability and reliability. furthermore, the dummy's chest acceleration pulses were monitored and recorded, and the data were evaluated in accordance with the criteria defined in the relevant technical standards. the dummy's chest accelerations were classified into 2 groups according to the child restraint systems' installation methods, i.e., the isofix group and the seat belt group. each group includes both the resultant chest acceleration and the vertical chest acceleration. a consistent phenomenon appeared in all the tests: the larger the tensile force, the lower the chest acceleration. based on experimental validations, the relation between the webbing's tension and chest accelerations in frontal crash accidents was verified, and suggestions were made about adjusting the webbing tension and the proper use of infant carriers. index terms—infant carrier; tensile force; child occupant; frontal crash; dynamic tests i introduction a child restraint system is a device that can be anchored to a power-driven vehicle and is designed to diminish the risk of injury to the wearer, in the event of a collision or abrupt deceleration of the vehicle, by limiting the mobility of the wearer's body. being an effective device to keep child occupants from harmful and fatal secondary collisions between children and the vehicle in traffic accidents, child restraint systems play an important role in reducing preventable injuries and fatalities of child occupants [1]. in many countries, child restraint systems are very popular with caregivers and widely used by adults driving with children in cars [2, 3].
actually, the child restraint system, as a special passive safety device, has been in use for many decades and has evolved greatly; at present it can be considered a sophisticated system, especially when it functions in real accidents [4]. in order to achieve the optimum state of child restraint systems in use, many matters in the field of passive safety should be taken into consideration [5, 6, 7]. meanwhile, regulations and technical standards about child restraint systems have been developed and entered into force in many countries [8]. for example, un regulations no. 44 and no. 129 are widely adopted and used in europe, while fmvss 213 and cmvss 213 are perceived as the norms and specifications for the design, manufacture and use supervision of child restraint systems in north america [9]. in addition, gb 27887-2011 and ais-072 are compulsory standards in china and india respectively, with the aim of standardizing the product, ensuring safety performance, and ultimately protecting child occupants. in japan, brazil, south africa and many other countries, similar regulations or standards also exist. hence, different governments have reached a common view on improving the safety performance of child restraint systems and providing a powerful safeguard for child occupants [10, 11, 12]. there are many kinds and types of child restraint systems, and the infant carrier is a very common one. the infant carrier is oriented to child occupants of comparatively lower ages, e.g., children from newborn to those aged 1 year or so, and it differs from the child-safety seat, carry-cot, booster cushion, etc. in structure, function and installation method [13].
nevertheless, the dynamic test methods are basically the same in different regulations and technical standards, and the differences mainly lie in the required magnitudes of physical quantities and dimensions. since infants are more vulnerable to abrupt deceleration of the vehicle than toddlers, more rigorous requirements should be set for infant carriers, as reflected in most regulations [14, 15, 16]. in accordance with regulations such as un regulations no. 44 and no. 129, infant carriers generally belong to group 0+, which is defined as the mass group suitable for children of a mass less than 13 kg. in the daily use of infant carriers, inappropriate installation and wrong use can result in severe injuries or fatalities to infant occupants [17]. hard lessons from traffic accidents have raised public awareness of the proper use of infant carriers, and various studies have been conducted on product standardization, safety performance improvement, injury evaluation, standards amendment, etc. [18, 19, 20]. achievements have also been made in related fields of passive safety [21, 22, 23]. however, some inadequacies of usage are gradually being exposed, including but not limited to the control of the tension of the webbing that constitutes the harness used to restrain the child occupant [24]. in particular, the webbing tensile force is usually ignored by child occupants' parents or caregivers when they use infant carriers, since instructions and user manuals do not indicate or define the factor at all. results of dynamic tests conducted in laboratories accredited according to iso/iec 17025:2017 have shown that webbing tensile force is an unavoidable factor influencing the accelerations of child dummies. in order to avoid its undue effects on testing results, measures have been taken to minimize the uncertainty, such as standardization or normalization of the magnitude of the webbing tensile force in a dynamic test [25].
on that basis, testing results become comparable and can then be analyzed. despite the fact that laboratories have begun to realize the importance of standardizing testing conditions, caregivers' failure to know or understand the relation between the harness webbing's tensile force and infants' accelerations, which is directly related to potential injuries, still remains a non-negligible factor in causing preventable injuries and fatalities. furthermore, the emphasis of past research has seldom been placed on this aspect, so the issue has received little attention [26]. therefore, it is necessary to carry out relevant research and to draw enough attention to the danger and harm of neglecting or ignoring proper control of the harness webbing's tension [27, 28, 29]. a testing scheme was devised to verify the effect of webbing tensile force on the p1.5 dummy's chest accelerations. the p1.5 dummy weighs about 11 kg and is a suitable substitute for an infant whose mass and size fit child restraint systems of group 0+. the harness is mainly used to restrain the child occupant's thorax and lap and to limit the mobility of the occupant's body. furthermore, serious chest injuries can bring about greater hazards and even death because of the locations of most vital organs [30, 31]. chest accelerations are therefore a good choice for the current research, being in direct relation with the harness webbing's tensile force. in view of the installation methods of infant carriers, both isofix and the 3-point seat belt are taken into consideration, with the aim of diminishing the effect of installation methods on testing results. isofix is a system for connecting a child restraint system to a vehicle, based on two vehicle anchorages and two corresponding attachments on the child restraint system, in conjunction with a means to limit the pitch rotation of the child restraint system.
being an effective way to simulate and reproduce real crash accidents, dynamic tests were conducted in a devised pattern to verify the effect [32, 33, 34]. in most regulations and technical standards, the resultant chest acceleration shall not exceed 55 g except during periods whose sum does not exceed 3 ms, and the vertical component of the acceleration from the abdomen towards the head shall not exceed 30 g except during periods whose sum does not exceed 3 ms. since these are basic safety requirements, the resultant and vertical accelerations of the p1.5 dummy's chest in dynamic tests are taken as the object of quantitative research. on this issue, it is more scientific and objective to base the current research on experimental validations than on pure computer simulation, for the quality of computer simulation depends on algorithms to a great extent [35, 36, 37]. furthermore, simulation itself needs experimental validation to check its validity and correctness [38, 39]. a small variation in configuring initial conditions for computer simulations may lead to a large discrepancy in results, whereas a real dynamic test is more robust. hence, the current research is based solely on experimental validations to obtain more accurate results. similar research has been conducted, but few studies have been oriented to improving the balance between the child occupant's comfort and safety, and the extent to which the child restraint system's adjusting belt should be pulled has never been ascertained [2, 4]. on the one hand, there are many different types, kinds, or groups of child restraint systems, and each system does not necessarily have the same structure, rigidity, or even installation method [14].
it is difficult to ascertain the proper tensile forces for different restraint systems. on the other hand, child occupants differ, e.g., some are fat and tall, while others are thin and short. besides, their feelings about comfort may vary greatly. therefore, it is meaningless and even impossible to ascertain or calculate each value of tensile force in using child restraint systems. being one kind of child restraint system, infant carriers are mainly used for children under the age of 18 months, i.e., from newborn to 1.5 years old. although it is not necessary to ascertain each proper tensile force of the harness restraining the child occupant, it is important, to some extent, to ascertain the relation between tensile forces and child occupants' safety. for each infant carrier, it is possible to obtain the relation curve between the webbing tension and the dummy's chest accelerations by experimental methods, thus displaying the relation visually. however, the emphases of other research have been put more on structure design, simulation algorithms, safety improvement in design and manufacture, proper installation and appropriate use, etc., than on tensile force control. since isofix is increasingly adopted for child restraint system installation, and the seat belt installation method is also applicable to infant carriers, the two installation methods are both analyzed in this research, which differs from prior research. in addition, the harness webbing's tensile force is directly related to the child occupant's safety, and the harness consists of shoulder belts and abdomen belts used to restrain the child occupant, so varying the values of tensile forces will inevitably result in variations of chest accelerations. by experimental validations for each installation method, including isofix and seat belt, the relation curves between the webbing's tensile force and the child occupant's chest acceleration can be obtained.
it is more meaningful and applicable to use such a curve than a single force value in instructing caregivers to use restraint systems. therefore, the research fills this gap in the field. the findings of the research will help strike a compromise between safety and comfort more easily, and help improve the infant carrier's safety in applications. ii methodology a test instruments, equipment and samples generally, a dynamic test is oriented to simulating a real crash and obtaining the necessary information about the safety performance of passive safety devices, and is normally conducted by means of special equipment and instruments. being the impact simulator, a sled or trolley is employed to reproduce collision conditions and cause the passive safety devices to function as they do in real-world collisions. sleds can be classified into 2 types, i.e., the acceleration type and the deceleration type. in the past, the deceleration-type sled was widely used because of its convenience and low cost in conducting various dynamic tests. even nowadays, sleds of this type can be seen in many laboratories and factories, for deceleration pulses can be generated in different ways, such as using specific energy-absorbing mechanisms, pressing steel or polyurethane tubes, and deforming steel bars, which are all successful cases of carrying out dynamic tests in different laboratories. however, reproducibility and repeatability cannot be ensured so well when a deceleration-type sled is employed in most cases, although the sled's deceleration pulses may still fall into the specified range. meanwhile, it is more complicated to operate than the acceleration-type sled, the latter being a comparatively time-saving device. hence, a sled of the acceleration type was employed in the current research so that the reliability, objectiveness, and precision of the test results could be ensured to a great extent.
in figure.1, (a) displays a sled of the deceleration type, and (b) shows the sled used in the research. the deceleration or acceleration data of the sleds are generally collected by accelerometers mounted on the sleds. strictly speaking, if dynamic tests are conducted according to the requirements and methods of the relevant standards or technical regulations, the tests can be perceived as qualified only when the acceleration pulses fall into the range defined by those standards or regulations. in many other cases, acceleration curves are specified before tests, and similarity and coincidence are essential between the real pulses and the curves set as objectives. considering that the injury criteria of the dummy in frontal crashes were investigated, the definition of curves for frontal impact in most regulations, including un regulation no.44, un regulation no.129, and gb 27887-2011, etc., was adopted in the current research, as shown in figure.2. the sled's acceleration pulses falling into the specified zone being the prerequisite for sled tests, their closest approximation to the objective curves has further beneficial effects on obtaining more exact test results. the p1.5 dummy was chosen as a substitute for a child occupant aged 18 months, and it was put into the infant carrier in accordance with the user manual and installation instructions. nevertheless, the manual or instructions generally give no information about the extent to which the adjusting belt should be pulled. hence, tensions of the adjusting belt at 3 different levels were taken into consideration to verify the effects of the harness webbing's tension on the chest accelerations of the p1.5 dummy. in accordance with the structure and function of each infant carrier, the adjusting belt is connected to the harness straps and is used to adjust the tension of the webbing constraining the child occupant's torso.
in other words, the webbing's tension can be increased or decreased by changing the tensile force of the adjusting belt during installation. the forces were applied to the adjusting belts of the infant carriers by a force gauge at 3 different levels. the force gauge had been calibrated and was used as the force-applying device, as displayed in figure.3. in addition, the forces were applied with the same deflection angles to ensure comparable, objective and proper force transmissions and distributions. the infant carrier samples involve 2 kinds of installation methods, i.e., using isofix and using the 3-point seat belt respectively. both were taken into consideration in the research so that the test results could be comprehensive and representative. the samples can be classified into 2 kinds according to installation method, but samples of the same kind are completely the same in category, type, structure, and even brand. the total quantity of infant carriers used in the testing scheme is 12, 6 sets for each kind of installation method, as tabulated in table.1. infant carriers are generally installed rearward-facing, since it is safer for a newborn baby or toddler to be kept rearward-facing while driving because of the young child's physical structure. but even being kept rearward-facing and using a rearward-facing restraint system cannot necessarily ensure the child occupant's safety if the harness webbing's tensile forces are not controlled properly. forward-facing child restraint systems are used more for children aged three or more, and are not involved in the research.
(a) (b) figure.1 sleds of the deceleration type and the acceleration type figure.2 definition of curves for frontal impact figure.3 force gauge table.1 testing scheme b experimental procedure the infant carrier was first placed on the test seat. then the p1.5 dummy was placed into the infant carrier, with a gap between the rear of the dummy and the restraint. a hinged board 2.5 cm thick and 6 cm wide, of length equal to the shoulder height less the hip centre height in the sitting position of the p1.5 dummy, was put between the dummy and the back of the infant carrier. the board followed as closely as possible the curvature of the infant carrier, and its lower end was at the height of the dummy's hip joint. figure.4 displays the hinged board used to cause the slack, which is unavoidable in the actual state when an infant carrier is used to protect the child occupant in a vehicle. the adjusting belt was pulled by means of the force gauge in accordance with the testing scheme shown in table.1. the tensions were applied at 3 different levels, with deflection angles of the belt at the adjuster of 45° ± 5° for all the samples, as displayed in figure.5. during installation of the infant carrier with the adult seat belt on the test seat, the 3-point seat belt incorporating a diagonal belt and a lap belt was used in the same way before each dynamic test. as for infant carriers with isofix attachments, a top tether and a supporting leg can both be used as the anti-rotation device and effectively prevent infant carriers from rotating in frontal crashes. in the current research, the samples with isofix attachments relied on supporting legs to stay stable in the dynamic tests.
in the process of installation, the isofix attachments were connected and used uniformly to ensure that the installation would not influence the test results too much. finally, the hinged board was removed just before the sled system was launched and the test started. standardization of installation is of great importance, since non-standardization brings about more errors and makes test results incomparable. because the foam test cushion compresses after installation of the infant carrier, each dynamic test was conducted no more than 10 minutes after installation. to allow the cushion to recover, the period between 2 dynamic tests using the same cushion was 20 minutes. figure.6 shows the final state of the sled and the installed infant carrier with the p1.5 dummy in it before the test start. figure.4 the hinged board used to cause the slack figure.5 the way to apply the tension (a) (b) figure.6 the final state of the sled and the samples of 2 kinds before the test starts c evaluation criteria a dummy of 11 kg is required for the tests of a group 0+ device, and it is the biggest dummy for the group. that means the p1.5 dummy displayed in figure.7 could cause more severe damage to the restraint system in frontal collisions than the p0 dummy and the p3/4 dummy do. since child occupants' chest accelerations are directly related to the tension of the infant carrier's harness straps, the accelerations of that part of the p1.5 dummy's body were collected for analysis in the 12 tests. however, the lateral acceleration, i.e., the acceleration along the y axis, is obviously influenced less by frontal collisions than the accelerations along the other 2 axes.
generally, the resultant chest acceleration and the vertical component of the acceleration from the abdomen towards the head are required to be measured in most relevant regulations and technical standards. to ensure objectivity and generality, these 2 parameters are also taken into full consideration for the evaluation of potential injuries caused by arbitrariness in the extent to which the adjusting belt of an infant carrier is pulled. in particular, the acceleration along the z axis and the resultant acceleration of the dummy's chest are obviously the most significant factors that can be influenced by a frontal collision, considering that the infant carrier accommodates the child occupant in a rearward-facing semi-recumbent position. the resultant chest acceleration exceeding 55 g, except during periods whose sum does not exceed 3 ms, and/or the vertical component of the acceleration from the abdomen towards the head exceeding 30 g, except during periods whose sum does not exceed 3 ms, is perceived as a failure to meet the requirements of the relevant regulations and technical standards. in laboratories, the safety performance of the infant carrier is also generally evaluated based on these criteria when it comes to the injury potential of the child occupant's chest in traffic crashes. figure.7 p1.5 dummy iii experimental results a sled's accelerations and velocities after conducting the testing scheme, the accelerations and velocities of the sled were collected, as shown in figure.8 and figure.9. the acceleration pulses all fall into the specified zone and are qualified and valid ones that meet the requirements. besides, they have the same trends as the objective curves.
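the pass/fail criteria described above (resultant chest acceleration not exceeding 55 g and vertical chest acceleration not exceeding 30 g, each allowed to be exceeded only during periods whose sum does not exceed 3 ms) amount to a cumulative-exceedance-time check on the sampled pulses. a minimal python sketch, assuming a uniformly sampled signal; the function and variable names are illustrative, not taken from the paper:

```python
def exceedance_time_ms(accel_g, limit_g, dt_ms):
    """cumulative time (ms) during which the signal exceeds the limit."""
    return sum(dt_ms for a in accel_g if a > limit_g)

def chest_criteria_pass(resultant_g, vertical_g, dt_ms):
    """apply the 55 g / 30 g limits with the 3 ms cumulative allowance."""
    return (exceedance_time_ms(resultant_g, 55.0, dt_ms) <= 3.0 and
            exceedance_time_ms(vertical_g, 30.0, dt_ms) <= 3.0)

# example: 10 khz sampling -> dt = 0.1 ms; the pulse stays above
# 55 g for only 2 ms, and the vertical trace never exceeds 30 g
dt = 0.1
resultant = [40.0] * 100 + [60.0] * 20 + [40.0] * 100
vertical = [20.0] * 220
print(chest_criteria_pass(resultant, vertical, dt))  # prints True
```

a trace that stays above either limit for a cumulative duration longer than 3 ms makes the function return false, mirroring how the regulation clause is applied.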
as for the velocities, the maximum is 51.6 km/h and the minimum is 50.5 km/h, i.e., all the velocities are valid and within the specified range. detailed information is tabulated in table.2, where the data on the sled's velocities and accelerations are shown. although discrepancies between the data exist, the test conditions of the 12 dynamic tests can be deemed the same because of the high coincidence between the pulses. figure.8 sled acceleration pulses in 12 dynamic tests figure.9 sled velocity pulses in 12 dynamic tests table.2 sled's accelerations and velocities in tests b dummy's chest accelerations according to the testing scheme, both the resultant chest acceleration and the vertical component of the acceleration from the abdomen towards the head were collected. from the data acquired by the accelerometer and the follow-up data processing, the pulses of the resultant and vertical chest accelerations of the p1.5 dummy were obtained, as shown in figure.10, figure.11, figure.12, and figure.13. the measuring procedures corresponded to those defined in iso 6487:2002, and the channel frequency class (cfc) was set to cfc 180 for signal filtration. figure.10 and figure.11 display how the resultant and vertical chest accelerations of the p1.5 dummy change with the sled's accelerations when infant carriers with isofix attachments are tested. all the tests from no.1 to no.6 concern the safety performance of the infant carrier equipped with an isofix interface used to install the system in the vehicle. the tests from no.7 to no.12 involve the infant carrier using the seat belt for installation. figure.12 and figure.13 show that, as functions of time, the resultant and vertical chest accelerations change in real time under the action of the sled's motion.
as shown in table.3, the p1.5 dummy's maximum chest accelerations during periods whose sum exceeds 3 ms in the dynamic tests from no.1 to no.12 were all obtained, and the results comprise 2 kinds, i.e., the resultant chest acceleration and the vertical component of the acceleration from the abdomen towards the head. all the data are adequate to answer the testing scheme, and the relation between the infant carrier's webbing tension and the chest accelerations of child occupants in frontal crash accidents can be revealed based on the analysis of the data. figure.10 p1.5 dummy's resultant chest acceleration pulses in dynamic tests from no.1 to no.6 figure.11 p1.5 dummy's vertical chest accelerations from the abdomen towards the head in dynamic tests from no.1 to no.6 figure.12 p1.5 dummy's resultant chest acceleration pulses in dynamic tests from no.7 to no.12 figure.13 p1.5 dummy's vertical chest accelerations from the abdomen towards the head in dynamic tests from no.7 to no.12 table.3 p1.5 dummy's maximum chest accelerations during periods whose sum exceeds 3 ms in dynamic tests from no.1 to no.12 iv discussion the sled test being a simulation of a real crash, it is an effective way to reproduce crash conditions and validate hypotheses about passive safety. in order to ensure the reliability, repeatability, objectivity, and accuracy of the testing scheme and its output, test conditions, including but not limited to the sled's accelerations and velocities, must be set meticulously. based on the testing results, it can be concluded that the sled ran in a stable state during the 12 tests, and the pulses of either acceleration or velocity were almost the same between different tests.
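the quantity tabulated in table.3, the maximum acceleration level exceeded for a cumulative duration greater than 3 ms, can be extracted from a sampled pulse by ranking the samples. a small python sketch under the assumption of uniform sampling; the function name and the example pulse are illustrative only:

```python
def clip_3ms(accel_g, dt_ms, window_ms=3.0):
    """highest acceleration level exceeded for a cumulative duration
    greater than window_ms (the tabulated '3 ms' value)."""
    n = round(window_ms / dt_ms)          # number of samples spanning 3 ms
    ranked = sorted(accel_g, reverse=True)
    return ranked[n]                      # level exceeded by > n samples

# example at 10 khz (dt = 0.1 ms): a 4 ms plateau at 60 g survives the clip
print(clip_3ms([40.0] * 100 + [60.0] * 40 + [40.0] * 100, 0.1))  # prints 60.0
```

since cumulative exceedance time is monotone in the level, the (n+1)-th largest sample is the boundary level; a plateau shorter than 3 ms drops out of the reported value, which is exactly the intent of the regulation wording.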
in addition, the acceleration pulses meet the requirements of un regulation no.44, un regulation no.129, and other similar regulations. therefore, the testing scheme could be conducted effectively without introducing avoidable errors, thus preventing deviations in the results. as for the chest accelerations of the p1.5 dummy, the results indicate that the injury potential can be influenced greatly by the harness webbing's tension. there is a negative correlation between the chest accelerations and the tensile force of the infant carrier's webbing. no matter which installation method is used, the bigger the tensile force, the lower the chest acceleration. the same holds for both the resultant chest acceleration and the vertical chest acceleration, as shown in figure.14 and figure.15. although the infant carrier equipped with isofix attachments shows obvious advantages in safety performance, the relation between chest acceleration and webbing tension seems not to depend on the installation method. in other words, the installation method mainly influences the infant carrier's overall safety performance, yet it cannot change the trend of the effects of webbing tension on the child occupant's chest accelerations; the effects exist objectively. nevertheless, controlling the tensile force of an infant carrier involves a trade-off, because increasing the webbing tension will inevitably affect comfort and make child occupants unwilling to wear the harness or use the infant carrier. it is also meaningless to make the harness too loose, for the feeling of comfort cannot come at the expense of safety. generally, there are some ways to address the problem. firstly, choosing a proper installation method is important, for isofix attachments help reduce chest accelerations in frontal crashes. once an infant carrier equipped with isofix is used, the webbing tension can be decreased to some extent so that comfort can be ensured.
secondly, leaving some necessary slack in the webbing can also lead to a good user experience. but this depends on the sense and knowledge of caregivers and requires some skill, and the risk remains that the child occupant may fail to be restrained in a collision. thirdly, the product itself can be optimized and redesigned to improve the safety performance, which involves a lot of work. finally, based on the current research, it is better and easier to keep the webbing tightened to the extent that the wearer can endure and shows no obvious rejection of its use. since child occupants are too young to express their feelings, it is essential that caregivers or adult occupants observe closely and pay enough attention to the states of the infants. adjusting the webbing's tension may need to be frequent, for safety and comfort are both important for infants in a vehicle. in the tests, the tensile forces were chosen from a proper range, i.e., forces falling into this range are the ones caregivers are most likely to apply. so it is nearly impossible to bring about restraint failures or severe injuries if the sled's accelerations are set according to the technical standards, and if the infant carriers are qualified and have no inherent quality problems or defects. undoubtedly, if the webbing's tensile force is zero, the chest acceleration will inevitably exceed the limit and appear greater than the permitted one, which has been proved in many laboratories. similarly, if the force is too large, the occupant will feel very uncomfortable and will not cooperate in wearing the harness. in this respect, the tensile forces chosen in the research are typical. moreover, the quality of products, public awareness, and many other relevant factors must be taken into consideration in actual applications.
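the negative correlation reported above can be quantified with an ordinary least-squares slope over (tension, peak chest acceleration) pairs. a python sketch on hypothetical values chosen only to illustrate the computation; the numbers are not the paper's measurements:

```python
def ols_slope(x, y):
    """ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# hypothetical webbing tensions (n) and peak resultant chest accelerations (g)
tension = [20.0, 40.0, 60.0]
accel = [52.0, 47.0, 43.0]
print(ols_slope(tension, accel) < 0)  # prints True: a negative slope
```

a negative fitted slope over the measured pairs is the quantitative form of the trend stated in the discussion: larger tensile force, lower chest acceleration.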
the tensile force remains one of the risk sources in applications; it is very likely to be neglected or ignored and should be paid enough attention. figure.14 relation between resultant chest accelerations and webbing's tensions figure.15 relation between vertical chest accelerations and webbing's tensions v conclusion a testing scheme incorporating 12 dynamic tests was devised and conducted, and the data obtained were analyzed. the accelerations and velocities of the sled were controlled strictly to meet the relevant requirements, and the repeatability of crash conditions was ensured. the resultant chest accelerations and the vertical components of the accelerations from the abdomen towards the head of the p1.5 dummy were collected for further analysis of the relation between the infant carrier's safety performance and the webbing's tension. based on the testing results, the analysis leads to the following conclusions: there is a negative correlation between the infant carrier's webbing tensile force and the chest acceleration of the child occupant in frontal crash accidents. it is necessary to tighten the harness webbing sufficiently during driving, for a bigger webbing tensile force means lower chest accelerations when a collision happens. although the occupant's comfort should be emphasized, safety especially cannot be ignored. equal emphasis should be placed on pulling the adjusting belt and tightening the harness when an infant carrier is used, even though the user manual or instructions give little information about the extent to which the adjusting belt should be pulled. besides, the infant carrier equipped with isofix attachments has better safety performance than the one using the seat belt for installation. isofix and the anti-rotation device help reduce the impact that the child occupant suffers, and this installation method is superior to the seat belt method in convenience, misuse prevention, protection effect, etc.
no matter what installation method is adopted, a relation objectively exists between the webbing's tensile force and the child occupant's resultant and vertical chest accelerations. the accelerations can be influenced by the harness webbing's tensile force for both installation methods, even though the levels of influence are different.
zhe wei is a senior engineer at the national quality supervision and inspection centre for engineering machinery, affiliated to the china academy of machinery science and technology. he obtained the b.sc. and m.sc. in 2007 and 2010, respectively. his research field is currently related to vehicle passive safety. as first author and first inventor, he has more than 30 publications and 5 authorized patents. he also holds premium membership of the chinese society of agricultural machinery and is a committee member of sac/tc 240.
transactions template journal of engineering research and technology, volume 8, issue 2, september 2021
development of flood defense alternatives for the beach of deir el balah camp, palestine
mazen abualtayef
environmental engineering department, the islamic university of gaza, palestine
https://doi.org/10.33976/jert.8.2/2021/1
abstract— development along the coastal zone has led to a host of problems such as erosion, siltation, flooding, loss of coastal resources and the destruction of fragile marine habitats. erosion threatens the coastal zone and affects people's economic, tourist and recreational life. the main cause of the erosion is the khan younis breakwater together with the sea waves, which strip the beach sand, flooding and scouring the area as the water ebbs and removing part of the unconsolidated sand. this study uses a geographic information system to detect changes in the coastline along the deir el balah coast during the 1972–2020 period. shoreline change rates in the form of erosion and accretion patterns are quantified. in addition, four alternatives are proposed to mitigate the current problems raised by repeated flooding and erosion, namely reefballs, cubes, geotubes and seawalls, and their impacts on coastal protection are analyzed to provide the best possible mitigation in environmental, economic and engineering terms. multi-criteria analysis is used to assess the alternatives with respect to criteria that capture the key dimensions of the selection process; the criteria address the most important factors in planning, designing, financing and implementing coastal protection measures. based on the analysis, the best alternative, a three-row reefball submerged breakwater, is recommended.
index terms— flood defense, erosion, offshore breakwater, gaza
i. introduction
gaza's mediterranean coast, which stretches for around 40 km, is rich in coastal resources.
coastal development has resulted in a host of problems, including increased flooding, erosion, siltation, depletion of coastal resources, and the degradation of vulnerable marine ecosystems. in some parts along the coast of the gaza strip, coastal erosion is the most serious problem. the erosion threatens the structures, buildings, roads and other installations located directly on the coast. in other words, erosion threatens coastal life, affecting people's houses as well as economic, tourist, recreational and daily life [1]. fixed buildings near the beach are increasingly exposed to the direct impact of storm waves and will be damaged unless costly protective measures are taken against erosion. it has long been assumed that the underlying rate of long-term sandy beach erosion is two orders of magnitude faster than the rate of sea level rise, implying that any substantial rise in sea level would result in a significant increase in beach erosion [2]. developing future shoreline change projections necessitates a detailed understanding of the underlying coastal processes. these processes are largely driven by the interaction of mean sea level, waves, storm surges and tides, all of which are influenced by global and regional climate change and whose variability grows over time [3]. coastal ecosystems are threatened by sea level rise and wave action. the restoration of wetlands to preserve shorelines has been touted as a win-win strategy for humans and nature, but field evidence for the shoreline-protection function of wetlands, as well as an understanding of its context dependence, is rare [4]. the coastal region usually concentrates better social, economic and recreational opportunities [5]. it is of great importance for the natural environment and economic development, despite presenting a higher risk of natural disasters such as tsunamis, extreme waves and coastal erosion [6].
in recent decades, the coastal region has been changing continuously under the influence of anthropogenic activities such as breakwaters. ignoring beach erosion in the gaza strip aggravates this problem and increases the danger to the coast. to improve the protection of the beach, there are many methods to reduce erosion, such as seawalls, groins and breakwaters, but the most suitable and lowest-cost method should be chosen while trying to preserve a beautiful landscape [7]. gaza's coastal zone is a narrow strip of land that is rapidly developing, and the growth rate is attributed to its potential for a variety of economic activities and the pressures of urbanization [8].
impacts of erosion on coastal structures and beaches: during the last few years, the beach has faced bad weather conditions due to climate change (high waves and strong currents), which led directly to many of the collapse problems caused by continuous beach erosion over the years along the gaza beach. a site visit was conducted on june 8, 2020 to the areas that were severely affected along the deir el balah beach, where the damage was inspected and the existing situation was assessed. many areas along the deir el balah coast suffer from the disappearance of the beach due to erosion, which poses a great danger to the road, structures, properties and citizens, besides the safety, environment and beauty of the beach. the following images show the damage to the fishing port in deir el balah after the severe weather conditions; the port is located 1.4 km to the south of the study site. during the site visit, the head of the fishermen's union explained the impacts of the wave attacks of the last two years on the port buildings and dunes, as well as the mitigation measures taken to support the building foundations. fig. 1.
wave attacks on the fishery port at deir el balah (january 16, 2020). furthermore, on january 16, 2020, a site visit to the beach club at deir el balah (located 3.4 km to the south of the study site) was performed immediately after the last wave attacks. the club has witnessed severe beach erosion of the cliffs and coast, where the landscaping of the club collapsed as shown in figure 2. the citizens living in the deir el balah camp have faced flooding of their houses during the last storms. therefore, the construction of a flood defense at the beach in front of deir el balah camp is a top priority, to prevent the repeated seawater flooding in the winter seasons and to find proper space for the families who need to sit at the beachfront. the flood defense should be planned to meet these needs raised by the people by constructing a concrete barrier that takes the recreational perspectives into consideration. the main aim of the study is to examine coastal protection techniques against erosion, to contribute to improving and developing coastal areas, and to support tourism. several offshore breakwater alternatives were analyzed for their impacts on coastal protection to provide the best possible solutions in environmental, economic and engineering terms. fig. 2. wave attacks on the beach club at deir el balah (january 16, 2020)
ii. materials and methods
existing baseline conditions of the study area were identified in order to understand the nature of the problem. these include information on location and topography, climate, bathymetry, coast and seabed characteristics, sediment transport, waves, tides and sea level rise, currents, wave run-up, erosion, flooding, and social and economic activities.
2.1 location and topography
available topographic points of the study area have been collected in order to acquire ground levels for identifying the flood area and the coastal defense. the existing topography of the study area is illustrated in figure 3.
the figure shows the location of the planned coastal defense as well as the proposed location of the offshore breakwaters.
2.2 climate
temperature: deir el balah is located in the middle part of the gaza strip, which is considered within the arid desert climate. the average daily mean temperature ranges from 25°c in summer to 13°c in winter [9]. rainfall: in deir el balah governorate, similar to the gaza strip, there are two well-defined seasons: the wet season, starting in october and extending into april, and the dry season, extending from may to september. the rainy season extends from about mid-october to the end of march, with essentially no rain falling in the remaining months. the average annual rainfall in deir el balah governorate over the last five years is about 300 mm [10]. winds: the predominant wind direction in gaza is from the nw for more than 28% of the time. the wind is considered calm, as most speeds are below 5 m/s. the weather in the winter season is dominated by cyclones passing in easterly directions. this results in rather unstable conditions, with the most frequent winds occurring from directions between sw and ne. these winds are often strong and generate high waves during the winter season [11]. fig. 3. existing topography contour map for the study area
2.3 bathymetry
the site-specific bathymetric survey of the study area was conducted on 9 july 2020 within an area of 200 m offshore by 300 m alongshore, as shown in figure 4. fig. 4. bathymetric contour map for the study area (july 2020). results of the survey showed that the seabed off the study area has an average slope of about 1 in 40 between 0 and 200 m offshore. the seabed level, although following a general pattern, can be subject to seasonal changes depending on waves, currents and supplied sediment. the visual inspection by the surveyor showed a sandy bed.
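the reported 1-in-40 average slope ties offshore distance directly to water depth; a minimal sketch of that relation (the function name is illustrative, not from the paper):

```python
def depth_at(offshore_m, slope=1 / 40):
    """approximate seabed depth (m) below msl at a given offshore distance,
    assuming the uniform 1-in-40 average slope reported by the survey."""
    return offshore_m * slope

print(depth_at(200))  # -> 5.0 m of water at the 200 m survey limit
```

the same relation is used later in reverse when locating the breakwaters at a target depth.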
therefore, the stability of submerged breakwater units may be affected by local currents causing scouring near and under these units. thus, the use of a geotextile sheet as a lining material is required to enhance the structure's stability.
2.4 coast and seabed characteristics
moving from the beach along the coastal profile, the coastline can be divided into the seabed, the beach, the dune face or coarse-sand cliffs, and the adjacent body of the dune. the coastal profile contains erosion-resistant formations of rock and coarse sand that protrude on the seabed, on the beach, and in the cliffs. on the beach, especially near the waterline of the gaza shoreline, coarse-sand outcrops and rocky ridges can be seen in many places, which work well to reduce erosion. where these hard layers are covered only by a relatively thin layer of sand, a retreating coastal profile will gradually consist of an increasing amount of erosion-resistant surface. the identification of the erodibility and composition of the steep coarse-sand cliffs along the gaza coast is important. these cliffs themselves are somewhat able to delay the erosion tendency. if these cliffs are attacked by waves and locally collapse, the eroded kurkar material will feed the beach with a mixture of very fine to very coarse sediment. the fine particles are moved to deep water, whereas the coarse particles act as an armour layer, protecting the freshly exposed coarse-sand cliff face for some time. the gaza strip seabed is composed of sand, locally slightly cemented, light gray in color, with local shell fragments. the soil investigation report for the study site (issued on 2 march 2020) states that the soil profile encountered in the boreholes under the buildings consists of a sand layer at the top, followed by a clayey silty sand layer and then a sand layer. the grain size (d50) of the top layer ranges from 260 to 280 μm for boreholes bh1 to bh3, respectively.
for grain sizes bigger than 150 μm, the erosion rate decreases as the grain size increases [12]. therefore, the beach off the study area is considered to have medium to high erosion rates.
2.5 sediment transport
the gaza coastal zone is part of the more extensive nile littoral cell, which extends from the nile delta in egypt. this cell comprises quartz sand, silt and clay sediment derived from the nile delta by longshore sediment transport, which carries sand by waves and currents north and eastward along the sinai coast towards gaza. therefore, significant amounts of sediment may transfer to the coastal and seaward areas. a longshore sediment transport of about 200,000 m3/year was estimated [13].
2.6 waves
historical wave data for gaza are based on wind statistics and fetch considerations, which show that the dominant wave direction is the north-west. table 1 shows the significant wave height for various return periods [14].
table 1. significant wave height for various return periods
return period (year):  1    5    10   25   50   100
open sea, hs (m):      6.5  7.8  8.3  9.0  9.6  10
2.7 tides and sea level rise
the mediterranean tidal range is very small. from an ecological and morphological perspective, this means that the intertidal zone is very small and is influenced mainly by wave action rather than by tidal processes. the effect of meteorological variations on sea level may often be greater than that of astronomical tides. in the mediterranean, the combined effect of barometric pressure variations, storm surges and waves can reach values that are of the same order as the tidal variations. table 2 shows the tidal and meteorological data for the gaza coast [15].
table 2. astronomical tide and meteorological levels for gaza (relative to msl)
max sea level*                        +0.93 m
max high surge level*                 +0.71 m
highest astronomical tide (hat)       +0.45 m
mean high water spring (mhws)         +0.35 m
mean high water neap (mhwn)           +0.15 m
mean sea level (msl)                   0.00 m
mean low water neap (mlwn)            -0.15 m
mean low water spring (mlws)          -0.25 m
lowest astronomical tide (lat)        -0.35 m
2.8 currents
tidal currents in the eastern mediterranean are relatively weak. the general circulation is oriented counter-clockwise most of the time, with the current speed decreasing towards the shore due to the geostrophic current and shelf waves. currents of about 1.0 m/s have been measured [16]. however, typical mean current velocities are 0.05–0.10 m/s [17]. the current speed fluctuates between zero and 1.00 m/s, and the dominant current direction is 31° to the north [18].
2.9 wave run-up
extreme weather events have a significant impact on coastal human activities. forecasting the action of sea storms on coastal structures and beaches is an important tool to mitigate their effects. to this end, with particular regard to low coasts and beaches, an empirical formula is used to evaluate beach wave run-up levels and beach flooding during a storm. the wave run-up can reach a beach elevation of about +2.0 m msl during high storms for a significant wave height of >5 m as per the hunt formula [19], which also includes the wave setup and storm surge. the run-up height can be calculated from the incident waves. the hunt empirical formula, in which ru includes the wave set-up, is
\( \frac{R_u}{H_o} = \xi, \qquad \xi = \frac{\tan\beta}{\sqrt{H_o/L_o}}, \qquad L_o = \frac{gT^2}{2\pi} \)
where ho is the deep-water significant wave height, t is the wave period, lo is the linear-theory deep-water wavelength, β is the beach slope angle, ξ is the iribarren number (surf similarity parameter), and ru is the wave run-up. the calculated run-up based on the hunt formula was 1.02 m.
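the hunt calculation above can be sketched numerically. the wave height, period and slope below are assumed illustrative inputs (the paper does not list the exact values it used), chosen so the result lands near the quoted 1.02 m:

```python
import math

def hunt_runup(Ho, T, slope, g=9.81):
    """wave run-up by the hunt formula: Ru = xi * Ho, with
    xi = tan(beta) / sqrt(Ho / Lo) and Lo = g * T**2 / (2 * pi)."""
    Lo = g * T ** 2 / (2 * math.pi)   # deep-water wavelength (m)
    xi = slope / math.sqrt(Ho / Lo)   # iribarren / surf similarity number
    return xi * Ho

# assumed storm: Ho = 5 m, T = 14.6 s, beach slope 1 in 40
print(round(hunt_runup(5.0, 14.6, 1 / 40), 2))  # -> 1.02
```

note how mild the formula's prediction is compared with the +4 m run-up witnessed locally, which is why the design relies on the observed figure instead.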
however, the local people have witnessed a run-up to a +4.0 m elevation on the beach dune (60–70 m away from the shoreline) during the last storms. therefore, the study relies on the locally observed figures, and a run-up of 4 m is considered in the design.
2.10 environmental conditions
this section highlights the existing key environmental conditions of the study area.
a. erosion
the coastal zone of the gaza strip is a narrow piece of land lying on the eastern coast of the mediterranean sea. the study area covers a beach stretch of 210 m off the northern part of deir el balah camp. the shoreline changes between 1972 and 2020 were analyzed using remote sensing and gis tools, where satellite landsat and aerial images of the mediterranean coast of gaza were acquired for different years covering a time span of 48 years. the images were acquired at various periods and in good quality, with no effective clouds; three images were acquired, as shown in table 3.
table 3. satellite image sources and resolutions
satellite     bands  date        resolution [m×m]  pixel depth  image source
landsat 5 tm  4      22/10/1972  60×60             8 bit        usgs
landsat 8     10     17/6/2016   30×30             16 bit       usgs
sentinel-2    13     06/1/2020   10×10             12 bit       esa
usgs: u.s. geological survey; esa: european space agency
image pre-processing is the process of making the images more suitable for a particular purpose. the pre-processing of remotely sensed data is essential for image classification and for the direct linkage between the data and biophysical phenomena and features. it requires several processing steps for better identification of the image features, including atmospheric correction and geometric correction. for this analysis, supervised classification was used for shoreline extraction, since the area of interest is known and clear to be distinguished, i.e.
the water and the land, so the spectral signatures of the classes are developed and the software then assigns each pixel in the image to the class to which its signature is most similar. the analysis shows that the current average beach width in the study area is about 60 m. however, comparing the images of 1972 and 2016, when the construction of the khan younis breakwater was started, shows that accretion took place in the area to the south of the study area, where the beach width has increased by 5–15 m. on the other hand, comparing the image of 2016 to the image of 2020 shows that erosion takes place along the shore within the study area, given that it is located to the north of the port (see figure 5). according to this analysis, the average annual erosion rate in the study area during the period between 2016 and 2020 is 2.37 m per year. fig. 5. shoreline erosion for the study area. as an overview of shoreline erosion for the whole coastline of deir el balah over the four-year period between 2016 and 2020, the analysis showed that a maximum shoreline retreat of 40 m took place at some locations along the deir el balah shoreline, while a minimum retreat of 1 m took place at some other locations. in general, an advancing shoreline and a growing beach occur south of the khan younis breakwater, as the littoral sediment transport has been interrupted by the breakwater. on the other hand, a retreating shoreline and an eroded beach are present north of the breakwater. these erosion and accretion patterns reveal the natural processes of wave-induced longshore currents and sediment transport, in addition to the impact of human intervention through coastal structures. b.
flooding: according to the local residents of the study area, seawater flooding occurs during heavy storms, when seawater can reach specific locations, identified during the visit, with a ground level near +4 m msl. from the residents' observations, the frequency of stormy seasons used to be once every ten years; however, the return period has become shorter these days. based on the data collected during the site visit and taking the historical wave records into consideration, the area was identified as having a flood risk potential, as shown in figure 6. the estimated flood area is about 5,000 m2 based on the available topographic survey; this area represents about 3% of the total camp area (160,000 m2). therefore, the design level of the new recreational area should be slightly above +4 m msl as per the local residents' feedback. fig. 6. location of flooding area
2.11 multi-criteria analysis
three offshore alternatives were addressed, namely artificial reefballs, concrete cubes, and geotubes. an additional on-shore seawall alternative was also considered. multi-criteria analysis was used to assess the alternatives with respect to criteria that capture the key dimensions of the selection process; the main criteria include erosion/scouring control, impact on the ecosystem, associated safety risk, acceptability, material availability, construction experience, cost, aesthetic aspects, socio-economic impact, and sustainability.
iii. results and discussion
several mitigation measures, such as groins, detached offshore breakwaters, and submerged offshore breakwaters, were analyzed using a numerical model of the gaza coastal area [20]. the results show that the offshore submerged breakwater is an effective structure for preventing sandy beach erosion due to wave and nearshore current actions, and for preserving the sea-view landscape in front of sandy beaches.
3.1 geometric characteristics and sizing of the offshore breakwater
an offshore breakwater is a structure constructed parallel to the coast.
Even though it is called an offshore breakwater, its position is actually quite close to the coast, and its efficiency is very much determined by the distance between the breakwater and the original coastline. The crest of a submerged breakwater usually lies below the high, mean, and low sea levels. The breakwater is built to provide protection from wave action and, in some cases, from currents. Breakwaters are constructed to minimize wave action in the area behind the structure. On a coast protected by an offshore breakwater, a sediment deposit called a salient will be formed. The success of coastal protection using an offshore breakwater is determined by the salient (ys) that is formed, see Figure 7, where the size of the salient is determined by the length of the breakwater (Ls) and the distance between the breakwater and the original coastline, or breakwater distance (x). It is considered that the breakwater is constructed quite close to the coast, with its crest-line almost parallel to the coast, i.e. the wave direction is almost perpendicular to the coast [21]. Submerged breakwaters are recommended from the environmental and aesthetic perspectives, considering that the targeted area is a tourism place. Moreover, the location of the breakwaters was selected in order to stabilize the shoreline off the Deir el Balah camp without causing erosion to the shoreline north of the study area.
Fig. 7. Offshore breakwater and salient
Most commonly, an offshore obstruction will cause the shoreline in its lee to protrude in a smooth fashion, forming a salient or a tombolo. This happens because the breakwater lowers the wave height in its lee, reducing the waves' ability to transport sand. As a result, sediment carried by longshore currents and waves accumulates in the breakwater's lee [22].
The level of protection is governed by the size and offshore position of the breakwater, so the size of the salient or tombolo varies in accordance with the reef dimensions. Of course, this kind of morphological change can be expected only if sediment is available (from natural sources or as sand nourishment). Simple geometrical empirical criteria for the layout and shoreline response of detached breakwaters are [23], [24]:
- for tombolo formation: $L_s/x > 1.0$ to $1.5$
- for salient formation: $L_s/x = 0.5$ to $1.0$
- for salients with multiple breakwaters: $g x / L_s^2 > 0.5$
where $L_s$ is the length of a breakwater, $x$ is the breakwater distance to the shore, and $g$ is the gap width between two successive breakwaters. In addition, van Rijn et al. [25] related $L_s$ and $x$ to the formation of a salient or tombolo, as per Table 4. The recommended formation is a salient, since it allows currents to stream through, giving better seawater quality.
Table 4. The value of Ls/x and salient formation
Ls/x ratio          Formation
Ls/x > 3            permanent tombolo
2 < Ls/x < 3        permanent or periodic tombolo
1 < Ls/x < 2        well-developed salient
0.5 < Ls/x < 1      weak to well-developed salient
0.2 < Ls/x < 0.5    incipient to weak salient
Ls/x < 0.2          no effect
According to the bathymetric survey at the study site, the water depth of 1.5 m is found at 18-50 m cross-shore. For a well-developed salient, x is 30 m for an Ls of 40 m. The width of the gap (G) between two successive breakwaters is usually, according to Pilarczyk [26]:
$L \le G \le 0.8 L_s$
where the wave length $L = T\sqrt{gh} = 25$ m. Thus, the gap width is $25 \le G \le 32$ m for the detached breakwaters. A continuous salient could be formed if the submerged breakwater ran along the study site without gaps, but it is recommended to keep a gap of 25 m for current streaming and for small-boat navigation. The size of the breakwater is governed by the wave height reaching the 1.5 m water depth.
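The layout rules above reduce to a few arithmetic checks. A minimal sketch in Python; the 6.5 s wave period is an assumption chosen only to reproduce the quoted L of about 25 m, and the classification thresholds follow Table 4:

```python
import math

def shallow_water_wavelength(T, h, g=9.81):
    """Shallow-water wave length: L = T * sqrt(g*h)."""
    return T * math.sqrt(g * h)

def gap_range(L, Ls):
    """Pilarczyk's rule for the gap between breakwaters: L <= G <= 0.8*Ls."""
    return L, 0.8 * Ls

def salient_classification(Ls, x):
    """Classify shoreline response from the Ls/x ratio (Table 4)."""
    r = Ls / x
    if r > 3:
        return "permanent tombolo"
    if r > 2:
        return "permanent or periodic tombolo"
    if r > 1:
        return "well-developed salient"
    if r > 0.5:
        return "weak to well-developed salient"
    if r > 0.2:
        return "incipient to weak salient"
    return "no effect"

# Study-site values: Ls = 40 m, x = 30 m, h = 1.5 m; T = 6.5 s is assumed.
L = shallow_water_wavelength(6.5, 1.5)   # ~24.9 m
kind = salient_classification(40, 30)    # Ls/x ~ 1.33 -> well-developed salient
G_lo, G_hi = gap_range(25, 40)           # (25, 32.0)
```

With these inputs the sketch reproduces the values quoted in the text: a well-developed salient and a recommended gap between 25 m and 32 m.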
Therefore, the shoaling-refraction-breaking Excel model was used to compute the nearshore wave characteristics at a water depth of 1.5 m MSL (1.95 m at HWL) for the extreme significant wave height of 10 m (in deep water) that may occur in a return period of 100 years (see Table 1). The computed wave height is 0.93 m at MSL and 1.20 m at HWL, and the following equations were used [27]:
$H_{rs} = H_o \times K_r \times K_s$
$K_s = \sqrt{\dfrac{1}{2n \tanh(2\pi d / L)}}$
$K_r = \sqrt{\dfrac{\cos \alpha_o}{\cos \alpha}}$
$H_b = 0.56\, d\, e^{3.5 m}$
where Hrs is the wave height affected by shoaling and refraction, Ho is the wave height in deep water, Hb is the breaking wave height, Ks is the shoaling coefficient, Kr is the refraction coefficient, αo is the wave approach angle in deep water, α is the wave angle at the local depth, L is the wave length, d is the local water depth, and m is the seabed slope.
The stability level of the submerged breakwater (S), water depth (h), significant wave height (Hs), breakwater crest height (h'c), wave period (T), water density (ρw) and rock/concrete density (ρr) are the main factors that affect the unit size of submerged breakwaters. The following equations were used for sizing the breakwater units [28], [29], and the median mass of the concrete unit (M50) must be greater than 643 kg:
$\dfrac{h_c'}{h} = (2.1 + 0.1 S)\, e^{-0.14 N_s^*}$
$\Delta = \dfrac{\rho_r}{\rho_w} - 1$
$D_{n50} = \left(\dfrac{M_{50}}{\rho_r}\right)^{1/3}$
$\dfrac{H_s}{\Delta D_{n50}}\, s_p^{-1/3} = N_s^*$
$s_p = \dfrac{2\pi H_s}{g T^2}$
where N*s is the modified stability number, Dn50 is the nominal diameter, Δ is the relative buoyant density, M50 is the median mass of a unit, given by 50% on the mass distribution curve, and sp is the local wave steepness. At the lowest sea level the structure will behave as a low-crested breakwater; therefore, it must also be checked for the low-crested condition.
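The wave-transformation and unit-sizing relations above can be sketched numerically. This is an illustrative sketch, not the paper's Excel model: the group-velocity ratio n is left as an input because the text does not state it, the Hudson formula used for the low-crested check is included, and all example input values (densities, wave height, KD, slope) are assumptions rather than the paper's design inputs:

```python
import math

def shoaling_coefficient(d, L, n):
    """Ks = sqrt(1 / (2*n*tanh(2*pi*d/L))); n is the group-velocity ratio."""
    return math.sqrt(1.0 / (2.0 * n * math.tanh(2.0 * math.pi * d / L)))

def refraction_coefficient(alpha0_deg, alpha_deg):
    """Kr = sqrt(cos(alpha0) / cos(alpha)), angles in degrees."""
    return math.sqrt(math.cos(math.radians(alpha0_deg)) /
                     math.cos(math.radians(alpha_deg)))

def nearshore_wave_height(H0, Kr, Ks):
    """Hrs = H0 * Kr * Ks."""
    return H0 * Kr * Ks

def breaking_wave_height(d, m):
    """Hb = 0.56 * d * e^(3.5*m), with m the seabed slope."""
    return 0.56 * d * math.exp(3.5 * m)

def relative_buoyant_density(rho_r, rho_w):
    """Delta = rho_r / rho_w - 1."""
    return rho_r / rho_w - 1.0

def local_wave_steepness(Hs, T, g=9.81):
    """sp = 2*pi*Hs / (g*T^2)."""
    return 2.0 * math.pi * Hs / (g * T ** 2)

def nominal_diameter(M50, rho_r):
    """Dn50 = (M50 / rho_r)^(1/3)."""
    return (M50 / rho_r) ** (1.0 / 3.0)

def hudson_m50(rho_r, H, KD, delta, cot_alpha):
    """Hudson formula: M50 = rho_r * H^3 / (KD * delta^3 * cot(alpha))."""
    return rho_r * H ** 3 / (KD * delta ** 3 * cot_alpha)

# Assumed example inputs: concrete units (rho_r = 2400 kg/m3) in seawater
# (rho_w = 1025 kg/m3), a 1.2 m design wave, T = 6.5 s, KD = 6.5, 1:2 slope.
delta = relative_buoyant_density(2400.0, 1025.0)   # ~1.34
m50 = hudson_m50(2400.0, 1.2, 6.5, delta, 2.0)
dn50 = nominal_diameter(m50, 2400.0)
sp = local_wave_steepness(1.2, 6.5)
```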
For the design of a low-crested breakwater, the water depth (h), crest freeboard (Rc), breakwater slope (cot α), stability number (Ns), weight of the reef units (Wa), specific weight of the reefs (γa), the ratio between reef and water specific weight (r = γa/γw), and the stability coefficient (KD) are the main factors. The Hudson equations were used for the computation of the breakwater units [30]:
$\Delta = \dfrac{\rho_r}{\rho_w} - 1$
$M_{50} = \dfrac{\rho_r H^3}{K_D \Delta^3 \cot\alpha}$
$D_{n50} = \left(\dfrac{M_{50}}{\rho_r}\right)^{1/3}$
$s_p = \dfrac{2\pi H_s}{g T^2}$
$R_p^* = \dfrac{R_c}{H_s}\sqrt{\dfrac{s_p}{2\pi}}$
$R_{D50} = \dfrac{1}{1.25 - 4.8 R_p^*}$
$N_s = \dfrac{\sqrt[3]{\gamma_a}\, H}{(r-1)\sqrt[3]{W_a}}$
The stability coefficient for artificial reefs or breakwater blocks can be determined from the stability numbers Ns obtained from laboratory tests, using the following relationship:
$N_s = \sqrt[3]{K_D \cot\theta}$
Thus, the median mass of the concrete unit (M50) must be greater than 1,131 kg.
3.2 Artificial reefballs breakwater alternative
Reefballs are hollow, hemispherically shaped artificial units designed to promote biological growth while acting as a coastal protection structure to control beach erosion. Reefballs are constructed from concrete, which allows them to blend into the marine environment, reducing potential negative impacts and disturbance to the existing ecosystem. Based on the sizing of the breakwater under the prevailing water-level conditions, it was concluded that the minimum concrete unit weight is 1,162 kg. An open-sea significant wave height of 10 m for a return period of 100 years was used. The individual units selected for the breakwater were 1.16 m high reefball units, with a base diameter of 1.83 m and a weight of 1,364-1,909 kg. Three rows of the selected unit should be installed at a distance of 15-40 m from the shoreline. The length of each set of reefballs is 120 m. The proposed location of the submerged breakwaters is shown in Figure 9. The breakwater is proposed to be installed in a water depth of 1.10-1.50 m, so that the units are 0-0.35 m below MSL. The tide range in the study area is approximately 0.4 m, i.e.
the highest astronomical tide depth and the lowest astronomical tide depth at the selected location are about 1.95 m and 1.15 m, respectively.
Fig. 8. Submerged reefballs [31]
Fig. 9. Location of the reefballs breakwater (top), layout (middle) and section (bottom)
3.3 Concrete cube breakwater alternative
Concrete cube breakwaters have been widely used to decrease the impact of the wave forces that reach the front part of a vertical-wall breakwater. The gaps between the cubes reduce the volume of concrete being used. Concrete cubes 1.00 m high should be used at a water depth of -1.35 m MSL (1.00 m at low water level). The concrete volume is 1.0 m3 (2,400 kg), which exceeds the required 1,162 kg.
Fig. 10. Location of the concrete cubes breakwater (top), layout (middle) and section (bottom)
3.4 Geotubes breakwater alternative
For more than 20 years, geotubes, or geosynthetic containers, have been used as an alternative, long-term solution for coastal protection. They have a lesser impact on the environment than hard structures such as rock and concrete. Geotube systems are also used for land reclamation, island creation, wetland creation, construction platforms, revetments, dykes, groins, offshore submerged breakwaters, protection of cliffs along the shore against erosion from wave and wind damage, and as dewatering containers. The geotube system entails the creation of close-ended tubular containers with filling ports spaced at regular intervals. The containers are hydraulically filled with a sand-and-water mixture, which is transported inside the tube by hydraulic pressure. Water dissipates through the permeable fabric, while the sand settles inside the container. Geotube systems are made of woven and composite fabrics in order to meet varying tensile strength, durability and environmental requirements.
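Geotube dimensions are checked against simple geometric rules relating the sand-fill ratio, water depth, tube height and tube width (the Bezuijen & Vastenburg relations used in this section). A minimal sketch, assuming f = 0.56 and d = 1.5 m purely for illustration:

```python
import math

def geotube_min_height(f, d):
    """Minimum tube height: h >= (1 - sqrt(1 - f)) * d, f = sand-fill ratio."""
    return (1.0 - math.sqrt(1.0 - f)) * d

def geotube_min_width(h, d):
    """Minimum tube width: B >= h + (pi/2) * (d - h)."""
    return h + 0.5 * math.pi * (d - h)

# Assumed example values: fill ratio 0.56, water depth 1.5 m.
h_min = geotube_min_height(0.56, 1.5)   # ~0.51 m
b_min = geotube_min_width(1.0, 1.5)     # ~1.79 m for a 1.0 m fill height
```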
The tubular geotube containers typically range in diameter from 1.5 m to 5 m. The size of a geotube should satisfy the Bezuijen & Vastenburg equations [32]:
$h \ge \left(1 - \sqrt{1-f}\right) d$
$B \ge h + \tfrac{1}{2}\pi (d - h)$
where f is the sand-fill ratio (a value of 0.56, i.e. 56%, is recommended), d is the water depth, h is the height of the geotube, and B is the width of the geotube, with B ≥ 2.35 m.
For sizing the geosynthetic material strength, and as a result of the Bezuijen & Vastenburg equations, the structure to protect the beach should be built with two parallel tubes with a diameter of 1.60 m and a fill height of 1.0 m; a circumference of 5.0 m is recommended. The tubes should be installed with a freeboard of 0.40 m. A numerical analysis software package called TC GeoTube Simulator was used for sizing the geosynthetic tubes, as shown in Figure 11.
Fig. 11. Location of the geotube breakwater (top), layout (middle) and section (bottom)
3.5 Concrete seawall alternative
A seawall is a protective structure built along the shore as a barrier against seawater. It is usually the preferred protection method for areas where further shore erosion would cause significant damage, such as when roads or buildings are about to fall into the water. Seawalls must be stable and structurally sound as a primary design requirement. They are located at the top of the shore and are safe during good times (at low water). During times of stress (at high water), they are exposed to direct wave action. The majority of seawalls are under severe stress: the waves attack the structure and move sand offshore and alongshore, away from the structure. In the long term, the wave action reflected off the seawall disturbs the water near the wall, which can lead to deep scour holes just offshore of the seawall.
The scour areas and disturbed flows can be dangerous, and the scour may even excavate the supporting sand from beneath the structure, jeopardizing the wall's stability. The total length of the seawall is 252 m, in a curved alignment. The existing ground level is about +3.5 m MSL, and it was assumed that the beach in front of the seawall will drift away, so that the seawater becomes deep adjacent to the seawall, i.e. the beach level drops to 0.00 m MSL due to scouring; this is why the Deir el Balah municipality dumps sand in the area after the storm season to protect the seawall. The structural analysis of the retaining wall for the section at the highest level (+5.00 m MSL) was carried out using the ACI 318-11 design code. It was concluded that a wall thickness of 0.40 m is required to support the soil action. The structure was extended to a level of 1.0 m below MSL, with a foundation cross-section of 3.4 × 0.4 m, as shown in Figure 12. The proposed top level at the different sections of the seawater barrier is +5.00 m MSL, which is above the highest observed wave run-up level of +4.0 m MSL. Therefore, the proposed level is suitable to act as a flood defence and to protect the shelters from sea attacks.
Fig. 12. Seawall layout (top) and section (bottom)
3.6 Recommended protection alternative
A comparison matrix between the different protection alternatives is presented in Table 5. Multi-criteria analysis was used to assess the alternatives with respect to criteria that capture the key dimensions of the selection process. The criteria were selected based on a literature review, consultations (via interviews) with key stakeholders, and the consultant's experience of the most important factors when planning, designing, financing, and implementing coastal protection measures.
The selected factors can be summarized as follows:
- Erosion and scouring control: this factor assesses the effectiveness of the proposed alternative in providing the required control of shoreline erosion in the study area.
- Ecosystem: implementing this option would enhance the marine habitat and the ecological value of the study area.
- Risk: implementing this option is expected to cause some safety hazards to people in the area, especially children.
- Acceptability: implementing this protection option would be readily accepted by the community and the relevant stakeholders.
- Material availability: each material required to implement this protection option, including additives, is available in the local market or could be supplied with no restrictions on its entry into the Gaza Strip.
- Construction experience: implementing this option is possible, and the local contractors have the required experience, equipment, and human and technical resources to implement this alternative.
- Cost: implementing this protection option is economically feasible and cost-effective.
- Aesthetic: implementing this option has aesthetic visual effects on the seaside landscape.
- Socio-economic: implementing this option would have an impact on business activities in the study area (e.g. fishing activities).
- Sustainability: this factor addresses the relevance of the proposed option in the present and the future, its resilience against irresponsible behavior by some local residents, and its resilience to future changes.
Quantitative scores ranging from -5 to 5 were used in the evaluation of the proposed alternatives, where -5 is used when the option is highly unfavorable in terms of the specific factor, 5 is used when the option is highly favorable, and zero (0) is used when the option is neutral.
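The aggregation of these scores is a plain weighted sum (weight × score, summed over all factors for each alternative). A minimal sketch, with the weights and scores transcribed from Table 5:

```python
weights = {
    "erosion/scouring control": 2, "ecosystem": 2, "risk": 2,
    "acceptability": 1, "material availability": 2,
    "construction experience": 2, "cost": 2, "sustainability": 2,
    "aesthetic visual effect": 1, "socio-economic": 1,
}

# factor -> scores for (reefballs, cubes, geotube, seawall), from Table 5
scores = {
    "erosion/scouring control": (5, 5, 5, 0),
    "ecosystem": (5, 2, 1, 0),
    "risk": (-3, -3, 0, 0),
    "acceptability": (5, 4, 2, 4),
    "material availability": (5, 5, 2, 5),
    "construction experience": (4, 5, 0, 5),
    "cost": (3, 2, 5, 1),
    "sustainability": (5, 5, 2, 3),
    "aesthetic visual effect": (4, 3, 5, 2),
    "socio-economic": (2, 1, 0, 0),
}

def weighted_totals(weights, scores):
    """Sum of weight * score per alternative."""
    n = len(next(iter(scores.values())))
    return [sum(weights[f] * s[k] for f, s in scores.items()) for k in range(n)]

totals = weighted_totals(weights, scores)  # [59, 50, 37, 34]
```

The totals reproduce the sums of weighted scores in Table 5, confirming the reefballs alternative as the highest-ranked option.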
Table 5. Protection alternatives comparison matrix
Factor                     Weight  Reefballs  Cubes  Geotube  Seawall
Erosion/scouring control     2         5        5       5        0
Ecosystem                    2         5        2       1        0
Risk                         2        -3       -3       0        0
Acceptability                1         5        4       2        4
Material availability        2         5        5       2        5
Construction experience      2         4        5       0        5
Cost                         2         3        2       5        1
Sustainability               2         5        5       2        3
Aesthetic visual effect      1         4        3       5        2
Socio-economic               1         2        1       0        0
Sum of weighted scores                59       50      37       34
Based on this analysis, the recommended alternative is the reefballs offshore breakwater, where three sets of three-row reefball submerged breakwaters are proposed to be installed.
IV Conclusions
This study proposes a rational solution to the flooding and erosion problems at Deir el Balah camp, arising from the shoreline imbalance due to climate change and the construction of the Khan Younis breakwater in 2016. Seawater flooding occurs during heavy storms, when seawater can reach a ground level near +4 m MSL. From the residents' observations, stormy seasons used to occur about once every ten years; however, the return period has become shorter in recent years. The shoreline changes between 1972 and 2020 were analyzed using remote sensing and GIS tools, where satellite Landsat and aerial images of the Mediterranean coast of Deir el Balah were acquired for different years covering a time span of 48 years. The analysis shows that erosion takes place along the shore within the study area, and that the average annual erosion rate during the period between 2016 and 2020 is 2.37 m per year.
It is found that the submerged breakwater is the best alternative for solving the erosion problem at the Deir el Balah beach: it has no adverse effects on the ecosystem; on the contrary, it is an environmentally friendly alternative, especially if designed as an "artificial reef", in which case it will enrich the marine environment and add aesthetic value; moreover, it will protect the shoreline and provide the beach with new accretion areas.
Acknowledgment
I would like to express my gratitude to all those who made it possible for me to accomplish this study. I am deeply indebted to Israa Abushaban, Mohamed Abdrabou and Dr. Amjad Jarada for their continuous and valuable efforts during data collection and processing.
References
[1] M. Abualtayef, S. Ghabayen, A. Foul, A. Seif, M. Kuroiwa, Y. Matsubara, and O. Matar, "The impact of Gaza fishing harbor on the Mediterranean coast of Gaza". Journal of Coastal Development, vol. 16, no. 1, pp. 1-10, 2012.
[2] K. Zhang, B. C. Douglas, and S. P. Leatherman, "Global warming and coastal erosion". Climatic Change, vol. 64, article 41, 2004.
[3] A. Toimil, I. J. Losada, R. J. Nicholls, R. A. Dalrymple, and M. J. F. Stive, "Addressing the challenges of climate change risks and adaptation in coastal areas: a review". Coastal Engineering, vol. 156, article 103611, 2020.
[4] B. R. Silliman, Q. He, C. Angelini, C. S. Smith, M. L. Kirwan, P. Daleo, and J. van de Koppel, "Field experiments and meta-analysis reveal wetland vegetation as a crucial element in the coastal protection paradigm". Current Biology, vol. 29, no. 11, pp. 1800-1806.e3, 2019.
[5] N. G. Rangel-Buitrago, G. Anfuso, and A. T. Williams, "Coastal erosion along the Caribbean coast of Colombia: magnitudes, causes and management". Ocean & Coastal Management, vol. 114, pp. 129-144, 2015.
[6] S. M. Arens, J. P. Mulder, Q. L. Slings, L. H. Geelen, and P. Damsma, "Dynamic dune management, integrating objectives of nature development and coastal safety: examples from the Netherlands".
Geomorphology, vol. 199, pp. 205-213, 2013.
[7] A. V. Hegde, "Coastal erosion and mitigation methods - global state of art". Indian Journal of Geo-Marine Sciences, vol. 39, no. 4, pp. 521-530, 2010.
[8] H. Zaqoot, S. Hujair, Q. Ansari, and S. Khan, "Assessment of land-based pollution sources in the Mediterranean Sea along Gaza coast - Palestine", in Energy, Environment and Sustainable Development, M. A. Uqaili and K. Harijan, Eds., Springer-Verlag Wien, 2012.
[9] M. Shatat, K. Arakelyan, O. Shatat, T. Forster, A. Mushtaha, and S. Riffat, "Low volume water desalination in the Gaza Strip - Al Salam small scale RO water desalination plant case study". Future Cities and Environment, vol. 4, no. 1, article 11, 2018.
[10] I. A. Abuamra, A. Y. A. Maghari, and H. F. Abushawish, "Medium-term forecasts for groundwater production and rainfall amounts (Deir El-Balah City as a case study)". Sustainable Water Resources Management, vol. 6, article 82, 2020.
[11] A. Masria, M. Abualtayef, and A. Seif, "Hydro-morphological simulation for Blue Beach, Gaza Strip, Palestine". Innovative Infrastructure Solutions, vol. 6, article 99, 2021.
[12] V. B. Nguyen, "Effect of particle size on erosion characteristics". Wear, vol. 348, pp. 126-137, 2016.
[13] M. Abualtayef, A. Foul, S. Ghabayen, A. F. Rabou, A. Seif, and O. Matar, "Mitigation measures for Gaza coastal erosion". Journal of Coastal Development, vol. 16, no. 2, pp. 135-146, 2013.
[14] Netherlands Ministry of Economic Affairs and Palestinian National Authority (NMEA-PNA), "Port of Gaza: basic engineering study". Final report, 1994.
[15] D. S. Rosen, "Assessing present and future Mediterranean Sea level rise impact on 'Israel's' coast and mitigation ways against beach and cliff erosion". Coastal Engineering Proceedings, February 2011.
[16] D. S.
Rosen, "A summary of the environmental and hydrographic characteristics of the Mediterranean coast of Israel". Israel Oceanographic & Limnological Research, National Institute of Oceanography, technical report, 2001.
[17] S. Brenner, "High-resolution nested model simulations of the climatological circulation in the southeastern Mediterranean Sea". Annales Geophysicae, vol. 21, pp. 267-280, 2003.
[18] Israel Oceanographic and Limnological Research (IOLR), "Seawater characteristics at Ashkelon". Technical report, 2016.
[19] I. A. Hunt, "Design of seawalls and breakwaters". Transactions of the American Society of Civil Engineers, vol. 126, pp. 542-570, 1959.
[20] M. Abualtayef, A. Foul, S. Ghabayen, A. F. Rabou, A. Seif, and O. Matar, "Mitigation measures for Gaza coastal erosion". Journal of Coastal Development, vol. 16, no. 2, pp. 135-146, 2013.
[21] S. Hutahaean, "Salient calculation at the single offshore breakwater for a wave perpendicular to coastline using polynomial approach". International Journal of Advanced Engineering Research and Science, vol. 7, no. 1, pp. 156-161, 2020.
[22] S. Mead and K. Black, "Wave rotation for coastal protection". Asian and Pacific Coasts 2003, pp. 1-12, 2004.
[23] M. M. Harris and J. B. Herbich, "Effects of breakwater spacing on sand entrapment". Journal of Hydraulic Research, vol. 24, no. 5, pp. 347-357, 1986.
[24] W. R. Dally and J. Pope, "Detached breakwaters for shore protection". Technical report CERC-81-61, U.S. Army Corps of Engineers, Vicksburg, Mississippi, 1986.
[25] L. C. van Rijn, J. S. Ribberink, J. J. van der Werf, and D. J. R. Walstra, "Coastal sediment dynamics: recent advances and future research needs". Journal of Hydraulic Research, vol. 51, no. 5, pp. 475-493, 2013.
[26] K. W. Pilarczyk, "Design of low-crested (submerged) structures - an overview". Proceedings of the 6th International Conference on Coastal and Port Engineering in Developing Countries, PIANC-COPEDEC, Colombo, Sri Lanka, pp. 1-16, 2003.
[27] J. W.
Kamphuis, "Introduction to Coastal Engineering and Management". Second edition, Advanced Series on Ocean Engineering, vol. 30, 564 pp., World Scientific Press, 2010.
[28] J. van der Meer and K. Pilarczyk, "Stability of low-crested and reef breakwaters". Coastal Engineering Conference, chapter 103, pp. 1375-1388, 1990.
[29] J. van der Meer, "Rock slopes and gravel beaches under wave attack". Ph.D. thesis, Delft University, the Netherlands, 1988.
[30] R. Y. Hudson, F. A. Herrmann, R. A. Sager, R. W. Whalin, G. H. Keulegan, C. E. Chatham and L. Z. Hales, "Coastal hydraulic models". Special report no. 5, U.S. Army Engineer Waterways Experiment Station, Vicksburg, Mississippi, 1979.
[31] M. Buccino, I. Del Vita, and M. Calabrese, "Engineering modelling of wave transmission of reef balls". Journal of Waterway, Port, Coastal and Ocean Engineering, vol. 140, no. 4, article 04014010, 2014.
[32] A. Bezuijen and E. W. Vastenburg, "Geosystems: Design Rules and Applications". First edition, CRC Press/Balkema, 2013.
Mazen Abualtayef is an associate professor of coastal engineering in the Environmental Engineering Department at the Islamic University of Gaza. He is a water engineering expert with versatile experience gained during 24 years of managing, designing and supervising infrastructure projects, especially water and coastal projects. He teaches courses such as port and coastal engineering, brine management, renewable energy for desalination, O&M of water, sewerage and stormwater networks, engineering economics, surveying and GIS, environmental modeling, fluid mechanics, hydraulics and numerical analysis.
Journal of Engineering Research and Technology, Volume 1, Issue 1, February 2014
Genetic Algorithm Model to Optimize Water Resources Allocation in Gaza Strip
Said M. Ghabayen, Ibrahim M. Madi, Khalid A. Qahman, and Basim I.
Sirdah
Abstract— The groundwater aquifer is considered the main and only water supply source for all kinds of human usage in the Gaza Strip (domestic, agricultural and industrial). This source is severely deteriorated in both quality and quantity for many reasons, including low rainfall, the dramatic increase in urban areas and population, pollution from overland activities, and seawater intrusion. In 2011, the Palestinian Water Authority instituted a plan for the integrated management of Gaza's water resources that considers introducing new external water resources to the system, such as seawater desalination and the treatment and reuse of wastewater. In this work, a genetic algorithm model was developed to seek the optimal combination of the management scenarios of the Palestinian Water Authority plan. The optimization code was designed and run using MATLAB R2011b. The objective function maximizes the benefits and minimizes the costs related to the use of different sources of water. The decision variables represent the water allocation over the different user sectors. The benefits from utilizing water for municipal and industrial purposes are based on the marginal value of water, which is derived from the economic equilibrium point between the supply and demand curves. The benefits from irrigation water are affected by the relationship between crop yield and salinity. The constraints in the optimization model are allowed to iterate between two bounds (an upper bound and a lower bound) until the optimal value for each variable is found. The results show that there is a significant improvement in the aquifer's water levels in the majority of the Gaza Strip area for the planning years 2015, 2025, and 2035, provided that the planned phased desalination and wastewater treatment schemes are implemented in the specified time horizon.
The results show that the resulting quality of the water available for agricultural use, in terms of the total weighted average of electrical conductivity, is 962 µS/cm in the year 2015, 876 µS/cm in the year 2025, and 842 µS/cm in the planning year 2035. The results also show that the resulting quality of the water available for municipal and industrial use, in terms of the total weighted average of electrical conductivity, is 867 µS/cm in the year 2015, 685 µS/cm in the year 2025, and 631 µS/cm in the planning year 2035.
Index Terms— Gaza Strip, optimization, genetic algorithms, water resources allocation, marginal value of water
I Introduction
A Water resources optimization
The availability of freshwater is imperative to economic and social development. Therefore, sources of freshwater should be managed in a sustainable manner. Sustainable water resource systems include three integrated processes, namely the natural environment, the socio-economic environment and the management system. The purpose of the integrated system is not only to use natural resources without degrading the quality of the water or land, but also to ensure that present and future water demands are met irrespective of changes in circumstances [1]. An optimization model consists of an objective function, a quantity that is maximized or minimized, and a set of additional constraints or conditional statements that must be satisfied. In recent years, optimization has been widely used in groundwater planning and management models. In the past decade, nonlinear programming techniques have been applied to groundwater management models, since these often give rise to highly non-convex and non-linear programming problems [2]. Most optimization problems related to the interaction of groundwater resources and socio-economic activities are nonlinear. The non-linearity comes from the complexity of the groundwater system. In addition, cost functions tend to change non-linearly with economy of scale [3].
Genetic algorithms (GA) have been used extensively for solving complex and highly non-linear optimization problems. A genetic algorithm is a random search scheme inspired by biological evolution; it is an optimization technique based on the process of biological evolution [4]. The concept of GA is based on the initial selection of a relatively small population. Each individual in the population represents a possible solution in the parameter space. The fitness of each individual is determined by the value of the objective function, calculated based on that set of parameters. The natural evolutionary processes of reproduction, crossover, mutation, and selection are applied using probability rules to evolve new and better generations. Evolution-based search algorithms are claimed to find much better near-optimal solutions than other optimization methods [5].
- Head of Environmental Engineering Department, Islamic University of Gaza, Gaza, Palestine.
- Project manager, Engineering and Environmental Protection Department, United Nations Mission in Darfur (UNAMID).
- General director, Environmental Quality Authority, Gaza, Palestine.
- Research assistant, Civil Engineering Department, Islamic University of Gaza, Gaza, Palestine.
B About the study area
The groundwater aquifer is considered the main and only water supply source for all kinds of human usage in the Gaza Strip (domestic, agricultural and industrial). This source is severely deteriorated in both quality and quantity for many reasons, including low rainfall, the dramatic increase in urban areas and population, pollution from overland activities, and seawater intrusion.
In 2011, the Palestinian Water Authority instituted a plan for the integrated management of Gaza's water resources that considers introducing new external water resources to the system, such as seawater desalination and the treatment and reuse of wastewater. In this work, a genetic algorithm model was developed to seek the optimal combination of the management scenarios of the Palestinian Water Authority plan [6].
II Methodology
The optimization code was designed and run using MATLAB R2011b. The code initializes a random sample of individuals with different parameters, to be optimized using the genetic algorithm approach.
A Objective function
The objective of the management model is to maximize the total benefits from the use of water sources for different purposes at minimum cost. The objective function can be expressed as:
$\max Z = \sum_{i=1}^{n}\sum_{j=1}^{n} Q_{i,j}\,(B_j - C_i)$  (1)
where i indicates a particular source of water from n sources, j denotes the sector where the water is utilized, and $Q_{i,j}$ represents the quantity of water extracted from source (i) and utilized in sector (j); the total allocation is represented by the following equation:
$Q_{i,j} = Q_{11} + Q_{21} + Q_{31} + Q_{41} + Q_{51} + Q_{12} + Q_{52} + Q_{62} + Q_{7}$
where:
Q11: the quantity of water supplied from groundwater wells to the municipal and industrial users.
Q21: the quantity of water supplied from brackish groundwater desalination to the municipal and industrial users.
Q31: the quantity of water supplied from seawater desalination to the municipal and industrial users.
Q41: the quantity of water imported from the Mekorot company (Israel) for the municipal and industrial users.
Q51: the quantity of water imported from Egypt for the municipal and industrial users.
Q12: the quantity of water supplied from groundwater wells to the agriculture sector.
Q52: the quantity of water imported from Egypt for the agriculture sector.
Q62: the quantity of water supplied from reclaimed water to the agriculture sector, and
Q7: the quantity of harvested storm water used to replenish the groundwater aquifer.
Bj: the estimated benefit from utilizing one cubic meter of water in the municipal and industrial sector and/or the agricultural sector (B_M&I or B_Ag).
Ci: the estimated cost of supplying a unit volume of water from the different sources, including physical losses.

B. Estimation of Benefits from Municipal and Industrial Sectors (B_M&I)

The benefit from utilizing water for municipal and industrial purposes (B_M&I) is based on the marginal value of water (the optimal value of water), derived from the economic equilibrium point between the supply and demand curves. This point corresponds to a marginal value of 1.03 $/m3 for a unit volume of good-quality water; lower values correspond to the present quality situation [7]. Based on that, the benefit of water supply is given by the relation:

B_M&I = 1.51 − 0.48 · EC_M&I   (2)

where:
B_M&I: economic value of a unit of water for municipal and industrial purposes, and
EC_M&I: electrical conductivity of the blended water (mS/cm) from the different sources, given by the following mass-balance relation:

EC_M&I = Σ (Q_i,M&I · EC_i) / Σ Q_i,M&I   (3)

where:
EC_i = electrical conductivity of each source used for municipal and industrial purposes, and
Σ Q_i,M&I = sum of the quantities of water from the different sources used for municipal and industrial purposes.

C. Estimation of Benefits from the Agricultural Sector (B_Ag)

The benefits from irrigation water (B_Ag) are affected by the relationship between crop yield and salinity. A series of such relationships were developed for different categories of crops, as shown in Figure 1; the categories are also listed in Table 1 [8].
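Equations (1)-(3) chain together: the flow-weighted EC of a blend (Eq. 3) fixes the M&I benefit per cubic meter (Eq. 2), which then enters the net-benefit objective (Eq. 1). A minimal sketch, with our own variable names: the 0.30 and 0.90 $/m3 costs echo unit costs given later in the paper, the agricultural benefit of 0.60 $/m3 is an assumed placeholder, and note that Eq. (2) expects EC in mS/cm (so 1.0 mS/cm = 1000 µS/cm yields roughly the 1.03 $/m3 marginal value quoted above).

```python
def blended_ec(q, ec):
    """Eq. (3): flow-weighted average electrical conductivity of a blend."""
    return sum(q[s] * ec[s] for s in q) / sum(q.values())

def benefit_mi(ec_blend):
    """Eq. (2): B_M&I in $/m3, with the blended EC in mS/cm."""
    return 1.51 - 0.48 * ec_blend

def objective_z(Q, B, C):
    """Eq. (1): Z = sum over (source i, sector j) of Q[(i, j)] * (B[j] - C[i])."""
    return sum(qty * (B[j] - C[i]) for (i, j), qty in Q.items())

# Illustrative M&I blend: 60 Mm3 of well water at 1.0 mS/cm (1000 uS/cm)
# plus 13 Mm3 of desalinated water at 0.5 mS/cm
q_mi = {"wells": 60e6, "desal": 13e6}
ec_mi = {"wells": 1.0, "desal": 0.5}
b_mi = benefit_mi(blended_ec(q_mi, ec_mi))  # about 1.07 $/m3

# Net benefit with assumed benefits/costs (agricultural benefit is a placeholder)
B = {"M&I": b_mi, "Ag": 0.60}
C = {"wells": 0.30, "desal": 0.90}
Z = objective_z({("wells", "M&I"): 60e6, ("desal", "M&I"): 13e6,
                 ("wells", "Ag"): 60e6}, B, C)
```

The GA only ever needs this forward evaluation of Z for each candidate allocation; no derivatives are required, which is why the evolutionary search suits this non-linear problem.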
For the Gaza Strip case, and to simplify the optimization model, the medium curve (average crop) among the four curves shown in Figure 1 is selected to represent the Gaza Strip agricultural sector. The reason is that no accurate data are available about the crop distribution, particularly for future prediction; in addition, we are concerned with the macro-scale picture of the problem.

Genetic Algorithm Model to Optimize Water Resources Allocation in Gaza Strip. Said M. Ghabayen, Ibrahim M. Madi, Khalid A. Qahaman, and Basim I. Sirdah (2014)

Figure 1: Relative crop yield vs. salinity [8]

Table 1: Crop categories and their tolerance to salinity [8]

Tolerance to salinity | Crops
Tolerant              | barley, sugar beet
Moderately tolerant   | wheat, wheat grass, zucchini, beet (red), orange
Moderately sensitive  | tomato, cucumber, alfalfa, clover, corn, potato
Sensitive             | onion, carrot, bean, apple, cherry, strawberry, flowers

D. Limits and Constraints

Most of the constraints in the GA model are allowed to iterate between two bounds (upper and lower) until the optimal value for each variable is found. The quality-variable and resource-variable constraints can be expressed by the following inequalities:

EC_i,min ≤ EC_i ≤ EC_i,max
Q_ij,min ≤ Q_ij ≤ Q_ij,max

These bounds are based either on nature or on environmental carrying capacity, such as the sustainable abstraction quantity from the groundwater aquifer, or on local policies for allocating the resources among sectors. Table 2 below shows the upper and lower bounds of the different variables used in the GA model.
Table 2: GA model input variables and limits

Variable | Variable | Unit  | Year 2015         | Year 2025         | Year 2035
(GA)     | (text)   |       | min       max     | min       max     | min       max
x1       | Q11      | m3    | 60×10^6   67×10^6 | 40×10^6   48×10^6 | 40×10^6   48×10^6
x2       | Q21      | m3    | 0         5×10^6  | 0         5×10^6  | 0         5×10^6
x3       | Q31      | m3    | 0         13×10^6 | 0         72×10^6 | 0         130×10^6
x4       | Q41      | m3    | 5×10^6    15×10^6 | 5×10^6    21×10^6 | 5×10^6    21×10^6
x5       | Q51      | m3    | 0         5×10^6  | 0         10×10^6 | 0         10×10^6
x6       | Q12      | m3    | 60×10^6   70×10^6 | 40×10^6   50×10^6 | 40×10^6   50×10^6
x7       | Q52      | m3    | 0         5×10^6  | 0         10×10^6 | 0         10×10^6
x8       | Q62      | m3    | 0         10×10^6 | 0         20×10^6 | 0         40×10^6
x9       | EC1      | µS/cm | 1000      1670    | 1000      1040    | 1000      1040
x10      | EC2      | µS/cm | 500       1000    | 500       1000    | 500       1000
x11      | EC3      | µS/cm | 500       700     | 500       700     | 500       700
x12      | EC4      | µS/cm | 700       1040    | 700       1040    | 700       1040
x13      | EC5      | µS/cm | 500       700     | 500       700     | 500       700
x14      | Q7       | m3    | 0         20×10^6 | 0         30×10^6 | 0         40×10^6
calc.    | ECT1     | µS/cm | n/a       1500    | n/a       1500    | n/a       1500
calc.    | ECT2     | µS/cm | n/a       1650    | n/a       1650    | n/a       1650

E. Other Constraints

The sustainable abstraction from the groundwater aquifer is estimated at 110×10^6 cubic meters per year [9]. This value changes based on the applied strategies for inflows and outflows to the aquifer and the availability of other resources, and is modeled by the following equation:

Q11 + Q12 + 1.5·Q21 ≤ Q7 + 110×10^6   (4)

Q7 is the quantity of water added from harvesting and infiltration, so it increases the upper bound of the aquifer capacity. Based on Metcalf and Eddy (2003) [9], the integrated aquifer management plan assumed that at least 25% of the irrigation demand should come from the aquifer, and the rest can be supplied from treated effluent. This is modeled by the following constraint:

Q62 ≤ 0.75·(Q12 + Q52 + Q62)   (5)

F. Total Water Demand Constraints

Table 3 below summarizes the total water demand for both domestic & industrial use and agriculture. These constraints are for the years 2015, 2025 and 2035 [6, 9].
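Constraints (4) and (5) above are simple linear inequalities and can be checked directly for any candidate allocation. A sketch with our own function names, exercised with the 2015 optimal quantities the paper reports later in Table 5:

```python
def aquifer_ok(q11, q12, q21, q7, sustainable=110e6):
    """Eq. (4): total abstraction (brackish pumping weighted 1.5x) must not
    exceed the sustainable yield plus the artificial recharge q7."""
    return q11 + q12 + 1.5 * q21 <= q7 + sustainable

def reuse_ok(q12, q52, q62):
    """Eq. (5): reclaimed water may supply at most 75% of the irrigation
    total, so at least 25% comes from the other irrigation sources."""
    return q62 <= 0.75 * (q12 + q52 + q62)

# 2015 optimal values (Table 5): Q11=65, Q12=63, Q21=1.08, Q7=19.87,
# Q52=4.96, Q62=9.28 (all in Mm3)
feasible = (aquifer_ok(65e6, 63e6, 1.08e6, 19.87e6)
            and reuse_ok(63e6, 4.96e6, 9.28e6))
```

In a GA these checks are typically folded into the fitness as penalty terms or used to reject infeasible children; the excerpt does not say which mechanism the authors' MATLAB code uses.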
Table 3: Total water demand constraints

Constraint                     | Year 2015 | Year 2025 | Year 2035
Q11 + Q21 + Q31 + Q41 + Q51 ≥ | 94 MCM    | 140 MCM   | 198 MCM
Q12 + Q52 + Q62 ≥             | 77 MCM    | 69 MCM    | 61 MCM

G. Cost Variables

The following are the cost variables of the model [6]:
C11: unit cost of water supplied from groundwater wells to municipal and industrial users ($0.30/m3),
C21: unit cost of water supplied from brackish groundwater desalination to municipal and industrial users ($0.75/m3),
C31: unit cost of water supplied from seawater desalination to municipal and industrial users ($0.90/m3),
C41: unit cost of water imported from the Mekorot company (Israel) for municipal and industrial users ($0.85/m3),
C51: unit cost of water imported from Egypt for municipal and industrial users ($0.80/m3),
C12: unit cost of water supplied from groundwater wells to the agriculture sector ($0.20/m3),
C52: unit cost of water imported from Egypt for the agriculture sector ($0.80/m3),
C62: unit cost of additional treatment for reclaimed water supplied to the agriculture sector ($0.35/m3), and
C7: unit cost of additional treatment for water harvested and infiltrated into the groundwater aquifer ($0.35/m3).

III. Results and Discussion

A. Genetic Algorithm Model Results for Short-Term Management (Year 2015)

According to the Palestinian Central Bureau of Statistics (PCBS) [10], a population growth rate of 3.5% was assumed to estimate the future municipal well abstractions.
Based on that, the projected population of the Gaza Strip by the year 2015 will stand at 1.8 million inhabitants distributed over the five governorates. The estimated quantities of water for domestic demand were calculated considering the recommendations of GETAP 2011 [6], assuming a benchmark water consumption of 135 liters per capita per day for the whole Gaza Strip towards the end of the short-term intervention period. Table 4 summarizes the distribution of the projected population of the Gaza Strip as well as the demanded quantities of water for domestic and agricultural use. As a result of the instability of the political situation and the absence of industrial infrastructure, the industrial demand is considered to be 2 MCM/year [11]. Considering the required quantities for both domestic and agricultural use with all constraints and limits, the GA model solved the case for the optimal quantities by maximizing the total benefits and minimizing the cost. The optimal quantities after comparing 100 generations are summarized in Table 5.

Table 4: Projected population and domestic & agriculture water demand in year 2015

Governorate | Population | Consumption (l/capita/day) | Domestic demand (m3/year)
North       | 501,979    | 135                        | 17,535,100
Gaza        | 922,078    | 135                        | 32,209,983
Middle      | 381,779    | 135                        | 13,336,278
Khan Younis | 503,341    | 135                        | 17,582,699
Rafah       | 322,037    | 135                        | 11,249,383
Total       | 2,631,214  | 135                        | 91,913,443
Agriculture water demand for all governorates: 77,000,000 m3/year

Table 5: Optimal water quantities for short-term management in year 2015

Description                | Source                                | Variable | Unit  | Quantity
Domestic and industrial    | Groundwater wells                     | Q11      | m3    | 65×10^6
demand                     | Desalinated water from brackish wells | Q21      | m3    | 1.08×10^6
                           | Desalinated seawater                  | Q31      | m3    | 13×10^6
                           | Imported water from Mekorot company   | Q41      | m3    | 9.90×10^6
                           | Imported water from Egypt             | Q51      | m3    | 4.90×10^6
                           | Total                                 |          | m3    | 93.88×10^6
Agriculture demand         | Groundwater wells                     | Q12      | m3    | 63×10^6
                           | Imported water from Egypt             | Q52      | m3    | 4.96×10^6
                           | Reclaimed water                       | Q62      | m3    | 9.28×10^6
                           | Total                                 |          | m3    | 77.24×10^6
Harvested water            | Harvested water from storm water      | Q7       | m3    | 19.87×10^6
Electrical conductivities  | Water from aquifer                    | EC1      | µS/cm | 1000
of sources                 | Desalinated water from brackish wells | EC2      | µS/cm | 500
                           | Desalinated seawater                  | EC3      | µS/cm | 500
                           | Imported water from Mekorot company   | EC4      | µS/cm | 700
                           | Imported water from Egypt             | EC5      | µS/cm | 500
Final quality (calculated) | Average EC for domestic water         | ECT1     | µS/cm | 867
                           | Average EC for irrigation water       | ECT2     | µS/cm | 962
Best function value: $77×10^6

The results show a significant improvement in the aquifer's water levels over most of the Gaza Strip, especially in the middle area. The levels will gradually increase to reach 4 meters below MSL in the north area and 11 meters below MSL in the south area. As for quality, the results show that the average electrical conductivity for domestic and agricultural water is 867 µS/cm and 962 µS/cm, respectively. This means that the total dissolved solids (TDS) are around 500 mg/liter for domestic use and 580 mg/liter for agricultural use. Figure 2 shows the simulated water levels in the aquifer, using the SEAWAT model, when adopting the optimized scenario for short-term planning in year 2015.
Figure 2: Predicted water level for optimized scenario (year 2015)

B. Genetic Algorithm Model Results for Medium-Term Management (Year 2025)

The estimated quantities of water for domestic demand were calculated considering the recommendations of GETAP 2011 [6], assuming a benchmark water consumption of 150 liters per capita per day for the whole Gaza Strip towards the end of the medium-term intervention period. Table 6 summarizes the distribution of the projected population of the Gaza Strip as well as the demanded quantities of water for domestic and agricultural use. As a result of the instability of the political situation and the absence of industrial infrastructure, the industrial demand is considered to be 2 MCM/year [11]. Considering the required quantities for both domestic and agricultural use with all constraints and limits, the GA model solved the case for the optimal quantities by maximizing the total benefits and minimizing the cost. The optimal quantities after comparing 100 generations are summarized in Table 7.

Table 6: Projected population and domestic & agriculture water demand in year 2025

Governorate | Population | Consumption (l/capita/day) | Domestic demand (m3/year)
North       | 501,979    | 150                        | 27,483,350
Gaza        | 916,655    | 150                        | 50,186,861
Middle      | 373,197    | 150                        | 20,432,535
Khan Younis | 456,331    | 150                        | 24,984,122
Rafah       | 322,035    | 150                        | 17,631,416
Total       | 2,570,197  | 150                        | 140,718,284
Agriculture water demand for all governorates: 69,000,000 m3/year

Table 7: Optimal water quantities for medium-term management in year 2025

Description                | Source                                | Variable | Unit  | Quantity
Domestic and industrial    | Groundwater wells                     | Q11      | m3    | 48×10^6
demand                     | Desalinated water from brackish wells | Q21      | m3    | 5×10^6
                           | Desalinated seawater                  | Q31      | m3    | 67×10^6
                           | Imported water from Mekorot company   | Q41      | m3    | 10×10^6
                           | Imported water from Egypt             | Q51      | m3    | 10×10^6
                           | Total                                 |          | m3    | 140×10^6
Agriculture demand         | Groundwater wells                     | Q12      | m3    | 49×10^6
                           | Imported water from Egypt             | Q52      | m3    | 9.7×10^6
                           | Reclaimed water                       | Q62      | m3    | 20×10^6
                           | Total                                 |          | m3    | 78.7×10^6
Harvested water            | Harvested water from storm water      | Q7       | m3    | 18×10^6
Electrical conductivities  | Water from aquifer                    | EC1      | µS/cm | 1000
of sources                 | Desalinated water from brackish wells | EC2      | µS/cm | 500
                           | Desalinated seawater                  | EC3      | µS/cm | 500
                           | Imported water from Mekorot company   | EC4      | µS/cm | 700
                           | Imported water from Egypt             | EC5      | µS/cm | 500
Final quality (calculated) | Average EC for domestic water         | ECT1     | µS/cm | 685
                           | Average EC for irrigation water       | ECT2     | µS/cm | 876
Best function value: $79×10^6

The results again show a significant improvement in the aquifer's water levels over most of the Gaza Strip, especially in the middle area. The levels will gradually increase to reach 4 meters below MSL in the north area and 8 meters below MSL in the south area. As for quality, the results show that the average electrical conductivity for domestic and agricultural water is 685 µS/cm and 876 µS/cm, respectively. This means that the TDS is around 400 mg/liter for domestic use and 530 mg/liter for agricultural use. Figure 3 shows the simulated water levels in the aquifer, using the SEAWAT model, when adopting the optimized scenario for medium-term planning in year 2025; it clearly demonstrates the improvement in the aquifer water level compared to year 2015.
C. Genetic Algorithm Model Results for Long-Term Management (Year 2035)

The estimated quantities of water for domestic demand were calculated considering the recommendations of GETAP 2011 [6], assuming a benchmark water consumption of 150 liters per capita per day for the whole Gaza Strip towards the end of the long-term intervention period. Table 8 summarizes the distribution of the projected population of the Gaza Strip as well as the demanded quantities of water for domestic and agricultural use. As a result of the instability of the political situation and the absence of industrial infrastructure, the industrial demand is considered to be 2 MCM/year [11].

Figure 3: Predicted water level for optimized scenario (year 2025)

Considering the required quantities for both domestic and agricultural use with all constraints and limits, the GA model solved the case for the optimal quantities by maximizing the total benefits and minimizing the cost. The optimal quantities after comparing 100 generations are summarized in Table 9.

Table 8: Projected population and domestic & agriculture water demand in year 2035

Governorate | Population | Consumption (l/capita/day) | Domestic demand (m3/year)
North       | 708,091    | 150                        | 38,767,982
Gaza        | 1,293,033  | 150                        | 70,793,556
Middle      | 526,431    | 150                        | 28,822,097
Khan Younis | 643,701    | 150                        | 35,242,630
Rafah       | 454,262    | 150                        | 24,870,844
Total       | 3,625,518  | 150                        | 198,497,110
Agriculture water demand for all governorates: 61,000,000 m3/year

Table 9: Optimal water quantities for long-term management in year 2035

Description                | Source                                | Variable | Unit  | Quantity
Domestic and industrial    | Groundwater wells                     | Q11      | m3    | 48×10^6
demand                     | Desalinated water from brackish wells | Q21      | m3    | 5×10^6
                           | Desalinated seawater                  | Q31      | m3    | 125×10^6
                           | Imported water from Mekorot company   | Q41      | m3    | 10×10^6
                           | Imported water from Egypt             | Q51      | m3    | 10×10^6
                           | Total                                 |          | m3    | 198×10^6
Agriculture demand         | Groundwater wells                     | Q12      | m3    | 50×10^6
                           | Imported water from Egypt             | Q52      | m3    | 10×10^6
                           | Reclaimed water                       | Q62      | m3    | 30×10^6
                           | Total                                 |          | m3    | 80×10^6
Harvested water            | Harvested water from storm water      | Q7       | m3    | 18×10^6
Electrical conductivities  | Water from aquifer                    | EC1      | µS/cm | 1000
of sources                 | Desalinated water from brackish wells | EC2      | µS/cm | 500
                           | Desalinated seawater                  | EC3      | µS/cm | 500
                           | Imported water from Mekorot company   | EC4      | µS/cm | 700
                           | Imported water from Egypt             | EC5      | µS/cm | 500
Final quality (calculated) | Average EC for domestic water         | ECT1     | µS/cm | 631
                           | Average EC for irrigation water       | ECT2     | µS/cm | 842
Best function value: $86×10^6

The results show a significant improvement in aquifer water levels over most of the area, especially in the middle area. The levels reach 13 meters above MSL in the eastern area, and also increase significantly in the north (3 meters below MSL) and in the south (4 meters below MSL). As for quality, the results show that the average electrical conductivity for domestic and agricultural water is 631 µS/cm and 842 µS/cm, respectively. This means that the TDS is around 380 mg/liter for domestic use and 500 mg/liter for agricultural use. Figure 4 shows the simulated water levels in the aquifer, using the SEAWAT model, when adopting the optimized scenario for long-term planning in year 2035; it shows further improvement in the aquifer water level compared to years 2015 and 2025.
Figure 4: Predicted water level for optimized scenario (year 2035)

IV. Conclusions

A. Optimal Solution for Short-Term Planning (Year 2015)

The optimal quantities for domestic and industrial demand are 65×10^6 m3 from groundwater, 1.08×10^6 m3 from desalinated brackish wells, 13×10^6 m3 from desalinated seawater, 9.90×10^6 m3 of water imported from the Mekorot company, and 4.90×10^6 m3 of water imported from Egypt. The optimal quantities for agricultural demand are 63×10^6 m3 from groundwater, 4.96×10^6 m3 of water imported from Egypt, and 9.28×10^6 m3 from reclaimed water. The model results show that a quantity of 19.87×10^6 m3 of harvested water should be injected into the aquifer to add additional quantities and to improve the groundwater quality. The final quality of the available water for agricultural use, in terms of the total weighted-average electrical conductivity, is 962 µS/cm, which corresponds to 577 mg/l TDS and 230 mg/l Cl, respectively.

B. Optimal Solution for Medium-Term Planning (Year 2025)

The optimal quantities for domestic and industrial demand are 48×10^6 m3 from groundwater, 5×10^6 m3 from desalinated brackish wells, 67×10^6 m3 from desalinated seawater, 10×10^6 m3 of water imported from the Mekorot company, and 10×10^6 m3 of water imported from Egypt. The optimal quantities for agricultural demand are 49×10^6 m3 from groundwater, 9.7×10^6 m3 of water imported from Egypt, and 20×10^6 m3 from reclaimed water. The model results show that a quantity of 18×10^6 m3 of harvested water should be injected into the aquifer to add additional quantities and to improve the groundwater quality.
The final quality of the available water for agricultural use, in terms of the total weighted-average electrical conductivity, is 876 µS/cm, which corresponds to 526 mg/l TDS and 210 mg/l Cl, respectively.

C. Optimal Solution for Long-Term Planning (Year 2035)

The optimal quantities for domestic and industrial demand are 48×10^6 m3 from groundwater, 5×10^6 m3 from desalinated brackish wells, 125×10^6 m3 from desalinated seawater, 10×10^6 m3 of water imported from the Mekorot company, and 10×10^6 m3 of water imported from Egypt. The optimal quantities for agricultural demand are 50×10^6 m3 from groundwater, 10×10^6 m3 of water imported from Egypt, and 30×10^6 m3 from reclaimed water. The model results show that a quantity of 18×10^6 m3 of harvested water should be injected into the aquifer to add additional quantities and to improve the groundwater quality. The final quality of the available water for agricultural use, in terms of the total weighted-average electrical conductivity, is 842 µS/cm, which corresponds to 505 mg/l TDS and 202 mg/l Cl, respectively.

References

[1] Qahnam, K., Aspects of Hydrogeology, Modeling, and Management of Seawater Intrusion for Gaza Aquifer, Palestine. Ph.D. dissertation, Department of Civil Engineering, University of Mohammad Al Khamis, Morocco, (2004).
[2] Willis, R. and Yeh, W. W-G., Groundwater Systems Planning and Management. Prentice-Hall, Inc., New Jersey, (1987).
[3] Ghabayen, S., McKee, M., Kemblowski, M.A., Bayesian Belief Network Model for Multi-Objective Optimization of the Gaza Water Resources System. AWRA Annual Conference, San Diego, California, (2003).
[4] Holland, J., Adaptation in Natural and Artificial Systems. Univ. of Mich. Press, Ann Arbor, (1975).
[5] Ouazar, D. and Cheng, A. H. D., Application of Genetic Algorithms in Water Resources, in Groundwater Pollution Control, edited by K. L. Katsifarakis, Chap. 7, pp. 293-316, WIT Press, Boston, (1999).
[6] The Gaza Emergency Technical Assistance Programme on Water Supply to the Gaza Strip (GETAP), Component 1: The Comparative Study of Options for an Additional Supply of Water for the Gaza Strip (CSO-G), (2011).
[7] Ghabayen, S., Probabilistic Expert System for Analysis and Optimization of Large Scale Water Resources Systems. Ph.D. dissertation, Utah State University, Utah, USA, (2004).
[8] Hill, R. and Koenig, R., Water Salinity and Crop Yield. Report #AG 425.3, Utah State University Cooperative Extension, (1999).
[9] Metcalf and Eddy, Integrated Aquifer Management Plan: Model Final Report. Coastal Aquifer Management Program, (2003).
[10] Palestinian Central Bureau of Statistics (PCBS), "Population and Housing and Establishment Census". Census final results in Gaza Strip, Palestinian National Authority, (2007).
[11] Palestinian Water Authority (PWA), Water Supply Annual Report 2010, (2012).

Journal of Engineering Research and Technology, Volume 9, Issue 2, October 2022
Received on 01-02-2022, accepted on 19-04-2022

Exploiting Wikipedia to Measure the Semantic Relatedness between Arabic Terms
Basel Alhaj and Iyad Alagha
https://doi.org/10.33976/jert.9.2/2022/1

Abstract—Measuring the semantic relatedness between words or terms plays an important role in many domains such as linguistics and artificial intelligence. Although this topic has been widely explored in the literature, most efforts have focused on English text, while little has been done to measure the similarity between Arabic terms. A growing number of semantic relatedness measures rely on underlying background knowledge such as Wikipedia. They often map terms to Wikipedia concepts, and then use the content or hyperlink structure of the corresponding Wikipedia articles to estimate the similarity between terms. However, existing approaches have mostly focused on the English version of Wikipedia, while limited work has been done on the Arabic version.
This work proposes an approach that takes advantage of Wikipedia features to measure the relatedness between Arabic terms. It exploits two types of relations to gain rich features for the similarity measure: the context-based relation and the category-based relation. The context-based relation is measured from the intersection between the incoming links of Wikipedia articles, while the category-based relation is measured by utilizing the taxonomy of Wikipedia categories. The proposed approach was evaluated on a translated version of the WordSimilarity-353 benchmark dataset. The results show that our approach generally outperforms several approaches in the literature that use the same dataset in English. However, the poor structure and content of the Arabic version of Wikipedia compared to the English version resulted in several incorrect similarity scores.

Index Terms—Semantic relatedness, Arabic text, Wikipedia, text similarity.

I. Introduction

Measuring semantic relatedness between terms is an important issue in natural language processing, and is used in many application areas such as information extraction and retrieval, text summarization, document classification and clustering, and question answering. Terms can be related either lexically or semantically: terms have lexical relatedness if they have a similar sequence of characters, such as 'underestimate' and 'understand' or 'withhold' and 'withdraw'. Terms have semantic relatedness when they are frequently used in the same context. For example, "job" and "money" are semantically related even though they are not lexically related [1, 2]. Lexical relatedness is often calculated using string-based algorithms (SBA), which rely on string sequences and character composition to determine whether two strings are similar. Semantic relatedness is often measured using corpus-based (CBA) and knowledge-based algorithms (KBA).
CBA methods use information gained from large corpora to calculate the similarity between terms, while KBA methods use information derived from semantic networks [2]. Humans depend on a huge amount of background knowledge when analyzing the meanings of terms; therefore, any automated attempt to calculate the semantic relatedness between terms should be based on external sources of knowledge [3]. Wikipedia today represents one of the largest sources of knowledge and covers a large number of knowledge domains. Due to this importance and popularity, many works have exploited Wikipedia as a knowledge source to measure the semantic relatedness between terms. Some of these works have exploited the structure of Wikipedia articles, such as categories, hyperlinks, and templates [4, 5]. Another line of work has attempted to measure similarity based on natural language processing of the textual content of Wikipedia articles [6-9]. The aforementioned works focused solely on English text and used the English version of Wikipedia. To the best of our knowledge, few efforts have benefited from the Arabic version of Wikipedia to improve the contextual understanding of Arabic text. In this work, we propose a Wikipedia-based approach for measuring the semantic relatedness between Arabic terms. Wikipedia is particularly selected as the knowledge source for our approach due to its large content and wide coverage of different domains of knowledge, which enables our approach to measure similarity between terms from different domains. Given any two Arabic terms, our approach gives a score that indicates the degree of similarity between them. The proposed approach exploits the hyperlinks between Wikipedia articles and the taxonomy of categories to capture the semantic similarity between terms. The hyperlink structure is used to determine the context-based relation between the articles that correspond to the input terms.
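The excerpt does not give the exact formula for the incoming-link intersection, but such measures are commonly computed in the style of the Wikipedia Link-based Measure (Milne & Witten), which the sketch below assumes: `total_articles` is the size of the Wikipedia edition, the link sets in the example are synthetic article IDs, and the function name is ours.

```python
import math

def link_relatedness(in_links_a, in_links_b, total_articles):
    """Relatedness of two articles from the overlap of their incoming-link
    sets, NGD-style: identical link sets -> 1.0, disjoint sets -> 0.0."""
    a, b = set(in_links_a), set(in_links_b)
    common = a & b
    if not common:
        return 0.0
    distance = ((math.log(max(len(a), len(b))) - math.log(len(common)))
                / (math.log(total_articles) - math.log(min(len(a), len(b)))))
    return max(0.0, 1.0 - distance)

# Synthetic example: 100 and 80 incoming links, 40 shared, in a 1M-article wiki
score = link_relatedness(range(100), range(60, 140), 1_000_000)
```

Incoming links capture context well because every article that mentions both terms contributes one element to the intersection, regardless of where in the category tree the two articles sit.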
Wikipedia categories are also used to group articles with similar or related subjects together. The category graph of the Arabic version of Wikipedia is constructed and analyzed to compute the category-based relation between terms. The contributions of this work can be summarized as follows:
1) It presents an approach to measure the semantic relatedness between Arabic terms using Wikipedia's hypertext structure and category graph. We provide the source code of the proposed approach at: https://github.com/baselalhaj/semanticrelatedness.
2) By comparing our approach with similar approaches that use the English version of Wikipedia, we can assess the reliability of the Arabic version of Wikipedia compared to the English version and inform the research community of the potential of Arabic Wikipedia for measuring relatedness between Arabic terms.
3) It provides hands-on experience in processing Wikipedia content to enable searching in and mapping to Wikipedia articles, as well as the construction of the category graph to measure the category-based relation. We believe this will be of value to practitioners and researchers who are interested in exploiting the Arabic version of Wikipedia.

II. Related Works

A variety of semantic similarity methods have been proposed in the literature, which can generally be classified into three main categories [10, 11]: 1) knowledge-based methods, 2) corpus-based methods, and 3) deep learning methods. In what follows, we discuss these categories of similarity methods, and then review the related works on Arabic text similarity measures.

A. Knowledge-Based Semantic Similarity Methods

Knowledge-based semantic similarity methods estimate the similarity between terms based on information extracted from background knowledge sources.
These methods often rely on the structured representation offered by the background knowledge [12]; this structure usually comes as a set of concepts connected by relations. Examples of knowledge sources widely used for similarity methods include WordNet, Wiktionary, Wikipedia, and BabelNet [13]. Knowledge-based similarity methods can be classified, according to the underlying principle, into four categories [10]: edge-counting methods, feature-based methods, information content-based methods, and hybrid methods. Edge-counting methods consider the underlying knowledge as a graph connecting concepts taxonomically and count the edges between terms to measure similarity: the greater the distance between the terms, the less similar they are. In general, the limitation of edge-counting methods is that the distance often fails to capture the similarity between terms. Feature-based methods calculate similarity as a function of properties of the terms, based on neighboring terms or the different meanings of the term in a glossary [12]. For example, the Lesk measure [14] estimates the relatedness between two terms based on the overlap of their meanings in a background dictionary like WordNet. Jiang et al. [15] proposed an approach that measures semantic similarity using the glosses of concepts present in Wikipedia. The main problem with feature-based methods is their dependency on the existence of semantic features, which are not always present in the background knowledge. Information content-based methods attempt to estimate the similarity between terms based on what is called the information content (IC) of the term. The IC of a term is defined as the information derived from the context in which the term appears. A high IC value indicates that the term is more specific and describes a topic with less ambiguity, while a lower IC means that the term is more abstract in meaning [16].
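The two IC methods the authors apply to the Arabic Wikipedia taxonomy are not detailed in this excerpt. One common formulation is the intrinsic IC of Seco et al., which derives IC purely from the taxonomy: concepts with many descendants (abstract concepts) get low IC, leaves get the maximum. A minimal sketch, assuming that formulation:

```python
import math

def intrinsic_ic(num_descendants, total_concepts):
    """Intrinsic information content: IC(c) = 1 - log(hypo(c) + 1) / log(N),
    where hypo(c) is the number of descendants of concept c in the taxonomy
    and N is the total number of concepts. Range: 0 (root) .. 1 (leaf)."""
    return 1.0 - math.log(num_descendants + 1) / math.log(total_concepts)

# In a taxonomy of 1000 concepts: a leaf is maximally specific, the root is not
leaf_ic = intrinsic_ic(0, 1000)
root_ic = intrinsic_ic(999, 1000)
```

The appeal of an intrinsic formulation is that no external corpus frequencies are needed; the category taxonomy alone determines how specific each concept is.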
numerous extensions have been proposed to measure a term's ic by exploiting different features of the underlying structure of the background knowledge [17-19]. in this work, we use and evaluate two methods to calculate the ic value of a term based on the taxonomy of concepts in the arabic wikipedia. hybrid knowledge-based methods combine various measures from the three aforementioned categories to better capture the similarity between terms. for example, gao et al. [20] proposed a method that uses three different strategies that include the depths of all the terms in wordnet along with the path between the two terms, the depth of the least common subsumer of the terms, and the ic value of the terms. in general, knowledge-based measures are highly dependent on the richness, diversity, and recency of the underlying knowledge. b. corpus-based similarity methods corpus-based methods calculate the semantic similarity between terms using the information retrieved from large corpora. there is a wide variety of corpus-based techniques for measuring the semantic similarity between texts. latent semantic analysis (lsa) is one of the most popular and widely used corpus-based methods [21]. it is a statistical text analytics method that can uncover the conceptual content within unstructured data by using singular value decomposition (svd). several works have used lsa to measure similarity between terms [22, 23]. hyperspace analogue to language (hal) is another corpus-based method that captures the statistical dependencies between terms by considering their co-occurrences in a surrounding window of text [24, 25]. word-alignment models present another line of corpus-based methods that calculate the semantic similarity between sentences based on their alignment over a large corpus [26, 27]. latent dirichlet allocation (lda) [28] is another technique that is widely used for topic modeling tasks, and it has the advantage of reduced dimensionality [29].
normalised google distance (ngd) is another corpus-based measure of semantic similarity that is derived from the google search engine. it is based on the assumption that two words are highly related if they occur together frequently on web pages [30]. word-attention models [31, 32] are among the most recent and promising corpus-based methods; they differ from traditional semantic similarity methods in that they can capture the importance of words from underlying corpora before calculating the semantic similarity. c. deep learning methods semantic similarity methods have recently exploited developments in neural networks and deep learning techniques to enhance performance. many works proposed methods to measure semantic similarity between terms by using convolutional neural networks (cnn) [33, 34], long short-term memory (lstm) [35], bidirectional long short-term memory (bi-lstm) [36], recursive tree lstm [37], and transformers [38]. despite the great potential of deep learning methods, their main limitation is that they are computationally expensive and require large training sets to work effectively. in contrast, the approach proposed in this work is unsupervised, and thus does not require labelled data. d. arabic semantic similarity methods in the context of arabic text, several works proposed methods to measure the similarity between documents [39, 40], sentences [41-43] and words [44, 45]. however, most of these methods are based on word co-occurrences or word embeddings, which capture syntactic rather than semantic similarity between words. few methods tackled semantic techniques for arabic text similarity. for example, almarsoomi et al. [44] used the measure proposed by li et al.
[46] to calculate the similarity between words by exploiting different attributes from the arabic wordnet. froud et al. [45] measured the semantic similarity between two words by using the latent semantic analysis (lsa) model and demonstrated the difference between using stemming and light stemming in the preprocessing phase. recently, an increasing number of works have exploited the arabic version of wikipedia for different purposes in computer science. some works use the structured content of wikipedia to construct ontologies [47, 48]. others used wikipedia features and hyperlink structure to build arabic named-entity corpora [49, 50] or for entity linking [50]. wikipedia categories have also been used to support the classification of arabic text [51], open-domain text tagging [23], and search query expansion [52]. this work adds to the previous knowledge by leveraging the arabic wikipedia to measure semantic relatedness between arabic terms. iii overview of the proposed approach in general, our approach exploits the structure of wikipedia to measure two types of relations between terms: the context-based relation and the category-based relation. the context-based relation estimates the relatedness between two terms based on the commonality between the corresponding articles in wikipedia. in the context of wikipedia, any two terms can be related if the corresponding articles share common links. in our approach, incoming links from articles are used to compute relatedness. the more incoming links shared between articles, the more related they are. the category-based relation depends on the categories that are used to classify wikipedia articles. wikipedia articles are categorized by using a taxonomy of predefined categories. if articles share the same or related categories, via a child-parent relation for example, then these articles are likely to be related.
our approach combines both category-based and context-based relations to estimate the relatedness between any two wikipedia articles. intuitively, the relatedness between the articles denotes the relation between the terms representing them. figure 1 depicts our approach for measuring the semantic relatedness between sample input terms a and b. the first step is to match the terms to the wikipedia articles that best describe them. then, both the context-based and the category-based relations between the two articles are measured. several computations are performed at this phase to analyze the hyperlink and category structures. the final relatedness score is the average of the two relation scores mentioned above. iv context-based relation the context-based relation between terms reflects how often these terms share contexts. wikipedia articles contain many hyperlinks that refer to other articles. in our approach, we depend on incoming links to represent shared contexts between two articles. the greater the number of shared incoming links, the higher the context-based relation is. figure 1. measuring the semantic relatedness between terms by exploiting context-based and category-based relations. the context-based relation between two wikipedia articles can be measured using the following equation from [3]: context_rel(a, b) = 1 − (log(max(|A|, |B|)) − log(|A ∩ B|)) / (log(N) − log(min(|A|, |B|))) (1) where a and b are any two articles from wikipedia, A and B are the sets of incoming links to a and b respectively, and N is the total number of articles in wikipedia. v category-based relation wikipedia provides many categories that are used in each article to determine its scope. articles belonging to the same wikipedia category are related. these categories are used in our approach to determine the relatedness between wikipedia articles as shown in figure 2.
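to make equation (1) concrete, the following python sketch (illustrative only; the authors' implementation is java-based and queries the jwpl database) computes the context-based relation from two sets of incoming links. the link sets and article count below are hypothetical toy values:

```python
import math

def context_rel(in_links_a, in_links_b, total_articles):
    """Equation (1): relatedness from shared incoming links (Witten & Milne [3])."""
    shared = len(in_links_a & in_links_b)
    if shared == 0:
        return 0.0  # no shared context; assumed floor (the paper does not state this case)
    big = max(len(in_links_a), len(in_links_b))
    small = min(len(in_links_a), len(in_links_b))
    rel = 1 - (math.log(big) - math.log(shared)) / (math.log(total_articles) - math.log(small))
    return max(0.0, min(1.0, rel))  # clamp into [0, 1]

# toy example: two articles sharing two of their incoming links
links_a = {"art1", "art2", "art3", "art4"}
links_b = {"art2", "art3", "art5"}
score = context_rel(links_a, links_b, total_articles=1000)
```

note that articles with identical incoming-link sets score 1, and the measure grows as the overlap approaches the smaller link set, which matches the intuition stated above.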
for any two articles a and b, let s1 = {c11, c12, ..., c1n} and s2 = {c21, c22, ..., c2m} be the sets of categories that a and b belong to respectively, where n and m are the sizes of s1 and s2. in our approach, the pairwise relation between every two categories (c1i, c2j) is calculated, where c1i ∈ s1 and c2j ∈ s2. then, the overall relation between s1 and s2 is calculated by combining the pairwise relation scores. first, the relation between c1i and c2j is calculated using the following equation: pairwise_cat_rel(ci, cj) = iv(lca(ci, cj)) / (iv(ci) + iv(cj)) (2) where pairwise_cat_rel(ci, cj) is the category-based relation between ci and cj, lca(ci, cj) is the lowest common ancestor of ci and cj, and iv(c) is the information value of the category c. the calculation of lca(ci, cj) and iv(c) is explained in the subsequent sections. for each c1i ∈ s1, we find best(c1i), which is the maximal pairwise similarity between c1i and any category c2j in s2. similarly, we find best(c2j) for each c2j ∈ s2. the overall category-based relation between s1 and s2 is calculated using the following equation [53]: cat_rel(s1, s2) = 0.5 × (Σ i=1..n best(c1i)) / n + 0.5 × (Σ j=1..m best(c2j)) / m (3) figure 2 summarizes the process of calculating the category-based relation. vi information value as shown in equation 2, measuring the category-based relation is based on the information value of categories. the information value indicates the specificity of the category. for example, the category “هندسة البرمجيات” (software engineering) is more specific than the general category “علم الحاسوب” (computer science). thus, the former category carries more information value than the latter when both are assigned to a single article. general, top-level categories are not reliable for measuring relatedness because, unlike specific categories, they often do not help distinguish between articles. thus, we need to give each category an information value based on its specificity.
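equations (2) and (3) can be sketched as follows (a python illustration, not the authors' code). the iv and lca functions are assumed to be supplied by the category-graph modules described later, and the toy taxonomy below is hypothetical:

```python
def pairwise_cat_rel(ci, cj, iv, lca):
    """Equation (2): IV of the lowest common ancestor over the summed IVs."""
    return iv(lca(ci, cj)) / (iv(ci) + iv(cj))

def cat_rel(s1, s2, iv, lca):
    """Equation (3): average of the best-match pairwise scores in both directions."""
    best1 = [max(pairwise_cat_rel(c1, c2, iv, lca) for c2 in s2) for c1 in s1]
    best2 = [max(pairwise_cat_rel(c1, c2, iv, lca) for c1 in s1) for c2 in s2]
    return 0.5 * sum(best1) / len(best1) + 0.5 * sum(best2) / len(best2)

# hypothetical toy taxonomy: two specific categories under one general root
iv_table = {"computer science": 0.2, "software engineering": 0.8, "databases": 0.7}
iv = iv_table.get
lca = lambda a, b: a if a == b else "computer science"  # root is the only shared ancestor here

score = cat_rel({"software engineering"}, {"software engineering", "databases"}, iv, lca)
```

one consequence of equation (2) worth noting: even two identical categories score iv(c) / (2 × iv(c)) = 0.5, so 0.5 is the ceiling for a single category pair under this formulation.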
to determine the specificity of a category, the wikipedia category graph is first constructed. then, two measures are used to calculate the information value of a category. these measures use: 1) the depth of the category in the wikipedia category graph, and 2) the number of descendants of the category. they are explained as follows: the first measure depends on the depth of a given category in the wikipedia category graph to determine the appropriate information value for it, using the following equation: catDepthIV(c) = log(maxDepth(c)) / log(graphMaxDepth) (4) where catDepthIV(c) is the information value of the category c, maxDepth(c) is the depth of c in the entire wikipedia category graph, and graphMaxDepth is the maximum depth of the wikipedia category graph. under this measure, the greater the depth of c, the greater its information value. in general, top-level categories are of low depth, and are often general and carry less information value compared to high-depth categories. this measure was inspired by existing research on the specificity of graph nodes such as [54] and [55]. figure 2. measuring the category-based relation between terms a and b. the second measure uses the descendants (subcategories) of a category to determine the appropriate information value for it as follows: catDescendantsIV(c) = 1 − log(des(c)) / log(N) (5) where catDescendantsIV(c) is the information value of category c, des(c) is the number of descendants of category c, and N is the total number of categories in wikipedia. it is assumed that a category with a large number of descendants is more general and thus has less information value. in contrast, categories with few or no descendants are likely to be more specific and thus have more information value.
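the two information-value measures in equations (4) and (5) translate directly into code. this python sketch assumes the depths and descendant counts have already been extracted from the category graph, and guards against leaf categories with zero descendants, a detail the paper does not specify:

```python
import math

def cat_depth_iv(depth, graph_max_depth):
    """Equation (4): deeper (more specific) categories get higher information value."""
    return math.log(depth) / math.log(graph_max_depth)

def cat_descendants_iv(num_descendants, total_categories):
    """Equation (5): many descendants imply a general category, hence low information value."""
    d = max(num_descendants, 1)  # assumption: treat leaves (0 descendants) as having 1
    return 1 - math.log(d) / math.log(total_categories)
```

both measures are normalized to [0, 1]: a category at the maximum graph depth gets a depth-based iv of 1, while a leaf category with no descendants gets a descendants-based iv of 1.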
this metric was also inspired by existing works that present metrics for graph-based similarity, such as [54]. given the above two measures of a category's information value, catDepthIV and catDescendantsIV, only one measure is used to calculate the information values of categories in equation 2. part of the experiments conducted in the evaluation aimed to examine the two measures of information value in order to choose the better of them for use in equation 2. vii combined relatedness measure in the above sections we showed how to measure the context-based and category-based relations between any pair of terms. the overall relatedness between two terms is then calculated as the average of the category-based relation and the context-based relation values using the following equation: relatedness(a, b) = (context_rel(a, b) + cat_rel(a, b)) / 2 (6) viii implementation highlights after formally presenting our approach, the following sections provide a step-by-step guide on the implementation details, including the processing of wikipedia content, the mapping of terms to wikipedia articles, and the construction of the wikipedia category graph. we also show how we handled the challenges that can be encountered when processing the wikipedia graph, such as graph cycles, and how we created the category depth and descendants maps. figure 3 shows the components of our implementation of the proposed approach. part 1 in figure 3 depicts the preprocessing of the wikipedia dump file to store its content in a local database; this process is performed once at the beginning of the work. part 2 shows the article matcher module that is used to match the input terms to the corresponding wikipedia articles. part 3 represents the construction of the category graph, the category descendants map, and the category depth map.
given two input terms, the process starts by mapping these terms to the corresponding wikipedia articles using the article matcher module; then the semantic relatedness score is calculated by computing the context-based and category-based relations and averaging them as in equation 6. the modules in figure 3 are explained in detail in what follows: a. wikipedia processing module to access the arabic wikipedia and extract the required information, we downloaded the xml dump file of the arabic wikipedia on the 1st of feb. 2021. information about the downloaded dump file is shown in table 1. table 1: information about the downloaded dump
xml dump file size: 736 mb
size after extraction: 4.87 gb
number of all pages: 1243905
number of content pages (articles): 648399
number of redirect pages: 584630
number of disambiguation pages: 16663
number of categories: 560154
number of articles used in our work: 515094
number of categories used in our work: 547396
after that, the xml dump file was parsed, and the extracted information was stored in a local database. the database contains tables for pages, page-inlinks, page-outlinks, page-redirects, page-categories, page-mapline, metadata, categories, category-inlinks, category-outlinks, and category-pages. wikipedia information is subsequently accessed by querying this database. we used jwpl (java wikipedia library) [56] to parse the wikipedia dump file. jwpl is a free, java-based application programming interface that allows structured access to all information in wikipedia. figure 3. the components of our implementation of the proposed approach for measuring semantic relatedness between arabic terms. after populating the tables in the database, we found some articles that do not have incoming links.
the incoming links are important in our work since one of the metrics used depends on the number of incoming links to articles. therefore, these articles were discarded and deleted from the database; the total number of content pages with no incoming links was 133305. in addition, some wikipedia categories that are used for editing and managing articles, known as administrative categories, were discarded as they negatively affect the evaluation of the semantic relatedness. examples of administrative categories include “ويكيبيديا” (wikipedia), “مشاريع ويكي” (wikiprojects), “مقالات” (articles), “قوالب” (templates), “قالب” (template), “بذرة” (stub), and “صناديق المعلومات” (infoboxes). b. article matcher module the article matcher (see figure 3) is a component responsible for mapping each input term to the corresponding wikipedia article. the goal is to obtain the appropriate wikipedia article to be used to measure semantic relatedness. for each term, the matcher first converts the text of the term to a normalized form. the normalized text is used to check whether there is an article matching the term in the database, and it improves the result of the matching process. for example, the term “اللغة العربية” (the arabic language) is converted to the normalized pattern “(ا|إ|أ|آ)للغ(ة|ه)_(ا|إ|أ|آ)لعربي(ة|ه)”, where suffixes and prefixes are separated from the original term and different forms of arabic letters are considered when matching with a wikipedia article. in addition, redirect pages are excluded from the matching process because they have no content or categories, and thus could distort the semantic relatedness score. c. handling term disambiguation some terms are ambiguous in the sense that they can have multiple meanings. wikipedia provides disambiguation articles for these terms, where each disambiguation page contains a list of possible senses for the term.
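the normalization step of the article matcher can be sketched in python as follows. this is an illustration under our own assumptions about the variant letter classes (inferred from the example above); the function name and the exact rule set are hypothetical, not the authors' implementation:

```python
import re

# arabic letter classes treated as interchangeable during matching
# (assumed from the paper's example: alef variants, and taa marbuta / haa)
ALEF_VARIANTS = "(ا|إ|أ|آ)"
TA_HA_VARIANTS = "(ة|ه)"

def normalize_to_pattern(term):
    """Turn an arabic term into a regex pattern tolerant of common spelling variants."""
    pattern = re.sub("[اإأآ]", ALEF_VARIANTS, term)
    pattern = re.sub("[ةه]", TA_HA_VARIANTS, pattern)
    return pattern.replace(" ", "_")  # multi-word terms are joined with "_"

pattern = normalize_to_pattern("اللغة العربية")
# the pattern also matches variant spellings such as "اللغه_العربيه"
```

matching article titles against such a pattern (e.g. with re.fullmatch) lets one database lookup cover the common orthographic variants of a term.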
for example, the term “عين” in wikipedia is a redirect page to the disambiguation page “عين (توضيح)” (ayn (disambiguation)), which contains a list of articles with different meanings such as “عين (طب)” (eye, in medicine), “عين (ماء)” (water spring), and “عين (حرف)” (the letter ayn). when a disambiguation page is retrieved from the article matcher for an input term, the list of all senses is considered when the relatedness with the other input term is calculated. the sense that achieves the highest relatedness score is used, and the other senses are skipped. for example, suppose the input terms to our approach are “عين” and “نبع” (spring). since the first term “عين” has three senses as explained above, the semantic relatedness between each sense and the term “نبع” is measured, and the sense that gives the best relatedness score is used. d. category graph builder module the measurement of the category-based relation depends essentially on the calculation of the information value of each category, as explained before. to determine the information value, two structures should be constructed: the category depth map and the category descendants map. a category graph is constructed to speed up the processing of categories and the calculation of results. to construct the category graph, a directed graph is created, and wikipedia categories are added to the graph as vertices. to determine the edges of the graph, the incoming links (parent categories) and outgoing links (child categories) of each category are used. for a category c with the set of incoming links in_links = {ic1, ic2, ..., icn} and the set of outgoing links out_links = {oc1, oc2, ..., ocm}, a graph edge is created from each ici ∈ in_links to each ocj ∈ out_links. finally, we get a graph whose vertices are wikipedia categories and whose edges are the links between these categories. creating the graph consumes a lot of time, so we created it once and saved it as a serializable object in a file. when needed, it is loaded from the file rather than recreated.
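a minimal python sketch of the graph construction follows (illustrative only; the authors' implementation is java-based and persists the graph as a serialized object). we read the construction as adding parent → category and category → child edges, skipping self-directed edges up front; the input format below is an assumption:

```python
from collections import defaultdict

def build_category_graph(links):
    """Build a directed category graph.

    `links` maps each category name to a pair (in_links, out_links): its parent
    categories and child categories. Self-directed edges are skipped, since the
    paper discards them before computing depths.
    """
    graph = defaultdict(set)  # category -> set of successor categories
    for cat, (in_links, out_links) in links.items():
        graph[cat]  # ensure isolated categories still appear as vertices
        for parent in in_links:
            if parent != cat:
                graph[parent].add(cat)
        for child in out_links:
            if child != cat:
                graph[cat].add(child)
    return dict(graph)

# hypothetical toy input: a small three-level hierarchy
toy = {
    "root": ([], ["a", "b"]),
    "a": (["root"], ["leaf"]),
    "b": (["root"], []),
    "leaf": (["a"], []),
}
g = build_category_graph(toy)
```

as with the paper's serialized java object, the resulting graph can be built once and persisted (e.g. with python's pickle module) instead of being rebuilt on every run.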
the directed graph should not contain any self-directed edge such as the one shown in figure 4, since self-directed edges cause infinite loops during depth computation. these edges can be easily detected and ignored by finding the intersection between the incoming links and the outgoing links, that is: cycles = in_links ∩ out_links. figure 4. self-directed edge. in addition, cycles such as the one shown in figure 5 should also be eliminated because they can cause infinite loops when calculating the depth of the category graph as required in equation 4. to find cycles, a depth-first search (dfs) traversal was performed starting from the top-level categories, and each visited vertex is marked. if a vertex is visited twice during the dfs, the incoming edge through which the vertex is reached for the second time is removed. our experiments showed that this strategy successfully eliminated most cycles in the constructed category graph. figure 5. a graph cycle. e. depth and descendant maps the estimation of the information value of a category requires extracting some information from the category graph, such as the depth of the category and the number of its descendants (refer to equations 4 and 5). since extracting this information for each category at run time takes a lot of time, it is better to extract it once and store it to be used when needed. for this purpose, two maps were created: vertexdepthmap and descendantsmap. vertexdepthmap is used to store the depth (the length of the longest path to the root) of each category in the graph, while descendantsmap is used to store the number of descendants of each category in the graph. to create vertexdepthmap, the leaf vertices of the graph are first extracted, which are the vertices that have no outgoing edges.
for each leaf vertex, all upper-level vertices reached through the incoming edges are extracted, and this is performed recursively until reaching the root. during the recursion, the different paths from each vertex to the root are compared, and the longest path is retained. the creation of descendantsmap also begins from the leaf vertices of the graph. while moving up from the leaf vertices, the children of each vertex are counted and stored in the map. whenever moving to an upper level in the graph, the number of direct children is calculated in the same way as before, and the numbers of descendants of those children are retrieved from the map and added. moving to a higher level continues until the root is reached. f. user interface we provide a simple user interface, as shown in figure 6, that allows the user to input two arabic terms and get the relatedness score between them. the user chooses the appropriate settings and clicks the button "compute relatedness". the value of the semantic relatedness is displayed in a range from 0 to 1, where 0 denotes no relatedness and 1 denotes maximum relatedness. figure 6. user interface to compute the semantic relatedness between input terms. ix evaluation similar approaches from the literature have often been evaluated by comparison with other approaches [3, 57, 58]. however, to the best of our knowledge, there are no similar knowledge-based approaches for measuring semantic relatedness between arabic terms. therefore, we selected a benchmark dataset that has been used in works on english text and translated the terms included in the dataset to arabic. the benchmark dataset includes human judgements on the similarity between the given terms. thus, we use the human-assigned judgments as a baseline to assess the accuracy of our approach. we also compared our approach with previous approaches that used the same dataset in english. a.
benchmark dataset the dataset used is the wordsimilarity-353 test collection [59], which contains two sets of english term pairs along with human-assigned relatedness judgments. we selected the first set (set 1), which contains 153 term pairs along with their relatedness scores from 13 human subjects. the relatedness is assessed by the human subjects on a scale from 0 to 10, where 10 indicates the highest relatedness. set 1 of the wordsimilarity-353 dataset also includes the list of 30 noun pairs from miller and charles [60]. to use it in our work, we translated the terms in set 1 to arabic. the translation was carried out by the authors and was reviewed by an expert translator. table 2 shows sample term pairs of wordsimilarity-353 after being translated into arabic, along with the assigned human judgment scores. table 2: snapshot of the translated wordsimilarity-353 dataset (term pair; 13 individual human judgments; mean)
نمر / قط (tiger / cat): 7, 5, 7, 9, 6, 5, 8.5, 9, 8, 7, 8, 7, 9; mean 7.35
قطار / سيارة (train / car): 8, 4, 9, 6, 6, 6, 7, 6, 3, 5, 7.5, 7.5, 7; mean 6.31
الأوراق المالية / اليغور (stock / jaguar): 0, 0, 0, 3, 1, 0, 2, 0, 4, 1, 0, 0, 1; mean 0.92
الفيزياء / بروتون (physics / proton): 9, 5, 9.5, 8, 7, 8, 8.5, 8, 6, 10, 9, 8.5, 9; mean 8.12
مال / بنك (money / bank): 9, 7, 8, 10, 8.5, 9, 8.5, 9, 6, 9, 9.5, 8, 9; mean 8.5
ساحل / هضبة (coast / plateau): 4, 1, 4, 3, 4, 5, 5, 6, 2, 5, 6, 6, 6; mean 4.38
كوب / قهوة (cup / coffee): 6, 3, 4, 5, 6.5, 5, 8, 9, 5, 8, 9, 8, 9; mean 6.58
we checked the existence of wikipedia articles corresponding to the translated terms. term pairs that do not have corresponding articles were excluded from the dataset, because our approach computes semantic relatedness based on the presence of wikipedia articles, i.e., each term must be mappable to a wikipedia article. 23 out of the 153 pairs in the dataset were excluded, ending with 120 pairs.
in addition, the mean value of the human judgement scores was normalized to the range from 0 to 1 so that it becomes comparable with the normalized scores from our approach. the complete translated dataset can be downloaded from: https://github.com/baselalhaj/semanticrelatedness. b. experimental conditions recall that our approach for measuring the semantic relatedness between terms uses two types of relations: the context relation and the category relation. for the computation of the category relation, two methods are used to compute the category's information value: the depth-based information value and the descendants-based information value. given that, our aim is to assess five different variants of our approach in order to find which setting gives the most accurate results. these variants are as follows:
• sem-context: the semantic relatedness is measured by using only the context-based relation.
• sem-category-depth: the semantic relatedness is measured by using only the category relation, where the category's information value is computed based on the category depth.
• sem-category-desc: the semantic relatedness is measured by using only the category relation, where the category's information value is computed based on the category's descendants.
• sem-context-category-depth: this variant combines both context and category relations, where the category's information value is calculated based on the depth of the category.
• sem-context-category-desc: this variant combines both context and category relations, where the category's information value is calculated based on the descendants of the category.
c.
evaluation metric results were evaluated by measuring the pearson correlation [61] between the relatedness scores of our approach and the human judgement scores, based on the following equation: r = Σ i=1..n (xi − x̄)(yi − ȳ) / (√(Σ i=1..n (xi − x̄)²) √(Σ i=1..n (yi − ȳ)²)) (7) where n is the sample size, xi and yi are the individual sample points indexed by i, and x̄ and ȳ are the means of the x and y values respectively. the value of r lies in the range between +1 and −1, where r = 1 means a total positive correlation, r = 0 means no correlation exists, and r = −1 means a total negative correlation. d. results and discussion table 3 shows the results of the five variants of our approach, in terms of pearson correlation with the human-assigned scores. in general, the variant named sem-context-category-depth outperforms all other variants with a correlation of 0.68. the difference between sem-context-category-depth and the other variants is statistically significant with p < 0.05 based on a pairwise t-test. this indicates that the best setting for our approach is to combine both context and category relations, and to calculate the category's information value based on the depth of the category. this result also indicates that combining both the context relation and the category relation gives better results than using either of the two relations separately. when the relations are used separately, we find that the sem-context variant surpasses both sem-category-depth and sem-category-desc, but the difference was statistically insignificant. this indicates that when the relations are used separately, the context-based relation is slightly more effective than the category-based relation. table 3: experimental results in terms of the correlation with the human judgments
sem-context: 0.62
sem-category-depth: 0.60
sem-category-desc: 0.58
sem-context-category-depth: 0.68
sem-context-category-desc: 0.64
e.
comparison with existing approaches one objective of the evaluation is to explore how our approach compares to popular approaches from the literature that used the same dataset but with the english version of wikipedia as background knowledge. we compare our work with the following works that were discussed in the related works section: wikirelate [57], wang [62], ssa [63], wikisim [64], cprel [65], esa [58], wlm [3], wlvm [66] and wla [67]. table 4 shows the correlation values for all approaches, and figure 7 depicts the correlation values of the compared approaches. looking at the results, we notice that our approach performs better than some previous approaches such as wikirelate [57], wang [62], ssa [63], wikisim [64], and cprel [65]. in contrast, it does not perform as well as other approaches such as esa [58], wlm [3], wlvm [66] and wla [67]. we further analyzed the errors to better understand the reason behind these differences. in total, more than 70% of the reported errors occurred due to the poor content of the arabic wikipedia articles and the lack of links between articles compared to the english version of wikipedia. this lack of links hindered the computation of the context-based relation, which primarily depends on the shared incoming links between articles. for example, the relatedness score obtained for the terms “مال” (money) and “بنك” (bank) is 0.2, which is obviously inaccurate. the inspection of this error showed that the number of shared incoming links between the articles corresponding to these terms was only 135, compared to 3623 links in the corresponding english articles. in addition, a lot of errors originated from the low complexity of the category graph in the arabic wikipedia compared to the category graph of the english wikipedia.
the category graph is essential in our approach to estimate the information value of wikipedia categories and to compute the category-based relation between terms. we found that the calculated information values for several categories differ across the two versions of wikipedia and were mostly lower in the arabic wikipedia. in fact, the english wikipedia has about 3 times the number of categories and 2.8 times the number of links between categories when compared to the arabic wikipedia [68]. to conclude, we believe that the difference in performance between our approach and others that rely on the english wikipedia can be mainly attributed to the gap between the arabic and english versions of wikipedia in terms of information richness and complexity. table 4. the proposed approach compared to other approaches. conclusion and future work in this work we proposed an approach for measuring the semantic relatedness between arabic terms by exploiting the arabic wikipedia as a knowledge source. given two arabic terms, the approach selects the corresponding wikipedia articles and uses their incoming links and categories to estimate the relatedness between them. our approach was evaluated using a dataset from the wordsimilarity-353 test collection, which contains 120 pairs of terms with their human-assigned judgment scores. the results of the approach were compared with the human judgments and with the results of other approaches that used the english version of wikipedia. the correlation between our results and the human judgments was 0.68, which outperformed the results of some previous approaches that used the same dataset with the english wikipedia. the investigation of the results has shown that many errors resulted from the lack of content of many wikipedia articles and the poor category structure.
this indicates that the arabic version of wikipedia can give satisfactory results when used as background knowledge for semantic similarity measures, but it is still not as reliable as the english version.
references
[1] a. budanitsky and g. hirst, "evaluating wordnet-based measures of lexical semantic relatedness," computational linguistics, vol. 32, no. 1, pp. 13-47, 2006.
[2] w. h. gomaa and a. a. fahmy, "a survey of text similarity approaches," international journal of computer applications, vol. 68, no. 13, pp. 13-18, 2013.
[3] i. h. witten and d. n. milne, "an effective, low-cost measure of semantic relatedness obtained from wikipedia links," in conference of the association for the advancement of artificial intelligence (aaai), chicago, usa, 2008, pp. 25-30.
[4] m. j. hussain, s. h. wasti, g. huang, l. wei, y. jiang, and y. tang, "an approach for measuring semantic similarity between wikipedia concepts using multiple inheritances," information processing & management, vol. 57, no. 3, p. 102188, 2020.
[5] y.-w. zhang, b.-a. li, x.-q. lv, s. ning, and t. jing-jing, "research on domain term dictionary construction based on chinese wikipedia," in international conference on applied mechanics, mathematics, modeling and simulation (ammms 2018), hong kong, 2018, pp. 225-230.
[6] p. arnold and e. rahm, "extracting semantic concept relations from wikipedia," in proceedings of the 4th international conference on web intelligence, mining and semantics (wims14), 2014, pp. 26-37: acm.
[7] p. arnold and e. rahm, "automatic extraction of semantic relations from wikipedia," international journal on artificial intelligence tools, vol. 24, no. 2, pp. 1-36, 2015.
[8] j.-x. huang, k. s. lee, k.-s. choi, and y.-k. kim, "extract reliable relations from wikipedia texts for practical ontology construction," computación y sistemas, vol. 20, no. 3, pp. 467-476, 2016.
[9] z.
wu et al., "an efficient wikipedia semantic matching approach to text document classification," information sciences, vol. 393, pp. 15-28, 2017.
[10] d. chandrasekaran and v. mago, "evolution of semantic similarity—a survey," acm computing surveys (csur), vol. 54, no. 2, pp. 1-37, 2021.
[11] p. sunilkumar and a. p. shaji, "a survey on semantic similarity," in 2019 international conference on advances in computing, communication and control (icac3), 2019, pp. 1-8: ieee.
[12] d. sánchez, m. batet, d. isern, and a. valls, "ontology-based semantic similarity: a new feature-based approach," expert systems with applications, vol. 39, no. 9, pp. 7718-7728, 2012.
approach | correlation
the proposed approach | 0.68
wikipedia links and abstract (wla) [67] | 0.72
context profile based relatedness (cprel) [65] | 0.53
wikisim [64] | 0.63
semantic relatedness between words based on wikipedia links [62] | 0.63
salient semantic analysis (ssa) [63] | 0.622
wikipedia link-based measure (wlm) [3] | 0.69
explicit semantic analysis (esa) [58] | 0.75
wikipedia link vector model (wlvm) [66] | 0.72
wikirelate [57] | 0.49
[figure 7: comparison between the proposed approach and other similarity approaches that used the english version of wikipedia (bar chart; y-axis: pearson correlation, 0 to 0.8).]
[13] r. navigli and s. p. ponzetto, "babelnet: the automatic construction, evaluation and application of a wide-coverage multilingual semantic network," artificial intelligence, vol. 193, pp. 217-250, 2012.
[14] s. banerjee and t. pedersen, "extended gloss overlaps as a measure of semantic relatedness," in international joint conference on artificial intelligence, 2003, vol. 3, pp. 805-810: citeseer.
[15] y. jiang, x. zhang, y. tang, and r. nie, "feature-based approaches to semantic similarity assessment of concepts using wikipedia," information processing & management, vol. 51, no. 3, pp.
215-234, 2015.
[16] g. zhu and c. iglesias, "computing semantic similarity of concepts in knowledge graphs," ieee transactions on knowledge and data engineering, vol. 29, no. 1, pp. 72-85, 2016.
[17] d. sánchez, m. batet, and d. isern, "ontology-based information content computation," knowledge-based systems, vol. 24, no. 2, pp. 297-303, 2011.
[18] m. a. rodriguez and m. j. egenhofer, "determining semantic similarity among entity classes from different ontologies," ieee transactions on knowledge and data engineering, vol. 15, no. 2, pp. 442-456, 2003.
[19] y. jiang, w. bai, x. zhang, and j. hu, "wikipedia-based information content and semantic similarity computation," information processing & management, vol. 53, no. 1, pp. 248-265, 2017.
[20] j.-b. gao, b.-w. zhang, and x.-h. chen, "a wordnet-based semantic similarity measurement combining edge-counting and information content theory," engineering applications of artificial intelligence, vol. 39, pp. 80-88, 2015.
[21] s. t. dumais, "latent semantic analysis," annual review of information science and technology, vol. 38, pp. 189-230, 2004.
[22] s. jain, k. seeja, and r. jindal, "computing semantic relatedness using latent semantic analysis and fuzzy formal concept analysis," international journal of reasoning-based intelligent systems, vol. 13, no. 2, pp. 92-100, 2021.
[23] i. alagha and y. abu-samra, "tag recommendation for short arabic text by using latent semantic analysis of wikipedia," jordanian journal of computers and information technology, vol. 6, no. 02, pp. 165-181, 2020.
[24] a. rozeva and s. zerkova, "assessing semantic similarity of texts - methods and algorithms," in aip conference proceedings, 2017, vol. 1910, no. 1, p. 060012: aip publishing llc.
[25] p. mandera, e. keuleers, and m. brysbaert, "how useful are corpus-based methods for extrapolating psycholinguistic variables?," quarterly journal of experimental psychology, vol. 68, no. 8, pp. 1623-1642, 2015.
[26] d. cer, m. diab, e. agirre, i.
lopez-gazpio, and l. specia, "semeval-2017 task 1: semantic textual similarity - multilingual and cross-lingual focused evaluation," arxiv preprint arxiv:.00055, 2017.
[27] t. kajiwara and m. komachi, "building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings," in proceedings of coling 2016, the 26th international conference on computational linguistics: technical papers, 2016, pp. 1147-1158.
[28] h. jelodar et al., "latent dirichlet allocation (lda) and topic modeling: models, applications, a survey," multimedia tools and applications, vol. 78, no. 11, pp. 15169-15211, 2019.
[29] r. ben djemaa, h. nabli, and i. amous ben amor, "enhanced semantic similarity measure based on two-level retrieval model," concurrency and computation: practice and experience, vol. 31, no. 15, p. e5135, 2019.
[30] o. araque, g. zhu, and c. a. iglesias, "a semantic similarity-based perspective of affect lexicons for sentiment analysis," knowledge-based systems, vol. 165, pp. 346-359, 2019.
[31] h. t. nguyen, p. h. duong, and e. cambria, "learning short-text semantic similarity with word embeddings and external knowledge sources," knowledge-based systems, vol. 182, p. 104842, 2019.
[32] i. lopez-gazpio, m. maritxalar, m. lapata, and e. agirre, "word n-gram attention models for sentence similarity and inference," expert systems with applications, vol. 132, pp. 1-11, 2019.
[33] t. zheng et al., "detection of medical text semantic similarity based on convolutional neural network," bmc medical informatics and decision making, vol. 19, no. 1, pp. 1-11, 2019.
[34] e. l. pontes, s. huet, a. c. linhares, and j.-m. torres-moreno, "predicting the semantic textual similarity with siamese cnn and lstm," arxiv preprint arxiv:.10641, 2018.
[35] l. yao, z. pan, and h. ning, "unlabeled short text similarity with lstm encoder," ieee access, vol. 7, pp. 3430-3437, 2018.
[36] s. zhang, x. xu, y. tao, x. wang, q. wang, and f.
chen, "text similarity measurement method based on bilstm-secapsnet model," in 2021 6th international conference on image, vision and computing (icivc), 2021, pp. 414-419: ieee.
[37] j. kleenankandy and k. a. nazeer, "recognizing semantic relation in sentence pairs using tree-rnns and typed dependencies," in 2020 6th ieee congress on information science and technology (cist), 2021, pp. 372-377: ieee.
[38] y. zhang, r. tang, and j. lin, "explicit pairwise word interaction modeling improves pretrained transformers for english semantic similarity tasks," arxiv preprint arxiv:.02847, 2019.
[39] m. t. elhadi, "arabic news articles classification using vectorized-cosine based on seed documents," journal of advances in computer engineering technology, vol. 5, no. 2, pp. 117-128, 2019.
[40] m. belazzoug, m. touahria, f. nouioua, and m. brahimi, "an improved sine cosine algorithm to select features for text categorization," journal of king saud university - computer and information sciences, vol. 32, no. 4, pp. 454-464, 2020.
[41] m. o. alhawarat, h. abdeljaber, and a. hilal, "effect of stemming on text similarity for arabic language at sentence level," peerj computer science, vol. 7, pp. 530-540, 2021.
[42] d. schwab, "semantic similarity of arabic sentences with word embeddings," in third arabic natural language processing workshop, 2017, pp. 18-24.
[43] m. bekkali and a. lachkar, "an effective short text conceptualization based on new short text similarity," social network analysis and mining, vol. 9, no. 1, pp. 1-11, 2019.
[44] f. a. almarsoomi, j. d. oshea, z. bandar, and k. crockett, "awss: an algorithm for measuring arabic word semantic similarity," in 2013 ieee international conference on systems, man, and cybernetics, 2013, pp. 504-509: ieee.
[45] h. froud, a. lachkar, and s. a.
ouatik, "stemming versus light stemming for measuring the similarity between arabic words with latent semantic analysis model," in 2012 colloquium in information science and technology, 2012, pp. 69-73: ieee.
[46] y. li, z. a. bandar, and d. mclean, "an approach for measuring semantic similarity between words using multiple information sources," ieee transactions on knowledge and data engineering, vol. 15, no. 4, pp. 871-882, 2003.
[47] g. zakria, m. farouk, k. fathy, and m. n. makar, "relation extraction from arabic wikipedia," indian journal of science and technology, vol. 12, pp. 46-52, 2019.
[48] a. m. al-zoghby, a. elshiwi, and a. atwan, "semantic relations extraction and ontology learning from arabic texts - a survey," in intelligent natural language processing: trends and applications: springer, 2018, pp. 199-225.
[49] f. b. mesmia, k. haddar, d. maurel, and n. friburger, "arabic named entity recognition process using transducer cascade and arabic wikipedia," in proceedings of the international conference recent advances in natural language processing, 2015, pp. 48-54.
[50] m. biltawi, a. awajan, s. tedmori, and a. al-kouz, "exploiting multilingual wikipedia to improve arabic named entity resources," international arab journal on information technology, vol. 14, no. 4a, pp. 598-607, 2017.
[51] a. alahmadi, a. joorabchi, and a. e. mahdi, "combining words and concepts for automatic arabic text classification," in international conference on arabic language processing, 2017, pp. 105-119: springer.
[52] a.-a. iyad and a. ahmed, "towards supporting exploratory search over the arabic web content: the case of arabxplore," journal of information technology management, vol. 12, no. 4, pp. 160-179, 2020.
[53] y.
ni et al., "semantic documents relatedness using concept graph representation," in proceedings of the ninth acm international conference on web search and data mining, 2016, pp. 635-644: acm.
[54] i. traverso, m.-e. vidal, b. kämpgen, and y. sure-vetter, "gades: a graph-based semantic similarity measure," in proceedings of the 12th international conference on semantic systems, 2016, pp. 101-104: acm.
[55] b. louie, s. bergen, r. higdon, and e. kolker, "quantifying protein function specificity in the gene ontology," standards in genomic sciences, vol. 2, no. 2, p. 238, 2010.
[56] t. zesch, c. müller, and i. gurevych, "extracting lexical semantic knowledge from wikipedia and wiktionary," in lrec, 2008, vol. 8, no. 2008, pp. 1646-1652.
[57] m. strube and s. p. ponzetto, "wikirelate! computing semantic relatedness using wikipedia," in association for the advancement of artificial intelligence (aaai), 2006, vol. 6, pp. 1419-1424.
[58] e. gabrilovich and s. markovitch, "computing semantic relatedness using wikipedia-based explicit semantic analysis," in the international joint conference on artificial intelligence, 2007, vol. 7, pp. 1606-1611.
[59] l. finkelstein et al., "placing search in context: the concept revisited," acm transactions on information systems, vol. 20, no. 1, pp. 116-131, 2002.
[60] g. a. miller and w. g. charles, "contextual correlates of semantic similarity," language and cognitive processes, vol. 6, no. 1, pp. 1-28, 1991.
[61] j. benesty, j. chen, y. huang, and i. cohen, "pearson correlation coefficient," in noise reduction in speech processing: springer, 2009, pp. 1-4.
[62] r.-q. wang, "measuring of semantic relatedness between words based on wikipedia links," international proceedings of computer science & information technology, vol. 50, 2012.
[63] s. hassan and r. mihalcea, "semantic relatedness using salient semantic analysis," in the twenty-fifth conference on artificial intelligence (aaai-11), san francisco, ca, 2011, pp. 884-889.
[64] s.
jabeen, x. gao, and p. andreae, "harnessing wikipedia semantics for computing contextual relatedness," in pacific rim international conference on artificial intelligence, 2012, pp. 861-865: springer.
[65] s. jabeen, x. gao, and p. andreae, "cprel: semantic relatedness computation using wikipedia based context profiles," research in computing science, vol. 70, pp. 57-68, 2013.
[66] d. milne, "computing semantic relatedness using wikipedia link structure," in proceedings of the new zealand computer science research student conference, 2007, pp. 63-70.
[67] d. zhao, l. qin, p. liu, z. ma, and y. li, "computing terms semantic relatedness by knowledge in wikipedia," in 12th web information system and application conference (wisa), 2015, pp. 107-111: ieee.
[68] w. lewoniewski, k. węcel, and w. abramowicz, "multilingual ranking of wikipedia articles with quality and popularity assessment in different topics," computers, vol. 8, no. 3, p. 132, 2019.

journal of engineering research and technology, volume 9, issue 2, october 2022
received on (15-08-2022), accepted on (24-09-2022)
assessment of stormwater infiltration basins models developed in gaza strip
zakaria helles1*, yunes mogheir2
1 water technology ph.d. joint program, islamic university of gaza, and al-azhar university of gaza, palestine. *corresponding author: zakaria.helles@gmail.com
2 civil and environmental engineering, islamic university of gaza, palestine
https://doi.org/10.33976/jert.9.2/2022/3
abstract—stormwater remains the sole source of aquifer recharge in the gaza strip, and it should be utilized properly through artificial infiltration. the objective of this study is to investigate and analyze the infiltration efficiency of three large existing infiltration basins in the gaza strip (alamal, asadaqa, and waqf) using different infiltration techniques.
the technique applied in alamal basin is natural surface spreading of stormwater, while asadaqa basin uses surface spreading combined with graveled boreholes. waqf basin uses non-graveled boreholes (empty shafts cased with upvc pipes). the infiltration rate and efficiency were recorded and estimated for each basin during the 2021-2022 wet season and compared to the past 2017-2018 wet season at a water depth of 1.70 m. the study revealed that the actual infiltration capacity of waqf basin was estimated as 2,000 m3/day in the 2021-2022 wet season, twice that in the 2017-2018 wet season, with an infiltration efficiency of 57.47%; this was attributed to the 18 drilled non-graveled boreholes, which enhanced the seepage of stormwater into the underlying soil. asadaqa basin has the lowest infiltration efficiency, 3.90%, due to the continuous accumulation of a thick and dense sediment layer on the basin floor, with an unchanged actual infiltration capacity (around 2,800 m3/day) between the two studied wet seasons. on the other hand, alamal basin's infiltration efficiency was only 4.60%, with actual infiltration capacities of 629 and 105.4 m3/day during the two wet seasons, respectively; some repair and upgrade works performed at alamal basin enhanced the actual infiltration capacity, but it remains far below the design infiltration capacity. for future studies, the waqf basin technique should be thoroughly studied and investigated as a novel artificial infiltration method, with a deep study of the factors affecting the infiltration process.
index terms—stormwater, infiltration basin, drywell, water depth, clogging, aquifer.
i. introduction
groundwater is considered the only source of freshwater supply in the gaza strip, used for domestic, agricultural, and industrial purposes, and the main replenishment source for the gaza coastal aquifer through infiltration.
other infiltration components exist, for instance, water irrigation activities, wastewater and domestic water leakage from networks, retention and sedimentation ponds, cesspits, and soakaways. the recharging components are directly influenced by human activities, which in many cases discharge water of substandard quality that infiltrates into the soil and percolates to the groundwater, resulting in unrecoverable contamination of groundwater quality. in some areas, the over-extraction of groundwater has led to a continuous lowering of the groundwater table to as much as 10 m below mean sea level [1]. this has a detrimental effect on the aquifer, allowing seawater intrusion, which has led to a significant and irreversible deterioration in groundwater quality. furthermore, the high population growth rate is exacerbating water scarcity in the gaza strip, with continuously decreasing rates of stormwater infiltration due to rapid urbanization activities, expansion of built-up areas, and global climate change. amid all the mentioned constraints and complications affecting the water situation in the gaza strip, the importance of enhancing stormwater infiltration into the gaza coastal aquifer is increasing with time. thus, understanding artificial recharging systems and studying the applied engineering technologies is very important and will assist in alleviating the gaza water deficit and the deterioration of groundwater quality and quantity [2]. the volume of stormwater infiltrating into the underlying ground formation depends upon a large number of factors: soil characteristics, land use, land cover, soil saturation, temperature, water table, water composition, and other variables.
infiltration is a very complex process; many previous researchers have tried to precisely describe the behavior of water as it invades soil pores to replace entrapped air, a set of complicated microscale processes that influence the macroscale behavior of infiltration. numerous methods and approaches have been created to estimate the infiltration rate, such as the in situ measurement method, commonly known as the field-observed, data-driven approach, which was applied in this study. empirical models such as green-ampt, kostiakov, horton, philip, holtan, and others were developed to estimate cumulative infiltration and infiltration rate; some are accurate to a certain limit and can give satisfactory results, while others are not and can only be applied under specific assumptions. darcy (1856), a french engineer, performed several field experiments on the behavior of water infiltration and formulated the first and best known empirical equation describing water flow through saturated porous media, known as darcy's law, expressed in equation 1 [3]. the equation opened a new conceptualization of the infiltration process that widened researchers' and scientists' scope of thinking for further studies and investigations.

f = -Ks (ΔΦ / L)   (1)

where f is the water flux (length/time) flowing through a unit sectional area in unit time, Ks is the saturated hydraulic conductivity, and ΔΦ is the difference in hydraulic head between two points separated by a distance L. darcy's law is valid for laminar flows with reynolds number smaller than 1.0 [4]. however, forchheimer in 1930 proposed a correction to darcy's law for reynolds numbers larger than 1.0.
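as a quick numerical illustration of equation 1, the one-liner below evaluates the darcy flux; the input values are hypothetical and are not taken from any of the basins studied here.

```python
def darcy_flux(k_s: float, head_diff: float, length: float) -> float:
    """Darcy's law (eq. 1): flux f = -Ks * dPhi / L, in m/day when Ks is m/day."""
    return -k_s * head_diff / length

# hypothetical inputs: Ks = 6.67 m/day, head dropping 0.134 m over a 1 m path.
# a negative head difference (head decreasing along the flow path) gives a
# positive (downward) flux of Ks * |dPhi| / L = 0.89378 m/day here.
print(darcy_flux(6.67, -0.134, 1.0))
```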
another famous equation was proposed in 1931 by richards, formulated to describe unsaturated flow as a continuation of buckingham's study extending darcy's law [5]. richards' equation can be used for three-dimensional unsaturated flow in a complicated form, yet the widely used form is the one-dimensional expression for vertical infiltration given in equation 2.

∂θ/∂t = ∂/∂z [K(φ) (∂φ/∂z + 1)]   (2)

where z is the vertical distance, K is the hydraulic conductivity, φ is the capillary suction, and θ is the moisture content; thus the saturated hydraulic conductivity is replaced with a function of soil moisture content. kostiakov [6] and horton [7,8] are considered the best known empirical equations used to represent the infiltration rate. the proposed equations have critical limitations that may hinder their application, since they depend on complicated parameters that cannot be readily estimated from the available soil information. the kostiakov empirical model is expressed in equation 3 [6].

f(t) = α t^(-β)   (3)

where f is the infiltration rate at time t, and α and β are empirical constants. the model describes the infiltration rate well over short durations but is less accurate over longer durations. horton also proposed a famous and well known equation in 1940, given in equation 4, to describe the basic behavior of infiltrating water; however, the decay constant was difficult to obtain and poorly defined, which was one of the main drawbacks of the model [7,8].

f(t) = f_f + (f_o - f_f) e^(-γt)   (4)

where f_f and f_o are the final and initial infiltration rates respectively, t is the time since rainfall start, and γ is an empirical constant. other important empirical models and mathematical equations of the infiltration process were expressed by philip [9] and green-ampt [10], both of which use parameters and information that can be obtained from soil data, particularly the green-ampt model.
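equations 3 and 4 are simple enough to evaluate directly. the sketch below does so with made-up parameter values (α, β, f_o, f_f and γ are illustrative, not fitted to any of the basins in this study); it shows the characteristic decay of the infiltration rate over time in both models.

```python
import math

def kostiakov(t: float, alpha: float, beta: float) -> float:
    """Kostiakov model (eq. 3): f(t) = alpha * t**(-beta), t > 0."""
    return alpha * t ** (-beta)

def horton(t: float, f0: float, ff: float, gamma: float) -> float:
    """Horton model (eq. 4): f(t) = ff + (f0 - ff) * exp(-gamma * t)."""
    return ff + (f0 - ff) * math.exp(-gamma * t)

# illustrative parameters: both curves decay from a high initial rate;
# Horton's decays toward the final rate ff, Kostiakov's toward zero.
for t in (0.1, 1.0, 6.0, 24.0):  # hours
    print(t, round(kostiakov(t, 0.5, 0.4), 3), round(horton(t, 0.8, 0.05, 0.3), 3))
```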
in addition, fok [11] summarized in his study the development and limitations of various infiltration models. massmann [12] provided a design manual for sizing infiltration basins by developing the green-ampt model. in this study, the actual infiltration rate, capacity, and efficiency were estimated by an in situ observation approach for three existing basins in the gaza strip (waqf, asadaqa, and alamal) that use different infiltration techniques.
ii. study area
1. alamal basin
alamal infiltration basin is located in the west of khanyounes city (latitude 31°21'38.79"n and longitude 34°18'2.52"e). the catchment area is 10 km2, with a surface runoff of 89,215.0 m3/hr flowing into the basin during rainy seasons [13]. the catchment area collects stormwater through a box culvert that conveys the stormwater into the basin for retention until it seeps gradually into the underlying groundwater over time, see figure 1.
figure 1: layout of alamal infiltration basin (google map, 2022)
alamal basin uses direct surface spreading of stormwater without augmenting drywells (infiltration boreholes). based on [13], with a hydraulic conductivity k = 6.67 m/day and a hydraulic gradient i = 0.134, the design infiltration rate of alamal basin was estimated as 0.8041 m/day using darcy's law. furthermore, multiplying the design infiltration rate by the basin floor area of 17,000 m2 gives the design infiltration capacity of 13,670 m3/day. the soil profile beneath the bottom of alamal basin is very heterogeneous, with relatively impermeable clay layers extending below the water table. based on the results of a previous soil investigation, there are thick layers of sand that extend to the water table, overlaid by lenses of clay that reduce soil permeability.
2. asadaqa basin
asadaqa infiltration basin is located in atuffah district of northeastern gaza city (latitude 31°30'43.99"n and longitude 34°28'32.87"e).
the basin location was best suited to collect the surface runoff created from stormwater. since the basin is at the lowest elevation, it contributes significantly to intercepting the stormwater from the surrounding areas. the catchment area is 2.5 km2 [14]. asadaqa basin was designed using a combination of two techniques: surface spreading and vadose-zone wells (graveled drywells), which do not reach the groundwater table. the basin consists of two main sub-basins, the northern and southern basins, see figure 2. the basin floor area was estimated as 8,000 m2.
figure 2: layout of asadaqa infiltration basin [14]
according to [14], a total of 293 boreholes were drilled to enhance the surface infiltration; each borehole has a diameter of 80 cm and was constructed 5 m away from the neighboring boreholes in two directions. each borehole extends 15 m deep into the ground, with a 5 m penetration of the kurkar layer, and was filled with gravel of 10-20 mm size, giving an infiltration capacity of 246 m3/day per borehole. thus, the design infiltration capacity of the basin was estimated as 72,078 m3/day by multiplying the number of drywells (293 boreholes) by the design infiltration capacity of each borehole (246 m3/day). regarding the soil profile studied by [14], a layer of dark brown sandy and silty clay extends from the land surface to a depth of 7.5-8.5 m, then a kurkar layer extends down under the first layer until the end of the testing borehole depth of 20 m. for the design of the infiltration technique, a 1.0 m thick layer of clean sand was applied and spread over the surface of the basin; the layer acts as a filter for any suspended solids that may be present in the collected rainwater.
a layer of non-woven geotextile was then laid directly under the sand layer to allow stormwater to seep into the bottom soil while preventing the passage of sand grains. two layers of gravel were then spread under the sand layer: the first, 20 cm deep with 5-10 mm size, was placed under the non-woven geotextile layer on top of the second, 40 cm deep with 10-20 mm size. the purpose of the gravel layers is to allow stormwater to seep down through the aggregate pores and enter the boreholes, which accelerate the infiltration rate by bypassing the poorly permeable layers.
3. waqf basin
waqf infiltration basin is located in azaytoon area south of gaza city (latitude 31°30'2.78"n and longitude 34°27'31.62"e). first, the basin was designed and constructed using natural surface spreading. then, sequential development and upgrading steps occurred throughout different time phases. the catchment area is 6.0 km2, where the basin is located in the lowest area to support collecting the incoming flow of stormwater by gravity [15], see figure 3.
figure 3: waqf infiltration basin (google map, 2022)
a. first stage
previously, waqf basin used surface spreading of collected stormwater for infiltration. however, the low permeability of the soil layers underneath the basin floor reduced the infiltration rate to unrecoverable levels. this raised the necessity of replacing the 6 m thick top soil layer with a new soil layer of higher permeability. the upgraded system worked and has been in operation for the last 4 years, with a design infiltration capacity of 49,826 m3/day [15]. meanwhile, the basin performance declined back to a low infiltration rate, owing to the existence of silt and clay (suspended solids) in the stormwater entering the basin. the suspended solids settled to the bottom of the basin floor, forming a thick and dense clogging layer that blocked the pores and significantly reduced the infiltration rate.
thus, another stage of system development and upgrade became necessary.
b. second stage
waqf basin was recently upgraded in 2021 through a second stage of development by constructing 18 boreholes (drywells) at the west end of the basin. the distance between any two boreholes is around 12.0 m in two directions, and each borehole of 355 mm diameter was cased with upvc pipe [16]. the borehole pipes extend 16 m deep into the ground with a slotted depth of 10 m (20% open area); the boreholes were dug with a mechanical auger bucket, and the upvc pipes were then inserted into the empty boreholes without any filling media, see figure 4. a gravel gabion (5-7 cm) was constructed over each borehole's upper tip; the gabion is a cube of gravel with a side length of 1.5 m.
figure 4: borehole drilling and upvc pipe installation at waqf basin
the upvc pipe extends through the gabion with a slotted area (20-25%) covered with plastic mesh, as shown in figure 5. hydrus (2d/3d) software was used for modeling and simulation of the infiltration system using the richards equation. the simulation showed that each borehole can infiltrate 232 m3/day of stormwater. thus, the design infiltration capacity of the basin was 3,480 m3/day, obtained by multiplying the number of boreholes (18 boreholes) by the design infiltration capacity of each borehole (232 m3/day/borehole), as indicated in [16]. after the second stage of development, the soil profile underneath waqf basin was classified by [16] as follows: the first layer (soil-sludge mixture) extends from the ground surface to a depth of 0.25 m, and the second layer (yellowish imported fine sand) extends from the bottom of the first layer to a depth of 3.0 m.
then a third layer (yellowish imported coarse sand) extends from 3.0 m to 6.0 m depth, followed by layers of kurkar, gravelly sand, and sandy gravel to a depth of 23.5 m, where the water table was encountered.
figure 5: borehole (drywell) profile at waqf basin [16]
iii. methodology
a field-measurement, data-driven approach was used in this study to estimate the infiltration rate and capacity of each basin, where readings of the water surface were recorded and compared during two wet seasons. the measuring unit of infiltration rate is m/day, represented by the vertical net drop of ponded water level over 24 hours at each basin (excluding inflow/outflow to the basin). in addition, the effect of water evaporation was considered in this study by deducting 2.39 mm/day (the average evaporation rate during the winter season in gaza city) [17]. equation 5 was used to calculate the net infiltration rate.

infiltration rate (m/day) = net drop in water level (m) / elapsed time (day)   (5)

the actual infiltration capacity of a basin (volume of infiltrated stormwater) was obtained by multiplying the net actual infiltration rate by the basin floor area, as expressed in equation 6.

actual infiltration capacity (m3/day) = infiltration rate (m/day) × basin floor area (m2)   (6)

then, the infiltration efficiency was calculated by dividing the measured actual infiltration capacity of each basin (in the 2021-2022 wet season) by the design infiltration capacity of that basin, as expressed in equation 7.

infiltration efficiency (%) = actual infiltration capacity (m3/day) / design infiltration capacity (m3/day) × 100   (7)

1. storm events under study
five storm events with corresponding daily rainfall depths in mm were selected out of 37 rainy days that occurred in the 2021-2022 wet season.
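the chain of calculations in equations 5-7 can be sketched as a short script. the evaporation deduction described in the methodology is folded into the rate function, and the final line reproduces the waqf efficiency figure reported in the abstract (2,000 m3/day actual against the 3,480 m3/day design capacity); the helper names are ours, not the authors'.

```python
def infiltration_rate(net_drop_m: float, elapsed_days: float,
                      evaporation_m_per_day: float = 0.00239) -> float:
    """Eq. 5 with the evaporation correction applied: net drop over elapsed
    time, minus the average winter evaporation rate in Gaza (2.39 mm/day)."""
    return net_drop_m / elapsed_days - evaporation_m_per_day

def infiltration_capacity(rate_m_per_day: float, floor_area_m2: float) -> float:
    """Eq. 6: volume of stormwater infiltrated per day (m3/day)."""
    return rate_m_per_day * floor_area_m2

def infiltration_efficiency(actual_m3_day: float, design_m3_day: float) -> float:
    """Eq. 7: measured actual capacity over design capacity, in percent."""
    return actual_m3_day / design_m3_day * 100.0

# waqf basin figures from the study: ~2,000 m3/day actual vs 3,480 m3/day design
print(round(infiltration_efficiency(2000, 3480), 2))  # 57.47
```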
Rainfall depths were recorded by the Ministry of Agriculture at each rainfall gauge station (17 manual gauge stations are available in the Gaza Strip), and the rainy day number was also noted to identify the temporal location of the 5 selected rainy days. The rainfall depths for both Waqf and Asadaqa basins were recorded at the Tuffah gauge station, while the rainfall depth for Alamal basin was recorded at the West Khanyounes gauge station, as shown in Table 1.

Table 1: The five storm events selected during the 2021-2022 wet season

Storm number | Storm 1 | Storm 2 | Storm 3 | Storm 4 | Storm 5
Storm date | 17/12/2021 | 15/01/2022 | 24/01/2022 | 5/2/2022 | 11/2/2022
Rainy day number | 7 | 17 | 21 | 27 | 29
Daily rainfall depth, mm (Tuffah station, Gaza: Waqf, Asadaqa) | 14.5 | 27.3 | 5.0 | 19.5 | 8.0
Daily rainfall depth, mm (West Khanyounes station, Khanyounes: Alamal) | 12.5 | 12.5 | 3.0 | 25.0 | 9.8

2. Methods of measuring water level

The water levels of ponded stormwater in the three infiltration basins (Waqf, Asadaqa, and Alamal) were measured after the 5 selected storm events in the 2021-2022 wet season and compared to the water levels recorded in the 2017-2018 wet season in a previous study [18]. At Waqf basin, two methods were available to measure the drop in the water surface (which indirectly equals the actual infiltration rate). The first was a measuring staff gauge placed in the middle of the basin near the southern side at (31°30'1.65"N, 34°27'32.36"E); the second was an electrical sonic ranger attached to a steel stand fixed above the water surface on the basin's southern side at (31°30'1.46"N, 34°27'32.37"E). The sonic ranger was intended to send hourly data-log readings to a control panel located in the control room; however, it was not functioning properly at the time of this study, and re-calibration was required for accurate readings, see Figure 6.
Figure 6: Staff gauge and sonic ranger panel at Waqf basin

At Asadaqa basin, water levels were measured using staff gauges placed in the northern sub-basin at (31°30'47.51"N, 34°28'33.50"E) and in the southern sub-basin at (31°30'40.47"N, 34°28'30.15"E). At Alamal basin, however, the staff gauge was a marked ruler (marking lines) drawn on the concrete embankment of the basin.

3. Water level readings of the past season

Old readings of water level during the past 2017-2018 wet season were recorded in a previous study [18], supplemented by a historical data set obtained from the Municipality of Gaza for Waqf and Asadaqa basins and from the Municipality of Khanyounes for Alamal basin. The stormwater level readings at each basin are given in Tables 2 to 5.

Table 2: Waqf basin old records [18]

Storm no. | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 7/12/2017 | 1.05 | -
  | 8/12/2017 | 1.00 | 0.05
  | 9/12/2017 | 0.95 | 0.05
2 | 27/12/2018 | 2.00 | -
  | 28/12/2018 | 1.90 | 0.10
  | 29/12/2018 | 1.80 | 0.10
  | 30/12/2018 | 1.70 | 0.10
3 | 7/1/2018 | 3.80 | -
  | 8/1/2018 | 3.50 | 0.30
  | 9/1/2018 | 3.20 | 0.30
  | 10/1/2018 | 2.90 | 0.30
  | 11/1/2018 | 2.70 | 0.20
4 | 1/3/2018 | 2.80 | 0.15
  | 2/3/2018 | 2.70 | 0.10
  | 3/3/2018 | 2.65 | 0.05
5 | 1/4/2018 | 1.43 | -
  | 2/4/2018 | 1.42 | 0.01
  | 3/4/2018 | 1.41 | 0.01
  | 4/4/2018 | 1.40 | 0.01

Table 3: Asadaqa basin old records, south basin [18]

Storm no. | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 6/12/2017 | 1.35 | 0.15
  | 9/12/2017 | 1.00 | 0.10
  | 16/12/2017 | 0.65 | 0.05
  | 23/12/2017 | 0.44 | 0.04
2 | 7/1/2018 | 1.70 | 0.28
  | 8/1/2018 | 1.42 | 0.22
  | 9/1/2018 | 1.20 | 0.10
  | 10/1/2018 | 1.10 | 0.07
3 | 30/1/2018 | 1.57 | 0.17
  | 31/1/2018 | 1.40 | 0.15
  | 1/2/2018 | 1.25 | 0.14
4 | 5/3/2018 | 0.80 | 0.04
  | 6/3/2018 | 0.76 | 0.02
  | 7/3/2018 | 0.74 | 0.01
  | 8/3/2018 | 0.73 | 0.01

Table 4: Asadaqa basin old records, north basin [18]

Storm no. | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 25/12/2017 | 0.95 | 0.95
2 | 1/1/2018 | 0.80 | 0.90
3 | 7/1/2018 | 0.95 | 0.80
4 | 28/1/2018 | 1.35 | 0.35
  | 29/1/2018 | 1.00 | 0.20
  | 30/1/2018 | 0.80 | 0.20
  | 31/1/2018 | 0.60 | 0.15
  | 1/2/2018 | 0.45 | 0.14
  | 2/2/2018 | 0.31 | 0.11

Table 5: Alamal basin old records [18]

Storm no. | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 12/1/2018 | 3.45 | 0.20
  | 13/1/2018 | 3.25 | 0.15
  | 14/1/2018 | 3.10 | 0.15
  | 15/1/2018 | 2.95 | 0.14
2 | 2/2/2018 | 3.95 | 0.22
  | 3/2/2018 | 3.73 | 0.20
  | 4/2/2018 | 3.53 | 0.18
3 | 19/2/2018 | 3.85 | 0.05
  | 20/2/2018 | 3.80 | 0.05
  | 21/2/2018 | 3.75 | 0.05
4 | 12/3/2018 | 2.68 | 0.02
  | 13/3/2018 | 2.66 | 0.02
  | 14/3/2018 | 2.64 | 0.01
  | 15/3/2018 | 2.63 | 0.01

4. Water level readings of the present season

During the 2021-2022 rainy season, readings of the stormwater level in the three infiltration basins were taken on a daily basis and collected at a fixed time of day to ensure a 24-hour period between every two consecutive readings. The readings were taken after the end of each storm event and during a dormant (rain-free) period, in order to measure the net drop in water level due to infiltration alone, excluding any unwanted effects. The staff gauge readings (and the infiltration rates they represent) are presented in Tables 6 to 9.

Table 6: Waqf basin recent records

Storm no. | Time of reading | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 11:00 am | 12/17/2021 | 1.08 | -
  |          | 12/18/2021 | 0.72 | 0.36
  |          | 12/19/2021 | 0.42 | 0.30
  |          | 12/20/2021 | 0.25 | 0.17
  |          | 12/21/2021 | 0.15 | 0.10
  |          | 12/22/2021 | 0.08 | 0.07
2 | 10:35 am | 1/17/2022 | 4.95 | -
  |          | 1/18/2022 | 4.37 | 0.55
  |          | 1/19/2022 | 3.85 | 0.52
  |          | 1/20/2022 | 3.40 | 0.45
  |          | 1/21/2022 | 3.00 | 0.40
  |          | 1/22/2022 | 2.70 | 0.30
3 | 12:15 pm | 1/30/2022 | 4.77 | -
  |          | 1/31/2022 | 4.30 | 0.47
  |          | 2/1/2022 | 3.85 | 0.45
  |          | 2/2/2022 | 3.50 | 0.35
  |          | 2/3/2022 | 3.20 | 0.30
  |          | 2/4/2022 | 2.97 | 0.23
4 | 8:00 am  | 2/6/2022 | 5.10 | -
  |          | 2/7/2022 | 4.75 | 0.35
  |          | 2/8/2022 | 4.40 | 0.35
  |          | 2/9/2022 | 4.10 | 0.30
  |          | 2/10/2022 | 3.85 | 0.25
  |          | 2/11/2022 | 3.65 | 0.20
5 | 8:00 am  | 2/13/2022 | 4.38 | -
  |          | 2/14/2022 | 4.20 | 0.18
  |          | 2/15/2022 | 4.04 | 0.16
  |          | 2/16/2022 | 3.90 | 0.14
  |          | 2/17/2022 | 3.77 | 0.13
  |          | 2/18/2022 | 3.67 | 0.10

Table 7: Asadaqa basin recent records, south basin

Storm no. | Time of reading | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 11:00 am | 1/17/2022 | 0.86 | -
  |          | 1/18/2022 | 0.75 | 0.11
  |          | 1/19/2022 | 0.66 | 0.09
  |          | 1/20/2022 | 0.60 | 0.06
  |          | 1/21/2022 | 0.56 | 0.04
  |          | 1/22/2022 | 0.53 | 0.03
2 | 12:15 pm | 1/30/2022 | 0.80 | -
  |          | 1/31/2022 | 0.70 | 0.10
  |          | 2/1/2022 | 0.61 | 0.09
  |          | 2/2/2022 | 0.53 | 0.08
  |          | 2/3/2022 | 0.46 | 0.07
  |          | 2/4/2022 | 0.42 | 0.04
3 | 8:15 am  | 2/6/2022 | 0.80 | -
  |          | 2/7/2022 | 0.73 | 0.07
  |          | 2/8/2022 | 0.66 | 0.07
  |          | 2/9/2022 | 0.60 | 0.06
  |          | 2/10/2022 | 0.55 | 0.05
  |          | 2/11/2022 | 0.51 | 0.04
4 | 8:15 am  | 2/13/2022 | 0.71 | -
  |          | 2/14/2022 | 0.65 | 0.06
  |          | 2/15/2022 | 0.60 | 0.05
  |          | 2/16/2022 | 0.56 | 0.04
  |          | 2/17/2022 | 0.53 | 0.03
  |          | 2/18/2022 | 0.51 | 0.02

Table 8: Asadaqa basin recent records, north basin

Storm no. | Time of reading | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 11:10 am | 1/17/2022 | 0.42 | -
  |          | 1/18/2022 | 0.25 | 0.17
  |          | 1/19/2022 | 0.14 | 0.11
  |          | 1/20/2022 | 0.07 | 0.07
  |          | 1/21/2022 | 0.04 | 0.03
  |          | 1/22/2022 | 0.02 | 0.02
2 | 12:20 pm | 1/30/2022 - 2/4/2022 | no readings | -
3 | 8:20 am  | 2/6/2022 | 0.67 | -
  |          | 2/7/2022 | 0.47 | 0.20
  |          | 2/8/2022 | 0.30 | 0.17
  |          | 2/9/2022 | 0.14 | 0.16
  |          | 2/10/2022 | 0.04 | 0.10
  |          | 2/11/2022 | 0.01 | 0.03
4 | 8:20 am  | 2/13/2022 - 2/18/2022 | no readings | -

Table 9: Alamal basin recent records

Storm no. | Time of reading | Date of reading | Ponded water depth, m | Infiltration rate, m/day
1 | 13:35 | 1/17/2022 | 6.80 | -
  |       | 1/18/2022 | 6.30 | 0.50
  |       | 1/19/2022 | 5.85 | 0.45
  |       | 1/20/2022 | 5.48 | 0.37
  |       | 1/21/2022 | 5.18 | 0.30
  |       | 1/22/2022 | 4.90 | 0.28
2 | 12:15 | 1/30/2022 | 6.47 | -
  |       | 1/31/2022 | 6.00 | 0.47
  |       | 2/1/2022 | 5.67 | 0.33
  |       | 2/2/2022 | 5.40 | 0.27
  |       | 2/3/2022 | 5.20 | 0.20
  |       | 2/4/2022 | 5.00 | 0.20
3 | 13:30 | 2/6/2022 | 8.05 | -
  |       | 2/7/2022 | 7.50 | 0.55
  |       | 2/8/2022 | 7.00 | 0.50
  |       | 2/9/2022 | 6.55 | 0.45
  |       | 2/10/2022 | 6.15 | 0.40
  |       | 2/11/2022 | 5.80 | 0.35
4 | 13:30 | 2/13/2022 | 7.48 | -
  |       | 2/14/2022 | 7.00 | 0.48
  |       | 2/15/2022 | 6.60 | 0.40
  |       | 2/16/2022 | 6.25 | 0.35
  |       | 2/17/2022 | 6.00 | 0.25
  |       | 2/18/2022 | 5.80 | 0.20

IV. Results and discussion

1. Evaluation of basins

To examine the differences between the three basins, several aspects should be considered, starting with a comparison between the old readings (2017-2018 wet season) and the current readings (2021-2022 wet season) of infiltration rates. From this, the changes in basin efficiency in recent years can be identified, and the actual infiltrated volume of stormwater (from the different seasons) can then be compared to the design volume for the three basins. Several parameters affecting the infiltration capacity of artificial basins were not discussed in this study and should be considered in the authors' future studies. A full comparison of the infiltration basins was elaborated in order to identify the best technique and the most efficient infiltration technology that can be applied in the Gaza Strip.

A. Waqf basin

First, the old readings of Waqf basin during the 2017-2018 wet season showed that the infiltration rate (represented by the drop in water level) did not exceed 30 cm/day at a water depth of 3.8 m, as shown in Table 2. At a water depth of 1.70 m, the infiltration rate was measured as 0.10 m/day; substituting into Equation 6 gives:

Actual infiltration capacity = 0.10 (m/day) × 10,000 (m2) = 1,000 m3/day, at a water depth of 1.70 m.

The design infiltration capacity of Waqf basin was estimated as 50,000 m3 in a design report [15]. Obviously, there was a large difference between the actual infiltration capacity (in the old season) and the design infiltration capacity. The design report [15] assumed that the entire basin floor area acts as an infiltration surface allowing stormwater to flow into the ground, as the bottom of the basin had been replaced with more permeable (yellowish sand) layers than before. This may have promoted the infiltration rate for a while; however, the system malfunctioned again due to the accumulation of sediments on the basin floor, which clogged and blocked the soil pores and significantly decreased the infiltration rate. The system failure was attributed to several factors: the low permeability of the underlying soil layers; the untreated stormwater entering the basin, laden with suspended substances; and the lack of maintenance and repair of the basin (repair and cleaning after the end of each wet season). Maintenance may include replacing the topsoil, which acts as a "bottleneck layer" preventing stormwater from passing into the underlying soil layers. Other measures could be considered, for example plowing, disking, and scraping of the basin floor before every wet season, depending on field reconnaissance visits to determine the appropriate intervention.
Nonetheless, the Municipality of Gaza commenced the second stage of development for Waqf basin, which was completed in 2021. This time the upgrade involved drilling and constructing 18 boreholes (drywells), as previously discussed. These boreholes greatly increased the infiltration rate, as confirmed during the 2021-2022 rainy season, when the infiltration rate reached 55 cm/day at a water depth of 4.95 m.

Figure 7: Graveled borehole gabions at Waqf basin: (a), (b)

The boreholes shown in Figure 7 dramatically improved the infiltration rate because they bypass the relatively impermeable layers underneath the basin floor. The technique is still novel: empty UPVC pipes keep the boreholes hollow, serving only to convey the stormwater that passes through the graveled gabions on the basin floor. According to [16], where the Richards equation was used with the HYDRUS (2D/3D) software (a 3D numerical modeling tool), the infiltration capacity of each borehole was 232 m3/day, accounting for clogging and groundwater mounding effects that can reduce the infiltration rate of the boreholes; thus, the design infiltration capacity of the 18 boreholes was estimated as 3,480 m3/day. The drop in the water level in Waqf basin during the 2021-2022 wet season was measured as 55 cm/day at a water depth of 4.95 m, as in Table 6, while at a water depth of 1.70 m the infiltration rate was determined by the best-fit regression model (a power function relation) and estimated as 0.20 m/day. Substituting into Equation 6 gives an actual infiltration capacity of 0.20 × 10,000 = 2,000 m3/day. This result shows the improvement in the infiltration rate of Waqf basin, which was very close to the design infiltration capacity obtained in [16]; the boreholes were working properly according to the expected pre-planned capacity. The infiltration efficiency of the system was then estimated by substituting into Equation 7.
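The rate at the common 1.70 m reference depth is interpolated with a best-fit power function, rate = a × depth^b. The paper does not report the fitted coefficients or the fitting procedure, so the sketch below only illustrates the standard log-log least-squares approach; the helper name `fit_power` and the pairing of start-of-day depths with that day's rate (taken from Table 6, storm 2) are our assumptions.

```python
import math

def fit_power(depths, rates):
    """Least-squares fit of rate = a * depth**b via linear regression
    on the log-transformed data (a 'power function relation')."""
    xs = [math.log(d) for d in depths]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Illustrative pairs from Table 6, storm 2 (depth at start of day, m;
# net drop over that day, m/day). Because the paper's own data pooling
# is not reported, the fitted value here is for demonstration only and
# need not reproduce the paper's 0.20 m/day exactly.
depths = [4.95, 4.37, 3.85, 3.40, 3.00]
rates = [0.55, 0.52, 0.45, 0.40, 0.30]
a, b = fit_power(depths, rates)
rate_at_170 = a * 1.70 ** b  # rate evaluated at the 1.70 m reference depth
```

Evaluating the fitted curve at 1.70 m is what allows the seasons and basins to be compared at a single common water depth.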
Infiltration efficiency (%) = 2,000 (m3/day) / 3,480 (m3/day) × 100 = 57.47%, at a water depth of 1.70 m in the 2021-2022 wet season.

The efficiency achieved demonstrates that Waqf basin has a highly efficient infiltration technique, obtained after the second stage of development and upgrade works. However, a decrease in the infiltration rate from storm 1 to storm 5 is observed in Table 6, due to the continuous accumulation of silt and clay during the same season. Therefore, an end-of-season maintenance program should be activated, which may include backwashing of the borehole gabions to clean and flush the plastic geotextile mesh covering the slotted areas around the UPVC pipes. See Figure 8 for the location of the 18 boreholes at Waqf basin.

Figure 8: Location of the 18 boreholes at Waqf basin

B. Asadaqa basin

At Asadaqa basin, the system was designed using a combination of drywells (vadose-zone wells) and surface spreading, as in [14], with a total of 293 boreholes filled with gravel, as previously mentioned. The design infiltration capacity of each borehole was estimated as 246 m3/day; multiplying by the total number of boreholes gives a design infiltration capacity of 72,078 m3/day. Results showed that the system was working properly, as tested and verified in the 2016-2017 wet season [14]. However, the infiltration rate decreased over time, which was noticeable through the water level readings in the 2017-2018 and 2021-2022 rainy seasons. The average infiltration rate (south and north basins) at a water depth of 1.70 m in the 2017-2018 wet season was estimated as 0.34 m/day, as seen in Tables 3 and 4.
Applying Equation 6 to the southern and northern basins for the 2017-2018 rainy season gives:

Actual infiltration capacity = 8,000 (m2) × 0.34 (average infiltration rate of the two sub-basins at a water depth of 1.7 m) ≈ 2,743 m3/day, in the 2017-2018 wet season.

The infiltration rates obtained from the readings of the southern and northern basins in the 2021-2022 wet season (Tables 7 and 8) were estimated as 0.11 and 0.2 m/day at water depths of 0.86 and 0.67 m, respectively. The best-fit regression model (a power function relation) of the readings in Tables 7 and 8 was used to obtain an infiltration rate of 0.35 m/day at a water depth of 1.70 m in the 2021-2022 wet season, so that:

Actual infiltration capacity = 8,000 (m2) × 0.35 = 2,800 m3/day.

The infiltration efficiency was then calculated by applying Equation 7:

Infiltration efficiency (%) = 2,800 (m3/day) / 72,078 (m3/day) × 100 = 3.90%, at a water depth of 1.70 m in the 2021-2022 wet season.

The actual infiltration capacity obtained was very close to that of the 2017-2018 rainy season, demonstrating that the system was operating properly, with no significant reduction in infiltration capacity between the two seasons. However, the actual infiltration capacity in both seasons was far below the design infiltration capacity: the obtained infiltration efficiency did not exceed 3.9%, which highlights the importance of repair and maintenance of the system. The graveled boreholes are difficult to clean by backwashing, as they are drywells that only receive stormwater for recharge and cannot be pumped in the opposite direction. Restoration may therefore require replacing the clogged surface layer of yellowish fine sand, which acts as a filter for stormwater before it passes into the groundwater.

C. Alamal basin

Alamal basin applies only the surface spreading technique, with no infiltration wells or drywells.
In the 2017-2018 and 2021-2022 rainy seasons, the water level readings were recorded in a similar way. The actual infiltration capacity in the 2017-2018 wet season was:

Actual infiltration capacity = 17,000 (m2) × 0.0062 = 105.4 m3/day, at a water depth of 1.7 m.

Using the best-fit regression model (a power function relation) of the readings recorded in Table 9, the infiltration rate was estimated as 0.037 m/day at a water depth of 1.70 m in the 2021-2022 rainy season, and the actual infiltration capacity was estimated as 629 m3/day. The infiltration efficiency was then calculated by applying Equation 7 as follows:

Infiltration efficiency (%) = 629 (m3/day) / 13,670 (m3/day) × 100 = 4.60%, at a water depth of 1.70 m in the 2021-2022 wet season.

This very low infiltration efficiency reflects the large difference between the design infiltration capacity and the actual infiltration capacity in the 2021-2022 wet season. The accumulation of sediments (suspended solids), which form a thick and dense layer of silt and clay, was the reason for the significant reduction in the actual infiltration capacity, and the situation worsened over time, especially in the absence of a regular repair and maintenance program for Alamal basin.

2. Comparison of the three basins

The previous sections presented the water level readings recorded at the basins during two wet seasons. The differences in efficiency depend on several factors. One of the main factors is the geological characteristics of the soil underneath the basin floor, which can determine the applicable infiltration technique prior to the design phase. When the soil layers have acceptable permeability and saturated hydraulic conductivity, with no clay lenses or confined aquifers, the surface spreading technique is applicable and efficient, as at Alamal basin, which uses that technique.
More importantly, this technique should be used when mixing of sewage with stormwater is likely, so that the SAT (soil aquifer treatment) system can filter contaminated stormwater before it infiltrates into the aquifer. Nevertheless, water supplied to surface recharge systems should be of adequate quality to prevent excessive clogging of the infiltrating surface. Clogging of the infiltrating surface, and the consequent reduction in infiltration rate, is the bane of all artificial recharge systems. Minimizing its effects may require pretreatment of stormwater to reduce suspended solids, nutrients, and organic carbon, as well as regular drying of the system to allow peeling, cracking, and physical removal of the clogging layer. Clogging reduced the infiltration efficiency of Alamal basin to 4.6% in the 2021-2022 wet season, given the lack of any repair and maintenance program. The technique used at Asadaqa basin is a combination of surface spreading and deep recharge using graveled drywells (vadose-zone wells). This technique was chosen because the surface soil layer was of low permeability (dark brown sandy and silty clay) and too thick to be replaced or removed. The boreholes were drilled and distributed over the entire basin surface to accelerate the imbibition of stormwater into the deep soil layers. With this technique, the collected stormwater must not mix with wastewater; only clean stormwater should enter the basin. Asadaqa basin was functioning properly, with a decreasing infiltration rate over time, as described by the water level readings obtained during the studied wet seasons. The infiltration efficiency was significantly reduced to about 3.90% in the 2021-2022 wet season compared to the design infiltration capacity. The main advantage of recharge trenches or wells (drywells) in the vadose zone is that they are relatively inexpensive.
However, the disadvantage (reflected in the low infiltration efficiency of 3.90%) is that they eventually clog at their infiltrating surface due to the accumulation of suspended solids and/or biomass. Since they are in the vadose zone, the boreholes cannot be redeveloped or backwashed to restore infiltration efficiency. To minimize clogging, water should be pretreated before infiltration, or a sand filter should be placed, possibly with a geotextile fabric on top of the backfill, as discussed earlier in this study. At Waqf basin, a combination of surface spreading and drywells was applied; however, the drywells are not graveled, and surface spreading plays no significant role. This technique was chosen because the underlying soil media was of low permeability, so surface spreading alone could not deliver the design infiltration capacity. The technique is a novel and emerging technology, since the drilled boreholes are empty rather than gravel-filled, extending into the vadose zone without reaching the groundwater table and leaving only 6.0 m for the SAT system to clean the infiltrated stormwater. This raises the necessity of discharging only high-quality (pretreated) water into the basin, to avoid clogging of the infiltrating shafts (drywells) and closure of the slotted area on the pipe perimeter, which cannot be backwashed for cleaning. The technique performed well during the 2021-2022 wet season, with a high infiltration efficiency of 57.47%.

Table 10: Comparison of the three basins

Parameter | Waqf basin | Asadaqa basin | Alamal basin
Catchment area (km2) | 6.0 | 2.5 | 10.0
Basin floor area (m2) | 10,000 | 8,000 | 17,000
Infiltration technique | Non-graveled boreholes (drywells) | Graveled boreholes (drywells) | Surface spreading
No. of boreholes | 18 | 293 | N/A
Capacity of borehole (m3/day) | 232 | 246 | N/A
Soil type | Yellow fine sand, yellow coarse sand, gravelly sand, and sandy gravel | Dark brown sandy and silty clay, kurkar | Highly heterogeneous with relatively impermeable clay layers
Design infiltration capacity (m3/day) | 3,480 | 72,078 | 13,670
2017-2018 wet season, at a water depth of 1.70 m:
Infiltration rate (m/day) | 0.10 | 0.34 | 0.0062
Infiltration capacity (m3/day) | 1,000 | 2,743 | 105.4
2021-2022 wet season, at a water depth of 1.70 m:
Infiltration rate (m/day) | 0.20 | 0.35 | 0.037
Infiltration capacity (m3/day) | 2,000 | 2,800 | 629.0
Infiltration efficiency (%) | 57.47 | 3.90 | 4.60

A regular maintenance and repair program involving cleaning and backwashing of both the gabions and the plastic geotextile mesh is important before every winter season. Table 10 compares the three basins across several themes. The infiltration capacity of Waqf basin doubled from 1,000 m3/day in the 2017-2018 wet season to 2,000 m3/day in the 2021-2022 wet season, owing to the recent development and upgrade works (construction of the 18 boreholes), with an infiltration efficiency of 57.47%, as previously discussed. The technique proved to be a highly efficient infiltration system and a novel solution to the long drainage time at Waqf basin. Full modeling and simulation of the basin with all 18 boreholes still needs to be performed to realistically verify the design infiltration capacity of the basin at various stormwater levels. At Asadaqa, by contrast, the actual infiltration capacity was almost the same in the two wet seasons, at around 2,800 m3/day, but with a very low infiltration efficiency of 3.90%.
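The efficiency row of Table 10 follows directly from Equations 6 and 7 applied to the tabulated rate, floor area, and design capacity. The short sketch below recomputes it for the three basins; the dictionary layout is ours, and the input values are the ones in Table 10.

```python
# Recomputing the 2021-2022 rows of Table 10 from Equations 6 and 7.
basins = {
    # basin: (rate at 1.70 m in 2021-2022, m/day; floor area, m2; design capacity, m3/day)
    "Waqf":    (0.20,  10_000,  3_480),
    "Asadaqa": (0.35,   8_000, 72_078),
    "Alamal":  (0.037, 17_000, 13_670),
}

for name, (rate, area, design) in basins.items():
    actual = rate * area                  # Equation 6: m3/day
    efficiency = 100.0 * actual / design  # Equation 7: percent
    print(f"{name}: {actual:,.0f} m3/day, {efficiency:.2f} %")
# prints:
# Waqf: 2,000 m3/day, 57.47 %
# Asadaqa: 2,800 m3/day, 3.88 %
# Alamal: 629 m3/day, 4.60 %
```

Note that the Asadaqa value computes to 3.88%; the paper reports it as 3.90%, presumably due to rounding.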
Any upgrade of the Asadaqa system will be very expensive, as it may require removing the entire clogged layers and cleaning the geotextile layers, then fixing them back and backfilling with clean layers of fine sand as before. At Alamal basin, the actual infiltration capacity also increased, to 629 m3/day in the 2021-2022 wet season compared with only 105.4 m3/day in the 2017-2018 wet season; this was attributed to the repair and maintenance performed on the basin's surface layers, such as disking and scraping of the clogging sediment layers before the start of the 2021-2022 wet season. Despite this, the infiltration efficiency of 4.60% was still very low, which may be due to an inaccurate design infiltration capacity in [13], as the hydraulic conductivity and hydraulic gradient changed significantly over time, resulting in a changed infiltration rate. It was also evident that Asadaqa basin had the highest infiltration rate of the three basins but not the highest infiltration efficiency; this was noticed during field visits on specific rainy days when no stormwater was retained in the basin (Figure 9).

Figure 9: Infiltration rate of the three basins in two wet seasons

It is worth noting that the water depth of 1.70 m was selected for comparing the basins across wet seasons because the variation in water depth affects the infiltration rate, and infiltration rates at a water depth of 1.7 m in the 2017-2018 wet season had already been recorded in [18], allowing comparison with the results of the 2021-2022 wet season. Figure 10 shows the change in infiltration rate over time for the three infiltration basins during specific storm events, superimposed in one graph.
The first storm event of each wet season was selected because the infiltration rate at the basins varies over time; the rate also changes within the same wet season, from storm 1 to storm 5, owing to continuous biofouling and siltation of the basin floor.

Figure 10: Infiltration rate of the three basins over time: storm 1 (a), storm 2 (b)

V. Conclusion and recommendations

Stormwater infiltration is important and indispensable for groundwater recharge. In this study, three infiltration techniques used in the Gaza Strip were investigated: surface spreading, surface spreading combined with graveled boreholes, and surface spreading combined with non-graveled boreholes. The techniques were compared over two wet seasons, and the actual infiltration capacity was compared with the design infiltration capacity at each basin. The infiltration efficiency (%) was also calculated for each basin, studied, discussed, and compared. The infiltration technique used at Waqf basin showed a significant increase in the actual infiltration capacity, to 2,000 m3/day at a water depth of 1.70 m in the 2021-2022 wet season, with the highest infiltration efficiency of 57.47%.
Meanwhile, the technique used at Asadaqa basin was still functioning properly, without a significant reduction in the actual infiltration capacity between the two wet seasons (2,743 m3/day in the 2017-2018 wet season and 2,800 m3/day in the 2021-2022 wet season). Despite this, Asadaqa basin had the lowest infiltration efficiency, 3.90%, compared to the other basins, since the soil pores were clogged, preventing stormwater from passing through the topsoil layers to the graveled boreholes and on to the groundwater. The low infiltration efficiency was attributed to the lack of a repair and maintenance program, which should be put in place by the local municipalities. The infiltration rate at Alamal basin needs further improvement, as its infiltration efficiency was only 4.60%; this can be achieved by drilling drywells (boreholes), which can accelerate the infiltration rate into the underlying soil layers, provided that the collected stormwater is clean, safe, and not mixed with wastewater, to protect the groundwater from contamination. It is recommended that future studies focus on the factors affecting the infiltration rate, to further evaluate the most efficient technique that can be applied in the Gaza Strip, together with an accurate quantification of surface runoff in the winter season to precisely determine the volume of stormwater entering each basin.
Nonetheless, in-depth studies and investigations should be conducted for Waqf basin to estimate its infiltration capacity using software modeling tools and to assess the effectiveness of increasing the number of drilled boreholes with the same emerging technique.

VI. References

[1] Palestinian Water Authority (PWA), "Seasonal Rainfall Assessment in Gaza Strip", report prepared by PWA, Gaza, Palestine, 2013.
[2] Coastal Municipalities Water Utility (CMWU), "Annual Progress Report in 2015", Gaza Strip, Palestine, 2016.
[3] S. Assouline, "Infiltration into soils: Conceptual approaches and solutions," Water Resources Research, vol. 49, no. 4, pp. 1755–1772, Apr. 2013, doi: 10.1002/wrcr.20155.
[4] W. Brutsaert, Hydrology: An Introduction. Cambridge; New York: Cambridge University Press, 2005.
[5] N. P. Sonaje, "Modeling of infiltration process – a review," Indian Journal of Applied Research, vol. 3, no. 9, pp. 226–230, doi: 10.15373/2249555x/sept2013/69.
[6] A. N. Kostiakov, "On the dynamics of the coefficients of water percolation in soils and the necessity of studying it from a dynamic point of view for purposes of amelioration," Trans. Com. Int. Soc. Soil Sci., 6th, Moscow, Part A, pp. 17–22, 1932.
[7] R. E. Horton, "The rôle of infiltration in the hydrologic cycle," Transactions, American Geophysical Union, vol. 14, no. 1, p. 446, 1933, doi: 10.1029/tr014i001p00446.
[8] R. E. Horton, "An approach toward a physical interpretation of infiltration-capacity," Soil Science Society of America Journal, vol. 5, no. C, pp. 399–417, 1941, doi: 10.2136/sssaj1941.036159950005000c0075x.
[9] J. R. Philip, "The theory of infiltration," Soil Science, vol. 83, no. 5, pp. 345–358, May 1957, doi: 10.1097/00010694-195705000-00002.
[10] W. Heber Green and G. A. Ampt, "Studies on soil phyics," The Journal of Agricultural Science, vol. 4, no. 1, pp. 1–24, May 1911, doi: 10.1017/s0021859600001441.
[11] Y. S. Fok, "Evolution of algebraic infiltration equations," Proc. of the Int. Conf. on Infiltration Development and Application, Univ. of Hawaii, Jan. 6–9, 1987.
[12] J. Massmann, A Design Manual for Sizing Infiltration Ponds (No. WA-RD 578.2), Washington State Transportation Commission, 2003.
[13] Coastal Municipalities Water Utility (CMWU), "Consultancy Service for the Detail Design of Retention and Infiltration Basin in Al-Amal Area at Khan Younis Governorate", Gaza Strip, Palestine, 2014.
[14] Y. Mogheir, "Design of Asadaqa Infiltration Basin Located at Atuffah Area", Gaza Strip, Palestine, 2014.
[15] Y. Mogheir, "Enhancement of Infiltration Capacity at Waqf Stormwater Basin", Gaza Strip, Palestine, 2021.
[16] Global Vision Consultants (GVC), "Enhancement of Infiltration Capacity at Waqf (Asqula) Stormwater Basin", Gaza Strip, Palestine, 2021.
[17] H. K. Sirhan, Numerical Feasibility Study for Treated Wastewater Recharge as a Tool to Impede Saltwater Intrusion in the Coastal Aquifer of Gaza-Palestine, doctoral dissertation, 2014.
[18] J. Abu Shammala, "Assessment of Stormwater Infiltration Basins in Gaza Strip, Case Study: Asadaqa Basin, Asqula Basin, Alamal Basin", unpublished MSc thesis, Islamic University-Gaza, Palestine, 2020.

Zakaria Helles: Head of engineering at Ma'an Development Center, one of the leading humanitarian NGOs in the Gaza Strip. He obtained his MSc degree in civil engineering and has 18 years of experience in managing, designing, and supervising engineering projects. He is currently a PhD researcher in water topics and engineering in the joint water technology program between the Islamic University-Gaza and Al-Azhar University-Gaza.

Yunes Mogheir: Professor of water resources and environmental engineering at the Islamic University-Gaza. He is a prominent expert in water resources and strategic planning. He obtained his MSc degree from IHE-Delft, the Netherlands, in March 1997, and his PhD from the University of Coimbra, Portugal, in February 2004.
Journal of Engineering Research and Technology, Volume 10, Issue 1, March 2023. Received on 12-06-2022; accepted on 11-11-2022.

Ultra-High-Performance Concrete (UHPC) Applications Worldwide: A State-of-the-Art Review

Bassam A. Tayeh1*, Lawend K. Askar2, Mand K. Askar3, B. H. Abu Bakar4

1 Civil Engineering Department, Faculty of Engineering, Islamic University of Gaza, Gaza Strip, Palestine
2 Technical College of Engineering, Duhok Polytechnic University, Iraq, lawend.kamal@dpu.edu.krd
3 Technical College of Engineering, Duhok Polytechnic University, Iraq, mand.askar@dpu.edu.krd
4 School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia

https://doi.org/10.33976/jert.10.1/2023/2

Abstract. Research is in progress on the applications of ultra-high-performance concrete (UHPC) as a new additive material in construction technology. Over the last twenty years, significant improvements have been achieved in the mechanical properties of UHPC, such as its strength, workability, and ductility, with further improvements in self-placing properties, higher density, and durability compared with normal concrete. One of the biggest advantages offered by UHPC over normal concrete is the possibility of minimizing the cross-sectional dimensions of structural elements. UHPC can be used to provide significantly longer-span members, whilst also showing less variation, creep, and drying shrinkage compared to conventional concrete. After many years of development and research into UHPC's properties, it is now being used in commercial applications to meet the rising demand for quality construction. Many projects worldwide have begun using UHPC for different construction objectives such as long spans, columns, jacketing, rain-screen cladding systems, panel systems, and façades. Furthermore, there are not enough sources in the literature describing mixture design, preparation, and curing.
This research gives an overview of the uses of UHPC in structural and architectural applications.

Keywords: UHPC, structural application, architectural application.

1. Introduction

UHPFC provides significantly improved mechanical properties compared with normal concrete through the use of a concrete mix without coarse aggregates, minimization of the quantity of water required, and the addition of materials such as silica fume and steel fiber [1-5]. Said et al. [6] and Elsayed et al. [7] suggested that the high compressive strength of ultra-high-performance fiber-reinforced concrete (UHPFC) allows it to be used as a conventional jacketing material to ensure a strong mechanical bond between normal concrete (NC) and UHPFC. Various researchers [8-10] have conducted studies focused on the mechanical properties of UHPFC as a new concrete material, while others [11, 12] have studied UHPFC as a composite material. There is, however, limited information available on UHPFC's behavior as a repair material, specifically its bonding behavior.

Ultra-high-performance fiber concrete (UHPFC) is a new class of concrete. Because of its distinguished mechanical properties, UHPFC is considered an ideal alternative material for developing new structural solutions. Concrete is the second most consumed substance in the world after water and the most commonly used building material. It is a construction material whose compressive strength is significantly higher than its tensile strength; as a result, concrete is considered a brittle material [13]. Advances in the science of concrete materials have led to the development of a new class of cementitious composites, namely UHPFC.
The mechanical properties of UHPFC make it an ideal alternative material for developing new solutions to pressing concerns about highway infrastructure deterioration, repair, and replacement [14]. This research gives an overview of the uses of UHPC in structural and architectural applications.

1.1 Ultra-High Performance Fiber Concrete (Definitions, Contents, Properties and Applications)

During the last two decades, the demand for ultra-high-performance concrete has increased in relation to mega projects and high-rise buildings [15]. Ultra-high-performance fiber-reinforced cement production is the result of many years of development of high-performance concrete to obtain a grain-binder matrix appropriate for the granular structure and cementitious-binder composition. UHPFC displays better mechanical properties than conventional concrete with regard to compressive strength, modulus of elasticity, tensile strength, and post-cracking elastic behavior. It also has a high density, which results in a longer structure life as porosity and permeability are reduced [16-18]. Habel et al. [19] classified fiber-reinforced concrete as shown in Fig. 1.

Fig. 1: Classification of FRC [36]

Morin et al. [20] state that the material used in UHPFC has a high-technology cement-based matrix and a high fiber content that result in strong and durable concrete. Nevertheless, UHPFC applications are rare due to its cost compared with traditional concrete. Similarly, Rossi (2002) describes UHPFC as a dense concrete that contains a large number of evenly embedded steel fibers, resulting in high tensile strength, strain hardening, and low permeability.
Based on [21], compressive strength can reach 150 MPa, with better ductility compared with normal-strength concrete. Moreover, testing of the compressive strength of HPC and UHPFC by other researchers [15, 22] indicated that UHPFC has better tensile strength and durability. One of the significant features of UHPFC is the reduction of the water content in the mixture, which enhances the concrete's mechanical properties. Another feature is the replacement of coarse aggregate with refined silica fume and steel fibers [23]. Ghafari et al. [24] suggest that UHPFC's efficiency depends on its density, which can be increased by optimizing the particle packing to obtain ultra-high consolidation of the concrete matrix, with an ideal grain size distribution achieved through a homogeneous gradient of fine and coarse particles in the mixture.

1.2 Ultra-High Performance Fiber Concrete Components

The homogeneity of a UHPC mix containing no coarse aggregate enhances its properties compared with normal concrete [15]. Optimization of the grain size distributions in UHPC materials gives UHPC very low permeability due to its dense matrix [25]. The dimensions of the materials, from the largest to the smallest, are as follows: sand, very finely graded and ranging from 150 µm to 600 µm, is the largest material in the UHPC mix; the second largest granular material is cement, with an average diameter of 15 µm; next is crushed quartz, with an average diameter of 10 µm; finally, silica fume has the smallest particle size in the UHPC mix (Table 1). When steel fiber is added to the UHPC mix to improve its ductility, it becomes the largest component in the UHPC [15].
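The particle-packing optimization mentioned above is usually driven by a target gradation curve. The sketch below is not from the paper: it uses the modified Andreassen (Dinger-Funk) model, a common target curve for dense yet flowable packings; the distribution modulus q = 0.23 and the example particle sizes (spanning silica fume at about 0.2 µm up to the coarsest sand fraction at about 600 µm, as given in the text) are illustrative assumptions.

```python
# Illustrative sketch -- NOT from the paper. The modified Andreassen
# (Dinger-Funk) model gives a target cumulative gradation CPFT(d),
# the percent of particles finer than size d, for dense packing.

def modified_andreassen(d, d_min, d_max, q=0.23):
    """Target cumulative % finer than size d (units must match d_min/d_max)."""
    return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

# Example sizes from the text: silica fume ~0.2 um, cement ~15 um,
# crushed quartz ~10 um, sand 150-600 um. q = 0.23 is an assumed,
# commonly quoted modulus for self-flowing mixes.
if __name__ == "__main__":
    for d in (0.2, 10.0, 15.0, 150.0, 600.0):
        print(f"{d:>6} um -> target {modified_andreassen(d, 0.2, 600.0):5.1f} % finer")
```

A real mix design would compare the combined gradation of the actual constituents against this target curve and adjust dosages to minimize the deviation.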
Table 1: Range of UHPFC mix components [26]

Component         Typical range by weight (kg/m³)
Sand              490-1390
Cement            610-1080
Silica fume       50-334
Crushed quartz    0-410
Fibers            40-250
Superplasticizer  9-71
Water             126-261

Using sand in the UHPC mix confines the cement matrix and adds strength. In addition, a variety of quartz sand that is not chemically active in the cement hydration reaction at room temperature should be used [27]. Vernet [28] demonstrated that, because of the low water content of the mix, some cement grains in the UHPC cannot become fully hydrated; in fact, the anhydrous cement grains act as high-elastic-modulus reinforcements in the matrix. Silica fume with a diameter of 0.2 µm is used in the UHPFC matrix. It fills the voids between the cement grains as well as forming hydration products by pozzolanic activity and enhancing the rheological characteristics: as the OPC hydrates, the silica fume reacts with Ca(OH)2, the latter being consumed to produce C-S-H hydrates [29-31]. In a concrete mix, workability is affected by both the fiber size and the coarse aggregate size. In the case of UHPFC without coarse aggregate, the size of the steel fibers affects the concrete's flowability; the workability of UHPFC mixes clearly decreases with increasing fiber size [32]. Third-generation superplasticizers (polycarboxylates and polycarboxylate ethers) are generally used in UHPFC mixes for their high efficiency at the low water/cement ratios involved [33].

1.3 Mechanical and Durability Properties of UHPC

Ultra-high-performance fiber concrete is an advanced concrete material made up of an extremely strong cementitious matrix with a high fiber content. UHPFC is more durable than regular concrete because of its high tensile and compressive strengths, which are made possible by the use of powder components [34, 35].
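As a quick illustration of how the ranges in Table 1 can be used, the sketch below (not from the paper; the candidate dosages and the binder definition are illustrative assumptions) checks a hypothetical mix against the published ranges and computes its water-to-binder ratio, whose reduction the text identifies as a key feature of UHPFC.

```python
# Illustrative sketch -- NOT from the paper. Checks a hypothetical
# UHPFC mix proportion (kg/m^3) against the typical ranges of Table 1 [26]
# and computes the water-to-binder ratio (binder = cement + silica fume,
# an assumption made here for illustration).

TABLE_1_RANGES = {                    # component: (min, max) in kg/m^3
    "sand": (490, 1390),
    "cement": (610, 1080),
    "silica_fume": (50, 334),
    "crushed_quartz": (0, 410),
    "fibers": (40, 250),
    "superplasticizer": (9, 71),
    "water": (126, 261),
}

def check_mix(mix):
    """Map each component to True if its dosage lies within the Table 1 range."""
    return {name: lo <= mix.get(name, 0.0) <= hi
            for name, (lo, hi) in TABLE_1_RANGES.items()}

def water_binder_ratio(mix):
    """Water-to-binder ratio; UHPFC mixes are characterized by low values."""
    return mix["water"] / (mix["cement"] + mix["silica_fume"])

# A hypothetical mix near the middle of the published ranges:
mix = {"sand": 1000, "cement": 800, "silica_fume": 200,
       "crushed_quartz": 250, "fibers": 150,
       "superplasticizer": 30, "water": 180}

assert all(check_mix(mix).values())   # every dosage within the Table 1 range
print(f"w/b = {water_binder_ratio(mix):.2f}")   # -> w/b = 0.18
```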
In comparison to regular concrete, UHPFC's strain-hardening behavior in tension ensures that crack openings stay smaller and offers a material with greater ductility. Ultra-high-performance fiber-reinforced concretes are a type of cement composite formed from a distributed three-dimensional reinforcement of steel or synthetic fibers and a strong, compact powder-based matrix. Among other qualities, the fibers provide ductility and, in sufficient quantities, cause strain-hardening behavior in tension. According to Aitcin et al. [36], UHPFC achieves its very high mechanical characteristics at an industrial level on the reasoning that coarse aggregate is concrete's weakest component: to increase the compressive strength of concrete, the coarse aggregate needs to be removed. This claim illustrates UHPFC's potential strength. UHPFC's advantages are listed below:

1. Simple placement (good filling and passing ability).
2. High early strength.
3. Long-term mechanical properties.
4. Low permeability.
5. Stability of volume.
6. Long life in harsh conditions.

Additionally, the following benefits of UHPFC can be summed up (Schneider, 2002):

1. The removal of coarse aggregate improves homogeneity.
2. Increasing the packing density via granular mixture optimization using a broad range of powder size classes.
3. Enhancing the matrix's characteristics by including a pozzolanic admixture, such as silica fume.
4. Improving matrix characteristics by lowering the water-to-binder ratio.
5. Improvement of the microstructure by heat treatment.
6. Increasing ductility by using tiny steel fibers.
7. Increasing compressive strength.

There is a weak transition zone between the aggregate and paste in conventional concrete and UHPFC [37].
The aggregates in UHPFC are a collection of inclusions in a continuous matrix, and their sizes are extremely small. As a result, the matrix, rather than a stiff skeleton of aggregates, may transfer the compressive force, which lessens the stresses that form at the paste-aggregate contact. In UHPFC, the transmittal of stresses by the surrounding matrix and the aggregates results in a much more uniform stress distribution, which can lessen the likelihood of shear and tensile cracking at the interface [25]. The stiff framework in typical concrete also inhibits some paste shrinkage, which increases porosity. In UHPFC, however, aggregates only partially prevent paste shrinkage, so deleting both fine and coarse aggregates is not fully advantageous, according to the hypothesis of maximum paste thickness: cement paste is constrained by aggregates, and the compressive strength of the composite actually falls as the paste thickness between particles increases [38]. In order to preserve the best possible compressive strength, fine aggregate is kept in UHPFC. Improvements to aggregate gradations and the use of a superplasticizer with high-range water reduction led to the development of UHPFC materials. A typical UHPFC mixture contains sand, cement, silica fume, crushed quartz, fibers, superplasticizer, and water in the ranges shown in Table 2.

Table 2: Range of UHPC mixing components [39]

Component         Typical range (kg/m³)
Sand              490-1390
Cement            610-1080
Silica fume       50-334
Crushed quartz    0-410
Fibers            40-250
Superplasticizer  9-71
Water             126-261

2. Applications of UHPFC

UHPFC is used in structural and architectural applications due to its ideal mechanical properties. The development of UHPC was initiated in the early 1990s to meet the most demanding structural applications.
The superior properties of UHPFC, which have enabled the redesign and optimization of structural elements as well as enhancement of durability, have permitted lengthening of design life and potential use as thin overlays, claddings, or shells [40]. In 1997, the first UHPC structure was constructed in Canada. Since then, UHPC has mostly been used in the construction of pedestrian and road bridges [41], protective panels [42, 43], and architectural applications [44]. Architectural designers have been moving away from synthetic or metal cladding systems; UHPC is therefore a novel solution for use as a cladding system. UHPC panels are flat, thin, lightweight, and easy to install. Moreover, UHPC has high resistance to abrasion and low porosity, resulting in reduced maintenance requirements [45]. In recent years, UHPC has been widely used by construction committees due to its high mechanical properties, such as compressive strength and durability, as well as its high workability, self-placing and self-densifying properties, and non-brittle behavior. The rising demand for UHPC as a construction material led to the development of UHPC formulations for use in commercial industries. UHPC is the 'future' material for improving the sustainability of buildings and other infrastructure components [46]. Syed [47] demonstrated that UHPC can be used for architectural applications where aesthetics are a priority for huge structures such as façades. UHPC can also be used for structural applications due to its high mechanical properties in comparison to conventional reinforced concrete.
[48] mentioned that, due to its high compressive strength and high durability, UHPC is an excellent material for architecture and construction, such as thin façade elements, balconies, joints of prefabricated elements, and staircases. Furthermore, UHPC can be used for a building when the durability of the structure is a concern. UHPC has been utilized in different countries for multiple reasons: in Malaysia for durability and low maintenance purposes, and in the Netherlands for fast and hindrance-free construction. In Switzerland, UHPC has been widely used as a rehabilitation solution for concrete bridges in zones exposed to severe environments, owing to its low permeability and high mechanical strength [49]. The following section presents illustrations of the applications of UHPFC.

2.1 Structural Applications

2.1.1 Pulaski Skyway

The Pulaski Skyway, a 5.6 km long bridge in the northeastern US state of New Jersey, provides a direct link to New York City via the Holland Tunnel. It was opened in 1932 and is listed in the National Register of Historic Places. When the decision was made to replace the bridge deck, the New Jersey Department of Transportation wanted to ensure that the new deck would have a service life of 75 years and need little maintenance during that time, because of the critical importance of the Skyway to the region's transportation and the narrowness of the structure, which makes it difficult to perform maintenance without impacting traffic. Ultra-high-performance concrete (UHPC) is currently being extensively used in the ongoing replacement of the Pulaski Skyway deck. The unique properties of UHPC allow simple and rapid installation of a durable deck system, despite the challenging conditions and limited space. The employment of UHPC for nearly all the precast panel connections means that the connections are no longer the weak points in terms of strength and durability that they traditionally were.
Instead, the connections are the strongest and most durable points of the deck system, stronger and more durable than precast deck panels with shop-cast concrete and corrosion-resistant rebar [50].

Fig. 2: Partial elevated view of the Pulaski Skyway

2.1.2 Sherbrooke Footbridge, Canada

The Sherbrooke footbridge in Sherbrooke, Quebec, Canada, completed in 1997, was the world's first major UHPC structure. The bridge is of post-tensioned open-space truss design and 60 m in length. Using UHPC allowed the top deck to be only 30 mm thick [51, 52].

Fig. 3: Sherbrooke pedestrian bridge, Quebec, Canada [53]

2.1.3 Footbridge of Peace, South Korea

The Footbridge of Peace in Seoul, South Korea, was the first bridge in the world where UHPFC was used as a full replacement for normal concrete. The bridge, completed in 2002, is 120 m long and has an arch height of about 15 m with a 30 to 100 mm deck depth (Fig. 4) [54, 55].

Fig. 4: Footbridge of Peace in Seoul, South Korea

2.1.4 Seonyu Footbridge (Super Bridge 200), Pedestrian Cable-Stayed Bridge, South Korea

In 2002, the Korea Institute of Construction Technology completed a project that used UHPC to construct a hybrid cable-stayed bridge intended to be low cost, increase the normal span by about 20%, and have a longer lifetime. The compressive strength of the UHPC was 180 MPa and the tensile strength was about 10 MPa. UHPC allowed the thickness of the two front girders to be reduced to 70 mm, compared with the 180 mm thickness of the rear OPC girder. The deck dimensions were 2.7 × 7 m as a precast segment [56].

Fig. 5: UHPC girder pedestrian cable-stayed bridge

2.1.5 Sakata-Mirai Footbridge, Japan

The year 2002 saw the completion of the Sakata-Mirai UHPC footbridge (Fig. 6) in Japan, a structure of low weight and aesthetically pleasing appearance that used perforated webs [41].

Fig. 6: The Sakata-Mirai footbridge in Japan

2.1.6 Bourg-lès-Valence Road Bridge, France

The Bourg-lès-Valence road bridge, France (Fig. 7), was the world's first UHPC road bridge. Its construction was supported by the FWG (French Working Group), which in 2002 introduced the first guidelines for bridge design [44].

Fig. 7: Bourg-lès-Valence road bridge, France

2.1.7 Shepherd Creek Road Bridge, Australia

Australia's first UHPC application was the four-lane Shepherd Creek road bridge (Fig. 8), completed in 2004. The bridge was constructed entirely from UHPC, replacing NC, and consists of formwork overlain on a reinforced concrete deck. The beams were 15 m in length by 600 mm in depth and spaced at 1.3 m. The formwork panels were 25 mm in depth, and the weight of the beams was reduced by about half compared to the replaced NC beams [57].

Fig. 8: Shepherd Creek road bridge in New South Wales, Australia

2.1.8 Mars Hill Bridge, Iowa

Mars Hill Bridge was completed in 2006 and replaced a 73-year-old truss bridge (Fig. 9) [58]. It comprised 33 m prestressed UHPC beams. Its construction was supported by the Federal Highway Administration in collaboration with Iowa State University (Graybeal, 2006). As part of a monitoring program, the bridge was then monitored for two years to study its behavior under loading [59].

Fig. 9: Mars Hill Bridge in Wapello County, Iowa

2.1.9 Gaertnerplatz Bridge, Germany

Gaertnerplatz Bridge in Kassel, completed in 2007, was the first UHPC application in Germany (Fig. 10). The structure comprises three steel trusses combined with longitudinal girders and deck slabs. Prefabricated elements were used that consisted of prestressed, fiber-reinforced UHPC.
The bridge spans 132 m with an 85 mm slab thickness [60, 61]. Similar to Mars Hill Bridge, this bridge has been monitored since its construction to verify the design assumptions and the material and load-bearing behavior; the collected data have complied with the expectations of the design phase [62].

(a) Under construction (b) In use
Fig. 10: Gaertnerplatz Bridge in Kassel

2.1.10 Haneda Airport Runway D, Japan

The Haneda Airport Runway D expansion, which started in July 2007, was a most impressive realization, created over the sea to increase the airport's runway capacity. The UHPC prestressed slab was assembled by post-tensioning in the perpendicular direction, supported by a metallic structure [63]. Walraven [64] and Resplendino [65] claimed that the structure is an instance of a significant reduction in structural weight and increased sustainability with respect to environmental impact. Compared with conventional concrete, UHPC resulted in a 50% reduction in the overall weight of the structure [66].

Fig. 11: View of Haneda Airport Runway D

2.1.11 Route 31 Bridge in Lyons, New York

On Route 31 Bridge in Lyons, New York, completed in 2009, field-cast UHPC was used for the connections between precast deck panels (Fig. 12) and also between the top flanges of deck-bulb-tee girders as longitudinal connections [40].

Fig. 12: Longitudinal connections cast between deck-bulb-tee girders on Route 31 Bridge in Lyons, New York

2.1.12 Glenmore/Legsby Pedestrian Bridge, Calgary, Alberta, Canada

The 53 m single-span Glenmore pedestrian bridge in Alberta, Canada, completed in 2007 (Fig. 13), crosses eight lanes of traffic and consists of T-shaped girders made of UHPFRC. The girders are 33.6 m in length and 1.1 m in depth at mid-span, and the deck width is 3.6 m.
The two supported cantilevers are made of high-strength concrete. Passive reinforcement was used in the form of fiber-reinforced polymer (FRP) bars [58].

Fig. 13: Glenmore pedestrian bridge (NPCA)

2.1.13 Kampung Linsum Bridge, Malaysia

The medium-span Kampung Linsum Bridge, completed in 2011, is to date the longest UHPFC composite bridge application in Malaysia. It consists of a single U-trough girder, 1.75 m deep and 2.5 m wide at the top, topped with a 4 m wide cast-in-situ 200 mm thick RC deck (Fig. 14). The UHPFC used in this application achieved a compressive strength of 180 MPa with a 30 MPa tensile strength [21].

Fig. 14: Kampung Linsum Bridge, Rantau, Negeri Sembilan

2.1.14 Batu 6 Bridge, Malaysia

This 100 m bridge, completed in February 2015 in Gerik, Perak, Malaysia, consists of 40 precast segments, each 4.0 m high (Fig. 15), that were transported to the site for placement and tensioning after casting in the factory. The thickness of the webs between the segment ends is 150 mm. The bridge weighs 770 tons; the average prestress on the sections is 17.1 MPa compression; at mid-span the stress at the top of the sections is 19.3 MPa compression and at the bottom 15.0 MPa. Moreover, the measured hog was 50 mm, compared with the theoretically calculated hog of 34.8 mm [67].

(a) Batu 6 Bridge cross section
(b) Batu 6 Bridge factory-cast segments
(c) Batu 6 Bridge after completion of abutment A
Fig. 15: Batu 6 Bridge, Gerik, Perak, Malaysia

2.1.15 Precast Ultra-High Performance Concrete Cantilever Retaining Wall (2016)

This precast ultra-high-performance concrete (UHPC) 40 mm thick cantilever retaining wall was designed based on the Japanese Society of Civil Engineers' recommendations for the design and construction of UHPC structures.
Two thin UHPC slabs were used to construct the retaining wall. The UHPC cantilever is 2 m in length, 2 m wide, and 2.5 m in height (Fig. 16). The wall is strengthened with two 80 mm by 100 mm steel reinforcing stiffeners spaced at 1.25 m along the wall [68].

Fig. 16: Dimensions and details of the conventional precast RC cantilever retaining wall

2.1.16 Production of a Footbridge with Double Curvature Using UHPC (2017)

This experimental design of a single 10 m span bridge used UHPFRC to reduce the structure's thickness to 30-40 mm with a 1.5 m cross-section width. The bridge was cast in one piece as a single prefabricated element; UHPFRC has self-compacting characteristics. The bridge has vertical and horizontal curvatures, with a camber of 0.4 m. The load-bearing section is 45 mm thick at the bottom of the deck and side walls. To ensure shear transfer and the anchorage of forces from the supports, the bridge deck was designed to be thicker at the support areas. Steel fiber reinforcement (U-shaped) was used in the rest of the structure [69]. [70, 71] state that practically permanent durability can be achieved with a very high cement matrix density, minimum porosity, and unconnected pores. These criteria are provided by tiny particles (slag and silica fume) with a low w/c ratio; workability is necessary for achieving these ideal material properties.

Fig. 17: Transport of the final 1:1 mock-up of the footbridge in upside-down position

2.1.17 Kg. Baharu-Kg. Teluk Bridge (KB-KT)

UHPFRC precast sections have been used in the Kg. Baharu-Kg. Teluk (KB-KT) bridge, whose prestressed girders make it one of the longest multiple-span UHPFRC bridges in the world. Located at Ayer Tawar, Manjung, Perak, completed in 2017 and spanning 420 m, the KB-KT bridge consists of 20 UHPFRC precast U-beams.
The span of each beam equals 41.5 m, made up of six segments: two end/anchorage segments spanning 4.75 m each and four standard intermediate segments spanning 8 m each. The dimensions of the segments are 2 m deep, 3 m wide at the top surface, and 1.4 m wide at the bottom flange. The average and characteristic compressive strengths after 1 day were 89 MPa and 78 MPa, respectively; after 28 days they were 167 MPa and 154 MPa, respectively. Regarding flexural strength, after 28 days the average and characteristic strengths reached 29.1 MPa and 24.5 MPa, respectively. The average elastic modulus was 50.7 GPa and Poisson's ratio was 0.2 [72].

(a) KB-KT bridge
(b) Cross-section of KB-KT bridge
Fig. 18: Kg. Baharu-Kg. Teluk Bridge (KB-KT)

2.2 Architectural Applications

2.2.1 Martel Tree Sculpture, France

The Martel Tree, completed in France in 1999, is a sculpture made completely from UHPC. Some of the elements were only 60 mm in thickness [54].

Fig. 19: Martel Tree sculpture made of UHPC [54]

2.2.2 Shawnessy LRT Station with UHPC Canopies, Canada

In 2003, Canada constructed a series of renowned UHPC structures in the form of canopies at Shawnessy LRT station in Calgary. UHPC gave the architects the flexibility to realize their desire for a free-flowing form. To verify the satisfactory behavior of the twenty-four UHPC canopies, each 20 mm in thickness and supported by a single UHPC column, full-scale tests were conducted prior to their installation [63].

Fig. 20: Shawnessy LRT station with UHPC canopies [63]

2.2.3 Cover of Millau Toll, France

The 98 m Millau toll structure in Millau, France, was built in 2004 (Fig. 21). It is 28 m wide and has two thin 100 mm slabs joined together by 12 prestressed beams. The cover was built using UHPC and comprises 53 match-cast, pasted prefabricated elements assembled on a hanger with longitudinal prestressing [73]. The project was an example of UHPC's capabilities in terms of producing complex shapes and a thin covering [65].

Fig. 21: Overview of the cover of the Millau toll [73]

2.2.4 Wilson Hall, Malaysia

In 2008, a prefabricated UHPFC system was used at the Wilson Hall building in Malaysia to construct the entrance frame, which had an 1,861 m² roof area. The transverse width of the building was 67 m and the longitudinal length 42.7 m, with UHPFC portal frames spaced at 12.2 m c/c. The building consisted of eight pieces comprising UHPFC prestressed columns, internal rafters, cantilever rafters, and connections, as shown in Fig. 22 [74]. Voo et al. [75] claimed that this building was the first in the world to replace conventional steel beams with UHPFC prestressed beams/columns.

Fig. 22: Wilson Hall during construction [74]

2.2.5 Stade Jean Bouin, France

This stadium, which opened in 1925 and seats 12,000 people, was closed temporarily for an expansion project that began in the summer of 2010. The stadium reopened in 2013 with an increased capacity of up to 20,000 spectators. Rudy Ricciotti used UHPC as a solution to achieve this technically difficult objective, creating a precast UHPC lattice-style façade system that is light and airy. The result is a remarkable, totally asymmetric envelope, undulating in three dimensions. New technical challenges were presented by the combination of glass and ultra-high-performance concrete (UHPC), which makes the project very original, and by the construction of a watertight envelope covering the stadium's entire surface area.
this 23000 m² envelope includes a 12000 m² roof made of 3,600 self-supporting ductal triangular uhpc panels, each approximately 8 to 9m long by 2.5m wide and 0.45 m thick. the envelope covers the stadium in an amorphic fashion, protecting the spectators from the elements and providing an acoustic screen in consideration of the surrounding neighborhood. this unique project is another prime example of the architectural use of uhpc [76]. fig. 23: stade jean bouin, open lattice façade allowing sunlight to filter through 2.2.6 mucem (the museum of european and mediterranean civilisations ), marseille, france. the museum of european and mediterranean civilisations is a national museum located in marseille, france. it was opened on 7 june 2013. the mucem project was designed by rudy ricciotti and roland carta and demonstrates the capability of uhpc in the architectural and structural design of a whole building. the structure of the building consists of seven floors, two as the basement or underground areas and five above ground. entrance to the museum is through a 76 m span and 1.8m high footbridge, designed using uhfprc, while uhpc structural elements inside the museum include columns and the latticework in the second skin. https://en.wikipedia.org/wiki/marseille https://en.wikipedia.org/wiki/france https://en.wikipedia.org/wiki/marseille https://en.wikipedia.org/wiki/france bassam a. tayeh1* , lawend k. askar2, mand k. askar2, b.h. abu bakar3, ultra-high performance concrete (uhpc) applications worldwide: a state-of-the-art review 2023 22 in this museum, the main supporting structure in the façade is made of 309 arboreal uhpfrc poles, the trees fabricated in 20 casts of different heights, diameters, and shapes, with three families of trees fabricated. the framework is supported by exterior uhpc brackets, and the latticework is a fine example of the architectural application of uhpc for durable façade elements [77]. fig. 
24: mega project: mucem with uhpc lattice façade, roof and footbridge. 2.2.7 fondation louis vuitton pour la création, paris, france (2014) the fondation louis vuitton pour la création is an art museum and cultural center. construction began in march 2008; the building is located in the jardin d'acclimatation in paris, france, and was designed by frank gehry and gehry partners. the cladding of the fondation louis vuitton pour la création contains 1,900 prefabricated panels cast in white ultra-high performance fiber reinforced concrete (uhpfrc). the plan area of the project is approximately 50 m x 45 m, rising to a height of 45 m. the architecture team used uhpfrc to construct the innovative building's "iceberg" [78]. azmee and shafiq [46] demonstrated that in the louis vuitton pour la création project, ultra-high-performance concrete was selected to produce an aesthetic, durable, and light structure. fig. 25: fondation louis vuitton pour la création, paris, france. 2.2.8 fulton state hospital, fulton, missouri, usa (2018) while the project was under construction, henry and heaney [45] described the change in its design philosophy: the hospital first opened in 1851 for aging mental patients, whereas the new massive complex for fulton state hospital was designed for safer, modern patient treatments in the years to come. the area of the hospital is approximately 22.25 hectares (222,500 m²), containing 11,612 square meters of façade constructed to form the uhpc rain screen panel system. due to the massive size of the building and weight constraints, the architectural engineering team selected uhpc for its high durability and finishing capabilities; at the top, the panels appear to "dissolve" into the walls, reducing the apparent height and overall size [45]. fig. 26: fulton state hospital, fulton, missouri, usa fig. 
27: grooved uhpc panel used in fulton state hospital, fulton, missouri, usa 2.2.9 kimmel pavilion, new york university, usa (2018) henry and heaney [45] mentioned that the kimmel pavilion is a 77,109 m² medical center added to the main campus of new york university langone, on manhattan’s lower east side. the interstitial space between floors was covered with thin uhpc rain screen cladding that provided a visual break in the glass curtain wall that surrounds the façade on each level. the thin, highly durable, and lightweight uhpc cladding panels met the project requirement for a natural finish, and their low weight allowed small cranes to be used instead of large cranes. the use of smaller cranes can contribute significantly to cost reduction, site safety, and speed of construction. each uhpc piece was cast as a c-shaped, 3d element. the uhpc material helped to make the 3d elements very robust and impact-resistant. the uhpc panels can resist wind forces as well as the concentrated forces from window-washing baskets. fig. 28: full-size mock-up of the uhpc panel application for nyu's new kimmel pavilion 2.2.10 lewis farms fire station, edmonton, canada in edmonton, canada, the low freezing temperatures in the winter as well as the very high temperatures during some summer days made the uhpc cladding panel an evident solution. these panels were chosen due to the high durability and high resistance to freeze-thaw, high density, and low porosity of uhpc, which give it the low permeability that prevents water molecules from penetrating the matrix. the high durability of the uhpc exterior façade would provide superior resistance to impact, chemical attack, abrasion, fire, and seismic activity. the lewis farms fire station project in edmonton, canada had an art component that used a custom design of the façade's textured panels [45]. 
(a): lewis farms fire station, edmonton, canada (b): uhpc panel customized texture fig. 29: the lewis farms fire station, edmonton, canada project 3. conclusion this review paper describes uhpc applications and their benefits when applied to structural and architectural work. many countries, including japan, malaysia, australia, france, america, and canada, have used uhpc in construction applications. these applications in different countries demonstrate the huge benefits worldwide regarding sustainability and increased service life offered by the use of uhpc. uhpc has high strength, low permeability, very high density, and very low porosity, with unconnected pores, which in combination lead to fast curing time, a very low w/c ratio, high flowability, and high durability, with steel fibers providing high levels of flexural strength. the unique properties of uhpc permit reduction of the cross-sectional dimensions of structural elements and a corresponding reduction in weight. uhpc has also demonstrated enhanced behavior in terms of low vibration along with reductions in cross-sectional dimensions, drying shrinkage, and creep. using uhpc for connections means that these connections are no longer the weak point in terms of strength and durability that they traditionally were. moreover, uhpfrc can be used for critical joints of existing structures, allowing low water infiltration and ductile behavior with minimal maintenance. uhpc is used for canopies and rain screen cladding because it can form light, thin, watertight layers. all these abovementioned efforts indicate the potential of uhpc as a construction material for present and future use. furthermore, uhpfrc offers an ideal solution for improving the sustainability of buildings and other infrastructure components. 
however, applications of uhpc have so far been limited due to its high initial cost, the lack of clear design codes for uhpc, and uhpc mix design difficulties, which have hampered its commercial development and application in the construction industry. 4. recommendations • more studies should be conducted to address the issue of the high cost of uhpc, potentially using alternative, cheaper materials with similar functions. • design standards and codes should be established for uhpfrc; due to the current lack of codes and standards for uhpc, more studies must be carried out to develop a uhpc design code. • further structural applications of uhpc should be pursued in the future, covering structural elements such as columns, beams, foundations, and slabs in multi-story buildings, bridge substructures, concrete dams, etc. • more studies should be done on the long-term durability behavior of uhpc. 5. references [1] g. j. parra-montesinos, s. w. peterfreund, and s. h. chao, "highly damage-tolerant beam-column joints through use of high-performance fiber-reinforced cement composites," aci structural journal, vol. 102, no. 3, pp. 487-495, 2005. [2] i. y. hakeem, m. amin, b. a. abdelsalam, b. a. tayeh, f. althoey, and i. s. agwa, "effects of nanosilica and micro-steel fiber on the engineering properties of ultra-high performance concrete," structural engineering and mechanics, vol. 82, no. 3, pp. 295-312, 2022. [3] w. mansour, m. a. sakr, a. a. seleemah, b. a. tayeh, and t. m. khalifa, "bond behavior between concrete and prefabricated ultra high-performance fiber-reinforced concrete (uhpfrc) plates," structural engineering and mechanics, vol. 81, no. 3, pp. 305-316, 2022. [4] m. amin, i. y. hakeem, a. m. zeyad, b. a. tayeh, a. m. maglad, and i. s. agwa, "influence of recycled aggregates and carbon nanofibres on properties of ultra-high-performance concrete under elevated temperatures," case studies in construction materials, vol. 16, p. e01063, 2022. [5] l. k. askar, i. h. m. 
albarwary, and m. k. askar, "use of expanded polystyrene (eps) beads in silica-fume concrete," journal of duhok university, vol. 22, no. 1, pp. 30-38, 2019. [6] a. said, m. elsayed, a. abd el-azim, f. althoey, and b. a. tayeh, "using ultra-high performance fiber reinforced concrete in improvement shear strength of reinforced concrete beams," case studies in construction materials, vol. 16, p. e01009, 2022. [7] m. elsayed, b. a. tayeh, m. abou elmaaty, and y. aldahshoory, "behaviour of rc columns strengthened with ultra-high performance fiber reinforced concrete (uhpfrc) under eccentric loading," journal of building engineering, vol. 47, p. 103857, 2022. [8] m. g. lee, y. c. wang, and c. t. chiu, "a preliminary study of reactive powder concrete as a new repair material," construction and building materials, vol. 21, no. 1, pp. 182-189, 2007. [9] m. amin, a. m. zeyad, b. a. tayeh, and i. s. agwa, "effects of nano cotton stalk and palm leaf ashes on ultrahigh-performance concrete properties incorporating recycled concrete aggregates," construction and building materials, vol. 302, p. 124196, 2021. [10] a. s. faried, s. a. mostafa, b. a. tayeh, and t. a. tawfik, "mechanical and durability properties of ultra-high performance concrete incorporated with various nano waste materials under different curing conditions," journal of building engineering, vol. 43, p. 102569, 2021. [11] k. habel, e. denarié, and e. brühwiler, "experimental investigation of composite ultrahigh-performance fiber-reinforced concrete and conventional concrete members," aci structural journal, vol. 104, no. 1, 2007. [12] n. k. baharuddin, f. mohamed nazri, b. h. abu bakar, s. beddu, and b. 
a. tayeh, "potential use of ultra high-performance fibre-reinforced concrete as a repair material for fire-damaged concrete in terms of bond strength," international journal of integrated engineering, vol. 12, no. 9, 2020. [13] i. h. m. albarwary, z. n. s. aldoski, and l. k. askar, "effect of aggregate maximum size upon compressive strength of concrete," journal of duhok university, pp. 790-797, 2017. [14] l. k. askar, b. a. tayeh, and b. h. abu bakar, "effect of different curing conditions on the mechanical properties of uhpfc," vol. 4, no. 3, 2012. [15] b. a. graybeal, "characterization of the behavior of ultra-high performance concrete," doctoral thesis, university of maryland, 2005. [16] r. d. toledo filho, e. a. b. koenders, s. formagini, and e. m. r. fairbairn, "performance assessment of ultra high performance fiber reinforced cementitious composites in view of sustainability," materials & design, vol. 36, pp. 880-888, 2012. [17] s. a. yildizel, g. calis, and b. a. tayeh, "mechanical and durability properties of ground calcium carbonate-added roller-compacted concrete for pavement," journal of materials research and technology, vol. 9, no. 6, pp. 13341-13351, 2020, doi: https://doi.org/10.1016/j.jmrt.2020.09.070. [18] z. xu, j. li, h. qian, and c. wu, "blast resistance of hybrid steel and polypropylene fibre reinforced ultra-high performance concrete after exposure to elevated temperatures," composite structures, vol. 294, p. 115771, 2022, doi: https://doi.org/10.1016/j.compstruct.2022.115771. [19] k. habel, e. denarié, and e. brühwiler, "structural response of elements combining ultrahigh-performance fiber-reinforced concretes and reinforced concrete," journal of structural engineering, vol. 132, p. 1793, 2006. [20] v. morin, f. cohen tenoudji, a. feylessoufi, and p. 
richard, "superplasticizer effects on setting and structuration mechanisms of ultrahigh-performance concrete," cement and concrete research, vol. 31, no. 1, pp. 63-71, 2001. [21] y. l. voo, p. c. augustin, and t. a. j. thamboe, "construction and design of a 50m single span uhp ductile concrete composite road bridge," the structural engineer, the institution of structural engineers, uk, vol. 89, no. 15, pp. 24-31, 2011. [22] m. k. askar, m. h. selman, and s. i. mohammed, "mechanical properties of concrete reinforced with alternative fibers," journal of duhok university, vol. 23, no. 1, pp. 149-158, 2020. [23] v. voort, "design and field testing of tapered h-shaped ultra high performance concrete piles," master thesis, iowa state university, 2008. [24] e. ghafari, h. costa, e. júlio, a. portugal, and l. durães, "optimization of uhpc by adding nanomaterials," presented at the proceedings of the 3rd international symposium on uhpc and nanotechnology for high performance construction materials, 7-9 march 2012, kassel university, germany, 2012. [25] p. richard and m. cheyrezy, "composition of reactive powder concretes," cement and concrete research, vol. 25, no. 7, pp. 1501-1511, 1995. [26] b. a. graybeal, "material property characterization of ultra-high performance concrete," the federal highway administration (fhwa), report no. fhwa-hrt-06-103, 2006. [27] c. porteneuve, h. zanni, c. vernet, k. o. kjellsen, j. p. korb, and d. petit, "nuclear magnetic resonance characterization of high- and ultrahigh-performance concrete: application to the study of water leaching," cement and concrete research, vol. 31, no. 12, pp. 1887-1893, 2001. [28] c. p. vernet, "ultra-durable concretes: structure at the micro- and nanoscale," mrs bulletin, vol. 29, no. 5, pp. 324-327, 2004. [29] r. j. detwiler and p. k. mehta, "chemical and physical effects of silica fume on the mechanical behavior of concrete," aci materials journal, vol. 86, no. 6, 1989. [30] a. goldman and a. 
bentur, "the influence of microfillers on enhancement of concrete strength," cement and concrete research, vol. 23, no. 4, pp. 962-972, 1993. [31] m. a. megat johari, j. brooks, s. kabir, and p. rivard, "influence of supplementary cementitious materials on engineering properties of high strength concrete," construction and building materials, vol. 25, pp. 2639-2648, 2011. [32] y. l. voo and s. j. foster, "characteristics of ultra-high performance 'ductile' concrete and its impact on sustainable construction," the ies journal part a: civil & structural engineering, vol. 3, no. 3, pp. 168-187, 2010. [33] k. habel, "structural behaviour of composite uhpfrc–concrete elements," doctoral thesis, swiss federal institute of technology, lausanne, switzerland, 2004. [34] m. schmidt and e. fehling, "ultra-high-performance concrete: research, development and application in europe," aci spec. publ., vol. 228, no. 1, pp. 51-78, 2005. [35] s. meng, c. jiao, x. ouyang, y. niu, and j. fu, "effect of steel fiber-volume fraction and distribution on flexural behavior of ultra-high performance fiber reinforced concrete by digital image correlation technique," construction and building materials, vol. 320, p. 126281, 2022, doi: https://doi.org/10.1016/j.conbuildmat.2021.126281. [36] p.-c. aitcin, y. delagrave, and r. beck, "a 100-m high prefabricated concrete pole: why not?," in 2000 ieee esmo-2000 ieee 9th international conference on transmission and distribution construction, operation and live-line maintenance proceedings (esmo 2000), 2000: ieee, pp. 365-374. [37] w. dowd and c. e. 
dauriac, "reactive powder concrete: a french engineering company has developed a concrete with a compressive strength two to four times greater than that of hpc," construction specifier, vol. 49, pp. 47-53, 1996. [38] l. k. askar, b. a. tayeh, b. h. abu bakar, and a. m. zeyad, "properties of ultra-high performance fiber concrete (uhpfc) under different curing regimes," international journal of civil engineering and technology (ijciet), vol. 8, no. 4, 2017. [39] t. l. vande voort, m. t. suleiman, and s. sritharan, "design and performance verification of ultra-high performance concrete piles for deep foundations," 2008. [40] b. a. graybeal, "fatigue response in bridge deck connection composed of field-cast ultra-high-performance concrete," transportation research record: journal of the transportation research board, vol. 2251, pp. 93-100, 2011. [41] m. rebentrost and g. wight, "experience and applications of ultra-high performance concrete in asia," presented at the second international symposium on ultra high performance concrete, 05-07 march 2008, kassel university, kassel, germany, 2008. [42] b. cavill and m. rebentrost, "ductal® a high-performance material for resistance to blasts and impacts," australian journal of structural engineering, vol. 7, no. 1, pp. 37-45, 2006. [43] c. wu, d. j. oehlers, m. rebentrost, j. leach, and a. s. whittaker, "blast testing of ultra-high performance fibre and frp-retrofitted concrete slabs," engineering structures, vol. 31, no. 9, pp. 2060-2069, 2009. [44] m. behloul, o. bayard, and j. resplendino, "ductal® prestressed girders for a traffic bridge in mayenne, france," presented at the 7th international conference on short & medium span bridges, 23-25 august 2006, montréal, canada, 2006. [45] k. a. henry and c. w. heaney, "industrial production of thin rainscreen cladding in uhpc," in symposium on ultra-high performance fibre-reinforced concrete, uhpfrc, montpellier, france, f. t. j. 
resplendino, ed., october 2-4, 2017: rilem publications sarl, pp. 937-944. [46] n. azmee and n. shafiq, "ultra-high performance concrete: from fundamental to applications," case studies in construction materials, vol. 9, p. e00197, 2018. [47] a. syed, "development of ultra high performance fiber reinforced concrete," 2018. [48] m. ženíšek, a. vodička, t. vlach, l. laiblová, and p. hájek, "lightweight uhpc facade panel with led display," acta polytechnica ctu proceedings, vol. 22, pp. 145-149, 2019. [49] a. d. reitsema, m. luković, s. grünewald, and d. a. hordijk, "future infrastructural replacement through the smart bridge concept," materials, vol. 13, no. 2, p. 405, 2020. [50] m. mcdonagh and a. foden, "benefits of ultra-high performance concrete for the rehabilitation of the pulaski skyway," in first international interactive symposium on uhpc, 2016, pp. 1-10. [51] r. adeline, m. lachemi, and p. blais, "design and behavior of the sherbrooke footbridge," presented at the international symposium on high-performance and reactive powder concretes, sherbrooke, canada, 1998. [52] w. j. semioli, "the new concrete technology," concrete international, vol. 23, no. 11, pp. 75-79, 2001. [53] j. resplendino and j. petitjean, "ultra-high performance concrete: first recommendations and examples of application," presented at the 3rd international symposium on high performance concrete, pci, orlando, florida, 2003. [54] s. deem, "concrete attraction - something new on the french menu - concrete," http://www.popularmechanics.com/science/research/2002/6/concrete/print (accessed 11-12-2010). [55] m. behloul and k. lee, "ductal seonyu footbridge," structural concrete, vol. 4, no. 4, pp. 195-201, 2003. [56] c.-d. lee, k.-b. kim, and s. choi, "application of ultra-high performance concrete to pedestrian cable-stayed bridges," journal of engineering science and technology, vol. 8, no. 3, pp. 296-305, 2013. [57] m. rebentrost and b. 
cavill, "reactive powder concrete bridges," presented at the austroads conference, 12-15 september 2006, perth, australia, 2006. [58] h. g. russell and b. a. graybeal, "ultra-high performance concrete: a state-of-the-art report for the bridge community," united states federal highway administration, office of infrastructure …, 2013. [59] v. perry and p. seibert, "the use of ductal® for bridges in north america: the technology, applications and challenges facing commercialization," presented at the second international symposium on ultra high performance concrete, 05-07 march 2008, university of kassel, germany, 2008. [60] e. fehling, m. schmidt, k. bunje, and w. schreiber, "design of first hybrid uhpc-steel bridge across the river fulda in kassel, germany," presented at the second international symposium on ultra high performance concrete, 05-07 march 2008, kassel university, kassel, germany, 2008. [61] r. krelaus, s. freisisnger, and m. schmidt, "adhesive bonding of uhpc structural members at the gaertnerplatz in kassel," presented at the second international symposium on ultra high performance concrete, 05-07 march 2008, kassel university, kassel, germany, 2008. [62] m. schmidt, "sustainable building with ultra-high-performance concrete (uhpc): coordinated research program in germany," presented at the proceedings of the 3rd international symposium on uhpc and nanotechnology for high performance construction materials, 7-9 march 2012, kassel university, germany, 2012. [63] j. f. batoz and m. 
behloul, "uhpfrc development over the last two decades: an overview," presented at designing and building with uhpfrc: state of the art and development, 17-18 november, marseille, france, 2009. [64] j. walraven, "on the way to international design recommendations for ultra high performance fibre reinforced concrete," presented at the proceedings of the 3rd international symposium on uhpc and nanotechnology for high performance construction materials, 7-9 march 2012, kassel university, germany, 2012. [65] j. resplendino, "state of the art of design and construction of uhpfrc structures in france," presented at the proceedings of the 3rd international symposium on uhpc and nanotechnology for high performance construction materials, 7-9 march 2012, kassel university, germany, 2012. [66] e. fehling, m. schmidt, j. walraven, t. leutbecher, and s. fröhlich, ultra-high performance concrete uhpc: fundamentals, design, examples. john wiley & sons, 2014. [67] s. j. foster and y. l. voo, "uhpfrc as a material for bridge construction: are we making the most of our opportunities?," concrete in australia, vol. 41, no. 2, 2015. [68] b. nematollahi, y. l. voo, and j. sanjayan, "design and construction of a precast ultra-high performance concrete cantilever retaining wall," in first international interactive symposium on uhpc-2016, des moines, iowa, usa, 2016, pp. 18-21. [69] j. kolísko, d. čítek, p. tej, and m. rydval, "production of footbridge with double curvature made of uhpc," in iop conference series: materials science and engineering, 2017, vol. 246, no. 1: iop publishing, p. 012009. [70] s. abbas, m. nehdi, and m. saleem, "ultra-high performance concrete: mechanical performance, durability, sustainability and implementation challenges," international journal of concrete structures and materials, vol. 10, no. 3, pp. 271-295, 2016. [71] l. f. m. duque, i. varga, and b. a. 
graybeal, "fiber reinforcement influence on the tensile response of uhpfrc," in first international interactive symposium on uhpc-2016 (des moines, iowa, jul. 18-20), 2016. [72] y. l. voo and s. j. foster, "malaysia, taking ultra-high performance concrete bridges to new dimensions," ed: uhpfrc, 2017. [73] z. hajar, p. winiecki, s. simon, and t. thibaux, "realization of an ultra high performance fibre reinforced concrete thin shell structure covering the toll-gate station of millau viaduct," presented at the fib symposium: concrete structures: the challenge of creativity, avignon, 2004. [74] y. l. voo and w. k. poon, "the world first portal frame building (wilson hall) constructed using ultra-high performance concrete," presented at the 33rd conference on "our world in concrete & structures: sustainability", 25-27 august 2008, singapore, 2008. [75] y. l. voo, b. nematollahi, a. b. mohamed said, b. a. gopal, and t. s. yee, "application of ultra high performance fiber reinforced concrete - the malaysia perspective," international journal of sustainable construction engineering and technology, vol. 3, no. 1, pp. 26-44, 2012. [76] d.-y. yoo and y.-s. yoon, "a review on structural behavior, design, and application of ultra-high-performance fiber-reinforced concrete," international journal of concrete structures and materials, vol. 10, no. 2, pp. 125-142, 2016. [77] p. mazzacane, r. ricciotti, f. teply, e. tollini, and d. corvez, "mucem: the builder's perspective," proceedings uhpfrc, pp. 3-16, 2013. [78] s. aubry et al., "a uhpfrc cladding challenge: the fondation louis vuitton pour la création 'iceberg'," in rilem-fib-afgc int. symposium on ultra-high performance fibre-reinforced concrete, 2013.