SJCMS | Vol. 2, No. 1 | January – June 2018 | p-ISSN: 2520-0755 | e-ISSN: 2522-3003 | © 2018 Sukkur IBA University

Comparative Study of Testing Tools BlazeMeter and Apache JMeter

Pirah Memon¹, Tahseen Hafiz², Sania Bhatti², Saman Shahid Qureshi¹

¹ Institute of Information and Communication Technology, Mehran University of Engineering and Technology, Jamshoro, Pakistan
² Department of Software Engineering, Mehran University of Engineering and Technology, Jamshoro, Pakistan
Corresponding email: pirahmemon01@gmail.com

Abstract: Automated testing plays a vital role in the entire development of software. Due to the growing requirements of automated testing, a diverse range of testing tools is available. From the literature, it is observed that a number of automated testing tools have been studied and compared; however, this is the first time that Apache JMeter and BlazeMeter are compared. The objective of this paper is to compare the load testing tools Apache JMeter and BlazeMeter on criteria such as performance, latency, size, error percentage, duration count, number of hits and response time. The paper focuses on an analysis of performance and functionality that minimizes software cost and resources. The experiments performed show that the performance of BlazeMeter is better than that of Apache JMeter with respect to all parameters.

Keywords: BlazeMeter; Apache JMeter; automated software testing.

1. Introduction

Software testing is used to find errors in a software product. It identifies the product's completeness and correctness and is also used to improve product quality. Testing does not guarantee error-free software; rather, it helps to locate and debug the errors within the software. There are various approaches to testing, which depend upon the software requirements, the category of software and the available resources. In simple words, testing is to "verify the product and evaluate it", where verification means that the tester matches the requirements against the actual product and compares the product's response in action with the behavior expected from the tester's analysis. Although most of this intellectual activity is identical to the inspection of requirements, the word testing is concerned with the dynamic analysis of the product. Testing helps us improve the quality of the software, and this quality can be improved by addressing the non-functional attributes defined in the ISO-9126 standard. Testing is about verifying and validating that software works as it was intended and designed to work. It involves testing a product using static and dynamic methodologies, because human mistakes occur in manual designs. Hence, the quality of the software is achieved by performing quality assurance activities. It is usual for a developer to spend 40% of the software cost on testing; for example, for a bank transaction monitor and control system, testing can cost 3 to 5 times as much as all other activities combined. Due to the antagonistic nature of testing, developers tend not to give it full consideration during the development of software.

2. Related Work

There has been significant work in the literature on the comparative analysis of HP LoadRunner and Apache JMeter, but limited work on BlazeMeter. The limitation of the paper by V. Chandel [1] is that it only describes Apache JMeter and HP LoadRunner but does not present real-time results.
The automated web services testing tools presented in study [2] are very interesting, as the study describes Apache JMeter, SoapUI and Storm in detail, but it does not establish which of these tools is better. In paper [3], M. S. Sharmila et al. discuss Apache JMeter in a well-organized way, but the factors for comparing it with other tools are not addressed. In study [4], K. Tirghoda illustrates the Apache tool in detail, but no script is generated for the web services being tested with it. Sadiq et al. [5] use response time, throughput, latency, scalability and resource utilization for Apache JMeter, but do not delineate the security issues related to Apache JMeter. B. Patel et al. [6] compared two performance testing tools, LoadRunner and JMeter, on parameters such as load generating capacity, installation, download proficiency, result reporting, cost, technicality of software and reliability. A comparative analysis between HP LoadRunner and Apache JMeter is done by R. B. Khan [7], but the author only targets two websites, a loan calculator and a BMI calculator, which do not receive enough traffic. The authors in [8] put light on two load testing tools, Sikuli and a commercial acceptance testing tool; they compare them on static properties and on an industrial traffic management system, but find no statistical difference between the tools; both performed the same automated testing. In study [9], an empirical analysis of web service testing tools is performed with respect to their technical features, and the comparison is completed on the basis of performance only. In a recent study [10], a comparative analysis is done among Selenium, SoapUI, HP QTP/UFT and TestComplete on the basis of different features; the authors use a 3-point scale (good, average, bad), the results are presented as graphs based on the calculated values for the selected tools, and SoapUI is considered the best tool among them.

3. Apache JMeter

Apache JMeter [11] is Apache open source software, a pure Java application, designed to test functional behavior and performance, especially of web applications, and it further extends to other applications built on both static and dynamic resources (SOAP/REST web services, dynamic web languages such as PHP, Java and ASP.NET files). However, JMeter does not execute the JavaScript found within HTML pages, nor does it render HTML pages as a browser does. It can also be used to graphically analyze performance or to test server/script/object behavior under heavy concurrent load.

Figure 1 shows that a request sent by the user is directly acknowledged by the server; the server responds to the user's request, JMeter then collects the data, computes the statistical information once the task is completed, and displays the results. Figure 2 defines the test plan for starting the testing of Facebook. The test plan requires four elements: an HTTP Request Defaults element, an HTTP Cookie Manager, a listener and a Graph Results element. Figure 3 describes the number of users along with the test start and end time, with a loop count for each thread, while the users are generated.
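As a rough illustration of how such a plan is usually executed for load testing outside the GUI, the Python sketch below drives JMeter in non-GUI mode and writes the collected samples to a CSV results file. This is an added example, not part of the paper's setup; the file names are hypothetical, and it assumes JMeter is installed and available on the PATH.

```python
import subprocess
from pathlib import Path

# Hypothetical file names; the paper does not publish its .jmx plan.
PLAN = Path("facebook_test.jmx")        # test plan built in the JMeter GUI (Figure 2)
RESULTS = Path("facebook_results.jtl")  # CSV results file read by listeners or external analysis

# Standard JMeter CLI flags: -n = non-GUI mode, -t = test plan, -l = results file.
subprocess.run(
    ["jmeter", "-n", "-t", str(PLAN), "-l", str(RESULTS)],
    check=True,
)
print(f"Samples written to {RESULTS}")
```

Running the plan this way keeps the GUI for building and inspecting test plans, while the heavy load generation happens from the command line.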
In Figure 4, the black labels show the data of the Facebook test. The blue line shows a slight change in the average as the number of users grows, the pink line shows that no change occurs in the median during testing, the red line shows that the deviation changes constantly, and the green line (throughput) increases as the number of users increases. On the y-axis, the maximum time of 63068 milliseconds is the time required to complete the Facebook uniform resource locator (URL) test. The Graph Results listener thus gives a visual model of the Facebook samples, which can be read from and written to a file.

Fig. 1. Working mechanism of Apache JMeter [3].
Fig. 2. Facebook URL test with Apache JMeter.
Fig. 3. Thread group of the Facebook URL test with Apache JMeter.
Fig. 4. Graphical depiction of the Facebook URL test.

4. BlazeMeter Testing Tool

BlazeMeter [12] is an enterprise tool fully compatible with Apache JMeter. It provides developers with a simple tool integrated into their native development environment. BlazeMeter is used to test mobile and web applications, web sites, web services and databases, and it can simulate thousands of users. The objective of this paper is to conduct a comparative analysis on a social website, Facebook, in order to improve the performance of social websites by making the system more reliable at a lower response time.

5. Results and Discussion

The goal of this work is to improve the performance of the social website Facebook by performing load testing with BlazeMeter instead of Apache JMeter, because Apache JMeter fails when the scalability of the product is increased and its behavior is inflexible, meaning that modifications cannot be applied after testing has been performed even if they are needed. The performance of the testing tools is judged on the basis of the following factors (a small timing sketch follows the list).

Flexibility: A non-functional quality attribute of software engineering; it refers to how easily changes can be accommodated in the system.
Scalability: How easily the system can be expanded by increasing the number of users.
Performance: The ability of the system to perform its task.
Load controller: A device used to regulate the amount of power that a load can consume; it can be used by a third-party energy provider or utility to reduce customer energy demand at certain times.
Reliability: The ability of the system to perform failure-free operation for a specified time in a specified environment.
Aggregate reports: Allow reviewing an overview of administrative information for various settings and statuses.
Latency time: The amount of time a message takes to traverse a system or to reach a destination.
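To make the last two factors concrete, the following minimal Python sketch (added here for illustration, not taken from the paper) times a single request to the tested URL and separates latency, approximated as the time to the first byte of the response, from the full response time and a rough bandwidth figure.

```python
import time
import urllib.request

URL = "https://www.facebook.com/"   # the site under test in this study

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=30) as resp:
    first_byte = time.monotonic()   # headers received -> rough latency
    body = resp.read()              # full payload read -> response time
end = time.monotonic()

print(f"latency       : {(first_byte - start) * 1000:.1f} ms")
print(f"response time : {(end - start) * 1000:.1f} ms")
print(f"bandwidth     : {len(body) / 1024 / (end - start):.1f} KB/s")
```

Load testing tools report exactly these quantities, but aggregated over many concurrent virtual users rather than a single request.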
Table 1. Comparison of testing tools based on various factors.

    Factor              BlazeMeter   Apache JMeter
 1  Flexibility         Yes          Yes
 2  Scalability         Yes          Yes
 3  Performance         Yes          Yes
 4  Load controller     Yes          No
 5  Reliability         Yes          Yes
 6  Aggregate reports   Yes          No
 7  Latency time        No           Yes

Table 2, as described in Section 4, shows that at each point the virtual users are generated with a minimum response time at an average bandwidth of about 20 KB, with zero percent error, which also indicates that the test passed successfully.

Table 2. Result of Facebook URL test with 0% error and 50 virtual users.

Max. users   Avg. throughput   Error   Avg. response time   90% response time   Avg. bandwidth
50 VU        0.78 hits/s       0%      174.5 ms             211 ms              20.15 KB

Figure 5 shows the 0% error, meaning that the test passed successfully with fifty users and a maximum number of hits of 0.86 per millisecond.

Fig. 5. Result of Facebook URL test with BlazeMeter for number of hits.

Table 3. Samples of Facebook URL test with response time.

Element label   Samples   Avg. response time (ms)   Avg. hits/s   99% line (ms)   Min. response time (ms)   Avg. bandwidth (bytes/s)   Error rate
Facebook        19612     187.3                     16.34         624             155                       2931                       0%

Fig. 6. Result of Facebook URL test with BlazeMeter for latency.

Figure 6 depicts that at each point the test is carried out by the virtual users with maximum throughput and an average response time of 157 milliseconds. In Table 3, column 2 gives the number of samples (19612), column 3 shows the average response time of 187.3 milliseconds, column 4 depicts the average number of hits per second, column 5 gives the 99% line in milliseconds, column 6 the minimum response time, column 7 the average bandwidth, and column 8 the error rate of 0%, which means the test passed successfully.
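As an illustration of how such aggregate figures can be reproduced outside the tools, the sketch below (an added example, not the authors' procedure) recomputes Table 2/Table 3 style metrics from a JMeter/BlazeMeter CSV results export. It assumes the standard JTL columns timeStamp (ms), elapsed (ms), success and bytes, and the hypothetical file name facebook_results.jtl.

```python
import csv
from statistics import mean, quantiles

# Read the CSV results file produced by a non-GUI test run.
with open("facebook_results.jtl", newline="") as f:
    rows = list(csv.DictReader(f))

elapsed = [int(r["elapsed"]) for r in rows]
errors = sum(1 for r in rows if r["success"] != "true")
t_start = min(int(r["timeStamp"]) for r in rows)
t_end = max(int(r["timeStamp"]) + int(r["elapsed"]) for r in rows)
duration_s = (t_end - t_start) / 1000

print(f"samples             : {len(rows)}")
print(f"avg response (ms)   : {mean(elapsed):.1f}")
print(f"99% line (ms)       : {quantiles(elapsed, n=100)[98]:.0f}")
print(f"min response (ms)   : {min(elapsed)}")
print(f"avg hits/s          : {len(rows) / duration_s:.2f}")
print(f"avg bandwidth (B/s) : {sum(int(r['bytes']) for r in rows) / duration_s:.0f}")
print(f"error rate          : {100 * errors / len(rows):.1f}%")
```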
6. Conclusion

Currently, software testing has become a necessity for organizations, because it saves both time and money. Apache JMeter and BlazeMeter are very efficient for carrying out the performance testing of software. From the experiments it is apparent that the BlazeMeter tool is more efficient than Apache JMeter. It has a simple, clean user interface that shows what is going on without confusion or too much effort, and it offers straightforwardness with its uniqueness. Moreover, it is free of cost and possesses effective portability with 100% Java purity. Both of them are open source projects and have merits, but neither is ideal, because the experiments were performed with a limited number of users; more experiments need to be performed with an increased number of users.

References

[1] V. Chandel et al., "Comparative study of testing tools: Apache JMeter and Load Runner," International Journal of Computing and Corporate Research, 2013.
[2] G. Murawski et al., "Evaluation of load testing tools," 2014.
[3] M. S. Sharmila and E. Ramadevi, "Analysis of performance testing of web application," International Journal of Advanced Research in Computer and Communication Engineering, 2014.
[4] K. Tirghoda, "Web services performance testing using open source Apache JMeter," International Journal of Scientific & Engineering Research, vol. 3, 2012.
[5] M. Sadiq et al., "A survey of most common referred automated performance testing tools," ARPN Journal of Science and Technology, vol. 5, pp. 525-536, 2015.
[6] B. Patel et al., "A review paper on comparison of SQL performance analyzer tools: Apache JMeter and HP LoadRunner," 2014.
[7] R. B. Khan, "Comparative study of performance testing tools: Apache JMeter and HP LoadRunner," 2016.
[8] E. Borjesson and R. Feldt, "Automated system testing using visual GUI testing tools: a comparative study in industry," in 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, 2012, pp. 350-359.
[9] S. Sharma and A. K. Sharma, "Empirical analysis of web service testing tools."
[10] M. Imran et al., "A comparative study of QTP and Load Runner automated testing tools and their contributions to software project scenario," 2016.
[11] Apache Software Foundation. (2017). Apache JMeter. Available: http://jmeter.apache.org/
[12] A. Girmonsky. (2016). BlazeMeter. Available: http://blazemeter.com/

SJCMS | Vol. 1, No. 1 | January – June 2017 | p-ISSN: 2520-0755 | © 2017 Sukkur IBA

Enhancing the Statistical Filtering Scheme to Detect False Negative Attacks in Sensor Networks

Muhammad Akram, Muhammad Ashraf
College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
akram.khan@skku.edu, ashraf84@skku.edu

Tae Ho Cho
College of Software, Sungkyunkwan University, Suwon 16419, Republic of Korea
thcho@skku.edu

Abstract: In this paper, we present a technique that detects both false positive and false negative attacks in statistical filtering-based wireless sensor networks. In the statistical filtering scheme, legitimate reports are repeatedly verified en route before they reach the base station, which causes heavy energy consumption. While the original statistical filtering scheme detects only false reports, our proposed method promises to detect both attacks.

Keywords: WSNs; SEF; en route filtering; false positive attack; false negative attack; energy efficiency.

1. Introduction

Wireless sensor networks (WSNs) comprise tiny nodes equipped with restricted computational resources and a limited energy supply. WSNs are usually deployed in an exposed environment, which increases their proneness to security compromises such as cryptographic information capture [1]. Compromised nodes are exploited by attackers to initiate numerous attacks, such as denial of service, sinkhole attacks, and eavesdropping [2]. Usually, attackers use compromised nodes to create bogus event reports and inject them into the network to drain the energy of the network [1, 2]. Various filtering schemes have been proposed to detect and filter these bogus reports en route [1-5]. Compromised sensor nodes can also be exploited to block authentic data from being delivered to the base station (BS), by attaching false message authentication codes (MACs) to legitimate reports [1, 2, 6]. These true reports with false MACs attached to them are dropped en route at the intermediate verification nodes. PVFS counters these two attacks simultaneously, whereas other filtering schemes focus only on countering the false report injection (FRI) attack, which is also known as the false positive attack [1-8]. All of these filtering schemes use either static or dynamic authentication key sharing [1-5, 7, 8].
We propose to enhance the filtering capacity of the SEF scheme so that it not only filters false reports, but also allows legitimate reports with false MACs to reach the BS without failure. The probabilistic voting-based filtering scheme (PVFS) [2] is a static scheme that deals with both attacks and filters false reports at probabilistically chosen verification nodes. In statistical en route filtering (SEF), each intermediate node verifies the report probabilistically, and if it detects an invalid MAC attached to the report, it immediately drops it. SEF exploits network scale and density to drop false data through the collective detection power of several intermediate relay nodes. However, while making a decision to drop a report, SEF does not allow the forwarding nodes to consider the results of previous verifications: every intermediate node that finds an invalid MAC makes an independent decision to drop the report. This inflexibility of SEF allows room for compromised nodes to impact the performance of the network. Compromised nodes launch a false negative attack by attaching false MACs to legitimate reports, which are then dropped en route by the verification nodes. The false negative attack stalls the passage of true reports to the BS [1, 2, 6]. By appending a few extra bits in the header of the report being forwarded, we can make SEF restrict false negative attacks: once a threshold for the verification of true reports is reached, they are marked safe and forwarded without further verification.

The FRI attack aims to drain the energy resources of the sensor network and render it useless in the presence of compromised nodes. The detection probability in SEF increases with distance. However, relying on the filtering capability of filtering nodes farther from the report-generating cluster and closer to the BS leads to an uneven load share. An energy-hole syndrome appears, in which the filtering nodes around the BS soon die out on account of their rapid energy depletion and unceasing verification activity. The energy-hole phenomenon causes information loss and a shortened network lifetime.

In SEF, each forwarded report is verified against T MACs created by keys from T distinct, non-overlapping sub-pools of authentication keys. Firstly, each intermediate node checks whether a report carries T MACs, as well as T key indices from T different partitions. Secondly, the intermediate node checks whether a key index in the report matches that of one of its own keys. If so, the intermediate node tries to authenticate the report by calculating a new MAC with the same key. If the newly calculated MAC matches the MAC contained in the report, the report is authenticated and forwarded; if the MAC is found to be false, the report is immediately dropped. If none of the key indices in the report matches a key index of the keys possessed by the node itself, the intermediate node simply forwards the report. Thus, every intermediate node that possesses a matching key is effectively required to authenticate the report, and none of the intermediate nodes considers the outcome of the previous verifications performed by earlier nodes in its decision making. If a single MAC is found to be false, any intermediate node immediately drops the report. This is why the SEF scheme does not handle the false negative attack, and it also incurs more energy by requiring every intermediate node to verify the report.
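The per-hop rule described above can be summarized in a short sketch. The following Python fragment is an illustration under assumed data structures (it is not the authors' implementation, and the partition-membership check is simplified to a distinct-index check): a relay forwards a report unless it holds a key whose recomputed MAC disagrees with the one carried in the report.

```python
import hmac
import hashlib

T = 5  # required number of MACs / distinct key indices per report (symbolic)

def make_mac(key: bytes, event: bytes) -> bytes:
    # MAC over the event payload; SHA-256 HMAC is an assumption for illustration.
    return hmac.new(key, event, hashlib.sha256).digest()

def forward_or_drop(report: dict, my_keys: dict) -> bool:
    """Return True to forward the report, False to drop it (plain SEF rule)."""
    macs = report["macs"]                      # list of (key_index, mac) pairs
    indices = [idx for idx, _ in macs]
    # Step 1: the report must carry T MACs with T distinct key indices.
    if len(macs) != T or len(set(indices)) != T:
        return False
    # Step 2: for any index matching a key this node holds, recompute the MAC.
    for idx, mac in macs:
        if idx in my_keys:
            if not hmac.compare_digest(make_mac(my_keys[idx], report["event"]), mac):
                return False                   # one invalid MAC -> drop immediately
    # Step 3: no matching key, or every matching MAC verified -> forward.
    return True
```

The enhancement proposed in this paper would additionally carry a small verification counter in the report header, so that a report already verified a threshold number of times is marked safe and forwarded without further checks.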
2. Statistical En Route Filtering (SEF)

SEF is the first scheme that was proposed to filter false data injected by adversaries exploiting compromised nodes. In SEF, a pre-generated global key pool of size N, maintained at the BS, is divided into n non-overlapping partitions, each of size m, i.e. N = m × n. Figure 1 shows the partitions of the global key pool and the allocation of k keys to each sensor node in the network. Every key is mapped against a unique key index for identification purposes during the process of en route filtering. Prior to sensor deployment, each node is preloaded with k (k < m) keys from one of the partitions.
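A minimal sketch of this key pre-distribution step is given below (illustrative Python; n, m and k are symbolic placeholders matching N = m × n from the text, not values used in the paper): the global pool is split into n partitions of m keys, and each node receives k keys drawn from a single randomly chosen partition.

```python
import random

n, m, k = 10, 100, 5          # example sizes only, not values from the paper
global_pool = {i: f"key-{i}".encode() for i in range(n * m)}   # key index -> key

# Split the pool into n non-overlapping partitions of m key indices each.
partitions = [list(range(p * m, (p + 1) * m)) for p in range(n)]

def preload_node() -> dict:
    """Give one node k keys from a single randomly chosen partition."""
    part = random.choice(partitions)
    chosen = random.sample(part, k)
    return {idx: global_pool[idx] for idx in chosen}

node_keys = preload_node()
print(sorted(node_keys))       # the key indices this node can later use en route
```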