INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL
ISSN 1841-9836, 11(5):613-630, October 2016.

Efficient Historical Query in HBase for Spatio-Temporal Decision Support

X.Y. Chen, C. Zhang, B. Ge, W.D. Xiao

Xiao-Ying Chen, Chong Zhang*, Bin Ge, Wei-Dong Xiao
Science and Technology on Information Systems Engineering Laboratory
National University of Defense Technology
Changsha 410073, P.R. China
chenxiaoying1991@yahoo.com, leocheung8286@yahoo.com
gebin1978@gmail.com, wilsonshaw@vip.sina.com
*Corresponding author: leocheung8286@yahoo.com

Abstract: Compared with the last decade, technologies for gathering spatio-temporal data are far more developed and easier to use and deploy, so tens of billions, even trillions, of sensed records are accumulated, which poses a challenge to spatio-temporal Decision Support Systems (stDSS). Traditional databases can hardly support such volumes and tend to become the performance bottleneck of the analysis platform. Hence, in this paper we argue for replacing the traditional back-end storage with a NoSQL database, HBase. Under such a context, the well-studied spatio-temporal querying techniques of traditional databases should be shifted to HBase in parallel. However, this problem is not solved well in HBase: many previous works tackle it only by designing a schema, i.e., a row key and column key formation for HBase, which we do not believe is an effective solution. In this paper, we address the problem at the native level of HBase and propose an index structure as a built-in component for HBase. STEHIX (Spatio-TEmporal Hbase IndeX) is adapted to the two-level architecture of HBase and is suitable for HBase to process spatio-temporal queries. It is composed of an index in the meta table (the first level) and a region index (the second level) for indexing the inner structure of HBase regions. Based on this structure, three queries, range query, kNN query and GNN query, are solved by proposing respective algorithms. For achieving load balancing and scalable kNN query, two optimizations are also presented. We implement STEHIX and conduct experiments on a real dataset, and the results show that our design outperforms a previous work in many aspects.
Keywords: spatio-temporal query, HBase, range query, kNN query, GNN query, load balancing.

1 Introduction

Nowadays, both organizations and common users need sophisticated spatio-temporal Decision Support Systems (stDSS) [1] for countless geospatial applications, such as urban planning, emergency response, military intelligence, simulator training, and serious gaming. Meanwhile, with the development of positioning technology (such as GPS) and other related applications, huge amounts of spatio-temporal data are collected, whose volume grows to PB or even EB scale. Consequently, this poses a challenge to stDSS applications. Traditionally, these data are stored in relational databases; however, since such databases cannot sustain so huge a volume, this architecture brings a performance bottleneck to the whole analysis task. Hence, a new structured storage system should back the stDSS. In this paper, we argue that HBase [2] is capable of accomplishing this task, since HBase is a key-value, NoSQL storage system that supports large-scale data operations efficiently.

On the other hand, from a system point of view, an ideal geospatial application designed to formulate and evaluate decision-making questions for an stDSS should contain efficient presentation
of a basic set of spatio-temporal queries, such as: find doctors who can carry out a rescue in a certain area recently; find the 5 flower shops nearest to Tony; or, for a group of friends spread over different places, find the restaurant nearest to them aggregately, i.e., with the minimum sum of distances to all of them. These operations are supported well in relational databases; however, they are not supported by HBase in a straightforward way. The main reason is that HBase does not natively support multi-attribute indexes, which limits rich query applications.

Hence, in this paper we explore the processing of basic spatio-temporal queries in HBase for stDSS. From a variety of applications, we mainly address three common and useful spatio-temporal queries, as follows:

• range query: querying data in a specific spatial and temporal range. For instance, in real-time monitoring and early warning of population, query the number of people in different time intervals within a specific area.

• kNN query (k-Nearest Neighbor): querying data to obtain the k nearest objects to a specific location during a certain period. For instance, in the past week, find the 5 nearest Uber taxis to a given shopping mall.

• GNN query (Group Nearest Neighbor): querying data to obtain the k nearest objects aggregately (measured by the sum of distances) to a group of specific locations during a certain period. For instance, during last month, find the nearest ship to three given docks.

As an example, Figure 1 shows the spatial distribution of users during two time intervals, [1, 6] and [7, 14]. For a range query, find the users who are in the spatial range marked by the dashed-line rectangle within time period [1, 6]; apparently, {u1, u3} is the result. For a 1NN query, if we want to find the user who is nearest to p1 during time periods [1, 6] and [7, 14], respectively, the result is u2 for [1, 6] and u1 for [7, 14]. For a GNN query, if we want to find the user who is nearest to p1 and p2 by summing the distances during time period [1, 6], the result is u2.

Figure 1: An example for range, kNN and GNN queries (user positions (ui, t) during [1, 6] and [7, 14], query points p1 and p2, and the per-user distance sums ||p1, ui|| + ||p2, ui||)

1.1 Motivation

Our motivation is to adapt HBase to efficiently process spatio-temporal queries as basic operations for a spatio-temporal decision support system. Although some previous works propose distributed indexes on HBase, they only consider the spatial dimension; more critically, most of them only concern how to design a schema for spatial data, i.e., they do not tackle the problem at the native level of HBase. The exception, MD-HBase [5], is designed to add an index structure into the meta table; however, it does not provide an index to efficiently retrieve the inner data of HBase regions. Our solution, STEHIX (Spatio-TEmporal Hbase IndeX), is built on the two-level lookup mechanism, which is the retrieval mechanism of HBase itself. First, we use the Hilbert curve to linearize geo-locations and store the converted one-dimensional data in the meta table, and for each region, we build a region index indexing the StoreFiles in HBase regions. We focus on range query, kNN query and GNN query in such an environment in this paper.
1.2 Contributions and paper organization

We address how to efficiently answer range queries, k nearest neighbor (kNN) queries and GNN queries on spatio-temporal data in HBase. Our solution is called STEHIX (Spatio-TEmporal Hbase IndeX), and it fully takes the inner structure of HBase into consideration. Previous works focus on building indexes based on traditional index structures, such as the R-tree and B-tree, while our method constructs the index based on HBase itself; thus, our index structure is more suitable for HBase retrieval. Moreover, STEHIX considers not only the spatial dimension but also the temporal one, which is more in line with user demand.

We use the Hilbert curve to partition space at the initial resolution, whose encoded values are used in the meta table to index HBase regions; we then use a quad-tree to partition Hilbert cells at a finer resolution. Based on this, we design a region index structure for each region, which contains the finer encoded values for indexing the spatial dimension and time segments for indexing the temporal dimension. Later, we show experimentally that such a two-level index structure, meta table + region index, is more suitable for HBase query processing. Based on our index structure, algorithms for range query, kNN query and GNN query are devised, and a load balancing policy and an optimization for the kNN query are also presented to raise STEHIX's performance. We compare STEHIX with MD-HBase on a real dataset, and the results show that our design philosophies make STEHIX outperform its counterpart.

In summary, we make the following contributions:

• We propose the STEHIX structure, which fully follows the inner mechanism of HBase and is a new attempt at building an index for spatio-temporal data on the HBase platform.

• We propose efficient algorithms for processing range query, kNN query and GNN query in HBase.

• We carry out comprehensive experiments to verify the efficiency and scalability of STEHIX.

The rest of this paper is organized as follows. Section 2 reviews related works. Section 3 formally defines the problem and prerequisites. Section 4 presents the STEHIX structure. In Section 5, algorithms for range query, kNN query and GNN query are presented. Section 6 reports the optimizations to the index. We experimentally evaluate STEHIX in Section 7. Finally, Section 8 concludes the paper with directions for future works.

2 Related Works

To overcome the drawbacks of traditional RDBMSs, cloud storage systems have emerged as an attractive alternative for large-scale data processing; they currently adopt hash-like approaches to retrieve data, which support only simple keyword-based queries and lack richer forms of information search. For data processing operations, several cloud data management systems (CDMs), such as HBase, have been developed. HBase, as a NoSQL database, is capable of handling large-scale storage and high insertion rates; however, it does not offer much support for rich index functions. Many works focus on this point and propose various approaches.

Nishimura et al. [5] address multidimensional queries for PaaS by proposing MD-HBase. It uses k-d-trees and quad-trees to partition space, adopts the Z-curve to convert multidimensional data to a single dimension, and supports multi-dimensional range and nearest neighbor queries, leveraging a multi-dimensional index structure layered over HBase.
However, MD-HBase builds its index in the meta table and does not index the inner structure of regions, so scan operations must be carried out to find results, which reduces its efficiency.

Hsu et al. [6] propose a novel key formulation scheme based on the R+-tree, called the KR+-tree, and based on it, spatial query algorithms for kNN query and range query are designed. Moreover, the proposed key formulation schemes are implemented on both HBase and Cassandra. Experiments on real spatial data demonstrate that the KR+-tree outperforms MD-HBase. The KR+-tree is able to balance the number of false positives against the number of sub-queries, which considerably improves the efficiency of range and kNN queries. This work designs the index according to features found in experiments on HBase and Cassandra; however, it still does not consider the inner structure of HBase.

Zhou et al. [7] propose an efficient distributed multi-dimensional index (EDMI), which contains two layers: the global layer divides the space into many subspaces using a k-d-tree, and in the local layer, each subspace is associated with a Z-order prefix R-tree (ZPR-tree). The ZPR-tree avoids the overlap of MBRs and obtains better query performance than other packed R-trees and the R*-tree. The paper experimentally evaluates EDMI on HBase for point, range and kNN queries, which verifies its superiority. Compared with MD-HBase, EDMI uses the ZPR-tree in the bottom layer, while MD-HBase employs scan operations, so EDMI provides better performance.

Han et al. [8] propose the HGrid data model for HBase. HGrid is based on a hybrid index structure, combining a quad-tree and a regular grid as primary and secondary indices, and supports efficient processing of range and kNN queries. The paper also formulates a set of guidelines on how to organize data for geo-spatial applications in HBase. The model does not outperform all its competitors in terms of query response time; however, it requires less space than the corresponding quad-tree and regular-grid indices.

HBaseSpatial, a scalable spatial data storage system based on HBase, is proposed by Zhang et al. [9]. Compared with MongoDB and MySQL, experimental results show that it can effectively enhance the query efficiency of big spatial data and provides a good solution for storage. However, this model is not compared with other distributed index methods.

All the previous works mentioned above consider only spatial queries. Moving objects, a certain type of geo-spatial application, require high update rates and efficient real-time queries on multiple attributes such as time period and arbitrary spatial dimensions. Du et al. [10] present a hybrid index structure based on HBase, using an R-tree for indexing space and applying the Hilbert curve for traversing neighboring space. It supports efficient multi-dimensional range queries and kNN queries, and it is especially adept at handling skewed data compared with MD-HBase and the KR+-tree. As this work focuses on moving objects, it differs from our goal, and it also does not take the inner structure of HBase into account.

To address the shortcomings mentioned above, we propose the STEHIX structure, which fully follows the inner mechanism of HBase and is a new attempt at building an index for spatio-temporal data on the HBase platform.

3 Problem Definition and Prerequisites

In this section, we first formally describe spatio-temporal data, and then present the structure of HBase storage.
For simplicity, only two-dimensional space is considered in this paper; however, our method can be directly extended to higher-dimensional spaces. A record r of spatio-temporal data can be denoted as 〈x, y, t, A〉, where (x, y) is the geo-location of the record, t is the valid time when the data is produced, and A represents other attributes, such as user id, object shape, descriptions, etc.

We now describe the structure of storage and index in HBase [11], [12]; for simplicity, some unrelated components, such as the HLog and versions, are omitted. Usually, an HBase cluster is composed of at least one administrative server, called the Master, and several other servers holding data, called RegionServers. Logically, a table in HBase is similar to a grid, where a cell can be located by a given row identifier and column identifier. Row identifiers are implemented by row keys (rk), and a column identifier is represented by column family (cf) + column qualifier (cq), where a column family consists of several column qualifiers. The value in a cell can be referred to in the format (rk, cf:cq). Table 1 shows a logical view of a table in HBase. For instance, value v1 can be referred to as (rk1, cf1:cq1).

Table 1: Logical View of an HBase Table

              cf1                  cf2
       cq1    cq2    cq3    cqa    cqb
rk1    v1     v2     v3     v4     v5
rk2    v6     v7     v8     v9     v10

Physically, a table in HBase is horizontally partitioned along rows into several regions, each of which is maintained by exactly one RegionServer. The client directly interacts with the respective RegionServer when executing read or write operations. When data, formally 〈rk, cf:cq, value〉 (we alternatively use the term key-value data in the rest of the paper), are written into a region, the RegionServer first keeps the data in a list-like memory structure called the MemStore, where each entry is pre-configured with the same fixed size (usually 64KB) and the size of a certain number of entries equals that of a block of the underlying storage system, such as HDFS. When the size of the MemStore exceeds a pre-configured threshold, the whole MemStore is written into the underlying system as a StoreFile, whose structure is similar to that of the MemStore. Further, when the number of StoreFiles exceeds a certain number, the RegionServer executes a compaction operation to merge the StoreFiles into a new large one.

HBase provides a two-level lookup mechanism to locate the value corresponding to a key (rk, cf:cq). The catalog table meta stores the relation {[table name]:[start row key]:[region id]:[region server]}; thus, given a row key, the corresponding RegionServer can be found, and then that RegionServer searches the value locally according to the given key (rk, cf:cq). Figure 2 shows an example of the HBase two-level lookup structure. From the above description, we can see that HBase only provides a simple hierarchical index structure based on the meta table, and the corresponding RegionServer must perform scan work to refine the results, which is inefficient for handling spatio-temporal queries.

Figure 2: HBase two-level lookup (the meta table maps {table, row key} to regions and RegionServers; each region holds a MemStore and StoreFiles of 64KB key-value entries such as (rk1, cf1:cq1, value1))
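For illustration, the following minimal sketch (in Python; not the actual HBase client API) mimics the two-level lookup just described: a sorted meta table maps row-key ranges to regions and servers, and the located region then resolves (rk, cf:cq) locally. All names and the dictionary-based store are assumptions made for this example only.

    from bisect import bisect_right

    # First level: the meta table as a sorted list of (start row key, region, server).
    META = [
        ("rk0", "regionA", "serverI"),
        ("rk5", "regionB", "serverII"),
    ]

    # Second level: per-region stores standing in for MemStore/StoreFiles.
    REGIONS = {
        "regionA": {"rk1": {"cf1:cq1": "v1", "cf1:cq2": "v2"}},
        "regionB": {"rk7": {"cf2:cqa": "v9"}},
    }

    def locate_region(rk):
        # Find the region whose start row key is the largest one <= rk.
        starts = [s for s, _, _ in META]
        i = bisect_right(starts, rk) - 1
        _, region_id, server = META[i]
        return region_id, server

    def get(rk, col):
        # The responsible RegionServer then searches the value locally.
        region_id, server = locate_region(rk)
        return REGIONS[region_id].get(rk, {}).get(col), region_id, server

    print(get("rk1", "cf1:cq1"))   # ('v1', 'regionA', 'serverI')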
4 STEHIX Structure

In this section, we present the structure of our index, STEHIX (Spatio-TEmporal Hbase IndeX). The following philosophies are considered during index design: 1) for applications, it should not be necessary for users to design a dedicated schema for querying spatio-temporal data, i.e., our index should add no restriction on schema design, but be an inner structure associated with HBase; 2) the index should be in accordance with the architecture of HBase as closely as possible; 3) the index should be adaptive to the data distribution.

For design rule 1), we do not care about the schema design and generalize each record to be a key-value datum in the StoreFile (MemStore), formally (rk, cf:cq, r), where r = 〈x, y, t, A〉. For design rule 2), our index is built on the two-level lookup mechanism. In particular, we use the Hilbert curve to linearize geo-locations and store the converted one-dimensional data in the meta table, and for each region, we build a region index to index the StoreFiles. Figure 3 shows an overview of the STEHIX architecture.

Figure 3: Overview of STEHIX (the meta table maps Hilbert cell ranges [hs_i, he_i] to regions and servers; each region keeps an s-index and a t-index over its StoreFiles)

4.1 Meta Table Organization

We use the Hilbert curve to partition the whole space at the initial granularity. According to the design rationale of HBase, the prefixes of row keys should differ so that the overhead of inserting data can be distributed over the RegionServers, and this design satisfies that demand.

The Hilbert curve is a kind of space filling curve, which maps a multi-dimensional space into a one-dimensional space. In particular, the whole space is partitioned into equal-size cells, and then a curve passes through each cell exactly once in some sequence, so that every cell is assigned a sequence number. Different space filling curves are distinguished by different sequencing methods. Since the transformation loses information, space filling curves are evaluated by the criterion of locality preservation, i.e., how much the proximity relations of the original space are changed in the one-dimensional space. The Hilbert curve is proved to be the space filling curve with the best locality preservation [13]. With a Hilbert curve, any object in the original space is transformed into the [0, 2^{2λ} − 1] space, where λ is called the order of the Hilbert curve. Figure 4 shows four Hilbert curves in two-dimensional space with λ = 1, 2, 3 and 4.

Figure 4: Hilbert curves with λ = 1, 2, 3 and 4 (points p1, p2 and range R1 are marked on the λ = 2 curve)

We describe three functions on the Hilbert curve: the first maps a point in the original space to a value in the one-dimensional space, the second maps a range window to a series of intervals, and the third retrieves the proximity cells of a point. Specifically, for a Hilbert curve with order λ:

• coorToCell(p). Given a point p = (x1, x2, ..., xn) in an n-dimensional space S, coorToCell(p) returns a cell number (between 0 and 2^{2λ} − 1) referring to the cell where p lies within S.

• rectToIntervals(R). Given a range window R = (x1^l, x2^l, ..., xn^l, x1^u, x2^u, ..., xn^u) in an n-dimensional space S, where xi^l and xi^u (1 ≤ i ≤ n) are the lower and upper bounds of the i-th dimension, respectively, rectToIntervals(R) returns a series of intervals representing the cells intersecting with R in S.

• getNeighborCells(p). Given a point p = (x1, x2, ..., xn) in an n-dimensional space S, getNeighborCells(p) returns a list of cell numbers referring to the cells which are neighbors of the cell coorToCell(p).

For instance, in Figure 4 (b), coorToCell(p1) = 2, coorToCell(p2) = 13, rectToIntervals(R1) = {[1,2], [7,8], [11,15]}, and getNeighborCells(p2) = {1, 2, 7, 8, 11, 12, 15, 14}.
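The sketch below illustrates the three functions for the two-dimensional case, using the classic bit-manipulation formulation of the Hilbert mapping; it assumes coordinates already quantized to the 2^λ × 2^λ grid, and rectToIntervals is implemented by naive cell enumeration (a real implementation would decompose the rectangle more cleverly). The names and signatures are ours, not the paper's.

    def _rot(n, x, y, rx, ry):
        # Rotate/flip a quadrant so that recursion sees a canonical orientation.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        return x, y

    def coor_to_cell(x, y, lam):
        # Map grid coordinates to a Hilbert cell number in [0, 2^(2*lam) - 1].
        n = 1 << lam
        d, s = 0, n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            x, y = _rot(n, x, y, rx, ry)
            s //= 2
        return d

    def rect_to_intervals(xl, yl, xu, yu, lam):
        # Hilbert numbers of all cells in the rectangle, merged into intervals.
        ds = sorted(coor_to_cell(x, y, lam)
                    for x in range(xl, xu + 1) for y in range(yl, yu + 1))
        intervals = [[ds[0], ds[0]]]
        for d in ds[1:]:
            if d == intervals[-1][1] + 1:
                intervals[-1][1] = d          # extend the current interval
            else:
                intervals.append([d, d])      # start a new interval
        return intervals

    def get_neighbor_cells(x, y, lam):
        # Hilbert numbers of the (up to 8) cells adjacent to cell (x, y).
        n = 1 << lam
        return [coor_to_cell(x + dx, y + dy, lam)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < n and 0 <= y + dy < n]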
Based on the above descriptions, we use the Hilbert cell value as the row key in the meta table to index spatio-temporal data at the first level; thus, each record can be placed into the corresponding region according to the Hilbert value of the spatial part of the record. In particular, the following mapping structure is built in the meta table (for simplicity, the table name is omitted): {[start Hilbert cell, end Hilbert cell]:[region id]:[region server]}. Initially, assuming there are N regions across M RegionServers, we can uniformly assign the Hilbert cells to these regions; for instance, the first entry could be {[0, (2^{2λ} − 1)/N − 1] : regionA : serverI}, and the second {[(2^{2λ} − 1)/N, 2(2^{2λ} − 1)/N − 1] : regionB : serverII}.

4.2 Region Index Structure

For retrieving local data efficiently, we design the region index, which is kept in memory like the MemStore. Since the MemStore is always kept in memory, the region index only indexes the StoreFiles; consequently, for answering a query, the MemStore must still be scanned to guarantee the completeness of the results. The region index is a list-like in-memory structure, each entry of which points to a list of addresses referring to key-value data in the StoreFiles. The region index consists of two parts: one is called the s-index, indexing the spatial component of the data, and the other is called the t-index, indexing the temporal part. This design benefits query efficiency, as we will see in the next section.

For constructing the s-index, the space is further partitioned at a finer granularity, i.e., each Hilbert cell is recursively divided by a quad-tree, and the resulting tiles are encoded with binary Z-order. This design deals with skewed data: when a hotspot is detected, the quad-tree can be applied recursively until the hotspot is eliminated. Later, we use this idea to design an adaptive load balancing policy. After partitioning a Hilbert cell, each tile corresponds to an entry in the s-index, i.e., the entry points to the key-value data whose geo-locations lie in that tile. For instance, Figure 5 shows an example of the meta table and region index, where in the meta table, Hilbert cells [0, 1] index regionA : serverI and cells [2, 3] index regionB : serverII, respectively. For regionA, Hilbert cells 0 and 1 are divided by quad-trees into 11 tiles, 7 of which are 2-bit tiles and 4 of which are 4-bit tiles. For each entry in the s-index, the identifier is a combination of the Hilbert value and the binary Z-order value; for instance, entry 0-10, where 0 is the number of Hilbert cell 0 and 10 is the code of the lower-right tile in Hilbert cell 0, points to a list containing two addresses referring to two key-value records in the StoreFile.
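A small sketch of how such an s-index entry identifier might be derived for a record: take the Hilbert cell of the record's location, then descend the quad-tree inside that cell, emitting two Z-order bits per level. The bit order and corner convention below are assumptions made for illustration; the paper fixes them only through the example of Figure 5.

    def tile_code(fx, fy, depth):
        # Binary Z-order code of the quad-tree tile containing a point, where
        # (fx, fy) is the point's fractional position inside its Hilbert cell
        # (both in [0, 1)) and depth is the number of quad-tree splits, giving
        # a 2*depth-bit code such as '10' or '0011' (cf. Figure 5).
        code = ""
        for _ in range(depth):
            fx, fy = fx * 2, fy * 2
            qx, qy = int(fx), int(fy)      # which half along each axis
            code += str(qy) + str(qx)      # assumed order: y bit, then x bit
            fx, fy = fx - qx, fy - qy
        return code

    def s_index_key(cell, fx, fy, depth=1):
        # Entry identifier 'cell-tile' as used in the s-index, e.g. '0-10'.
        return "%d-%s" % (cell, tile_code(fx, fy, depth))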
For building the t-index, we use a period T to bound the length of the t-index list; this is based on the fact that there may be some cycle in the spatial change of the objects. The period T is divided into several segments, each of which corresponds to an entry in the t-index. Each entry points to a list of addresses referring to key-value data in the StoreFiles whose temporal component modulo T lies in the segment. Continuing the example, Figure 5 shows the structure of the t-index. Let T = 24, which means that a period of 24 hours is a cycle, and let each segment be 3 hours, which means T is divided into 8 segments; then entry [3, 6) points to 8 key-value data whose temporal value modulo 24 lies between 3 and 6.

Figure 5: Region index structure (in regionA, s-index entries such as 0-10, 0-11, 1-0000, ..., 1-01 and t-index segments [0, 3), [3, 6), ..., [21, 0) each point to a list of addresses of key-value data in the StoreFile)

5 Query Processing

In this section, the processing algorithms for range query, kNN query and GNN query are presented.

5.1 Range Query

A range query q = (xl, yl, xu, yu, ts, te) aims to find all records whose geo-locations lie in the range (xl, yl, xu, yu) during the time interval [ts, te]. The basic work flow for processing a range query q is as follows: first, using the Hilbert curve, the spatial predicate (xl, yl, xu, yu) is converted into a set of one-dimensional intervals Iq; then, according to the mapping relation in the meta table, the involved RegionServers are informed to search the corresponding regions locally, aided by the region index.

Here we propose a query optimization, i.e., using the s-index and t-index to calculate selectivity, which helps to choose the more selective index so as to filter more unrelated data. In particular, the spatial predicate is recursively divided by the quad-tree, the results are intersected with the entries in the s-index, and the number of addresses of key-value data reachable this way, say sn, is calculated; similarly, a number tn is calculated with the t-index. If sn is less than tn, the s-index is followed to retrieve the results; otherwise, the t-index is used.

Algorithm 1 describes range query processing in STEHIX. In line 1, the spatial predicate is converted into the one-dimensional intervals Iq, and in line 2 the temporal predicate is folded into the [0, T] interval. In line 3, function findRegions() finds the involved regions, i.e., those that intersect with Iq. In lines 4 to 11, each corresponding region index is inspected to retrieve results; in particular, the s-index and t-index are used to calculate the selectivity of the query, implemented by function getCard(), and the index with the lower cardinality is chosen to retrieve the results.

Algorithm 1 Range Query Processing
Require: q = (xl, yl, xu, yu, ts, te)
Ensure: Qlist // result list
1: Iq = rectToIntervals(xl, yl, xu, yu)
2: keys = ts mod T, keye = te mod T
3: Regions = findRegions(Iq)
/* the following processing is executed separately in each region */
4: for each region ∈ Regions do
5:   sn = region.s-index.getCard(xl, yl, xu, yu)
6:   tn = region.t-index.getCard(keys, keye)
7:   if sn ≤ tn then
8:     Qlist ← region.s-index.searchIndex(q)
9:   else
10:    Qlist ← region.t-index.searchIndex(q)
11:  end if
12: end for
13: return Qlist
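The per-region core of Algorithm 1 can be sketched as follows, assuming hypothetical s_index/t_index objects exposing get_card() and search() (the paper does not prescribe their exact interfaces); whichever index yields fewer candidate addresses is followed, and the other predicate is then verified on the retrieved records.

    def range_query_region(region, q, T):
        # q = (xl, yl, xu, yu, ts, te), answered inside one region.
        xl, yl, xu, yu, ts, te = q
        key_s, key_e = ts % T, te % T                  # fold time into [0, T)
        sn = region.s_index.get_card(xl, yl, xu, yu)   # candidates via s-index
        tn = region.t_index.get_card(key_s, key_e)     # candidates via t-index
        if sn <= tn:
            candidates = region.s_index.search(xl, yl, xu, yu)
            return [r for r in candidates if ts <= r.t <= te]
        candidates = region.t_index.search(key_s, key_e)
        return [r for r in candidates
                if xl <= r.x <= xu and yl <= r.y <= yu]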
Figure 5 also shows an example of range query processing in STEHIX. The spatial bound of q is depicted with a dashed line, and we assume the temporal predicate of q is [3, 6]. Hilbert cells 0 and 1 intersect with q; thus, two entries in the meta table are examined, namely {[0, 1] : regionA : serverI} and {[2, 3] : regionB : serverII}. In regionA, the entries in the s-index are intersected with the spatial predicate of q, resulting in the 5 entries 0-10, 0-11, 1-0001, 1-0011 and 1-01, which refer to 7 addresses of key-value data in total; similarly, entry [3, 6) of the t-index refers to 8 addresses. Consequently, the s-index is followed to retrieve the results.

5.2 kNN Query

A kNN query can be formally defined as follows: given a set R of spatio-temporal data records, a kNN query q = (xq, yq, ts, te, k) aims to find a set R(q) ⊆ R such that |R(q)| = k, d(o, (xq, yq)) ≤ d(o′, (xq, yq)) for all o ∈ R(q), o′ ∈ R\R(q), and o.t, o′.t ∈ [ts, te], where d() is the Euclidean distance function.

We do not want to accomplish the kNN query with a sequence of range queries, i.e., by continuously enlarging the spatial range of the query until k records are obtained [14], because such a method would cause heavy querying overhead. Instead, we propose an approach based on the incremental retrieval idea [15]. The basic work flow is that proximity objects of the point (xq, yq) are constantly and incrementally retrieved until k results are found. In particular, first the Hilbert cell h containing the point (xq, yq) is located, then the corresponding region index is used to retrieve all records lying in h; meanwhile, the neighbor cells of h are also retrieved, and these records and Hilbert cells are all enqueued into a priority queue whose priority metric is the distance from (xq, yq) to the record or Hilbert cell. The top element is then repeatedly dequeued and processed, either being added to the result list or being followed to retrieve neighbor cells to be enqueued, until k results are found.

Algorithm 2 presents kNN query processing. The first line initializes a priority queue PQ in which each element is ordered by the distance from (xq, yq) to the element. An element can be a Hilbert cell or a record: if it is a Hilbert cell, the distance is MINDIST [16]; otherwise, the distance is the Euclidean distance from (xq, yq) to the geo-location of the record. In line 2, the Hilbert cell containing (xq, yq) is obtained, and it is enqueued in line 3. From line 4, the procedure repeatedly retrieves the top element e from PQ (line 5) and processes it. In particular, if e is a Hilbert cell (line 6), the corresponding region rg is found from the meta table (line 7), and the corresponding region index is searched to retrieve all records satisfying the temporal predicate (line 8), which are enqueued into PQ (lines 9 to 11); after that, the neighbor cells of e are obtained and enqueued into PQ (lines 12 to 15). Otherwise, i.e., if e is a record (line 16), e is a result and is added into Qlist (line 17), and the above procedure is looped until the size of Qlist reaches k (lines 18 to 20).

Algorithm 2 kNN Query Processing
Require: q = (xq, yq, ts, te, k)
Ensure: Qlist // result list
1: PQ = null // initialize a priority queue
2: h = coorToCell(xq, yq)
3: PQ.enqueue(h, MINDIST((xq, yq), h))
4: while PQ ≠ ∅ do
5:   e = PQ.dequeue()
6:   if e is typeof cell then
7:     rg = findRegions(e)
8:     RS = rg.findRecords(e, (ts, te))
9:     for each record ∈ RS do
10:      PQ.enqueue(record, dist((xq, yq), record))
11:    end for
12:    CellSet = getNeighborCells(e.center)
13:    for each cell ∈ CellSet do
14:      PQ.enqueue(cell, MINDIST((xq, yq), cell))
15:    end for
16:  else if e is typeof record then
17:    Qlist ← e
18:    if Qlist.size() = k then
19:      return Qlist
20:    end if
21:  end if
22: end while
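A compact best-first sketch of Algorithm 2 using a binary heap is shown below. The helpers cell_of, neighbors_of, find_records, mindist and dist are assumed to be supplied (e.g., by the Hilbert utilities sketched in Section 4 together with the region lookup); we also deduplicate visited cells, which Algorithm 2 leaves implicit.

    import heapq

    def knn_query(q_xy, ts, te, k, cell_of, neighbors_of, find_records,
                  mindist, dist):
        # Best-first incremental retrieval: cells are expanded in MINDIST
        # order, and a record pops only when no unexplored cell could hold
        # a nearer record, so popped records are confirmed results.
        results, tie = [], 0          # tie counter avoids comparing payloads
        h = cell_of(q_xy)
        seen = {h}
        pq = [(mindist(q_xy, h), tie, "cell", h)]
        while pq and len(results) < k:
            _, _, kind, e = heapq.heappop(pq)
            if kind == "cell":
                for rec in find_records(e, ts, te):   # records in this cell
                    tie += 1
                    heapq.heappush(pq, (dist(q_xy, rec), tie, "record", rec))
                for c in neighbors_of(e):             # expand the frontier
                    if c not in seen:
                        seen.add(c)
                        tie += 1
                        heapq.heappush(pq, (mindist(q_xy, c), tie, "cell", c))
            else:
                results.append(e)
        return results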
5.3 GNN Query

A GNN query in our work can be formally defined as follows: given a set R of spatio-temporal data records and a set of location points P (|P| = n), a GNN query q = (P, ts, te, k) aims to find a set R(q) ⊆ R such that |R(q)| = k and the points of R(q) have the smallest sums of distances to all points in P, i.e., Σ_{i=1}^{n} d(o, (xi, yi)) ≤ Σ_{i=1}^{n} d(o′, (xi, yi)) for all o ∈ R(q), o′ ∈ R\R(q), and o.t, o′.t ∈ [ts, te], where d() is the Euclidean distance function.

Different from the kNN query, the GNN query aims to find a group of points nearest, in aggregate, to a set of points. In kNN query processing, the Hilbert cell h containing the point (xq, yq) is located first, while in GNN processing, we first find the ideal nearest neighbor p, which may not exist in the dataset R; the intuition of this approach is that the actual nearest neighbors are the point(s) "near" p. Let (x, y) be the coordinates of the ideal 1NN point p and (xi, yi) be the coordinates of point pi ∈ P; then p minimizes the sum-of-distances function:

    sumdist(p, P) = Σ_{i=1}^{n} sqrt((x − xi)^2 + (y − yi)^2)    (1)

Taking the partial derivatives of sumdist(p, P) with respect to the variables x and y and setting them to zero, we have:

    ∂sumdist(p, P)/∂x = Σ_{i=1}^{n} (x − xi) / sqrt((x − xi)^2 + (y − yi)^2) = 0
    ∂sumdist(p, P)/∂y = Σ_{i=1}^{n} (y − yi) / sqrt((x − xi)^2 + (y − yi)^2) = 0    (2)

However, these equations cannot be solved analytically when n > 2. Following the method in [18], we start with the initial coordinates x = (Σ_{i=1}^{n} xi)/n, y = (Σ_{i=1}^{n} yi)/n, and then update them as follows:

    x = x − η · ∂sumdist(p, P)/∂x,  y = y − η · ∂sumdist(p, P)/∂y    (3)

where η is a step size. The process is repeated until the distance function sumdist(p, P) converges to a minimum value. We call this procedure p = getNearest(P) (a code sketch of it follows Algorithm 3). The points of R(q) are then searched for in the range around p.

The basic work flow is similar to that of kNN query processing introduced above. Algorithm 3 presents GNN query processing. In particular, first the Hilbert cell h containing the point p = (x, y) is located, then the corresponding region index is used to retrieve all records lying in h; meanwhile, the neighbor cells of h are also retrieved, and these records and Hilbert cells are all enqueued into a priority queue whose priority metric is the sum of distances from P to the record or Hilbert cell. The top element is then repeatedly dequeued and processed, either being added to the result list or being followed to retrieve neighbor cells to be enqueued, until k results are found.

Algorithm 3 GNN Query Processing
Require: q = (P, ts, te, k)
Ensure: Qlist // result list
1: PQ = null // initialize a priority queue
2: p = getNearest(P)
3: h = coorToCell(p)
4: PQ.enqueue(h, sumMINDIST(P, h))
5: while PQ ≠ ∅ do
6:   e = PQ.dequeue()
7:   if e is typeof cell then
8:     rg = findRegions(e)
9:     RS = rg.findRecords(e, (ts, te))
10:    for each record ∈ RS do
11:      PQ.enqueue(record, sumdist(P, record))
12:    end for
13:    CellSet = getNeighborCells(e.center)
14:    for each cell ∈ CellSet do
15:      PQ.enqueue(cell, sumMINDIST(P, cell))
16:    end for
17:  else if e is typeof record then
18:    Qlist ← e
19:    if Qlist.size() = k then
20:      return Qlist
21:    end if
22:  end if
23: end while
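The virtual centroid computation getNearest(P) of Eq. (3) can be sketched as plain gradient descent; the step size eta, tolerance and iteration cap below are assumed knobs that the paper and [18] leave to the implementer.

    def get_nearest(P, eta=0.05, tol=1e-9, max_iter=100000):
        # Minimize sumdist(p, P) of Eq. (1), starting from the centroid.
        x = sum(px for px, _ in P) / len(P)
        y = sum(py for _, py in P) / len(P)
        for _ in range(max_iter):
            gx = gy = 0.0
            for px, py in P:
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d > 0:                        # skip coincident points
                    gx += (x - px) / d           # partial derivatives, Eq. (2)
                    gy += (y - py) / d
            nx, ny = x - eta * gx, y - eta * gy  # update rule of Eq. (3)
            if abs(nx - x) + abs(ny - y) < tol:  # converged
                break
            x, y = nx, ny
        return x, y

    # e.g. get_nearest([(0, 0), (2, 0), (1, 2)]) approximates the point
    # minimizing the sum of distances to the three locations.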
6 Optimizations

In this section, we propose two methods for raising the performance of STEHIX, from the aspects of load balancing and query optimization.

6.1 Adaptive Load Balancing

For achieving design rule 3), adaptive load balancing is considered. Our spatial partition procedure contains two phases: the first uses the Hilbert curve, and the second uses the quad-tree. Load balancing is based on the second phase and region splitting. In particular, when the volume of a region exceeds a limit due to a hotspot in the spatial dimension, the procedure detects which Hilbert cell is the hotspot and uses a quad-tree to divide it into four subspaces; thus, the original region is split into five regions, i.e., four corresponding to the four subspaces and one corresponding to the undivided Hilbert cell(s). After that, the meta table is updated to renew the mapping information, as is the region index.

Figure 6 shows an example of a region split. When a hotspot arises in Hilbert cell 0, the cell is divided into four subspaces by the quad-tree, and the corresponding region is split into five, namely 0-00, 0-01, 0-10, 0-11 and 1, and the meta table and the new regions are updated accordingly.

Figure 6: Load balancing (Hilbert cell 0 of region A is split into subspaces 0-00, 0-01, 0-10 and 0-11, assigned to new regions R1-R4 on serverII and serverX, while cell 1 remains in region A on serverI)

6.2 Optimization for kNN Query

From the kNN algorithm we can see that, each time the records of a Hilbert cell are retrieved, the meta table must be searched to locate the corresponding region, which increases the overhead of the query. To deal with this problem, we add a modification to the region index: each region index ri is connected to the regions whose Hilbert cells are the neighbors of ri's region's Hilbert cells. Thus, when the getNeighborCells() method is invoked, the current region is able to retrieve records from nearby regions directly. However, not all the records can be retrieved this way, and in that case the meta table should still be searched. Nevertheless, this optimization reduces the overhead of querying the meta table.

7 Experimental Evaluation

We evaluate our algorithms on a real dataset, which contains trajectories of taxis in Beijing (http://activity.datatang.com/20130830/description). In particular, the dataset contains about 100 million records, the temporal range is from Nov. 1st to 3rd, and each record contains a vehicle ID, geo-location, recording time stamp, etc. Our algorithms are implemented on Hadoop 2.5.1 and HBase 0.98.6 and run on a cluster whose size is varied from 5 to 33, in which each node is equipped with an Intel(R) Core(TM) i3 CPU @ 3.40GHz, 4GB main memory (16GB for the Master), and 500GB storage; the operating system is CentOS release 6.5 64-bit, and the network bandwidth is 10Mbps. For comparison, we choose MD-HBase due to its similar functionality.

Figure 7: Experimental results for range queries: (a) effect of selectivity θ (%), (b) effect of cluster size (delay in ms; STEHIX vs. MD-HBase)

7.1 Range Queries

First, we evaluate the algorithm for range queries, and we introduce two parameters to test the algorithm under various conditions. One is the selectivity θ, defined as:

    θ = (L(ts,te) / Lt) · (A_Rq / A_S)

where L(ts,te) is the length of the query temporal range (ts, te), Lt is the length of the temporal extent of the dataset, A_Rq is the area of the query spatial range Rq, and A_S is the area of the whole space. The selectivity specifies the size of the query range: the larger θ is, the more spatio-temporal records are involved.
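The selectivity is straightforward to compute; a small sketch (with hypothetical units) follows.

    def selectivity(ts, te, t_min, t_max, rq_area, space_area):
        # theta = (temporal fraction covered) * (spatial fraction covered)
        return ((te - ts) / (t_max - t_min)) * (rq_area / space_area)

    # e.g. a 7.2-hour window over a 72-hour dataset and 10% of the space:
    # selectivity(0, 7.2, 0, 72, 10.0, 100.0) == 0.1 * 0.1 == 0.01, i.e. 1%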
In this experiment, the default values of θ and the cluster size are 10% and 9, respectively. For each value of θ or cluster size, we issue 10 queries with different temporal and spatial ranges, and we collect the average response time as the measurement of performance.

First, we vary θ from 3% to 50%; Figure 7(a) shows the results. We can see that the response time increases with θ for both methods. This is because a larger selectivity means more records are retrieved and examined, which increases the processing time. However, STEHIX outperforms MD-HBase, which can be explained by the design of the region index. Although MD-HBase builds an index in the meta table, it does not index the inner structure of regions; thus, scan operations are carried out to find results, which is costly. STEHIX is adapted to the two-level architecture of HBase and is able to use the region index to efficiently search each region, which greatly improves performance.

Next, we vary the cluster size from 5 to 33; Figure 7(b) shows the results. It is apparent that STEHIX excels, with a nearly flat response time and good scalability. When the cluster size is increased, more RegionServers take part in the processing and use their region indexes in parallel. However, due to the lack of indexing of StoreFiles, the scalability of MD-HBase is not good.

Figure 8: Experimental results for kNN queries: (a) effect of k, (b) effect of cluster size (delay in ms; STEHIX vs. MD-HBase)

7.2 kNN Queries

In this experiment, the default values of k and the cluster size are 8 and 9, respectively. First, we vary k from 4 to 32, and Figure 8(a) shows that STEHIX outperforms MD-HBase. When k is increased, both methods need more time to process queries. STEHIX uses less time to retrieve the k results, which can be explained by the same reason, i.e., the region index embedded in each HBase region. Then the cluster size is varied from 5 to 33; still, STEHIX is better than MD-HBase, as Figure 8(b) shows.

7.3 GNN Queries

In this experiment, we vary the size of the location set P and the cluster size to measure the performance of STEHIX. Note that MD-HBase does not study the GNN query, so we simply apply our virtual centroid method to it. We vary n (the size of P) from 3 to 9, and Figure 9(a) shows the results. We can see that with increasing n, the response time also increases; this is because a larger P costs more time for calculating the virtual centroid. However, the delay does not increase very steeply, due to the fact that computing the virtual centroid only costs CPU time. Similarly, STEHIX still outperforms MD-HBase when varying both n and the cluster size.

7.4 Effect of Optimizations

We examine the effect of the optimizations to STEHIX in this experiment, and Figure 10 shows the results. First, we use the maximum imbalance load ratio [17] as the metric and test our adaptive load balancing policy; the results of the comparison with non-load-balancing are plotted in Figure 10(b).
We can see that as the cluster size increases, both ratios rise; this is because the more nodes participate in the cluster, the more difficult it is to distribute the load uniformly. However, our load balancing method indeed takes effect: when the load balancing policy is used, the ratio is on average around 6, while the counterpart lies between about 38 and 70. Next, we test the effect of the kNN optimization; from Figure 10(a), we can see that the connections among region indexes indeed reduce the querying overhead.

Figure 9: Experimental results for GNN queries: (a) effect of the size of P, (b) effect of cluster size (delay in ms; STEHIX vs. MD-HBase)

Figure 10: Experimental results for optimizations: (a) effect of k (delay in ms; STEHIX vs. STEHIX with the kNN optimization), (b) effect of cluster size (maximum imbalance load ratio in %; load balancing vs. non-load-balancing)

8 Conclusion and Future Works

With the development of positioning technology, more and more spatio-temporal data need to be processed. Equipping HBase with efficient and scalable spatio-temporal querying capability will benefit the whole spatio-temporal decision support system. In this paper, we argue that many previous works fail to tackle this problem due to a lack of deep design for HBase, while we address the problem by proposing a novel index structure adapted to the two-level architecture of HBase, which is suitable for HBase query processing. Algorithms for range query, kNN query and GNN query are designed; moreover, optimizations for load balancing and for the kNN query are also proposed. We carry out extensive experimental studies to verify our index, and the results show that our approach for HBase is more efficient and scalable than the previous work. In the future, we plan to utilize this idea to efficiently store and retrieve graph data and to apply it to social networks.

Acknowledgment

This work is supported by NSF of China grants 61303062 and 71331008. We would like to thank Peijun He for helping with the implementation.

Bibliography

[1] Van Orshoven et al. (2011), Upgrading geographic information systems to spatio-temporal decision support systems, Mathematical and Computational Forestry & Natural Resource Sciences, 3(1): 36-41.

[2] HBase Wiki (2012), HBase: Bigtable-like structured storage for Hadoop HDFS, http://wiki.apache.org/hadoop/Hbase.

[3] Ralph Kimball, Margy Ross (1996), The Data Warehouse Toolkit, Wiley.

[4] Ralph Kimball, Margy Ross (2012), The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling, 2nd Edition, Wiley.

[5] Nishimura, S., Das, S., Agrawal, D., El Abbadi, A. (2011), MD-HBase: A scalable multi-dimensional data infrastructure for location aware services, In Mobile Data Management (MDM), 2011 12th IEEE International Conference on, 1: 7-16.

[6] Hsu, Y. T., Pan, Y. C., Wei, L. Y., Peng, W. C., Lee, W. C. (2012), Key formulation schemes for spatial index in cloud data managements, In Mobile Data Management (MDM), 2012 IEEE 13th International Conference on, 21-26.

[7] Zhou, X., Zhang, X., Wang, Y., Li, R., Wang, S. (2013), Efficient distributed multi-dimensional index for big data management,
In Web-Age Information Management, Springer Berlin Heidelberg, 130-141.

[8] Han, D., Stroulia, E. (2013), HGrid: A data model for large geospatial data sets in HBase, In Cloud Computing (CLOUD), 2013 IEEE Sixth International Conference on, 910-917.

[9] Zhang, N., Zheng, G., Chen, H., Chen, J., Chen, X. (2014), HBaseSpatial: A scalable spatial data storage based on HBase, In Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, 644-651.

[10] Du, N., Zhan, J., Zhao, M., Xiao, D., Xie, Y. (2015), Spatio-temporal data index model of moving objects on fixed networks using HBase, In Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, 247-251.

[11] Apache HBase (2012), Apache HBase reference guide, http://wiki.apache.org/hadoop/Hbase/HbaseArchitecture.

[12] George, L. (2011), HBase: The Definitive Guide, O'Reilly Media, Inc.

[13] Faloutsos, C., Roseman, S. (1989), Fractals for secondary key retrieval, In Proceedings of the Eighth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, 247-252.

[14] Wang, J., Wu, S., Gao, H., Li, J., Ooi, B. C. (2010), Indexing multi-dimensional data in a cloud system, In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, 591-602.

[15] Hjaltason, G. R., Samet, H. (1999), Distance browsing in spatial databases, ACM Transactions on Database Systems (TODS), 24(2): 265-318.

[16] Roussopoulos, N., Kelley, S., Vincent, F. (1995), Nearest neighbor queries, ACM SIGMOD Record, 24(2): 71-79.

[17] Vu, Q. H., Ooi, B. C., Rinard, M., Tan, K. L. (2009), Histogram-based global load balancing in structured peer-to-peer systems, IEEE Transactions on Knowledge and Data Engineering, 21(4): 595-608.

[18] Hochreiter, S., Younger, A. S., Conwell, P. R. (2001), Learning to learn using gradient descent, In Artificial Neural Networks — ICANN 2001, Springer Berlin Heidelberg.