13 April 2015


SpatialSpark: Big Spatial Data Processing using Spark

Introduction

We aim to implement a set of spatial operations for big spatial data using Apache Spark. Since Apache Spark claims a significant performance improvement over Hadoop MapReduce (10~100X faster, according to the project page), we decided to develop this project natively in Spark.

Spatial Join Processing

We have developed two spatial join operators, namely broadcast spatial join and partitioned spatial join. Broadcast spatial join is designed for efficiently joining one big dataset with a small one, for example, reverse geocoding a very large geo-tagged tweet dataset against city/county boundaries. This kind of spatial join involves a big dataset (geo-tagged tweets) and a considerably smaller dataset (political boundaries). Partitioned spatial join is more general and targets joining two big datasets. The basic idea is divide-and-conquer, following a design similar to Hadoop-GIS: the two big datasets are divided into small pieces via space decomposition, and each piece is processed by an executor.

[Figures: broadcast spatial join and partitioned spatial join]

More details are in our technical report.
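To make the broadcast strategy concrete, below is a minimal sketch of the idea, not SpatialSpark's actual API: build a JTS R-tree (STRtree) over the small dataset on the driver, broadcast it, and probe it from every partition of the big dataset. The sample data and the modern `org.locationtech.jts` package coordinates are illustrative assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.locationtech.jts.geom.Geometry
import org.locationtech.jts.index.strtree.STRtree
import org.locationtech.jts.io.WKTReader
import scala.jdk.CollectionConverters._

object BroadcastJoinSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("broadcast-spatial-join").setMaster("local[*]"))

    // Small side (hypothetical county boundaries), small enough to index on the driver.
    val reader = new WKTReader()
    val index = new STRtree()
    Seq(
      ("countyA", "POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))"),
      ("countyB", "POLYGON ((10 0, 20 0, 20 10, 10 10, 10 0))")
    ).foreach { case (id, wkt) =>
      val g = reader.read(wkt)
      index.insert(g.getEnvelopeInternal, (id, g)) // index each boundary by its MBR
    }
    index.build()
    val bIndex = sc.broadcast(index) // ship the R-tree to every executor once

    // Big side (hypothetical geo-tagged tweets) as an RDD of WKT points.
    val tweets = sc.parallelize(Seq(
      ("t1", "POINT (3 4)"), ("t2", "POINT (15 5)"), ("t3", "POINT (99 99)")))

    // Probe the broadcast index: MBR candidates first, then the exact predicate.
    val joined = tweets.mapPartitions { iter =>
      val r = new WKTReader() // one parser per partition, not per record
      iter.flatMap { case (tid, wkt) =>
        val p = r.read(wkt)
        bIndex.value.query(p.getEnvelopeInternal).asScala.collect {
          case (cid: String, g: Geometry) if g.contains(p) => (tid, cid)
        }
      }
    }

    joined.collect().foreach(println) // (t1,countyA), (t2,countyB); t3 matches nothing
    sc.stop()
  }
}
```

The R-tree lookup acts as a cheap MBR filter, so the exact `contains` test only runs on a handful of candidates per point (the classic filter-and-refine pattern). The partitioned join uses the same probe logic, except that both datasets are first keyed by grid cell so that each executor joins one cell's worth of data.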

Spatial Query

We have implemented range query (window query) with and without index support. If no index exists, a full scan is performed to generate query results; otherwise, an indexed query is performed using the pre-built index. We plan to develop more efficient index support for range queries, as well as kNN queries.

[Figure: range query via full scan]
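As a rough sketch of the two query paths (assumed helper names and an RDD of record MBRs; this is not SpatialSpark's code): the full scan filters every record against the query window, while the indexed path probes cached per-partition R-trees.

```scala
import org.apache.spark.rdd.RDD
import org.locationtech.jts.geom.Envelope
import org.locationtech.jts.index.strtree.STRtree
import scala.jdk.CollectionConverters._

object RangeQuerySketch {
  // Each record carries an id and its minimum bounding rectangle (MBR).
  type Rec = (String, Double, Double, Double, Double)

  // No index: test every record's MBR against the query window (full scan).
  def rangeQueryFullScan(data: RDD[Rec], window: Envelope): RDD[String] =
    data.filter { case (_, minX, minY, maxX, maxY) =>
      window.intersects(new Envelope(minX, maxX, minY, maxY))
    }.map(_._1)

  // Pre-built index: one R-tree per partition, cached so repeated window
  // queries can probe the trees instead of rescanning the raw records.
  def buildIndex(data: RDD[Rec]): RDD[STRtree] =
    data.mapPartitions { iter =>
      val tree = new STRtree()
      iter.foreach { case (id, minX, minY, maxX, maxY) =>
        tree.insert(new Envelope(minX, maxX, minY, maxY), id)
      }
      tree.build() // freeze the tree; queries are read-only afterwards
      Iterator.single(tree)
    }.cache()

  def rangeQueryIndexed(index: RDD[STRtree], window: Envelope): RDD[String] =
    index.flatMap(_.query(window).asScala.map(_.asInstanceOf[String]))
}
```

Building and caching the per-partition trees only pays off when many windows are queried against the same dataset; for a one-off query, the full scan is typically cheaper.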

Code and Report

Source code is available on GitHub, and a technical report is available here.

