Mosaic by Databricks Labs


An extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.


Why Mosaic?

Mosaic was created to simplify the implementation of scalable geospatial data pipelines by binding together common open source geospatial libraries via Apache Spark, with a set of examples and best practices for common geospatial use cases.

What does it provide?

Mosaic provides geospatial tools covering the stages of a general geospatial pipeline, illustrated below.

Image: Mosaic general pipeline.

The supported languages are Scala, Python, R, and SQL.

How does it work?

The Mosaic library is written in Scala to guarantee maximum performance with Spark, and where possible it uses code generation to give an extra performance boost.

The other supported languages (Python, R, and SQL) are thin wrappers around the Scala code.

Image 1: Mosaic logical design.
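
Because each binding resolves to the same Scala expressions, the same operation can be phrased through the Python API or as SQL text and produce the same underlying plan. Below is a minimal sketch, assuming Mosaic has been enabled as described under Getting started and that this also registers the SQL function names in the session; st_area and the WKT literal are illustrative:

from pyspark.sql import functions as F
import mosaic as mos

# One geometry, expressed as WKT: the unit square.
df = spark.createDataFrame([("POLYGON ((0 0, 0 1, 1 1, 1 0, 0 0))",)], ["wkt"])

# Python binding: a thin wrapper that builds the underlying Scala expression.
via_python = df.select(mos.st_area(F.col("wkt")).alias("area"))

# SQL text: resolves to the same Scala expression once Mosaic is registered.
via_sql = df.select(F.expr("st_area(wkt)").alias("area"))

via_python.show()  # both return area = 1.0 for the unit square
via_sql.show()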

Getting started

Create a Databricks cluster running Databricks Runtime 10.0 (or later).

We recommend Databricks Runtime 11.2 or higher with Photon enabled; this leverages the Databricks H3 expressions when using the H3 grid system.

Documentation

Check out the documentation pages.

Python

Install databricks-mosaic as a cluster library, or install it from a Databricks notebook:

%pip install databricks-mosaic

Then enable it with

from mosaic import enable_mosaic
enable_mosaic(spark, dbutils)
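
As a quick check that the bindings are active, run one of the ST_ functions over a literal geometry. A minimal sketch; st_length and the WKT literal here are illustrative:

from pyspark.sql import functions as F
import mosaic as mos

# The hypotenuse of a 3-4-5 right triangle: expect a length of 5.0.
df = spark.createDataFrame([("LINESTRING (0 0, 3 4)",)], ["wkt"])
df.select(mos.st_length(F.col("wkt")).alias("length")).show()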

Scala

Get the JAR from the releases page and install it as a cluster library.

Then enable it with

import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.ESRI

val mosaicContext = MosaicContext.build(H3, ESRI)
import mosaicContext.functions._

R

Get the Scala JAR and the R bindings library from the releases page. Install the JAR as a cluster library, and copy sparkrMosaic.tar.gz to DBFS (this example uses the /FileStore location, but you can put it anywhere on DBFS).

library(SparkR)

install.packages('/FileStore/sparkrMosaic.tar.gz', repos=NULL)

Enable the R bindings

library(sparkrMosaic)
enableMosaic()

SQL

Configure the Automatic SQL Registration, or follow the Scala installation process and register the Mosaic SQL functions in your SparkSession from a Scala notebook cell:

%scala
import com.databricks.labs.mosaic.functions.MosaicContext
import com.databricks.labs.mosaic.H3
import com.databricks.labs.mosaic.ESRI

val mosaicContext = MosaicContext.build(H3, ESRI)
mosaicContext.register(spark)
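
Once registered, the function names resolve anywhere SQL is evaluated in that SparkSession, for example via spark.sql from a Python cell. A minimal sketch; the geometry literal is illustrative and assumes the ST_ functions accept WKT strings:

# The registered Mosaic expressions are now addressable from plain SQL text.
spark.sql(
    "SELECT st_area('POLYGON ((0 0, 0 1, 1 1, 1 0, 0 0))') AS area"
).show()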

Examples

| Example | Description | Links |
| --- | --- | --- |
| Quick Start | Performing spatial point-in-polygon joins on the NYC Taxi dataset | python, scala, R, SQL |
| Spatial KNN | Runnable notebook-based example using the Mosaic SpatialKNN model | python |
| Open Street Maps | Ingesting and processing the Open Street Maps dataset with Delta Live Tables to extract building polygons and calculate aggregation statistics over H3 indexes | python |
| STS Transfers | Detecting ship-to-ship transfers at scale by leveraging Mosaic to process AIS data | python, blog |

You can import these examples into a Databricks workspace by following these instructions.

Ecosystem

Mosaic is intended to augment existing systems and unlock their potential by integrating Spark, Delta, and third-party frameworks into the Lakehouse architecture.

Image 2: Mosaic ecosystem - Lakehouse integration.

Project Support

Please note that all projects in the databrickslabs github space are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.
