
2.2.5. Consuming Migrated Data

As shown in section 2.2.3, "Migration Case by Data Type," the migrated data can be consumed with existing SQL-based BI tools such as Tableau, MicroStrategy, and Qlik. In this section, we will talk about how to consume the migrated data with those tools.

The MongoDB Connector for Business Intelligence (BI) allows users to create queries with SQL and visualize, graph, and report on their MongoDB Enterprise data using existing relational business intelligence tools such as Tableau, MicroStrategy, and Qlik. (MongoDB.com)

To test the BI Connector, we need four components:

  • MongoDB database: data storage.
  • BI Connector: provides a relational schema and translates SQL queries between your BI tool and MongoDB.
  • ODBC data source name (DSN): holds authentication and connection configuration data; a sample DSN sketch follows this list.
  • BI tool: data visualization and analysis.
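
A minimal sketch of what the DSN definition might look like on Linux with unixODBC; the DSN name MongoDB-BI-Test, the driver name, and the database are assumptions to match to your installation, and the port is mongosqld's default:

    # Append a DSN for the BI Connector to the system ODBC config.
    # The Driver value must match the MongoDB ODBC driver installed locally.
    cat >> /etc/odbc.ini <<'EOF'
    [MongoDB-BI-Test]
    Driver   = MongoDB ODBC 1.2 Unicode Driver
    SERVER   = 127.0.0.1
    PORT     = 3307
    DATABASE = test
    EOF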

The figure below shows a system diagram for consuming MongoDB data with business intelligence tools.

[Figure: system diagram of the BI tool, ODBC DSN, BI Connector, and MongoDB]

Quick Test

1. Set up MongoDB, the ODBC driver, and the BI Connector

2. Start mongod and mongosqld

  • Start mongod
  • Start mongosqld (both commands are sketched below)
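
A minimal sketch of the two startup commands, assuming a default local installation; the data directory and the MongoDB URI are placeholders:

    # Start the MongoDB server (the data directory is a placeholder).
    mongod --dbpath /data/db

    # In a second terminal, start the BI Connector's SQL proxy.
    # mongosqld samples the target MongoDB instance to build a relational
    # schema and listens on 127.0.0.1:3307 by default.
    mongosqld --mongo-uri mongodb://127.0.0.1:27017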

3. Connect to MySQL and MongoDB in Tableau

  • Connect to the RDBMS (MySQL)
  • Connect to MongoDB using the ODBC driver
  • Confirm that the connection works (a command-line check is sketched below)
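
Because mongosqld speaks the MySQL wire protocol, the connection can also be verified outside Tableau with the mysql client; a sketch, assuming mongosqld is running on its default address with authentication disabled:

    # Connect to the BI Connector endpoint (default 127.0.0.1:3307) and
    # list the relational schemas mongosqld generated from MongoDB.
    mysql --host 127.0.0.1 --port 3307 -e "SHOW DATABASES"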

4. Select data using SQL in Tableau

  • Join the testview table in MySQL with the testview_mongo and testview_mongo_nested_mongodata tables exposed by the BI Connector (the MongoDB side of the join is sketched below)
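
The cross-database join with MySQL's testview happens inside Tableau itself; a sketch of the SQL for the MongoDB side, assuming mongosqld exposes the migrated collections under a test database and that both tables share an id column (database and column names are placeholders):

    # Query the BI Connector endpoint directly; Tableau issues equivalent
    # SQL when these tables are joined in its UI.
    mysql --host 127.0.0.1 --port 3307 test -e \
      "SELECT m.*, n.* FROM testview_mongo m \
       JOIN testview_mongo_nested_mongodata n ON m.id = n.id LIMIT 10"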


How to use Microsoft Power BI with MongoDB

  1. Set up the test environment (same as steps 1 and 2 of the Tableau test).
  2. Set up Power BI.
  3. Connect to MongoDB through the ODBC DSN and display the data in Power BI.

See More:

1. Integrating an RDBMS and MongoDB on Hadoop


  • Apache Hadoop: an open-source software collection for processing massive amounts of data with the MapReduce programming model
  • MapReduce: a programming model for processing and generating big data sets with a parallel, distributed algorithm on a cluster
  • Apache Hive: provides a SQL-like interface to query data stored in various databases and file systems
  • Apache Spark: an open-source cluster-computing framework for distributed data processing
  • MongoDB Hadoop Integration
  • MongoDB Connector for Spark (a launch sketch follows this list)
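
A minimal sketch of pulling the MongoDB Connector for Spark into an interactive Spark session; the connector and Scala versions and the collection URI are assumptions to adjust for your cluster:

    # Launch spark-shell with the MongoDB Connector for Spark and point it
    # at a migrated collection (versions and URI are placeholders).
    spark-shell \
      --packages org.mongodb.spark:mongo-spark-connector_2.11:2.3.1 \
      --conf "spark.mongodb.input.uri=mongodb://127.0.0.1/test.testview_mongo"
    # Inside the shell, MongoSpark.load(spark).show() reads the collection
    # into a DataFrame.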

2. MongoDB Charts

  • MongoDB Charts: a tool to visualize data stored in MongoDB
  • Docker: a container platform for packaging and running software, used here to deploy MongoDB Charts (a deployment sketch follows this list)
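
A sketch of the Docker-based deployment flow for the on-premises MongoDB Charts beta, assuming Docker is installed and the charts-docker-compose.yml file has been downloaded from MongoDB:

    # Charts (beta) ships as a Docker image deployed as a swarm stack.
    docker swarm init
    docker pull quay.io/mongodb/charts
    docker stack deploy -c charts-docker-compose.yml mongodb-charts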
