Sparkify needs to analyze the song and user activity data from their music streaming app to understand what songs users are listening to.
Given song and user activity data residing in S3, Spark is used to perform ETL on the data from S3 and then load it back into S3 as a set of dimensional tables optimized for song play analysis.
For the purpose of this project, an ETL pipeline was created with Spark that loads the log and song datasets (user activity and song metadata) from an S3 bucket, then transforms them into 4 dimension tables (users, songs, artists, and time) linked to 1 fact table, songplays, forming a star schema that allows for easy analysis.
Analytical queries can now be run against the songplays fact table to answer questions like "How many songs were played in each month of each year?", and analytic queries can be run on the dimension tables as well.
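The star schema can be sketched as follows. Note: the column lists below are assumptions based on the standard Sparkify dataset, not taken from this project's code; verify them against the actual tables.

```python
# Sketch of the star schema: 1 fact table + 4 dimension tables.
# Column names are assumptions from the standard Sparkify dataset.
star_schema = {
    # Fact table: one row per song play event
    "songplays": ["songplay_id", "start_time", "user_id", "level",
                  "song_id", "artist_id", "session_id", "location", "user_agent"],
    # Dimension tables
    "users":   ["user_id", "first_name", "last_name", "gender", "level"],
    "songs":   ["song_id", "title", "artist_id", "year", "duration"],
    "artists": ["artist_id", "name", "location", "latitude", "longitude"],
    "time":    ["start_time", "hour", "day", "week", "month", "year", "weekday"],
}
```

The fact table links to each dimension through a shared key (e.g. song_id into songs, user_id into users), which is what makes the star-schema joins cheap.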
- dl.cfg:
Contains the AWS credentials (for the output S3 bucket)
- etl.py:
Main script that retrieves data from S3, processes it with Spark, and writes Parquet files back to S3
- ReadMe.md:
Documentation and discussion of this project
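A dl.cfg file might look like the snippet below. The section and key names here are assumptions (they must match whatever etl.py actually reads); the values shown are placeholders, not real credentials.

```ini
[AWS]
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
```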
In order to run this Python script successfully, follow the instructions below:
- Input your AWS Access Key ID and AWS Secret Access Key into the dl.cfg file.
- In the main function of etl.py, assign an S3 bucket, e.g. s3a://au-sparkify-datalake/, to the output_data variable:
output_data = "s3a://au-sparkify-datalake/"
Note: the credentials provided in the dl.cfg file must have access to the above S3 bucket.
- Open a terminal in the directory containing etl.py and run the command below:
python etl.py
- If everything runs without errors, check your S3 bucket to see the newly created Parquet files.
- Done!
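The steps above assume etl.py reads dl.cfg at startup. A minimal sketch of that loading logic using Python's configparser (the section name, key names, and function name are assumptions, not the project's exact code):

```python
import configparser
import os

def load_aws_credentials(path="dl.cfg"):
    """Read AWS credentials from the config file and export them as the
    environment variables that Spark/Hadoop's S3A connector picks up."""
    config = configparser.ConfigParser()
    config.read(path)
    # Section name [AWS] is an assumption; key lookup is case-insensitive.
    os.environ["AWS_ACCESS_KEY_ID"] = config["AWS"]["AWS_ACCESS_KEY_ID"]
    os.environ["AWS_SECRET_ACCESS_KEY"] = config["AWS"]["AWS_SECRET_ACCESS_KEY"]
```

Exporting the credentials as environment variables (rather than hard-coding them) keeps secrets out of the script itself.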
ER diagram of the relationships between the fact and dimension tables.
A sample query that can be performed for song play analysis:
- How many songs were played in each month of each year
songplays_table = spark.sql('''
    SELECT month, year, COUNT(*) AS total_plays
    FROM songplays_table
    GROUP BY GROUPING SETS ((), month, year, (month, year))
    LIMIT 20
''')
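The GROUPING SETS clause above produces one aggregate per listed set: the grand total (), a count per month, a count per year, and a count per (month, year) pair. A plain-Python sketch of the same aggregation, on made-up sample rows:

```python
# Illustrates what GROUP BY GROUPING SETS ((), month, year, (month, year))
# computes, using hypothetical (month, year) pairs for four song plays.
from collections import Counter

rows = [(11, 2018), (11, 2018), (12, 2018), (1, 2019)]

grand_total = len(rows)                    # grouping set ()
per_month   = Counter(m for m, _ in rows)  # grouping set (month)
per_year    = Counter(y for _, y in rows)  # grouping set (year)
per_both    = Counter(rows)                # grouping set (month, year)
```

In the SQL result, all four levels of aggregation come back in a single result set, with NULLs in the columns that a given grouping set leaves out.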