My interest in putting together this example was to learn and prototype: specifically, to learn more about PySpark pipelines and how deep learning can be integrated into them. I ran this entire project using Jupyter on my local machine to build a prototype for an upcoming project where the data will be massive. Since I work for IBM, I'll take this entire analytics project (Jupyter Notebook) and move it to IBM's platform. That lets me do data ingestion, pipelining, training, and deployment on a unified platform and on a much larger Spark cluster.
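
For context, here is a minimal sketch of the kind of PySpark ML pipeline this refers to. It is illustrative only: the column names, input file, and choice of stages (StringIndexer, VectorAssembler, LogisticRegression) are assumptions, not the project's actual code, and the deep learning stage would replace or extend the classifier stage shown here.

```python
# Minimal PySpark ML pipeline sketch (assumed schema: columns f1, f2, f3 and a
# string label column in a hypothetical data.csv; not the project's real data).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("pipeline-prototype").getOrCreate()

# Read the raw data into a DataFrame.
df = spark.read.csv("data.csv", header=True, inferSchema=True)

# Encode the string label and assemble numeric columns into a feature vector.
indexer = StringIndexer(inputCol="label", outputCol="label_idx")
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label_idx")

# Chain the stages so the same transformations apply at train and score time.
pipeline = Pipeline(stages=[indexer, assembler, lr])
model = pipeline.fit(df)
predictions = model.transform(df)
predictions.select("label_idx", "prediction").show(5)
```

The value of the Pipeline abstraction is that the fitted model carries every preprocessing stage with it, so the notebook prototype and a later, larger Spark cluster run the identical transformation chain.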