# Appendix: Productionalizing Models
- https://www.reddit.com/r/datascience/comments/8ni5bx/taking_an_ml_algorithm_to_production/
- https://stackoverflow.com/questions/44806416/how-to-productionalize-a-python-machine-learning-lib
- https://hackernoon.com/a-guide-to-scaling-machine-learning-models-in-production-aa8831163846
- https://www.anaconda.com/productionizing-and-deploying-data-science-projects/
- https://towardsdatascience.com/productionizing-your-machine-learning-model-221468b0726d
- https://www.analyticsvidhya.com/blog/2017/09/machine-learning-models-as-apis-using-flask/
- https://machinelearningmastery.com/deploy-machine-learning-model-to-production/
- https://www.quora.com/How-do-you-take-a-machine-learning-model-to-production
- https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
- http://michal.karzynski.pl/blog/2017/03/19/developing-workflows-with-apache-airflow/
- https://www.kdnuggets.com/2019/06/understanding-cloud-data-services.html
- https://towardsdatascience.com/democratising-machine-learning-with-h2o-7f2f79e10e3f
How do you create a data pipeline for an application that scrapes zip files from a website and extracts their contents, which are tab-separated .txt files? Using a cloud service to store the .txt files as tables is also a requirement. Since there is a lot of data, pandas doesn't scale well, so it is advisable to use an efficient distributed framework such as Spark to compute metrics from the tables.
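For concreteness, a minimal sketch of the extract-and-load step might look like the following, assuming requests, boto3, and PySpark are available; the URL, bucket, and paths are placeholders, and reading `s3a://` paths from Spark requires the hadoop-aws package on the cluster:

```python
import io
import zipfile

import boto3
import requests
from pyspark.sql import SparkSession

# Placeholder source URL and bucket; substitute your own.
ZIP_URL = "https://example.com/exports/latest.zip"
BUCKET = "my-raw-data-bucket"


def extract_zip_to_s3(zip_url: str, bucket: str, prefix: str = "raw/") -> list:
    """Download a zip archive and upload each .txt member to S3."""
    resp = requests.get(zip_url, timeout=120)
    resp.raise_for_status()
    s3 = boto3.client("s3")
    uploaded = []
    with zipfile.ZipFile(io.BytesIO(resp.content)) as archive:
        for name in archive.namelist():
            if name.endswith(".txt"):
                key = prefix + name
                s3.upload_fileobj(io.BytesIO(archive.read(name)), bucket, key)
                uploaded.append(key)
    return uploaded


if __name__ == "__main__":
    extract_zip_to_s3(ZIP_URL, BUCKET)
    # Read the tab-separated files as one Spark DataFrame and persist
    # them in a columnar format that downstream metric jobs can query.
    spark = SparkSession.builder.appName("zip-etl").getOrCreate()
    df = spark.read.csv(
        f"s3a://{BUCKET}/raw/", sep="\t", header=True, inferSchema=True
    )
    df.write.mode("overwrite").parquet(f"s3a://{BUCKET}/tables/metrics_input/")
```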
The simplest solution, in my opinion, would be to run an Airflow schedule that uses Spark to extract the data from your system, process it, and store it in the desired format in cloud storage; a sketch of such a DAG follows below. If you are using AWS, check out EMR. If the ETL process doesn't run very often and is computationally intensive, you can use transient clusters, which automatically shut down the EMR cluster after your ETL task completes, saving you money. Hope it helps.
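Here is a rough sketch of that setup as an Airflow DAG, assuming a recent Airflow 2.x with the apache-airflow-providers-amazon package installed. It creates a transient EMR cluster (`KeepJobFlowAliveWhenNoSteps` set to `False`, so the cluster terminates itself once all steps finish) and submits a single Spark step; the cluster sizing, release label, and script path are all illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import (
    EmrAddStepsOperator,
    EmrCreateJobFlowOperator,
)
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

# Transient cluster: KeepJobFlowAliveWhenNoSteps=False makes EMR
# terminate the cluster automatically after the last step completes.
JOB_FLOW_OVERRIDES = {
    "Name": "zip-etl-transient",
    "ReleaseLabel": "emr-6.15.0",
    "Applications": [{"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}

# One Spark step running the ETL script from S3 (path is illustrative).
SPARK_STEPS = [
    {
        "Name": "compute-metrics",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/compute_metrics.py"],
        },
    }
]

with DAG(
    dag_id="zip_etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    create_cluster = EmrCreateJobFlowOperator(
        task_id="create_emr_cluster",
        job_flow_overrides=JOB_FLOW_OVERRIDES,
    )
    add_steps = EmrAddStepsOperator(
        task_id="add_spark_step",
        job_flow_id=create_cluster.output,  # XCom from the create task
        steps=SPARK_STEPS,
    )
    watch_step = EmrStepSensor(
        task_id="watch_spark_step",
        job_flow_id=create_cluster.output,
        step_id="{{ task_instance.xcom_pull(task_ids='add_spark_step')[0] }}",
    )
    # No explicit terminate task: the transient cluster shuts itself down.
    add_steps >> watch_step
```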