This tutorial discusses the interface between Data Science and DevOps. It highlights that data scientists are not so different from developers: they too need to know Git, follow best practices to maintain their dependencies and code, add tests, and cut releases. All of these tasks can be supported by pipelines and bots so that data scientists can focus on the problem they actually want to solve. In other words, in this tutorial you will learn how the ML lifecycle, its practices, and its tools can be enhanced by DevSecOps techniques.
By the end of this tutorial you will be able to spawn images from JupyterHub, manage notebook dependencies with the Project Thoth extension for dependency management in JupyterLab, and understand the concept of overlays. You will set up AICoE CI and the Kebechet bot to automate the creation of images for the overlays and the maintenance of software stacks. You will then learn how to create and run an Elyra AI pipeline with Kubeflow Pipelines using the images you created. Finally, you will learn how to leverage ArgoCD to deploy the AI model automatically.
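Elyra lets you assemble notebooks into a pipeline visually and submits it to Kubeflow Pipelines for execution, so in the tutorial you will not write pipeline code by hand. If you have never seen Kubeflow Pipelines before, the minimal sketch below, written against the kfp v1 Python SDK, shows what such a pipeline looks like when defined in code; the container image and command are placeholders, not artifacts from this tutorial.

```python
import kfp
from kfp import dsl


def train_op() -> dsl.ContainerOp:
    # Each pipeline step runs as a container; a hypothetical image is
    # assumed here to contain the training script.
    return dsl.ContainerOp(
        name="train-mnist",
        image="quay.io/example/mnist-train:latest",  # placeholder image
        command=["python", "train.py"],              # placeholder command
    )


@dsl.pipeline(name="mnist-example", description="Toy single-step pipeline")
def mnist_pipeline():
    train_op()


if __name__ == "__main__":
    # Compile the pipeline into a definition that Kubeflow Pipelines can run
    kfp.compiler.Compiler().compile(mnist_pipeline, "mnist_pipeline.yaml")
```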
The demo application selected for this tutorial is MNIST classification. The MNIST dataset is described here.
This tutorial comes in two variations:
- one using TensorFlow (a minimal model sketch is shown below)
- one using PyTorch and the Neural Magic tools.
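As a reference point for the TensorFlow variation, a minimal MNIST classifier in Keras looks roughly like the following sketch; the tutorial's notebooks may differ in architecture and training setup.

```python
import tensorflow as tf

# Load the MNIST dataset (60k training / 10k test images of handwritten digits)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network is enough for a first baseline
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```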
Operate First is an open infrastructure environment started at Red Hat's Office of the CTO. It has been selected to run this tutorial since it is an open source initiative that fulfills all the requirements stated above. Anyone with a Google account can log in and start developing. To learn more about Operate First, visit the website or GitHub community.
Operate First hosts Open Data Hub with all of the tools needed for data science projects (e.g. JupyterHub, Elyra, Kubeflow Pipelines, Seldon, Prometheus, Grafana, Superset) running on Red Hat OpenShift.
The project template used can be found here: project template. It shows the correlation between a data scientist's needs (e.g. data, notebooks, models) and those of an AI DevOps engineer (e.g. manifests). Having structure in a project ensures that all the pieces required for the ML and DevOps lifecycles are present and easily discoverable.
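For illustration, a project built from such a template keeps the data-science and DevOps artifacts in separate, predictable locations. The layout below is a representative sketch, not an exact listing of the template's contents:

```
├── data/        # raw and processed datasets (small samples or pointers)
├── notebooks/   # Jupyter notebooks for exploration, training and evaluation
├── src/         # reusable Python code factored out of the notebooks
├── models/      # serialized trained models
├── manifests/   # Kubernetes/OpenShift manifests consumed by ArgoCD
├── Pipfile      # dependency declarations resolved with Project Thoth
└── .thoth.yaml  # Thoth configuration, including overlays
```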
Here is a list of conferences where this tutorial has been used: