The owner project setup notebook shows how to set up projects using the DKube SDK. If you are creating the project and other resources by running that notebook, skip the project workflow steps below and jump directly to step 4 (a hedged SDK sketch of the code-repo step is also shown after the first list below).
- Click on Repos in the left pane and then click on +Code.
- Name: titanic
- Git URL: https://github.com/oneconvergence/dkube-examples.git
- Branch: tensorflow
- Click on Projects in the left pane in DKube.
- Click on + Create Project.
- Give a project name, say titanic.
- Check the Enable Leaderboard option and click Submit.
- Click on the titanic project, open the Evaluation tab, and select the evaluation source repo as the titanic code repo created in step 1.
- Give the evaluation script as `python titanic/owner/eval.py` and click on the Save button.
- The project leaderboard service is available once the status changes from 'in-progress' to 'ready'.
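For reference, the owner notebook performs the equivalent repo setup through the DKube SDK. The snippet below is a minimal sketch, assuming the SDK exposes `DkubeApi` and `DkubeCode` with the `update_git_details` and `create_code` calls shown (names can vary between SDK versions); the URL, token, and username are placeholders.

```python
# Hedged sketch: the same code-repo setup through the DKube SDK.
# Assumes DkubeApi/DkubeCode with the methods used here; DKUBE_URL, AUTH_TOKEN,
# and USER are placeholders to replace with your own values.
from dkube.sdk import DkubeApi, DkubeCode

DKUBE_URL = "https://<your-dkube-url>:32222"          # your DKube access URL
AUTH_TOKEN = "<auth token from Developer Settings>"   # copied from DKube
USER = "<your-dkube-username>"

api = DkubeApi(URL=DKUBE_URL, token=AUTH_TOKEN)

# Code repo pointing at the tensorflow branch of the examples repository.
code = DkubeCode(USER, name="titanic")
code.update_git_details(
    "https://github.com/oneconvergence/dkube-examples.git", branch="tensorflow"
)
api.create_code(code)
```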
- Click on Repos in the left pane and then click on +Dataset (an equivalent SDK sketch is shown after this list).
- Details to be filled in for the train dataset:
- Name: titanic-train
- DataSource: Other
- URL: https://dkube.s3.amazonaws.com/datasets/titanic/train.csv
- Details to be filled in for the test dataset:
- Name: titanic-test
- DataSource: Other
- URL: https://dkube.s3.amazonaws.com/datasets/titanic/test.csv
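The two datasets can also be created programmatically. This is a hedged sketch that reuses `api` and `USER` from the snippet above and assumes `DkubeDataset` provides `update_dataset_source` and `update_puburl_details`; adjust the calls to match your SDK version.

```python
# Hedged sketch: create the train/test datasets from their public URLs.
# Assumes DkubeDataset and the method names below; reuses `api` and USER from above.
from dkube.sdk import DkubeDataset

for name, url in [
    ("titanic-train", "https://dkube.s3.amazonaws.com/datasets/titanic/train.csv"),
    ("titanic-test", "https://dkube.s3.amazonaws.com/datasets/titanic/test.csv"),
]:
    dataset = DkubeDataset(USER, name=name)
    dataset.update_dataset_source(source="pub_url")    # the "Other"/public-URL source
    dataset.update_puburl_details(url, extract=False)
    api.create_dataset(dataset)
```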
Running the pipeline.ipynb file automatically creates a code repo named titanic-code, featuresets (titanic-train-fs and titanic-test-fs), and a model repo named titanic-model for the user through the DKube SDK.
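If you want to create those resources yourself instead of through pipeline.ipynb, a hedged sketch follows; it assumes `DkubeFeatureSet` and `DkubeModel` with the constructors and calls shown, which may differ in your SDK version.

```python
# Hedged sketch: featuresets and an empty model repo, similar to what pipeline.ipynb creates.
# Assumes DkubeFeatureSet/DkubeModel and the calls below; reuses `api` and USER from above.
from dkube.sdk import DkubeFeatureSet, DkubeModel

for fs_name in ["titanic-train-fs", "titanic-test-fs"]:
    api.create_featureset(DkubeFeatureSet(name=fs_name))

model = DkubeModel(USER, name="titanic-model")
model.update_model_source(source="dvs")   # DKube-managed (versioned) storage
api.create_model(model)
```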
- Click on IDEs in the left pane and then select your titanic project from the top.
- Click on +JupyterLab and then fill in the details below:
- Give a name: titanic-{user}, replacing {user} with your username.
- Select code as titanic
- Select the framework as tensorflow and the version as 2.0.0, then click Submit.
- Open JupyterLab under the Actions tab, go to workspace/titanic/titanic, and run all the cells of the pipeline.ipynb file.
- Preprocessing, Training, and Predict runs will be automatically created in DKube.
- Go to your project titanic.
- Navigate to the leaderboard to see the results, which show the accuracy and loss metrics.
- Training metric results can be viewed from the Runs tab in DKube, with the tag dkube-pipeline and the type training (a hedged SDK sketch of a comparable run is shown below).
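For illustration, a comparable training run can be created directly through the SDK. The sketch below assumes `DkubeTraining` with the `update_container`, `update_startupscript`, `add_code`, and `add_output_model` calls; the framework string, container image, training script path, and mount path are assumptions rather than values taken from the pipeline.

```python
# Hedged sketch: a standalone training run comparable to the one the pipeline creates
# (the pipeline tags its runs with "dkube-pipeline"). Assumes DkubeTraining and the
# calls below; the image, script path, and mount path are illustrative assumptions.
from dkube.sdk import DkubeTraining

training = DkubeTraining(USER, name="titanic-training")
training.update_container(framework="tensorflow_2.0.0",
                          image_url="ocdr/d3-datascience-tf-cpu:v2.0.0")
training.update_startupscript("python titanic/train.py")         # assumed script path
training.add_code("titanic")
training.add_output_model("titanic-model", mountpath="/model")   # assumed mount path
api.create_training_run(training)
```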
- Navigate to the model (titanic-model) and click on test inference.
- Give the test inference name, say titanic.
- The serving image is ocdr/tensorflowserver:2.0.0.
- Check the Transformer option and enter the transformer script as titanic/transformer.py.
- Choose CPU and submit (an equivalent SDK sketch is shown below).
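The same test inference can be set up through the SDK. This is a hedged sketch assuming `DkubeServing` with the `update_serving_model`, `update_serving_image`, `set_transformer`, `update_transformer_code`, and `create_test_inference` calls; the transformer code repo reference is an assumption.

```python
# Hedged sketch: a test inference with the transformer, like the one configured above.
# Assumes DkubeServing and the calls below; reuses `api` and USER from earlier snippets.
from dkube.sdk import DkubeServing

serving = DkubeServing(USER, name="titanic")
serving.update_serving_model("titanic-model")
serving.update_serving_image(image_url="ocdr/tensorflowserver:2.0.0")
serving.set_transformer(True, script="titanic/transformer.py")
serving.update_transformer_code(code="titanic")   # assumed: script lives in the titanic repo
api.create_test_inference(serving)
```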
- Go to https://<URL>:32222/inference.
- Copy the model serving URL from the test inference tab.
- Copy the auth token from Developer Settings.
- Select the model type as sk-stock.
- Copy the contents of https://raw.githubusercontent.com/oneconvergence/dkube-examples/tensorflow/titanic/titanic_sample.csv, save them as a CSV file, and upload it.
- Click Predict (or call the endpoint directly, as in the sketch below).
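Instead of the inference UI, you can call the serving endpoint directly. The snippet below is a minimal sketch using the `requests` library; the endpoint URL and auth token are placeholders, and the request payload shape (an "instances" list built from the sample CSV rows) is an assumption about what the transformer's preprocess step accepts, so adjust it if your transformer expects a different format.

```python
# Hedged sketch: call the model's prediction endpoint directly instead of using the UI.
# The "instances" payload built from CSV rows is an assumption about what the
# transformer accepts; SERVING_URL and AUTH_TOKEN are placeholders to fill in.
import csv
import requests

SERVING_URL = "<model serving URL copied from the test inference tab>"
AUTH_TOKEN = "<auth token from Developer Settings>"

# titanic_sample.csv is the sample file referenced above, downloaded locally.
with open("titanic_sample.csv") as f:
    rows = list(csv.DictReader(f))

resp = requests.post(
    SERVING_URL,
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    json={"instances": rows},   # assumed payload shape
    verify=False,               # many DKube setups use self-signed certificates
)
print(resp.status_code, resp.text)
```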
- Navigate to Repos -> Models -> titanic-model and select a model version.
- Deploy
- Give name: titanic-deploy
- Transformer: click on transformer checkbox.
- Change the transformer script to: titanic/transformer.py.
- Submit
- Release Model
- Click on the model name titanic-model.
- Click on the Release action button for the latest version of the model.
- Click on the Release button; the model will be released.
- Publish Model
- Click on Publish Model icon under ACTIONS column.
- Give the publish model name.
- Select the serving image as ocdr/tensorflowserver:2.0.0
- Click on Transformer checkbox.
- Change transformer code to titanic/transformer.py.
- Click on Submit.
- Deploy Model
- Click on Model catalog and select the published model.
- Click on the deploy model icon under ACTIONS column.
- Enter the deploy model name and select CPU and click Submit.
- The state changes to deployed.
- Check in Model Serving and wait for the deployed model to change to the running state.
- The deployed model can be used to test the prediction.
- Publish Model
- Navigate to Models -> titanic-model and select a model version.
- Click on Publish Model icon under ACTIONS column
- Select the serving image as ocdr/tensorflowserver:2.0.0
- Click on Transformer checkbox
- Change transformer script to titanic/transformer.py
- Click on Submit
- Deploy Model
- Click on Models in the navigation pane
- Click on the drop down next to 'Owned by me' and select 'Published'
- Click on the published model 'titanic-model'
- Select the published version and click on the deploy model icon under ACTIONS column
- Enter the deploy model name, select Deployment as Test, select Deploy using as CPU, and click Submit.
- Check in Deployments and wait for the deployed model to change to the running state.
- The deployed model can be used to test the prediction.
- Go to:
- Deployments in 2.1.x.x version
- Model Serving in 2.2.x.x version
- Deployments in 3.0.x.x version
- Copy the prediction Endpoint for the model 'titanic-model'
- Open a new browser tab and go to https://<dkube_url>/inference.
- Paste the endpoint URL.
- Copy the auth token from Developer Settings in the DKube page and paste it into the inference page.
- Select the model type as sk-stock.
- Copy the contents of https://raw.githubusercontent.com/oneconvergence/dkube-examples/tensorflow/titanic/titanic_sample.csv, save them as a CSV file, and upload it.
- Click Predict.