updated minio section
mamurak committed Jan 24, 2024
1 parent a446c7c commit 87bdaaa
Showing 16 changed files with 126 additions and 81 deletions.
1 change: 1 addition & 0 deletions docs/modules/ROOT/nav.adoc
@@ -13,3 +13,4 @@
** xref:3-02-offline-scoring-pipelines.adoc[2. Offline Scoring Pipelines]
* 4. Fine Tuning and Training a Model
** xref:4-01-training-the-model.adoc[1. Training a Model]
+ ** xref:4-02-integrating-the-new-model.adoc[2. Deploying and Integrating the New Model]
18 changes: 9 additions & 9 deletions docs/modules/ROOT/pages/1-01-project-setup.adoc
@@ -4,17 +4,17 @@ include::_attributes.adoc[]

xref:index.adoc[Back to the introduction]

- == The OpenShift Data Science Dashboard
+ == The OpenShift AI Dashboard

- NOTE: We're going to refer to _Red Hat OpenShift Data Science_ as _RHODS_ throughout this workshop.
+ NOTE: We're going to refer to _Red Hat OpenShift AI_ as _RHOAI_ throughout this workshop.

- You should be logged into *Red Hat OpenShift Data Science*, and be able to see the dashboard, that looks like this:
+ You should be logged into *Red Hat OpenShift AI* and be able to see the dashboard, which looks like this:

image::notebooks/rhods_dashboard.png[alt text]

- *Red Hat OpenShift Data Science* brings you a variety of containerized on-demand environments, such as Jupyter Notebooks. Don't worry if you've never used notebooks before as this workshop will start with a small tutorial on what they are and how to use them.
+ *Red Hat OpenShift AI* brings you a variety of containerized on-demand environments, such as Jupyter Notebooks. Don't worry if you've never used notebooks before, as this workshop starts with a small tutorial on what they are and how to use them.

- * Now that you are logged into to *Red Hat OpenShift Data Science*, navigate to `Data Science Projects` in the left menu. Here you can see the list of existing projects that you have access to.
+ * Now that you are logged into *Red Hat OpenShift AI*, navigate to `Data Science Projects` in the left menu. Here you can see the list of existing projects that you have access to.

NOTE: Projects allow you and your team to organize and collaborate on resources within separated namespaces.

@@ -62,15 +62,15 @@ If you are curious about where the *Cluster storage* is managed, you can make us

* Ensure you're seeing the Administrator Perspective by selecting `Administrator` in the top field of the left menu. The cluster storage volume allocated to your Data Science Project can be found under `Storage` and `PersistentVolumeClaims`. You will see that in addition to the _development_ volume, there are others which have been prepared and will be leveraged in the next chapters of the workshop.

- NOTE: https://www.redhat.com/en/technologies/cloud-computing/openshift[Red Hat OpenShift] is the hybrid cloud Kubernetes-based engine that powers the RHODS platform and provides all the needed resources like storage, memory, CPUs and GPUs, as well as the DevSecOps and GitOps pipeline capabilities needed for Data Science Pipelines and MLOps flows. In the OpenShift Console you have two perspectives, `Administrator` and `Developer`, between which you can easily switch by selecting `Developer` or `Administrator` respectively in the top field of the left menu. Here you can inspect and edit all resources that you create throughout the workshop in the RHODS dashboard, as part of your Data Science Projects.
+ NOTE: https://www.redhat.com/en/technologies/cloud-computing/openshift[Red Hat OpenShift] is the hybrid cloud Kubernetes-based engine that powers the RHOAI platform and provides all the needed resources like storage, memory, CPUs and GPUs, as well as the DevSecOps and GitOps pipeline capabilities needed for Data Science Pipelines and MLOps flows. In the OpenShift Console you have two perspectives, `Administrator` and `Developer`, between which you can easily switch by selecting `Developer` or `Administrator` respectively in the top field of the left menu. Here you can inspect and edit all resources that you create throughout the workshop in the RHOAI dashboard, as part of your Data Science Projects.

- We'll use the OpenShift Console tab to deploy the object detection app later in this workshop. For now, keep the tab open for reference and return to the RHODS Dashboard tab.
+ We'll use the OpenShift Console tab to deploy the object detection app later in this workshop. For now, keep the tab open for reference and return to the RHOAI Dashboard tab.

- NOTE: If you're at the OpenShift Web Console and need to navigate back to the OpenShift Data Science Dashboard, use the *Application Switcher* icon in the top right of the navigation bar.
+ NOTE: If you're at the OpenShift Web Console and need to navigate back to the OpenShift AI Dashboard, use the *Application Switcher* icon in the top right of the navigation bar.

image::notebooks/ocp-console-app-switcher.png[alt text, 400]

- You're now all set. You got a bit familiar with the `RHODS dashboard` as well as the `OpenShift Console`. You've also configured your first Data Science Project!
+ You're now all set. You've become familiar with the `RHOAI dashboard` as well as the `OpenShift Console`, and you've configured your first Data Science Project!

To start working with the object detection model, select the `Open` link next to your running workbench instance and
xref:1-02-jupyter-env.adoc[head to the next section.]
43 changes: 34 additions & 9 deletions docs/modules/ROOT/pages/1-02-jupyter-env.adoc
@@ -2,9 +2,9 @@

include::_attributes.adoc[]

- You are now inside your Jupyter environment. As you can see, it's a web-based environment, but everything you'll do here is in fact happening on the *Red Hat OpenShift Data Science*, powered underneath by the *OpenShift* cluster. This means that without having to install and maintain anything on your own computer, and without disposing of lots of local resources like CPU, GPU and RAM, you can still conduct your Data Science work in this powerful and stable managed environment.
+ You are now inside your Jupyter environment. As you can see, it's a web-based environment, but everything you'll do here is in fact happening on *Red Hat OpenShift AI*, powered underneath by the *OpenShift* cluster. This means that without having to install and maintain anything on your own computer, and without using up lots of local resources like CPU, GPU and RAM, you can still conduct your Data Science work in this powerful and stable managed environment.

In the "file-browser" like window you're in right now, you'll find the files and folders that are saved inside your own personal space inside *RHODS*.
In the "file-browser" like window you're in right now, you'll find the files and folders that are saved inside your own personal space inside *RHOAI*.

It's pretty empty right now though... So the first thing we will do is bring the workshop content into this environment.

@@ -36,18 +36,43 @@ image::notebooks/od_folder_click.png[alt text]
image::notebooks/od_folder.png[alt text]


- Now let's open another tab and connect to your `S3 Provider Console` with the URL provided by the workshop instructor.
+ Now let's open another tab and connect to the S3 object storage provider that we're going to use throughout this course.

- NOTE: For simplicity, we are leveraging https://min.io/product/private-cloud-red-hat-openshift[MinIO] in this case as the S3 Provider and an instance has already been deployed on OpenShift next to RHODS. S3 buckets and users have been already provisioned for your convenience as well. For production environments we recommend scalable solutions like https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation[OpenShift Data Foundations (ODF)] or solutions from other storage vendors.
+ NOTE: For simplicity, we are leveraging https://min.io/product/private-cloud-red-hat-openshift[MinIO] in this case as the S3 provider, and an instance has already been deployed on OpenShift next to RHOAI. S3 buckets and users have already been provisioned for your convenience as well. For production environments we recommend scalable solutions like https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation[OpenShift Data Foundation (ODF)] or solutions from other storage vendors.

- Log into the S3 Provider Console with the URL and credentials provided. You should be able to see the dashboard, that looks like this:
+ Open the OpenShift Console through the Application Switcher in the top right navigation bar of the OpenShift AI dashboard.

- image::notebooks/minio_console.png[S3 Provider Console]
+ NOTE: If you're at the RHOAI Dashboard and need to navigate to the OpenShift Web Console, use the *Application Switcher* icon in the top right of the navigation bar and click on the *OpenShift Console* menu item.

- * Use the `Object Browser` to navigate to the bucket associated with your user name.
+ image::s2i/rhods-dashboard-app-switcher.png[RHOAI Dashboard App Switcher, 400]

- You can now browse the content of your S3 bucket including a number of images and the model artifact. We will access and work with that model in the following sections.
+ From the `OpenShift Console` switch to the Developer Perspective from the menu on the top left if it's not already selected:

- The _object-detection_ data connection that was provisioned earlier for you and which you reviewed previously is pointing to your user S3 bucket.
+ image::s2i/dev-view.png[Developer Perspective]

+ Navigate to the Topology view of your project (userX). You should now see a number of circular icons representing the pods running in your project. Find the `minio` pod and click on the small arrow icon in its top right corner.
+
+ image::notebooks/minio-access.png[alt text]
+
+ A new tab opens with the MinIO login screen. Authenticate with the following login:
+ [.lines_space]
+ [.console-input]
+ [source,text]
+ ----
+ minio
+ ----
+ and password:
+ [.lines_space]
+ [.console-input]
+ [source,text]
+ ----
+ minio123
+ ----
+
+ You can now browse the S3 buckets that are provided within this MinIO instance. For the remainder of this course, we're using the contents of the `object detection` bucket, which you can select in the left navigation bar. It includes a number of images in the `data` folder and some model artifacts in the `models` folder. We will access and work with these models in the following sections.
+
+ image::notebooks/minio-od-bucket.png[alt text]
+
+ The _object-detection_ data connection that was provisioned earlier for you, and which you reviewed previously, points to this object detection bucket.
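If you'd rather verify the bucket contents programmatically than click through the console, a short `boto3` snippet along the following lines should work from a workbench cell. This is a sketch under assumptions: the endpoint placeholder must be replaced with the MinIO route from your cluster, and the bucket is assumed to be named `object-detection`; the credentials are the ones shown above.

[source,python]
----
import boto3

# Assumption: replace the endpoint with the MinIO route from your cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-minio-route>",
    aws_access_key_id="minio",
    aws_secret_access_key="minio123",
)

# List the images and model artifacts in the workshop bucket
# (bucket name assumed to be object-detection).
response = s3.list_objects_v2(Bucket="object-detection")
for obj in response.get("Contents", []):
    print(obj["Key"])  # expect keys under data/ and models/
----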

xref:1-03-notebooks.adoc[Ready? Let's go to the next section.]
4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/1-03-notebooks.adoc
@@ -16,7 +16,7 @@ And a cell where we have entered some code:

image::notebooks/cell_code.png[alt text]

- * Code cells contain Python code that can be run interactively. Thats means you can modify the code, then run it. The code will not run on your computer or in the browser, but directly in the environment you are connected to, *Red Hat OpenShift Data Science* in our case.
+ * Code cells contain Python code that can be run interactively. That means you can modify the code, then run it. The code will not run on your computer or in the browser, but directly in the environment you are connected to, *Red Hat OpenShift AI* in our case.
* To run a code cell, just select it (click in the cell, or on the left side of it), and click the Run/Play button from the toolbar (you can also press CTRL+Enter to run a cell, or Shift+Enter to run the cell and automatically select the following one).

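If you'd like something concrete to try, a first cell can be as trivial as this sketch; running it prints the output directly beneath the cell:

[source,python]
----
# A minimal first cell: define a variable, then print it.
greeting = "Hello from Jupyter"
print(greeting)
print(2 + 3)  # simple expressions and arithmetic work the same way
----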
The Run button on the toolbar:
@@ -37,7 +37,7 @@ Notebooks are so named because they are just like a physical *Notebook*: it's ex

Now that we have covered the basics, just give it a try!

- * In your Jupyter environment (the file explorer-like interface), there is file called `0_sandbox.ipynb`. Double-click on it to launch the notebook (it will open another tab in the content section of the environment). Please feel free to experiment, run the cells, add some more and create functions. You can do what you want - it's your environment, and there is no risk of breaking anything or impacting other users. This environment isolation is also a great advantage brought by *RHODS*.
+ * In your Jupyter environment (the file explorer-like interface), there is a file called `0_sandbox.ipynb`. Double-click on it to launch the notebook (it will open another tab in the content section of the environment). Please feel free to experiment: run the cells, add some more, and create functions. You can do what you want - it's your environment, and there is no risk of breaking anything or impacting other users. This environment isolation is also a great advantage brought by *RHOAI*.
* You can also create a new notebook by selecting `File` -> `New` -> `Notebook` from the menu on the top left, then selecting a Python 3 kernel. This instructs Jupyter that we want to create a new notebook where the code cells will be run using a Python 3 kernel. We could have different kernels, with different languages or versions, that we can run in notebooks, but that's a story for another time...
* You can also create a notebook by simply clicking on the icon in the launcher:

20 changes: 10 additions & 10 deletions docs/modules/ROOT/pages/2-01-model-api.adoc
@@ -4,28 +4,28 @@ include::_attributes.adoc[]

In the previous section, we learned how to create the code that will use an existing object detection model to identify objects in a static image. But of course we cannot use a notebook directly like this in a production environment.

- So now we will learn how to service this model as an *API* that you can directly query from other applications. For this, we will use the model serving capabilities of RHODS:
+ So now we will learn how to serve this model as an *API* that you can directly query from other applications. For this, we will use the model serving capabilities of RHOAI:

- * Return to the RHODS Dashboard and enter your project. Let's now take a look at the model server that was already provisioned and configured for your convenience. Let's take a look at its configuration.
+ * Return to the RHOAI Dashboard and enter your project. Let's now take a look at the configuration of the model server that was already provisioned for your convenience.
image::notebooks/model_serving.png[Models and model servers]

Under *Models and model servers*, find _OVMS_ and select `Edit model server` under the *vertical ellipsis*, *⋮*. In the configuration page, you can review the typical fields for a model server. When done hit `Cancel` or simply close the window.

- * You're now ready to deploy the object detection model. Under the _OVMS_ model server select *Deploy model* and set it up as follows:
+ You're now ready to deploy the object detection model. Under the _OVMS_ model server select *Deploy model* and set it up as follows:

- ** Choose an arbitrary *Model Name* such as `model`.
+ * Choose an arbitrary *Model Name* such as `model`.
- ** Under *Model framework* select `onnx-1`.
+ * Under *Model framework* select `onnx-1`.
- ** Under *Model location*, *Existing data connection* select the _object-detection_ data connection that was provisioned earlier for you. As you have seen in sections 1.1 and 1.2, this data connection points to your user S3 bucket where the model and other data reside.
+ * Under *Model location*, *Existing data connection* select the _object-detection_ data connection that was provisioned earlier for you. As you have seen in sections 1.1 and 1.2, this data connection points to the object detection S3 bucket where the model and other data reside.
- ** Under *Path* enter `yolov5m.onnx`.
+ * Under *Path* enter `models/yolov5m.onnx`.
- * Select *Deploy* to deploy and start serving your object detection model on the model server. Click on `1` below _Deployed models_ to monitor the status of the model deployment. It will indicate a green status once the model can be consumed. Copy the _Inference endpoint_ so we can test the model deployment.
+ Select *Deploy* to deploy and start serving your object detection model on the model server. Click on `1` below _Deployed models_ to monitor the status of the model deployment. It will indicate a green status once the model can be consumed. Copy the _Inference endpoint_ so we can test the model deployment.

image::notebooks/testing_model_deployment.png[Testing the Model Deployment]

- * In your workbench, open the `2_online-scoring` notebook. Insert the inference endpoint into the `prediction_url` placeholder and execute the notebook. You should see bounding boxes of the objects that the deployed model detected based on the image that it received.
+ In your workbench, open the `2_online-scoring` notebook. Insert the inference endpoint into the `prediction_url` placeholder and execute the notebook. You should see bounding boxes of the objects that the deployed model detected based on the image that it received.

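The notebook already contains the request logic, so the sketch below only illustrates what happens under the hood. It assumes the model server exposes the KServe v2 REST protocol (which OpenVINO Model Server supports) and that the YOLOv5 input tensor is named `images` with a 640x640 RGB shape; both are assumptions to adapt to your deployment, and the dummy image stands in for real preprocessing:

[source,python]
----
import numpy as np
import requests

# Assumption: paste the inference endpoint copied from the dashboard.
prediction_url = "https://<your-inference-endpoint>/v2/models/model/infer"

# Dummy 640x640 RGB image in NCHW layout; real code would preprocess a photo.
dummy_image = np.zeros((1, 3, 640, 640), dtype=np.float32)

payload = {
    "inputs": [
        {
            "name": "images",           # assumed input tensor name
            "shape": [1, 3, 640, 640],
            "datatype": "FP32",
            "data": dummy_image.flatten().tolist(),
        }
    ]
}

response = requests.post(prediction_url, json=payload)
response.raise_for_status()
print(response.json()["outputs"][0]["shape"])  # raw detection tensor shape
----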
- You just deployed your first model with RHODS model serving. Let's now xref:2-02-deploy-s2i.adoc[deploy a small application that consumer the model].
+ You just deployed your first model with RHOAI model serving. Let's now xref:2-02-deploy-s2i.adoc[deploy a small application that consumes the model].
12 changes: 7 additions & 5 deletions docs/modules/ROOT/pages/2-02-deploy-s2i.adoc
@@ -25,9 +25,9 @@ We'll start from the *OpenShift Console*.

image::s2i/ocp-console.png[Ocp Console]

- NOTE: If you're at the RHODS Dashboard and need to navigate to the OpenShift Web Console, use the *Application Switcher* Icon in the top right of the navigation bar and click on the *OpenShift Console* menu item.
+ NOTE: If you're at the RHOAI Dashboard and need to navigate to the OpenShift Web Console, use the *Application Switcher* icon in the top right of the navigation bar and click on the *OpenShift Console* menu item.

- image::s2i/rhods-dashboard-app-switcher.png[RHODS Dashboard App Switcher, 400]
+ image::s2i/rhods-dashboard-app-switcher.png[RHOAI Dashboard App Switcher, 400]

* From the `OpenShift Console` switch to the Developer Perspective from the menu on the top left if it's not already selected:

@@ -55,6 +55,8 @@ quay.io/mmurakam/object-detection-rest:v0.1.0
----
* You will see that OpenShift automatically validates the container image.
* Under Runtime icon, select `python`.
+ --
+ Build from source::
Expand Down Expand Up @@ -147,7 +149,7 @@ Value:
[.console-input]
[source,text]
----
- <Your model's inference endpoint you copied from your RHODS project under Models and model servers>
+ <Your model's inference endpoint you copied from your RHOAI project under Models and model servers>
----

image::s2i/env_prediction_url.png[Prediction URL for the inference endpoint]
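For orientation only: inside the application container, a variable set this way is typically read from the process environment at startup. A hypothetical sketch, not the app's actual source:

[source,python]
----
import os

# Read the inference endpoint injected through the deployment environment.
prediction_url = os.environ["PREDICTION_URL"]  # raises KeyError if unset
----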
@@ -160,9 +162,9 @@ Everything is ready, so you can click on *Create*:

//image::s2i/create-button.png[alt text]

== Checking your Application

- * You will see that a build is going on from the build icon:
+ * If you're building the application container from source, you will see that a build is going on from the build icon:

image::s2i/topology.png[alt text, 400]

