Rutuja/edits of this course (#57)
* Edits of first 2 chapters

* Edits of last 2 chapters

---------

Co-authored-by: Ravi Srinivasan <[email protected]>
Rutuja-desh and rsriniva authored May 27, 2024
1 parent ad58f08 commit fa7dca0
Showing 10 changed files with 36 additions and 31 deletions.
14 changes: 7 additions & 7 deletions modules/ROOT/pages/index.adoc
Original file line number Diff line number Diff line change
@@ -135,17 +135,17 @@ image::htpasswd-provider.png[title=htpasswd_provider prompt]
== Prerequisites

* Basic knowledge of OpenShift (or Kubernetes) administration
* Building and deploying container images
* OpenShift User and Role administration
* Ability to build and deploy container images
* Knowledge of OpenShift User and Role administration
* Basic knowledge of AWS EC2 and S3 services

== Objectives

The overall objectives of this course include:

* Installing RedHat OpenShift AI using the web console and CLI
* Upgrading RedHat OpenShift AI components
* Managing RedHat OpenShift AI users and controlling access
* Enabling GPU support in RedHat OpenShift AI
* Stopping idle notebooks
* Install RedHat OpenShift AI using the web console and CLI
* Upgrade RedHat OpenShift AI components
* Manage RedHat OpenShift AI users and controlling access
* Enable GPU support in RedHat OpenShift AI
* Stop idle notebooks
* Create and configure a custom notebook image
10 changes: 6 additions & 4 deletions modules/chapter1/pages/dependencies-install-web-console.adoc
@@ -1,9 +1,11 @@
= Installing Dependencies Using the Web Console

As described in the xref::install-general-info.adoc[General Information about Installation] section you may need to install other operators depending on the components and features of OpenShift AI you want to use. This section will discuss installing and configuring those components.
As described in the xref::install-general-info.adoc[General Information about Installation] section, you may need to install other operators depending on the components and features of OpenShift AI you want to use. This section discusses installing and configuring those components.

It is generally recommended to install any dependent operators prior to installing the *Red{nbsp}Hat OpenShift AI* operator.

// This section given below is the same as in the previous chapter. Is the whole section with explanation required here again?

https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator]::
The *Red Hat OpenShift Pipelines Operator* is required if you want to install the *Data Science Pipelines* component.
https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html[NVIDIA GPU Operator]::
@@ -45,7 +47,7 @@ image::pipeline_install4.png[width=800]

*Red{nbsp}Hat OpenShift Pipelines* is now successfully installed.

TIP: For assistance installing OpenShift Pipelines from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/openshift-pipelines-operator[redhat-cop/gitops-catalog/openshift-pipelines-operator] GitHub repo.
TIP: For assistance in installing OpenShift Pipelines from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/openshift-pipelines-operator[redhat-cop/gitops-catalog/openshift-pipelines-operator] GitHub repo.

== Lab: Installation of GPU Dependencies

@@ -55,7 +57,7 @@ Currently, *Red{nbsp}Hat OpenShift AI* supports accelerated compute with NVIDIA

The following section will discuss the installation and a basic configuration of both *NVIDIA GPU Operator* and the *Node Feature Discovery* operator.

NOTE: *Node Feature Discovery* and the *NVIDIA GPU Operator* can both be installed in a cluster that does not have a node with a GPU. This can be helpful when you plan to add GPUs at a later date. If a GPU is not present in the cluster the Dashboard will not present the user an option to deploy using a GPU.
NOTE: *Node Feature Discovery* and the *NVIDIA GPU Operator* can both be installed in a cluster that does not have a node with a GPU. This can be helpful when you plan to add GPUs at a later date. If a GPU is not present in the cluster, the Dashboard will not present the user an option to deploy using a GPU.

TIP: To view the list of GPU models supported by the *NVIDIA GPU Operator* refer to the https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html#supported-nvidia-gpus-and-systems[Supported NVIDIA GPUs and Systems] docs.

@@ -93,7 +95,7 @@ image::nfd_configure1.png[width=800]
+
image::nfd_verify.png[width=800]

TIP: For assistance installing the Node Feature Discovery Operator from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/nfd[redhat-cop/gitops-catalog/nfd] GitHub repo.
TIP: For assistance in installing the Node Feature Discovery Operator from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/nfd[redhat-cop/gitops-catalog/nfd] GitHub repo.

*Node Feature Discovery* is now successfully installed and configured.

1 change: 1 addition & 0 deletions modules/chapter1/pages/rhods-install-web-console.adoc
@@ -11,6 +11,7 @@ IMPORTANT: The installation requires a user with the _cluster-admin_ role
. Navigate to **Operators** -> **OperatorHub** and search for *OpenShift AI*.
+
image::rhods_install1.png[title=Search for OpenShift AI operator,width=800]
// The sentence in this image is not captured correctly

. Click on the `Red{nbsp}Hat OpenShift AI` operator. In the pop up window that opens, ensure you select the latest version in the *stable* channel and click on **Install** to open the operator's installation view.
+
12 changes: 6 additions & 6 deletions modules/chapter1/pages/uninstalling-rhods.adoc
@@ -5,12 +5,12 @@ The *Red{nbsp}Hat OpenShift AI* operator manages *Red{nbsp}Hat OpenShift AI* com
[#demo-rhods]
== Demo: Uninstalling Red{nbsp}Hat OpenShift AI

WARNING: These steps are for demonstration purposes only! Do NOT run these steps in your classroom because you will continue to work with the installed version of the product in the hands on labs in the course.
WARNING: These steps are for demonstration purposes only! Do NOT run these steps in your classroom because you will continue to work with the installed version of the product in the hands-on labs in the course.

[IMPORTANT]
Make sure that you have installed the *Red{nbsp}Hat OpenShift AI* operator using one of the previous demonstrations (Web based or CLI). They both install a version of the operator from the _stable_ channel. The screenshots in this section were taken on an older version of the product and may not exactly match yours.

. Log in to Red{nbsp} OpenShift web console using a user which has the _cluster-admin_ role assigned.
. Log in to the Red{nbsp}Hat OpenShift web console as a user that has the _cluster-admin_ role assigned.

. Delete the DataScienceCluster object.
+
@@ -33,7 +33,7 @@ Navigate to the *DSCInitialization* tab and delete all *DSCI* resources.
+
image::rhods2-delete-dsci.png[title=Delete DSCI Resource]
+
alternatively you can delete the *DCSI* objects from the CLI.
Alternatively, you can delete the *DSCI* objects from the CLI.
+
[subs=+quotes]
----
@@ -73,7 +73,7 @@ namespace "redhat-ods-operator" deleted

. Delete the namespaces that the Operator created during
installation. They are labeled with label _opendatahub.io/generated-namespace=true_.
Deleting namespace _rhods-notebooks_ leads to Persistent Volume Claims (PVC) being used by Workbench get deleted as well.
Deleting the _rhods-notebooks_ namespace also deletes the Persistent Volume Claims (PVCs) used by workbenches.
+
Navigate to *Administration* -> *Namespaces*, filter the namespaces using the label _opendatahub.io/generated-namespace=true_ and delete them.
+
@@ -105,7 +105,7 @@ namespace "my-rhods-project" deleted
== Uninstalling the Red{nbsp}Hat OpenShift AI dependencies

If you have installed some dependencies, you can remove them as long as they are not used by other deployments.
The following demonstration shows uninstallation of the *Red{nbsp}Hat OpenShift Pipelines* operator
The following demonstration shows uninstallation of the *Red{nbsp}Hat OpenShift Pipelines* operator.

[#demo-pipelines]
=== Demo: Uninstallation of the *Red{nbsp}Hat OpenShift Pipelines* operator
@@ -117,7 +117,7 @@ image::pipelines-uninstall.png[width=800]
+
Click on Uninstall operator.

. In the pop-up window scroll down, check *Delete all operand instances for this operator* and click on *Uninstall*
. In the pop-up window scroll down, check *Delete all operand instances for this operator* and click on *Uninstall*.
+
image::piplines-uninstall-confirm.png[width=800]

8 changes: 4 additions & 4 deletions modules/chapter1/pages/upgrading-rhods.adoc
@@ -2,7 +2,7 @@

Red Hat OpenShift AI upgrades are handled by the *Red{nbsp}Hat OpenShift AI* operator.

When an upgrade is available *OLM* creates an *Installplan* for the new version.
When an upgrade is available, *OLM* creates an *Installplan* for the new version.

[subs=+quotes]
----
@@ -11,19 +11,19 @@ NAME            CSV                    APPROVAL   APPROVED
install-sp49w rhods-operator.2.7.0 Manual false <1>
install-w6lqv rhods-operator.2.6.0 Manual true <2>
----
<1> *Installplan* for the new version of the operator which has not been approved yet. It has to be approved in order to start the upgrade.
<2> *Installplan* for the currently installed version. It's been approved and the version is currently installed.


Installplan is approved either automatically when a new version is available without user's intervention or requires a manual approval. Whether the approval is automatic or manual depends on the value of the *installPlanApproval* attribute of the operator's subscription. When it is set to _Automatic_ the *installplan* is approved automatically and installation starts without user's intervention. _Manual_ value requires a manual approval.
An *Installplan* is either approved automatically, without user intervention, or requires a manual approval. Whether the approval is automatic or manual depends on the value of the *installPlanApproval* attribute of the operator's subscription. When it is set to _Automatic_, the *installplan* is approved automatically and installation starts without user intervention. The _Manual_ value requires a manual approval.

Approvals can be set from the web console as well as from the CLI.
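The *installPlanApproval* attribute lives on the operator's *Subscription* object. The following is a minimal sketch of what such a Subscription could look like; the operator name, namespace, channel, and catalog source shown here are assumptions and may differ in your cluster.

[source,yaml]
----
# Hypothetical Subscription sketch; metadata.name, namespace, channel,
# and source are assumptions and may differ in your cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
spec:
  channel: stable
  name: rhods-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual  # set to Automatic to approve new Installplans without intervention
----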

The following demonstration shows an upgrade with a manual approval done from the web console.

== Demo: Upgrading Red{nbsp}Hat OpenShift AI

WARNING: These steps are for demonstration purposes only! Do NOT run these steps in your classroom because you will continue to work with the _stable_ version of the product in the hands on labs in the course.
WARNING: These steps are for demonstration purposes only! Do NOT run these steps in your classroom because you will continue to work with the _stable_ version of the product in the hands-on labs in the course.

[IMPORTANT]
Make sure that you have installed the *Red{nbsp}Hat OpenShift AI* operator using one of the previous demonstrations (Web based or CLI). They both install a version of the operator from the _stable_ channel. The screenshots in this section were taken on an older version of the product and may not exactly match yours.
6 changes: 3 additions & 3 deletions modules/chapter2/pages/resources.adoc
Expand Up @@ -4,7 +4,7 @@ OpenShift AI allows for end users to create many different types of resources as

== Kubernetes Resources

The majority of actions taken in the OpenShift AI will create a kubernetes object based on that action. This section will discussion the different components supported by OpenShift AI and the different resources they create or interact with.
The majority of actions taken in OpenShift AI create a Kubernetes object based on that action. This section will discuss the different components supported by OpenShift AI and the different resources they create or interact with.

=== Data Science Projects

@@ -15,7 +15,7 @@ The majority of actions taken in the OpenShift AI will create a kubernetes objec
|Data Science Project
|projects.project.openshift.io
|No
|A Data Science Project is synonym with an OpenShift Project or a Namespace. See the users section for more information on how to create and manage Data Science Projects
|A Data Science Project is synonymous with an OpenShift Project or a Namespace. See the users section for more information on how to create and manage Data Science Projects.

|Users
|users.user.openshift.io
@@ -144,7 +144,7 @@ In this case, RHOAI will stop a notebook if no logged-in user activity is detect

== Managing Workbench and Model Server Sizes

When launching Workbenches or Model Servers from the Dashboard, users are presented with several default sizes they can select from. The default options may not suit your organizations needs
When launching Workbenches or Model Servers from the Dashboard, users are presented with several default sizes they can select from. The default options may not suit your organization's needs.

----
apiVersion: opendatahub.io/v1alpha
7 changes: 4 additions & 3 deletions modules/chapter2/pages/users.adoc
@@ -47,7 +47,7 @@ The admin group can be updated to another group other than the default `rhods-ad
[TIP]
====
It is highly recommended that dedicated admin users are configured for OpenShift AI and that organizations do not rely on the `cluster-admin` role for exclusive permissions to admin configurations of OpenShift AI. Dedicated Admin users should be added to the existing `rhods-admins` group, or another group which already contains the correct users should be configured in lieu of the `rhods-admins` group.
It is highly recommended that dedicated admin users are configured for OpenShift AI and that organizations do not rely on the `cluster-admin` role for exclusive permissions to admin configurations of OpenShift AI. Dedicated admin users should be added to the existing `rhods-admins` group, or another group that already contains the correct users should be configured in lieu of the `rhods-admins` group.
====

@@ -56,11 +56,12 @@ The normal Data Science user group can also be updated to change what users are
[WARNING]
====
Updating the access in the `User and group settings` in the Dashboard will only impact a users abilities to access the Dashboard, and will not impact any permissions granted by regular Kubernetes based RBAC.
Updating the access in the `User and group settings` in the Dashboard will only impact a user's abilities to access the Dashboard, and will not impact any permissions granted by regular Kubernetes based RBAC.
For example, if the normal user group is updated to only allow specific users to access the Dashboard, and a user that is not part of that group has admin may still have the ability to create Data Science related objects such as Notebooks, Data Science Pipelines, or Model Servers in a namespace they have permission in using the associated k8s objects without the UI.
For example, if the normal user group is updated to allow only specific users to access the Dashboard, a user who is not part of that group may still be able to create Data Science related objects such as Notebooks, Data Science Pipelines, or Model Servers in a namespace they have permissions in, by using the associated k8s objects without the UI.
====
// Revised sentence suggestion - For example, the normal user group is updated to allow access to the Dashboard only for specific users. In this scenario, an administrator, even if not included in that group, might still have the ability to create Data Science-related objects—like Notebooks, Data Science Pipelines, or Model Servers—in a namespace where they have permissions, in using the associated k8s objects without the UI."
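The caveat above can be illustrated with plain Kubernetes RBAC. In the following hypothetical RoleBinding (the user name and namespace are assumptions, not taken from the course), the user retains edit rights in the namespace regardless of how the Dashboard's user group is configured:

[source,yaml]
----
# Hypothetical example: the user and namespace names are assumptions.
# This RoleBinding grants edit rights in the namespace independently of
# the Dashboard's `User and group settings`.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ds-user-edit
  namespace: my-ds-project
subjects:
- kind: User
  name: data-scientist-1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
----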

=== Managing Dashboard Permissions with GitOps

2 changes: 1 addition & 1 deletion modules/chapter3/pages/custom.adoc
@@ -6,7 +6,7 @@ Custom notebook images allow you to add specific packages and libraries for your

The default OpenShift AI notebook images can be found in the https://github.com/red-hat-data-services/notebooks[Default OpenShift AI notebooks] repository. This repository contains the source code, pre-built images, and examples to help you create your own image.

Additional images are available at the ODH contributions repository (https://github.com/opendatahub-io-contrib/workbench-images[opendatahub-io-contrib/workbench-images]) This is a place to source additional images, as well as a great resource for best practices for building custom images. Workbench and runtime images are available as well as a script to generate custom images (https://github.com/opendatahub-io-contrib/workbench-images#building-an-image[])
Additional images are available at the ODH contributions repository (https://github.com/opendatahub-io-contrib/workbench-images[opendatahub-io-contrib/workbench-images]). This is a place to source additional images, as well as a great resource for best practices for building custom images. Workbench and runtime images are available as well as a script to generate custom images (https://github.com/opendatahub-io-contrib/workbench-images#building-an-image[]).

== Exercise
We will now build our own custom image. We'll use https://quay.io/modh/cuda-notebooks[CUDA Notebook] as our base image. This image contains artifacts that we will be using later in the course.
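A custom notebook image of this kind is typically just a short Containerfile layered on a base image. The sketch below is an assumption about what such a build could look like, not the exact file used in this exercise; the image tag and the installed packages are placeholders.

[source,dockerfile]
----
# Hypothetical Containerfile sketch; the base image tag and the
# packages installed are assumptions, not the exact exercise content.
FROM quay.io/modh/cuda-notebooks:latest

# Add the extra Python libraries your workloads need
RUN pip install --no-cache-dir torch torchvision
----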
5 changes: 3 additions & 2 deletions modules/chapter3/pages/importcustom.adoc
@@ -3,7 +3,7 @@
In this section, we will go over importing a custom notebook image through the OpenShift AI dashboard and test it to make sure our dependencies are included.

== Import the Notebook Image
1. Before we import the image into OpenShif AI we need to set the quay repository we just created to public. In a browser login to quay.io and go to the *rhods-admin-custom-image* repository. Select the *Settings* gear icon.
1. Before we import the image into OpenShift AI, we need to set the Quay repository we just created to public. In a browser, log in to quay.io and go to the *rhods-admin-custom-image* repository. Select the *Settings* gear icon.
+
image::quaySettings.png[Quay Repository Settings]

@@ -61,7 +61,8 @@ image::rhodsDataScienceProj.png[OpenShift AI Data Science Projects]
+
image::rhodsCreateProj.png[OpenShift AI Project]

3. Click the *Create* button.
// This is covered in the earlier instruction.
+
image::rhodsCreateWrkbench.png[OpenShift AI Workbench]

2 changes: 1 addition & 1 deletion modules/chapter3/pages/index.adoc
@@ -5,4 +5,4 @@ In this chapter, you learn how to build custom notebook images.
Goals:

* Create custom notebook images
* How to import custom notebook images into OpenShift AI
* Import custom notebook images into OpenShift AI
