doc-update-ptnfly6 update images and adoc files to reflect Patternfly 6 UI changes (#63)

* doc-update-ptnfly6 update images and adoc files to reflect Patternfly 6 UI changes

* doc-update-ptnfly6 - address peer review comments
MelissaFlinn authored Jan 22, 2025
1 parent 306d4f1 commit cbc9204
Showing 54 changed files with 44 additions and 58 deletions.
(Binary image files and diffs that could not be rendered in this view are omitted.)
@@ -65,31 +65,33 @@ image::pipelines/wb-pipeline-node-1.png[Select Node 1, 150]

. Scroll down to the *File Dependencies* section and then click *Add*.
+
image::pipelines/wb-pipeline-node-1-file-dep.png[Add File Dependency, 200]
image::pipelines/wb-pipeline-node-1-file-dep.png[Add File Dependency, 500]

. Set the value to `data/*.csv`, which contains the data used to train your model.

. Select the *Include Subdirectories* option and then click *Add*.
. Select the *Include Subdirectories* option.
+
image::pipelines/wb-pipeline-node-1-file-dep-form.png[Set File Dependency Value, 500]
image::pipelines/wb-pipeline-node-1-file-dep-form.png[Set File Dependency Value, 300]

. Save the pipeline.
. *Save* the pipeline.

== Create and store the ONNX-formatted output file

In node 1, the notebook creates the `models/fraud/1/model.onnx` file. In node 2, the notebook uploads that file to the S3 storage bucket. You must set the `models/fraud/1/model.onnx` file as the output file for both nodes.
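
For orientation, the notebook logic behind these two nodes might look like the following minimal sketch. It is an illustration, not the workshop's literal code: it assumes `boto3` is available in the workbench and that the attached connection injects the standard `AWS_*` environment variables (only `AWS_ACCESS_KEY_ID` appears on this page; the other names are assumptions).

[source,python]
----
import os
import boto3

# Node 1 (hypothetical): training writes the ONNX model to this local path.
onnx_path = "models/fraud/1/model.onnx"

# Node 2 (hypothetical): upload the file to the S3 bucket, reusing the same
# relative path as the object key so the serving layout matches.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],                 # assumed name
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],  # assumed name
)
s3.upload_file(onnx_path, os.environ["AWS_S3_BUCKET"], onnx_path)  # assumed name
----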

. Select node 1 and then select the *Node Properties* tab.
. Select node 1.

. Select the *Node Properties* tab.

. Scroll down to the *Output Files* section, and then click *Add*.

. Set the value to `models/fraud/1/model.onnx` and then click *Add*.
. Set the value to `models/fraud/1/model.onnx`.
+
image::pipelines/wb-pipeline-node-1-file-output-form.png[Set file dependency value, 400]

. Repeat steps 1-3 for node 2.
. Repeat steps 2-4 for node 2.

. Save the pipeline.
. *Save* the pipeline.

== Configure the connection to the S3 storage bucket

@@ -144,7 +146,7 @@ image::pipelines/wb-pipeline-add-kube-secret.png[Add Kubernetes Secret]
** *Secret Name*: `aws-connection-my-storage`
** *Secret Key*: `AWS_ACCESS_KEY_ID`
+
image::pipelines/wb-pipeline-kube-secret-form.png[Secret Form, 300]
image::pipelines/wb-pipeline-kube-secret-form.png[Secret Form, 400]

. Repeat Step 2 for each of the following Kubernetes secrets:

@@ -185,15 +187,15 @@ NOTE: If `Data Science Pipeline` is not available as a runtime configuration, yo

. Return to your data science project and expand the newly created pipeline.
+
image::pipelines/dsp-pipeline-complete.png[Set File Dependency Value, 800]
image::pipelines/dsp-pipeline-complete.png[New pipeline expanded, 800]

. Click the action menu (⋮) and then select *View runs* from the list.
. Click *View runs*.
+
image::pipelines/dsp-view-run.png[Set File Dependency Value, 800]
image::pipelines/dsp-view-run.png[View runs for selected pipeline, 500]

. Click on your run and then view the pipeline run in progress.
. Click your run and then view the pipeline run in progress.
+
image::pipelines/pipeline-run-complete.png[Set File Dependency Value, 800]
image::pipelines/pipeline-run-complete.png[Pipeline run progress, 800]

The result should be a `models/fraud/1/model.onnx` file in your S3 bucket, which you can serve, just like you did manually in the xref:preparing-a-model-for-deployment.adoc[Preparing a model for deployment] section.
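
To confirm that the run produced the artifact, you can list the bucket contents from a notebook. A minimal sketch under the same assumptions as above (`boto3` plus connection-injected `AWS_*` variables):

[source,python]
----
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],  # assumed variable names
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
objects = s3.list_objects_v2(
    Bucket=os.environ["AWS_S3_BUCKET"], Prefix="models/fraud/"
)
for obj in objects.get("Contents", []):
    print(obj["Key"])  # expect models/fraud/1/model.onnx to appear
----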

16 changes: 4 additions & 12 deletions workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc
@@ -32,25 +32,17 @@ image::workbenches/create-workbench-form-image.png[Workbench image, 600]
+
image::workbenches/create-workbench-form-size.png[Workbench size, 600]

. Leave the *Accelerator* field with the default `None` selection.
+
image::workbenches/create-workbench-form-accelerator.png[Workbench accelerator, 600]

. Leave the default environment variables and storage options.
+
image::workbenches/create-workbench-form-env-storage.png[Workbench storage, 600]

. Under *Connections*, select *Use a connection*.
+
image::workbenches/create-workbench-form-use-data-connection.png[Use connection, 600]
. For *Connections*, click *Attach existing connection*.

. Select *Use existing connection*, and then select `My Storage` (the object storage that you configured previously) from the list.
. Select `My Storage` (the object storage that you configured previously) and then click *Attach*.
+
image::workbenches/create-workbench-form-data-connection.png[Connection form, 600]

. Click the *Create workbench* button.
+
image::workbenches/create-workbench-form-button.png[Create workbench button, 150]
. Click *Create workbench*.
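
Once the workbench starts with `My Storage` attached, the connection's fields should surface in notebooks as environment variables. A hedged check from a notebook cell (the variable names are assumed from the S3-compatible connection type):

[source,python]
----
import os

# Assumed variable names; adjust if your connection exposes different ones.
for var in ("AWS_S3_ENDPOINT", "AWS_S3_BUCKET", "AWS_ACCESS_KEY_ID"):
    print(var, "is set" if var in os.environ else "is missing")
----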

.Verification

@@ -60,7 +52,7 @@ image::workbenches/ds-project-workbench-list.png[Workbench list]

NOTE: If you made a mistake, you can edit the workbench to make changes.

image::workbenches/ds-project-workbench-list-edit.png[Workbench list edit, 250]
image::workbenches/ds-project-workbench-list-edit.png[Workbench list edit, 350]


.Next step
@@ -1,4 +1,4 @@
[id='creating-data-connections-to-storage']
[id='creating-connections-to-storage']
= Creating connections to your own S3-compatible object storage

If you have existing S3-compatible storage buckets that you want to use for this {deliverable}, you must create a connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this {deliverable}, create another connection to a different storage bucket for saving pipeline artifacts.
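
Before creating the connections, you can optionally sanity-check the details your storage administrator provided. A minimal sketch, assuming `boto3` and using placeholder values (substitute your real endpoint, credentials, and bucket name):

[source,python]
----
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",  # placeholder endpoint
    aws_access_key_id="MY_ACCESS_KEY",      # placeholder credentials
    aws_secret_access_key="MY_SECRET_KEY",
)
try:
    s3.head_bucket(Bucket="my-data-bucket")  # placeholder bucket name
    print("Bucket is reachable with these credentials.")
except ClientError as err:
    print("Check the connection details:", err)
----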
@@ -23,19 +23,17 @@ If you don't have this information, contact your storage administrator.

.. In the {productname-short} dashboard, navigate to the page for your data science project.

.. Click the *Connections* tab, and then click *Add connection*.
.. Click the *Connections* tab, and then click *Create connection*.
+
image::projects/ds-project-add-dc.png[Add connection]

.. In the *Add connection* modal, for the *Connection type* select *S3 compatible object storage - v1*.

.. Complete the *Add connection* form and name your connection *My Storage*. This connection is for saving your personal work, including data and models.
+
NOTE: Skip the *Connected workbench* item. You add connections to a workbench in a later section.
+
image::projects/ds-project-my-storage-form.png[Add my storage form, 500]

.. Click *Add connection*.
.. Click *Create*.

. Create a connection for saving pipeline artifacts:
+
@@ -45,18 +43,16 @@ NOTE: If you do not intend to complete the pipelines section of the {deliverable

.. Complete the form and name your connection *Pipeline Artifacts*.
+
NOTE: Skip the *Connected workbench* item. You add connections to a workbench in a later section.
+
image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts form, 500]

.. Click *Add connection*.
.. Click *Create*.


.Verification

In the *Connections* tab for the project, check to see that your connections are listed.

image::projects/ds-project-dc-list.png[List of project connections, 500]
image::projects/ds-project-connections.png[List of project connections, 500]


.Next steps
@@ -15,7 +15,7 @@ image::model-serving/ds-project-model-list-add.png[Models]
+
NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.

. In the *Multi-model serving platform* tile, click *Add model server*.
. In the *Multi-model serving platform* tile, click *Select multi-model*.

. In the form, provide the following values:
.. For *Model server name*, type a name, for example `Model Server`.
@@ -28,7 +28,7 @@ image::model-serving/create-model-server-form.png[Create model server form, 400]

. In the *Models and model servers* list, next to the new model server, click *Deploy model*.
+
image::model-serving/ds-project-workbench-list-deploy.png[Create model server form]
image::model-serving/ds-project-workbench-list-deploy.png[Create model server form, 500]

. In the form, provide the following values:
.. For *Model deployment name*, type `fraud`.
@@ -37,15 +37,15 @@ image::model-serving/ds-project-workbench-list-deploy.png[Create model server fo
.. Type the path that leads to the version folder that contains your model file: `models/fraud`
.. Leave the other fields with the default settings.
+
image::model-serving/deploy-model-form-mm.png[Deploy model form for multi-model serving, 400]
image::model-serving/deploy-model-form-mm.png[Deploy model form for multi-model serving, 500]

. Click *Deploy*.

.Verification

Notice the loading symbol under the *Status* section. The symbol changes to a green checkmark when the deployment completes successfully.

image::model-serving/ds-project-model-list-status-mm.png[Deployed model status]
image::model-serving/ds-project-model-list-status.png[Deployed model status, 350]


.Next step
@@ -16,7 +16,7 @@ image::model-serving/ds-project-model-list-add.png[Models]
+
NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.

. In the *Single-model serving platform* tile, click *Deploy model*.
. In the *Single-model serving platform* tile, click *Select single-model*.
. In the form, provide the following values:
.. For *Model deployment name*, type `fraud`.
.. For *Serving runtime*, select `OpenVINO Model Server`.
@@ -33,7 +33,7 @@ image::model-serving/deploy-model-form-sm.png[Deploy model form for single-model

Notice the loading symbol under the *Status* section. The symbol changes to a green checkmark when the deployment completes successfully.

image::model-serving/ds-project-model-list-status-sm.png[Deployed model status]
image::model-serving/ds-project-model-list-status.png[Deployed model status, 350]

.Next step

@@ -36,7 +36,7 @@ You must wait until the pipeline configuration is complete before you continue a
+
If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
+
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server, 150]
image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server, 250]
+
You can also ask your {productname-short} administrator to verify that self-signed certificates are added to your cluster as described in link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs[Working with certificates].

@@ -45,7 +45,7 @@
. Navigate to the *Pipelines* tab for the project.
. Next to *Import pipeline*, click the action menu (⋮) and then select *View pipeline server configuration*.
+
image::projects/ds-project-pipeline-server-view.png[View pipeline server configuration menu, 150]
image::projects/ds-project-pipeline-server-view.png[View pipeline server configuration menu, 250]
+
An information box opens and displays the object storage connection information for the pipeline server.

@@ -25,7 +25,7 @@ The {productname-short} dashboard shows the *Home* page.

NOTE: You can navigate back to the OpenShift console by clicking the application launcher.

image::projects/ds-console-ocp-tile.png[OCP console link, 400]
image::projects/ds-console-ocp-tile.png[OCP console link, 300]

For now, stay in the {productname-short} dashboard.

@@ -14,7 +14,7 @@ To prepare a model for deployment, you must complete the following tasks:
* You created the `My Storage` connection and have added it
to your workbench.
+
image::projects/ds-project-dc-list.png[Data storage in workbench]
image::projects/ds-project-connections.png[Data storage in workbench]

.Procedure

@@ -18,8 +18,6 @@ For your convenience, the output of the `build.sh` script is provided in the `7_
. Upload the `7_get_data_train_upload.yaml` file to {productname-short}.

.. In the {productname-short} dashboard, navigate to your data science project page. Click the *Pipelines* tab and then click *Import pipeline*.
+
image::pipelines/dsp-pipeline-import.png[]

.. Enter values for *Pipeline name* and *Pipeline description*.

@@ -29,25 +27,23 @@ image::pipelines/dsp-pipline-import-upload.png[]

.. Click *Import pipeline* to import and save the pipeline.
+
The pipeline shows in the list of pipelines.

. Expand the pipeline item, click the action menu (⋮), and then select *View runs*.
The pipeline appears in the graph view.
+
image::pipelines/dsp-pipline-view-runs.png[]
image::pipelines/python-pipeline-graph.png[]

. Click *Create run*.
. Select *Actions* -> *Create run*.

. On the *Create run* page, provide the following values:
.. For *Experiment*, leave the default `Default` value.
.. For *Experiment*, leave the value as `Default`.
.. For *Name*, type any name, for example `Run 1`.
.. For *Pipeline*, select the pipeline that you uploaded.
+
You can leave the other fields with their default values.
+
image::pipelines/pipeline-create-run-form.png[Create Pipeline Run form]

. Click *Create* to create the run.
. Click *Create run* to create the run.
+
A new run starts immediately. The run details page shows a pipeline created in Python that is running in {productname-short}.
A new run starts immediately.
+
image::pipelines/pipeline-run-in-progress.png[]
image::pipelines/pipeline-run-in-progress.png[New pipeline run, 400]
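
For context, a compiled pipeline file like `7_get_data_train_upload.yaml` is typically generated from Python with the Kubeflow Pipelines SDK. The following is a hedged sketch of that compile step, assuming the KFP v2 SDK; the component and pipeline here are hypothetical stand-ins, not the contents of `build.sh`:

[source,python]
----
from kfp import compiler, dsl

@dsl.component
def get_data() -> str:
    # Hypothetical stand-in for the real data-fetching step.
    return "data fetched"

@dsl.pipeline(name="get-data-train-upload")
def fraud_pipeline():
    get_data()

# Roughly what a build script would run to emit the importable YAML.
compiler.Compiler().compile(fraud_pipeline, "7_get_data_train_upload.yaml")
----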
4 changes: 2 additions & 2 deletions workshop/docs/modules/ROOT/pages/testing-the-model-api.adoc
@@ -10,9 +10,9 @@ Now that you've deployed the model, you can test its API endpoints.

. Take note of the model's Inference endpoint URL. You need this information when you test the model API.
+
image::model-serving/ds-project-model-inference-endpoint.png[Model inference endpoint]
+
If the *Inference endpoint* field contains an *Internal endpoint details* link, click the link to open a text box that shows the URL details, and then take note of the *restUrl* value.
+
image::model-serving/ds-project-model-inference-endpoint.png[Model inference endpoint, 300]

. Return to the Jupyter environment and try out your new endpoint.
+
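For example, a request sketch for a KServe v2-style REST call. The endpoint URL, input tensor name, and feature count are assumptions; match them to the *restUrl* value you noted and your model's actual signature:

[source,python]
----
import requests

rest_url = "https://<inference-endpoint>"  # use the endpoint noted above

payload = {
    "inputs": [
        {
            "name": "dense_input",  # assumed input tensor name
            "shape": [1, 5],        # assumed feature count
            "datatype": "FP32",
            "data": [0.31, 1.0, -0.7, 0.0, 1.0],  # example feature values
        }
    ]
}

# The /v2/models/<name>/infer path may differ for your serving runtime.
response = requests.post(f"{rest_url}/v2/models/fraud/infer", json=payload)
print(response.json())
----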
