
Commit

Merge pull request #1 from RedHatQuickCourses/transfer-from-local
Transfer local course files to github shell llm-on-rhoai
kknoxrht authored Jun 3, 2024
2 parents e2a7b19 + fc54985 commit 46eb440
Showing 45 changed files with 2,592 additions and 54 deletions.
Binary file added .DS_Store
86 changes: 72 additions & 14 deletions README.md
@@ -1,23 +1,81 @@
# Red Hat OpenShift Data Science Quick Course
# Serving LLM Models on OpenShift AI

This is the starter template for creating a new quick course in the **RedHatQuickCourses** GitHub organization.
This is an advanced course on serving a Large Language Model (LLM) using OpenShift AI. The course is a lab walkthrough that starts with an OpenShift Container Platform cluster. You will need to install the Operators to successfully configure OpenShift AI. Once it is operational, you will add the Ollama model serving runtime, create a data science project, deploy S3-compatible storage, set up data connections, create a workbench, use the single-model serving platform to host the Ollama framework, configure a Mistral model, and then work through a Jupyter notebook to test your model's performance.

After you create a new repository based on this template, you need to edit and change several files and replace placeholder text and links in them.
# Creating Course Content

1. Add a _README_ at the root of your repository and leave instructions for people contributing to your quick course. Make sure you provide the link to the GitHub issues page for your course so that contributors and users can report issues and provide feedback about your course.
We use a system called Antora (https://antora.org) to publish courses. Antora expects the files and folders in a source repository to be arranged in a certain opinionated way to simplify the process of writing course content using asciidoc, and then converting the asciidoc source to HTML.

1. Edit the **antora.yml** file in the repository root.
* Change the _name_, _title_ and _version_ attributes
* Edit the list of items under the _nav_ attribute to add or remove new chapters/modules to the course.
Refer to the quick courses [contributor guide](https://redhatquickcourses.github.io/welcome/1/guide/overview.html) for a detailed guide on how to work with Antora tooling and publish courses.

1. Edit the antora-playbook.yml file in the repository root.
* Edit only the _title_ and _start_page_ attributes in this file. You may not be required to change the other attributes unless the need arises.
## TL;DR Quickstart

1. Edit the _supplemental-ui/partials/header-content.hbs_ file and change the link in the _navbar-item_ div to point to the GitHub issues page for your repository.
This section is intended as a quick start guide for technically experienced members. The contributor guide remains the canonical reference for the course content creation process with detailed explanations, commands, video demonstrations, and screenshots.

1. Edit the files and folders under the _modules_ folder to structure your course content into chapters/modules and sections.
### Pre-requisites

1. Take a brief look at the GitHub actions configuration in the _.github_ folder. It contains basic configuration to auto-generate HTML from the asciidoc source and render it using GitHub pages. Unless you know what you are doing with this, and have prior experience with GitHub actions workflows, do not change these files.
* You have a macOS or Linux workstation. Windows has not been tested and is not supported. You can try using a WSL2-based environment to run these steps - YMMV!
* You have a reasonably recent version of the Git client installed on your workstation.
* You have a recent Node.js LTS release (Node.js 16+) installed locally.
* You have a recent version of Visual Studio Code installed. Other editors with AsciiDoc editing support may work - YMMV, and you are on your own...

## Problems and Feedback
If you run into any issues, report bugs/suggestions/improvements about this template here - https://github.com/RedHatQuickCourses/course-starter/issues
### Antora Files and Folder Structure

The *antora.yml* file lists the chapters/modules/units that make up the course.

Each chapter entry points to a *nav.adoc* file that lists the sections in that chapter. The home page of the course is rendered from *modules/ROOT/pages/index.adoc*.

Each chapter lives in a separate folder under the *modules* directory. All asciidoc source files live under the *modules/CHAPTER/pages* folder.

To create a new chapter in the course, create a new folder under *modules*.

To add a new section under a chapter, create an entry in the *modules/CHAPTER/nav.adoc* file and then create the asciidoc file in the *modules/CHAPTER/pages* folder.
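
As a quick illustration, here is a minimal sketch of how the navigation entries in *antora.yml* map to the folders described above. The comments are explanatory only; the actual values for this course appear in the *antora.yml* changes later in this commit.

```
# antora.yml (sketch)
name: llm-model-serving
title: Serving LLM Models on OpenShift AI
version: 1.01
nav:
  - modules/ROOT/nav.adoc      # home page: modules/ROOT/pages/index.adoc
  - modules/chapter1/nav.adoc  # sections listed here live in modules/chapter1/pages/
  - modules/chapter2/nav.adoc  # a new chapter = a new folder under modules/ plus a nav entry here
```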

### Steps

1. Clone or fork the course repository.
```
$ git clone git@github.com:RedHatQuickCourses/llm-model-serving.git
```

2. Install the npm dependencies for the course tooling.
```
$ cd llm-model-serving
$ npm install
```

3. Start the asciidoc to HTML compiler in the background. This command watches for changes to the asciidoc source content in the **modules** folder and automatically re-generates the HTML content.
```
$ npm run watch:adoc
```
4. Start a local web server to serve the generated HTML files. Navigate to the URL printed by this command to preview the generated HTML content in a web browser.
```
$ npm run serve
```

5. Before you make any content changes, create a local Git branch based on the **main** branch. As a good practice, prefix the branch name with your GitHub ID. Use a suitable branch naming scheme that reflects the content you are creating or changing.
```
$ git checkout -b your_GH_ID/ch01s01
```

6. Make your changes to the asciidoc files. Preview the generated HTML and verify that there are no rendering errors. Commit your changes to the local Git branch and push the branch to GitHub.
```
$ git add .
$ git commit -m "Added lecture content for chapter 1 section 1"
$ git push -u origin your_GH_ID/ch01s01
```

7. Create a GitHub pull request (PR) for your changes using the GitHub web UI. For forks, create a PR that merges your forked changes into the `main` branch of this repository.

8. Request a review of the PR from your technical peers and/or a member of the PTL team.

9. Make any changes requested by the reviewer in the **same** branch as the PR, and then commit and push your changes to GitHub. If other team members have made changes to the PR, then do not forget to do a **git pull** before committing your changes.

10. Once reviewer(s) approve your PR, you should merge it into the **main** branch.

11. Wait for a few minutes while the automated GitHub action publishes your changes to the production GitHub Pages website.

12. Verify that your changes have been published to the production GitHub pages website at https://redhatquickcourses.github.io/rhods-deploy

# Problems and Feedback
If you run into any issues, report bugs/suggestions/improvements about this course here - https://github.com/RedHatQuickCourses/llm-model-serving/issues
4 changes: 2 additions & 2 deletions antora-playbook.yml
@@ -1,6 +1,6 @@
site:
title: Placeholder Course Title
start_page: placeholder-course-name::index.adoc
title: Serving LLM Models on OpenShift AI
start_page: llm-model-serving::index.adoc

content:
sources:
6 changes: 3 additions & 3 deletions antora.yml
@@ -1,6 +1,6 @@
name: placeholder-course-name
title: Placeholder Course Title
version: 1
name: llm-model-serving
title: Serving LLM Models on OpenShift AI
version: 1.01
nav:
- modules/ROOT/nav.adoc
- modules/chapter1/nav.adoc
10 changes: 5 additions & 5 deletions devfile.yaml
@@ -1,8 +1,8 @@
schemaVersion: 2.1.0
metadata:
name: rhods-quick-course
displayName: RHODS Quick Course
description: RHODS Quick Course published using Antora
name: llm-model-serving-quick-course
displayName: Serving LLM Models on OpenShift AI
description: LLM Model Serving Quick Course published using Antora
icon: https://nodejs.org/static/images/logos/nodejs-new-pantone-black.svg
tags:
- Node.js
@@ -12,10 +12,10 @@ metadata:
language: JavaScript
version: 2.1.1
starterProjects:
- name: rhods-quick-course
- name: llm-model-serving-quick-course
git:
remotes:
origin: 'https://github.com/RedHatTraining/rhods-quick-course.git'
origin: 'https://github.com/RedHatTraining/llm-model-serving-quick-course.git'
components:
- name: runtime
container:
Binary file added modules/.DS_Store
10 changes: 10 additions & 0 deletions modules/ROOT/pages/index copy.adoc
@@ -0,0 +1,10 @@
= Serving LLM Models on OpenShift AI
:navtitle: Home

== Introduction

Welcome to this quick course on Serving LLM Models on Red Hat OpenShift AI:

The objective is to experience the entire process of serving the Mistral 7B Large Language Model, starting with an OpenShift Container Platform cluster, version 4.15.

From this point, you will need to install the Operators to successfully configure OpenShift AI. Once it is operational, you will add the Ollama model serving runtime, create a data science project, deploy S3-compatible storage, set up data connections, create a workbench, use the single-model serving platform to host the Ollama framework, configure a Mistral model, and then work through a Jupyter notebook to test your model's serving performance.
61 changes: 58 additions & 3 deletions modules/ROOT/pages/index.adoc
@@ -1,6 +1,61 @@
= An Example Quick Course
= Serving LLM Models on OpenShift AI
:navtitle: Home

== Introduction
Welcome to this quick course on _Deploying an LLM using OpenShift AI_. This is the first in a set of advanced courses about Red Hat OpenShift AI.

This is an example quick course demonstrating the usage of Antora for authoring and publishing quick courses.
IMPORTANT: The hands-on labs in this course were created and tested with RHOAI v2.9.1. The labs should mostly work without changes in minor dot-release upgrades of the product. Please open issues in this repository if you face any problems.


== Authors

The PTL team acknowledges the valuable contributions of the following Red Hat associates:

* Christopher Nuland

* Vijay Chebolu & Team

* Karlos Knox

== Classroom Environment

This introductory course has a few simple hands-on labs. You will use the Base RHOAI on AWS catalog item in the Red Hat Demo Platform (RHDP) to run the hands-on exercises in this course.

This course will utilize the *Red Hat OpenShift Container Platform Cluster*.

When ordering this catalog item in RHDP:

* Select Practice/Enablement for the Activity field

* Select Learning about the Product for the Purpose field

* Enter Learning RHOAI in the Salesforce ID field

* Scroll to the bottom, check the box to confirm acceptance of terms and conditions

* Click order

For Red Hat partners who do not have access to RHDP, provision an environment using the Red Hat Hybrid Cloud Console. Unfortunately, the labs will NOT work in the trial sandbox environment. You need to provision an OpenShift AI cluster on-premises, or in a supported cloud environment, by following the Product Documentation for Red Hat OpenShift AI 2024.

== Prerequisites

For this course, basic experience with Red Hat OpenShift is recommended but is not mandatory.

You will encounter and modify code segments, deploy resources using YAML files, and modify launch configurations, but you will not have to write code.

== Objectives

The overall objectives of this introductory course include:

* Become familiar with using Red Hat OpenShift AI to serve and interact with an LLM.

* Install the Red Hat OpenShift AI Operator and its dependencies.

* Add a custom model serving runtime.

* Create a data science project, workbench, and data connections.

* Load an LLM model into the Ollama runtime framework.

* Import a Jupyter notebook from a Git repository and interact with the LLM model.

* Experiment with the Mistral LLM.
2 changes: 1 addition & 1 deletion modules/appendix/pages/appendix.adoc
@@ -1,3 +1,3 @@
= Appendix A

Content for Appendix A...
Content for Appendix A...
Binary file added modules/chapter1/images/redhatllm.gif
5 changes: 1 addition & 4 deletions modules/chapter1/nav.adoc
@@ -1,4 +1 @@
* xref:index.adoc[]
** xref:section1.adoc[]
** xref:section2.adoc[]
** xref:section3.adoc[]
* xref:index.adoc[]
43 changes: 41 additions & 2 deletions modules/chapter1/pages/index.adoc
@@ -1,3 +1,42 @@
= Chapter 1
= Technical side of LLMs


[NOTE]
This segment of the course provides context and analogies to help you understand the purpose of the guided lab in the next section. Feel free to skip ahead if you just want to get started.

=== Why this technical course?

I previously read a post on LinkedIn and felt it summed up the "why" quite nicely.

It described the basic idea that a Formula One driver doesn't need to know how to build an engine to be an F1 champion. However, they do need *mechanical sympathy*: an understanding of the car's mechanics that lets them drive it effectively and get the best out of it.

The same applies to AI. We don't need to be AI experts to harness the power of large language models, but we do need to develop a certain level of "mechanical sympathy" for how these models are selected, operationalized, served, inferred from, and kept up to date. That way we work with AI in harmony, not just as users, but as collaborators who understand the underlying mechanics well enough to communicate effectively with clients, partners, and co-workers.

It's not just about the model itself; it's about the platform that empowers us to create trustworthy AI applications and guides us in making informed choices.

The true power lies in the platform that enables us to harness a diverse range of AI models, tools, and infrastructure, and to operationalize our ML projects.

That platform, *OpenShift AI*, is what we learn to create, configure, and use to serve LLM models in this quick course.


=== The Ollama Model Framework

Large Language Models (LLMs) can generate new stories, summarize texts, and even perform advanced tasks like reasoning and problem solving. This is impressive not only in itself, but also because of their accessibility and ease of integration into applications.

There are many popular LLMs. Nonetheless, their operation remains the same: users provide instructions or tasks in natural language, and the LLM generates a response based on what the model "thinks" could be the continuation of the prompt.

Ollama is not an LLM model. Ollama is a relatively new but powerful open-source framework designed for serving machine learning models. It is designed to be efficient, scalable, and easy to use, making it an attractive option for developers and organizations looking to deploy their AI models into production.

==== How does Ollama work?


At its core, Ollama simplifies the process of downloading, installing, and interacting with a wide range of LLMs, empowering users to explore their capabilities without the need for extensive technical expertise or reliance on cloud-based platforms.

In this course, we will focus on a single LLM, Mistral. However, with an understanding of the Ollama framework, we will be able to work with a variety of large language models using the exact same configuration.

You will be able to switch models in minutes, all running on the same platform. This will enable you to test, compare, and evaluate multiple models with the skills gained in this course.

*Experimentation and Learning*

Ollama provides a powerful platform for experimentation and learning, allowing users to explore the capabilities and limitations of different LLMs, understand their strengths and weaknesses, and develop skills in prompt engineering and LLM interaction. This hands-on approach fosters a deeper understanding of AI technology and empowers users to push the boundaries of what's possible.
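
To connect this to the lab ahead, the sketch below shows, hypothetically, what a custom ServingRuntime resource for the Ollama framework could look like on the single-model serving platform (KServe). The image, port, and model format values are illustrative assumptions, not the exact runtime definition used in this course.

[source,yaml]
----
# Hypothetical sketch of an Ollama ServingRuntime for the single-model serving platform.
# Image, port, and model format values are assumptions, not the course's actual definition.
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: ollama-runtime
spec:
  containers:
    - name: kserve-container
      image: ollama/ollama:latest     # assumption: upstream Ollama container image
      ports:
        - containerPort: 11434        # Ollama's default API port
          protocol: TCP
  supportedModelFormats:
    - name: ollama                    # illustrative format name
      autoSelect: true
----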

This is the home page of _Chapter_ 1 in the *hello* quick course...
3 changes: 1 addition & 2 deletions modules/chapter1/pages/section1.adoc
@@ -1,3 +1,2 @@
= Section 1
= Follow up Story

This is _Section 1_ of _Chapter 1_ in the *hello* quick course....
3 changes: 1 addition & 2 deletions modules/chapter1/pages/section2.adoc
@@ -1,3 +1,2 @@
= Section 2
= Follow up Story

This is _Section 2_ of _Chapter 1_ in the *hello* quick course....
4 changes: 0 additions & 4 deletions modules/chapter1/pages/section3.adoc

This file was deleted.

Binary file added modules/chapter2/.DS_Store
Binary file added modules/chapter2/images/redhatllm.gif
3 changes: 2 additions & 1 deletion modules/chapter2/nav.adoc
@@ -1,2 +1,3 @@
* xref:index.adoc[]
** xref:section1.adoc[]
** xref:section1.adoc[]
** xref:section2.adoc[]
51 changes: 49 additions & 2 deletions modules/chapter2/pages/index.adoc
@@ -1,3 +1,50 @@
= Chapter 2
= OpenShift AI Initialization

This is the home page of _Chapter 2_ in the *hello* quick course....
== Supported configurations
OpenShift AI is supported in two configurations:

* A managed cloud service add-on for *Red Hat OpenShift Dedicated* (with a Customer Cloud Subscription for AWS or GCP) or for Red Hat OpenShift Service on Amazon Web Services (ROSA).
For information about OpenShift AI on a Red Hat managed environment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_cloud_service/1[Product Documentation for Red Hat OpenShift AI Cloud Service 1]

* Self-managed software that you can install on-premise or on the public cloud in a self-managed environment, such as *OpenShift Container Platform*.
For information about OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8[Product Documentation for Red Hat OpenShift AI Self-Managed 2.8]

In this course, we cover the installation of *Red Hat OpenShift AI self-managed* using the OpenShift web console.

== General Information about Installation


[NOTE]
====
The product name has been recently changed to *Red{nbsp}Hat OpenShift AI (RHOAI)* (old name *Red{nbsp}Hat OpenShift Data Science*). In this course, most references to the product use the new name. However, references to some UI elements might still use the previous name.
====

In addition to the *Red{nbsp}Hat OpenShift AI* Operator, there are some other Operators that you may need to install depending on which features and components of *Red{nbsp}Hat OpenShift AI* you want to install and use.


https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator]::
The *Red{nbsp}Hat OpenShift Pipelines Operator* is required if you want to install the *Red{nbsp}Hat OpenShift AI Pipelines* component.


[NOTE]
====
To support the KServe component, which is used by the single-model serving platform to serve large models, install the Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh.
====

https://docs.openshift.com/container-platform/latest/hardware_enablement/psap-node-feature-discovery-operator.html[OpenShift Serverless Operator]::
The *OpenShift Serverless Operator* is a prerequisite for the *Single Model Serving Platform*.

https://docs.openshift.com/container-platform/latest/hardware_enablement/psap-node-feature-discovery-operator.html[OpenShift Service Mesh Operator]::
The *OpenShift Service Mesh Operator* is a prerequisite for the *Single Model Serving Platform*.


[NOTE]
====
The following Operators are required to support the use of NVIDIA GPUs (accelerators) with OpenShift AI.
====

https://docs.openshift.com/container-platform/latest/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator]::
The *Node Feature Discovery Operator* is a prerequisite for the *NVIDIA GPU Operator*.

https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html[NVIDIA GPU Operator]::
The *NVIDIA GPU Operator* is required for GPU support in Red Hat OpenShift AI.
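
If you prefer to install these prerequisite Operators declaratively instead of through the OperatorHub web console, the sketch below shows a minimal OLM Subscription for one of them. The namespace, channel, and package names are assumptions to verify against the OperatorHub catalog in your cluster; an OperatorGroup in the target namespace is also required and is omitted here.

[source,yaml]
----
# Minimal sketch: subscribing to the OpenShift Serverless Operator through OLM.
# Namespace, channel, and package names are assumptions - confirm them in OperatorHub first.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
----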
