In this workshop, we demonstrate strategies for fine-tuning large language models (LLMs) with Amazon SageMaker. The goal is to give you hands-on experience fine-tuning and deploying foundation models on Amazon SageMaker.

This workshop covers the following labs:
- Lab 1: Fine-tune a Llama-based LLM
- Lab 2: Multi-LoRA adapter inference on Amazon SageMaker
- Lab 3: Deploy LLMs with the SageMaker JumpStart UI (a minimal SDK deployment sketch follows this list)
- Lab 4: Set up an LLM playground on SageMaker Studio
- Lab 5: Prompt engineering with LLMs on SageMaker Studio
- Lab 6: Contextual chatbot using Llama 2 via SageMaker JumpStart and Amazon OpenSearch Serverless with vector engine
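Lab 3 walks through deployment in the SageMaker JumpStart UI; the same deployment can also be scripted with the SageMaker Python SDK's `JumpStartModel` class. The sketch below is illustrative rather than the workshop's exact code: the model ID, instance choice, payload format, and EULA handling are assumptions to verify against the lab notebooks.

```python
# Minimal sketch, assuming a Llama 2 text-generation model from SageMaker JumpStart.
# The model ID and payload below are illustrative; check the lab notebooks for the
# exact values used in this workshop.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")
predictor = model.deploy(accept_eula=True)  # Llama models require EULA acceptance

# Send a simple text-generation request to the deployed endpoint.
response = predictor.predict({
    "inputs": "What is Amazon SageMaker?",
    "parameters": {"max_new_tokens": 64},
})
print(response)

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```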
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.