diff --git a/04-prompt-engineering-fundamentals/1-introduction.ipynb b/04-prompt-engineering-fundamentals/1-introduction.ipynb index 699adbc97..bbb6ec343 100644 --- a/04-prompt-engineering-fundamentals/1-introduction.ipynb +++ b/04-prompt-engineering-fundamentals/1-introduction.ipynb @@ -1,506 +1,256 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The following notebook was auto-generated by GitHub Copilot Chat and is meant for initial setup only" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Introduction to Prompt Engineering\n", - "Prompt engineering is the process of designing and optimizing prompts for natural language processing tasks. It involves selecting the right prompts, tuning their parameters, and evaluating their performance. Prompt engineering is crucial for achieving high accuracy and efficiency in NLP models. In this section, we will explore the basics of prompt engineering using the OpenAI models for exploration." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 1: Tokenization\n", - "Explore Tokenization using tiktoken, an open-source fast tokenizer from OpenAI\n", - "See [OpenAI Cookbook](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb?WT.mc_id=academic-105485-koreyst) for more examples.\n" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[198, 41, 20089, 374, 279, 18172, 11841, 505, 279, 8219, 323, 279, 7928, 304, 279, 25450, 744, 13, 1102, 374, 264, 6962, 14880, 449, 264, 3148, 832, 7716, 52949, 339, 430, 315, 279, 8219, 11, 719, 1403, 9976, 7561, 34902, 3115, 430, 315, 682, 279, 1023, 33975, 304, 279, 25450, 744, 11093, 13, 50789, 374, 832, 315, 279, 72021, 6302, 9621, 311, 279, 19557, 8071, 304, 279, 3814, 13180, 11, 323, 706, 1027, 3967, 311, 14154, 86569, 2533, 1603, 12715, 3925, 13, 1102, 374, 7086, 1306, 279, 13041, 10087, 50789, 8032, 777, 60, 3277, 19894, 505, 9420, 11, 50789, 649, 387, 10107, 3403, 369, 1202, 27000, 3177, 311, 6445, 9621, 35612, 17706, 508, 60, 323, 374, 389, 5578, 279, 4948, 1481, 1315, 478, 5933, 1665, 304, 279, 3814, 13180, 1306, 279, 17781, 323, 50076, 627]\n" - ] - }, - { - "data": { - "text/plain": [ - "[b'\\n',\n", - " b'J',\n", - " b'upiter',\n", - " b' is',\n", - " b' the',\n", - " b' fifth',\n", - " b' planet',\n", - " b' from',\n", - " b' the',\n", - " b' Sun',\n", - " b' and',\n", - " b' the',\n", - " b' largest',\n", - " b' in',\n", - " b' the',\n", - " b' Solar',\n", - " b' System',\n", - " b'.',\n", - " b' It',\n", - " b' is',\n", - " b' a',\n", - " b' gas',\n", - " b' giant',\n", - " b' with',\n", - " b' a',\n", - " b' mass',\n", - " b' one',\n", - " b'-th',\n", - " b'ousand',\n", - " b'th',\n", - " b' that',\n", - " b' of',\n", - " b' the',\n", - " b' Sun',\n", - " b',',\n", - " b' but',\n", - " b' two',\n", - " b'-and',\n", - " b'-a',\n", - " b'-half',\n", - " b' times',\n", - " b' that',\n", - " b' of',\n", - " b' all',\n", - " b' the',\n", - " b' other',\n", - " b' planets',\n", - " b' in',\n", - " b' the',\n", - " b' Solar',\n", - " b' System',\n", - " b' combined',\n", - " b'.',\n", - " b' Jupiter',\n", - " b' is',\n", - " b' one',\n", - " b' of',\n", - " b' the',\n", - " b' brightest',\n", - " b' objects',\n", - " b' visible',\n", - " b' to',\n", - " b' the',\n", - " b' naked',\n", - " b' eye',\n", - " b' in',\n", - " b' the',\n", - " b' night',\n", - " b' 
sky',\n", - " b',',\n", - " b' and',\n", - " b' has',\n", - " b' been',\n", - " b' known',\n", - " b' to',\n", - " b' ancient',\n", - " b' civilizations',\n", - " b' since',\n", - " b' before',\n", - " b' recorded',\n", - " b' history',\n", - " b'.',\n", - " b' It',\n", - " b' is',\n", - " b' named',\n", - " b' after',\n", - " b' the',\n", - " b' Roman',\n", - " b' god',\n", - " b' Jupiter',\n", - " b'.[',\n", - " b'19',\n", - " b']',\n", - " b' When',\n", - " b' viewed',\n", - " b' from',\n", - " b' Earth',\n", - " b',',\n", - " b' Jupiter',\n", - " b' can',\n", - " b' be',\n", - " b' bright',\n", - " b' enough',\n", - " b' for',\n", - " b' its',\n", - " b' reflected',\n", - " b' light',\n", - " b' to',\n", - " b' cast',\n", - " b' visible',\n", - " b' shadows',\n", - " b',[',\n", - " b'20',\n", - " b']',\n", - " b' and',\n", - " b' is',\n", - " b' on',\n", - " b' average',\n", - " b' the',\n", - " b' third',\n", - " b'-b',\n", - " b'right',\n", - " b'est',\n", - " b' natural',\n", - " b' object',\n", - " b' in',\n", - " b' the',\n", - " b' night',\n", - " b' sky',\n", - " b' after',\n", - " b' the',\n", - " b' Moon',\n", - " b' and',\n", - " b' Venus',\n", - " b'.\\n']" - ] - }, - "execution_count": 10, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "# EXERCISE:\n", - "# 1. Run the exercise as is first\n", - "# 2. Change the text to any prompt input you want to use & re-run to see tokens\n", - "\n", - "import tiktoken\n", - "\n", - "# Define the prompt you want tokenized\n", - "text = f\"\"\"\n", - "Jupiter is the fifth planet from the Sun and the \\\n", - "largest in the Solar System. It is a gas giant with \\\n", - "a mass one-thousandth that of the Sun, but two-and-a-half \\\n", - "times that of all the other planets in the Solar System combined. \\\n", - "Jupiter is one of the brightest objects visible to the naked eye \\\n", - "in the night sky, and has been known to ancient civilizations since \\\n", - "before recorded history. It is named after the Roman god Jupiter.[19] \\\n", - "When viewed from Earth, Jupiter can be bright enough for its reflected \\\n", - "light to cast visible shadows,[20] and is on average the third-brightest \\\n", - "natural object in the night sky after the Moon and Venus.\n", - "\"\"\"\n", - "\n", - "# Set the model you want encoding for\n", - "encoding = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n", - "\n", - "# Encode the text - gives you the tokens in integer form\n", - "tokens = encoding.encode(text)\n", - "print(tokens);\n", - "\n", - "# Decode the integers to see what the text versions look like\n", - "[encoding.decode_single_token_bytes(token) for token in tokens]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 2: Validate OpenAI API Key Setup\n", - "\n", - "Run the code below to verify that your OpenAI endpoint is set up correctly. The code just tries a simple basic prompt and validates the completion. 
Input `oh say can you see` should complete along the lines of `by the dawn's early light..`\n" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "by the dawn's early light\n" - ] - } - ], - "source": [ - "# The OpenAI SDK was updated on Nov 8, 2023 with new guidance for migration\n", - "# See: https://github.com/openai/openai-python/discussions/742\n", - "\n", - "## Updated\n", - "import os\n", - "import openai\n", - "from openai import OpenAI\n", - "\n", - "client = OpenAI(\n", - " api_key=os.environ['OPENAI_API_KEY'], # this is also the default, it can be omitted\n", - ")\n", - "\n", - "## Updated\n", - "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n", - " messages = [{\"role\": \"user\", \"content\": prompt}]\n", - " response = openai.chat.completions.create(\n", - " model=model,\n", - " messages=messages,\n", - " temperature=0, # this is the degree of randomness of the model's output\n", - " max_tokens=1024\n", - " )\n", - " return response.choices[0].message.content\n", - "\n", - "## ---------- Call the helper method\n", - "\n", - "### 1. Set primary content or prompt text\n", - "text = f\"\"\"\n", - "oh say can you see\n", - "\"\"\"\n", - "\n", - "### 2. Use that in the prompt template below\n", - "prompt = f\"\"\"\n", - "```{text}```\n", - "\"\"\"\n", - "\n", - "## 3. Run the prompt\n", - "response = get_completion(prompt)\n", - "print(response)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 3: Fabrications\n", - "Explore what happens when you ask the LLM to return completions for a prompt about a topic that may not exist, or about topics that it may not know about because it was outside it's pre-trained dataset (more recent). See how the response changes if you try a different prompt, or a different model." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Title: The Martian War of 2076 - A Lesson Plan\n", - "\n", - "Objective: \n", - "To educate students about the Martian War of 2076, its causes, key events, and consequences, fostering critical thinking, historical analysis, and empathy.\n", - "\n", - "Grade Level: \n", - "High School (9th-12th grade)\n", - "\n", - "Duration: \n", - "2-3 class periods (approximately 90 minutes each)\n", - "\n", - "Materials Needed:\n", - "1. Access to research materials (books, articles, online resources)\n", - "2. Whiteboard or blackboard\n", - "3. Markers or chalk\n", - "4. Handouts (optional)\n", - "5. Multimedia resources (optional)\n", - "\n", - "Lesson Plan:\n", - "\n", - "Day 1:\n", - "\n", - "Introduction (15 minutes)\n", - "1. Begin the lesson by engaging students in a brief discussion about science fiction and its influence on society.\n", - "2. Introduce the topic of the Martian War of 2076, explaining that it is a fictional event but will be studied as if it were real.\n", - "3. Share the lesson objectives and explain the importance of understanding historical events, even if they are fictional.\n", - "\n", - "Causes of the Martian War (30 minutes)\n", - "1. Divide students into small groups and provide each group with research materials.\n", - "2. Instruct students to identify and discuss the possible causes of the Martian War of 2076.\n", - "3. After group discussions, facilitate a class discussion, allowing each group to share their findings.\n", - "4. 
Summarize the causes on the board, encouraging students to critically analyze the factors that led to the war.\n", - "\n", - "Key Events of the Martian War (45 minutes)\n", - "1. Provide students with a timeline of the Martian War, highlighting key events.\n", - "2. Instruct students to work individually or in pairs to research and summarize each event.\n", - "3. Allow time for students to present their findings to the class, discussing the significance of each event.\n", - "4. Facilitate a class discussion to analyze the sequence of events and their impact on the war's outcome.\n", - "\n", - "Day 2:\n", - "\n", - "Consequences of the Martian War (30 minutes)\n", - "1. Discuss the consequences of the Martian War, both for Earth and Mars.\n", - "2. Divide students into small groups and assign each group a specific consequence to research.\n", - "3. Instruct students to create a visual representation (poster, infographic, etc.) highlighting their assigned consequence.\n", - "4. Allow time for each group to present their findings, discussing the short-term and long-term effects of the war.\n", - "\n", - "Analyzing Perspectives (45 minutes)\n", - "1. Divide students into pairs or small groups.\n", - "2. Assign each group a specific role, such as a Martian civilian, an Earth soldier, a Martian rebel, or an Earth politician.\n", - "3. Instruct students to discuss and analyze the war from their assigned perspective, considering motivations, emotions, and experiences.\n", - "4. Facilitate a class discussion, allowing each group to share their insights and promoting empathy towards different perspectives.\n", - "\n", - "Reflection and Discussion (15 minutes)\n", - "1. Lead a class discussion to reflect on the Martian War of 2076 as a fictional event.\n", - "2. Encourage students to draw parallels between the Martian War and real historical conflicts, discussing the potential lessons learned.\n", - "3. Conclude the lesson by summarizing the key takeaways and emphasizing the importance of critical thinking and empathy in understanding historical events.\n", - "\n", - "Optional Extension Activities:\n", - "1. Creative Writing: Ask students to write a short story or a diary entry from the perspective of a character involved in the Martian War.\n", - "2. Debate: Organize a class debate on the ethical implications of the Martian War, exploring topics such as colonization, resource exploitation, and interplanetary relations.\n", - "3. Multimedia Presentation: Allow students to create multimedia presentations (videos, slideshows, etc.) summarizing the Martian War and its impact.\n", - "\n", - "Assessment:\n", - "1. Group presentations and class discussions can be assessed based on the depth of analysis, clarity of communication, and ability to support arguments with evidence.\n", - "2. 
Optional extension activities can be assessed based on creativity, critical thinking, and understanding of the Martian War's historical context.\n" - ] - } - ], - "source": [ - "\n", - "## Set the text for simple prompt or primary content\n", - "## Prompt shows a template format with text in it - add cues, commands etc if needed\n", - "## Run the completion \n", - "text = f\"\"\"\n", - "generate a lesson plan on the Martian War of 2076.\n", - "\"\"\"\n", - "\n", - "prompt = f\"\"\"\n", - "```{text}```\n", - "\"\"\"\n", - "\n", - "response = get_completion(prompt)\n", - "print(response)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 4: Instruction Based \n", - "Use the \"text\" variable to set the primary content \n", - "and the \"prompt\" variable to provide an instruction related to that primary content.\n", - "\n", - "Here we ask the model to summarize the text for a second-grade student" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Jupiter is a really big planet that is fifth from the Sun. It is made of gas and is much smaller than the Sun but bigger than all the other planets combined. People have known about Jupiter for a really long time because it is very bright in the night sky. It is named after a god from ancient Rome. Sometimes, Jupiter is so bright that it can make shadows on Earth. It is usually the third-brightest thing we can see at night, after the Moon and Venus.\n" - ] - } - ], - "source": [ - "# Test Example\n", - "# https://platform.openai.com/playground/p/default-summarize\n", - "\n", - "## Example text\n", - "text = f\"\"\"\n", - "Jupiter is the fifth planet from the Sun and the \\\n", - "largest in the Solar System. It is a gas giant with \\\n", - "a mass one-thousandth that of the Sun, but two-and-a-half \\\n", - "times that of all the other planets in the Solar System combined. \\\n", - "Jupiter is one of the brightest objects visible to the naked eye \\\n", - "in the night sky, and has been known to ancient civilizations since \\\n", - "before recorded history. It is named after the Roman god Jupiter.[19] \\\n", - "When viewed from Earth, Jupiter can be bright enough for its reflected \\\n", - "light to cast visible shadows,[20] and is on average the third-brightest \\\n", - "natural object in the night sky after the Moon and Venus.\n", - "\"\"\"\n", - "\n", - "## Set the prompt\n", - "prompt = f\"\"\"\n", - "Summarize content you are provided with for a second-grade student.\n", - "```{text}```\n", - "\"\"\"\n", - "\n", - "## Run the prompt\n", - "response = get_completion(prompt)\n", - "print(response)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise 5: Complex Prompt \n", - "Try a request that has system, user and assistant messages \n", - "System sets assistant context\n", - "User & Assistant messages provide multi-turn conversation context\n", - "\n", - "Note how the assistant personality is set to \"sarcastic\" in the system context. \n", - "Try using a different personality context. Or try a different series of input/output messages" - ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Oh, just a casual little event called the World Series. 
It was played in Arlington, Texas at the beautiful Globe Life Field.\n" - ] - } - ], - "source": [ - "response = openai.chat.completions.create(\n", - " model=\"gpt-3.5-turbo\",\n", - " messages=[\n", - " {\"role\": \"system\", \"content\": \"You are a sarcastic assistant.\"},\n", - " {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n", - " {\"role\": \"assistant\", \"content\": \"Who do you think won? The Los Angeles Dodgers of course.\"},\n", - " {\"role\": \"user\", \"content\": \"Where was it played?\"}\n", - " ]\n", - ")\n", - "print(response.choices[0].message.content)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Exercise: Explore Your Intuition\n", - "The above examples give you patterns that you can use to create new prompts (simple, complex, instruction etc.) - try creating other exercises to explore some of the other ideas we've talked about like examples, cues and more." - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.8" - }, - "orig_nbformat": 4 - }, - "nbformat": 4, - "nbformat_minor": 2 - } - \ No newline at end of file + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The following notebook was auto-generated by GitHub Copilot Chat and is meant for initial setup only" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Introduction to Prompt Engineering\n", + "Prompt engineering is the process of designing and optimizing prompts for natural language processing tasks. It involves selecting the right prompts, tuning their parameters, and evaluating their performance. Prompt engineering is crucial for achieving high accuracy and efficiency in NLP models. In this section, we will explore the basics of prompt engineering using the OpenAI models for exploration." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 1: Tokenization\n", + "Explore Tokenization using tiktoken, an open-source fast tokenizer from OpenAI\n", + "See [OpenAI Cookbook](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb?WT.mc_id=academic-105485-koreyst) for more examples.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# EXERCISE:\n", + "# 1. Run the exercise as is first\n", + "# 2. Change the text to any prompt input you want to use & re-run to see tokens\n", + "\n", + "import tiktoken\n", + "\n", + "# Define the prompt you want tokenized\n", + "text = f\"\"\"\n", + "Jupiter is the fifth planet from the Sun and the \\\n", + "largest in the Solar System. It is a gas giant with \\\n", + "a mass one-thousandth that of the Sun, but two-and-a-half \\\n", + "times that of all the other planets in the Solar System combined. \\\n", + "Jupiter is one of the brightest objects visible to the naked eye \\\n", + "in the night sky, and has been known to ancient civilizations since \\\n", + "before recorded history. 
It is named after the Roman god Jupiter.[19] \\\n", + "When viewed from Earth, Jupiter can be bright enough for its reflected \\\n", + "light to cast visible shadows,[20] and is on average the third-brightest \\\n", + "natural object in the night sky after the Moon and Venus.\n", + "\"\"\"\n", + "\n", + "# Set the model you want encoding for\n", + "encoding = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n", + "\n", + "# Encode the text - gives you the tokens in integer form\n", + "tokens = encoding.encode(text)\n", + "print(tokens);\n", + "\n", + "# Decode the integers to see what the text versions look like\n", + "[encoding.decode_single_token_bytes(token) for token in tokens]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 2: Validate OpenAI API Key Setup\n", + "\n", + "Run the code below to verify that your OpenAI endpoint is set up correctly. The code just tries a simple basic prompt and validates the completion. Input `oh say can you see` should complete along the lines of `by the dawn's early light..`\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# The OpenAI SDK was updated on Nov 8, 2023 with new guidance for migration\n", + "# See: https://github.com/openai/openai-python/discussions/742\n", + "\n", + "## Updated\n", + "import os\n", + "import openai\n", + "from openai import OpenAI\n", + "\n", + "client = OpenAI(\n", + " api_key=os.environ['OPENAI_API_KEY'], # this is also the default, it can be omitted\n", + ")\n", + "\n", + "## Updated\n", + "def get_completion(prompt, model=\"gpt-3.5-turbo\"):\n", + " messages = [{\"role\": \"user\", \"content\": prompt}]\n", + " response = openai.chat.completions.create(\n", + " model=model,\n", + " messages=messages,\n", + " temperature=0, # this is the degree of randomness of the model's output\n", + " max_tokens=1024\n", + " )\n", + " return response.choices[0].message.content\n", + "\n", + "## ---------- Call the helper method\n", + "\n", + "### 1. Set primary content or prompt text\n", + "text = f\"\"\"\n", + "oh say can you see\n", + "\"\"\"\n", + "\n", + "### 2. Use that in the prompt template below\n", + "prompt = f\"\"\"\n", + "```{text}```\n", + "\"\"\"\n", + "\n", + "## 3. Run the prompt\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 3: Fabrications\n", + "Explore what happens when you ask the LLM to return completions for a prompt about a topic that may not exist, or about topics that it may not know about because it was outside it's pre-trained dataset (more recent). See how the response changes if you try a different prompt, or a different model." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "## Set the text for simple prompt or primary content\n", + "## Prompt shows a template format with text in it - add cues, commands etc if needed\n", + "## Run the completion \n", + "text = f\"\"\"\n", + "generate a lesson plan on the Martian War of 2076.\n", + "\"\"\"\n", + "\n", + "prompt = f\"\"\"\n", + "```{text}```\n", + "\"\"\"\n", + "\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 4: Instruction Based \n", + "Use the \"text\" variable to set the primary content \n", + "and the \"prompt\" variable to provide an instruction related to that primary content.\n", + "\n", + "Here we ask the model to summarize the text for a second-grade student" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Test Example\n", + "# https://platform.openai.com/playground/p/default-summarize\n", + "\n", + "## Example text\n", + "text = f\"\"\"\n", + "Jupiter is the fifth planet from the Sun and the \\\n", + "largest in the Solar System. It is a gas giant with \\\n", + "a mass one-thousandth that of the Sun, but two-and-a-half \\\n", + "times that of all the other planets in the Solar System combined. \\\n", + "Jupiter is one of the brightest objects visible to the naked eye \\\n", + "in the night sky, and has been known to ancient civilizations since \\\n", + "before recorded history. It is named after the Roman god Jupiter.[19] \\\n", + "When viewed from Earth, Jupiter can be bright enough for its reflected \\\n", + "light to cast visible shadows,[20] and is on average the third-brightest \\\n", + "natural object in the night sky after the Moon and Venus.\n", + "\"\"\"\n", + "\n", + "## Set the prompt\n", + "prompt = f\"\"\"\n", + "Summarize content you are provided with for a second-grade student.\n", + "```{text}```\n", + "\"\"\"\n", + "\n", + "## Run the prompt\n", + "response = get_completion(prompt)\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise 5: Complex Prompt \n", + "Try a request that has system, user and assistant messages \n", + "System sets assistant context\n", + "User & Assistant messages provide multi-turn conversation context\n", + "\n", + "Note how the assistant personality is set to \"sarcastic\" in the system context. \n", + "Try using a different personality context. Or try a different series of input/output messages" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "response = openai.chat.completions.create(\n", + " model=\"gpt-3.5-turbo\",\n", + " messages=[\n", + " {\"role\": \"system\", \"content\": \"You are a sarcastic assistant.\"},\n", + " {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n", + " {\"role\": \"assistant\", \"content\": \"Who do you think won? The Los Angeles Dodgers of course.\"},\n", + " {\"role\": \"user\", \"content\": \"Where was it played?\"}\n", + " ]\n", + ")\n", + "print(response.choices[0].message.content)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Exercise: Explore Your Intuition\n", + "The above examples give you patterns that you can use to create new prompts (simple, complex, instruction etc.) 
- try creating other exercises to explore some of the other ideas we've talked about like examples, cues and more." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.8" + }, + "orig_nbformat": 4 + }, + "nbformat": 4, + "nbformat_minor": 2 +}
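
Net effect of this diff: every code cell's stored outputs are cleared, every `execution_count` is reset to `null`, the notebook JSON is re-serialized (picking up the missing trailing newline), while the markdown cells, code sources, and metadata are unchanged. For reviewers who want to reproduce the same cleanup locally before committing, below is a minimal sketch using `nbformat`; the file path is taken from the diff header, and this illustrates the general technique rather than how this particular change was generated. The CLI equivalent is `jupyter nbconvert --clear-output --inplace 04-prompt-engineering-fundamentals/1-introduction.ipynb`.

```python
# Sketch: strip outputs and execution counts from a notebook, as this diff does.
import nbformat

path = "04-prompt-engineering-fundamentals/1-introduction.ipynb"  # path from the diff header

nb = nbformat.read(path, as_version=4)

for cell in nb.cells:
    if cell.cell_type == "code":
        cell.outputs = []            # drop any stored outputs
        cell.execution_count = None  # reset the execution counter to null

nbformat.write(nb, path)  # re-serialize the notebook JSON in place
```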