diff --git a/docs/assets/basics/openai_maximum_length_example.webp b/docs/assets/basics/openai_maximum_length_example.webp new file mode 100644 index 00000000000..30087e872d5 Binary files /dev/null and b/docs/assets/basics/openai_maximum_length_example.webp differ diff --git a/docs/assets/basics/openai_mode.webp b/docs/assets/basics/openai_mode.webp index 5d7669fda8d..6a39d5161c2 100644 Binary files a/docs/assets/basics/openai_mode.webp and b/docs/assets/basics/openai_mode.webp differ diff --git a/docs/assets/basics/openai_stop_sequences_example.webp b/docs/assets/basics/openai_stop_sequences_example.webp new file mode 100644 index 00000000000..b9bba2a68b4 Binary files /dev/null and b/docs/assets/basics/openai_stop_sequences_example.webp differ diff --git a/docs/assets/basics/openai_system_prompt.webp b/docs/assets/basics/openai_system_prompt.webp new file mode 100644 index 00000000000..40c01651224 Binary files /dev/null and b/docs/assets/basics/openai_system_prompt.webp differ diff --git a/docs/basics/configuration_hyperparameters.md b/docs/basics/configuration_hyperparameters.md index 8a11d80c351..8af871b0ace 100644 --- a/docs/basics/configuration_hyperparameters.md +++ b/docs/basics/configuration_hyperparameters.md @@ -97,13 +97,24 @@ import Max from '@site/docs/assets/basics/openai_maximum_length.webp';
-The maximum length is the total # of tokens the AI is allowed to generate. This setting is useful since it allows users to manage the length of the model's response, preventing overly long or irrelevant responses. It also helps control cost, as the length is shared between the input in the Playground box and the generated response. +The maximum length is the total number of tokens the AI is allowed to generate. This setting is useful since it allows users to manage the length of the model's response, preventing overly long or irrelevant responses. The length is shared between the USER input in the Playground box and the ASSISTANT's generated response, since both must fit within the model's context window. Notice how, with a limit of 256 tokens, our PirateGPT from earlier is forced to cut its story short mid-sentence.
+import max_length_example from '@site/docs/assets/basics/openai_maximum_length_example.webp'; + +
+
+ +
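+If you later use the same models through the API rather than the Playground, this setting shows up as the `max_tokens` parameter. Below is a minimal sketch of the idea, assuming the OpenAI Python SDK (v1+); the exact prompt and model name are just illustrations.
+
+```python
+from openai import OpenAI
+
+client = OpenAI()  # reads your OPENAI_API_KEY environment variable
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "Tell me a long story about a pirate's latest voyage."}],
+    max_tokens=256,  # roughly the API equivalent of the Playground's Maximum length slider
+)
+
+print(response.choices[0].message.content)  # may be cut off once the 256-token budget runs out
+```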
+ +:::note +This also helps control cost if you're paying for use of the model through the API rather than using the Playground. +::: + ## Other LLM Settings There are many other settings that can affect language model output, such as stop sequences, and frequency and presence penalties. @@ -114,13 +125,20 @@ import Stop from '@site/docs/assets/basics/openai_stop_sequences.webp';
-Stop sequences tell the model when to cease output generation, which allows you to control content length and structure. If you are prompting the AI to write an email, setting "Best regards," or "Sincerely," as the stop sequence ensures the model stops after the closing salutation, which keeps the email short and to the point. +Stop sequences tell the model when to cease output generation, which allows you to control content length and structure. If you are prompting the AI to write an email, setting "Best regards," or "Sincerely," as the stop sequence ensures the model stops before the closing salutation (the stop sequence itself is not included in the output), which keeps the email short and to the point. Stop sequences are especially useful for output that you expect to follow a structured format, such as an email, a numbered list, or dialogue.
+import stop_sequences_example from '@site/docs/assets/basics/openai_stop_sequences_example.webp'; + +
+
+ +
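+The API exposes the same idea as the `stop` parameter, which accepts up to four sequences. A minimal sketch, again assuming the OpenAI Python SDK (v1+):
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "Write a short email asking my boss for Friday off."}],
+    stop=["Best regards,", "Sincerely,"],  # generation halts when either sequence would appear
+)
+
+print(response.choices[0].message.content)  # the stop sequence itself is not part of the output
+```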
+ ### Frequency Penalty import Freq from '@site/docs/assets/basics/openai_frequency_penalty.webp'; @@ -161,7 +179,7 @@ In conclusion, mastering settings like temperature, top p, maximum length and ot -Partly written by jackdickens382 +Partly written by jackdickens382 and evintunador [^a]: A more technical word is "configuration hyperparameters" [^b]: Also known as Nucleus Sampling \ No newline at end of file diff --git a/docs/basics/openai_playground.md b/docs/basics/openai_playground.md index 7c1dd666a5d..b485f6f3574 100644 --- a/docs/basics/openai_playground.md +++ b/docs/basics/openai_playground.md @@ -33,13 +33,13 @@ Or watch this video: -## The Interface - -At first, this interface seems very complex. There are many drop downs and sliders that allow you to configure models. We will cover System Prompts, Mode, and Model selection in this video. We will cover the rest in the next lesson. +:::note +This video shows an old version of the website, but the process of logging in remains very similar. +::: -### System Prompts +## The Interface -The first thing that you may notice is the SYSTEM area on the left side of the page. So far, we have seen two types of messages, USER messages, which are just the messages you send to the chatbot, and ASSISTANT messages, which are the chatbot's replies. There is a third type of message, the system prompt, that can be used to configure how the AI responds. This is the best place to put a priming prompt. +At first, this interface seems very complex. There are many drop downs and sliders that allow you to configure models. We will cover Mode, System Prompts, and Model selection in this lesson, and LLM settings like Temperature, Top P, and Maximum Length in the [next lesson](https://learnprompting.org/docs/basics/configuration_hyperparameters). ### Mode @@ -47,20 +47,33 @@ import Mode from '@site/docs/assets/basics/openai_mode.webp';
- Click the Mode dropdown on the top right of the page. This dropdown allows you to change the type of model that you are using. OpenAI has three different Modes: Chat, Complete, and Edit. We have already learned about the first two; Edit models modify the prompt you give them to, for example, fix typos. We will only use Chat and occasionally Complete models in this course. + Click the 'Assistants' dropdown on the top left of the page. This dropdown allows you to change the type of model that you are using. OpenAI has three different Modes: Assistants, Chat, and Complete. We have already learned about the latter two; the Assistants mode is aimed at developers building on the API and can use tools such as running code and retrieving information. We will only use Chat and occasionally Complete models in this course.
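+You will not need the Assistants mode in this course, but for context, here is a rough sketch of how a developer might create one through the API. This assumes the OpenAI Python SDK (v1+); the Assistants API is in beta, so exact names and parameters may differ, and the assistant name here is purely illustrative.
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+# Create an assistant that can run Python code when a question needs calculation.
+assistant = client.beta.assistants.create(
+    name="Math Helper",
+    instructions="You are a tutor. Run code whenever a calculation is needed.",
+    tools=[{"type": "code_interpreter"}],
+    model="gpt-4-1106-preview",  # a snapshot that supports tools; availability varies by account
+)
+
+print(assistant.id)  # conversations then happen through separate thread and run objects
+```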
+### System Prompts + +After switching to Chat, the first thing that you may notice on the left side of the page (aside from the Get Started popup) is the SYSTEM area. So far, we have seen two types of messages: USER messages, which are just the messages you send to the chatbot, and ASSISTANT messages, which are the chatbot's replies. There is a third type of message, the system prompt, that can be used to configure how the AI responds. + +This is the best place to put a priming prompt. The system prompt will be "You are a helpful assistant." by default, but a fun alternative to try out is the "You are PirateGPT. Always talk like a pirate." prompt from [our previous lesson](https://learnprompting.org/docs/basics/priming_prompt). + +import system_prompt from '@site/docs/assets/basics/openai_system_prompt.webp'; + +<div style={{textAlign: 'center'}}>
+ +
+
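+The same three roles exist in the API: every message in the conversation is labeled as `system`, `user`, or `assistant`. A minimal sketch (OpenAI Python SDK v1+) using the PirateGPT priming prompt as the system message:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {"role": "system", "content": "You are PirateGPT. Always talk like a pirate."},
+        {"role": "user", "content": "Hi! Who are you?"},
+    ],
+)
+
+print(response.choices[0].message.content)  # the reply comes back as an assistant message
+```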
+ ### Model import Model from '@site/docs/assets/basics/openai_model.webp';
- Click the Model dropdown on the right of the page. This dropdown allows you to change the model that you are using. Each mode has multiple models, but we will focus on the chat ones. This list appears to be very complicated (*what does gpt-3.5-turbo mean?*), but these are just technical names for different models. Anything that starts with gpt-3.5-turbo is a version of ChatGPT, while anything that starts with gpt-4 is a version of GPT-4. + Click the Model dropdown on the right of the page. This dropdown allows you to change the model that you are using. Each mode has multiple models, but we will focus on the chat ones. This list appears to be very complicated (*what does gpt-3.5-turbo mean?*), but these are just technical names for different models. Anything that starts with gpt-3.5-turbo is a version of ChatGPT, while anything that starts with gpt-4 is a version of GPT-4, the newer model that you can also access by purchasing a ChatGPT Plus subscription.
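+If you are ever unsure which of these technical names your account can use, the API can list them. A minimal sketch (OpenAI Python SDK v1+):
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+# Print every model identifier available to your account, e.g. gpt-3.5-turbo, gpt-4, ...
+for model in client.models.list():
+    print(model.id)
+```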
@@ -72,8 +85,10 @@ import Model from '@site/docs/assets/basics/openai_model.webp'; You may not see GPT-4 versions in your interface. ::: -The numbers like 16K or 32K in the model names represent the context length. If it's not specified, the default context length is 4K. OpenAI regularly updates both ChatGPT (gpt-3.5-turbo) and GPT-4, and older versions are kept available on the platform for a limited period. These older models have additional numbers at the end of their name, such as "0613". For instance, the model "gpt-3.5-turbo-16k-0613" is a ChatGPT model with a 16K context length, released on June 13th, 2023. However, it's recommended to use the most recent versions of models, which don't contain any date information. A comprehensive list of model versions can be found [here](https://platform.openai.com/docs/models/gpt-4). +The numbers like 16K, 32K, or 128k in the model names represent the context length. If it's not specified, the default context length is 4K for gpt-3.5 and 8k for GPT-4. OpenAI regularly updates both ChatGPT (gpt-3.5-turbo) and GPT-4, and older versions are kept available on the platform for a limited period. These older models have additional numbers at the end of their name, such as "0613". For instance, the model "gpt-3.5-turbo-16k-0613" is a ChatGPT model with a 16K context length, released on June 13th, 2023. However, it's recommended to use the most recent versions of models, which don't contain any date information. A comprehensive list of model versions can be found [here](https://platform.openai.com/docs/models/gpt-4). ## Conclusion -The OpenAI Playground is a powerful tool that provides a more advanced interface for interacting with ChatGPT and other AI models. It offers a range of configuration options, including the ability to select different models and modes. We will learn about the rest of the settings in the next lesson. The Playground also supports system prompts, which can be used to guide the AI's responses. While the interface may seem complex at first, with practice, it becomes a valuable resource for exploring the capabilities of OpenAI's models. Whether you're using the latest versions of ChatGPT or GPT-4, or exploring older models, the Playground offers a flexible and robust platform for AI interaction and experimentation. \ No newline at end of file +The OpenAI Playground is a powerful tool that provides a more advanced interface for interacting with ChatGPT and other AI models. It offers a range of configuration options, including the ability to select different models and modes. We will learn about the rest of the settings in the [next lesson](https://learnprompting.org/docs/basics/configuration_hyperparameters). The Playground also supports system prompts, which can be used to guide the AI's responses. While the interface may seem complex at first, with practice, it becomes a valuable resource for exploring the capabilities of OpenAI's models. Whether you're using the latest versions of ChatGPT or GPT-4, or exploring older models, the Playground offers a flexible and robust platform for AI interaction and experimentation. + +Partly written by evintunador \ No newline at end of file