From a7b06279bedabdde887db3a839f8b2e5ad091567 Mon Sep 17 00:00:00 2001
From: Miriam Scharnke
Date: Mon, 29 Jul 2024 15:07:20 +0100
Subject: [PATCH 1/8] Create functions directory in the doc folder and add openAIChat documentation.

---
 doc/functions/openAIChat.md | 289 ++++++++++++++++++++++++++++++++++++
 1 file changed, 289 insertions(+)
 create mode 100644 doc/functions/openAIChat.md

diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md
new file mode 100644
index 0000000..a21a8bc
--- /dev/null
+++ b/doc/functions/openAIChat.md
@@ -0,0 +1,289 @@
+
# openAIChat

Connect to OpenAI Chat Completion API

# Creation
## Syntax

`chat = openAIChat`

`chat = openAIChat(systemPrompt)`

`chat = openAIChat(___,APIKey=key)`

`chat = openAIChat(___,Name=Value)`

## Description

Connect to the OpenAI™ Chat Completion API to generate text using large language models developed by OpenAI.

To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [https://platform.openai.com/docs/quickstart](https://platform.openai.com/docs/quickstart).

`chat = openAIChat` creates an `openAIChat` object. Connecting to the OpenAI API requires a valid API key. Either set the environment variable `OPENAI_API_KEY` or specify the `APIKey` name\-value argument.

`chat = openAIChat(``systemPrompt``)` creates an `openAIChat` object with the specified system prompt.

`chat = openAIChat(___,APIKey=key)` uses the specified API key.

`chat = openAIChat(___,``Name=Value``)` specifies additional options using one or more name\-value arguments.

## Input Arguments
### `systemPrompt` \- System prompt

character vector | string scalar

The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.
**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

## Name\-Value Arguments
### `APIKey` \- OpenAI API key

character vector | string scalar

OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `ModelName` \- Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `Temperature` \- Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in mostly deterministic output.

### `TopP` \- Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.

### `Tools` \- OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects

Custom functions used by the model to process its input and output.

### `StopSequences` \- Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.
**Example:** `["The end.","And that is all she wrote."]`

### `PresencePenalty` \- Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of instances of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.

### `FrequencyPenalty` \- Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.

### `TimeOut` \- Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

If the OpenAI server does not respond within the timeout, then the function throws an error.

### `StreamFun` \- Custom streaming function

function handle

Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.

**Example:** `@(token) fprintf("%s",token)`

### `ResponseFormat` \- Response format

`"text"` (default) | `"json"`

Format of generated output.

If you set the response format to `"text"`, then the generated output is a string.

If you set the response format to `"json"`, then the generated output is a string containing JSON encoded data.
This option is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

# Properties
### `SystemPrompt` \- System prompt

character vector | string scalar

This property is read\-only.

The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

### `ModelName` \- Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).

### `Temperature` \- Temperature

`1` (default) | numeric scalar between `0` and `2`

Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in mostly deterministic output.

### `TopP` \- Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.
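The two sampling controls above work differently: temperature reshapes the token probability distribution, while top probability mass truncates it. As an illustrative sketch of setting them at construction time (the system prompts and parameter values here are arbitrary examples, not recommendations):

```matlab
% Focused chat: low temperature gives mostly deterministic answers.
focusedChat = openAIChat("You are a concise factual assistant.", ...
    Temperature=0.2);

% Diverse chat: sample only from the top 50% of probability mass.
diverseChat = openAIChat("You are a creative writing assistant.", ...
    TopP=0.5);
```

As a rule of thumb, adjust `Temperature` or `TopP`, but not both at once.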
### `StopSequences` \- Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`

### `PresencePenalty` \- Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of instances of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.

### `FrequencyPenalty` \- Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.

### `TimeOut` \- Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

This property is read\-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.

### `ResponseFormat` \- Response format

`"text"` (default) | `"json"`

This property is read\-only.

Format of generated output.

If the response format is `"text"`, then the generated output is a string.

If the response format is `"json"`, then the generated output is a JSON (\*.json) file.
This option is not supported for these models: + +- `ModelName="gpt-4"` +- `ModelName="gpt-4-0613"` + +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). + +### `FunctionNames` \- Names of OpenAI functions to use during output generation + +string array + + +This property is read\-only. + + +Names of the custom functions specified in the `Tools` name\-value argument. + +# Object Functions + +`generate` \- Generate text + +# Examples +## Create OpenAI Chat +```matlab +modelName = "gpt-3.5-turbo"; +chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName) +``` +## Generate and Stream Text +```matlab +sf = @(x) fprintf("%s",x); +chat = openAIChat(StreamFun=sf); +generate(chat,"Why is a raven like a writing desk?") +``` +# See Also +- [Create Simple Chat Bot](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/CreateSimpleChatBot.md) +- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md) +- [Analyze Scientific Papers Using Function Calls](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeScientificPapersUsingFunctionCalls.md) +- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) + +Copyright 2024 The MathWorks, Inc. 
+ From 19c70eebd223aa0ff9a16bfad030d6b8546a9c07 Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Mon, 29 Jul 2024 16:10:35 +0100 Subject: [PATCH 2/8] Update doc/functions/openAIChat.md Co-authored-by: Christopher Creutzig <89011131+ccreutzi@users.noreply.github.com> --- doc/functions/openAIChat.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index a21a8bc..6890799 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -1,7 +1,7 @@ # openAIChat -Connect to OpenAI Chat Completion API +Connect to OpenAI™ Chat Completion API # Creation ## Syntax From 56a65e75cb2ef2ab1a0ded0695742e94a51859f8 Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Mon, 29 Jul 2024 16:12:19 +0100 Subject: [PATCH 3/8] Update doc/functions/openAIChat.md I know why those happened, and can avoid them in the future! In MLX files, you can create hyperlinks which link to lines in the same file. I tried to see whether this would translate into the markdown - turns out it hasn't. What it has done is taken those hyperlinks and just given them extra backticks with no functionality. So long as I stop trying to link between passages, this should not be an issue. Co-authored-by: Christopher Creutzig <89011131+ccreutzi@users.noreply.github.com> --- doc/functions/openAIChat.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index 6890799..aa368b5 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -28,7 +28,7 @@ To connect to the OpenAI API, you need a valid API key. For information on how t `chat = openAIChat` creates an `openAIChat` object. Connecting to the OpenAI API requires a valid API key. Either set the environment variable `OPENAI_API_KEY` or specify the `APIKey` name\-value argument. -`chat = openAIChat(``systemPrompt``)` creates an `openAIChat` object with the specified system prompt. 
+`chat = openAIChat(systemPrompt)` creates an `openAIChat` object with the specified system prompt. `chat = openAIChat(___,APIKey=key)` uses the specified API key. From b0f15e25d580683600f81d93754be09c2ff849da Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Mon, 29 Jul 2024 16:12:32 +0100 Subject: [PATCH 4/8] Update doc/functions/openAIChat.md Co-authored-by: Christopher Creutzig <89011131+ccreutzi@users.noreply.github.com> --- doc/functions/openAIChat.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index aa368b5..7694654 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -34,7 +34,7 @@ To connect to the OpenAI API, you need a valid API key. For information on how t `chat = openAIChat(___,APIKey=key)` uses the specified API key. -`chat = openAIChat(___,``Name=Value``)` specifies additional options using one or more name\-value arguments. +`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments. ## Input Arguments ### `systemPrompt` \- System prompt From 595316f0383f09004067916206768de42d6484ee Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Tue, 30 Jul 2024 14:16:20 +0100 Subject: [PATCH 5/8] Update doc/functions/openAIChat.md Co-authored-by: Christopher Creutzig <89011131+ccreutzi@users.noreply.github.com> --- doc/functions/openAIChat.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index 7694654..e066655 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -133,6 +133,8 @@ function handle Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated. 
+For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
+

**Example:** `@(token) fprintf("%s",token)`

From ada0ff810338786e6e035e39aeb8ffb51d74f5dd Mon Sep 17 00:00:00 2001
From: Miriam Scharnke
Date: Tue, 30 Jul 2024 14:19:09 +0100
Subject: [PATCH 6/8] Update openAIChat documentation.

---
 doc/functions/openAIChat.md | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md
index 7694654..f63bef7 100644
--- a/doc/functions/openAIChat.md
+++ b/doc/functions/openAIChat.md
@@ -19,7 +19,7 @@ Connect to OpenAI™ Chat Completion API

## Description

-Connect to the OpenAI™ Chat Completion API to generate text using large language models developed by OpenAI.
+Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI.

To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [https://platform.openai.com/docs/quickstart](https://platform.openai.com/docs/quickstart).

@@ -56,7 +56,7 @@ character vector | string scalar
OpenAI API key to access OpenAI APIs such as ChatGPT.

-Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. For more information, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md).
+Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).

### `ModelName` \- Model name

@@ -66,7 +66,7 @@ Instead of using the `APIKey` name\-value argument, you can also set the environ
Name of the OpenAI model to use for text or image generation.
-For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md). +For a list of currently supported models, see [OpenAI API](../OpenAI.md). ### `Temperature` \- Temperature @@ -152,7 +152,7 @@ If you set the response format to `"json"`, then the generated output is a JSON - `ModelName="gpt-4"` - `ModelName="gpt-4-0613"` -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). # Properties ### `SystemPrompt` \- System prompt @@ -176,7 +176,7 @@ The system prompt is a natural\-language description that provides the framework Name of the OpenAI model to use for text or image generation. -For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md). +For a list of currently supported models, see [OpenAI API](../OpenAI.md). ### `Temperature` \- Temperature @@ -190,7 +190,7 @@ Temperature value for controlling the randomness of the output. Higher temperatu `1` (default) | numeric scalar between `0` and `1` -Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity. 
+Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity. ### `StopSequences` \- Stop sequences @@ -246,13 +246,20 @@ Format of generated output. If the response format is `"text"`, then the generated output is a string. -If the response format is `"json"`, then the generated output is a JSON (\*.json) file. This option is not supported for these models: +If the response format is `"json"`, then the generated output is a string containing JSON encoded data. + + +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`. + + +For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). + + +The JSON response format is not supported for these models: - `ModelName="gpt-4"` - `ModelName="gpt-4-0613"` -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). 
- ### `FunctionNames` \- Names of OpenAI functions to use during output generation string array @@ -280,10 +287,9 @@ chat = openAIChat(StreamFun=sf); generate(chat,"Why is a raven like a writing desk?") ``` # See Also -- [Create Simple Chat Bot](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/CreateSimpleChatBot.md) -- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md) -- [Analyze Scientific Papers Using Function Calls](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeScientificPapersUsingFunctionCalls.md) -- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) - -Copyright 2024 The MathWorks, Inc. +- [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md) +- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md) +- [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md) +- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) +Copyright 2024 The MathWorks, Inc. 
\ No newline at end of file From 72d171e9f3b1bf880950ad1b5c7a9611874fe877 Mon Sep 17 00:00:00 2001 From: Miriam Scharnke Date: Thu, 1 Aug 2024 09:55:31 +0100 Subject: [PATCH 7/8] Fix OpenAI trademarks --- README.md | 4 ++-- doc/Azure.md | 6 +++--- doc/Ollama.md | 2 +- doc/OpenAI.md | 4 ++-- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/README.md b/README.md index 4bcf377..d8e419a 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,8 @@ -# Large Language Models (LLMs) with MATLAB® +# Large Language Models (LLMs) with MATLAB [![Open in MATLAB Online](https://www.mathworks.com/images/responsive/global/open-in-matlab-online.svg)](https://matlab.mathworks.com/open/github/v1?repo=matlab-deep-learning/llms-with-matlab) [![View Large Language Models (LLMs) with MATLAB on File Exchange](https://www.mathworks.com/matlabcentral/images/matlab-file-exchange.svg)](https://www.mathworks.com/matlabcentral/fileexchange/163796-large-language-models-llms-with-matlab) -This repository contains code to connect MATLAB to the [OpenAI™ Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) (which powers ChatGPT™), OpenAI Images API (which powers DALL·E™), [Azure® OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/), and both local and nonlocal [Ollama™](https://ollama.com/) models. This allows you to leverage the natural language processing capabilities of large language models directly within your MATLAB environment. +This repository contains code to connect MATLAB® to the [OpenAI® Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) (which powers ChatGPT™), OpenAI Images API (which powers DALL·E™), [Azure® OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/), and both local and nonlocal [Ollama™](https://ollama.com/) models. 
This allows you to leverage the natural language processing capabilities of large language models directly within your MATLAB environment. ## Requirements diff --git a/doc/Azure.md b/doc/Azure.md index d5af221..b2bc323 100644 --- a/doc/Azure.md +++ b/doc/Azure.md @@ -1,6 +1,6 @@ -# Connecting to Azure® OpenAI Service +# Connecting to Azure OpenAI Service -This repository contains code to connect MATLAB to the [Azure® OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/). +This repository contains code to connect MATLAB to the [Azure® OpenAI® Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/). To use Azure OpenAI Services, you need to create a model deployment on your Azure account and obtain one of the keys for it. You are responsible for any fees Azure may charge for the use of their APIs. You should be familiar with the limitations and risks associated with using this technology, and you agree that you shall be solely responsible for full compliance with any terms that may apply to your use of the Azure APIs. @@ -31,7 +31,7 @@ loadenv(".env") ## Establishing a connection to Chat Completions API using Azure -To connect MATLAB to Chat Completions API via Azure, you will have to create an `azureChat` object. See [the Azure documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart) for details on the setup required and where to find your key, endpoint, and deployment name. As explained above, the endpoint, deployment, and key should be in the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENYT`, and `AZURE_OPENAI_API_KEY`, or provided as `Endpoint=…`, `Deployment=…`, and `APIKey=…` in the `azureChat` call below. +To connect MATLAB® to Chat Completions API via Azure, you will have to create an `azureChat` object. 
See [the Azure documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart) for details on the setup required and where to find your key, endpoint, and deployment name. As explained above, the endpoint, deployment, and key should be in the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`, and `AZURE_OPENAI_API_KEY`, or provided as `Endpoint=…`, `Deployment=…`, and `APIKey=…` in the `azureChat` call below.

In order to create the chat assistant, use the `azureChat` function, optionally providing a system prompt:
```matlab
diff --git a/doc/Ollama.md b/doc/Ollama.md
index 110a869..fc70032 100644
--- a/doc/Ollama.md
+++ b/doc/Ollama.md
@@ -1,6 +1,6 @@
 # Ollama

-This repository contains code to connect MATLAB to an [Ollama™](https://ollama.com) server, running large language models (LLMs).
+This repository contains code to connect MATLAB® to an [Ollama™](https://ollama.com) server, running large language models (LLMs).

To use local models with Ollama, you will need to install and start an Ollama server, and “pull” models into it. Please follow the Ollama documentation for details. You should be familiar with the limitations and risks associated with using this technology, and you agree that you shall be solely responsible for full compliance with any terms that may apply to your use of any specific model.

diff --git a/doc/OpenAI.md b/doc/OpenAI.md
index 51783ae..9072018 100644
--- a/doc/OpenAI.md
+++ b/doc/OpenAI.md
@@ -1,6 +1,6 @@
-# OpenAI™
+# OpenAI

-Several functions in this repository connect MATLAB to the [OpenAI™ Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) (which powers ChatGPT™) and the [OpenAI Images API](https://platform.openai.com/docs/guides/images/image-generation-beta) (which powers DALL·E™).
+Several functions in this repository connect MATLAB® to the [OpenAI® Chat Completions API](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) (which powers ChatGPT™) and the [OpenAI Images API](https://platform.openai.com/docs/guides/images/image-generation-beta) (which powers DALL·E™). To start using the OpenAI APIs, you first need to obtain OpenAI API keys. You are responsible for any fees OpenAI may charge for the use of their APIs. You should be familiar with the limitations and risks associated with using this technology, and you agree that you shall be solely responsible for full compliance with any terms that may apply to your use of the OpenAI APIs. From 223347b1c3f6b54ac0c93ac693c8cbeadb00bb22 Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Thu, 1 Aug 2024 09:59:58 +0100 Subject: [PATCH 8/8] Delete doc/functions/openAIChat.md This file was not meant for this change. --- doc/functions/openAIChat.md | 297 ------------------------------------ 1 file changed, 297 deletions(-) delete mode 100644 doc/functions/openAIChat.md diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md deleted file mode 100644 index b10653a..0000000 --- a/doc/functions/openAIChat.md +++ /dev/null @@ -1,297 +0,0 @@ - -# openAIChat - -Connect to OpenAI™ Chat Completion API - -# Creation -## Syntax - -`chat = openAIChat` - - -`chat = openAIChat(systemPrompt)` - - -`chat = openAIChat(___,ApiKey=key)` - - -`chat = openAIChat(___,Name=Value)` - -## Description - -Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI. - - -To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [https://platform.openai.com/docs/quickstart](https://platform.openai.com/docs/quickstart). - - -`chat = openAIChat` creates an `openAIChat` object. Connecting to the OpenAI API requires a valid API key. 
Either set the environment variable `OPENAI_API_KEY` or specify the `APIKey` name\-value argument. - - -`chat = openAIChat(systemPrompt)` creates an `openAIChat` object with the specified system prompt. - - -`chat = openAIChat(___,APIKey=key)` uses the specified API key. - - -`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments. - -## Input Arguments -### `systemPrompt` \- System prompt - -character vector | string scalar - - -The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc. - - -**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter." - -## Name\-Value Arguments -### `APIKey` \- OpenAI API key - -character vector | string scalar - - -OpenAI API key to access OpenAI APIs such as ChatGPT. - - -Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md). - -### `ModelName` \- Model name - -`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ... - - -Name of the OpenAI model to use for text or image generation. - - -For a list of currently supported models, see [OpenAI API](../OpenAI.md). - -### `Temperature` \- Temperature - -`1` (default) | numeric scalar between `0` and `2` - - -Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output. - -### `TopP` \- Top probability mass - -`1` (default) | numeric scalar between `0` and `1` - - -Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity. 
- -### `Tools` \- OpenAI functions to use during output generation - -`openAIFunction` object | array of `openAIFunction` objects - - -Custom functions used by the model to process its input and output. - -### `StopSequences` \- Stop sequences - -`""` (default) | string array with between `1` and `4` elements - - -Sequences that stop generation of tokens. - - -**Example:** `["The end.","And that is all she wrote."]` - -### `PresencePenalty` \- Presence penalty - -`0` (default) | numeric scalar between `-2` and `2` - - -Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens. - - -The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument. - -### `FrequencyPenalty` \- Frequency penalty - -`0` (default) | numeric scalar between `-2` and `2` - - -Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens. - - -The frequence penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument. - -### `TimeOut` \- Connection timeout in seconds - -`10` (default) | nonnegative numeric scalar - - -If the OpenAI server does not respond within the timeout, then the function throws an error. - -### `StreamFun` \- Custom streaming function - -function handle - - -Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. 
For example, you can use this function to print the output as it is generated.

For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).


**Example:** `@(token) fprintf("%s",token)`

### `ResponseFormat` \- Response format

`"text"` (default) | `"json"`


Format of generated output.


If you set the response format to `"text"`, then the generated output is a string.


If you set the response format to `"json"`, then the generated output is a string containing JSON encoded data. This option is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).

# Properties
### `SystemPrompt` \- System prompt

character vector | string scalar


This property is read\-only.


The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, and so on.


**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

### `ModelName` \- Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...


Name of the OpenAI model to use for text or image generation.


For a list of currently supported models, see [OpenAI API](../OpenAI.md).

### `Temperature` \- Temperature

`1` (default) | numeric scalar between `0` and `2`


Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output.
Setting the temperature to `0` minimizes the randomness of the output.

### `TopP` \- Top probability mass

`1` (default) | numeric scalar between `0` and `1`


Top probability mass for controlling the diversity of the generated output using top\-p sampling. Higher top probability mass corresponds to higher diversity.

### `StopSequences` \- Stop sequences

`""` (default) | string array with between `1` and `4` elements


Sequences that stop generation of tokens.


**Example:** `["The end.","And that is all she wrote."]`

### `PresencePenalty` \- Presence penalty

`0` (default) | numeric scalar between `-2` and `2`


Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.


The presence penalty is independent of the number of occurrences of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.

### `FrequencyPenalty` \- Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`


Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.


The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances of that token, use the `PresencePenalty` name\-value argument.

### `TimeOut` \- Connection timeout in seconds

`10` (default) | nonnegative numeric scalar


This property is read\-only.


If the OpenAI server does not respond within the timeout, then the function throws an error.

### `ResponseFormat` \- Response format

`"text"` (default) | `"json"`


This property is read\-only.


Format of generated output.


If the response format is `"text"`, then the generated output is a string.


If the response format is `"json"`, then the generated output is a string containing JSON encoded data.


To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.


For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).


The JSON response format is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

### `FunctionNames` \- Names of OpenAI functions to use during output generation

string array


This property is read\-only.


Names of the custom functions specified in the `Tools` name\-value argument.

# Object Functions

`generate` \- Generate text

# Examples
## Create OpenAI Chat
```matlab
modelName = "gpt-3.5-turbo";
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName)
```
## Generate and Stream Text
```matlab
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?")
```
# See Also
- [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md)
- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
- [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md)
- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md)

Copyright 2024 The MathWorks, Inc.