Technical feedback and cosmetic changes for openAIChat reference page
MiriamScharnke committed Aug 1, 2024
1 parent 8426bb7 commit b272344
Connect to OpenAI™ Chat Completion API

`chat = openAIChat(___,Name=Value)`

---

## Description

Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI.
To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [OpenAI API](../OpenAI.md).

`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments.


`chat = openAIChat(___,PropertyName=PropertyValue)` specifies properties that are settable at construction using one or more name\-value arguments.

---

## Input Arguments

---

### `systemPrompt` – System prompt

character vector | string scalar


Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.


**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."
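For instance, a chat object using this system prompt could be constructed as follows (a sketch; requires a valid OpenAI API key):

```matlab
% Create a chat object whose responses follow the system prompt.
% Assumes your API key is available, for example in the OPENAI_API_KEY environment variable.
chat = openAIChat("You are a helpful assistant who provides answers to user queries in iambic pentameter.");
```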

---

## Name\-Value Arguments

---

### `APIKey` – OpenAI API key

character vector | string scalar

OpenAI API key to access OpenAI APIs such as ChatGPT.

Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPENAI\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md).
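Both ways of supplying the key can be sketched as follows (the key value shown is a placeholder; do not hard-code real keys):

```matlab
% Option 1: pass the key explicitly (placeholder value).
chat = openAIChat("You are a helpful assistant.",APIKey="sk-...");

% Option 2: set the environment variable, then omit the APIKey argument.
setenv("OPENAI_API_KEY","sk-...");   % typically configured outside MATLAB
chat = openAIChat("You are a helpful assistant.");
```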

---

### `Tools` – OpenAI functions to use during output generation

`openAIFunction` object | array of `openAIFunction` objects


Custom functions used by the model to collect or generate additional data.

For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md).
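A minimal sketch of passing a custom function, assuming `openAIFunction` accepts a name and description and that parameters are added with `addParameter` (the function name, description, and parameter here are hypothetical):

```matlab
% Define a custom function the model may request to call.
f = openAIFunction("getCurrentWeather","Get the current weather for a city");
f = addParameter(f,"city",type="string",description="Name of the city");

% Make the function available during generation.
chat = openAIChat("You are a weather assistant.",Tools=f);
```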

---

## Properties Settable at Construction
Optionally specify these properties at construction using name-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.

---

### `ModelName` – Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [OpenAI API](../OpenAI.md).

---

### `Temperature` – Temperature

`1` (default) | numeric scalar between `0` and `2`


Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.
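For example, to make the output as deterministic as possible (the prompt is hypothetical):

```matlab
% Low temperature for reproducible, focused answers.
chat = openAIChat("You are a concise technical assistant.",Temperature=0);
```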

---

### `TopP` – Top probability mass

`1` (default) | numeric scalar between `0` and `1`

Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity.

---

### `StopSequences` – Stop sequences

`""` (default) | string array with between `1` and `4` elements

Sequences that stop generation of tokens.

**Example:** `["The end.","And that is all she wrote."]`
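As a sketch, generation halts as soon as the model produces one of the specified sequences (the prompt is hypothetical):

```matlab
% Stop generating when either sequence appears in the output.
chat = openAIChat("You are a storyteller.", ...
    StopSequences=["The end.","And that is all she wrote."]);
```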

---

### `PresencePenalty` – Presence penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument.
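The two penalties can be combined; a sketch with hypothetical values:

```matlab
% Constant penalty for any reuse (PresencePenalty) plus a penalty
% that grows with each repetition (FrequencyPenalty).
chat = openAIChat("You are a creative writing assistant.", ...
    PresencePenalty=0.5,FrequencyPenalty=0.8);
```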

---

### `FrequencyPenalty` – Frequency penalty

`0` (default) | numeric scalar between `-2` and `2`

Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens.

The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument.

---

### `TimeOut` – Connection timeout in seconds

`10` (default) | nonnegative numeric scalar

After construction, this property is read-only.

If the OpenAI server does not respond within the timeout, then the function throws an error.
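For example, to allow slower responses before the function errors:

```matlab
% Wait up to 30 seconds for the server to respond.
chat = openAIChat("You are a helpful assistant.",TimeOut=30);
```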

---

### `StreamFun` – Custom streaming function

function handle


Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated.

For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md).


**Example:** `@(token) fprintf("%s",token)`

---

### `ResponseFormat` – Response format

`"text"` (default) | `"json"`

After construction, this property is read\-only.

Format of generated output.

If the response format is `"text"`, then the generated output is a string.

If the response format is `"json"`, then the generated output is a string containing JSON encoded data.

To configure the format of the generated JSON output, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`.
For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md).
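As a sketch (the prompt is hypothetical; note that it contains the word "JSON" as required):

```matlab
% Request JSON-encoded output from the model.
chat = openAIChat("You are an assistant. Always reply with a JSON object.", ...
    ResponseFormat="json");
txt = generate(chat,"List three primary colors in a field named ""colors"".");
```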

The JSON response format is not supported for these models:

- `ModelName="gpt-4"`
- `ModelName="gpt-4-0613"`

## Other Properties

---

### `SystemPrompt` – System prompt

character vector | string scalar

This property is read\-only.

The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communication style, language, etc.

Specify the `SystemPrompt` property at construction using the `systemPrompt` input argument.

**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter."

---
### `FunctionNames` – Names of OpenAI functions to use during output generation

string array

This property is read\-only.

Names of the custom functions specified in the `Tools` name\-value argument.
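For instance, after constructing a chat object with a custom function (hypothetical name and description), this property lists the function name:

```matlab
% FunctionNames reflects the functions passed in the Tools argument.
f = openAIFunction("getCurrentWeather","Get the current weather for a city");
chat = openAIChat("You are a weather assistant.",Tools=f);
chat.FunctionNames
```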


# Object Functions

`generate` – Generate text

# Examples
## Create OpenAI Chat
```matlab
modelName = "gpt-4o-mini";
chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName);
```

---

## Generate and Stream Text
```matlab
sf = @(x) fprintf("%s",x);
chat = openAIChat(StreamFun=sf);
generate(chat,"Why is a raven like a writing desk?");
```
# See Also
- [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md)
- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md)
- [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md)
- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md)

*Copyright 2024 The MathWorks, Inc.*
