AzureOpenaiClient::DefaultApi

All URIs are relative to https://your-resource-name.openai.azure.com/openai

| Method | HTTP request | Description |
|--------|--------------|-------------|
| chat_completions_create | POST /deployments/{deployment-id}/chat/completions | Creates a completion for the chat message |
| completions_create | POST /deployments/{deployment-id}/completions | Creates a completion for the provided prompt, parameters and chosen model. |
| embeddings_create | POST /deployments/{deployment-id}/embeddings | Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. |
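
Before making calls, point the generated configuration at your own Azure resource. A minimal sketch of that setup, assuming the standard OpenAPI Generator Ruby configuration attributes (scheme, host, base_path):

require 'azure_openai_client'

AzureOpenaiClient.configure do |config|
  # Attribute names below follow the standard OpenAPI Generator Ruby
  # Configuration class and are assumptions, not taken from this document.
  config.scheme = 'https'
  config.host = 'your-resource-name.openai.azure.com'
  config.base_path = '/openai'
end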

chat_completions_create

chat_completions_create(deployment_id, api_version, chat_completions_create_request)

Creates a completion for the chat message

Examples

require 'time'
require 'azure_openai_client'
# setup authorization
AzureOpenaiClient.configure do |config|
  # Configure API key authorization: apiKey
  config.api_key['apiKey'] = 'YOUR API KEY'
  # Uncomment the following line to set a prefix for the API key, e.g. 'Bearer' (defaults to nil)
  # config.api_key_prefix['apiKey'] = 'Bearer'

  # Configure OAuth2 access token for authorization: bearer
  config.access_token = 'YOUR ACCESS TOKEN'
end

api_instance = AzureOpenaiClient::DefaultApi.new
deployment_id = 'deployment_id_example' # String | 
api_version = '2023-05-15' # String | 
chat_completions_create_request = AzureOpenaiClient::ChatCompletionsCreateRequest.new({messages: [AzureOpenaiClient::ChatCompletionsCreateRequestMessagesInner.new({role: 'system', content: 'content_example'})]}) # ChatCompletionsCreateRequest | 

begin
  # Creates a completion for the chat message
  result = api_instance.chat_completions_create(deployment_id, api_version, chat_completions_create_request)
  p result
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->chat_completions_create: #{e}"
end
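
If a call fails, the raised ApiError generally carries more than the message shown above. A hedged sketch of inspecting it, assuming the attribute readers of the standard OpenAPI Generator Ruby ApiError class (code, response_headers, response_body):

begin
  result = api_instance.chat_completions_create(deployment_id, api_version, chat_completions_create_request)
  p result
rescue AzureOpenaiClient::ApiError => e
  # These readers are assumed from the standard generated ApiError class.
  puts "HTTP status:   #{e.code}"
  puts "Headers:       #{e.response_headers}"
  puts "Response body: #{e.response_body}"
end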

Using the chat_completions_create_with_http_info variant

This returns an Array which contains the response data, status code and headers.

<Array(<ChatCompletionsCreate200Response>, Integer, Hash)> chat_completions_create_with_http_info(deployment_id, api_version, chat_completions_create_request)

begin
  # Creates a completion for the chat message
  data, status_code, headers = api_instance.chat_completions_create_with_http_info(deployment_id, api_version, chat_completions_create_request)
  p status_code # => 2xx
  p headers # => { ... }
  p data # => <ChatCompletionsCreate200Response>
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->chat_completions_create_with_http_info: #{e}"
end

Parameters

| Name | Type | Description | Notes |
|------|------|-------------|-------|
| deployment_id | String | | |
| api_version | String | | |
| chat_completions_create_request | ChatCompletionsCreateRequest | | |

Return type

ChatCompletionsCreate200Response

Authorization

apiKey, bearer

HTTP request headers

  • Content-Type: application/json
  • Accept: application/json
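
The ChatCompletionsCreate200Response returned above is a generated model. A hedged sketch of reading the assistant's reply from it; the accessor names (choices, message, content) mirror the underlying API schema and are assumptions about the generated code:

result = api_instance.chat_completions_create(deployment_id, api_version, chat_completions_create_request)
# The reply text is expected under choices[0].message.content
# (accessor names assumed from the API schema, not confirmed by this document).
puts result.choices.first.message.content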

completions_create

completions_create(deployment_id, api_version, completions_create_request)

Creates a completion for the provided prompt, parameters and chosen model.

Examples

require 'time'
require 'azure_openai_client'
# setup authorization
AzureOpenaiClient.configure do |config|
  # Configure API key authorization: apiKey
  config.api_key['apiKey'] = 'YOUR API KEY'
  # Uncomment the following line to set a prefix for the API key, e.g. 'Bearer' (defaults to nil)
  # config.api_key_prefix['apiKey'] = 'Bearer'

  # Configure OAuth2 access token for authorization: bearer
  config.access_token = 'YOUR ACCESS TOKEN'
end

api_instance = AzureOpenaiClient::DefaultApi.new
deployment_id = 'davinci' # String | 
api_version = '2023-05-15' # String | 
completions_create_request = AzureOpenaiClient::CompletionsCreateRequest.new # CompletionsCreateRequest | 

begin
  # Creates a completion for the provided prompt, parameters and chosen model.
  result = api_instance.completions_create(deployment_id, api_version, completions_create_request)
  p result
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->completions_create: #{e}"
end
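
The example above builds CompletionsCreateRequest with no fields set. In practice you would at least supply a prompt; a minimal sketch, assuming the generated model accepts a hash of attributes named after the API schema fields (prompt, max_tokens, temperature):

completions_create_request = AzureOpenaiClient::CompletionsCreateRequest.new(
  prompt: 'Write a tagline for an ice cream shop.', # assumed accessor, mirrors the API's "prompt" field
  max_tokens: 50,                                    # assumed accessor
  temperature: 0.7                                   # assumed accessor
)
result = api_instance.completions_create(deployment_id, api_version, completions_create_request)
# The completion text is expected under choices[0].text (assumed accessor names).
puts result.choices.first.text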

Using the completions_create_with_http_info variant

This returns an Array which contains the response data, status code and headers.

<Array(<CompletionsCreate200Response>, Integer, Hash)> completions_create_with_http_info(deployment_id, api_version, completions_create_request)

begin
  # Creates a completion for the provided prompt, parameters and chosen model.
  data, status_code, headers = api_instance.completions_create_with_http_info(deployment_id, api_version, completions_create_request)
  p status_code # => 2xx
  p headers # => { ... }
  p data # => <CompletionsCreate200Response>
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->completions_create_with_http_info: #{e}"
end

Parameters

| Name | Type | Description | Notes |
|------|------|-------------|-------|
| deployment_id | String | | |
| api_version | String | | |
| completions_create_request | CompletionsCreateRequest | | |

Return type

CompletionsCreate200Response

Authorization

apiKey, bearer

HTTP request headers

  • Content-Type: application/json
  • Accept: application/json

embeddings_create

embeddings_create(deployment_id, api_version, request_body)

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Examples

require 'time'
require 'azure_openai_client'
# setup authorization
AzureOpenaiClient.configure do |config|
  # Configure API key authorization: apiKey
  config.api_key['apiKey'] = 'YOUR API KEY'
  # Uncomment the following line to set a prefix for the API key, e.g. 'Bearer' (defaults to nil)
  # config.api_key_prefix['apiKey'] = 'Bearer'

  # Configure OAuth2 access token for authorization: bearer
  config.access_token = 'YOUR ACCESS TOKEN'
end

api_instance = AzureOpenaiClient::DefaultApi.new
deployment_id = 'ada-search-index-v1' # String | The deployment id of the model which was deployed.
api_version = '2023-05-15' # String | 
request_body = { key: 3.56 } # Hash<String, Object> | 

begin
  # Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  result = api_instance.embeddings_create(deployment_id, api_version, request_body)
  p result
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->embeddings_create: #{e}"
end
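
The { key: 3.56 } hash above is only a generator placeholder. The embeddings endpoint expects the text to embed under an input field; a hedged sketch of a more realistic request and of reading the resulting vector (response accessor names are assumptions based on the API schema):

request_body = { 'input' => 'The food was delicious and the waiter was friendly.' } # Hash<String, Object>
result = api_instance.embeddings_create(deployment_id, api_version, request_body)
# The embedding vector is expected under data[0].embedding in the generated response model.
vector = result.data.first.embedding
puts vector.length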

Using the embeddings_create_with_http_info variant

This returns an Array which contains the response data, status code and headers.

<Array(<EmbeddingsCreate200Response>, Integer, Hash)> embeddings_create_with_http_info(deployment_id, api_version, request_body)

begin
  # Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  data, status_code, headers = api_instance.embeddings_create_with_http_info(deployment_id, api_version, request_body)
  p status_code # => 2xx
  p headers # => { ... }
  p data # => <EmbeddingsCreate200Response>
rescue AzureOpenaiClient::ApiError => e
  puts "Error when calling DefaultApi->embeddings_create_with_http_info: #{e}"
end

Parameters

| Name | Type | Description | Notes |
|------|------|-------------|-------|
| deployment_id | String | The deployment id of the model which was deployed. | |
| api_version | String | | |
| request_body | Hash<String, Object> | | |

Return type

EmbeddingsCreate200Response

Authorization

apiKey, bearer

HTTP request headers

  • Content-Type: application/json
  • Accept: application/json