diff --git a/src/aws.ts b/src/aws.ts index 76189dc514e..5a52d2db5f7 100644 --- a/src/aws.ts +++ b/src/aws.ts @@ -1355,6 +1355,12 @@ const completionSpec: Fig.Spec = { "AWS Marketplace Entitlement Service This reference provides descriptions of the AWS Marketplace Entitlement Service API. AWS Marketplace Entitlement Service is used to determine the entitlement of a customer to a given product. An entitlement represents capacity in a product owned by the customer. For example, a customer might own some number of users or seats in an SaaS application or some amount of data capacity in a multi-tenant database. Getting Entitlement Records GetEntitlements- Gets the entitlements for a Marketplace product", loadSpec: "aws/marketplace-entitlement", }, + { + name: "marketplace-reporting", + description: + "The Amazon Web Services Marketplace GetBuyerDashboard API enables you to get a procurement insights dashboard programmatically. The API gets the agreement and cost analysis dashboards with data for all of the Amazon Web Services accounts in your Amazon Web Services Organization. To use the Amazon Web Services Marketplace Reporting API, you must complete the following prerequisites: Enable all features for your organization. For more information, see Enabling all features for an organization with Organizations, in the Organizations User Guide. Call the service as the Organizations management account or an account registered as a delegated administrator for the procurement insights service. For more information about management accounts, see Tutorial: Creating and configuring an organization and Managing the management account with Organizations, both in the Organizations User Guide. For more information about delegated administrators, see Using delegated administrators, in the Amazon Web Services Marketplace Buyer Guide. Create an IAM policy that enables the aws-marketplace:GetBuyerDashboard and organizations:DescribeOrganization permissions. 
In addition, the management account requires the organizations:EnableAWSServiceAccess and iam:CreateServiceLinkedRole permissions to create the service-linked role. For more information about creating the policy, see Policies and permissions in Identity and Access Management, in the IAM User Guide. Access can be shared only by registering the desired linked account as a delegated administrator. That requires organizations:RegisterDelegatedAdministrator, organizations:ListDelegatedAdministrators, and organizations:DeregisterDelegatedAdministrator permissions. Use the Amazon Web Services Marketplace console to create the AWSServiceRoleForProcurementInsightsPolicy service-linked role. The role enables Amazon Web Services Marketplace procurement visibility integration. The management account requires an IAM policy with the organizations:EnableAWSServiceAccess and iam:CreateServiceLinkedRole permissions to create the service-linked role and enable the service access. For more information, see Granting access to Organizations and Service-linked role to share procurement data in the Amazon Web Services Marketplace Buyer Guide. After creating the service-linked role, you must enable trusted access that grants Amazon Web Services Marketplace permission to access data from your Organizations. For more information, see Granting access to Organizations in the Amazon Web Services Marketplace Buyer Guide", + loadSpec: "aws/marketplace-reporting", + }, { name: "marketplacecommerceanalytics", description: @@ -1419,7 +1425,7 @@ const completionSpec: Fig.Spec = { { name: "memorydb", description: - "MemoryDB is a fully managed, Redis OSS-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. 
It is compatible with Redis OSS, a popular open source data store, enabling you to leverage Redis OSS\u2019 flexible and friendly data structures, APIs, and commands", + "MemoryDB for Redis is a fully managed, Redis-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. It is compatible with Redis, a popular open source data store, enabling you to leverage Redis\u2019 flexible and friendly data structures, APIs, and commands", loadSpec: "aws/memorydb", }, { @@ -1702,7 +1708,7 @@ const completionSpec: Fig.Spec = { { name: "qconnect", description: - "Powered by Amazon Bedrock: Amazon Web Services implements automated abuse detection. Because Amazon Q in Connect is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI). Amazon Q in Connect is a generative AI customer service assistant. It is an LLM-enhanced evolution of Amazon Connect Wisdom that delivers real-time recommendations to help contact center agents resolve customer issues quickly and accurately. Amazon Q in Connect automatically detects customer intent during calls and chats using conversational analytics and natural language understanding (NLU). It then provides agents with immediate, real-time generative responses and suggested actions, and links to relevant documents and articles. Agents can also query Amazon Q in Connect directly using natural language or keywords to answer customer requests. Use the Amazon Q in Connect APIs to create an assistant and a knowledge base, for example, or manage content by uploading custom files. 
For more information, see Use Amazon Q in Connect for generative AI powered agent assistance in real-time in the Amazon Connect Administrator Guide", + "Powered by Amazon Bedrock: Amazon Web Services implements automated abuse detection. Because Amazon Q in Connect is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI). Amazon Q in Connect is a generative AI customer service assistant. It is an LLM-enhanced evolution of Amazon Connect Wisdom that delivers real-time recommendations to help contact center agents resolve customer issues quickly and accurately. Amazon Q in Connect automatically detects customer intent during calls and chats using conversational analytics and natural language understanding (NLU). It then provides agents with immediate, real-time generative responses and suggested actions, and links to relevant documents and articles. Agents can also query Amazon Q in Connect directly using natural language or keywords to answer customer requests. Use the Amazon Q in Connect APIs to create an assistant and a knowledge base, for example, or manage content by uploading custom files. 
For more information, see Use Amazon Q in Connect for generative AI powered agent assistance in real-time in the Amazon Connect Administrator Guide", loadSpec: "aws/qconnect", }, { diff --git a/src/aws/b2bi.ts b/src/aws/b2bi.ts index f819fbaa871..8503458a191 100644 --- a/src/aws/b2bi.ts +++ b/src/aws/b2bi.ts @@ -118,6 +118,14 @@ const completionSpec: Fig.Spec = { name: "list", }, }, + { + name: "--capability-options", + description: + "Specify the structure that contains the details for the associated capabilities", + args: { + name: "structure", + }, + }, { name: "--client-token", description: "Reserved for future use", @@ -229,10 +237,58 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "create-starter-mapping-template", + description: + "Amazon Web Services B2B Data Interchange uses a mapping template in JSONata or XSLT format to transform a customer input file into a JSON or XML file that can be converted to EDI. If you provide a sample EDI file with the same structure as the EDI files that you wish to generate, then the service can generate a mapping template. The starter template contains placeholder values which you can replace with JSONata or XSLT expressions to take data from your input file and insert it into the JSON or XML file that is used to generate the EDI. If you do not provide a sample EDI file, then the service can generate a mapping template based on the EDI settings in the templateDetails parameter. 
Currently, we only support generating a template that can generate the input to produce an Outbound X12 EDI file", + options: [ + { + name: "--output-sample-location", + description: + "Specify the location of the sample EDI file that is used to generate the mapping template", + args: { + name: "structure", + }, + }, + { + name: "--mapping-type", + description: + "Specify the format for the mapping template: either JSONATA or XSLT", + args: { + name: "string", + }, + }, + { + name: "--template-details", + description: + "Describes the details needed for generating the template. Specify the X12 transaction set and version for which the template is used: currently, we only support X12", + args: { + name: "structure", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "create-transformer", description: - "Creates a transformer. A transformer describes how to process the incoming EDI documents and extract the necessary information to the output file", + "Creates a transformer. 
Amazon Web Services B2B Data Interchange currently supports two scenarios: Inbound EDI: the Amazon Web Services customer receives an EDI file from their trading partner. Amazon Web Services B2B Data Interchange converts this EDI file into a JSON or XML file with a service-defined structure. A mapping template provided by the customer, in JSONata or XSLT format, is optionally applied to this file to produce a JSON or XML file with the structure the customer requires. Outbound EDI: the Amazon Web Services customer has a JSON or XML file containing data that they wish to use in an EDI file. A mapping template, provided by the customer (in either JSONata or XSLT format), is applied to this file to generate a JSON or XML file in the service-defined structure. This file is then converted to an EDI file. The following fields are provided for backwards compatibility only: fileFormat, mappingTemplate, ediType, and sampleDocument. Use the mapping data type in place of mappingTemplate and fileFormat. Use the sampleDocuments data type in place of sampleDocument. Use either the inputConversion or outputConversion in place of ediType", options: [ { name: "--name", @@ -242,6 +298,21 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--client-token", + description: "Reserved for future use", + args: { + name: "string", + }, + }, + { + name: "--tags", + description: + "Specifies the key-value pairs assigned to ARNs that you can use to group and search for resources by type. You can attach this metadata to resources (capabilities, partnerships, and so on) for any purpose", + args: { + name: "list", + }, + }, { name: "--file-format", description: @@ -253,7 +324,7 @@ const completionSpec: Fig.Spec = { { name: "--mapping-template", description: - "Specifies the mapping template for the transformer. This template is used to map the parsed EDI file using JSONata or XSLT", + "Specifies the mapping template for the transformer. 
This template is used to map the parsed EDI file using JSONata or XSLT. This parameter is available for backwards compatibility. Use the Mapping data type instead", args: { name: "string", }, }, @@ -275,18 +346,35 @@ const completionSpec: Fig.Spec = { }, }, { - name: "--client-token", - description: "Reserved for future use", + name: "--input-conversion", + description: + "Specify the InputConversion object, which contains the format options for the inbound transformation", args: { - name: "string", + name: "structure", }, }, { - name: "--tags", + name: "--mapping", description: - "Specifies the key-value pairs assigned to ARNs that you can use to group and search for resources by type. You can attach this metadata to resources (capabilities, partnerships, and so on) for any purpose", + "Specify the structure that contains the mapping template and its language (either XSLT or JSONATA)", args: { - name: "list", + name: "structure", }, }, + { + name: "--output-conversion", + description: + "A structure that contains the OutputConversion object, which contains the format options for the outbound transformation", + args: { + name: "structure", + }, + }, + { + name: "--sample-documents", + description: + "Specify a structure that contains the Amazon S3 bucket and an array of the corresponding keys used to identify the location for your sample documents", + args: { + name: "structure", }, }, { @@ -407,7 +495,7 @@ const completionSpec: Fig.Spec = { { name: "delete-transformer", description: - "Deletes the specified transformer. A transformer describes how to process the incoming EDI documents and extract the necessary information to the output file", + "Deletes the specified transformer. A transformer can take an EDI file as input and transform it into a JSON- or XML-formatted document. 
Alternatively, a transformer can take a JSON- or XML-formatted document as input and transform it into an EDI file", options: [ { name: "--transformer-id", @@ -535,7 +623,7 @@ const completionSpec: Fig.Spec = { { name: "get-transformer", description: - "Retrieves the details for the transformer specified by the transformer ID. A transformer describes how to process the incoming EDI documents and extract the necessary information to the output file", + "Retrieves the details for the transformer specified by the transformer ID. A transformer can take an EDI file as input and transform it into a JSON- or XML-formatted document. Alternatively, a transformer can take a JSON- or XML-formatted document as input and transform it into an EDI file", options: [ { name: "--transformer-id", @@ -836,7 +924,7 @@ const completionSpec: Fig.Spec = { { name: "list-transformers", description: - "Lists the available transformers. A transformer describes how to process the incoming EDI documents and extract the necessary information to the output file", + "Lists the available transformers. A transformer can take an EDI file as input and transform it into a JSON- or XML-formatted document. Alternatively, a transformer can take a JSON- or XML-formatted document as input and transform it into an EDI file", options: [ { name: "--next-token", @@ -900,7 +988,7 @@ const completionSpec: Fig.Spec = { { name: "start-transformer-job", description: - "Runs a job, using a transformer, to parse input EDI (electronic data interchange) file into the output structures used by Amazon Web Services B2BI Data Interchange. If you only want to transform EDI (electronic data interchange) documents, you don't need to create profiles, partnerships or capabilities. 
Just create and configure a transformer, and then run the StartTransformerJob API to process your files", + "Runs a job, using a transformer, to parse an input EDI (electronic data interchange) file into the output structures used by Amazon Web Services B2B Data Interchange. If you only want to transform EDI (electronic data interchange) documents, you don't need to create profiles, partnerships or capabilities. Just create and configure a transformer, and then run the StartTransformerJob API to process your files", options: [ { name: "--input-file", @@ -992,6 +1080,45 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "test-conversion", + description: + "This operation mimics the latter half of a typical Outbound EDI request. It takes an input JSON/XML in the B2Bi shape, converts it to an X12 EDI string, and returns that string", + options: [ + { + name: "--source", + description: "Specify the source file for an outbound EDI request", + args: { + name: "structure", + }, + }, + { + name: "--target", + description: + "Specify the format (X12 is the only currently supported format), and other details for the conversion target", + args: { + name: "structure", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "test-mapping", description: @@ -1008,7 +1135,7 @@ const completionSpec: Fig.Spec = { { name: "--mapping-template", description: - "Specifies the mapping template for the transformer. This template is used to map the parsed EDI file using JSONata or XSLT", + "Specifies the mapping template for the transformer. This template is used to map the parsed EDI file using JSONata or XSLT. This parameter is available for backwards compatibility. Use the Mapping data type instead", args: { name: "string", }, @@ -1212,6 +1339,14 @@ const completionSpec: Fig.Spec = { name: "list", }, }, + { + name: "--capability-options", + description: + "To update, specify the structure that contains the details for the associated capabilities", + args: { + name: "structure", + }, + }, { name: "--cli-input-json", description: @@ -1296,7 +1431,7 @@ const completionSpec: Fig.Spec = { { name: "update-transformer", description: - "Updates the specified parameters for a transformer. A transformer describes how to process the incoming EDI documents and extract the necessary information to the output file", + "Updates the specified parameters for a transformer. A transformer can take an EDI file as input and transform it into a JSON- or XML-formatted document. Alternatively, a transformer can take a JSON- or XML-formatted document as input and transform it into an EDI file", options: [ { name: "--transformer-id", @@ -1315,25 +1450,25 @@ const completionSpec: Fig.Spec = { }, }, { - name: "--file-format", + name: "--status", description: - "Specifies that the currently supported file formats for EDI transformations are JSON and XML", + "Specifies the transformer's status. 
You can update the state of the transformer, from active to inactive, or inactive to active", args: { name: "string", }, }, { - name: "--mapping-template", + name: "--file-format", description: - "Specifies the mapping template for the transformer. This template is used to map the parsed EDI file using JSONata or XSLT", + "Specifies that the currently supported file formats for EDI transformations are JSON and XML", args: { name: "string", }, }, { - name: "--status", + name: "--mapping-template", description: - "Specifies the transformer's status. You can update the state of the transformer, from active to inactive, or inactive to active", + "Specifies the mapping template for the transformer. This template is used to map the parsed EDI file using JSONata or XSLT. This parameter is available for backwards compatibility. Use the Mapping data type instead", args: { name: "string", }, @@ -1354,6 +1489,38 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--input-conversion", + description: + "To update, specify the InputConversion object, which contains the format options for the inbound transformation", + args: { + name: "structure", + }, + }, + { + name: "--mapping", + description: + "Specify the structure that contains the mapping template and its language (either XSLT or JSONATA)", + args: { + name: "structure", + }, + }, + { + name: "--output-conversion", + description: + "To update, specify the OutputConversion object, which contains the format options for the outbound transformation", + args: { + name: "structure", + }, + }, + { + name: "--sample-documents", + description: + "Specify a structure that contains the Amazon S3 bucket and an array of the corresponding keys used to identify the location for your sample documents", + args: { + name: "structure", + }, + }, { name: "--cli-input-json", description: diff --git a/src/aws/deadline.ts b/src/aws/deadline.ts index bd1a4420291..7d63a67da0d 100644 --- a/src/aws/deadline.ts +++ 
b/src/aws/deadline.ts @@ -926,6 +926,13 @@ const completionSpec: Fig.Spec = { name: "integer", }, }, + { + name: "--source-job-id", + description: "The job ID for the source job", + args: { + name: "string", + }, + }, { name: "--cli-input-json", description: @@ -3300,6 +3307,90 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "list-job-parameter-definitions", + description: "Lists parameter definitions of a job", + options: [ + { + name: "--farm-id", + description: "The farm ID of the job to list", + args: { + name: "string", + }, + }, + { + name: "--job-id", + description: "The job ID to include on the list", + args: { + name: "string", + }, + }, + { + name: "--queue-id", + description: "The queue ID to include on the list", + args: { + name: "string", + }, + }, + { + name: "--next-token", + description: + "The token for the next set of results, or null to start from the beginning", + args: { + name: "string", + }, + }, + { + name: "--max-results", + description: + "The maximum number of results to return. Use this parameter with NextToken to get results as a set of sequential pages", + args: { + name: "integer", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--starting-token", + description: + "A token to specify where to start paginating. This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "string", + }, + }, + { + name: "--page-size", + description: + "The size of each page to get in the AWS service call. 
This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "integer", + }, + }, + { + name: "--max-items", + description: + "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "integer", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "list-jobs", description: "Lists jobs", diff --git a/src/aws/elasticache.ts b/src/aws/elasticache.ts index c59df62fc09..15728125077 100644 --- a/src/aws/elasticache.ts +++ b/src/aws/elasticache.ts @@ -224,12 +224,12 @@ const completionSpec: Fig.Spec = { { name: "copy-serverless-cache-snapshot", description: - "Creates a copy of an existing serverless cache\u2019s snapshot. Available for Redis OSS and Serverless Memcached only", + "Creates a copy of an existing serverless cache\u2019s snapshot. 
Available for Valkey, Redis OSS and Serverless Memcached only", options: [ { name: "--source-serverless-cache-snapshot-name", description: - "The identifier of the existing serverless cache\u2019s snapshot to be copied. Available for Redis OSS and Serverless Memcached only", + "The identifier of the existing serverless cache\u2019s snapshot to be copied. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -237,7 +237,7 @@ const completionSpec: Fig.Spec = { { name: "--target-serverless-cache-snapshot-name", description: - "The identifier for the snapshot to be created. Available for Redis OSS and Serverless Memcached only", + "The identifier for the snapshot to be created. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -245,7 +245,7 @@ const completionSpec: Fig.Spec = { { name: "--kms-key-id", description: - "The identifier of the KMS key used to encrypt the target snapshot. Available for Redis OSS and Serverless Memcached only", + "The identifier of the KMS key used to encrypt the target snapshot. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -253,7 +253,7 @@ const completionSpec: Fig.Spec = { { name: "--tags", description: - "A list of tags to be added to the target snapshot resource. A tag is a key-value pair. Available for Redis OSS and Serverless Memcached only. Default: NULL", + "A list of tags to be added to the target snapshot resource. A tag is a key-value pair. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL", args: { name: "list", }, @@ -280,7 +280,7 @@ const completionSpec: Fig.Spec = { { name: "copy-snapshot", description: - "Makes a copy of an existing snapshot. This operation is valid for Redis OSS only. Users or groups that have permissions to use the CopySnapshot operation can create their own Amazon S3 buckets and copy snapshots to it. 
To control access to your snapshots, use an IAM policy to control who has the ability to use the CopySnapshot operation. For more information about using IAM to control the use of ElastiCache operations, see Exporting Snapshots and Authentication & Access Control. You could receive the following error messages. Error Messages Error Message: The S3 bucket %s is outside of the region. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The S3 bucket %s does not exist. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The S3 bucket %s is not owned by the authenticated user. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The authenticated user does not have sufficient permissions to perform the desired activity. Solution: Contact your system administrator to get the needed permissions. Error Message: The S3 bucket %s already contains an object with key %s. Solution: Give the TargetSnapshotName a new and unique value. If exporting a snapshot, you could alternatively create a new Amazon S3 bucket and use this same value for TargetSnapshotName. Error Message: ElastiCache has not been granted READ permissions %s on the S3 Bucket. Solution: Add List and Read permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide. Error Message: ElastiCache has not been granted WRITE permissions %s on the S3 Bucket. Solution: Add Upload/Delete permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide. 
Error Message: ElastiCache has not been granted READ_ACP permissions %s on the S3 Bucket. Solution: Add View Permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide", + "Makes a copy of an existing snapshot. This operation is valid for Valkey or Redis OSS only. Users or groups that have permissions to use the CopySnapshot operation can create their own Amazon S3 buckets and copy snapshots to it. To control access to your snapshots, use an IAM policy to control who has the ability to use the CopySnapshot operation. For more information about using IAM to control the use of ElastiCache operations, see Exporting Snapshots and Authentication & Access Control. You could receive the following error messages. Error Messages Error Message: The S3 bucket %s is outside of the region. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The S3 bucket %s does not exist. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The S3 bucket %s is not owned by the authenticated user. Solution: Create an Amazon S3 bucket in the same region as your snapshot. For more information, see Step 1: Create an Amazon S3 Bucket in the ElastiCache User Guide. Error Message: The authenticated user does not have sufficient permissions to perform the desired activity. Solution: Contact your system administrator to get the needed permissions. Error Message: The S3 bucket %s already contains an object with key %s. Solution: Give the TargetSnapshotName a new and unique value. If exporting a snapshot, you could alternatively create a new Amazon S3 bucket and use this same value for TargetSnapshotName. 
Error Message: ElastiCache has not been granted READ permissions %s on the S3 Bucket. Solution: Add List and Read permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide. Error Message: ElastiCache has not been granted WRITE permissions %s on the S3 Bucket. Solution: Add Upload/Delete permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide. Error Message: ElastiCache has not been granted READ_ACP permissions %s on the S3 Bucket. Solution: Add View Permissions on the bucket. For more information, see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket in the ElastiCache User Guide", options: [ { name: "--source-snapshot-name", @@ -344,7 +344,7 @@ const completionSpec: Fig.Spec = { { name: "create-cache-cluster", description: - "Creates a cluster. All nodes in the cluster run the same protocol-compliant cache engine software, either Memcached or Redis OSS. This operation is not supported for Redis OSS (cluster mode enabled) clusters", + "Creates a cluster. All nodes in the cluster run the same protocol-compliant cache engine software, either Memcached, Valkey or Redis OSS. This operation is not supported for Valkey or Redis OSS (cluster mode enabled) clusters", options: [ { name: "--cache-cluster-id", @@ -389,7 +389,7 @@ const completionSpec: Fig.Spec = { { name: "--num-cache-nodes", description: - "The initial number of cache nodes that the cluster has. For clusters running Redis OSS, this value must be 1. For clusters running Memcached, this value must be between 1 and 40. If you need more than 40 nodes for your Memcached cluster, please fill out the ElastiCache Limit Increase Request form at http://aws.amazon.com/contact-us/elasticache-node-limit-request/", + "The initial number of cache nodes that the cluster has. For clusters running Valkey or Redis OSS, this value must be 1. 
For clusters running Memcached, this value must be between 1 and 40. If you need more than 40 nodes for your Memcached cluster, please fill out the ElastiCache Limit Increase Request form at http://aws.amazon.com/contact-us/elasticache-node-limit-request/", args: { name: "integer", }, @@ -397,7 +397,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-node-type", description: - "The compute and memory capacity of the nodes in the node group (shard). The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. Redis OSS configuration variables appendonly and appendfsync are not supported on Redis OSS version 2.8.22 and later", + "The compute and memory capacity of the nodes in the node group (shard). The following node types are supported by ElastiCache. 
Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later", args: { name: "string", }, @@ -460,7 +460,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-arns", description: - "A single-element string list containing an Amazon Resource Name (ARN) that uniquely identifies a Redis OSS RDB snapshot file stored in Amazon S3. The snapshot file is used to populate the node group (shard). The Amazon S3 object name in the ARN cannot contain any commas. This parameter is only valid if the Engine parameter is redis. 
Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb", + "A single-element string list containing an Amazon Resource Name (ARN) that uniquely identifies a Valkey or Redis OSS RDB snapshot file stored in Amazon S3. The snapshot file is used to populate the node group (shard). The Amazon S3 object name in the ARN cannot contain any commas. This parameter is only valid if the Engine parameter is redis. Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb", args: { name: "list", }, @@ -468,7 +468,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-name", description: - "The name of a Redis OSS snapshot from which to restore data into the new node group (shard). The snapshot status changes to restoring while the new node group (shard) is being created. This parameter is only valid if the Engine parameter is redis", + "The name of a Valkey or Redis OSS snapshot from which to restore data into the new node group (shard). The snapshot status changes to restoring while the new node group (shard) is being created. This parameter is only valid if the Engine parameter is redis", args: { name: "string", }, @@ -500,12 +500,12 @@ const completionSpec: Fig.Spec = { { name: "--auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--no-auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. 
This parameter is disabled for previous versions", + "If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--snapshot-retention-limit", @@ -573,7 +573,7 @@ const completionSpec: Fig.Spec = { { name: "--network-type", description: - "Must be either ipv4 | ipv6 | dual_stack. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "Must be either ipv4 | ipv6 | dual_stack. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -581,7 +581,7 @@ const completionSpec: Fig.Spec = { { name: "--ip-discovery", description: - "The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -764,7 +764,7 @@ const completionSpec: Fig.Spec = { { name: "create-global-replication-group", description: - "Global Datastore for Redis OSS offers fully managed, fast, reliable and secure cross-region replication. Using Global Datastore for Redis OSS, you can create cross-region read replica clusters for ElastiCache (Redis OSS) to enable low-latency reads and disaster recovery across regions. For more information, see Replication Across Regions Using Global Datastore. 
The GlobalReplicationGroupIdSuffix is the name of the Global datastore. The PrimaryReplicationGroupId represents the name of the primary cluster that accepts writes and will replicate updates to the secondary cluster", + "Global Datastore offers fully managed, fast, reliable and secure cross-region replication. Using Global Datastore with Valkey or Redis OSS, you can create cross-region read replica clusters for ElastiCache to enable low-latency reads and disaster recovery across regions. For more information, see Replication Across Regions Using Global Datastore. The GlobalReplicationGroupIdSuffix is the name of the Global datastore. The PrimaryReplicationGroupId represents the name of the primary cluster that accepts writes and will replicate updates to the secondary cluster", options: [ { name: "--global-replication-group-id-suffix", @@ -811,7 +811,7 @@ const completionSpec: Fig.Spec = { { name: "create-replication-group", description: - "Creates a Redis OSS (cluster mode disabled) or a Redis OSS (cluster mode enabled) replication group. This API can be used to create a standalone regional replication group or a secondary replication group associated with a Global datastore. A Redis OSS (cluster mode disabled) replication group is a collection of nodes, where one of the nodes is a read/write primary and the others are read-only replicas. Writes to the primary are asynchronously propagated to the replicas. A Redis OSS cluster-mode enabled cluster is comprised of from 1 to 90 shards (API/CLI: node groups). Each shard has a primary node and up to 5 read-only replica nodes. The configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number or replicas allowed. The node or shard limit can be increased to a maximum of 500 per cluster if the Redis OSS engine version is 5.0.6 or higher. 
For example, you can choose to configure a 500 node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include the subnets in the subnet group have too small a CIDR range or the subnets are shared and heavily used by other clusters. For more information, see Creating a Subnet Group. For versions below 5.0.6, the limit is 250 per cluster. To request a limit increase, see Amazon Service Limits and choose the limit type Nodes per cluster per instance type. When a Redis OSS (cluster mode disabled) replication group has been successfully created, you can add one or more read replicas to it, up to a total of 5 read replicas. If you need to increase or decrease the number of node groups (console: shards), you can use ElastiCache (Redis OSS) scaling. For more information, see Scaling ElastiCache (Redis OSS) Clusters in the ElastiCache User Guide. This operation is valid for Redis OSS only", + "Creates a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group. This API can be used to create a standalone regional replication group or a secondary replication group associated with a Global datastore. A Valkey or Redis OSS (cluster mode disabled) replication group is a collection of nodes, where one of the nodes is a read/write primary and the others are read-only replicas. Writes to the primary are asynchronously propagated to the replicas. A Valkey or Redis OSS cluster-mode enabled cluster is comprised of from 1 to 90 shards (API/CLI: node groups). Each shard has a primary node and up to 5 read-only replica nodes. The configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed. 
The node or shard limit can be increased to a maximum of 500 per cluster if the Valkey or Redis OSS engine version is 5.0.6 or higher. For example, you can choose to configure a 500 node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include the subnets in the subnet group have too small a CIDR range or the subnets are shared and heavily used by other clusters. For more information, see Creating a Subnet Group. For versions below 5.0.6, the limit is 250 per cluster. To request a limit increase, see Amazon Service Limits and choose the limit type Nodes per cluster per instance type. When a Valkey or Redis OSS (cluster mode disabled) replication group has been successfully created, you can add one or more read replicas to it, up to a total of 5 read replicas. If you need to increase or decrease the number of node groups (console: shards), you can use scaling. For more information, see Scaling self-designed clusters in the ElastiCache User Guide. This operation is valid for Valkey and Redis OSS only", options: [ { name: "--replication-group-id", @@ -846,12 +846,12 @@ const completionSpec: Fig.Spec = { { name: "--automatic-failover-enabled", description: - "Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails. AutomaticFailoverEnabled must be enabled for Redis OSS (cluster mode enabled) replication groups. Default: false", + "Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails. AutomaticFailoverEnabled must be enabled for Valkey or Redis OSS (cluster mode enabled) replication groups. 
Default: false", }, { name: "--no-automatic-failover-enabled", description: - "Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails. AutomaticFailoverEnabled must be enabled for Redis OSS (cluster mode enabled) replication groups. Default: false", + "Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails. AutomaticFailoverEnabled must be enabled for Valkey or Redis OSS (cluster mode enabled) replication groups. Default: false", }, { name: "--multi-az-enabled", @@ -882,7 +882,7 @@ const completionSpec: Fig.Spec = { { name: "--num-node-groups", description: - "An optional parameter that specifies the number of node groups (shards) for this Redis OSS (cluster mode enabled) replication group. For Redis OSS (cluster mode disabled) either omit this parameter or set it to 1. Default: 1", + "An optional parameter that specifies the number of node groups (shards) for this Valkey or Redis OSS (cluster mode enabled) replication group. For Valkey or Redis OSS (cluster mode disabled) either omit this parameter or set it to 1. Default: 1", args: { name: "integer", }, @@ -898,7 +898,7 @@ const completionSpec: Fig.Spec = { { name: "--node-group-configuration", description: - "A list of node group (shard) configuration options. Each node group (shard) configuration has the following members: PrimaryAvailabilityZone, ReplicaAvailabilityZones, ReplicaCount, and Slots. If you're creating a Redis OSS (cluster mode disabled) or a Redis OSS (cluster mode enabled) replication group, you can use this parameter to individually configure each node group (shard), or you can omit this parameter. However, it is required when seeding a Redis OSS (cluster mode enabled) cluster from a S3 rdb file. You must configure each node group (shard) using this parameter because you must specify the slots for each node group", + "A list of node group (shard) configuration options. 
Each node group (shard) configuration has the following members: PrimaryAvailabilityZone, ReplicaAvailabilityZones, ReplicaCount, and Slots. If you're creating a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group, you can use this parameter to individually configure each node group (shard), or you can omit this parameter. However, it is required when seeding a Valkey or Redis OSS (cluster mode enabled) cluster from an S3 rdb file. You must configure each node group (shard) using this parameter because you must specify the slots for each node group", args: { name: "list", }, @@ -906,7 +906,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-node-type", description: - "The compute and memory capacity of the nodes in the node group (shard). The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. Redis OSS configuration variables appendonly and appendfsync are not supported on Redis OSS version 2.8.22 and later", + "The compute and memory capacity of the nodes in the node group (shard). The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later", args: { name: "string", }, @@ -930,7 +930,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-parameter-group-name", description: - "The name of the parameter group to associate with this replication group. If this argument is omitted, the default cache parameter group for the specified engine is used. If you are running Redis OSS version 3.2.4 or later, only one node group (shard), and want to use a default parameter group, we recommend that you specify the parameter group by name. 
To create a Redis OSS (cluster mode disabled) replication group, use CacheParameterGroupName=default.redis3.2. To create a Redis OSS (cluster mode enabled) replication group, use CacheParameterGroupName=default.redis3.2.cluster.on", + "The name of the parameter group to associate with this replication group. If this argument is omitted, the default cache parameter group for the specified engine is used. If you are running Valkey or Redis OSS version 3.2.4 or later, only one node group (shard), and want to use a default parameter group, we recommend that you specify the parameter group by name. To create a Valkey or Redis OSS (cluster mode disabled) replication group, use CacheParameterGroupName=default.redis3.2. To create a Valkey or Redis OSS (cluster mode enabled) replication group, use CacheParameterGroupName=default.redis3.2.cluster.on", args: { name: "string", }, @@ -970,7 +970,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-arns", description: - "A list of Amazon Resource Names (ARN) that uniquely identify the Redis OSS RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. The new replication group will have the number of node groups (console: shards) specified by the parameter NumNodeGroups or the number of node groups configured by NodeGroupConfiguration regardless of the number of ARNs specified here. Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb", + "A list of Amazon Resource Names (ARN) that uniquely identify the Valkey or Redis OSS RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. 
The new replication group will have the number of node groups (console: shards) specified by the parameter NumNodeGroups or the number of node groups configured by NodeGroupConfiguration regardless of the number of ARNs specified here. Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb", args: { name: "list", }, @@ -1010,12 +1010,12 @@ const completionSpec: Fig.Spec = { { name: "--auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--no-auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey 7.2 and above or Redis OSS engine version 6.0 and above, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--snapshot-retention-limit", @@ -1096,7 +1096,7 @@ const completionSpec: Fig.Spec = { { name: "--network-type", description: - "Must be either ipv4 | ipv6 | dual_stack. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "Must be either ipv4 | ipv6 | dual_stack. 
IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -1104,7 +1104,7 @@ const completionSpec: Fig.Spec = { { name: "--ip-discovery", description: - "The network type you choose when creating a replication group, either ipv4 | ipv6. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "The network type you choose when creating a replication group, either ipv4 | ipv6. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -1112,7 +1112,7 @@ const completionSpec: Fig.Spec = { { name: "--transit-encryption-mode", description: - "A setting that allows you to migrate your clients to use in-transit encryption, with no downtime. When setting TransitEncryptionEnabled to true, you can set your TransitEncryptionMode to preferred in the same request, to allow both encrypted and unencrypted connections at the same time. Once you migrate all your Redis OSS clients to use encrypted connections you can modify the value to required to allow encrypted connections only. Setting TransitEncryptionMode to required is a two-step process that requires you to first set the TransitEncryptionMode to preferred, after that you can set TransitEncryptionMode to required. This process will not trigger the replacement of the replication group", + "A setting that allows you to migrate your clients to use in-transit encryption, with no downtime. When setting TransitEncryptionEnabled to true, you can set your TransitEncryptionMode to preferred in the same request, to allow both encrypted and unencrypted connections at the same time. 
Once you migrate all your Valkey or Redis OSS clients to use encrypted connections you can modify the value to required to allow encrypted connections only. Setting TransitEncryptionMode to required is a two-step process that requires you to first set the TransitEncryptionMode to preferred, after that you can set TransitEncryptionMode to required. This process will not trigger the replacement of the replication group", args: { name: "string", }, @@ -1120,7 +1120,7 @@ const completionSpec: Fig.Spec = { { name: "--cluster-mode", description: - "Enabled or Disabled. To modify cluster mode from Disabled to Enabled, you must first set the cluster mode to Compatible. Compatible mode allows your Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. After you migrate all Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to Enabled", + "Enabled or Disabled. To modify cluster mode from Disabled to Enabled, you must first set the cluster mode to Compatible. Compatible mode allows your Valkey or Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. After you migrate all Valkey or Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to Enabled", args: { name: "string", }, @@ -1128,7 +1128,7 @@ const completionSpec: Fig.Spec = { { name: "--serverless-cache-snapshot-name", description: - "The name of the snapshot used to create a replication group. Available for Redis OSS only", + "The name of the snapshot used to create a replication group. Available for Valkey, Redis OSS only", args: { name: "string", }, @@ -1223,7 +1223,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-arns-to-restore", description: - "The ARN(s) of the snapshot that the new serverless cache will be created from. 
Available for Redis OSS and Serverless Memcached only", + "The ARN(s) of the snapshot that the new serverless cache will be created from. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "list", }, @@ -1239,7 +1239,7 @@ const completionSpec: Fig.Spec = { { name: "--user-group-id", description: - "The identifier of the UserGroup to be associated with the serverless cache. Available for Redis OSS only. Default is NULL", + "The identifier of the UserGroup to be associated with the serverless cache. Available for Valkey and Redis OSS only. Default is NULL", args: { name: "string", }, @@ -1255,7 +1255,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-retention-limit", description: - "The number of snapshots that will be retained for the serverless cache that is being created. As new snapshots beyond this limit are added, the oldest snapshots will be deleted on a rolling basis. Available for Redis OSS and Serverless Memcached only", + "The number of snapshots that will be retained for the serverless cache that is being created. As new snapshots beyond this limit are added, the oldest snapshots will be deleted on a rolling basis. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "integer", }, @@ -1263,7 +1263,7 @@ const completionSpec: Fig.Spec = { { name: "--daily-snapshot-time", description: - "The daily time that snapshots will be created from the new serverless cache. By default this number is populated with 0, i.e. no snapshots will be created on an automatic daily basis. Available for Redis OSS and Serverless Memcached only", + "The daily time that snapshots will be created from the new serverless cache. By default this number is populated with 0, i.e. no snapshots will be created on an automatic daily basis. 
Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -1290,12 +1290,12 @@ const completionSpec: Fig.Spec = { { name: "create-serverless-cache-snapshot", description: - "This API creates a copy of an entire ServerlessCache at a specific moment in time. Available for Redis OSS and Serverless Memcached only", + "This API creates a copy of an entire ServerlessCache at a specific moment in time. Available for Valkey, Redis OSS and Serverless Memcached only", options: [ { name: "--serverless-cache-snapshot-name", description: - "The name for the snapshot being created. Must be unique for the customer account. Available for Redis OSS and Serverless Memcached only. Must be between 1 and 255 characters", + "The name for the snapshot being created. Must be unique for the customer account. Available for Valkey, Redis OSS and Serverless Memcached only. Must be between 1 and 255 characters", args: { name: "string", }, @@ -1303,7 +1303,7 @@ const completionSpec: Fig.Spec = { { name: "--serverless-cache-name", description: - "The name of an existing serverless cache. The snapshot is created from this cache. Available for Redis OSS and Serverless Memcached only", + "The name of an existing serverless cache. The snapshot is created from this cache. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -1311,7 +1311,7 @@ const completionSpec: Fig.Spec = { { name: "--kms-key-id", description: - "The ID of the KMS key used to encrypt the snapshot. Available for Redis OSS and Serverless Memcached only. Default: NULL", + "The ID of the KMS key used to encrypt the snapshot. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL", args: { name: "string", }, @@ -1319,7 +1319,7 @@ const completionSpec: Fig.Spec = { { name: "--tags", description: - "A list of tags to be added to the snapshot resource. A tag is a key-value pair. 
Available for Redis OSS and Serverless Memcached only", + "A list of tags to be added to the snapshot resource. A tag is a key-value pair. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "list", }, @@ -1346,7 +1346,7 @@ const completionSpec: Fig.Spec = { { name: "create-snapshot", description: - "Creates a copy of an entire cluster or replication group at a specific moment in time. This operation is valid for Redis OSS only", + "Creates a copy of an entire cluster or replication group at a specific moment in time. This operation is valid for Valkey or Redis OSS only", options: [ { name: "--replication-group-id", @@ -1408,7 +1408,7 @@ const completionSpec: Fig.Spec = { { name: "create-user", description: - "For Redis OSS engine version 6.0 onwards: Creates a Redis OSS user. For more information, see Using Role Based Access Control (RBAC)", + "For Valkey engine version 7.2 onwards and Redis OSS 6.0 and onwards: Creates a user. For more information, see Using Role Based Access Control (RBAC)", options: [ { name: "--user-id", @@ -1491,7 +1491,7 @@ const completionSpec: Fig.Spec = { { name: "create-user-group", description: - "For Redis OSS engine version 6.0 onwards: Creates a Redis OSS user group. For more information, see Using Role Based Access Control (RBAC)", + "For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Creates a user group. For more information, see Using Role Based Access Control (RBAC)", options: [ { name: "--user-group-id", @@ -1517,7 +1517,7 @@ const completionSpec: Fig.Spec = { { name: "--tags", description: - "A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted. Available for Redis OSS only", + "A list of tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value, although null is accepted. 
Available for Valkey and Redis OSS only", args: { name: "list", }, @@ -1563,7 +1563,7 @@ const completionSpec: Fig.Spec = { { name: "--global-node-groups-to-remove", description: - "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRemove is a list of NodeGroupIds to remove from the cluster. ElastiCache (Redis OSS) will attempt to remove all node groups listed by GlobalNodeGroupsToRemove from the cluster", + "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRemove is a list of NodeGroupIds to remove from the cluster. ElastiCache will attempt to remove all node groups listed by GlobalNodeGroupsToRemove from the cluster", args: { name: "list", }, @@ -1571,7 +1571,7 @@ const completionSpec: Fig.Spec = { { name: "--global-node-groups-to-retain", description: - "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRetain is a list of NodeGroupIds to retain from the cluster. ElastiCache (Redis OSS) will attempt to retain all node groups listed by GlobalNodeGroupsToRetain from the cluster", + "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. GlobalNodeGroupsToRetain is a list of NodeGroupIds to retain from the cluster. 
ElastiCache will attempt to retain all node groups listed by GlobalNodeGroupsToRetain from the cluster", args: { name: "list", }, @@ -1608,7 +1608,7 @@ const completionSpec: Fig.Spec = { { name: "decrease-replica-count", description: - "Dynamically decreases the number of replicas in a Redis OSS (cluster mode disabled) replication group or the number of replica nodes in one or more node groups (shards) of a Redis OSS (cluster mode enabled) replication group. This operation is performed with no cluster down time", + "Dynamically decreases the number of replicas in a Valkey or Redis OSS (cluster mode disabled) replication group or the number of replica nodes in one or more node groups (shards) of a Valkey or Redis OSS (cluster mode enabled) replication group. This operation is performed with no cluster down time", options: [ { name: "--replication-group-id", @@ -1621,7 +1621,7 @@ const completionSpec: Fig.Spec = { { name: "--new-replica-count", description: - "The number of read replica nodes you want at the completion of this operation. For Redis OSS (cluster mode disabled) replication groups, this is the number of replica nodes in the replication group. For Redis OSS (cluster mode enabled) replication groups, this is the number of replica nodes in each of the replication group's node groups. The minimum number of replicas in a shard or replication group is: Redis OSS (cluster mode disabled) If Multi-AZ is enabled: 1 If Multi-AZ is not enabled: 0 Redis OSS (cluster mode enabled): 0 (though you will not be able to failover to a replica if your primary node fails)", + "The number of read replica nodes you want at the completion of this operation. For Valkey or Redis OSS (cluster mode disabled) replication groups, this is the number of replica nodes in the replication group. For Valkey or Redis OSS (cluster mode enabled) replication groups, this is the number of replica nodes in each of the replication group's node groups. 
The minimum number of replicas in a shard or replication group is: Valkey or Redis OSS (cluster mode disabled) If Multi-AZ is enabled: 1 If Multi-AZ is not enabled: 0 Valkey or Redis OSS (cluster mode enabled): 0 (though you will not be able to failover to a replica if your primary node fails)", args: { name: "integer", }, @@ -1629,7 +1629,7 @@ const completionSpec: Fig.Spec = { { name: "--replica-configuration", description: - "A list of ConfigureShard objects that can be used to configure each shard in a Redis OSS (cluster mode enabled) replication group. The ConfigureShard has three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones", + "A list of ConfigureShard objects that can be used to configure each shard in a Valkey or Redis OSS (cluster mode enabled) replication group. The ConfigureShard has three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones", args: { name: "list", }, @@ -1674,7 +1674,7 @@ const completionSpec: Fig.Spec = { { name: "delete-cache-cluster", description: - "Deletes a previously provisioned cluster. DeleteCacheCluster deletes all associated cache nodes, node endpoints and the cluster itself. When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the cluster; you cannot cancel or revert this operation. This operation is not valid for: Redis OSS (cluster mode enabled) clusters Redis OSS (cluster mode disabled) clusters A cluster that is the last read replica of a replication group A cluster that is the primary node of a replication group A node group (shard) that has Multi-AZ mode enabled A cluster from a Redis OSS (cluster mode enabled) replication group A cluster that is not in the available state", + "Deletes a previously provisioned cluster. DeleteCacheCluster deletes all associated cache nodes, node endpoints and the cluster itself. 
When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the cluster; you cannot cancel or revert this operation. This operation is not valid for: Valkey or Redis OSS (cluster mode enabled) clusters Valkey or Redis OSS (cluster mode disabled) clusters A cluster that is the last read replica of a replication group A cluster that is the primary node of a replication group A node group (shard) that has Multi-AZ mode enabled A cluster from a Valkey or Redis OSS (cluster mode enabled) replication group A cluster that is not in the available state", options: [ { name: "--cache-cluster-id", @@ -1913,7 +1913,7 @@ const completionSpec: Fig.Spec = { { name: "--final-snapshot-name", description: - "Name of the final snapshot to be taken before the serverless cache is deleted. Available for Redis OSS and Serverless Memcached only. Default: NULL, i.e. a final snapshot is not taken", + "Name of the final snapshot to be taken before the serverless cache is deleted. Available for Valkey, Redis OSS and Serverless Memcached only. Default: NULL, i.e. a final snapshot is not taken", args: { name: "string", }, @@ -1940,12 +1940,12 @@ const completionSpec: Fig.Spec = { { name: "delete-serverless-cache-snapshot", description: - "Deletes an existing serverless cache snapshot. Available for Redis OSS and Serverless Memcached only", + "Deletes an existing serverless cache snapshot. Available for Valkey, Redis OSS and Serverless Memcached only", options: [ { name: "--serverless-cache-snapshot-name", description: - "Idenfitier of the snapshot to be deleted. Available for Redis OSS and Serverless Memcached only", + "Identifier of the snapshot to be deleted. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -1972,7 +1972,7 @@ const completionSpec: Fig.Spec = { { name: "delete-snapshot", description: - "Deletes an existing snapshot. 
When you receive a successful response from this operation, ElastiCache immediately begins deleting the snapshot; you cannot cancel or revert this operation. This operation is valid for Redis OSS only", + "Deletes an existing snapshot. When you receive a successful response from this operation, ElastiCache immediately begins deleting the snapshot; you cannot cancel or revert this operation. This operation is valid for Valkey or Redis OSS only", options: [ { name: "--snapshot-name", @@ -2003,7 +2003,7 @@ const completionSpec: Fig.Spec = { { name: "delete-user", description: - "For Redis OSS engine version 6.0 onwards: Deletes a user. The user will be removed from all user groups and in turn removed from all replication groups. For more information, see Using Role Based Access Control (RBAC)", + "For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Deletes a user. The user will be removed from all user groups and in turn removed from all replication groups. For more information, see Using Role Based Access Control (RBAC)", options: [ { name: "--user-id", @@ -2034,7 +2034,7 @@ const completionSpec: Fig.Spec = { { name: "delete-user-group", description: - "For Redis OSS engine version 6.0 onwards: Deletes a user group. The user group must first be disassociated from the replication group before it can be deleted. For more information, see Using Role Based Access Control (RBAC)", + "For Valkey engine version 7.2 onwards and Redis OSS 6.0 onwards: Deletes a user group. The user group must first be disassociated from the replication group before it can be deleted. For more information, see Using Role Based Access Control (RBAC)", options: [ { name: "--user-group-id", @@ -2104,12 +2104,12 @@ const completionSpec: Fig.Spec = { { name: "--show-cache-clusters-not-in-replication-groups", description: - "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. 
In practice, this mean Memcached and single node Redis OSS clusters", + "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters", }, { name: "--no-show-cache-clusters-not-in-replication-groups", description: - "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this mean Memcached and single node Redis OSS clusters", + "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters", }, { name: "--cli-input-json", @@ -2177,7 +2177,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-parameter-group-family", description: - "The name of a specific cache parameter group family to return details for. Valid values are: memcached1.4 | memcached1.5 | memcached1.6 | redis2.6 | redis2.8 | redis3.2 | redis4.0 | redis5.0 | redis6.x | redis6.2 | redis7 Constraints: Must be 1 to 255 alphanumeric characters First character must be a letter Cannot end with a hyphen or contain two consecutive hyphens", + "The name of a specific cache parameter group family to return details for. Valid values are: memcached1.4 | memcached1.5 | memcached1.6 | redis2.6 | redis2.8 | redis3.2 | redis4.0 | redis5.0 | redis6.x | redis6.2 | redis7 | valkey7 Constraints: Must be 1 to 255 alphanumeric characters First character must be a letter Cannot end with a hyphen or contain two consecutive hyphens", args: { name: "string", }, @@ -2806,7 +2806,7 @@ const completionSpec: Fig.Spec = { { name: "describe-replication-groups", description: - "Returns information about a particular replication group. 
If no identifier is specified, DescribeReplicationGroups returns information about all replication groups. This operation is valid for Redis OSS only", + "Returns information about a particular replication group. If no identifier is specified, DescribeReplicationGroups returns information about all replication groups. This operation is valid for Valkey or Redis OSS only", options: [ { name: "--replication-group-id", @@ -2899,7 +2899,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-node-type", description: - "The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type. The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. 
Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. Redis OSS configuration variables appendonly and appendfsync are not supported on Redis OSS version 2.8.22 and later", + "The cache node type filter value. 
Use this parameter to show only those reservations matching the specified cache node type. The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later", args: { name: "string", }, @@ -3002,7 +3002,7 @@ const completionSpec: Fig.Spec = { { name: "--cache-node-type", description: - "The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type. The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. Redis OSS configuration variables appendonly and appendfsync are not supported on Redis OSS version 2.8.22 and later", + "The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type. The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts. 
General purpose: Current generation: M7g node types: cache.m7g.large, cache.m7g.xlarge, cache.m7g.2xlarge, cache.m7g.4xlarge, cache.m7g.8xlarge, cache.m7g.12xlarge, cache.m7g.16xlarge For region availability, see Supported Node Types M6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge M5 node types: cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge M4 node types: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge T4g node types (available only for Redis OSS engine version 5.0.6 onward and Memcached engine version 1.5.16 onward): cache.t4g.micro, cache.t4g.small, cache.t4g.medium T3 node types: cache.t3.micro, cache.t3.small, cache.t3.medium T2 node types: cache.t2.micro, cache.t2.small, cache.t2.medium Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) T1 node types: cache.t1.micro M1 node types: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge M3 node types: cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge Compute optimized: Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) 
C1 node types: cache.c1.xlarge Memory optimized: Current generation: R7g node types: cache.r7g.large, cache.r7g.xlarge, cache.r7g.2xlarge, cache.r7g.4xlarge, cache.r7g.8xlarge, cache.r7g.12xlarge, cache.r7g.16xlarge For region availability, see Supported Node Types R6g node types (available only for Redis OSS engine version 5.0.6 onward and for Memcached engine version 1.5.16 onward): cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge R5 node types: cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge R4 node types: cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge Previous generation: (not recommended. Existing clusters are still supported but creation of new clusters is not supported for these types.) M2 node types: cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge R3 node types: cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge Additional node type info All current generation instance types are created in Amazon VPC by default. Valkey or Redis OSS append-only files (AOF) are not supported for T1 or T2 instances. Valkey or Redis OSS Multi-AZ with automatic failover is not supported on T1 instances. The configuration variables appendonly and appendfsync are not supported on Valkey, or on Redis OSS version 2.8.22 and later", args: { name: "string", }, @@ -3093,12 +3093,12 @@ const completionSpec: Fig.Spec = { { name: "describe-serverless-cache-snapshots", description: - "Returns information about serverless cache snapshots. By default, this API lists all of the customer\u2019s serverless cache snapshots. It can also describe a single serverless cache snapshot, or the snapshots associated with a particular serverless cache. Available for Redis OSS and Serverless Memcached only", + "Returns information about serverless cache snapshots. 
By default, this API lists all of the customer\u2019s serverless cache snapshots. It can also describe a single serverless cache snapshot, or the snapshots associated with a particular serverless cache. Available for Valkey, Redis OSS and Serverless Memcached only", options: [ { name: "--serverless-cache-name", description: - "The identifier of serverless cache. If this parameter is specified, only snapshots associated with that specific serverless cache are described. Available for Redis OSS and Serverless Memcached only", + "The identifier of the serverless cache. If this parameter is specified, only snapshots associated with that specific serverless cache are described. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -3106,7 +3106,7 @@ const completionSpec: Fig.Spec = { { name: "--serverless-cache-snapshot-name", description: - "The identifier of the serverless cache\u2019s snapshot. If this parameter is specified, only this snapshot is described. Available for Redis OSS and Serverless Memcached only", + "The identifier of the serverless cache\u2019s snapshot. If this parameter is specified, only this snapshot is described. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -3114,7 +3114,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-type", description: - "The type of snapshot that is being described. Available for Redis OSS and Serverless Memcached only", + "The type of snapshot that is being described. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -3122,7 +3122,7 @@ const completionSpec: Fig.Spec = { { name: "--next-token", description: - "An optional marker returned from a prior request to support pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by max-results. 
Available for Redis OSS and Serverless Memcached only", + "An optional marker returned from a prior request to support pagination of results from this operation. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by max-results. Available for Valkey, Redis OSS and Serverless Memcached only", args: { name: "string", }, @@ -3130,7 +3130,7 @@ const completionSpec: Fig.Spec = { { name: "--max-results", description: - "The maximum number of records to include in the response. If more records exist than the specified max-results value, a market is included in the response so that remaining results can be retrieved. Available for Redis OSS and Serverless Memcached only.The default is 50. The Validation Constraints are a maximum of 50", + "The maximum number of records to include in the response. If more records exist than the specified max-results value, a marker is included in the response so that remaining results can be retrieved. Available for Valkey, Redis OSS and Serverless Memcached only. The default is 50. The Validation Constraints are a maximum of 50", args: { name: "integer", }, @@ -3330,7 +3330,7 @@ const completionSpec: Fig.Spec = { { name: "describe-snapshots", description: - "Returns information about cluster or replication group snapshots. By default, DescribeSnapshots lists all of your snapshots; it can optionally describe a single snapshot, or just the snapshots associated with a particular cache cluster. This operation is valid for Redis OSS only", + "Returns information about cluster or replication group snapshots. By default, DescribeSnapshots lists all of your snapshots; it can optionally describe a single snapshot, or just the snapshots associated with a particular cache cluster. 
This operation is valid for Valkey or Redis OSS only", options: [ { name: "--replication-group-id", @@ -3461,7 +3461,7 @@ const completionSpec: Fig.Spec = { { name: "--engine", description: - "The Elasticache engine to which the update applies. Either Redis OSS or Memcached", + "The Elasticache engine to which the update applies. Either Valkey, Redis OSS or Memcached", args: { name: "string", }, @@ -3633,7 +3633,7 @@ const completionSpec: Fig.Spec = { options: [ { name: "--engine", - description: "The Redis OSS engine", + description: "The engine", args: { name: "string", }, @@ -3761,12 +3761,12 @@ const completionSpec: Fig.Spec = { { name: "export-serverless-cache-snapshot", description: - "Provides the functionality to export the serverless cache snapshot data to Amazon S3. Available for Redis OSS only", + "Provides the functionality to export the serverless cache snapshot data to Amazon S3. Available for Valkey and Redis OSS only", options: [ { name: "--serverless-cache-snapshot-name", description: - "The identifier of the serverless cache snapshot to be exported to S3. Available for Redis OSS only", + "The identifier of the serverless cache snapshot to be exported to S3. Available for Valkey and Redis OSS only", args: { name: "string", }, @@ -3774,7 +3774,7 @@ const completionSpec: Fig.Spec = { { name: "--s3-bucket-name", description: - "Name of the Amazon S3 bucket to export the snapshot to. The Amazon S3 bucket must also be in same region as the snapshot. Available for Redis OSS only", + "Name of the Amazon S3 bucket to export the snapshot to. The Amazon S3 bucket must also be in the same region as the snapshot. 
Available for Valkey and Redis OSS only", args: { name: "string", }, @@ -3902,7 +3902,7 @@ const completionSpec: Fig.Spec = { { name: "increase-replica-count", description: - "Dynamically increases the number of replicas in a Redis OSS (cluster mode disabled) replication group or the number of replica nodes in one or more node groups (shards) of a Redis OSS (cluster mode enabled) replication group. This operation is performed with no cluster down time", + "Dynamically increases the number of replicas in a Valkey or Redis OSS (cluster mode disabled) replication group or the number of replica nodes in one or more node groups (shards) of a Valkey or Redis OSS (cluster mode enabled) replication group. This operation is performed with no cluster down time", options: [ { name: "--replication-group-id", @@ -3915,7 +3915,7 @@ const completionSpec: Fig.Spec = { { name: "--new-replica-count", description: - "The number of read replica nodes you want at the completion of this operation. For Redis OSS (cluster mode disabled) replication groups, this is the number of replica nodes in the replication group. For Redis OSS (cluster mode enabled) replication groups, this is the number of replica nodes in each of the replication group's node groups", + "The number of read replica nodes you want at the completion of this operation. For Valkey or Redis OSS (cluster mode disabled) replication groups, this is the number of replica nodes in the replication group. For Valkey or Redis OSS (cluster mode enabled) replication groups, this is the number of replica nodes in each of the replication group's node groups", args: { name: "integer", }, @@ -3923,7 +3923,7 @@ const completionSpec: Fig.Spec = { { name: "--replica-configuration", description: - "A list of ConfigureShard objects that can be used to configure each shard in a Redis OSS (cluster mode enabled) replication group. 
The ConfigureShard has three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones", + "A list of ConfigureShard objects that can be used to configure each shard in a Valkey or Redis OSS (cluster mode enabled) replication group. The ConfigureShard has three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones", args: { name: "list", }, @@ -3960,7 +3960,7 @@ const completionSpec: Fig.Spec = { { name: "list-allowed-node-type-modifications", description: - "Lists all available node types that you can scale your Redis OSS cluster's or replication group's current node type. When you use the ModifyCacheCluster or ModifyReplicationGroup operations to scale your cluster or replication group, the value of the CacheNodeType parameter must be one of the node types returned by this operation", + "Lists all available node types to which you can scale your cluster's or replication group's current node type. When you use the ModifyCacheCluster or ModifyReplicationGroup operations to scale your cluster or replication group, the value of the CacheNodeType parameter must be one of the node types returned by this operation", options: [ { name: "--cache-cluster-id", @@ -4045,7 +4045,7 @@ const completionSpec: Fig.Spec = { { name: "--num-cache-nodes", description: - "The number of cache nodes that the cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled. If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove. For clusters running Redis OSS, this value must be 1. For clusters running Memcached, this value must be between 1 and 40. 
Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately). A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cluster", + "The number of cache nodes that the cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled. If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove. For clusters running Valkey or Redis OSS, this value must be 1. For clusters running Memcached, this value must be between 1 and 40. 
Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately). A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cluster", args: { name: "integer", }, @@ -4132,6 +4132,14 @@ const completionSpec: Fig.Spec = { description: "If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cluster. If false, changes to the cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first. If you perform a ModifyCacheCluster before a pending modification is applied, the pending modification is replaced by the newer modification. Valid values: true | false Default: false", }, + { + name: "--engine", + description: + "Modifies the engine listed in a cluster message. 
The options are redis, memcached or valkey", + args: { + name: "string", + }, + }, { name: "--engine-version", description: @@ -4143,12 +4151,12 @@ const completionSpec: Fig.Spec = { { name: "--auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey 7.2 or Redis OSS engine version 6.0 or later, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--no-auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey 7.2 or Redis OSS engine version 6.0 or later, set this parameter to yes to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--snapshot-retention-limit", @@ -4185,7 +4193,7 @@ const completionSpec: Fig.Spec = { { name: "--auth-token-update-strategy", description: - "Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. Possible values: ROTATE - default, if no update strategy is provided SET - allowed only after ROTATE DELETE - allowed only when transitioning to RBAC For more information, see Authenticating Users with Redis OSS AUTH", + "Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. 
Possible values: ROTATE - default, if no update strategy is provided SET - allowed only after ROTATE DELETE - allowed only when transitioning to RBAC For more information, see Authenticating Users with AUTH", args: { name: "string", }, @@ -4200,7 +4208,7 @@ const completionSpec: Fig.Spec = { { name: "--ip-discovery", description: - "The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -4337,6 +4345,14 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--engine", + description: + "Modifies the engine listed in a global replication group message. The options are redis, memcached or valkey", + args: { + name: "string", + }, + }, { name: "--engine-version", description: @@ -4392,7 +4408,7 @@ const completionSpec: Fig.Spec = { { name: "modify-replication-group", description: - "Modifies the settings for a replication group. This is limited to Redis OSS 7 and newer. Scaling for Amazon ElastiCache (Redis OSS) (cluster mode enabled) in the ElastiCache User Guide ModifyReplicationGroupShardConfiguration in the ElastiCache API Reference This operation is valid for Redis OSS only", + "Modifies the settings for a replication group. This is limited to Valkey and Redis OSS 7 and above. 
Scaling for Valkey or Redis OSS (cluster mode enabled) in the ElastiCache User Guide ModifyReplicationGroupShardConfiguration in the ElastiCache API Reference This operation is valid for Valkey or Redis OSS only", options: [ { name: "--replication-group-id", @@ -4420,7 +4436,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshotting-cluster-id", description: - "The cluster ID that is used as the daily snapshot source for the replication group. This parameter cannot be set for Redis OSS (cluster mode enabled) replication groups", + "The cluster ID that is used as the daily snapshot source for the replication group. This parameter cannot be set for Valkey or Redis OSS (cluster mode enabled) replication groups", args: { name: "string", }, @@ -4508,6 +4524,14 @@ const completionSpec: Fig.Spec = { description: "If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the replication group. If false, changes to the nodes in the replication group are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first. Valid values: true | false Default: false", }, + { + name: "--engine", + description: + "Modifies the engine listed in a replication group message. The options are redis, memcached or valkey", + args: { + name: "string", + }, + }, { name: "--engine-version", description: @@ -4519,12 +4543,12 @@ const completionSpec: Fig.Spec = { { name: "--auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey or Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. 
This parameter is disabled for previous versions", }, { name: "--no-auto-minor-version-upgrade", description: - "If you are running Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", + "If you are running Valkey or Redis OSS engine version 6.0 or later, set this parameter to yes if you want to opt-in to the next auto minor version upgrade campaign. This parameter is disabled for previous versions", }, { name: "--snapshot-retention-limit", @@ -4561,7 +4585,7 @@ const completionSpec: Fig.Spec = { { name: "--auth-token-update-strategy", description: - "Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. Possible values: ROTATE - default, if no update strategy is provided SET - allowed only after ROTATE DELETE - allowed only when transitioning to RBAC For more information, see Authenticating Users with Redis OSS AUTH", + "Specifies the strategy to use to update the AUTH token. This parameter must be specified with the auth-token parameter. Possible values: ROTATE - default, if no update strategy is provided SET - allowed only after ROTATE DELETE - allowed only when transitioning to RBAC For more information, see Authenticating Users with AUTH", args: { name: "string", }, @@ -4602,7 +4626,7 @@ const completionSpec: Fig.Spec = { { name: "--ip-discovery", description: - "The network type you choose when modifying a cluster, either ipv4 | ipv6. IPv6 is supported for workloads using Redis OSS engine version 6.2 onward or Memcached engine version 1.6.6 on all instances built on the Nitro system", + "The network type you choose when modifying a cluster, either ipv4 | ipv6. 
IPv6 is supported for workloads using Valkey 7.2 and above, Redis OSS engine version 6.2 and above or Memcached engine version 1.6.6 and above on all instances built on the Nitro system", args: { name: "string", }, @@ -4620,7 +4644,7 @@ const completionSpec: Fig.Spec = { { name: "--transit-encryption-mode", description: - "A setting that allows you to migrate your clients to use in-transit encryption, with no downtime. You must set TransitEncryptionEnabled to true, for your existing cluster, and set TransitEncryptionMode to preferred in the same request to allow both encrypted and unencrypted connections at the same time. Once you migrate all your Redis OSS clients to use encrypted connections you can set the value to required to allow encrypted connections only. Setting TransitEncryptionMode to required is a two-step process that requires you to first set the TransitEncryptionMode to preferred, after that you can set TransitEncryptionMode to required", + "A setting that allows you to migrate your clients to use in-transit encryption, with no downtime. You must set TransitEncryptionEnabled to true, for your existing cluster, and set TransitEncryptionMode to preferred in the same request to allow both encrypted and unencrypted connections at the same time. Once you migrate all your Valkey or Redis OSS clients to use encrypted connections you can set the value to required to allow encrypted connections only. Setting TransitEncryptionMode to required is a two-step process that requires you to first set the TransitEncryptionMode to preferred, after that you can set TransitEncryptionMode to required", args: { name: "string", }, @@ -4628,7 +4652,7 @@ const completionSpec: Fig.Spec = { { name: "--cluster-mode", description: - "Enabled or Disabled. To modify cluster mode from Disabled to Enabled, you must first set the cluster mode to Compatible. Compatible mode allows your Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. 
After you migrate all Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to Enabled", + "Enabled or Disabled. To modify cluster mode from Disabled to Enabled, you must first set the cluster mode to Compatible. Compatible mode allows your Valkey or Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. After you migrate all Valkey or Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to Enabled", args: { name: "string", }, @@ -4660,7 +4684,7 @@ const completionSpec: Fig.Spec = { { name: "--replication-group-id", description: - "The name of the Redis OSS (cluster mode enabled) cluster (replication group) on which the shards are to be configured", + "The name of the Valkey or Redis OSS (cluster mode enabled) cluster (replication group) on which the shards are to be configured", args: { name: "string", }, @@ -4694,7 +4718,7 @@ const completionSpec: Fig.Spec = { { name: "--node-groups-to-remove", description: - "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. NodeGroupsToRemove is a list of NodeGroupIds to remove from the cluster. ElastiCache (Redis OSS) will attempt to remove all node groups listed by NodeGroupsToRemove from the cluster", + "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. NodeGroupsToRemove is a list of NodeGroupIds to remove from the cluster. 
ElastiCache will attempt to remove all node groups listed by NodeGroupsToRemove from the cluster", args: { name: "list", }, @@ -4702,7 +4726,7 @@ const completionSpec: Fig.Spec = { { name: "--node-groups-to-retain", description: - "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. NodeGroupsToRetain is a list of NodeGroupIds to retain in the cluster. ElastiCache (Redis OSS) will attempt to remove all node groups except those listed by NodeGroupsToRetain from the cluster", + "If the value of NodeGroupCount is less than the current number of node groups (shards), then either NodeGroupsToRemove or NodeGroupsToRetain is required. NodeGroupsToRetain is a list of NodeGroupIds to retain in the cluster. ElastiCache will attempt to remove all node groups except those listed by NodeGroupsToRetain from the cluster", args: { name: "list", }, @@ -4756,17 +4780,17 @@ const completionSpec: Fig.Spec = { { name: "--remove-user-group", description: - "The identifier of the UserGroup to be removed from association with the Redis OSS serverless cache. Available for Redis OSS only. Default is NULL", + "The identifier of the UserGroup to be removed from association with the Valkey and Redis OSS serverless cache. Available for Valkey and Redis OSS only. Default is NULL", }, { name: "--no-remove-user-group", description: - "The identifier of the UserGroup to be removed from association with the Redis OSS serverless cache. Available for Redis OSS only. Default is NULL", + "The identifier of the UserGroup to be removed from association with the Valkey and Redis OSS serverless cache. Available for Valkey and Redis OSS only. Default is NULL", }, { name: "--user-group-id", description: - "The identifier of the UserGroup to be associated with the serverless cache. Available for Redis OSS only. 
Default is NULL - the existing UserGroup is not removed", + "The identifier of the UserGroup to be associated with the serverless cache. Available for Valkey and Redis OSS only. Default is NULL - the existing UserGroup is not removed", args: { name: "string", }, @@ -4782,7 +4806,7 @@ const completionSpec: Fig.Spec = { { name: "--snapshot-retention-limit", description: - "The number of days for which Elasticache retains automatic snapshots before deleting them. Available for Redis OSS and Serverless Memcached only. Default = NULL, i.e. the existing snapshot-retention-limit will not be removed or modified. The maximum value allowed is 35 days", + "The number of days for which Elasticache retains automatic snapshots before deleting them. Available for Valkey, Redis OSS and Serverless Memcached only. Default = NULL, i.e. the existing snapshot-retention-limit will not be removed or modified. The maximum value allowed is 35 days", args: { name: "integer", }, @@ -4790,7 +4814,23 @@ const completionSpec: Fig.Spec = { { name: "--daily-snapshot-time", description: - "The daily time during which Elasticache begins taking a daily snapshot of the serverless cache. Available for Redis OSS and Serverless Memcached only. The default is NULL, i.e. the existing snapshot time configured for the cluster is not removed", + "The daily time during which Elasticache begins taking a daily snapshot of the serverless cache. Available for Valkey, Redis OSS and Serverless Memcached only. The default is NULL, i.e. the existing snapshot time configured for the cluster is not removed", + args: { + name: "string", + }, + }, + { + name: "--engine", + description: + "Modifies the engine listed in a serverless cache request. 
The options are redis, memcached or valkey", + args: { + name: "string", + }, + }, + { + name: "--major-engine-version", + description: + "Modifies the engine version listed in a serverless cache request", args: { name: "string", }, @@ -4928,7 +4968,7 @@ const completionSpec: Fig.Spec = { { name: "purchase-reserved-cache-nodes-offering", description: - "Allows you to purchase a reserved cache node offering. Reserved nodes are not eligible for cancellation and are non-refundable. For more information, see Managing Costs with Reserved Nodes for Redis OSS or Managing Costs with Reserved Nodes for Memcached", + "Allows you to purchase a reserved cache node offering. Reserved nodes are not eligible for cancellation and are non-refundable. For more information, see Managing Costs with Reserved Nodes", options: [ { name: "--reserved-cache-nodes-offering-id", @@ -5023,7 +5063,7 @@ const completionSpec: Fig.Spec = { { name: "reboot-cache-cluster", description: - "Reboots some, or all, of the cache nodes within a provisioned cluster. This operation applies any modified cache parameter groups to the cluster. 
The reboot operation takes place as soon as possible, and results in a momentary outage to the cluster. During the reboot, the cluster status is set to REBOOTING. The reboot causes the contents of the cache (for each cache node being rebooted) to be lost. When the reboot is complete, a cluster event is created. Rebooting a cluster is currently supported on Memcached, Valkey and Redis OSS (cluster mode disabled) clusters. Rebooting is not supported on Valkey or Redis OSS (cluster mode enabled) clusters. If you make changes to parameters that require a Valkey or Redis OSS (cluster mode enabled) cluster reboot for the changes to be applied, see Rebooting a Cluster for an alternate process", options: [ { name: "--cache-cluster-id", @@ -5212,7 +5252,7 @@ const completionSpec: Fig.Spec = { { name: "--customer-node-endpoint-list", description: - "List of endpoints from which data should be migrated. For Redis OSS (cluster mode disabled), list should have only one element", + "List of endpoints from which data should be migrated. For Valkey or Redis OSS (cluster mode disabled), the list should have only one element", args: { name: "list", }, @@ -5239,7 +5279,7 @@ const completionSpec: Fig.Spec = { { name: "test-failover", description: - "Represents the input of a TestFailover operation which tests automatic failover on a specified node group (called shard in the console) in a replication group (called cluster in the console). This API is designed for testing the behavior of your application in case of ElastiCache failover. It is not designed to be an operational tool for initiating a failover to overcome a problem you may have with the cluster. Moreover, in certain conditions such as large-scale operational events, Amazon may block this API. Note the following A customer can use this operation to test automatic failover on up to 15 shards (called node groups in the ElastiCache API and Amazon CLI) in any rolling 24-hour period. 
If calling this operation on shards in different clusters (called replication groups in the API and CLI), the calls can be made concurrently. If calling this operation multiple times on different shards in the same Redis OSS (cluster mode enabled) replication group, the first node replacement must complete before a subsequent call can be made. To determine whether the node replacement is complete you can check Events using the Amazon ElastiCache console, the Amazon CLI, or the ElastiCache API. Look for the following automatic failover related events, listed here in order of occurrance: Replication group message: Test Failover API called for node group Cache cluster message: Failover from primary node to replica node completed Replication group message: Failover from primary node to replica node completed Cache cluster message: Recovering cache nodes Cache cluster message: Finished recovery for cache nodes For more information see: Viewing ElastiCache Events in the ElastiCache User Guide DescribeEvents in the ElastiCache API Reference Also see, Testing Multi-AZ in the ElastiCache User Guide", + "Represents the input of a TestFailover operation which tests automatic failover on a specified node group (called shard in the console) in a replication group (called cluster in the console). This API is designed for testing the behavior of your application in case of ElastiCache failover. It is not designed to be an operational tool for initiating a failover to overcome a problem you may have with the cluster. Moreover, in certain conditions such as large-scale operational events, Amazon may block this API. Note the following A customer can use this operation to test automatic failover on up to 15 shards (called node groups in the ElastiCache API and Amazon CLI) in any rolling 24-hour period. If calling this operation on shards in different clusters (called replication groups in the API and CLI), the calls can be made concurrently. 
"If calling this operation multiple times on different shards in the same Valkey or Redis OSS (cluster mode enabled) replication group, the first node replacement must complete before a subsequent call can be made. To determine whether the node replacement is complete, you can check Events using the Amazon ElastiCache console, the Amazon CLI, or the ElastiCache API. Look for the following automatic failover related events, listed here in order of occurrence: Replication group message: Test Failover API called for node group Cache cluster message: Failover from primary node to replica node completed Replication group message: Failover from primary node to replica node completed Cache cluster message: Recovering cache nodes Cache cluster message: Finished recovery for cache nodes For more information, see: Viewing ElastiCache Events in the ElastiCache User Guide DescribeEvents in the ElastiCache API Reference Also see Testing Multi-AZ in the ElastiCache User Guide",
     options: [
       {
         name: "--replication-group-id",
@@ -5363,12 +5403,12 @@ const completionSpec: Fig.Spec = {
     {
       name: "--show-cache-clusters-not-in-replication-groups",
       description:
-        "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this mean Memcached and single node Redis OSS clusters",
+        "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters",
     },
     {
       name: "--no-show-cache-clusters-not-in-replication-groups",
       description:
-        "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. 
In practice, this mean Memcached and single node Redis OSS clusters", + "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters", }, { name: "--cli-input-json", @@ -5455,12 +5495,12 @@ const completionSpec: Fig.Spec = { { name: "--show-cache-clusters-not-in-replication-groups", description: - "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this mean Memcached and single node Redis OSS clusters", + "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters", }, { name: "--no-show-cache-clusters-not-in-replication-groups", description: - "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this mean Memcached and single node Redis OSS clusters", + "An optional flag that can be included in the DescribeCacheCluster request to show only nodes (API/CLI: clusters) that are not members of a replication group. In practice, this means Memcached and single node Valkey or Redis OSS clusters", }, { name: "--cli-input-json", diff --git a/src/aws/iot.ts b/src/aws/iot.ts index 00872fb465e..7ff47125940 100644 --- a/src/aws/iot.ts +++ b/src/aws/iot.ts @@ -161,7 +161,7 @@ const completionSpec: Fig.Spec = { { name: "associate-sbom-with-package-version", description: - "Associates a software bill of materials (SBOM) with a specific software package version. 
Requires permission to access the AssociateSbomWithPackageVersion action",
+      "Associates the selected software bill of materials (SBOM) with a specific software package version. Requires permission to access the AssociateSbomWithPackageVersion action",
     options: [
       {
         name: "--package-name",
@@ -180,7 +180,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "--sbom",
       description:
-        "The Amazon S3 location for the software bill of materials associated with a software package version",
+        "A specific software bill of materials associated with a software package version",
       args: {
         name: "structure",
       },
     },
@@ -894,7 +894,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "create-billing-group",
       description:
-        "Creates a billing group. Requires permission to access the CreateBillingGroup action",
+        "Creates a billing group. If this call is made multiple times using the same billing group name and configuration, the call will succeed. If this call is made with the same billing group name but different configuration, a ResourceAlreadyExistsException is thrown. Requires permission to access the CreateBillingGroup action",
     options: [
       {
         name: "--billing-group-name",
@@ -1247,6 +1247,30 @@ const completionSpec: Fig.Spec = {
         name: "structure",
       },
     },
+    {
+      name: "--authentication-type",
+      description:
+        "An enumerated string that specifies the authentication type. CUSTOM_AUTH_X509 - Use custom authentication and authorization with additional details from the X.509 client certificate. CUSTOM_AUTH - Use custom authentication and authorization. For more information, see Custom authentication and authorization. AWS_X509 - Use X.509 client certificates without custom authentication and authorization. For more information, see X.509 client certificates. AWS_SIGV4 - Use Amazon Web Services Signature Version 4. For more information, see IAM users, groups, and roles. DEFAULT - Use a combination of port and Application Layer Protocol Negotiation (ALPN) to specify authentication type. 
For more information, see Device communication protocols",
+      args: {
+        name: "string",
+      },
+    },
+    {
+      name: "--application-protocol",
+      description:
+        "An enumerated string that specifies the application-layer protocol. SECURE_MQTT - MQTT over TLS. MQTT_WSS - MQTT over WebSocket. HTTPS - HTTP over TLS. DEFAULT - Use a combination of port and Application Layer Protocol Negotiation (ALPN) to specify application-layer protocol. For more information, see Device communication protocols",
+      args: {
+        name: "string",
+      },
+    },
+    {
+      name: "--client-certificate-config",
+      description:
+        "An object that specifies the client certificate configuration for a domain",
+      args: {
+        name: "structure",
+      },
+    },
     {
       name: "--cli-input-json",
       description:
@@ -2041,7 +2065,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "--recipe",
       description:
-        "The inline job document associated with a software package version used for a quick job deployment via IoT Jobs",
+        "The inline job document associated with a software package version used for a quick job deployment",
       args: {
         name: "string",
       },
@@ -2348,7 +2372,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "create-role-alias",
       description:
-        "Creates a role alias. Requires permission to access the CreateRoleAlias action",
+        "Creates a role alias. Requires permission to access the CreateRoleAlias action. The value of credentialDurationSeconds must be less than or equal to the maximum session duration of the IAM role that the role alias references. For more information, see Modifying a role maximum session duration (Amazon Web Services API) from the Amazon Web Services Identity and Access Management User Guide",
     options: [
       {
         name: "--role-alias",
@@ -2729,7 +2753,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "create-thing-type",
       description:
-        "Creates a new thing type. Requires permission to access the CreateThingType action",
+        "Creates a new thing type. 
If this call is made multiple times using the same thing type name and configuration, the call will succeed. If this call is made with the same thing type name but different configuration, a ResourceAlreadyExistsException is thrown. Requires permission to access the CreateThingType action",
     options: [
       {
         name: "--thing-type-name",
@@ -5403,7 +5427,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "disassociate-sbom-from-package-version",
       description:
-        "Disassociates a software bill of materials (SBOM) from a specific software package version. Requires permission to access the DisassociateSbomWithPackageVersion action",
+        "Disassociates the selected software bill of materials (SBOM) from a specific software package version. Requires permission to access the DisassociateSbomWithPackageVersion action",
     options: [
       {
         name: "--package-name",
@@ -12477,6 +12501,30 @@ const completionSpec: Fig.Spec = {
         name: "structure",
       },
     },
+    {
+      name: "--authentication-type",
+      description:
+        "An enumerated string that specifies the authentication type. CUSTOM_AUTH_X509 - Use custom authentication and authorization with additional details from the X.509 client certificate. CUSTOM_AUTH - Use custom authentication and authorization. For more information, see Custom authentication and authorization. AWS_X509 - Use X.509 client certificates without custom authentication and authorization. For more information, see X.509 client certificates. AWS_SIGV4 - Use Amazon Web Services Signature Version 4. For more information, see IAM users, groups, and roles. DEFAULT - Use a combination of port and Application Layer Protocol Negotiation (ALPN) to specify authentication type. For more information, see Device communication protocols",
+      args: {
+        name: "string",
+      },
+    },
+    {
+      name: "--application-protocol",
+      description:
+        "An enumerated string that specifies the application-layer protocol. SECURE_MQTT - MQTT over TLS. MQTT_WSS - MQTT over WebSocket. HTTPS - HTTP over TLS. 
DEFAULT - Use a combination of port and Application Layer Protocol Negotiation (ALPN) to specify application-layer protocol. For more information, see Device communication protocols",
+      args: {
+        name: "string",
+      },
+    },
+    {
+      name: "--client-certificate-config",
+      description:
+        "An object that specifies the client certificate configuration for a domain",
+      args: {
+        name: "structure",
+      },
+    },
     {
       name: "--cli-input-json",
       description:
@@ -13018,7 +13066,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "--recipe",
       description:
-        "The inline job document associated with a software package version used for a quick job deployment via IoT Jobs",
+        "The inline job document associated with a software package version used for a quick job deployment",
       args: {
         name: "string",
       },
@@ -13132,7 +13180,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "update-role-alias",
       description:
-        "Updates a role alias. Requires permission to access the UpdateRoleAlias action",
+        "Updates a role alias. Requires permission to access the UpdateRoleAlias action. The value of credentialDurationSeconds must be less than or equal to the maximum session duration of the IAM role that the role alias references. For more information, see Modifying a role maximum session duration (Amazon Web Services API) from the Amazon Web Services Identity and Access Management User Guide",
     options: [
       {
         name: "--role-alias",
diff --git a/src/aws/iotdeviceadvisor.ts b/src/aws/iotdeviceadvisor.ts
index 9952eb649d4..441fea2dcbc 100644
--- a/src/aws/iotdeviceadvisor.ts
+++ b/src/aws/iotdeviceadvisor.ts
@@ -23,6 +23,14 @@ const completionSpec: Fig.Spec = {
         name: "map",
       },
     },
+    {
+      name: "--client-token",
+      description:
+        "The client token for the test suite definition creation. This token is used for tracking test suite definition creation using retries and obtaining its status. 
This parameter is optional",
+      args: {
+        name: "string",
+      },
+    },
     {
       name: "--cli-input-json",
       description:
diff --git a/src/aws/marketplace-reporting.ts b/src/aws/marketplace-reporting.ts
new file mode 100644
index 00000000000..c2590ed355d
--- /dev/null
+++ b/src/aws/marketplace-reporting.ts
@@ -0,0 +1,48 @@
+const completionSpec: Fig.Spec = {
+  name: "marketplace-reporting",
+  description:
+    "The Amazon Web Services Marketplace GetBuyerDashboard API enables you to get a procurement insights dashboard programmatically. The API gets the agreement and cost analysis dashboards with data for all of the Amazon Web Services accounts in your Amazon Web Services Organization. To use the Amazon Web Services Marketplace Reporting API, you must complete the following prerequisites: Enable all features for your organization. For more information, see Enabling all features for an organization with Organizations, in the Organizations User Guide. Call the service as the Organizations management account or an account registered as a delegated administrator for the procurement insights service. For more information about management accounts, see Tutorial: Creating and configuring an organization and Managing the management account with Organizations, both in the Organizations User Guide. For more information about delegated administrators, see Using delegated administrators, in the Amazon Web Services Marketplace Buyer Guide. Create an IAM policy that enables the aws-marketplace:GetBuyerDashboard and organizations:DescribeOrganization permissions. In addition, the management account requires the organizations:EnableAWSServiceAccess and iam:CreateServiceLinkedRole permissions to create the service-linked role. For more information about creating the policy, see Policies and permissions in Identity and Access Management, in the IAM User Guide. Access can be shared only by registering the desired linked account as a delegated administrator. 
That requires organizations:RegisterDelegatedAdministrator, organizations:ListDelegatedAdministrators, and organizations:DeregisterDelegatedAdministrator permissions. Use the Amazon Web Services Marketplace console to create the AWSServiceRoleForProcurementInsightsPolicy service-linked role. The role enables Amazon Web Services Marketplace procurement visibility integration. The management account requires an IAM policy with the organizations:EnableAWSServiceAccess and iam:CreateServiceLinkedRole permissions to create the service-linked role and enable the service access. For more information, see Granting access to Organizations and Service-linked role to share procurement data in the Amazon Web Services Marketplace Buyer Guide. After creating the service-linked role, you must enable trusted access that grants Amazon Web Services Marketplace permission to access data from your Organizations. For more information, see Granting access to Organizations in the Amazon Web Services Marketplace Buyer Guide",
+  subcommands: [
+    {
+      name: "get-buyer-dashboard",
+      description:
+        "Generates an embedding URL for an Amazon QuickSight dashboard for an anonymous user. This API is available only to Amazon Web Services Organization management accounts or delegated administrators registered for the procurement insights (procurement-insights.marketplace.amazonaws.com) feature. The following rules apply to a generated URL: It contains a temporary bearer token, valid for 5 minutes after it is generated. Once redeemed within that period, it cannot be reused. It has a session lifetime of one hour. The 5-minute validity period runs separately from the session lifetime",
+      options: [
+        {
+          name: "--dashboard-identifier",
+          description: "The ARN of the requested dashboard",
+          args: {
+            name: "string",
+          },
+        },
+        {
+          name: "--embedding-domains",
+          description:
+            "Fully qualified domains that you add to the allow list for access to the generated URL that is then embedded. 
You can list up to two domains or subdomains in each API call. To include all subdomains under a specific domain, use *. For example, https://*.amazon.com includes all subdomains under https://aws.amazon.com", + args: { + name: "list", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + ], +}; + +export default completionSpec; diff --git a/src/aws/memorydb.ts b/src/aws/memorydb.ts index f3e98c514c4..b544a63bd3d 100644 --- a/src/aws/memorydb.ts +++ b/src/aws/memorydb.ts @@ -1,7 +1,7 @@ const completionSpec: Fig.Spec = { name: "memorydb", description: - "MemoryDB is a fully managed, Redis OSS-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. 
It is compatible with Redis OSS, a popular open source data store, enabling you to leverage Redis OSS\u2019 flexible and friendly data structures, APIs, and commands",
+    "MemoryDB for Redis is a fully managed, Redis-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. It is compatible with Redis, a popular open source data store, enabling you to leverage Redis\u2019 flexible and friendly data structures, APIs, and commands",
   subcommands: [
     {
       name: "batch-update-cluster",
@@ -306,10 +306,18 @@ const completionSpec: Fig.Spec = {
         name: "string",
       },
     },
+    {
+      name: "--engine",
+      description:
+        "The name of the engine to be used for the nodes in this cluster. The value must be set to either Redis or Valkey",
+      args: {
+        name: "string",
+      },
+    },
     {
       name: "--engine-version",
       description:
-        "The version number of the Redis OSS engine to be used for the cluster",
+        "The version number of the engine to be used for the cluster",
       args: {
         name: "string",
       },
@@ -602,7 +610,7 @@ const completionSpec: Fig.Spec = {
     {
       name: "delete-cluster",
       description:
-        "Deletes a cluster. It also deletes all associated nodes and node endpoints CreateSnapshot permission is required to create a final snapshot. Without this permission, the API call will fail with an Access Denied exception",
+        "Deletes a cluster. It also deletes all associated nodes and node endpoints",
     options: [
       {
         name: "--cluster-name",
@@ -915,11 +923,19 @@ const completionSpec: Fig.Spec = {
     },
     {
       name: "describe-engine-versions",
-      description: "Returns a list of the available Redis OSS engine versions",
+      description: "Returns a list of the available engine versions",
     options: [
+      {
+        name: "--engine",
+        description:
+          "The name of the engine to return. 
Valid values are either valkey or redis", + args: { + name: "string", + }, + }, { name: "--engine-version", - description: "The Redis OSS engine version", + description: "The engine version", args: { name: "string", }, @@ -2187,6 +2203,14 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--engine", + description: + "The name of the engine to be used for the nodes in this cluster. The value must be set to either Redis or Valkey", + args: { + name: "string", + }, + }, { name: "--engine-version", description: diff --git a/src/aws/qconnect.ts b/src/aws/qconnect.ts index e0f4e3b7c47..84771fdefc8 100644 --- a/src/aws/qconnect.ts +++ b/src/aws/qconnect.ts @@ -1,8 +1,302 @@ const completionSpec: Fig.Spec = { name: "qconnect", description: - "Powered by Amazon Bedrock: Amazon Web Services implements automated abuse detection. Because Amazon Q in Connect is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI). Amazon Q in Connect is a generative AI customer service assistant. It is an LLM-enhanced evolution of Amazon Connect Wisdom that delivers real-time recommendations to help contact center agents resolve customer issues quickly and accurately. Amazon Q in Connect automatically detects customer intent during calls and chats using conversational analytics and natural language understanding (NLU). It then provides agents with immediate, real-time generative responses and suggested actions, and links to relevant documents and articles. Agents can also query Amazon Q in Connect directly using natural language or keywords to answer customer requests. Use the Amazon Q in Connect APIs to create an assistant and a knowledge base, for example, or manage content by uploading custom files. 
For more information, see Use Amazon Q in Connect for generative AI powered agent assistance in real-time in the Amazon Connect Administrator Guide",
+    "Powered by Amazon Bedrock: Amazon Web Services implements automated abuse detection. Because Amazon Q in Connect is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI). Amazon Q in Connect is a generative AI customer service assistant. It is an LLM-enhanced evolution of Amazon Connect Wisdom that delivers real-time recommendations to help contact center agents resolve customer issues quickly and accurately. Amazon Q in Connect automatically detects customer intent during calls and chats using conversational analytics and natural language understanding (NLU). It then provides agents with immediate, real-time generative responses and suggested actions, and links to relevant documents and articles. Agents can also query Amazon Q in Connect directly using natural language or keywords to answer customer requests. Use the Amazon Q in Connect APIs to create an assistant and a knowledge base, for example, or manage content by uploading custom files. For more information, see Use Amazon Q in Connect for generative AI powered agent assistance in real-time in the Amazon Connect Administrator Guide",
   subcommands: [
+    {
+      name: "create-ai-agent",
+      description: "Creates an Amazon Q in Connect AI Agent",
+      options: [
+        {
+          name: "--assistant-id",
+          description:
+            "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN",
+          args: {
+            name: "string",
+          },
+        },
+        {
+          name: "--client-token",
+          description:
+            "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. 
For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--configuration", + description: "The configuration of the AI Agent", + args: { + name: "structure", + }, + }, + { + name: "--description", + description: "The description of the AI Agent", + args: { + name: "string", + }, + }, + { + name: "--name", + description: "The name of the AI Agent", + args: { + name: "string", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--type", + description: "The type of the AI Agent", + args: { + name: "string", + }, + }, + { + name: "--visibility-status", + description: "The visibility status of the AI Agent", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command",
+          args: {
+            name: "string",
+            suggestions: ["input", "output"],
+          },
+        },
+      ],
+    },
+    {
+      name: "create-ai-agent-version",
+      description: "Creates an Amazon Q in Connect AI Agent version",
+      options: [
+        {
+          name: "--ai-agent-id",
+          description: "The identifier of the Amazon Q in Connect AI Agent",
+          args: {
+            name: "string",
+          },
+        },
+        {
+          name: "--assistant-id",
+          description:
+            "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN",
+          args: {
+            name: "string",
+          },
+        },
+        {
+          name: "--client-token",
+          description:
+            "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs",
+          args: {
+            name: "string",
+          },
+        },
+        {
+          name: "--modified-time",
+          description:
+            "The modification time of the AI Agent should be tracked for version creation. This field should be specified to avoid version creation when simultaneous updates to the underlying AI Agent are possible. The value should be the modifiedTime returned from the request to create or update an AI Agent so that version creation can fail if an update to the AI Agent post the specified modification time has been made",
+          args: {
+            name: "timestamp",
+          },
+        },
+        {
+          name: "--cli-input-json",
+          description:
+            "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. 
It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "create-ai-prompt", + description: "Creates an Amazon Q in Connect AI Prompt", + options: [ + { + name: "--api-format", + description: "The API Format of the AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--description", + description: "The description of the AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--model-id", + description: + "The identifier of the model used for this AI Prompt. 
Model Ids supported are: CLAUDE_3_HAIKU_20240307_V1", + args: { + name: "string", + }, + }, + { + name: "--name", + description: "The name of the AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--template-configuration", + description: + "The configuration of the prompt template for this AI Prompt", + args: { + name: "structure", + }, + }, + { + name: "--template-type", + description: "The type of the prompt template for this AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--type", + description: "The type of this AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--visibility-status", + description: "The visibility status of the AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "create-ai-prompt-version", + description: "Creates an Amazon Q in Connect AI Prompt version", + options: [ + { + name: "--ai-prompt-id", + description: "The identifier of the Amazon Q in Connect AI prompt", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--modified-time", + description: "The time the AI Prompt was last modified", + args: { + name: "timestamp", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "create-assistant", description: "Creates an Amazon Q in Connect assistant", @@ -256,18 +550,544 @@ const completionSpec: Fig.Spec = { }, }, { - name: "--knowledge-base-id", - description: "The identifier of the knowledge base", + name: "--knowledge-base-id", + description: "The identifier of the knowledge base", + args: { + name: "string", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "create-knowledge-base", + description: + "Creates a knowledge base. When using this API, you cannot reuse Amazon AppIntegrations DataIntegrations with external knowledge bases such as Salesforce and ServiceNow. If you do, you'll get an InvalidRequestException error. 
For example, you're programmatically managing your external knowledge base, and you want to add or remove one of the fields that is being ingested from Salesforce. Do the following: Call DeleteKnowledgeBase. Call DeleteDataIntegration. Call CreateDataIntegration to recreate the DataIntegration or create a different one. Call CreateKnowledgeBase", + options: [ + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--description", + description: "The description", + args: { + name: "string", + }, + }, + { + name: "--knowledge-base-type", + description: + "The type of knowledge base. Only CUSTOM knowledge bases allow you to upload your own content. EXTERNAL knowledge bases support integrations with third-party systems whose content is synchronized automatically", + args: { + name: "string", + }, + }, + { + name: "--name", + description: "The name of the knowledge base", + args: { + name: "string", + }, + }, + { + name: "--rendering-configuration", + description: "Information about how to render the content", + args: { + name: "structure", + }, + }, + { + name: "--server-side-encryption-configuration", + description: + "The configuration information for the customer managed key used for encryption. This KMS key must have a policy that allows kms:CreateGrant, kms:DescribeKey, kms:Decrypt, and kms:GenerateDataKey* permissions to the IAM identity using the key to invoke Amazon Q in Connect. For more information about setting up a customer managed key for Amazon Q in Connect, see Enable Amazon Q in Connect for your instance", + args: { + name: "structure", + }, + }, + { + name: "--source-configuration", + description: + "The source of the knowledge base content. 
Only set this argument for EXTERNAL knowledge bases", + args: { + name: "structure", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--vector-ingestion-configuration", + description: + "Contains details about how to ingest the documents in a data source", + args: { + name: "structure", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "create-quick-response", + description: "Creates an Amazon Q in Connect quick response", + options: [ + { + name: "--channels", + description: + "The Amazon Connect channels this quick response applies to", + args: { + name: "list", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. 
For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--content", + description: "The content of the quick response", + args: { + name: "structure", + }, + }, + { + name: "--content-type", + description: + "The media type of the quick response content. Use application/x.quickresponse;format=plain for a quick response written in plain text. Use application/x.quickresponse;format=markdown for a quick response written in richtext", + args: { + name: "string", + }, + }, + { + name: "--description", + description: "The description of the quick response", + args: { + name: "string", + }, + }, + { + name: "--grouping-configuration", + description: + "The configuration information of the user groups that the quick response is accessible to", + args: { + name: "structure", + }, + }, + { + name: "--is-active", + description: "Whether the quick response is active", + }, + { + name: "--no-is-active", + description: "Whether the quick response is active", + }, + { + name: "--knowledge-base-id", + description: + "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--language", + description: + "The language code value for the language in which the quick response is written. The supported language codes include de_DE, en_US, es_ES, fr_FR, id_ID, it_IT, ja_JP, ko_KR, pt_BR, zh_CN, zh_TW", + args: { + name: "string", + }, + }, + { + name: "--name", + description: "The name of the quick response", + args: { + name: "string", + }, + }, + { + name: "--shortcut-key", + description: + "The shortcut key of the quick response. 
The value should be unique across the knowledge base", + args: { + name: "string", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "create-session", + description: + "Creates a session. A session is a contextual container used for generating recommendations. Amazon Connect creates a new Amazon Q in Connect session for each contact on which Amazon Q in Connect is enabled", + options: [ + { + name: "--ai-agent-configuration", + description: + "The configuration of the AI Agents (mapped by AI Agent Type to AI Agent version) that should be used by Amazon Q in Connect for this Session", + args: { + name: "map", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. 
URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--description", + description: "The description", + args: { + name: "string", + }, + }, + { + name: "--name", + description: "The name of the session", + args: { + name: "string", + }, + }, + { + name: "--tag-filter", + description: "An object that can be used to specify Tag conditions", + args: { + name: "structure", + }, + }, + { + name: "--tags", + description: + "The tags used to organize, track, or control access for this resource", + args: { + name: "map", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-ai-agent", + description: "Deletes an Amazon Q in Connect AI Agent", + options: [ + { + name: "--ai-agent-id", + description: + "The identifier of the Amazon Q in Connect AI Agent. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-ai-agent-version", + description: "Deletes an Amazon Q in Connect AI Agent Version", + options: [ + { + name: "--ai-agent-id", + description: + "The identifier of the Amazon Q in Connect AI Agent. Can be either the ID or the ARN. 
URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--version-number", + description: "The version number of the AI Agent version", + args: { + name: "long", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-ai-prompt", + description: "Deletes an Amazon Q in Connect AI Prompt", + options: [ + { + name: "--ai-prompt-id", + description: + "The identifier of the Amazon Q in Connect AI prompt. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. 
The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-ai-prompt-version", + description: "Deletes an Amazon Q in Connect AI Prompt version", + options: [ + { + name: "--ai-prompt-id", + description: "The identifier of the Amazon Q in Connect AI prompt", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--version-number", + description: + "The version number of the AI Prompt version to be deleted", + args: { + name: "long", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. 
It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-assistant", + description: "Deletes an assistant", + options: [ + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "delete-assistant-association", + description: "Deletes an assistant association", + options: [ + { + name: "--assistant-association-id", + description: + "The identifier of the assistant association. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--tags", + name: "--assistant-id", description: - "The tags used to organize, track, or control access for this resource", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { - name: "map", + name: "string", }, }, { @@ -290,69 +1110,69 @@ const completionSpec: Fig.Spec = { ], }, { - name: "create-knowledge-base", - description: - "Creates a knowledge base. When using this API, you cannot reuse Amazon AppIntegrations DataIntegrations with external knowledge bases such as Salesforce and ServiceNow. If you do, you'll get an InvalidRequestException error. For example, you're programmatically managing your external knowledge base, and you want to add or remove one of the fields that is being ingested from Salesforce. Do the following: Call DeleteKnowledgeBase. Call DeleteDataIntegration. Call CreateDataIntegration to recreate the DataIntegration or a create different one. Call CreateKnowledgeBase", + name: "delete-content", + description: "Deletes the content", options: [ { - name: "--client-token", + name: "--content-id", description: - "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + "The identifier of the content. Can be either the ID or the ARN. 
URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--description", - description: "The description", + name: "--knowledge-base-id", + description: + "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--knowledge-base-type", + name: "--cli-input-json", description: - "The type of knowledge base. Only CUSTOM knowledge bases allow you to upload your own content. EXTERNAL knowledge bases support integrations with third-party systems whose content is synchronized automatically", + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", }, }, { - name: "--name", - description: "The name of the knowledge base", + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", args: { name: "string", + suggestions: ["input", "output"], }, }, + ], + }, + { + name: "delete-content-association", + description: + "Deletes the content association. 
For more information about content associations--what they are and when they are used--see Integrate Amazon Q in Connect with step-by-step guides in the Amazon Connect Administrator Guide", + options: [ { - name: "--rendering-configuration", - description: "Information about how to render the content", - args: { - name: "structure", - }, - }, - { - name: "--server-side-encryption-configuration", + name: "--content-association-id", description: - "The configuration information for the customer managed key used for encryption. This KMS key must have a policy that allows kms:CreateGrant, kms:DescribeKey, kms:Decrypt, and kms:GenerateDataKey* permissions to the IAM identity using the key to invoke Amazon Q in Connect. For more information about setting up a customer managed key for Amazon Q in Connect, see Enable Amazon Q in Connect for your instance", + "The identifier of the content association. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { - name: "structure", + name: "string", }, }, { - name: "--source-configuration", - description: - "The source of the knowledge base content. 
Only set this argument for EXTERNAL knowledge bases", + name: "--content-id", + description: "The identifier of the content", args: { - name: "structure", + name: "string", }, }, { - name: "--tags", - description: - "The tags used to organize, track, or control access for this resource", + name: "--knowledge-base-id", + description: "The identifier of the knowledge base", args: { - name: "map", + name: "string", }, }, { @@ -375,100 +1195,91 @@ const completionSpec: Fig.Spec = { ], }, { - name: "create-quick-response", - description: "Creates an Amazon Q in Connect quick response", + name: "delete-import-job", + description: "Deletes the quick response import job", options: [ { - name: "--channels", - description: - "The Amazon Connect channels this quick response applies to", - args: { - name: "list", - }, - }, - { - name: "--client-token", - description: - "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + name: "--import-job-id", + description: "The identifier of the import job to be deleted", args: { name: "string", }, }, { - name: "--content", - description: "The content of the quick response", - args: { - name: "structure", - }, - }, - { - name: "--content-type", - description: - "The media type of the quick response content. Use application/x.quickresponse;format=plain for a quick response written in plain text. Use application/x.quickresponse;format=markdown for a quick response written in richtext", + name: "--knowledge-base-id", + description: "The identifier of the knowledge base", args: { name: "string", }, }, { - name: "--description", - description: "The description of the quick response", + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. 
The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", }, }, { - name: "--grouping-configuration", + name: "--generate-cli-skeleton", description: - "The configuration information of the user groups that the quick response is accessible to", + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", args: { - name: "structure", + name: "string", + suggestions: ["input", "output"], }, }, - { - name: "--is-active", - description: "Whether the quick response is active", - }, - { - name: "--no-is-active", - description: "Whether the quick response is active", - }, + ], + }, + { + name: "delete-knowledge-base", + description: + "Deletes the knowledge base. When you use this API to delete an external knowledge base such as Salesforce or ServiceNow, you must also delete the Amazon AppIntegrations DataIntegration. This is because you can't reuse the DataIntegration after it's been associated with an external knowledge base. However, you can delete and recreate it. See DeleteDataIntegration and CreateDataIntegration in the Amazon AppIntegrations API Reference", + options: [ { name: "--knowledge-base-id", description: - "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The knowledge base to delete content from. Can be either the ID or the ARN. 
URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--language", + name: "--cli-input-json", description: - "The language code value for the language in which the quick response is written. The supported language codes include de_DE, en_US, es_ES, fr_FR, id_ID, it_IT, ja_JP, ko_KR, pt_BR, zh_CN, zh_TW", + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", }, }, { - name: "--name", - description: "The name of the quick response", + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", args: { name: "string", + suggestions: ["input", "output"], }, }, + ], + }, + { + name: "delete-quick-response", + description: "Deletes a quick response", + options: [ { - name: "--shortcut-key", + name: "--knowledge-base-id", description: - "The shortcut key of the quick response. The value should be unique across the knowledge base", + "The knowledge base from which the quick response is deleted. 
The identifier of the knowledge base", args: { name: "string", }, }, { - name: "--tags", - description: - "The tags used to organize, track, or control access for this resource", + name: "--quick-response-id", + description: "The identifier of the quick response to delete", args: { - name: "map", + name: "string", }, }, { @@ -491,53 +1302,61 @@ const completionSpec: Fig.Spec = { ], }, { - name: "create-session", - description: - "Creates a session. A session is a contextual container used for generating recommendations. Amazon Connect creates a new Amazon Q in Connect session for each contact on which Amazon Q in Connect is enabled", + name: "get-ai-agent", + description: "Gets an Amazon Q in Connect AI Agent", options: [ { - name: "--assistant-id", + name: "--ai-agent-id", description: - "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect AI Agent (with or without a version qualifier). Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--client-token", + name: "--assistant-id", description: - "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the Amazon Web Services SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--description", - description: "The description", + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. 
It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", }, }, { - name: "--name", - description: "The name of the session", + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", args: { name: "string", + suggestions: ["input", "output"], }, }, + ], + }, + { + name: "get-ai-prompt", + description: "Gets an Amazon Q in Connect AI Prompt", + options: [ { - name: "--tag-filter", - description: "An object that can be used to specify Tag conditions", + name: "--ai-prompt-id", + description: "The identifier of the Amazon Q in Connect AI prompt", args: { - name: "structure", + name: "string", }, }, { - name: "--tags", + name: "--assistant-id", description: - "The tags used to organize, track, or control access for this resource", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. 
URLs cannot contain the ARN", args: { - name: "map", + name: "string", }, }, { @@ -560,8 +1379,8 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-assistant", - description: "Deletes an assistant", + name: "get-assistant", + description: "Retrieves information about an assistant", options: [ { name: "--assistant-id", @@ -591,8 +1410,8 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-assistant-association", - description: "Deletes an assistant association", + name: "get-assistant-association", + description: "Retrieves information about an assistant association", options: [ { name: "--assistant-association-id", @@ -630,8 +1449,9 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-content", - description: "Deletes the content", + name: "get-content", + description: + "Retrieves content, including a pre-signed URL to download the content", options: [ { name: "--content-id", @@ -644,7 +1464,7 @@ const completionSpec: Fig.Spec = { { name: "--knowledge-base-id", description: - "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the knowledge base. This should not be a QUICK_RESPONSES type knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, @@ -669,9 +1489,9 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-content-association", + name: "get-content-association", description: - "Deletes the content association. For more information about content associations--what they are and when they are used--see Integrate Amazon Q in Connect with step-by-step guides in the Amazon Connect Administrator Guide", + "Returns the content association. 
For more information about content associations--what they are and when they are used--see Integrate Amazon Q in Connect with step-by-step guides in the Amazon Connect Administrator Guide", options: [ { name: "--content-association-id", @@ -715,19 +1535,59 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-import-job", - description: "Deletes the quick response import job", + name: "get-content-summary", + description: "Retrieves summary information about the content", + options: [ + { + name: "--content-id", + description: + "The identifier of the content. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--knowledge-base-id", + description: + "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "get-import-job", + description: "Retrieves the started import job", options: [ { name: "--import-job-id", - description: "The identifier of the import job to be deleted", + description: "The identifier of the import job to retrieve", args: { name: "string", }, }, { name: "--knowledge-base-id", - description: "The identifier of the knowledge base", + description: + "The identifier of the knowledge base that the import job belongs to", args: { name: "string", }, @@ -752,14 +1612,13 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-knowledge-base", - description: - "Deletes the knowledge base. When you use this API to delete an external knowledge base such as Salesforce or ServiceNow, you must also delete the Amazon AppIntegrations DataIntegration. This is because you can't reuse the DataIntegration after it's been associated with an external knowledge base. However, you can delete and recreate it. See DeleteDataIntegration and CreateDataIntegration in the Amazon AppIntegrations API Reference", + name: "get-knowledge-base", + description: "Retrieves information about the knowledge base", options: [ { name: "--knowledge-base-id", description: - "The knowledge base to delete content from. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, @@ -784,20 +1643,20 @@ const completionSpec: Fig.Spec = { ], }, { - name: "delete-quick-response", - description: "Deletes a quick response", + name: "get-quick-response", + description: "Retrieves the quick response", options: [ { name: "--knowledge-base-id", description: - "The knowledge base from which the quick response is deleted. 
The identifier of the knowledge base", + "The identifier of the knowledge base. This should be a QUICK_RESPONSES type knowledge base", args: { name: "string", }, }, { name: "--quick-response-id", - description: "The identifier of the quick response to delete", + description: "The identifier of the quick response", args: { name: "string", }, @@ -822,8 +1681,9 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-assistant", - description: "Retrieves information about an assistant", + name: "get-recommendations", + description: + "This API will be discontinued starting June 1, 2024. To receive generative responses after March 1, 2024, you will need to create a new Assistant in the Amazon Connect console and integrate the Amazon Q in Connect JavaScript library (amazon-q-connectjs) into your applications. Retrieves recommendations for the specified session. To avoid retrieving the same recommendations in subsequent calls, use NotifyRecommendationsReceived. This API supports long-polling behavior with the waitTimeSeconds parameter. Short poll is the default behavior and only returns recommendations already available. To perform a manual query against an assistant, use QueryAssistant", options: [ { name: "--assistant-id", @@ -833,6 +1693,29 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--max-results", + description: "The maximum number of results to return per page", + args: { + name: "integer", + }, + }, + { + name: "--session-id", + description: + "The identifier of the session. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--wait-time-seconds", + description: + "The duration (in seconds) for which the call waits for a recommendation to be made available before returning. If a recommendation is available, the call returns sooner than WaitTimeSeconds. 
If no messages are available and the wait time expires, the call returns successfully with an empty list", + args: { + name: "integer", + }, + }, { name: "--cli-input-json", description: @@ -853,21 +1736,21 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-assistant-association", - description: "Retrieves information about an assistant association", + name: "get-session", + description: "Retrieves information for a specified session", options: [ { - name: "--assistant-association-id", + name: "--assistant-id", description: - "The identifier of the assistant association. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--assistant-id", + name: "--session-id", description: - "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the session. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, @@ -892,78 +1775,78 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-content", - description: - "Retrieves content, including a pre-signed URL to download the content", + name: "list-ai-agent-versions", + description: "List AI Agent versions", options: [ { - name: "--content-id", + name: "--ai-agent-id", description: - "The identifier of the content. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect AI Agent for which versions are to be listed", args: { name: "string", }, }, { - name: "--knowledge-base-id", + name: "--assistant-id", description: - "The identifier of the knowledge base. This should not be a QUICK_RESPONSES type knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. 
URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--cli-input-json", + name: "--max-results", + description: "The maximum number of results to return per page", + args: { + name: "integer", + }, + }, + { + name: "--next-token", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results", args: { name: "string", }, }, { - name: "--generate-cli-skeleton", + name: "--origin", description: - "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + "The origin of the AI Agent versions to be listed. SYSTEM for a default AI Agent created by Q in Connect or CUSTOMER for an AI Agent created by calling AI Agent creation APIs", args: { name: "string", - suggestions: ["input", "output"], }, }, - ], - }, - { - name: "get-content-association", - description: - "Returns the content association. For more information about content associations--what they are and when they are used--see Integrate Amazon Q in Connect with step-by-step guides in the Amazon Connect Administrator Guide", - options: [ { - name: "--content-association-id", + name: "--cli-input-json", description: - "The identifier of the content association. Can be either the ID or the ARN. 
URLs cannot contain the ARN", + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", }, }, { - name: "--content-id", - description: "The identifier of the content", + name: "--starting-token", + description: + "A token to specify where to start paginating. This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { name: "string", }, }, { - name: "--knowledge-base-id", - description: "The identifier of the knowledge base", + name: "--page-size", + description: + "The size of each page to get in the AWS service call. This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { - name: "--cli-input-json", + name: "--max-items", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. 
To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { @@ -978,69 +1861,70 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-content-summary", - description: "Retrieves summary information about the content", + name: "list-ai-agents", + description: "Lists AI Agents", options: [ { - name: "--content-id", + name: "--assistant-id", description: - "The identifier of the content. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--knowledge-base-id", + name: "--max-results", + description: "The maximum number of results to return per page", + args: { + name: "integer", + }, + }, + { + name: "--next-token", description: - "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results", args: { name: "string", }, }, { - name: "--cli-input-json", + name: "--origin", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The origin of the AI Agents to be listed. 
SYSTEM for a default AI Agent created by Q in Connect or CUSTOMER for an AI Agent created by calling AI Agent creation APIs", args: { name: "string", }, }, { - name: "--generate-cli-skeleton", + name: "--cli-input-json", description: - "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", args: { name: "string", - suggestions: ["input", "output"], }, }, - ], - }, - { - name: "get-import-job", - description: "Retrieves the started import job", - options: [ { - name: "--import-job-id", - description: "The identifier of the import job to retrieve", + name: "--starting-token", + description: + "A token to specify where to start paginating. This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { name: "string", }, }, { - name: "--knowledge-base-id", + name: "--page-size", description: - "The identifier of the knowledge base that the import job belongs to", + "The size of each page to get in the AWS service call. This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. 
This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { - name: "--cli-input-json", + name: "--max-items", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { @@ -1055,51 +1939,44 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-knowledge-base", - description: "Retrieves information about the knowledge base", + name: "list-ai-prompt-versions", + description: "Lists AI Prompt versions", options: [ { - name: "--knowledge-base-id", + name: "--ai-prompt-id", description: - "The identifier of the knowledge base. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The identifier of the Amazon Q in Connect AI prompt for which versions are to be listed", args: { name: "string", }, }, { - name: "--cli-input-json", + name: "--assistant-id", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. 
If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", args: { name: "string", }, }, { - name: "--generate-cli-skeleton", - description: - "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + name: "--max-results", + description: "The maximum number of results to return per page", args: { - name: "string", - suggestions: ["input", "output"], + name: "integer", }, }, - ], - }, - { - name: "get-quick-response", - description: "Retrieves the quick response", - options: [ { - name: "--knowledge-base-id", + name: "--next-token", description: - "The identifier of the knowledge base. This should be a QUICK_RESPONSES type knowledge base", + "The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results", args: { name: "string", }, }, { - name: "--quick-response-id", - description: "The identifier of the quick response", + name: "--origin", + description: + "The origin of the AI Prompt versions to be listed. SYSTEM for a default AI Prompt created by Q in Connect or CUSTOMER for an AI Prompt created by calling AI Prompt creation APIs", args: { name: "string", }, @@ -1112,6 +1989,30 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--starting-token", + description: + "A token to specify where to start paginating. 
This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "string", + }, + }, + { + name: "--page-size", + description: + "The size of each page to get in the AWS service call. This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "integer", + }, + }, + { + name: "--max-items", + description: + "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", + args: { + name: "integer", + }, + }, { name: "--generate-cli-skeleton", description: @@ -1124,9 +2025,9 @@ const completionSpec: Fig.Spec = { ], }, { - name: "get-recommendations", + name: "list-ai-prompts", description: - "This API will be discontinued starting June 1, 2024. To receive generative responses after March 1, 2024, you will need to create a new Assistant in the Amazon Connect console and integrate the Amazon Q in Connect JavaScript library (amazon-q-connectjs) into your applications. Retrieves recommendations for the specified session. To avoid retrieving the same recommendations in subsequent calls, use NotifyRecommendationsReceived. This API supports long-polling behavior with the waitTimeSeconds parameter. Short poll is the default behavior and only returns recommendations already available. 
To perform a manual query against an assistant, use QueryAssistant", + "Lists the AI Prompts available on the Amazon Q in Connect assistant", options: [ { name: "--assistant-id", @@ -1144,19 +2045,19 @@ const completionSpec: Fig.Spec = { }, }, { - name: "--session-id", + name: "--next-token", description: - "The identifier of the session. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results", args: { name: "string", }, }, { - name: "--wait-time-seconds", + name: "--origin", description: - "The duration (in seconds) for which the call waits for a recommendation to be made available before returning. If a recommendation is available, the call returns sooner than WaitTimeSeconds. If no messages are available and the wait time expires, the call returns successfully with an empty list", + "The origin of the AI Prompts to be listed. SYSTEM for a default AI Prompt created by Q in Connect or CUSTOMER for an AI Prompt created by calling AI Prompt creation APIs", args: { - name: "integer", + name: "string", }, }, { @@ -1168,42 +2069,27 @@ const completionSpec: Fig.Spec = { name: "list", }, }, { - name: "--generate-cli-skeleton", - description: - "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", - args: { - name: "string", - suggestions: ["input", "output"], - }, - }, - ], - }, - { - name: "get-session", - description: "Retrieves information for a specified session", - options: [ - { - name: "--assistant-id", + name: "--starting-token", description: - "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. 
URLs cannot contain the ARN", + "A token to specify where to start paginating. This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { name: "string", }, }, { - name: "--session-id", + name: "--page-size", description: - "The identifier of the session. Can be either the ID or the ARN. URLs cannot contain the ARN", + "The size of each page to get in the AWS service call. This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { - name: "--cli-input-json", + name: "--max-items", description: - "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide", args: { - name: "string", + name: "integer", }, }, { @@ -1848,9 +2734,17 @@ const completionSpec: Fig.Spec = { }, }, { - name: "--next-token", + name: "--next-token", + description: + "The token for the next set of results. 
Use the value returned in the previous response in the next request to retrieve the next set of results", + args: { + name: "string", + }, + }, + { + name: "--override-knowledge-base-search-type", description: - "The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results", + "The search type to be used against the Knowledge Base for this request. The values can be SEMANTIC which uses vector embeddings or HYBRID which uses vector embeddings and raw text", args: { name: "string", }, }, @@ -1862,6 +2756,13 @@ const completionSpec: Fig.Spec = { name: "list", }, }, + { + name: "--query-input-data", + description: "Information about the query", + args: { + name: "structure", + }, + }, { name: "--query-text", description: "The text to search for", @@ -1920,6 +2821,46 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "remove-assistant-ai-agent", + description: + "Removes the AI Agent that is set for use by default on an Amazon Q in Connect Assistant", + options: [ + { + name: "--ai-agent-type", + description: + "The type of the AI Agent being removed for use by default from the Amazon Q in Connect Assistant", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. 
It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "remove-knowledge-base-template-uri", description: "Removes a URI template from a knowledge base", @@ -2386,6 +3327,191 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "update-ai-agent", + description: "Updates an AI Agent", + options: [ + { + name: "--ai-agent-id", + description: "The identifier of the Amazon Q in Connect AI Agent", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. 
For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--configuration", + description: "The configuration of the Amazon Q in Connect AI Agent", + args: { + name: "structure", + }, + }, + { + name: "--description", + description: "The description of the Amazon Q in Connect AI Agent", + args: { + name: "string", + }, + }, + { + name: "--visibility-status", + description: + "The visibility status of the Amazon Q in Connect AI Agent", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "update-ai-prompt", + description: "Updates an AI Prompt", + options: [ + { + name: "--ai-prompt-id", + description: "The identifier of the Amazon Q in Connect AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. 
URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--client-token", + description: + "A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. If not provided, the AWS SDK populates this field. For more information about idempotency, see Making retries safe with idempotent APIs", + args: { + name: "string", + }, + }, + { + name: "--description", + description: "The description of the Amazon Q in Connect AI Prompt", + args: { + name: "string", + }, + }, + { + name: "--template-configuration", + description: + "The configuration of the prompt template for this AI Prompt", + args: { + name: "structure", + }, + }, + { + name: "--visibility-status", + description: + "The visibility status of the Amazon Q in Connect AI prompt", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + { + name: "update-assistant-ai-agent", + description: + "Updates the AI Agent that is set for use by default on an Amazon Q in Connect Assistant", + options: [ + { + name: "--ai-agent-type", + description: + "The type of the AI Agent being updated for use by default on the Amazon Q in Connect Assistant", + args: { + name: "string", + }, + }, + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--configuration", + description: + "The configuration of the AI Agent being updated for use by default on the Amazon Q in Connect Assistant", + args: { + name: "structure", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. 
If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, { name: "update-content", description: "Updates information about the content", @@ -2653,6 +3779,14 @@ const completionSpec: Fig.Spec = { description: "Updates a session. A session is a contextual container used for generating recommendations. Amazon Connect updates the existing Amazon Q in Connect session for each contact on which Amazon Q in Connect is enabled", options: [ + { + name: "--ai-agent-configuration", + description: + "The configuration of the AI Agents (mapped by AI Agent Type to AI Agent version) that should be used by Amazon Q in Connect for this Session", + args: { + name: "map", + }, + }, { name: "--assistant-id", description: @@ -2702,6 +3836,60 @@ const completionSpec: Fig.Spec = { }, ], }, + { + name: "update-session-data", + description: "Updates the data stored on an Amazon Q in Connect Session", + options: [ + { + name: "--assistant-id", + description: + "The identifier of the Amazon Q in Connect assistant. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--data", + description: "The data stored on the Amazon Q in Connect Session", + args: { + name: "list", + }, + }, + { + name: "--namespace", + description: + "The namespace into which the session data is stored. Supported namespaces are: Custom", + args: { + name: "string", + }, + }, + { + name: "--session-id", + description: + "The identifier of the session. Can be either the ID or the ARN. URLs cannot contain the ARN", + args: { + name: "string", + }, + }, + { + name: "--cli-input-json", + description: + "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. 
If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally", + args: { + name: "string", + }, + }, + { + name: "--generate-cli-skeleton", + description: + "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command", + args: { + name: "string", + suggestions: ["input", "output"], + }, + }, + ], + }, + ], }; export default completionSpec; diff --git a/src/aws/quicksight.ts b/src/aws/quicksight.ts index 26f0b5117f0..71ad5a0d2a3 100644 --- a/src/aws/quicksight.ts +++ b/src/aws/quicksight.ts @@ -8280,6 +8280,24 @@ const completionSpec: Fig.Spec = { name: "structure", }, }, + { + name: "--include-folder-memberships", + description: + "A Boolean that determines if the exported asset carries over information about the folders that the asset is a member of", + }, + { + name: "--no-include-folder-memberships", + description: + "A Boolean that determines if the exported asset carries over information about the folders that the asset is a member of", + }, + { + name: "--include-folder-members", + description: + "A setting that indicates whether you want to include folder assets. You can also use this setting to recursively include all subfolders of an exported folder", + args: { + name: "string", + }, + }, { name: "--cli-input-json", description: diff --git a/src/aws/redshift.ts b/src/aws/redshift.ts index 65c80ef55ca..4b65a7901b5 100644 --- a/src/aws/redshift.ts +++ b/src/aws/redshift.ts @@ -5803,7 +5803,7 @@ const completionSpec: Fig.Spec = { { name: "--s3-key-prefix", description: - "The prefix applied to the log file names. 
Constraints: Cannot exceed 512 characters Cannot contain spaces( ), double quotes (\"), single quotes ('), a backslash (\\), or control characters. The hexadecimal codes for invalid characters are: x00 to x20 x22 x27 x5c x7f or larger", + "The prefix applied to the log file names. Valid characters are any letter from any language, any whitespace character, any numeric character, and the following characters: underscore (_), period (.), colon (:), slash (/), equal (=), plus (+), backslash (\\), hyphen (-), at symbol (@)", args: { name: "string", }, diff --git a/src/aws/s3api.ts b/src/aws/s3api.ts index dbcb37fa006..24e654059b2 100644 --- a/src/aws/s3api.ts +++ b/src/aws/s3api.ts @@ -124,7 +124,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -132,7 +132,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32-c", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. 
For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -1763,7 +1763,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-algorithm", description: - "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter", + "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. 
If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter", args: { name: "string", }, @@ -3477,7 +3477,7 @@ const completionSpec: Fig.Spec = { { name: "head-object", description: - "The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata. A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body. Because of this, if the HEAD request generates an error, it returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. It's not possible to retrieve the exact exception of these error codes. Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Permissions General purpose bucket permissions - To use HEAD, you must have the s3:GetObject permission. You need the relevant read object (or version) permission for this operation. For more information, see Actions, resources, and condition keys for Amazon S3 in the Amazon S3 User Guide. If the object you request doesn't exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found error. If you don\u2019t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden error. 
Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession . If you enable x-amz-checksum-mode in the request and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object. Encryption Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a HEAD request for an object that uses these types of keys, you\u2019ll get an HTTP 400 Bad Request error. It's because the encryption method can't be changed when you retrieve the object. 
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are: x-amz-server-side-encryption-customer-algorithm x-amz-server-side-encryption-customer-key x-amz-server-side-encryption-customer-key-MD5 For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide. Directory bucket - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. Versioning If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response. If the specified version is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header. Directory buckets - Delete marker is not supported by directory buckets. Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request. HTTP Host header syntax Directory buckets - The HTTP Host header syntax is Bucket_name.s3express-az_id.region.amazonaws.com. For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the Amazon S3 User Guide. 
The following actions are related to HeadObject: GetObject GetObjectAttributes", + "The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata. A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body. Because of this, if the HEAD request generates an error, it returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. It's not possible to retrieve the exact exception of these error codes. Request headers are limited to 8 KB in size. For more information, see Common Request Headers. Permissions General purpose bucket permissions - To use HEAD, you must have the s3:GetObject permission. You need the relevant read object (or version) permission for this operation. For more information, see Actions, resources, and condition keys for Amazon S3 in the Amazon S3 User Guide. For more information about the permissions to S3 API operations by S3 resource types, see Required permissions for Amazon S3 API operations in the Amazon S3 User Guide. If the object you request doesn't exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found error. If you don\u2019t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden error. Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. 
Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession . If you enable x-amz-checksum-mode in the request and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object. Encryption Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a HEAD request for an object that uses these types of keys, you\u2019ll get an HTTP 400 Bad Request error. It's because the encryption method can't be changed when you retrieve the object. If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. 
The headers are: x-amz-server-side-encryption-customer-algorithm x-amz-server-side-encryption-customer-key x-amz-server-side-encryption-customer-key-MD5 For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide. Directory bucket - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. Versioning If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response. If the specified version is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header. Directory buckets - Delete marker is not supported by directory buckets. Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request. HTTP Host header syntax Directory buckets - The HTTP Host header syntax is Bucket_name.s3express-az_id.region.amazonaws.com. For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name . Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the Amazon S3 User Guide. The following actions are related to HeadObject: GetObject GetObjectAttributes", options: [ { name: "--bucket", @@ -5129,7 +5129,7 @@ const completionSpec: Fig.Spec = { { name: "put-bucket-lifecycle-configuration", description: - "This operation is not supported by directory buckets. 
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. Keep in mind that this will overwrite an existing lifecycle configuration, so if you want to retain any configuration details, they must be included in the new lifecycle configuration. For information about lifecycle configuration, see Managing your storage lifecycle. Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle. Rules You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Each rule consists of the following: A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, object size, or any combination of these. A status indicating whether the rule is in effect. One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions. For more information, see Object Lifecycle Management and Lifecycle Configuration Elements. Permissions By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). 
Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions. An explicit deny also supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject s3:DeleteObjectVersion s3:PutLifecycleConfiguration For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources. The following operations are related to PutBucketLifecycleConfiguration: Examples of Lifecycle Configuration GetBucketLifecycleConfiguration DeleteBucketLifecycle", + "This operation is not supported by directory buckets. Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. Keep in mind that this will overwrite an existing lifecycle configuration, so if you want to retain any configuration details, they must be included in the new lifecycle configuration. For information about lifecycle configuration, see Managing your storage lifecycle. Rules You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable. Bucket lifecycle configuration supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle. 
A lifecycle rule consists of the following: A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, object size, or any combination of these. A status indicating whether the rule is in effect. One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions. For more information, see Object Lifecycle Management and Lifecycle Configuration Elements. Permissions By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission. You can also explicitly deny permissions. An explicit deny also supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions: s3:DeleteObject s3:DeleteObjectVersion s3:PutLifecycleConfiguration For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources. 
The following operations are related to PutBucketLifecycleConfiguration: Examples of Lifecycle Configuration GetBucketLifecycleConfiguration DeleteBucketLifecycle", options: [ { name: "--bucket", @@ -5164,6 +5164,14 @@ const completionSpec: Fig.Spec = { name: "string", }, }, + { + name: "--transition-default-minimum-object-size", + description: + "Indicates which default minimum object size behavior is applied to the lifecycle configuration. all_storage_classes_128K - Objects smaller than 128 KB will not transition to any storage class by default. varies_by_storage_class - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB. To customize the minimum object size for any transition you can add a filter that specifies a custom ObjectSizeGreaterThan or ObjectSizeLessThan in the body of your transition rule. Custom filters always take precedence over the default transition behavior", + args: { + name: "string", + }, + }, { name: "--cli-input-json", description: @@ -5506,7 +5514,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-algorithm", description: - "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. 
If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance", + "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance", args: { name: "string", }, @@ -5975,7 +5983,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-algorithm", description: - "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. 
For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance", + "Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list: CRC32 CRC32C SHA1 SHA256 For more information, see Checking object integrity in the Amazon S3 User Guide. If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 ignores any provided ChecksumAlgorithm parameter and uses the checksum algorithm that matches the provided value in x-amz-checksum-algorithm . For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance", args: { name: "string", }, @@ -5983,7 +5991,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. 
This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -5991,7 +5999,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32-c", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -7003,7 +7011,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. 
For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -7011,7 +7019,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32-c", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide", args: { name: "string", }, @@ -7405,7 +7413,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the base64-encoded, 32-bit CRC32 checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide. Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the base64-encoded, 32-bit CRC-32 checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. 
Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide. Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail", args: { name: "string", }, @@ -7413,7 +7421,7 @@ const completionSpec: Fig.Spec = { { name: "--checksum-crc32-c", description: - "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the base64-encoded, 32-bit CRC32C checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide. Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail", + "This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the base64-encoded, 32-bit CRC-32C checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide. Only one checksum header can be specified at a time. 
If you supply multiple checksum headers, this request will fail", args: { name: "string", }, diff --git a/src/aws/workspaces.ts b/src/aws/workspaces.ts index 75e43eba118..b696f299399 100644 --- a/src/aws/workspaces.ts +++ b/src/aws/workspaces.ts @@ -692,7 +692,7 @@ const completionSpec: Fig.Spec = { { name: "create-workspaces", description: - "Creates one or more WorkSpaces. This operation is asynchronous and returns before the WorkSpaces are created. The MANUAL running mode value is only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use this value. For more information, see Amazon WorkSpaces Core. You don't need to specify the PCOIP protocol for Linux bundles because WSP is the default protocol for those bundles. User-decoupled WorkSpaces are only supported by Amazon WorkSpaces Core. Review your running mode to ensure you are using one that is optimal for your needs and budget. For more information on switching running modes, see Can I switch between hourly and monthly billing?", + "Creates one or more WorkSpaces. This operation is asynchronous and returns before the WorkSpaces are created. The MANUAL running mode value is only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use this value. For more information, see Amazon WorkSpaces Core. You don't need to specify the PCOIP protocol for Linux bundles because DCV (formerly WSP) is the default protocol for those bundles. User-decoupled WorkSpaces are only supported by Amazon WorkSpaces Core. Review your running mode to ensure you are using one that is optimal for your needs and budget. 
For more information on switching running modes, see Can I switch between hourly and monthly billing?", options: [ { name: "--workspaces", @@ -2568,7 +2568,7 @@ const completionSpec: Fig.Spec = { { name: "--ingestion-process", description: - "The ingestion process to be used when importing the image, depending on which protocol you want to use for your BYOL Workspace image, either PCoIP, WorkSpaces Streaming Protocol (WSP), or bring your own protocol (BYOP). To use WSP, specify a value that ends in _WSP. To use PCoIP, specify a value that does not end in _WSP. To use BYOP, specify a value that ends in _BYOP. For non-GPU-enabled bundles (bundles other than Graphics or GraphicsPro), specify BYOL_REGULAR, BYOL_REGULAR_WSP, or BYOL_REGULAR_BYOP, depending on the protocol. The BYOL_REGULAR_BYOP and BYOL_GRAPHICS_G4DN_BYOP values are only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use these values. For more information, see Amazon WorkSpaces Core", + "The ingestion process to be used when importing the image, depending on which protocol you want to use for your BYOL Workspace image, either PCoIP, DCV, or bring your own protocol (BYOP). To use DCV, specify a value that ends in _DCV. To use PCoIP, specify a value that does not end in _DCV. To use BYOP, specify a value that ends in _BYOP. For non-GPU-enabled bundles (bundles other than Graphics or GraphicsPro), specify BYOL_REGULAR, BYOL_REGULAR_DCV, or BYOL_REGULAR_BYOP, depending on the protocol. The BYOL_REGULAR_BYOP and BYOL_GRAPHICS_G4DN_BYOP values are only supported by Amazon WorkSpaces Core. Contact your account team to be allow-listed to use these values. For more information, see Amazon WorkSpaces Core", args: { name: "string", }, @@ -2598,7 +2598,7 @@ const completionSpec: Fig.Spec = { { name: "--applications", description: - "If specified, the version of Microsoft Office to subscribe to. Valid only for Windows 10 and 11 BYOL images.
For more information about subscribing to Office for BYOL images, see Bring Your Own Windows Desktop Licenses. Although this parameter is an array, only one item is allowed at this time. During the image import process, non-GPU WSP WorkSpaces with Windows 11 support only Microsoft_Office_2019. GPU WSP WorkSpaces with Windows 11 do not support Office installation", + "If specified, the version of Microsoft Office to subscribe to. Valid only for Windows 10 and 11 BYOL images. For more information about subscribing to Office for BYOL images, see Bring Your Own Windows Desktop Licenses. Although this parameter is an array, only one item is allowed at this time. During the image import process, non-GPU DCV (formerly WSP) WorkSpaces with Windows 11 support only Microsoft_Office_2019. GPU DCV (formerly WSP) WorkSpaces with Windows 11 do not support Office installation", args: { name: "list", },
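Several of the updated S3 descriptions above refer to "the base64-encoded, 32-bit CRC-32 checksum of the object" that callers pass in `--checksum-crc32`. As a rough illustration of what producing such a value looks like, here is a minimal TypeScript sketch (assumes a Node.js runtime for `Buffer`; the `checksumCrc32Header` helper name is hypothetical and not part of this diff). Per the S3 checksum documentation, the header value is the base64 encoding of the checksum's big-endian bytes:

```typescript
// Table-based CRC-32 (IEEE polynomial 0xEDB88320, reflected), the variant
// behind S3's x-amz-checksum-crc32 / --checksum-crc32.
function makeCrcTable(): Uint32Array {
  const table = new Uint32Array(256);
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) {
      c = c & 1 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
    }
    table[n] = c >>> 0;
  }
  return table;
}

const CRC_TABLE = makeCrcTable();

function crc32(data: Uint8Array): number {
  let c = 0xffffffff;
  for (const byte of data) {
    c = CRC_TABLE[(c ^ byte) & 0xff] ^ (c >>> 8);
  }
  return (c ^ 0xffffffff) >>> 0;
}

// Hypothetical helper: base64-encode the checksum's 4 big-endian bytes,
// which is the string S3 expects in the checksum header.
function checksumCrc32Header(data: Uint8Array): string {
  const crc = crc32(data);
  const bytes = Buffer.from([
    (crc >>> 24) & 0xff,
    (crc >>> 16) & 0xff,
    (crc >>> 8) & 0xff,
    crc & 0xff,
  ]);
  return bytes.toString("base64");
}

const payload = new TextEncoder().encode("hello");
console.log(checksumCrc32Header(payload)); // NhCmhg==
```

CRC-32C (`--checksum-crc32-c`) works the same way but uses the Castagnoli polynomial (0x82F63B78 reflected) instead of the IEEE one, so the two headers are not interchangeable even though both are 32-bit base64-encoded values.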