From 1a2d19e5e8abc561ead6dee5099ce2ce73fcb773 Mon Sep 17 00:00:00 2001
From: Brent George
Date: Wed, 7 Aug 2024 21:54:48 -0400
Subject: [PATCH 1/2] nullify long default value in README

---
 charts/deepgram-self-hosted/README.md   | 2 +-
 charts/deepgram-self-hosted/values.yaml | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/charts/deepgram-self-hosted/README.md b/charts/deepgram-self-hosted/README.md
index bc1de8b..c611e03 100644
--- a/charts/deepgram-self-hosted/README.md
+++ b/charts/deepgram-self-hosted/README.md
@@ -271,7 +271,7 @@ If you encounter issues while deploying or using Deepgram, consider the followin
 | global.deepgramSecretRef | string | `nil` | Name of the pre-configured K8s Secret containing your Deepgram self-hosted API key. See chart docs for more details. |
 | global.outstandingRequestGracePeriod | int | `1800` | When an API or Engine container is signaled to shutdown via Kubernetes sending a SIGTERM signal, the container will stop listening on its port, and no new requests will be routed to that container. However, the container will continue to run until all existing batch or streaming requests have completed, after which it will gracefully shut down. Batch requests should be finished within 10-15 minutes, but streaming requests can proceed indefinitely. outstandingRequestGracePeriod defines the period (in sec) after which Kubernetes will forcefully shutdown the container, terminating any outstanding connections. 1800 / 60 sec/min = 30 mins |
 | global.pullSecretRef | string | `nil` | If using images from the Deepgram Quay image repositories, or another private registry to which your cluster doesn't have default access, you will need to provide a pre-configured K8s Secret with image repository credentials. See chart docs for more details. |
-| gpu-operator | object | `{"driver":{"enabled":true,"version":"550.54.15"},"enabled":true,"toolkit":{"enabled":true,"version":"v1.15.0-ubi8"}}` | Passthrough values for [NVIDIA GPU Operator Helm chart](https://github.com/NVIDIA/gpu-operator/blob/master/deployments/gpu-operator/values.yaml) You may use the NVIDIA GPU Operator to manage installation of NVIDIA drivers and the container toolkit on nodes with attached GPUs. |
+| gpu-operator | object | `` | Passthrough values for [NVIDIA GPU Operator Helm chart](https://github.com/NVIDIA/gpu-operator/blob/master/deployments/gpu-operator/values.yaml) You may use the NVIDIA GPU Operator to manage installation of NVIDIA drivers and the container toolkit on nodes with attached GPUs. |
 | gpu-operator.driver.enabled | bool | `true` | Whether to install NVIDIA drivers on nodes where a NVIDIA GPU is detected. If your Kubernetes nodes run a base image that comes with NVIDIA drivers pre-configured, disable this option, but keep the parent `gpu-operator` and sibling `toolkit` options enabled. |
 | gpu-operator.driver.version | string | `"550.54.15"` | NVIDIA driver version to install. |
 | gpu-operator.enabled | bool | `true` | Whether to install the NVIDIA GPU Operator to manage driver and/or container toolkit installation. See the list of [supported Operating Systems](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html#supported-operating-systems-and-kubernetes-platforms) to verify compatibility with your cluster/nodes. Disable this option if your cluster/nodes are not compatible. If disabled, you will need to self-manage NVIDIA software installation on all nodes where you want to schedule Deepgram Engine pods. |
diff --git a/charts/deepgram-self-hosted/values.yaml b/charts/deepgram-self-hosted/values.yaml
index 0bf5c4a..1cad41f 100644
--- a/charts/deepgram-self-hosted/values.yaml
+++ b/charts/deepgram-self-hosted/values.yaml
@@ -605,6 +605,7 @@ licenseProxy:
 
 # -- Passthrough values for [NVIDIA GPU Operator Helm chart](https://github.com/NVIDIA/gpu-operator/blob/master/deployments/gpu-operator/values.yaml)
 # You may use the NVIDIA GPU Operator to manage installation of NVIDIA drivers and the container toolkit on nodes with attached GPUs.
+# @default -- ``
 gpu-operator:
   # -- Whether to install the NVIDIA GPU Operator to manage driver and/or container toolkit installation.
   # See the list of [supported Operating Systems](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html#supported-operating-systems-and-kubernetes-platforms)

From 5861245d0733809d0d43b28aad702e645652a0d4 Mon Sep 17 00:00:00 2001
From: Brent George
Date: Wed, 7 Aug 2024 22:04:01 -0400
Subject: [PATCH 2/2] add whitespace to pass CI

---
 charts/deepgram-self-hosted/CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/charts/deepgram-self-hosted/CHANGELOG.md b/charts/deepgram-self-hosted/CHANGELOG.md
index 300e6fd..cc63a92 100644
--- a/charts/deepgram-self-hosted/CHANGELOG.md
+++ b/charts/deepgram-self-hosted/CHANGELOG.md
@@ -13,6 +13,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 ## [0.4.0] - 2024-07-25
 
 ### Added
+
 - Introduced entity detection feature flag for API containers (`false` by default).
 - Updated default container tags to July 2024 release. Refer to the [main Deepgram changelog](https://deepgram.com/changelog/deepgram-self-hosted-july-2024-release-240725) for additional details. Highlights include:
   - Support for Deepgram's new English/Spanish multilingual code-switching model
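Note on the `@default` comment added in the first commit: the parameter table in `README.md` appears to be generated with [helm-docs](https://github.com/norwoodj/helm-docs) (the `# --` comment convention in `values.yaml` suggests this), and helm-docs lets an explicit `@default` annotation override whatever value it would otherwise serialize into the table's Default column. A minimal sketch of that convention, using a hypothetical `example-subchart` key rather than anything from this chart:

```yaml
# values.yaml (sketch, assuming helm-docs comment conventions)

# -- Passthrough values for a dependency subchart. Without an explicit
# @default annotation, helm-docs would render this entire nested object
# as one long serialized string in the README's Default column.
# @default -- ``
example-subchart:
  enabled: true
  driver:
    enabled: true
    version: "550.54.15"
```

With the annotation in place, the Default column for `example-subchart` renders as an empty code span instead of the full serialized object, which is the effect the first commit achieves for the `gpu-operator` value.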