From 85c9e9415e46f147fc546e3a929645f5bd0cb2e2 Mon Sep 17 00:00:00 2001
From: jhen
Date: Fri, 4 Aug 2023 05:10:55 +0800
Subject: [PATCH] docs(readme): correct n_gpu_layers usage

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7590c9e..d6d1eee 100644
--- a/README.md
+++ b/README.md
@@ -82,7 +82,7 @@ jest.mock('llama.rn', () => require('llama.rn/jest/mock'))
 ## NOTE
 
 - The [Extended Virtual Addressing](https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_kernel_extended-virtual-addressing) capability is recommended to enable on iOS project.
-- Currently we got some iOS devices crash by enable Metal ('options.gpuLayers > 1'), to avoid this problem, we're recommended to check [Metal 3 supported devices](https://support.apple.com/en-us/HT205073). But currently the cause is still unclear and we are giving this issue a low priority.
+- Currently, some iOS devices crash when Metal is enabled (`params.n_gpu_layers > 0`). To avoid this, we recommend checking the [Metal 3 supported devices](https://support.apple.com/en-us/HT205073) list. The cause is still unclear, so we are giving this issue a low priority.
 - We can use the ggml tensor allocor (See [llama.cpp#2411](https://github.com/ggerganov/llama.cpp/pull/2411)) by use `RNLLAMA_DISABLE_METAL=1` env on pod install, which reduces the memory usage. If you only want to use CPU, this is very useful.
 
 ## Contributing