docs(readme): correct n_gpu_layers usage
jhen0409 committed Aug 3, 2023
1 parent dba40e9 commit 85c9e94
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
@@ -82,7 +82,7 @@ jest.mock('llama.rn', () => require('llama.rn/jest/mock'))
## NOTE

- The [Extended Virtual Addressing](https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_kernel_extended-virtual-addressing) capability is recommended to enable on iOS project.
- Currently, some iOS devices crash when Metal is enabled ('options.gpuLayers > 1'). To avoid this, we recommend checking [Metal 3 supported devices](https://support.apple.com/en-us/HT205073). The root cause is still unclear, so we are treating this issue as low priority.
- Currently, some iOS devices crash when Metal is enabled ('params.n_gpu_layers > 0'). To avoid this, we recommend checking [Metal 3 supported devices](https://support.apple.com/en-us/HT205073). The root cause is still unclear, so we are treating this issue as low priority.
- We can use the ggml tensor allocator (see [llama.cpp#2411](https://github.com/ggerganov/llama.cpp/pull/2411)) by setting the `RNLLAMA_DISABLE_METAL=1` env variable on pod install, which reduces memory usage. This is very useful if you only want to use the CPU.
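The device check described in the note above could be sketched as a small helper. This is a hypothetical sketch, not part of llama.rn's API: it assumes the app has already determined Metal 3 support by some means and simply decides what to pass as `n_gpu_layers`.

```typescript
// Hypothetical helper (not shipped by llama.rn): pick n_gpu_layers so that
// Metal ('params.n_gpu_layers > 0') is only enabled on devices the app has
// verified against Apple's Metal 3 support list.
function chooseGpuLayers(supportsMetal3: boolean, desiredLayers: number): number {
  // On unsupported devices, fall back to CPU-only (0 layers offloaded).
  return supportsMetal3 ? Math.max(desiredLayers, 0) : 0;
}

console.log(chooseGpuLayers(true, 1));  // Metal enabled: 1
console.log(chooseGpuLayers(false, 1)); // CPU fallback: 0
```

How `supportsMetal3` is detected (e.g. via a device-info library) is left to the app; the point is only that the GPU-layer count gates Metal on or off.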

## Contributing
