Update "ContributingGuide" for latest llama.cpp #951

Closed · wants to merge 3 commits
20 changes: 8 additions & 12 deletions docs/ContributingGuide.md
@@ -32,29 +32,25 @@ As shown in [llama.cpp cmake file](https://github.com/ggerganov/llama.cpp/blob/m

```cpp
  option(BUILD_SHARED_LIBS "build shared libraries") // Please always enable it
- option(LLAMA_NATIVE "llama: enable -march=native flag") // Could be disabled
- option(LLAMA_AVX "llama: enable AVX") // Enable it if the highest supported avx level is AVX
- option(LLAMA_AVX2 "llama: enable AVX2") // Enable it if the highest supported avx level is AVX2
- option(LLAMA_AVX512 "llama: enable AVX512") // Enable it if the highest supported avx level is AVX512
- option(LLAMA_BLAS "llama: use BLAS") // Enable it if you want to use BLAS library to acclerate the computation on CPU
- option(LLAMA_CUDA "llama: use CUDA") // Enable it if you have CUDA device
- option(LLAMA_CLBLAST "llama: use CLBlast") // Enable it if you have a device with CLBLast or OpenCL support, for example, some AMD GPUs.
- option(LLAMA_VULKAN "llama: use Vulkan") // Enable it if you have a device with Vulkan support
- option(LLAMA_METAL "llama: use Metal") // Enable it if you are using a MAC with Metal device.
+ option(GGML_NATIVE "llama: enable -march=native flag") // Could be disabled
+ option(GGML_CUDA "llama: use CUDA") // Enable it if you have CUDA device
+ option(GGML_OPENBLAS "llama: use OpenBLAS") // Enable it if you are using OpenBLAS
+ option(GGML_VULKAN "llama: use Vulkan") // Enable it if you have a device with Vulkan support
+ option(GGML_METAL "llama: use Metal") // Enable it if you are using a MAC with Metal device.
  option(LLAMA_BUILD_TESTS "llama: build tests") // Please disable it.
  option(LLAMA_BUILD_EXAMPLES "llama: build examples") // Please disable it.
  option(LLAMA_BUILD_SERVER "llama: build server example")// Please disable it.
```

- Most importantly, `-DBUILD_SHARED_LIBS=ON` must be added to the cmake instruction and other options depends on you. For example, when building with cublas but without openblas, use the following instruction:
+ Most importantly, `-DBUILD_SHARED_LIBS=ON` must be added to the cmake instruction and other options depends on you. For example, when building with CUDA but without openblas, use the following instruction:

```bash
  mkdir build && cd build
- cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON
+ cmake .. -DGGML_CUDAS=ON -DBUILD_SHARED_LIBS=ON
  cmake --build . --config Release
```

Member (inline comment on the `cmake ..` line above): should this be GGML_CUDA?
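Similarly, a CPU-only build that also switches off the test, example and server targets recommended above could look like the sketch below. The flag names are taken from the option list in this guide and may differ between llama.cpp revisions, so verify them against the revision you are building:

```bash
# Sketch only: shared library on, auxiliary targets off.
# Flag names come from the option list above and may differ
# between llama.cpp revisions.
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON \
         -DLLAMA_BUILD_TESTS=OFF \
         -DLLAMA_BUILD_EXAMPLES=OFF \
         -DLLAMA_BUILD_SERVER=OFF
cmake --build . --config Release
```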

- Now you could find the `llama.dll`, `libllama.so` or `llama.dylib` in your build directory (or `build/bin`).
+ Now you could find the `llama.dll`, `libllama.so` or `llama.dylib` in `build/src`.
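If the library does not appear where you expect, a quick search of the build tree shows where your generator placed it. The updated guide assumes `build/src`, but the output directory can vary by platform and CMake generator:

```bash
# Locate the compiled shared library; the updated guide assumes build/src,
# but the output directory can vary with the platform and CMake generator.
find build -name "llama.dll" -o -name "libllama.so" -o -name "llama.dylib"
```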

To load the compiled native library, please add the following code to the very beginning of your code.

2 changes: 1 addition & 1 deletion llama.cpp
Member: Looks like you've committed an update to llama.cpp, can you undo that and just submit the changes to ContributingGuide.md. Thanks.

Contributor (author): ah crud. sorry about that. didn't notice that snuck in there.

Submodule llama.cpp updated 266 files