Tweaks to code block.
scottstraughan committed Aug 16, 2024
1 parent c22ebd5 commit 855a409
Showing 2 changed files with 2 additions and 2 deletions.
@@ -106,7 +106,7 @@ The first step is to clone the llama.cpp repository, and configure cmake as usual
```shell
$ git clone https://github.com/ggerganov/llama.cpp.git
$ cd llama.cpp
$ git checkout 3c04bf6da89eaf4c7d317e0518f0687dfcbf2de7
$ mkdir build && cd build
$ cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=80
```
@@ -21,7 +21,7 @@ Now we are going to build the converted code directly using the CMake file that
builds the main binary for llama.cpp.

```shell
$ cd dpct_out && mkdir syclbuild && cd syclbuild
$ MKLROOT=/home/ruyman/soft/mkl CC=icx CXX=icpx cmake .. -DLLAMA_CUBLAS=ON -DCMAKE_CUDA_ARCHITECTURES=80 -DCMAKE_CXX_FLAGS="-fsycl -fsycl-targets=nvptx64-nvidia-cuda -L${MKLROOT}/lib"
$ make main
```