Merge llvm branch to master #932

Draft
wants to merge 105 commits into base: master
Changes from all commits (105 commits)
d30c027
Disable python bindings for faster build
pramodk Nov 27, 2020
ebe6539
Integrate LLVM into CMake build system
pramodk Nov 28, 2020
8994f8e
Code infrastructure for LLVM code generation backend
pramodk Nov 28, 2020
a3f7891
Azure CI fixes for LLVM build and README update
pramodk Nov 28, 2020
3b17307
Print build status after cmake configure stage
pramodk Nov 29, 2020
cea865d
Adding test template for LLVM codegen
pramodk Nov 29, 2020
00e4ac0
Initial LLVM codegen vistor routines (#457)
georgemitenkov Dec 22, 2020
6623eb7
FunctionBlock code generation and terminator checks (#470)
georgemitenkov Dec 25, 2020
13b129b
Add option to run LLVM optimisation passes (#471)
pramodk Dec 28, 2020
9c127f2
Add function call LLVM code generation (#477)
georgemitenkov Dec 30, 2020
470d54a
Support for IndexedName codegen (#478)
georgemitenkov Dec 30, 2020
1674f3b
Improvements for code generation specific transformations (#483)
pramodk Jan 6, 2021
f6d8b85
nrn_state function generation in NMODL AST to help LLVM codegen (#484)
pramodk Jan 6, 2021
5e0fee0
Running functions from MOD files via LLVM JIT (#482)
georgemitenkov Jan 8, 2021
34bbaab
Extended support for binary ops and refactoring (#489)
georgemitenkov Jan 12, 2021
1f4c8dc
Avoid converting LOCAL statement in all StatementBlocks (#492)
pramodk Jan 12, 2021
aa639de
Handle CodegenVarType type in JSON printer (#494)
pramodk Jan 13, 2021
a541c69
Integrating LLVM helper into LLVM visitor (#497)
georgemitenkov Jan 25, 2021
87baf0f
LLVM code generation for if/else statements (#499)
georgemitenkov Jan 25, 2021
05b721f
Added error handling for values not in scope (#502)
georgemitenkov Jan 26, 2021
5077c68
Added support for WHILE statement (#501)
georgemitenkov Jan 26, 2021
2223d00
Create mechanism instance struct in LLVM IR (#507)
iomaganaris Feb 1, 2021
9ca602e
Printf support in LLVM IR codegen (#510)
georgemitenkov Feb 3, 2021
b3f6fa2
Fix issue error: ‘runtime_error’ is not a member of ‘std’ (#512)
iomaganaris Feb 15, 2021
f15e3c5
Move code gen specific InstanceStruct node to codegen.yaml (#526)
pramodk Mar 5, 2021
f5dc06b
* Improvements to codegen helper (Part I)
pramodk Feb 27, 2021
451fe17
Addressing TODOs for Instance struct (#533) Part II
georgemitenkov Mar 6, 2021
e1e8eab
Unit test for scalar state kernel generation in LLVM (#547)
georgemitenkov Mar 9, 2021
fd2053e
Indexed name codegen improvements (#550)
georgemitenkov Mar 12, 2021
57cb77d
Add InstanceStruct test data generation helper and unit test (#546)
iomaganaris Mar 13, 2021
5768d68
Add the remainder loop for vectorization of DERIVATIVE block (#534)
alkino Mar 17, 2021
0497ee3
Always initialize return variable in function block (#554)
alkino Mar 19, 2021
5989460
Running a kernel with NMODL-LLVM JIT (#549)
georgemitenkov Apr 9, 2021
b5ceca6
Loop epilogue fix for LLVM visitor helper (#567)
georgemitenkov Apr 9, 2021
9e2284b
Gather support and vectorisation fixes for LLVM code generation (#568)
georgemitenkov Apr 10, 2021
0cb1cea
Verification and file utilities for LLVM IR codegen (#582)
georgemitenkov Apr 13, 2021
c839e68
Add gather execution test (#591)
georgemitenkov Apr 16, 2021
cffb50d
Fixed loop allocations (#590)
georgemitenkov Apr 17, 2021
481c728
Benchmarking LLVM code generation (#583)
georgemitenkov Apr 17, 2021
8b2d598
Minor benchmarking improvement (#593)
pramodk Apr 18, 2021
2d67af2
Bug fix in codegen helper: delete LOCAL statement (#595)
pramodk Apr 19, 2021
c884dd8
LLVM 13 compatibility and fixing void* type (#603)
georgemitenkov Apr 20, 2021
3c38a2e
Allow LOCAL variable inside StatementBlock for LLVM IR generation (#599)
pramodk Apr 20, 2021
1f250ee
Update CI with LLVM v13 (trunk) (#605)
pramodk Apr 22, 2021
f6dee6e
Integrating vector maths library into LLVM codegen (#604)
georgemitenkov Apr 22, 2021
b50027a
Using shared libraries in LLVM JIT (#609)
georgemitenkov Apr 22, 2021
4c9e1e1
Avoid local std::ofstream object causing segfault (#614)
pramodk Apr 24, 2021
be984b8
Refactoring of runners' infrastructure and dumping object files (#620)
georgemitenkov Apr 30, 2021
3cf65cf
Optimisation levels for benchmarking (#623)
georgemitenkov May 7, 2021
2356c19
Adding function debug information (#628)
georgemitenkov May 8, 2021
92eadeb
Fixed using benchmarking_info in TestRunner (#631)
georgemitenkov May 8, 2021
dbccdaa
Fixed addition of SOLVE block to kernel's FOR loop (#636)
georgemitenkov May 11, 2021
e299af1
IR builder redesign for LLVM IR code generation pipeline (#634)
georgemitenkov May 13, 2021
acbcd1b
Fixed initialisation of `CodegenAtomicStatement` (#642)
georgemitenkov May 13, 2021
acfd3c3
Fix instance struct data generation for testing/benchmarking (#641)
pramodk May 13, 2021
0e09468
Basic scatter support (#643)
georgemitenkov May 13, 2021
f7d00dd
Benchmarking code re-organisation and minor improvements (#647)
pramodk May 16, 2021
ac283f2
Added attributes and metadata to LLVM IR compute kernels (#648)
georgemitenkov May 17, 2021
05e9cfa
Added loaded value to the stack (#655)
georgemitenkov May 18, 2021
9953758
Basic predication support for LLVM backend (#652)
georgemitenkov May 20, 2021
7b45925
Improvements for LLVM code generation and benchmarking (#661)
georgemitenkov May 20, 2021
dcaff9a
Fixed `alloca`s insertion point for LLVM backend (#663)
georgemitenkov May 20, 2021
9109139
Fast math flags for LLVM backend (#662)
georgemitenkov May 21, 2021
fa5c7bf
Avoid generating LLVM IR for Functions and Procedures if inlined (#664)
iomaganaris May 21, 2021
30e53c7
Fixed typo in benchmarking metrics (#665)
georgemitenkov May 21, 2021
0362f66
Remove only inlined blocks from AST based on symtab properties (#668)
iomaganaris May 21, 2021
480f26e
Use VarName on the RHS of assignment expression (#669)
pramodk May 25, 2021
b907544
[LLVM] SLEEF and libsystem_m vector libraries support (#674)
georgemitenkov May 30, 2021
a1c4b0f
[LLVM] Enhancements for optimization pipeline (#683)
georgemitenkov Jun 3, 2021
6506fcf
[LLVM] Added saving to file utility (#685)
georgemitenkov Jun 3, 2021
d48bb20
[LLVM] Aliasing and `cpu` options for LLVM visitor and the benchmark …
georgemitenkov Jun 3, 2021
c95dec9
Fix azure yaml pipeline from merge (#687)
pramodk Jun 3, 2021
6f5e037
[LLVM] Support for newer versions of LLVM APIs
georgemitenkov Mar 8, 2022
58476bd
Fix build issues for the rebased branch
pramodk Mar 8, 2022
a30d52e
[LLVM] Allocate InstanceStruct on the GPU using cudaMallocManaged (#815)
iomaganaris Mar 10, 2022
a755719
[LLVM][GPU] Separated CPU and GPU CLI options (#817)
georgemitenkov Mar 14, 2022
412af4c
[LLVM][refactoring] Added platform abstraction (#818)
georgemitenkov Mar 15, 2022
2d05aaa
[LLVM][GPU] Added GPU-specific AST transformations (#819)
georgemitenkov Mar 22, 2022
2e5e149
[LLVM][GPU] Basic code generation for NVPTX backend (#820)
georgemitenkov Mar 22, 2022
6b9df33
Print kernel wrappers and nrn_init based on Instance Struct (#551)
iomaganaris Mar 23, 2022
84a85d4
[LLVM][GPU] NVPTX specific passes for code generation (#833)
georgemitenkov Mar 28, 2022
afec6c6
[LLVM] Code formatting changes (#838)
iomaganaris Apr 5, 2022
ac6d731
[LLVM][GPU][+refactoring] Replacement of math intrinsics with library…
georgemitenkov Apr 8, 2022
5ffc590
JIT invocation from python for benchmarks (#832)
Apr 27, 2022
af1ff70
Fixes issue with debug printing of visitors (#854)
iomaganaris Apr 29, 2022
49a13af
Support for Breakpoint block (nrn_cur) for code generation (#645)
pramodk May 2, 2022
8797f9b
[LLVM][GPU] Added CUDADriver to execute benchmark on GPU (#829)
iomaganaris May 9, 2022
b13ca90
[LLVM][GPU] Atomic updates support (#853)
georgemitenkov May 12, 2022
ece5c47
Replaced fmt literals with fmt::format
iomaganaris May 12, 2022
0c8f566
[LLVM][FIX] Float generation fix in LLVM helper visitor (#865)
georgemitenkov May 12, 2022
a784b53
[CP-859] Code generation changes for "inline" scopmath solvers. (#859…
iomaganaris May 17, 2022
253f639
[LLVM] Fixes compilation with LLVM codegen disabled (#867)
iomaganaris May 19, 2022
77c4f74
[CP-870] Cherry-pick sympy fix from master (#872)
iomaganaris May 19, 2022
a5577bb
[LLVM][SIMD] Atomic updates support (#864)
georgemitenkov May 23, 2022
bf3c125
Install explicitly LLVM 13.0.1 in MacOS builds in Azure (#898)
iomaganaris Jul 18, 2022
18856f4
Fix nrn_cur kernel code generation unit test (#892)
iomaganaris Jul 18, 2022
44688e3
[LLVM][refactoring] Annotations pass and no more wrappers (#893)
georgemitenkov Aug 22, 2022
d2211e0
[LLVM] Instantiate CodegenLLVMVisitor of PyJIT with fast math and deb…
iomaganaris Aug 24, 2022
08555ee
Fix issues after rebase
iomaganaris Sep 15, 2022
2188bb2
Update desired cmake version in github actions
iomaganaris Sep 15, 2022
958e9cd
Make Clang and CMake format happy
iomaganaris Sep 15, 2022
c5678e5
Disable llvm backend by default
iomaganaris Sep 15, 2022
fee9a0b
Fix benchmark linking with visitors
iomaganaris Sep 16, 2022
a967a3f
Setup sanitizer options for new test
iomaganaris Sep 16, 2022
de5b144
Added supression for undefined behavior in std random number generator
iomaganaris Sep 16, 2022
2 changes: 1 addition & 1 deletion .github/workflows/coverage.yml
@@ -18,7 +18,7 @@ on:
env:
CMAKE_BUILD_PARALLEL_LEVEL: 3
CTEST_PARALLEL_LEVEL: 1
-DESIRED_CMAKE_VERSION: 3.15.0
+DESIRED_CMAKE_VERSION: 3.17.0
PYTHON_VERSION: 3.8

jobs:
2 changes: 1 addition & 1 deletion .github/workflows/nmodl-ci.yml
@@ -17,7 +17,7 @@ on:
env:
CTEST_PARALLEL_LEVEL: 1
PYTHON_VERSION: 3.8
-DESIRED_CMAKE_VERSION: 3.15.0
+DESIRED_CMAKE_VERSION: 3.17.0

jobs:
ci:
2 changes: 1 addition & 1 deletion .github/workflows/nmodl-doc.yml
@@ -17,7 +17,7 @@ on:
env:
BUILD_TYPE: Release
PYTHON_VERSION: 3.8
-DESIRED_CMAKE_VERSION: 3.15.0
+DESIRED_CMAKE_VERSION: 3.17.0

jobs:
ci:
39 changes: 27 additions & 12 deletions .gitlab-ci.yml
@@ -41,7 +41,8 @@ trigger cvf:
.spack_nmodl:
variables:
SPACK_PACKAGE: nmodl
-SPACK_PACKAGE_SPEC: ~legacy-unit+python
+SPACK_PACKAGE_SPEC: ~legacy-unit+python+llvm
+SPACK_INSTALL_EXTRA_FLAGS: -v

spack_setup:
extends: .spack_setup_ccache
@@ -65,14 +66,6 @@ build:intel:
variables:
SPACK_PACKAGE_COMPILER: intel

-build:nvhpc:
-  extends:
-    - .spack_build
-    - .spack_nmodl
-  variables:
-    SPACK_PACKAGE_COMPILER: nvhpc
-    SPACK_PACKAGE_DEPENDENCIES: ^bison%gcc^flex%gcc^py-jinja2%gcc^py-sympy%gcc^py-pyyaml%gcc

.nmodl_tests:
variables:
# https://github.com/BlueBrain/nmodl/issues/737
@@ -84,8 +77,30 @@ test:intel:
- .nmodl_tests
needs: ["build:intel"]

-test:nvhpc:
+.benchmark_config:
variables:
bb5_ntasks: 1
bb5_cpus_per_task: 1
bb5_memory: 16G
bb5_exclusive: full
bb5_constraint: volta # V100 GPU node

.build_allocation:
variables:
bb5_ntasks: 2 # so we block 16 cores
bb5_cpus_per_task: 8 # ninja -j {this}
bb5_memory: 76G # ~16*384/80

build_cuda:gcc:
extends: [.spack_build, .build_allocation]
variables:
SPACK_PACKAGE: nmodl
SPACK_PACKAGE_SPEC: ~legacy-unit+python+llvm+llvm_cuda
SPACK_INSTALL_EXTRA_FLAGS: -v
SPACK_PACKAGE_COMPILER: gcc

test_benchmark:gcc:
extends:
- .benchmark_config
- .ctest
- .nmodl_tests
-needs: ["build:nvhpc"]
+needs: ["build_cuda:gcc"]
1 change: 1 addition & 0 deletions .sanitizers/undefined.supp
@@ -1,3 +1,4 @@
implicit-integer-sign-change:double vector[2] Eigen::internal::pabs<double vector[2]>(double vector[2] const&)
unsigned-integer-overflow:nmodl::fast_math::vexp(double)
unsigned-integer-overflow:nmodl::fast_math::vexpm1(double)
unsigned-integer-overflow:std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::_M_gen_rand()
38 changes: 37 additions & 1 deletion CMakeLists.txt
@@ -5,7 +5,7 @@
# See top-level LICENSE file for details.
# =============================================================================

-cmake_minimum_required(VERSION 3.15 FATAL_ERROR)
+cmake_minimum_required(VERSION 3.17 FATAL_ERROR)

project(NMODL LANGUAGES CXX)

@@ -22,6 +22,11 @@ set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/bin)
# =============================================================================
option(NMODL_ENABLE_PYTHON_BINDINGS "Enable pybind11 based python bindings" ON)
option(NMODL_ENABLE_LEGACY_UNITS "Use original faraday, R, etc. instead of 2019 nist constants" OFF)
option(NMODL_ENABLE_LLVM "Enable LLVM based code generation" OFF)
option(NMODL_ENABLE_LLVM_GPU "Enable LLVM based GPU code generation" OFF)
option(NMODL_ENABLE_LLVM_CUDA "Enable LLVM CUDA backend to run GPU benchmark" OFF)
option(NMODL_ENABLE_JIT_EVENT_LISTENERS "Enable JITEventListener for Perf and Vtune" OFF)

if(NMODL_ENABLE_LEGACY_UNITS)
add_definitions(-DUSE_LEGACY_UNITS)
endif()
@@ -174,6 +179,21 @@ cpp_cc_find_python_module(sympy 1.3 REQUIRED)
cpp_cc_find_python_module(textwrap 0.9 REQUIRED)
cpp_cc_find_python_module(yaml 3.12 REQUIRED)

# =============================================================================
# Find LLVM dependencies
# =============================================================================
if(NMODL_ENABLE_LLVM)
include(cmake/LLVMHelper.cmake)
include_directories(${LLVM_INCLUDE_DIRS})
add_definitions(-DNMODL_LLVM_BACKEND)
if(NMODL_ENABLE_LLVM_CUDA)
enable_language(CUDA)
find_package(CUDAToolkit)
include_directories(${CUDAToolkit_INCLUDE_DIRS})
add_definitions(-DNMODL_LLVM_CUDA_BACKEND)
endif()
endif()

# =============================================================================
# Compiler specific flags for external submodules
# =============================================================================
@@ -207,6 +227,9 @@ set(MEMORYCHECK_COMMAND_OPTIONS
# do not enable tests if nmodl is used as submodule
if(NOT NMODL_AS_SUBPROJECT)
include(CTest)
if(NMODL_ENABLE_LLVM)
add_subdirectory(test/benchmark)
endif()
add_subdirectory(test/unit)
add_subdirectory(test/integration)
endif()
@@ -271,6 +294,19 @@ message(STATUS "Python Bindings | ${NMODL_ENABLE_PYTHON_BINDINGS}")
message(STATUS "Flex | ${FLEX_EXECUTABLE}")
message(STATUS "Bison | ${BISON_EXECUTABLE}")
message(STATUS "Python | ${PYTHON_EXECUTABLE}")
message(STATUS "LLVM Codegen | ${NMODL_ENABLE_LLVM}")
if(NMODL_ENABLE_LLVM)
message(STATUS " VERSION | ${LLVM_PACKAGE_VERSION}")
message(STATUS " INCLUDE | ${LLVM_INCLUDE_DIRS}")
message(STATUS " CMAKE | ${LLVM_CMAKE_DIR}")
message(STATUS " JIT LISTENERS | ${NMODL_ENABLE_JIT_EVENT_LISTENERS}")
endif()
message(STATUS "LLVM CUDA Codegen | ${NMODL_ENABLE_LLVM_CUDA}")
if(NMODL_ENABLE_LLVM_CUDA)
message(STATUS " CUDA VERSION | ${CUDAToolkit_VERSION}")
message(STATUS " INCLUDE | ${CUDAToolkit_INCLUDE_DIRS}")
message(STATUS " LIBRARY | ${CUDAToolkit_LIBRARY_DIR}")
endif()
message(STATUS "--------------+--------------------------------------------------------------")
message(STATUS " See documentation : https://github.com/BlueBrain/nmodl/")
message(STATUS "--------------+--------------------------------------------------------------")
38 changes: 35 additions & 3 deletions INSTALL.md
@@ -21,7 +21,7 @@ To build the project from source, a modern C++ compiler with C++14 support is needed

- flex (>=2.6)
- bison (>=3.0)
-- CMake (>=3.15)
+- CMake (>=3.17)
- Python (>=3.7)
- Python packages : jinja2 (>=2.10), pyyaml (>=3.13), pytest (>=4.0.0), sympy (>=1.3), textwrap

@@ -31,7 +31,7 @@ Typically the versions of bison and flex provided by the system are outdated and
To get recent version of all dependencies we recommend using [homebrew](https://brew.sh/):

```sh
-brew install flex bison cmake python3
+brew install flex bison cmake python3 llvm
```

The necessary Python packages can then easily be added using the pip3 command.
@@ -57,7 +57,7 @@ export PATH=/opt/homebrew/opt/flex/bin:/opt/homebrew/opt/bison/bin:$PATH
On Ubuntu (>=18.04) flex/bison versions are recent enough and are installed along with the system toolchain:

```sh
-apt-get install flex bison gcc python3 python3-pip
+apt-get install flex bison gcc python3 python3-pip llvm-dev llvm-runtime llvm clang-format clang
```

The Python dependencies are installed using:
@@ -79,6 +79,15 @@ cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/nmodl
make -j && make install
```

If `llvm-config` is not in `PATH`, set `LLVM_DIR` as:

```sh
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DLLVM_DIR=/path/to/llvm/install/lib/cmake/llvm

# on OSX
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DLLVM_DIR=`brew --prefix llvm`/lib/cmake/llvm
```

And set PYTHONPATH as:

```sh
@@ -132,6 +141,29 @@ export NMODL_WRAPLIB=/opt/nmodl/lib/libpywrapper.so
**Note**: In order for all unit tests to function correctly when building without linking against libpython we must
set `NMODL_PYLIB` before running cmake!

### Using CUDA backend to run benchmarks

`NMODL` can generate code and compile it for execution on an NVIDIA GPU through its benchmark infrastructure, using the `LLVM` backend. To enable the `CUDA` backend that compiles and executes the GPU code, set the following `CMake` flag when building `NMODL`:
```
-DNMODL_ENABLE_LLVM_CUDA=ON
```

To find the needed `CUDA` libraries (`cudart` and `nvrtc`), the CUDA Toolkit must be installed on your system.
This can be done by installing the CUDA Toolkit from the [CUDA Toolkit website](https://developer.nvidia.com/cuda-downloads) or by installing the `CUDA` spack package and loading the corresponding module.

Then, given a supported MOD file, you can execute the benchmark on your NVIDIA GPU by running the following command:
```
./bin/nmodl <file>.mod llvm --no-debug --ir --opt-level-ir 3 gpu --target-arch "sm_80" --name "nvptx64" --math-library libdevice benchmark --run --libs "${CUDA_ROOT}/nvvm/libdevice/libdevice.10.bc" --opt-level-codegen 3 --instance-size 10000000 --repeat 2 --grid-dim-x 4096 --block-dim-x 256
```
The above command executes the benchmark on a GPU with `Compute Architecture` `sm_80` and links the generated code against the `libdevice` optimized math library provided by NVIDIA.
With the same command you can also select the optimization level of the generated code, the instance size of the generated data, the number of repetitions, and the grid and block dimensions for the GPU execution.

**Note**: In order for the CUDA backend to compile and execute the generated code on the GPU, the installed CUDA Toolkit version must match the `CUDA` version provided by the NVIDIA driver on the system that will run the benchmark.
You can find the driver's `CUDA` version by running the following command:
```
nvidia-smi
```
and noting the `CUDA Version` stated there. For example, if the `CUDA Version` reported by `nvidia-smi` is 11.4, you need to install CUDA Toolkit 11.4.* to be able to compile and execute the GPU code.
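The toolkit/driver version match described in this note can be sketched as a small shell check; the two version strings below are placeholders standing in for the output of `nvidia-smi` and `nvcc --version` on a real system:

```sh
# Compare the driver's CUDA version (as reported by `nvidia-smi`) with the
# installed toolkit's version (as reported by `nvcc --version`).
# Both values here are illustrative placeholders.
driver_cuda="11.4"
toolkit_cuda="11.4.152"

case "$toolkit_cuda" in
    "$driver_cuda".* | "$driver_cuda")
        result="compatible" ;;
    *)
        result="mismatch: install CUDA Toolkit ${driver_cuda}.*" ;;
esac
echo "$result"
```

With the placeholder values above, the major.minor versions agree, so the check reports a compatible installation.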

## Testing the Installed Module

27 changes: 17 additions & 10 deletions azure-pipelines.yml
@@ -99,17 +99,21 @@ stages:
url="https://github.com/ispc/ispc/releases/download/${ispc_version}/ispc-${ispc_version}${ispc_version_suffix}-${url_os}.tar.gz";
mkdir $(pwd)/$CMAKE_PKG/ispc
wget --quiet --output-document=- $url | tar -xvzf - -C $(pwd)/$CMAKE_PKG/ispc --strip 1;
# install llvm nightly (future v13) TODO: this will fail now, FIX this!
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 13
env:
-CMAKE_VER: 'v3.15.0'
-CMAKE_PKG: 'cmake-3.15.0-Linux-x86_64'
+CMAKE_VER: 'v3.17.0'
+CMAKE_PKG: 'cmake-3.17.0-Linux-x86_64'
displayName: 'Install Dependencies'
- script: |
export PATH=$(pwd)/$CMAKE_PKG/bin:/home/vsts/.local/bin:$PATH
export CXX='g++-8'
mkdir -p $(Build.Repository.LocalPath)/build
cd $(Build.Repository.LocalPath)/build
cmake --version
-cmake .. -DPYTHON_EXECUTABLE=$(which python3.7) -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DCMAKE_BUILD_TYPE=Release
+cmake .. -DPYTHON_EXECUTABLE=$(which python3.7) -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DCMAKE_BUILD_TYPE=Release -DNMODL_ENABLE_LLVM=ON -DLLVM_DIR=/usr/lib/llvm-13/share/llvm/cmake/
make -j 2
if [ $? -ne 0 ]
then
@@ -119,7 +123,7 @@
make install #this is needed for the integration tests
env CTEST_OUTPUT_ON_FAILURE=1 make test
env:
-CMAKE_PKG: 'cmake-3.15.0-Linux-x86_64'
+CMAKE_PKG: 'cmake-3.17.0-Linux-x86_64'
displayName: 'Build and Run Unit Tests'
- script: |
export PATH=$(pwd)/$CMAKE_PKG/bin:/home/vsts/.local/bin:$PATH
@@ -150,7 +154,7 @@
fi
./bin/nrnivmodl-core $(Build.Repository.LocalPath)/test/integration/mod
env:
-CMAKE_PKG: 'cmake-3.15.0-Linux-x86_64'
+CMAKE_PKG: 'cmake-3.17.0-Linux-x86_64'
SHELL: 'bash'
displayName: 'Build Neuron and Run Integration Tests'
- script: |
@@ -174,25 +178,25 @@
fi
./bin/nrnivmodl-core $(Build.Repository.LocalPath)/test/integration/mod
env:
-CMAKE_PKG: 'cmake-3.15.0-Linux-x86_64'
+CMAKE_PKG: 'cmake-3.17.0-Linux-x86_64'
displayName: 'Build CoreNEURON and Run Integration Tests with ISPC compiler'
- job: 'osx11'
pool:
-vmImage: 'macOS-11'
-displayName: 'MacOS (11), AppleClang 12.0'
+vmImage: 'macOS-10.15'
+displayName: 'MacOS (10.15), AppleClang 13.0 (trunk, May 2021)'
steps:
- checkout: self
submodules: True
- script: |
-brew install flex bison cmake python@3 gcc@8
+brew install flex bison cmake python@3 gcc@8 llvm@13
python3 -m pip install --upgrade pip setuptools
python3 -m pip install --user 'Jinja2>=2.9.3' 'PyYAML>=3.13' pytest pytest-cov numpy 'sympy>=1.3'
displayName: 'Install Dependencies'
- script: |
export PATH=/usr/local/opt/flex/bin:/usr/local/opt/bison/bin:$PATH;
mkdir -p $(Build.Repository.LocalPath)/build
cd $(Build.Repository.LocalPath)/build
-cmake .. -DPYTHON_EXECUTABLE=$(which python3) -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DCMAKE_BUILD_TYPE=RelWithDebInfo -DNMODL_ENABLE_PYTHON_BINDINGS=OFF
+cmake .. -DPYTHON_EXECUTABLE=$(which python3) -DCMAKE_INSTALL_PREFIX=$HOME/nmodl -DCMAKE_BUILD_TYPE=RelWithDebInfo -DNMODL_ENABLE_PYTHON_BINDINGS=OFF -DLLVM_DIR=$(brew --prefix llvm@13)/lib/cmake/llvm -DNMODL_ENABLE_LLVM=ON
make -j 2
if [ $? -ne 0 ]
then
@@ -237,9 +241,11 @@
./bin/nrnivmodl-core $(Build.Repository.LocalPath)/test/integration/mod
env:
SHELL: 'bash'
condition: false
displayName: 'Build Neuron and Run Integration Tests'
- job: 'manylinux_wheels'
timeoutInMinutes: 45
condition: eq(1,2)
pool:
vmImage: 'ubuntu-20.04'
strategy:
@@ -289,6 +295,7 @@
- template: ci/upload-wheels.yml
- job: 'macos_wheels'
timeoutInMinutes: 45
condition: eq(1,2)
pool:
vmImage: 'macOS-11'
strategy: