Installation of pyflamegpu from a wheel is currently non-portable - it will only work with Python 3.6 and CUDA 11.0 on Linux.
!{sys.executable} -m pip install should be used to ensure that the correct Python environment/interpreter is used.
This also requires selecting an appropriate wheel for the platform, which in turn requires knowing the CUDA version (MMm, e.g. 110 for CUDA 11.0).
The following snippet works, but it is far more complex than it should be.
import sys
import subprocess
import re
# Binary wheels are not currently distributed via PyPI, so the exact wheel URI must be selected manually.
# The correct wheel depends on the Python version and the CUDA version.
# This will be addressed in a future version.
wheelhouse = {
("36", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp36-cp36m-linux_x86_64.whl",
("37", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp37-cp37m-linux_x86_64.whl",
("38", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp38-cp38-linux_x86_64.whl",
("39", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp39-cp39-linux_x86_64.whl",
("36", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp36-cp36m-linux_x86_64.whl",
("37", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp37-cp37m-linux_x86_64.whl",
("38", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp38-cp38-linux_x86_64.whl",
("39", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp39-cp39-linux_x86_64.whl",
}
# Select the wheel url based on the python version and the CUDA version.
pyver = f"{sys.version_info[0]}{sys.version_info[1]}"
# Extract the CUDA version from nvcc, to select which whl to install
nvcc_version_result = " ".join(subprocess.check_output(["nvcc", "--version"], universal_newlines=True).splitlines())
nvcc_version_match = re.match(r".*V([0-9]+)\.([0-9]+).*", nvcc_version_result, re.MULTILINE)
nvcc_version = (int(nvcc_version_match.group(1)), int(nvcc_version_match.group(2))) if nvcc_version_match is not None else (None, None)
cudaver = None
# Fail early if the CUDA version could not be parsed from the nvcc output.
if nvcc_version == (None, None):
    raise Exception("Could not parse the CUDA version from nvcc --version")
# If CUDA >= 11.2, use +cuda112
elif nvcc_version >= (11, 2):
    cudaver = "112"
# If CUDA 11.0 or 11.1, use +cuda110 (Note: 11.1 might not work...)
elif nvcc_version >= (11, 0):
    cudaver = "110"
# Otherwise unsupported.
else:
    raise Exception(f"CUDA version {nvcc_version} is not currently supported by a python binary wheel")
whl_key = (pyver, cudaver)
if whl_key not in wheelhouse:
    raise Exception(f"Error: There is no current pyflamegpu wheel for python {pyver} with CUDA {cudaver}")
whl = wheelhouse[whl_key]
# {sys.executable} maps to the python executable currently in use in jupyter.
!{sys.executable} -m pip install -I {whl}
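After the install completes, a quick sanity check is to confirm that the package resolves from the same interpreter that pip installed into. This is only a minimal verification sketch; it assumes nothing about pyflamegpu beyond the package being importable.
# Sanity check (sketch): confirm pyflamegpu is importable in this interpreter.
import importlib.util
spec = importlib.util.find_spec("pyflamegpu")
if spec is None:
    raise ImportError("pyflamegpu was not installed into this interpreter")
print(f"pyflamegpu found at: {spec.origin}")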
A much cleaner solution would be to provide a -f wheelhouse.html style index (FLAMEGPU/FLAMEGPU2#645), or a PyPI distribution (FLAMEGPU/FLAMEGPU2#648). Alternatively, a Conda distribution might be the most practical option (FLAMEGPU/FLAMEGPU2#649).
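For comparison, a find-links based install would reduce the snippet above to a couple of lines. The sketch below is hypothetical: WHEELHOUSE_URL is a placeholder for an index that does not yet exist (see FLAMEGPU/FLAMEGPU2#645), and some way of selecting the CUDA variant (e.g. a per-CUDA index, or pinning a local version such as +cuda112) would still be needed.
# Hypothetical sketch: depends on a hosted wheelhouse index that does not yet exist.
import sys
WHEELHOUSE_URL = "https://example.com/flamegpu/whl/cuda112"  # placeholder URL, not real
!{sys.executable} -m pip install pyflamegpu -f {WHEELHOUSE_URL}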