Portable python package installation #2

Open
ptheywood opened this issue Sep 7, 2021 · 0 comments
The installation of pyflamegpu from a wheel is currently non-portable - it will only work with Python 3.6 and CUDA 11.0 on Linux.

`!{sys.executable} -m pip install` should be used to ensure that the correct python environment/interpreter is used.
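As a minimal standalone illustration of why this matters (not part of the original snippet): in Jupyter, `sys.executable` is the kernel's own interpreter, which may differ from whichever `pip` happens to be first on `PATH`.

```python
import subprocess
import sys

# The running interpreter's path; in Jupyter this is the kernel's Python,
# which may differ from the "pip" found first on PATH.
print(sys.executable)

# Invoking pip as a module of that exact interpreter guarantees packages
# are installed into the environment the notebook kernel imports from.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```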

This will also require selecting an appropriate wheel for the platform, which also needs knowledge of the CUDA version (in `MMm` form, e.g. `112` for CUDA 11.2).

The following snippet works, but is far more complex than it should really be.

```python
import sys
import subprocess
import re

# Binary wheels are not currently distributed via PyPI, so the exact URI of the
# wheel must be selected based on the Python version and CUDA version.
# This will be addressed in a future version.
wheelhouse = {
    ("36", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp36-cp36m-linux_x86_64.whl",
    ("37", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp37-cp37m-linux_x86_64.whl",
    ("38", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp38-cp38-linux_x86_64.whl",
    ("39", "110"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda110-cp39-cp39-linux_x86_64.whl",
    ("36", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp36-cp36m-linux_x86_64.whl",
    ("37", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp37-cp37m-linux_x86_64.whl",
    ("38", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp38-cp38-linux_x86_64.whl",
    ("39", "112"): "https://github.com/FLAMEGPU/FLAMEGPU2/releases/download/v2.0.0-alpha.1/pyflamegpu-2.0.0a1+cuda112-cp39-cp39-linux_x86_64.whl",
}

# Select the wheel URL based on the Python version and the CUDA version.
pyver = f"{sys.version_info[0]}{sys.version_info[1]}"

# Extract the CUDA version from nvcc, to select which wheel to install.
nvcc_version_output = " ".join(subprocess.check_output(["nvcc", "--version"], universal_newlines=True).splitlines())
nvcc_version_match = re.match(r".*V([0-9]+)\.([0-9]+).*", nvcc_version_output)
if nvcc_version_match is None:
    raise Exception("Unable to determine the CUDA version from nvcc --version")
nvcc_version = (int(nvcc_version_match.group(1)), int(nvcc_version_match.group(2)))

# If CUDA >= 11.2, use +cuda112
if nvcc_version >= (11, 2):
    cudaver = "112"
# If CUDA 11.0 or 11.1 use +cuda110 (Note: 11.1 might not work...)
elif nvcc_version >= (11, 0):
    cudaver = "110"
# Otherwise unsupported.
else:
    raise Exception(f"CUDA version {nvcc_version} is not currently supported by a python binary wheel")

whl_key = (pyver, cudaver)
if whl_key not in wheelhouse:
    raise Exception(f"Error: There is no current pyflamegpu wheel for python {pyver}, with CUDA {cudaver}")
whl = wheelhouse[whl_key]

# {sys.executable} maps to the python executable currently in use in jupyter.
!{sys.executable} -m pip install -I {whl}
```

A much cleaner solution would be to provide a `-f wheelhouse.html` index (FLAMEGPU/FLAMEGPU2#645), or a PyPI distribution (FLAMEGPU/FLAMEGPU2#648).
Alternatively, Conda distribution might be the most practical alternative (FLAMEGPU/FLAMEGPU2#649).
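For comparison, with a `-f`/`--find-links` index page the whole snippet above would collapse to a single pip invocation, since pip selects the tag-compatible wheel itself. A sketch of what that command would look like (the index URL below is purely hypothetical; the real location would be decided by FLAMEGPU/FLAMEGPU2#645):

```python
import sys

# Hypothetical wheelhouse index URL -- illustrative only.
WHEELHOUSE_INDEX = "https://example.com/flamegpu/wheelhouse.html"

# pip's -f/--find-links option accepts a URL to an HTML page listing wheel
# files, letting pip choose a compatible wheel instead of the manual
# (python version, CUDA version) lookup table above.
cmd = [sys.executable, "-m", "pip", "install", "pyflamegpu", "-f", WHEELHOUSE_INDEX]
print(" ".join(cmd))
```

Note this still would not solve CUDA-version selection, since pip's wheel tags encode only the Python version and platform, not the CUDA toolkit.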
