Build for Windows #315

Merged 45 commits on Apr 8, 2024

Commits
434a56b update (rusty1s, Apr 4, 2024)
2d0d747 update (rusty1s, Apr 4, 2024)
ef9ef00 update (rusty1s, Apr 4, 2024)
4fe3b6a update (rusty1s, Apr 4, 2024)
0732f22 update (rusty1s, Apr 4, 2024)
257635e update (rusty1s, Apr 4, 2024)
e7f5dd4 update (rusty1s, Apr 4, 2024)
fb770cb update (rusty1s, Apr 4, 2024)
221ccb8 update (rusty1s, Apr 4, 2024)
5ea5035 update (rusty1s, Apr 4, 2024)
255960a update (rusty1s, Apr 6, 2024)
bd68fb3 update (rusty1s, Apr 6, 2024)
fa9af66 update (rusty1s, Apr 6, 2024)
af49e9b update (rusty1s, Apr 6, 2024)
0a6d0a9 update (rusty1s, Apr 6, 2024)
bfe51a8 update (rusty1s, Apr 6, 2024)
2a112b3 update (rusty1s, Apr 6, 2024)
0012f5a update (rusty1s, Apr 6, 2024)
77642ce update (rusty1s, Apr 6, 2024)
e6ba17d update (rusty1s, Apr 6, 2024)
936f4cf update (rusty1s, Apr 6, 2024)
826c85f update (rusty1s, Apr 6, 2024)
195a8d4 update (rusty1s, Apr 6, 2024)
e9910ca update (rusty1s, Apr 6, 2024)
505b1d2 update (rusty1s, Apr 6, 2024)
be7397f update (rusty1s, Apr 6, 2024)
aa5650e update (rusty1s, Apr 6, 2024)
e6eaa1c update (rusty1s, Apr 6, 2024)
64583da update (rusty1s, Apr 6, 2024)
b95e7d0 update (rusty1s, Apr 7, 2024)
332c064 update (rusty1s, Apr 7, 2024)
88a8a71 update (rusty1s, Apr 7, 2024)
69a3eee update (rusty1s, Apr 7, 2024)
5452828 update (rusty1s, Apr 7, 2024)
bec7096 update (rusty1s, Apr 7, 2024)
4016fc3 update (rusty1s, Apr 7, 2024)
c3009bf update (rusty1s, Apr 7, 2024)
e4cee39 update (rusty1s, Apr 8, 2024)
9db6ea5 update (rusty1s, Apr 8, 2024)
b777858 update (rusty1s, Apr 8, 2024)
c35f37b update (rusty1s, Apr 8, 2024)
7ffb312 update (rusty1s, Apr 8, 2024)
61a5725 update (rusty1s, Apr 8, 2024)
4f0dd5a update (rusty1s, Apr 8, 2024)
55b636f update (rusty1s, Apr 8, 2024)

3 changes: 3 additions & 0 deletions .github/actions/setup/action.yml
@@ -26,6 +26,9 @@ runs:
sudo rm -rf /usr/share/dotnet
shell: bash

- name: Set up Windows developer command prompt
uses: ilammy/msvc-dev-cmd@v1

- name: Install CUDA ${{ inputs.cuda-version }}
if: ${{ inputs.cuda-version != 'cpu' }}
run: |
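
The `ilammy/msvc-dev-cmd` step above puts the MSVC developer environment on `PATH` for the later build steps on `windows-2019`. A quick, hypothetical sanity check for that environment (not part of this PR) could look like:

```python
# Hypothetical check that the MSVC developer environment is active: after
# msvc-dev-cmd runs, the compiler driver cl.exe should be resolvable on PATH.
import shutil

print("cl.exe found:", shutil.which("cl") is not None)
```
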
4 changes: 3 additions & 1 deletion .github/workflows/building.yml
@@ -15,7 +15,6 @@ jobs:
torch-version: [1.12.0, 1.13.0, 2.0.0, 2.1.0, 2.2.0]
cuda-version: ['cpu', 'cu113', 'cu116', 'cu117', 'cu118', 'cu121']
exclude:
- os: windows-2019 # No windows support yet :(
- torch-version: 1.12.0
python-version: '3.12'
- torch-version: 1.13.0
@@ -112,6 +111,8 @@ jobs:
source ./.github/workflows/cuda/${{ runner.os }}-env.sh ${{ matrix.cuda-version }}
python setup.py bdist_wheel --dist-dir=dist
shell: bash
env:
TORCH_CUDA_ARCH_LIST: "5.0+PTX;6.0;7.0;7.5;8.0;8.6"

- name: Test wheel
run: |
@@ -121,6 +122,7 @@
python -c "import pyg_lib; print('pyg-lib:', pyg_lib.__version__)"
python -c "import pyg_lib; print('CUDA:', pyg_lib.cuda_version())"
cd ..
shell: bash

- name: Configure AWS
uses: aws-actions/configure-aws-credentials@v1
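
Setting `TORCH_CUDA_ARCH_LIST` pins the CUDA wheels to a fixed set of compute capabilities instead of whatever the toolchain would target by default. A rough sketch of reproducing the same wheel build locally (the arch list and command mirror the workflow; the working directory and Python environment are assumed):

```python
# Sketch: build the wheel with the same architecture list the CI step exports.
import os
import subprocess

env = dict(os.environ, TORCH_CUDA_ARCH_LIST="5.0+PTX;6.0;7.0;7.5;8.0;8.6")
subprocess.run(
    ["python", "setup.py", "bdist_wheel", "--dist-dir=dist"],
    env=env,
    check=True,
)
```
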
20 changes: 15 additions & 5 deletions .github/workflows/cuda/Windows.sh
@@ -1,10 +1,5 @@
#!/bin/bash

# Install NVIDIA drivers, see:
# https://github.com/pytorch/vision/blob/master/packaging/windows/internal/cuda_install.bat#L99-L102
curl -k -L "https://drive.google.com/u/0/uc?id=1injUyo3lnarMgWyRcXqKg4UGnN0ysmuq&export=download" --output "/tmp/gpu_driver_dlls.zip"
7z x "/tmp/gpu_driver_dlls.zip" -o"/c/Windows/System32"

case ${1} in
cu121)
CUDA_SHORT=12.1
@@ -48,3 +43,18 @@ echo "Installing from ${CUDA_FILE}..."
PowerShell -Command "Start-Process -FilePath \"${CUDA_FILE}\" -ArgumentList \"-s nvcc_${CUDA_SHORT} cuobjdump_${CUDA_SHORT} nvprune_${CUDA_SHORT} cupti_${CUDA_SHORT} cublas_dev_${CUDA_SHORT} cudart_${CUDA_SHORT} cufft_dev_${CUDA_SHORT} curand_dev_${CUDA_SHORT} cusolver_dev_${CUDA_SHORT} cusparse_dev_${CUDA_SHORT} thrust_${CUDA_SHORT} npp_dev_${CUDA_SHORT} nvrtc_dev_${CUDA_SHORT} nvml_dev_${CUDA_SHORT}\" -Wait -NoNewWindow"
echo "Done!"
rm -f "${CUDA_FILE}"

# echo Installing NVIDIA drivers...
# https://github.com/pytorch/vision/blob/master/packaging/windows/internal/cuda_install.bat#L99-L102
# curl -k -L "https://ossci-windows.s3.us-east-1.amazonaws.com/builder/additional_dlls.zip" --output "/tmp/gpu_driver_dlls.zip"
# 7z x "/tmp/gpu_driver_dlls.zip" -o"/c/Windows/System32"

echo Installing NvToolsExt...
curl -k -L https://ossci-windows.s3.us-east-1.amazonaws.com/builder/NvToolsExt.7z --output /tmp/NvToolsExt.7z
7z x /tmp/NvToolsExt.7z -o"/tmp/NvToolsExt"
mkdir -p "/c/Program Files/NVIDIA Corporation/NvToolsExt/bin/x64"
mkdir -p "/c/Program Files/NVIDIA Corporation/NvToolsExt/include"
mkdir -p "/c/Program Files/NVIDIA Corporation/NvToolsExt/lib/x64"
cp -r /tmp/NvToolsExt/bin/x64/* "/c/Program Files/NVIDIA Corporation/NvToolsExt/bin/x64"
cp -r /tmp/NvToolsExt/include/* "/c/Program Files/NVIDIA Corporation/NvToolsExt/include"
cp -r /tmp/NvToolsExt/lib/x64/* "/c/Program Files/NVIDIA Corporation/NvToolsExt/lib/x64"
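
The script recreates the standard `NvToolsExt` layout under `C:\Program Files\NVIDIA Corporation`, which is where PyTorch's CMake configuration typically looks for NVTX on Windows. A hypothetical post-install check (not part of this PR) that the copy landed where expected:

```python
# Hypothetical check that the NvToolsExt directories created by the script exist.
from pathlib import Path

root = Path(r"C:\Program Files\NVIDIA Corporation\NvToolsExt")
for sub in ("bin/x64", "include", "lib/x64"):
    print(sub, "->", "ok" if (root / sub).is_dir() else "missing")
```
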
3 changes: 3 additions & 0 deletions .github/workflows/install.yml
@@ -30,6 +30,9 @@ jobs:
run: |
source ./.github/workflows/cuda/${{ runner.os }}-env.sh ${{ matrix.cuda-version }}
pip install --verbose -e .
shell: bash
env:
TORCH_CUDA_ARCH_LIST: "5.0+PTX;6.0;7.0;7.5;8.0;8.6"

- name: Test imports
run: |
4 changes: 3 additions & 1 deletion .github/workflows/nightly.yml
@@ -19,7 +19,6 @@ jobs:
torch-version: [1.12.0, 1.13.0, 2.0.0, 2.1.0, 2.2.0]
cuda-version: ['cpu', 'cu113', 'cu116', 'cu117', 'cu118', 'cu121']
exclude:
- os: windows-2019 # No windows support yet :(
- torch-version: 1.12.0
python-version: '3.12'
- torch-version: 1.13.0
@@ -118,6 +117,8 @@ jobs:
source ./.github/workflows/cuda/${{ runner.os }}-env.sh ${{ matrix.cuda-version }}
python setup.py bdist_wheel --dist-dir=dist
shell: bash
env:
TORCH_CUDA_ARCH_LIST: "5.0+PTX;6.0;7.0;7.5;8.0;8.6"

- name: Test wheel
run: |
@@ -127,6 +128,7 @@
python -c "import pyg_lib; print('pyg-lib:', pyg_lib.__version__)"
python -c "import pyg_lib; print('CUDA:', pyg_lib.cuda_version())"
cd ..
shell: bash

- name: Configure AWS
uses: aws-actions/configure-aws-credentials@v1
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -5,6 +5,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

## [0.5.0] - 2023-MM-DD
### Added
- Added Windows support ([#315](https://github.com/pyg-team/pyg-lib/pull/315))
- Added macOS Apple Silicon support ([#310](https://github.com/pyg-team/pyg-lib/pull/310))
### Changed
### Removed
32 changes: 22 additions & 10 deletions CMakeLists.txt
@@ -1,6 +1,8 @@
cmake_minimum_required(VERSION 3.15)
project(pyg)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_SHARED_LIBRARY_PREFIX "lib")
set(PYG_VERSION 0.4.0)

option(BUILD_TEST "Enable testing" OFF)
@@ -71,16 +73,24 @@ else()
target_include_directories(${PROJECT_NAME} PRIVATE ${PHMAP_DIR})
endif()

set(METIS_DIR third_party/METIS)
target_include_directories(${PROJECT_NAME} PRIVATE ${METIS_DIR}/include)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DIDXTYPEWIDTH=64 -DREALTYPEWIDTH=32")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DIDXTYPEWIDTH=64 -DREALTYPEWIDTH=32")
set(GKLIB_PATH "${METIS_DIR}/GKlib")
include(${GKLIB_PATH}/GKlibSystem.cmake)
include_directories(${GKLIB_PATH})
include_directories("${METIS_DIR}/include")
add_subdirectory("${METIS_DIR}/libmetis")
target_link_libraries(${PROJECT_NAME} PRIVATE metis)
if (MSVC)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /DIDXTYPEWIDTH=64 /DREALTYPEWIDTH=32")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /DIDXTYPEWIDTH=64 /DREALTYPEWIDTH=32")
else()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DIDXTYPEWIDTH=64 -DREALTYPEWIDTH=32")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DIDXTYPEWIDTH=64 -DREALTYPEWIDTH=32")
endif()

if (NOT MSVC)
set(METIS_DIR third_party/METIS)
target_include_directories(${PROJECT_NAME} PRIVATE ${METIS_DIR}/include)
set(GKLIB_PATH "${METIS_DIR}/GKlib")
include(${GKLIB_PATH}/GKlibSystem.cmake)
include_directories(${GKLIB_PATH})
include_directories("${METIS_DIR}/include")
add_subdirectory("${METIS_DIR}/libmetis")
target_link_libraries(${PROJECT_NAME} PRIVATE metis)
endif()

find_package(Torch REQUIRED)
target_link_libraries(${PROJECT_NAME} PRIVATE ${TORCH_LIBRARIES})
@@ -120,4 +130,6 @@ set_target_properties(${PROJECT_NAME} PROPERTIES
# Cmake creates *.dylib by default, but python expects *.so by default
if (APPLE)
set_property(TARGET ${PROJECT_NAME} PROPERTY SUFFIX .so)
elseif (MSVC AND USE_PYTHON)
set_property(TARGET ${PROJECT_NAME} PROPERTY SUFFIX .pyd)
endif()
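
On Windows, CPython only loads native extension modules with a `.pyd` suffix, which is why the new `MSVC AND USE_PYTHON` branch renames the output, mirroring the existing `.so` rename on macOS. A quick way to see which suffixes the running interpreter accepts:

```python
# Importable extension-module suffixes for this interpreter; on Windows the
# list includes '.pyd', hence the SUFFIX override in CMakeLists.txt above.
import importlib.machinery

print(importlib.machinery.EXTENSION_SUFFIXES)
```
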
10 changes: 5 additions & 5 deletions README.md
@@ -37,31 +37,31 @@ The following combinations are supported:
| PyTorch 2.2 | `cpu` | `cu102` | `cu113` | `cu116` | `cu117` | `cu118` | `cu121` |
|--------------|-------|---------|---------|---------|---------|---------|---------|
| **Linux** | ✅ | | | | | ✅ | ✅ |
| **Windows** | | | | | | | |
| **Windows** | | | | | | | ✅ |
| **macOS** | ✅ | | | | | | |

| PyTorch 2.1 | `cpu` | `cu102` | `cu113` | `cu116` | `cu117` | `cu118` | `cu121` |
|--------------|-------|---------|---------|---------|---------|---------|---------|
| **Linux** | ✅ | | | | | ✅ | ✅ |
| **Windows** | | | | | | | |
| **Windows** | | | | | | | ✅ |
| **macOS** | ✅ | | | | | | |

| PyTorch 2.0 | `cpu` | `cu102` | `cu113` | `cu116` | `cu117` | `cu118` | `cu121` |
|--------------|-------|---------|---------|---------|---------|---------|---------|
| **Linux** | ✅ | | | | ✅ | ✅ | |
| **Windows** | | | | | | | |
| **Windows** | | | | | | ✅ | |
| **macOS** | ✅ | | | | | | |

| PyTorch 1.13 | `cpu` | `cu102` | `cu113` | `cu116` | `cu117` | `cu118` | `cu121` |
|--------------|-------|---------|---------|---------|---------|---------|---------|
| **Linux** | ✅ | | | ✅ | ✅ | | |
| **Windows** | | | | | | | |
| **Windows** | | | | | ✅ | | |
| **macOS** | ✅ | | | | | | |

| PyTorch 1.12 | `cpu` | `cu102` | `cu113` | `cu116` | `cu117` | `cu118` | `cu121` |
|--------------|-------|---------|---------|---------|---------|---------| --------|
| **Linux** | ✅ | ✅ | ✅ | ✅ | | | |
| **Windows** | | | | | | | |
| **Windows** | | ✅ | ✅ | ✅ | | | |
| **macOS** | ✅ | | | | | | |

### Form nightly
10 changes: 6 additions & 4 deletions pyg_lib/csrc/ops/cpu/matmul_kernel.cpp
@@ -86,8 +86,7 @@ void mkl_blas_gemm_batched(const int* m_array,
const int* ldc_array,
const int group_count,
const int* group_size) {
TORCH_INTERNAL_ASSERT(false,
"mkl_blas_gemm_batched: MKL BLAS is not supported");
TORCH_INTERNAL_ASSERT(false, "MKL BLAS is not supported");
}

void mkl_blas_gemm_batched(const int* m_array,
@@ -103,8 +102,7 @@ void mkl_blas_gemm_batched(const int* m_array,
const int* ldc_array,
const int group_count,
const int* group_size) {
TORCH_INTERNAL_ASSERT(false,
"mkl_blas_gemm_batched: MKL BLAS is not supported");
TORCH_INTERNAL_ASSERT(false, "MKL BLAS is not supported");
}

#endif
@@ -205,6 +203,7 @@ void grouped_matmul_out_kernel_at_impl(const std::vector<at::Tensor> input,
void grouped_matmul_out_kernel_mkl_impl(const std::vector<at::Tensor> input,
const std::vector<at::Tensor> other,
std::vector<at::Tensor> out) {
#if WITH_MKL_BLAS()
// matrix_params<M, N, K>
using matrix_params = std::tuple<int, int, int>;
phmap::flat_hash_map<matrix_params, std::vector<size_t>> groups;
@@ -276,6 +275,7 @@ void grouped_matmul_out_kernel_mkl_impl(const std::vector<at::Tensor> input,
group_sizes.data());
#endif
});
#endif
}

std::vector<at::Tensor> grouped_matmul_kernel(const at::TensorList input,
@@ -328,6 +328,7 @@ void segment_matmul_out_kernel_mkl_impl(const at::Tensor& input,
const at::Tensor& other,
at::Tensor& out,
const at::IntArrayRef& sizes) {
#if WITH_MKL_BLAS()
const int n = other.size(-1);
const int k = input.size(-1);
const int nk = n * k;
@@ -403,6 +404,7 @@ void segment_matmul_out_kernel_mkl_impl(const at::Tensor& input,
group_sizes.data());
#endif
});
#endif
}

at::Tensor segment_matmul_kernel(const at::Tensor& input,
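
The new `#if WITH_MKL_BLAS()` guards compile the MKL-specific bodies out entirely when PyTorch was not built against MKL BLAS, so the library no longer references MKL symbols in that case. Whether a given PyTorch build qualifies can be checked from Python using the same two signals the `setup.py` change below inspects:

```python
# Check whether the local PyTorch links MKL BLAS, mirroring the detection in
# setup.py: the MKL backend must be available and BLAS_INFO must report mkl.
import torch

with_mkl = (torch.backends.mkl.is_available()
            and "BLAS_INFO=mkl" in torch.__config__.show())
print("MKL BLAS available:", with_mkl)
```
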
6 changes: 6 additions & 0 deletions pyg_lib/csrc/partition/cpu/metis_kernel.cpp
@@ -1,7 +1,9 @@
#include <ATen/ATen.h>
#include <torch/library.h>

#ifndef _WIN32
#include <metis.h>
#endif

namespace pyg {
namespace partition {
@@ -14,6 +16,9 @@ at::Tensor metis_kernel(const at::Tensor& rowptr,
const c10::optional<at::Tensor>& node_weight,
const c10::optional<at::Tensor>& edge_weight,
bool recursive) {
#ifdef _WIN32
TORCH_INTERNAL_ASSERT(false, "METIS not yet supported on Windows");
#else
int64_t nvtxs = rowptr.numel() - 1;
int64_t ncon = 1;
auto* xadj = rowptr.data_ptr<int64_t>();
@@ -41,6 +46,7 @@ at::Tensor metis_kernel(const at::Tensor& rowptr,
}

return part;
#endif
}

} // namespace
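
Because METIS and GKlib are excluded from the MSVC build in `CMakeLists.txt` above, this kernel now raises on Windows instead of computing a partition. A rough sketch of what that looks like from the Python side; the wrapper name and argument order here are assumptions, not taken from this PR:

```python
# Sketch only: pyg_lib.partition.metis and its signature are assumed here.
import torch
import pyg_lib

rowptr = torch.tensor([0, 2, 4, 6])     # tiny 3-node graph in CSR form
col = torch.tensor([1, 2, 0, 2, 0, 1])
try:
    cluster = pyg_lib.partition.metis(rowptr, col, num_partitions=2)
    print(cluster)
except RuntimeError as err:  # TORCH_INTERNAL_ASSERT surfaces as a RuntimeError
    print("METIS unavailable on this platform:", err)
```
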
44 changes: 24 additions & 20 deletions setup.py
@@ -1,11 +1,12 @@
# Environment flags to control different options
#
# USE_MKL_BLAS=1
# enables use of MKL BLAS (requires PyTorch to be built with MKL support)
# - USE_MKL_BLAS=1:
# Enables use of MKL BLAS (requires PyTorch to be built with MKL support)

import importlib
import os
import os.path as osp
import re
import subprocess
import warnings

@@ -19,7 +20,7 @@
class CMakeExtension(Extension):
def __init__(self, name, sourcedir=''):
Extension.__init__(self, name, sources=[])
self.sourcedir = os.path.abspath(sourcedir)
self.sourcedir = osp.abspath(sourcedir)


class CMakeBuild(build_ext):
@@ -40,7 +41,7 @@ def build_extension(self, ext):

import torch

extdir = os.path.abspath(osp.dirname(self.get_ext_fullpath(ext.name)))
extdir = osp.abspath(osp.dirname(self.get_ext_fullpath(ext.name)))
self.build_type = "DEBUG" if self.debug else "RELEASE"
if self.debug is None:
if CMakeBuild.check_env_flag("DEBUG"):
@@ -60,6 +61,7 @@ def build_extension(self, ext):
'-DUSE_PYTHON=ON',
f'-DWITH_CUDA={"ON" if WITH_CUDA else "OFF"}',
f'-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}',
f'-DCMAKE_RUNTIME_OUTPUT_DIRECTORY={extdir}',
f'-DCMAKE_BUILD_TYPE={self.build_type}',
f'-DCMAKE_PREFIX_PATH={torch.utils.cmake_prefix_path}',
]
@@ -85,26 +87,28 @@ def build_extension(self, ext):
cwd=self.build_temp)


def maybe_append_with_mkl(dependencies):
if CMakeBuild.check_env_flag('USE_MKL_BLAS'):
import re
def mkl_dependencies():
if not CMakeBuild.check_env_flag('USE_MKL_BLAS'):
return []

import torch
torch_config = torch.__config__.show()
with_mkl_blas = 'BLAS_INFO=mkl' in torch_config
if torch.backends.mkl.is_available() and with_mkl_blas:
product_version = '2023.1.0'
pattern = r'oneAPI Math Kernel Library Version [0-9]{4}\.[0-9]+'
match = re.search(pattern, torch_config)
if match:
product_version = match.group(0).split(' ')[-1]
import torch

dependencies = []
torch_config = torch.__config__.show()
with_mkl_blas = 'BLAS_INFO=mkl' in torch_config
if torch.backends.mkl.is_available() and with_mkl_blas:
product_version = '2023.1.0'
pattern = r'oneAPI Math Kernel Library Version [0-9]{4}\.[0-9]+'
match = re.search(pattern, torch_config)
if match:
product_version = match.group(0).split(' ')[-1]
dependencies.append(f'mkl-include=={product_version}')
dependencies.append(f'mkl-static=={product_version}')

dependencies.append(f'mkl-include=={product_version}')
dependencies.append(f'mkl-static=={product_version}')
return dependencies


install_requires = []
maybe_append_with_mkl(install_requires)
install_requires = [] + mkl_dependencies()

triton_requires = [
'triton',
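
The `setup.py` refactor replaces the in-place `maybe_append_with_mkl(install_requires)` mutation with a pure `mkl_dependencies()` helper: the `mkl-include`/`mkl-static` pins are only added when `USE_MKL_BLAS=1` is set and the installed PyTorch actually links MKL. A standalone illustration of the version detection the helper performs, using the same regex and fallback as the diff (run outside of setup.py purely for inspection):

```python
# Pull the oneMKL version out of torch.__config__.show() and pin matching
# mkl-include / mkl-static requirements, falling back to 2023.1.0 as in setup.py.
import re
import torch

config = torch.__config__.show()
version = "2023.1.0"
match = re.search(r"oneAPI Math Kernel Library Version [0-9]{4}\.[0-9]+", config)
if match:
    version = match.group(0).split(" ")[-1]
print([f"mkl-include=={version}", f"mkl-static=={version}"])
```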