pip install instructlab-training[cuda] fails in a fresh virtual env due to a bug in the flash-attn package: flash-attn does not correctly declare its build-time dependency on torch in its packaging metadata, so the build fails with ModuleNotFoundError: No module named 'torch'. See Dao-AILab/flash-attention#958 for a pending bug fix and more details.
The bug also affects instructlab 0.18.0; pip install instructlab[cuda]==0.18.0a4 is broken, too.
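Until the upstream packaging fix lands, one possible workaround (a sketch, not an official recommendation) is to install torch and the build tooling into the environment first, then build flash-attn with pip's build isolation disabled so the build can see the already-installed torch:

```shell
# Workaround sketch: flash-attn's setup.py imports torch at build time,
# but torch is absent from the isolated build environment pip creates.
# Pre-installing torch and disabling build isolation works around that.
python -m venv venv
. venv/bin/activate
pip install --upgrade pip
pip install torch packaging wheel          # build-time deps flash-attn assumes
pip install --no-build-isolation flash-attn
pip install instructlab-training[cuda]     # now resolves against the built wheel
```

The key flag is --no-build-isolation, which tells pip to build against the packages already present in the environment instead of a fresh isolated one.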
$ pip install instructlab-training[cuda]
Collecting instructlab-training[cuda]
Obtaining dependency information for instructlab-training[cuda] from https://files.pythonhosted.org/packages/8e/30/a363c6e568e7d87b871ffa25757c223b6c61efb4c0fabeb0bdd4da676112/instructlab_training-0.3.0-py3-none-any.whl.metadata
...
Collecting flash-attn>=2.4.0 (from instructlab-training[cuda])
Using cached flash_attn-2.6.1.tar.gz (2.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
Traceback (most recent call last):
File "venv/lib64/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "venv/lib64/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib64/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-g9_133ri/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 327, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-g9_133ri/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 297, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-g9_133ri/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 497, in run_setup
super().run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-g9_133ri/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 313, in run_setup
exec(code, locals())
File "<string>", line 19, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.