Various fixes to enable my experimental llama.turbine port of llamacp… #9

Workflow file for this run

name: Test Turbine Models

on:
  workflow_dispatch:
  pull_request:
  push:
    branches:
      - main

jobs:
  test-turbine-models:
    strategy:
      matrix:
        version: [3.11]
        os: [nodai-ubuntu-builder-large]
    runs-on: ${{matrix.os}}
    steps:
      - name: "Setting up Python"
        uses: actions/setup-python@75f3110429a8c05be0e1bf360334e4cced2b63fa # v2.3.3
        with:
          python-version: ${{matrix.version}}

      - name: "Checkout Code"
        uses: actions/checkout@v2

      - name: Sync source deps
        run: |
          python -m pip install --upgrade pip
          # Note: We install in three steps in order to satisfy requirements
          # from non default locations first. Installing the PyTorch CPU
          # wheels saves multiple minutes and a lot of bandwidth on runner setup.
          pip install --index-url https://download.pytorch.org/whl/cpu \
            -r pytorch-cpu-requirements.txt \
            -r torchvision-requirements.txt
          pip install --upgrade -r requirements.txt
          pip install -e .[testing]
          pip install -e python/turbine_models

      - name: Show current free memory
        run: |
          free -mh

      - name: Run stateless_llama tests
        run: |
          pytest python/turbine_models/tests/stateless_llama_test.py

      - name: Run sd tests
        run: |
          pytest python/turbine_models/tests/sd_test.py
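
For local debugging, the same dependency setup and test commands can be reproduced from the root of a repository checkout. The sketch below is a minimal assumption-laden translation of the steps above; the virtual-environment setup is illustrative and not part of the workflow itself.

  # Assumption: Python 3.11 is available; the venv name is illustrative.
  python -m venv .venv && source .venv/bin/activate
  python -m pip install --upgrade pip
  # Same installs as the "Sync source deps" step: PyTorch CPU wheels first,
  # then the general requirements, then the editable packages.
  pip install --index-url https://download.pytorch.org/whl/cpu \
    -r pytorch-cpu-requirements.txt \
    -r torchvision-requirements.txt
  pip install --upgrade -r requirements.txt
  pip install -e .[testing]
  pip install -e python/turbine_models
  # Same test invocations as the two test steps above.
  pytest python/turbine_models/tests/stateless_llama_test.py
  pytest python/turbine_models/tests/sd_test.py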