This is the API for the Transformer Lab App, which is the main repo for this project. Please go to the Transformer Lab App repository to learn more and access documentation.
Use the instructions below if you are installing and running the API manually on a server.

Requirements:
- Linux or WSL (for training)
- Mac, Linux, or WSL (for inference only)
You can use the install script to get the application running:
./install.sh
This will install conda if it's not installed, and then use conda and pip to install the rest of the application requirements.
If you prefer to install the API without using the install script you can follow the steps on this page:
https://transformerlab.ai/docs/advanced-install
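For a rough sense of what the manual steps look like, here is a sketch. The env name matches the `conda activate transformerlab` step used below, but the Python version and the use of a top-level `requirements.txt` are assumptions — follow the linked page for the authoritative steps.

```shell
# Sketch of a manual install (assumes Miniconda/Anaconda is already installed;
# the Python version below is an assumption, not from the docs)
ENV_NAME=transformerlab
conda create -y -n "$ENV_NAME" python=3.11
conda activate "$ENV_NAME"
pip install -r requirements.txt
```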
Once conda and dependencies are installed, run the following:
conda activate transformerlab
uvicorn api:app --port 8000 --host 0.0.0.0
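Once uvicorn is up, you can sanity-check the server from another terminal. This assumes FastAPI's standard interactive docs route at `/docs` (an assumption, not confirmed by the docs); adjust the host and port if you changed them:

```shell
# Smoke test: fetch the interactive docs page and print the HTTP status code
API_URL="http://localhost:8000"
curl -s -o /dev/null -w "%{http_code}\n" "$API_URL/docs"
```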
Dependencies are managed with pip-tools (installed separately). Add new requirements to requirements.in, then regenerate the pinned requirements files by running the following two commands:
# default GPU-enabled requirements
pip-compile \
--extra-index-url=https://download.pytorch.org/whl/cu121 \
--output-file=requirements.txt \
requirements-gpu.in requirements.in
# requirements for systems without GPU support
pip-compile \
--extra-index-url=https://download.pytorch.org/whl/cpu \
--output-file=requirements-no-gpu.txt \
requirements.in
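As a worked example of the pip-tools flow, here is what adding a single new dependency might look like. `httpx` is a stand-in package name, and `pip-sync` is pip-tools' companion command for installing exactly the pinned set:

```shell
# 1. Declare the new dependency ("httpx" is just an example package)
echo "httpx" >> requirements.in

# 2. Recompile the pinned GPU requirements file
pip-compile \
  --extra-index-url=https://download.pytorch.org/whl/cu121 \
  --output-file=requirements.txt \
  requirements-gpu.in requirements.in

# 3. Install exactly what is pinned (adds httpx, removes anything unpinned)
pip-sync requirements.txt
```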
We have not tested running the API on Windows extensively, but it should work.
On WSL, you might need to install CUDA manually by following:
and then running the following before you launch:
export LD_LIBRARY_PATH=/usr/lib/wsl/lib
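For example, set the variable and then confirm the GPU is visible. The `torch` one-liner is a generic PyTorch check, not something from this repo:

```shell
# Point the dynamic loader at WSL's GPU driver libraries before launching
export LD_LIBRARY_PATH=/usr/lib/wsl/lib

# Optional: verify PyTorch can see the GPU (should print True on a working setup)
python -c "import torch; print(torch.cuda.is_available())"
```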