This is a Python client for TIRA.io. Please find the documentation online.
To access non-public endpoints, you need to authenticate via an API key. Please generate your API key online at tira.io/admin/api/keys and log in your tira client:
tira-cli login --token YOUR-TOKEN-HERE
You can download runs of published and unblinded submissions via:
from tira.rest_api_client import Client
tira = Client()
output = tira.get_run_output('<task>/<team>/<approach>', '<dataset>')
As an example, you can download all baseline BM25 runs submitted to TIREx via:
from tira.rest_api_client import Client
from tira.tirex import TIREX_DATASETS
tira = Client()
for dataset in TIREX_DATASETS:
    output = tira.get_run_output('ir-benchmarks/tira-ir-starter/BM25 Re-Rank (tira-ir-starter-pyterrier)', dataset)
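The value returned by get_run_output is the local directory that contains the run files; for IR tasks this is typically a TREC-style run.txt. Continuing the loop above, you could load a run with pandas (a sketch, assuming that layout):

import pandas as pd

# 'output' is the directory returned by get_run_output above
run = pd.read_csv(f'{output}/run.txt', sep=r'\s+', names=['query', 'q0', 'docid', 'rank', 'score', 'system'])
print(run.head())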
As an example, you can list all public software submissions to TIREx via:
from tira.rest_api_client import Client
tira = Client()
submissions = tira.all_softwares("ir-benchmarks")
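You can then inspect the result directly (a minimal sketch, assuming the returned value is an iterable of approach identifiers):

for software in submissions:
    print(software)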
You can export datasets if you are the owner or if the dataset is public. Export a dataset via the CLI:
tira-run --export-dataset '<task>/<tira-dataset>' --output-directory tira-dataset
Export a dataset via the python API:
from tira.rest_api_client import Client
tira = Client()
tira.download_dataset('<task>', '<tira-dataset>')
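If you want to inspect the downloaded files from Python, you can use the returned value as a local path (a sketch, assuming download_dataset returns the directory it extracted the dataset to):

from pathlib import Path

dataset_dir = tira.download_dataset('<task>', '<tira-dataset>')
for f in Path(dataset_dir).iterdir():
    print(f)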
Build the Docker image for your submission via:
docker build -t tira/submission-base-image:1.0.0 -f Dockerfile .
You can then test the submission locally with the following command:
tira-run \
--input-directory ${PWD}/input \
--output-directory ${PWD}/output \
--image tira/submission-base-image:1.0.0 \
--command 'tira-run-notebook --input $inputDataset --output $outputDir /workspace/template-notebook.ipynb'
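When the local run finishes, the results should appear in ${PWD}/output. A quick sanity check from Python (a sketch; which files to expect depends on your task, run.txt being the usual IR case):

from pathlib import Path

# Hypothetical sanity check: list what the local run wrote to ./output
files = sorted(p.name for p in Path('output').iterdir())
print(files)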
Afterwards, you can push the image to TIRA:
docker push tira/submission-base-image:1.0.0
and set the command:
tira-run-notebook --input $inputDataset --output $outputDir /workspace/template-notebook.ipynb
Finally, if the actual processing in the notebook is toggled via is_running_as_inference_server() (as seen in the template notebook) and your notebook defines a function named predict in the format

def predict(input_list: List) -> List:

you can start an inference server for your model with:
PORT=8001
docker run --rm -it --init \
-v "$PWD/logs:/workspace/logs" \
-p $PORT:$PORT \
tira/submission-base-image:1.0.0 \
tira-run-inference-server --notebook /workspace/template-notebook.ipynb --port $PORT
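For reference, the predict function is what the inference server calls for each request. A minimal sketch of such a function (hypothetical; it merely echoes the inputs, a real implementation would run your model):

from typing import List

def predict(input_list: List) -> List:
    # Return one output element per input element; replace with actual model inference
    return [{'input': item, 'prediction': None} for item in input_list]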
Exemplary requests for a server running on localhost:8001 are:
# POST (JSON list as payload)
curl -X POST -H "Content-Type: application/json" \
-d "[\"element 1\", \"element 2\", \"element 3\"]" \
localhost:8001
and
# GET (JSON object string(s) passed to the 'payload' parameter)
curl "localhost:8001?payload=\"element+1\"&payload=\"element+2\"&payload=\"element+3\""