This package contains a collection of libraries for rapid testing and development in computer vision research settings.
Note that this is explicitly not a 'framework', but rather a collection of common utilities and boilerplate code. The individual libraries can be used and combined independently, without requiring a large paradigm shift or major cross-dependencies.
Additionally, unipercept includes a Hydra-based configuration system and CLI to speed up common research tasks.
This package requires at least Python 3.11 and PyTorch 2.1. Once you have created an environment with these dependencies, you can install unipercept using one of three installation methods.
You can install the latest stable release from PyPI via
pip install unipercept
To install the latest version, which is not guaranteed to be stable, install from GitHub using
pip install git+https://github.com/kurt-stolle/unipercept.git
If your use case requires changes to our codebase, we recommend that you first fork this repository and clone your own fork locally. Assuming you have the GitHub CLI installed, you can clone your fork with
gh repo clone unipercept
Then, you can proceed to install the package in editable mode by running
pip install --editable unipercept
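After installing, a quick check confirms the package is resolvable from your environment. This is a minimal sketch; it only assumes the import name unipercept, matching the pip commands above:

```python
# Sanity check: verify that unipercept resolves to an importable module
# without actually importing it (avoids pulling in heavy dependencies).
import importlib.util

spec = importlib.util.find_spec("unipercept")
print("unipercept importable:", spec is not None)
```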
You are invited to share your improvements to the codebase through a pull request on this repository.
Before making a pull request, please ensure your changes follow our code guidelines by running pre-commit before adding your files to a Git commit.
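The typical pre-commit workflow is to install the hooks once and then run them over your changes (standard pre-commit usage; the hook set itself comes from the repository's .pre-commit-config.yaml):

```shell
# One-time setup: register the hooks with your local Git clone.
pre-commit install

# Check all files against the configured hooks before committing.
pre-commit run --all-files
```

Once installed, the hooks also run automatically on each `git commit`.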
Models can be trained and evaluated from the CLI or through the Python API.
To train a model with the CLI:
unipercept train --config <config path> <overrides>
If no <config path> is given, an interactive prompt is started to assist in finding a configuration file.
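Since the configuration system is Hydra-based, the <overrides> typically take the dotted key=value form. A hypothetical example follows; the config path and key below are made up for illustration, so consult your configuration file for the real names:

```shell
unipercept train --config configs/example.py engine.params.train_batch_size=4
```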
To evaluate a model with the CLI:
unipercept evaluate --config <config path> <overrides>
All tests can be run via python -m pytest.
However, we also provide a make target that uses pytest-xdist to speed up the process:
make test
You may need to tune the parameters, e.g. by lowering the number of pytest-xdist workers, if memory problems arise during testing.
Similarly, benchmarks are implemented using pytest-benchmark. To run them, use:
make benchmark
Coverage reports are built using pytest-cov and generated by running:
make coverage
Lastly, we largely follow the same testing principles and methods as Transformers. For more information on using pytest for automated testing, check out its documentation.
If you are using a Conda environment that ships its own GCC/G++ versions, you may encounter issues when building the Torch extensions shipped with unipercept. To resolve this, you can set the CC and CXX environment variables to the active environment's GCC/G++ binaries before running the installation command.
In your Conda environment, run the following commands:
conda env config vars set CC=$(which gcc)
conda env config vars set CXX=$(which g++)
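To check which compilers a build would actually pick up, you can print the variables with a fallback to whatever is on the current PATH (a minimal sketch using standard shell semantics):

```shell
# Print the compilers the build will use: the CC/CXX variables if set,
# otherwise whatever gcc/g++ resolve to on the current PATH.
echo "CC=${CC:-$(which gcc)}"
echo "CXX=${CXX:-$(which g++)}"
```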
Then, reload the environment and install unipercept as usual.
This repository is released under the MIT License. For more information, please refer to the LICENSE file.