Commit

Fixed typo
kurt-stolle committed Dec 12, 2023
1 parent da61c89 commit 007dc9b
Showing 3 changed files with 40 additions and 25 deletions.
53 changes: 32 additions & 21 deletions README.md
@@ -1,37 +1,48 @@
# Unified Perception: Efficient Video Panoptic Segmentation with Minimal Annotation Costs
# UniPercept

Welcome to the PyTorch 2 implementation of Unified Perception, an innovative approach to depth-aware video panoptic segmentation. Our method achieves state-of-the-art performance without the need for video-based training. Instead, it utilizes a two-stage cascaded tracking algorithm that reuses object embeddings computed in an image-based network. This repository mirrors the research paper [Unified Perception](https://arxiv.org/abs/2303.01991).
## Installation

## Introduction
This package requires at least Python 3.11 and PyTorch 2.1. Once you have created an environment with these
dependencies, you can install `unipercept` using one of the three methods below.
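As a quick sanity check before installing, the following minimal sketch (not part of the package) verifies the stated prerequisites; it assumes only the standard library and an importable `torch`:
```python
import sys

import torch

# Minimum versions stated above: Python 3.11 and PyTorch 2.1.
assert sys.version_info >= (3, 11), f"Python 3.11+ required, found {sys.version}"

torch_major, torch_minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (torch_major, torch_minor) >= (2, 1), f"PyTorch 2.1+ required, found {torch.__version__}"
```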

Unified Perception is designed to tackle the challenge of depth-aware video panoptic segmentation with high precision. The method proposed in this repository effectively combines the benefits of image-based networks with a two-stage cascaded tracking algorithm, ultimately enhancing the performance of video panoptic segmentation tasks without the need for video-based training.

## Technical Documentation

For a comprehensive understanding of the Unified Perception implementation, please visit our technical documentation hosted on:

[https://tue-mps.github.io/unipercept](https://tue-mps.github.io/unipercept)

## Usage & Installation (Coming Soon)
### Stable release (recommended)
You can install the latest stable release from PyPI via
```bash
pip install unipercept
```

_This section is under development and will be added in the near future. It will provide step-by-step instructions for using and installing the Unified Perception package. Consider starring or subscribing to this repository for updates._
### Master branch
To install the latest version, which is not guaranteed to be stable, install from GitHub using
```bash
pip install git+https://github.com/kurt-stolle/unipercept.git
```

The following _optional_ packages may be installed:
### Developers
If your use case requires changes to our codebase, we recommend that you first fork this repository and clone your
fork locally. Assuming you have the GitHub CLI installed, you can clone your fork with
```bash
gh repo clone unipercept
```
Then, you can proceed to install the package in editable mode by running
```bash
pip uninstall pillow
CC="cc -mavx2" pip install -U --force-reinstall pillow-simd
pip install --editable unipercept
```
You are invited to share your improvements to the codebase through a pull request on this repository.
Before opening a pull request, please ensure your changes follow our code guidelines by running `pre-commit` before
committing your files.

## Training and evaluation

Use the CLI:
Models can be trained and evaluated from the CLI or through the Python API.

### CLI
To train a model with the CLI:
```bash
unicli train --config <config name>
unicli train --config <config path>
```
Without a `<config name>`, an interactive prompt will be started to assist in finding a configuration file.

Without a `<config name>`, an interactive prompt will be started.

## Validation and testing
## Developer guidelines
All tests can be run via `python -m pytest`.
However, we also provide a `make` target that uses `pytest-xdist` to speed up the process:
```
6 changes: 3 additions & 3 deletions sources/unipercept/integrations/wandb_integration.py
@@ -101,9 +101,9 @@ def on_trackers_setup(self, params: EngineParams, state: State, control: Signal,
    def on_save(
        self, params: EngineParams, state: State, control: Signal, *, model_path: str, state_path: str, **kwargs
    ):
        if self.model_history <= 0:
        if self.model_history > 0:
            self._log_model(model_path)
        if self.state_history <= 0:
        if self.state_history > 0:
            self._log_state(state_path)

    @TX.override
@@ -118,7 +118,7 @@ def on_inference_end(
        results_path: str,
        **kwargs,
    ):
        if self.inference_history <= 0:
        if self.inference_history > 0:
            self._log_inference(results_path)
        if self.tabulate_inference_timings:
            self._log_profiling("inference/timings", timings)
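The change above inverts the history checks so that model checkpoints, engine states, and inference results are logged only when the corresponding history size is positive, rather than when it is zero or negative. Below is a minimal standalone sketch of the corrected gating pattern; the class and method names are illustrative stand-ins, not the actual callback API:
```python
class ArtifactLoggerSketch:
    """Illustrative only: mirrors the corrected `> 0` gating from the diff above."""

    def __init__(self, model_history: int = 0, state_history: int = 0):
        self.model_history = model_history
        self.state_history = state_history

    def on_save(self, model_path: str, state_path: str) -> None:
        # The old `<= 0` check logged only when the history size was non-positive,
        # i.e. when the feature was effectively disabled.
        if self.model_history > 0:
            print(f"would log model artifact: {model_path}")
        if self.state_history > 0:
            print(f"would log state artifact: {state_path}")
```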
6 changes: 5 additions & 1 deletion sources/unipercept/nn/backbones/timm.py
@@ -60,6 +60,7 @@ def __init__(
        pretrained: bool = True,
        nodes: T.Sequence[int] | None | int = None,
        keys: T.Sequence[str] | None = None,
        jit_script: bool = False,
        **kwargs,
    ):
        dims = _get_dimension_order(name)
@@ -78,7 +79,10 @@ def __init__(

        super().__init__(dimension_order=dims, feature_info={k: v for k, v in zip(keys, info)}, **kwargs)

        self.ext = torch.jit.script(extractor)
        self.ext = extractor

        if jit_script:
            self.ext = torch.jit.script(self.ext) # type: ignore

    @override
    def forward_extract(self, images: torch.Tensor) -> OrderedDict[str, torch.Tensor]:
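The change above makes TorchScript compilation of the feature extractor opt-in via the new `jit_script` flag rather than applying it unconditionally. Below is a minimal self-contained sketch of the same pattern using the public `torch.jit.script` API; the module and builder are placeholders, not the actual timm backbone wrapper:
```python
import torch
from torch import nn


class DummyExtractor(nn.Module):
    """Placeholder for the feature extractor wrapped by the backbone."""

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return images.mean(dim=1, keepdim=True)


def build_extractor(jit_script: bool = False) -> nn.Module:
    ext: nn.Module = DummyExtractor()
    if jit_script:
        # Opt-in TorchScript compilation, mirroring the new `jit_script` flag.
        ext = torch.jit.script(ext)
    return ext


# Usage: compile the extractor only when explicitly requested.
features = build_extractor(jit_script=True)(torch.randn(1, 3, 32, 32))
```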
