
PatchRefinerV2

Fast and Lightweight Real-Domain High-Resolution Metric Depth Estimation

Paper | License: MIT

Zhenyu Li, Wenqing Cui, Shariq Farooq Bhat, Peter Wonka.
KAUST

NEWS

  • 2025-01-05: Code released. Pretrained models are coming soon.

Repo Features

Environment setup

Install the environment using environment.yml:

Using mamba (fastest):

mamba env create -n patchrefinerv2 --file environment.yml
mamba activate patchrefinerv2

Using conda:

conda env create -n patchrefinerv2 --file environment.yml
conda activate patchrefinerv2
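
After activation, a quick sanity check (assuming environment.yml pins PyTorch, which this repo relies on) confirms that PyTorch and CUDA resolve correctly:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"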

NOTE:

Before running the code, please first run:

export PYTHONPATH="${PYTHONPATH}:/path/to/the/folder/PatchRefinerV2"
export PYTHONPATH="${PYTHONPATH}:/path/to/the/folder/PatchRefinerV2/external"

Make sure you have exported the external folder, which stores code from other repos (ZoeDepth, Depth-Anything, etc.).
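
If you prefer not to export PYTHONPATH, a minimal alternative is to prepend the same paths in your own entry script (a sketch; adjust repo_root to your checkout):

# Equivalent to the exports above; run this before importing any repo modules.
import sys

repo_root = "/path/to/the/folder/PatchRefinerV2"  # adjust to your checkout
sys.path.extend([repo_root, repo_root + "/external"])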

Pre-Train Model

Before training and inference, please prepare some pretrained models from here (TBD).

Unzip the file and make sure the work_dir folder exists in this repo afterwards.

A table presenting the models, their corresponding checkpoints, and configs will be added here.

User Inference (will be updated once the pretrained models are released)

Running:

To execute user inference, use the following command:

python tools/test.py ${CONFIG_FILE} --ckp-path <checkpoints> --cai-mode <m1 | m2 | rn> --cfg-option general_dataloader.dataset.rgb_image_dir='<img-directory>' [--save] --work-dir <output-path> --test-type general [--gray-scale] --image-raw-shape [h w] --patch-split-num [h w]

Arguments Explanation (More details can be found here):

  • ${CONFIG_FILE}: Select the configuration file from the following options based on the inference type you want to run:
    • configs/patchrefiner_zoedepth/pr_u4k.py for example. Please refer to the table in the Pre-Train Model section for more options.
  • --ckp-path: Specify the checkpoint path.
    • work_dir/zoedepth/u4k/pr/checkpoint_36.pth for example. Please refer to the table in the Pre-Train Model section for more options.
  • --cai-mode: Define the specific mode to use. For example, rn indicates n patches in mode r.
  • --cfg-option: Specify the input image directory. Keep the prefix as-is, since it indexes into the configuration. (We use MMEngine to organize this repo's configurations; see MMEngine for details.)
  • --save: Enable saving of output files to the specified --work-dir directory (make sure to use it; otherwise nothing will be saved).
  • --work-dir: Directory where the output files will be stored, including a colored depth map and a 16-bit PNG file (multiplier=256); a read-back sketch follows this list.
  • --gray-scale: If set, the output will be a grayscale depth map. If omitted, a color palette is applied to the depth map by default.
  • --image-raw-shape: Specify the original dimensions of the input image. Input images will be resized to this resolution before being processed by the model. Default: 2160 3840.
  • --patch-split-num: Define how the input image is divided into smaller patches for processing. Default: 4 4. (See the documentation for more details.)
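
As noted in the --work-dir entry above, the saved 16-bit PNG encodes depth scaled by 256. A minimal read-back sketch (the file name here is hypothetical; requires opencv-python and numpy):

import cv2
import numpy as np

# IMREAD_UNCHANGED keeps the 16-bit values instead of converting to 8-bit.
depth_png = cv2.imread("work_dir/predictions/example.png", cv2.IMREAD_UNCHANGED)
depth_metric = depth_png.astype(np.float32) / 256.0  # undo the multiplier=256 scaling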

Example Usage:

Below is an example command that demonstrates how to run the inference process:

python ./tools/test.py configs/patchrefiner_zoedepth/pr_u4k.py --ckp-path work_dir/zoedepth/u4k/pr/checkpoint_36.pth --cai-mode r32 --cfg-option general_dataloader.dataset.rgb_image_dir='./examples/' --save --work-dir ./work_dir/predictions --test-type general --image-raw-shape 1080 1920 --patch-split-num 2 2

This example performs inference using the pr_u4k.py configuration, loads the specified checkpoint work_dir/zoedepth/u4k/pr/checkpoint_36.pth, sets the PatchRefinerV2 mode to r32, specifies the input image directory ./examples/, and saves the output to ./work_dir/predictions. The original dimensions of the input image are 1080x1920, and the image is divided into 2x2 patches (illustrated below).
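
For intuition, here is the patch geometry this example implies (an illustration only, not code from this repo):

# --image-raw-shape 1080 1920 with --patch-split-num 2 2 gives 4 base patches.
h, w = 1080, 1920
rows, cols = 2, 2
print(h // rows, w // cols)  # 540 960: the resolution of each base patch
# --cai-mode r32 then processes 32 patches in mode r on top of the base split.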

User Training (will be updated once the pretrained models are released)

Please refer to user_training for more details.

Citation

If you find our work useful for your research, please consider citing the paper:

@article{li2025patchrefinerv2,
    title={PatchRefiner V2: Fast and Lightweight Real-Domain High-Resolution Metric Depth Estimation}, 
    author={Li, Zhenyu and Cui, Wenqing and Bhat, Shariq Farooq and Wonka, Peter},
    journal={arXiv preprint arXiv:2501.01121},
    year={2025}
}
