# Running Inference with HyperSeg

We provide the pretrained HyperSeg-3B model weights. Please download them from HyperSeg-3B and place them in the current path.

## RES (RefCOCO/+/g)

```shell
deepspeed /eval/eval_refcoco.py \
  --image_folder /dataset/coco/train2014 \
  --json_path /dataset/RES/refcoco/refcoco_val.json \
  --model_path /model/HyperSeg-3B \
  --output_dir /output/RES
```

## ReasonSeg

```shell
deepspeed /eval/eval_ReasonSeg.py \
  --reason_path /dataset/ReasonSeg \
  --model_path /model/HyperSeg-3B \
  --output_dir /output/ReasonSeg \
  --reason_seg_data "ReasonSeg|val"
```

Note that the `--reason_seg_data` value is quoted so the shell does not interpret `|` as a pipe.
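The `--reason_seg_data` value appears to follow a `<dataset>|<split>` convention; how the evaluation script parses it internally is an assumption, but the pipe-separated form can be sketched with plain shell parameter expansion:

```shell
# Sketch of the assumed "<dataset>|<split>" convention used by --reason_seg_data.
# Always quote the value on the command line so | is not treated as a pipe.
SPEC="ReasonSeg|val"
DATASET="${SPEC%%|*}"   # text before the |
SPLIT="${SPEC##*|}"     # text after the |
echo "dataset=${DATASET} split=${SPLIT}"
```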

## ReasonVOS

```shell
deepspeed /eval/eval_ReasonVOS.py \
  --revos_path /dataset/ReVOS \
  --model_path /model/HyperSeg-3B \
  --save_path /output/ReasonVOS
```

## MMBench

Refer to the MMBench GitHub repository to download the benchmark dataset.

```shell
sh hyperseg/eval/script/test_mmb.sh
```

The response file is written to `/output/mmb/answers_upload`. Submit the Excel file via the submission link to obtain the evaluation scores.

## VQAv2

Refer to here to prepare the VQAv2 benchmark dataset.

```shell
sh hyperseg/eval/script/test_vqav2.sh
```

The response file is written to `/output/vqav2/vqav2_answers_upload.json`. Submit the JSON response file via the submission link (Test-Dev Phase) to obtain the evaluation scores.
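Before uploading, it can help to confirm that the response file actually parses as JSON (a malformed file will be rejected by the server). A minimal sketch, assuming only that the file is a JSON array of answer records:

```shell
# Sanity-check that the response file is valid JSON and report its entry count.
# The path matches the output location mentioned above.
python3 -c 'import json, sys
with open(sys.argv[1]) as f:
    data = json.load(f)
print(f"{len(data)} entries")' /output/vqav2/vqav2_answers_upload.json
```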