MOTCUSTOM Evaluation #883

Closed
1 task done
Hunter-v1 opened this issue May 19, 2023 · 55 comments
Labels
bug Something isn't working

Comments

@Hunter-v1

Search before asking

  • I have searched the Yolov8 Tracking issues and discussions and found no similar questions.

Yolov8 Tracking Component

Evaluation

Bug

val.py keeps generating KeyError: 'HOTA'

dict_keys([])
Traceback (most recent call last):
File "(project_env) D:\Project\yolov8_tracking\val.py", line 358, in
e.run(opt)
File "(projec_env) D:\Project\yolov8_tracking\val.py", line 305, in run
print('HOTA:', combined_results['HOTA'])
KeyError: 'HOTA'

I need to know how to make val.py go straight to the evaluation and skip the tracking task, or how to run val.py correctly otherwise.
Please note that I changed mot_challenge_2d_box.py so that the evaluation suits my classes.

Environment

No response

Minimal Reproducible Example

No response

@Hunter-v1 Hunter-v1 added the bug Something isn't working label May 19, 2023
@mikel-brostrom
Owner

val.py calls track because, code-wise, the only difference is a call to a track_eval script. This is in order not to duplicate code. Tracking is not skippable.

@Hunter-v1
Author

Why is that, when I can provide the tracking results with --eval-existing True? The problem is that the argument is there but it didn't work!
If there are any specifications not mentioned, please check!

@mikel-brostrom
Owner

mikel-brostrom commented May 19, 2023

That argument is to evaluate an already generated set of txt files (these are generated by track.py). So if they have already been generated you can use this flag. Will check if this got ported correctly after the last release 😊
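For example, a command along these lines (the project and experiment name are placeholders taken from later in this thread) would re-run only the evaluation on results that track.py already wrote:

python val.py --benchmark MOTCUSTOM --split train --project runs/track --name exp6 --exist-ok --eval-existing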

@Hunter-v1
Author

Yes, that's what I mean! I'm really confused because the tracking results produced inside val.py are different from the results of track.py for the same model, same tracking method and same sequence!

@mikel-brostrom
Owner

#889 (comment)

@mikel-brostrom
Owner

Does it work for you @Hunter-v1 ?

@Hunter-v1
Author

It generates a seq_paths variable error! It says that this local variable is referenced before assignment:

UnboundLocalError: local variable 'seq_paths' referenced before assignment

@mikel-brostrom
Owner

Thanks for the feedback!

@Hunter-v1
Author

Hunter-v1 commented May 20, 2023

Could you help by just mentioning where to change or adjust val.py so that I can specify the tracking results folder (all tracking results will be in one folder, with files named after the sequences) and go straight to the evaluation?
I'll be very grateful.

def eval_setup(self, opt, val_tools_path):
    # set paths
    gt_folder = val_tools_path / 'data' / self.opt.benchmark / self.opt.split
    mot_seqs_path = val_tools_path / 'data' / opt.benchmark / opt.split
    if opt.benchmark == 'MOT17':
        # each sequence is present 3 times, once per detector
        # (DPM, FRCNN, SDP). Keep only the sequences from one of them
        seq_paths = sorted([str(p / 'img1') for p in Path(mot_seqs_path).iterdir() if Path(p).is_dir()])
        seq_paths = [Path(p) for p in seq_paths if 'FRCNN' in p]
    elif opt.benchmark == 'MOT16' or opt.benchmark == 'MOT20':
        # sequences are not triplicated for MOT16 and MOT20
        seq_paths = [p / 'img1' for p in Path(mot_seqs_path).iterdir() if Path(p).is_dir()]
    elif opt.benchmark == 'MOT17-mini':
        mot_seqs_path = Path('./assets') / self.opt.benchmark / self.opt.split
        gt_folder = Path('./assets') / self.opt.benchmark / self.opt.split
        seq_paths = [p / 'img1' for p in Path(mot_seqs_path).iterdir() if Path(p).is_dir()]
    elif opt.benchmark == 'MOTCUSTOM':
        # this is the case for your custom dataset
        det_folder = val_tools_path / 'data' / self.opt.benchmark / self.opt.split
        seq_paths = [p / 'img1' for p in Path(mot_seqs_path).iterdir() if Path(p).is_dir()]
        save_dir = det_folder

    if opt.eval_existing and (Path(opt.project) / opt.name).exists():
        save_dir = Path(opt.project) / opt.name
        if not (Path(opt.project) / opt.name).exists():
            LOGGER.error(f'{save_dir} does not exist')
    else:
        save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)
    MOT_results_folder = val_tools_path / 'data' / 'trackers' / 'mot_challenge' / opt.benchmark / save_dir.name / 'data'
    (MOT_results_folder).mkdir(parents=True, exist_ok=True)  # make
    return seq_paths, save_dir, MOT_results_folder, gt_folder

mikel-brostrom pushed a commit that referenced this issue May 21, 2023
@mikel-brostrom
Owner

Actually, the custom dataset case was never handled. Now seq_paths is generated for the custom dataset as well.

@Hunter-v1
Author

Can I run val.py on different classes (changing the class to evaluate in mot_challenge_2d_box.py)? Because the dict_keys([])

Traceback (most recent call last):
File "(project_env) D:\Project\yolov8_tracking\val.py", line 358, in <module>
e.run(opt)
File "(project_env) D:\Project\yolov8_tracking\val.py", line 305, in run
print('HOTA:', combined_results['HOTA'])
KeyError: 'HOTA'

is persisting no matter what I try (executing track.py, then val.py with the --eval-existing --project --name --exist-ok arguments).

@mesllo-bc

Can I run val.py on different classes (changing the class to evaluate in mot_challenge_2d_box.py)? Because the dict_keys([])

Traceback (most recent call last):
File "(project_env) D:\Project\yolov8_tracking\val.py", line 358, in <module>
e.run(opt)
File "(project_env) D:\Project\yolov8_tracking\val.py", line 305, in run
print('HOTA:', combined_results['HOTA'])
KeyError: 'HOTA'

is persisting no matter what I try (executing track.py, then val.py with the --eval-existing --project --name --exist-ok arguments).

Same here.

@mikel-brostrom
Owner

mikel-brostrom commented May 21, 2023

Give me your full command + full output. Not just the last few lines. There is something failing prior to HOTA extraction

@mesllo-bc

mesllo-bc commented May 21, 2023

Full command:

python val.py --benchmark bar --split test --tracking-method bytetrack --yolo-weights weights/my_custom_model.pt

Full stack-trace:

Traceback (most recent call last):
  File "val_utils/scripts/run_mot_challenge.py", line 84, in <module>
    dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
  File "/home/foo/repos/yolov8_tracking/val_utils/trackeval/datasets/mot_challenge_2d_box.py", line 141, in __init__
    raise TrackEvalException(
trackeval.utils.TrackEvalException: Tracker file not found: bar/exp31/data/bar-42.txt

Eval Config:
USE_PARALLEL         : True                          
NUM_PARALLEL_CORES   : 4                             
BREAK_ON_ERROR       : True                          
RETURN_ON_ERROR      : False                         
LOG_ON_ERROR         : /home/foo/repos/yolov8_tracking/val_utils/error_log.txt
PRINT_RESULTS        : True                          
PRINT_ONLY_COMBINED  : False                         
PRINT_CONFIG         : True                          
TIME_PROGRESS        : True                          
DISPLAY_LESS_PROGRESS : False                         
OUTPUT_SUMMARY       : True                          
OUTPUT_EMPTY_CLASSES : True                          
OUTPUT_DETAILED      : True                          
PLOT_CURVES          : True                          

MotChallenge2DBox Config:
PRINT_CONFIG         : True                          
GT_FOLDER            : val_utils/data/bar/test
TRACKERS_FOLDER      : /home/foo/repos/yolov8_tracking/val_utils/data/trackers/mot_challenge/
OUTPUT_FOLDER        : None                          
TRACKERS_TO_EVAL     : ['bar']                
CLASSES_TO_EVAL      : ['pedestrian']                
BENCHMARK            : bar                    
SPLIT_TO_EVAL        : train                         
INPUT_AS_ZIP         : False                         
DO_PREPROC           : True                          
TRACKER_SUB_FOLDER   : exp31/data                    
OUTPUT_SUB_FOLDER    :                               
TRACKER_DISPLAY_NAMES : None                          
SEQMAP_FOLDER        : None                          
SEQMAP_FILE          : None                          
SEQ_INFO             : {'bar-42': None, 'bar-41': None}
GT_LOC_FORMAT        : {gt_folder}/{seq}/gt/gt.txt   
SKIP_SPLIT_FOL       : True                          

Tracker file not found: /home/foo/repos/yolov8_tracking/val_utils/data/trackers/mot_challenge/bar/exp31/data/bar-42.txt

Traceback (most recent call last):
  File "val.py", line 357, in <module>
    e.run(opt)
  File "val.py", line 307, in run
    writer.add_scalar('HOTA', combined_results['HOTA'])
KeyError: 'HOTA'

Note that my detection classes differ completely from the default. I am not detecting pedestrians; I am detecting multiple different objects, but so far I have not modified anything in that regard. The detection model is able to output the particular classes though.

@mikel-brostrom
Owner

mikel-brostrom commented May 21, 2023

@mesllo-bc, update your repo please. --yolo-weights doesn't even exist anymore

@Hunter-v1
Author

!python val.py --tracking-method deepocsort --yolo-model best_dji.pt --reid-model osnet_ain_x1_0.pt --benchmark MOTCUSTOM --split train --project runs/track --name exp6 --exist-ok --eval-existing
The result is:

val: yolo_model=best_dji.pt, reid_model=osnet_ain_x1_0.pt, tracking_method=deepocsort, name=exp6, project=runs/track, exist_ok=True, benchmark=MOTCUSTOM, split=train, eval_existing=True, conf=0.45, imgsz=[1280], device=[''], processes_per_device=2
Eval repo already downloaded
save_dir_eval_exist runs/track/exp6
val: yolo_model=best_dji.pt, reid_model=osnet_ain_x1_0.pt, tracking_method=deepocsort, name=exp6, project=runs/track, exist_ok=True, benchmark=MOTCUSTOM, split=train, eval_existing=True, conf=0.45, imgsz=[1280], device=[''], processes_per_device=2
Traceback (most recent call last):
File "yolov8_tracking/val_utils/scripts/run_mot_challenge.py", line 84, in
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
File "yolov8_tracking/val_utils/trackeval/datasets/mot_challenge_2d_box.py", line 75, in init
raise TrackEvalException('Attempted to evaluate an invalid class. Only pedestrian class is valid.')
trackeval.utils.TrackEvalException: Attempted to evaluate an invalid class. Only pedestrian class is valid.

Eval Config:
USE_PARALLEL : True
NUM_PARALLEL_CORES : 4
BREAK_ON_ERROR : True
RETURN_ON_ERROR : False
LOG_ON_ERROR : yolov8_tracking/val_utils/error_log.txt
PRINT_RESULTS : True
PRINT_ONLY_COMBINED : False
PRINT_CONFIG : True
TIME_PROGRESS : True
DISPLAY_LESS_PROGRESS : False
OUTPUT_SUMMARY : True
OUTPUT_EMPTY_CLASSES : True
OUTPUT_DETAILED : True
PLOT_CURVES : True

MotChallenge2DBox Config:
PRINT_CONFIG : True
GT_FOLDER : val_utils/data/MOTCUSTOM/train
TRACKERS_FOLDER : runs/track/exp6
OUTPUT_FOLDER : None
TRACKERS_TO_EVAL : ['labels']
CLASSES_TO_EVAL : ['Dji-Matrice']
BENCHMARK :
SPLIT_TO_EVAL : train
INPUT_AS_ZIP : False
DO_PREPROC : True
TRACKER_SUB_FOLDER :
OUTPUT_SUB_FOLDER :
TRACKER_DISPLAY_NAMES : None
SEQMAP_FOLDER : None
SEQMAP_FILE : None
SEQ_INFO : { 'MOTCUSTOM-10': None}
GT_LOC_FORMAT : {gt_folder}/{seq}/gt/gt.txt
SKIP_SPLIT_FOL : True

Traceback (most recent call last):
File "yolov8_tracking/val.py", line 355, in
e.run(opt)
File "yolov8_tracking/val.py", line 312, in run
writer.add_scalar('HOTA', combined_results['HOTA'])
KeyError: 'HOTA'

@mikel-brostrom
Owner

The error is clear: trackeval.utils.TrackEvalException: Attempted to evaluate an invalid class. Only pedestrian class is valid.

@mikel-brostrom
Owner

mikel-brostrom commented May 21, 2023

@Hunter-v1
Author

Hunter-v1 commented May 21, 2023

Get classes to eval

    self.valid_classes = ['Dji-Phantom', 'Dji-Mavic', 'Dji-Matrice']
    self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
                       for cls in self.config['CLASSES_TO_EVAL']]
    if not all(self.class_list):
        raise TrackEvalException('Attempted to evaluate an invalid class. Only drone class is valid.')
    self.class_name_to_class_id = {'DJi-Phantom': 1, 'Dji-Mavic': 2, 'Dji-Matrice': 3}
    self.valid_class_numbers = list(self.class_name_to_class_id.values())

CLASSES_TO_EVAL : ['Dji-Phantom', 'Dji-Mavic', 'Dji-Matrice']
And even after that it generates the same error
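One thing worth noting about the snippet above: class_list lowercases each entry of CLASSES_TO_EVAL before the membership test, but valid_classes keeps the mixed-case names, so the test can never succeed and the exception is raised no matter what is passed. A minimal sketch of a consistent version (the drone class names here are just the ones from this thread, as an assumption):

    # Get classes to eval; keep everything lowercase so the membership test can match
    self.valid_classes = ['dji-phantom', 'dji-mavic', 'dji-matrice']
    self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
                       for cls in self.config['CLASSES_TO_EVAL']]
    if not all(self.class_list):
        raise TrackEvalException('Attempted to evaluate an invalid class. Only drone classes are valid.')
    self.class_name_to_class_id = {'dji-phantom': 1, 'dji-mavic': 2, 'dji-matrice': 3}
    self.valid_class_numbers = list(self.class_name_to_class_id.values())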

@mesllo-bc

Traceback (most recent call last):
File "val_utils/scripts/run_mot_challenge.py", line 84, in
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
File "/home/foo/repos/yolov8_tracking/val_utils/trackeval/datasets/mot_challenge_2d_box.py", line 141, in init
raise TrackEvalException(
trackeval.utils.TrackEvalException: Tracker file not found: bar/exp31/data/bar-42.txt

Could you provide an indication as to how to solve the above error though? This error happens whether or not I use the latest commit. I feel that it may be happening because the detector is not getting any detections and hence not creating a text file for frame sequence set 42, but then in that case it would make sense to implement a catch for such a situation.

@Hunter-v1
Author

If you pass that sequence to track.py, does it generate detections and appropriate classes?

@mesllo-bc

I fixed this by removing the sequences that the detector can't provide detections for. It would be nice to adapt the repo to catch such situations. Anyway, my problem now is also related to the appropriate classes. In my case I have multiple classes and have had to tweak val.py to include my custom classes (that my detector can detect). It turns out that I also have to specify these classes as part of the detector results. Basically the following format:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>

Does not work because it also expects the class as part of the results. Will look into it further. @Hunter-v1 and @mikel-brostrom have you come across the need to modify or add the class of the detection as part of the results too before?

@Hunter-v1
Author

Hunter-v1 commented May 23, 2023

I just discovered that the gt.txt files shouldn't have that format; instead, each line should contain the following fields (a couple of sample lines are shown after the list):

<frame>
<id>
<bb_left>
<bb_top>
<bb_width>
<bb_height>
<conf>
<class_id>
<visibility_ratio>
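For example, a gt.txt line following this layout would look like this (the numbers are made up, with class_id 2 and full visibility):

1,1,912,484,97,109,1,2,1.0
2,1,916,485,96,108,1,2,1.0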

You should also create a class map in mot_challenge_2d_box.py, which I'm working on right now! If you solve the problem I may need your solution as well!

@mesllo-bc

mesllo-bc commented May 23, 2023

Then this doc ideally should be updated if this is the case, right @mikel-brostrom?

@Hunter-v1 could you please elaborate on what you mean by a class map and where I'd be able to find more info on this please? Also how is this format not even mentioned in the official MOT challenge site? Seems strange.

@mikel-brostrom
Owner

mikel-brostrom commented May 23, 2023

I fixed this by removing the sequences that the detector can't provide detections for. It would be nice to adapt the repo to catch such situations.

Never thought this could be a possibility. But you are right: when no detections are found, no MOT txt is generated. Will look into this.

It turns out that I also have to specify these classes as part of the detector results.

I have never tried to run this on a multi-class dataset. So, everything regarding this is new for me as well. If you find a solution we can update val.py based on it.

could you please elaborate on what you mean by a class map and where I'd be able to find more info on this please? Also how is this format not even mentioned in the official MOT challenge site?

The trackeval docs are missing a lot of information. But maybe you can find some useful information in the KITTI evaluation file as I know for a fact it is a multi-class MOT dataset.

@Hunter-v1
Author

For a custom dataset, I personally modelled mine after the MOT20 one. However, again, I followed the doc under this repo for the format:

MOTCUSTOM
├── test
│   ├── MOTCUSTOM-01          # the content of all the sequence folders should be the same
│   │   ├── img1
│   │   │    ├── 000001.jpg
│   │   │    ├── 000002.jpg
│   │   │    ├── ...
│   │   │    └── XXXXXX.jpg
│   │   ├── gt
│   │   │   └── gt.txt
│   │   └── seqinfo.ini
│   ├── MOTCUSTOM-02
│   ├── ...
│   └── MOTCUSTOM-05
└── train
    ├── MOTCUSTOM-06
    ├── ...
    └── MOTCUSTOM-10

It seems that after tweaking it, the trackeval repo that is installed automatically should be able to handle it, but only after changing the gt.txt files as per your suggestion @Hunter-v1. I don't even have det.txt files as they appear in MOT20 test/, and I'm not even sure what those are for. Also @Hunter-v1, if you could specify why you need to add the class map, that would help a lot, because so far it doesn't seem like it's required for my custom dataset.

@mesllo-bc, for the class map I meant the classes in mot_challenge_2d_box.py, because it will always evaluate the pedestrian class (or the class with class_id = 1)!
In summary, in my opinion you should change the data format of gt.txt and the class names in mot_challenge_2d_box.py (so that the output mentions your class names) and the evaluation should run properly.

@Hunter-v1
Author

I can confirm that applying the fix to the gt.txt files that @Hunter-v1 suggested has finally brought me to a full results output with val.py, i.e. no errors are popping up now. Indeed, I still need to look further into whether the results are actually correct, because I have not even modified the detection outputs in track.py to also include the class_id, so essentially the results are probably nonsense.

@mesllo-bc are the results correct now?

@mesllo-bc

mesllo-bc commented May 23, 2023

In summary, in my opinion you should change the data format of gt.txt and the class names in mot_challenge_2d_box.py (so that the output mentions your class names) and the evaluation should run properly.

Yes indeed. This is precisely the tweak I did to fix the HOTA issue that was originally described in this ticket. Basically add the class id to the ground truth files of your dataset and make sure that the trackeval scripts also refer to these classes where necessary as opposed to the original pedestrian class.

I have not been able to confirm whether the results are correct yet. I believe for the results to be correct we also need to ensure that the detector in track.py also saves the class_id to the tracking files accordingly:

# also add class_id to the output text file of tracker
f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left,  # MOT format
                                                               bbox_top, bbox_w, bbox_h, conf, class_id, -1, -1))

This way the ground truth should be able to compare the classes outputted too. I don't know if this is the right way to go though for the results to be accurate, I have not been able to test it yet. Please let me know if you manage from your end!
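With that write, a tracker results line would come out space-separated, something like the following (numbers made up, class_id in the 8th column):

1 3 100 200 50 40 0.87 2 -1 -1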

@Hunter-v1
Author

In summary, in my opinion you should change the data format of gt.txt and the class names in mot_challenge_2d_box.py (so that the output mentions your class names) and the evaluation should run properly.

Yes indeed. This is precisely the tweak I did to fix the HOTA issue that was originally described in this ticket. Basically add the class id to the ground truth files of your dataset and make sure that the trackeval scripts also refer to these classes where necessary as opposed to the original pedestrian class.

I have not been able to confirm whether the results are correct yet. I believe for the results to be correct we also need to ensure that the detector in track.py also saves the class_id to the tracking files accordingly:

# also add class_id to the output text file of tracker
f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left,  # MOT format
                                                               bbox_top, bbox_w, bbox_h, conf, class_id, -1, -1))

This way the ground truth should be able to compare the classes outputted too. I don't know if this is the right way to go though for the results to be accurate, I have not been able to test it yet. Please let me know if you manage from your end!

@mesllo-bc
For the tracking results I think it's not necessary, because even the outputs of val.py for MOT17-mini, for example, are in the same format recommended by MOTChallenge. And it works properly, so the real issue is the gt.txt file format.

@mesllo-bc

In summary, in my opinion you should change the data format of gt.txt and the class names in mot_challenge_2d_box.py (so that the output mentions your class names) and the evaluation should run properly.

Yes indeed. This is precisely the tweak I did to fix the HOTA issue that was originally described in this ticket. Basically add the class id to the ground truth files of your dataset and make sure that the trackeval scripts also refer to these classes where necessary as opposed to the original pedestrian class.
I have not been able to confirm whether the results are correct yet. I believe for the results to be correct we also need to ensure that the detector in track.py also saves the class_id to the tracking files accordingly:

# also add class_id to the output text file of tracker
f.write(('%g ' * 10 + '\n') % (frame_idx + 1, id, bbox_left,  # MOT format
                                                               bbox_top, bbox_w, bbox_h, conf, class_id, -1, -1))

This way the ground truth should be able to compare the classes outputted too. I don't know if this is the right way to go though for the results to be accurate, I have not been able to test it yet. Please let me know if you manage from your end!

@mesllo-bc For the tracking results I think it's not necessary, because even the outputs of val.py for MOT17-mini, for example, are in the same format recommended by MOTChallenge. And it works properly, so the real issue is the gt.txt file format.

But then how will the evaluation take into account whether the tracker matches the bounding box of the correct object class that is also in the ground truth?

@Hunter-v1
Author

@mesllo-bc Because you choose the class to evaluate by setting the class name, all IDs should be referred to that class. Just check class=car for MOT17: val.py just knows that those IDs are related to the pedestrian class (in gt.txt), and in the evaluation it relates them to the appropriate class.
But to be sure, try that! I'll wait for your results. Keep me updated please.
But if you're evaluating multi-class, you should go for KITTI as @mikel-brostrom said; I would be grateful if you give feedback! I've been struggling for a week to discover that the gt.txt was incorrect.

@mesllo-bc

Ok, so let's say your video segment contains two ground-truth cars being tracked: a Ferrari and a Mercedes.

If you want to evaluate the tracking on the ferrari you have to set:

CLASSES_TO_EVAL = ['ferrari']

Similarly for the Mercedes. Then the classes of the detections sent by the detection model are cross-referenced with this class, and the results are output accordingly. Is this correct?

If so then why wouldn't you just set:

CLASSES_TO_EVAL = ['ferrari', 'mercedes']

Wouldn't this get results for all classes?

@Hunter-v1
Author

I can confirm that the classes are not imported from the list only (it should be in the correct order in mot_challenge_2d_box.py, by the way).
And you can set all classes, even in the mot_challenge collection.
For me the problem now is where it imports the class names from!
And how it sets the class_id! Is it in the training of the ReID model or of the detection model?

@Hunter-v1
Author

The class_id is imported from the detection model. I just discovered this while printing your suggestion.
Thanks @mesllo-bc

@mikel-brostrom
Owner

mikel-brostrom commented May 25, 2023

Closing this related issue: #889

@github-actions

github-actions bot commented Jun 5, 2023

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

@github-actions github-actions bot added the Stale label Jun 5, 2023
@mikel-brostrom
Owner

mikel-brostrom commented Jun 5, 2023

It would be great if somebody could share a multi-class dataset, or at least a very small subset of one, so that I can fix this. Otherwise I have no way of moving forward with this issue.

@mikel-brostrom
Owner

#927

@Hunter-v1
Author

For CUSTOMDATASET you should:
First, change this line in CLASS_EVAL.
Then, CLASS_EVAL, and then specify the yolo.yaml of your trained yolo-model.
Then, for Class_id you should specify the id of the class that is the subject of the evaluation.
For the gt.txt, it should be as I mentioned before: gt.txt format.

@github-actions github-actions bot removed the Stale label Jun 6, 2023
@mikel-brostrom
Owner

Thanks for sharing this information @Hunter-v1! So basically, when it comes to trackeval the model classes should be passed to MotChallenge2DBox to the two lines you mentioned?

@mikel-brostrom
Owner

mikel-brostrom commented Jun 6, 2023

All this:

For CUSTOMDATASET you should: First, change this line in CLASS_EVAL. Then, CLASS_EVAL, and then specify the yolo.yaml of your trained yolo-model. Then, for Class_id you should specify the id of the class that is the subject of the evaluation. For the gt.txt, it should be as I mentioned before: gt.txt format.

got added to https://github.com/mikel-brostrom/yolo_tracking/wiki/How-to-evaluate-on-custom-tracking-dataset. MOT results now follow this format: #883 (comment), changed here: https://github.com/mikel-brostrom/yolo_tracking/actions/runs/5188019801

@mikel-brostrom mikel-brostrom mentioned this issue Jun 6, 2023
1 task
@mikel-brostrom
Owner

mikel-brostrom commented Jun 6, 2023

Sadly, this line of code in trackeval:

https://github.com/JonathonLuiten/TrackEval/blob/12c8791b303e0a0b50f753af204249e622d0281a/trackeval/datasets/mot_challenge_2d_box.py#L71

cannot be affected by any output argument. Which makes this not automatable @Hunter-v1

@Hunter-v1
Author

There is also this line that you should revisit: distractor_classes.
You may also revisit this line if your class_id is different from 1.
If not, only id=1 is valid.

Sadly, this line of code in trackeval:

https://github.com/JonathonLuiten/TrackEval/blob/12c8791b303e0a0b50f753af204249e622d0281a/trackeval/datasets/mot_challenge_2d_box.py#L71

cannot be affected by any output argument. Which makes this not automatable @Hunter-v1

@mikel-brostrom yes it's not automatable

@Hunter-v1
Author

Thanks for sharing this information @Hunter-v1! So basically, when it comes to trackeval the model classes should be passed to MotChallenge2DBox to the two lines you mentioned?

Not only those two; check these as well:

There is also this line that you should revisit: distractor_classes. You may also revisit this line if your class_id is different from 1. If not, only id=1 is valid.

@mikel-brostrom
Owner

There is also this line that you should revisit: distractor_classes.
You may also revisit this line if your class_id is different from 1.
If not, only id=1 is valid.

Thanks. Will try to find a way of using the internals of trackeval to make this happen. But yup, with the official run_mot_challenge.py it is definitely not possible.

@mikel-brostrom
Owner

mikel-brostrom commented Jun 6, 2023

Something like this would do for customizing the eval

""" run_mot_challenge.py

Run example:
run_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_T

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']
        'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
        'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'DO_PREPROC': True,  # Whether to perform preprocessing (never done for 2D_MOT_2015)
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'VACE']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support
from boxmot.utils import logger as LOGGER
from boxmot.utils import ROOT, EXAMPLES

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity'], 'THRESHOLD': 0.5}
    default_dataset_config['GT_FOLDER'] = str(ROOT / 'assets' / 'MOT17-mini' / 'train')
    default_dataset_config['SEQ_INFO'] = {'MOT17-02-FRCNN': None}
    default_dataset_config['SPLIT_TO_EVAL'] = 'train'
    default_dataset_config['BENCHMARK'] = ''
    default_dataset_config['TRACKERS_FOLDER'] = EXAMPLES / 'runs' / 'val' / 'exp44'
    default_dataset_config['TRACKER_SUB_FOLDER'] =  ""
    default_dataset_config['DO_PREPROC'] =  False
    default_dataset_config['SKIP_SPLIT_FOL'] =  True
    default_dataset_config['TRACKERS_TO_EVAL'] = ['labels']
    #default_dataset_config['CLASSES_TO_EVAL'] = str(ROOT / 'assets' / 'MOT17-mini' / 'train')
    print(default_dataset_config)
    print(default_metrics_config)
    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + 'must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            elif setting == 'SEQ_INFO':
                x = dict(zip(args[setting], [None]*len(args[setting])))
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    ########################################################## NOTICE THIS BLOCK
    evaluator = trackeval.Evaluator(eval_config)
    mc2dBox = trackeval.datasets.MotChallenge2DBox(dataset_config)
    mc2dBox.class_list = ['pedestrian']
    mc2dBox.class_name_to_class_id['pedestrian'] = 0
    mc2dBox.valid_class_numbers = [0] 
    dataset_list = [mc2dBox]
    ############################################################################

    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.VACE]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric(metrics_config))
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)

I guess this will be the next thing I implement for this repo
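For a multi-class custom dataset, the block marked above could presumably be adapted along these lines (the drone class names and ids are only an example borrowed from earlier in this thread, not something the repo ships):

    # Hypothetical multi-class override of the MotChallenge2DBox internals
    mc2dBox = trackeval.datasets.MotChallenge2DBox(dataset_config)
    mc2dBox.class_list = ['dji-phantom', 'dji-mavic', 'dji-matrice']
    mc2dBox.class_name_to_class_id = {'dji-phantom': 1, 'dji-mavic': 2, 'dji-matrice': 3}
    mc2dBox.valid_class_numbers = list(mc2dBox.class_name_to_class_id.values())
    dataset_list = [mc2dBox]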

@mikel-brostrom
Owner

But I still need a multi-class dataset sample to debug and test, so that I can proceed with the integration...

@captain0305

captain0305 commented Dec 1, 2024

Something like this would do for customizing the eval

Mike, I want to know what 'THRESHOLD': 0.5 is used for. I can see it in the HOTA, CLEAR and Identity metrics. Why do these metrics use the same THRESHOLD? I've tried to change the THRESHOLD and found that the result seemed to be incorrect. Looking forward to your reply! I wish you a good day and all the best in your work.
