MOTCUSTOM Evaluation #883
Why is it that, while I can provide the track result…
That argument is for evaluating an already generated set of txt files (these are generated by track.py). So if they have already been generated, you can use this flag. Will check if this got ported correctly after the last release 😊
Yes, that's what I mean!! I'm really confused because the track process results (in the…
Does it work for you, @Hunter-v1?
It generates…
Thanks for the feedback!
Could you help by just mentioning where to change or adjust…
Actually, the custom dataset case was never handled. Now…
Can I run the…
is persisting no matter how I try (executing the…
Same here.
Give me your full command + full output, not just the last few lines. There is something failing prior to HOTA extraction.
Full command:
Full stack-trace:
Note that my detection classes differ completely from the defaults. I am not detecting pedestrians; I am detecting multiple different objects, but so far I have not modified anything in that regard. The detection model is able to output the particular classes though.
@mesllo-bc, update your repo please.
The error is clear:…
For classes other than people, check: https://github.com/JonathonLuiten/TrackEval/blob/12c8791b303e0a0b50f753af204249e622d0281a/scripts/run_mot_challenge.py#L24
This may also be of interest to you, @Hunter-v1: https://github.com/JonathonLuiten/TrackEval/blob/12c8791b303e0a0b50f753af204249e622d0281a/trackeval/datasets/mot_challenge_2d_box.py#L76
```python
# Get classes to eval
'CLASSES_TO_EVAL': ['Dji-Phantom', 'Dji-Mavic', 'Dji-Matrice'],
```
Could you provide an indication as to how to solve the above error though? This error happens whether or not I use the latest commit. I feel it may be happening because the detector is not getting any detections and hence not creating a text file for frame sequence set 42; in that case it would make sense to implement a catch for such a situation.
If you pass that sequence to…
I fixed the error by removing the sequences that the detector can't provide detections for. It would be nice to adapt the repo to catch such situations. Anyway, my problem now is also related to the appropriate classes. In my case I have multiple classes and have had to tweak…
That does not work because it also expects the class as part of the results. Will look into it further. @Hunter-v1 and @mikel-brostrom, have you come across the need to modify or add the class of the detection as part of the results too?
I just discovered that the gt_folder shouldn't have that format; instead it should be like this:…
You should also create, in…
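For reference, this is my understanding of the standard TrackEval-style ground-truth layout that is being described (sequence and benchmark names are illustrative, not taken from this thread):

```
gt_folder/
├── seqmaps/
│   └── MOTCUSTOM-train.txt        # lists the sequence names, one per line
└── MOTCUSTOM-train/
    ├── MOTCUSTOM-01/
    │   ├── gt/
    │   │   └── gt.txt             # ground-truth boxes in MOT format
    │   └── seqinfo.ini            # seqLength, imWidth, imHeight, ...
    └── MOTCUSTOM-02/
        ├── gt/
        │   └── gt.txt
        └── seqinfo.ini
```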
Then this doc should ideally be updated if this is the case, right @mikel-brostrom? @Hunter-v1, could you please elaborate on what you mean by a class map and where I'd be able to find more info on this? Also, how is this format not even mentioned on the official MOT challenge site? Seems strange.
Never thought this could be a possibility. But you are right: when no detections are found, no MOT txt is generated. Will look into this.
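Until that is handled in the repo, a minimal workaround sketch (the paths are hypothetical; it assumes the evaluator expects one MOT txt per sequence):

```python
from pathlib import Path

# Hypothetical locations; adapt to your own experiment layout.
seqs_dir = Path('assets/MOTCUSTOM/train')   # one sub-folder per sequence
mot_dir = Path('runs/val/exp/labels')       # MOT txt files written by the tracker

# Create an empty results file for every sequence the detector produced
# no detections for, so the evaluator finds a file to read instead of crashing.
for seq in sorted(p for p in seqs_dir.iterdir() if p.is_dir()):
    txt = mot_dir / f'{seq.name}.txt'
    if not txt.exists():
        txt.touch()
```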
I have never tried to run this on a multi-class dataset, so everything regarding this is new to me as well. If you find a solution we can update…
The…
@mesllo-bc, for the class map I meant the classes in the…
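For illustration, my assumption is that a class map here means a mapping from the detector's class indices to the class names used in the config, something like this (hypothetical, reusing the drone classes from above):

```python
# Hypothetical class map: detector class index -> class name,
# matching the order used in CLASSES_TO_EVAL above.
class_map = {0: 'Dji-Phantom', 1: 'Dji-Mavic', 2: 'Dji-Matrice'}
```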
@mesllo-bc are the results correct now?
Yes indeed, this is precisely the tweak I did to fix it. I have not been able to confirm whether the results are correct yet. I believe that for the results to be correct we also need to ensure that the detector in…
This way the ground truth should be able to compare the output classes too. I don't know if this is the right way to go for the results to be accurate; I have not been able to test it yet. Please let me know if you manage from your end!
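To make concrete what writing the class into the results might look like, here is a minimal sketch; the column layout follows the common MOTChallenge ground-truth convention (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility), and the function name is hypothetical:

```python
# Hypothetical writer: one MOT-format row per tracked box, with the
# detector's class id in column 8 so it can be matched against the GT class.
def write_mot_row(f, frame_id, track_id, x, y, w, h, conf, class_id):
    f.write(f'{frame_id},{track_id},{x:.2f},{y:.2f},{w:.2f},{h:.2f},{conf:.2f},{class_id},1\n')
```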
@mesllo-bc…
But then how will the evaluation take into account whether the tracker matches the bounding box of the correct object class that is also in the ground truth?
@mesllo-bc Because you choose the class to evaluate by setting the class name, so all IDs should be referred to that class. Just check for…
OK, so let's say your video segment contains two ground-truth cars being tracked: a Ferrari and a Mercedes. If you want to evaluate the tracking on the Ferrari you have to set:…
Similarly for the Mercedes. Then the classes of the detections sent by the detection model are cross-referenced with this class and the results are output accordingly. Is this correct? If so, then why wouldn't you just set:…
Wouldn't this get results for all classes?
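Presumably the two settings being contrasted above look something like this (illustrative values, following the CLASSES_TO_EVAL style shown earlier):

```python
# Evaluate a single class:
'CLASSES_TO_EVAL': ['ferrari'],
# ...versus everything at once:
'CLASSES_TO_EVAL': ['ferrari', 'mercedes'],
```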
I can confirm that the classes are not imported from the list only (it should be in the correct order in the…
The `class_id` is imported from the detection model. I just discovered this while printing your suggestion.
Closing this related issue: #889
It would be great if somebody could share a multi-class dataset, or at least a very small subset of one, so that I can fix this. Otherwise I have no way of moving forward with this issue.
For CUSTOMDATASET you should…
Thanks for sharing this information @Hunter-v1! So basically, when it comes to…
All this:…
got added to https://github.com/mikel-brostrom/yolo_tracking/wiki/How-to-evaluate-on-custom-tracking-dataset. MOT results now follow this format: #883 (comment), changed here: https://github.com/mikel-brostrom/yolo_tracking/actions/runs/5188019801
Sadly, this line of code in trackeval:… cannot be affected by any output argument, which makes this not automatable, @Hunter-v1.
There is also this line that you should revisit: `distractor_classes`
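For context, in TrackEval's MOT preprocessing the distractor classes are hard-coded; for a custom dataset the edit being hinted at is presumably to empty that list (a sketch only; the exact variable name and line differ between TrackEval versions):

```python
# Inside trackeval/datasets/mot_challenge_2d_box.py (hypothetical edit):
# no pedestrian-style distractor classes exist in a custom dataset
distractor_class_names = []
```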
@mikel-brostrom yes, it's not automatable.
Not only those two; check these as well:…
Thanks. Will try to find a way of using the internals of…
Something like this would do for customizing the eval:

```python
""" run_mot_challenge.py

Run example:
run_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_T

Command Line Arguments: Defaults, # Comments
    Eval arguments:
        'USE_PARALLEL': False,
        'NUM_PARALLEL_CORES': 8,
        'BREAK_ON_ERROR': True,
        'PRINT_RESULTS': True,
        'PRINT_ONLY_COMBINED': False,
        'PRINT_CONFIG': True,
        'TIME_PROGRESS': True,
        'OUTPUT_SUMMARY': True,
        'OUTPUT_DETAILED': True,
        'PLOT_CURVES': True,
    Dataset arguments:
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']
        'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
        'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'DO_PREPROC': True,  # Whether to perform preprocessing (never done for 2D_MOT_2015)
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
    Metric arguments:
        'METRICS': ['HOTA', 'CLEAR', 'Identity', 'VACE']
"""

import sys
import os
import argparse
from multiprocessing import freeze_support

from boxmot.utils import logger as LOGGER
from boxmot.utils import ROOT, EXAMPLES

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import trackeval  # noqa: E402

if __name__ == '__main__':
    freeze_support()

    # Command line interface:
    default_eval_config = trackeval.Evaluator.get_default_eval_config()
    default_eval_config['DISPLAY_LESS_PROGRESS'] = False
    default_dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
    default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity'], 'THRESHOLD': 0.5}

    default_dataset_config['GT_FOLDER'] = str(ROOT / 'assets' / 'MOT17-mini' / 'train')
    default_dataset_config['SEQ_INFO'] = {'MOT17-02-FRCNN': None}
    default_dataset_config['SPLIT_TO_EVAL'] = 'train'
    default_dataset_config['BENCHMARK'] = ''
    default_dataset_config['TRACKERS_FOLDER'] = EXAMPLES / 'runs' / 'val' / 'exp44'
    default_dataset_config['TRACKER_SUB_FOLDER'] = ""
    default_dataset_config['DO_PREPROC'] = False
    default_dataset_config['SKIP_SPLIT_FOL'] = True
    default_dataset_config['TRACKERS_TO_EVAL'] = ['labels']
    # default_dataset_config['CLASSES_TO_EVAL'] = str(ROOT / 'assets' / 'MOT17-mini' / 'train')
    print(default_dataset_config)
    print(default_metrics_config)

    config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
    parser = argparse.ArgumentParser()
    for setting in config.keys():
        if type(config[setting]) == list or type(config[setting]) == type(None):
            parser.add_argument("--" + setting, nargs='+')
        else:
            parser.add_argument("--" + setting)
    args = parser.parse_args().__dict__
    for setting in args.keys():
        if args[setting] is not None:
            if type(config[setting]) == type(True):
                if args[setting] == 'True':
                    x = True
                elif args[setting] == 'False':
                    x = False
                else:
                    raise Exception('Command line parameter ' + setting + ' must be True or False')
            elif type(config[setting]) == type(1):
                x = int(args[setting])
            elif type(args[setting]) == type(None):
                x = None
            elif setting == 'SEQ_INFO':
                x = dict(zip(args[setting], [None] * len(args[setting])))
            else:
                x = args[setting]
            config[setting] = x
    eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
    dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
    metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

    ########################################################## NOTICE THIS BLOCK
    evaluator = trackeval.Evaluator(eval_config)
    mc2dBox = trackeval.datasets.MotChallenge2DBox(dataset_config)
    mc2dBox.class_list = ['pedestrian']
    mc2dBox.class_name_to_class_id['pedestrian'] = 0
    mc2dBox.valid_class_numbers = [0]
    dataset_list = [mc2dBox]
    ############################################################################

    metrics_list = []
    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity, trackeval.metrics.VACE]:
        if metric.get_name() in metrics_config['METRICS']:
            metrics_list.append(metric(metrics_config))
    if len(metrics_list) == 0:
        raise Exception('No metrics selected for evaluation')
    evaluator.evaluate(dataset_list, metrics_list)
```

I guess this will be the next thing I implement for this repo.
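For a multi-class custom dataset, the same "NOTICE THIS BLOCK" section could presumably be patched along these lines (an untested sketch; the class names and ids are illustrative, borrowed from the drone classes mentioned earlier in this thread):

```python
# Hypothetical multi-class variant of the block above:
mc2dBox.class_list = ['Dji-Phantom', 'Dji-Mavic', 'Dji-Matrice']
mc2dBox.class_name_to_class_id = {'Dji-Phantom': 0, 'Dji-Mavic': 1, 'Dji-Matrice': 2}
mc2dBox.valid_class_numbers = [0, 1, 2]
```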
But I still need a multi-class dataset sample to debug and test, so that I can proceed with the integration...
Mike, I want to know what `'THRESHOLD': 0.5` is used for. I can see it in the HOTA, CLEAR, and Identity metrics. Why do these metrics use the same THRESHOLD? I tried to change the THRESHOLD and found that the results seemed to be incorrect. Looking forward to your reply! I wish you a good day and all the best in your work.
Search before asking
Yolov8 Tracking Component
Evaluation
Bug
The `val.py` keeps generating `KeyError: 'HOTA'`. I need to know how to let `val.py` go for the evaluation and skip the track task, or how I can change it to run `val.py` correctly. Please note that I changed `mot_challenge_2d_box.py` so that the evaluation suits my classes.
Environment
No response
Minimal Reproducible Example
No response