
Evaluation with model_main.py not using old checkpoints / doesn't evaluate chosen dataset #10710


Description

@Petros626

Hello,

I want to evaluate my training and test data, and I found several approaches to do that with eval.py and model_main.py.

Now my problem is that the script begins training from the beginning and evaluates the test data, even when the --eval_training_data flag is set to True. One guess is to change the eval_input_reader in my config file to point to train.record, but I'm not sure.
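For reference, the eval_input_reader change I'm guessing at would look roughly like this. This is only a sketch: the label_map_path and input_path values below are placeholders I made up, not my actual config, and I'm assuming the usual tf_record_input_reader block that TF Object Detection API pipeline configs use:

```
# Hedged sketch: point the eval_input_reader at the training record
# instead of the test record. Paths are placeholders, not my real setup.
eval_input_reader {
  label_map_path: "training/label_map.pbtxt"   # placeholder path
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "training/train.record"        # placeholder path
  }
}
```

If this is the right direction, I'd keep a copy of the original config so I can switch back to the test record afterwards.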

Could someone please help me with this? I don't want to start training from scratch, since I already have checkpoints.

set CONFIG_FILE=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\ssd_mobilenet_v2_quantized_300x300_coco_custom_aspect_ratios.config

set OUTPUT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\tensorboard_outputs\after_training\eval_train_data

set CHECKPOINT_PATH=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\model.ckpt-200000

python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%OUTPUT_DIR% --eval_training_data=True --checkpoint_dir=%CHECKPOINT_PATH% --run_once=True
