Description
Hello,
I want to evaluate both my training and test data, and I found several approaches to do that with eval.py and model_main.py.
My problem is that the script starts training from scratch and then evaluates the test data, even though the flag --eval_training_data is set to True. One guess is to change the eval_input_reader in my config file to point at train.record, but I'm not sure.
Could someone please help me with this? I don't want to start training from the beginning, since I already have checkpoints.
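For reference, the eval_input_reader change I have in mind would look roughly like this in the pipeline config (the input_path and label_map_path below are placeholders, not my actual paths):

```
eval_input_reader: {
  tf_record_input_reader {
    # point at the training record instead of the test record
    input_path: "path/to/train.record"
  }
  label_map_path: "path/to/label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
```
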
set CONFIG_FILE=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\ssd_mobilenet_v2_quantized_300x300_coco_custom_aspect_ratios.config
set OUTPUT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\tensorboard_outputs\after_training\eval_train_data
rem --checkpoint_dir expects the directory holding the checkpoints, not the model.ckpt-200000 prefix itself
rem (note: no space after = in a batch "set", or the space becomes part of the value)
set CHECKPOINT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training
python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%OUTPUT_DIR% --eval_training_data=True --checkpoint_dir=%CHECKPOINT_DIR% --run_once=True