Train SSD on the AF01 dataset[1]
Put the XML annotation files into $HOME/data/AF01/"dataset"/Annotations and the image files into $HOME/data/AF01/"dataset"/BMPImages, respectively. "dataset" is the folder name of the AF01 dataset, e.g. $HOME/data/AF01/20170112/Annotations and $HOME/data/AF01/20170112/BMPImages.
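For example, the layout for the 20170112 dataset can be prepared like this (the source paths and the .bmp extension are assumptions; adjust them to your data):
$ mkdir -p $HOME/data/AF01/20170112/Annotations
$ mkdir -p $HOME/data/AF01/20170112/BMPImages
$ cp /path/to/your/labels/*.xml $HOME/data/AF01/20170112/Annotations/
$ cp /path/to/your/images/*.bmp $HOME/data/AF01/20170112/BMPImages/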
This step creates three text files, trainval.txt, test.txt, and test_name_size.txt, in $CAFFE_ROOT/data/AF01/20170112. The dataset has already been separated into trainval and test.
$ cd $CAFFE_ROOT/data/ && mkdir AF01
$ cp -r VOC0712/* AF01/
Contact me for the files create_list.sh and create_data.sh, and use them to replace the copies in $CAFFE_ROOT/data/AF01.
$ cd $CAFFE_ROOT/data/AF01
$ ./create_list.sh
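A quick sanity check after the script finishes: the lists should be non-empty (their exact format is whatever create_list.sh writes; the VOC0712 scripts it is based on emit one image/annotation pair per line).
$ wc -l $CAFFE_ROOT/data/AF01/20170112/trainval.txt $CAFFE_ROOT/data/AF01/20170112/test.txt
$ head -n 3 $CAFFE_ROOT/data/AF01/20170112/trainval.txt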
Create lmdb files for trainval and test with the encoded original images, and make soft links at $CAFFE_ROOT/examples/AF01_20170112/. The generated database files will be in $HOME/data/AF01/20170112/lmdb/AF01_20170112_trainval_lmdb and $HOME/data/AF01/20170112/lmdb/AF01_20170112_test_lmdb.
$ ./create_data.sh
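To confirm the step worked, list the soft links and one of the generated databases:
$ ls -l $CAFFE_ROOT/examples/AF01_20170112/
$ ls $HOME/data/AF01/20170112/lmdb/AF01_20170112_trainval_lmdb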
Image pre-processing such as data normalization is one of the critical factors that affect the training process. One way to do data normalization is to use the mean image.
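As a rough illustration (not part of the pipeline above), a mean image can be computed directly from the source BMPs with ImageMagick, assuming it is installed and all images share the same size:
$ convert $HOME/data/AF01/20170112/BMPImages/*.bmp -evaluate-sequence Mean af01_mean.bmp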
Download the pre-trained SSD network weights from the SSD GitHub repository, unpack the archive, and copy the file VGG_VOC0712_SSD_300x300_iter_120000.caffemodel to $CAFFE_ROOT/models/VGGNet.
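A sketch of this step; the archive name and its internal layout are assumptions based on the SSD release naming, so adjust them to the file you actually downloaded:
$ mkdir -p $CAFFE_ROOT/models/VGGNet
$ tar -xzf models_VGGNet_VOC0712_SSD_300x300.tar.gz
$ cp models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel $CAFFE_ROOT/models/VGGNet/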
Modify the training script ssd_train_af01.py and start training:
$ cd $CAFFE_ROOT
$ python examples/ssd/AF01/ssd_train_af01.py
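Caffe writes its progress to stderr, so plain shell redirection is enough to keep a log of the run for later inspection:
$ python examples/ssd/AF01/ssd_train_af01.py 2>&1 | tee af01_train.log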
If you only want to test your trained network, run:
$ python examples/ssd/AF01/ssd_test_af01.py