Mobile-friendly EfficientNet on the ImageNet dataset with TensorFlow Keras
⭐ Star us on GitHub — it helps!!
Mobile-friendly EfficientNet is described here.
You will need a machine with a GPU and CUDA installed.
Then, prepare the runtime environment:

```bash
pip install -r requirements.txt
```
- Clone this repository.

  ```bash
  $ git clone https://github.com/da2so/EfficientNet_Mobile.git
  $ cd EfficientNet_Mobile/dataset/raw_data
  ```
- Download the "Training images (Task 1 & 2)" and "Validation images (all tasks)" from the ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) download page into the `dataset/raw_data` directory.

  ```bash
  $ ls -l ./
  -rwxr-xr-x 1 shkang shkang 147897477120 Feb 14 14:55 ILSVRC2012_img_train.tar
  -rwxr-xr-x 1 shkang shkang 6744924160 Feb 14 15:58 ILSVRC2012_img_val.tar
  ```
- Untar the "train" and "val" files. For example, I put the untarred files under `dataset/raw_data`.

  ```bash
  $ mkdir train
  $ cd train
  $ tar xvf ../ILSVRC2012_img_train.tar
  $ find . -name "*.tar" | while read NAME ; do \
        mkdir -p "${NAME%.tar}"; \
        tar -xvf "${NAME}" -C "${NAME%.tar}"; \
        rm -f "${NAME}"; \
    done
  $ cd ..
  $ mkdir validation
  $ cd validation
  $ tar xvf ../ILSVRC2012_img_val.tar
  ```
- Pre-process the validation image files. (The script moves the JPEG files into the corresponding class subfolders.)

  ```bash
  $ cd ../../../data   # EfficientNet_Mobile/dataset/raw_data/validation -> EfficientNet_Mobile/data
  $ python ./process_val.py \
        ./dataset/raw_data/validation/ \
        imagenet_2012_validation_synset_labels.txt
  ```
- Build TFRecord files for "train" and "validation". (This step can take a couple of hours, since there are 1,281,167 training images and 50,000 validation images in total.) A sketch of reading the resulting files follows this list.

  ```bash
  $ cd ..
  $ mkdir ./dataset/tfrecord/
  $ python convert2tfrecord.py \
        --output_directory=./dataset/tfrecord/ \
        --train_directory=./dataset/raw_data/train/ \
        --validation_directory=./dataset/raw_data/validation/
  ```
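The exact feature layout depends on `convert2tfrecord.py`. As a minimal sketch, assuming the common ImageNet TFRecord convention (`image/encoded` JPEG bytes plus an integer `image/class/label`, with files named `train-*` and `validation-*`), the resulting files could be read with `tf.data` like this; adjust the keys and file pattern to whatever the script actually writes:

```python
# Sketch of reading the generated TFRecords with tf.data.
# Assumption: convert2tfrecord.py writes `image/encoded` (JPEG bytes) and
# `image/class/label` (int64) features; adjust the keys if the script differs.
import tensorflow as tf

FEATURES = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/class/label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_jpeg(parsed["image/encoded"], channels=3)
    image = tf.image.resize(image, [224, 224])        # EfficientNetB0 input size
    image = tf.cast(image, tf.float32) / 255.0
    return image, parsed["image/class/label"]

def make_dataset(pattern, batch_size=16):
    files = tf.data.Dataset.list_files(pattern, shuffle=True)
    ds = tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
    return (ds.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
              .batch(batch_size)
              .prefetch(tf.data.AUTOTUNE))

# Example (file pattern is an assumption):
# train_ds = make_dataset("./dataset/tfrecord/train-*")
```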
Train a mobile-friendly EfficientNet on the ImageNet dataset.
Single-GPU training:

```bash
$ CUDA_VISIBLE_DEVICES=0 python train.py \
    --imagenet_path=./dataset/tfrecord/ \
    --model_name=EfficientNetB0_M \
    --bs=16 \
    --epochs=100 \
    --lr=1e-2 \
    --save_dir=./result/
```
Multi-GPU training (e.g., with 4 GPUs):

```bash
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py \
    --imagenet_path=./dataset/tfrecord/ \
    --model_name=EfficientNetB0_M \
    --bs=16 \
    --epochs=100 \
    --lr=1e-2 \
    --save_dir=./result/
```
The global batch size is the per-GPU batch size times the number of GPUs: 16 x 4 = 64.
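`train.py` handles the multi-GPU logic internally; the snippet below is only a minimal sketch of the usual Keras pattern with `tf.distribute.MirroredStrategy`, using the stock `EfficientNetB0` as a stand-in for `EfficientNetB0_M`, to show where the per-GPU batch size turns into the global one:

```python
# Minimal sketch (not the repo's train.py) of per-GPU vs. global batch size
# under tf.distribute.MirroredStrategy.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()               # uses all visible GPUs
per_gpu_bs = 16
global_bs = per_gpu_bs * strategy.num_replicas_in_sync    # 16 x 4 = 64 with 4 GPUs

with strategy.scope():
    # Stand-in for the mobile-friendly EfficientNetB0_M defined in this repo.
    model = tf.keras.applications.EfficientNetB0(weights=None, classes=1000)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
        loss="sparse_categorical_crossentropy",   # assumes integer labels
        metrics=["accuracy"],
    )

# The tf.data pipeline should then be batched with `global_bs`, e.g.:
# model.fit(train_ds, epochs=100)   # train_ds batched with global_bs
```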
Arguments:

- `imagenet_path`
  - ImageNet dataset path
- `model_name`
  - Name of the mobile-friendly EfficientNet
  - Choices = ['EfficientNetB0_M', 'EfficientNetB1_M', 'EfficientNetB2_M', 'EfficientNetB3_M']
- `bs`
  - Batch size
- `epochs`
  - The number of epochs
- `lr`
  - Learning rate
- `save_dir`
  - Save directory for the EfficientNet model
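For reference, a hypothetical sketch of how these flags could be declared with `argparse`; the real `train.py` may use different defaults or extra options:

```python
# Hypothetical CLI declaration matching the arguments above (not the repo's code).
import argparse

parser = argparse.ArgumentParser(description="Train mobile-friendly EfficientNet on ImageNet")
parser.add_argument("--imagenet_path", type=str, default="./dataset/tfrecord/",
                    help="ImageNet TFRecord dataset path")
parser.add_argument("--model_name", type=str, default="EfficientNetB0_M",
                    choices=["EfficientNetB0_M", "EfficientNetB1_M",
                             "EfficientNetB2_M", "EfficientNetB3_M"],
                    help="Name of the mobile-friendly EfficientNet variant")
parser.add_argument("--bs", type=int, default=16, help="Per-GPU batch size")
parser.add_argument("--epochs", type=int, default=100, help="Number of training epochs")
parser.add_argument("--lr", type=float, default=1e-2, help="Learning rate")
parser.add_argument("--save_dir", type=str, default="./result/",
                    help="Save directory for the trained model")
args = parser.parse_args()
```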
- Multi-GPU training
- TFLite conversion (see the sketch after this list)
- Comparing results between the original and our EfficientNet
- Train a model based on Tiny ImageNet
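For the TFLite conversion item above, the standard route is `tf.lite.TFLiteConverter`. The sketch below assumes a trained Keras model saved under `./result/` (the file name is hypothetical) and is not the repo's own conversion script:

```python
# Sketch of converting a trained Keras model to TFLite (hypothetical paths).
import tensorflow as tf

model = tf.keras.models.load_model("./result/EfficientNetB0_M.h5")  # hypothetical file name
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional post-training quantization
tflite_model = converter.convert()

with open("./result/EfficientNetB0_M.tflite", "wb") as f:
    f.write(tflite_model)
```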