
Added building a Docker image for the course #4

Merged · 7 commits · Mar 13, 2024

33 changes: 33 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,33 @@
name: Build and publish Docker image for the project

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  Docker:
    if: github.repository == 'antmicro/dl-in-iot-course'
    runs-on: ubuntu-latest
    steps:
      - name: Cancel previous run
        uses: styfle/[email protected]
        with:
          access_token: ${{ github.token }}
      - name: Checkout sources
        uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build . -f environments/Dockerfile --tag ghcr.io/${{ github.repository }}
      - name: Login to registry
        if: github.ref == 'refs/heads/main' && github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ github.token }}
      - name: Push image to registry
        if: github.ref == 'refs/heads/main' && github.event_name != 'pull_request'
        run: docker push ghcr.io/${{ github.repository }}
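
For working through the course locally, the image published by this workflow can be pulled straight from ghcr.io. A minimal sketch, assuming Docker is installed, the ghcr.io package is publicly readable, and the image provides `/bin/bash` (none of which is stated in the diff itself):

```python
import os
import subprocess

# Image name follows from the workflow's --tag and its repository check
IMAGE = "ghcr.io/antmicro/dl-in-iot-course"

# Fetch the most recent image published from the main branch
subprocess.run(["docker", "pull", IMAGE], check=True)

# Open an interactive shell with the current checkout mounted at /workspace
subprocess.run(
    [
        "docker", "run", "-it", "--rm",
        "-v", f"{os.getcwd()}:/workspace",
        "-w", "/workspace",
        IMAGE, "/bin/bash",
    ],
    check=True,
)
```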
22 changes: 0 additions & 22 deletions .github/workflows/flake8.yml

This file was deleted.

20 changes: 20 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,20 @@
name: pre-commit checks

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: 3.11
      - name: Run pre-commit
        uses: pre-commit/[email protected]
12 changes: 12 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,12 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.2
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
      - id: ruff-format
        exclude: .+\.patch
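
The same checks can be reproduced locally before pushing; a sketch of the usual flow, assuming `pre-commit` is installed from PyPI (`pip install pre-commit`) and the commands run from the repository root:

```python
import subprocess

# Register the hooks so they run automatically on every `git commit`
subprocess.run(["pre-commit", "install"], check=True)

# Run all configured hooks (trailing-whitespace, ruff, ruff-format)
# against the whole tree, mirroring what the CI workflow above does
subprocess.run(["pre-commit", "run", "--all-files"], check=True)
```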
5 changes: 3 additions & 2 deletions README.md
@@ -1,8 +1,8 @@
 # Optimization of Deep Learning applications for IoT devices - Course tasks

-Copyright (c) 2021-2022 [Antmicro](https://www.antmicro.com)
+Copyright (c) 2021-2024 [Antmicro](https://www.antmicro.com)

-This repository contains tasks for laboratories for the "Optimization of Deep Learning applications for IoT devices" course.
+This repository contains tasks for laboratories for the "Optimization of Neural Network applications for IoT devices" course.

 ## Course classes

@@ -14,6 +14,7 @@ Please follow the links to go to the list of tasks:
 * [Lab 04 - Introduction to Apache TVM](dl_in_iot_course/l04_tvm)
 * [Lab 05 - Implementing a TensorFlow Lite delegate](dl_in_iot_course/l05_tflite_delegate)
 * [Lab 06 - Fine-tuning of model and operations in Apache TVM](dl_in_iot_course/l06_tvm_fine_tuning)
+* [Lab 07 - Accelerating ML models on FPGAs with TFLite Micro and CFU Playground](cfu-playground)

## Cloning the repository

2 changes: 1 addition & 1 deletion dl_in_iot_course/__init__.py
@@ -7,4 +7,4 @@
 import os
 import sys

-sys.path.insert(0, os.path.abspath(__file__ + '../'))
+sys.path.insert(0, os.path.abspath(__file__ + "../"))
4 changes: 2 additions & 2 deletions dl_in_iot_course/l02_quantization/README.md
@@ -50,7 +50,7 @@ It requires implementing methods for:
* `[2pt]` Finish the `ImbalancedINT8Model` class:

  * Implement the `optimize_model` method, where the `calibration_dataset_generator` takes all examples for objects of class 5 and uses them for calibration (a sketch follows this list):

    * Use `self.dataset.dataX` and `self.dataset.dataY` to extract all inputs for a particular class.
    * Remember to use the `self.dataset.prepare_input_sample` method.

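A minimal sketch of such a generator (the list-per-sample yield follows the TFLite representative-dataset convention; the exact layout of `self.dataset.dataX`/`dataY` and the integer class labels are assumptions here):

```python
def calibration_dataset_generator(self):
    # Iterate over the raw inputs and labels kept by the dataset wrapper
    for X, y in zip(self.dataset.dataX, self.dataset.dataY):
        if y == 5:  # keep only the examples of class 5 for calibration
            # prepare_input_sample applies the dataset's input preprocessing
            yield [self.dataset.prepare_input_sample(X)]
```
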
@@ -64,7 +64,7 @@ It requires implementing methods for:
```

In the `build/results` directory, the script will create:

* `<prefix>-metrics.md` file - contains basic metrics, such as accuracy, precision, sensitivity or G-Mean, along with inference time
* `<prefix>-confusion-matrix.png` file - contains a visualization of the confusion matrix for the model evaluation.
Those files will be created for:
123 changes: 37 additions & 86 deletions dl_in_iot_course/l02_quantization/model_training.py
@@ -12,13 +12,13 @@ def __init__(self, modelpath: Path, dataset: PetDataset, from_file=True):
         self.from_file = from_file
         self.numclasses = dataset.numclasses
         self.mean, self.std = dataset.get_input_mean_std()
-        self.inputspec = tf.TensorSpec((1, 224, 224, 3), name='input_1')
+        self.inputspec = tf.TensorSpec((1, 224, 224, 3), name="input_1")
         self.dataset = dataset
         self.prepare()

     def load_model(self):
         tf.keras.backend.clear_session()
-        if hasattr(self, 'model') and self.model is not None:
+        if hasattr(self, "model") and self.model is not None:
             del self.model
         self.model = tf.keras.models.load_model(str(self.modelpath))

@@ -30,43 +30,23 @@ def prepare(self):
             self.load_model()
         else:
             self.base = tf.keras.applications.MobileNetV2(
-                input_shape=(224, 224, 3),
-                include_top=False,
-                weights='imagenet'
+                input_shape=(224, 224, 3), include_top=False, weights="imagenet"
             )
             self.base.trainable = False
-            avgpool = tf.keras.layers.GlobalAveragePooling2D()(
-                self.base.output
-            )
-            layer1 = tf.keras.layers.Dense(
-                1024,
-                activation='relu')(avgpool)
+            avgpool = tf.keras.layers.GlobalAveragePooling2D()(self.base.output)
+            layer1 = tf.keras.layers.Dense(1024, activation="relu")(avgpool)
             d1 = tf.keras.layers.Dropout(0.3)(layer1)
-            layer2 = tf.keras.layers.Dense(
-                512,
-                activation='relu')(d1)
+            layer2 = tf.keras.layers.Dense(512, activation="relu")(d1)
             d2 = tf.keras.layers.Dropout(0.3)(layer2)
-            layer3 = tf.keras.layers.Dense(
-                128,
-                activation='relu')(d2)
+            layer3 = tf.keras.layers.Dense(128, activation="relu")(d2)
             d3 = tf.keras.layers.Dropout(0.3)(layer3)
-            output = tf.keras.layers.Dense(
-                self.numclasses,
-                name='out_layer'
-            )(d3)
-            self.model = tf.keras.models.Model(
-                inputs=self.base.input,
-                outputs=output
-            )
+            output = tf.keras.layers.Dense(self.numclasses, name="out_layer")(d3)
+            self.model = tf.keras.models.Model(inputs=self.base.input, outputs=output)
         print(self.model.summary())

     def train_model(
-        self,
-        batch_size: int,
-        learning_rate: int,
-        epochs: int,
-        logdir: Path):
-
+        self, batch_size: int, learning_rate: int, epochs: int, logdir: Path
+    ):
         def preprocess_input(path, onehot):
             data = tf.io.read_file(path)
             img = tf.io.decode_jpeg(data, channels=3)
@@ -79,93 +79,69 @@ def preprocess_input(path, onehot):
             img = (img - self.mean) / self.std
             return img, tf.convert_to_tensor(onehot)

-        Xt, Xv, Yt, Yv = self.dataset.split_dataset(
-            0.25
-        )
+        Xt, Xv, Yt, Yv = self.dataset.split_dataset(0.25)
         Yt = list(self.dataset.onehotvectors[Yt])
         Yv = list(self.dataset.onehotvectors[Yv])
         traindataset = tf.data.Dataset.from_tensor_slices((Xt, Yt))
         traindataset = traindataset.map(
-            preprocess_input,
-            num_parallel_calls=tf.data.experimental.AUTOTUNE
+            preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
         ).batch(batch_size)
         validdataset = tf.data.Dataset.from_tensor_slices((Xv, Yv))
         validdataset = validdataset.map(
-            preprocess_input,
-            num_parallel_calls=tf.data.experimental.AUTOTUNE
+            preprocess_input, num_parallel_calls=tf.data.experimental.AUTOTUNE
        ).batch(batch_size)

         tensorboard_callback = tf.keras.callbacks.TensorBoard(
-            str(logdir),
-            histogram_freq=1
+            str(logdir), histogram_freq=1
         )

         model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
-            filepath=str(logdir / 'weights.{epoch:02d}-{val_loss:.2f}.h5'),
-            monitor='val_categorical_accuracy',
-            mode='max',
-            save_best_only=True
+            filepath=str(logdir / "weights.{epoch:02d}-{val_loss:.2f}.h5"),
+            monitor="val_categorical_accuracy",
+            mode="max",
+            save_best_only=True,
         )

         self.model.compile(
             optimizer=tf.keras.optimizers.Adam(lr=learning_rate),
             loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
-            metrics=[
-                tf.keras.metrics.CategoricalAccuracy()
-            ]
+            metrics=[tf.keras.metrics.CategoricalAccuracy()],
         )

         self.model.fit(
             traindataset,
             epochs=epochs,
-            callbacks=[
-                tensorboard_callback,
-                model_checkpoint_callback
-            ],
-            validation_data=validdataset
+            callbacks=[tensorboard_callback, model_checkpoint_callback],
+            validation_data=validdataset,
         )


-if __name__ == '__main__':
+if __name__ == "__main__":
     parser = argparse.ArgumentParser()
-    parser.add_argument(
-        '--modelpath',
-        help='Path to the model file',
-        type=Path
-    )
-    parser.add_argument(
-        '--dataset-root',
-        help='Path to the dataset file',
-        type=Path
-    )
+    parser.add_argument("--modelpath", help="Path to the model file", type=Path)
+    parser.add_argument("--dataset-root", help="Path to the dataset file", type=Path)
     parser.add_argument(
-        '--download-dataset',
-        help='Download the dataset before training',
-        action='store_true'
+        "--download-dataset",
+        help="Download the dataset before training",
+        action="store_true",
     )
     parser.add_argument(
-        '--batch-size',
-        help='Batch size for the training process',
+        "--batch-size",
+        help="Batch size for the training process",
         type=int,
-        default=128
+        default=128,
     )
     parser.add_argument(
-        '--learning-rate',
-        help='Starting learning rate for Adam optimizer',
+        "--learning-rate",
+        help="Starting learning rate for Adam optimizer",
         type=float,
-        default=0.0001
+        default=0.0001,
     )
     parser.add_argument(
-        '--num-epochs',
-        help='Number of training epochs',
-        type=int,
-        default=50
+        "--num-epochs", help="Number of training epochs", type=int, default=50
     )
     parser.add_argument(
-        '--logdir',
-        help='The path to the logging directory',
-        type=Path,
-        default='logs'
+        "--logdir", help="The path to the logging directory", type=Path, default="logs"
     )

     args = parser.parse_args()
@@ -175,10 +175,5 @@ def preprocess_input(path, onehot):

     args.logdir.mkdir(parents=True, exist_ok=True)

-    model.train_model(
-        args.batch_size,
-        args.learning_rate,
-        args.num_epochs,
-        args.logdir
-    )
+    model.train_model(args.batch_size, args.learning_rate, args.num_epochs, args.logdir)
     model.save_model()
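
After the reformatting the script is invoked as before; a hypothetical end-to-end run (the module path is inferred from the file location, the flag names come from the argparse definitions above, and the concrete paths are placeholders):

```python
import subprocess
import sys

# Launch the training script as a module from the repository root
subprocess.run(
    [
        sys.executable, "-m",
        "dl_in_iot_course.l02_quantization.model_training",
        "--modelpath", "build/pet-model.h5",    # placeholder path
        "--dataset-root", "build/pet-dataset",  # placeholder path
        "--download-dataset",
        "--batch-size", "32",
        "--num-epochs", "10",
        "--logdir", "logs",
    ],
    check=True,
)
```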