seg_interactive = segmentation(PATH,LABELS)
Which backbone model do you want to use?
-'mobilenet' or 'mobilenetv2': efficient and light for real-world applications
-'inceptionv3': deep convolutional neural network with a sparsely connected architecture developed by Google (using different types of convolutional blocks at each layer)
-'resnet18', 'resnet34', 'resnet50', 'resnet101' or 'resnet152': the core idea of this model is the 'identity shortcut connection', which skips one or more layers
We encourage you to try mobilenet first to see if it is sufficient for your segmentation task
mobilenet
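As an aside on the resnet options listed above: the 'identity shortcut connection' is a block in which the input bypasses a few convolutions and is added back to their output. A minimal tf.keras sketch of the idea (illustrative only, not this repository's implementation; it assumes the input already has `filters` channels so the addition shapes match):

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    # Identity shortcut: the input skips the two conv layers and is added back.
    # Assumes x already has `filters` channels so the Add() shapes match.
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)
```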
Which loss function do you want to use?
-'cross_entropy': fastest to compute
-'dice_loss': overlap measure that performs better on class-imbalanced problems
-'focal_loss': down-weights the contribution of easy examples so that the CNN focuses more on hard examples
The loss could also be a mix of those loss functions
Examples:
cross_entropy + dice_loss
dice_loss + focal_loss
cross_entropy
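Presumably a 'mix' such as cross_entropy + dice_loss simply sums the two terms. A minimal tf.keras sketch of that combination, assuming one-hot masks of shape (batch, H, W, classes); the function names here are illustrative, not this repository's API:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    # Soft Dice loss: 1 - 2*|intersection| / (|y_true| + |y_pred|), per sample.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    denom = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
    return 1.0 - (2.0 * intersection + smooth) / (denom + smooth)

def cross_entropy_plus_dice(y_true, y_pred):
    # Per-pixel categorical cross-entropy averaged over the image, plus Dice.
    y_true = tf.cast(y_true, y_pred.dtype)
    ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
    ce = tf.reduce_mean(ce, axis=[1, 2])
    return ce + dice_loss(y_true, y_pred)
```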
Do you want Data Augmentation? Yes or No
yes
Do you want to use pre-trained weights trained on ImageNet for the encoder?
Yes or No
yes
What is your batch_size?
6
What is your steps_per_epoch?
For guidance, you have 12 training images and a chosen batch_size of 6
Normally (with many images), steps_per_epoch is equal to Nbr_training_images//batch_size == 2
However, if you have only a few images, you can increase that number because you'll have data augmentation
10
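Spelling out the guidance above (the variable names are illustrative, not from the repository):

```python
n_train, batch_size = 12, 6
steps_per_epoch = n_train // batch_size  # == 2; with data augmentation a larger value such as 10 is reasonable
```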
How many epochs do you want to run?
20
Do you want to freeze the encoder layers? Yes or No
No
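On the freeze question: in Keras, freezing just means marking the encoder layers as non-trainable before compiling. A minimal, self-contained sketch of the general idea (the toy model and the "encoder" name prefix are assumptions, not this repository's code):

```python
import tensorflow as tf

# Toy stand-in for a real encoder/decoder segmentation model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu",
                           input_shape=(128, 128, 3), name="encoder_conv"),
    tf.keras.layers.Conv2D(2, 1, activation="softmax", name="decoder_head"),
])

# "Freezing the encoder" = mark its layers non-trainable, then (re)compile.
for layer in model.layers:
    if layer.name.startswith("encoder"):
        layer.trainable = False

model.compile(optimizer="adam", loss="categorical_crossentropy")
```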
Found 12 images belonging to 1 classes.
Found 12 images belonging to 1 classes.
Found 1 images belonging to 1 classes.
Found 1 images belonging to 1 classes.
~/miniconda3/envs/segmentation/lib/python3.8/site-packages/keras_applications/mobilenet.py in MobileNet(input_shape, alpha, depth_multiplier, dropout, include_top, weights, input_tensor, pooling, classes, **kwargs)
294 weight_path,
295 cache_subdir='models')
--> 296 model.load_weights(weights_path)
297 elif weights is not None:
298 model.load_weights(weights)
~/miniconda3/envs/segmentation/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch)
248 raise ValueError('Load weights is not yet supported with TPUStrategy '
249 'with steps_per_run greater than 1.')
--> 250 return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
251
252 def compile(self,
~/miniconda3/envs/segmentation/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in load_weights(self, filepath, by_name, skip_mismatch)
1257 'first, then load the weights.')
1258 self._assert_weights_created()
-> 1259 with h5py.File(filepath, 'r') as f:
1260 if 'layer_names' not in f.attrs and 'model_weights' in f:
1261 f = f['model_weights']
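The traceback stops inside h5py.File(filepath, 'r') while loading the ImageNet weights for the MobileNet encoder. The final exception message is not shown here, but in practice this usually points to a corrupted or partially downloaded weights file in the Keras cache (cache_subdir='models' in the traceback, i.e. ~/.keras/models). A sketch, under that assumption, of how to check the cached file; deleting an unreadable file makes Keras re-download it on the next run:

```python
import glob
import os

import h5py

# Keras caches downloaded backbone weights under ~/.keras/models by default.
cache_dir = os.path.expanduser("~/.keras/models")
for path in glob.glob(os.path.join(cache_dir, "mobilenet*.h5")):
    try:
        with h5py.File(path, "r"):
            print("readable:", path)
    except OSError:
        # A truncated download typically fails exactly where the traceback does;
        # removing the file forces a fresh download on the next call.
        print("unreadable (consider deleting):", path)
```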